Dataset schema: id (int64), title (string), description (string), collection_id (int64), published_timestamp (timestamp[s]), canonical_url (string), tag_list (string), body_markdown (string), user_username (string)
1,909,950
User and Group Management Script
Script Explanation The script automates the creation of users and groups on a Linux system, assigns...
0
2024-07-03T10:03:30
https://dev.to/olatunbosun_salako/user-and-group-management-script-m8n
**Script Explanation**

The script automates the creation of users and groups on a Linux system, assigns users to specified groups, sets random passwords, and logs the operations.

1. **Argument Check**

   ![Argument check](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhi7l86cd0yu90y3btyx.png)

   The script checks that exactly one argument (the input file) is provided. If not, it displays an error message and exits.

2. **Variable, Directory, and File Setup**

   ![Variable and file setup](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/578am9sozlimaz10fwxc.png)

   - Assigns the input filename to the variable `filename`.
   - Defines the paths for the password file (`passwd`) and the log file (`logfile`).
   - Creates the `/var/secure` directory if it does not exist.
   - Creates or updates the password file (`/var/secure/user_passwords.txt`) and sets its permissions to 600 (read-write for the owner only).
   - Creates the `/var/log` directory if it does not exist.
   - Creates or updates the log file (`/var/log/user_management.log`) and sets its permissions to 644 (read-write for the owner, read-only for everyone else).

3. **Processing the Input File**

   ![Processing the input file](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/08ez7q4llezm5ouhheyj.png)

   - Reads the input file line by line, splitting each line into a username and a group list on the `;` delimiter.
   - Trims whitespace from the username and the groups.

4. **User Creation**

   ![User creation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k89mzfjd6vo9fna9iq5o.png)

   - Checks whether the user already exists using `id -u`.
   - If the user does not exist, creates the user and logs the action.

5. **Group Creation and User Assignment**

   ![Group creation and user assignment](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zqkvi8fx9h5qipyn6xas.png)

   - Splits the groups string into an array of individual group names.
   - For each group, checks whether it exists using `getent group`.
   - If the group does not exist, creates the group and logs the action.
   - Adds the user to each group and logs the action.

6. **Password Setting**

   ![Password setting](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nft4uh92rlxm4zuq9fwf.png)

   - Generates a random password using `openssl rand -base64 12`.
   - Sets the user's password using `chpasswd`.
   - Logs the password-setting action.

Thanks for reading...

Olatunbosun Salako
DevOps Engineer, HNG Tech Internship
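The post shows the script itself only as screenshots, so here is a rough sketch of the parsing step described above. This is not the author's verbatim code: the function name and the dry-run output (printing the commands instead of running `useradd`/`groupadd`/`usermod`, which require root) are assumptions for illustration.

```bash
#!/usr/bin/env bash
# Sketch only: prints the commands the script would run for one
# "user; group1,group2" input line, rather than executing them.
plan_line() {
  local line="$1"
  local username groups
  username="$(echo "${line%%;*}" | xargs)"   # part before ';', whitespace trimmed
  groups="$(echo "${line#*;}" | xargs)"      # part after ';', whitespace trimmed
  echo "useradd -m $username"
  # Split the comma-separated group list and plan one groupadd/usermod per group.
  IFS=',' read -ra group_arr <<< "$groups"
  for group in "${group_arr[@]}"; do
    group="$(echo "$group" | xargs)"
    echo "groupadd $group"
    echo "usermod -aG $group $username"
  done
}

plan_line "light; sudo,dev,www-data"
```

Running this prints the `useradd`, `groupadd`, and `usermod` invocations for the user `light` and each of its three groups, which mirrors steps 3 through 5 above.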
olatunbosun_salako
1,909,948
Functional Patterns: Recursions and Reduces
This is part 4 of a series of articles entitled Functional Patterns. Make sure to check out the...
0
2024-07-03T10:02:16
https://dev.to/if-els/functional-patterns-recursions-and-reduces-jhk
javascript, haskell, programming, learning
> This is part 4 of a series of articles entitled *Functional Patterns*.
>
> Make sure to check out the rest of the articles!
>
> 1. [The Monoid](https://dev.to/if-els/functional-patterns-the-monoid-22ef)
> 2. [Compositions and Implicitness](https://dev.to/if-els/functional-patterns-composition-and-implicitness-4n08)
> 3. [Interfaces and Functors](https://dev.to/if-els/functional-patterns-interfaces-and-functors-359e)

## State

A big part of what defines functional programming is the *immutability* of **state**. Because of this, we can immediately guarantee *purity*: a strict mapping of input to output, just like in mathematics.

*Purity*, or *referential transparency*, is just a fancy way of saying: if you give me `x`, I will always give you the same `y`. Referential transparency, specifically, says that you can replace every *occurrence* of a function call with the actual function body, and your code should behave the same.

```py
# python
def double(n):
    return n * 2

double(n)  # should be replaceable by n * 2
```

At first glance you might go: "Well, duh, a function call is pretty much substituting code." But this is sometimes not true, specifically when your function has **side-effects**. A side-effect is state *mutated* (or changed) outside of the scope of the function. Example:

```python
counter = 0

def count():
    global counter
    counter += 1
    return counter

count()  # 1
count()  # 2
count()  # 3
```

Even though we're calling `count` with the same argument (nothing!), we end up with a different output every time! This is because the function has a *side-effect*: it updates the state of `counter`, which is outside of its scope. Therefore, this function is not *pure*.

This *immutability* of state allows us to think in a simpler, more *pure* way, avoiding side-effects altogether.
With this design, a lot of bugs already become impossible *by design*, and that is something functional programming truly deserves credit for. However, this does not come without consequences (it is a constraint, after all). But like any other constraint in functional programming, there is an elegant workaround. In this article we'll talk about *iteration* in the pure functional sense, and realize how overrated `for`-loops are.

## Recursion

A problem with the aforementioned constraint of immutability is that some code structures inherently *rely* on state, such as the `for`-loop.

```c
for (int i = 0; i < 10; i++) {
    // ...
}
```

Here's a ubiquitous `for`-loop construct. We declare some state `i`, our counter, and then we increment it on every iteration until we fail our condition. Looks good, works pretty well. But the problem lies in how we're relying on mutable state to drive the iteration.

```c
for (int i = 0; i < 10; i++) {
    i--;
}
```

When we modify the logic inside the `for`-loop (without touching the loop header itself), we can introduce bugs like this one, which creates an infinite loop. All because our iteration relies on mutable state.

You might be thinking: "Okay, so `for`-loops are evil. How are we going to do iteration now?" Okay, no more assumptions about what you're thinking; you're likely more clever than that (or you've read the section header). Here's the answer: every `for`-loop can be written **recursively**.

```c
void loop(const int i) {
    if (i >= 10) return;
    loop(i + 1);
}
```

Oh yeah, computer science! Though this is more code in an imperative sense, note that it's no longer possible to create the same bug we had in the `for`-loop without modifying the recursive construct itself, and in that case, skill issue.
Here's that expression in both Haskell and Elixir, two functional languages:

```hs
loop n
  | n >= 10   = undefined
  | otherwise = loop (n + 1)
```

```elixir
def loop(n) when n >= 10, do: nil
def loop(n), do: loop(n + 1)
```

So much cleaner (and the only way to do standard iteration in these languages). Do note that although these do the same amount of iteration, they aren't exactly equivalent to the `for`-loop. That's because most `for`-loops rely on mutable state to be useful: the reason you're looping 10 times is usually imperative, or introduces some side-effect (like printing to standard output). To make up for that, functional recursions will always have you *returning* something (the reason you did the recursion in the first place), with no side-effects.

## The Call Stack

If you have ever used recursion before, you're probably aware of its biggest drawback compared to iterative constructs, and that is the **call stack**.

Every time you enter a function, your program keeps track of it by pushing a *stack frame* onto the call stack. When the function is done, that frame is popped off, and you're back in the function that called it, and so on, until you get back to your `main` function.

This poses a constraint, as maintaining the call stack takes significantly more memory than just keeping track of a counter for iteration. In fact, this leads us to a very common error: the *stack overflow*.

A stack overflow occurs when you are in a nested call *deeper* than what your call stack can handle, and this occurs *WAY* earlier than an *integer* overflow (when the computer can no longer represent your big number, so it wraps around to the smallest one), which is the hypothetical bound on our `for`-loop. This is problematic, because what if we, for some reason, *did* need to iterate that many times?
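To see this constraint concretely, here is a small CPython-specific illustration of my own (not from the article): the iterative version handles a million steps with constant stack usage, while the head-recursive version hits the interpreter's recursion limit long before any integer overflow would matter.

```python
def depth_head(n):
    """Head-recursive countdown: each step adds a new stack frame."""
    if n == 0:
        return 0
    return 1 + depth_head(n - 1)

def depth_iter(n):
    """Iterative countdown: one counter, constant stack usage."""
    count = 0
    while n > 0:
        n -= 1
        count += 1
    return count

print(depth_iter(1_000_000))  # fine: prints 1000000

try:
    depth_head(1_000_000)     # far past CPython's default ~1000-frame limit
except RecursionError:
    print("stack overflow (RecursionError)")
```

The recursion limit here is CPython's default; it can be raised with `sys.setrecursionlimit`, but the underlying memory cost per frame remains.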
```hs
-- haskell
factorial 1 = 1   -- we set the base case via pattern matching
factorial n = n * factorial (n - 1)
```

This factorial function would be bounded by the call stack! Not good. This form is what we call **head recursion**. It is the most common form of recursion, and you'd be forgiven for thinking it's the *only* form of recursion.

## The Tail Call Optimization

```hs
factorial a 1 = a
factorial a n = factorial (a * n) (n - 1)
```

This is the same factorial function, but in **tail recursive** form. Spot the differences. For clarity, here's an imperative example as well:

```python
# head recursive
def factorial(n):
    if n == 1:
        return 1
    return n * factorial(n - 1)

# tail recursive
def factorial_t(n, a=1):
    if n == 1:
        return a
    return factorial_t(n - 1, a * n)
```

The difference in these implementations is that in tail recursion, we don't *need* the previous stack frame. We don't need anything from the previous call: we can store all the information we need in our `a` parameter, which stands for the *accumulator* (because this is where you accumulate your computation, instead of relying on the call stack).

![factorial](https://github.com/44mira/articles/assets/116419708/586702d4-b609-446c-808e-7f30a58aff0f)

In fact, we can just *re-use* our stack frame for the new call. This is called **tail-call optimization**, a compiler trick done by functional languages (and a few imperative ones: Lua guarantees it, and JavaScript's ES6 spec requires it, though only some engines implement it) wherein the compiler sees that there is no more computation needed after the recursive call, so it reuses the same stack frame. Not only is this much faster, it actually removes the possibility of a stack overflow *entirely*.
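One caveat worth demonstrating with a sketch of my own: tail-call optimization is something the *language implementation* does, not a property of the code shape. CPython performs no TCO, so even a perfectly tail-shaped function still grows the stack:

```python
import sys

def countdown_t(n):
    """Tail-recursive countdown: nothing is left to do after the recursive call."""
    if n == 0:
        return 0
    return countdown_t(n - 1)

sys.setrecursionlimit(2000)  # keep the demo fast and deterministic

try:
    countdown_t(100_000)     # tail-shaped, but CPython still allocates a frame per call
except RecursionError:
    print("RecursionError: CPython does not perform tail-call optimization")
```

In Haskell or Elixir, the equivalent definition runs in constant stack space; in CPython, you would rewrite it as a loop (or a fold, as the next section shows).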
![image](https://github.com/44mira/articles/assets/116419708/1cf0e0f0-b266-4f13-8f56-ec4fe3bd34fd)

To better understand what it means to have no computation left after the recursive call, think about how, when you reach the deepest call in head recursion, you then have to walk back through the call stack, multiplying the `n`s you accumulated while nesting calls. In tail recursion, you get your return value (your accumulator) at the final recursive call, with no need to traverse the call stack twice.

The most common example of tail recursion (actually just the generalization of it) is the `reduce`/`fold` function.

## Function Origami

If you come from a *JavaScript* background, you've most likely encountered the `reduce` function. In fact, you might be familiar with this idiom for summing numbers:

```js
sum = arr => arr.reduce((a, b) => a + b, 0)
```

`reduce` is (roughly) equivalent to this *tail recursive* function:

```py
# python (because i don't want to deal with array prototype)
def reduce(arr, fn, acc):
    if len(arr) == 0:
        return acc                    # base case, no more array
    acc = fn(acc, arr[0])             # apply fn to the head of the array and the accumulator
    return reduce(arr[1:], fn, acc)   # recur with the rest of the array and the new accumulator
```

As we can see, `reduce` (also known as `fold`) is just a generalization over a tail recursive function! And you'd be surprised at how many things you can express as a `reduce`.

> `reduce`/`fold` gets its name from the fact that it `reduces`/`folds` the dimensions of an array. Reducing a 2D array yields a 1D array, and reducing a 1D array yields an *atom* (a single value). Neat!

Haskell provides two standard `fold` functions: `foldr` (folds from the right) and `foldl` (folds from the left).
Here are some examples:

```hs
sum     = foldr (+)  0     -- partial application
product = foldr (*)  1     -- it doesn't matter whether we use foldr or foldl
any     = foldr (||) False -- because these operations are monoids over their inputs
all     = foldr (&&) True  -- (which guarantees associativity)

factorial n = foldr1 (*) [1..n]  -- foldr1 takes the first element as the initial value
```

```hs
-- Folding from the left and prepending the result to an accumulator returns you the reversed array
reverse = foldl (\acc x -> x : acc) []

{- ARRAY   | ACCUMULATOR
   [1 2 3]   [ ]
   [2 3]     [1]
   [3]       [2 1]
   [ ]       [3 2 1]
-}
```

```hs
max = foldl1 compare
  where compare a b        -- a helper function is defined to handle the condition
          | a > b     = a
          | otherwise = b

{- ARRAY     | ACCUMULATOR
   [3 4 2 5]   3
   [4 2 5]     3
   [2 5]       4
   [5]         4
   [ ]         5
-}
```

And here is the imperative pattern that `reduce` generalizes:

```go
// go
// the input array is of type U because your accumulator doesn't have to be
// the same type as your elements
func reduce[T any, U any](arr []U, fn func(U, T) T, initial T) T {
    result := initial
    for i := 0; i < len(arr); i++ { // standard for loop syntax used for clarity
        result = fn(arr[i], result)
    }
    return result
}
```

Best part is, most modern languages that support higher-order functions (Python, Rust, Kotlin, ...) come with a built-in `reduce`/`fold` function, so you don't have to implement your own; you just have to read the documentation :>

> Except Go, Go likes to do its own thing.

---

And that should be it for this part. I figured a break was needed right after the **Functors** article, so here's one that's a bit more application-oriented. As always, I hope you enjoyed the article, and learned something new!
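For a runnable comparison outside Haskell: Python's built-in `functools.reduce` is a left fold, and it can reproduce several of the folds above. The lambda bodies here are my own translations, not from the article:

```python
from functools import reduce

nums = [1, 2, 3, 4]

total   = reduce(lambda a, b: a + b, nums, 0)           # sum     = foldl (+) 0
product = reduce(lambda a, b: a * b, nums, 1)           # product = foldl (*) 1
rev     = reduce(lambda acc, x: [x] + acc, nums, [])    # reverse = foldl (\acc x -> x : acc) []
biggest = reduce(lambda a, b: a if a > b else b, nums)  # like foldl1, seeded with the first element

print(total, product, rev, biggest)  # 10 24 [4, 3, 2, 1] 4
```

Note that `reduce` without an initializer seeds the accumulator with the first element, mirroring Haskell's `foldl1`/`foldr1`.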
if-els
1,909,813
Generics: A Boon for Strongly Typed Languages
Exciting News! Our blog has a new home!🚀 Background Consider a scenario, You have two...
0
2024-07-03T10:01:32
https://canopas.com/generics-a-boon-for-strongly-typed-languages-bbb18f0ed04a
go, webdev, programming, beginners
> Exciting News! Our blog has a new **[home](https://canopas.com/blog)**!🚀

## Background

Consider a scenario: you have two glasses, one with milk and the other with buttermilk. Your task is to tell the milk from the buttermilk before drinking (of course, without tasting or touching!). Hard to do, right? Just because they seem similar doesn't mean they are the same.

The same happens with loosely typed languages. When we provide arguments without mentioning types (as these languages don't expect them), the identification is left to runtime: will it be a number or a string (the milk and buttermilk of the example above)?

_For example, you have a function that accepts a variable as input. Unless we specify its type, we can't assume it will always be a number or a string, because it can be either one, or something completely different: a boolean!_

This is where strictly/strongly typed languages come into the picture. No doubt loosely typed languages come with more freedom, but at the cost of robustness! In languages like JavaScript and PHP, variables dance to the beat of their assigned values, morphing from numbers to strings with nary a complaint. But for those who've migrated to Golang, the world of strict typing can feel… well, a bit rigid at first. Two separate functions for `int` and `string`, even though they share the same computation logic? Where's the flexibility?

**But why fear, when generics are here?**

Generics are important for writing reusable and expressive code, and in strongly typed languages they are the way to go. Go, however, had no support for generics until version 1.18. In this blog, we will explore the disadvantages of loosely typed languages and how generics bridge the gap between the expressiveness of loose typing and the safety of strongly typed languages like Golang.

## Why choose a strongly typed language over a loosely typed one?
### Loose Typing: The Case of the Miscalculated Discount (PHP)

Imagine an e-commerce website built with PHP, a loosely typed language. A function calculates a discount based on a user's loyalty points stored in a variable `$points`.

```php
function calculateDiscount($points) {
    if ($points > 100) {
        $discount = $points * 0.1; // Assuming points are integers
    } else {
        $discount = 0;
    }
    return $discount;
}

$userPoints = "Gold"; // User with "Gold" loyalty tier
$discount = calculateDiscount($userPoints); // Unexpected behavior

// This might result in a runtime error or an unexpected discount calculation
// due to the string value in $userPoints.
```

Here, the function expects an integer in `$points` to calculate the discount. However, due to loose typing, a string value (`"Gold"`) is passed. This might lead to a runtime error (as the code isn't pre-compiled) or an unexpected discount calculation, causing confusion for the user and requiring additional debugging effort.

### Strict Typing: Catching the Discount Error Early (Go)

Now, let's consider the same scenario in Go, a strictly typed language.

```go
func calculateDiscount(points int) float64 {
    if points > 100 {
        return float64(points) * 0.1
    }
    return 0
}

var userPoints int = 150 // User's loyalty points stored as an integer
discount := calculateDiscount(userPoints)

// Passing a string such as "Gold" here would be rejected at compile time,
// because calculateDiscount only accepts an int.
```

In Go, the function `calculateDiscount` explicitly requires an integer for `points`. If we attempt to pass the string value `"Gold"`, the code won't even compile. This early error detection prevents unexpected behavior at runtime and ensures data integrity.

While strictly typed languages don't provide as much flexibility as loosely typed ones, they ensure the robustness of our code. And, without being too rigid, strictly typed languages support generics to keep code reusable and clean.

## What are generics?
Generics are a powerful programming concept that lets you write functions and data structures that work with a variety of data types without sacrificing type safety.

## Practical use case

Let's consider a simple example: a function that takes a slice as input, be it `[]int64` or `[]float64`, and returns the sum of all its elements. Without generics, we would need to define two different functions:

```go
// SumInts adds together the values of m.
func SumInts(m []int64) int64 {
    var s int64
    for _, v := range m {
        s += v
    }
    return s
}

// SumFloats adds together the values of m.
func SumFloats(m []float64) float64 {
    var s float64
    for _, v := range m {
        s += v
    }
    return s
}
```

With generics, a single function can behave the same for different data types.

```go
// SumIntsOrFloats sums the values of slice m. It supports both int64 and
// float64 as element types.
func SumIntsOrFloats[T int64 | float64](m []T) T {
    var s T
    for _, t := range m {
        s += t
    }
    return s
}
```

This function takes `[]T` as input and gives `T` as output, where `T` can be either of the two types we mentioned: `int64` or `float64`. The magic lies in the `T` placeholder: it represents a generic type that can stand for either `int64` or `float64`. This eliminates the need for duplicate functions and keeps our code clean and concise.

## Type Constraints in Go Generics

Go generics introduce type parameters, allowing functions and data structures to work with various types. Sometimes, however, we need to ensure specific properties for those types. This is where type constraints come in.

**The `comparable` Constraint: Not for Ordering**

The `comparable` constraint might seem like a natural choice for ordering elements. After all, it guarantees that values of the type can be compared for equality using the `==` operator. However, `comparable` doesn't imply the ability to use ordering operators like `<`, `>`, `<=`, and `>=`.
Example:

```go
func IntArrayContains(arr []int, target int) bool {
    for _, element := range arr {
        if element == target {
            return true
        }
    }
    return false
}

func StringArrayContains(arr []string, target string) bool {
    for _, element := range arr {
        if element == target {
            return true
        }
    }
    return false
}
```

The functions above take an `int` and a `string` respectively and check whether an element exists in the array.

```go
func ArrayContains[T comparable](arr []T, target T) bool {
    for _, element := range arr {
        if element == target {
            return true
        }
    }
    return false
}
```

This generic function replaces both `IntArrayContains()` and `StringArrayContains()`. It can also be used for floats (try providing `([]float64{2.3, 4.5, 1.2}, 4.5)` as arguments!).

Here, **[T comparable]** defines a generic type parameter `T` with the `comparable` constraint. This ensures the elements in the slice can be compared using the `==` operator.

**The `Ordered` Constraint: For Ordering**

For ordering elements within a generic function, Go offers the `Ordered` constraint from the `golang.org/x/exp/constraints` package. The `Ordered` constraint ensures the type parameter can be used with comparison operators like `<`, `>`, `<=`, and `>=`, making functions like `Min` or `Max` possible.

Example: a function to find the minimum value in a slice. Traditionally, we might have written separate functions for different types like `int` and `string`.

```go
func minInt(s []int) int {
    min := s[0]
    for _, val := range s {
        if val < min {
            min = val
        }
    }
    return min
}

func minString(s []string) string {
    min := s[0]
    for _, val := range s {
        if val < min { // String comparison might not be intuitive
            min = val
        }
    }
    return min
}
```

With generics, we can define a single function `Min` that works with any ordered type.

```go
func Min[T constraints.Ordered](s []T) T {
    min := s[0]
    for _, val := range s {
        if val < min {
            min = val
        }
    }
    return min
}
```

**[T constraints.Ordered]** defines a generic type parameter `T`.
The `constraints.Ordered` part specifies a constraint on `T`: it must be a type that satisfies the `Ordered` constraint (meaning it supports comparison operators like `<`, `>`, etc.).

> This blog post was originally published on **[canopas.com](https://canopas.com/blog)**.
> To read the full version, please visit [**this blog**](https://canopas.com/generics-a-boon-for-strongly-typed-languages-bbb18f0ed04a).

**That's it for today. Keep exploring for the best!!**

---

If you like what you read, be sure to hit the 💖 button! As a writer, it means the world! I encourage you to share your thoughts in the comments section below. Your input not only enriches our content but also fuels our motivation to create more valuable and informative articles for you.

Happy coding! 👋
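To close, the `comparable` and `Ordered` examples above can be combined into one runnable sketch. Note one substitution on my part: this uses Go 1.21's standard-library `cmp.Ordered` instead of `constraints.Ordered` (the two constraints are equivalent), so it builds without the `golang.org/x/exp` dependency.

```go
package main

import (
	"cmp"
	"fmt"
)

// ArrayContains works for any type whose values support ==.
func ArrayContains[T comparable](arr []T, target T) bool {
	for _, element := range arr {
		if element == target {
			return true
		}
	}
	return false
}

// Min works for any type whose values support < (cmp.Ordered is the
// standard-library equivalent of constraints.Ordered as of Go 1.21).
func Min[T cmp.Ordered](s []T) T {
	min := s[0]
	for _, val := range s {
		if val < min {
			min = val
		}
	}
	return min
}

func main() {
	fmt.Println(ArrayContains([]float64{2.3, 4.5, 1.2}, 4.5)) // true
	fmt.Println(ArrayContains([]string{"a", "b"}, "c"))       // false
	fmt.Println(Min([]int{3, 1, 2}))                          // 1
	fmt.Println(Min([]string{"pear", "apple"}))               // apple
}
```

The same two type-parameter lists (`[T comparable]` and `[T cmp.Ordered]`) cover every concrete function shown earlier in this post.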
cp_nandani
1,909,946
TailwindCSS Flexbox. Free UI/UX design course
Flexbox It's time to take a look at another famous Tailwind CSS tool - flexbox. In fact,...
25,935
2024-07-03T09:59:34
https://dev.to/keepcoding/tailwindcss-flexbox-free-uiux-design-course-5fnc
tailwindcss, ui, uidesign, tutorial
## Flexbox

It's time to take a look at another famous Tailwind CSS tool: flexbox. In fact, flexbox itself is not a creation of Tailwind, but plain CSS; thanks to Tailwind, though, we can use flexbox comfortably through its utility classes. But enough talk, let's explain it better with examples.

## Step 1 - add headings

Our Hero Image is impressive, but since it contains no content, it is of little use. We need to add some kind of **Call to action**. One big heading and one subheading should be enough for now. Let's do it. Inside the _div_ with our **Hero Image**, let's add another div with headings inside.

**HTML**

```html
<!-- Background image -->
<div
  class="h-screen bg-cover bg-no-repeat"
  style="margin-top: -56px; background-image: url('https://mdbcdn.b-cdn.net/img/new/fluid/city/018.jpg');">
  <!-- Call to action -->
  <div class="pt-20">
    <h1>I am learning Tailwind</h1>
    <h2>And what an exciting adventure it is!</h2>
  </div>
</div>
<!-- Background image -->
```

The headings appear in the upper left corner of the screen and are covered by the Navbar, so we need to add padding with the _.pt-20_ class to see anything at all. This is definitely not a satisfactory solution.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wacd7h7y45c8oo7zdnz4.png)

We have to figure out a way to perfectly center them horizontally and vertically. Regardless of the size of the screen, we want our **Call to action** to appear in the **center**. A difficult task. But fortunately, we have **flexbox** at our disposal, thanks to which we will deal with it in the blink of an eye.

## Step 2 - add flexbox

First, we need to place our Call To Action in an outer _div_ that will handle flexbox.
**HTML**

```html
<!-- Background image -->
<div
  class="h-screen bg-cover bg-no-repeat"
  style="margin-top: -56px; background-image: url('https://mdbcdn.b-cdn.net/img/new/fluid/city/018.jpg');">
  <!-- Wrapper for flexbox -->
  <div>
    <!-- Call to action -->
    <div class="pt-10">
      <h1>I am learning Tailwind</h1>
      <h2>And what an exciting adventure it is!</h2>
    </div>
  </div>
</div>
<!-- Background image -->
```

Then we need to **enable flexbox**. We do this by adding the _.flex_ class to the outer wrapper _div_.

**HTML**

```html
<!-- Wrapper for flexbox -->
<div class="flex">
  <!-- Call to action -->
  <div class="pt-10">
    <h1>I am learning Tailwind</h1>
    <h2>And what an exciting adventure it is!</h2>
  </div>
</div>
```

So far, so good, but nothing changes after we save the file. And that's because enabling flexbox is only the first step. Now we need to choose one of the many available options to define how exactly we want to align the given elements.

**Horizontal alignment**

To center elements horizontally, we use the _justify-center_ class. Let's add it next to the _.flex_ class.

**HTML**

```html
<!-- Wrapper for flexbox -->
<div class="flex justify-center">
  <!-- Call to action -->
  <div class="pt-10">
    <h1>I am learning Tailwind</h1>
    <h2>And what an exciting adventure it is!</h2>
  </div>
</div>
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8m970clpw5sen5blg5z.png)

**Vertical alignment**

To center elements vertically, we use the _items-center_ class. Let's also add it next to the _.flex_ class.

**HTML**

```html
<!-- Wrapper for flexbox -->
<div class="flex items-center justify-center">
  <!-- Call to action -->
  <div class="pt-10">
    <h1>I am learning Tailwind</h1>
    <h2>And what an exciting adventure it is!</h2>
  </div>
</div>
```

After saving the file, it will turn out... that nothing has changed 🤔 However, if you look closely, you'll see that's not true - vertical centering worked as well.
The problem, however, is that the _div_ on which we run flexbox is only as tall as the elements it contains. As a result, there is no visual effect of vertical centering.

## Step 3 - set a height

Let's do an experiment: let's add the _bg-red-500_ class to the _div_ with our flexbox, which will give it a red background. Thanks to this, we will be able to see its actual height.

**HTML**

```html
<!-- Wrapper for flexbox -->
<div class="flex items-center justify-center bg-red-500">
  <!-- Call to action -->
  <div class="pt-10">
    <h1>I am learning Tailwind</h1>
    <h2>And what an exciting adventure it is!</h2>
  </div>
</div>
```

Look at the red rectangle - the flexbox _div_ ends and begins exactly where its contents end and begin - in this case, the **Call to action** elements.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63l3y1flgb2pou21oc2o.png)

To extend the flexbox _div_ to the full height of our Hero Image, we need to set its height equal to 100% of the available space. This is very easy to do with Tailwind. Just add the _.h-full_ class to the flexbox _div_ ("h" for height, so h-full = height: 100%).

**HTML**

```html
<!-- Wrapper for flexbox -->
<div class="flex h-full items-center justify-center bg-red-500">
  <!-- Call to action -->
  <div class="pt-10">
    <h1>I am learning Tailwind</h1>
    <h2>And what an exciting adventure it is!</h2>
  </div>
</div>
```

After saving the file and refreshing the browser, you will see that this time the Call to action is centered both horizontally and vertically.

You can **remove** the _.bg-red-500_ class. It only served to demonstrate the height of the flexbox _div_, so we don't need it anymore.

We still have a lot to improve on our **Call to action** (like poor visibility), but we'll cover that in future lessons. Regarding flexbox - in this lesson we have learned only the basic functionalities. We will cover advanced topics many times in the future, because flexbox is useful in virtually every project.
_**Note:** If you want to practice on your own and have a look at more examples you can play with our **[flexbox generator](https://www.designtoolshub.com/tailwind-css/flexbox-generator)**._ **[DEMO AND SOURCE CODE FOR THIS LESSON](https://tw-elements.com/snippets/tailwind/ascensus/5324955)**
keepcoding
1,909,945
From Homework Help to Exam Prep: Best Online Class-Taking Services
In today's fast-paced world, finding time to attend traditional classes can be challenging. Whether...
0
2024-07-03T09:58:38
https://dev.to/john_hipo/from-homework-help-to-exam-prep-best-online-class-taking-services-3bg6
In today's fast-paced world, finding time to attend traditional classes can be challenging. Whether you're a student juggling multiple responsibilities or a professional seeking to enhance your skills, online class-taking services offer a flexible and convenient solution. These platforms cater to various needs, from homework help to exam preparation. Let's explore the best class-taking services available online and how they can make your learning journey smoother.

**1. Chegg**

Chegg is a well-known platform that provides a wide range of educational resources. It offers textbook rentals, homework help, and study guides. With Chegg Study, you can get step-by-step solutions to textbook problems and ask experts for help with difficult questions. Additionally, Chegg offers tutoring services, which are perfect for one-on-one learning and exam preparation. Their user-friendly interface and comprehensive resources make it one of the best class-taking services online.

**2. Khan Academy**

Khan Academy is a non-profit organization that aims to provide free, world-class education to anyone, anywhere. It covers a vast array of subjects, including math, science, economics, and history. The platform offers video tutorials, practice exercises, and personalized learning dashboards. Khan Academy is especially useful for students who need help with homework and exam prep, as it breaks down complex topics into easy-to-understand lessons. Its free access makes it an invaluable resource for learners of all ages.

**3. Coursera**

Coursera partners with top universities and organizations to offer online courses, specializations, and degrees. It covers a wide range of subjects, from computer science to business, and provides high-quality video lectures, quizzes, and peer-reviewed assignments. Coursera's courses are designed by experts and can be taken at your own pace.
For those preparing for exams or looking to gain new skills, Coursera is one of the best class-taking services available online.

**4. Udemy**

Udemy is a massive online learning platform that offers over 155,000 courses on virtually any topic imaginable. From coding and design to personal development and marketing, Udemy has something for everyone. The courses are created by industry experts and include video lectures, quizzes, and assignments. Udemy's lifetime access to course materials allows you to learn at your own pace and revisit content whenever needed, making it ideal for both homework help and exam prep.

**5. Tutor.com**

Tutor.com offers personalized, one-on-one tutoring services in various subjects, including math, science, English, and social studies. The platform connects students with expert tutors who can provide homework help, test prep, and study skills coaching. Tutor.com is available 24/7, making it a flexible option for students with busy schedules. The interactive whiteboard and chat features enhance the learning experience, ensuring that students get the most out of their tutoring sessions.

**6. Study.com**

Study.com offers a vast library of video lessons, quizzes, and practice tests covering a wide range of subjects. It provides homework help, test prep, and even college credit courses. Study.com's bite-sized video lessons make learning engaging and manageable, and the platform's progress tracking helps students stay on top of their studies. For those looking for a comprehensive online class-taking service, Study.com is an excellent choice.

**7. edX**

edX, founded by Harvard and MIT, offers online courses from top universities and institutions worldwide. It covers a broad spectrum of subjects, including computer science, business, and humanities. edX provides high-quality video lectures, interactive labs, and assessments.
With its self-paced and instructor-led courses, edX is perfect for students seeking homework help or preparing for exams. The platform also offers professional certificates and degrees, making it one of the best class-taking services for career advancement.

**8. Wyzant**

Wyzant connects students with private tutors for one-on-one learning sessions. It covers a wide range of subjects and skill levels, from elementary school to college and beyond. Wyzant’s tutors are vetted professionals who provide personalized instruction tailored to each student’s needs. The platform’s flexibility allows students to schedule sessions at their convenience, making it ideal for homework help and exam preparation.

**9. Quizlet**

Quizlet is a popular study tool that offers flashcards, quizzes, and games to help students learn and retain information. It covers a wide range of subjects and allows users to create their own study sets or use existing ones. Quizlet’s interactive and engaging approach to learning makes it an effective tool for exam prep. Its mobile app ensures that students can study on the go, making it a versatile addition to any student’s toolkit.

**10. Brainly**

Brainly is a peer-to-peer learning platform where students can ask and answer homework questions. It covers a wide range of subjects and grade levels, from elementary school to college. Brainly’s collaborative approach allows students to learn from each other and gain different perspectives on challenging topics. The platform’s community of knowledgeable users makes it a valuable resource for homework help and exam preparation.

In conclusion, online class-taking services offer a wealth of resources for students and professionals alike. From comprehensive platforms like Chegg and Khan Academy to specialized services like Tutor.com and Wyzant, there’s something for everyone.
These services not only provide homework help but also offer effective exam preparation tools, making them essential for anyone looking to succeed in their academic and professional endeavors.
john_hipo
1,909,944
Geospatial Indexing and Queries in MongoDB
MongoDB provides robust support for geospatial data and queries, allowing developers to efficiently...
0
2024-07-03T09:57:29
https://dev.to/platform_engineers/geospatial-indexing-and-queries-in-mongodb-4ojk
MongoDB provides robust support for geospatial data and queries, allowing developers to efficiently execute spatial queries on collections containing geospatial shapes and points. This blog post delves into the technical aspects of geospatial indexing and queries in MongoDB, highlighting the different types of indexes and query operators available.

### Geospatial Indexes

MongoDB offers two types of geospatial indexes: `2dsphere` and `2d`. These indexes are used to improve the performance of geospatial queries by allowing the database to efficiently locate and filter geospatial data.

#### 2dsphere Indexes

`2dsphere` indexes support queries that interpret geometry on a spherical surface, such as the Earth. These indexes are particularly useful for applications that require calculations on a sphere, like determining distances between points on the Earth's surface. `2dsphere` indexes support both GeoJSON objects and legacy coordinate pairs.

```javascript
db.collection.createIndex( { location : "2dsphere" } )
```

#### 2d Indexes

`2d` indexes, on the other hand, support queries that interpret geometry on a flat surface. These indexes are suitable for applications that require calculations on a two-dimensional plane. `2d` indexes support legacy coordinate pairs.

```javascript
db.collection.createIndex( { location : "2d" } )
```

### Geospatial Query Operators

MongoDB provides several geospatial query operators to perform various spatial operations. These operators can be used in conjunction with geospatial indexes to efficiently filter and retrieve geospatial data.

#### $geoIntersects

The `$geoIntersects` operator selects geometries that intersect with a specified geometry. This operator is useful for determining if a point or shape lies within another shape.
```javascript
db.collection.find( {
  location : {
    $geoIntersects : {
      $geometry : {
        type : "Polygon",
        coordinates : [ [ [ 0, 0 ], [ 3, 0 ], [ 3, 3 ], [ 0, 3 ], [ 0, 0 ] ] ]
      }
    }
  }
} )
```

#### $geoWithin

The `$geoWithin` operator selects geometries that are entirely within a specified shape. This operator is useful for determining if a point or shape is fully contained within another shape.

```javascript
db.collection.find( {
  location : {
    $geoWithin : {
      $geometry : {
        type : "Polygon",
        coordinates : [ [ [ 0, 0 ], [ 3, 0 ], [ 3, 3 ], [ 0, 3 ], [ 0, 0 ] ] ]
      }
    }
  }
} )
```

#### $nearSphere

The `$nearSphere` operator returns geospatial objects in proximity to a specified point on a sphere. This operator is useful for determining the nearest points to a given location.

```javascript
db.collection.find( { location : { $nearSphere : [ 0, 0 ], $maxDistance : 1000 } } )
```

### Geospatial Aggregation Stage

[MongoDB](https://platformengineers.io/blog/seeding-a-mongo-db-database-using-docker-compose/) also provides a geospatial aggregation stage, `$geoNear`, which returns an ordered stream of documents based on the proximity to a geospatial point.

```javascript
db.collection.aggregate( [ { $geoNear : { near : [ 0, 0 ], distanceField : "distance" } } ] )
```

### Platform Engineering

In the context of platform engineering, MongoDB's geospatial features can be particularly useful for building scalable and efficient applications that require spatial data processing. By leveraging geospatial indexes and query operators, developers can create high-performance applications that can handle large volumes of geospatial data.

### Conclusion

In conclusion, MongoDB's geospatial features provide a robust and efficient way to handle spatial data and queries. By understanding the different types of geospatial indexes and query operators available, developers can build high-performance applications that can efficiently process and analyze geospatial data.
shahangita
1,909,943
Single Page Application: Authentication and Authorization in Angular
Introduction In a Single Page Application (SPA), each element has its own existence and...
0
2024-07-03T09:54:43
https://dev.to/starneit/single-page-application-authentication-and-authorization-in-angular-118b
webdev, javascript, programming, beginners
### Introduction

In a Single Page Application (SPA), each element has its own existence and lifecycle, rather than being part of a global page state. Authentication and authorization can affect some or all elements on the screen.

### Authentication Process in an SPA

1. User login: Obtain an access and refresh token.
2. Client-side storage: Store tokens and minimal details in local storage.
3. Login resolver: Redirect away from the login page if the user is authenticated.
4. Router auth guard: Redirect to the login page if the user is unauthenticated.
5. Logout: Remove stored data.

### Additional Considerations

1. HTTP interceptor: Use the token for API calls.
2. 401 handling: Request a new token when necessary.
3. User display: Retrieve and display user details on the screen.
4. Redirect URL: Track additional information.
5. Other concerns include third-party adaptation and server-side rendering for obtaining access tokens.

### Basic Login Example

Begin with a simple authentication form that requires a username and password. The API accepts the credentials and returns an access token and refresh token.

```typescript
// Auth service
@Injectable({ providedIn: 'root' })
export class AuthService {
  private _loginUrl = '/auth/login';

  constructor(private _http: HttpClient) {}

  // login method
  Login(username: string, password: string): Observable<any> {
    return this._http.post(this._loginUrl, { username, password }).pipe(
      map((response) => {
        // prepare the response to be handled, then return
        return response;
      })
    );
  }
}
```

When the login is successful, save the information in localStorage:

```typescript
Login(username: string, password: string): Observable<any> {
  return this._http.post(this._loginUrl, { username, password }).pipe(
    map((response) => {
      const retUser: IAuthInfo = <IAuthInfo>(<any>response).data;
      // save in localStorage
      localStorage.setItem('user', JSON.stringify(retUser));
      return retUser;
    })
  );
}
```

Upon revisiting the site, user information should be populated from localStorage.
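That restore step can be sketched without any framework code. Below is a minimal illustration; the `IAuthInfo` shape and the `'user'` storage key are assumptions carried over from the login snippet, and the `loadStoredUser` helper and `KeyValueStore` abstraction are invented for the example — in the real service this logic would run in the constructor and feed the auth state.

```typescript
// Hypothetical helper: read the persisted user back on startup.
// IAuthInfo mirrors what the Login method stored under the 'user' key.
interface IAuthInfo {
  username: string;
  accessToken: string;
  refreshToken: string;
}

// Minimal storage shape, satisfied by window.localStorage or a test double.
interface KeyValueStore {
  getItem(key: string): string | null;
}

function loadStoredUser(store: KeyValueStore, key: string = 'user'): IAuthInfo | null {
  const raw = store.getItem(key);
  if (!raw) {
    return null;
  }
  try {
    return JSON.parse(raw) as IAuthInfo;
  } catch {
    // A corrupt entry is treated as "not logged in".
    return null;
  }
}
```

On success, the parsed value would be pushed into the auth state; on `null`, the visitor simply stays logged out.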
### Auth State Management

To manage the authentication state, use RxJS BehaviorSubject and Observable:

```typescript
// Auth service
export class AuthService {
  // create an internal subject and an observable to keep track
  private stateItem: BehaviorSubject<IAuthInfo | null> = new BehaviorSubject<IAuthInfo | null>(null);
  stateItem$: Observable<IAuthInfo | null> = this.stateItem.asObservable();
}
```

### Handling User Status

If the localStorage status indicates the user is logged in and the page is refreshed, the user should be redirected to the appropriate location.

### Logout Process

To log out, remove the state and localStorage data:

```typescript
// services/auth.service
Logout() {
  this.RemoveState();
  localStorage.removeItem(ConfigService.Config.Auth.userAccessKey);
}
```

### Auth Guard

To protect private routes and redirect unauthorized users, use an AuthGuard:

```typescript
// services/auth.guard
@Injectable({ providedIn: 'root' })
export class AuthGuard implements CanActivate, CanActivateChild {
  constructor(private authState: AuthService, private _router: Router) {}

  canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> {
    return this.secure(route);
  }

  canActivateChild(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<boolean> {
    return this.secure(route);
  }

  private secure(route: ActivatedRouteSnapshot | Route): Observable<boolean> {
    return this.authState.stateItem$.pipe(
      map(user => {
        if (!user) {
          this._router.navigateByUrl('/login');
          return false;
        }
        // user exists
        return true;
      })
    );
  }
}
```

### Additional Use Cases

In future iterations, the access token can be used for API calls, and handling a 401 error can be implemented.

### HTTP Interceptor

To automatically add the access token to API calls, use an HTTP interceptor. This will help manage authentication headers for all requests.
```typescript
// services/auth.interceptor.ts
@Injectable()
export class AuthInterceptor implements HttpInterceptor {
  constructor(private authService: AuthService) {}

  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    const authToken = this.authService.getAccessToken();
    if (authToken) {
      const authReq = req.clone({
        headers: req.headers.set('Authorization', 'Bearer ' + authToken),
      });
      return next.handle(authReq);
    } else {
      return next.handle(req);
    }
  }
}
```

Register the interceptor in the app.module.ts:

```typescript
// app.module.ts
@NgModule({
  // ...
  providers: [
    {
      provide: HTTP_INTERCEPTORS,
      useClass: AuthInterceptor,
      multi: true,
    },
  ],
})
export class AppModule {}
```

### Handling 401 Errors

To handle 401 errors, create another interceptor that detects the error and refreshes the access token if necessary.

```typescript
// services/error.interceptor.ts
@Injectable()
export class ErrorInterceptor implements HttpInterceptor {
  constructor(private authService: AuthService, private router: Router) {}

  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    return next.handle(req).pipe(
      catchError((error: HttpErrorResponse) => {
        if (error.status === 401) {
          // Refresh the access token and retry the request.
          return this.authService.refreshAccessToken().pipe(
            switchMap((newToken) => {
              const authReq = req.clone({
                headers: req.headers.set('Authorization', 'Bearer ' + newToken),
              });
              return next.handle(authReq);
            })
          );
        } else {
          return throwError(error);
        }
      })
    );
  }
}
```

Register the error interceptor in the `app.module.ts`:

```typescript
// app.module.ts
@NgModule({
  // ...
  providers: [
    {
      provide: HTTP_INTERCEPTORS,
      useClass: ErrorInterceptor,
      multi: true,
    },
  ],
})
export class AppModule {}
```

### User Display

To display user information, create a component that subscribes to the `authService.stateItem$` Observable and updates the UI accordingly.
```typescript
// components/user-display/user-display.component.ts
@Component({
  selector: 'app-user-display',
  templateUrl: './user-display.component.html',
  styleUrls: ['./user-display.component.scss'],
})
export class UserDisplayComponent implements OnInit {
  user$: Observable<IUser>;

  constructor(private authService: AuthService) {}

  ngOnInit(): void {
    this.user$ = this.authService.stateItem$.pipe(map((state) => state?.payload));
  }
}
```

Include this component wherever it is needed to display user information.

### Redirect URL

To redirect users to the original URL they were attempting to access before being redirected to the login page, store the URL during the authentication process. Update the AuthGuard to save the attempted URL:

```typescript
// services/auth.guard.ts
private secure(route: ActivatedRouteSnapshot | Route): Observable<boolean> {
  return this.authState.stateItem$.pipe(
    map((user) => {
      if (!user) {
        // Save the attempted URL
        this.authService.setRedirectUrl(route.url);
        this._router.navigateByUrl('/login');
        return false;
      }
      return true;
    })
  );
}
```

Add methods to the AuthService to store and retrieve the redirect URL:

```typescript
// services/auth.service.ts
private redirectUrl: string;

setRedirectUrl(url: string) {
  this.redirectUrl = url;
}

getRedirectUrl(): string {
  return this.redirectUrl;
}
```

Modify the login component to redirect the user to the stored URL upon successful login:

```typescript
// login component
login() {
  this.authService.Login('username', 'password').subscribe({
    next: (result) => {
      const redirectUrl = this.authService.getRedirectUrl();
      if (redirectUrl) {
        this.router.navigateByUrl(redirectUrl);
      } else {
        this.router.navigateByUrl('/private/dashboard');
      }
    },
    // ...
  });
}
```

This will ensure that users are taken back to the original URL they attempted to access after logging in.

### Third-Party Authentication

To integrate third-party authentication providers, such as Google or Facebook, follow these general steps: 1.
Register your application with the third-party provider and obtain the required credentials (client ID and secret). 2. Implement a "Login with [Provider]" button in your login component that redirects users to the provider's authentication page. 3. Create an endpoint in your backend to handle the authentication response and exchange the authorization code for an access token. 4. Save the third-party access token in your database and return a custom access token (JWT) to your SPA. 5. Adapt your AuthService to handle third-party authentication and store the received JWT.

### Server-Side Rendering (SSR)

If your application uses server-side rendering, you'll need to handle the initial authentication state differently. In this case, the access token can be fetched during the SSR process: 1. In your server-side rendering logic, check for the access token in cookies or local storage. 2. If an access token is found, include it in the initial state of your application. 3. In your AuthService, use the APP_INITIALIZER to read the access token from the initial state instead of local storage.

### Token Refresh

For better security, your application should implement token refresh logic. When the access token expires, the application should use the refresh token to request a new access token without requiring the user to log in again. 1. Add a method in the AuthService to request a new access token using the refresh token. 2. Update the ErrorInterceptor to call the refresh token method when a 401 status is encountered. 3. Store the new access token and update the authentication state. 4. Retry the original API call with the new access token.

### Roles and Permissions

To further enhance the authorization process, you can implement role-based access control using roles and permissions. 1. Assign roles and permissions to users during the registration or login process. 2. Include the user's roles and permissions in the JWT payload. 3.
Create a custom AuthRoleGuard that checks for the required roles and permissions in the JWT before granting access. 4. Protect routes using the AuthRoleGuard based on the roles and permissions required. By implementing these additional features, you can create a robust and secure authentication and authorization system for your Single Page Application.

### Conclusion

Adding a login system to a website makes it safer and easier to use. We covered many steps, such as logging in, saving information, and making sure only the right people can see certain things. We also discussed extra features, like using other websites to log in or giving different people different permissions. Following these steps will help you create a secure website that everyone enjoys using. Keep learning and improving your website over time!
starneit
1,909,940
What is Staking in Crypto? – Process, Benefits & Risks
Staking is a popular phenomenon in the crypto space. It allows crypto developers to secure their...
0
2024-07-03T09:51:50
https://dev.to/digivikas/what-is-staking-in-crypto-process-benefits-risks-1nli
cryptocurrency, web3, blockchain, learning
Staking is a popular phenomenon in the crypto space. It allows crypto developers to secure their networks while rewarding participants who stake their tokens. Crypto staking has lately emerged as a favored way for crypto enthusiasts and investors to make a side income from their crypto portfolio. It involves staking your crypto assets in a PoS blockchain network to earn a passive income in the form of additional tokens paid at regular intervals. Here’s all you need to know about crypto staking, including how it works, the benefits, and the risks involved.

<h2>What is Crypto Staking?</h2>

Staking works similarly to a savings account that pays regular interest. You deposit your money (crypto tokens) in an account (blockchain network) and earn interest on the amount deposited. Staking involves depositing (locking) your crypto assets on a staking platform for a fixed period. During this time, your deposits will earn regular interest, but you cannot withdraw them until the deadline ends.

Staking is primarily employed by blockchains using the proof of stake consensus mechanism. It requires participants to stake a minimum amount of tokens to participate in the block generation process. Instead of mining cryptocurrency the traditional way, PoS involves staking tokens to generate new tokens. Staking helps strengthen and secure the network and rewards participants for their contributions.

<b>Some popular blockchains that use staking include Ethereum, Solana, Cardano, and Polygon.</b>

<h2>Proof of Stake (PoS) Consensus</h2>

Staking is how proof-of-stake networks maintain a legitimate and secure ecosystem. In a proof-of-stake consensus network, participants are required to stake their digital assets to participate in the consensus process. The more they stake, the more opportunities they get to validate transactions, add new blocks, and earn new tokens.
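The stake-weighted selection described above can be illustrated with a toy calculation. This is a sketch of the proportionality idea only — the validator names and stake amounts are invented, and real networks add randomness, minimum-stake rules, and slashing on top:

```typescript
// Toy model of proof-of-stake validator selection: the chance of being
// picked to propose the next block is proportional to stake.
interface Validator {
  name: string;
  stake: number; // tokens staked
}

// Returns each validator's selection probability (stake / total stake).
function selectionProbabilities(validators: Validator[]): Map<string, number> {
  const total = validators.reduce((sum, v) => sum + v.stake, 0);
  return new Map(validators.map((v) => [v.name, v.stake / total]));
}

const probs = selectionProbabilities([
  { name: 'alice', stake: 600 },
  { name: 'bob', stake: 300 },
  { name: 'carol', stake: 100 },
]);
// alice, holding 60% of the total stake, is selected about 60% of the time
```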
A staking pool refers to a pool of funds contributed by multiple participants, who all earn staking rewards in proportion to their staked tokens without the need to individually validate and add new blocks.

<h2>How Staking Works</h2>

To participate in crypto staking, you need to have a cryptocurrency for a blockchain network that uses the PoS consensus mechanism. Some examples include ETH, NOMOX, SOL, and ADA. After selecting and buying the cryptocurrency you want to stake, find a staking platform where you can earn good returns for less work. Check out www.nomoex.org.

Now comes the best part. After joining the staking platform, all you need to do is stake (lock) your cryptocurrency for your preferred period. It can be a day, a week, a month, or even a year. The more tokens you invest for a longer period, the more rewards you’ll earn. Reward tokens are automatically deposited to your wallet, which you can withdraw or transfer. However, you cannot withdraw your staked tokens before the staking period ends.

Some staking platforms even allow you to stake not just one but a range of tokens and digital assets. Nomoex, a fast-emerging crypto exchange, for instance, allows its users to stake a wide range of cryptocurrencies, including the native Nomoex token, allowing them to diversify their portfolio across multiple digital assets to reduce risks and increase potential returns.

<b>Factors to consider when choosing a staking platform:</b>

- Minimum staking amount
- Types of tokens supported
- Annual percentage yield (APY)
- Rewards schedule

<h2>Benefits of Staking Crypto</h2>

Earn Rewards: Stakers earn regular rewards on their deposits based on the type and amount of token staked and the staking period.

Easy to Do: Staking is easier than mining, trading, and other ways to make money with crypto. All you need to do is deposit your tokens.

Diversification: Platforms like Nomoex allow you to stake multiple tokens to diversify your portfolio and reduce risk.
Passive Income: For crypto investors looking to earn passive income (like interest or dividends) without selling their cryptocurrency, this is the best option.

<h2>The Risks of Staking Crypto</h2>

There are some risks involved in this type of investment. For one, your staked tokens are locked into the contract and cannot be withdrawn or traded during the staking period. There is also high risk with crypto investments due to the volatile nature of these assets. When investing in crypto, you should be ready for unexpected price swings and volatility in the market. Since you cannot sell your staked tokens, you won’t be able to leverage market upturns or sell when the price of your tokens increases. It is also crucial to choose your validator carefully: staking through a fraudulent or inexperienced validator can result in the loss of your tokens.

<h2>How to stake crypto?</h2>

We have already explained the process of crypto staking above. If you are looking to earn a passive income by staking your cryptocurrency, visit www.nomoex.com to get started.
digivikas
1,909,939
Frontend Technologies
As a beginner frontend developer you most likely have heard and used technologies such as HTML, CSS...
0
2024-07-03T09:51:04
https://dev.to/goldyn/frontend-technologies-4h4b
As a beginner frontend developer, you have most likely heard of and used technologies such as HTML, CSS, and plain JavaScript. These are the stepping stones and building blocks of frontend development. However, there is more. Concepts such as URL routing, state management, and component-based architecture can be difficult to implement with these basic building blocks alone. Tools are needed to make these tasks easier; such tools are called libraries or frameworks. In frontend development, the majority of these tools are built on JavaScript. In this article, we will discuss the pros and cons of two of these frameworks.

## **React**

React is the most popular frontend framework. It was created by Facebook and first released in May 2013. Ever since, React has continued to show its dominance in the dev community. Currently, React has over 150 million downloads every week. Major companies such as Facebook, Instagram, Airbnb, and Netflix all use React.

React uses JSX, a syntax for writing HTML within JavaScript. React focuses on component-based architecture and a virtual DOM. Components are reusable pieces of the UI that manage their own state and logic. React uses a virtual DOM to efficiently update and render components, minimizing direct manipulation of the actual DOM.

## Pros

- Facebook provides React with significant corporate support and a sizable, vibrant community. This guarantees ongoing improvement and plenty of resources.
- JSX allows you to write HTML within JavaScript, providing a seamless way to define UI components and enhancing developer productivity by combining markup with logic.
- React's virtual DOM implementation can lead to efficient updates and high performance, especially in applications with a lot of dynamic content.

## Cons

- Some developers find JSX less intuitive compared to the template syntax used by Vue and Angular, particularly those with a traditional HTML/CSS background.
- React often requires more boilerplate code and manual configuration (e.g., setting up state management with Redux or MobX).
- While React itself is relatively easy to learn, mastering state management libraries like Redux or the Context API can add complexity.

## **Vue**

Vue.js is a popular JavaScript framework for building user interfaces and single-page applications. It is the second most popular framework behind React. It was developed by Evan You and first released in February 2014. Currently, Vue has over 3 million downloads weekly. Major companies such as Alibaba, Xiaomi, and GitLab use Vue.

Like React, Vue.js uses a component-based architecture to build reusable user interfaces. Unlike React, Vue.js uses an HTML-based template syntax which binds the DOM to the Vue instance's data. Vue’s reactivity system automatically updates the DOM when the underlying data changes.

## Pros

- Vue has a gentle learning curve, making it accessible for developers with a basic understanding of HTML, CSS, and JavaScript. The documentation is comprehensive and beginner-friendly.
- Vue has its own built-in state manager and router, and unlike React it does not need an external library to handle these tasks.
- Vue is designed to be incrementally adoptable. You can use it to enhance parts of an existing project or build a full-fledged single-page application from scratch. It can also integrate smoothly with other libraries or existing projects.

## Cons

- When compared to React, Vue has a smaller community and hence fewer third-party libraries and tools for niche use cases.
- It may not be as robust as React for very large-scale applications with a comprehensive setup.

Personally, I have been using React for quite some time and have found it very useful and easy to use. Knowledge of React alone is not sufficient; to fully master React, I must get hands-on experience with it. HNG has provided a platform for me to do just that (https://hng.tech/internship).
I aim to deepen my experience with React, learn from others, and network with other developers (https://hng.tech/premium).
goldyn
1,909,896
Enhancing Video to Text Transcription with AI: An Asynchronous Solution on Google Cloud Platform
Asynchronous transcription can be applied in various contexts. For developers looking to implement a...
0
2024-07-03T09:50:46
https://dev.to/stack-labs/enhancing-video-to-text-transcription-with-ai-an-asynchronous-solution-on-google-cloud-platform-59el
devops, ai, python, googlecloud
Asynchronous transcription can be applied in various contexts. For developers looking to implement a robust, scalable, and efficient transcription solution, the Google Cloud Platform (GCP) offers an ideal environment. In this article, we’ll explore an asynchronous video-to-text transcription solution built with GCP using an event-driven and serverless architecture.

### Potential Applications

The provided solution is particularly well-suited for long video-to-text transcriptions, efficiently handling videos that are more than an hour long. This makes it ideal for a wide array of applications across various sectors. Here are some examples:

* **State Institutions or local authorities:** Transcribing meetings, hearings, and other official recordings to ensure transparency and accessibility.
* **Company Meetings:** Creating accurate records of internal meetings, conferences, and training sessions to enhance communication.
* **Educational Institutions:** Transcribing lectures, seminars, and workshops to aid in learning and research.

For shorter videos, the Gemini Pro API can handle the entire video-to-text transcription process, offering a streamlined and efficient solution for quicker, smaller-scale transcription needs.

### Solution Overview

![Architecture overview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l77y0y5rb34xxrvyolvo.png)

Our solution comprises three event-driven Cloud Functions, each triggered by specific events in different Cloud Storage buckets using an Eventarc trigger. Event-driven Cloud Functions are pieces of code deployed on GCP and invoked in response to an event in the cloud environment. In our case, we want our functions to be invoked when a file is uploaded to a specific Cloud Storage bucket. Eventarc is a standardized solution to manage events on GCP. Eventarc triggers route these events between resources.
In this particular case, each Eventarc trigger listens for new objects in a specific Cloud Storage bucket, and then triggers the associated Cloud Function. The event data is passed to the function.

[More information about Cloud Functions](https://cloud.google.com/functions/docs/writing/write-event-driven-functions#cloudevent-example-python)

[More information about Eventarc triggers](https://cloud.google.com/functions/docs/calling/eventarc).

The four buckets used in our architecture are:

1. **Video Files Bucket:** Where users upload their video files.
2. **Audio Files Bucket:** Stores the extracted audio files.
3. **Raw Transcriptions Bucket:** Contains the initial transcriptions generated by the Chirp speech-to-text model.
4. **Curated Transcriptions Bucket:** Stores the curated transcriptions, enhanced by Gemini.

The application architecture is designed to be modular and scalable. Here’s a step-by-step breakdown of the workflow:

1. **Video Upload and Audio Extraction**

When a user uploads a video file to the `video-files` bucket, the `video-to-audio` Cloud Function is triggered. This function uses `ffmpeg` to extract the audio from the video file and save it in the `audio-files` bucket.
```python
import os
import subprocess
from google.cloud import storage
import functions_framework
import logging

logger = logging.getLogger(__name__)


@functions_framework.cloud_event
def convert_video_to_audio(cloud_event):
    """Video to audio event-triggered cloud function."""
    data = cloud_event.data
    bucket_name = data['bucket']
    video_file_name = data['name']
    destination_bucket_name = os.environ.get("AUDIO_FILES_BUCKET_NAME")

    if not video_file_name.endswith(('.mp4', '.mov', '.avi', '.mkv')):
        logger.info(f"File {video_file_name} is not a supported video format.")
        return

    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(video_file_name)
    tmp_video_file = f"/tmp/{video_file_name}"
    blob.download_to_filename(tmp_video_file)

    audio_file_name = os.path.splitext(video_file_name)[0] + '.mp3'
    tmp_audio_file = f"/tmp/{audio_file_name}"
    command = f"ffmpeg -i {tmp_video_file} -vn -acodec libmp3lame -q:a 2 {tmp_audio_file}"
    subprocess.call(command, shell=True)

    destination_bucket = storage_client.bucket(destination_bucket_name)
    destination_blob = destination_bucket.blob(audio_file_name)
    destination_blob.upload_from_filename(tmp_audio_file)

    os.remove(tmp_video_file)
    os.remove(tmp_audio_file)
    logger.info(f"Converted {video_file_name} to {audio_file_name} and uploaded to {destination_bucket_name}.")
```

2. **Audio to Text Transcription**

The upload of the audio file to the `audio-files` bucket triggers the `audio-to-text` Cloud Function. This function uses Chirp, a highly accurate speech-to-text model, and the Speech-to-Text API, to transcribe the audio and store the raw transcription in the `raw-transcriptions` bucket.

```python
from google.cloud.speech_v2 import SpeechClient
from google.cloud.speech_v2.types import cloud_speech
from google.api_core.client_options import ClientOptions
from google.cloud import storage
import functions_framework
from typing import Dict
import logging
import time
import os
from . import chirp_model_long

logger = logging.getLogger(__file__)


def transcribe_batch_gcs(
    project_id: str,
    gcs_uri: str,
    region: str = "us-central1"
) -> str:
    """Transcribes audio from a Google Cloud Storage URI.

    Parameters
    ----------
    project_id:
        The Google Cloud project ID.
    gcs_uri:
        The Google Cloud Storage URI.

    Returns
    -------
    The transcript of the audio file.
    """
    client = SpeechClient(
        client_options=ClientOptions(
            api_endpoint=f"{region}-speech.googleapis.com",
        )
    )
    config = cloud_speech.RecognitionConfig(
        auto_decoding_config=cloud_speech.AutoDetectDecodingConfig(),
        language_codes=["fr-FR"],
        model="chirp",
    )
    file_metadata = cloud_speech.BatchRecognizeFileMetadata(uri=gcs_uri)
    request = cloud_speech.BatchRecognizeRequest(
        recognizer=f"projects/{project_id}/locations/{region}/recognizers/_",
        config=config,
        files=[file_metadata],
        recognition_output_config=cloud_speech.RecognitionOutputConfig(
            inline_response_config=cloud_speech.InlineOutputConfig(),
        ),
        processing_strategy=cloud_speech.BatchRecognizeRequest.ProcessingStrategy.DYNAMIC_BATCHING
    )
    operation = client.batch_recognize(request=request)
    logger.info("Waiting for operation to complete...")
    response = operation.result(timeout=1000)

    transcript = ""
    for result in response.results[gcs_uri].transcript.results:
        if len(result.alternatives) > 0:
            logger.info(f"Transcript: {result.alternatives[0].transcript}")
            transcript += f" \n{result.alternatives[0].transcript}"
    logger.debug(f"Transcript: {transcript}")
    return transcript


@functions_framework.cloud_event
def speech_to_text_transcription(cloud_event):
    """Audio file transcription via Speech-to-text API call."""
    data: Dict = cloud_event.data
    event_id = cloud_event["id"]
    event_type = cloud_event["type"]
    input_bucket = data["bucket"]
    audio_file_name = data["name"]
    destination_bucket_name = os.environ.get("RAW_TRANSCRIPTIONS_BUCKET_NAME")

    logger.info(f"Event ID: {event_id}")
    logger.info(f"Event type: {event_type}")
    logger.info(f"Bucket: {input_bucket}")
    logger.info(f"File: {audio_file_name}")

    storage_client = storage.Client()

    start = time.time()
    transcript = transcribe_batch_gcs(
        project_id=os.environ.get("PROJECT_ID"),
        gcs_uri=f"gs://{input_bucket}/{audio_file_name}"
    )
    stop = time.time()

    raw_transcription_file_name = os.path.splitext(audio_file_name)[0] + '_raw.txt'
    destination_bucket = storage_client.bucket(destination_bucket_name)
    destination_blob = destination_bucket.blob(raw_transcription_file_name)
    destination_blob.upload_from_string(transcript, content_type='text/plain; charset=utf-8')

    logger.debug(transcript)
    logger.info(f"JOB DONE IN {round(stop - start, 2)} SECONDS.")
```

**About Chirp:** Chirp is a state-of-the-art speech-to-text model developed to provide highly accurate and fast transcription services. It supports a wide range of languages and dialects, making it a versatile choice for diverse transcription needs. It is available in the Speech-to-Text API.

[More information about long audio to text transcription](https://cloud.google.com/speech-to-text/v2/docs/batch-recognize).

3. **Transcription Curation**

Finally, the `curate-transcription` Cloud Function is triggered by the new transcription file in the `raw-transcriptions` bucket. This function sends the raw transcription to the Gemini API, which uses the `gemini-pro` model for curation, and stores the refined transcription in the `curated-transcriptions` bucket.
```python
import vertexai
from vertexai.generative_models import GenerativeModel
import functions_framework
from google.cloud import storage
import time
import logging
import os
from typing import Dict

logger = logging.getLogger(__file__)


@functions_framework.cloud_event
def transcription_correction(cloud_event):
    """Gemini API call to correct and enhance speech-to-text transcription."""
    data: Dict = cloud_event.data
    event_id = cloud_event["id"]
    event_type = cloud_event["type"]
    input_bucket = data["bucket"]
    raw_transcription_filename = data["name"]
    destination_bucket_name = os.environ.get("CURATED_TRANSCRIPTIONS_BUCKET_NAME")

    logger.info(f"Event ID: {event_id}")
    logger.info(f"Event type: {event_type}")
    logger.info(f"Bucket: {input_bucket}")
    logger.info(f"File: {raw_transcription_filename}")

    storage_client = storage.Client()
    input_bucket = storage_client.get_bucket(input_bucket)
    input_blob = input_bucket.get_blob(raw_transcription_filename)
    transcript = input_blob.download_as_string()

    vertexai.init(project=os.environ.get("PROJECT_ID"), location="us-central1")
    model = GenerativeModel(model_name="gemini-1.0-pro-002")
    prompt = f"""
    YOUR CUSTOM PROMPT GOES HERE. PROVIDING CONTEXT AND GIVING INFORMATION
    ABOUT THE RESULT YOU EXPECT IS NECESSARY.

    {transcript}
    """
    n_tokens = model.count_tokens(prompt)
    logger.info(f"JOB : SPEECH-TO-TEXT TRANSCRIPTION CORRECTION. \n{n_tokens.total_billable_characters} BILLABLE CHARACTERS")
    logger.info(f"RESPONSE WILL PROCESS {n_tokens.total_tokens} TOKENS.")

    start = time.time()
    response = model.generate_content(prompt)
    stop = time.time()

    curated_filename = raw_transcription_filename.replace("_raw", "_curated")
    destination_bucket = storage_client.bucket(destination_bucket_name)
    destination_blob = destination_bucket.blob(curated_filename)
    destination_blob.upload_from_string(response.text, content_type='text/plain; charset=utf-8')
    logger.debug(response.text)
    logger.info(f"JOB DONE IN {round(stop - start, 2)} SECONDS.")
```

The chosen architecture is modular and event-driven, which brings several advantages:

1. **Scalability:** This application can handle short or long videos, up to 8 hours.
2. **Flexibility:** The separation of concerns allows for easy maintenance and upgrades. If the user uploads a video to the `video-files` bucket, all three Cloud Functions are triggered. If the user uploads an audio file to the `audio-files` bucket, only the last two Cloud Functions are triggered.
3. **Cost-Efficiency:** Cloud Functions are serverless, so resources are only used when necessary, reducing costs.

### Deployment with Terraform

To ensure our solution is not only powerful but also easily manageable and deployable, we use Terraform for infrastructure as code (IaC). Terraform allows us to define our cloud resources in declarative configuration files, providing several key benefits:

- Infrastructure configurations can be version-controlled using Git, following GitOps principles. This means changes to the infrastructure are tracked, reviewed, and can be rolled back if necessary.
- As our application grows, Terraform makes it easy to manage our infrastructure by simply updating the configuration files.
- Terraform makes the deployment of our application reliable and repeatable.
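Before moving to the Terraform code, it may help to see how the three functions chain together purely through file naming. The helpers below are a minimal sketch distilled from the snippets above; the bucket triggers do the actual chaining, and these functions only mirror the naming logic:

```python
import os

def audio_name(video_file_name: str) -> str:
    # convert_video_to_audio: replace the video extension with .mp3
    return os.path.splitext(video_file_name)[0] + '.mp3'

def raw_transcription_name(audio_file_name: str) -> str:
    # speech_to_text_transcription: write the transcript as <stem>_raw.txt
    return os.path.splitext(audio_file_name)[0] + '_raw.txt'

def curated_name(raw_file_name: str) -> str:
    # transcription_correction: swap the _raw suffix for _curated
    return raw_file_name.replace("_raw", "_curated")

final_name = curated_name(raw_transcription_name(audio_name("meeting.mp4")))
print(final_name)  # meeting_curated.txt
```

Because each function derives its output name from its input name, a file uploaded as `meeting.mp4` ends its journey as `meeting_curated.txt` in the last bucket.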
In this particular case, four Cloud Storage buckets and three Cloud Functions are needed. We use one Terraform resource for the Cloud Functions and another for the buckets. This provides flexible code and makes it easier to integrate and manage new buckets or Cloud Functions. More information about Terraform: [Terraform documentation](https://developer.hashicorp.com/terraform?product_intent=terraform).

```hcl
# locals.tf
locals {
  function_definitions = [
    {
      name         = "convert_video_to_audio",
      source_dir   = "../services/video_to_audio_cloud_function"
      input_bucket = var.video_files_bucket_name
    },
    {
      name         = "speech_to_text_transcription",
      source_dir   = "../services/transcript_cloud_function"
      input_bucket = var.audio_files_bucket_name
    },
    {
      name         = "transcription_correction",
      source_dir   = "../services/gemini_cloud_function"
      input_bucket = var.raw_transcriptions_bucket_name
    }
  ]
}
```

From this `locals.tf` file, the user can add, configure, or remove Cloud Functions very easily. The `cloud_functions.tf` file uses one Terraform resource for all Cloud Functions, and loops over these function definitions.
```hcl
# cloud_functions.tf
resource "random_id" "bucket_prefix" {
  byte_length = 8
}

resource "google_storage_bucket" "source_code_bucket" {
  name                        = "${random_id.bucket_prefix.hex}-source-code-bucket"
  location                    = var.location
  force_destroy               = true
  uniform_bucket_level_access = true
}

data "archive_file" "function_sources" {
  for_each    = { for def in local.function_definitions : def.name => def }
  type        = "zip"
  output_path = "/tmp/${each.value.name}-source.zip"
  source_dir  = each.value.source_dir
}

resource "google_storage_bucket_object" "function_sources" {
  for_each = data.archive_file.function_sources
  name     = "${basename(each.value.output_path)}-${each.value.output_md5}.zip"
  bucket   = google_storage_bucket.source_code_bucket.name
  source   = each.value.output_path
}

data "google_storage_project_service_account" "default" {}

resource "google_project_iam_member" "gcs_pubsub_publishing" {
  project = var.deploy_project
  role    = "roles/pubsub.publisher"
  member  = "serviceAccount:${data.google_storage_project_service_account.default.email_address}"
}

resource "google_service_account" "account" {
  account_id   = "gcf-sa"
  display_name = "Test Service Account - used for both the cloud function and eventarc trigger in the test"
}

resource "google_project_iam_member" "roles" {
  for_each = {
    "invoking"                   = "roles/run.invoker"
    "event_receiving"            = "roles/eventarc.eventReceiver"
    "artifactregistry_reader"    = "roles/artifactregistry.reader"
    "storage_object_admin"       = "roles/storage.objectUser"
    "speech_client"              = "roles/speech.client"
    "insights_collector_service" = "roles/storage.insightsCollectorService"
    "aiplatform_user"            = "roles/aiplatform.user"
  }
  project    = var.deploy_project
  role       = each.value
  member     = "serviceAccount:${google_service_account.account.email}"
  depends_on = [google_project_iam_member.gcs_pubsub_publishing]
}

resource "google_cloudfunctions2_function" "functions" {
  for_each = { for def in local.function_definitions : def.name => def }
  depends_on = [
    google_project_iam_member.roles["event_receiving"],
    google_project_iam_member.roles["artifactregistry_reader"],
  ]
  name        = each.value.name
  location    = var.location
  description = "Function to process ${each.value.name}"

  build_config {
    runtime     = "python39"
    entry_point = each.value.name
    environment_variables = {
      BUILD_CONFIG_TEST = "build_test"
    }
    source {
      storage_source {
        bucket = google_storage_bucket.source_code_bucket.name
        object = google_storage_bucket_object.function_sources[each.key].name
      }
    }
  }

  service_config {
    min_instance_count = 1
    max_instance_count = 3
    available_memory   = "256M"
    timeout_seconds    = 60
    available_cpu      = 4
    environment_variables = {
      PROJECT_ID                         = var.deploy_project
      AUDIO_FILES_BUCKET_NAME            = var.audio_files_bucket_name
      RAW_TRANSCRIPTIONS_BUCKET_NAME     = var.raw_transcriptions_bucket_name
      CURATED_TRANSCRIPTIONS_BUCKET_NAME = var.curated_transcriptions_bucket_name
    }
    ingress_settings               = "ALLOW_INTERNAL_ONLY"
    all_traffic_on_latest_revision = true
    service_account_email          = google_service_account.account.email
  }

  event_trigger {
    trigger_region        = var.location
    event_type            = "google.cloud.storage.object.v1.finalized"
    retry_policy          = "RETRY_POLICY_RETRY"
    service_account_email = google_service_account.account.email
    event_filters {
      attribute = "bucket"
      value     = google_storage_bucket.video_transcription_bucket_set[each.value.input_bucket].name
    }
  }
}
```

Similarly, the `buckets.tf` file uses only one Terraform resource for all Cloud Storage buckets.
```hcl
# buckets.tf
resource "google_storage_bucket" "video_transcription_bucket_set" {
  for_each = toset([
    var.video_files_bucket_name,
    var.audio_files_bucket_name,
    var.raw_transcriptions_bucket_name,
    var.curated_transcriptions_bucket_name
  ])
  name                        = each.value
  location                    = var.location
  storage_class               = "STANDARD"
  force_destroy               = true
  uniform_bucket_level_access = true
}
```

### Costs

**Storage:** $0.026 per gigabyte per month

**Speech-to-Text API v2:** Depends on the amount of audio you plan to process:

- $0.016 per minute processed per month for 0 to 500,000 minutes of audio
- $0.01 per minute processed per month for 500,000 to 1,000,000 minutes of audio
- $0.008 per minute processed per month for 1,000,000 to 2,000,000 minutes of audio
- $0.004 per minute processed per month for over 2,000,000 minutes of audio

[Pricing details](https://cloud.google.com/speech-to-text/pricing/?gad_source=1&gclid=Cj0KCQjw7ZO0BhDYARIsAFttkCiLzcwnuxXb6_NolGidjKV6bKwQ8DKOYA17MNfWi3Oj8jBIToaoD_saAsbBEALw_wcB&gclsrc=aw.ds&hl=en)

**Gemini API:** Under the following limits, the service is free of charge:

- 15 requests per minute
- 1 million tokens per minute
- 1,500 requests per day

If you want to exceed these limits, a pay-as-you-go policy is applied. [Pricing details](https://ai.google.dev/pricing)

**Cloud Functions:** The pricing depends on how long the function runs, how many times it is triggered, and the resources that are provisioned. [The following link explains the pricing policy for event-driven cloud functions](https://cloud.google.com/functions/pricing#simple_event-driven_function). Estimate the costs of your solution with [Google Cloud’s pricing calculator](https://cloud.google.com/products/calculator?hl=en).

**Example:** A state institution wants to automate transcript generation for meetings. The average duration of these meetings is 4 hours. The records are uploaded to GCP using this solution.
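As a quick sanity check on the dominant cost driver, the Speech-to-Text charge for a single 4-hour meeting at the first pricing tier ($0.016 per minute) can be computed directly:

```python
# Back-of-the-envelope Speech-to-Text cost for one 4-hour meeting,
# assuming the first pricing tier (under 500,000 minutes per month).
minutes = 4 * 60                      # 240 minutes of audio
price_per_minute = 0.016              # USD per minute, first tier
speech_to_text_cost = minutes * price_per_minute
print(f"${speech_to_text_cost:.2f}")  # $3.84
```

This matches the Speech-to-Text line item produced by the pricing calculator for this scenario.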
Let’s simulate the costs of one transcription for this specific use case using the simulator. The final cost per month, with one transcription per month, is estimated to be $5.14. More than half the costs are due to Speech-to-Text API use.

| Service Display Name | Name | Quantity | Region | Total Price (USD) |
|----------------------|------|----------|--------|-------------------|
| Speech-to-Text V2 | Cloud Speech-to-Text Recognition | 240.0 | global | 3.84 |
| Cloud Functions 1 | CPU Allocation Time (2nd Gen) | 40080000 | us-central1 | 0.96192 |
| Cloud Functions 1 | Memory Allocation Time (2nd Gen) | 25600000000 | us-central1 | 0.0625 |
| Cloud Functions 1 | Invocations (2nd Gen) | 1000.0 | global | 0 |
| Cloud Functions 2 | CPU Allocation Time (2nd Gen) | 4008000.0 | europe-west1 | 0.09619 |
| Cloud Functions 2 | Memory Allocation Time (2nd Gen) | 2560000000 | europe-west1 | 0.00625 |
| Cloud Functions 2 | Invocations (2nd Gen) | 1000.0 | global | 0 |
| Cloud Functions 3 | CPU Allocation Time (2nd Gen) | 4008000.0 | europe-west1 | 0.09619 |
| Cloud Functions 3 | Memory Allocation Time (2nd Gen) | 2560000000 | europe-west1 | 0.00625 |
| Cloud Functions 3 | Invocations (2nd Gen) | 1000.0 | global | 0 |
| Cloud Storage 1 | Standard Storage Belgium | 3.0 | europe-west1 | 0.06 |
| Cloud Storage 2 | Standard Storage Belgium | 0.5 | europe-west1 | 0.01 |
| Cloud Storage 3 | Standard Storage Belgium | 0.01 | europe-west1 | 0.0002 |
| Cloud Storage 4 | Standard Storage Belgium | 0.01 | europe-west1 | 0.0002 |
| **Total Price:** | | | | **5.1397** |

Prices are in US dollars, effective date is 2024-07-01T08:32:56.935Z.

The estimated fees provided by Google Cloud Pricing Calculator are for discussion purposes only and are not binding on either you or Google. Your actual fees may be higher or lower than the estimate.

Url to the estimate: [Link to estimate](https://cloud.google.com/calculator?dl=CiRkY2ZmMWNiOC1hNjY5LTQ4M2YtYTZlNi1mYjgzMWNkYmNlMzYaJEUyQUE4MUM5LTA1RTMtNEIzRS1BNEJGLUY4QzU3NTQ3MzVCMg==)

### Conclusion

Leveraging AI to enhance video-to-text transcription on Google Cloud Platform offers significant benefits in scalability, flexibility, and efficiency. By integrating Chirp for speech-to-text conversion and Gemini Pro for transcription curation, and managing the deployment with Terraform, this solution provides a robust, easily deployable framework for high-quality transcriptions across various applications.

Thanks for reading! I’m Maximilien, data engineer at Stack Labs. If you want to discover the [Stack Labs Data Platform](https://cloud.stack-labs.com/cloud-data-platform) or join [an enthusiastic Data Engineering team](https://www.welcometothejungle.com/fr/companies/stack-labs), please contact us.
maximilien_soviche_9af134
1,909,938
19 Best Startup Directories to Promote Your Business for Free
Launching a startup is quite an exhilarating experience. You’ll face hurdles, but these tips and...
0
2024-07-03T09:48:35
https://dev.to/martinbaun/19-best-startup-directories-to-promote-your-business-for-free-32ej
startup, career, productivity
Launching a startup is quite an exhilarating experience. You’ll face hurdles, but these tips and directories will help you overcome them.

## Challenges

Getting your product in front of the right audience is a challenge you’ll run into. Your product will likely fail if you do not present it to the correct consumer base. It won’t matter how well you showcase it or how well you’ve created it. Fortunately, you can showcase your business on online startup directories. These directories will likely help you reach your target audience.

## What is a startup directory?

These are websites where you submit your brand's offerings and receive comments, press coverage, and even early adopters. These websites have audiences interested in a particular niche, which bodes well for marketers targeting a specific demographic.

Read: *[The Utility of Websites in Your Business](https://martinbaun.com/blog/posts/the-utility-of-websites-in-your-business/)*

## Advantages of startup directories

Registering your brand with these startup directories has several perks. Examples of such are:

- They often contain follow and no-follow backlinks to your website.
- These directories usually have high domain authority.
- Listing on them can significantly improve your site's SEO ranking, driving more traffic and business growth.

## What are the best startup directories to promote your business?

Local directories are great tools for getting noticed. Unfortunately, most directories on the web are spammy and can hurt your brand's reputation if you register with them. We've done the legwork and ranked the top websites to find startups and market your business.

Read: *[Businesses to start as a software developer](https://martinbaun.com/blog/posts/businesses-to-start-as-a-software-developer/)*

## Best Startup directories to Submit and List your product

### MicroLaunch

MicroLaunch is a startup directory for solo and small business owners. It is great for the early stage of startup promotion.
It helps with creating backlinks that promote businesses and startups as well. It is a good alternative to Hacker News and offers massive benefits to your company and startup. The traffic that goes to your startup is slow but sustained, so it works best for steady, long-term growth. This is part of the strategy that makes it exceptional to use.

### Feed My App

Feed My App boasts over 5,000 users. It is a platform that lists and reviews recently launched web and mobile apps. Click "Submit an App" on the homepage to get your app featured on the platform. This opens forms you'll need to fill in with information about your app. If it stands out, the Feed My App team will review your application and feature it on the homepage. Submissions are free, but you can pay fees starting from $9 to increase your chances of being featured.

### Betalist

Betalist showcases new startups in the technology niche, software, and hardware. One listed app, Quick API, enables you to create APIs from any data without coding. Click "Submit Startup" on the landing page to list your app. You'll need to sign in with your Twitter account. Betalist uses specific submission criteria that your startup must meet.

### KillerStartups

Killerstartups.com is an online community of founders, entrepreneurs, and their clientele. The site boasts over 125,000 unique monthly views. Since its inception, it has discovered several notable companies, such as Uber, Tinder, and Wego. Click "Submit your Startup" on the landing page to register your company. The registration process is free of charge.

### Startup Ranking

Startup Ranking is a website that lists the best startup companies from all over the world. It generates a unique ranking for each startup registered with it. A proprietary algorithm determines this ranking. The algorithm utilizes factors such as social media engagement and SEO enhancers like inbound and outbound links. Since these factors fluctuate, this site posts a new ranking every day.
Notable companies on the platform include Telegram and W3Schools. You can register your company for free by clicking the "Create" button. This will give you access to your brand's metrics and up to 200 search results. You may subscribe to the Pro package for unlimited searches for $169 per month.

### My Startup Tool

My Startup Tool is a startup listing website with 183 directories from which you can list your startup. You gain access to this curated list by signing up for free. You can then manually register your startup in the directories you desire. You can also pay a one-time fee starting from 149 euros to have the site's team list your startup on the most suitable directories in their database.

### Startup Tabs

Startup Tabs is a browser extension that shows you new startups with each tab you open. It is free and also allows free startup registrations. Click "Startup Lister" on its official website's header to submit. Then click "Submit Your Startup" and fill in your company's details under "Traffic." Notable brands listed on this startup discovery engine include Slack and Centrallo.

### Startup Inspire

Startup Inspire is a website that lists top web and mobile startups. Click "Submit Startup" on the site's header and fill out the forms to register your startup. It'll be published and featured on the homepage if it stands out.

### Starticorn

Starticorn is a free startup directory that lists the world's hottest startups, or "unicorns," as they call them. The site lists all the startups submitted to it on the landing page. Hit the "Submit" button and fill in your startup's details. This registration is free.

### Launching Next

Launching Next is a website that contains over 30,000 startups. It showcases the top startups in an eye-catching grid format on its homepage. Each featured company includes a button that links to its website. Click "Submit Startup" and fill in the required details to register your startup. Registration is free.
### Betapage

Betapage is a curated list of top new startups. You can list your startup for free by clicking "Submit Product" and providing a short description. Users can then upvote your product to increase the website's ranking.

### StartupWizz

StartupWizz is a website that lists the most disruptive startups on the internet and informative blogs in tech and gaming. Startup submission is free and entails filling out forms with your startup's details. You can also sell your startup on the platform.

### Launched

Launched is one of the best websites for startups. Founders showcase their projects and get feedback from early adopters. The site presents apps in a grid structure, with a logo and a synopsis of the startup in each square. Click "Submit Startup" and fill in the relevant details to register your project on the site.

### GeekWire

GeekWire is an authoritative technology news site. It provides a startup list database and a monthly top 200 startup ranking. It only accepts submissions from startups headquartered in Washington, Oregon, Idaho, or British Columbia. You can add your company for free by tapping the "Lists" dropdown on the header and selecting the startup list option. Next, tap the "Submit Your Startup" button. The GeekWire team will review your submission and promote it on their site if it meets their standards.

### Product Hunt

Product Hunt is a popular platform for startups to gain visibility and build a user base. Its listing includes web and mobile apps, websites, hardware projects, and other tech products. The ranking of startups on the platform's homepage relies on user upvotes. Adding a startup is pretty straightforward. The website features a "how to post a product" guide that walks founders through the entire submission process.

### Tech to Market

Tech to Market is a commercialization platform for new products and tech startups. Founders can launch pre-sales and list their products for sale, after which they are matched with clients.
Click "Submit a Product" and follow the prompts to showcase your startup on this site. The Tech to Market team will review it and get back to you.

### Startup Buffer

Startup Buffer is a directory that lists and promotes startups on its website, social networks, and Android and iOS apps. You can discover popular startups from all over the world or submit your own. The latter entails filling in your startup's details and a short elevator pitch of the project.

### StartupBase

StartupBase is a platform that promotes tech startups on its homepage. Each listing includes the name and face of each company founder. Submission is free, but the website stipulates strict criteria your startup must fulfill before being considered for review. You can log in using your Twitter account and a few personal details.

### FeedMyStartup

FeedMyStartup is a platform that showcases several up-and-coming startups, emphasizing the story behind their journeys. The platform has shared several articles on entrepreneurship and its hardships. Click "Submit Your Startup" on the header or email them a 700-word article on your company to submit your startup.

### Startup 88

Startup 88 is a directory that showcases stories behind popular startups on the web. Their model revolves around storytelling, as a well-told story can beat advertising in attracting investors and customer interest. Click "Pitch Your Startup or Product" and follow the prompts to submit your startup.

## Startup directories and listing Take-Aways

Marketing a startup through campaigns and ads can prove expensive, as most founders struggle with obtaining funding. You can grow your company's popularity by listing it on directories of similar businesses. These will help you reach your intended clients and grow your user base.
Read: *[Crazy Marketing Strategy Goleko.com to gain insights on marketing strategies.](https://martinbaun.com/blog/posts/crazy-marketing-strategy-goleko/)*

-----

## FAQs

### What is the best way to promote your startup?

There's no one good way to promote your startup. Using directories is a good way to promote your products and services. You can find a startup community online that can help you maneuver this as well.

### What are some of the best directories?

Numerous directories can help you stand out and get noticed. They boost your online presence and are worth your time. Hacker News is a good example. Indie Hackers is a popular choice among new startups, and products get visibility online there. You can find other good options in the blog above.

### What details are needed for these directories?

Create an account with the directory of your choice. Input your contact details, company information, and business information. This helps people search for your company within the directory. Take time to find the right sites to submit your startup. List your business and include your business profile to allow prospective angel investors to contact you.

### Do popular enterprises use directories to promote their startups?

Yes, they do. Some of these tech enthusiasts use directories to promote their startups. Some started here and grew to Fortune 1000 status. This article has a list of the best directories to promote your startup in the early stages. Submitting your business to these directories can help it grow and become better known.

### Why are directories popular?

Innovators turn to directories because of their efficiency. People discover new startups and new tech products in these directories. The AppSumo team has been excellent at promoting new tech products. It's free, and the audience is crazy about startups.

### How can I get my startup in a directory?

It is a relatively simple task to undertake.
Ensure you have the correct startup information, which includes the startup founders. Then choose the directory of your choice and begin promoting your startup online. Some communities offer mentorship to help you with your startup pitch, registration, and promotion. There's enough help available to get you on potential angel lists for angel investors.

-----

*For these and more thoughts, guides, and insights visit my blog at [martinbaun.com.](http://martinbaun.com)*

*You can find me on [YouTube.](https://www.youtube.com/channel/UCJRgtWv6ZMRQ3pP8LsOtQFA)*
martinbaun
1,909,936
Top boarding schools in Delhi NCR
Looking for the best boarding schools in Delhi for your child? Look no further! Global Edu Consulting...
0
2024-07-03T09:44:32
https://dev.to/globaleduconsulting/top-boarding-schools-in-delhi-ncr-375e
boarding, boaardingscholsindelhi
Looking for the **[best boarding schools in Delhi](https://www.doonedu.com/boarding-schools-delhi)** for your child? Look no further! Global Edu Consulting has narrowed down the top options for you. With our extensive knowledge of boarding schools across India, we are here to guide you towards making the best choice for your child's education. Get in touch with us now and let us help you find the perfect boarding school that suits your child's needs and preferences.
globaleduconsulting
1,909,935
10 Reasons Why Google Vertex AI is a Game Changer
In today's rapidly evolving technological landscape, artificial intelligence (AI) has become an...
0
2024-07-03T09:44:30
https://dev.to/kodexolabs/10-reasons-why-google-vertex-ai-is-a-game-changer-22ak
In today's rapidly evolving technological landscape, artificial intelligence (AI) has become an indispensable tool for businesses seeking to gain a competitive edge. With the launch of Google Vertex AI, Google has once again raised the bar for AI platforms, offering a comprehensive solution that promises to revolutionize the way organizations leverage machine learning capabilities. Here are 10 reasons why [Google Vertex AI](https://kodexolabs.com/google-vertex-ai/) is a game changer:

## 1. Unified Platform

Google Vertex AI provides a unified platform for building, deploying, and managing machine learning models. By consolidating various tools and services into a single platform, Vertex AI streamlines the entire machine learning workflow, from data preparation to model deployment.

## 2. AutoML Capabilities

With Google Vertex AI's AutoML capabilities, even users with limited machine learning expertise can build high-quality models. AutoML features enable automated model training, hyperparameter tuning, and model selection, empowering organizations to rapidly develop AI solutions without the need for extensive manual intervention.

## 3. Scalability

Google Vertex AI is built on Google Cloud, leveraging its unparalleled scalability to handle large-scale machine learning workloads. Whether you're processing massive datasets or deploying models to serve millions of users, Vertex AI offers the scalability needed to support your AI initiatives.

## 4. Pre-built Models

Vertex AI offers a library of pre-built machine learning models tailored for various use cases, ranging from image recognition to natural language processing. These pre-trained models enable organizations to accelerate their AI projects by leveraging Google's expertise and infrastructure.

## 5. Customization

While pre-built models offer convenience, Google Vertex AI also provides extensive customization options for users who require tailored solutions. Whether you need to fine-tune a pre-trained model or build a custom model from scratch, Vertex AI offers the flexibility to meet your specific requirements.

## 6. Model Monitoring and Management

Google Vertex AI includes robust tools for monitoring and managing machine learning models in production. From tracking model performance metrics to detecting drift and managing versioning, Vertex AI simplifies the task of maintaining AI systems over time.

## 7. Integration with Google Cloud Services

Vertex AI seamlessly integrates with other Google Cloud services, such as BigQuery for data analytics and Cloud Storage for data storage. This tight integration enables organizations to leverage the full power of the Google Cloud ecosystem while building and deploying AI solutions.

## 8. Collaboration Tools

Google Vertex AI provides collaboration tools that facilitate teamwork among data scientists, engineers, and other stakeholders involved in AI projects. With features such as shared model repositories and real-time collaboration, Vertex AI promotes collaboration and knowledge sharing across teams.

## 9. Cost Efficiency

By leveraging Google Cloud's pay-as-you-go pricing model, Vertex AI offers cost-efficient machine learning solutions. Organizations can scale their AI initiatives according to their needs without incurring upfront infrastructure costs, making AI more accessible to businesses of all sizes.

## 10. Advanced Capabilities

Beyond its core features, Google Vertex AI offers advanced capabilities such as explainability tools for understanding model predictions, as well as support for specialized hardware like GPUs and TPUs for accelerating model training and inference.

In conclusion, Google Vertex AI represents a significant milestone in the evolution of AI platforms, offering a comprehensive solution that combines ease of use, scalability, and advanced capabilities. Whether you're a small startup or a large enterprise, Vertex AI provides the tools and infrastructure needed to harness the power of machine learning and drive innovation in your organization. As businesses increasingly rely on AI to gain insights and make data-driven decisions, Google Vertex AI stands out as a game changer that empowers organizations to unlock the full potential of artificial intelligence.

## For more blogs

[What is STEM Education](https://kodexolabs.com/google-vertex-ai/)
kodexolabs
1,909,934
Nostra Games
How does Nostra empower gaming developers to create engaging free online games with no...
0
2024-07-03T09:41:14
https://dev.to/claywinston/nostra-games-138n
mobilegames, gamedev, freeonlinegames, games
## How does Nostra empower gaming developers to create engaging free online games with no download required for players to enjoy on the platform? [**Nostra**](https://nostra.gg/articles/conquer-boredom-with-free-online-games.html?utm_source=referral&utm_medium=article&utm_campaign=Nostra) empowers gaming developers to craft engaging free online games with no download required, providing a seamless experience for players on our platform. Through our developer tools and resources, gaming developers can easily create and publish their games, reaching a vast audience of players eager for instant entertainment. By eliminating the need for downloads, Nostra ensures accessibility and convenience, allowing players to dive straight into gameplay without any barriers. Gaming developers can leverage our platform to showcase their creativity and innovation, contributing to the diverse library of games available on Nostra. With Nostra, gaming developers have the opportunity to captivate players with engaging [**free online games**](https://medium.com/@adreeshelk/how-to-play-hundreds-of-games-on-your-lock-screen-without-downloading-anything-4f03e0173441?utm_source=referral&utm_medium=Medium&utm_campaign=Nostra), driving engagement and fostering a thriving gaming community.
claywinston
1,909,933
Next.js 15 is Here: Exploring the New Features
Next.js 15 and React 19 are set to revolutionize the web development landscape with their latest...
0
2024-07-03T09:40:08
https://dev.to/helloworldttj/nextjs-15-is-here-exploring-the-new-features-5bp3
javascript, nextjs, react, news
![Nextjs 15](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2lpse11sbaamm65avivc.png)

Next.js 15 and React 19 are set to revolutionize the web development landscape with their latest features and improvements. In this blog post, we'll dive deep into what Next.js 15 brings to the table, how it can enhance your development experience, and why it's worth your attention.

## Overview of Next.js 15 Features

Next.js 15 introduces a plethora of new features aimed at improving performance, developer experience, and overall efficiency. Here's a comprehensive look at what you can expect:

### Key Features

1. **Improved Caching** - Next.js 15 enhances caching mechanisms, reducing hydration errors and making caching more efficient.
2. **React 19 Support** - Full support for React 19, including the new React compiler, which optimizes your code out of the box.
3. **Partial Pre-rendering (PPR)** - Combines Static Site Generation (SSG) and Server-Side Rendering (SSR) on the same page, allowing dynamic and static content to coexist seamlessly.
4. **Next After** - A new feature that allows the server to prioritize essential tasks, ensuring faster initial load times by deferring non-essential tasks.
5. **Turbo Pack Integration** - A high-speed bundler that promises faster and smoother development processes, set to replace Webpack in development mode.
6. **Bundling External Packages** - Improves cold start performance by bundling external packages in the app directory by default.

### Detailed Breakdown

| Feature | Description |
|-----------------------------|----------------------------------------------------------------------------------------------|
| **Improved Caching** | Enhanced caching mechanisms to reduce hydration errors. |
| **React 19 Support** | Supports the new React compiler for out-of-the-box optimization. |
| **Partial Pre-rendering** | Allows SSG and SSR to coexist on the same page. |
| **Next After** | Prioritizes essential server tasks for faster initial loads. |
| **Turbo Pack Integration** | High-speed bundler for smoother development, replacing Webpack in development mode. |
| **Bundling External Packages** | Bundles external packages by default to improve cold start performance. |

## Benefits of Next.js 15

### Enhanced Performance

Next.js 15's improved caching and React compiler support ensure your applications run faster and more efficiently. The ability to combine SSG and SSR on a single page (Partial Pre-rendering) offers the best of both worlds, enhancing the user experience.

### Better Developer Experience

With the integration of Turbo Pack, developers can expect a smoother and faster development process. The improved bundling of external packages reduces latency and enhances the overall developer experience.

### Future-Proof Your Applications

By adopting the latest features of Next.js 15 and React 19, you ensure your applications are built on cutting-edge technology, making them more resilient to future changes and improvements.

## Real-World Applications

### Partial Pre-rendering in Action

Partial Pre-rendering (PPR) is one of the standout features of Next.js 15. It allows parts of a page to be statically generated (SSG) while other parts remain dynamic (SSR). This is particularly useful for e-commerce sites where product details can be static, but pricing and recommendations need to be dynamic.

### Next After: Prioritizing Essential Tasks

Next After ensures that essential tasks are prioritized, reducing initial load times. For instance, when a user requests a YouTube video, the video loads first while other tasks like updating view counts happen in the background.

## Getting Started with Next.js 15

To start using Next.js 15, you can create a new project using the following command:

```bash
npx create-next-app@rc
```

During the setup, you will be prompted to choose Turbo Pack as your bundler, which is recommended for a faster development experience.

## Conclusion

Next.js 15 is a significant update that brings numerous enhancements to the table. From improved caching and React 19 support to innovative features like Partial Pre-rendering and Next After, it offers a robust set of tools to enhance both performance and developer experience. Embrace these new features to future-proof your applications and stay ahead in the competitive web development landscape.
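To experiment with Partial Pre-rendering specifically, recent canary/RC releases have gated it behind an experimental flag in `next.config.js`. The following is a hedged sketch only: the flag name and shape may change before the stable release, so verify it against the documentation for your installed version.

```javascript
// next.config.js -- sketch: the `experimental.ppr` flag as seen in
// recent Next.js canary/RC releases; verify against your version.
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    ppr: true, // opt in to Partial Prerendering
  },
};

module.exports = nextConfig;
```

With the flag enabled, static parts of a route are served immediately while dynamic parts stream in, which is the behavior described in the PPR section above.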
helloworldttj
1,909,932
Corrosion Inhibitors Market: Opportunities and Threats
According to the new market research report "Corrosion Inhibitors Market by Compound(Organic,...
0
2024-07-03T09:39:59
https://dev.to/aryanbo91040102/corrosion-inhibitors-market-opportunities-and-threats-da4
news
According to the new market research report "Corrosion Inhibitors Market by Compound (Organic, Inorganic), Type (Water Based, Oil Based and VCI), Application, End-Use (Power Generation, Oil & Gas, Metal & Mining, Pulp & Paper, Utilities, Chemical), and Region - Global Forecast to 2026", published by MarketsandMarkets™, the global Corrosion Inhibitor Market size is projected to grow from USD 7.9 billion in 2021 to USD 10.1 billion by 2026, at a CAGR of 4.9% between 2021 and 2026.

Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=246

Browse in-depth TOC on "Corrosion Inhibitors Market": 190 – Tables, 50 – Figures, 260 – Pages

View Detailed Table of Content Here: https://www.marketsandmarkets.com/Market-Reports/Study-Corrosion-Inhibitor-Market-246.html

A corrosion inhibitor is a chemical compound that, when added to a liquid or gas, decreases the corrosion rate of a material, typically a metal or an alloy. Corrosion inhibitors will remain one of the largest product segments within the water treatment chemicals market.

In terms of value, the organic segment is projected to account for the largest share of the corrosion inhibitor market, by compound, during the forecast period. Organic corrosion inhibitors are widely used in various industries because of their effectiveness at a wide range of temperatures, compatibility with protected materials, good solubility in water, and low costs. These inhibitors adsorb on the surface to form a protective film, which displaces water and protects the metal against corrosion.

Power generation is projected to register the highest CAGR during the forecast period. Power plants are designed on the presumption of continuous operation for many years to come. Across the world, demand for reliable, economical power has raised the need to operate these systems at full capacity and at as low a cost as possible. Rapid industrialization has led to economic growth, which has resulted in improved quality of life and, in turn, propelled the demand for electricity in emerging economies. The power generation sector uses corrosion inhibitors mainly for treating boiler feed water, boiler makeup water, and cooling water.

Request Sample Pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=246

Based on type, Volatile Corrosion Inhibitors (VCI) are projected to be the fastest-growing segment of the corrosion inhibitor market, registering the highest CAGR during the forecast period. VCIs work by changing the pH of the outside atmosphere to less acidic conditions to regulate corrosion. Examples include morpholine and hydrazine, which are used to control the corrosion of condenser pipes in boilers. Key industrial applications include polymer films, paper, foam, powder, and oil and grease.
aryanbo91040102
1,909,930
How to Use Software Reviews to Predict Long-Term Software Performance?
When selecting software for your business or personal use, it's essential to consider more than just...
0
2024-07-03T09:37:50
https://dev.to/amara_nielson_bb770edefd8/how-to-use-software-reviews-to-predict-long-term-software-performance-n75
software, softwarereview, review, rating
When selecting software for your business or personal use, it's essential to consider more than just its current features. Understanding its long-term performance is equally crucial. Software reviews can be an invaluable resource for making this prediction. Here’s a straightforward guide to using them effectively: **1. Check for Consistency in Reviews** Examine user feedback across multiple review platforms for patterns. If numerous reviews highlight the same strengths or issues, it’s a strong indicator of the software’s performance over time. Consistent positive feedback suggests reliability, whereas repeated complaints might signal potential problems. **2. Focus on Recent Reviews** Software evolves with updates and patches. Recent [Software Reviews](https://www.softwareworld.co/) offer insights into the software's current performance. Pay attention to feedback from the last few months to gauge its recent performance and any ongoing issues. **3. Evaluate Customer Support Experiences** Good customer support often correlates with long-term software reliability. Reviews that mention responsive and helpful support teams usually indicate a company committed to resolving issues and improving their product. **4. Look for Reviews from Long-Term Users** Users who have been using the software for a long period can provide valuable insights into its long-term performance. They can share how well the software has aged, how updates have impacted its functionality, and whether initial issues were resolved over time. **5. Analyze Software Updates and Improvements** Check if the software company regularly updates its product based on user feedback. A history of continuous improvement and adaptation to user needs is a good sign that the software will perform well in the long run. **6. Consider the Software’s Adaptability** Software that integrates well with other tools and adapts to new technologies tends to have better long-term performance. 
Reviews that mention the software’s flexibility and integration capabilities can indicate how well it will fit into evolving tech environments. **7. Watch for Common Issues** Be on the lookout for recurring problems mentioned in reviews. If multiple users report the same issues, this might point to fundamental weaknesses in the software that could affect its long-term performance. **Conclusion:** By applying these strategies, you can better predict how well software will perform in the future. Reading reviews with a focus on these aspects helps you make informed decisions, ensuring that the software you choose will meet your needs both now and in the future.
amara_nielson_bb770edefd8
1,909,929
What Technologies Empower Cloud-Native Development, And How Do They Work Together?
The cloud has become the new normal today, but many businesses continue to use outdated, heavy-duty...
0
2024-07-03T09:34:54
https://dev.to/andrew050/what-technologies-empower-cloud-native-development-and-how-do-they-work-together-1lg3
cloudnative, cloudcomputing
The cloud has become the new normal today, but many businesses continue to use outdated, heavy-duty computer software that is slow and prone to failure. Many companies have understood the value of the cloud and are focusing on application migration, which involves migrating legacy apps and functionality to a cloud platform. The tech community has advanced from a cloud-enabled approach, in which monolithic applications are refactored to run in the cloud, to cloud-based, in which applications use the cloud's capabilities and resources without requiring a complete overhaul, to cloud-native, in which applications are purpose-built and optimized for the cloud. But to do this effectively, you need to [hire a cloud-native application development company](https://successive.tech/cloud-native-application-development/). Here’s a quick brief of the technologies widely utilized for cloud-native application development.

## What is Cloud Native Architecture?

Cloud-native architecture is a design process for creating, developing, and deploying applications specifically intended for the cloud, fully utilizing the benefits of the cloud computing model. Microservices and containerization are at the heart of cloud-native application architecture, making it easier to switch between cloud providers and deploy services in many languages or frameworks without conflict or downtime.

Let’s look at some of the most popular cloud-native technologies helping businesses on their cloud-native journey.

### Kubernetes

Kubernetes is Google's open-source container-centric management software, which has become a go-to for delivering and running containerized applications. Kubernetes simplifies application management by automating container management operations such as deployment, change rollout, scaling, and monitoring. It provides virtualization across containers, eliminating the need to manage them individually. It also supports autonomous storage orchestration, eliminating users' need to allocate and install storage spaces. Kubernetes dynamically adjusts to the cluster size required to run a service, allowing for effective scaling, increased development velocity, faster app development, and application deployment from anywhere.

### Prometheus

Prometheus is an open-source monitoring and alerting system for containerized applications. It collects and aggregates metrics as time series data with optional key-value pairs known as labels. Prometheus was created at SoundCloud and is now a community project backed by the Cloud Native Computing Foundation (CNCF). Prometheus is a good tool for capturing numerical time series. It can monitor both machine-centric and highly dynamic service-oriented architectures. It also enables multidimensional data collection and querying.

### gRPC

gRPC is an open-source, high-performance remote procedure call (RPC) framework that can be used anywhere. gRPC enables transparent communication between client and server applications, facilitating the construction of networked systems. The framework employs HTTP/2, protocol buffers, and other advanced technologies to ensure optimal API security, performance, and scalability. It provides pluggable load balancing, tracing, health checking, and authentication capabilities, allowing it to connect services within and between data centers. It supports multiple major languages and includes numerous iOS and Android client libraries. It decreases the latency of remote procedure calls in distributed computing systems and is intended for large-scale architectures.

## Conclusion

In your cloud-native journey, you need a solution well suited to your organization's specific needs. The wide range of cloud-native technologies available on the market, whether open-source or vendor-agnostic, is critical because it ensures collaboration, innovation, modernization, cost savings, reliability, and usability, whether you're using a public cloud platform, your private cloud network, or a hybrid of cloud and dedicated server resources. Contact an organization offering cloud-native application development services to find the right fit for your cloud-native journey.
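As a concrete illustration of the Kubernetes concepts discussed above (declarative deployment, replica scaling, automated rollout), here is a minimal Deployment manifest sketch. All names and the image reference are hypothetical placeholders, not from any real project.

```yaml
# Minimal Kubernetes Deployment sketch (illustrative names and image).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3              # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.0.0
          ports:
            - containerPort: 8080
```

Applying this manifest with `kubectl apply -f deployment.yaml` asks the cluster to converge on the declared state; updating the image tag and re-applying triggers the automated rollout described above.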
andrew050
1,909,928
Navigating the World of Mobile Development: A Journey of Self-Discovery and Growth with HNG Internship
Introduction: As a mobile developer, I've come to realize that learning to sell myself is just as...
0
2024-07-03T09:32:42
https://dev.to/big_dre007/navigating-the-world-of-mobile-development-a-journey-of-self-discovery-and-growth-with-hng-internship-352b
**Introduction**: As a mobile developer, I've come to realize that learning to sell myself is just as important as learning to code. With the ever-evolving landscape of mobile development, it's crucial to stay up-to-date with the latest platforms and architecture patterns. In this article, I'll be sharing my knowledge on mobile development platforms and common software architecture patterns used in the industry. I'll also share my personal journey and why I'm excited to embark on the HNG Internship program.

**Mobile Development Platforms:**

When it comes to mobile development, there are three primary platforms: Native, Cross-Platform, and Hybrid.

- Native: Developing apps for a specific platform (iOS or Android) using platform-specific languages and tools. Pros: optimal performance, direct access to device hardware, and a seamless user experience. Cons: requires separate codebases for each platform, increasing development time and cost.
- Cross-Platform: Building apps that can run on multiple platforms using a single codebase. Pros: faster development, cost-effective, and easier maintenance. Cons: may compromise on performance, and limited access to device hardware.
- Hybrid: Combining native and web technologies to create apps that can run on multiple platforms. Pros: fast development, cost-effective, and access to device hardware. Cons: may compromise on performance, and limited flexibility.

**Software Architecture Patterns:**

There are three common software architecture patterns used in mobile development:

- MVC (Model-View-Controller): Separates the app into three interconnected components, making it easier to manage complexity. Pros: easy to implement, scalable, and maintainable. Cons: can become complex, and may lead to tight coupling between components.
- MVP (Model-View-Presenter): A variation of MVC, where the presenter acts as a controller between the model and view. Pros: easier to test, and reduces coupling between components. Cons: can be over-engineered, and may lead to complexity.
- MVVM (Model-View-ViewModel): A variant of MVC that is useful for applications with complex user interfaces. Introduced by John Gossman, it uses a view model to expose data and commands to the view. Pros: easy to implement, scalable, and maintainable. Cons: can be complex, and may lead to tight coupling between components.

Other software development architectures include Clean Architecture and Redux Architecture. As a developer, you must understand that selecting the appropriate architecture is not a one-size-fits-all decision, but rather a project-specific one. A thorough grasp of each architectural pattern, including its strengths and drawbacks, is essential for making sound judgments and producing code that is efficient, maintainable, and successful.

**My Journey with HNG Internship**: As a mobile developer, I'm always looking to improve my skills and stay ahead of the curve. That's why I'm excited to embark on the HNG Internship program. With HNG, I'll have the opportunity to work on real-world projects that simulate real-life applications, gaining practical knowledge that goes beyond theoretical thinking, collaborate with experienced developers, and gain valuable industry insights. I'm particularly drawn to HNG's focus on practical skills development and mentorship. As someone who's passionate about mobile development, I believe that HNG's program will help me take my skills to the next level. I'm looking forward to learning from experienced mentors, working on challenging projects, and building a network of like-minded developers.

**Why I Want to Do the HNG Internship**: I want to do the HNG Internship because I believe it will help me achieve my goals as a mobile developer. With HNG, I'll have the opportunity to: Gain practical experience working on real-world projects. Develop my skills in mobile development platforms and architecture patterns.
Collaborate with experienced developers and learn from their experiences. Build a network of like-minded developers and industry professionals. Enhance my portfolio and increase my chances of getting hired by top companies. If you're interested in learning more about the HNG Internship program, I encourage you to check out their website: https://hng.tech/internship. HNG internship is not only for interns, it can be also useful to companies that want to hire top tech talent that would fit in the tech industry, You can also learn more about their hiring process and how to get hired by top companies: https://hng.tech/hire. Interns who pay for HNG premium would be added to HNG pool of tech talent for companies and clients who want to hire. https://hng.tech/premium **Conclusion**: In conclusion, mobile development platforms and software architecture patterns are crucial aspects of building successful mobile apps. As a mobile developer, it's essential to stay up-to-date with the latest trends and best practices. I'm excited to embark on the HNG Internship program, which will help me take my skills to the next level and achieve my goals as a mobile developer. Thanks for reading, and I hope you found this article informative and engaging!
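The MVVM pattern discussed earlier can be sketched in a few lines of framework-agnostic Python (all class and method names here are illustrative, not from any particular mobile framework): the ViewModel exposes display-ready state and commands, while the View only observes.

```python
class CounterModel:
    """Model: holds the raw application data."""
    def __init__(self):
        self.count = 0


class CounterViewModel:
    """ViewModel: exposes display-ready state and commands to the View."""
    def __init__(self, model):
        self._model = model
        self._observers = []

    def subscribe(self, callback):
        # The View registers here; in a real app this is the binding layer.
        self._observers.append(callback)

    @property
    def display_text(self):
        return f"Count: {self._model.count}"

    def increment(self):
        # A "command" the View binds a button tap to.
        self._model.count += 1
        for notify in self._observers:
            notify(self.display_text)


# Simulate a View: it never touches the Model directly.
rendered = []
vm = CounterViewModel(CounterModel())
vm.subscribe(rendered.append)
vm.increment()
vm.increment()
print(rendered[-1])  # Count: 2
```

In a real mobile app, the `subscribe` callback would be the data-binding layer updating a widget rather than appending to a list; the point is that the View depends only on the ViewModel's state and commands.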
big_dre007
1,909,924
What is CFG Scale Stable Diffusion and How to Use It?
Understanding the CFG scale in Stable Diffusion. Learning how to use it to enhance image quality in...
0
2024-07-03T09:30:15
https://dev.to/novita_ai/what-is-cfg-scale-stable-diffusion-and-how-to-use-it-n2g
Understanding the CFG scale in Stable Diffusion, and learning how to use it to enhance image quality, in our blog.

## Introduction

The CFG scale, also known as the Classifier-Free Guidance scale, plays a crucial role in controlling how closely Stable Diffusion adheres to your text prompt, and it applies to both text-to-image (txt2img) and image-to-image (img2img) generations. In this blog, we'll give you a comprehensive introduction to the CFG scale in Stable Diffusion, including its relation to Stable Diffusion and the technology behind it. Moreover, we'll show you a detailed guide on how to use it in Stable Diffusion and how to avoid common mistakes. Let's dive into the world of the CFG scale now!

## Understanding CFG Scale in Stable Diffusion

In Stable Diffusion, the acronym CFG stands for "Classifier-Free Guidance", a scale that plays a crucial role in determining the quality of the output images.

### Evolution of the CFG (Classifier Free Guidance)

Initially, diffusion models used an explicit classifier to guide the generation process. This involved training a classifier on noisy images to categorize them and guide the generation of specific classes, such as cats or dogs. However, it required an extra model. Hence Classifier-Free Guidance, which instead uses image captions to train a conditional diffusion model.

### What is the CFG Scale?

The CFG scale is a parameter that controls the strength of prompt guidance during the diffusion process; in other words, it determines the extent to which Stable Diffusion follows your prompt.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nhg4qiv6xc4s5syy04h7.png)

## How Does the CFG Scale Work in Stable Diffusion?

By default, the CFG scale value is set to 7, striking a balance between creative freedom and prompt guidance.
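Under the hood, classifier-free guidance is simple arithmetic: at each denoising step the model makes an unconditional prediction and a prompt-conditioned prediction, and the CFG scale extrapolates from the former toward the latter. A minimal numeric sketch (illustrative values, not the actual sampler):

```python
import numpy as np

def apply_cfg(uncond_pred, cond_pred, cfg_scale):
    # Classifier-free guidance: push the prediction away from the
    # unconditional output, toward the prompt-conditioned one.
    return uncond_pred + cfg_scale * (cond_pred - uncond_pred)

uncond = np.array([0.2, 0.4])
cond = np.array([0.6, 0.8])

# cfg_scale = 1 reproduces the conditional prediction exactly
assert np.allclose(apply_cfg(uncond, cond, 1.0), cond)
# cfg_scale = 0 ignores the prompt entirely
assert np.allclose(apply_cfg(uncond, cond, 0.0), uncond)
# cfg_scale = 7 extrapolates well past the conditional prediction
print(apply_cfg(uncond, cond, 7.0))  # [3.0 3.2]
```

This also explains the behavior of extreme values: large scales push the output far past the conditional prediction (strong prompt adherence, possible artifacts), while negative scales push it in the opposite direction, away from the prompt.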
### Relation Between CFG Scale and Stable Diffusion

Stable Diffusion is a diffusion-based image generation model: it starts from random noise and iteratively denoises it toward an image that matches your prompt. This denoising process is governed by a set of parameters, one of which is the CFG scale.

### How Does the CFG Scale Affect Image Quality?

The CFG scale determines the coefficient applied to the prompt conditioning in the diffusion process. A lower CFG scale value gives the model more creative freedom but might not achieve the desired adherence to the prompt. On the other hand, a higher CFG scale value enforces the prompt strongly but might result in the loss of image details. Therefore, finding the right balance is key to achieving high-quality output images.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3moej0q7pcokxk2tgff.png)

Adjusting the CFG scale in Stable Diffusion depends on the desired outcome. If the goal is gentle prompt guidance, a lower CFG scale value is appropriate. Conversely, if the aim is strict adherence to the prompt, a higher CFG scale value is needed.

While using the Stable Diffusion Web UI, the CFG scale is limited to positive numbers ranging from 1 to 30. However, when utilizing Stable Diffusion via a terminal, the CFG scale can be set as high as 999 and can even take negative values, which indicates the desire for Stable Diffusion to generate content opposite to your text prompt.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pnhsy2v3ov2qayz5p260.png)

## How to Use the CFG Scale in Stable Diffusion?

To learn how to use the CFG scale in Stable Diffusion, you should have a Stable Diffusion model in your project. In this section, we'll teach you how to use it step by step, starting from integrating Stable Diffusion into your program.
### Step-by-Step Guide

The benefit of getting Stable Diffusion by integrating an API, rather than downloading it, is that you are able to train and make adjustments to the models according to your needs.

- Step 1: Open the **[Novita AI](https://novita.ai/)** website and create an account on it.
- Step 2: Navigate to the "API" section and find the one you want. Novita AI features various APIs like "**[Text to Image](https://novita.ai/reference/image_generator/text_to_image.html)**", "**[Image to Image](https://novita.ai/reference/image_generator/image_to_image.html)**", and so on.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i69bwwqa5z1cw9m6fzrq.png)

- Step 3: Get the API key and integrate it into your project.
- Step 4: Turn to your Stable Diffusion interface.
- Step 5: Select a Stable Diffusion model you want from the list and enter the prompts for your image. Novita AI provides many models, including **[Stable Diffusion XL](https://novita.ai/)** and Stable Diffusion 3.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wsko9ijedlp4qaz3sst7.png)

- Step 6: Adjust the CFG scale value and generate the image.
- Step 7: Experiment with different CFG scale values to uncover the one that brings out the most impressive result.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tasqj5sngnzg1g2lmhk2.png)

### Hardware Considerations About Using the CFG Scale

The performance and outcome of Stable Diffusion can be influenced by the hardware used.

- Graphics Processing Unit (**[GPU](https://novita.ai/)**): A powerful GPU is essential for running Stable Diffusion efficiently. The model leverages the GPU for the computationally intensive tasks involved in image generation.
- Random Access Memory (RAM): Adequate system RAM is important for overall system responsiveness and the ability to handle large datasets. A minimum of 16GB RAM is recommended, with 32GB for more demanding tasks.
- Operating System: Stable Diffusion is compatible with various operating systems, including Windows, macOS, and Linux. However, the specific version and updates may affect compatibility and performance.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2wm3eaoeb2mzth7xihzo.png)

## Use Cases of the CFG Scale for Stable Diffusion

The CFG scale in Stable Diffusion allows users to fine-tune the image generation process according to their needs.

### Optimizing Image Quality

Users can adjust the CFG scale to optimize image quality. A value of 7 is often recommended, as it provides a good balance between realism and fidelity to the input prompt.

### Negative Prompts

The CFG scale can be used in conjunction with negative prompts, which can help create images that exclude certain elements while still adhering to the main text prompt.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iai0y0gtcfvwlaalilyj.png)

### Case Study

By adjusting the CFG scale value in a case study, we can observe how different levels of guidance affect the generated images, further understanding the importance of the CFG scale's role in achieving high-fidelity output images. Additionally, Novita AI also provides a playground for "**[image-to-image](https://blogs.novita.ai/enhance-your-generator-stable-diffusion-image-to-image-mastery/)**". You can run your case study on it.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axyqfwdhcod74qvpjsk2.png)

## Conclusion

In conclusion, the CFG scale is a vital parameter in Stable Diffusion that controls the strength of prompt guidance. Understanding how to adjust the CFG scale based on the desired outcome and the quality of the original image can significantly improve the results. As with many things in image processing and computer graphics, finding the optimal CFG scale value often involves a process of trial and error and depends on the specific requirements of each project.

> Originally published at [Novita AI](https://blogs.novita.ai/what-is-cfg-scale-stable-diffusion-and-how-to-use-it/?utm_source=dev_image&utm_medium=article&utm_campaign=cfg)

> [Novita AI](https://novita.ai/?utm_source=dev_image&utm_medium=article&utm_campaign=what-is-cfg-scale-stable-diffusion-and-how-to-use-it) is the all-in-one cloud platform that empowers your AI ambitions. With seamlessly integrated APIs, serverless computing, and GPU acceleration, we provide the cost-effective tools you need to rapidly build and scale your AI-driven business. Eliminate infrastructure headaches and get started for free - Novita AI makes your AI dreams a reality.
novita_ai
1,908,757
3. Essential Keymapping and Settings
Now I am about to write a lot of neovim configuration code, the first configuration will be settings...
27,945
2024-07-03T09:29:30
https://dev.to/stroiman/3-essential-keymapping-and-settings-3e8
neovim, vim
Now I am about to write a lot of neovim configuration code, the first configuration will be settings that help me working with the configuration itself. [Read more about keyboard sequences and leader keys](https://dev.to/stroiman/leader-keys-and-mapping-keyboard-sequences-3ehm) ## Open and re-source the configuration As I already mentioned that the number one reason for why I structure my configuration the way I do is because I need to be able to changes, when I'm in my daily work, and I find that something is lacking in my configuration. I need to quickly make the change, and get back to where I was, which is why closing and reopening vim for these kinds of changes is something I really want to avoid. So the first thing, I do is setup keyboard shortcuts to edit and rerun, or "re-source" the configuration. This tip, I learned from [Learn Vimscript The Hard Way](https://learnvimscriptthehardway.stevelosh.com/chapters/07.html) I also want to map my leader key to something easily accessible, and my thumb is always resting on the space bar, so that will be my leader key. ```lua local function reload() dofile(vim.env.MYVIMRC) print("Configuration reloaded") end vim.g.mapleader = " " vim.g.maplocalleader = " " vim.keymap.set("n", "<leader>ve", ":tabnew $MYVIMRC<cr>", { desc = "Open neovim configuration file" }) vim.keymap.set("n", "<leader>vs", reload, { desc = "Re-source neovim configuration file" }) ``` Note: while resourcing could have been achieved using `:source $MYVIMRC`, I will later need to add more behaviour to the reloading, which is why I already have made a function for the behaviour. For more info, see `:help :source` For more information on leader keys and mapping sequences, see Leader keys and mapping keyboard sequences ## Escape hatch The default key for exiting _insert_-mode, <kbd>&lt;Esc&gt;</kbd> is a little far away. 
I have remapped that to <kbd>jk</kbd>, a sequence of keys that quickly exits to normal mode, and a sequence that is never used in real words or code (I've used this mapping for about 8 years now, and the only time it conflicts with a real use case is when I write about my vim configuration) - this was also a tip I learned from "Learn Vimscript ...".

```lua
vim.keymap.set("i", "jk", "<esc>", { desc = "Exit insert mode" })
```

If you are used to vim, and want to adopt this pattern, it can be helpful to remap <kbd>&lt;Esc&gt;</kbd> to the no-op (see `:help <nop>`)

```lua
vim.keymap.set("i", "<esc>", "<nop>")
```

I had this mapping set up when I first adopted this strategy. Now, it's no longer necessary for me.

## Quicker save

After having used Windows for many years, I'm used to <kbd>&lt;Ctrl&gt;+s</kbd> for saving a file. And I have remapped the <kbd>CAPS LOCK</kbd> key to work as <kbd>&lt;Ctrl&gt;</kbd>, so that combination only requires my left pinky to move slightly to the left. I also want to have this mapping available in both _normal_- and _insert_-mode, as every edit normally ends with a save anyway, i.e. after saving from _insert_-mode, I'll be back in normal mode for _editing_.

```lua
vim.keymap.set("n", "<C-s>", ":w<cr>", { desc = "Save current file" })
vim.keymap.set("i", "<C-s>", "<esc>:w<cr>", { desc = "Save current file" })
```

## Sensible indentation

While writing the `reload` function, vim by default inserted a `tab`. I want spaces, and I want two of them.

```lua
vim.opt.expandtab = true
vim.opt.tabstop = 2
vim.opt.shiftwidth = 2
vim.opt.softtabstop = 2
```

Note that these are now the default settings. Each file type can override them; e.g. in Go, I do want to have tabs[^1].

## Disable swap file

Vim keeps a "swapfile" that serves as a backup while working, and can help recover unsaved work in the case of a crash. I don't find this very helpful. I save often, and I have version history in git.
The swapfile is not as much of a nuisance in neovim as it was in vim, as neovim stores the swapfile away from your working directory. The vim versions I have used stored the swap file in the same folder as the edited file, creating a lot of noise in the file system.

```lua
vim.opt.swapfile = false
```

When connecting to a remote server, a swapfile _can_ be helpful to recover from a broken connection. But [tmux](https://github.com/tmux/tmux/wiki) can also solve that problem.

## Line numbers

Showing line numbers can be helpful, as it helps you navigate quickly to the desired line, e.g. <kbd>30gg</kbd> or <kbd>30G</kbd> both jump straight to line 30. Relative line numbers, on the other hand, show how many lines above or below each line is, often making for a shorter input, e.g. <kbd>8j</kbd> rather than <kbd>128gg</kbd>. Combine `relativenumber` with `number` to show the absolute line number for the current line, and relative numbers for all other lines.

```lua
vim.opt.number = true
vim.opt.relativenumber = true
```

*(Screenshot: the current line shows its absolute line number; all other lines show the number of lines above or below.)*

I have previously been a bit annoyed by the constant moving around of the numbers in the gutter, but right now I'm trying out relative numbers. I may disable it again in the future if I'm unhappy with it.

Note: Absolute line numbers may be more helpful for pair programming, as the navigator can quickly reference the concrete line, "I'm not sure the conditional logic on line 30 is correct" (the number remains correct even if the driver moves around at the same time).

## Search options

By default, when searching with <kbd>/</kbd> or <kbd>?</kbd>, the search is case sensitive. I generally don't want to be bothered with typing the right case. Just that the right letters are in the search should be enough.
I set `ignorecase` to have case insensitive search, and `smartcase`, which makes a search case sensitive again when it contains uppercase letters. When searching, the search results are highlighted, and the command `:nohighlight` or `:noh` disables the highlight again. I have set up a shortcut to dismiss it more quickly.

```lua
vim.opt.ignorecase = true
vim.opt.smartcase = true

vim.keymap.set("n", "<leader>h", vim.cmd.nohlsearch)
```

## Remove useless functions

If the cursor is located on a number, the shortcuts <kbd>&lt;Ctrl&gt;+a</kbd> and <kbd>&lt;Ctrl&gt;+x</kbd> increment/decrement that number. Not only do I not have any use for this behaviour, <kbd>&lt;Ctrl&gt;+a</kbd> is also used to control [tmux](https://github.com/tmux/tmux/wiki), which I _normally_ use together with neovim. When I am unexpectedly not in a tmux session, and I try to control tmux, I have on more than one occasion incremented a number without realising it until I observe a bug in my code. So I get rid of these two useless mappings.

```lua
vim.keymap.set("n", "<C-a>", "<nop>")
vim.keymap.set("n", "<C-x>", "<nop>")
```

## A note on `termguicolors`

Other tutorials often set the `termguicolors` option. This _shouldn't_ be necessary in neovim, only in vim. By default, neovim detects if your terminal supports 24-bit colours. If you find that colours are off unless you enable this setting, it's more likely that your terminal is misconfigured, not neovim. From `:help termguicolors`:

> Nvim will automatically attempt to determine if the host terminal
> supports 24-bit color and will enable this option if it does
> (unless explicitly disabled by the user).

## Coming up

Now I have the bare essentials up and running, and this is a good time to commit to the git repository. That's a task I want to perform from within neovim itself, so in the next part of the series, I will add git support to neovim. That again requires some kind of package management, so I'll also add a package manager rather than managing packages myself (although that is perfectly doable).
[^1]: Go is so opinionated about formatting that there is only one way to indent, and that is using tabs. You can configure your editor to display them however wide you like. Go still uses spaces for alignment, ensuring alignment is _not_ dependent on editor settings.
stroiman
1,909,927
Mastering MuleSoft: Your Gateway to Seamless Integration
In today's digitally-driven world, businesses rely heavily on efficient data integration to...
0
2024-07-03T09:28:46
https://dev.to/mylearnnest/mastering-mulesoft-your-gateway-to-seamless-integration-4jgg
In today's digitally-driven world, businesses rely heavily on efficient data integration to streamline operations, enhance customer experiences, and drive innovation. As companies adopt diverse applications and systems, the need for robust integration solutions becomes paramount. MuleSoft, a [leading integration platform](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/), offers a comprehensive suite of tools and services that empower organizations to connect applications, data, and devices seamlessly. This article delves into the world of MuleSoft, exploring its features, benefits, and the value it brings to businesses. **Understanding MuleSoft:** MuleSoft, founded in 2006 and acquired by Salesforce in 2018, is renowned for its flagship product, Anypoint Platform™. This unified platform integrates software as a service (SaaS), on-premises software, legacy systems, and APIs, facilitating a seamless data flow across the enterprise. MuleSoft's architecture is built on open-source technologies, making it a versatile and scalable solution for businesses of all sizes. **Key Features of MuleSoft:** **Anypoint Platform:** At the heart of MuleSoft's offerings, the Anypoint Platform provides a [comprehensive integration](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/) solution. It encompasses various tools, including Anypoint Studio, Anypoint Design Center, Anypoint Exchange, and Anypoint Management Center, ensuring end-to-end integration lifecycle management. **API-led Connectivity:** MuleSoft's API-led connectivity approach promotes reusability and agility. By designing APIs as reusable assets, businesses can accelerate integration processes, reduce development time, and foster innovation. **Hybrid Integration:** MuleSoft supports hybrid integration, enabling seamless connectivity between [cloud-based and on-premises applications](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/). 
This flexibility allows businesses to leverage existing investments while embracing modern cloud solutions. **Data Transformation:** MuleSoft offers powerful data transformation capabilities, facilitating seamless data mapping and conversion between disparate systems. This ensures data consistency and accuracy across the enterprise. **Pre-built Connectors:** MuleSoft provides a vast library of pre-built connectors for popular applications and systems, such as Salesforce, SAP, and AWS. These connectors simplify integration processes, reducing the need for custom development. **Benefits of Using MuleSoft:** **Increased Efficiency:** MuleSoft streamlines integration processes, reducing manual efforts and minimizing errors. Automated workflows and data synchronization enhance overall operational efficiency. **Enhanced Agility:** With MuleSoft's API-led connectivity, businesses can quickly adapt to changing market demands. The ability to reuse APIs accelerates development cycles, enabling rapid deployment of new services and features. **Scalability:** MuleSoft's [scalable architecture](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/) accommodates growing data volumes and evolving business needs. Whether you're integrating a few applications or hundreds, MuleSoft can handle the workload seamlessly. **Cost Savings:** By leveraging pre-built connectors and reusable APIs, businesses can significantly reduce development and maintenance costs. MuleSoft's robust integration capabilities also eliminate the need for costly point-to-point integrations. **Improved Customer Experience:** Seamless integration of customer data across various touchpoints ensures a consistent and personalized experience. MuleSoft enables real-time data access, empowering businesses to deliver timely and relevant interactions. 
**Real-World Applications of MuleSoft:** **Retail:** In the retail industry, MuleSoft enables seamless integration of e-commerce platforms, inventory management systems, and [customer relationship management (CRM)](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/) tools. This ensures a unified view of customer data, optimizing inventory management and enhancing customer engagement. **Healthcare:** MuleSoft plays a crucial role in healthcare integration by connecting [electronic health records (EHR)](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/) systems, patient management systems, and wearable devices. This facilitates secure data exchange, enabling better patient care and streamlined operations. **Financial Services:** In the financial sector, MuleSoft helps integrate banking systems, payment gateways, and fraud detection tools. This ensures secure and efficient transactions, compliance with regulations, and improved customer experiences. **Manufacturing:** MuleSoft supports the integration of supply chain management systems, [enterprise resource planning (ERP)](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/) software, and IoT devices in the manufacturing industry. This enhances production efficiency, reduces downtime, and enables predictive maintenance. **Key Components of MuleSoft Training:** **Anypoint Platform Overview:** Training programs typically begin with an overview of the Anypoint Platform, covering its architecture, components, and key features. This foundational knowledge is crucial for understanding how MuleSoft operates. **API Design and Development:** Participants learn how to design and develop APIs using Anypoint Studio and Anypoint Design Center. They gain hands-on experience in creating APIs, defining data models, and implementing security measures. 
**DataWeave Transformation:** [DataWeave](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/), MuleSoft's powerful data transformation language, is a critical component of training. Participants learn how to perform data mapping, filtering, and transformation tasks to ensure seamless data integration. **Integration Patterns:** Training programs cover various integration patterns, such as point-to-point, hub-and-spoke, and API-led connectivity. Understanding these patterns helps professionals choose the right approach for different integration scenarios. **Deployment and Management:** Participants learn how to deploy and manage MuleSoft applications using Anypoint Management Center. This includes monitoring, troubleshooting, and ensuring high availability and performance. **Real-World Projects:** Practical, [real-world projects](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/) are an integral part of MuleSoft training. These projects provide hands-on experience, allowing participants to apply their knowledge to solve complex integration challenges. **Why Choose My Learn Nest for MuleSoft Training in Hyderabad?** For those seeking top-notch MuleSoft training in Hyderabad, My Learn Nest stands out as a premier choice. With a team of experienced instructors and a comprehensive curriculum, My Learn Nest ensures that participants gain in-depth knowledge and practical skills in MuleSoft integration. **Expert Instructors:** My Learn Nest boasts a team of certified MuleSoft instructors with extensive industry experience. They bring real-world insights and best practices to the training sessions, ensuring participants receive high-quality education. **Comprehensive Curriculum:** The training programs at My Learn Nest cover all aspects of [MuleSoft integration](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/), from basic concepts to advanced techniques. 
The curriculum is designed to equip participants with the skills needed to excel in their careers. **Hands-on Learning:** My Learn Nest emphasizes hands-on learning, providing participants with ample opportunities to work on real-world projects. This practical experience is invaluable in mastering MuleSoft integration. **Flexible Learning Options:** My Learn Nest offers flexible learning options, including classroom-based training, online courses, and corporate training programs. This ensures that learners can choose a format that suits their schedule and preferences. **Certification Support:** My Learn Nest provides comprehensive support for MuleSoft certification exams. Participants receive guidance on exam preparation, practice tests, and tips for success, increasing their chances of achieving certification. **Conclusion:** In an era where seamless integration is crucial for business success, MuleSoft emerges as a powerful solution that bridges the gap between [diverse applications and systems](https://www.mylearnnest.com/best-mulesoft-training-in-hyderabad/). Its robust features, API-led connectivity, and hybrid integration capabilities make it an invaluable asset for organizations across industries. By investing in MuleSoft training at My Learn Nest in Hyderabad, professionals can unlock the full potential of MuleSoft, driving efficiency, agility, and innovation in their organizations. Whether you're a developer, architect, or IT professional, mastering MuleSoft opens doors to exciting career opportunities and empowers you to make a significant impact in the world of integration.
mylearnnest
1,909,912
How to enhance your video quality?
Video has rapidly become a cornerstone of modern communication, whether it's through social media,...
0
2024-07-03T09:27:13
https://dev.to/alinaxiaoya/how-to-enhance-your-video-quality-1b1f
webdev, tutorial
Video has rapidly become a cornerstone of modern communication, whether it's through social media, professional presentations, or content creation for personal projects. The quality of your video can greatly influence the viewer's perception and engagement. Therefore, enhancing the quality of the video content you've already created is crucial. This guide provides practical steps to quickly improve the quality of your existing videos using [Tencent Media Processing Service](https://mps.live/), ensuring they meet professional standards and capture your audience's attention, whether it's for your YouTube channel, an online course, or simply sharing with friends and family.

The key factors that contribute to video quality include:

1. **Resolution**: The number of pixels in each dimension that a video contains. Higher resolutions like 1080p or 4K offer more detail.
2. **Frame Rate**: The frequency at which consecutive images called frames appear on a display. A higher frame rate (measured in frames per second or fps) results in smoother motion.
3. **Color Depth**: How many colors the video can display. More colors mean higher color fidelity and a more vibrant image.
4. **Noise**: Random variations of brightness or color information in images. Less noise results in a cleaner image, especially in low light conditions.

## Try demo

![Demo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p9s6dk1o8a6mc62uxobb.png)

[Tencent MPS](https://mps.live/products/enhancement) has powerful video enhancement capabilities, including super-resolution, frame interpolation, color enhancement, and noise reduction. We can first try the [demo](https://mps.live/demo/enhancement/repair) to experience the basic video enhancement functionality. The demo divides video enhancement into four pages based on scenes, with each scene offering different enhancement options. The templates have already been configured.
First, we select a video file that needs enhancement and upload it.

![Select Video File](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3py3ik4l4yrpbuwroua1.png)

Then, we click the button to initiate the task and wait for some time.

![Start Processing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7pd42uhopq0e2e7fkmg2.png)

Afterward, we can see the enhanced results. The execution time depends on the length of the video: the longer the video, the longer the processing time.

To download the processed video, follow these steps:

1. Click "History" in the top right corner.
   ![History icon](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zvdhkls79a6bdj5f1uqr.png)
2. Find the corresponding task.
3. Click the download button to download the video.

Now you can enjoy the quality-enhanced video locally.

## Using MPS Console

If you have a large number of videos that need to be processed in batches, consider using the [MPS console](https://console.tencentcloud.com/mps). The MPS console provides VOD orchestration: uploading files automatically triggers the orchestration and outputs the results to the specified location.

After successfully logging into the [MPS console](https://console.tencentcloud.com/mps), first create a video enhancement template by clicking "[Templates > Audio/Video Enhancement > Create template](https://console.tencentcloud.com/mps/templates/enhs/enh)".

![Create template](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/07tpshcqiurnz3zzo0j8.png)

Follow the instructions on the page to fill in the template parameters and select the specific video enhancement details according to your needs, such as super-resolution, HDR, color enhancement, detail enhancement, and so on.

Then click "[Orchestrations > VOD Orchestration > Create VOD Orchestration](https://console.tencentcloud.com/mps/workflows/vod/add)".
On this page, create an orchestration and configure the relevant information based on your business scenario requirements. The video files can be stored on AWS or Tencent Cloud COS, so choose according to your needs.

![Create VOD Orchestration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zk15mq7whvf9j9u5kwb9.png)

Configure actions by adding an Audio/Video Enhancement node, then edit the node and select the newly created video enhancement template.

![Add Enhancement Node](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iq7x9mb3o8uqm4lk1vui.png)

For more detailed tutorials, you can refer to the [VOD Orchestration Guide](https://mps.live/document/58242).

After creating the orchestration, it will be disabled by default, so we now enable it. Once the orchestration is enabled, automatic task triggering is activated: when a new file is uploaded to the trigger bucket configured in the orchestration, the system automatically initiates the processing task, without the need to manually create tasks in the console.

![Enable Orchestration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fqrn2jy0emyt9kz2c43n.png)

Now, all you need to do is upload the video file to the trigger bucket configured in the orchestration. This automatically initiates the task to process the video; just wait for the new video file to be generated. According to the output configuration in the orchestration, you can find the new video file in the corresponding bucket.
alinaxiaoya
1,909,926
How to Use Scroll Area in Next.js with Shadcn UI
In this tutorial, we will see how to implement a scroll area in Next.js 13 using the Shadcn...
0
2024-07-03T09:26:31
https://frontendshape.com/post/how-to-use-scroll-area-in-next-js-with-shadcn-ui
nextjs, shadcnui, webdev
In this tutorial, we will see how to implement a scroll area in Next.js 13 using Shadcn UI. Before using the scroll area, you need to add the `scroll-area` component with the `shadcn-ui` CLI.

```
npx shadcn-ui@latest add scroll-area
# or
npx shadcn-ui@latest add
```

### NextJS with Shadcn UI Scroll Area Example

1. Create a scroll area in Next.js 13 using the Shadcn UI `ScrollArea` component.

```jsx
import { ScrollArea } from "@/components/ui/scroll-area"

export default function ScrollAreaDemo() {
  return (
    <div>
      <ScrollArea className="h-[200px] w-[350px] rounded-md border p-4">
        Jokester began sneaking into the castle in the middle of the night and
        leaving jokes all over the place: under the king's pillow, in his soup,
        even in the royal toilet. The king was furious, but he couldn't seem to
        stop Jokester. And then, one day, the people of the kingdom discovered
        that the jokes left by Jokester were so funny that they couldn't help
        but laugh. And once they started laughing, they couldn't stop.
      </ScrollArea>
    </div>
  )
}
```

![scroll area](https://frontendshape.com/wp-content/uploads/2024/06/xRUTLvitMpWFTqulPjzfupfN9AQ628qkRfOKKz24.png)

2. Implementing tagged lists with scroll areas in Next.js using Shadcn UI. Note that the `key` prop belongs on the outermost element returned from `map` — here the `React.Fragment` — not on a nested element.

```jsx
import * as React from "react"

import { ScrollArea } from "@/components/ui/scroll-area"
import { Separator } from "@/components/ui/separator"

const tags = Array.from({ length: 50 }).map(
  (_, i, a) => `v1.2.0-beta.${a.length - i}`
)

export default function ScrollAreaDemo() {
  return (
    <ScrollArea className="h-72 w-48 rounded-md border">
      <div className="p-4">
        <h4 className="mb-4 text-sm font-medium leading-none">Tags</h4>
        {tags.map((tag) => (
          <React.Fragment key={tag}>
            <div className="text-sm">{tag}</div>
            <Separator className="my-2" />
          </React.Fragment>
        ))}
      </div>
    </ScrollArea>
  )
}
```

![tags lists scroll area](https://frontendshape.com/wp-content/uploads/2024/06/drDh0GDx1SbA5ydMkMHjvCQFCaSQEnnRk7diM863.png)

3. Next.js with Shadcn UI horizontal image scrolling.

```tsx
import * as React from "react"
import Image from "next/image"

import { ScrollArea, ScrollBar } from "@/components/ui/scroll-area"

export interface Artwork {
  artist: string
  art: string
}

export const works: Artwork[] = [
  {
    artist: "Ornella Binni",
    art: "https://images.unsplash.com/photo-1465869185982-5a1a7522cbcb?auto=format&fit=crop&w=300&q=80",
  },
  {
    artist: "Tom Byrom",
    art: "https://images.unsplash.com/photo-1548516173-3cabfa4607e9?auto=format&fit=crop&w=300&q=80",
  },
  {
    artist: "Vladimir Malyavko",
    art: "https://images.unsplash.com/photo-1494337480532-3725c85fd2ab?auto=format&fit=crop&w=300&q=80",
  },
]

export function ScrollAreaHorizontalDemo() {
  return (
    <ScrollArea className="w-96 whitespace-nowrap rounded-md border">
      <div className="flex w-max space-x-4 p-4">
        {works.map((artwork) => (
          <figure key={artwork.artist} className="shrink-0">
            <div className="overflow-hidden rounded-md">
              <Image
                src={artwork.art}
                alt={`Photo by ${artwork.artist}`}
                className="aspect-[3/4] h-fit w-fit object-cover"
                width={300}
                height={400}
              />
            </div>
            <figcaption className="pt-2 text-xs text-muted-foreground">
              Photo by{" "}
              <span className="font-semibold text-foreground">
                {artwork.artist}
              </span>
            </figcaption>
          </figure>
        ))}
      </div>
      <ScrollBar orientation="horizontal" />
    </ScrollArea>
  )
}
```

![Horizontal Image Scrolling](https://frontendshape.com/wp-content/uploads/2024/06/Scroll-area-shadcn-ui-1-1.png)
aaronnfs
1,909,925
The Cloud Revolution Shaking Up Video Production
Hey folks, if you've been around the broadcasting scene as long as I have, you know how much the...
0
2024-07-03T09:24:24
https://dev.to/kevintse756/the-cloud-revolution-shaking-up-video-production-434g
Hey folks, if you've been around the broadcasting scene as long as I have, you know how much the landscape has changed in recent years. The move from old analog systems to internet-driven digital workflows has turned traditional broadcasting on its head. These massive changes might seem intimidating, especially for us old hands, but the reality is that embracing innovation is the only way for media companies to survive and thrive in today's multi-platform world. I wanted to share my thoughts on how cloud solutions like [TVU Networks](https://www.tvunetworks.com/)' MediaHub are equipping the next generation of video production and delivery with flexibility and future-readiness.

**The Drawbacks of Old Hardware**

Oh man, I still get flashbacks to the hardware-heavy days of those master control rooms. Routing a video signal back then meant physically patching cables between various devices—routers, converters, monitors. Just switching a single feed required manually rewiring connections and hoping you didn't accidentally knock the whole station off the air!

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ub3rcgyj0to5qr8oixyp.png)

Production was entirely localized, meaning you had to be there in person with all that bulky gear. Any changes required a complicated rewiring process that could easily mess up live broadcasts if anything went wrong. There was almost no room for experimentation within these rigid old systems.

**How the Cloud Changed Everything**

Nowadays, platforms like MediaHub make routing, production, and delivery happen seamlessly in the cloud. Instead of physical cables and hardware, signals are aggregated and switched virtually using cloud software. Here are some big advantages of this cloud-based approach:

- **Flexibility:** Access and route any feed from anywhere without complex on-prem setups.
- **Agility:** Set up new remote cameras and go live in seconds without the usual hassle.
- **Distributed workflows:** Production teams can collaborate from anywhere through the cloud.
- **Multi-platform publishing:** Instantly share content across linear TV, web, social media, and more.

**The Future of Broadcasting Is Cloud-Powered**

These cloud innovations are incredibly empowering, but I know there's more disruption ahead with emerging tech like 5G, VR, and smart production tools. But at its core, video production will always be about creative storytelling. These cloud platforms just provide the tools to tell those stories without outdated limitations.

I've decided to embrace change rather than resist it. Solutions built for this multi-platform age are letting us completely rethink broadcasting. I'm excited to see where the cloud will take video production next!

What do you think about how platforms like [TVU MediaHub](https://www.tvunetworks.com/products/tvu-mediahub-cloud-router/) are shaking up traditional video workflows? Share your thoughts in the comments!
kevintse756
1,908,754
2. Creating a Sandbox Environment
When I started creating the configuration, I already had a working configuration, that should stay...
27,945
2024-07-03T09:18:01
https://dev.to/stroiman/2-creating-a-sandbox-environment-4p06
vim, neovim
When I started creating the configuration, I already had a working configuration, which should stay the default until I was happy with the new one. Fortunately, this is quite easy. By default, neovim loads the configuration from `$HOME/.config/nvim`, but you can customise this through environment variables.

First, I create a directory for the configuration and an empty lua file.

```sh
mkdir -p $HOME/.config/nvim-new # 1. Create a new folder
cd $HOME/.config/nvim-new
touch init.lua                  # 2. Create an empty init.lua file
git init                        # 3. I also want to have this in version control
git add init.lua
git commit -m "Initial commit"
```

Now that an empty configuration is created, I point to the new folder using the environment variable `NVIM_APPNAME`, so I can launch neovim with

```sh
NVIM_APPNAME=nvim-new nvim
```

In neovim, you can open your config file, `:e $MYVIMRC`, and see that it is in fact the new empty `init.lua` file that was created. This is another reason I added an empty file: if it doesn't exist, neovim will open `init.vim` instead. Opening neovim normally of course uses the original configuration in `$HOME/.config/nvim`.

### A note on version control

Many have all their "dotfiles"[^1] under version control, and so do I. But my vim configuration is such a complex beast that, while many keep it in their dotfiles repository, I prefer a separate repository for the vim configuration itself. This allows me to create branches, or, like here, easily experiment with different configurations. Having it in the same dotfiles repository would lead to a lot of complexity in my dotfile configuration management.

By the way, I use [Rake](https://github.com/ruby/rake) to set up the proper symlinks to my dotfiles repository. Maybe I'll write about that one day.

### Make the sandbox easier to use

I can make it easier to launch this by creating an alias.
```sh
alias nvim-new="NVIM_APPNAME=nvim-from-scratch nvim"
```

When I run `nvim-new`, the new empty configuration file is loaded; if I open it with `:e $MYVIMRC`, I can confirm this in the filename.

![Screenshot of neovim displaying that the loaded file is "~/.config/nvim-from-scratch/init.lua"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2l3hg8jlk9pwzgp8l7ad.png)

I don't want to create this alias in every session, so I add it to my `.zshrc` file,

```sh
echo 'alias nvim-new="NVIM_APPNAME=nvim-from-scratch nvim"' >> ~/.zshrc
```

and now every new zsh session has `nvim-new` to open my work-in-progress configuration.

Eventually, my `nvim` folder was renamed to `nvim-old`, `nvim-new` was renamed to `nvim`, and the alias was removed from my `.zshrc` file. I keep the old config so I can quickly reference it if I miss something from my old configuration (I have a keyboard shortcut for that, of course).

### Next up

Now that I have a sandbox configuration, it's time to actually start writing the configuration. In the next part of the series, I will add the very fundamental configuration that helps me edit the configuration itself.

[^1]: Dotfiles is a term used for user configuration files, as the filenames start with a "`.`", making them hidden files on unix systems. A more recent approach is to keep configuration files in (non-hidden) subfolders of a `.config` folder.
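As an aside, the `NVIM_APPNAME` mechanism generalizes to any number of sandbox configurations. The following is a hypothetical convenience, not part of the setup above: a small shell function (the name `nvim-with` is an assumption) that launches neovim with any configuration directory under `~/.config`.

```shell
# Hypothetical helper: launch neovim with a named config dir under ~/.config.
# Usage: nvim-with nvim-new [files...]  — falls back to the default config
# when no name is given.
nvim-with() {
  if [ -n "$1" ]; then
    local name="$1"
    shift
    # NVIM_APPNAME is only set for this single invocation
    NVIM_APPNAME="$name" nvim "$@"
  else
    nvim "$@"
  fi
}
```

With this in a `.zshrc`, `nvim-with nvim-new` behaves like the alias above, and additional experimental configurations need no extra aliases.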
stroiman
1,909,923
Automating User Management on Linux with Bash Scripting
As a SysOps engineer, efficient user management is crucial for maintaining system security and...
0
2024-07-03T09:21:38
https://dev.to/hellowale/automating-user-management-on-linux-with-bash-scripting-1jk1
productivity, devops, bash
As a SysOps engineer, efficient user management is crucial for maintaining system security and functionality. To simplify this process, I've developed a bash script called create_users.sh that automates user creation, password management, group assignment, and logging on Linux systems. The script is designed to dynamically handle multiple users and groups from a structured input file.

## Script Overview

The create_users.sh script performs the following tasks:

- **Reading Input:** It reads from an input file (users.txt), where each line contains a username and associated groups separated by semicolons (username;group1,group2).
- **User Creation:** Checks if each user exists; if not, creates the user with a personal group (same name as the username).
- **Password Management:** Generates a random password for each user and securely stores it in /var/secure/user_passwords.txt.
- **Group Management:** Adds users to their personal group and optionally to additional groups specified in the input file.
- **Logging:** Records all actions in /var/log/user_management.log for audit purposes.
- **Error Handling:** Skips invalid lines and manages existing users gracefully.

## Benefits of Using This Script

- **Efficiency:** Automates tedious user management tasks, reducing manual errors and saving time.
- **Security:** Ensures passwords are generated securely and stored in a protected file.
- **Flexibility:** Handles varying user and group configurations dynamically from a single input file.
- **Auditability:** Logs every action performed, providing accountability and traceability.

## Usage Instructions

To use the script:

1. Ensure the users.txt file is formatted correctly, with each line containing a username and its groups separated by a semicolon.
2. Run the script with ./create_users.sh users.txt on your Linux machine.

## Conclusion

Automating user management tasks is essential for maintaining system integrity and security. The create_users.sh script simplifies these tasks by leveraging bash scripting capabilities.
It ensures consistency, security, and efficiency in managing users and groups on Linux systems. For those interested in exploring system operations and infrastructure management opportunities, I recommend checking out the [HNG Internship](https://hng.tech/internship) program. It offers valuable insights and practical experience in the tech industry, preparing aspiring professionals for rewarding careers. Additionally, you can learn more about opportunities at [HNG Premium](https://hng.tech/premium), which provides advanced training and mentorship for tech enthusiasts looking to accelerate their career growth. Feel free to access the script on GitHub and adapt it to suit your system's specific requirements.
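For illustration, here is a minimal sketch of what such a script might look like. This is not the author's exact code: the privileged commands (`groupadd`, `useradd`, password assignment) are echoed rather than executed, so the sketch is safe to run without root.

```shell
#!/bin/bash
# Sketch of a create_users.sh-style script.
# Input lines look like: username;group1,group2
create_users() {
  local input="$1"
  while IFS=';' read -r username groups; do
    username=$(echo "$username" | xargs)  # trim surrounding whitespace
    groups=$(echo "$groups" | xargs)
    [ -z "$username" ] && continue        # skip blank/invalid lines

    if id -u "$username" >/dev/null 2>&1; then
      echo "skip: $username already exists"
      continue
    fi
    # Random password; the real script would store it in
    # /var/secure/user_passwords.txt with mode 600.
    password=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12)
    # Privileged commands shown as echoes for safety:
    echo "groupadd $username"
    echo "useradd -m -g $username -G $groups $username"
    echo "set password for $username"
  done < "$input"
}
```

Running `create_users users.txt` prints the actions that would be taken for each line; in a real deployment the echoes become actual commands and each action is appended to the log file.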
hellowale
1,909,922
Telemedicine Practitioners
At Telemedicine Practitioners, Barbara Grubbs provides numerous health and wellness services to both...
0
2024-07-03T09:20:29
https://dev.to/barbara_grubbs_ad2bf09ff7/telemedicine-practitioners-1dk2
healthcare, onlinemedicalcare
At [Telemedicine Practitioners](https://www.telemedicinepractitioners.com/), Barbara Grubbs provides numerous health and wellness services to both men and women. Her area of expertise encompasses medically supervised weight loss, focusing on delivering safe, convenient medications to your doorstep or pharmacy to help you improve your health for a lifetime. This may include prescription or natural pills or injections to help reset your cravings and response to food. Barbara Grubbs is a telehealth provider dedicated to offering comprehensive advice and consultation to patients from the comfort of their homes. She helps people with weight loss, anti-aging, wellness, and peptides. Barbara's commitment to telehealth is rooted in her belief that accessible healthcare can lead to better health outcomes. Her approach is patient-centric, aiming to reduce the barriers to medical care by providing timely and accurate consultations. Whether managing chronic conditions, addressing acute concerns, or offering preventive care advice, Barbara Grubbs is a reliable resource for patients seeking quality healthcare remotely.
barbara_grubbs_ad2bf09ff7
1,909,921
Exploring the Future of Waterproofing with Silicone Sealants
According to the new market research report “Construction Silicone Sealants Market by Type (One...
0
2024-07-03T09:20:15
https://dev.to/aryanbo91040102/exploring-the-future-of-waterproofing-with-silicone-sealants-27id
news
According to the new market research report “Construction Silicone Sealants Market by Type (One Component, two Component), Curing Type (Acetoxy, Alkoxy, Oxime), Application, End-Use Industry (Residential, Commercial, Industrial) and Region — Global Forecast to 2026”, published by MarketsandMarkets™, the global Construction Silicone Sealants Market size is expected to grow from USD 3.5 billion in 2021 to USD 4.5 billion by 2026, at a CAGR of 5.0% during the forecast period.

Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=97297482

Browse in-depth TOC on “Construction Silicone Sealants Market”:
219 — Tables
51 — Figures
207 — Pages

View Detailed Table of Content here: https://www.marketsandmarkets.com/Market-Reports/construction-silicone-sealants-market-97297482.html

The growth is due to the growing demand for window & door systems, weatherproofing, and other applications throughout the world. Silicone sealants are widely used for glazing, bathroom, and kitchen applications. The increasing demand from residential housing and commercial offices, along with rising infrastructure output from key sub-sectors such as roads, rail, energy, and water and sewerage, is boosting the demand for construction silicone sealants.

Insulating glass is expected to be the fastest-growing application in the construction silicone sealants market during the forecast period. A rising number of energy-efficient construction projects in residential and commercial end-use industries will drive the demand for silicone sealants in the insulating glass application. It accounted for a share of about 14.4% of the construction silicone sealants market, in terms of value, in 2020.

APAC is expected to hold the largest market share in the global construction silicone sealants market during the forecast period.
APAC accounted for the largest share of the construction silicone sealants market in 2020. The market in the region is growing because of growing building & construction activities in emerging countries, increasing domestic demand, rising income levels, and easy access to resources. The market is also driven by foreign investments, supported by cheap labor and economical and accessible raw materials.

Request Free Sample Pages: https://www.marketsandmarkets.com/requestsampleNew.asp?id=97297482

Construction Silicone Sealants Market Key Players

Dow (US), Wacker Chemie AG (Germany), Elkem ASA (Norway), Momentive (US), and Shin-Etsu Chemical Co. Ltd. (Japan) are the leading construction silicone sealants manufacturers globally.

3M Company, also known as Minnesota Mining and Manufacturing Company, is a multinational conglomerate corporation based in the United States. Founded in 1902, 3M is headquartered in St. Paul, Minnesota. The company operates in various sectors, including industrial, healthcare, consumer goods, and safety and graphics. 3M is well-known for its innovation and diverse range of products. The company’s success lies in its ability to develop and manufacture a wide array of products across multiple industries. 3M’s core strength is in research and development, with a focus on applying science and technology to create practical solutions for customers. It operates with production sites in 70 countries worldwide and offers products and solutions to customers in approximately 200 countries in the Americas, Asia Pacific, Europe, and the Middle East & Africa.

Dow (US), Wacker Chemie AG (Germany), Elkem ASA (Norway), Momentive (US), and Shin-Etsu Chemical Co. Ltd. (Japan), among others, are the leading construction silicone sealants manufacturers, globally.
These companies adopted new product launches, expansion, agreements & contracts, and mergers & acquisitions as their key growth strategies between 2016 and 2021 to earn a competitive advantage in the construction silicone sealants market.

TABLE OF CONTENTS

1 INTRODUCTION (Page No. - 32)
1.1 OBJECTIVES OF THE STUDY
1.2 MARKET DEFINITION
1.2.1 INCLUSIONS & EXCLUSIONS
1.2.2 MARKET SCOPE
FIGURE 1 CONSTRUCTION SILICONE SEALANTS: MARKET SEGMENTATION
1.2.3 REGIONS COVERED
1.2.4 YEARS CONSIDERED FOR THE STUDY
1.3 CURRENCY
1.4 UNIT CONSIDERED
1.5 LIMITATIONS
1.6 STAKEHOLDERS

2 RESEARCH METHODOLOGY (Page No. - 35)
2.1 RESEARCH DATA
FIGURE 2 CONSTRUCTION SILICONE SEALANTS MARKET: RESEARCH DESIGN
2.1.1 SECONDARY DATA
2.1.1.1 Critical secondary inputs
2.1.1.2 Key data from secondary sources
2.1.2 PRIMARY DATA
2.1.2.1 Critical primary inputs
2.1.2.2 Key data from primary sources
2.1.2.3 Key industry insights
2.1.2.4 Breakdown of primary interviews
2.2 BASE NUMBER CALCULATION APPROACH
2.2.1 ESTIMATION OF CONSTRUCTION SEALANT MARKET SIZE BASED ON MARKET SHARE ANALYSIS
FIGURE 3 MARKET SIZE ESTIMATION: SUPPLY-SIDE ANALYSIS
FIGURE 4 MARKET SIZE ESTIMATION: DEMAND-SIDE ANALYSIS
2.3 MARKET SIZE ESTIMATION
2.3.1 MARKET SIZE ESTIMATION METHODOLOGY: BOTTOM-UP APPROACH
2.3.2 MARKET SIZE ESTIMATION METHODOLOGY: TOP-DOWN APPROACH
2.4 DATA TRIANGULATION
FIGURE 5 CONSTRUCTION SILICONE SEALANTS MARKET: DATA TRIANGULATION
2.5 RESEARCH ASSUMPTIONS AND LIMITATION
2.5.1 LIMITATIONS
2.5.2 GROWTH RATE ASSUMPTIONS

3 EXECUTIVE SUMMARY (Page No. - 44)
FIGURE 6 ONE COMPONENT DOMINATED CONSTRUCTION SILICONE SEALANTS MARKET
FIGURE 7 ACETOXY CURE LED CONSTRUCTION SILICONE SEALANTS MARKET
FIGURE 8 WINDOW & DOOR SYSTEMS ACCOUNTED FOR LARGEST SHARE IN CONSTRUCTION SILICONE SEALANTS MARKET
FIGURE 9 COMMERCIAL END-USE INDUSTRY LED THE MARKET
FIGURE 10 APAC LED CONSTRUCTION SILICONE SEALANTS MARKET IN 2020

4 PREMIUM INSIGHTS (Page No. - 48)
4.1 ATTRACTIVE OPPORTUNITIES IN CONSTRUCTION SILICONE SEALANTS MARKET
FIGURE 11 GROWING USE OF CONSTRUCTION SILICONE SEALANTS IN END-USE APPLICATIONS TO DRIVE MARKET
4.2 CONSTRUCTION SILICONE SEALANTS MARKET, BY REGION
FIGURE 12 APAC TO BE THE LARGEST MARKET BETWEEN 2021 AND 2026
4.3 APAC: CONSTRUCTION SILICONE SEALANTS MARKET, BY COUNTRY AND END-USE INDUSTRY
FIGURE 13 CHINA AND COMMERCIAL SEGMENT ACCOUNTED FOR LARGEST SHARES
4.4 CONSTRUCTION SILICONE SEALANTS MARKET: BY MAJOR COUNTRIES
FIGURE 14 CHINA TO BE THE FASTEST-GROWING MARKET BETWEEN 2021 AND 2026

5 MARKET OVERVIEW (Page No. - 50)
5.1 INTRODUCTION
5.1.1 FOUR BASIC FUNCTIONS OF SEALANTS
5.1.2 IMPORTANCE OF SEALANTS
5.1.3 ADVANTAGES OF CONSTRUCTION SILICONE SEALANTS
5.2 COVID-19 ECONOMIC ASSESSMENT
FIGURE 15 REVISED GDP FORECASTS FOR SELECT G20 COUNTRIES IN 2020
5.3 MARKET DYNAMICS
FIGURE 16 DRIVERS, RESTRAINTS, OPPORTUNITIES, AND CHALLENGES IN CONSTRUCTION SILICONE SEALANTS MARKET
5.3.1 DRIVERS
5.3.1.1 Increased demand in residential housing and infrastructure sectors
5.3.1.2 Rising demand from developing countries
5.3.1.3 Safety and ease of applications
5.3.1.4 Increasing demand from structural glazing and panels in new high-rise buildings
5.3.2 RESTRAINTS
5.3.2.1 Environmental regulations hindering market growth
5.3.3 OPPORTUNITIES
5.3.3.1 Growing demand for low VOC, green, and sustainable sealants
5.3.4 CHALLENGES
5.3.4.1 Shifting rules and changing standards

Continued...
aryanbo91040102
1,909,920
Commercial Vehicle Online Platform TrucksBuses.com
When it comes to secure and comfortable transport of a group of people then the 52 seater bus becomes...
0
2024-07-03T09:18:50
https://dev.to/ravi_kumar_4131905e53bcea/commercial-vehicle-online-platform-trucksbusescom-3786
trucks, buses, minitruck, pickup
When it comes to secure and comfortable transport of a group of people, the 52 seater bus becomes the perfect solution. Different [52 seater bus](https://www.trucksbuses.com/pc/52-seater-bus) brands have been produced by leading bus manufacturers in India to meet the varying demand for buses in the market, especially when it comes to transporting students, staff, and tourists, among other customers.

## Best 52 Passenger Bus Models

Get the luxury of comfort with efficiency in the Mahindra Cruzio Grande School 4440, the Ashok Leyland Oyster School 4200, and the ever-popular Tata Starbus School LP 810. These fantastic 52 seater bus marvels not only meet social utility and practicality but are also stylish and luxurious for passengers and owners alike.

## 52-Seater Bus Price Range

Huge savings on the best 52 seater bus brands as top players unveil competitive prices. Check out the detailed specifications of the Ashok Leyland Oyster School 4200 at Rs 28.1 lakh, or travel in the Tata Starbus School LP 810, which ranges between Rs 25.72 lakh and Rs 26.50 lakh. If one wants to go a notch higher in terms of capability and features, the Tata Starbus School LP 909 CNG is priced at Rs 27.88 lakh.

## True Fuel Efficiency & Performance

Since operators aim to balance operational expenses and environmental impact, the 52 seater bus models exhibit optimal fuel consumption figures. With diesel versions that can travel 5 to 8 kilometres per litre and CNG options that can travel 5 to 7 kilometres per kilogram, these buses are made to perform efficiently at low operational cost.

## Comfort and Safety

In addition to the excellent features, the best 52 seater buses for sale in India focus on the comfort and security of the riders. Facilities such as leg space, seat belts, fire fighting apparatus, and space for emergency exit also add confidence for both passengers and operators of these buses.
Moreover, the sixteen positions of the flexible seating layout, from 2 by 2 to 3 by 3, make the journey comfortable for everyone.

## Long Wheelbase 52 Seater Bus Models

Catering to different transport requirements, the 52 seater bus segment comes with a variety of wheelbases. The wheelbase ranges from as low as 4100 mm in the compact models to as large as 5240 mm in the large models; this provides flexibility in accommodating different passenger loads and luggage as well.

## Bus Offers from TrucksBuses

At TrucksBuses, we understand the need to find the perfect 52 seater bus. With our easy-to-use search tool, you can locate reputable dealers near you in India and choose from a vast range of excellent quality 52 seater bus models. You get hassle-free and professional service as you use the advanced search functions to make your 52 seater bus hire decision.

Improve your transportation experience with the finest 52 seater buses offered in India. Enjoy comfort, quality, and durability like never before, and set off on a journey that is beyond compare.
ravi_kumar_4131905e53bcea
1,909,867
CSS One-Liners to Improve (Almost) Every Project
A collection of simple one-line CSS solutions to add little improvements to any web page.
0
2024-07-03T09:18:50
https://alvaromontoro.com/blog/68055/ten-css-one-liners-for-almost-every-project#limit-content-width
css, webdev, listicle
---
title: CSS One-Liners to Improve (Almost) Every Project
published: true
description: A collection of simple one-line CSS solutions to add little improvements to any web page.
tags: css,webdev,listicle
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vwz130pfkgx462jzw3t7.png
canonical_url: https://alvaromontoro.com/blog/68055/ten-css-one-liners-for-almost-every-project#limit-content-width
---

Most of these one-liners will be one declaration inside the CSS rule. In some cases, the selector will be more than just a simple element; in others, I will add extra declarations as recommendations for a better experience, thus making them more than a one-liner —my apologies in advance for those cases.

Some of these one-liners are more of a personal choice and won't apply to all websites (not everyone uses tables or forms). I will briefly describe each of them, what they do (with sample images), and why I like using them. Notice that the sample images may build on top of previous examples.

Here's a summary of what the one-liners do:

- Limit the content width within the viewport
- Increase the body text size
- Increase the line between rows of text
- Limit the width of images
- Limit the width of text within the content
- Wrap headings in a more balanced way
- Form control colors to match page styles
- Easy-to-follow table rows
- Spacing in table cells and headings
- Reduce animations and movement

---

## Limit the content width in the viewport

```scss
body {
  max-width: clamp(320px, 90%, 1000px);
  /* additional recommendation */
  margin: auto;
}
```

Adding this one-liner will reduce the content size to occupy 90% of the viewport, limiting its width between 320 and 1000 pixels (feel free to update the minimum and maximum values). This change will automatically make your content look much nicer. It will no longer be a vast text block but something that looks more structured and organized.
And if you also add `margin: auto;` to the `body`, the content will be centered on the page. Two lines of code make the content look so much better.

![Side-by-side comparison of change. Left side (before): a big block of text. Right side (after): text with padding on the sides. Still big but with more spaces.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a4a9iqjpyj2zuz4czrtc.png)

Aligned and contained text looks better than a giant wall of text

---

## Increase the text size

```scss
body {
  font-size: 1.25rem;
}
```

Let's face reality: **browsers' default 16px font size is small.** Although that may be a personal opinion based on me getting old 😅 One quick solution is to increase the font size in the body. Thanks to the cascade and `em` units browsers use, all the text on a web page will automatically increase.

![side by side comparison. Left (before): column with text. Right (after): column with text at a larger size.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7fwdl4lvkithaiuyv15w.png)

Larger text size makes things easier to read.

---

## Increase the space among lines

```scss
body {
  line-height: 1.5;
}
```

Another preference for improving readability and breaking the dreaded wall of text is increasing the space between lines in paragraphs and content. We can easily do it with the `line-height` property.

![side by side comparison. Left (before): column with text. Right (after): column with text (more spaced).](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ngaiejjy2m023phs7oc.png)

Spaces between lines break the wall of text and the rivers of white. This choice (with the previous two) will considerably increase our page's vertical size, but I assure you the text will be more readable and friendlier for all users.
---

## Limit the size of images

```scss
img {
  max-width: 100%;
}
```

Images should be approximately the size of the space they will occupy, but sometimes, we end up with really long pictures that cause the content to shift and create horizontal scrolling. One way to avoid this is by setting a maximum width of 100%. While this is not a fool-proof solution (margins and paddings may impact the width), it will work in most cases.

![side by side comparison. Left (before): an image overflows the content size causing scrollbars to appear. Right (after): the image adjust to the content size.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnwu4sxaw5b2yf7zz9hw.png)

Prevent horizontal scrolling and make images flow better with the text

---

## Limit the width of text within the content

```scss
p {
  max-width: 65ch;
}
```

Another tactic to avoid the dreaded wall of text and rivers of space is to apply this style even in conjunction with the max width in the body. It may look unnecessary and sometimes weird, as paragraphs will be narrower than other elements. But I like the contrast and the shorter lines. A value of 60ch or 65ch has worked for me in the past, but you can use a different value and adjust the max width to match your needs. Play and explore how it looks on your web page.

![side by side comparison. Left (before): the text occupies the whole width. Right (after): the text occupies most of the width.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rm2mom0zob6qyzj7grmo.png)

Break the bigger chunks of text into smaller blocks for readability

---

## Wrap headings in a more balanced way

```scss
h1, h2, h3, h4, h5, h6 {
  text-wrap: balance;
}
```

Headings are an essential part of the web structure, but due to their larger size and short(-er) content, they may look weird, especially when they occupy more than one line. A solution that will help is balancing the headings with `text-wrap`.
Although `balance` seems to be the most popular value for `text-wrap`, it is not the only one. We could also use `pretty`, which moves an extra word to the last row if needed instead of balancing all the content. Unfortunately, `pretty` does not yet have broad support.

![side by side comparison. Left (before): a heading occupies two rows, the second one has only 1 word. Right (after): the heading occupies two rows of similar width.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b7kn4nu0p5txby7r3mvh.png)

Balanced wrapping can improve visibility and readability

---

## Form control colors to match page styles

```scss
body {
  accent-color: #080; /* use your favorite color */
}
```

Another small change that does not have a significant impact but that makes things look better. Until recently, we could not style native form controls with CSS and were stuck with the browser display. But things have changed. Developing a whole component can be a pain, but setting a color that is more similar to the rest of the site and the design system is possible and straightforward with this one-liner.

![side by side comparison. Left (before): form controls are the default blue . Right (after): form controls color match the heading and link colors (green).](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u841w6hxaettw5o1lj40.png)

It's the small details (and colors) that bring the page together

---

## Easy-to-follow table rows

```scss
:is(tbody, table) > tr:nth-child(odd) {
  background: #0001; /* or #fff1 for dark themes */
}
```

We must use tables to display data, not for layout. But tables are ugly by default, and we don't want data to look ugly. In particular, one thing that helps organize the data and make it more readable is having a zebra table with alternating dark/light rows. The one-liner displayed above makes achieving that style easy.
It can be simplified to be only `tr` without considering the `tbody` or `table` parent, but it would also apply to the table header, which we may not want. It's a matter of taste.

![side by side comparison. Left (before): all table rows are white. Right (after): even table rows are slightly darker.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gojzbbwkh4wha7vkp10a.png)

Easier to follow the data horizontally (by row)

---

## Spacing in table cells and headings

```scss
td, th {
  padding: 0.5em; /* or 0.5em 1em... or any value different from 0 */
}
```

One last change to make tables more accessible and easier to read is to space the content slightly by adding padding to the table cells and headers. By default, most browsers don't have any padding, and the text of different cells touches, making it confusing to differentiate where one begins and the other ends. We can change the padding value to adjust it to our favorite size. However, avoid overdoing it to prevent unnecessary scrolling or too much blank space.

![side by side comparison. Left (before): table cells text content is altogether. Right (after): table cells content is clearly separated from other table cells.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/31ovktiaisdopgh0n972.png)

Easier to follow data horizontally and vertically

---

## Reduce animations and movement

```scss
@media (prefers-reduced-motion) {
  *, *::before, *::after {
    animation-duration: 0s !important;
    /* additional recommendation */
    transition: none !important;
    scroll-behavior: auto !important;
  }
}
```

Okay, okay. This code is way more than a one-liner. It has a one-liner version (removing animations by setting their duration to zero seconds), but other things on a web page make elements move. By setting these declarations inside the prefers-reduced-motion media query, we will respect the person's choice to have less movement.
**This approach is somewhat radical because it removes all movement, which may not necessarily be the user's intent -it is "reduced motion" and not "no motion."** We can still preserve movement on a case-by-case basis if appropriate.

![side by side comparison. Left (before): an image moves over a web page. Right (after): the image is static.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b4094ca9xn9a2pf7xwdm.png)

No animations? No problem!
alvaromontoro
1,909,919
How I got 50 Followers on ProductHunt in 5 Minutes using AI
Everybody who knows anything about ProductHunt, knows it's filled exclusively with marketing managers...
0
2024-07-03T09:16:58
https://ainiro.io/blog/how-i-got-50-followers-on-producthunt-in-5-minutes-using-ai
Everybody who knows anything about [ProductHunt](https://www.producthunt.com) knows it's filled exclusively with marketing managers and click leeches. You can see this by nobody really caring about having an actual discussion, which shows in everybody asking questions. The idea behind this is to artificially inflate their followers and engagement, to generate _"karma"_, such that they'll have more pull during launch day.

> By phrasing your original discussion as a question, the probability that somebody will engage with your discussion increases, artificially inflating your _"karma"_ by gaming the system

This has nothing to do with AI, and ProductHunt has always been like this.

## Buying upvotes

In addition, it's an _"open secret"_ that there's no way you can reach the top for a product launch without buying upvotes. There's an entire industry of people willing to sell you upvotes. I would know, because I had 10 offers yesterday during our own launch day. Below is one example.

![Buying ProductHunt links](https://docs.ainiro.io/assets/images/buying-producthunt-links.png)

The result is that every single launch winner is unwittingly participating in, and subsidising, human trafficking and slavery-like click farms. For the record, I didn't buy a single upvote - something you can see by our [product having 13 upvotes](https://www.producthunt.com/posts/ainiro-ai-workflows).

## Slavery-like Click Farms

When you buy upvotes for your product, you're purchasing services from click farms in 3rd world countries. A lot of these click farms are literally abducting teenagers from their families, luring them in with false promises of good jobs with good salaries, having them fly to a different country, and stealing their passports as they start their jobs. Their job is sitting like sardines for 80 hours per week, clicking 500+ links per hour, and then performing some _"action"_.
For ProductHunt this implies upvoting the product, and possibly copying and pasting some comment given to them by their manager in some Excel sheet or something. The advanced ones might scrape the product launch page, and maybe use AI to generate a relevant comment - something I've demonstrated how to do in one of my previous videos.

Most of these click farms demand 80+ hours of work per week from their employees. Some of them will forcibly deny their staff permission to leave the factory, sometimes with armed guards. And all of them will pay minimum wage, if they even pay any salary at all. There are historical accounts of employees who managed to escape their slavery facility and go to the police, which arrested them, detained them, and returned the escaped workers to the factory owner.

This is such a huge problem that the Chinese government has issued _"travel warnings"_ to its citizens for specific countries. Hundreds of thousands of Chinese teenagers have been abducted this way, working under slave-like conditions, 80 hours per week, some of whom are upvoting your product launch.

> Basically, it's modern slavery, and there are hundreds of thousands of teenagers working under such conditions in south east Asia

## ProductHunt is unwillingly subsidising Slavery

By not being willing or able to deal with this, ProductHunt is unwillingly subsidising modern slavery. **YOU** as a marketing manager are also subsidising such slavery by purchasing upvotes from people such as the guy in the above screenshot. The above guy is of course just a sales executive, and would probably deny the allegations I put forth in this article if confronted with them - but the problem is so large that the Chinese government publicly warned its young citizens against travelling to specific countries because of the risk of being abducted and sold into slavery to such click farms.

> When the CCP warns about slavery, you know it's bad!
## YOU are Subsidising Slavery

In addition, **you** are subsidising slavery by purchasing upvotes from click farms such as these. Not only are you subsidising slavery, but you're also subsidising the same companies that are maliciously destroying your ad campaigns, by using the Google Ads Display Network feature to click-farm-harvest clicks from Google Ads, resulting in 70% commission from Google on ads displayed on their shallow websites, created exclusively to take advantage of Google's commission for clicks generated on their web pages.

> Yes, that's right, you're **paying the same people that are destroying your ad campaigns**!

The reasons are simple; these click farms have a portfolio of products, and upvotes on ProductHunt is just _one_ of their products!

## Google is Subsidising Slavery

In addition to these problems, Google is also ipso facto subsidising slavery. The method is simple to understand, and goes as follows.

1. You set up a shallow one-page website with a JavaScript snippet displaying Google Search results, in addition to Google Ads
2. You apply to become a _"Google Partner"_ (newspeak for _"slave manager"_), resulting in 70% commission on all ads clicked on your page
3. You abduct 5,000 teenagers and force them into slavery
4. You send these slaves an Excel sheet of 5,000 links per day they need to click on, and they click every single ad they can find on each page
5. The slavery owner earns thousands of dollars per day in _"affiliate commission"_ from Google

Since CPC is sometimes between 2 and 15 dollars per click, and each slave can typically click on 500+ links per hour, this implies the slavery owner makes between $1,000 and $7,500 per hour for each slave he has abducted, while usually not paying more than $1 in salary, if even that, to the slaves doing the actual work.

> This is an industry generating hundreds of billions of dollars annually!
The end result is that you're wasting sometimes 80 to 98 percent of your advertisement budget, while subsidising slavery-like click farms in South East Asia, resulting in unfathomable amounts of suffering and grief.

> All of this because you bought 400 upvotes from the guy in the above screenshot

## The fix to the problem

The problem is quite easy to fix: **stop purchasing upvotes and clicks**! Which leaves you with a new problem: _"How can you increase engagement on your ProductHunt launch?"_ Well, it's a slightly more demanding job - but in the following YouTube video I illustrate how I added 50 new followers to my ProductHunt profile with 5 minutes of work, by intelligently using AI. These followers will be _organic followers_, as in _actual human beings_, often with a _legitimate interest_ in new products and new launches.

{% embed https://www.youtube.com/watch?v=vxEXq3vdh6k %}

The above illustrates a solution that allows you to organically grow your ProductHunt profile, resulting in more followers, who will be notified 1 to 2 months down the road as you launch your next product, increasing the probability that your launch will actually be successful. Since these are legitimate human beings too, it's more likely they will actually share your launch with their network, increasing the quality and number of customers you can get from your launch campaign, probably by 10x to 1,000x.

> It's more work, it requires more planning, but it will probably bring you 100x better results

After all, what's the point of winning _"Best product of the day"_ when the only ones who actually saw it were click farm employees?

## Let AI Free the Slaves

There's been a lot of talk lately of how AI will steal your job. I'm not going to go down that rabbit hole, except to inform you that there are hundreds of thousands of teenagers living in slavery today.
I'm willing to bet a kidney that all of these would be more than happy to have _"the AI steal their jobs"_, such that they could be released and sent back to their families. AINIRO is a company, and we're dependent upon selling products and services. However, nothing brings me greater pleasure than when we can _combine_ our ability to earn money with positively contributing to society. If you're interested in helping us, while also helping yourself, and maybe even helping some abducted slaves in South East Asia at the same time - You can contact us below. * [Contact us](https://ainiro.io/contact-us) > Happy Hunting 😊 And yes, the comment I posted at ProductHunt for this particular link was AI generated 😁
polterguy
1,909,918
Is tail call optimization truly necessary?
When I was learning about a new functional programming language, one of the first things I would...
0
2024-07-03T09:16:56
https://dev.to/mrdimosthenis/is-tail-call-optimization-truly-necessary-ojo
functional, scala
When I was learning about a new functional programming language, one of the first things I would check was whether it supports tail call optimization. My thinking was that if this feature isn't available, then tail-recursive functions could blow the stack. Tail call optimization is a technique that makes recursive function calls efficient by eliminating the need for new stack frames. When the last expression of a function is a call to itself, this optimization reuses the current function's stack frame rather than creating a new one. So, it prevents stack overflow errors and improves performance. Scala optimizes self-recursive tail calls automatically. The `@annotation.tailrec` annotation does not enable the optimization; it tells the compiler to verify that the function is tail-recursive and to fail compilation otherwise. ```scala def factorial(n: Int): Int = @annotation.tailrec def go(m: Int, acc: Int): Int = m match case 0 => acc case _ => go(m - 1, m * acc) go(n, 1) ``` The above code is an implementation of the factorial function. The `go` function is tail-recursive because the last expression is a call to itself, `go(m - 1, m * acc)`. The `@annotation.tailrec` annotation guarantees at compile time that the optimization applies. Recently, I've started wondering whether tail call optimization is really necessary. Let me explain why. The majority of functional programming languages have collections whose elements can be processed lazily, without loading them all into memory at once. Among other things, we use these collections to process large datasets. For example, we treat the contents of a huge file as a lazy sequence of lines, and we perform operations like `map`, `filter`, `fold`, `reduce`, etc. to compute some result. Interestingly, we can generate these collections with methods like `unfold` or `iterate`. It seems to me that almost all tail-recursive functions can be rewritten using lazy collections. 
```scala def factorial(n: Int): Int = Iterator .iterate((n, 1)) { (m, acc) => (m - 1, m * acc) } .dropWhile { (m, _) => m > 0 } .next() ._2 ``` In the code above, the iteration is managed without additional call-stack usage, and the memory footprint is low because the elements are created and consumed on the fly. Additionally, the code is easier to understand compared to the tail-recursive version. We can achieve similar functionality with other programming languages that follow the functional paradigm. So, the question remains: is tail call optimization truly necessary?
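The claim that lazy collections can replace tail recursion isn't Scala-specific. As a sketch of my own (not from the article), here is the same pipeline in Python — a language with no tail call optimization at all — where a small `iterate` generator plays the role of Scala's `Iterator.iterate`:

```python
from itertools import dropwhile

def iterate(f, x):
    """Lazily yield x, f(x), f(f(x)), ... — an analogue of Scala's Iterator.iterate."""
    while True:
        yield x
        x = f(x)

def factorial(n: int) -> int:
    step = lambda state: (state[0] - 1, state[0] * state[1])
    # Drop states while m > 0; the first remaining state carries the final accumulator.
    return next(dropwhile(lambda s: s[0] > 0, iterate(step, (n, 1))))[1]
```

No call stack grows here: the iteration is driven by the generator protocol, so the limit is integer size rather than stack depth.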
mrdimosthenis
1,909,917
How to Apply a Magento 2 Patch
Keeping your Magento 2 store secure and up-to-date is crucial for maintaining its stability and...
0
2024-07-03T09:16:49
https://dev.to/amineyaakoubii/how-to-apply-a-magento-2-patch-5aom
magento, security, patch
Keeping your Magento 2 store secure and up-to-date is crucial for maintaining its stability and protecting it from vulnerabilities. In this guide, we’ll walk you through the steps to apply a Magento 2 patch, often referred to as a "composer patch," without actually using Composer. ## Step-by-Step Guide to Applying a Magento 2 Patch ### Step 1: Download the Patch First, download the patch that is suitable for your Magento 2 version. Adobe regularly releases patches to address security vulnerabilities and other issues. For this example, we'll use the security update available for Adobe Commerce APSB24-40. You can find the relevant patch at the following link: [Adobe Commerce APSB24-40 Patch](https://experienceleague.adobe.com/en/docs/commerce-knowledge-base/kb/troubleshooting/known-issues-patches-attached/security-update-available-for-adobe-commerce-apsb24-40-revised-to-include-isolated-patch-for-cve-2024-34102) ### Step 2: Unzip the Patch Once you have downloaded the patch, unzip the file. The unzipped contents should include a file with a `.composer.patch` extension. ### Step 3: Apply the Patch To apply the patch manually, follow these steps: 1. **Navigate to your Magento root directory**: ```sh cd /path/to/your/magento2/root ``` 2. **Apply the patch**: ```sh patch -p1 < %patch_name%.composer.patch ``` Replace `%patch_name%` with the actual name of the patch file you unzipped. For example, if the file is named `VULN-27015-2.4.6x.composer.patch`, you would run: ```sh patch -p1 < VULN-27015-2.4.6x.composer.patch ``` ### Step 4: Verify the Patch After applying the patch, it's important to verify that the patch was applied correctly and that your Magento store is functioning as expected. You can do this by: - Checking the patch log files for any errors. - Testing key functionalities of your store to ensure everything is working smoothly. - Running a security scan to confirm that the vulnerability addressed by the patch has been resolved. 
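One optional precaution before Step 3 — not part of the official instructions, so treat it as an extra — is to run `patch` with `--dry-run`, which reports whether the patch would apply cleanly without modifying any files. Here is a toy reproduction you can run anywhere; the file names are made up purely for the demo:

```sh
# Build a tiny "patch" in a scratch directory
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p a/vendor b/vendor magento/vendor
printf 'old line\n' > a/vendor/File.php
printf 'new line\n' > b/vendor/File.php
printf 'old line\n' > magento/vendor/File.php
diff -u a/vendor/File.php b/vendor/File.php > fix.composer.patch || true

cd magento
# Dry run: verifies the patch applies cleanly but changes nothing
patch -p1 --dry-run < ../fix.composer.patch
grep 'old line' vendor/File.php   # file is still untouched

# Real application
patch -p1 < ../fix.composer.patch
grep 'new line' vendor/File.php   # change is now in place
```

On a real store the equivalent check is simply `patch -p1 --dry-run < %patch_name%.composer.patch` from the Magento root, followed by the actual `patch -p1` from Step 3 only if the dry run reports no errors.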
### Conclusion Applying patches to your Magento 2 store is an essential part of maintaining its security and performance. By following these steps, you can ensure your store is protected against the latest vulnerabilities and continues to operate seamlessly. For more detailed information and additional patches, always refer to the official Adobe Commerce knowledge base. And that's it! You've successfully patched your Magento 2 store.
amineyaakoubii
1,909,916
Asynchronous Programming in C#: Async/Await Patterns
Asynchronous programming is nowadays common in modern software development and allows applications to...
0
2024-07-03T09:15:20
https://dev.to/wirefuture/asynchronous-programming-in-c-asyncawait-patterns-47nk
csharp, dotnet, aspdotnet, webdev
Asynchronous programming is nowadays common in modern software development and allows applications to do work without stalling the main thread. The async and await keywords in C# provide a much cleaner method for creating asynchronous code. This article explores the internals of asynchronous programming in C# including error handling and [performance optimization](https://wirefuture.com/post/how-to-improve-performance-of-asp-net-core-web-applications). ## Understanding Async and Await ### The Basics of Async and Await The async keyword is used to declare a method as asynchronous. This means the method can perform its work asynchronously without blocking the calling thread. The await keyword is used to pause the execution of an async method until the awaited task completes. ``` public async Task<int> FetchDataAsync() { await Task.Delay(1000); // Simulate an asynchronous operation return 42; } public async Task MainAsync() { int result = await FetchDataAsync(); Console.WriteLine(result); } ``` In this example, FetchDataAsync is an asynchronous method that simulates a delay before returning a result. The await keyword in MainAsync ensures that the method waits for FetchDataAsync to complete before proceeding. ## How Async and Await Work Under the Hood When the compiler encounters the await keyword, it breaks the method into two parts: the part before the await and the part after it. The part before the await runs synchronously. When the awaited task completes, the part after the await resumes execution. The compiler generates a state machine to handle this [asynchronous behavior](https://wirefuture.com/post/mastering-net-a-deep-dive-into-task-parallel-library). This state machine maintains the state of the method and the context in which it runs. This allows the method to pause and resume without blocking the main thread. ## Task vs. ValueTask In addition to Task, C# 7.0 introduced ValueTask. 
While Task is a reference type, ValueTask is a value type that can reduce heap allocations, making it more efficient for scenarios where performance is critical, and the operation completes synchronously most of the time. Here’s an example using ValueTask: ``` public async ValueTask<int> FetchDataValueTaskAsync() { await Task.Delay(1000); return 42; } public async Task MainValueTaskAsync() { int result = await FetchDataValueTaskAsync(); Console.WriteLine(result); } ``` ## Best Practices for Error Handling Error handling in asynchronous methods can be challenging. Here are some best practices to ensure robust error handling in your async code: **Use Try-Catch Blocks** Wrap your await statements in try-catch blocks to handle exceptions gracefully. ``` public async Task MainWithErrorHandlingAsync() { try { int result = await FetchDataAsync(); Console.WriteLine(result); } catch (Exception ex) { Console.WriteLine($"An error occurred: {ex.Message}"); } } ``` **Use Cancellation Tokens** Cancellation tokens allow you to cancel asynchronous operations gracefully. This is particularly useful for long-running tasks. ``` public async Task<int> FetchDataWithCancellationAsync(CancellationToken cancellationToken) { await Task.Delay(1000, cancellationToken); return 42; } public async Task MainWithCancellationAsync(CancellationToken cancellationToken) { try { int result = await FetchDataWithCancellationAsync(cancellationToken); Console.WriteLine(result); } catch (OperationCanceledException) { Console.WriteLine("Operation was canceled."); } } ``` **Handle AggregateException** When multiple tasks are awaited using Task.WhenAll, exceptions are wrapped in an AggregateException. Use a try-catch block to handle these exceptions. 
``` public async Task MainWithAggregateExceptionHandlingAsync() { var tasks = new List<Task<int>> { FetchDataAsync(), FetchDataAsync() }; Task<int[]> whenAllTask = Task.WhenAll(tasks); try { int[] results = await whenAllTask; Console.WriteLine($"Results: {string.Join(", ", results)}"); } catch (Exception) { foreach (var innerException in whenAllTask.Exception.InnerExceptions) { Console.WriteLine($"An error occurred: {innerException.Message}"); } } } ``` Note that `await` rethrows only the first exception, so the code keeps a reference to the `Task.WhenAll` task and reads the full `AggregateException` from its `Exception` property. ## Performance Optimization Techniques Optimizing the performance of asynchronous code involves understanding how the runtime handles async/await and employing best practices to minimize overhead. **Avoid Unnecessary Async** If a method does not perform any asynchronous operations, avoid marking it as async. This can save the overhead of the state machine. ``` public Task<int> FetchDataWithoutAsync() { return Task.FromResult(42); } public async Task MainWithoutUnnecessaryAsync() { int result = await FetchDataWithoutAsync(); Console.WriteLine(result); } ``` **Use ConfigureAwait(false)** By default, await captures the current synchronization context and uses it to resume execution. In library code this capture adds overhead without benefit, so using ConfigureAwait(false) avoids capturing the context, improving performance. ``` public async Task<int> FetchDataWithConfigureAwaitAsync() { await Task.Delay(1000).ConfigureAwait(false); return 42; } public async Task MainWithConfigureAwaitAsync() { int result = await FetchDataWithConfigureAwaitAsync(); Console.WriteLine(result); } ``` **Parallelize Independent Tasks** If you have multiple independent tasks, you can run them in parallel to improve performance using Task.WhenAll. 
``` public async Task MainWithParallelTasksAsync() { var tasks = new List<Task<int>> { FetchDataAsync(), FetchDataAsync() }; int[] results = await Task.WhenAll(tasks); Console.WriteLine($"Results: {string.Join(", ", results)}"); } ``` **Use ValueTask for Performance-Critical Paths** As mentioned earlier, ValueTask can be more efficient than Task for performance-critical paths where most operations complete synchronously. ``` public ValueTask<int> FetchDataWithValueTask() { return new ValueTask<int>(42); } public async Task MainWithValueTaskAsync() { int result = await FetchDataWithValueTask(); Console.WriteLine(result); } ``` ## Debugging Asynchronous Code Debugging asynchronous code can be challenging due to the state machine and context switching. Here are some tips to make debugging easier: **Use Task.Exception Property** Check the Exception property of a task to inspect the exception without causing the application to crash. ``` public async Task MainWithTaskExceptionAsync() { Task<int> task = FetchDataAsync(); try { int result = await task; Console.WriteLine(result); } catch { if (task.Exception != null) { Console.WriteLine($"Task failed: {task.Exception.Message}"); } } } ``` **Use Debugger Attributes** Use the DebuggerStepThrough and DebuggerHidden attributes to control how the debugger steps through async methods. ``` [DebuggerStepThrough] public async Task<int> FetchDataWithDebuggerStepThroughAsync() { await Task.Delay(1000); return 42; } [DebuggerHidden] public async Task<int> FetchDataWithDebuggerHiddenAsync() { await Task.Delay(1000); return 42; } ``` **Use Logging** Implement logging to track the flow of asynchronous operations and capture exceptions. 
``` public async Task<int> FetchDataWithLoggingAsync(ILogger logger) { logger.LogInformation("Fetching data..."); try { await Task.Delay(1000); logger.LogInformation("Data fetched successfully."); return 42; } catch (Exception ex) { logger.LogError($"An error occurred: {ex.Message}"); throw; } } public async Task MainWithLoggingAsync(ILogger logger) { int result = await FetchDataWithLoggingAsync(logger); Console.WriteLine(result); } ``` ## Conclusion Asynchronous programming with async and await in C# is a powerful tool to write responsive, scalable applications. Asynchronous programming requires knowing async/await internals, following best practices for error handling and optimizing performance. Using techniques like using ValueTask, avoiding unnecessary async and running tasks in parallel can improve the performance and reliability of your applications. Also, debugging and logging practices are important to maintain and troubleshoot asynchronous code.
tapeshm
1,909,915
SSRF Vulnerability in HiTranslate: A Technical Breakdown
Server-side request Forgery (SSRF) is a security vulnerability that allows an attacker to induce the...
0
2024-07-03T09:14:48
https://dev.to/tecno-security/ssrf-vulnerability-in-hitranslate-a-technical-breakdown-1dpm
security, tecno, cybersecurity, bounty
Server-Side Request Forgery (SSRF) is a security vulnerability that allows an attacker to induce the server-side application to make HTTP requests to an arbitrary domain chosen by the attacker. This article details the discovery, exploitation, and mitigation of an SSRF vulnerability in the HiTranslate application, a popular app used to translate text between different languages. **1. Detecting SSRF Vulnerabilities** Security researchers can employ various methods to detect SSRF vulnerabilities during security assessments: **① Fuzzing URL Parameters** Utilize automated tools to fuzz URL parameters with different payloads to identify potential SSRF points. **② Monitoring Outbound Requests** Monitor outbound network requests made by the application for unusual or unauthorized destinations. **③ Testing with Collaborator Services** Use services like Burp Collaborator to track and confirm whether external requests are being made by the application. **④ Reviewing Source Code** Perform code reviews to identify unvalidated URL inputs or improper handling of external requests. **2. Preventing SSRF Vulnerabilities** To effectively prevent SSRF vulnerabilities, several best practices and mitigation strategies should be implemented: **① Input Validation** - Allowlist Approach: Implement strict allowlisting of acceptable domains. Only permit URLs that are known and trusted. - Denylist Approach: Use a denylist to block known malicious domains, though this is less effective due to the ease of bypassing with new domains. **② Network Segmentation** Segregate internal and external network resources to minimize the risk of SSRF attacks accessing sensitive internal services. **③ Metadata Service Protection** Restrict access to cloud metadata services. Many cloud providers offer configuration options to disable or limit metadata service access from instances. **④ Proxy Configuration** - Ensure the proxy only forwards requests to a restricted set of domains. 
- Avoid resolving custom domains to internal IP addresses by verifying that resolved IPs belong to trusted networks. **⑤ Use Web Application Firewalls (WAFs)** Implement WAFs to detect and block malicious traffic patterns indicative of SSRF attacks. **⑥ Regular Security Audits and Penetration Testing** Conduct regular security audits and penetration testing to identify and mitigate potential vulnerabilities before they can be exploited. For the full discovery walkthrough, see [SSRF Vulnerability in HiTranslate: A Technical Breakdown](https://security.tecno.com/SRC/blogdetail/271?lang=en_US).
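To make the allowlist approach above concrete, here is a small sketch of my own (the domain names are hypothetical, and this is not code from the HiTranslate report): validate the scheme, check the hostname against an allowlist, and reject hosts whose resolved addresses fall in private or otherwise non-public ranges.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hypothetical trusted domains -- replace with your own
ALLOWED_HOSTS = {"api.example.com", "cdn.example.com"}

def is_safe_url(url: str) -> bool:
    """Return True only for http(s) URLs on allowlisted hosts resolving to public IPs."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host not in ALLOWED_HOSTS:
        return False
    try:
        # Check every address the host resolves to, not just the first
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True
```

Checking the resolved addresses rather than the URL string matters: an attacker-controlled domain can resolve to 169.254.169.254 or 127.0.0.1, which a purely string-based denylist would miss.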
tecno-security
1,906,960
Understanding LINQ While Writing Your Own
In this article, we will learn what LINQ is and how it works behind the scenes while writing your own...
0
2024-07-03T09:10:31
https://dev.to/rasulhsn/understanding-linq-while-writing-your-own-klj
dotnet, linq, csharp
In this article, we will learn what LINQ is and how it works behind the scenes while writing your own LINQ methods. LINQ stands for Language-Integrated Query. It is one of the most powerful tools in the .NET platform, providing an abstract way of querying data from various data sources. In other words, it abstracts us away from concrete data-source dependencies. Every data source has its own language and format. But with LINQ in your arsenal, you don't need to talk to the data source using its language. You use a central language (LINQ in this case) and it acts like an API Gateway to different data sources. LINQ supports common query methods that can be used with various data sources like runtime objects, relational databases, and XML. A small note before getting started: we will focus on LINQ to Objects, and will talk about other LINQ forms in our next articles. ## Language-Integrated Query LINQ lives in System.Linq.dll and is a query language (library + syntax). Its main purpose is to provide a consistent, declarative, and type-safe way to query and manipulate data across various sources. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/moboofg7c35co7ibedu2.png) The image above shows how LINQ works within the .NET framework with various data sources. - **LINQ to Objects** lets you query collections like arrays and lists in memory using LINQ syntax. It offers powerful filtering, ordering, and grouping capabilities directly on these collections. - **LINQ to Entities** is used to query databases through the Entity Framework, an ORM (Object-Relational Mapper). It allows you to write queries in C# instead of SQL, translating them into SQL for database execution. - **LINQ to SQL (old)** is a component that provides a runtime infrastructure for managing relational data as objects. It translates LINQ queries into SQL queries to interact directly with the SQL Server database. 
- **LINQ to DataSet** enables querying and manipulating data stored in ADO.NET DataSets. It is useful for working with disconnected data that is retrieved from databases and stored in memory. - **LINQ to XML** provides the ability to query and manipulate XML data using LINQ. It simplifies working with XML documents by offering a more readable and concise way to handle XML data. Built-in LINQ provides the most common methods for querying data sources. - **Where**: Filters elements based on a predicate. - **Select**: Projects each element into a new form. - **OrderBy**: Sorts elements in ascending/descending order. - **GroupBy**: Groups elements that share a common attribute. - **Join**: Joins two sequences based on a key. To view the full list, see the [Enumerable class reference](https://learn.microsoft.com/en-us/dotnet/api/system.linq.enumerable?view=net-8.0). LINQ is easier than you might think, because it offers two ways of querying data sources. - **Query syntax or language-level syntax** — this is often called "SQL-like syntax", and is a declarative way to write LINQ queries. ```csharp List<int> numbers = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }; var evenNumbers = from num in numbers where num % 2 == 0 select num; foreach (var num in evenNumbers) { Console.WriteLine(num); } ``` - **Method syntax** — also known as "fluent syntax" or "method chaining". This syntax is often more concise and can be more powerful when dealing with complex queries. ```csharp List<int> numbers = new List<int> { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }; var evenNumbers = numbers.Where(num => num % 2 == 0); foreach (var num in evenNumbers) { Console.WriteLine(num); } ``` The design of the LINQ library rests on iterating over elements and on executing methods at different times. 
Everything starts with the IEnumerable and IEnumerator interfaces; ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/swhjne8qi3jjpo3objsj.png) ```csharp public interface IEnumerable { IEnumerator GetEnumerator(); } public interface IEnumerator { object Current { get; } bool MoveNext(); void Reset(); } ``` These interfaces embody the iterator pattern, which encapsulates the iteration process for all data sources; in LINQ to Objects specifically, the well-known collections implement these interfaces. They also provide the contract that LINQ builds on. The LINQ methods are implemented as extension methods. The trick is that they are extensible method by method, and not coupled to each other. But everything starts with method syntax and its execution. Let's start with one of the most popular methods of LINQ, which is **Where**. ```csharp public static IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate) { if (source == null) { ThrowHelper.ThrowArgumentNullException(ExceptionArgument.source); } if (predicate == null) { ThrowHelper.ThrowArgumentNullException(ExceptionArgument.predicate); } if (source is Iterator<TSource> iterator) { return iterator.Where(predicate); } if (source is TSource[] array) { return array.Length == 0 ? Empty<TSource>() : new WhereArrayIterator<TSource>(array, predicate); } if (source is List<TSource> list) { return new WhereListIterator<TSource>(list, predicate); } return new WhereEnumerableIterator<TSource>(source, predicate); } ``` As you can see, the source code above shows the `Where` extension method encapsulating the source and the predicate. As described before, the method filters elements with custom conditions. It acts as a higher-order function, accepting a delegate that hides the custom part. Now we come to the critical part: why does the method use the **WhereEnumerableIterator** class? 
Because of lazy evaluation: when building a method chain (shown below), it avoids executing the query immediately, optimizing memory and performance. The query runs only when its results are needed. On the other hand, there are also some methods that execute immediately, like `ToList`, `ToArray`, `Single`, etc. A sample of lazy evaluation with the fluent API: ```csharp List<string> strList = new List<string>() { "Rasul", "Huseynov", "LINQ", "Q123" }; strList.Where(x => x.Contains("Q")) .Select(x => x.Substring(0, 2)) .OrderBy(x => x); ``` Note: other LINQ providers work through **IQueryable** and related interfaces, which we will cover in another article. ## Writing Your Own Methods In this section, we will extend LINQ with our own method. As I mentioned before, everything starts with IEnumerable, so in our case, we want to implement a general filter that picks the first two elements matching a condition. It is a simple example, but the idea is to understand how to extend LINQ by yourself. We will use method chaining, and to keep the pipeline lazy we need to implement our method with deferred execution. Let's get started! Our custom extension is similar to "**Where**", but it has an item-count rule. That means that when iterating over the data we will take the first two items that match the given condition. 
```csharp public static class CustomEnumerableExtensions { public static IEnumerable<TSource> FirstTwoItems<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate) { if (source == null) throw new ArgumentNullException(nameof(source)); if (predicate == null) throw new ArgumentNullException(nameof(predicate)); return new FirstTwoItemsEnumerable<TSource>(source, predicate); } private sealed class FirstTwoItemsEnumerable<TSource> : IEnumerable<TSource>, IEnumerator<TSource> { private int _itemCount; private int _state; private TSource _current; private readonly IEnumerable<TSource> _source; private readonly Func<TSource, bool> _predicate; private IEnumerator<TSource>? _enumerator; public FirstTwoItemsEnumerable(IEnumerable<TSource> source, Func<TSource, bool> predicate) { _source = source; _predicate = predicate; } public TSource Current { get { return _current; } } object IEnumerator.Current { get { return _current; } } public void Dispose() { if (_enumerator != null) { _enumerator.Dispose(); _enumerator = null; } _current = default; _state = -1; _itemCount = 0; } public IEnumerator<TSource> GetEnumerator() { var instance = new FirstTwoItemsEnumerable<TSource>(this._source, this._predicate); instance._state = 1; return instance; } IEnumerator IEnumerable.GetEnumerator() { var instance = new FirstTwoItemsEnumerable<TSource>(this._source, this._predicate); instance._state = 1; return instance; } public bool MoveNext() { switch (_state) { case 1: _enumerator = _source.GetEnumerator(); _state = 2; goto case 2; case 2: while (_enumerator.MoveNext()) { if (_itemCount == 2) break; TSource item = _enumerator.Current; if (_predicate(item)) { _itemCount++; _current = item; return true; } } Dispose(); break; } return false; } public void Reset() { if (_enumerator != null) { _enumerator.Dispose(); _enumerator = null; } _current = default; _state = 1; _itemCount = 0; } } } ``` So, for it to be lazy loading, there is a **FirstTwoItemsEnumerable** wrapper class for 
managing state. The implementation follows the standard iterator pattern and behaves like the built-in extensions. With this concept, you can write your own methods and use them wherever LINQ to Objects is needed. Sample usage: ```csharp List<string> strList = new List<string>() { "rasul", "huseynov", "turkey", "usa" }; var result = strList.FirstTwoItems(x => x.Contains("u")).Select(x => x.ToUpper()); foreach (var item in result) { Console.WriteLine(item); } ``` ## Conclusion LINQ (Language-Integrated Query) is a powerful library of the .NET framework that enables developers to query data in a more intuitive and readable way. By understanding how LINQ works under the hood and learning to write your own LINQ methods, you can harness its full potential to make your code cleaner, more efficient, and easier to maintain. Through this explanation, we have seen how LINQ makes working with data easier and helps us write clear, simple code. By making your own LINQ methods, you can learn how LINQ works and create solutions that fit your needs. Stay tuned!
rasulhsn
1,909,913
Automating User Creation: A Streamlined Approach
Managing user accounts can be a time-consuming task, especially when dealing with frequent...
0
2024-07-03T09:09:47
https://dev.to/udoh_deborah_b1e484c474bf/automating-user-creation-a-streamlined-approach-p90
webdev, devops, hng, bashscript
Managing user accounts can be a time-consuming task, especially when dealing with frequent onboarding. On my stage one task with https://hng.tech/internship, I took a deep dive into automating user creation. This guide introduces a Bash script, `create_users.sh`, that automates user creation and management based on a text file. - The Script's Purpose `create_users.sh` aims to automate user account creation on Linux systems. It reads a user data file containing usernames and associated groups. The script then performs a series of actions to ensure each user is set up correctly with appropriate permissions and group memberships. ``` #!/bin/bash # Log file location LOGFILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" # Check if the input file is provided if [ -z "$1" ]; then echo "Error: No file was provided" echo "Usage: $0 <name-of-text-file>" exit 1 fi # Create log and password files mkdir -p /var/secure touch $LOGFILE $PASSWORD_FILE chmod 600 $PASSWORD_FILE generate_random_password() { local length=${1:-10} # Default length is 10 if no argument is provided LC_ALL=C tr -dc 'A-Za-z0-9!?%+=' < /dev/urandom | head -c $length } # Function to create a user create_user() { local username=$1 local groups=$2 if getent passwd "$username" > /dev/null; then echo "User $username already exists" | tee -a $LOGFILE else useradd -m $username echo "Created user $username" | tee -a $LOGFILE fi # Add user to specified groups groups_array=($(echo $groups | tr "," "\n")) for group in "${groups_array[@]}"; do if ! 
getent group "$group" >/dev/null; then groupadd "$group" echo "Created group $group" | tee -a $LOGFILE fi usermod -aG "$group" "$username" echo "Added user $username to group $group" | tee -a $LOGFILE done # Set up home directory permissions chmod 700 /home/$username chown $username:$username /home/$username echo "Set up home directory for user $username" | tee -a $LOGFILE # Generate a random password password=$(generate_random_password 12) echo "$username:$password" | chpasswd echo "$username,$password" >> $PASSWORD_FILE echo "Set password for user $username" | tee -a $LOGFILE } # Read the input file and create users while IFS=';' read -r username groups; do create_user "$username" "$groups" done < "$1" echo "User creation process completed." | tee -a $LOGFILE ``` **Step-by-Step Breakdown** 1. **Creating the Script:** * Use `touch create_users.sh` to create the script file. * Make the script executable with `chmod +x create_users.sh`. 2. **Input File Check:** * The script checks if you provided a user data file containing user and group information. This prevents errors and ensures proper usage. * Create a sample data file (e.g., `user_data.txt`) using `sudo nano user_data.txt`. 3. **Key Script Components:** * The script defines essential variables like `LOGFILE` and `PASSWORD_FILE` to manage file paths throughout the script. This improves readability and simplifies maintenance. 4. **Security Measures:** * Prioritizing security, the script creates necessary directories (if missing) and initializes a password file (`/var/secure/user_passwords.csv`) with strict permissions (chmod 600). This restricts access to sensitive password information. 5. **Modular Functions:** * The script defines functions for better organization: * `generate_random_password()`: Draws on `/dev/urandom` (via `tr`) to generate strong, random passwords. * `create_user()`: Creates the user, manages group membership, and logs each action (via `tee -a`) with enough detail for troubleshooting and auditing. 6. 
**Processing the Input File:** * The script reads each line in the user data file, parses usernames and groups, and performs actions for each user: * Checks for existing users to avoid duplicates. * Creates the user with their primary group and a secure home directory (if the user doesn't exist). * Generates a random password stored securely in the password file. * Creates additional groups (if needed) and adds the user to those groups. 7. **Script Completion:** * Upon successful user creation, the script logs a message and prompts you to review the log file for details. **Important Considerations** * **Password Security:** The script generates strong random passwords from `/dev/urandom` and stores them securely with restricted permissions. * **Detailed Logging:** Logging aids in troubleshooting and provides an audit trail for accountability. * **Error Handling:** The script anticipates potential issues (missing files, existing users) and handles them gracefully to avoid disruptions. * **Modular Functions:** Functions promote code reuse and maintainability. * **Group Management:** The script dynamically manages groups, ensuring proper user access control. **Real-World Application** This script can be valuable in various scenarios, such as: * **Efficient User Provisioning:** During project expansions, the script can streamline user creation, reducing manual effort. * **Enhanced Security:** Secure password generation and storage practices improve overall system security. *Learn more about the HNG community on https://hng.tech/premium
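The script's `while IFS=';' read -r username groups` loop implies an input format of `username;group1,group2` per line, which the walkthrough never shows explicitly. As a sketch (the usernames here are invented purely for illustration), a sample `user_data.txt` can be created like this:

```sh
# Each line: username;comma-separated-groups (matching the script's IFS=';' parsing)
cat > user_data.txt <<'EOF'
light;sudo,dev,www-data
idimma;sudo
mayowa;dev,www-data
EOF

cat user_data.txt
```

Then run `sudo bash create_users.sh user_data.txt` — root is required because the script calls `useradd` and `groupadd` and writes under `/var/secure` and `/var/log`.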
udoh_deborah_b1e484c474bf
1,909,911
Alight Motion vs. Other Video Editing Apps: A Comparison
When it comes to video editing on mobile devices, Alight Motion stands out as a powerful and...
0
2024-07-03T09:04:23
https://dev.to/mark_3793ebf46aad4787c3ba/alight-motion-vs-other-video-editing-apps-a-comparison-38eo
productivity
When it comes to video editing on mobile devices, Alight Motion stands out as a powerful and versatile tool. However, it's important to compare it with other popular video editing apps to understand its strengths and weaknesses. Here's a detailed comparison of Alight Motion with some other well-known video editing apps: 1. Alight Motion Pros: Keyframe Animation: A standout feature of [Alight Motion Pro APK without watermark](https://motionalightapk.com/), allowing for precise control over animations and movements. Vector Graphics: Supports vector graphics, enabling scalable and high-quality visuals. Effects Library: Extensive library of visual effects and color correction tools. Multi-Layer Support: Ability to work with multiple layers of video, audio, and graphics. User-Friendly Interface: Intuitive design that's accessible for beginners yet powerful enough for advanced users. Cons: Learning Curve: While powerful, some features have a steep learning curve. Performance Issues: May experience lag or crashes on older devices. 2. Adobe Premiere Rush Pros: Cross-Platform: Syncs projects across devices, including desktop and mobile. Integration with Adobe Suite: Seamlessly integrates with other Adobe products like Photoshop and After Effects. Easy to Use: Simple interface that is great for quick edits. Cons: Subscription-Based: Requires an Adobe Creative Cloud subscription for full features. Limited Advanced Features: Not as feature-rich as Alight Motion for complex animations. 3. KineMaster Pros: User-Friendly: Intuitive interface that's easy to navigate. Real-Time Recording: Allows for real-time recording and editing. Rich Feature Set: Offers a wide range of transitions, effects, and audio enhancements. Cons: Watermark in Free Version: Free version includes a watermark on exported videos. Subscription for Full Features: Requires a subscription to unlock all features and remove the watermark. 4. 
LumaFusion Pros: Professional-Grade Editing: Offers advanced editing tools comparable to desktop software. Multi-Track Editing: Supports multiple video and audio tracks. High-Quality Export Options: Provides various export settings, including 4K resolution. Cons: iOS Exclusive: Only available for iOS devices. Steeper Price: One-time purchase cost is higher compared to other apps. 5. InShot Pros: Easy to Use: Very user-friendly interface suitable for beginners. Social Media Integration: Optimized for creating content for social media platforms. Affordable: Offers many features for free, with affordable in-app purchases for additional tools. Cons: Limited Advanced Features: Not suitable for complex editing projects. Watermark in Free Version: Free version includes a watermark on exported videos. Conclusion Choosing the right video editing app depends on your specific needs and the complexity of your projects. Alight Motion excels in keyframe animation and vector graphics, making it ideal for users looking for advanced animation capabilities. Adobe Premiere Rush is perfect for those already integrated into the Adobe ecosystem. KineMaster offers a great balance of features and usability. LumaFusion is the go-to choice for professional-grade mobile editing. InShot is excellent for quick and easy social media content creation. Each app has its unique strengths, so consider what features are most important to you and choose accordingly. Happy editing!
mark_3793ebf46aad4787c3ba
1,864,678
Optimizing Kubernetes Deployments: Tips and Tricks
Kubernetes has become the de facto standard for container orchestration, providing a powerful...
27,507
2024-07-03T09:04:00
https://dev.to/idsulik/optimizing-kubernetes-deployments-tips-and-tricks-5a13
kubernetes, tips, tricks, optimization
[Kubernetes](https://kubernetes.io/) has become the de facto standard for container orchestration, providing a powerful platform for deploying, scaling, and managing containerized applications. However, optimizing Kubernetes deployments can be challenging due to the complexity of the system and the wide array of configuration options available. In this article, we'll explore essential tips and tricks to help you optimize your Kubernetes deployments for better performance, reliability, and cost-efficiency. ## 1. Efficient Resource Management ### Set Resource Requests and Limits Setting [resource requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) for your containers ensures that they have the necessary CPU and memory to function properly without over-consuming cluster resources. This helps in avoiding resource contention and ensures fair resource distribution among pods. ```yaml resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" ``` ### Use Horizontal Pod Autoscaler [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) (HPA) automatically scales the number of pods based on observed CPU utilization or other custom metrics. This ensures that your application can handle varying loads efficiently. ```yaml apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hpa-demo-deployment spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: hpa-demo-deployment minReplicas: 2 maxReplicas: 5 targetCPUUtilizationPercentage: 60 ``` ## 2. Optimize Scheduling ### Use Node Selectors and Affinity/Anti-affinity Rules [Node selectors](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) and affinity/anti-affinity rules help control the placement of pods on nodes, ensuring that workloads are appropriately distributed and can leverage node-specific features. 
```yaml spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/e2e-az-name operator: In values: - e2e-az1 - e2e-az2 ``` ### Taints and Tolerations [Taints and tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) allow you to ensure that specific pods are scheduled on appropriate nodes, avoiding nodes with limited resources or special workloads. ```yaml spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" ``` ## 3. Enhance Application Resilience ### Use Readiness and Liveness Probes [Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) help Kubernetes determine the health of your applications, enabling it to restart containers that are unhealthy and ensuring that traffic is only routed to healthy pods. ```yaml readinessProbe: httpGet: path: /healthz port: 8080 initialDelaySeconds: 3 periodSeconds: 10 livenessProbe: httpGet: path: /live port: 8080 initialDelaySeconds: 3 periodSeconds: 10 ``` ## 4. Optimize Storage ### Use Persistent Volumes and Persistent Volume Claims [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) (PVs) and [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) (PVCs) provide a way to manage storage resources in Kubernetes, ensuring data persistence across pod restarts. ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: my-pvc spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi ``` ### Leverage Storage Classes Storage classes define different types of storage (e.g., SSDs, HDDs) that can be dynamically provisioned. This allows you to optimize storage based on the performance requirements of your workloads. 
```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: fast provisioner: kubernetes.io/gce-pd parameters: type: pd-ssd ```
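A workload consumes such a class by naming it in a claim. A minimal sketch, assuming the `fast` StorageClass defined above exists in the cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast   # triggers dynamic provisioning from the "fast" class
  resources:
    requests:
      storage: 10Gi
```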
idsulik
1,909,910
bartowski/Phi-3.1-mini-4k-instruct-GGUF-torrent
https://aitorrent.zerroug.de/bartowski-phi-3-1-mini-4k-instruct-gguf-torrent/
0
2024-07-03T09:01:19
https://dev.to/octobreak/bartowskiphi-31-mini-4k-instruct-gguf-torrent-o1i
ai, machinelearning, chatgpt, beginners
https://aitorrent.zerroug.de/bartowski-phi-3-1-mini-4k-instruct-gguf-torrent/
octobreak
1,910,233
Use the MGT SearchBox control in a SPFx solution
Proceeding with the appointments with the MGT (Microsoft Graph Toolkit) controls today I want to talk...
0
2024-07-04T06:42:59
https://iamguidozam.blog/2024/07/03/use-the-mgt-searchbox-control-in-a-spfx-solution/
development, mgt, spfx
--- title: Use the MGT SearchBox control in a SPFx solution published: true date: 2024-07-03 09:00:00 UTC tags: Development,MGT,SPFx canonical_url: https://iamguidozam.blog/2024/07/03/use-the-mgt-searchbox-control-in-a-spfx-solution/ --- Continuing the series on the MGT (Microsoft Graph Toolkit) controls, today I want to talk about the **SearchBox** control. * * * _I will not cover the implementation in detail, as it's not the scope of this post; if you're wondering how to achieve all the steps needed to use MGT inside SPFx, you can have a look at my [previous post](https://iamguidozam.blog/2023/12/20/use-microsoft-graph-toolkit-with-spfx-and-react/) or at the code of this [sample here](https://github.com/GuidoZam/blog-samples/tree/main/MGT/mgt-searchbox)._ * * * The **SearchBox** control displays a textbox that enables the user to search for something; it can be used for whatever you need. I've prepared a sample solution to show the possible control configurations; starting with the whole result, here are the instances of the control: ![](https://iamguidozam.blog/wp-content/uploads/2024/05/image-7.png?w=1015) The **basic usage** one just shows the control without any configuration. With the **custom search term** instance you can specify a custom search value to be set as the control's value. The **custom placeholder** instance shows how the control renders when setting a custom placeholder instead of the default placeholder string. Finally, there are two instances that show a more real-world scenario. 
The **custom debounce delay** instance shows how to handle the change of the search term after a predefined delay: ![](https://iamguidozam.blog/wp-content/uploads/2024/05/image-8.png?w=992) The **search term changed event** instance displays behavior similar to the previous control instance, but without the delay: ![](https://iamguidozam.blog/wp-content/uploads/2024/06/image-44.png?w=993) ## Show me the code To enable the use of the **SearchBox** control, first you have to import it from the MGT React package: ``` import { SearchBox } from '@microsoft/mgt-react'; ``` The basic (but not functional) configuration is the following: ``` <SearchBox></SearchBox> ``` This is just to display how the control renders without any customization. The following instance displays how to programmatically set the search term through the **searchTerm** property: ``` <SearchBox searchTerm={"custom term"}></SearchBox> ``` Another possible configuration for the UX is the **placeholder** property, where it's possible to specify a custom placeholder value to be displayed when no value has been specified: ``` <SearchBox placeholder={strings.Placeholder}> </SearchBox> ``` In the next sample you can see how to use the **debounceDelay** property; this property sets the delay in **milliseconds** before triggering the **searchTermChanged** event: ``` <SearchBox debounceDelay={2000} searchTermChanged={(e) => this.setState({ changedDebounceSearchTerm: e?.detail })}> </SearchBox> ``` Finally, the **searchTermChanged** property allows you to specify what to do when a new search text has been entered; to access the inserted text it's possible to use the **detail** property of the method argument: ``` <SearchBox searchTermChanged={(e) => this.setState({ changedSearchTerm: e?.detail})}> </SearchBox> ``` ## Conclusions The **SearchBox** control does not offer a lot of customization options, but it's pretty useful, especially because it's already styled with the Fluent UI style! 
Hope this helps!
guidozam
1,909,908
Restaurant Bookkeeping services India
We are specialists in restaurant bookkeeping services India. With our all-inclusive accounting...
0
2024-07-03T08:56:41
https://dev.to/globalbookkeeping/restaurant-bookkeeping-services-india-1ab4
accounting, bookkeeping, tax, payroll
We are specialists in **_[restaurant bookkeeping services India](https://globalbookkeeping.net/service/restaurant-bookkeeping)_**. With our all-inclusive accounting services for restaurants, you can ensure financial transparency and set your business up for success. Stay ahead while improving your earnings.
globalbookkeeping
1,909,906
If you are a WeChat mini-program developer, WeTest penetration testing is a must
Imagine a world where your online shopping WeChat mini-program is not only user-friendly but also...
0
2024-07-03T08:56:14
https://dev.to/wetest/if-you-are-a-wechat-mini-program-developer-wetest-penetration-testing-is-a-must-2oe6
miniprogram, softwaredevelopment, webdev, javascript
![penetration testing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5d53dlnpyu16ow2muiz.png) Imagine a world where your online shopping WeChat mini-program is not only user-friendly but also secure from potential threats. In this success story, we reveal how WeTest's penetration testing service transformed a renowned company's mini-program by identifying and addressing **8 security risks**, including high-stake vulnerabilities like "**free purchase**" economic loss risk and employee privacy information leakage risk. Our security experts delivered a detailed vulnerability report and a comprehensive security reinforcement plan. The result? After regression testing, all medium and high-risk issues were resolved, reducing the overall security risk level of the mini-program to low risk. Dive in to learn more about this incredible security makeover! ## About the Customer The client is a well-known retail company with an online shopping mini program focusing on providing innovative digital solutions and personalized shopping experiences. The client realized that their mini program was facing various potential cyberattacks and data leakage risks. To protect their business and the interests of their clients, they decided to conduct a comprehensive security penetration test to evaluate the security risks and reinforcement plans of the mini program system. ## Business Pain Points - **Lack of security capabilities in internal technical personnel**: The client's in-house technical development personnel are relatively unfamiliar with security testing and do not have a deep understanding of various penetration tools and testing methods. They also lack experience and knowledge of common system and business vulnerabilities in the industry, making it difficult for them to conduct a comprehensive system penetration test on their own, which could potentially overlook security vulnerabilities. 
- **High cost of security tools and learning**: Market security tools and security policies iterate quickly, and different tools focus on different types of vulnerabilities. With the growth of the black and gray markets, various capabilities are also being updated at all times. If internal development personnel were to start learning immediately, both time and financial costs would be significant. - **Business blind spots due to self-development**: Internal employees have a good understanding of the mini program system and inherent knowledge of their business. However, this high level of understanding may lead to blind spots in detection and penetration testing. ## WeTest Solution - **Professional hacker mindset and adaptive methods**: WeTest's penetration experts conducted static and dynamic manual penetration testing on the client's mini program, focusing on general web security, server system security, service component security, program code security, business logic security, and other aspects. This aimed to obtain security risks in the mini program's data usage, user data input, storage processing, network transmission, and system environment, providing a professional and reliable basis for mini program security reinforcement. - **Customized inspection items for retail business**: WeTest leveraged its experience in retail/online shopping mini programs' business vulnerabilities and customized 92 inspection items for the client's mini program and key business processes, including baseline inspection, data validation, data transmission, authorization, authentication, and session management. - **Reverse analysis from a business development perspective**: WeTest's security team reverse-engineered the mini program and analyzed the program's business logic from a developer's perspective. This allowed them to deeply study the internal logic and implementation details of the application, discovering potential vulnerabilities and security issues. 
By analyzing the source code, they could identify potential input validation deficiencies, buffer overflows, authentication issues, etc., which might not be discovered through traditional black-box testing methods. - **Advanced attacks and vulnerability exploitation**: WeTest's security team has extensive penetration testing skills and experience and has demonstrated excellent capabilities in advanced attacks and vulnerability exploitation. By deeply understanding the internal workings and logic of the target system, they were able to develop customized attack tools and exploit code to verify the system's security, such as discovering two "0 yuan purchase" gift vulnerabilities through reverse engineering, guessing, and combining multiple security risks. - **Clear and detailed penetration test report and interpretation**: WeTest's security team provided a detailed test report and repair suggestions, and explained the principles, exploitation methods, risks, and repair suggestions for each security risk to the client through remote meetings. They are committed to helping clients improve the security of their programs and data assets. ## Business Results After a comprehensive security assessment, WeTest's penetration testing team rated the client's online shopping mini program risk level as high risk. We discovered 8 security risks in the test results: _2 high-risk, 5 medium-risk, and 1 low-risk_. - Some examples are as follows: 1. Order interface risk of free riding 2. Shopping cart interface risk of free riding 3. Bypassing front-end restrictions to add an excessive amount to the shopping cart WeTest provided corresponding solutions for the vulnerabilities in the mini-program. - Some examples are as follows: 1. 
For **Vulnerability 1**: WeTest's security team found that the shopping cart interface's parameter verification was not strict, allowing users to bypass the restriction of not being able to add gifts to the shopping cart and purchase gifts for 0 yuan, causing significant economic losses. Repair suggestion: Strengthen server-side parameter verification logic, prohibit gift IDs from being added to the shopping cart as product IDs. 2. For **Vulnerability 2**: WeTest's security team cracked the encryption method of the order interface and found that by forging data, gifts could be purchased directly for 0 yuan. Repair suggestion: Increase the complexity of the signature method, and have the order interface verify scenarios where only gifts are present and products are empty. 3. For **Vulnerability 3**: WeTest's security team discovered a crawler vulnerability by reverse engineering and forging the mini program's request token, leading to product information being crawled. Repair suggestion: Strengthen the mini program's source code to increase the difficulty of cracking or move the token generation logic to the server-side. ## Customer Testimonial > "In our developed online shopping mini program, WeTest team discovered and helped fix system vulnerabilities that could potentially lead to significant economic losses and user data leaks. We sincerely thank the professional team at WeTest for their efforts and expertise in providing important security guarantees for our system. In the future, we will continue to focus on the security of our applications, conduct regular inspections, and carry out point-to-point reinforcement." ## Conclusion Don't wait any longer to safeguard your digital assets and ensure a secure environment for your customers. Experience the power of WeTest's penetration testing service and join the ranks of satisfied clients. Click the link to get started → [WeTest - Penetration Testing](https://www.wetest.net/products/penetration-testing). 
Secure your future with WeTest Global today! ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mqz4kky0gq5ugc7a6fpd.png)
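As an illustration of the kind of server-side validation the repair suggestions call for, here is a hypothetical sketch. All IDs, names, and limits are invented for the example; this is not the client's or WeTest's actual code:

```python
# Hypothetical server-side cart check: reject gift IDs submitted as product
# IDs (Vulnerability 1) and cap quantities the front end should have limited
# (the "excessive amount" issue). Catalog contents are made up.

GIFT_IDS = {"gift-001", "gift-002"}        # assumed gift catalog
PRODUCT_PRICES = {"prod-100": 1999}        # assumed product catalog (price in cents)

def validate_cart_item(item_id, quantity, max_quantity=99):
    # Server-side verification must not trust front-end restrictions.
    if item_id in GIFT_IDS:
        return False, "gift IDs cannot be added to the cart as products"
    if item_id not in PRODUCT_PRICES:
        return False, "unknown product ID"
    if not (1 <= quantity <= max_quantity):
        return False, "quantity outside allowed range"
    return True, "ok"

print(validate_cart_item("gift-001", 1))
print(validate_cart_item("prod-100", 2))   # (True, 'ok')
```

The point of the sketch is that both checks happen on the server, so forged requests that bypass the mini-program's UI are still rejected.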
wetest
1,909,903
This is title
This is ordered list 1 This is ordered list 2 This is unordered list This is unordered list
0
2024-07-03T08:54:09
https://dev.to/marufhossen/this-is-title-18dd
django
1. This is ordered list 1 2. This is ordered list 2 - This is unordered list - This is unordered list
marufhossen
1,909,905
How Amazon DynamoDB Works
Amazon DynamoDB is a fully managed NoSQL database service provided by Amazon Web Services (AWS) that...
0
2024-07-03T08:54:00
https://dev.to/adish_ghimire_07ac8af0c87/how-amazom-dyanamo-db-works-25m7
amazon, amazondynanamodb, dyanamodb, fundamentalsofdynamodb
**Amazon DynamoDB** is a fully managed NoSQL database service provided by Amazon Web Services (AWS) that offers fast and predictable performance with seamless scalability. Here's a detailed overview of how DynamoDB works: ## Key Concepts - **Tables:** The primary structure that holds data in DynamoDB. Each table is a collection of items, and each item is a collection of attributes. - **Items:** Similar to rows in a relational database, items are the individual records in a table. Each item is uniquely identified by a primary key. - **Attributes:** The data elements that compose an item, analogous to columns in a relational database. ## Primary Key - **Partition Key:** A single-attribute primary key, where the value is hashed to determine the partition where the item is stored. - **Composite Key:** Consists of a partition key and a sort key. The partition key determines the partition, and the sort key allows multiple items with the same partition key to be stored together and queried in sorted order. ## Data Model - **Schema-less:** Unlike traditional databases, DynamoDB does not require a predefined schema. Each item in a table can have a different number of attributes. - **Data Types:** Supports scalar types (e.g., String, Number, Binary), document types (e.g., List, Map), and sets (e.g., String Set, Number Set). ## Read and Write Operations - **Read operations:** `GetItem` retrieves a single item by primary key; `Query` retrieves multiple items using the partition key and an optional sort key condition; `Scan` retrieves all items in a table, with optional filtering. - **Write operations:** `PutItem` creates a new item or replaces an existing item; `UpdateItem` modifies one or more attributes of an existing item; `DeleteItem` deletes an item by primary key. ## Consistency Models - **Eventually consistent reads:** Return data that might not reflect the results of a recently completed write operation. This is the default and offers higher throughput. - **Strongly consistent reads:** Return the most up-to-date data, reflecting all writes that received a successful response prior to the read. ## Scaling and Performance - **Provisioned capacity:** Allows specifying the number of read and write capacity units for a table. - **On-demand capacity:** Automatically scales to accommodate workloads without the need to specify capacity. - **Auto scaling:** Adjusts the provisioned throughput automatically based on traffic patterns. ## Indexing Secondary indexes provide more flexible querying capabilities: - **Global Secondary Index (GSI):** Allows querying on non-primary-key attributes; consists of its own partition key and an optional sort key. - **Local Secondary Index (LSI):** Allows querying on non-primary-key attributes within the same partition key; consists of the same partition key but a different sort key. ## Transactions - **ACID transactions:** Support atomicity, consistency, isolation, and durability across multiple items and tables, allowing operations to be grouped and executed together to ensure data integrity. ## Streams and Triggers - **DynamoDB Streams:** Captures data modification events (e.g., inserts, updates, deletes) in a table; can be used to trigger AWS Lambda functions or replicate data to other services. ## Security and Access Control - **IAM policies:** Control access to DynamoDB resources using AWS Identity and Access Management (IAM). - **Encryption:** Encrypts data at rest using AWS Key Management Service (KMS); also supports TLS for data in transit. ## Integration with Other AWS Services - **AWS Lambda:** Can be triggered by DynamoDB Streams for real-time processing. - **Amazon Redshift:** Can import data for complex analytics. - **AWS Glue:** Provides ETL (Extract, Transform, Load) capabilities to move data between DynamoDB and other data stores. ## Use Cases - **Web applications:** High-traffic web applications requiring low-latency data access. - **IoT applications:** Storing and querying large volumes of time-series data. - **Gaming applications:** Leaderboards, player data, and game state management. - **Mobile applications:** Offline data synchronization and real-time data access. By understanding these concepts and features, you can leverage DynamoDB to build scalable and high-performance applications on AWS. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u4o2n0t1o494svxx5d2z.jpg)
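As a rough mental model of the composite-key layout described above (not DynamoDB's actual implementation), a table can be pictured as a mapping from partition key to items ordered by sort key; the class and example data below are invented for illustration:

```python
from collections import defaultdict

class MiniTable:
    """Toy in-memory model of a DynamoDB composite-key table (illustrative only)."""

    def __init__(self):
        self._partitions = defaultdict(dict)  # partition key -> {sort key: item}

    def put_item(self, pk, sk, **attributes):
        # PutItem: create or replace; items are schema-less beyond the key.
        self._partitions[pk][sk] = {"pk": pk, "sk": sk, **attributes}

    def get_item(self, pk, sk):
        # GetItem: a single item, addressed by the full primary key.
        return self._partitions.get(pk, {}).get(sk)

    def query(self, pk):
        # Query: all items sharing a partition key, returned in sort-key order.
        return [self._partitions[pk][sk] for sk in sorted(self._partitions[pk])]

orders = MiniTable()
orders.put_item("user#1", "2024-07-01", total=30)
orders.put_item("user#1", "2024-06-15", total=12)
orders.put_item("user#2", "2024-07-02", total=7)
print([o["sk"] for o in orders.query("user#1")])  # ['2024-06-15', '2024-07-01']
```

The sort-key ordering inside one partition is what makes range queries (e.g., "all orders for `user#1` in June") cheap in DynamoDB.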
adish_ghimire_07ac8af0c87
1,909,902
Master Digital Marketing with Top-notch Courses in Lucknow
Mastering the art of digital marketing is essential for individuals and businesses alike. Whether...
0
2024-07-03T08:51:35
https://dev.to/digitalearnseo/master-digital-marketing-with-top-notch-courses-in-lucknow-3g98
digitalmarketingcourse, onlinemarketing, digitalmarketingtraining, masterindigitalmarketing
Mastering the art of digital marketing is essential for individuals and businesses alike. Whether you're a beginner or an experienced marketer, digital marketing courses in Lucknow cater to all levels of expertise. They cover many topics to ensure you receive a holistic understanding of the field. So, are you in Lucknow and eager to enhance your digital marketing skills? Look no further! In this blog, we'll help you find a comprehensive digital marketing academy designed to equip you with the knowledge and tools needed to excel in the online landscape. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uqukorjbg2x4mbq3034s.jpg) ## Unparalleled Digital Marketing Courses in Lucknow When you enroll in a [digital marketing course in Lucknow](https://digitalearn.in/course/digital-marketing-training/), you're embarking on a journey toward success. By mastering these skills, you'll be well-equipped to navigate the dynamic digital landscape confidently. Whether you're a student, entrepreneur, or working professional, digital marketing courses in Lucknow are tailored to fit your busy schedule. [DigitaLearn Academy in Lucknow](https://digitalearn.in/) equips individuals with the advanced knowledge and skills required to excel in the Information Technology sector. Our primary focus is to empower students with practical insights and industry-level expertise, enabling them to thrive and significantly impact the ever-evolving IT landscape. ## Why choose DigitaLearn Academy for digital marketing? Opting for DigitaLearn can help you acquire industry-driven expertise and practical skills, propelling your IT career forward. It offers distinct advantages, including: - The comprehensive curriculum includes in-depth modules on search engine optimization (SEO), social media marketing, pay-per-click (PPC) advertising, content marketing, and more. 
- We are led by experienced instructors with real-world expertise, ensuring you receive practical insights into your digital marketing endeavors. - Our institute also offers flexible class timings, allowing you to balance your learning journey with your other commitments. Join our vibrant community of learners today! - We are proud to be affiliated with accredited companies for seamless placement and internship opportunities. Secure your future with hands-on experience in our esteemed organizations. - With modern collaborative spaces, we provide students with a dynamic setting to enhance their skills. Our well-equipped facilities ensure a comprehensive learning experience. ## Our Affordable Digital Marketing Academy in Lucknow We understand the importance of accessible education, which is why our [digital marketing course in Lucknow](https://digitalearn.in/blog/digital-marketing-courses-in-Lucknow) is comprehensive and affordable. Refrain from letting financial constraints hold you back from acquiring valuable skills. Explore our digital marketing course in Lucknow fees and payment options to find a plan that suits your budget. Learn from professionals who have successfully executed digital marketing campaigns, optimized websites for search engines, and crafted compelling content that engages audiences. We also believe in learning by doing. Our digital marketing course covers every aspect of digital marketing, from the basics to the most advanced strategies. It includes hands-on projects and practical assignments that allow you to apply the concepts you've learned in real-world scenarios. So, gain valuable experience and build a portfolio that showcases your skills to potential employers or clients. ## The Final Words: So, join the digital revolution today because there's no better time to equip yourself with digital marketing prowess. Enroll in our digital marketing courses in Lucknow at DigitaLearn and stay ahead in this ever-evolving landscape.
digitalearnseo
1,909,900
Building .NET MAUI Barcode Scanner with Visual Studio Code on macOS
Recently, Dynamsoft released a new .NET MAUI Barcode SDK for building barcode scanning applications...
0
2024-07-03T08:48:23
https://www.dynamsoft.com/codepool/dotnet-maui-barcode-sdk-tutorial.html
dotnet, maui, csharp, android
Recently, Dynamsoft released a new **.NET MAUI Barcode SDK** for building barcode scanning applications on **Android** and **iOS**. In this tutorial, we will use **Visual Studio Code** to create a .NET MAUI Barcode Scanner from scratch. Our application will decode barcodes and QR codes from both image files and camera video stream. <video src="https://github.com/yushulx/maui-barcode-qrcode-scanner/assets/2202306/b76aae4d-cc59-4370-a2ba-df8d46532713" controls="controls" muted="muted" style="max-height:640px;max-width:100%;"></video> ## Prerequisites To get started, you'll need to install the following tools: - [.NET 8.0 SDK](https://dotnet.microsoft.com/download/dotnet/8.0) - [Visual Studio Code](https://code.visualstudio.com/) - [.NET MAUI extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.dotnet-maui) - [C# Kit](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) - [Android SDK](https://developer.android.com/studio) - [iOS SDK](https://developer.apple.com/xcode/) For detailed installation instructions, refer to the [Microsoft tutorial](https://learn.microsoft.com/en-us/dotnet/maui/get-started/installation?view=net-maui-8.0&tabs=visual-studio-code). ### Why Not Visual Studio for Mac? Microsoft has announced the retirement of Visual Studio for Mac, with support ending on **August 31, 2024**. The new .NET MAUI extension for Visual Studio Code offers a superior development experience for cross-platform applications. ## Step 1: Scaffold a .NET MAUI Project Create a new .NET MAUI project in Visual Studio Code: 1. Open the command palette by pressing `Cmd + Shift + P` or `F1`. 2. Type `> .NET: New Project` and press `Enter`. 3. Select `.NET MAUI App` and press `Enter`. 4. Enter the project name and choose the location to save the project. To run the project on an Android device or iOS device: 1. 
Open the command palette and type `> .NET MAUI: Pick Android Device` or `> .NET MAUI: Pick iOS Device` to select a device. ![Pick Android Device](https://www.dynamsoft.com/codepool/img/2024/07/visual-studio-code-dotnet-maui-development.png) 2. Press `F5` to build and run the project. ## Step 2: Install Dependencies for Barcode Detection and Android Lifecycle Notifications To enable barcode detection and handle Android lifecycle notifications, install the following NuGet packages: - [Dynamsoft.BarcodeReaderBundle.Maui](https://www.nuget.org/packages/Dynamsoft.BarcodeReaderBundle.Maui): A .NET MAUI barcode SDK. - [CommunityToolkit.Mvvm](https://www.nuget.org/packages/CommunityToolkit.Mvvm): A messaging library for Android lifecycle notifications. - [CommunityToolkit.Maui](https://www.nuget.org/packages/CommunityToolkit.Maui): The .NET MAUI Community Toolkit, required by the `UseMauiCommunityToolkit()` call below. Run the following commands to add these packages to your project: ```bash dotnet add package Dynamsoft.BarcodeReaderBundle.Maui dotnet add package CommunityToolkit.Mvvm dotnet add package CommunityToolkit.Maui ``` Next, configure the dependencies in the `MauiProgram.cs` file: ```csharp using Microsoft.Extensions.Logging; using Dynamsoft.CameraEnhancer.Maui; using Dynamsoft.CameraEnhancer.Maui.Handlers; using CommunityToolkit.Maui; using Microsoft.Maui.LifecycleEvents; using CommunityToolkit.Mvvm.Messaging; namespace BarcodeQrScanner; public static class MauiProgram { public static MauiApp CreateMauiApp() { var builder = MauiApp.CreateBuilder(); builder .UseMauiApp<App>().UseMauiCommunityToolkit() .ConfigureFonts(fonts => { fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular"); fonts.AddFont("OpenSans-Semibold.ttf", "OpenSansSemibold"); }) .ConfigureLifecycleEvents(events => { #if ANDROID events.AddAndroid(android => android .OnResume((activity) => { NotifyPage("Resume"); }) .OnStop((activity) => { NotifyPage("Stop"); })); #endif }) .ConfigureMauiHandlers(handlers => { handlers.AddHandler(typeof(CameraView), typeof(CameraViewHandler)); }); #if DEBUG builder.Logging.AddDebug(); #endif return builder.Build(); } private static void NotifyPage(string eventName) { 
WeakReferenceMessenger.Default.Send(new LifecycleEventMessage(eventName)); } } ``` ## Step 3: Add Permission Descriptions for Android and iOS To enable the application to pick images from the gallery and access the camera, you need to add permission descriptions in the `AndroidManifest.xml` and `Info.plist` files. **Android** Add the following permissions to your `AndroidManifest.xml` file: ```xml <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" /> <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" /> <uses-permission android:name="android.permission.CAMERA" /> ``` **iOS** Add the following keys to your Info.plist file to describe why your app needs access to the photo library, camera, and microphone: ```xml <key>NSPhotoLibraryUsageDescription</key> <string>App needs access to the photo library to pick images.</string> <key>NSCameraUsageDescription</key> <string>This app is using the camera</string> <key>NSMicrophoneUsageDescription</key> <string>This app needs access to microphone for taking videos.</string> ``` By adding these permission descriptions, you ensure that your application has the necessary access to the device's camera and photo library, complying with Android and iOS security requirements. ## Step 4: Activate Dynamsoft Barcode Reader SDK To use the Dynamsoft Barcode Reader SDK, you need to activate it with a valid license key in the `MainPage.xaml.cs` file. You can obtain a [30-day free trial license](https://www.dynamsoft.com/customer/license/trialLicense/?product=dbr) from Dynamsoft. 
```csharp public partial class MainPage : ContentPage, ILicenseVerificationListener { public MainPage() { InitializeComponent(); LicenseManager.InitLicense("LICENSE-KEY", this); } public void OnLicenseVerified(bool isSuccess, string message) { if (!isSuccess) { Debug.WriteLine(message); } } } ``` ## Step 5: Add Two Buttons to the Main Page First, create a `PicturePage` for decoding barcodes from image files and a `CameraPage` for scanning barcodes from the camera video stream. Then, add two buttons to the main page: one for picking an image from the gallery and navigating to `PicturePage`, and another for requesting camera permissions and navigating to `CameraPage`. **MainPage.xaml** ```xml <?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" x:Class="BarcodeQrScanner.MainPage" > <ScrollView> <StackLayout> <Button x:Name="takePhotoButton" Text="Image File" HorizontalOptions="Center" VerticalOptions="CenterAndExpand" Clicked="OnTakePhotoButtonClicked" /> <Button x:Name="takeVideoButton" Text="Video Stream" HorizontalOptions="Center" VerticalOptions="CenterAndExpand" Clicked="OnTakeVideoButtonClicked" /> </StackLayout> </ScrollView> </ContentPage> ``` **MainPage.xaml.cs** ```csharp async void OnTakePhotoButtonClicked(object sender, EventArgs e) { try { var result = await FilePicker.Default.PickAsync(new PickOptions { FileTypes = FilePickerFileType.Images, PickerTitle = "Please select an image" }); if (result != null) { await Navigation.PushAsync(new PicturePage(result)); } } catch (Exception ex) { // Handle exceptions if any Console.WriteLine($"An error occurred: {ex.Message}"); } } async void OnTakeVideoButtonClicked(object sender, EventArgs e) { var status = await Permissions.CheckStatusAsync<Permissions.Camera>(); if (status == PermissionStatus.Granted) { await Navigation.PushAsync(new CameraPage()); } else { status = await 
Permissions.RequestAsync<Permissions.Camera>(); if (status == PermissionStatus.Granted) { await Navigation.PushAsync(new CameraPage()); } else { await DisplayAlert("Permission needed", "I will need Camera permission for this action", "Ok"); } } } ``` ## Step 6: Read Barcodes from Image Files 1. Add an `Image` control to display the selected image and a `GraphicsView` control to overlay the barcode results. ```xml <ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" x:Class="BarcodeQrScanner.PicturePage" Title="Barcode Reader"> <Grid> <Image x:Name="PickedImage" Aspect="AspectFit" VerticalOptions="CenterAndExpand" HorizontalOptions="CenterAndExpand" SizeChanged="OnImageSizeChanged"/> <GraphicsView x:Name="OverlayGraphicsView" /> </Grid> </ContentPage> ``` Ensure the size of the `GraphicsView` matches the size of the Image control, and update the `GraphicsView` size when the `Image` control size changes. ```csharp private void OnImageSizeChanged(object sender, EventArgs e) { // Adjust the GraphicsView size to match the Image size OverlayGraphicsView.WidthRequest = PickedImage.Width; OverlayGraphicsView.HeightRequest = PickedImage.Height; } ``` 2. Get the image width and height for calculating the overlay position: ```csharp public PicturePage(FileResult result) { InitializeComponent(); LoadImageWithOverlay(result); } async private void LoadImageWithOverlay(FileResult result) { // Get the file path var filePath = result.FullPath; var stream = await result.OpenReadAsync(); float originalWidth = 0; float originalHeight = 0; try { var image = PlatformImage.FromStream(stream); originalWidth = image.Width; originalHeight = image.Height; ... } catch (Exception ex) { Console.WriteLine($"An error occurred: {ex.Message}"); } } ``` 3. 
Reset the file stream position for displaying the image: ```csharp stream.Position = 0; ImageSource imageSource = ImageSource.FromStream(() => stream); PickedImage.Source = imageSource; ``` 4. Decode barcodes from the image file: ```csharp private CaptureVisionRouter router = new CaptureVisionRouter(); CapturedResult capturedResult = router.Capture(filePath, EnumPresetTemplate.PT_READ_BARCODES); DecodedBarcodesResult? barcodeResults = null; if (capturedResult != null) { // Get the barcode results barcodeResults = capturedResult.DecodedBarcodesResult; } ``` 5. Draw the barcode results over the image: ```csharp public class ImageWithOverlayDrawable : IDrawable { private readonly DecodedBarcodesResult? _barcodeResults; private readonly float _originalWidth; private readonly float _originalHeight; private bool _isFile; public ImageWithOverlayDrawable(DecodedBarcodesResult? barcodeResults, float originalWidth, float originalHeight, bool isFile = false) { _barcodeResults = barcodeResults; _originalWidth = originalWidth; _originalHeight = originalHeight; _isFile = isFile; } public void Draw(ICanvas canvas, RectF dirtyRect) { // Calculate scaling factors float scaleX = (int)dirtyRect.Width / _originalWidth; float scaleY = (int)dirtyRect.Height / _originalHeight; // Set scaling to maintain aspect ratio float scale = Math.Min(scaleX, scaleY); canvas.StrokeColor = Colors.Red; canvas.StrokeSize = 2; canvas.FontColor = Colors.Red; if (_barcodeResults != null) { var items = _barcodeResults.Items; foreach (var item in items) { Microsoft.Maui.Graphics.Point[] points = item.Location.Points; if (_isFile){ canvas.DrawLine((float)points[0].X * scale, (float)points[0].Y * scale, (float)points[1].X * scale, (float)points[1].Y * scale); canvas.DrawLine((float)points[1].X * scale, (float)points[1].Y * scale, (float)points[2].X * scale, (float)points[2].Y * scale); canvas.DrawLine((float)points[2].X * scale, (float)points[2].Y * scale, (float)points[3].X * scale, (float)points[3].Y * 
scale); canvas.DrawLine((float)points[3].X * scale, (float)points[3].Y * scale, (float)points[0].X * scale, (float)points[0].Y * scale); } canvas.DrawString(item.Text, (float)points[0].X * scale, (float)points[0].Y * scale - 10, HorizontalAlignment.Left); } } } } var drawable = new ImageWithOverlayDrawable(barcodeResults, originalWidth, originalHeight, true); // Set drawable to GraphicsView OverlayGraphicsView.Drawable = drawable; OverlayGraphicsView.Invalidate(); ``` ![Barcode Reader](https://www.dynamsoft.com/codepool/img/2024/07/dotnet-maui-barcode-reader.jpg) ## Step 7: Scan Barcodes from Camera Video Stream 1. In the `CameraPage` layout, add a `CameraView` control to display the camera video stream and a `GraphicsView` control to overlay the barcode results on the video stream. **CameraPage.xaml** ```xml <?xml version="1.0" encoding="utf-8" ?> <ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui" xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml" xmlns:controls="clr-namespace:Dynamsoft.CameraEnhancer.Maui;assembly=Dynamsoft.CaptureVisionRouter.Maui" x:Class="BarcodeQrScanner.CameraPage" Title="Barcode Scanner"> <Grid Margin="0"> <controls:CameraView x:Name="CameraPreview" SizeChanged="OnImageSizeChanged"/> <GraphicsView x:Name="OverlayGraphicsView"/> </Grid> </ContentPage> ``` **CameraPage.xaml.cs** ```csharp private void OnImageSizeChanged(object sender, EventArgs e) { // Adjust the GraphicsView size to match the camera preview size OverlayGraphicsView.WidthRequest = CameraPreview.Width; OverlayGraphicsView.HeightRequest = CameraPreview.Height; } ``` 2. In the `CameraPage.xaml.cs` file, instantiate `CameraEnhancer` and start the camera preview. Use `WeakReferenceMessenger` to handle Android lifecycle events. 
```csharp using Dynamsoft.Core.Maui; using Dynamsoft.CaptureVisionRouter.Maui; using Dynamsoft.BarcodeReader.Maui; using Dynamsoft.CameraEnhancer.Maui; using System.Diagnostics; using CommunityToolkit.Mvvm.Messaging; public partial class CameraPage : ContentPage, ICapturedResultReceiver, ICompletionListener { private CameraEnhancer? enhancer = null; private CaptureVisionRouter router; private float previewWidth = 0; private float previewHeight = 0; public CameraPage() { InitializeComponent(); enhancer = new CameraEnhancer(); router = new CaptureVisionRouter(); router.SetInput(enhancer); router.AddResultReceiver(this); WeakReferenceMessenger.Default.Register<LifecycleEventMessage>(this, (r, message) => { if (message.EventName == "Resume") { if (this.Handler != null && enhancer != null) { enhancer.Open(); } } else if (message.EventName == "Stop") { enhancer?.Close(); } }); } protected override void OnHandlerChanged() { base.OnHandlerChanged(); if (this.Handler != null && enhancer != null) { enhancer.SetCameraView(CameraPreview); enhancer.Open(); } } protected override async void OnAppearing() { base.OnAppearing(); await Permissions.RequestAsync<Permissions.Camera>(); router?.StartCapturing(EnumPresetTemplate.PT_READ_BARCODES, this); } protected override void OnDisappearing() { base.OnDisappearing(); enhancer?.Close(); router?.StopCapturing(); } } ``` 3. Receive the barcode results via callback functions and draw them on the video stream using the `GraphicsView`. 
```csharp public void OnCapturedResultReceived(CapturedResult result) { MainThread.BeginInvokeOnMainThread(() => { var drawable = new ImageWithOverlayDrawable(null, previewWidth, previewHeight, false); // Set drawable to GraphicsView OverlayGraphicsView.Drawable = drawable; OverlayGraphicsView.Invalidate(); }); } public void OnDecodedBarcodesReceived(DecodedBarcodesResult result) { if (previewWidth == 0 && previewHeight == 0) { IntermediateResultManager manager = router.GetIntermediateResultManager(); ImageData data = manager.GetOriginalImage(result.OriginalImageHashId); // Create a drawable with the barcode results previewWidth = (float)data.Width; previewHeight = (float)data.Height; } MainThread.BeginInvokeOnMainThread(() => { var drawable = new ImageWithOverlayDrawable(result, previewWidth, previewHeight, false); // Set drawable to GraphicsView OverlayGraphicsView.Drawable = drawable; OverlayGraphicsView.Invalidate(); }); } ``` ![Barcode Scanner](https://www.dynamsoft.com/codepool/img/2024/07/dotnet-maui-qrcode-scanner.jpg) ## Known Issues **iOS Text Rendering Issue** The `canvas.DrawString` method does not work properly on **iOS**, resulting in no text being rendered on the `GraphicsView`. ## Source Code [https://github.com/yushulx/maui-barcode-qrcode-scanner](https://github.com/yushulx/maui-barcode-qrcode-scanner)
yushulx
1,909,899
Running npm install on a Server with 1GB Memory using Swap
Running npm install on a server with only 1GB of memory can be challenging due to limited RAM....
0
2024-07-03T08:46:46
https://victorleungtw.com/2024/07/03/swap/
swap, memory, server, optimization
Running `npm install` on a server with only 1GB of memory can be challenging due to limited RAM. However, by enabling swap space, you can extend the virtual memory and ensure smooth operation. This blog post will guide you through the process of creating and enabling a swap partition on your server. ![](https://victorleungtw.com/static/7ecafa93725e944a117bf45acefb6e4b/a9a89/2024-07-03.webp) #### What is Swap? Swap space is a designated area on a hard disk used to temporarily hold inactive memory pages. It acts as a virtual extension of your physical memory (RAM), allowing the system to manage memory more efficiently. When the system runs out of physical memory, it moves inactive pages to the swap space, freeing up RAM for active processes. Although swap is slower than physical memory, it can prevent out-of-memory errors and improve system stability. #### Step-by-Step Guide to Enable Swap Space 1. **Check Existing Swap Information** Before creating swap space, check if any swap is already configured: ```bash sudo swapon --show ``` 2. **Check Disk Partition Availability** Ensure you have enough disk space for the swap file. Use the `df` command: ```bash df -h ``` 3. **Create a Swap File** Allocate a 1GB swap file in the root directory using the `fallocate` program: ```bash sudo fallocate -l 1G /swapfile ``` 4. **Enable the Swap File** Secure the swap file by setting appropriate permissions: ```bash sudo chmod 600 /swapfile ``` Format the file as swap space: ```bash sudo mkswap /swapfile ``` Enable the swap file: ```bash sudo swapon /swapfile ``` 5. **Make the Swap File Permanent** To ensure the swap file is used after a reboot, add it to the `/etc/fstab` file: ```bash sudo cp /etc/fstab /etc/fstab.bak ``` Edit `/etc/fstab` to include the swap file: ```bash echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab ``` 6. **Optimize Swap Settings** Adjust the `swappiness` value to control how often the system uses swap space. 
A lower value reduces swap usage, enhancing performance. Check the current value: ```bash cat /proc/sys/vm/swappiness ``` Set the `swappiness` to 15: ```bash sudo sysctl vm.swappiness=15 ``` Make this change permanent by adding it to `/etc/sysctl.conf`: ```bash echo 'vm.swappiness=15' | sudo tee -a /etc/sysctl.conf ``` Adjust the `vfs_cache_pressure` value to balance cache retention and swap usage. Check the current value: ```bash cat /proc/sys/vm/vfs_cache_pressure ``` Set it to 60: ```bash sudo sysctl vm.vfs_cache_pressure=60 ``` Make this change permanent: ```bash echo 'vm.vfs_cache_pressure=60' | sudo tee -a /etc/sysctl.conf ``` ### Conclusion Creating and enabling swap space allows your server to handle memory-intensive operations, such as `npm install`, more efficiently. While swap is not a substitute for physical RAM, it can provide a temporary solution to memory limitations, ensuring smoother performance and preventing out-of-memory errors. By following the steps outlined above, you can optimize your server's memory management and enhance its overall stability.
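For reference, the steps above can be collected into a single sketch script. This is a hedged example, not an official procedure: the `/swapfile` path, 1G size, and sysctl values simply mirror this post, and the script defaults to a dry run that only prints the commands (set `DRY_RUN=0` and run as root to actually apply them):

```shell
#!/bin/sh
# Consolidated sketch of the swap setup steps from this post.
# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0
# and run as root to actually create and enable the swap file.
SWAPFILE="${SWAPFILE:-/swapfile}"
SWAPSIZE="${SWAPSIZE:-1G}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@"          # dry run: show the command instead of executing it
  else
    "$@"
  fi
}

setup_swap() {
  run fallocate -l "$SWAPSIZE" "$SWAPFILE"
  run chmod 600 "$SWAPFILE"
  run mkswap "$SWAPFILE"
  run swapon "$SWAPFILE"
  run sh -c "echo '$SWAPFILE none swap sw 0 0' >> /etc/fstab"
  run sysctl vm.swappiness=15
  run sysctl vm.vfs_cache_pressure=60
}

setup_swap
```

Running it with the default dry run is a safe way to review exactly what would be changed before committing.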
victorleungtw
1,909,898
My Journey in Software Engineering: A Challenge That Taught Me A Great Deal
My name is Desmond Okeke and I am currently a Software engineer with 3+ years in building full-stack...
0
2024-07-03T08:44:37
https://dev.to/desmond_okeke_80749135147/my-journey-in-software-engineering-a-challenge-that-taught-me-a-great-deal-57ab
My name is Desmond Okeke, and I am currently a software engineer with 3+ years of experience building full-stack solutions. I am proficient in front-end stacks including, but not limited to, **HTML/CSS, SCSS, JavaScript, Vue.js, Nuxt.js, Tailwindcss, and Buefy**, as well as some back-end stacks such as Python Flask, Node.js, PostgreSQL, and MySQL. I also work with a range of other programming languages, such as Java and C, and I enjoy problem-solving. When I am not on my laptop, I am practicing my violin. I was introduced to programming in my secondary school days, when Co-Creation Hub aimed to empower underprivileged teenagers with digital technology. Over the years, I grew to love programming and have gathered a great deal of experience. My journey in tech has been a continuous learning curve, because every experience teaches me something new. This unique nature of programming has kept me intrigued for over a decade. At one point in my life, something fascinating happened that made me question my ability to become the professional software engineer I had always wanted to be. I was given a task to call a third-party API to validate and transform an endpoint into a URL. I was supposed to call third-party endpoints and send authorization through the API. The task involved manipulating a third-party payload response by changing the image type from Base64 to a URL endpoint. I had to use Axios to get an authorization token. After that, I used the token to access the service and retrieve the URL location. Explaining it now might seem pretty straightforward, but it took a lot of hard work, grit, and research to figure this out. I did not just learn how to call third-party services the right way; I also got exposed to the art of problem-solving. Shying away from difficult tasks will never be an option for me, because I believe that with the right attitude and mindset, every problem has a solution. 
In the journey of continuous learning and growth, I decided to register for HNG11 using this link: [https://hng.tech/internship](https://hng.tech/internship). I believe the internship will keep me busy with projects that will sharpen my ability to solve coding problems more intelligently. I also intend to join the premium network using this link: [https://hng.tech/premium](https://hng.tech/premium). The premium package will benefit me greatly in terms of networking, job opportunities, and constant motivation to improve. At the end of this internship, I hope to have broadened my skill set and to volunteer as an HNG instructor on the backend track. Thank you
desmond_okeke_80749135147
1,909,897
Nested validation in .NET
In this blog's opening post, I discuss the problem of validating nested Data Transfer Objects in...
0
2024-07-03T08:39:29
https://ilya-chumakov.com/nested-validation-in-.net/
dotnet, csharp, dotnetcore
In this blog's opening post, I discuss the problem of validating nested Data Transfer Objects in modern .NET. Nesting simply means that the root object can reference other DTOs, which in turn can reference others and so on, potentially forming a _cyclic graph_ of unknown size. For each node in the graph, its data properties are validated against a quite typical rule set: nullability, range, length, regular expressions etc. And for DTO types, let's declare the following conventions: - It _may_ have DataAnnotation attributes, including custom ones. - It _may_ implement IValidatableObject. - It _should_ avoid third-party dependencies if possible. You may have guessed that the _graph_ is the tricky part. Indeed, a built-in [DataAnnotations.Validator](https://learn.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations.validator) doesn't do nested validation by design, and this was a default behaviour [for decades](https://stackoverflow.com/questions/2493800/how-can-i-tell-the-data-annotations-validator-to-also-validate-complex-child-pro). But the fix is trivial, right? Just implement any kind of graph traversal with cycle detection! Well, yes and no. In this post, I compare popular third-party libraries that support nested validation. Looking ahead, there is a big performance difference even among robust production-ready solutions. There are many ways to define validation rules in .NET, each with its own advantages and disadvantages. For example: - **Attributes**: explicit, useful for OpenAPI document generation. - **IValidatableObject**: more flexible yet still self-contained. - **External**: This is a jack of all trades. It leaves DTOs clean and provides maximum flexibility ([FluentValidation](https://github.com/FluentValidation/FluentValidation) is the best example of this approach). - **Manual validation**: the most naive approach, it simply has inlined `if` clauses without declaring validation rules at all. 
As a result, it gives unbeatable performance at the cost of scalability, and it doesn't apply to a graph of unknown length/topology. Later it is used as a benchmark baseline. To finish this long intro and save everyone's time, let me highlight what is _not covered_ in this article: - [ASP.NET Model Validation](https://learn.microsoft.com/en-us/aspnet/core/mvc/models/validation). Although it comes with full support for DataAnnotations attributes, it is still an inseparable part of a large and complex framework that deals with both server-side applications and Web APIs, ModelState, backward version compatibility, etc... a topic that undoubtedly deserves its own article. - `IOptions<T>` validation. Ironically, with the [arrival](https://github.com/dotnet/runtime/pull/90275) of `[ValidateObjectMembers]` and `[ValidateEnumeratedItems]` in .NET 8, `OptionsBuilder<TOptions>` now supports validation of nested options. And there are now at least 3 different validation algorithms shipped with ASP.NET. ### What is validation? Let's say we're processing a user's registration email address. What should we check? - The address should be in the correct format. This is _validation_. - The address domain should not be on our blacklist. This is a _business rule_. - The address should be unique in our database. This is a _business rule_. What is the difference? Validation is a _pure function_. It is deterministic (same input - same output) and has no side effects. That's why looking for a domain in a list is not validation: such lists are subject to change, so the check isn't deterministic. A good rule of thumb for mere enterprise developers like me: - **Validation**: self-contained (we only need the data from the DTO itself) - **Business rule**: anything that touches mutable data (database, API, file system etc.) And my advice is: don't mix them up. Validate your input before the control flow even reaches your business domain. Just like ASP.NET does with model binding. 
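As an illustration of the rule of thumb above, here is a minimal sketch separating pure validation from a business rule (the `RegistrationDto`, `RegistrationValidator`, and `IUserRepository` names are hypothetical, not from any library):

```csharp
using System;
using System.Net.Mail;
using System.Threading.Tasks;

public record RegistrationDto(string Email);

// Validation: a pure function of the DTO alone -- deterministic, no side effects.
public static class RegistrationValidator
{
    public static bool IsValidEmail(RegistrationDto dto) =>
        MailAddress.TryCreate(dto.Email, out _);
}

// Business rule: touches mutable state (the database), so it stays in the domain.
public interface IUserRepository
{
    Task<bool> EmailExistsAsync(string email);
}

public class RegistrationService
{
    private readonly IUserRepository _users;
    public RegistrationService(IUserRepository users) => _users = users;

    public async Task RegisterAsync(RegistrationDto dto)
    {
        // Validate first: fail fast before touching any scoped/transient services.
        if (!RegistrationValidator.IsValidEmail(dto))
            throw new ArgumentException("Invalid email format.");

        // Business rule afterwards: requires I/O and mutable data.
        if (await _users.EmailExistsAsync(dto.Email))
            throw new InvalidOperationException("Email already registered.");

        // ...persist the new user...
    }
}
```

The validator is trivially unit-testable with no mocks; only the business rule needs a fake repository.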
Regardless of the application architecture, in many cases you actually _want_ to fail fast on invalid/malicious input and avoid unnecessary allocation of your scoped and transient services. Then, testing: covering pure functions with tests is trivial. Well, at least it is far easier to do separately than mocking a database and a couple of APIs for an all-at-once validator. Put some effort into the quality of the data coming into your domain, and you'll get clearer and more concise domain logic. To go deeper, please read Mark Seemann's [Validation and business rules](https://blog.ploeh.dk/2023/06/26/validation-and-business-rules/) post, which discusses the topic in great detail. Let me say a few things about the libraries under consideration, and we can finally get on with the benchmarking. ### DataAnnotationsValidator Our first contender is the [DataAnnotationsValidator.NETCore](https://www.nuget.org/packages/DataAnnotationsValidator.NETCore) package. It is long dead and has performance issues, so it is **strongly not recommended**. However, this library [illustrates](https://github.com/ovation22/DataAnnotationsValidatorRecursive/blob/master/DataAnnotationsValidator/DataAnnotationsValidator/DataAnnotationsValidator.cs#L51) well the idea behind many home-made solutions: - Reflection to read metadata. - Recursive depth-first search for traversing a graph. - A hash set for cycle detection. ### MiniValidation Alive and well-designed, [MiniValidation](https://github.com/DamianEdwards/MiniValidation) offers a smooth experience in nested validation. While implementing a similar depth-first search for visiting a DTO graph, it adds [metadata caching](https://github.com/DamianEdwards/MiniValidation/blob/main/src/MiniValidation/MiniValidator.cs#L385) to the mix, resulting in much better performance. ### FluentValidation [FluentValidation](https://github.com/FluentValidation/FluentValidation) is undoubtedly the most popular third-party validation library on .NET. 
It is a robust choice if you need clean POCOs or multiple validation maps per type. However, its performance may surprise you. ### Benchmark: DataAnnotation and FluentValidation Our first benchmark is to validate a fairly typical DataAnnotation-marked DTO, containing both a single nested object and a collection of them (each is expected to be validated): ```csharp public class Parent { [Range(1, 9999)] public int Id { get; set; } [Required(AllowEmptyStrings = false)] [StringLength(12, MinimumLength = 12)] public string? Name { get; set; } [Required] public Child? Child { get; set; } [Required] public List<Child> Children { get; init; } = new(0); } public class Child : IChild { [Required] public DateTime? ChildCreatedAt { get; set; } [AllowedValues(true)] public bool ChildFlag { get; set; } } ``` Of course, FluentValidation has no use for these attributes, so its validators are created separately while repeating the same rules: ```csharp public class ParentValidator : AbstractValidator<Parent> { public ParentValidator() { RuleFor(x => x.Id).InclusiveBetween(1, 9999); RuleFor(x => x.Name).NotEmpty().Length(min: 12, max: 12); RuleFor(x => x.Child).NotNull().SetValidator(new ChildValidator()); RuleForEach(x => x.Children).NotNull().SetValidator(new ChildValidator()); } } public class ChildValidator : AbstractValidator<Child> { public ChildValidator() { RuleFor(x => x.ChildCreatedAt).NotNull(); RuleFor(x => x.ChildFlag).Equal(true); } } ``` Finally, the `Manual` benchmark uses explicit `if` checks and serves as a baseline. Each benchmark is run against _the same_ `Parent` collection. 
Here are the results, depending on the collection size: | Method | Size | Mean | Allocated | Alloc Ratio | |---------------------------- |-----: |----------:|------------:|------------:| | Manual |**100**| 3&nbsp;μs | 34&nbsp;KB | 1 | | MiniValidation |100 | 162&nbsp;μs | 427&nbsp;KB | 12 | | DataAnnotationsValidator |100 | 302&nbsp;μs | 614&nbsp;KB | 17 | | FluentValidation |100 | 314&nbsp;μs | 946&nbsp;KB | 27 | | Manual |**1000**| 33&nbsp;μs | 343&nbsp;KB | 1 | | MiniValidation |1000 | 1586&nbsp;μs | 4260&nbsp;KB | 12 | | DataAnnotationsValidator |1000 | 3084&nbsp;μs | 6150&nbsp;KB | 17 | | FluentValidation |1000 | 3300&nbsp;μs | 9586&nbsp;KB | 27 | | Manual |**10000**|342&nbsp;μs | 3437&nbsp;KB | 1 | | MiniValidation |10000 | 16237&nbsp;μs | 42619&nbsp;KB | 12 | | DataAnnotationsValidator |10000 | 31223&nbsp;μs | 61480&nbsp;KB | 17 | | FluentValidation |10000 | 32364&nbsp;μs | 95911&nbsp;KB | 27 | Well, DataAnnotationsValidator is expectedly bad, but FluentValidation... is even worse in both time and space! At first I thought there was a bug (there was not). Then I did my best to look for FluentValidation settings that might help to optimise its performance (there weren't any, except "fail fast", see below). The overall result distribution remains the same. But look at MiniValidation! The same algorithm, but optimised for performance, gives a quite impressive 2x boost over DataAnnotationsValidator. ### Benchmark: IValidatableObject As you probably know, `IValidatableObject` is an alternative to explicit DataAnnotations attributes, with all the validation logic encapsulated within DTOs. This benchmark uses the same validation rules but implemented in the `Validate` method, so it's all about traversing a graph and calling `Validate` at each node. FluentValidation is not on the list this time. ```csharp public class ChildValidatableObject : IValidatableObject { public DateTime? 
ChildCreatedAt { get; set; } public bool ChildFlag { get; set; } public IEnumerable<ValidationResult> Validate(ValidationContext validationContext) { if (ChildCreatedAt == null) { yield return new ValidationResult("foo error message #2", new[] { nameof(ChildCreatedAt) }); } if (ChildFlag == false) { yield return new ValidationResult("foo error message #3", new[] { nameof(ChildFlag) }); } } } ``` | Method | Size | Mean | Allocated | Alloc Ratio | |--------------------------------- |-----: |---------:|------------:|------------:| | 'Manual with IVO.Validate call' | **100** | 21&nbsp;μs | 109&nbsp;KB | 1.00 | | 'MiniValidation + IVO' | 100 | 59&nbsp;μs | 199&nbsp;KB | 1.82 | | 'DataAnnotationsValidator + IVO' | 100 | 151&nbsp;μs | 442&nbsp;KB | 4.04 | | 'Manual with IVO.Validate call' | **1000** | 206&nbsp;μs | 1093&nbsp;KB | 1.00 | | 'MiniValidation + IVO' | 1000 | 565&nbsp;μs | 1992&nbsp;KB | 1.82 | | 'DataAnnotationsValidator + IVO' | 1000 | 1511&nbsp;μs | 4421&nbsp;KB | 4.04 | | 'Manual with IVO.Validate call' | **10000** | 2141&nbsp;μs | 10937&nbsp;KB | 1.00 | | 'MiniValidation + IVO' | 10000 | 6608&nbsp;μs | 19921&nbsp;KB | 1.82 | | 'DataAnnotationsValidator + IVO' | 10000 | 16254&nbsp;μs | 44219&nbsp;KB | 4.04 | Again, MiniValidation wins by an even larger margin. 
Now let's merge the results and look at the overall performance (values rounded for readability): | Method | Size | Mean | Allocated | |--------------------------------- |-----: |----------:|----------:| | Manual | 10000 | 342&nbsp;μs | 3437&nbsp;KB | | Manual with IVO.Validate call | 10000 | 2141&nbsp;μs | 10937&nbsp;KB | | MiniValidation + IVO | 10000 | 6608&nbsp;μs | 19921&nbsp;KB | | MiniValidation | 10000 | 16237&nbsp;μs | 42619&nbsp;KB | | DataAnnotationsValidator + IVO | 10000 | 16254&nbsp;μs | 44219&nbsp;KB | | DataAnnotationsValidator | 10000 | 31223&nbsp;μs | 61480&nbsp;KB | | FluentValidation | 10000 | 32364&nbsp;μs | 95912&nbsp;KB | You may notice that MiniValidation + `IValidatableObject` gives the best results of all the third-party libraries. ### Benchmark: Fail fast And yet FluentValidation has a feature that the other competitors lack: [CascadeMode.Stop](https://docs.fluentvalidation.net/en/latest/cascade.html#validator-class-level-cascade-modes). It's flexible and can be set at different levels (rule, class, global): ```csharp public class FailfastChildValidator : AbstractValidator<Child> { public FailfastChildValidator() { ClassLevelCascadeMode = CascadeMode.Stop; //All the rules are declared as usual //... } } ``` | Method | Size | Mean | Allocated | |----------------------------- |------ |--------------:|------------:| | FluentValidation + Fail Fast | 10000 | 9012&nbsp;μs | 38556&nbsp;KB | | FluentValidation | 10000 | 32364&nbsp;μs | 95911&nbsp;KB | Of course, the fail-fast version is much faster. Most of the time I prefer the full validation report, but fail-fast is an option worth mentioning when talking about performance. 
As for the benchmark results: - The MiniValidation library shows the best overall performance. - FluentValidation, despite its popularity, is generally 2x slower. There are some faster alternatives, such as [Validot](https://github.com/bartoszlenar/Validot), but I would like to leave the burden of benchmarking to its maintainers. And don't get me wrong. If you want to decouple your rules from DTOs and get a simple, stable and production-tested solution - just take FluentValidation, because its performance difference is negligible in many cases. If you need self-describing DTOs - stick with MiniValidation. And for performance-driven code - inline your checks where possible. The obvious next step in the development of general-purpose validation libraries is, of course, the adoption of ~~ChatGPT~~ source generators. A validation generator such as [this one](https://learn.microsoft.com/en-us/dotnet/core/extensions/options-validation-generator) would potentially eliminate the performance gap between general-purpose validation libraries and inlined validation. In fact, we already have all the necessary technology shipped with .NET, so stay tuned for news! All the code from the article is available on GitHub: https://github.com/ilya-chumakov/PaperSource.DtoGraphValidation.
ilya-chumakov
1,909,895
Sacred Vestments Alb and Cincture Set for Reverent Service
Our Alb and Cincture set combines simplicity and elegance, tailored for clergy and altar servers. The...
0
2024-07-03T08:38:26
https://dev.to/afra_bsource_fc8c3cee1d08/sacred-vestments-alb-and-cincture-set-for-reverent-service-45ob
Our Alb and Cincture set combines simplicity and elegance, tailored for clergy and altar servers. The Alb, made from high-quality fabric, offers a comfortable fit and a dignified appearance during Mass and other sacred ceremonies. Paired with the Cincture, which features a symbolic cord to secure the Alb, this **_[Alb And Cincture](https://clergywearshop.com/product-category/altar-server-vestments/altar-server-cassocks/)_** enhances the solemnity of worship and spiritual devotion. Ideal for enhancing your spiritual attire with authentic and respectful garments that reflect dedication and reverence in your sacred duties. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b3cayz6rllwx3837n7o7.jpg) [Alb and cincture](https://clergywearshop.com/product-category/altar-server-vestments/altar-server-cassocks/), [Catholic Altar Cloths](https://clergywearshop.com/product-category/altar-server-vestments/altar-server-cassocks/), [Catholic Altar Items](https://clergywearshop.com/product-category/altar-server-vestments/altar-server-cassocks/),[Clergy Alb](https://clergywearshop.com/product-category/altar-server-vestments/altar-server-cassocks/), [Catholic Church Altar](https://clergywearshop.com/product-category/altar-server-vestments/altar-server-cassocks/)
afra_bsource_fc8c3cee1d08
1,909,894
Automating Linux User Management with a Bash Script
Managing user accounts in a Linux environment can be a tedious and error-prone process, especially...
0
2024-07-03T08:36:25
https://dev.to/zkyusya/automating-linux-user-management-with-a-bash-script-1f07
devops, linux, bash
Managing user accounts in a Linux environment can be a tedious and error-prone process, especially when dealing with a large number of users. As a SysOps engineer, ensuring that each user is created with the correct permissions, groups, and secure credentials is crucial for maintaining system security and efficiency. As part of the HNG Internship, I was assigned a real-world scenario: writing a Bash script designed to automate the process of user and group creation, home directory setup, and password management. This script not only simplifies the user management process but also ensures consistency and security across the system. This project is also available on my [GitHub repository](https://github.com/Zkyusya/stage1_hnginternship/tree/main) <u>**Task**</u> Your company has employed many new developers. As a SysOps engineer, write a bash script called `create_users.sh` that reads a text file containing the employees' usernames and group names, where each line is formatted as `user;groups`. The script should create users and groups as specified, set up home directories with appropriate permissions and ownership, generate random passwords for the users, and log all actions to `/var/log/user_management.log`. Additionally, store the generated passwords securely in `/var/secure/user_passwords.txt`. Ensure error handling for scenarios like existing users and provide clear documentation and comments within the script. <u>**Project Setup**</u> To begin with, the script should automate the following: **1. Read User and Group Information:** The script will read from a text file that contains user and group details. **2. Create User and Group:** For each user in the file, the script will create a user account and a corresponding personal group. **3. Assign Additional Groups:** If additional groups are specified for a user (e.g., `hng;hng0,hng1`), the script will add the user to those groups. **4. Create Home Directories:** A dedicated home directory will be created for each user. 
**5. Generate Random Passwords:** Secure random passwords will be generated for each user and stored in `/var/secure/user_passwords.csv`. **6. Log Actions:** All activities performed by the script will be logged to `/var/log/user_management.log`. <u>**Creating the User List File**</u> The Bash script relies on a text file to define the users and groups it needs to create. Create a text file named `user-list.txt` that contains the usernames and groups. Example content for `user-list.txt`: ``` light;sudo,dev,www-data idimma;sudo mayowa;dev,www-data ``` You can replace the usernames and groups with the names you are working with. Next, we create a bash script file that interacts with this text file. <u>**Creating the Bash Script File**</u> Now, we create a bash script called `create_users.sh` using a text editor (nano). This script checks for root privileges, reads the user list file, creates users and groups, assigns users to groups, generates random passwords, and logs all actions. <u>**Step by Step Guide on Creating the Bash Script**</u> **1. Check Root Permissions** Ensures the script is run by the root user. If the script is not run as the root user, it exits with an error message. ``` bash #!/bin/bash # Check if the script is run as root if [ "$EUID" -ne 0 ]; then echo "This script must be run as root" exit 1 fi ``` **2. Validate Input File** The script checks if a filename was provided as an argument. If not, it exits with a usage message. ``` # Check if the input file is provided if [ -z "$1" ]; then echo "Usage: $0 <user_list_file>" exit 1 fi ``` **3. Create environment variables for the file paths** Create environment variables to hold the paths for the input text file (`user-list.txt`), the log file (`/var/log/user_management.log`), and the password file (`/var/secure/user_passwords.csv`). ``` # Log file and password file paths INPUT_FILE="$1" LOG_FILE="/var/log/user_management.log" PASSWORD_FILE="/var/secure/user_passwords.csv" ``` **4. 
Create or clear log and password files with root privileges** Create the log and password files and give them the necessary permissions ``` # Create or clear log and password files with root privileges mkdir -p /var/log mkdir -p /var/secure touch $LOG_FILE chmod 700 /var/secure # Restrict the /var/secure directory to the owner : > $LOG_FILE # Clear the log file touch $PASSWORD_FILE chmod 600 $PASSWORD_FILE # Set secure permissions for the password file : > $PASSWORD_FILE # Clear the password file ``` - `touch $LOG_FILE` creates the `/var/log/user_management.log` file if it doesn't exist. - `mkdir -p /var/secure` creates the `/var/secure` directory that will hold the password file. - `chmod 700 /var/secure` sets the permissions so that only the owner has read, write, and execute permissions for the `/var/secure` directory. - `touch $PASSWORD_FILE` creates the `/var/secure/user_passwords.csv` file if it doesn't exist. - `chmod 600 $PASSWORD_FILE` sets the permissions so that only the owner has read and write permissions for the `/var/secure/user_passwords.csv` file. - The `: >` command clears the contents of the log and password files. **5. Define Logging and Password Helpers** Create the `log_message()` and `generate_password()` functions. These functions will handle creating log messages for each action and generating user passwords. ``` # Functions to generate logs and random passwords log_message() { echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> $LOG_FILE } generate_password() { openssl rand -base64 12 } ``` **6. Process User List** Reads the input file line by line, processing each username and its associated groups. ``` # Read the user list file and process each line while IFS=';' read -r username groups || [ -n "$username" ]; do username=$(echo "$username" | xargs) # Trim whitespace groups=$(echo "$groups" | xargs) ``` **7. Create Users and Groups** Creates users, personal groups, and additional groups if they do not exist, and adds the user to each group. 
``` # Check if the personal group exists, create one if it doesn't if ! getent group "$username" &>/dev/null; then echo "Group $username does not exist, adding it now" groupadd "$username" log_message "Created personal group $username" fi # Check if the user exists if id -u "$username" &>/dev/null; then echo "User $username exists" log_message "User $username already exists" else # Create a new user with the created group if the user does not exist useradd -m -g "$username" -s /bin/bash "$username" log_message "Created a new user $username" fi # Check if the groups were specified if [ -n "$groups" ]; then # Read through the groups saved in the groups variable created earlier and split each group by ',' IFS=',' read -r -a group_array <<< "$groups" # Loop through the groups for group in "${group_array[@]}"; do # Check if the group already exists if ! getent group "$group" &>/dev/null; then # If the group does not exist, create a new group groupadd "$group" log_message "Created group $group." fi # Add the user to each group usermod -aG "$group" "$username" log_message "Added user $username to group $group." done fi ``` **8. Trimming Whitespace** Remove any leading or trailing spaces from each group name before using it. ``` # Remove the trailing and leading whitespace from each group name group=$(echo "$group" | xargs) # Remove leading/trailing whitespace ``` - The `xargs` command removes any whitespace at the beginning or end of the variable values. **9. Generating and Setting a User Password** Generates a random password for the user, logs it in the password file, and logs the action in the log file. ``` # Create and set a user password password=$(generate_password) echo "$username:$password" | chpasswd # Save user and password to a file echo "$username,$password" >> $PASSWORD_FILE ``` **10. 
Feeding the Input File into the Loop** The `<` operator tells the while loop to read its input from the file specified by `$INPUT_FILE` ``` done < "$INPUT_FILE" ``` **11. Final Log Message** Logs the completion of the user creation process. ``` log_message "User created successfully" echo "Users have been created and added to their groups successfully" ``` The final script file should [look like this ](https://github.com/Zkyusya/stage1_hnginternship/edit/main/create_users.sh) <u>**Running the Script**</u> To execute this script, you need to be logged in as the root user and run the script with the user list file as an argument: `create_users.sh user-list.txt` **1. Ensure the script is executable** using the following command ``` chmod +x create_users.sh ``` **2. Run the script** ``` ./create_users.sh user-list.txt ``` The use of `./` before the script name ensures that the script is executed from the current directory. If the script is located in a different directory, navigate to that directory first using the `cd` command. Also, check your `/var/log/user_management.log` file to see your logs by running this command: ``` cat /var/log/user_management.log ``` Check your `/var/secure/user_passwords.csv` file to see the users and their passwords using the command ``` cat /var/secure/user_passwords.csv ``` If the script is running successfully, you should see the following in the terminal ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oybmz3ffq70d1pv6af0i.PNG) To ensure that the script performs its functions well, I edited the `user-list.txt` file for different users and groups. The script successfully created the non-existent users and groups, and added the users to the new groups. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rui5irffzgv07pcf7z2c.PNG) <u>**Key Points**</u> **• User and Group Creation:** The script ensures each user has a personal group with the same name. 
It handles the creation of multiple groups and adds users to these groups. **• Home Directory Setup:** Home directories are created with appropriate permissions and ownership. **• Password Generation and Security:** Random passwords are generated and stored securely. Only the file owner can read the password file. **• Logging:** All actions are logged for auditing purposes. This script simplifies the task of user management in a Linux environment, ensuring consistency and security. I hope you enjoyed reading this article and can now manage users, groups, and their passwords using a Bash script. Learn more about the HNG Internship and opportunities to grow as a developer: [HNG Internship Cohort 11](https://hng.tech/internship) [HNG Internship 2024](https://hng.tech/hire)
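As a standalone sanity check, the parsing logic at the heart of the script — splitting each line on `;`, trimming with `xargs`, then splitting the group list on `,` — can be exercised outside the full script. The sample input line below is illustrative:

```shell
#!/bin/bash
# Minimal sketch of the line parsing used in create_users.sh:
# split a "user;group1,group2" line on ';', trim whitespace with xargs,
# then split the comma-separated group list into an array.
line="  light ; sudo,dev,www-data  "

# Split on ';' (whitespace around the fields is preserved at this point)
IFS=';' read -r username groups <<< "$line"

# Trim leading/trailing whitespace
username=$(echo "$username" | xargs)
groups=$(echo "$groups" | xargs)

# Split the group list on ','
IFS=',' read -r -a group_array <<< "$groups"

echo "user: $username"
for group in "${group_array[@]}"; do
    echo "group: $group"
done
```

Running this prints the trimmed username followed by each group on its own line, which mirrors exactly what the main loop sees before calling `useradd` and `usermod`.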
zkyusya
1,909,893
React Native vs. Native App Development: Choosing the Right Path for Your Mobile App
In today's mobile-first world, crafting an engaging and functional app is crucial for business...
0
2024-07-03T08:34:58
https://dev.to/ngocninh123/react-native-vs-native-app-development-choosing-the-right-path-for-your-mobile-app-16d0
webdev, reactnative, nativea
In today's mobile-first world, crafting an engaging and functional app is crucial for business success. However, with so many development options available, choosing the right technology can be a challenge. Two prominent contenders are React Native and native app development. Let's delve into the pros and cons of each approach to help you make an informed decision. ## React Native: The Cross-Platform Powerhouse [React Native](https://www.hdwebsoft.com/blog/what-is-react-native-a-quick-guide.html), a popular framework by Facebook, allows you to build apps using JavaScript and React that run on both Android and iOS. Here's what makes it shine: ### The advantages of React Native **Faster Development**: Leveraging a single codebase for both platforms streamlines development, potentially saving time and resources. **Cost-Effectiveness**: Reduced development time translates to potentially lower costs compared to building separate native apps. **Large Developer Community**: React's vast and active community provides extensive support, tutorials, and libraries. **Hot Reloading**: See code changes reflected instantly in the app during development, accelerating the iterative process. **Performance Advantages**: While not always a match for fully native apps, React Native has matured significantly, and with proper optimization, you can [optimize performance](https://www.hdwebsoft.com/blog/optimizing-performance-in-react-native-development.html) for many applications. This is especially true for less graphically demanding apps or those that are text-heavy. ### The disadvantages of React Native **Performance Trade-Offs**: React Native apps might not perform as flawlessly as fully native apps, especially for highly complex functionalities. **Limited Access to Native Features**: Accessing certain device-specific features might require additional effort or third-party libraries. 
**Potential Debugging Challenges**: Debugging issues can be trickier compared to native development due to the abstraction layer between code and platform. ## Native App Development: Platform-Specific Perfection Building separate apps for Android (Java/Kotlin) and iOS (Swift/Objective-C) offers a high degree of control and optimization. Here's a breakdown: ### The advantages of Native Apps **Optimal Performance**: Native apps leverage device capabilities directly, resulting in smoother performance and a more responsive user experience. **Full Access to Native Features**: Native development grants unrestricted access to all the functionalities and hardware features of each platform. **Seamless User Experience**: Native apps integrate seamlessly with the look and feel of the specific platform, creating a familiar and intuitive experience for users. ### The disadvantages of Native Apps **Increased Development Time**: Building separate codebases for each platform can be time-consuming and require more resources. **Higher Costs**: The need for separate development teams for each platform can lead to potentially higher development costs. **Limited Code Reusability**: Code cannot be easily shared between platforms, requiring more development effort to maintain separate codebases. ## Choosing the Right Path Deciding between the two is no small task. The ideal choice hinges on your project's specific requirements. I have them all listed [here](https://www.hdwebsoft.com/blog/react-native-or-native-app-for-mobile-application-development.html). Here are some of the guiding factors: **Project Scope and Complexity**: For simpler apps with a tight deadline, React Native's faster development might be preferable. Complex apps with a focus on performance might benefit more from native development. **Budget Constraints**: React Native's potential cost-efficiency can be a deciding factor for budget-conscious projects. 
**Target Audience**: React Native can be a good option if you need to reach users on both Android and iOS with minimal differences in the app experience. ## The Verdict: It's Not Always Black and White React Native and native app development aren't mutually exclusive. Hybrid approaches can leverage React Native for core functionalities while integrating specific native modules for platform-dependent features. Ultimately, the best approach depends on your unique project needs. By carefully considering the pros and cons of each approach, you can make an informed decision that sets your mobile app on the path to success.
ngocninh123
1,882,271
Determine which CBV (class-based view) of Django to use
Introduction Choosing the right class-based view (CBV) in Django can be streamlined by...
0
2024-07-03T08:31:49
https://dev.to/doridoro/determine-which-cbv-classed-base-view-of-django-to-use-4gf1
django
## Introduction Choosing the right class-based view (CBV) in Django can be streamlined by understanding the purpose and features of each type. Django offers several built-in CBVs, each designed to handle common web application patterns efficiently. Here’s a guide to help you select the appropriate CBV for your needs: ### 1. Determine the Type of Operation The first step is to understand what kind of operation you want your view to perform. Broadly, operations can be classified into three categories: - Rendering templates or static pages: For views that display templates or static content. - Handling forms: For views that process forms, including both displaying forms and handling form submissions. - Performing CRUD operations: For views that interact with models to Create, Read, Update, or Delete data. ### 2. Identify the Built-in CBV That Matches Your Need Rendering Templates or Static Pages **TemplateView:** Use this when you simply need to render a template with some context data. It’s ideal for static content or pages that don’t require form submission or data manipulation. ```python from django.views.generic import TemplateView class HomePageView(TemplateView): template_name = 'home.html' ``` #### 2.1 Handling Forms **FormView:** Use this when you need to handle form submissions. It manages displaying the form and processing the submitted data, whether for validation or saving to the database. ```python from django.views.generic.edit import FormView from .forms import ContactForm class ContactFormView(FormView): template_name = 'contact.html' form_class = ContactForm success_url = '/thanks/' def form_valid(self, form): # Perform actions with valid form data return super().form_valid(form) ``` #### 2.2 Performing CRUD Operations **ListView:** Use this to display a list of objects. It’s ideal for showing collections of items, such as a list of articles or products. 
```python from django.views.generic import ListView from .models import Article class ArticleListView(ListView): model = Article template_name = 'article_list.html' ``` **DetailView:** Use this to display detailed information for a single object. It’s perfect for showing the details of a specific item, like an article or a product. ```python from django.views.generic import DetailView from .models import Article class ArticleDetailView(DetailView): model = Article template_name = 'article_detail.html' ``` **CreateView:** Use this to handle the creation of new objects. It manages both displaying a form and saving new records to the database. ```python from django.views.generic.edit import CreateView from .models import Article from .forms import ArticleForm class ArticleCreateView(CreateView): model = Article form_class = ArticleForm template_name = 'article_form.html' success_url = '/articles/' ``` **UpdateView:** Use this to handle updating existing objects. It manages displaying a form pre-filled with the existing data and saving the updated data. ```python from django.views.generic.edit import UpdateView from .models import Article from .forms import ArticleForm class ArticleUpdateView(UpdateView): model = Article form_class = ArticleForm template_name = 'article_form.html' success_url = '/articles/' ``` **DeleteView:** Use this to handle the deletion of objects. It typically requires confirmation before performing the delete action. ```python from django.views.generic.edit import DeleteView from .models import Article class ArticleDeleteView(DeleteView): model = Article template_name = 'article_confirm_delete.html' success_url = '/articles/' ``` ### 3. Use Mixed Functionality Views If you need to combine multiple functionalities in one view, Django provides `View` and `TemplateView` which you can extend and customize. You can define `get`, `post`, `put`, etc., to handle different HTTP methods. 
**Custom View with Multiple Methods:** ```python from django.views import View from django.shortcuts import render, redirect class CustomView(View): def get(self, request): # Handle GET request return render(request, 'template.html') def post(self, request): # Handle POST request return redirect('success_url') ``` ### 4. Consider Advanced Scenarios For more advanced scenarios or custom behavior, you may subclass and extend these views to include custom logic or mix different functionalities. **Mixins:** Django offers various mixins that you can use to add functionality to your views without duplicating code. For example, `LoginRequiredMixin` can be combined with other views to ensure that the user is logged in before accessing the view. ### 5. Review Django’s CBV Documentation Refer to Django’s official documentation on class-based views for an exhaustive list and examples of how to use each CBV. By understanding the function of each class-based view and how they map to common web application needs, you can efficiently choose the right one for your Django project.
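For completeness, the views above still need to be mapped to URLs. The `urls.py` fragment below is a sketch only — the URL paths and route names are illustrative choices, not prescribed by Django:

```python
# urls.py (illustrative paths and route names)
from django.urls import path

from .views import (
    ArticleListView, ArticleDetailView, ArticleCreateView,
    ArticleUpdateView, ArticleDeleteView,
)

urlpatterns = [
    path('articles/', ArticleListView.as_view(), name='article-list'),
    path('articles/add/', ArticleCreateView.as_view(), name='article-create'),
    path('articles/<int:pk>/', ArticleDetailView.as_view(), name='article-detail'),
    path('articles/<int:pk>/edit/', ArticleUpdateView.as_view(), name='article-update'),
    path('articles/<int:pk>/delete/', ArticleDeleteView.as_view(), name='article-delete'),
]
```

Note that `DetailView`, `UpdateView`, and `DeleteView` look up their object by the `pk` captured from the URL by default, which is why those routes include `<int:pk>`.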
doridoro
1,909,892
Bitcoin As A Safe Haven Asset: A Viable Option Or Not?
Introduction The idea of Bitcoin as a safe haven asset has garnered significant attention, especially...
0
2024-07-03T08:27:56
https://dev.to/cleaningmarble_667c21bf45/bitcoin-as-a-safe-haven-asset-a-viable-option-or-not-4h6f
programming, ai, discuss, datascience
Introduction The idea of **[Bitcoin as a safe haven asset ](https://thomas-stray.com/ai-business-tools-for-starting-a-business-a-quick-8-step-guide/)**has garnered significant attention, especially during times of economic uncertainty. But what exactly makes an asset a "safe haven"? Typically, these assets maintain or increase in value during market turbulence. Let's dive into whether Bitcoin fits this mold. Understanding Bitcoin A Brief History of Bitcoin Bitcoin, introduced in 2009 by an anonymous entity known as Satoshi Nakamoto, revolutionized the financial landscape. As the first decentralized cryptocurrency, Bitcoin operates on a peer-to-peer network, allowing transactions without intermediaries. How Bitcoin Works Bitcoin transactions are recorded on a blockchain, a distributed ledger technology. Miners verify transactions by solving complex mathematical problems, ensuring security and transparency. Characteristics of Safe Haven Assets Stability in Value Safe haven assets are known for their stability, often maintaining or increasing in value during economic downturns. Low Correlation with Other Assets These assets typically show low correlation with stock markets and other volatile investments, making them a refuge during market turmoil. Liquidity and Accessibility For an asset to be a safe haven, it must be easily bought and sold (liquid) and accessible to a wide range of investors. Bitcoin vs. Traditional Safe Haven Assets Gold: The Classic Safe Haven Gold has long been considered the ultimate safe haven due to its enduring value and historical significance. Government Bonds: Stability and Security Government bonds offer security and steady returns, making them a traditional choice for risk-averse investors. Comparing Bitcoin with Gold and Bonds Bitcoin differs from gold and bonds in its digital nature and high volatility. While gold and bonds are tangible and stable, Bitcoin's value can swing dramatically. 
The Volatility Factor Bitcoin's Price Fluctuations Bitcoin is notorious for its price volatility, with significant fluctuations even within short periods. Historical Price Volatility of Bitcoin Historical data shows extreme highs and lows, reflecting Bitcoin's speculative nature. How Volatility Affects Bitcoin’s Safe Haven Status High volatility can deter investors seeking stability, a key feature of safe haven assets. Bitcoin's Correlation with Other Assets Correlation with Stock Market Bitcoin often shows a low correlation with traditional stock markets, though not consistently. Correlation with Traditional Currencies Its correlation with fiat currencies varies, sometimes acting as a hedge against currency devaluation. Examining Bitcoin's Unique Position Bitcoin’s unique position as a digital asset offers both potential and uncertainty in its correlation with traditional assets. Liquidity and Accessibility of Bitcoin How Liquid Is Bitcoin? Bitcoin is highly liquid, with numerous exchanges facilitating rapid buying and selling. Accessibility of Bitcoin Compared to Traditional Assets Digital wallets and exchanges make Bitcoin accessible globally, unlike physical gold or specific bonds. The Role of Exchanges and Wallets Exchanges and wallets play a crucial role in Bitcoin's liquidity and security, impacting its safe haven status. Regulatory Environment Government Regulations and Their Impact Regulations vary by country, influencing Bitcoin’s adoption and perceived safety. The Legal Status of Bitcoin Around the World Different countries have different legal stances, affecting Bitcoin’s global acceptance. Future Regulatory Trends Anticipating future regulations is crucial for understanding Bitcoin’s potential as a safe haven. Security Concerns Security of Bitcoin Transactions Blockchain technology ensures secure transactions, but risks remain. Risks of Hacking and Fraud Despite robust security, hacking and fraud incidents highlight vulnerabilities. 
How to Secure Bitcoin Investments Using secure wallets and exchanges, and staying informed about security practices, can mitigate risks. Adoption and Acceptance Increasing Acceptance of Bitcoin by Businesses More businesses are accepting Bitcoin, enhancing its utility and acceptance. Institutional Adoption of Bitcoin Institutional investment in Bitcoin adds credibility and stability to its market. The Role of Public Perception Public perception influences Bitcoin’s adoption and market dynamics. Technological Advancements Impact of Blockchain Technology Blockchain technology underpins Bitcoin’s security and operational framework. Innovations Enhancing Bitcoin's Security and Utility Ongoing innovations aim to enhance Bitcoin’s usability and security. Future Technological Trends Future advancements could further solidify Bitcoin’s role in financial markets. Economic and Geopolitical Factors Impact of Economic Crises on Bitcoin's Value Bitcoin often gains attention during economic crises as an alternative investment. Bitcoin as a Hedge Against Inflation Bitcoin is increasingly viewed as a hedge against inflation, akin to gold. Geopolitical Stability and Bitcoin Geopolitical events can influence Bitcoin’s value and perceived safety. Case Studies Bitcoin During the COVID-19 Pandemic During the pandemic, Bitcoin saw increased interest as a potential safe haven. Bitcoin in Economically Unstable Countries In countries facing economic instability, Bitcoin has emerged as an alternative to devaluing local currencies. Real-Life Examples of Bitcoin as a Safe Haven Various case studies highlight Bitcoin’s potential and limitations as a safe haven. Expert Opinions Financial Experts' Views on Bitcoin as a Safe Haven Experts are divided, with some advocating for Bitcoin and others cautioning against its volatility. Perspectives from Cryptocurrency Enthusiasts Crypto enthusiasts often highlight Bitcoin’s potential for high returns and digital innovation. 
Contrasting Opinions and Debates Ongoing debates reflect the diverse perspectives on Bitcoin’s role in financial markets. Conclusion Bitcoin’s viability as a safe haven asset remains a complex and debated topic. While it offers unique advantages, such as high liquidity and low correlation with traditional assets, its volatility and regulatory uncertainties pose significant challenges. As Bitcoin continues to evolve, its role in financial markets may become clearer, offering both risks and opportunities for investors. FAQs Is Bitcoin a Reliable Safe Haven Asset? Bitcoin's reliability as a safe haven is debated due to its volatility and regulatory uncertainties. How Does Bitcoin Compare to Gold as a Safe Haven? Gold is more stable and traditionally accepted, while Bitcoin offers high liquidity and digital innovation. Can Bitcoin Protect Against Inflation? Bitcoin is increasingly seen as a hedge against inflation, similar to gold. What Are the Risks of Investing in Bitcoin? Risks include high volatility, regulatory changes, and potential security threats. How Can I Safely Invest in Bitcoin? Use secure wallets, reputable exchanges, and stay informed about market and security trends.
cleaningmarble_667c21bf45
1,909,891
Phone Number Verification Bot 📞✅
Welcome to the Phone Number Verification Bot! This Telegram bot helps you verify phone numbers,...
0
2024-07-03T08:26:30
https://dev.to/kidddevs/phone-number-verification-bot-15ia
telegram, telegrambot, dakidarts, bot
Welcome to the **[Phone Number Verification Bot](https://dakidarts.com/)**! This Telegram bot helps you verify phone numbers, providing details about the country, location, phone type, and more. Built with Python and leveraging the power of RapidAPI, this bot is here to ensure you have accurate information at your fingertips. ![Phone number verification bot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/435136v776qeccb7juz5.png) ## Features - **Phone Number Verification**: Validate phone numbers with ease. - **Country and Location Information**: Get details about the phone number's country and specific location. - **Phone Type Identification**: Know whether the phone number is mobile, landline, etc. - **Carrier Information**: Find out the carrier associated with the phone number. - **Timezone Information**: Get the timezone related to the phone number. ## Commands - `/start`: Welcome message and introduction to the bot. - `/help`: Display a list of available commands and their usage. - `/verify <phone> <country>`: Verify a phone number with the specified country code. ## Example Usage To verify a phone number, simply use the following command: `/verify 6502530000 US` ## Usage You don't need to install anything to use this bot. Simply go to [t.me/dws_verify_phone_bot](https://t.me/dws_verify_phone_bot) on Telegram and start verifying phone numbers right away! ## Contact For any inquiries or support, reach out to us at [t.me/dakidarts](https://t.me/dakidarts). --- Happy Verifying! 😊
kidddevs
1,909,889
How to Solve RankerX Captcha using CaptchaAI
In today's digital environment, automation tools like RankerX significantly streamline SEO tasks, but...
0
2024-07-03T08:25:55
https://dev.to/media_tech/how-to-solve-rankerx-captcha-using-captchaai-53c6
In today's digital environment, automation tools like RankerX significantly streamline SEO tasks, but they often encounter challenges such as solving captchas. This is where CaptchaAI comes into play, offering a robust captcha solving service, including reCaptcha solving and **image captcha solving**. This article delves into how CaptchaAI can effectively handle captchas, ensuring uninterrupted operation of RankerX for your SEO campaigns.

**Understanding Captcha Challenges in SEO Tools**

Captcha (Completely Automated Public Turing test to tell Computers and Humans Apart) presents a challenge designed to differentiate human users from bots. SEO tools like RankerX automate various tasks, including link building and content submission, which frequently run into captcha verifications. These captchas come in various forms, such as text-based challenges, image recognition, or the more complex Google reCAPTCHA.

**Why Captcha Solving is Crucial for Automation**

The primary hurdle for automation tools facing captchas is the interruption of automated processes. Each captcha prompt requires human intervention, which defeats the purpose of automation. To maintain efficiency and streamline workflows, a reliable captcha solving service is essential. This is not just about convenience; it's about scalability and operational continuity in SEO efforts.

**CaptchaAI: Your Partner in Seamless Captcha Solving**

CaptchaAI is designed to integrate seamlessly with tools like RankerX. It provides a powerful **reCaptcha solving service** that can decode a wide array of captcha types quickly and accurately. Here’s how CaptchaAI stands out:

- **Versatile Captcha Recognition:** capable of solving text, image, and Google reCAPTCHA challenges, ensuring broad coverage.
- **High Accuracy:** utilizes advanced algorithms to provide high success rates in captcha solving.
- **Fast Response Times:** minimizes delays in captcha solving, which is crucial for maintaining the speed of automated tasks in RankerX.
- **Ease of Integration:** designed to be easily integrated with any tool that encounters captchas, including RankerX.

**Integrating CaptchaAI with RankerX**

Integrating CaptchaAI into RankerX is straightforward. Once you set up your account with CaptchaAI, you simply configure the API settings within RankerX to redirect captcha challenges to CaptchaAI’s service. This setup ensures that whenever RankerX encounters a captcha, it is automatically sent to CaptchaAI for solving, and the response is quickly fed back to continue the automated process without manual input.

**Benefits of Using CaptchaAI with RankerX**

- **Enhanced Automation:** by resolving captchas automatically, CaptchaAI helps maintain the continuity of RankerX’s automation.
- **Increased Efficiency:** reduces the downtime caused by captcha interruptions, thus increasing overall task efficiency.
- **Scalability:** with captcha issues handled, you can scale up your SEO efforts without additional human resources.
- **Cost-Effectiveness:** saves on the labor cost associated with manually solving captchas during large-scale SEO campaigns.

**Real-World Applications and Success Stories**

Many SEO professionals have seen significant improvements in their project turnaround times using CaptchaAI with RankerX. One notable case involved an SEO agency that managed to triple the volume of its link-building campaigns without increasing its workforce, all thanks to the robust captcha solving capabilities of CaptchaAI.

**Choosing the Right Captcha Solving Service**

When selecting a **captcha solving service**, consider factors such as accuracy, speed, compatibility, and cost. CaptchaAI not only excels in all these areas but also offers excellent customer support and competitive pricing, making it an ideal choice for both small-scale SEO practitioners and large agencies.
**In conclusion,** for SEO professionals using RankerX, CaptchaAI provides an indispensable solution to the captcha challenge, streamlining operations and enhancing efficiency. By automating the captcha solving process, it allows you to focus more on strategic aspects of SEO rather than operational hurdles.
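Under the hood, integrations like the one described above typically follow a submit-and-poll pattern: the tool submits the captcha, receives a task id, and polls until a solution arrives. The sketch below illustrates that generic pattern only; the function names and callbacks are hypothetical, not CaptchaAI's actual API:

```python
import time

def solve_captcha(submit, fetch_result, poll_interval=0.01, timeout=1.0):
    """Generic submit-and-poll loop used by many captcha-solving services.

    `submit` returns a task id; `fetch_result(task_id)` returns the
    solution string, or None while the captcha is still being processed.
    """
    task_id = submit()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        answer = fetch_result(task_id)
        if answer is not None:
            return answer
        time.sleep(poll_interval)
    raise TimeoutError("captcha not solved within the timeout")
```

In a real integration, `submit` would POST the captcha payload to the solving service and `fetch_result` would GET the result endpoint; the tool resumes its automated task as soon as the loop returns.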
media_tech
1,909,888
Discover How Sinopec No.5 Construction Maximizes NocoBase to Drive Informatization of Singapore's CRISP Project!
Sinopec Fifth Construction Co., Ltd., a significant player in the petrochemical industry, is using...
0
2024-07-03T08:24:15
https://dev.to/nocobase/discover-how-sinopec-no5-construction-maximizes-nocobase-to-drive-informatization-of-singapores-crisp-project-4ekf
nocode, lowcode
Sinopec Fifth Construction Co., Ltd., a significant player in the petrochemical industry, is using NocoBase as part of its digital transformation under the leadership of IT director Mr. Dong Ke. [Let's dive into the story!](https://www.nocobase.com/en/blog/SFCC)

## 1. **Background and Challenge**

Sinopec No.5 Construction Co., Ltd. (hereafter referred to as SFCC) is the earliest large-scale construction enterprise engaged in petrochemical construction in China. One of the company's main businesses is constructing petrochemical infrastructure, including oil refining, chemical, natural gas, storage, and transportation facilities, both domestically and internationally. One typical project SFCC implemented is the new refinery project in Kuwait, with a contract worth up to US\$519 million. Once the system is in use, Kuwait's oil production will increase by 31.5 million tonnes per year[[1](http://industry.people.com.cn/n1/2019/0410/c413883-31023309.html)].

![petrochemical industry](https://static-docs.nocobase.com/296cb1e212172bd83f826ef2d78610c6.PNG)

As a significant player in petrochemical construction, SFCC is also one of the earliest enterprises in the industry to embrace informatization and digitalization. The technical leader, Mr. Dong Ke, lived through an essential period of digital transformation with SFCC.

The digital transformation of traditional industries can be traced back to the mid-1990s, with the rapid development of information and automation technology. After decades of development, the market for industry ERP systems has continued to mature, and thousands of solutions have emerged. Nevertheless, it is still difficult for the petrochemical industry to find versatile digital software or solutions. This is primarily due to the industry's unique nature. As Mr.
Dong said, "(The problem is that) software vendors can't understand the specific requirements of the petrochemical industry, while we don't have professional software developers to turn our sophisticated requests into code."

Consequently, Mr. Dong and his team had to start from scratch, explore various technologies, and build the alpha version of the system with Access and VB macro languages. This solution was low-cost and quick to get started with, which helped SFCC enter the fast lane of digitalization.

In the beginning phase, the solution worked and was cost-effective. As the project moved on to the middle phase, the drawbacks of Access and the VB macro languages began to emerge. Firstly, Access databases are ideal for small to medium-sized database applications; however, performance issues may arise when dealing with large amounts of data or when the database is accessed by multiple users simultaneously. Secondly, updating the system to meet rapidly changing business requirements would require a significant amount of manual programming work, leading to increased maintenance costs and a higher risk of errors.

## 2. **Seeking Alternatives**

The turning point occurred when Mr. Dong sought a new software solution to satisfy the increased requirements of SFCC's new management workbench project. The project aimed to bring the principles of total quality management into the digital management system, integrating the five key factors that affect product quality: personnel, machines, raw materials, methods, and environment.

After years of practice, Mr. Dong Ke prefers in-house development. Since there was no out-of-the-box solution, he turned to open-source projects and development frameworks to minimize the cost and problems of starting from scratch. He utilized a series of tools to implement the different elements of the system: Ant Design Pro to design system user interfaces, Formily to implement form management, and G2Plot to present charts and graphs, etc.
However, there is still a long way to go to complete the development of an entire system. Imagine building a house with ready tools: shovels, hammers, bags of cement, and bricks. You still have to build the entire house brick by brick. The same applies to software development. Before the system is usable, numerous functions need to be implemented, including the user system, privilege management, chart library invocation, and form library referencing...

Just as Mr. Dong continued to explore suitable open-source tools on GitHub, NocoBase caught his eye with the following highlights:

* Data Model Driven: in line with the logic of system development, where data modeling comes first.
* WYSIWYG Editor: makes it very easy to build forms and pages.
* Plug-and-Play: new plugins can be developed and installed to fulfill new requirements.

He thought that if his team could develop SFCC's system based on NocoBase, it would help solve the current pain points and thus significantly reduce the workload and complexity of system development.

Let's revisit the analogy of building a house. Prefabricated components and automated construction equipment have transformed the construction process. Instead of building brick by brick, prefabricated house parts can be quickly assembled using automated equipment, leading to precise and efficient construction. This approach significantly reduces the manual labor and time required for building.

![building a house, The image is generated by ChatGPT.](https://static-docs.nocobase.com/a93725c4ef24a8d05c8401f28b521ff5.PNG)

Likewise, NocoBase is a no-code platform that offers pre-made components and automated construction tools, letting SFCC build software without coding line by line. Additionally, the technology stack used by NocoBase is well aligned with SFCC's projects, enabling Mr. Dong and his team to quickly start using NocoBase for a proof of concept.
Using the WYSIWYG editor, general requirements of SFCC can be quickly implemented through the no-code part of NocoBase with simple drag-and-drop. For customized functions, Mr. Dong's team can develop new plugins based on the dedicated [development documents](https://docs.nocobase.com/development). **This flexibility is enabled by NocoBase's microkernel and plug-in architecture.**

Once they had validated the product concepts, Mr. Dong's team started building the entire system using NocoBase. Meanwhile, Mr. Dong has actively contributed to the NocoBase open-source community for over two years.

## 3. Progress and Result

As NocoBase has grown over the past two years, Mr. Dong has been responsible for system innovation and in-house development in SFCC's overseas branches. During this time, he led the team to accomplish the system improvement and promotion of the **Singapore CRISP project**, one of SFCC's critical projects.

NocoBase's **straightforward and standard** API design specifications facilitate the swift input and output of service resources through HTTP RESTful APIs. Additionally, NocoBase can quickly scale out and integrate third-party services, allowing for flexible interaction and scheduling of resources both within and outside the system. As a result, Mr. Dong and his team can efficiently handle data requests and business logic and choose the most suitable interaction mode based on their needs.

Currently, the Human Resource Working Efficiency Management system, part of the Singapore CRISP project, has been developed with NocoBase and runs smoothly. This case has yielded invaluable best practices that will significantly enhance the success of the company-wide rollout.

### The technical architecture of the system

**NocoBase, the control and application center, is a central hub** for building, developing, and managing applications and integrating different systems.

* Authentication plugins: responsible for authentication and authorization to ensure system security.
* User & Permissions: to manage users and their permissions, ensuring that different users have appropriate access.
* Resources APIs: to provide external and internal resource access interfaces, supporting RESTful APIs and internal service calls (RPC).
* Data visualization: to visualize data for analysis and decision-making.
* Automated workflow: to automate business processes to improve productivity.
* Applications (business applications): to effectively meet the diverse demands of business scenarios.

**Integrated third-party platforms:**

* Cloudflare Workers: handles HTTP requests to improve content distribution speed and application response time.
* Lark Integration Platform: integrates Lark services (an IM application) to achieve collaboration and communication within the company.
* Supabase: provides database storage and authentication as a back-end service.
* Logto: responsible for authentication and authorization to ensure system security.

![technical architecture](https://static-docs.nocobase.com/ed5586e6e972ff5beaa41548e55b844e.png)

Mr. Dong shared, "NocoBase's best feature is its one-click configuration mode switching and comprehensive [HTTP API support](https://docs.nocobase.com/handbook/api-doc)." NocoBase allows developers to click the mode-switching button in the upper right corner to switch from development to production mode. As a result, developers can build a page within just a few minutes.

Developers can easily integrate NocoBase into their existing systems by interacting with the data via API requests, which allows for customizable and extended data processing capabilities. For instance, Mr. Dong's team utilized NocoBase to develop the data dashboards for monitoring human resource efficiency at SFCC. The system was linked to around 60 attendance machines via the NocoBase API. Once the raw data was collected via the API, it was processed using automated business process management in NocoBase Workflow and then delivered to the managers' dashboards.
Previously, the total cost of data collection, automatic processing, and visualization was more than 1 million CNY (about 140,000 US dollars). **Using NocoBase reduced the cost by 85%.**

![dashboard](https://static-docs.nocobase.com/a54b49b032513b3628ecb24909b0167d.jpeg)

Currently, the other parts of the system development are also making steady progress. Mr. Dong estimates that it will take a total of 5 months to complete the entire system and go live, half a year earlier than the initially planned 11 months, **reducing the delivery time by 55%.**

![dashboard](https://static-docs.nocobase.com/3d7ccb4af4f8c5ad8b1c1557ed838533.png)

As a senior front-end developer, Mr. Dong understands the challenge of balancing low code, performance, and flexibility. However, based on his experience, he believes that NocoBase has achieved a good balance among these three features.

Completing the project verification for SFCC, a leading enterprise in the petrochemical industry, has been a remarkable journey for NocoBase. Not only have the product's capabilities been enhanced by a real project, but the project also demonstrates NocoBase's ability to drive digital transformation in such a complex and demanding industrial setting. With the [release of NocoBase 1.0](https://www.nocobase.com/en/blog/release-v10), the product team is now focusing on **improving stability and performance,** meeting the expectations of developers like Mr. Dong. The NocoBase product team aims to help SFCC accelerate business rollout in the petrochemical industry and promote overall business efficiency.

---

NocoBase is a private, open-source, no-code platform offering total control and infinite scalability. It empowers teams to adapt quickly to changes while significantly reducing costs. Avoid years of development and substantial investment by deploying NocoBase in minutes.
Homepage: https://www.nocobase.com/ Demo: https://demo.nocobase.com/new Documentation: https://docs.nocobase.com/ GitHub: https://github.com/nocobase/nocobase
nocobase
1,909,877
The Perpetual Rivalry Between iOS and Android
The mobile operating system landscape has been dominated by two giants for over a decade: iOS and...
0
2024-07-03T08:22:21
https://dev.to/klimd1389/the-perpetual-rivalry-between-ios-and-android-1okm
ios, android, devops, productivity
The mobile operating system landscape has been dominated by two giants for over a decade: iOS and Android. This rivalry is more than just a competition between Apple and Google; it embodies the ongoing debate about technology, user preferences, and innovation. Both platforms have carved out distinct identities, each with its own set of strengths and weaknesses. In this article, we explore the perpetual rivalry between iOS and Android, examining their differences, user experiences, and what the future might hold for these two titans of the tech world.

## User Experience and Interface Design

### iOS: Elegance and Simplicity

iOS, Apple's proprietary operating system, is renowned for its elegant and user-friendly interface. Apple has always emphasized a seamless user experience, with intuitive navigation and a consistent design language across its ecosystem. The uniformity of iOS devices ensures that apps and features work harmoniously, providing a polished and reliable user experience.

### Android: Customization and Flexibility

Android, on the other hand, offers unparalleled customization options. Developed by Google, Android provides users with the freedom to tailor their devices to their preferences. From home screen widgets to custom ROMs, Android's flexibility appeals to tech enthusiasts and those who enjoy a personalized touch. The diversity of Android devices, ranging from budget phones to high-end flagships, also means that there is an Android device for every need and budget.

## App Ecosystem and Compatibility

### iOS: Quality and Optimization

The App Store, Apple's app marketplace, is known for its stringent quality control. Apps on the App Store undergo rigorous testing to ensure they meet Apple's high standards for performance and security. This often results in more polished and optimized applications for iOS devices. Additionally, developers often prioritize iOS when releasing new apps, sometimes leading to exclusive or early access to certain applications.

### Android: Quantity and Diversity

Google Play, Android's app store, boasts a larger number of apps compared to the App Store. While this means a greater variety of apps to choose from, it also implies a wider range of quality. The open nature of Android allows for more experimental and niche applications, but it also means users need to be more vigilant about app security and performance.

## Hardware Integration and Ecosystem

### iOS: Seamless Integration

Apple's tight control over both hardware and software results in a seamless integration across its devices. Features like Handoff, Continuity, and AirDrop work effortlessly between iPhones, iPads, Macs, and Apple Watches. This cohesive ecosystem is a significant selling point for users invested in Apple's suite of products.

### Android: Versatility and Choice

Android's open-source nature allows for a wide range of hardware options from various manufacturers. This leads to an ecosystem rich in diversity, with devices that cater to different tastes, needs, and price points. However, this fragmentation can sometimes result in inconsistencies in user experience and delayed software updates across different devices.

## Security and Privacy

### iOS: Robust Security Measures

Apple places a strong emphasis on security and privacy. iOS is known for its robust security features, including regular updates, encrypted messaging with iMessage, and hardware-based security with the Secure Enclave. Apple's commitment to user privacy is also evident in features like App Tracking Transparency, which gives users control over how their data is shared.

### Android: Improving but Varied

While Android has made significant strides in enhancing security, its open nature and device fragmentation pose challenges. Google has implemented measures like Google Play Protect and regular security patches, but the effectiveness of these measures can vary across different manufacturers and devices. Users must also be proactive in managing app permissions and security settings.

## The Future of iOS and Android

As technology continues to evolve, so too will the rivalry between iOS and Android. Both platforms are likely to innovate in areas such as artificial intelligence, augmented reality, and seamless connectivity. The competition will drive both Apple and Google to push the boundaries of what mobile operating systems can achieve, ultimately benefiting consumers with more advanced and user-centric features.

## Conclusion

The rivalry between iOS and Android is a testament to the dynamic nature of the tech industry. Each platform has its unique strengths and caters to different user preferences. Whether you value the polished, seamless experience of iOS or the customizable, diverse nature of Android, both operating systems have a lot to offer. As a developer, understanding the nuances of both platforms can help you create better apps and experiences for a wider audience. Feel free to share your thoughts on the iOS vs. Android debate in the comments below! What do you prefer and why?
klimd1389
1,909,841
CRISP-DM: The Essential Methodology for Structuring Your Data Science Projects
As with any IT project, Machine Learning projects need a framework. However, classical methodologies...
0
2024-07-03T08:18:39
https://dev.to/moubarakmohame4/crisp-dm-the-essential-methodology-for-structuring-your-data-science-projects-3fk2
machinelearning, datascience, data, datastructures
As with any IT project, Machine Learning projects need a framework. However, classical methodologies do not apply, or apply very poorly, to Data Science. Among the existing methodologies, CRISP-DM is the most commonly used and will be presented here. Several variants exist.

> Be careful: CRISP is a framework, not a rigid structure. The purpose of using a methodology is not to have a magic formula or to be constrained by it. It mainly provides an idea of the progress and steps, as well as good practices to follow.

CRISP-DM stands for "Cross Industry Standard Process for Data Mining." It is, therefore, a standard process that does not depend on the application domain. Originally, the methodology was created as part of a European project led by a consortium of companies. The presence of Data Mining in its name (and not Machine Learning) indicates that this method is old. Indeed, its first version dates back to 1996. A second version was being drafted from 2006 but was abandoned in 2011 before its release.

This method is iterative. Five phases follow one another until the result is validated by the business, with possible back-and-forths between some. These phases are as follows:

- **Business Understanding**: the business understanding phase, which is not a technical phase.
- **Data Understanding**: the data understanding phase corresponds mainly to a phase of descriptive statistics and allows for familiarization with the provided data.
- **Data Preparation**: it is in this data preparation phase that format modifications or feature creations are carried out.
- **Modeling**: this is the phase of creating models and optimizing parameters.
- **Evaluation**: the evaluation is not conducted by Data Scientists in this phase. Business experts will assess whether the quality of the proposed solution meets the production constraints.

These five phases are then followed by a sixth phase that only occurs once: the Deployment phase.
More than a deployment phase, it is actually a preparation phase for a more classic project: the production deployment of the model, which will become one component among others in the solution's architecture. The overall schema is as follows:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ogutt73c92iyl1gz5164.png)

**1. Business Understanding**

The business understanding phase comes first within the iterations. Its goal is to fully understand the business needs and project objectives. This phase is divided into several steps, namely determining business objectives, assessing the situation, setting Machine Learning goals, and planning the project. These steps are mostly carried out through meetings and workshops with business experts, by studying the processes on-site where Machine Learning will be implemented.

**2. Data Understanding**

The data understanding phase is the first technical phase. It involves loading the data and beginning to understand it. There are three main steps leading to the completion of part of the deliverable:

- The identity card of the dataset(s)
- The description of the fields
- The statistical analysis of each field

Before these three steps, the data must be retrieved and loaded into the software of choice, or access must be obtained if the data is directly in databases.

**3. Data Preparation**

In most cases, the data preparation phase is lengthy (up to 50% of the project time). The goal is to transform raw data into usable data for the modeling phase.

**4. Modeling**

The modeling phase involves creating various models, comparing them, and choosing the model(s) to present to the business. For each model tested, it is important to track all the decisions made. This requires, successively:

- Indicating how the model will be tested and the metrics chosen for evaluation.
- Specifying the chosen technique and briefly describing it.
- Indicating the constraints of the chosen model (which may lead to a return to the data preparation phase if necessary).
- Specifying the parameters used (and those tested in the case of hyperparameter optimization).
- Calculating the model metrics (in the vast majority of cases, multiple metrics are needed) and, if possible, an analysis of the model.
- Indicating whether the model is acceptable for the next phase or if it is insufficient.

This phase represents the largest part of this book and is detailed in the chapters dedicated to the different techniques (from the Modeling and Evaluation chapter to the Unsupervised Algorithms chapter).

**5. Evaluation**

Contrary to what its name might suggest, the evaluation phase is not a phase for calculating metrics (which should be done in the Modeling phase) but rather a business evaluation phase. Indeed, the results are compared against the needs identified in the Business Understanding phase. The aim is to determine whether the best models obtained are practically usable, often through tests on real data.

If a model is validated, it must then be reviewed to ensure that the assumptions made throughout the process are valid and correspond to reality. If no issues are detected, deployment can proceed. In all other cases, it is necessary to determine the course of action: restart a complete iteration to achieve a result that can be validated, or decide to stop the project. Depending on the results obtained, several avenues can be explored:

- Add more data, for example through new extractions or by adding attributes.
- Test other models.
- Change the desired task by breaking it down into several subtasks.
- Put the project on hold, either until algorithms improve if the current state of the art is insufficient, or until new data becomes available.

**6. Deployment**

The deployment phase is not a technical phase. The deployment itself will be a different project, managed according to the technical team's usual practices.
However, in this phase, it is necessary to prepare the model's production deployment and, most importantly, its future lifecycle:

- What is the expected schedule for delivering the model?
- In what form will it be provided to the technical team?
- How will monitoring be conducted?
- In case of failure or deviations, how will the model be retrained and at what frequency?
- What is the model maintenance procedure (in case of issues, for example)?
- What are the risks and mitigations?
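The bookkeeping recommended in the Modeling phase, recording the technique, parameters, and several metrics for every model tried, can be sketched in a few lines of plain Python. The experiment structure and names here are illustrative, not part of CRISP-DM itself:

```python
def binary_metrics(y_true, y_pred):
    """Compute several complementary metrics: one metric alone is rarely enough."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

def record_experiment(technique, params, y_true, y_pred, log):
    """Append one tracked modeling decision: technique, parameters, metrics."""
    log.append({"technique": technique, "params": params,
                "metrics": binary_metrics(y_true, y_pred)})

# One tracked iteration of the Modeling phase
experiments = []
record_experiment("decision tree", {"max_depth": 3},
                  [1, 0, 1, 1], [1, 0, 0, 1], experiments)
```

Keeping every attempt in a log like `experiments` is what later allows the Evaluation phase to compare candidate models against the business needs rather than against memory.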
moubarakmohame4
1,909,840
How do you file cuticles with a nail drill?
Utilizing a Nail Drill to File Cuticles Introduction You've probably come across a nail drill if...
0
2024-07-03T08:18:05
https://dev.to/edwin_padilla_4aa17388914/how-do-you-file-cuticles-with-a-nail-drill-29bm
nail, art, drill, bit
Utilizing a Nail Drill to File Cuticles

Introduction

You've probably come across a nail drill if you're a nail lover. It is a tool used to file, buff, and shape fingernails. Nonetheless, are you aware that you can also use it to file cuticles? We'll be talking about the benefits and safety of this tool, how to use a nail drill to file cuticles, and the quality of the results.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/agvdovg09r76ewbsvmx2.png)

Safety

The first thing to note is that safety is paramount when using a nail drill to file cuticles. Before use, make sure your fingernails are clean and your nail art tools are sanitized. You should also understand the kind of drill bits you are using and select the right one. It's essential to be cautious and take things gradually to avoid any potential accidents.

Use

Before you start filing your cuticles, make sure you know how to handle the nail drill. The machine should allow you to adjust the speed and strength of the drill. Lower the nail drill bits to the appropriate level, then move them gently over your cuticles in a back-and-forth motion. Make certain that the drill bit only makes contact with the loosened, overgrown cuticle to avoid tearing the skin. The versatile nature of the nail drill means it can handle various nail shapes and textures, making it well suited for novices and experts alike.

Utilizing

Before use, ensure that your nails are clean and the nail drill is sanitized. Start by picking the most suitable bit size and shape for your needs. Once you have the correct bit, start the drill at a lower speed and work your way up as desired. Gently move the nail drill bits over your cuticle in a back-and-forth motion, making sure you don't touch the skin.
Once you're done, you can use a mini vacuum cleaner to clean up the area, then moisturize to complete the process.

Service and Quality

The quality and effectiveness of the results a nail drill provides depend on how well it is maintained. Damaged or cracked nails can be a side effect of poor maintenance. You should always take the time to sanitize and store your nail drill correctly. You should also change the drill bits regularly to keep the device in excellent condition.

Application

Finally, when using a nail drill to file cuticles, there are many ways to get creative and achieve unique designs. Professionals add extra flair and sparkle to nails by adding rhinestones or glitter in various sizes and shapes. You can also experiment with different colors and textures to achieve unique finishes.
edwin_padilla_4aa17388914
1,909,838
Unit Testing and TDD With PostgreSQL is Easy
Note: this is a repost from my personal blog: Unit Testing and TDD With PostgreSQL is Easy I...
0
2024-07-03T08:14:13
https://dev.to/vbilopav/unit-testing-and-tdd-with-postgresql-is-easy-2hej
postgressql, tdd, testing, sql
> >Note: this is a repost from my personal blog: [Unit Testing and TDD With PostgreSQL is Easy > ](https://vb-consulting.github.io/blog/unit-testing-postgresql/) > >I believe this subject matter is important and it deserves a bigger audience. > I keep hearing that PostgreSQL, as well as all other databases - are not testable. That is, of course, completely wrong. Not only is it possible, but I found it to be even easier and faster than traditional methods. And no, I'm not talking about some integration testing or some Docker magic. Just plain-old PostgreSQL, that's all. So, let's do some TDD with PostgreSQL, shall we? Note: this article is neither an endorsement nor even a criticism of TDD, it is merely a demonstration of how easy it is to do such things with PostgreSQL. ## The Problem Suppose we have a schema: ```sql create table devices ( device_id int generated always as identity primary key, name text ); create table measurements ( timestamp timestamp not null, device_id int not null references devices(device_id), primary key(timestamp, device_id), value numeric not null ); ``` We want to write some functionality that will have the following parameters: - Period (starting and ending timestamps). - Time interval. - Device. > > The result will be a **cumulative sum** for a device and for each period between start and end, as divided by the interval parameter. > Sounds good? Let's totally do it, it will be fun, I promise. ## Database Setup First, we need a little schema setup to make it a little bit more testable. Unit tests, by definition, must not interfere with each other in any shape or form and they must be able to run in parallel. That means that tests will have their own transactions, and any test data inserted will be rolled back at the end of each test. However, inserting test data into a relational database can be a bit tricky.
Usually, tables reference some other tables that reference some other tables too, and before you know it - in order to insert a few test records - we must insert data into all of the tables in the database. That is certainly possible, but still inconvenient and tedious. So we don't want to do that. Luckily, PostgreSQL offers a simple solution to this. In our example, we have the table `measurements` that references the `devices` table. This relation is checked immediately: the moment we insert a new measurement, PostgreSQL checks whether that device even exists in the database, to keep data integrity intact. We can change that check to be performed at the end of the transaction instead. And since the transaction in our unit tests will be rolled back anyway, that allows us to insert some test data safely without touching a dozen other tables that our tests aren't concerned with. To enable this deferred check, the reference has to be created with the `deferrable initially deferred` declaration: ```sql create table measurements ( timestamp timestamp not null, device_id int not null references devices(device_id) deferrable initially deferred, primary key(timestamp, device_id), value numeric not null ); ``` Note, there is another option too: `deferrable initially immediate` or simply `deferrable`. That means we can tell the running transaction to defer all reference checks until the end of the transaction with the statement `set constraints all deferred` ([docs](https://www.postgresql.org/docs/current/sql-set-constraints.html)).
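As a quick illustration of that second variant, a transaction can defer all deferrable checks explicitly. This is only a sketch reusing the `measurements` table above, with a made-up device id (42) that does not exist:

```sql
begin;

-- with "deferrable initially immediate" (or plain "deferrable") constraints,
-- checks still run per statement unless we defer them for this transaction:
set constraints all deferred;

-- this measurement references a device that does not exist yet; the FK check
-- now waits until commit/rollback instead of failing immediately
insert into measurements (timestamp, device_id, value)
values ('2021-01-01 00:00:00', 42, 1.0);

rollback;  -- the deferred check never fires, and no data is kept
```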
If your table is already created then you'll have to recreate the constraint: ```sql begin; alter table only measurements drop constraint measurements_device_id_fkey; alter table only measurements add constraint measurements_device_id_fkey foreign key (device_id) references devices(device_id) deferrable initially deferred; end; ``` Or, if you want to do that for the entire database because you had an architect or ORM who wasn't aware of this feature, you can simply execute this script: ```sql do $$ declare _table text; _fk text; _def text; begin for _table, _fk, _def in ( select conrelid::regclass, conname, pg_get_constraintdef(oid) from pg_constraint where contype = 'f' and condeferrable is false and connamespace = 'public'::regnamespace ) loop raise info 'setting fk % on table % to deferrable', _fk, _table; execute(format('alter table only %s drop constraint %s', _table, _fk)); execute(format('alter table only %s add constraint %s %s deferrable initially deferred', _table, _fk, _def)); end loop; end; $$; ``` That will do it. And one more little thing: I usually like to create a special schema for unit tests called simply: `test`. And now we're ready. ## TDD Ok, let's first create a fresh SQL file, add an empty test and then execute it immediately: ```sql create or replace procedure test.cumulative_sum() language plpgsql as $$ begin -- arrange -- act -- assert rollback; end; $$; call test.cumulative_sum(); ``` With this approach, when we execute the file in the editor, our changes to the test are applied, and the test is immediately executed. This allows for an extremely fast test loop.
Now, first, let's arrange some data: ```sql create or replace procedure test.cumulative_sum() language plpgsql as $$ begin -- arrange insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:02:00', 0, 0.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:03:00', 0, 2.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:07:00', 0, 2.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:08:00', 0, 3.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:11:00', 0, 3.0); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:12:00', 0, 4.0); -- act -- assert rollback; end; $$; call test.cumulative_sum(); ``` This will add some measurements to a non-existing device (id = 0). If we execute our hypothetical calculation for this device for timestamps between 2021-01-01 00:00:00 and 2021-01-01 00:15:00 at 5-minute intervals, we should get the following results: | timestamp | sum | | ----------- | ----: | | `2021-01-01 00:05:00` | **3** | | `2021-01-01 00:10:00` | **9** | | `2021-01-01 00:15:00` | **16** | You can use Excel or a calculator to verify the validity of these cumulative sum calculations. Fine, now that we know what we must get, we can add act and assertion parts. First, we will add the act part, in which we will call our non-existing function.
Since we need to assert results multiple times (count and for each row), we can put the results into a temporary table: ```sql create or replace procedure test.cumulative_sum() language plpgsql as $$ begin -- arrange insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:02:00', 0, 0.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:03:00', 0, 2.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:07:00', 0, 2.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:08:00', 0, 3.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:11:00', 0, 3.0); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:12:00', 0, 4.0); -- act create temp table result on commit drop as select * from cumulative_sum('2021-01-01 00:00:00', '2021-01-01 00:15:00', '5 minutes', 0); -- assert rollback; end; $$; call test.cumulative_sum(); ``` This, of course, will fail, because we haven't written this `cumulative_sum` function yet. But, before we do that, let's also add the assertion part to verify our results.
Luckily for us, PostgreSQL supports assertions and [assert statements](https://www.postgresql.org/docs/current/plpgsql-errors-and-messages.html#PLPGSQL-STATEMENTS-ASSERT): ```sql create or replace procedure test.cumulative_sum() language plpgsql as $$ begin -- arrange insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:02:00', 0, 0.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:03:00', 0, 2.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:07:00', 0, 2.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:08:00', 0, 3.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:11:00', 0, 3.0); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:12:00', 0, 4.0); -- act create temp table result on commit drop as select * from cumulative_sum('2021-01-01 00:00:00', '2021-01-01 00:15:00', '5 minutes', 0); -- assert assert (select count(*) from result) = 3, 'Expected 3 rows, got ' || (select count(*) from result)::text; assert (select sum from result where timestamp = '2021-01-01 00:05:00') = 3, 'Expected 3, got ' || (select sum from result where timestamp = '2021-01-01 00:05:00')::text; assert (select sum from result where timestamp = '2021-01-01 00:10:00') = 9, 'Expected 9, got ' || (select sum from result where timestamp = '2021-01-01 00:10:00')::text; assert (select sum from result where timestamp = '2021-01-01 00:15:00') = 16, 'Expected 16, got ' || (select sum from result where timestamp = '2021-01-01 00:15:00')::text; rollback; end; $$; call test.cumulative_sum(); ``` That looks a bit ugly, but to be fair, most of those assertions were autocompleted by Copilot. This will still fail because we haven't written this `cumulative_sum` function yet. But at least we can extract input and output data **contracts** for this function now.
- The parameters will be these: ```sql from timestamp, to timestamp, interval interval, device_id int ``` - The resulting table will be this: ```sql table ( "timestamp" timestamp, sum numeric ) ``` So now, we know what our function should look like. We can add the first prototype, just above our failing tests: ```sql create or replace function cumulative_sum( _from timestamp, _to timestamp, _interval interval, _device_id int ) returns table ( "timestamp" timestamp, sum numeric ) language sql as $$ select null::timestamp, null::numeric; $$; ``` Again, we've placed this `create or replace function cumulative_sum` above our failing tests, and again we are executing the entire file, which in turn gives an extremely fast TDD red-green-refactor loop. However, our tests are still in the red, since obviously, our newly created function is returning nonsense. So, let's refactor this: ```sql create or replace function cumulative_sum( _from timestamp, _to timestamp, _interval interval, _device_id int ) returns table ( "timestamp" timestamp, sum numeric ) language sql as $$ select p.period_to, sum(coalesce(m.sum, 0)) over (rows unbounded preceding) from ( select series as period_from, series + _interval as period_to from generate_series(_from, _to - _interval, _interval) series ) p left join lateral ( select sum(coalesce(value, 0)) as sum from measurements where timestamp > p.period_from and timestamp <= p.period_to and device_id = _device_id ) m on true $$; ``` And now, this seems to be correct, and our tests are green. We can continue improving and optimizing this function while our tests are green as much as we want without any fear that something will be broken.
Here is the final content of our work in a single file: ```sql create or replace function cumulative_sum( _from timestamp, _to timestamp, _interval interval, _device_id int ) returns table ( "timestamp" timestamp, sum numeric ) language sql as $$ select p.period_to, sum(coalesce(m.sum, 0)) over (rows unbounded preceding) from ( select series as period_from, series + _interval as period_to from generate_series(_from, _to - _interval, _interval) series ) p left join lateral ( select sum(coalesce(value, 0)) as sum from measurements where timestamp > p.period_from and timestamp <= p.period_to and device_id = _device_id ) m on true $$; create or replace procedure test.cumulative_sum() language plpgsql as $$ begin -- arrange insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:02:00', 0, 0.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:03:00', 0, 2.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:07:00', 0, 2.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:08:00', 0, 3.5); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:11:00', 0, 3.0); insert into measurements (timestamp, device_id, value) values ('2021-01-01 00:12:00', 0, 4.0); -- act create temp table result on commit drop as select * from cumulative_sum('2021-01-01 00:00:00', '2021-01-01 00:15:00', '5 minutes', 0); -- assert assert (select count(*) from result) = 3, 'Expected 3 rows, got ' || (select count(*) from result)::text; assert (select sum from result where timestamp = '2021-01-01 00:05:00') = 3, 'Expected 3, got ' || (select sum from result where timestamp = '2021-01-01 00:05:00')::text; assert (select sum from result where timestamp = '2021-01-01 00:10:00') = 9, 'Expected 9, got ' || (select sum from result where timestamp = '2021-01-01 00:10:00')::text; assert (select sum from result where timestamp = '2021-01-01 00:15:00') = 16, 'Expected 16, got ' ||
(select sum from result where timestamp = '2021-01-01 00:15:00')::text; rollback; end; $$; call test.cumulative_sum(); ``` ## Conclusion The arguments I hear all the time are as follows: - Testing in a database is impossible. No, it isn't. I just showed you how. It may be in other databases, but PostgreSQL isn't that *other* database. - Testing in a database is hard. No, it isn't. SQL may be hard if you haven't honed your skills. So, start learning. - Testing in a database is slow. No, it isn't. It is way faster than anything out there. Again, hone your skills. But what about running all tests in parallel, perhaps in a CI/CD pipeline, you might ask? Well, it's just a matter of a runner that will run all those parameterless procedures in parallel connections under some criteria. On my setup, that is the test schema. So if it is a procedure, it is in the test schema, and it doesn't have parameters - it's a test. Writing such a test runner would be really, really simple. In fact, that is precisely what I did using NodeJS, for my projects: [@vbilopav/pgmigrations](https://www.npmjs.com/package/@vbilopav/pgmigrations). Here is an example project and how it is used in the GitHub actions: [`teamserator/.github/workflows /build-and-test.yml`](https://github.com/vb-consulting/teamserator/blob/master/.github/workflows/build-and-test.yml) But in all fairness, it's so easy that anyone could do it.
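The discovery criterion described above (a parameterless procedure in the `test` schema) can be sketched as a single catalog query. This is an assumption about how such a runner might find its tests, not the actual `@vbilopav/pgmigrations` implementation:

```sql
-- list every parameterless procedure in the "test" schema;
-- a runner can then issue each generated call on its own parallel connection
select format('call %I.%I()', n.nspname, p.proname) as test_call
from pg_proc p
join pg_namespace n on n.oid = p.pronamespace
where n.nspname = 'test'
  and p.prokind = 'p'   -- procedures only, not functions
  and p.pronargs = 0;   -- no parameters
```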
vbilopav
1,907,789
Reconsider Using UUID in Your Database
Using UUIDs (Universally Unique Identifiers) as primary keys in databases is a common practice for us...
0
2024-07-03T08:12:16
https://dev.to/gisakaze/reconsider-using-uuid-in-your-database-3c4k
performance, development, database
Using UUIDs (Universally Unique Identifiers) as primary keys in databases is a common practice for us as developers, but this approach can have significant performance drawbacks. I am going to explore with you two major performance issues associated with using UUIDs as keys in your database tables. ### What are UUIDs? A UUID (Universally Unique Identifier) is a 128-bit value used to uniquely identify an object or entity on the internet. Among various versions, UUIDv4 is the most popular. Here's an example of a UUIDv4: ``` 123e4567-e89b-42d3-a456-426614174000 ``` UUIDs are like the star players in a football team – they stand out and are unique, but not always the best choice for every play. ### Problem 1: Insert Performance – The Fumble When a new record is inserted into a table, the primary key index must be updated to maintain optimal query performance. Indexes are constructed using the B+ Tree data structure, which requires rebalancing with each insertion to stay efficient. With UUIDs, the inherent randomness complicates this process, leading to significant inefficiencies. As your database scales, millions of nodes need rebalancing, drastically reducing insert performance when using UUID keys. **Example:** ``` CREATE TABLE players ( id UUID PRIMARY KEY, name VARCHAR(255) ); INSERT INTO players (id, name) VALUES ('123e4567-e89b-42d3-a456-426614174000', 'Kevin Lebron'); ``` _Tip_: Consider using UUIDv7 instead, as it has inherent ordering that simplifies indexing, like having a well-coordinated offensive line 😊! ### Problem 2: Higher Storage Requirements UUIDs consume much more storage compared to auto-incrementing integer keys. Auto-incrementing integers use 32 bits per value, whereas UUIDs use 128 bits – four times more per row. When stored in human-readable form (36 characters), a UUID consumes 288 bits, approximately 9 times more per row. This is like getting a penalty flag for excessive celebration. It's unnecessary and costly.
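The raw per-value sizes are easy to sanity-check on a PostgreSQL system (a sketch; it assumes PostgreSQL 13+, where `gen_random_uuid()` is built in, and uses `pg_column_size` to report the byte footprint of a single value):

```sql
-- a native uuid costs 16 bytes, an int 4 bytes, and the human-readable
-- 36-character text form costs noticeably more than either
select pg_column_size(gen_random_uuid())       as uuid_bytes,  -- 16
       pg_column_size(1::int)                  as int_bytes,   -- 4
       pg_column_size(gen_random_uuid()::text) as text_bytes;  -- typically 37
```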
**Example:** ``` CREATE TABLE players ( id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255) ); INSERT INTO players (name) VALUES ('Kevin Lebron'); ``` Let’s simulate the impact with two tables: - Table 1: Contains 1 million rows with UUIDs. - Table 2: Contains 1 million rows with auto-incrementing integers. **Results**: - **Total table size**: The UUID table is about 2.3 times larger than the integer table. - **ID field size**: An individual UUID field requires 9.3 times more storage space than an integer field. - **ID column size**: Excluding other attributes, the UUID column is 3.5 times larger than the integer column. Using UUIDs is like getting hit with repeated penalty flags – your storage requirements balloon unnecessarily, impacting performance. ### Conclusion While UUIDs are excellent for ensuring uniqueness, they present significant scalability challenges. The performance issues discussed are more noticeable at scale, so for smaller applications, the impact might be minimal. However, it is crucial to understand these implications and design your database accordingly. **Remember**: In both football and databases, it's all about making the right plays at the right time! If you found this article helpful, please share, and connect with me! **_References_** [What is a UUID, and what is it used for?](https://www.cockroachlabs.com/blog/what-is-a-uuid/) [The Problem with UUID](https://www.youtube.com/watch?v=a-K2C3sf1_Q) [B Trees and B+ Trees. How they are useful in Databases](https://www.youtube.com/watch?v=aZjYr87r1b8)
gisakaze
1,909,837
The Power of Full Project Context using LLM's
I've tried integrating RAG into the DevoxxGenie plugin, but why limit myself to just some parts found...
0
2024-07-03T08:10:25
https://dev.to/stephanj/the-power-of-full-project-context-using-llms-463c
devoxxgenie, claudeai, idea, intelli
I've tried integrating RAG into the DevoxxGenie plugin, but why limit myself to just some parts found through similarity search when I can go all out? > RAG is so June 2024 😂 Here's a mind-blowing secret: most of the latest features in the Devoxx Genie plugin were essentially 'developed' by the latest Claude 3.5 Sonnet large language model using the entire project code base as prompt context 🧠 🤯 It's like having an expert senior developer guiding the development process, suggesting 100% correct implementations for the following Devoxx Genie features: - Allow a streaming response to be stopped - Keep selected LLM provider after settings page - Auto complete commands - Add files based on filtered text - Show file icons in list - Show plugin version number in settings page with GitHub link - Support for higher timeout values - Show progress bar and token usage bar I've rapidly stopped my OpenAI subscription and gave my credit card details to Anthropic... ## Full Project Context > A Quantum Leap Beyond GitHub Copilot Imagine having your entire project at your AI assistant's fingertips. That's now a reality with the latest version of the Devoxx Genie IDEA plugin together with cloud based models like Claude Sonnet 3.5. BTW How long will it take until we can do this with local models?! ## Add full project to prompt The latest version of the plugin allows you to add the full project to your prompt, your entire codebase now becomes part of the AI's context. This feature offers a depth of understanding that traditional code completion tools can only dream of. 
![Full Project Context](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zjapt9yhylw5692gq7xk.jpg) ## Smart Model Selection and Cost Estimation The language model dropdown is not just a list anymore, it's your 'compass' for smart model selection 🤩 👇🏼 - See available context window sizes for each cloud model - View associated costs upfront - Make data-driven decisions on which model to use for your project ![Smart Model DropDown](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ikg0q81jr411jsqivt2m.jpg) ## Visualizing Your Context Usage Leverage the prompt cost calculator for precise budget management: - Track token usage with a progress bar - Get real-time updates on how much of the context window you're using Calculate token cost with Claude Sonnet 3.5 ![Claude Sonnet 3.5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o6u8kv9csjhl6hiwj70h.jpg) Calculate cost with Google Gemini 1.5 Flash ![Gemini 1.5 Flash](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sd05dj2q61eune0drg0d.jpg) ![Project Added](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p9a0vsxjb7ypyf1zppcb.jpg) ## Cloud Models Overview Via the plugin settings pages you can see the "Token Cost & Context Window" for all the available cloud models. In a near future release you will be able to update this table. I should probably also support the local models context windows... #PullRequestsAreWelcome ![Cloud Models Overview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qwgiz7qkz3b8kgdyjvqq.jpg) ## Handling Massive Projects? "But wait, my project is HUGE!" you might say 😅 Fear not. We've got options: ### Leverage Gemini's Massive Context: Gemini's colossal 1 million token window isn't just big, it's massive. We're talking about the capacity to ingest approximately 30,000 lines of code in a single prompt. That's enough to digest many codebases, from the tiniest scripts to some decent big projects. But if that's not enough you have more options... 
BTW Google will be releasing 2M and even 10M token windows in the near future. ### Smart Filtering: The new "Copy Project" plugin settings panel lets you - Exclude specific directories - Filter by file extensions - Remove JavaDocs to slim down your context ![Smart Filtering](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekwilqkm2lmb659zjl14.jpg) ### Selective Inclusion Right-click to add only the most relevant parts of your project to the context and/or clipboard. ![Right Click Options](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cdvikarhvwbov100wmrt.jpg) You can also copy your project to the clipboard, allowing you to paste your project code into an external chat window. This is a useful technique for sharing and collaborating on code 👍🏼 ## The Power of Full Context: A Real-World Example The DevoxxGenie project itself, at about 70K tokens, fits comfortably within most high-end LLM context windows. This allows for incredibly nuanced interactions – we're talking advanced queries and feature requests that leave tools like GitHub Copilot scratching their virtual heads! ## Conclusion: Stepping into the Future of Development With Claude 3.5 Sonnet, Devoxx Genie isn't just another developer tool... it's a glimpse into the future of software engineering. As we eagerly await Claude 3.5 Opus, one thing is clear: we're witnessing a paradigm shift in AI-augmented programming. > Alan Turing, were he here today, might just say we've taken a significant leap towards AGI (for developers with Claude Sonnet 3.5) Welcome to the cutting edge of AI-assisted development - welcome to DevoxxGenie 🚀 [X Twitter](https://x.com/devoxx.genie) - [GitHub](https://github.com/devoxx/DevoxxGenieIDEAPlugin) - [IntelliJ MarketPlace](https://plugins.jetbrains.com/plugin/24169-devoxxgenie)
stephanj
1,909,834
ASTRO JS | P2 | SSG and SSR
Hello my fellow web developers, today i will be continuing my astro js series with the second part...
0
2024-07-03T08:08:53
https://dev.to/shubhamtiwari909/astro-js-p2-ssg-and-ssr-2l2l
html, javascript, webdev, tutorial
Hello my fellow web developers, today I will be continuing my Astro JS series with the second part, where we are going to cover some more topics: SSG, SSR, and Hybrid mode. These are mostly about how our pages are rendered and in which mode. - [What is SSG?](#ssg) - [What is SSR?](#ssr) - [What is Hybrid Mode?](#hybrid-mode) - [Enabling SSG, SSR and Hybrid (SSG + SSR) mode](#render-mode) - [SSG and SSR on single page](#ssg-ssr-single) - [Data fetching](#data-fetching) <a id="ssg"></a> ## What is SSG? * Definition: SSG involves generating the HTML for your pages at build time, meaning the pages are pre-rendered as static HTML files. * Use Case: Ideal for content that doesn't change often, such as blogs, documentation sites, and marketing pages. ### Benefits: 1. Fast performance: Since the content is pre-rendered, it can be served quickly from a CDN. 2. Lower server load: No need to generate content on-the-fly. 3. Improved SEO: Search engines can easily index the static content. <a id="ssr"></a> ## What is SSR? * Definition: SSR involves generating the HTML for your pages on-the-fly for each request. This is done on the server before sending the content to the client's browser. * Use Case: Suitable for dynamic content that changes frequently, such as user dashboards, e-commerce sites, and personalized content. ### Benefits: 1. Fresh content: Always serves the latest content, as it's generated per request. 2. SEO friendly: Since the content is rendered on the server, search engines can index it. 3. Reduced client-side workload: Less JavaScript is needed on the client side compared to client-side rendering. <a id="hybrid-mode"></a> ## What is Hybrid mode? * Definition: Hybrid rendering combines both SSG and SSR within the same project. You can choose which pages or parts of your site are statically generated and which are server-rendered. * Use Case: Useful for websites that have both static and dynamic content.
For example, a blog with static posts but a dynamic user profile page. ### Benefits: 1. Flexibility: Allows you to optimize each part of your site according to its needs. 2. Performance: Static pages can be served quickly while dynamic pages provide up-to-date content. 3. SEO and user experience: Balances the benefits of both SSG and SSR, ensuring both good SEO and a dynamic user experience where needed. <a id="render-mode"></a> ## Enabling SSG, SSR and Hybrid (SSG + SSR) mode * Update the astro.config.mjs file to enable SSG (default), SSR, or Hybrid mode. You need to specify the SSR adapter you want to use in the case of SSR and Hybrid. * The primary purpose of SSR adapters is to bridge the gap between Astro and the target server environment, allowing developers to deploy their SSR-capable Astro applications to platforms like Vercel, Netlify, Node.js servers, and others. ```js import { defineConfig } from 'astro/config'; import vercel from '@astrojs/vercel/serverless'; export default defineConfig({ output: 'server', // server - enable SSR, hybrid - enable both SSR and SSG, static - SSG(default) adapter: vercel(), }); ``` <a id="ssg-ssr-single"></a> ## SSG and SSR on single page * If the mode is static or hybrid, adding a `prerender` export set to `false` makes that page opt out of pre-rendering, so its HTML is generated on the server. ```js --- export const prerender = false; --- <html> <body> <h1>Dynamic Page</h1> <p>This page is not statically generated.</p> </body> </html> ``` * If the mode is server, adding a `prerender` export set to `true` makes that page opt out of SSR, so its HTML is generated at build time just like SSG.
```js --- export const prerender = true; import Layout from "../layouts/Layout.astro"; import Bears from "../components/Bears"; import CardComponent from "../components/Card"; --- <Layout title="Welcome to Astro."> <div class="py-10"> <h1 class="text-center text-3xl mb-6">Home page</h1> <Bears client:load /> <CardComponent client:load /> </div> </Layout> ``` <a id="data-fetching"></a> ## Data Fetching ### SSG * In SSG, we use a method called getStaticPaths to generate the pages for all possible routes at build time; routes other than these will show the 404 page. ```js --- export const prerender = true; import Layout from "../../layouts/Layout.astro"; import { getCollection, type CollectionEntry } from "astro:content"; export const getStaticPaths = async () => { const blogs = await getCollection("blog"); const paths = blogs.map((blog) => { return { params: { blogId: blog.slug, }, props: { blog, }, }; }); return paths; }; type Props = { blog: CollectionEntry<"blog">; }; const { blog } = Astro.props; const { Content } = await blog.render(); --- <Layout title={blog?.data.title}> <div class="bg-slate-900 text-slate-100 grid place-items-center min-h-screen pt-16" > <h1 class="text-3xl mb-4">{blog?.data.title}</h1> <div class="px-10 prose lg:prose-xl text-slate-100"> <Content /> </div> </div> </Layout> ``` * In this example, we are fetching some blogs written in markdown with the getCollection method, which fetches the markdown entries of a collection defined with the defineCollection method. * Then we are mapping over the blogs array and returning a params value (the routes to be generated at build time) and a props value holding the individual blog data. At the end, we return the paths variable itself. * CollectionEntry type binds the blog data with the schema we have created for our collection, in this case, it is "blog".
* Finally, we are destructuring our blog using Astro.props and, from blog, we are destructuring Content from the blog.render() method, which helps in rendering the markdown content in the astro file, and it is used as a component like this `<Content />` ### SSR * In SSR, we can do the data fetching directly and use a try-catch block to handle exceptions and errors. ```js --- import Layout from "../../layouts/Layout.astro"; import { Image } from "astro:assets"; import type Blogs from "../../interfaces/blogs"; import fetchApi from "../../lib/strapi"; const { blogId } = Astro.params; let article: Blogs; try { article = await fetchApi<Blogs>({ endpoint: `blogs`, wrappedByKey: "data", wrappedByList: true, query: { populate: "*", "filters[slug][$eq]": blogId, }, }); } catch (error) { console.log("Error", error); return Astro.redirect("/404"); } if (!article) { return Astro.redirect("/404"); } --- <Layout title={article?.attributes.meta.title} description={article?.attributes.meta.description} > <section class="min-h-screen bg-slate-900 text-slate-100"> <div class="grid justify-center pt-20 px-10"> <div class="prose prose--md prose-invert"> <Image src={`${article?.attributes.image.data.attributes.url}`} alt={article?.attributes.image.data.attributes.alternativeText} inferSize loading="lazy" class="object-cover max-h-[400px]" /> <h1 class="text-3xl mb-4 text-center">{article?.attributes.title}</h1> <p class="mb-4 text-base">{article?.attributes.body}</p> </div> </div> </section> </Layout> ``` * This is an example of fetching blog data from Strapi, which is a headless CMS. * Firstly, we are going to get the params using Astro.params, as we are going to use the page slug to find the individual blog. * Then, using a custom fetchApi method, we get the blog data and store it in an article variable. * If the article is not there, it will redirect to the 404 page. * Finally, we have mapped our data to the UI.
That's it for this post, in part 3, we will be covering Astro api references You can contact me on - Instagram - https://www.instagram.com/supremacism__shubh/ LinkedIn - https://www.linkedin.com/in/shubham-tiwari-b7544b193/ Email - shubhmtiwri00@gmail.com You can help me with some donation at the link below Thank you👇👇 https://www.buymeacoffee.com/waaduheck Also check these posts as well {% link https://dev.to/shubhamtiwari909/button-component-with-cva-and-tailwind-1fn8 %} {% link https://dev.to/shubhamtiwari909/microfrontend-react-solid-vue-333b %} {% link https://dev.to/shubhamtiwari909/codium-ai-assistant-for-devs-57of %} {% link https://dev.to/shubhamtiwari909/zustand-a-beginners-guids-fh7 %}
shubhamtiwari909
1,909,832
6 Most Important Used Design Principle for UX Design.
UX Principles The user comes first: One of the most important principles in UX design is...
0
2024-07-03T08:07:30
https://dev.to/iam_divs/6-most-important-used-design-principle-for-ux-design-1cln
webdev, javascript, ui, ux
UX Principles

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rhdik9y72vhjtxkb5cfe.png)

1. The user comes first: One of the most important principles in UX design is understanding that the user always comes first. UX design isn’t UX without user research.
2. Useful, usable, and used: When designing a great user experience, your product or service should be “useful, usable, and used.”
3. Design for relevance: While the concept of use is crucial to meaningful user experiences, establishing relevance is equally important.
4. Embrace accessibility: This includes catering to the preferences and needs of people with disabilities. For example, adding various text size options on your website.
5. Maintain consistency and familiarity: This includes:
   => Keeping your design elements consistent in how they look and function across all products, platforms, screens, and venues.
   => Consistently meeting your users' expectations of how your product or service should behave, based on their previous experience with your brand or with similar products on the market.
iam_divs
1,909,831
The huge gap between Innovation & Accessibility
Here's why we still have a lot to do for improving accessibility in life: A blind person can take a...
0
2024-07-03T08:06:20
https://dev.to/abdermaiza/the-huge-gap-between-innovation-accessibility-22ca
a11y, webdev, beginners
Here's why we still have a lot to do to improve accessibility in everyday life:

- A blind person can take a picture of his kid playing in a park with a smartphone, but he probably can't book tickets on his own, as ticketing interfaces are very inaccessible these days.
- We can now pay contactless with our smartphones, but a blind person can't log in to their bank and make a transfer online.
- We can scan the menu of an Italian restaurant and have it translated into French and read aloud, but a blind person can't register their children at the school canteen.
- You can orient yourself very precisely with your GPS, but you can't access connection information in the public transit application.
abdermaiza
1,909,830
Published my first npm package: react-popupify!
What is react-popupify? React-Popupify is a simple and easy to use popup component...
0
2024-07-03T08:06:00
https://dev.to/viditkushwaha/react-popupify-simplify-popup-management-in-your-react-apps-4109
react, npm, opensource, javascript
<img src="https://i.ibb.co/m6t3dQc/download-1.gif" alt="GIF">

## What is `react-popupify`?

[`React-Popupify`](https://www.npmjs.com/package/react-popupify) is a simple and easy-to-use popup component library for React applications. It manages popups through a singleton pattern: a central, global popup manager handles every popup, so individual components don't have to.

JavaScript's default alert, confirm, and prompt dialogs have long served as a standard for simplicity. However, they suffer from several UI inconsistencies, especially when integrated into modern web applications with custom styling and user interfaces. These inconsistencies can lead to a jarring user experience, disrupting the overall aesthetic and feel of the application.

To address these issues, react-popupify comes into play: a React library designed to provide consistent and customizable popup dialogs. With features like controlled visibility, customizable behavior, transitions, and focus management, the library addresses many common needs when working with popups and modals, ensuring a smooth and accessible user experience.

## Key Features

- **Easy to Use**: Simple API for quick integration.
- **Highly Customizable**: Customizable transitions, styles, and behavior.
- **Custom Animations**: Supports various animation types such as `bounce`, `flip`, `zoom`, and `fade`.
- **Auto-Close**: Option to auto-close the popup after a specified duration.
- **Event Handlers**: Callbacks for when the popup is entered and exited.
- **Esc Key and Outside Click**: Configurable options to close the popup using the escape key or clicking outside.
- **Component-Based**: Built with modern React principles and practices.

## Installation

You can install react-popupify via npm:

```bash
npm install react-popupify
```

## How It Works

Using react-popupify is straightforward.
Here's a quick guide to get you started:

1 **Adding CSS**

```jsx
import "react-popupify/dist/bundle.css";
```

2 **Create Popup Components**

```jsx
import React from 'react';
import { Popup } from 'react-popupify';

const CustomPopup = () => {
  return (
    <Popup
      popupId="customPopupId"
      animation="bounce"
      open={false}
      closeOnEscape={true}
      closeOnOutsideClick={true}
      closeButton={true}
    >
      Say hello to React-Popupify!!
    </Popup>
  );
}

export default CustomPopup
```

3 **Import CustomPopup into the root of your project**:

```jsx
import React from 'react';
import CustomPopup from './CustomPopup';

const App = () => {
  return (
    // render the custom popup once at the root of the project
    <CustomPopup />
  );
}

export default App
```

4 **Display Popups**

Use the `showPopup` instance to show popups:

```tsx
import React from 'react';
import { showPopup } from 'react-popupify'

const PopupButton = () => {
  const popup = () => showPopup('customPopupId', { open: true })

  return (
    <div>
      <button onClick={popup}>Show Popup!</button>
    </div>
  );
}

export default PopupButton
```

## Contributing

I welcome contributions to [react-popupify](https://www.npmjs.com/package/react-popupify). If you have ideas for new features, improvements, or bug fixes, feel free to open an issue or submit a pull request on GitHub.

Check out the [GitHub repository](https://github.com/Vidit-Kushwaha/react-popupify) for more details, and don't forget to ⭐ star the project if you like it!

Happy coding! 🚀
viditkushwaha
1,909,829
Custom Mailer Boxes That Wow Your Customers
Packaging has become a very important part of the marketing mix for brands in this age of increased...
0
2024-07-03T08:03:37
https://dev.to/exam-dumsp/custom-mailer-boxes-that-wow-your-customers-52p4
mailerboxes, custommailerboxeswholesale, customprintedmailerboxes, custommailerboxeswithlogo
Packaging has become a very important part of the marketing mix for brands in this age of increased competition, and it directly affects customer satisfaction and brand recognition. Among the extensive array of packaging solutions available, **custom mailer boxes** have been widely adopted by brand owners to make a statement while ensuring product security during delivery.

## Rise Of Mailer Boxes

In these digital times, when e-commerce rules supreme over high-street stores, the unboxing moment is without doubt a key factor for shoppers. One advantage of custom mailer boxes is that they give companies the chance to make an impression on customers from the moment they receive the package to the moment they unpack it.

## Benefits Of Mailer Boxes

### Brand Identity

Customized **mailer boxes** not only give brands a platform to express their identity and character, but also help them stand out from ordinary packaging.

### Protection

Besides being attractive to consumers, [**custom box mailers**](https://customboxesmarket.com/custom-mailer-boxes/) insure products against damage during the delivery process, avoiding the extra expense of lost stock.

### Sustainability

Many businesses use ecological materials such as paper or cardstock when designing custom mailer boxes, which can attract environmentally conscious consumers.

### Cost-Effectiveness

Though **mailer boxes custom** exude a luxury look, manufacturers can produce them at an affordable price, keeping them a practical packaging option for businesses of any size.

## Designing Your Mailer Boxes

The potential in crafting individualized mailer boxes is limitless. From picking the right materials to adding something spectacular to the packaging, every element shapes the impression a buyer takes away. Here are some key considerations:

### Material Selection

Custom mailer boxes can be fabricated from various materials, including corrugated cardboard, kraft paper, and **custom rigid mailers**. Each material has its pros and cons in terms of strength, appearance, and sustainability.

### Printing Techniques

Whether you use offset printing, digital printing, or flexography, choosing the right technique is the first step toward making your boxes more beautiful. Try using the whole palette of colors, including vivid colors, metallic shades, or embossing for a 3D effect.

### Structural Design

Beyond appearance, the structural design of your boxes from [**Custom Boxes Market**](https://customboxesmarket.com/) can improve functionality and usability at the same time. Consider features like tear strips, resealable tabs, or custom inserts to speed up the unboxing process and boost overall customer satisfaction.

## Making Your Boxes Stand Out

So you have created really special mailer boxes: how do you ensure that they stay in customers' minds for a long time?

- **Personalization:** Make sure that your custom mailer boxes are in line with your core brand message and the target audience you want to address. Consider including individual messages, related products, or a special discount to build personal relationships with customers.
- **Visual Impact:** With the help of bright pictures, strong colors, and distinctive forms, the viewer not only looks but becomes involved. The custom mailer box is not just a means of conveying your goods but a dynamic, tactile extension of your brand.
- **Interactive Elements:** Encourage engagement by building interactive elements into the design of **printed mailer boxes**: for example, QR links to exclusive material, interactive riddles, or a small reward such as a gift with purchase.
- **Unforgettable Unboxing Experience:** Pay close attention to every detail of the unboxing process, from the moment your customer receives the parcel to the moment your product is revealed. Items like tissue paper, branded stickers, or thank-you notes can make the experience memorable and lasting.

## Conclusion

**Custom mailer boxes** give businesses an extra packaging option that is more than a solution that merely fulfils a function. By using design and innovation as tools, you can create mailers that not only take care of your products but also stay etched in your customers' minds. Whether you are the entrepreneur behind a small boutique or a big brand company, consider the superior branding power custom mailer boxes can bring by offering a tailored packaging experience that resonates with your customers every time they receive your product through a [delivery](https://dev.to/).
exam-dumsp
1,909,828
Automating User Management on Linux with Bash Scripting
Managing user accounts and permissions on a Linux system is a fundamental task for system...
0
2024-07-03T08:01:39
https://dev.to/madeblaq/automating-user-management-on-linux-with-bash-scripting-1i6a
Managing user accounts and permissions on a Linux system is a fundamental task for system administrators and DevOps teams. Automating this process can streamline operations and ensure consistency across environments. In this guide, we'll walk through how to use a Bash script (`create_users.sh`) to automate user and group management efficiently.

**Overview of create_users.sh**

The `create_users.sh` script is designed to read user and group information from an input file (`input_file.txt`), create user accounts with the specified groups, generate secure passwords, and log all actions for audit purposes. This script is particularly useful in environments where user provisioning and access control need to be automated.

**Prerequisites**

Before you begin, ensure you have the following:

- Access to a Linux-based system (e.g., Ubuntu, CentOS)
- MobaXterm for SSH operations (optional, for remote execution)
- Basic understanding of Bash scripting and command-line operations

**Step-by-Step Guide**

#### 1. Clone the Repository

First, clone the GitHub repository containing the `create_users.sh` script:

```bash
git clone https://github.com/Madeblaq/Bash-script.git
```

#### 2. Prepare the Input File

```bash
subomi;sudo,dev,www-data
chukwu;dev,www-data
tolani;sudo
hassan;sudo,www-data
```

#### 3. Upload Files to Your Linux System Using MobaXterm

- Open MobaXterm and connect to your Linux server via SSH.
- Use the SFTP panel on the left side to upload `create_users.sh` and `input_file.txt` to your Linux server. Drag and drop the files into the desired directory.

#### 4. Convert Line Endings and Make the Script Executable

Connect to your Linux server via SSH using MobaXterm and navigate to the directory where `create_users.sh` is located. Then, ensure proper line endings with dos2unix and make the script executable:

```bash
sudo apt-get install dos2unix
dos2unix create_users.sh
chmod +x create_users.sh
```

#### 5. Execute the Script

Run the `create_users.sh` script with sudo to create users and groups based on `input_file.txt`:

```bash
sudo ./create_users.sh input_file.txt
```

#### 6. Verification

- Check the log file:

```bash
sudo cat /var/log/user_management.log
```

- Check the password file:

```bash
sudo cat /var/secure/user_passwords.txt
```

- Verify user creation:

```bash
grep 'USERNAME' /etc/passwd
```

Replace `USERNAME` with one of the usernames from the input file.

- Check group membership:

```bash
groups USERNAME
```

#### 7. Conclusion

By leveraging `create_users.sh`, you streamline user management tasks on your Linux system, enhancing operational efficiency and security. This script is not only a time-saver but also promotes consistency and accuracy in user provisioning, which is crucial for maintaining a well-managed IT infrastructure.

#### 8. Learn More About the HNG Internship

This project is part of the [HNG Internship](https://hng.tech), aimed at developing practical skills in software development. Interested in hiring from the HNG Internship? Explore opportunities [here](https://hng.tech/hire) or discover premium services [here](https://hng.tech/premium).

This blog post provides a technical guide on setting up and using `create_users.sh` for automated user management on Linux. The step-by-step format and copyable code snippets make it easy to follow and implement. Adjust the paths and commands to your specific setup, and feel free to enhance the content with visuals or additional technical insights as needed.

Thank you for reading! If you have any questions or feedback, feel free to leave a comment below.
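For context, here is a minimal, hypothetical sketch of the parsing loop at the heart of a script like `create_users.sh`. It only echoes what it would do; the real script in the repository calls `useradd`/`groupadd`, sets passwords, logs actions, and must run as root.

```shell
#!/bin/bash
# Hypothetical sketch: parse "username;group1,group2" lines and record
# the user/group pairs. A real create_users.sh would call useradd,
# groupadd, and usermod, and log each action, instead of echoing.
parsed=""
while IFS=';' read -r username groups; do
  username="$(echo "$username" | xargs)"   # trim surrounding whitespace
  groups="$(echo "$groups" | xargs)"
  [ -n "$username" ] || continue           # skip blank lines
  # Split the comma-separated group list and handle each group.
  IFS=',' read -ra group_list <<< "$groups"
  for group in "${group_list[@]}"; do
    group="$(echo "$group" | xargs)"
    echo "would add $username to $group"
    parsed="$parsed$username:$group "
  done
done <<'EOF'
subomi;sudo,dev,www-data
tolani;sudo
EOF
echo "$parsed"
```

Piping each field through `xargs` is a cheap way to trim whitespace; the per-command `IFS=` assignments keep the field separators local to each `read`.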
madeblaq
1,909,827
Top PR Firm in London: IMCWire's Proven Strategies
Expertise and Experience Our team of PR professionals brings a wealth of experience and industry...
0
2024-07-03T08:00:33
https://dev.to/vondacleveland/top-pr-firm-in-london-imcwires-proven-strategies-l4d
webdev
**Expertise and Experience**

Our team of PR professionals brings a wealth of experience and industry knowledge to the table. We stay ahead of industry trends and continuously refine our strategies to deliver the best possible outcomes for our clients. Our expertise spans various sectors, allowing us to provide specialized insights and [best PR agencies in the US](https://imcwire.com/) solutions.

**Personalized Approach**

At IMC Wire, we understand that no two clients are the same. We take the time to get to know your business, your goals, and your challenges. This personalized approach allows us to tailor our services to meet your specific needs, ensuring that you receive the most effective and relevant solutions.

**Proven Results**

Our track record speaks for itself. We have helped countless clients achieve their PR objectives, from increasing [PR Firm in London](https://imcwire.com/) brand awareness to managing complex crises. Our success stories are a testament to our commitment to excellence and our ability to deliver tangible results.

**Comprehensive Service Offerings**

We offer a full suite of PR services, making us a one-stop shop for all your communication needs. Whether you need media relations, crisis management, social media strategy, content creation, or event management, we have the expertise and resources to deliver.

https://imcwire.com/
vondacleveland
1,909,825
The Art of Packaging Custom Chocolate Boxes that Delight and Impress
How Custom Shoe Boxes Create A Memorable First Impression To the extent that first...
0
2024-07-03T07:55:44
https://dev.to/exam-dumsp/the-art-of-packaging-custom-chocolate-boxes-that-delight-and-impress-27f0
customshoebox, customshoeboxeswholesale, wholesaleshoeboxes, customshoeboxes
# How Custom Shoe Boxes Create A Memorable First Impression

In a world where first impressions are critical and consumers prefer products they are familiar with and can metaphorically hold in their hands, packaging means everything. **Custom shoe boxes** step up as the unseen identity of the footwear industry, protecting the product and carrying the brand.

## Essence Of Shoe Boxes

Custom shoe boxes are exactly what the name suggests: perfect-fit boxes designed with the particular needs of shoe manufacturers, retailers, and customers in mind. Beyond being mere boxes, they are embodiments of your brand, carrying your foot-crafting genius as they travel the world.

### Crafting An Identity

A [**wholesale custom shoe box**](https://www.premiumcustomboxes.com/custom-shoe-boxes/) that is the same from one company to another would hardly work. Personalization lets a brand express its character, purpose, or worldview through the packaging design. Selecting the material and design is where your imagination starts: you can add embossing, change the layout, add a preferred ribbon or pattern, and finish with the desired touches.

## Anatomy Of Shoe Boxes

### Materials

Depending on the brand, the materials used for shoe boxes vary, yet they remain durable, stylish, and ecological. The broadest variety includes corrugated cardboard, plain cardboard, kraft paper, and rigid board, all suitable for protection and eco-friendliness.

### Design

The design of **custom socks boxes wholesale**, where creativity leads the way, also allows consumers to imagine themselves wearing personalized footwear. Whether you pick basic, straightforward lines with subtle branding or a complex design, it should deliver your brand message and appeal to your target group.

### Size and Shape

Uniqueness is the sine qua non of the footwear industry, so unique packaging will not raise any eyebrows here. Custom shoe boxes come in various sizes and forms, suiting everything from dainty ballet pumps to sturdy work boots. Correct sizing is not only a matter of appearance but also reduces damage during transit and logistics.

### Printing And Branding

To customize their brands, [Premium Custom Market](https://www.premiumcustomboxes.com/) clients use printing technology to innovate product labeling through packaging. Brand-specific shoe boxes carry marketing elements like the logo, tagline, and shoe description. Excellent printing techniques keep graphics as sharp and striking as possible for shelf attention.

### Importance Of Functionality

Although the aesthetics of custom **shoe packaging boxes** matter, the boxes must also work well: storage, transportation, and display all serve to keep shoes away from damage. Characteristics like solid corners, plush cushions, and reliably secured closures prevent scuffing in transit and position you for a winning start.

### Sustainability

At a time when the world is coming together to fight climate change, consumers and manufacturers across industries are increasingly adopting eco-friendly packaging. Custom shoe boxes can be sustainable too, with options made of recyclable material, biodegradable coating, and water-based ink. By selecting green packaging, manufacturers not only reap environmental gains but also attract customers who lead an ecological lifestyle.

### The Business Side

If you are a business owner looking for an expedited packaging method at lower cost, buy **custom shoe boxes wholesale**. Wholesale suppliers offer quality goods for bulk orders at below-ordinary prices, which helps brands place large or repeated orders.

### Beyond Shoes

Socks complete the outfit as much as footwear does. To further build brand recognition, footwear retailers also use **custom socks boxes with logo**. These boxes can be styled to match the shoe packaging, creating a distinctive brand experience from head to toe.

## Conclusion

In the retail industry, where revenue depends heavily on customer experience, even a simple act like packaging is critical: customers notice details. **Custom shoe boxes** not only protect your products but also serve as powerful brand ambassadors, leaving a lasting impression on consumers. Embracing novelty, convenience, sustainability, and wholesale opportunities will enable brands to improve their packaging game and navigate this changing market with [confidence](https://dev.to/).
exam-dumsp
1,908,941
3 Easy Steps to Setup Gmail Less Secure Apps(Django)
Introduction On your software development journey, you'll likely want to integrate email...
0
2024-07-03T07:52:09
https://dev.to/titusnjuguna/3-easy-steps-to-setup-gmail-less-secure-appsdjango-2eoe
emailintegration, gmail, lesssecureapps, webdev
## Introduction

On your software development journey, you'll likely want to integrate email to test event-driven email sending. Pre-deployment testing of this feature requires an email client to verify functionality. While you might initially think you need endless bandwidth for mass testing, that's not necessarily the case.

Gmail, a product of Google and one of the most popular email services, is a great option for email testing. I've personally found it highly effective for solving this issue over the past five years of development. In this article, I'll break down the three steps of Gmail integration with Django.

1. Log in to your Gmail account. In the top right corner, click your profile picture and then select the "Manage account" option.

![Gmail Profile Picture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x76jsg0zjdr62s2dben4.png)

2. Go to the search bar, search for "App Passwords", and click on the matching option:

![Search Bar](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2rusdn4gvp9b2alb330.png)

The information you need to provide will depend on your account's security settings. Once you pass all the security checks, you'll be redirected to a page where you can name your app and create it.

![create less secure apps](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pptzw8a863yayrfgfffe.png)

Afterward, a prompt will show the generated password. Copy the password to a secure location and KEEP IT SECRET.

![Less secure app passwords](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y6potmyzab78evxtgmfc.png)

3. Django Email Integration

In production environments, it's crucial to secure your passwords. A common approach is to use a .env file to store your email credentials.
Here's how to set it up:

- Create the following variables in your .env file:

```
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_USE_TLS = True
EMAIL_USE_SSL = False
EMAIL_PORT = 587
EMAIL_HOST_USER = 'your-gmail-email-address'
EMAIL_HOST_PASSWORD = 'password-generated-above-from-app-passwords'
```

- Open the project settings file and add the following settings:

```python
import os
from dotenv import load_dotenv

load_dotenv()

EMAIL_HOST = os.getenv('EMAIL_HOST')
EMAIL_USE_TLS = os.getenv('EMAIL_USE_TLS')
EMAIL_USE_SSL = os.getenv('EMAIL_USE_SSL')
EMAIL_PORT = os.getenv('EMAIL_PORT')
EMAIL_HOST_USER = os.getenv('EMAIL_HOST_USER')
EMAIL_HOST_PASSWORD = os.getenv('EMAIL_HOST_PASSWORD')
```

Congratulations! You have successfully set up Django to send emails. Thank you.
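One caveat with the settings above: `os.getenv` always returns strings, so `EMAIL_USE_TLS` ends up as the string `'True'`, and even the string `'False'` is truthy in Python. A small helper avoids that pitfall (the name `env_bool` is my own for this sketch, not part of Django or python-dotenv):

```python
import os

def env_bool(name, default=False):
    """Read an environment variable as a boolean.

    os.getenv returns strings, and bool('False') is True in Python,
    so values like EMAIL_USE_TLS need explicit conversion.
    """
    value = os.getenv(name)
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "yes", "on")

# Simulate values that load_dotenv() would have read from the .env file above.
os.environ["EMAIL_USE_TLS"] = "True"
os.environ["EMAIL_USE_SSL"] = "False"

EMAIL_USE_TLS = env_bool("EMAIL_USE_TLS")          # True
EMAIL_USE_SSL = env_bool("EMAIL_USE_SSL")          # False, not the truthy string "False"
EMAIL_PORT = int(os.getenv("EMAIL_PORT", "587"))   # ports should be ints as well
```

The same conversion concern applies to `EMAIL_PORT`, which Django expects as an integer.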
titusnjuguna
1,909,823
Data Science Essentials: Building a Strong Foundation
In today’s data-driven world, the demand for skilled data scientists is skyrocketing. Companies...
0
2024-07-03T07:47:46
https://dev.to/alex101112/data-science-essentials-building-a-strong-foundation-530g
ai, datascience
In today’s data-driven world, the demand for skilled data scientists is skyrocketing. Companies across various industries are seeking professionals who can analyze data, extract valuable insights, and drive decision-making processes. If you're looking to embark on a career in this exciting field, it's crucial to build a strong foundation in data science. This article will guide you through the essential steps to get started and introduce you to a data science job assistance program that can help you land your dream job.

## Understanding Data Science

Data science is an interdisciplinary field that combines statistics, computer science, and domain knowledge to extract meaningful information from data. It involves various processes, including data collection, cleaning, analysis, visualization, and interpretation. The ultimate goal of data science is to make data-driven decisions that can improve business outcomes and solve complex problems.

## Key Skills for Data Scientists

To become proficient in [data science](https://en.wikipedia.org/wiki/Data_science), you need to develop a range of skills:

1. Programming: Proficiency in programming languages such as Python or R is essential for data manipulation and analysis.
2. Statistics and Mathematics: A strong understanding of statistical methods and mathematical concepts is crucial for analyzing data and building models.
3. Data Wrangling: The ability to clean and preprocess data is fundamental, as raw data is often messy and incomplete.
4. Machine Learning: Knowledge of machine learning algorithms and techniques enables you to create predictive models and automate decision-making processes.
5. Data Visualization: Skills in data visualization tools like Tableau or Matplotlib help in presenting data insights in an easily understandable manner.
6. Domain Expertise: Understanding the specific industry or domain you are working in helps contextualize data and derive more relevant insights.
## Building Your Data Science Foundation

To build a strong foundation in data science, follow these steps:

1. Educational Background: While a formal degree in data science or a related field can be beneficial, many successful data scientists come from diverse educational backgrounds. Online courses, bootcamps, and certifications are excellent alternatives.
2. Hands-on Practice: Practical experience is vital. Work on projects, participate in competitions like Kaggle, and collaborate with others in the field.
3. Networking: Join data science communities, attend conferences, and connect with professionals on platforms like LinkedIn. Networking can open doors to job opportunities and mentorship.
4. Stay Updated: The field of data science is constantly evolving. Keep up with the latest trends, tools, and technologies by reading blogs, research papers, and industry news.

## Data Science Job Assistance Program

One of the biggest challenges for aspiring data scientists is finding the right job. This is where a [data science job assistance program](https://www.pickl.ai/course/data-science-bootcamp-online) can make a significant difference. These programs offer a range of services to help you secure a position in the industry, including:

- Resume Building: Experts help you craft a compelling resume that highlights your skills, experience, and accomplishments.
- Interview Preparation: Mock interviews and coaching sessions prepare you for technical and behavioral questions commonly asked in data science job interviews.
- Job Placement: Many programs have partnerships with leading companies, providing you with direct access to job openings and hiring managers.
- Networking Opportunities: Programs often organize networking events, webinars, and workshops where you can meet industry professionals and potential employers.
- Mentorship: Experienced data scientists mentor you throughout your job search, offering guidance, support, and valuable insights.
## Conclusion Building a strong foundation in data science requires dedication, continuous learning, and practical experience. By developing key skills and following a structured learning path, you can position yourself as a competitive candidate in the job market. Additionally, leveraging a data science job assistance program can provide the support and resources needed to land your dream job. Embrace the journey, stay curious, and let data science open doors to exciting career opportunities.
alex101112
1,909,822
API Testing: An Essential Guide
Introduction Application Programming Interfaces (APIs) are integral to modern software architecture,...
0
2024-07-03T07:47:30
https://dev.to/keploy/api-testing-an-essential-guide-4e3m
ai, devops, opensource, css
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6wht7jiemmt2a042xwxd.png) Introduction Application Programming Interfaces (APIs) are integral to modern software architecture, facilitating communication between different software systems. Ensuring the reliability, security, and performance of APIs is crucial. [API test](https://keploy.io/api-testing) plays a vital role in achieving this by verifying that APIs function as expected. This guide provides an overview of API testing, its importance, types, best practices, tools, and how to get started. What is API Testing? API testing involves testing APIs directly and as part of integration testing to determine if they meet expectations for functionality, reliability, performance, and security. Unlike UI testing, which focuses on the look and feel of an application, API testing focuses on the business logic layer of the software architecture. Importance of API Testing 1. Validation of Core Functionality: Ensures that the core functionalities of the application are working as expected. 2. Improved Test Coverage: API testing provides better test coverage by allowing access to the application without a user interface. 3. Early Detection of Issues: Identifies issues at an early stage in the development cycle, reducing the cost of fixing bugs. 4. Language-Independent Testing: As APIs use standardized protocols (like HTTP and REST), tests can be executed across different languages and environments. 5. Faster and More Efficient: API tests are faster and more efficient than UI tests, enabling quicker feedback and iteration. Types of API Testing 1. Functional Testing: Verifies that the API performs its intended functions correctly. It checks endpoints, response codes, and data validation. 2. Load Testing: Measures the API's performance under load to ensure it can handle high traffic and stress conditions. 3. 
Security Testing: Ensures that the API is secure from vulnerabilities and unauthorized access. This includes authentication, encryption, and penetration testing. 4. Validation Testing: Confirms that the API's responses and data structures are correct and comply with the specifications. 5. Integration Testing: Ensures that the API integrates well with other services and systems. 6. Regression Testing: Verifies that new changes do not break existing functionality. Best Practices for API Testing 1. Understand the API Requirements: Thoroughly understand the API specifications, including endpoints, request methods, response formats, and authentication mechanisms. 2. Design Comprehensive Test Cases: Cover various scenarios, including positive, negative, edge cases, and boundary conditions. 3. Use Automated Testing Tools: Leverage automated testing tools to execute tests efficiently and repeatedly. 4. Validate Responses: Check not only the status codes but also the data returned in the responses. 5. Test for Performance and Security: Include performance and security tests in your API testing strategy. 6. Maintain and Update Tests: Regularly update your test cases to accommodate changes in the API. 7. Mock External Services: Use mock services to simulate dependencies and isolate the API being tested. 8. Continuous Integration: Integrate API tests into the CI/CD pipeline for continuous validation. Popular API Testing Tools 1. Postman: A widely-used tool for API development and testing. It supports automated testing, mock servers, and monitoring. 2. SoapUI: An open-source tool for testing SOAP and REST APIs. It provides advanced features for functional, security, and load testing. 3. RestAssured: A Java library for testing RESTful APIs. It simplifies writing tests with a fluent interface and supports BDD. 4. JMeter: A tool primarily for performance testing but also supports functional API testing. It can handle various protocols. 5. 
Karate: An open-source framework combining API testing and BDD. It uses Gherkin syntax for writing tests and supports both HTTP and HTTPS. 6. Tavern: A Python-based tool for testing RESTful APIs. It integrates with Pytest, providing a robust testing environment. 7. Newman: The command-line companion for Postman, allowing execution of Postman collections in CI/CD pipelines. Getting Started with API Testing 1. Define Test Objectives: Determine what you need to test and set clear objectives. 2. Set Up the Testing Environment: Configure the necessary tools and frameworks for your testing needs. 3. Design Test Cases: Based on the API specifications, design comprehensive test cases covering all scenarios. 4. Automate Test Execution: Use automated tools to create and run test scripts. 5. Analyze Test Results: Review the results to identify issues, performance bottlenecks, and security vulnerabilities. 6. Report and Fix Issues: Generate detailed reports and collaborate with the development team to address the identified issues. 7. Iterate and Improve: Continuously improve your testing strategy based on feedback and evolving requirements. Example of a Simple API Test Using Postman 1. Create a Collection: Organize your API tests into a collection. 2. Add a Request: Define an HTTP request with the necessary parameters, headers, and body. 3. Write Test Scripts: Use JavaScript to write test scripts for validating the response.

```javascript
pm.test("Status code is 200", function () {
  pm.response.to.have.status(200);
});

pm.test("Response time is less than 500ms", function () {
  pm.expect(pm.response.responseTime).to.be.below(500);
});

pm.test("Response contains expected data", function () {
  var jsonData = pm.response.json();
  pm.expect(jsonData.name).to.eql("Example");
});
```

4. Run the Collection: Execute the collection manually or using Newman for automation. 
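The same three checks from the Postman example can also be scripted outside Postman. Below is a minimal, self-contained sketch in Python (standard library only; the `/users/1` endpoint and the response payload are invented for illustration) that starts a tiny local API and asserts on status code, response time, and body:

```python
import json
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A tiny stand-in API so the example is self-contained (hypothetical endpoint).
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"name": "Example"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/users/1"

# The three Postman checks, expressed as plain assertions.
start = time.time()
resp = urllib.request.urlopen(url)
elapsed_ms = (time.time() - start) * 1000
data = json.loads(resp.read())

assert resp.status == 200, "status code should be 200"
assert elapsed_ms < 500, "response should arrive within 500ms"
assert data["name"] == "Example", "response should contain expected data"
print("all checks passed")
server.shutdown()
```

In a real suite you would point the same assertions at your deployed API (or a mock of it) and run them from CI, much as Newman runs a Postman collection.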
Conclusion API testing is a critical aspect of modern software development, ensuring that APIs function correctly, perform well under load, and are secure. By following best practices, leveraging automated tools, and continuously improving your testing strategy, you can enhance the quality and reliability of your APIs. With the right approach, API testing becomes an efficient and effective process, enabling faster delivery of robust software solutions.
keploy
1,909,818
Install LEMP LAMP LLMP LEPP LAPP or LLPP using parameters only
This is an availability notice for my software project. Use my toolkit to automatically install on...
0
2024-07-03T07:47:14
https://dev.to/wintersysprojects/install-lemp-lamp-llmp-lepp-lapp-or-llpp-on-linux-using-parameters-only-423b
nginx, apache, lighttpd, linux
This is an availability notice for my software project. Use my [toolkit](https://github.com/wintersys-projects/adt-build-machine-scripts) to automatically install LEMP, LAMP, LLMP, LEPP, LAPP, or LLPP on Ubuntu or Debian using parameters only. My solution currently supports the Digital Ocean, Linode, Exoscale, and Vultr platforms. There is built-in support for the Joomla, Wordpress, Drupal, and Moodle CMS systems. I consider my toolkit to be a DMS or "Deployment Management System". It has horizontal scaling capacity and so can scale to any level of compute. To get started, you can follow my [Quick Demos](https://github.com/wintersys-projects/adt-build-machine-scripts/wiki/Quick-Start-Demos). The quick demos only work for the Linode platform. If you want to do more of a deep dive into how my toolkit works, you can follow my [tutorials](https://github.com/wintersys-projects/adt-build-machine-scripts/wiki/Tutorials). I intend to use my software for serious social network deployments in my local community to meet community needs as described [here](https://github.com/wintersys-projects/adt-build-machine-scripts/wiki/Philosophy).
wintersysprojects
1,909,821
Oracle Cloud’s 24C Release Is Coming: Are You Prepared to Test for Success?
Oracle’s 24C release is just around the corner. The tentative Oracle 24C release date is set for...
0
2024-07-03T07:44:54
https://www.opkey.com/blog/oracle-cloud-24c-release
oracle, cloud, release
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g3o16arrhb1zjau897u9.png) Oracle’s 24C release is just around the corner. The tentative Oracle 24C release date is set for July 2024. This release brings changes to the Financials, Supply Chain Management, and Human Capital Management modules. Oracle customers are wondering what new features and functionalities they’ll receive with this major update, and how they should be testing for maximum efficiency. Opkey is an Oracle test automation partner, the number one app in the Oracle Cloud Marketplace, and a trusted source for information regarding system maintenance. Opkey’s in-depth Oracle 24C Release Advisory Document will provide you with information on the changes coming to the system, as well as practical testing advice. The doc isn’t available quite yet, but fill in the form below to get it directly in your inbox as soon as it is ready. **Opkey for Oracle Cloud Testing: Oracle 24C** Opkey’s AI-enabled, no-code platform helps businesses validate new releases seamlessly, with features that also help to unlock the full potential of your Oracle environment. Opkey specializes in reducing test cycles and improving test coverage with the power of no-code automation. How do we cut quarterly update times down and save you time and money? The answer lies in 7,000+ pre-built Oracle Cloud test cases, automatic Impact Analysis reports, AI-enabled test generation and execution, and more. These allow you to reduce testing timelines by 70% while taking the labor burden off your team. Opkey's test automation capabilities ensure thorough testing of Oracle releases for functionality, security, and performance.
johnste39558689
1,909,819
Nepal's favorite online shopping service. Shop from Amazon, eBay, Flipkart and many more
International shopping made easy. Shop from US and Indian websites. We deliver right to your doorstep.
0
2024-07-03T07:43:54
https://dev.to/iwishbag/nepals-favorite-online-shopping-service-shop-from-amazon-ebay-flipkart-and-many-more-1oeo
International **[shopping](https://www.iwishbag.com/)** made easy. Shop from US and Indian websites. We deliver right to your doorstep.
iwishbag
1,909,817
Machine Learning in Finance: Powering Predictions and Minimizing Risks with Data Science
The financial world thrives on information and calculated risks. Today, machine learning (ML) is...
0
2024-07-03T07:42:52
https://dev.to/fizza_c3e734ee2a307cf35e5/machine-learning-in-finance-powering-predictions-and-minimizing-risks-with-data-science-3fc2
machinelearning, datascience
The financial world thrives on information and calculated risks. Today, machine learning (ML) is transforming the way financial institutions operate, offering powerful tools for predictive analytics and risk management. Let's explore how ML is shaping the future of finance and the data science expertise needed to navigate this exciting domain. **Predictive Analytics with Machine Learning** Imagine having a crystal ball for the financial markets. Machine learning, while not quite magic, can analyze vast datasets to identify patterns and make predictions with surprising accuracy. Here's how ML empowers financial institutions: **Stock Market Forecasting:** ML algorithms can analyze historical data, market trends, and news sentiment to predict future stock prices and market movements, aiding informed investment decisions. **Credit Risk Assessment:** By analyzing financial history, demographics, and other factors, ML models can assess a borrower's creditworthiness, enabling lenders to make better loan decisions and manage risk. **Fraud Detection:** ML algorithms can sift through financial transactions in real time, identifying anomalies and suspicious patterns that might indicate fraudulent activity. **Machine Learning for Robust Risk Management** Financial institutions navigate a complex web of risks. Machine learning offers a powerful shield: **Market Risk Analysis:** ML models can analyze various factors that impact market volatility, helping institutions develop strategies to mitigate risk and protect their portfolios. **Operational Risk Management:** ML can analyze historical data to identify potential operational risks like system failures or human errors, allowing for proactive risk mitigation strategies. **Stress Testing:** Machine learning can simulate various economic scenarios, allowing institutions to test their portfolios under stress and identify potential vulnerabilities. 
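As a concrete (toy) illustration of the credit-risk-assessment idea above, the sketch below fits a tiny logistic-regression default scorer with plain batch gradient descent. All numbers are invented for illustration, and the two features (income, debt-to-income ratio) are a drastic simplification; a real system would use far richer data and a library such as scikit-learn rather than hand-rolled gradient descent:

```python
import math

# Invented applicant data: (income in $100k, debt-to-income ratio); label 1 = defaulted.
X = [(0.20, 0.90), (0.30, 0.80), (0.40, 0.70), (0.60, 0.30), (0.80, 0.20), (0.90, 0.10)]
y = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Probability of default for one applicant."""
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Plain batch gradient descent on the logistic loss.
w, b, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(5000):
    gw, gb = [0.0, 0.0], 0.0
    for xi, yi in zip(X, y):
        err = predict(w, b, xi) - yi
        gw[0] += err * xi[0]
        gw[1] += err * xi[1]
        gb += err
    w[0] -= lr * gw[0] / len(X)
    w[1] -= lr * gw[1] / len(X)
    b -= lr * gb / len(X)

# Score two unseen applicants: low income / high debt vs. high income / low debt.
risky = predict(w, b, (0.25, 0.85))
safe = predict(w, b, (0.85, 0.15))
print(f"risky applicant default probability: {risky:.2f}")
print(f"safe applicant default probability: {safe:.2f}")
```

Fraud detection and market forecasting follow the same pattern at much larger scale: fit a model to historical examples, then score new transactions or price movements against it.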
**Equipping Yourself for a Career in Financial Data Science** To leverage the power of machine learning in finance, a strong foundation in data science is crucial. Consider enrolling in a comprehensive [data scientist course](https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/) to develop the necessary skills: **Data Analysis and Manipulation:** Learn essential tools and techniques for cleaning, manipulating, and analyzing financial data using Python libraries like Pandas and NumPy. **Machine Learning Algorithms:** Understand core ML algorithms like linear regression, decision trees, and random forests, and how they can be applied to financial forecasting and risk assessment. **Financial Modeling:** Gain insights into financial modeling techniques and how they can be integrated with machine learning models for more robust financial analysis. By delving into the details of a data scientist course, you'll gain the expertise to: **Work with Financial Data:** Learn how to handle financial data, including time series analysis, market data feeds, and alternative data sources. **Model Development and Deployment:** Develop, implement, and monitor machine learning models for various financial applications. **Communication and Visualization:** Effectively communicate complex data insights and machine learning results to financial stakeholders. **The Future of Finance is Data-Driven** Machine learning is rapidly transforming the financial landscape. By equipping yourself with data science expertise, you can be at the forefront of this revolution, building intelligent systems that drive smarter financial decisions and navigate the ever-evolving world of finance.
fizza_c3e734ee2a307cf35e5
1,909,810
Automating a User Management System with Bash in a Linux Environment
Onboarding several new workers can make managing users on a Linux system a repetitious chore. This...
0
2024-07-03T07:41:28
https://dev.to/mauricemakafui/automating-user-managment-system-with-bash-in-a-linux-environment-5dj3
Onboarding several new employees can make managing users on a Linux system a repetitive chore. Automating this procedure saves time and lowers the possibility of human error. This tutorial walks you through creating a Bash script that creates users, assigns them to groups, configures their home directories, and generates passwords based on information read from a text file. We'll also make sure that passwords are stored securely and that all actions are logged.

**Basic requirements**

1. A Linux system with root (administrative) privileges.
2. Basic familiarity with Bash scripting.
3. A text editor for creating and editing files (such as vim or nano).

**How to Use the Script**

Follow these steps to use the `create_users.sh` script:

**Clone the Repository**: Start by cloning the GitHub repository to your local machine or server.

```
git clone https://github.com/Maurice-Makafui/STAGE_1_HNG_11.git
cd STAGE_1_HNG_11
```

**Prepare the Input File**: Create a text file with the desired usernames and groups. Each line should be formatted as `username;group1,group2,group3`. Here’s an example:

```
Maurice1;staging,development,deployment
Gwenny;prayergroup
Felix;fitness,gymgroup
```

**Run the Script**: Execute the script with the input file as an argument.
```
sudo bash ./create_users.sh users.txt
```

**Verify the Results**: Check the passwords:

```
cat /var/secure/user_passwords.csv
```

**Check the Log File**:

```
cat /var/log/user_management.log
```

**List Users and Groups**:

```
cat /etc/passwd
cat /etc/group
```

**Verify Home Directories**:

```
cd /home && ls
```

**Check Group Membership**:

```
getent group dev
```

**Here is what the Bash script looks like**

```
#!/bin/bash

# Log file location
LOGFILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"

# Ensure the script is run as root
if [[ "$EUID" -ne 0 ]]; then
  echo "Please run as root"
  exit 1
fi

# Check if the input file is provided
if [ -z "$1" ]; then
  echo "Error: No file was provided"
  echo "Usage: $0 <name-of-text-file>"
  exit 1
fi

# Create log and password files
mkdir -p /var/secure
touch "$LOGFILE" "$PASSWORD_FILE"
chmod 600 "$PASSWORD_FILE"

# Function to generate a random password
generate_random_password() {
  local length=${1:-12}  # Default length is 12 if no argument is provided
  LC_ALL=C tr -dc 'A-Za-z0-9!?%+=' < /dev/urandom | head -c "$length"
}

# Function to create a user
create_user() {
  local username=$1
  local groups=$2

  if getent passwd "$username" > /dev/null; then
    echo "User $username already exists" | tee -a "$LOGFILE"
  else
    # -U also creates a personal group named after the user
    useradd -m -U -s /bin/bash "$username"
    echo "Created user $username" | tee -a "$LOGFILE"
  fi

  # Create the user's personal group if it does not already exist
  if ! getent group "$username" > /dev/null; then
    groupadd "$username"
    echo "Created group $username" | tee -a "$LOGFILE"
  fi

  # Add user to specified groups
  IFS=',' read -r -a groups_array <<< "$groups"
  for group in "${groups_array[@]}"; do
    if ! getent group "$group" > /dev/null; then
      groupadd "$group"
      echo "Created group $group" | tee -a "$LOGFILE"
    fi
    usermod -aG "$group" "$username"
    echo "Added user $username to group $group" | tee -a "$LOGFILE"
  done

  # Set up home directory permissions
  chmod 700 /home/"$username"
  chown "$username:$username" /home/"$username"
  echo "Set up home directory for user $username" | tee -a "$LOGFILE"

  # Generate a random password
  password=$(generate_random_password)
  echo "$username:$password" | chpasswd
  echo "$username,$password" >> "$PASSWORD_FILE"
  echo "Set password for user $username" | tee -a "$LOGFILE"
}

# Read the input file and create users
while IFS=';' read -r username groups; do
  # Skip empty lines
  if [[ -z "$username" ]]; then
    continue
  fi
  create_user "$username" "$groups"
done < "$1"

echo "User creation process completed." | tee -a "$LOGFILE"
```

**Here's the breakdown**

1. Shebang and Variable Definitions

```
#!/bin/bash

# Log file location
LOGFILE="/var/log/user_management.log"
PASSWORD_FILE="/var/secure/user_passwords.csv"
```

`#!/bin/bash` specifies that the script should be run with the Bash shell. `LOGFILE` is the path to the log file where script actions are recorded, and `PASSWORD_FILE` is the path to the file where usernames and passwords are saved.

2. Check for Root Privileges

```
# Ensure the script is run as root
if [[ "$EUID" -ne 0 ]]; then
  echo "Please run as root"
  exit 1
fi
```

This section checks whether the script is run as the root user. `EUID` is the effective user ID, and 0 corresponds to the root user. If not root, the script prints an error message and exits.

3. Check for Input File Argument

```
# Check if the input file is provided
if [ -z "$1" ]; then
  echo "Error: No file was provided"
  echo "Usage: $0 <name-of-text-file>"
  exit 1
fi
```

Checks whether a filename argument is provided when running the script. If not, it prints an error message and shows the correct usage.

4.
Create Log and Password Files

```
# Create log and password files
mkdir -p /var/secure
touch "$LOGFILE" "$PASSWORD_FILE"
chmod 600 "$PASSWORD_FILE"
```

Creates the directory `/var/secure` if it does not exist, creates the log file and password file, and sets the permissions of `PASSWORD_FILE` to 600 (read and write only for the owner).

5. Generate Random Password Function

```
# Function to generate a random password
generate_random_password() {
  local length=${1:-12}  # Default length is 12 if no argument is provided
  LC_ALL=C tr -dc 'A-Za-z0-9!?%+=' < /dev/urandom | head -c "$length"
}
```

`generate_random_password()` generates a random password of a specified length (default is 12 characters). `tr -dc 'A-Za-z0-9!?%+=' < /dev/urandom` keeps only the allowed characters from `/dev/urandom`, and `head -c "$length"` limits the output to the desired length.

6. Create User Function

```
# Function to create a user
create_user() {
  local username=$1
  local groups=$2

  if getent passwd "$username" > /dev/null; then
    echo "User $username already exists" | tee -a "$LOGFILE"
  else
    # -U also creates a personal group named after the user
    useradd -m -U -s /bin/bash "$username"
    echo "Created user $username" | tee -a "$LOGFILE"
  fi

  # Create the user's personal group if it does not already exist
  if ! getent group "$username" > /dev/null; then
    groupadd "$username"
    echo "Created group $username" | tee -a "$LOGFILE"
  fi

  # Add user to specified groups
  IFS=',' read -r -a groups_array <<< "$groups"
  for group in "${groups_array[@]}"; do
    if ! getent group "$group" > /dev/null; then
      groupadd "$group"
      echo "Created group $group" | tee -a "$LOGFILE"
    fi
    usermod -aG "$group" "$username"
    echo "Added user $username to group $group" | tee -a "$LOGFILE"
  done

  # Set up home directory permissions
  chmod 700 /home/"$username"
  chown "$username:$username" /home/"$username"
  echo "Set up home directory for user $username" | tee -a "$LOGFILE"

  # Generate a random password
  password=$(generate_random_password)
  echo "$username:$password" | chpasswd
  echo "$username,$password" >> "$PASSWORD_FILE"
  echo "Set password for user $username" | tee -a "$LOGFILE"
}
```

`create_user()` adds a new user with specific groups and sets up the home directory:

- Check if user exists: If the user does not already exist, creates it (the `-U` flag also creates the user's personal group).
- Create personal group: Checks that a group named after the user exists, creating it if needed.
- Add user to groups: Parses the comma-separated group list and adds the user to each group.
- Home directory setup: Sets the correct permissions and ownership for the user's home directory.
- Generate and set password: Generates a password, applies it with `chpasswd`, and logs it.

7. Read Input File and Create Users

```
# Read the input file and create users
while IFS=';' read -r username groups; do
  # Skip empty lines
  if [[ -z "$username" ]]; then
    continue
  fi
  create_user "$username" "$groups"
done < "$1"
```

Reads the input file line by line; each line should contain a username and groups, separated by a semicolon. Calls `create_user` for each valid line.

8. Final Message

```
echo "User creation process completed." | tee -a "$LOGFILE"
```

Prints a message indicating that the user creation process is complete and logs it.

**How Everything Works**

1. Configuration: The script defines the locations of the password and log files.
2. Input file check: It verifies that the user file was provided and exists.
3. Setup: Creates the required files and directories with the right permissions.
4. Processing every user: Reads each line in the file, parses the username and groups, and carries out the operations described above.
5. Finalization: Logs the completion of the user creation procedure.

**Conclusion**

With this approach, you can automate the process of creating users and assigning them to groups on a Linux system. The script helps maintain consistency, saves time, and lowers the possibility of mistakes during user creation. Feel free to adapt the script to suit your needs.

**Acknowledgments**

I would like to express my appreciation to [HNG Hire](https://hng.tech/hire) for giving me the opportunity and means to create this solution. Explore resources and programs such as the [HNG Internship](https://hng.tech/internship) and [HNG Premium](https://hng.tech/premium) for more advanced topics and automation tips.
mauricemakafui
1,909,816
Effective Digital Marketing Solutions, Every Time
Digify Local is a dynamic digital marketing agency based in Texas, committed to helping businesses...
0
2024-07-03T07:40:41
https://dev.to/digifylocal/effective-digital-marketing-solutions-every-time-25li
Digify Local is a dynamic digital marketing agency based in Texas, committed to helping businesses thrive in the online world. With our deep understanding of the local market, we develop customized strategies that boost your brand's online presence and deliver concrete outcomes. Our expert team utilizes data-driven insights, cutting-edge tactics, and captivating storytelling to maximize engagement, generate high-quality leads, and foster customer loyalty. From comprehensive search engine optimization (SEO) to impactful social media marketing, compelling content creation, and stunning **[texas website development](https://digifylocal.com/)**, we drive measurable growth for businesses of all scales. Unleash your digital potential, establish market dominance, and propel your success in Texas by partnering with Digify Local.
digifylocal
1,909,702
Building a Cloud Development Kit (CDK)
What Exactly is a Cloud Development Kit (CDK)? Imagine you're a developer who needs to set...
0
2024-07-03T07:39:46
https://dev.to/samyfodil/building-a-cloud-development-kit-cdk-3lgd
cloudcomputing, cdk, webassembly, cloudpractitioner
### What Exactly is a Cloud Development Kit (CDK)? Imagine you're a developer who needs to set up a bunch of cloud resources. Traditionally, you might deal with endless lines of JSON or YAML configurations. It's precise but can get pretty tedious, right? Well, this is where a Cloud Development Kit, or CDK, comes in handy. Instead of those endless configuration files, you use a programming language you’re already comfortable with—like TypeScript, Python, or Java. This means you can code your cloud infrastructure just like you’d code anything else. ### Why is the CDK a Game Changer? Let’s break it down: 1. **Productivity on Steroids**: Forget about switching gears between applications and infrastructure. Now, it's all in one place, with tools you already love (and understand!). Autocomplete, refactoring, and error checking are right there with you. 2. **Reuse and Recycle**: Craft a piece of infrastructure once, wrap it up into a component, and reuse it anywhere you need it. This not only saves time but also keeps your setups consistent. 3. **Tailor-Made Solutions**: Extend the basic setups with your own tweaks. Need a special kind of storage or a unique authentication method? Just code it in. 4. **Transparent and Controlled**: It’s all in your codebase, visible and version-controlled. Every change is clear and trackable—no surprises. ### AWS CDK AWS was one of the first to jump on this bandwagon with their CDK. It takes the power of AWS CloudFormation and makes it friendlier. Instead of wading through those YAML or JSON templates, you write in a comfortable, expressive programming language. 
Here’s an example of AWS CDK with TypeScript: ```typescript import * as cdk from '@aws-cdk/core'; import * as s3 from '@aws-cdk/aws-s3'; class MyCloudStack extends cdk.Stack { constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) { super(scope, id, props); // Setting up an S3 bucket with version control new s3.Bucket(this, 'MyFirstBucket', { versioned: true, removalPolicy: cdk.RemovalPolicy.DESTROY, }); } } const app = new cdk.App(); new MyCloudStack(app, 'MyCloudStack'); ``` See how intuitive and straightforward defining cloud resources can be with CDK. It’s just like writing any other piece of software! ### AWS CDK Workflow Explained ![AWS CDK Workflow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zxiom683wjzusnmnqtn6.png) The AWS Cloud Development Kit (AWS CDK) utilizes a sophisticated workflow that integrates various AWS services to provide a seamless development and deployment experience. Here’s a breakdown of the typical workflow and how it leverages these services: 1. **Define Infrastructure as Code**: You start by defining your cloud resources using familiar programming languages such as TypeScript or Python. This code defines what resources you need, such as databases, storage buckets, or compute instances. 2. **Synthesis**: The AWS CDK app takes this high-level code and compiles it into a lower-level CloudFormation template. This process, known as synthesis, involves the CDK CLI (`cdk synth`) transforming your declarative setup into a set of instructions that AWS CloudFormation can understand. 3. **Deployment**: Once the CloudFormation template is synthesized, you deploy it using the CDK CLI (`cdk deploy`). This command instructs AWS CloudFormation to provision and manage the resources as specified in the template. 4. 
**Utilization of AWS Services**: Throughout this process, AWS CDK interacts with various AWS services like AWS CodePipeline and AWS CodeCommit for continuous integration and delivery, enhancing the CI/CD pipeline's efficiency and robustness. ### Building a CDK Now that we understand what a CDK is, let's look at designing one for a cloud platform. Here, I'll be focusing on [tau](https://github.com/taubyte/tau), an open-source CDN PaaS I founded. Tau uses YAML for resource definition. This approach is not unique; Kubernetes and many other platforms use YAML as well. However, what sets tau apart is its design philosophy: Git is the single source of truth. This means there are no APIs to call for defining resources like storage buckets. Instead, everything is managed through Git. This design is advantageous because it essentially provides us with an equivalent to AWS CloudFormation, but integrated directly with Git. This tight integration simplifies infrastructure management and makes it inherently version-controlled and collaborative. All we need to do is figure out a way to generate and edit YAML files with code. ### The Schema Package Because Tau is a cloud platform you can write tests for, we already built a package called [schema](https://github.com/taubyte/tau/tree/main/pkg/schema) to manipulate YAML configuration files. The catch is that it has to be in Go, while most CDKs at least support JavaScript/TypeScript and Python. This creates a challenge: how can we extend our CDK to support these popular languages while leveraging the existing schema package? ### WebAssembly Go is one of the first languages that supported WebAssembly as a compilation target. Additionally, most programming languages either have a native implementation or bindings for a WebAssembly runtime. So, if we compile the [schema package](https://github.com/taubyte/tau/tree/main/pkg/schema) into a WebAssembly module, we should be able to call it from a variety of languages. 
This approach leverages the strengths of Go and the flexibility of WebAssembly, making it possible to interact with Tau's infrastructure definitions from languages like JavaScript, TypeScript, and Python. ### Extism Writing code to initiate a WebAssembly runtime, load our module, and call exported functions can be quite a lot of work. Luckily, the team at [dylibso](https://dylibso.com/) has thought about that and built [Extism](https://extism.org/), a framework for building plugins with WebAssembly. ### Example of Extism Plugin Here’s a simple example of a Go plugin using Extism: ```go package main import ( "github.com/extism/go-pdk" ) //export greet func greet() int32 { input := pdk.Input() greeting := `Hello, ` + string(input) + `!` pdk.OutputString(greeting) return 0 } func main() {} ``` Loading and calling `greet` from JavaScript is straightforward: ```js import createPlugin from '@extism/extism'; const plugin = await createPlugin('plugin.wasm', { useWasi: true, }); const out = await plugin.call('greet', 'Samy'); console.log(out.text()); await plugin.close(); ``` Using Extism, we can easily compile our Go code into WebAssembly and call it from other languages like JavaScript, making the integration process seamless and efficient. This capability significantly reduces the complexity of working with WebAssembly, enabling us to extend Tau’s CDK functionalities to various programming environments effortlessly. ### The Plugin The [plugin](https://github.com/taubyte/tau/tree/main/cdk/plugin) is a wrapper around the schema package that uses the Extism PDK. Just like schema, the first step involves opening a project by reading a folder that contains configuration files (the folder can also be empty). 
Here's how you can do it in Go: ```go package main import ( "github.com/extism/go-pdk" proj "github.com/taubyte/tau/pkg/schema/project" ) var project proj.Project //export openProject func openProject() int32 { var err error project, err = proj.Open(proj.SystemFS("/mnt")) if err != nil { pdk.SetError(err) return 1 } return 0 } ``` You might notice the global variable `project`. This isn't a novice mistake: 1. We will load the module each time we open a project. 2. WebAssembly is single-threaded, at least for now. ### The JavaScript CDK On the JavaScript side, we have: ```js import createPlugin from '@extism/extism'; import * as path from 'path'; export async function core(mountPath: string): Promise<any> { const coreWasmPath = path.resolve(__dirname, '../core.wasm'); return await createPlugin( coreWasmPath, { useWasi: true, allowedPaths: { '/mnt': mountPath }, } ); } ``` This function, `core`, loads the module and attaches the folder containing the configuration files to it as `/mnt`. To open a project, we load the Wasm module, call `openProject`, and then return a `Project` object that references an instance of the core module initialized to our project. ```js export async function open(mountPath: string): Promise<Project> { const plugin = await core(mountPath); await plugin.call('openProject'); return new Project(plugin); } ``` From there, we can write code like this: ```js import open from '@taubyte/cdk'; const prj = await open('path/to/folder'); await prj.functions().new({ name: 'ping', method: 'http', // other configurations }); await prj.close(); ``` ### Next Steps Now that we have a CDK, the next step is to integrate it with Tau's CI/CD workflow. This integration will enable seamless deployment and management of cloud resources directly from your development pipeline. I’ll cover this integration in a separate article, where we will dive into the specifics of automating your deployments. 
### Conclusion In this article, we explored what a Cloud Development Kit (CDK) is and why it’s a game changer for cloud infrastructure management. We looked at how AWS CDK simplifies defining and deploying cloud resources using familiar programming languages. By leveraging WebAssembly and the Extism framework, we were able to create a versatile CDK that supports multiple languages, making tau’s infrastructure management more accessible and efficient. My vision, is that this will evolve to also allows developers to embed infrastructure definitions directly within their application code, providing a seamless and cohesive development experience. Stay tuned for the next article, where we’ll dive into integrating the CDK with tau’s CI/CD workflow.
samyfodil
1,909,815
Crafting the Perfect Custom Chocolate Packaging
The Evolvement of Packaging Chocolate brands must understand the consumers’ desire for...
0
2024-07-03T07:39:29
https://dev.to/adil_qadeer_458a15a126710/crafting-the-perfect-custom-chocolate-packaging-53i5
chocolateboxespackaging, custompackaging, customboxeswholesale, chocolateboxesforgifts
## The Evolution of Packaging Chocolate brands must understand consumers’ desire for good packaging as they prepare their marketing plans. These containers are no longer seen narrowly as mere storage; they can serve as an expressive medium for art and innovative design. ## Current Design Trends The latest [custom chocolate box designs](https://www.premiumcustomboxes.com/custom-chocolate-boxes/) range from simple and classy to deluxe and romantic, with vibrant, colourful print. Packaging elements such as embossing, foiling, and unique shapes have become a motivating force for the customers drawn to them. ## Personalization Personalization is the latest trend in wholesale chocolate box packaging, and custom chocolate boxes belong on any list of top-rated innovations. Brands allow customers to customize the packaging with a name, message, or photo, creating something unique, notable, and presentable. ## Enriching the Customer Experience Custom chocolate boxes do not only serve a protective purpose; they also help create a story around the brand, promoting its values and a perception of quality. The right box adds to the perceived value of the chocolate inside and gives the customer a feeling of excitement and anticipation. ## Gifts for Various Occasions Chocolate boxes are chosen as bespoke gifts for special occasions; they are frequently used as custom gift boxes for weddings, birthdays, and holidays. Brands now provide themed containers that suit particular celebrations and lend gifting a touch of class and luxury. ## Marketing Strategies According to recent surveys, consumers are more likely to recommend a brand to others if they receive a premium gift package. Personalized chocolate boxes can therefore support a distinctive branding approach. 
By utilizing branding elements such as emblems, signs, and messages on the packaging, firms can boost brand visibility and popularity among end consumers. ## The Sustainability Factor Environmentally responsible packaging is a growing trend: as awareness of environmental issues increases, brands design eco-friendly packaging solutions. Environmentally conscious consumers choose sustainable chocolate boxes made from recycled materials, or biodegradable options with reusable designs. ## Wholesale Chocolate Box Benefits for Businesses Ordering custom chocolate boxes in bulk offers several incentives for businesses, including lower prices, volume discounts, and simplified production processes. Wholesale chocolate box options enable companies to keep their packaging consistent with the brand while controlling packaging costs. ## Innovations in Box Materials To meet the standards of customers and brands, manufacturers have evolved their selection of materials for custom chocolate box packaging. With eco-friendly options such as kraft paper and cardboard, and access to luxurious finishes such as velvet and silk, manufacturers can design beautiful, premium packaging that does justice to its purpose. ## Future Trends In the coming years, [Premium Custom Boxes](https://www.premiumcustomboxes.com/) will likely focus on sustainability, personalization, and innovation. Brands will rely heavily on unique design techniques, materials, and technology to create packaging that not only protects the product but also looks appealing, improving the overall customer [experience](https://dev.to/). 
## Conclusion In conclusion, creating a custom chocolate box is a complex process that combines artistry, a strategic mix of design elements, and careful attention to detail. Brands that put design, personalization, and sustainability first can transform their visibility and sales. Well-crafted custom chocolate boxes are not merely containers that protect chocolates; they become symbols of craft and thoughtful care that consumers gladly associate with the brand.
adil_qadeer_458a15a126710
1,909,814
Why There’s No Need for forwardRef in React
Introduction Understanding why forwardRef exists and whether it’s truly necessary in React...
0
2024-07-03T07:37:16
https://dev.to/sharoztanveer/why-theres-no-need-for-forwardref-in-react-4fa5
react, webdev, javascript, typescript
## Introduction Understanding why `forwardRef` exists and whether it’s truly necessary in React has been a complex journey. After identifying seven significant issues with `forwardRef` and recognising a simpler and superior alternative, it became evident that `forwardRef` could be removed from React without much impact. In fact, there’s already an open [RFC](https://github.com/reactjs/rfcs/pull/107#issuecomment-466304382) to remove it. ## Understanding `ref` in React When we pass a `ref` to a native HTML element like an `<input>`, it attaches automatically to the DOM node. We gain access to the native DOM API for this node. ```ts import React, { useRef, useEffect } from 'react'; const App: React.FC = () => { const inputRef = useRef<HTMLInputElement>(null); useEffect(()=>{ if (inputRef.current) { // Accessing the DOM node inputRef.current.focus(); } }, []); return <input ref={inputRef} />; }; ``` For class components, the `ref` attaches to the instance of the class, allowing us to access its internal properties and methods. However, for functional components, the `ref` results in a `null` value and a warning. To make `ref` work with functional components, we wrap them in the `forwardRef` API: ```ts import React, { forwardRef, useRef } from 'react'; const Child = forwardRef<HTMLDivElement, {}>((props, ref) => ( <div ref={ref}>Child Component</div> )); const App: React.FC = () => { const childRef = useRef<HTMLDivElement>(null); return <Child ref={childRef} />; }; ``` ## The Problems with `forwardRef` - **Lack of Support for Multiple Refs:** `forwardRef` only allows one argument, making it cumbersome to handle multiple refs without workarounds. 
For example: ```ts import React, { forwardRef, Ref, useImperativeHandle, useRef } from 'react'; interface FormHandles { inputRef1: Ref<HTMLInputElement>; inputRef2: Ref<HTMLInputElement>; } const Form = forwardRef<FormHandles>((props, ref) => { const inputRef1 = useRef<HTMLInputElement>(null); const inputRef2 = useRef<HTMLInputElement>(null); useImperativeHandle(ref, () => ({ inputRef1, inputRef2, })); return ( <form> <input ref={inputRef1} /> <input ref={inputRef2} /> </form> ); }); const App: React.FC = () => { const formRef = useRef<FormHandles>(null); return <Form ref={formRef} />; }; ``` - **Anonymous Functions in Dev Tools:** Using arrow functions with `forwardRef` results in anonymous functions in Dev Tools unless you name the function twice: ```ts const NamedComponent = forwardRef<HTMLDivElement>((props, ref) => ( <div ref={ref}>Named Component</div> )); ``` ```ts const NamedComponent = forwardRef<HTMLDivElement>(function NamedComponent(props, ref) { return <div ref={ref}>Named Component</div>; }); ``` - **Extra Boilerplate:** We need to use additional API and imports, making our code more complex and less readable. - **Nested Components:** Passing refs through multiple layers of components adds unnecessary complexity. ```ts const InnerComponent = forwardRef<HTMLDivElement>((props, ref) => ( <div ref={ref}>Inner Component</div> )); const OuterComponent = forwardRef<HTMLDivElement>((props, ref) => ( <InnerComponent ref={ref} /> )); const App: React.FC = () => { const outerRef = useRef<HTMLDivElement>(null); return <OuterComponent ref={outerRef} />; }; ``` - **Non-Descriptive Prop Names:** Generic `ref` names like `ref` are not descriptive, making it unclear where the `ref` is being attached. - **Typing Issues with Generics:** `forwardRef` breaks TypeScript generics, making type inference harder and less reliable. You can find more information [here](https://fettblog.eu/typescript-react-generic-forward-refs/). 
- **Potential Performance Issues:** Wrapping components in `forwardRef` can slow down rendering, especially in stress tests with a large number of components. You can find more information [here](https://github.com/facebook/react/issues/13456). ## The Simpler Alternative A simpler and better alternative exists: using custom `ref` props. Instead of `ref`, we can use any other prop name like `firstInputRef`. This pattern works automatically with functional components, solving all the issues mentioned: ```ts import React, {Ref, useRef, useEffect } from 'react'; interface ChildProps { firstInputRef: Ref<HTMLInputElement>; } const Child: React.FC<ChildProps> = ({ firstInputRef }) => ( <div> <input ref={firstInputRef} /> </div> ); const App: React.FC = () => { const inputRef = useRef<HTMLInputElement>(null); useEffect(() => { if (inputRef.current) { console.log(inputRef.current); // <input /> } }, []); return <Child firstInputRef={inputRef} />; }; ``` ## Conclusion For most cases, using custom `ref` props is a better solution than `forwardRef`. It simplifies our code, improves readability, and avoids many issues. `forwardRef` is only necessary in specific scenarios like single element proxy components or when simulating instance refs. With the new RFC potentially removing `forwardRef`, we can look forward to a simpler, more intuitive way of handling refs in React. > Image by [u_vplf3ftkcz](https://pixabay.com/users/u_vplf3ftkcz-41657635/) from [Pixabay](https://pixabay.com/). Was that helpful?
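The mechanism behind custom ref props is worth spelling out: a ref is nothing more than a mutable object with a `current` field, and ordinary props can carry such an object anywhere. Here is a framework-free sketch of that idea (no React involved; `Child` and `createRef` are plain stand-ins invented for illustration):

```javascript
// A ref is just a mutable container object.
function createRef() {
  return { current: null };
}

// Stand-in for a functional component: it receives the ref as an
// ordinary prop and "attaches" a value to it, the way React attaches
// a DOM node during render.
function Child({ firstInputRef }) {
  firstInputRef.current = '<input />';
}

const inputRef = createRef();
// No special forwarding API is needed: the prop carries the object,
// and mutating `current` is visible to the caller.
Child({ firstInputRef: inputRef });

console.log(inputRef.current); // -> <input />
```

Because nothing about this requires the prop to be named `ref`, the pattern sidesteps `forwardRef` entirely while keeping the same semantics.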
sharoztanveer
1,909,812
How to Transfer SOL and SPL Tokens Using Anchor
The Anchor framework is a Rust-based framework for building Solana programs using Solana blockchain...
0
2024-07-03T07:33:17
https://dev.to/donnajohnson88/how-to-transfer-sol-and-spl-tokens-using-anchor-21ef
solidity, solana, learning, webdev
The Anchor framework is a Rust-based framework for [Solana blockchain development](https://blockchain.oodles.io/solana-blockchain-development-services/?utm_source=devto). It simplifies the process of developing Solana smart contracts by providing a higher-level abstraction over Solana’s low-level programming model. Anchor abstracts away much of the complexity involved in writing Solana programs, making it easier for developers to focus on application logic rather than low-level details. Discover everything you need to know about installing Rust, Solana, Yarn, and more in the complete blog post: https://blockchain.oodles.io/dev-blog/how-transfer-sol-spl-tokens-Anchor/?utm_source=devto
donnajohnson88
1,909,809
Is Unit testing really useful?
In the realm of software development, unit testing often sparks debates. While some developers swear...
0
2024-07-03T07:30:27
https://dev.to/zafar_hayat_50df31884c827/is-unit-testing-really-usefull-12c7
In the realm of software development, unit testing often sparks debates. While some developers swear by its efficacy, others question its real-world value. This article delves into the varied opinions on unit testing based on personal experiences, rather than presenting a purely technical guide. **The Concept of Unit Testing** Unit testing involves writing tests for individual units of code, usually functions or methods, to ensure they work as intended. The primary goal is to catch bugs early, facilitate changes, and provide documentation for the codebase. However, the effectiveness of unit testing largely depends on how these tests are written and what they aim to achieve. **The Coverage Conundrum** One of the most contentious aspects of unit testing is code coverage. Code coverage is a metric that measures the percentage of code executed by tests. On the surface, higher coverage seems desirable. But is it truly indicative of robust, bug-free code? From my experience, developers can manipulate test cases to boost code coverage without necessarily improving code quality. For instance, consider a function that returns a user object. A developer might write a test case that simply asserts the return type is a user object: ```js assert(returnedValue instanceof User); ``` While this assertion may pass and increase coverage, it doesn't guarantee the correctness of the returned object's properties. A more thorough test would compare each property of the user object: ```js assert(returnedValue.name === expectedName); assert(returnedValue.email === expectedEmail); assert(returnedValue.age === expectedAge); ``` This comprehensive approach provides a more accurate verification of the function's behavior. Unfortunately, it's not uncommon to see developers opting for the easier, less thorough path, leading to high coverage but potentially flawed functionality. **The Real Value of Unit Testing** To truly appreciate unit testing, we must look beyond metrics like code coverage. 
The real value lies in the confidence it provides. Well-written tests can: - Catch Bugs Early: Identifying issues at an early stage saves time and resources. - Facilitate Refactoring: With a robust test suite, developers can confidently refactor code without fear of breaking functionality. - Serve as Documentation: Tests can serve as an additional form of documentation, illustrating how functions are intended to work. However, achieving these benefits requires discipline and a focus on meaningful tests. Tests should aim to validate business logic and edge cases rather than merely boosting coverage statistics. **A Balanced Approach** While unit testing has its advantages, it's crucial to strike a balance. Over-reliance on unit tests can lead to an illusion of security. It's equally important to incorporate other testing methodologies such as integration testing, end-to-end testing, and user acceptance testing. These methods ensure that different parts of the application work together seamlessly and meet user requirements. **Conclusion** Unit testing, when done right, is undoubtedly useful. It can improve code quality, catch bugs early, and provide peace of mind. However, the focus should be on writing meaningful tests rather than chasing high coverage numbers. By adopting a balanced approach to testing, we can ensure our software is not only well-tested but also robust and reliable.
zafar_hayat_50df31884c827
1,909,789
Introducing Kuery Client for those who love writing SQL in Kotlin/JVM
Introducing Kuery Client for those who love writing SQL in Kotlin/JVM
0
2024-07-03T07:25:50
https://dev.to/behase/introducing-kuery-client-for-those-who-love-writing-sql-in-kotlinjvm-1g51
kotlin, r2dbc, jdbc, spring
--- title: Introducing Kuery Client for those who love writing SQL in Kotlin/JVM published: true description: Introducing Kuery Client for those who love writing SQL in Kotlin/JVM tags: kotlin, r2dbc, jdbc, spring --- ## First of all - Repository - https://github.com/be-hase/kuery-client - Document - https://kuery-client.hsbrysk.dev/ ## How can it be written? I'd like to start with a preamble, but first, let me give you a quick overview. ```kotlin suspend fun search(status: String, vip: Boolean?): List<User> = kueryClient .sql { +""" SELECT * FROM users WHERE status = $status """ if (vip != null) { +"vip = $vip" } } .list() ``` - **You can concatenate and build SQL using + (unaryPlus)** - If you want to build SQL dynamically, please use constructs like `if` - (Of course, if there is no need to build it dynamically, you can write it directly using a heredoc) - **Use string interpolation to embed dynamic values** - Naturally, you might think this could lead to SQL injection, but by implementing a Kotlin compiler plugin, it is evaluated as a placeholder --- ## Motivation ### Originally, I liked MyBatis because I could write SQL by hand In the world of JVM, there are many database client libraries, but I preferred using [MyBatis](https://mybatis.org/mybatis-3/index.html). The reasons for this preference are roughly as follows: - **I want to write SQL directly** - Because it is used in a large-scale environment, I prefer to write SQL directly even if it takes a bit more effort - Writing SQL makes it easier to perform operations like Explain - Occasionally, there are cases where I want to specify an Index Hint, and I can respond quickly to such cases - **I don't want to learn the library's unique syntax or DSL** - It would be nice if knowing SQL alone is enough Writing SQL directly means depending on a specific database, but cases of migrating databases (e.g., from MySQL to PostgreSQL) are virtually non-existent. 
Even if such an opportunity arises, I would take the time to thoroughly verify it (compared to this, fixing SQL is not much effort), so I don't see this as much of a disadvantage. ### MyBatis does not support R2DBC Recently, I have had more opportunities to write applications using Spring WebFlux & Coroutines for work, and this makes me want to use [R2DBC](https://r2dbc.io/). (Previously, I was wrapping JDBC calls with `withContext(Dispatchers.IO) {...}`) Unfortunately, the aforementioned MyBatis does not support R2DBC. ### R2DBC libraries that allow writing SQL by hand Existing libraries that support R2DBC often use unique syntax/DSL, which do not match my preferences. On the other hand, these DSLs have the advantage of being type-safe. They prevent mistakes such as inserting a string into an integer column. However, personally, I write corresponding unit tests when I write SQL, so I am indifferent to this aspect. ### (Bonus) sqlc and SQLDelight Recently, [sqlc](https://github.com/sqlc-dev/sqlc) has been getting attention and seems to match my preferences very well. (I like the gRPC-like concept) However, although it seems to support Kotlin, it unfortunately only supports JDBC. Looking at the [generated code](https://github.com/sqlc-dev/sqlc-gen-kotlin/blob/main/internal/tmpl/ktsql.tmpl), it seems to have only the minimum implementation compared to the Go support of sqlc. Also, in the case of Kotlin, there is a similar tool called [SQLDelight](https://github.com/cashapp/sqldelight). While it claims to support R2DBC, it is difficult to use connection pools as the arguments are not [ConnectionFactory](https://github.com/cashapp/sqldelight/issues/4762), and its [Transaction support](https://github.com/cashapp/sqldelight/issues/4836) is also inadequate, giving the impression that it is still in development. Furthermore, neither supports constructing dynamic queries. 
(There are varying opinions on the desirability of such dynamic queries) ### I want to write using string templates & string interpolation If I'm writing SQL by hand, I also want to write using string templates & string interpolation like Doobie, which is popular in Scala. ```scala def find(n: String): ConnectionIO[Option[Country]] = sql"select code, name, population from country where name = $n".query[Country].option ``` However, unfortunately, in Kotlin, you cannot customize the behavior of string interpolation. (Although in Java, [this has recently become possible](https://blog1.mammb.com/entry/2023/04/28/090000)...) Incidentally, this topic has been discussed here: - https://discuss.kotlinlang.org/t/custom-string-templates/16504/19 - https://youtrack.jetbrains.com/issue/KT-64632/Support-Java-21-StringTemplate.Processor ## Knowing Kotlin Compiler Plugin amidst all this I originally focused on Kotlin/JVM, but recently I have become engrossed in KMP (Kotlin Multiplatform). (The motivation is simple... it would be convenient if everything could be written in Kotlin...) I noticed that libraries for KMP often provide features including Kotlin Compiler Plugin. ([Kotlin Serialization](https://github.com/Kotlin/kotlinx.serialization) is a prime example) I was already using the noarg plugin and allopen plugin, but I realized that even third parties are making them. Perhaps, I thought, I could use this to change the behavior of string interpolation for specific methods...? And amidst all this, I came across the slides for the following presentation at Kotlin Fest 2024. (I couldn't attend due to a schedule conflict...) https://speakerdeck.com/kitakkun/kotlin-fest-2024-motutokotlinwohao-kininaru-k2shi-dai-nokotlin-compiler-pluginkai-fa As a result, I managed to create an SQL client that can be used as mentioned at the beginning. ## How to Use Kuery Client Like other libraries, just add it to your Gradle dependencies. 
However, since a Kotlin Compiler Plugin is required, please also use the Gradle Plugin provided by Kuery Client. ```kotlin plugins { id("dev.hsbrysk.kuery-client") version "{{version}}" } implementation("dev.hsbrysk.kuery-client:kuery-client-spring-data-r2dbc:{{version}}") ``` ## Features of Kuery Client ### Builder and String Interpolation In Kotlin, a style of creating a Builder Scope and constructing dynamically within it, as represented by `buildString` or `buildList`, is often adopted. ```kotlin buildString { append("hoge") if (...) { append("bar") } } ``` This writing style is also adopted in [kotlinx.html](https://github.com/Kotlin/kotlinx.html). Text nodes are added using `+`. ```kotlin val body = document.body ?: error("No body") body.append { div { p { +"Welcome!" +"Here is " a("https://kotlinlang.org") { +"official Kotlin site" } } } } ``` Following this writing style, the style mentioned at the beginning is used. (Repost) ```kotlin suspend fun search(status: String, vip: Boolean?): List<User> = kueryClient .sql { +""" SELECT * FROM users WHERE status = $status """ if (vip != null) { +"vip = $vip" } } .list() ``` ### Based on spring-data-r2dbc or spring-data-jdbc Currently, it is implemented based on these widely used and proven technologies. So, it can be used with both R2DBC/JDBC. (The original motivation was to create it for R2DBC, but I decided to also support JDBC) Depending on these for the requirements of Kuery Client may be somewhat too much, but it allows for using transaction support and type conversion as is... (If I feel like it, I might create a module that doesn't depend on these) ### Transaction You can use Spring's Transaction support as is. For more details, please see here. https://kuery-client.hsbrysk.dev/transaction.html ### Observation It supports Micrometer Observation, so it can handle both Metrics and Tracing. For more details, please see here. 
https://kuery-client.hsbrysk.dev/observation.html ### Type Conversion Since it is based on spring-data, please use Spring's type conversion. Even with custom types, it can be handled flexibly. It should be able to handle cases where only specific types need to be encrypted. For more details, please see here. https://kuery-client.hsbrysk.dev/type-conversion.html ### Detekt Custom Rule Writing in the following incorrect way may cause SQL Injection or similar issues. ```kotlin kueryClient.sql { // BAD !! val sql = "SELECT * FROM user WHERE id = $id" +sql } ``` Since the string interpolation is customized for specific methods of Kuery Client, such writing is not allowed. To detect such incorrect writing, we provide a Detekt Custom Rule. https://kuery-client.hsbrysk.dev/detekt.html ### Example Code Sample code combined with Spring Boot. - Spring WebFlux and R2DBC - https://github.com/be-hase/kuery-client/tree/main/examples/spring-data-r2dbc - Spring WebMVC and JDBC - https://github.com/be-hase/kuery-client/tree/main/examples/spring-data-jdbc ## Conclusion For more detailed information, please check the documentation site. (Although it's very simple at the moment...) https://kuery-client.hsbrysk.dev/ I've already started using it for personal projects and find it quite convenient and enjoyable to use. Looking ahead, I vaguely think it would be nice to integrate SQL-related linters since I'm writing SQL by hand. Also, I want to implement something similar to the query type checks available in Scala's Doobie. (I haven't used it much, so I'm not very familiar yet) https://tpolecat.github.io/doobie/docs/06-Checking.html Although I wrote as I did in the aforementioned motivation, it is certainly better if things can be made robust. > On the other hand, DSLs have the advantage of being type-safe. They prevent mistakes such as inserting a string into an integer column. However, personally, I write corresponding unit tests when I write SQL, so I am indifferent to this aspect.
behase
1,909,808
Best Practices for Reporting Errors
Effective error reporting is crucial for efficient troubleshooting and resolution. A well-documented...
0
2024-07-03T07:25:20
https://dev.to/msnmongare/best-practices-for-reporting-errors-3i1p
beginners, productivity, tutorial, ai
Effective error reporting is crucial for efficient troubleshooting and resolution. A well-documented error report can save time and resources by helping developers quickly understand and address the issue. Here are some best practices for reporting errors: #### 1. Describe the Issue Clearly describe what the system is doing incorrectly. Provide specific details to avoid ambiguity. - **Example**: "The system incorrectly allows multiple financiers to be selected instead of restricting the selection to only one." #### 2. Expected Behavior State what you expect the system to do. This helps developers understand the intended functionality and how the system is deviating from it. - **Example**: "The system should restrict the selection to only one financier." #### 3. Steps to Reproduce Provide the steps to reproduce the issue if applicable. Detailed reproduction steps allow developers to experience the error themselves and understand the context in which it occurs. - **Example**: 1. Navigate to the financier selection page. 2. Attempt to select a financier from the list. 3. Observe that multiple financiers can be selected. #### 4. Environment Details Mention any relevant details about the environment, such as the browser, operating system, and version of the software. This information can help in identifying whether the issue is environment-specific. - **Example**: "Browser: Google Chrome, Version 91.0; OS: Windows 10; Application Version: 2.3.1." #### 5. Include Screenshots/Logs If possible, include screenshots or logs to provide more context. Visual evidence and log files can give developers additional clues about what might be going wrong. - **Example**: "Attached is a screenshot showing the multiple selections." ### Example of a Well-Reported Error **Title**: System selects multiple financiers instead of one **Description**: When attempting to select a financier, the system allows multiple selections instead of restricting to a single financier. 
**Expected Behavior**: The system should restrict the selection to only one financier. **Steps to Reproduce**: 1. Navigate to the financier selection page. 2. Attempt to select a financier from the list. 3. Observe that multiple financiers can be selected. **Environment**: - Browser: Google Chrome, Version 91.0 - OS: Windows 10 - Application Version: 2.3.1 **Additional Information**: Attached is a screenshot showing the multiple selections. ### Additional One-Sentence Error Examples Here are a few more examples of how to describe errors in one sentence: 1. "The login button does not respond when clicked on the mobile version of the site." 2. "The checkout page fails to load when the cart contains more than ten items." 3. "The search function returns no results even for known existing entries." 4. "The application crashes when trying to upload a file larger than 5MB." 5. "The email notification system sends duplicate emails for the same event." 6. "The user profile page displays incorrect information after editing the details." 7. "The report generation feature times out for reports exceeding 50 pages." 8. "The password reset link in the email leads to a 404 error page." 9. "The dropdown menu overlaps with other content on smaller screen sizes." 10. "The API returns a 500 error when queried with a specific date range." By following these best practices, you can ensure that your error reports are clear, concise, and actionable, leading to quicker resolutions and more efficient troubleshooting.
msnmongare
1,909,807
5 Things to Keep in Mind Regarding Workday Release
Keeping up with the latest releases of an enterprise resource planning (ERP) system is essential for...
0
2024-07-03T07:24:44
https://getfont.net/5-things-to-keep-in-mind-regarding-workday-release/
workday, release
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3ht7aek0c3n9hlrcy3l.jpg) Keeping up with the latest releases of an enterprise resource planning (ERP) system is essential for a firm to sustain its competitive edge and operational efficiency in an ever-evolving ERP landscape. Through its release cycles, Workday, a leading cloud-based ERP solution, regularly rolls out new features, improvements, and upgrades. It’s critical for Workday users to be well-informed and ready for each Workday release in order to facilitate a seamless transition and take full advantage of new features. 1. **Understanding the Release Cycle** Workday releases follow a planned, recurring cycle. These releases include new features, enhancements, and bug fixes to the current Workday platform; they are not typical software updates. To prevent hiccups or surprises, companies should become acquainted with the release timetable and plan accordingly. 2. **Comprehensive Testing and Validation** Extensive testing and validation are key to a successful Workday release. Before adopting the new features and updates, it’s crucial to carry out thorough testing in the company's own environment. This process ensures that potential problems or conflicts are found and resolved early, and that the new features align with business operations. 3. **Change Management and Communication** Effective communication and change management are crucial for Workday releases. All parties involved, including end users, should be informed of upcoming changes and how those changes might affect daily operations. Clear, concise communication helps ensure a seamless transition and reduces disruption, since users become accustomed to the new features and functionality. 4. **Training and User Adoption** Each Workday release may add new features, interfaces, or workflows. 
Ensuring that end users receive sufficient training and assistance is essential to the effective adoption of new functionality. Investing in user training boosts productivity and helps maximize the return on investment in the Workday platform. 5. **Continuous Improvement and Feedback** New Workday updates are intended to improve the overall user experience and cater to the changing needs of businesses. Adopting a continuous-improvement mindset is crucial: companies should give Workday feedback on new features, improvements, and areas that need more attention. This cooperative strategy builds a solid alliance between Workday and the company, which ultimately results in a more tailored and effective ERP solution. **Conclusion** Implementing a new Workday version is a challenging process that needs careful planning and continuous assistance. Opkey offers no-code automation that streamlines the Workday update testing process. The platform uses built-in intelligence to analyze steps and generate automated scripts with a single click. It provides change impact analysis, identifying precisely which areas require testing after updates. Pre-built accelerators are available for functional, regression, performance, and security testing, potentially reducing testing workloads by 70%. Opkey improves risk coverage through test discovery, mining the existing environment for tests and identifying gaps. With self-healing capabilities that fix broken scripts automatically, Opkey empowers business users to ensure thorough, efficient Workday testing with no coding expertise required.
rohitbhandari102
1,909,543
GraphQL Types
Field declaration By default, it's valid for any field in your schema to return null...
27,944
2024-07-03T07:21:31
https://dev.to/jacktt/graphql-types-26og
graphql
## Field declaration By default, it's valid for any field in your schema to return null instead of its specified type. You can require that a particular field doesn't return null with an exclamation mark !, like so: ```graphql type Author { name: String! # Can't return null books: [Book!] # This list can be null but its list items can't be null articles: [Article!]! # This list can't be null AND its list items can't be null } ``` These fields are non-nullable. If your server attempts to return null for a non-nullable field, an error is thrown. This is a cheatsheet: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qw92dttpwht7he248glp.png) ## Supported types - Scalar (Data types) - Object - This includes the three special root operation types: Query, Mutation, and Subscription. - Input - Enum - Union ### Scalar types Scalar types are similar to primitive types in your favorite programming language. They always resolve to concrete data. GraphQL's default scalar types are: - Int: A signed 32‐bit integer - Float: A signed double-precision floating-point value - String: A UTF‐8 character sequence - Boolean: true or false - ID (serialized as a String): A unique identifier that's often used to refetch an object or as the key for a cache. Although it's serialized as a String, an ID is not intended to be human‐readable. ### Custom Scalar type To define a custom scalar, add it to your schema like so: ```graphql scalar YesNo ``` You can now use this scalar as a type for your fields. ```graphql type Student { active: YesNo } ``` However, it's only declared in GraphQL. To fully declare a custom scalar, you need to define how the server interacts with it. For instance, if you're using [99designs/gqlgen](https://github.com/99designs/gqlgen), you must follow the [docs](https://gqlgen.com/reference/scalars/#custom-scalars-with-user-defined-types). 
In summary, you will need to define a Go type, implement the `graphql.Marshaler` and `graphql.Unmarshaler` interfaces, and declare this type in `gqlgen.yml`. (If you're using Apollo GraphQL, you can follow this [docs](https://www.apollographql.com/docs/apollo-server/schema/custom-scalars#example-the-date-scalar)) ### Object type An object type contains a collection of fields, each of which has its own type. For instance, in the following declaration, Author is an Object type. ```graphql type Book { title: String author: Author } ``` Every object type in your schema automatically has a field named `__typename` (you don't need to define it). The `__typename` field returns the object type's name as a String (e.g., Book or Author). Clients can access this field the same way as they access other fields. ```graphql query { book(id: "1") { title author { __typename name } } } ``` ### Query type The Query type is a special object type that defines all of the top-level entry points for queries that clients can request. It corresponds to API endpoints in a RESTful API. ```graphql type Query { books: [Book] book(id: ID): Book } ``` This Query type corresponds to the following endpoints in a RESTful API: - /books - /books/:id Read more about how clients call queries [here](https://dev.to/jacktt/graphql-fundamental-236k#variable). ### Mutation type The Mutation type defines entry points for write operations. ```graphql type Mutation { addBook(name: String, author_id: String): Book } ``` ### Input type If a query/mutation requires multiple arguments, you should create an Input type. For example: ```graphql input EditBookRequest { book_id: ID! new_name: String! 
} type Mutation { renameBook(input: EditBookRequest): Book } ``` Request: ```graphql mutation updateSomething($editBookRequest: EditBookRequest) { renameBook(input: $editBookRequest) { name } } ``` Values: ```json { "editBookRequest": { "book_id": 1, "new_name": "2" } } ``` - Input types can’t have fields that are other objects, only basic scalar types, list types, and other input types. - Input types start with the `input` keyword instead of the `type` keyword. ### Enum An Enum type defines the allowed values for a type: ```graphql enum AllowedColor { RED GREEN BLUE } ``` ```graphql type Query { changeColor(color: AllowedColor): Boolean } ``` It restricts the input color to the AllowedColor values. ### Union type Union types let us define a new type from a set of existing types. An instance of a union type may be any one of the union's member types. For example: ```graphql union Media = Book | Movie type Query { allMedia: [Media] } ``` The output of the `allMedia` query may contain both book and movie records. ### Interface type An interface specifies a set of fields. If an object type implements an interface, it must include all of that interface's fields. ### Subscription type ```graphql subscription PostFeed { postCreated { author comment } } ``` Read more detail [here](https://www.apollographql.com/docs/apollo-server/data/subscriptions). ## Reference - https://www.apollographql.com/docs/apollo-server/schema/schema#the-schema-definition-language
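As an addendum to the interface type section above, here is a minimal sketch of the syntax (the `Publication` and `Magazine` names are hypothetical, not from the article):

```graphql
# Any object type implementing Publication must declare all of its fields.
interface Publication {
  id: ID!
  title: String!
}

type Magazine implements Publication {
  id: ID!
  title: String!
  issue: Int # implementing types may add extra fields of their own
}
```

A field declared with the interface type (e.g. `latest: Publication`) can then return any implementing type, and clients can use `__typename`, described earlier, to tell them apart.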
jacktt
1,909,794
structure ideas:
Introduction Defining Telemarketer Memes: A brief introduction to what telemarketer memes are and why...
0
2024-07-03T07:20:13
https://dev.to/md_rabiulislamein42/structure-ideas-1g5g
Introduction Defining Telemarketer Memes: A brief introduction to what telemarketer memes are and why they have become popular. Purpose of the post: Explain why you are writing about the telemarketer meme and what readers can expect. Telemarketer Meme Types Text-Based Meme: An example of a meme that uses text to describe an interaction with a telemarketer in a humorous way. Image-based memes: Explore memes that use images or GIFs to convey frustrations or funny situations related to [Canada Phone Numbers](https://telemadata.com/canada-phone-numbers/) telemarketing calls. Humor about Telemarketing Common Themes: Discuss recurring themes in telemarketer memes, such as the frustration of answering the phone at an inconvenient time. Cultural References: Highlight memes that reference pop culture or current events relevant to telemarketing. [![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a8mkjnztm7txg6n3bs71.png)](https://telemadata.com/canada-phone-numbers/) Reach and Virality Why it goes viral: Discover why telemarketer memes resonate with people and spread quickly on social media. Community Engagement: Discuss how people interact with these memes through likes, shares, and comments. Ethical Considerations Boundaries: Address the line between humor and potential harm when creating or sharing telemarketer memes. Respect vs. Offend: Tips for creating funny memes without offending telemarketers or others. Conclusion Abstract: Review of the appeal of the telemarketer meme and its role in online humor. Call to Action:
md_rabiulislamein42
1,907,813
Powerful AI Tools You Should Know
Hello Devs👋 Nowadays there are many AI tools available to boost your productivity but finding the...
0
2024-07-03T07:19:17
https://dev.to/dev_kiran/powerful-ai-tools-you-should-know-1gf1
webdev, ai, productivity, programming
Hello Devs👋 Nowadays there are many AI tools available to boost your productivity, but finding the most effective ones can be challenging and time-consuming. In this article, I'll be sharing some fantastic AI tools that can significantly boost your productivity and save you a lot of time.🕧 ![Save It](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/axpa4kz9he9fnathsein.png) Let's get started🚀 #💠Momen Momen is a no-code platform that empowers developers and non-developers alike to design, develop, and scale custom web apps effortlessly. With Momen AI, you can easily build your own AI apps with your own data. ✨Here are some key features of Momen: ⚡**Drag-and-Drop Editor**: Create web apps with an intuitive drag-and-drop interface, eliminating the need for coding. ⚡**Full Stack Development**: Momen enables users to build both the frontend and backend of their applications seamlessly. ⚡**Robust Backend**: Momen has robust data capabilities, ensuring users can manage their backend efficiently. ⚡**AI Apps Building**: Momen AI simplifies the process of building AI applications, making it accessible for users with no technical background. You can easily integrate an AI app into your existing business logic. ⚡**Easy Setup & Flexible Data Source**: Users can easily configure prompts using modules (pre-set LLM models such as GPT-3.5/4). Users can also upload local files, use data stored in the Momen database, or call an API to retrieve external data as the context for their AI apps. ✅[**Try out Momen**](https://momen.app/) ![Momen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rtr1t9t38gjzt4jn2smh.gif) #💠CodiumAI CodiumAI is an innovative AI-powered code quality platform designed to enhance the development process. It provides developers with smart code analysis, ensuring that the codebase remains clean, efficient, and error-free. 
✨Here are some key products of CodiumAI: 🔹**IDE Plugin -> Codiumate** > CodiumAI works as an IDE plugin that integrates directly into your development environment. It is specifically built for code analysis, test plans, and test code. 🔹**Git Plugin -> PR-Agent:** > CodiumAI's PR-Agent automates the code review process for all pull requests, ensuring that only high-quality code is merged into the main codebase. 🔹**CLI tool -> Cover-Agent:** > Cover-Agent lets you easily handle tedious yet critical tasks, such as increasing test coverage. ✅[**Try out CodiumAI**](https://codium.ai/) ![CodiumAI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9q4oqispzuil67rp4k5n.gif) #💠GitFluence: GitFluence is an AI-driven solution that helps you find the right command for any Git task. It can save you time and hassle by generating the appropriate Git command for your needs. It can also help you learn Git, improve your workflow, and avoid common errors. ✅[**Try out GitFluence**](https://www.gitfluence.com/) ![Gitfluence](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9doxggbcwakqlrhyepyo.gif) #💠Builder.IO Using Builder.io's **Visual Copilot**, you can convert design mockups into code swiftly and accurately. It uses AI to accelerate your design-to-code workflow, converting Figma designs into clean code. ✅[**Try out Builder.IO**](https://www.builder.io/) ![Builder.IO](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aykybtxzerjkfnqwvw23.gif) #💠Code Snippets AI Code Snippets AI is an intelligent tool designed to assist developers by generating and suggesting code snippets based on natural language prompts. It allows teams to save, share, and access code snippets throughout their workflow. 
✅[**Try out Code Snippets AI**](https://codesnippets.ai/) ![Code Snippets AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c5q3dmlfkxwp4ecr5mk7.gif) #💠v0 v0 by Vercel is a revolutionary platform that utilizes AI to streamline UI creation for developers. You can simply describe your desired interface to v0, such as buttons and layouts, and v0 generates the corresponding code for you. All you need is a Vercel account. ✅[**Try out v0**](https://v0.dev/) ![v0](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5u4l92iia6qmm6fr4jyi.gif) #💠PurecodeAI PureCode AI is a front-end developer tool where you can use text to describe and generate or customize software user interfaces. ✅[**Try out PurecodeAI**](https://purecode.ai/) ![PurecodeAI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ikv352mtb46vvrob0kc8.gif) #💠HTTPie AI HTTPie AI is a new way to interact with APIs. It’s built into HTTPie for Web & Desktop and uses state-of-the-art artificial intelligence to increase your productivity when testing and talking to APIs. ✅[**Try out HTTPie AI**](https://httpie.io/) ![HTTPie AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5ya7q8mfu5mvowykkq0.gif) #💠ChatwithPDF AI ChatwithPDF.AI is an innovative tool that allows you to interact with PDF documents using natural language queries. It leverages advanced AI to understand and extract information from PDFs, enabling users to ask questions and receive precise answers directly from the document's content. It can also summarize YouTube videos. ✅[**Try out ChatwithPDF AI**](https://chatwithpdf.ai/) ![ChatwithPDF AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7uxi14lq73s68c1oyb37.gif) #💠UIzard UIzard is an innovative AI-powered design assistant tool that transforms rough sketches into professional user interface designs, enabling designers to create stunning visuals with unprecedented speed and efficiency. 
✅[**Try out UIzard**](https://uizard.io/) ![UIzard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q8ui2xhvyo0u8dbz17eh.gif) #💠Bugasura Bugasura is a bug management tool where you can report, track, and resolve issues efficiently with AI-enabled bug reporting and issue tracking. ✅[**Try out Bugasura**](https://bugasura.io) ![Bugasura](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/38aqh0jtih9v5u179h50.gif) #💠Invideo AI Invideo AI instantly turns your text inputs into publish-worthy videos. It simplifies the process of generating the script and adding video clips, subtitles, background music, and transitions. ✅[**Try out Invideo AI**](https://invideo.io/) ![Invideo AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f2yroqqetcqo8ejp76an.gif) ##That's It. Thank you for reading this far. If you find this article useful, please like and share this article. Someone could find it useful too.💖 Connect with me on [**X**](https://x.com/kiran__a__n), [**GitHub**](https://github.com/Kiran1689), [**LinkedIn**](https://www.linkedin.com/in/kiran-a-n) <a href="https://www.buymeacoffee.com/Kiran1689" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/default-yellow.png" alt="Buy Me A Coffee" height="41" width="174"></a> {% embed https://dev.to/dev_kiran %}
dev_kiran
1,909,788
Understanding the Australia 408 Visa: Temporary Activity Visa
The Australia 408 Visa, also known as the Temporary Activity Visa, is a versatile visa category...
0
2024-07-03T07:17:03
https://dev.to/overseas_consultancy_6bb9/understanding-the-australia-408-visa-temporary-activity-visa-d0l
australia408visa, subclass408visa, 408visa, career
The Australia 408 Visa, also known as the Temporary Activity Visa, is a versatile visa category designed to allow individuals to participate in a wide range of short-term activities in Australia. Whether you are an entertainer, sports professional, religious worker, or involved in specific activities such as research or training, the 408 Visa provides a pathway to temporarily reside in Australia. This guide offers a detailed overview of the Subclass 408 Visa, including its purpose, eligibility criteria, application process, and benefits.

**Purpose of the 408 Visa**

The Temporary Activity Visa (subclass 408) is intended for individuals who wish to come to Australia for a variety of short-term activities.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7gyqwnizi7kkhsxy26vh.png)

These activities can include:

- **Entertainment**: Performing in or supporting performances, film, television, or live productions.
- **Sport**: Competing, coaching, or adjudicating in sports events.
- **Religious Work**: Participating in religious activities or duties.
- **Research**: Conducting or observing research projects.
- **Superyacht Crew**: Working as a crew member on superyachts.
- **Invited Participant**: Participating in events at the invitation of an Australian organization.
- **Special Programs**: Engaging in approved special programs, including youth exchange, cultural enrichment, or community programs.
- **Domestic Work for Executives**: Working as a domestic worker for executives.
- **Australian Government Endorsed Events**: Participating in events endorsed by the Australian Government, such as the COVID-19 pandemic response.

**Eligibility Criteria**

- **Sponsorship or Support**: Applicants must be sponsored or supported by an Australian organization, except for certain activities where this may not be required.
- **Genuine Temporary Entrant**: Applicants must demonstrate a genuine intention to stay in Australia temporarily and return home after their visa expires.
- **Health and Character Requirements**: All applicants must meet health and character requirements, including undergoing a medical examination and providing police clearances.
- **Specific Activity Requirements**: Depending on the activity, there may be additional requirements, such as skill or experience qualifications, endorsements, or letters of invitation.

**Application Process**

1. **Sponsorship/Support Approval**: The sponsoring or supporting Australian organization must apply for approval to sponsor or support the applicant. This involves providing evidence of the organization’s legitimacy and the purpose of the applicant’s visit.
2. **Lodging the Visa Application**: Once the sponsorship or support is approved, the applicant can lodge their visa application online. This includes providing detailed information about the applicant, their intended activities in Australia, and supporting documents.
3. **Processing and Decision**: The Department of Home Affairs processes the application, which can take several weeks. Applicants may be required to provide additional information or attend an interview.
4. **Visa Grant**: If the application is successful, the applicant receives a visa grant notice, allowing them to stay in Australia for the duration specified in their visa grant, which can be up to 2 years.

**Benefits of the 408 Visa**

- **Work Rights**: Depending on the activity, visa holders can work in their specified field during their stay.
- **Multiple Entries**: The visa allows for multiple entries, enabling visa holders to travel in and out of Australia during their visa period.
- **Family Inclusion**: Applicants can include eligible family members in their visa application, allowing them to accompany the primary visa holder to Australia.
- **Flexibility**: The 408 Visa covers a wide range of activities, providing flexibility for individuals engaged in various fields.

**Specific Streams under the 408 Visa**

- **Entertainment Activities**: This stream is for individuals working in the entertainment industry, including actors, musicians, and production staff.
- **Sporting Activities**: Designed for athletes, coaches, and adjudicators participating in sports events or training.
- **Religious Work**: For religious workers participating in religious activities and duties.
- **Research Activities**: This stream is for academics conducting or observing research in Australian institutions.
- **Special Programs**: Covers participants in youth exchange programs, cultural enrichment activities, and other approved special programs.
- **Australian Government Endorsed Events**: Includes participants in government-endorsed events such as COVID-19 pandemic response activities.

**Conclusion**

The Australia 408 Visa offers a unique opportunity for individuals from diverse backgrounds to engage in short-term activities in Australia. Its flexibility and wide range of applicable activities make it a valuable option for temporary residency. By understanding the eligibility criteria, application process, and benefits, prospective applicants can effectively navigate the pathway to obtaining the 408 Visa and making the most of their time in Australia. Whether you are an entertainer, sportsperson, researcher, or religious worker, the Subclass 408 Visa provides a gateway to contribute to and experience life in Australia temporarily. Visit: https://www.y-axis.com.au/visa/work/australia/subclass-408/
overseas_consultancy_6bb9
1,909,787
Mythbusting DOM: Was DOM Invented Alongside HTML?
There is a common belief that the DOM emerged simultaneously with HTML and has always been an...
0
2024-07-03T07:15:24
https://dev.to/babichweb/mythbusting-dom-was-dom-invented-alongside-html-3fme
webdev, html, development, history
There is a common belief that the DOM emerged simultaneously with HTML and has always been an integral part of web development, with developers having tools for dynamic manipulation of HTML elements from the very beginning. However, this is far from the truth. In reality, nearly a decade passed between the emergence of HTML and the creation of the DOM! How did this come about? It's undeniable that the web's development in the mid-90s progressed at an explosive rate. Just imagine — only four years passed from the creation of the first web page by Tim Berners-Lee to the launch of amazon.com. By 1996, the internet had become so widespread that the first promotional website for a movie, Space Jam, was launched. However, web development itself was still quite primitive, with a very limited set of tools that couldn't keep up with the rapid industry growth. Consider this — the second numbered version of HTML appeared in 1995 (there wasn't officially a first version), JavaScript's first version was developed in the same year, and CSS1 was released in December 1996. Amidst all this, the DOM was still a distant prospect. So what prompted the community to create a unified standard? In the mid-90s, the so-called First Browser War was in full swing, with two giants of the time, Netscape Navigator and Internet Explorer, battling it out. In the fight for market share, developers came up with new tricks and features, exacerbating the biggest problem of the time — the lack of a unified approach to implementing standards. Yes, I'm looking at you, Internet Explorer, and your ActiveX. As a result, each browser had its own tools for working with HTML, meaning simple scripts for animating snowflakes might not work in a competitor's browser if you only tested your code in Internet Explorer or vice versa, in Netscape Navigator. This could and did lead to unpredictable behaviour, bloated code, and logical errors. 
In 1994, the World Wide Web Consortium (W3C) was established to standardize web technologies and make life easier for web developers. One of the key initiatives of this organization was the creation of the DOM, or Document Object Model, to standardize interactions with web documents. The first version of the DOM documentation was published in 1998, marking a significant milestone in web development history. Finally, a standardized way of representing and interacting with HTML documents was introduced, allowing developers to hope their snowflakes would fall the same way in all relevant browsers. The first DOM became the foundation for modern web applications. However, this didn't mean all web development problems were solved that year. Rather, they reached a new level. Now, besides incompatibility with competitors, most browsers became incompatible with the standard. Some tried to fix this, some ignored it, and some pretended that the most standard standards were only what they did, while other standards were not so standard. The fact that the famous jQuery emerged only in 2006 vividly indicates that the cross-browser compatibility issue not only didn't disappear but flourished eight years after the DOM standard appeared. But that's a story for another time.
babichweb
1,909,786
Approval Testing with Widget tests | Flutter / Dart 🎯
Hi there, I already wrote an article about ApprovalTests for unit tests. In this article we will...
0
2024-07-03T07:15:23
https://dev.to/yelmuratoff/approval-testing-with-widget-tests-flutter-dart-925
dart, flutter, testing, approval
Hi there, I already wrote an article about [ApprovalTests for unit tests](https://dev.to/yelmuratoff/approval-testing-and-why-its-important-dart-3pic). In this article, we will touch on widget tests and how this package can be useful for us. ## Let's briefly go through again what exactly are ApprovalTests? In traditional testing, developers write automated tests to guard against regressions, specifying the expected outcomes directly in the code. `ApprovalTests` uses a similar method but stores the expected outcomes in a file instead. This file is generated by the test library rather than the developer, making the process of writing and maintaining tests more efficient. As a result, developers can spend more time on the functional code rather than the test code. ## 📋 How it works step by step - The first run of the test automatically creates an `approved` file if there is no such file. - If the changed results match the `approved` file perfectly, the test passes. - If there's a difference, a `reporter` tool will highlight the mismatch and the test fails. - If the test passes, the `received` file is deleted automatically. You can change this by changing the `deleteReceivedFile` value in `options`. If the test fails, the received file remains for analysis. 
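For example, keeping the `received` file around even on success might look like the sketch below. This is an assumption-laden illustration only: it reuses the `Approvals.verifyAsJson` and `Options` API shown later in this article and assumes `deleteReceivedFile` is a named parameter of `Options` (check the package docs before relying on it):

```dart
import 'package:approval_tests/approval_tests.dart';
import 'package:test/test.dart';

void main() {
  test('keep the received file even when the test passes', () {
    Approvals.verifyAsJson(
      {'name': 'JsonTest', 'version': 0.1},
      options: const Options(
        // Assumption: turns off the automatic cleanup described above,
        // so the .received file survives a passing run for inspection.
        deleteReceivedFile: false,
      ),
    );
  });
}
```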
Instead of writing: ```dart testWidgets('home page', (WidgetTester tester) async { await tester.pumpWidget(const MyApp()); await tester.pumpAndSettle(); expect(find.text('You have pushed the button this many times:'), findsOneWidget); expect(find.text('0'), findsOneWidget); expect(find.byWidgetPredicate( (Widget widget) => widget is Text && widget.data == 'hello' && widget.key == ValueKey('myKey'), ), findsOneWidget); expect(find.text('Approved Example'), findsOneWidget); }); ``` Write this: ```dart testWidgets('home page', (WidgetTester tester) async { await tester.pumpWidget(const MyApp()); await tester.pumpAndSettle(); await tester.approvalTest(); }); ``` To include your project's custom widget types in your test, and to perform post-test checks, add calls to `Approved.setUpAll()` to your tests' `setUpAll` calls, like so: ```dart main() { setUpAll(() { Approved.setUpAll(); }); } ``` And the result will be something like this: ![Flutter example result](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/msitbtlk6rw18zccdwgk.png) ![Console log result](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88m1vm3eddakiy4o6690.png) But if we for example change something and run the text, it will show us the difference, where it differs. There are several types of `Reporter` in the project, but the standard is `CommandLineReporter`, which gives the difference on the command line. Other reporters can be found in the project's Github repository. ![CommandLineReporter example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hx6ogxdxu7azbi02ldt6.png) ## Approving Results Approving results just means saving the `.approved.txt` file with your desired results. We’ll provide more explanation in due course, but, briefly, here are the most common approaches to do this. #### • Via Diff Tool Most diff tools have the ability to move text from left to right, and save the result. How to use diff tools is just below, there is a `Comparator` class for that. 
#### • Via CLI command You can run the command in a terminal to review your files: ```bash dart run approval_tests:review ``` After running the command, the files will be analyzed and you will be asked to choose one of the options: - `y` - Approve the received file. - `n` - Reject the received file. - `v`iew - View the differences between the received and approved files. After selecting `v` you will be asked which IDE you want to use to view the differences. The command `dart run approval_tests:review` has additional options, including listing files, selecting files to review from this list by index, and more. For its current capabilities, run ```bash dart run approval_tests:review --help ``` #### • Via approveResult property If you want the result to be automatically saved after running the test, you need to use the `approveResult` property in `Options`: ```dart void main() { test('test JSON object', () { final complexObject = { 'name': 'JsonTest', 'features': ['Testing', 'JSON'], 'version': 0.1, }; Approvals.verifyAsJson( complexObject, options: const Options( approveResult: true, ), ); }); } ``` this will result in the following file `example_test.test_JSON_object.approved.txt` ```txt { "name": "JsonTest", "features": [ "Testing", "JSON" ], "version": 0.1 } ``` #### • Via file rename You can just rename the `.received` file to `.approved`. ## ❓ Which File Artifacts to Exclude from Source Control You must add any `approved` files to your source control system. But `received` files can change with any run and should be ignored. For Git, add this to your `.gitignore`: ```gitignore *.received.* ``` Show some 💙 and star the [repo](https://github.com/approvals/ApprovalTests.Dart) to support the project! 🫰 For any questions, feel free to reach out via [Telegram](https://t.me/yelmuratoff) or email at [yelaman.yelmuratov@gmail.com](mailto:yelamanyelmuratov@gmail.com).
yelmuratoff
1,909,785
Interiors By Design
From meticulous preparation to faultless execution, we ensure your dream comes true. Our Kitchen...
0
2024-07-03T07:14:56
https://dev.to/interiorsbydesignuk/interiors-by-design-man
From meticulous preparation to faultless execution, we ensure your dream comes true. Our Kitchen Manufacturing procedure includes accurate planning, professional guidance, and flawless implementation. From the first design to the manufacturing stage, we collaborate closely with you the entire way. Our solid relationship with our fitters guarantees clear communication throughout the fitting procedure and enables us to handle unforeseen circumstances easily. We design kitchens with passion and accuracy to enrich your home! Visit : https://www.interiorsbydesignuk.com/
interiorsbydesignuk
1,909,784
Get Started with ChatGPT: Free Course for All Levels
Welcome to a Free ChatGPT Course designed for all levels. Understand the benefits of learning ChatGPT...
0
2024-07-03T07:14:26
https://dev.to/alex101112/get-started-with-chatgpt-free-course-for-all-levels-mg
ai, chatgpt
Welcome to a [Free ChatGPT Course](https://www.pickl.ai/course/chatgpt-free-certification-online) designed for all levels. Understand the benefits of learning ChatGPT and get an overview of what this course offers. ## Account Setup and Basics Learn how to create your ChatGPT account and navigate the platform. This foundational knowledge is crucial for getting started. ## Core Features and Functions Explore the key features and functions of ChatGPT. Understand basic commands and operations that will help you utilize ChatGPT effectively. ## Everyday Applications Discover how ChatGPT can be used in daily life. From writing and content creation to customer service and productivity tools, this section covers various practical applications. ## Customization and Advanced Use Take your skills to the next level by learning how to personalize your ChatGPT experience. Integrate it with other applications and explore advanced techniques for power users. ## Interactive Learning Engage with hands-on projects and challenges. Case studies and success stories provide real-world context, while peer interaction and feedback enhance the learning experience. ## Conclusion and Further Learning Summarize key points and explore resources for continued education. Join the ChatGPT community for support and stay updated with new developments in AI.
alex101112
1,909,783
AccessToken, RefreshToken
Reference:...
0
2024-07-03T07:13:40
https://dev.to/sunj/accesstoken-refreshtoken-24j0
_Reference: https://tae-jun.tistory.com/19_ _https://sokdak-sokdak.tistory.com/11_ _https://velog.io/@park2348190/JWT%EC%97%90%EC%84%9C-Refresh-Token%EC%9D%80-%EC%99%9C-%ED%95%84%EC%9A%94%ED%95%9C%EA%B0%80_
sunj
1,909,781
Spatial Reader Tech Support
Getting Support: mail: wenxian890105@qq.com or leave comment below.
0
2024-07-03T07:10:01
https://dev.to/wenxian_liu_9ca538b426cda/spatial-reader-tech-support-36h8
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nufuvy3shdk2jx8dqgn4.png)Getting support: email wenxian890105@qq.com or leave a comment below.
wenxian_liu_9ca538b426cda