---
title: Next.js Server Actions with next-safe-action
published: true
date: 2024-06-18 00:00:00 UTC
tags: nextjs,typesafety,validation,serveractions
canonical_url: https://www.davegray.codes/posts/nextjs-server-actions-with-next-safe-action
cover_image: https://raw.githubusercontent.com/gitdagray/my-blogposts/main/images/nextjs-server-actions-with-next-safe-action.png
---

**TLDR:** Add type-safe and validated server actions to your Next.js App Router project with next-safe-action.

## Next.js Server Actions

[Server Actions](https://nextjs.org/docs/app/building-your-application/data-fetching/server-actions-and-mutations) are asynchronous functions executed on the server in Next.js. They are defined with the `"use server"` directive and can be used in both server and client components for handling form submissions and data mutations.

Over the last year, I've seen them applied in a variety of ways, and I've used them in projects myself, too. Now, I have recently discovered the [next-safe-action](https://next-safe-action.dev/) library, and I like the structure, ease of use, and extra features it provides.

## An Example Server Action without next-safe-action

I think the best way to show why I like [next-safe-action](https://next-safe-action.dev/) is to first show how I implemented a Next.js server action without the library. Afterwards, I will show the refactor with [next-safe-action](https://next-safe-action.dev/).

Here's an example server action from a repository and tutorial I recently published on creating a [Next.js Modal Form with react-hook-form, ShadCN/ui, Server Actions and Zod validation](https://youtu.be/WyL_Jc6_-sY).
```ts
// src/app/actions/actions.ts
"use server"

import { UserSchema } from "@/schemas/User"
import type { User } from "@/schemas/User"

type ReturnType = {
    message: string,
    errors?: Record<string, unknown>
}

export async function saveUser(user: User): Promise<ReturnType> {
    // Check valid login here

    const parsed = UserSchema.safeParse(user)

    if (!parsed.success) {
        return { message: "Submission Failed", errors: parsed.error.flatten().fieldErrors }
    }

    await fetch(`http://localhost:3500/users/${user.id}`, {
        method: 'PATCH',
        headers: {
            "Content-Type": "application/json",
        },
        body: JSON.stringify({
            firstname: user.firstname,
            lastname: user.lastname,
            email: user.email,
        })
    })

    return { message: "User Updated! 🎉" }
}
```

You can see above that I was using a local `json-server` instance in the tutorial to update user data. Before the update occurs, the data is validated with [Zod](https://zod.dev/). If the validation fails, Zod validation errors are sent back to the client component with the ZodError `flatten` method applied.

Now let's compare to the refactored version using [next-safe-action](https://next-safe-action.dev/).

## An Example Server Action with next-safe-action

```ts
// src/app/actions/actions.ts
"use server"

import { UserSchema } from "@/schemas/User"
import { actionClient } from "@/lib/safe-action"
import { flattenValidationErrors } from "next-safe-action"

export const saveUserAction = actionClient
    .schema(UserSchema, {
        handleValidationErrorsShape: (ve) =>
            flattenValidationErrors(ve).fieldErrors,
    })
    .action(async ({ parsedInput: { id, firstname, lastname, email } }) => {
        // Check valid login here

        await fetch(`http://localhost:3500/users/${id}`, {
            method: 'PATCH',
            headers: {
                "Content-Type": "application/json",
            },
            body: JSON.stringify({
                firstname: firstname,
                lastname: lastname,
                email: email,
            })
        })

        return { message: "User Updated! 🎉" }
    })
```

The file has shrunk down from 33 lines to 26 lines of code.
Starting at the top, you can see I still import the Zod `UserSchema` I have defined. The inferred `User` type is no longer imported. New imports include `actionClient` and `flattenValidationErrors`.

Instead of `export async function`, I'm using `export const` and starting the definition of `saveUserAction` with the `actionClient`.

I chain the `schema` method to the `actionClient` while passing in the `UserSchema`. I also set the `handleValidationErrorsShape` option to use the imported `flattenValidationErrors` method. This method is similar to the ZodError `flatten` method that I used in the original function.

Next, I chain the `action` method and call the async function inside of it. It supplies a `parsedInput` prop. I destructure the prop to get the input data sent to the server action. The remainder of the function remains unchanged.

Note that in this refactored version I did not define a `ReturnType`. The return type is the result defined by the [useAction hook return object](https://next-safe-action.dev/docs/execution/hooks/useaction#useaction-return-object). I apply the `useAction` hook in the client component.

While some overhead is saved in the server action code you see above, even more is saved in the client component. Below, I again show before and after versions of the code. This time, the before and after are of the client component using [react-hook-form](https://react-hook-form.com/).
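One more piece the refactor relies on but never shows is the `actionClient` imported from `@/lib/safe-action`. Following the next-safe-action docs, the minimal version of that module is a one-time client setup — sketched here under the assumption that no middleware is needed yet (adjust the path to whatever your import alias maps to):

```ts
// src/lib/safe-action.ts — minimal setup sketch
import { createSafeActionClient } from "next-safe-action"

// One shared client for all server actions; auth checks, logging, etc.
// could later be chained onto it as middleware with .use()
export const actionClient = createSafeActionClient()
```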
## An Example Client Component without next-safe-action

```ts
// src/app/edit/[id]/UserForm.tsx
"use client"

import { useForm } from "react-hook-form"
import { Form } from "@/components/ui/form"
import { Button } from "@/components/ui/button"
import { InputWithLabel } from "@/components/InputWithLabel"
import { zodResolver } from "@hookform/resolvers/zod"
import { UserSchema } from "@/schemas/User"
import type { User } from "@/schemas/User"
import { saveUser } from "@/app/actions/actions"
import { useState, useEffect } from "react"
import { useRouter } from "next/navigation"

type Props = {
    user: User
}

export default function UserForm({ user }: Props) {
    const [message, setMessage] = useState('')
    const [errors, setErrors] = useState({})
    const router = useRouter()

    const form = useForm<User>({
        mode: 'onBlur',
        resolver: zodResolver(UserSchema),
        defaultValues: { ...user },
    })

    useEffect(() => {
        // boolean to indicate if form has not been saved
        localStorage.setItem("userFormModified", form.formState.isDirty.toString())
    }, [form.formState.isDirty])

    async function onSubmit() {
        setMessage('')
        setErrors({})
        /* No need to validate here because
        react-hook-form already validates with
        the Zod schema */

        const result = await saveUser(form.getValues())

        if (result?.errors) {
            setMessage(result.message)
            setErrors(result.errors)
            return
        } else {
            setMessage(result.message)
            // update client-side cache
            router.refresh()
            // reset dirty fields
            form.reset(form.getValues())
        }
    }

    return (
        <div>
            {message ? (
                <h2 className="text-2xl">{message}</h2>
            ) : null}

            {errors ? (
                <div className="mb-10 text-red-500">
                    {Object.keys(errors).map(key => (
                        <p key={key}>{`${key}: ${errors[key as keyof typeof errors]}`}</p>
                    ))}
                </div>
            ) : null}

            <Form {...form}>
                <form onSubmit={(e) => {
                    e.preventDefault()
                    form.handleSubmit(onSubmit)();
                }} className="flex flex-col gap-4">

                    <InputWithLabel
                        fieldTitle="First Name"
                        nameInSchema="firstname"
                    />
                    <InputWithLabel
                        fieldTitle="Last Name"
                        nameInSchema="lastname"
                    />
                    <InputWithLabel
                        fieldTitle="Email"
                        nameInSchema="email"
                    />

                    <div className="flex gap-4">
                        <Button>Submit</Button>
                        <Button type="button" variant="destructive"
                            onClick={() => form.reset()}
                        >Reset</Button>
                    </div>
                </form>
            </Form>
        </div>
    )
}
```

In the above example, I had to set state for both the message and errors that the original server action could return. I also needed to consider that state in the onSubmit function. In the refactored version below, you can see how this is simplified.

## An Example Client Component with next-safe-action

```ts
// src/app/edit/[id]/UserForm.tsx
"use client"

import { useForm } from "react-hook-form"
import { Form } from "@/components/ui/form"
import { Button } from "@/components/ui/button"
import { InputWithLabel } from "@/components/InputWithLabel"
import { zodResolver } from "@hookform/resolvers/zod"
import { UserSchema } from "@/schemas/User"
import type { User } from "@/schemas/User"
import { saveUserAction } from "@/app/actions/actions"
import { useEffect } from "react"
import { useRouter } from "next/navigation"
import { useAction } from "next-safe-action/hooks"
import { DisplayServerActionResponse } from "@/components/DisplayServerActionResponse"

type Props = {
    user: User
}

export default function UserForm({ user }: Props) {
    const router = useRouter()

    const { execute, result, isExecuting } = useAction(saveUserAction)

    const form = useForm<User>({
        resolver: zodResolver(UserSchema),
        defaultValues: { ...user },
    })

    useEffect(() => {
        // boolean to indicate if form has not been saved
        localStorage.setItem("userFormModified", form.formState.isDirty.toString())
    }, [form.formState.isDirty])

    async function onSubmit() {
        /* No need to validate here because
        react-hook-form already validates with
        the Zod schema */
        execute(form.getValues())
        // update client-side cache
        router.refresh()
        // reset dirty fields
        form.reset(form.getValues())
    }

    return (
        <div>
            <DisplayServerActionResponse result={result} />

            <Form {...form}>
                <form onSubmit={(e) => {
                    e.preventDefault()
                    form.handleSubmit(onSubmit)();
                }} className="flex flex-col gap-4">

                    <InputWithLabel
                        fieldTitle="First Name"
                        nameInSchema="firstname"
                    />
                    <InputWithLabel
                        fieldTitle="Last Name"
                        nameInSchema="lastname"
                    />
                    <InputWithLabel
                        fieldTitle="Email"
                        nameInSchema="email"
                    />

                    <div className="flex gap-4">
                        <Button>{isExecuting ? "Working..." : "Submit"}</Button>
                        <Button type="button" variant="destructive"
                            onClick={() => form.reset()}
                        >Reset</Button>
                    </div>
                </form>
            </Form>
        </div>
    )
}
```

In this refactored version, I imported the [useAction](https://next-safe-action.dev/docs/execution/hooks/useaction) hook supplied by next-safe-action and a custom component I created called `DisplayServerActionResponse`. I eliminated all usage of `useState`.

`DisplayServerActionResponse` receives the `result` that is provided by the useAction hook. It holds the data sent back from the server action. `useAction` also provides an `execute` function and an `isExecuting` boolean. (Check the [docs](https://next-safe-action.dev/docs/introduction) for what else it can provide, too.)

All of this greatly reduces the logic I needed to put in the `onSubmit` function. Receiving the `result` from the server action makes it easy to abstract the displayed response to the custom `DisplayServerActionResponse` component, too. Here's a quick look at that component as well.
## Displaying the Server Action Result

```ts
type Props = {
    result: {
        data?: {
            message?: string,
        },
        serverError?: string,
        fetchError?: string,
        validationErrors?: Record<string, string[] | undefined> | undefined,
    }
}

export function DisplayServerActionResponse({ result }: Props) {

    const { data, serverError, fetchError, validationErrors } = result

    return (
        <>
            {/* Success Message */}
            {data?.message ? (
                <h2 className="text-2xl my-2">{data.message}</h2>
            ) : null}

            {serverError ? (
                <div className="my-2 text-red-500">
                    <p>{serverError}</p>
                </div>
            ) : null}

            {fetchError ? (
                <div className="my-2 text-red-500">
                    <p>{fetchError}</p>
                </div>
            ) : null}

            {validationErrors ? (
                <div className="my-2 text-red-500">
                    {Object.keys(validationErrors).map(key => (
                        <p key={key}>{`${key}: ${validationErrors[key as keyof typeof validationErrors]}`}</p>
                    ))}
                </div>
            ) : null}
        </>
    )
}
```

Above, you can see that [next-safe-action](https://next-safe-action.dev/) provides not only validation errors from the Zod schema I constructed, but it also provides server errors and fetch errors. In addition, the result object contains the success message I provided from the server action.

## Learn More

This is just one example and a simple one at that! Dive into the docs and solve your own specific use case to see what else [next-safe-action](https://next-safe-action.dev/) is capable of. I plan to refactor my old server actions and use next-safe-action going forward.

<hr />

## Let's Connect!

Hi, I'm Dave. I work as a full-time developer, instructor and creator. If you enjoyed this article, you might enjoy my other content, too.
**My Stuff:** [Courses, Cheat Sheets, Roadmaps](https://courses.davegray.codes/)

**My Blog:** [davegray.codes](https://www.davegray.codes/)

**YouTube:** [@davegrayteachescode](https://www.youtube.com/davegrayteachescode)

**X:** [@yesdavidgray](https://x.com/yesdavidgray)

**GitHub:** [gitdagray](https://github.com/gitdagray)

**LinkedIn:** [/in/davidagray](https://www.linkedin.com/in/davidagray/)

**Patreon:** [Join my Support Team!](https://patreon.com/davegray)

**Buy Me A Coffee:** [You will have my sincere gratitude](https://www.buymeacoffee.com/davegray)

Thank you for joining me on this journey.

Dave
*— gitdagray*

---

# Importance of JPG Images

*Published 2024-06-17 · https://dev.to/msmith99994/importance-of-jpg-images-3ffe*
## What Are JPG Images?

JPG, also known as JPEG (Joint Photographic Experts Group), is a widely-used image format that employs lossy compression to reduce file size while maintaining acceptable image quality. Introduced in 1992, the JPG format has become the standard for digital photography and web images due to its balance of quality and file size.

## Characteristics of JPG Images

- **Lossy Compression:** JPG images use a compression method that reduces file size by discarding some of the image data, which can result in a loss of quality, especially at higher compression levels.
- **Color Range:** JPG supports 24-bit color, which can display millions of colors, making it ideal for complex images like photographs.
- **Adjustable Compression:** The level of compression can be adjusted, allowing users to choose between higher quality or smaller file size.

## Where Are JPG Images Used?

JPG images are ubiquitous across various platforms and applications:

- **Digital Photography:** JPG is the standard format for digital cameras and smartphones, balancing quality and file size to store large numbers of photos.
- **Web Design:** JPG images are widely used on websites for photographs and images with gradients and complex colors, as they load quickly due to their smaller file size.
- **Social Media:** Platforms like Facebook, Instagram, and Twitter use JPG for sharing images, ensuring fast loading times and efficient storage.
- **Email and Document Sharing:** JPG files are commonly used in emails and documents due to their manageable size and compatibility with most software.

## Advantages and Disadvantages of JPG Images

### Advantages

- **Small File Size:** JPG's lossy compression significantly reduces file size, making it ideal for web use and storage.
- **Wide Compatibility:** JPG is supported by virtually all devices, software, and web browsers, ensuring seamless viewing and sharing.
- **High Color Depth:** With 24-bit color, JPG images can display millions of colors, making them suitable for detailed and colorful images like photographs.
- **Adjustable Quality:** Users can adjust the compression level to find a balance between quality and file size that suits their needs.

### Disadvantages

- **Lossy Compression:** The compression process discards some image data, which can lead to visible artifacts and a loss of quality, especially at higher compression levels.
- **Limited Editing Capability:** Repeatedly editing and saving JPG files can degrade quality over time due to cumulative compression losses.
- **No Transparency Support:** Unlike PNG or WebP, JPG does not support transparency, limiting its use for images requiring clear backgrounds or overlays.

## How to Convert WebP to JPG

Converting [WebP to JPG](https://cloudinary.com/tools/webp-to-jpg) is a straightforward process that can be accomplished using various tools and methods:

1. **Using Online Tools:** Websites like Convertio and Online-Convert allow you to upload WebP files and download the converted JPG files.
2. **Using Image Editing Software:** Software like Adobe Photoshop and GIMP supports the WebP format. You can open your WebP file and save it as JPG.
3. **Command-Line Tools:** Command-line tools like `dwebp` from the WebP library can be used for conversion.
4. **Programming Libraries:** Programming libraries such as Python's Pillow or JavaScript's sharp can be used to automate the conversion process in applications.

## Final Words

JPG images remain a cornerstone of digital imaging, offering a practical balance of quality and file size. They are extensively used in digital photography, web design, social media, and document sharing due to their wide compatibility and efficient storage. While the lossy compression can lead to a reduction in image quality, the advantages of smaller file sizes and adjustable compression make JPG a versatile and valuable format.
Understanding how to convert between WebP and JPG ensures flexibility and compatibility across various digital platforms, making it an essential skill for modern digital content management.
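To make the programming-library route (method 4 above) concrete, here is a short sketch using the Node `sharp` library. It assumes `sharp` is installed, and the file names are placeholders:

```javascript
// convert.mjs — hypothetical example; run with: node convert.mjs
import sharp from "sharp"

// Decode the WebP source and re-encode it as JPEG.
// The quality option (0-100) is JPG's lossy trade-off in action:
// lower values mean smaller files and more visible artifacts.
await sharp("photo.webp")
    .jpeg({ quality: 80 })
    .toFile("photo.jpg")
```

The same two-call pattern (`.jpeg()` then `.toFile()`) works in batch scripts, making it easy to automate large conversions.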
*— msmith99994*

---

# Enhance Your Tailwind CSS Skills: Essential Tips and Tricks

*Published 2024-06-17 · tags: css, tailwindcss, webdev, beginners · https://dev.to/amorimjj/enhance-your-tailwind-css-skills-essential-tips-and-tricks-hp0*
Hey folks,

In my previous [post](https://dev.to/amorimjj/introducing-rocketicons-the-perfect-companion-for-react-and-tailwind-css-developers-417b), introducing [Rocketicons](https://rocketicons.io), a powerful icon library designed to be used with [Tailwind](https://tailwindcss.com), I expressed my love for the framework, how amazing I think it is, and encouraged its use. A colleague shared an interesting insight that caught my attention.

![Comment pointing out a few issues regarding the use of Tailwind](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z39tb1wg9xyfsb8er66s.png)

On one hand, I don't think this is a problem exclusive to Tailwind, but on the other, I can't deny it's a good point that deserves special care from developers. Tailwind is really easy to use and can invite you to just start doing things, but its indiscriminate use can generate problems.

You've probably heard about clean code, good practices, and things like that. If you haven't figured it out yet, those aren't about creating code people admire or getting kudos. It's about how easy it is to look at your code and understand what it does, because this way, you can continue your work. It's about having a structure that is easy to extend, change, and maintain. Sometimes, even the creator, after some time, looks at the code and thinks, "What does this do? How does it work? How can I change it?" These are situations we must find ways to avoid.

_If you're trying to understand why good practices are important, take a look at [Understanding Code Quality: The Why Behind Best Practices](https://www.linkedin.com/pulse/understanding-code-quality-why-behind-best-practices-jeferson-amorim-gd5if)._

I'm not here to dictate rules because I don't believe there is a single right way of doing things. Therefore, this is not the ultimate guide to all of Tailwind's good practices, but perhaps with community collaboration, it can become one someday.
The absence of a right way definitely does not exclude the possibility of a wrong way, because it does exist. Failing to follow a few principles in code writing will create a maintenance nightmare. By the way, once you create any code, keep in mind that someone will need to maintain it, and that someone could be you!

I'm going to share tips and explanations to help you make great use of Tailwind, using the power of the tool on your side to create amazing experiences for your users.

### Learn CSS

#### How can you style if you don't know anything about styling?!

Understanding the fundamentals of CSS is crucial before diving into Tailwind. In fact, that point is valid for any framework you are trying to learn. While Tailwind simplifies many styling tasks, a base knowledge of CSS is required to understand how styling works. It's not just about understanding how Tailwind works under the hood, which helps you use it effectively, but also about seeing the big picture of what styling is. Besides, knowing CSS also empowers you to create custom utilities in Tailwind when the predefined ones don't meet your needs.

_So, the first one is_: **Learn CSS!**

### Dominate the tool

#### Tailwind is powerful, but only if you know how to use it properly.

Spend time reading the documentation, experimenting with different classes, and understanding how utility-first CSS works. Pay special attention to understanding configuration and how plugins work. This will help you master the tool and know when and how to use each feature. The documentation includes plenty of examples and explanations to get you started and help you become an expert on the framework.

_Become a_ **Master of the Tool!**

### Use of the colors

#### Tailwind offers a predefined palette of colors, but you can (and should) customize it to match your brand guidelines.
The available options offered out of the box are amazing, inviting us to just use them, creating combinations and coloring everything. What's the problem, right? I'm just using blue and slate variations all over the code, why is it a problem? But imagine you face a rebranding, and blue now becomes sky. How many places will require changes? How safe is it to just run a replace-all?

Define a set of colors in the Tailwind configuration file and stick to them. This ensures consistency across your application and makes it easier to manage changes globally. The documentation is very detailed, so I don't think you will face any problems doing this. Just take a look at [naming your colors](https://tailwindcss.com/docs/customizing-colors#naming-your-colors) to help you achieve it.

My personal preference is for the [Material Color System](https://m3.material.io/styles/color/system/overview) convention. I think it's clear and suitable for most situations. The color names, such as primary, secondary, and surface, are descriptive and not connected to any specific brand, making them suitable for multiple applications. That's why I choose it. But don't be afraid to experiment with different options.

_Just_ **avoid hardcoding colors in your HTML.**

### Customize to match your needs*

#### Sometimes you need something more specific for your project.

Tailwind's default utilities cover a lot of ground, but don't hesitate to create your own utilities. This can help keep your HTML clean and make your styling more manageable. For example, if you frequently use a specific combination of classes for text color, like `text-primary dark:text-primary-700`, consider creating a custom utility.
```javascript
// tailwind.config.js
module.exports = {
  theme: {
    extend: {},
  },
  variants: {},
  plugins: [
    ({ addUtilities }) => {
      addUtilities({
        '.text-default': {
          "@apply text-primary dark:text-primary-700": {}
        }
      })
    }
  ]
}
```

or using CSS syntax:

```css
@tailwind base;
@tailwind components;
@tailwind utilities;

...

@layer utilities {
  .text-default {
    @apply text-primary dark:text-primary-700;
  }
}
```

The same goes for variations, breakpoints, etc.

_Don't be afraid to_ **create your own utilities!**

### Reuse your code*

#### Have you heard about DRY (Don't Repeat Yourself)?

It's not just because repetition is ugly or wrong. How can code you wrote once, nicely, and that runs well stop working or looking bad just because you copied it twice? The problem, once again, is maintenance. Imagine you find an issue and need to update the routine for any reason. How many places will need an update? What are the odds you miss one spot and deploy code with a bug? That's why the DRY principle is so important.

If you're repeating yourself using Tailwind, take a break and think about how to fix that. What is the best approach to solving it? A component, a loop, a utility, using `@apply`? Learn how [reusing styles](https://tailwindcss.com/docs/reusing-styles) can help you avoid this problem.

_If you think about_ **copy and paste, just don't!**

### Do not run from the configuration complexity

#### For God's sake, you are a developer! You're built for that!

Sass, SCSS, or even creating breakpoints manually in plain CSS are not simple either. The sooner you embrace the complexity of those things, the stronger and more skilled you become. And, to be honest, [getting started](https://tailwindcss.com/docs/installation) with Tailwind for learning purposes is really easy, and for most cases, just configuring your color scheme should be enough.
### Avoid the Unnecessary Use of Markup

#### One of the principles of writing clean and maintainable code is to avoid unnecessary markup.

Tailwind encourages a utility-first approach, which means you can apply styles directly to your HTML elements without needing additional wrappers or extraneous elements. Keep in mind the number of style classes will increase a lot, so we must be smart about the markup structure. This not only keeps your HTML clean but also enhances readability and maintainability.

Unnecessary markup can bloat your HTML, making it harder to read and maintain. By avoiding it, you can ensure your code remains streamlined and efficient. Here are a few tips to help you avoid unnecessary markup:

- **Use Tailwind's Utility Classes**: Tailwind provides a comprehensive set of utility classes that can be applied directly to HTML elements. This eliminates the need for additional CSS classes or extra HTML elements.

```html
<!-- AVOID: Using extra divs just for styling -->
<div class="p-4">
  <div class="bg-surface">
    <p class="text-center text-primary">Hello, World!</p>
  </div>
</div>

<!-- PREFER: Applying utilities directly to the element -->
<p class="p-4 bg-surface text-center text-primary">Hello, World!</p>
```

- **Simplify Your Structure**: Evaluate your HTML structure and remove any elements that don't serve a specific purpose. Each element should have a clear role, whether it's for semantic structure or styling purposes.

```html
<!-- AVOID: Nested divs without a clear purpose -->
<div class="outer-wrapper">
  <div class="inner-wrapper">
    <div class="content">
      <p class="text-lg">This is a paragraph.</p>
    </div>
  </div>
</div>

<!-- PREFER: Simplified structure -->
<p class="text-lg">This is a paragraph.</p>
```

- **Use Tailwind's Flex and Grid Utilities**: Tailwind's flex and grid utilities can often eliminate the need for additional containers. By using these utilities, you can create complex layouts with minimal markup.
```html
<!-- AVOID: Using extra containers for layout -->
<div class="container">
  <div class="row">
    <div class="col">
      <p>Item 1</p>
    </div>
    <div class="col">
      <p>Item 2</p>
    </div>
  </div>
</div>

<!-- PREFER: Using Tailwind's grid utilities and semantic markup -->
<ul class="grid grid-cols-2 gap-4">
  <li>Item 1</li>
  <li>Item 2</li>
</ul>
```

By avoiding unnecessary markup, you can keep your codebase clean, reduce complexity, and make your project easier to maintain. This practice, combined with Tailwind's powerful utility-first approach, will enable you to create efficient and scalable designs.

### Use the documentation

#### Tailwind's documentation is one of its greatest strengths.

It is comprehensive, well-organized, and full of examples. Make it a habit to consult the documentation regularly. Whether you're looking for a specific utility, trying to understand how to customize the configuration, or searching for best practices, the documentation should be your go-to resource.

### Think about global changes

#### It's a good exercise to find the balance between writing code that is easy to maintain and not worrying about a future that may not come.

With that in mind, it's up to you to decide what should be handled globally or not. For example, if you decide to update your brand colors or adjust the spacing scale, what changes will be required? Can you do it in one place so the changes propagate throughout your entire project? Tailwind makes it easy to apply global changes through its configuration file. Take advantage of this feature to maintain consistency and manage styles efficiently.

#### \*Be careful with the use of `@apply`.

Otherwise, we'll just be writing CSS in a different way. That is not the purpose of Tailwind. The magic of this tool is keeping the view and style in the same place, making it easy to understand how it will behave and appear.
The appeal of having all the variations, breakpoints, and pseudo-classes directly in HTML facilitates understanding what is going on, centralizing all the view's core changes in one place. That is priceless!

I know, I know… I've been talking about good practices and principles until now, and suddenly I'm talking about having the view and style in the same place. It might sound nonsensical, breaking the separation-of-concerns principle, but I'd like to propose a reflection here. Should we always follow the principles? Do they always make sense? What about when the paradigm changes? And what about evolution? Are those principles still valid nowadays, for the current technologies?

In my personal opinion, based on the code I've written, having all the aspects of the view in the same place is a good thing, especially because style doesn't work without the view. It's not like a data layer that works alone as a data provider and can be used for multiple applications. The only reason the styling exists is the view.

Looking across CSS files to find which styles apply to the view and must be updated can be a tricky task. But for years, no better way of doing that was available, and keeping those separate was by far the best option. Using Tailwind, though, it's not required anymore. I think, maybe, keeping those together can be the modern way.

Anyway, I'm just a guy from Brazil, sharing a few thoughts… Let me know what you think about it.

[Cover Image by freepik](https://www.freepik.com/free-ai-image/futurism-perspective-digital-nomads-lifestyle_138710890.htm#fromView=image_search_similar&page=1&position=1&uuid=f5292c00-86a3-43a2-882e-4418e0aebe7d)
*— amorimjj*

---

# The Current State of AI in Open-Source LMSs: Comparing Moodle, Canvas, Open edX and Sakai

*Published 2024-06-17 · tags: lms, ai, hosting, krestomatio · https://krestomatio.com/es/blog/current-state-ai-open-source-lms/*
In recent years, the integration of Artificial Intelligence (AI) into Learning Management Systems (LMS) has revolutionized educational technology. Open-source LMS platforms such as Moodle, Canvas, Open edX and Sakai are expected to lead this revolution, bringing unique AI capabilities to improve teaching and learning experiences. This post explores the current state of AI integration in these popular open-source LMS platforms, comparing their features, strengths, weaknesses and worldwide adoption.

## The Role of AI in Modern LMS Platforms

Generative AI has transformed the educational landscape by automating repetitive tasks, personalizing learning experiences and improving accessibility. All the major open-source LMS platforms have recognized this potential and are actively working on AI integration to optimize educational outcomes.

## Comparing AI Features Across Open-Source LMS Platforms

### Moodle™ LMS

**Website**: [Moodle](https://moodle.com/es)

Moodle is at the forefront of AI integration, with a comprehensive AI subsystem under development. Key features include:

- **[AI Principles](https://moodle.com/us/about/moodle-ai-principles/)**: Moodle adheres to a set of AI principles focused on transparency, configurability, data protection, equality, ethical practice and education.
- **[AI Subsystem](https://tracker.moodle.org/browse/MDL-80889)**: Provides easy ways to interact with AI, such as generating content, summarizing text and creating images.
- **Plugins**: Multiple AI plugins such as [AI Connector](https://moodle.org/plugins/local_ai_connector), [AI Questions Generator](https://moodle.org/plugins/local_aiquestions) and [OpenAI Chat Block](https://moodle.org/plugins/block_openai_chat) extend Moodle's capabilities.
- **[Grupo de Investigación de IA](https://moodle.org/enrol/index.php?id=17254)**: Explora nuevas tecnologías para dar forma al futuro de la plataforma Moodle. Colaboran con la comunidad de Moodle a través de encuestas y discusiones para entender cómo se puede utilizar mejor la IA para mejorar las experiencias de aprendizaje. #### Fortalezas y debilidades de Moodle LMS **Fortalezas**: - Principios de IA integrales que garantizan un uso ético. - Robusto ecosistema de plugins para la integración de IA. - Activo grupo de investigación de IA y apoyo comunitario. **Debilidades**: - Complejidad en la gestión de múltiples plugins de IA. - Requiere experiencia técnica para una configuración y uso óptimos. #### Adopción mundial y madurez comunitaria de Moodle LMS Moodle cuenta con más de 180,000 instalaciones y 200 millones de usuarios en todo el mundo, apoyado por una comunidad madura y activa de código abierto. Su extenso directorio de plugins y actualizaciones frecuentes lo convierten en un favorito entre educadores e instituciones. ### Canvas LMS **Sitio web**: [Canvas](https://www.instructure.com/canvas) Canvas también está aprovechando la IA para agilizar los procesos educativos. Las características notables de la IA incluyen: - **Herramientas de IA para docentes**: Herramientas para acelerar el proceso de aprendizaje mediante la automatización de tareas administrativas y la provisión de conocimientos más profundos. - **Gamificación y personalización**: Gamificación impulsada por IA para involucrar a los estudiantes y personalizar las rutas de aprendizaje. - **Cumplimiento**: Alineado con la Orden Ejecutiva sobre IA de la Casa Blanca, asegurando un uso ético y responsable de las tecnologías de IA. #### Fortalezas y debilidades de Canvas LMS **Fortalezas**: - Fuerte enfoque en el apoyo y eficiencia docente. - Personalización y gamificación impulsadas por IA. - Cumplimiento de estándares éticos y órdenes ejecutivas. 
**Debilidades**: - Transparencia limitada sobre funcionalidades específicas de IA. - Puede requerir recursos adicionales para una utilización completa de la IA. #### Adopción mundial y madurez comunitaria de Canvas LMS Canvas es ampliamente adoptado en Norteamérica y más allá, con un fuerte apoyo institucional y una vibrante comunidad. Su interfaz fácil de usar y características poderosas contribuyen a su popularidad. ### Open edX **Sitio web**: [Open edX](https://openedx.org) Open edX utiliza la IA para mejorar la creación de contenido y la interacción con los estudiantes. Las características clave incluyen: - **Creación de cursos impulsada por IA**: Utiliza modelos de lenguaje grandes (LLM) para crear contenido de cursos atractivo. - **ChatGPT XBlock**: Integra ChatGPT para experiencias de aprendizaje interactivas. - **Impacto en el aprendizaje**: Se centra en cómo la IA puede transformar el aprendizaje en línea mejorando el compromiso y la calidad del curso. #### Fortalezas y debilidades de Open edX **Fortalezas**: - Herramientas avanzadas de creación de contenido impulsadas por IA. - Características de IA interactivas como ChatGPT XBlock. - Fuerte énfasis en mejorar el compromiso de aprendizaje. **Debilidades**: - Requiere conocimientos técnicos para integrar herramientas avanzadas de IA. - Menor enfoque en funcionalidades administrativas de IA en comparación con otros. #### Adopción mundial y madurez comunitaria de Open edX Open edX es utilizado por instituciones prestigiosas como MIT y Harvard, lo que demuestra su fiabilidad y escalabilidad. Su comunidad de código abierto es innovadora, contribuyendo significativamente a su desarrollo. ### Sakai **Sitio web**: [Sakai](https://www.sakailms.org) Sakai está actualmente rezagado en la integración de la IA en comparación con otras plataformas. Aunque hay potencial, las características y herramientas de IA detalladas aún no se han desarrollado y presentado prominentemente. 
#### Fortalezas y debilidades de Sakai **Fortalezas**: - Potencial de crecimiento en la integración de IA. - Apoyo comunitario activo. **Debilidades**: - Actualmente carece de características significativas de IA. - Necesita más desarrollo para ponerse al día con otras plataformas. #### Adopción mundial y madurez comunitaria de Sakai Sakai, aunque no tan ampliamente adoptado como Moodle o Canvas, tiene una base de usuarios y comunidad dedicados. Es particularmente popular en ciertas instituciones académicas y continúa creciendo. ## Conclusión La integración de la IA en las plataformas LMS está transformando nuestra forma de abordar la educación, haciéndola más eficiente, personalizada y accesible. Mientras que Moodle, Canvas y Open edX están liderando la carga con características innovadoras de IA, Sakai tiene el potencial de crecer en este espacio. Al elegir el LMS adecuado y aprovechar las capacidades de IA, las instituciones educativas pueden mejorar significativamente sus procesos de enseñanza y aprendizaje. Para obtener información más detallada sobre las capacidades de IA de cada LMS, visite sus respectivos sitios web: - [Moodle](https://moodle.com/es) - [Canvas](https://www.instructure.com/canvas) - [Open edX](https://openedx.org) - [Sakai](https://www.sakailms.org) Considera el servicio gestionado de Krestomatio para una experiencia sin complicaciones con Moodle LMS. Visita [nuestra página de precios de suscripción](https://krestomatio.com/es/pricing/) para obtener más información. Al mantenerse informado sobre los últimos avances en IA y plataformas LMS, los educadores y las instituciones pueden aprovechar al máximo estas poderosas herramientas para mejorar los resultados del aprendizaje y la eficiencia operativa.
jobcespedes
1,891,798
Deploying a "Hello World" Application to AWS Elastic Beanstalk
Introduction AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web...
27,646
2024-06-17T23:36:11
https://dev.to/prakash_rao/deploying-a-hello-world-application-to-aws-elastic-beanstalk-pag
**Introduction** AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. This post will demonstrate how to deploy your application to AWS Elastic Beanstalk. **Prerequisites** - AWS account - Basic knowledge of web application development - Familiarity with AWS services is helpful but not required ## Step 1: Creating a Simple Web Application **Initial Setup** - Choose a programming language and framework. For this example, let's use Python with Flask. - Create a new directory for your project and navigate into it. - Initialize a new Python virtual environment and activate it (optional but recommended). **Application Code** - Create a file named application.py and add the following Flask application code: ``` from flask import Flask application = Flask(__name__) @application.route('/') def hello_world(): return 'Hello, World!' if __name__ == '__main__': application.run() ``` - Create a requirements.txt file specifying Flask: ``` Flask==1.1.2 ``` **Local Testing (Optional)** - Run the application locally to ensure it works. - Open a browser and navigate to http://localhost:5000 to see the "Hello, World!" message. ## Step 2: Preparing the Application for Deployment - Zip the application.py and requirements.txt files together. ``` zip myapp.zip application.py requirements.txt ``` ## Step 3: Creating an Elastic Beanstalk Environment - Log in to the AWS Management Console. - Navigate to the Elastic Beanstalk service and click "Create New Application". - Enter an application name and description. - Create a new environment within this application – choose the Web server environment. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sswwzw8ymrlieueqcjej.png) - Select the Python platform and choose the appropriate version.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m4usb5iyxordc40sddlh.png) ## Step 4: Uploading and Deploying the Application - When prompted to upload your code, choose the "Upload your code" option and upload the myapp.zip file. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ae188lmaioi86s0hvhjx.png) - Configure more options if needed, or simply click "Create environment". ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfk6yva6fg80ty2szysf.png) ## Step 5: Configuring Service Access - For Elastic Beanstalk, selecting a service role is crucial. For the purposes of this lab, let's choose an existing service role and an existing EC2 instance profile. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hrkgmjw8e9fm4utzcst7.png) ## Step 6: Environment Configuration and Launch - For the rest of the optional parameters, keep them as they are. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rmqb5pl8299hzj30ua4v.png) - AWS Elastic Beanstalk will now create your environment and deploy the application. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tyr44iqhh4ndvcmyve1h.png) - This process might take a few minutes. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfmd3hplnq4cskn4pw90.png) ## Step 7: Accessing the Deployed Application - Once the environment is ready, click the provided URL to access your application. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dygnp5emofpvt7f9r9kf.png) - You should see the "Hello, World!" message served from AWS Elastic Beanstalk. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kikfpcoiellmpd7sz7zf.png) ## Step 8: Clean Up - To avoid incurring charges, delete the Elastic Beanstalk environment and application.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ubidvclbcubxwtaa4tkk.png) ## Conclusion Congratulations! You've just taken a significant step into the world of web application deployment using AWS Elastic Beanstalk. By following the steps outlined in this post, you have learned how to deploy a "Hello World" application. As you become more comfortable with AWS Elastic Beanstalk, I encourage you to explore its advanced features, experiment with different configurations, and consider how you can integrate other AWS services to enhance your application's functionality and performance. Now that you have the basics down, the sky is the limit. Keep learning, keep experimenting, and most importantly, have fun building! ## Additional Resources To further your knowledge and skills in AWS Elastic Beanstalk and related AWS services, here are some additional resources you may find useful: [AWS Elastic Beanstalk Developer Guide](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/): The official Elastic Beanstalk documentation provides detailed information on how to use and configure the service. [AWS Elastic Beanstalk Sample Applications](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/samples.html): Explore sample applications provided by AWS that you can deploy and study. [AWS Training and Certification](https://www.aws.training/): AWS offers various training courses and certifications that can help you deepen your understanding of AWS services. [AWS Developer Forums](https://forums.aws.amazon.com/forum.jspa?forumID=86): Elastic Beanstalk: Join the community forum to ask questions, share experiences, and get insights from other AWS developers. [Flask Documentation](https://flask.palletsprojects.com/en/latest/): The official documentation for Flask is an excellent resource for learning more about building web applications with this micro web framework.
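As a footnote to the optional local test in Step 1, the browser check can also be automated with Flask's built-in test client. The sketch below mirrors the application.py snippet from the article and assumes Flask is installed:

```python
# Minimal sketch: verify the "Hello, World!" route without starting a
# server or opening a browser, using Flask's built-in test client.
from flask import Flask

application = Flask(__name__)

@application.route('/')
def hello_world():
    return 'Hello, World!'

# The test client issues requests directly against the WSGI app.
with application.test_client() as client:
    response = client.get('/')
    assert response.status_code == 200
    assert response.data == b'Hello, World!'
```

Running this before zipping the files in Step 2 catches route or import errors locally instead of after a deployment round-trip.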
prakash_rao
1,891,797
The Current State of AI in Open Source LMS: Moodle, Canvas, Open edX, and Sakai Compared
In recent years, the integration of Artificial Intelligence (AI) into Learning Management Systems...
0
2024-06-17T23:35:30
https://krestomatio.com/blog/current-state-ai-open-source-lms/
lms, ai, hosting, krestomatio
In recent years, the integration of Artificial Intelligence (AI) into Learning Management Systems (LMS) has been a game-changer for educational technology. Open-source LMS platforms like Moodle, Canvas, Open edX, and Sakai are expected to lead this revolution, bringing unique AI capabilities to enhance teaching and learning experiences. This blog post explores the current status of AI integration in these popular open-source LMS platforms, comparing their features, strengths, weaknesses and worldwide adoption. ## The Role of AI in Modern LMS Platforms Generative AI has transformed the landscape of education by automating repetitive tasks, personalizing learning experiences, and improving accessibility. All the major open-source LMS platforms have recognized this potential and are actively working on integrating AI to optimize educational outcomes. ## Comparing AI Features in Open Source LMS Platforms ### Moodle™ LMS **Website**: [Moodle](https://moodle.com) Moodle is at the forefront of AI integration with a comprehensive AI subsystem in development. Key features include: - **[AI principles](https://moodle.com/us/about/moodle-ai-principles/)**: Moodle adheres to a set of AI principles focused on transparency, configurability, data protection, equality, ethical practice, and education. - **[AI Subsystem](https://tracker.moodle.org/browse/MDL-80889)**: Provides user-friendly ways for interaction with AI, such as generating content, summarizing text, and creating images. - **Plugins**: Multiple AI plugins like [AI Connector](https://moodle.org/plugins/local_ai_connector), [AI Questions Generator](https://moodle.org/plugins/local_aiquestions), and [OpenAI Chat Block](https://moodle.org/plugins/block_openai_chat) enhance Moodle's capabilities. - **[AI Research Group](https://moodle.org/enrol/index.php?id=17254)**: Explores new technologies to shape the future of the Moodle platform. 
They collaborate with the Moodle community through surveys and discussions to understand how AI can best be used to enhance learning experiences. #### Strengths and Weaknesses of Moodle LMS **Strengths**: - Comprehensive AI principles ensuring ethical use. - Robust plugin ecosystem for AI integration. - Active AI research group and community support. **Weaknesses**: - Complexity in managing multiple AI plugins. - Requires technical expertise for optimal setup and use. #### Worldwide Adoption and Community Maturity of Moodle LMS Moodle boasts over 180,000 installations and 200 million users worldwide, supported by a mature and active open-source community. Its extensive plugin directory and frequent updates make it a favorite among educators and institutions. ### Canvas LMS **Website**: [Canvas](https://www.instructure.com/canvas) Canvas is also leveraging AI to streamline educational processes. Notable AI features include: - **AI Tools for Teachers**: Tools to accelerate the learning process by automating administrative tasks and providing deeper insights. - **Gamification and Personalization**: AI-driven gamification to engage students and personalize learning paths. - **Compliance**: Aligns with the White House's Executive Order on AI, ensuring ethical and responsible use of AI technologies. #### Strengths and Weaknesses of Canvas LMS **Strengths**: - Strong focus on teacher support and efficiency. - AI-driven personalization and gamification. - Compliance with ethical standards and executive orders. **Weaknesses**: - Limited transparency on specific AI functionalities. - May require additional resources for full AI utilization. #### Worldwide Adoption and Community Maturity of Canvas LMS Canvas is widely adopted in North America and beyond, with strong institutional support and a vibrant community. Its user-friendly interface and powerful features contribute to its popularity. 
### Open edX **Website**: [Open edX](https://openedx.org) Open edX utilizes AI to enhance content creation and student interaction. Key features include: - **AI-Driven Course Creation**: Uses large language models (LLMs) to craft engaging course content. - **ChatGPT XBlock**: Integrates ChatGPT for interactive learning experiences. - **Impact on Learning**: Focuses on how AI can transform online learning by improving engagement and course quality. #### Strengths and Weaknesses of Open edX **Strengths**: - Advanced AI-driven content creation tools. - Interactive AI features like ChatGPT XBlock. - Strong emphasis on improving learning engagement. **Weaknesses**: - Requires technical know-how for integrating advanced AI tools. - Less focus on administrative AI functionalities compared to others. #### Worldwide Adoption and Community Maturity of Open edX Open edX is used by prestigious institutions like MIT and Harvard, showcasing its reliability and scalability. Its open-source community is innovative, contributing significantly to its development. ### Sakai **Website**: [Sakai](https://www.sakailms.org) Sakai is currently lagging behind in AI integration compared to other platforms. While there is potential, detailed AI features and tools are yet to be prominently developed and showcased. #### Strengths and Weaknesses of Sakai **Strengths**: - Potential for growth in AI integration. - Active community support. **Weaknesses**: - Currently lacks significant AI features. - Needs more development to catch up with other platforms. #### Worldwide Adoption and Community Maturity of Sakai Sakai, while not as widely adopted as Moodle or Canvas, has a dedicated user base and community. It is particularly popular in certain academic institutions and continues to grow. ## Conclusion The integration of AI into LMS platforms is transforming the way we approach education, making it more efficient, personalized, and accessible. 
While Moodle, Canvas, and Open edX are leading the charge with innovative AI features, Sakai has the potential to grow in this space. By choosing the right LMS and leveraging AI capabilities, educational institutions can significantly enhance their teaching and learning processes. For more detailed insights into the AI capabilities of each LMS, visit their respective websites: - [Moodle](https://moodle.com) - [Canvas](https://www.instructure.com/canvas) - [Open edX](https://openedx.org) - [Sakai](https://www.sakailms.org) Consider Krestomatio's managed service for a hassle-free Moodle LMS experience. Visit [our subscription pricing page](https://krestomatio.com/pricing/) for more information. By staying informed about the latest advancements in AI and LMS platforms, educators and institutions can make the most of these powerful tools to enhance learning outcomes and operational efficiency.
jobcespedes
1,891,796
Exposing myself live and DMCA takedowns
Some months ago I took the next step in sharing my experiences with a wider audience. I began...
0
2024-06-17T23:31:53
https://dev.to/davidsoleinh/exposing-myself-live-and-dmca-takedowns-3e8
Some months ago I took the next step in sharing my experiences with a wider audience: I began live-streaming my activities on platforms like Twitch and Kick. The motivation behind this decision stemmed from a presentation by Charlie Coppinger ([Twitch](https://www.twitch.tv/thecoppinger)), who emphasized the concept of "Building in Public on Twitch" inside the Small Bets community. Intrigued by his approach and after exploring his channel, I felt inspired to give it a try. Setting up my Twitch stream proved to be a more intricate process than anticipated, involving considerations such as microphone setup, choosing the right streaming application, and configuring it properly. I plan to dig deeper into these technical aspects in upcoming posts, sharing insights and tips for those interested. For my inaugural stream, I opted to play the video game Palworld, reminiscent of Pokemon but with a unique twist involving guns. While there were initial nerves, once immersed in the flow, the awareness of being live on camera faded away. A highlight from my first day of streaming was receiving a sudden "[raid](https://help.twitch.tv/s/article/how-to-use-raids?language=en_US)" on my channel, with around 20 people joining and engaging in lively chat to promote my content. On the second day, I decided to share a different aspect of my life by streaming myself as a student. I like to learn, and this was a perfect occasion to use streaming as a tool. I undertook the "ChatGPT Prompt Engineering for Developers" course on the deeplearning.ai website. Given the course's content and my admiration for Andrew Ng's teaching from past experiences, I wanted to document my learning journey. The website's Terms of Use discouraged streaming without prior written consent, so I took a calculated risk: I blurred the screen and added ambient music to stay compliant while sharing my educational experience.
Although the audience for this stream was limited, I found joy in combining my educational pursuits with the unique experience of live streaming. Even though there is always a chance of a DMCA takedown when you use others' content, I was lucky enough to avoid one. ## Reflecting on a Past DMCA Takedown In July 2016, six months after I completed my master's degree in Computer Vision and ventured into my first indie hacker journey, the popular game app Pokemon Go made its debut. There was a lot of hype around the release, and the first web apps that gathered information about Pokemon locations started to appear. Those were my first steps in reverse engineering web app APIs, though I wasn't able to reverse engineer the Pokemon GO game itself. One of those web apps became the basis for my Android app, "Live PokeMap for Pokemon Go." To head off a potential DMCA takedown, I stripped the artwork from inside the Pokemon images in the app. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f2nln66b36xp3i3vo2ey.jpg) I also incorporated Google AdMob ads to generate revenue, and the app gained traction quickly. The initial excitement culminated in revenue of $43.03 on the first full day. I remember I couldn't sleep that night from excitement; my brain kept churning through improvements and new features. I even fantasized about living off the app. However, my elation was short-lived: on the fifth day, Google removed all Android apps revealing Pokemon locations from the Play Store. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdxfvyknkxsolzffeclw.png) Despite reaching over 20K downloads in the first days, the app's removal from the Play Store meant AdMob stopped displaying ads, resulting in no income. Nowadays, the app is still listed on app crawlers like [APKCombo](https://apkcombo.com/es/live-pokemap/com.presentforyou.pokemongomap/), even though it no longer works.
This experience heightened my determination to continue building innovative projects while considering alternatives to relying solely on major platforms like Google or Apple.
davidsoleinh
1,891,794
Demo: Automating GitHub Repo Configuration and Security with Minder
If you're like many project owners or maintainers, your software project might span tens or...
0
2024-06-17T23:26:02
https://dev.to/ninfriendos1/demo-automating-github-repo-configuration-and-security-with-minder-4imp
demo, security, opensource, github
{% embed https://youtu.be/HJDSBBFgzLE %} If you're like many project owners or maintainers, your software project might span tens or hundreds of GitHub repos, and your repo configuration may be wildly variable. How do you make sure that your repos always have a standard configuration in place, like a code of conduct, a security.md file, a license file, secret scanning, and Dependabot? It's a lot to remember and to continuously monitor. Fortunately, you don't have to—there are free tools like Minder available to help. In this demo, Stacklok engineer Eleftheria Stein-Kousathana demos how to use Minder, an open source software supply chain security platform, to help you keep your GitHub repos consistently configured and secure for your end users. Try it out at https://cloud.stacklok.com Read the docs at https://docs.stacklok.com
ninfriendos1
1,891,758
Day 972 : On air
liner notes: Saturday : Actually got to the station around the time I used to before my Japan trip....
0
2024-06-17T23:19:13
https://dev.to/dwane/day-972-on-air-179h
hiphop, code, coding, lifelongdev
_liner notes_: - Saturday : Actually got to the station around the time I used to before my Japan trip. I set up for the show and did the on air broadcast. Had a good time as usual. Pretty normal day. The recording of this week's show is at https://kNOwBETTERHIPHOP.com ![Radio show episode image of 3 piles of papers with the words June 15th 2024 Overthinking](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/msfy46xjm6y2essml8um.jpg) - Sunday : Did the study sessions at https://untilit.works Didn't get a lot of coding done, mostly testing some technologies out, but I did finally get some packages ready to be shipped. Also, I got some work done on a logo I've been meaning to finish up. Ended the night watching an episode of "The Boys". Also realized that I've been missing episodes of "Demon Slayer". - Professional : Pretty good day. Had a couple of meetings in the morning. Met with my manager to plan some things. Responded to some community questions. Got some work done on a refactoring project. Spent the rest of the day once again filling out and submitting a form to get a visa. - Personal : I've decided to "start from scratch" with a project that I created. The framework and adapter have since been updated and when I tried to upgrade another project, I ran into some issues and just started over. It's not really starting from scratch, I should be able to just copy over the components into the new project. We'll see. I still want to finish up my current project for which the logo was for and possibly add a new project to it. Want to get that done this week. ![A photo of a snow-capped mountain with a blue sky in the background. The mountain is in the distance and appears to be very tall. The sky is clear and there are no clouds. The photo was taken from a low angle, making the mountain appear even more imposing. 
The mountain is Annapurna I, located in the Annapurna mountain range in Nepal.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kpfifozxa1psfpxs5d7a.jpg) Filling out the visa form, again, took a lot longer than I wanted. Getting a late start to my evening, but I think I'm going to add the logo to my current side project, maybe add another page to it. I may see if I can add a web technology that I learned about recently. Going to watch an episode of "The Boys" and maybe "Demon Slayer". Going to eat and get to work. Have a great night! peace piece Dwane / conshus https://dwane.io / https://HIPHOPandCODE.com {% youtube GT5iRgsKBeM %}
dwane
1,891,756
"Xcel Project Idea Evaluation - Your Opinions Are Important!"
"Xcel Project Idea Evaluation - Your Opinions Are...
0
2024-06-17T23:07:53
https://dev.to/robert_mularczyk_a0179a6f/xcel-project-idea-evaluation-your-opinions-are-important-102i
business, ai, llm, strategy
"Xcel Project Idea Evaluation - Your Opinions Are Important!" --- https://github.com/Brooda21/Xcel/discussions/4#discussion-6832706 Hi everyone! I am working on an innovative project called Xcel, which aims to support small and medium-sized enterprises (SMEs) by providing modern management tools and strategies. I would like to ask for your help in evaluating my idea and gathering opinions that will help me move Xcel into the next phase of development. Your responses are extremely valuable to me. Thank you for your time! ### Survey Questions: 1. **Do you run a small or medium-sized enterprise (SME)?** - Yes - No 2. **What are the biggest challenges your business faces? (You can choose more than one)** - Crisis management - Business process optimization - Introducing new products to the market - Human resources management - Financial analysis and forecasting - Other (please specify in the comments) 3. **How do you rate the need for crisis management tools in your company?** - Very necessary - Necessary - Neutral - Not very necessary - Unnecessary 4. **Do you currently use any tools for business process optimization?** - Yes - No 5. **Which features in a crisis management tool would be most useful to you? (You can choose more than one)** - Automatic generation of action recommendations - Access to interim managers and experts - Advanced data analysis and forecasting - Personalization of management strategies - Other (please specify in the comments) 6. **How do you rate your company's readiness to invest in new technologies and management support tools?** - Very high - High - Medium - Low - Very low 7. **Do you think the Xcel project, which offers crisis management and process optimization tools, could bring value to your company?** - Definitely yes - Yes - Maybe - Probably not - Definitely not 8. **Which aspects of the Xcel project are most important to you?
(You can choose more than one)** - Reduction of operational costs - Increase in operational efficiency - Increase in revenues - Improvement in risk management - Other (please specify in the comments) 9. **Would you be interested in participating in the beta testing of the Xcel project and providing feedback?** - Yes - No - Maybe 10. **Additional comments and suggestions:** - [Text field] Thank you for filling out the survey! Your responses will help me better understand the market needs and tailor the Xcel project to meet user expectations. If you have any additional comments or would like to participate in further development stages, please get in touch.
robert_mularczyk_a0179a6f
1,891,755
Importance of PNG Images
What Are PNG Images? PNG, which stands for Portable Network Graphics, is a popular raster...
0
2024-06-17T23:07:30
https://dev.to/msmith99994/importance-of-png-images-14p9
## What Are PNG Images?

PNG, which stands for Portable Network Graphics, is a popular raster graphics file format that supports lossless data compression. Created as an improved, non-patented replacement for the Graphics Interchange Format (GIF), PNG was designed to address the shortcomings of GIF and provide a better alternative for images on the web.

## Characteristics of PNG Images

- **Lossless Compression:** PNG images retain all their data when compressed, meaning there is no loss in quality.
- **Transparency:** One of the standout features of PNG is its ability to handle transparency. PNG supports 8-bit transparency, allowing for varying levels of transparency within the same image.
- **Color Depth:** PNG supports a broad range of colors, from grayscale images to 24-bit RGB or 32-bit RGBA (with an alpha channel for transparency).

## Where Are PNG Images Used?

PNG images are widely used across various platforms and for numerous purposes:

- **Web Design:** Due to their support for transparency and lossless compression, PNGs are ideal for logos, icons, and other web graphics that require sharp edges and clear backgrounds.
- **Digital Art and Photography:** Artists and photographers use PNG for its high quality and ability to display intricate details without compression artifacts.
- **Screenshots:** PNG is the preferred format for screenshots because it captures the screen's content precisely without any loss of quality.
- **Image Editing:** When working with images that require multiple edits, PNG is often used to maintain quality throughout the editing process.

## Advantages and Disadvantages of PNG Images

### Advantages

- **High Quality:** PNG images maintain their original quality regardless of how many times they are edited or saved.
- **Transparency:** The support for transparency makes PNG ideal for images that need to be overlaid on different backgrounds.
- **Wide Color Range:** PNG supports millions of colors, making it suitable for complex images like photographs and digital art.
- **Interlacing:** PNG supports interlacing, allowing images to load progressively, which can enhance user experience on the web.

### Disadvantages

- **File Size:** Due to its lossless compression, PNG files can be larger compared to other formats like JPEG, which can be an issue for web usage where load times and bandwidth are concerns.
- **Limited Animation Support:** Unlike GIF, PNG does not natively support animations, although the related MNG (Multiple-image Network Graphics) format does.
- **Not Ideal for Print:** PNG is optimized for web use and digital displays, not for print. For printing purposes, formats like TIFF or PDF are usually preferred.

## How to Convert PNG to WebP

WebP is a modern image format developed by Google that provides superior lossless and lossy compression for images on the web. Converting [PNG to WebP](https://cloudinary.com/tools/png-to-webp) can significantly reduce file size while maintaining image quality, which is particularly beneficial for web performance.

## Conversion Methods

1. **Using Online Tools:** There are various online converters available, such as TinyPNG and Convertio, where you can upload your PNG file and download the WebP version.
2. **Using Image Editing Software:** Software like Adobe Photoshop and GIMP supports the WebP format. You can open your PNG file and save it as WebP.
3. **Command Line Tools:** For those comfortable with the command line, tools like `cwebp` from the WebP library can be used.
4. **Programming Libraries:** Programming libraries such as Python's Pillow or JavaScript's Sharp can be used to automate the conversion process in applications.

## Final Words

PNG images play a crucial role in the digital world, offering high quality, transparency, and a broad color range. They are extensively used in web design, digital art, and screenshots, among other applications. While PNGs have the advantages of lossless compression and excellent transparency handling, they can be larger in size and are not suited for animation or print. Converting PNG to WebP can help reduce file sizes, enhancing web performance without sacrificing image quality. As technology evolves, understanding these formats and their uses is essential for optimizing digital content.
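One of the conversion methods listed above (programming libraries) can be sketched with Python's Pillow. This is an illustrative sketch, assuming Pillow is installed with WebP support; it builds a tiny PNG in memory rather than reading a real file:

```python
from io import BytesIO

from PIL import Image  # Pillow, one of the libraries mentioned above

# Create a tiny RGBA image in memory to stand in for an existing PNG file;
# in practice you would use Image.open("input.png") instead.
png_buffer = BytesIO()
Image.new("RGBA", (8, 8), (255, 0, 0, 128)).save(png_buffer, format="PNG")
png_buffer.seek(0)

# Convert: open the PNG and re-save it as lossless WebP.
webp_buffer = BytesIO()
with Image.open(png_buffer) as img:
    img.save(webp_buffer, format="WEBP", lossless=True)

# WebP files are RIFF containers, so the output begins with the bytes "RIFF".
print(webp_buffer.getvalue()[:4])
```

Dropping `lossless=True` switches to lossy WebP, which usually shrinks the file further at the cost of some quality.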
msmith99994
1,891,754
naive explanation of cryptography.
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-17T23:06:29
https://dev.to/thedigitalbricklayer/naive-explanation-of-cryptography-gpo
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

Cryptography is the practice of scrambling a message so that it can be read only by someone who knows how it was encrypted or who holds the key to decrypt it. Turing broke the Enigma cipher and helped to win the war.

## Additional Context

https://en.wikipedia.org/wiki/Cryptography

https://pt.wikipedia.org/wiki/Alan_Turing
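As a toy illustration of the idea above (scramble with a key, unscramble with the same key), here is a naive XOR cipher sketch; real cryptography uses far stronger constructions than this:

```python
def xor_cipher(message: bytes, key: bytes) -> bytes:
    """XOR each byte of the message with the repeating key.

    XOR is its own inverse, so the same function both encrypts and decrypts.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(message))


secret = xor_cipher(b"attack at dawn", b"enigma")
# Without the key the bytes look like noise; with the key, the message returns.
assert xor_cipher(secret, b"enigma") == b"attack at dawn"
```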
thedigitalbricklayer
1,891,753
What BDD Is and When You Should Consider It
Hello, Tech Minds! Behavior-Driven Development (BDD) is a software development approach that...
0
2024-06-17T23:06:10
https://dev.to/devxbr/o-que-e-bdd-e-quando-voce-deve-considerar-4160
braziliandevs, bdd, tdd, go
Hello, Tech Minds!

Behavior-Driven Development (BDD) is a software development approach that extends Test-Driven Development (TDD) by focusing on collaboration between developers, testers, and non-technical stakeholders. The goal of BDD is to ensure that everyone involved in building the software shares a clear understanding of the system's desired behavior.

[GitHub repository](https://github.com/devxbr/go-bdd)

Implementation built with Golang and [GoDog](https://github.com/cucumber/godog).

### How BDD Works

1. **Behavior specifications**: In BDD, software requirements are defined as behavior specifications, usually written in the Gherkin language. Gherkin lets you write test scenarios in a story format ("Given, When, Then") that is understandable to technical and non-technical people alike.

2. **Test scenarios**: These scenarios are clear, detailed descriptions of how the system should behave in specific situations. For example:

```gherkin
Feature: Calculator
  As a user
  I want to use a calculator
  So that I can add numbers

  Scenario: Add two numbers
    Given I have a calculator
    When I add 2 and 3
    Then the result should be 5
```

3. **Test automation**: These scenarios are then used as the basis for automated tests. Tools such as Cucumber, SpecFlow, or Godog (for Go) execute the scenarios and verify that the system's behavior matches the specifications.

4. **Iterative development**: BDD promotes an iterative cycle in which test scenarios are written before the code is developed. This ensures that development is guided by the user's behavioral requirements.

### Benefits of BDD

1. **Better communication and collaboration**: BDD enables clear communication among all team members. The behavior specifications are written in natural language, understandable by everyone, which promotes more effective collaboration.
2. **Clear, unambiguous requirements**: Writing test scenarios before coding helps clarify requirements, reducing ambiguity and misunderstandings.
3. **Focus on business value**: BDD keeps the focus on the behavior that delivers value to the end user. This helps ensure the delivered software truly meets the needs of the business.
4. **Living documentation**: BDD scenarios serve as living documentation of the system, always up to date and reflecting the software's current behavior.
5. **Automated tests**: BDD promotes the creation of a robust automated test suite, making it easier to detect errors early and run regression tests.
6. **Easier refactoring**: With an automated test suite based on BDD scenarios, it is easier to refactor code with confidence, knowing that any regression will be detected immediately.
7. **Greater confidence in quality**: The combination of clear specifications, effective collaboration, and robust automated tests results in greater confidence in the quality of the delivered software.

### Why you, as a developer, should consider using BDD

- **Alignment with user expectations**: BDD helps ensure that you are building features that genuinely meet the needs of the end user, reducing rework and late adjustments.
- **Fewer errors**: By writing tests before the code, you can detect and fix problems early in the development process, saving time and effort in the long run.
- **Automatic documentation**: The test scenarios serve as documentation of the system, making the code easier to maintain and understand, both for you and for new team members.
- **Continuous improvement**: BDD fosters a culture of continuous improvement, in which you constantly review and refine both the code and the tests, resulting in a high-quality final product.
- **Confidence when refactoring**: With a robust test suite, you can refactor code safely, knowing that changes will be automatically checked against the specified behavior scenarios.

In short, BDD not only improves software quality but also eases communication, collaboration, and alignment among everyone involved in development, resulting in a more efficient and effective development process.
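The Gherkin scenario above is bound to step functions in code; in the repository this is done with GoDog. As a self-contained illustration of the same idea, here is a hand-rolled, stdlib-only Given/When/Then sketch (the structure is illustrative and does not use GoDog's API):

```go
package main

import "fmt"

// calculator is the system under test for the scenario above.
type calculator struct{ result int }

func (c *calculator) add(a, b int) { c.result = a + b }

// scenario runs the "Add two numbers" steps in Given/When/Then order,
// mirroring how a BDD tool binds Gherkin lines to step functions.
func scenario() error {
	// Given I have a calculator
	c := &calculator{}

	// When I add 2 and 3
	c.add(2, 3)

	// Then the result should be 5
	if c.result != 5 {
		return fmt.Errorf("expected 5, got %d", c.result)
	}
	return nil
}

func main() {
	if err := scenario(); err != nil {
		panic(err)
	}
	fmt.Println("Scenario: Add two numbers - passed")
}
```

A tool like GoDog does essentially this, but it parses the `.feature` file and matches each line to a registered step function, so the scenario text itself drives the test.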
devxbr
1,891,752
[Game of Purpose] Day 30 - Flying Drone with Physics
Today I finally managed to make my drone fly entirely with physics. Engine needs to be turned on and...
27,434
2024-06-17T23:00:30
https://dev.to/humberd/game-of-purpose-day-30-23mk
gamedev
Today I finally managed to make my drone fly entirely with physics. Engine needs to be turned on and then the propellers start rotating. {% embed https://youtu.be/HwzVHhCmTbs %}
humberd
1,891,710
Mastering API Integrations: A Step-by-Step Guide to Secure API Authentication in Java
Introduction 🛤️ Integrating with external APIs is a common practice in software...
0
2024-06-17T22:55:08
https://dev.to/joaomarques/mastering-api-integrations-a-step-by-step-guide-to-secure-api-authentication-in-java-3b36
java, api, webdev, codequality
### Introduction 🛤️

Integrating with external APIs is a common practice in software development, allowing your applications to consume third-party services and expand their functionality. Among these APIs, some stand out as robust solutions for managing subscriptions, products, and a variety of other operations.

In this article, I will explain how to configure a service in Java to authenticate and interact with a REST API, focusing specifically on the authentication process. The goal is to provide a detailed guide that makes it easy to create a secure and efficient integration with the external API, ensuring that your applications can fully leverage the resources offered by the platform without making unnecessary refresh-token requests.

### About ♟️

An external API lets you manage the lifecycle of items related to the data you need, and it charges you based on the number of requests you make. To interact with the API, you authenticate your requests using token-based authentication. This method ensures that only authorized users can access the API resources.

This is the structure we will cover in this article:

![The auth service structure in the project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/akvlnnm3mchywdgmxdc0.png)

### Environment 🧩

To test and develop integrations with an external API, you can use the development environment provided by the service, accessible at a specific URL. This environment lets you perform test operations without affecting production data, providing a safe space for developing and validating your integrations.

**Creating a Sandbox Account**: First, create a sandbox account with the external service. This account gives you access to the development environment so you can test all API functionality without risk.

**Obtaining API Credentials**: After creating your sandbox account, obtain your API credentials (client ID and client secret).
These credentials will be used to authenticate your requests.

**API Endpoints**: Use the development environment's endpoints to make your requests. For example, the endpoint for authentication might be something like `https://rest.test.external-service.com/oauth/token`.

Note that it is possible to define requests that require the presence of the authentication token and those that do not.

### The basis 🥾

First, I'll create the base of the HTTP service, where the base HTTP methods will be present.

```java
public class BaseHttpService {

    private static final String DEFAULT_CHARSET = "UTF-8";

    protected String post(String url, String body, Map<String, String> headers) throws IOException {
        HttpPost httpPost = new HttpPost(url);
        addHeadersToRequest(httpPost, headers);
        StringEntity requestBody = new StringEntity(body, DEFAULT_CHARSET);
        httpPost.setEntity(requestBody);
        return executeRequest(httpPost);
    }

    protected String get(String url, Map<String, String> headers) throws IOException {
        HttpGet httpGet = new HttpGet(url);
        addHeadersToRequest(httpGet, headers);
        return executeRequest(httpGet);
    }

    private String executeRequest(HttpUriRequest request) throws IOException {
        // try-with-resources closes the client even when the request fails
        try (CloseableHttpClient httpClient = HttpClients.createDefault()) {
            HttpResponse response = httpClient.execute(request);
            HttpEntity entity = response.getEntity();
            if (entity == null) {
                return null;
            }
            return EntityUtils.toString(entity);
        }
    }

    protected void addHeadersToRequest(HttpRequestBase httpRequest, Map<String, String> headers) {
        if (headers == null) {
            return;
        }
        for (var header : headers.entrySet()) {
            httpRequest.addHeader(header.getKey(), header.getValue());
        }
    }
}
```

These methods will be used to authenticate with the external API and also as building blocks for the methods that require authentication. Therefore, they form the base of the service.

### The Auth class 🛂

The next class we'll discuss is `ExternalApiAuthenticatedRequestService`.
Since it is quite large, I'll divide it into smaller parts and explain each one individually. The first part, and the most important, is knowing which properties we will use in this class to maintain maximum encapsulation. These properties should be related to authentication, such as:

**secretValues**: An object created to store the client ID and client secret, depending on the environment your application is running in.

**baseUrl**: The URL of your external service environment.

**`EXTERNAL_API_AUTH_PATH`**: The path used to obtain the authentication token.

**isAuthenticated**: A boolean used to simplify authentication control.

**bearerToken**: The authentication token, defined as private to ensure complete control and encapsulation over this property, so it is accessed only through this class.

**externalApiTokenExpirationTimeInMillis**: The expiration time of the token in milliseconds. We check this value when calling the `getBearerToken` method; if it has expired, we need to authenticate again.

```java
private final SecretValues secretValues;
protected final String baseUrl;
private static final String EXTERNAL_API_AUTH_PATH = "/oauth/token";
private boolean isAuthenticated;
private String bearerToken;
private long externalApiTokenExpirationTimeInMillis;
```

### Constructor Method 🏗️

The constructor method should set the values for `secretValues` and `baseUrl`. This depends on each application.

```java
public ExternalApiAuthenticatedRequestService(SecretValues secretValues, String baseUrl) {
    this.secretValues = secretValues;
    this.baseUrl = baseUrl;
    // We authenticate with the external API on start, so it's faster when the first request comes
    authenticateWithExternalApi();
    logger.info("Started ExternalApiAuthenticatedRequestService");
}
```

### The Authentication Method 🔑

Here is the most crucial method, which we use to authenticate with the external service by calling the `post` method defined above.
```java
private void authenticateWithExternalApi() {
    logger.info("Authenticating to {}", getAuthUrl());
    String authRequestBody = buildAuthRequestBody(secretValues);
    Map<String, String> authHeaders = new HashMap<>();
    authHeaders.put("Content-Type", "application/x-www-form-urlencoded");
    try {
        String response = super.post(getAuthUrl(), authRequestBody, authHeaders);
        ExternalApiAuthenticationResponseDTO externalApiAuthenticationResponseDTO =
                Utility.convertStringToObject(response, ExternalApiAuthenticationResponseDTO.class);
        externalApiTokenExpirationTimeInMillis =
                Long.parseLong(externalApiAuthenticationResponseDTO.getExpiresIn()) * 1000 + System.currentTimeMillis();
        bearerToken = externalApiAuthenticationResponseDTO.getAccessToken();
        isAuthenticated = true;
        logger.info("Auth token retrieved from {}", getAuthUrl());
    } catch (IOException e) {
        isAuthenticated = false;
        logger.error("Could not authenticate with external Api, error: {}", e.getMessage());
    }
}
```

Important points about this method:

1. It sends a POST request using the `post` method from the base class.
2. It obtains the token from the external API and stores it in the `bearerToken` variable.
3. It updates the token expiration time. We will use this value later.
4. It calls `buildAuthRequestBody` and `getAuthUrl`.

### The methods that hold hands 🧑‍🤝‍🧑

```java
private String buildAuthRequestBody(SecretValues secretValues) {
    return "grant_type=client_credentials&client_id=" + secretValues.getClientId()
            + "&client_secret=" + secretValues.getClientSecret();
}
```

This is a good way to avoid leaving too much responsibility in the `authenticateWithExternalApi` method. It's also easy to create tests with this separation of concerns.
```java
private String getAuthUrl() {
    return baseUrl + EXTERNAL_API_AUTH_PATH;
}
```

### Get the precious bearer token 💍

```java
private String getBearerToken() throws ExternalApiAuthenticationException {
    // Validate token existence
    if (!isAuthenticated) {
        logger.warn("External Api is not authenticated, authenticating");
        reAuthenticateWithExternalApi();
    }
    // Validate token expiration
    if (externalApiTokenExpirationTimeInMillis < System.currentTimeMillis()) {
        logger.info("External Api token expired, authenticating again");
        reAuthenticateWithExternalApi();
    }
    return bearerToken;
}
```

Notice that here we cannot just call `authenticateWithExternalApi`, because if it fails we want to do something about it. That's why I added the method `reAuthenticateWithExternalApi`; in some applications you might want to try authenticating again one or two times before throwing your exception. You might want to define a variable called `numberOfTrialsForAuthentication` in your `application-{environment}.xml` and call `reAuthenticateWithExternalApi` recursively.

```java
private void reAuthenticateWithExternalApi() throws ExternalApiAuthenticationException {
    authenticateWithExternalApi();
    if (!isAuthenticated) {
        throw new ExternalApiAuthenticationException("Could not authenticate with external Api");
    }
}
```

### The magic methods 🖌️

And finally your service can make authenticated GET and POST requests:

```java
@Override
protected String post(String url, String body, Map<String, String> headers) throws IOException {
    if (headers == null) {
        headers = new HashMap<>();
    }
    headers.put("Authorization", "bearer " + getBearerToken());
    return super.post(url, body, headers);
}

@Override
protected String get(String url, Map<String, String> headers) throws IOException {
    if (headers == null) {
        headers = new HashMap<>();
    }
    headers.put("Authorization", "bearer " + getBearerToken());
    return super.get(url, headers);
}
```

Note that we reuse the `get` and `post` methods from the base HTTP service.
Please let me know in the comments whether the thought process was clear.

### The action 👨‍💻

```java
public class ExternalApiHttpService extends ExternalApiAuthenticatedRequestService {

    private static final Logger logger = LoggerFactory.getLogger(ExternalApiHttpService.class);

    private static final String API_VERSION = "/v1";
    private static final String GET_PRODUCTS = "/products/accounts/%s";

    public ExternalApiHttpService(SecretValues secretValues, String baseUrl) {
        super(secretValues, baseUrl);
    }

    public ProductsResponseDTO retrieveProductsFromAccount(String accountId) throws IOException, ExternalApiAuthenticationException {
        logger.info("Retrieving products from account: {}", accountId);
        String requestPath = buildBaseUrl() + String.format(GET_PRODUCTS, accountId);
        String jsonResponse = get(requestPath, new HashMap<>());
        return Utility.convertStringToObject(jsonResponse, ProductsResponseDTO.class);
    }

    private String buildBaseUrl() {
        return baseUrl + API_VERSION;
    }
}
```

And finally, you can add HTTP API requests to this class. Notice that I only added `/products/accounts/%s`, but you can find a good way to organize your project using this structure. If you feel that this class will grow too much, it may be a good idea to create a separate auth service and call that auth service from your requests. In my case I added it as a parent class because at the moment only a few requests are necessary, so I don't need to hold a reference to the authentication service and it saves me some minutes of coding 😌.

Please share your thoughts with me. There is no right and wrong in this dev community; there are alternatives, and developers should identify which one best fits their desired outcome.
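The retry idea mentioned above (`numberOfTrialsForAuthentication`) could look roughly like this. This is a standalone sketch with a hypothetical `tryAuthenticate` stand-in, not the article's actual service:

```java
public class AuthRetrySketch {
    // How many attempts before giving up; in the article this would come
    // from application-{environment} configuration.
    private static final int NUMBER_OF_TRIALS_FOR_AUTHENTICATION = 3;

    // Stand-in for authenticateWithExternalApi(): fails twice, then succeeds,
    // simulating a flaky auth endpoint.
    private static int calls = 0;

    static boolean tryAuthenticate() {
        calls++;
        return calls >= 3;
    }

    static boolean reAuthenticateWithRetries() {
        for (int attempt = 1; attempt <= NUMBER_OF_TRIALS_FOR_AUTHENTICATION; attempt++) {
            if (tryAuthenticate()) {
                return true;
            }
        }
        // After exhausting all attempts the real service would throw
        // ExternalApiAuthenticationException here.
        return false;
    }

    public static void main(String[] args) {
        System.out.println("authenticated: " + reAuthenticateWithRetries());
    }
}
```

A loop is usually safer than the recursive variant suggested above, since it cannot overflow the stack if the trial count is misconfigured.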
joaomarques
1,891,751
WebSocket vs HTTP
What differentiates one from the other? Which one is better at the expense of...
0
2024-06-17T22:47:38
https://dev.to/elinatsovo/websocket-vs-http-3d5p
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uhaot2u771bgdshydo8q.png)

## What differentiates one from the other?

## Which one is better at the expense of the other?

Developers, especially juniors, often wonder about the differences between WebSocket and HTTP, and which one is more suitable for different scenarios. This article aims to clarify these questions, providing clear guidance to help choose the most appropriate technology according to specific needs.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b5e85oth0lh2gbh1drux.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/etk4vptqev9y8ryzvla3.png)

## Conclusion

The choice between HTTP and WebSocket depends on the needs of your application. If you need real-time communication with low latency, WebSocket is the best option. If you need discrete requests and static web pages, HTTP is more suitable.
elinatsovo
1,891,750
Quantum Computing and its importance
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-17T22:38:39
https://dev.to/elmerurbina/quamtum-computing-and-its-importance-29on
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

<!-- Explain a computer science concept in 256 characters or less. -->

Quantum computing applies the principles of quantum mechanics to computers, called quantum computers. Their most relevant feature is that they can solve certain problems faster than traditional computers, including some problems traditional computers can't solve at all.

<!-- Please share any additional context you think the judges should take into consideration as it relates to your One Byte Explainer. -->

<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->

<!-- Don't forget to add a cover image to your post (if you want). -->

<!-- Thanks for participating! -->
elmerurbina
1,891,732
Latest Redux Toolkit: Using the Builder Callback Notation in createReducer 💻
In recent updates to Redux Toolkit, the way we define reducers has evolved. The traditional object...
0
2024-06-17T22:19:27
https://dev.to/adii/latest-redux-toolkit-using-the-builder-callback-notation-in-createreducer-1p5
redux, typescript
In recent updates to Redux Toolkit, the way we define reducers has evolved. The traditional object notation for `createReducer` has been replaced by a more flexible and powerful builder callback notation. This change is designed to offer better TypeScript support and more control over reducer logic. Let's dive into the difference and see how to upgrade your code.

### The Old Way: Object Notation

Previously, we could define our reducers using an object where keys were action types and values were the corresponding reducer functions. Here's an example:

```jsx
import { createReducer } from '@reduxjs/toolkit';

let id = 0;

const tasksReducer = createReducer([], {
  ADD_TASK: (state, action) => {
    state.push({
      id: ++id,
      task: action.payload.task,
      completed: false,
    });
  },
  REMOVE_TASK: (state, action) => {
    const index = state.findIndex((task) => task.id === action.payload.id);
    if (index !== -1) {
      state.splice(index, 1);
    }
  },
  COMPLETE_TASK: (state, action) => {
    const index = state.findIndex((task) => task.id === action.payload.id);
    if (index !== -1) {
      state[index].completed = true;
    }
  },
});

export default tasksReducer;
```

### The New Way: Builder Callback Notation

With the new builder callback notation, we define reducers using a builder pattern. This approach provides a more structured and scalable way to handle actions, especially in larger applications.

```jsx
import { createReducer } from '@reduxjs/toolkit';

let id = 0;

const tasksReducer = createReducer([], (builder) => {
  builder
    .addCase('ADD_TASK', (state, action) => {
      state.push({
        id: ++id,
        task: action.payload.task,
        completed: false,
      });
    })
    .addCase('REMOVE_TASK', (state, action) => {
      const index = state.findIndex((task) => task.id === action.payload.id);
      if (index !== -1) {
        state.splice(index, 1);
      }
    })
    .addCase('COMPLETE_TASK', (state, action) => {
      const index = state.findIndex((task) => task.id === action.payload.id);
      if (index !== -1) {
        state[index].completed = true;
      }
    });
});

export default tasksReducer;
```
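To see what the builder callback is doing mechanically, here is a tiny hand-rolled stand-in for `createReducer`. This is illustrative only: Redux Toolkit's real implementation also wraps case handlers with Immer, which this sketch omits, so handlers here must return new state:

```javascript
// Minimal stand-in: collects addCase handlers, returns a plain reducer.
function miniCreateReducer(initialState, builderCallback) {
  const handlers = {};
  const builder = {
    addCase(type, handler) {
      handlers[type] = handler;
      return builder; // enables chaining, like RTK's builder
    },
  };
  builderCallback(builder);
  return function reducer(state = initialState, action) {
    const handler = handlers[action.type];
    // No Immer here, unlike RTK, so handlers return new state immutably.
    return handler ? handler(state, action) : state;
  };
}

const tasksReducer = miniCreateReducer([], (builder) => {
  builder
    .addCase('ADD_TASK', (state, action) =>
      state.concat({ task: action.payload.task, completed: false }))
    .addCase('REMOVE_TASK', (state, action) =>
      state.filter((t) => t.task !== action.payload.task));
});

let state = tasksReducer(undefined, { type: 'ADD_TASK', payload: { task: 'write post' } });
console.log(state.length); // 1
```

The chaining works because `addCase` returns the builder itself, which is also why the real API composes so naturally with TypeScript: each `addCase` call can narrow the action type for its handler.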
adii
1,891,731
OpenAI has a new .NET SDK!
Hello world! I’m Michael and in this video, we’re going to talk about the OpenAI .NET SDK, including using ChatGPT to answer our questions, DallE-3 to generate images, and Whisper to transcribe speech to text. While OpenAI has had a .NET SDK for a while, we’ll be showing off the latest beta release of version 2 of that SDK and it’s a dramatic improvement.
0
2024-06-17T22:07:56
https://dev.to/michaeljolley/openai-has-a-new-net-sdk-4lpg
dotnet, csharp
---
title: OpenAI has a new .NET SDK!
published: true
description: Hello world! I’m Michael and in this video, we’re going to talk about the OpenAI .NET SDK, including using ChatGPT to answer our questions, DallE-3 to generate images, and Whisper to transcribe speech to text. While OpenAI has had a .NET SDK for a while, we’ll be showing off the latest beta release of version 2 of that SDK and it’s a dramatic improvement.
tags: dotnet, csharp
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z6ojspzpxdh9ret6o6fd.png
---

{% youtube BKeaojX45w0 %}
michaeljolley
1,892,551
Basic Python project setup
Originally published on peateasea.de. Setting up an existing Python project from a fresh clone...
0
2024-06-19T09:20:45
https://peateasea.de/basic-python-project-setup/
python, bash
---
title: Basic Python project setup
published: true
date: 2024-06-17 22:00:00 UTC
tags: Python,Bash
canonical_url: https://peateasea.de/basic-python-project-setup/
cover_image: https://peateasea.de/assets/images/start-up-python.png
---

*Originally published on [peateasea.de](https://peateasea.de/basic-python-project-setup/).*

Setting up an existing Python project from a fresh clone shouldn’t be a chore. Automate the process with a setup script.

## Simple setup

How do I set up this project again? Do I use virtualenv or just the stdlib’s venv module? How do I install the dependencies? Are the documented setup instructions still up to date? These are just some of the questions that whiz through my mind when setting up a Python project either from a fresh clone or if I need to start from scratch.

One way I make my life easier is by having a setup script which handles all these things for me. This idea is, of course, not new<sup id="fnref:not-the-first" role="doc-noteref"><a href="#fn:not-the-first" rel="footnote">1</a></sup> and I’ve been using a variation of what I present below for several years. The thing is, I found my solution to be suboptimal: a script called `install-deps` residing in a sub-subdirectory called `devops/bin/`. Although this solution worked it still felt somehow clunky and inefficient.

I remember seeing a post from [@b0rk](https://jvns.ca/) a while ago (that I unfortunately can’t find anymore) which mentioned using a simple `setup.sh` script located in the project’s base directory. This seemed like a much better solution and is the pattern I now like to follow. Here’s what I use:

```shell
#!/bin/bash
# setup.sh - set up virtual environment and install dependencies

# create venv if it doesn't already exist
if [ ! -d venv ]
then
    python3 -m venv venv
fi

# shellcheck source=/dev/null  # don't check venv activate script
source venv/bin/activate

pip install -r requirements.txt

# vim: expandtab shiftwidth=4 softtabstop=4
```

Thus, if I’ve moved a project to a new directory (and hence have to rebuild the `venv`) or if I’ve checked out a fresh clone onto a new machine, running

```shell
$ ./setup.sh
```

will get me up to speed quickly and simply.

## Script dissection

For those brave souls who would like more detail, let’s pick the script apart a bit.

### Shebang

```shell
#!/bin/bash
```

The first line is the [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)) line and ensures that we use [bash](https://www.gnu.org/software/bash/) when running the script. Bash has been my shell of choice for over 25 years, so it’s a hard habit to drop. It works well enough for my needs and is still in active development, so there’s been little pressure for me to change to something newer. I can’t say I haven’t tried something else though! Even so, I keep gravitating back to bash. Oh well.

### Quick docs

```shell
# setup.sh - set up virtual environment and install dependencies
```

Next, there’s a quick comment to say what’s getting set up. This will be more helpful in more complex situations where extra info will come in handy. In the basic, simple situation shown here, it’s probably not necessary, although it could be useful background information for onboarding new project members.

### A familiar environment

```shell
# create venv if it doesn't already exist
if [ ! -d venv ]
then
    python3 -m venv venv
fi
```

This snippet creates and initialises the virtual environment directory if it doesn’t already exist. The square brackets test the condition within them and pass a true/false result to the `if` statement. This then handles which code to run depending upon the result it receives. The test condition checks for the absence of a directory (`! -d`; i.e. “not directory exists”) called `venv`.
If the directory doesn’t exist, then we initialise the virtual environment within a directory called `venv` by using the [`venv` module from the Python standard library](https://docs.python.org/3/library/venv.html).

Once upon a time, I used to use [virtualenv](https://virtualenv.pypa.io/en/latest/) to create the virtual environment but at some point switched to the `venv` module from the standard library. Although virtualenv does have more features, the standard module is sufficient for my purposes. Thus, I avoid having to install virtualenv separately as an operating-system-level prerequisite before initialising a Python project. Now, Python is often the only OS-level prerequisite, which is a nice simplification.

### Environment activation

```shell
# shellcheck source=/dev/null  # don't check venv activate script
source venv/bin/activate
```

To install the Python requirements (the following step), we first need to activate the virtual environment. This is as simple as sourcing the appropriate file.<sup id="fnref:source-is-bash" role="doc-noteref"><a href="#fn:source-is-bash" rel="footnote">2</a></sup>

The comment above the `source` line tells [`shellcheck`](https://www.shellcheck.net/)<sup id="fnref:shellcheck-linter" role="doc-noteref"><a href="#fn:shellcheck-linter" rel="footnote">3</a></sup> not to check the `activate` script. This isn’t my code, hence it doesn’t make any sense to check it for linter issues.

### Dependencies installation

```shell
pip install -r requirements.txt
```

Now we’re ready to do the actual hard work: installing the upstream Python dependencies. This step assumes that a file called `requirements.txt` exists in the base project directory. Using a single requirements file is fine for small projects, or when starting a new project. Yet, as a project gets larger, it is useful to separate the development- and production-related requirements into separate files. In that case, it’s a good idea to create a `requirements/` _directory_ in the base project directory and put the (appropriately named) requirements files in there. In such a situation the dependencies installation step would look like this:

```shell
pip install -r requirements/base.txt
```

to install the base dependencies only required in the production environment, or

```shell
pip install -r requirements/dev.txt
```

to install the development-related dependencies in addition to the base dependencies.

### Vim standardisation

```shell
# vim: expandtab shiftwidth=4 softtabstop=4
```

The final line is the [`vim`](https://www.vim.org/) coda. This is an old habit but a useful one: it ensures that `vim` expands tabs to spaces and sets how far to indent the code. Although I define this in my main `vim` config, I also find it helpful to specify this information explicitly in source files.

## Always ready to run

One small thing: set the executable bit on the script so that you can run it directly, i.e. without specifying an explicit interpreter. In other words, make the script executable like so:

```shell
$ chmod 755 setup.sh
```

## Extend as needed; avoid unnecessary docs

With the basic structure in place, one can now extend it to more complex setup situations. This is one of the great things about using a script for this purpose: we push all the gory details and complexity down to a lower level of abstraction. Thus, we put complexity behind a simple interface which does what it says on the box: set things up. This interface is simple, and–when used across multiple projects–consistent, thus reducing cognitive load. Your brain is now free to focus on more interesting things.

Putting these steps into a script makes the setup repeatable and automated; there’s no need to keep detailed setup instructions in a `README` or similar document. By avoiding detailed setup documentation, one avoids such instructions getting out of date. Also, one reduces the risk of human error through missed steps or misspelled commands. The setup documentation is then simply “run the setup script”. In other words: keep it simple. :smiley:

## Summing up

In short, dump any project setup details into a script and automate away your setup documentation.

1. And I’m definitely not the first to have thought of it! [↩](#fnref:not-the-first)
2. [`source`](https://ss64.com/bash/source.html) is a `bash` built-in command and is equivalent to `. ` (dot-space) in [POSIX](https://en.wikipedia.org/wiki/POSIX)-compatible shells. I find `source <script-name>` easier to understand than `. <script-name>` and less easy to confuse with script execution, i.e. `./<script-name>`. [↩](#fnref:source-is-bash)
3. `shellcheck` is a [linter](https://en.wikipedia.org/wiki/Lint_(software)) for shell scripts. It’s awesome. You should use it! [↩](#fnref:shellcheck-linter)
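Following the split-requirements layout described above, `setup.sh` could pick the right file automatically. A sketch, assuming the `requirements/dev.txt` vs `requirements.txt` layout discussed in the article:

```shell
#!/bin/bash
# choose_requirements: print the requirements file this checkout uses,
# preferring a requirements/ directory over a single requirements.txt.
choose_requirements() {
    local base_dir="$1"
    if [ -f "${base_dir}/requirements/dev.txt" ]; then
        echo "${base_dir}/requirements/dev.txt"
    elif [ -f "${base_dir}/requirements.txt" ]; then
        echo "${base_dir}/requirements.txt"
    else
        return 1
    fi
}

# Example use inside setup.sh:
#   pip install -r "$(choose_requirements .)"
```

Failing loudly (`return 1`) when neither file exists keeps the setup script from silently installing nothing.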
peateasea
1,891,385
My first experience with the LAMP stack
I carried out this work as part of a university internship. It main task was to set up a web server...
0
2024-06-17T21:59:47
https://dev.to/marko_k/my-first-experience-with-the-lamp-stack-b0h
linux, apache, mysql, php
I carried out this work as part of a university internship. Its main task was to set up a web server with my own resume, a login window and a connected MySQL database. I want to share my first experience with the LAMP stack and perhaps help someone with this topic.

**Brief theory**

A software stack is a set of layered tools, libraries, programming languages and technologies used to create, manage and run an application. The stack consists of software components that support the application in various ways, such as visualization, database, networking and security.

_LAMP stack architecture:_

- Linux is an open source operating system. It resides at the first level of the LAMP stack and supports the other components at higher levels.
- Apache is an open source web server that forms the second layer of the LAMP stack. The Apache module stores website files and communicates with the browser using HTTP, an Internet protocol for transmitting website information in plain text.
- MySQL is an open source relational database management system and the third layer of the LAMP stack. The LAMP model uses MySQL to store, query and manage information in relational databases.
- PHP, last in the stack, is a scripting language that allows websites to run dynamic processes. A dynamic process involves information in software that is constantly changing. PHP allows the web server, database and operating system to process requests from browsers in a consistent manner.
**Preparation**

_Installing Apache on Ubuntu_ is done with the commands:

`sudo apt-get update` – downloads package information from all configured sources

`sudo apt-get install apache2` – installs Apache itself

_Installing MySQL_

`sudo apt-get install mysql-server` – installs MySQL

_Installing PHP_

`sudo apt install php` – installs PHP

_To check the results of installing the LAMP stack, let's create a test file:_

`nano /var/www/html/info.php`

nano is a text editor (you can use any one of your choice). Add:

`<?php phpinfo(); ?>`

![1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ap74sx8phmud0cortj9a.png)

After this, restart Apache with the command `service apache2 restart` and enter http://my-ip-address/info.php into a web browser. This should appear:

![2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/istiuxwpj4hte3q6we17.png)

**Creating a Web Server**

By default, Apache comes with a built-in base site. You can change its contents in `/var/www/html` or edit the settings in the default Virtual Host file, which is located at `/etc/apache2/sites-enabled/000-default.conf`. You can change the algorithm for processing incoming requests or host several web resources on one web server using virtual host support.

Let's create an example.html file:

`sudo mkdir /var/www/test/` – create a folder for your web server

`nano /var/www/test/example.html` – create the file example.html

Let's write simple code in it:

```
<html>
  <head>
    <title> LAMP on Ubuntu! </title>
  </head>
  <body>
    <p> I'm running this website on an Ubuntu Server! </p>
  </body>
</html>
```

![3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e6t7uq0k5p9d99n4qsa3.png)

_VirtualHost configuration_

VirtualHost is a directive in the Apache web server configuration file, designed to map IP addresses, domains and directories available on the server, as well as to manage the sites available on the server.
The <VirtualHost> tag specifies the IP addresses and ports that are used on the server. We will use the default configuration:

`cd /etc/apache2/sites-available/`

`sudo cp 000-default.conf test.conf`

`sudo nano test.conf`

Edit for yourself:

![4](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lmmodx5rd768ks5wb14s.png)

8090 is the port on which my web server will listen. As part of the practical assignment, we had to create our resume on the web server.

```
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>CV</title>
    <style>
        body { font-family: Arial, sans-serif; }
        .resume { max-width: 600px; margin: 0 auto; padding: 20px; border: 1px solid #ccc; border-radius: 5px; }
        h1, h2, h3 { color: #333; }
        p { margin-bottom: 10px; }
        ul { list-style-type: none; padding-left: 0; }
        ul li::before { content: '\2022'; color: #007bff; font-weight: bold; display: inline-block; width: 1em; margin-left: -1em; }
    </style>
</head>
<body>
    <div class="resume">
        <h1>Name and surname</h1>
        <p>City, Country</p>
        <p>Email: .....@gmail.com</p>
        <p>Mobile phone: </p>
        <h2>Education</h2>
        <ul>
            <h3>School</h3>
            <p>Primary, middle and high school</p>
            <p>September - June</p>
            <h3>University</h3>
            <p>Speciality: </p>
            <p>September - June</p>
        </ul>
        <h2>Skills</h2>
        <ul>
            <li>Programming in HTML, C++, Python, PHP</li>
            <li>Use of the MySQL database</li>
            <li>Knowledge of English at Intermediate level</li>
            <li>Knowledge of Windows and Linux Ubuntu at the administration level</li>
        </ul>
        <h2>Personal qualities</h2>
        <ul>
            <li>Stress resistance</li>
            <li>Teamwork</li>
            <li>Punctuality</li>
        </ul>
    </div>
</body>
</html>
```

_Web server activation_

`sudo a2ensite test.conf`

**Database creation**

```
CREATE DATABASE my_database;
USE my_database;
CREATE TABLE users (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50),
    email VARCHAR(100)
);
```

Adding a user using the command `INSERT INTO users (name, email) VALUES ('Elena', 'ElenaCon@gmail.com');`

Exit the MySQL shell using the command `exit`. We grant all privileges to our user with the commands:

`GRANT ALL PRIVILEGES ON my_database.* TO 'your_user'@'localhost';`

`FLUSH PRIVILEGES;`

**Creating a login window**

We create two files: one is the authorization site itself (site.php), and the second connects the authorization site to the database (connect.php):

`nano /var/www/html/site.php`

`nano /var/www/html/connect.php`

site.php code:

```
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Authorization</title>
</head>
<body>
    <h2>Authorization form</h2>
    <form action="connect.php" method="post">
        <div>
            <label for="name">User name:</label>
            <input type="text" id="name" name="name" required>
        </div>
        <div>
            <label for="email">Email:</label>
            <input type="email" id="email" name="email" required>
        </div>
        <button type="submit">Login</button>
    </form>
</body>
</html>
```

Code for connect.php (using a prepared statement, so that user input is never interpolated directly into the SQL query):

```
<?php
$host = "localhost";
$username = ".....";
$password = ".....";
$database = "my_....";

$conn = new mysqli($host, $username, $password, $database);

if ($conn->connect_error) {
    die("Error " . $conn->connect_error);
}

if ($_SERVER["REQUEST_METHOD"] == "POST") {
    $name = $_POST["name"];
    $email = $_POST["email"];

    // Placeholders (?) avoid SQL injection
    $stmt = $conn->prepare("SELECT * FROM users WHERE name=? AND email=?");
    $stmt->bind_param("ss", $name, $email);
    $stmt->execute();
    $result = $stmt->get_result();

    if ($result->num_rows > 0) {
        header("Location: http://you_ip_address:(port)");
        exit;
    } else {
        echo "Error";
    }
    $stmt->close();
}

$conn->close();
?>
```

Check whether the PHP MySQL extension is present, and install it if needed:

`php -m | grep mysqli`

`apt-get install php-mysql`

With this extension installed, PHP can talk to MySQL, so Apache can serve pages that accept HTTP requests and query the database.
We open the web server in a browser: http://you_ip_address:(any port)

**Result:**

Authorization page:

![5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9r92s5kdnicp5dz4y3fg.png)

CV page:

![6](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c28jz0aa9k0ffoi0jhlq.png)

I didn’t have any major difficulties creating my first web server. There were only moments that really slowed me down because they didn’t work out the first or second time, such as getting the database to operate correctly or writing the code for the site.php and connect.php files correctly. Overall, I really enjoyed studying this topic, and now, as a result, I have my own web server, which can be used for new experiments, as in my first article.
marko_k
1,891,730
🚀 Exciting News! Chainbrary’s New QR Code Feature for Crypto Payments 🚀
We’re excited to share a new feature we’ve been working hard on: Personalized QR Codes for Crypto...
0
2024-06-17T21:56:12
https://dev.to/chainbrary/exciting-news-chainbrarys-new-qr-code-feature-for-crypto-payments-478g
blockchain, web3, smartcontract
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eqnow9lrixj5kbdjmn0t.png)

We’re excited to share a new feature we’ve been working hard on: **Personalized QR Codes for Crypto Payments**! 🎉

**Benefits:**

- **Easy Customization**: Enter your business name and crypto address, then print your QR code.
- **Simple Transactions**: Customers scan, select network and token, confirm, and pay.
- **Expand Your Reach**: Attract more customers by accepting cryptocurrencies.

Your support means the world to us! Please like, comment, and share to show the strength of our community. We’re committed to continuous improvement, so let us know what you think and what can be enhanced!

Check out our [Medium article](https://medium.com/@Chainbrary/embrace-seamless-crypto-transactions-with-chainbrarys-new-qr-code-feature-123456) for more details. Visit our [Chainbrary Platform](https://chainbrary.io) today.
chainbrary
1,877,483
🥳 Mobitag.nc... 25 years later, SMS as SaaS via API{GEE}
❔ About 25 years ago, in 1999, via Mobilis, OPT-NC launched the Mobitag brand:...
21,192
2024-06-17T21:44:54
https://dev.to/optnc/mobitagnc-25-ans-plus-tard-des-sms-en-saas-via-apigee-2h9e
apigateway, api, showdev, saas
## ❔ About

[25 years ago, in 1999](https://scontent.fnou1-1.fna.fbcdn.net/v/t31.18172-8/14206112_1088604524599086_3925776389787353140_o.jpg?_nc_cat=105&ccb=1-7&_nc_sid=5f2048&_nc_ohc=T4G1-ra0VQcQ7kNvgER-_pA&_nc_ht=scontent.fnou1-1.fna&oh=00_AYBy3G0LJCnnE1ow6xeGXzSVkODfjT_4zERdeOxxEQM1Pw&oe=66876EC5), via [Mobilis](https://www.facebook.com/Mobilis.NC), [OPT-NC](https://www.opt.nc/) launched [the brand](https://office.opt.nc/fr/nous-connaitre/domaines-d-activites/marques) `Mobitag`:

> _"Mobitag.nc is a free service for sending SMS from a computer, reserved for individuals and aimed at mobile phones in New Caledonia."_

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gj2ubnj69h0uycrx3g1h.png)

[![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ih108vk60oagen2l6xwd.png)](https://scontent.fnou1-1.fna.fbcdn.net/v/t31.18172-8/14206112_1088604524599086_3925776389787353140_o.jpg?_nc_cat=105&ccb=1-7&_nc_sid=5f2048&_nc_ohc=T4G1-ra0VQcQ7kNvgER-_pA&_nc_ht=scontent.fnou1-1.fna&oh=00_AYD-wXRPgsIbKVP-W7gxh8WBUKFfUbpVRb1zUKQYnH7dCA&oe=66873685)

**25 years later, OPT-NC has adopted API portal technology**, first on RapidAPI, then APIGEE, with the ambition of accelerating the digital transition by offering its digital services... **directly as SaaS.**

👉 The goal of this approach is to **enable easy, self-service integrations** with:

- End customers (B2C),
- Partners (B2B) wishing to build new services on top of OPT-NC's
- SaaS platforms

... or quite simply for itself, in order to **speed up its projects by reducing the _Time to Market_.**

## 🎯 Objective

The goal of this post is to **concretely illustrate a use case** around this free service so appreciated by New Caledonians.
## 🪝 What we will discover

In the demo that follows, we will see how:

- 🦥 With **very little code**,
- ☁️ Directly **from the cloud** ([Kaggle](https://www.kaggle.com/code/optnouvellecaldonie/mobitag-nc-for-dummies))

one can send an SMS via [`mobitag.nc`](www.mobitag.nc).

## 🍿 Demo

{% youtube KnRrtYKEtUc %}

## 🤔 Re-contextualisation

The previous episode in this series:

{% embed https://dev.to/optnc/api-marketplaces-innovation-explained-and-showcased-li1 %}

presented **the many opportunities and benefits** of this technology, i.e. delivering services directly on the web in an interoperable way... with the goal of **facilitating integrations... as SaaS**:

[![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/feq05ybg00g3lu1p79ow.png)](https://youtu.be/cfwEP6Oqjew?si=7AUpC97hU1ZL71hL&t=2893)

## 🔖 Resources

- [📲 Mobitag.nc for dummies](https://www.kaggle.com/code/optnouvellecaldonie/mobitag-nc-for-dummies)
- http://www.mobitag.nc
- [Marques - OPT-NC](https://office.opt.nc/fr/nous-connaitre/domaines-d-activites/marques)
- [🛣️ API Marketplaces & innovation, explained and showcased](https://dev.to/optnc/api-marketplaces-innovation-explained-and-showcased-li1)
adriens
1,891,729
Empowering Software Development with Docker: A Course Retrospective
Introduction Docker has become a pivotal tool in modern software development. Its capacity to...
0
2024-06-17T21:42:03
https://dev.to/agagag/empowering-software-development-with-docker-a-course-retrospective-34nj
docker, devops, containers
**Introduction**

Docker has become a pivotal tool in modern software development. Its capacity to streamline the creation, deployment, and operation of applications through containers is revolutionary. This article reflects on the key takeaways from the "Docker for Developers" course, underscoring how Docker not only enhances software development practices but also accelerates career advancement.

## Why Docker?

1. **DevOps Enabler**: Docker bridges the gap between development and operations, facilitating frequent updates and stable deployments via containers. It integrates seamlessly with orchestration tools to automate deployment processes.
2. **Solving Dependency Conflicts**: Docker containers house their own dependencies, averting runtime conflicts. This isolation allows for straightforward upgrades and supports diverse application hosting on the same infrastructure.
3. **Easy Scaling**: Docker ensures consistency across multiple application instances by maintaining uniform dependencies, thereby simplifying the scaling process. Container orchestration tools enable rapid deployment across numerous servers.
4. **Seamless Upgrades**: With Docker, updating servers and dependencies becomes hassle-free. Orchestrators incrementally replace containers with new versions, managing traffic rerouting to minimize service interruptions.

## Core Concepts Simplified

- **Containers**: Isolated environments that contain everything needed to run applications.
- **Images**: Templates used to create containers; akin to blueprints.
- **Registries**: Repositories where images are stored, ready for deployment.

## Practical Application and Commands

- **Container Management**: Essential Docker commands such as `docker run`, `docker ps`, and `docker rm` streamline container lifecycle management.
- **Running Server Containers**: Containers are ideal for hosting servers, APIs, and databases. Commands like `docker run -d` facilitate background operations.
- **Volume Management**: Using volumes prevents data loss by persisting data beyond the container's lifecycle. ## Advanced Topics - **Image Publishing and Management**: The course covered how to build, tag, and push images to registries like Docker Hub, emphasizing the importance of image size management and efficient use of resources. - **Orchestration and High Availability**: Docker's compatibility with orchestration tools like Kubernetes and Docker Swarm was highlighted, teaching participants how to automate and optimize container management. ## Example ![A model of creating duplicate containers using an orchestrator](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6g8urk4woskh8dz4the7.png) This diagram shows how Docker containers and orchestration tools work together to manage application deployment across multiple servers. Here's a simple explanation: **Registry**: This is where different versions of an application (like app1:0.5 and app1:1.0) are stored. Think of it as a library of app versions. **Servers**: There are four servers, each running a specific version of the application (app1:0.5 on Servers 1, 2, and 3, and app1:1.0 on Server 4). **Reverse Proxy**: This component directs incoming traffic to the appropriate server based on the app version or other criteria. It ensures users get the right version of the app. **Orchestrator**: This tool manages the deployment and scaling of containers. It pulls the needed app version from the registry and deploys it to the servers. It also handles updates and scaling by starting and stopping containers as needed. In simple terms, this setup ensures that different versions of an app can run on different servers without conflicts. The orchestrator automates the deployment and updates, making it easy to manage many instances of the application. 
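The image/container/registry concepts described above can be made concrete with a minimal Dockerfile. This is an illustrative sketch rather than material from the course; the base image and file names are assumptions:

```dockerfile
# An image is the blueprint, built layer by layer from these instructions.
FROM node:20-alpine          # base image pulled from a registry (e.g. Docker Hub)
WORKDIR /app
COPY package*.json ./
RUN npm install              # dependencies are baked into the image, avoiding conflicts
COPY . .
EXPOSE 3000                  # document the port the server listens on
CMD ["node", "server.js"]    # the process each container (running instance) executes
```

Building with `docker build -t app1:1.0 .` produces the image, `docker run -d -p 3000:3000 app1:1.0` starts a container from it, and `docker push` publishes it to a registry, mirroring the registry-to-server flow in the diagram above.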
## Conclusion The "Docker for Developers" course has equipped participants with critical skills and knowledge, enabling them to leverage Docker's full potential in software development. The course not only provided a deep dive into Docker's functionalities but also prepared developers for the demands of modern software environments. As Docker continues to shape the tech landscape, mastery of this tool is undoubtedly a valuable asset for any software professional.
agagag
1,883,174
Rendering Images The Good Way In Your React Application
Introduction Most times on my initial visit to a website, I take my time to observe and...
0
2024-06-17T21:36:45
https://dev.to/stan015/rendering-images-the-good-way-in-your-react-application-3fdl
react, webdev, frontend, ui
## Introduction

Most times on my initial visit to a website, I take my time to observe and enjoy how images are rendered. In fact, if the website happens to include an image in its hero section, that is what first catches my attention before I go ahead to read the text on the page.

Visually appealing images aid good User Experience (UX) while browsing a web or mobile application. They also play a crucial role in beautifying the User Interface (UI). In some cases, images speak louder than the text associated with them. Rendering images poorly in your application could harm your UI, thereby causing bad UX. In this article, I will walk you through _"rendering images the good way in your react application"._

## Prerequisites

This article is for everyone who loves building and beautifying web and mobile applications, and those who use them. However, having a basic understanding of the below tools and concepts in web development is a plus, as we will write some code to better explain some concepts.

- Basic knowledge of Frontend Development (HTML/CSS/JS)
- Basic Knowledge of React

## What does Image Rendering mean?

Image Rendering is the process of displaying images on the UI of an application for users to see and interact with. When you visit an application and an image gets loaded on the screen, that process of the image loading is what is referred to as _Image Rendering_.

## Why should we care about how images are rendered?

There is more to _Image Rendering_ than just loading images on the User Interface (UI). If images are not properly rendered, they tend to make a bad UI and even harm User Experience (UX). This is why _proper image rendering_ is considered a critical skill every Frontend Developer should possess.

## How to render images the good way in a react app?

The basic rendering of images in most Frontend Web Development languages, libraries and frameworks follows a simple convention where the HTML `img` element is used.
The `img` tag primarily consists of two major standard attributes: `src` and `alt`. The `src` attribute takes a string of the URL or the relative path of the image to be rendered, while the `alt` attribute takes a string containing a description of the image. The description provided in the `alt` attribute is what gets rendered if the image is broken or fails to render. The `alt` attribute also aids accessibility (a11y) by describing the image to screen readers. Other `img` tag standard attributes include `srcset`, `sizes`, `loading`, `crossorigin`, `usemap`, etc.

![HTML img tag attributes' meaning breakdown](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6fk4kvbm2s1q9cxjsydk.jpg)

React renders HTML elements using JSX, which translates to HTML under the hood. In order to render images the good way in a react application, below are the tips and tricks that will improve how they are rendered by react:

### 1. Emphasis on the alt attribute

The `alt` attribute is very important and should never be omitted when rendering images with the `img` tag. It does not only display the provided description when the image fails to load, but also plays a crucial role in aiding a11y by providing screen readers content to read out to the user about the image. I once debugged someone's code and discovered that none of the `img` tags in the entire code base had an `alt` attribute; I had to include it and provide a good description for each image. Not including the `alt` attribute is a bad practice and should be avoided at all times.

_**Do:**_ `<img src="imageURL/relativePath" alt="a good image description" />`

_**Don't:**_ ~~`<img src="imageURL/relativePath" />`~~

### 2. Best image format and file size

Generally, the image file format and size matter a lot. They determine the clarity and the speed at which the image is loaded.

> The smaller the image size, the faster it will render.
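Modern formats such as WebP (covered in this section) are not understood by every old browser; the HTML `<picture>` element lets you offer a modern source with a universally supported fallback. A quick sketch, where the file names are placeholders:

```html
<picture>
  <!-- Served when the browser supports WebP -->
  <source srcset="photo.webp" type="image/webp" />
  <!-- Fallback for browsers without WebP support -->
  <img src="photo.jpg" alt="a good image description" width="600" height="400" />
</picture>
```

The browser picks the first `<source>` whose `type` it supports and otherwise falls back to the plain `img`.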
It is highly recommended that the file format for icons and small graphics should always be **_SVG_**, as they are usually smaller in size and scalable.

`<img src="iconUrl.svg" alt="a good icon description" />`

The **_WebP_** format is one of the best formats to save and render images of high quality and size. It is a modern format that provides great compression and high quality when compared to older formats like _**JPEG**_ and **_PNG_**.

`<img src="imageUrl.webp" alt="a good image description" />`

![squoosh.app image format converter](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4rhka0cq1dmznvzjsw7g.png)

The image above shows an image format conversion from **_JPG_** to **_WebP_** using **Squoosh.app**. In this conversion, the 2.79MB **_JPG_** image (see bottom-left corner of the image) was converted to a 705kB **_WebP_** file, saving 75% of the file size while still maintaining the image quality (see bottom-right). You can further adjust the quality and compare the difference by dragging the divider left-right to get a smaller size.

**Squoosh.app** is a great tool for image conversion that does not upload your image elsewhere; you are basically doing all your conversion offline on your own computer, and it has a download button you can click to simply save your file after conversion. **Squoosh.app** also converts to other image formats, which you can explore.

Other image formats (e.g. **_JPEG_** and **_PNG_**) are still valid and can be used if the image size is small, but should be used with caution. Since **_WebP_** saves us a lot of image size while still maintaining high quality, why not just use it all the time in your application? Its support across browsers, as at the time of publication of this article, is _96.9%_, which means it has good support across all major browsers.

### 3. Proper image box sizing

Providing a proper size for the image to be rendered should be practiced.
This backs the certainty of the image fitting the available space for it across screens as desired. The intrinsic sizing being discussed in this case includes `width`, `height` and `aspect-ratio`. Note that the `width` and `height` HTML attributes take unitless pixel values, and `aspect-ratio` is a CSS property rather than an HTML attribute.

`<img src="imageURL/relativePath" alt="a good image description" width="200" height="300" />`

The image sizing, just like that of every other HTML element, can be styled inline, internally or with an external CSS file.

`<img src="imageURL/relativePath" alt="a good image description" />`

```css
img {
  width: 200px;
  height: 300px;
}
```

If the image is wrapped with a container, e.g. a `div` or `span`, and the desire is to make it cover the container, in most cases the container would take the size of the `img` tag. But if the container has a specified width and height, the `img` tag could be given a relative value (e.g. a CSS `width: 100%`), although this should be applied with careful examination, as the `aspect-ratio` of the image may affect the styling and should be adjusted accordingly.

```html
<div style="width: 200px; height: 300px">
  <img src="imageURL/relativePath" alt="a good image description" style="width: 100%; height: 100%" />
</div>
```

In order for `aspect-ratio` to take effect, at least one of the image sizes must be `auto`.

`<img src="imageUrl/relativePath" alt="a good image description" style="width: 200px; height: auto; aspect-ratio: 1 / 1" />`

Sizing the image is one of the basic good practices that aid a good User Interface (UI). When an image is properly sized, you are certain as a Frontend Developer that its shape across screens gives a pleasing view to the user.

### 4. Lazy loading

Diving deeper into _"rendering images the good way"_, one of the critical aspects to be considered is _how_ the image is rendered. One of the key ways to improve this is to lazy-load images: defer fetching an image until it approaches the viewport, so it is only downloaded when actually needed.
To achieve this, you can use the `img` tag `loading` attribute with the value _"lazy"_, or you can use one of the small React image lazy-loading libraries that may include extra loading features.

#### - <u>_The loading attribute_</u>

The `loading` attribute gives us the ability to control how the image is fetched. It instructs the browser either to load the image immediately when the UI loads, regardless of whether the image is in the viewport (`loading="eager"`), or to load it only when it approaches the viewport (`loading="lazy"`). This means the `loading` attribute can take two values: _eager_ and _lazy_. By default, images are rendered eagerly (`loading="eager"`).

`<img src="imageURL/relativePath" alt="a good image description" loading="eager" />`

`<img src="imageURL/relativePath" alt="a good image description" loading="lazy" />`

Lazy-loading images generally improves image rendering. It ensures that off-screen images are only fetched when needed, saving bandwidth and speeding up the initial page load, which is especially noticeable on a slow or poor network.

![img tag loading attribute](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tv3i4bnejt3ngm9czqtv.png)

#### - _<u>React lazy-loading libraries</u>_

Using a third-party library to lazy-load images is also an option. One may prefer a library, which simply involves installing and importing it, and then using it to control the `img` tag in your application as the library's documentation suggests. Although in most cases, using the `loading` attribute works just fine rather than importing a third-party library, if you are rendering a few simple images with few configurations.
Using a third-party library was previously preferred due to its extra features and the poor browser support for the `loading` attribute in the past, but as at the time of publication of this article, the `img` tag `loading` attribute is 95.73% supported across all major browsers, which means it's safe to simply use the `loading` attribute to lazy-load images.

Furthermore, on the third-party library option, there are two React lazy-loading libraries I recommend: **_react-lazyload_** and **_react-lazy-load-image-component_**. Both libraries support lazy-loading images and components.

- _**react-lazyload:**_

**Installation:** `pnpm install --save react-lazyload`

**Basic Usage**

```jsx
import LazyLoad from 'react-lazyload';

function App() {
  return (
    <>
      <h1>Lazy loaded image</h1>
      <LazyLoad height="100%" placeholder={<p>Loading...</p>}>
        <img
          src="imageURL/relativePath"
          alt="a good image description"
          width="200"
          height="300"
        />
      </LazyLoad>
    </>
  );
}

export default App;
```

The **_react-lazyload_** library wraps the `img` tag with its `LazyLoad` component, as shown in the code block above. It can only wrap one child `img` tag at a time. This library performs well in terms of the speed at which it renders images. Its simplicity and features make it unique. Some of the props the `LazyLoad` component can take include `placeholder`, `once`, `height`, `unmountIfInvisible`, `debounce`/`throttle`, `overflow`, etc.

- **_react-lazy-load-image-component:_**

**Installation:** `pnpm i --save react-lazy-load-image-component`

**Basic Usage**

```jsx
import { LazyLoadImage } from 'react-lazy-load-image-component';

function App() {
  return (
    <>
      <h1>Lazy loaded image</h1>
      <LazyLoadImage
        src="imageURL/relativePath"
        placeholder={<p>Loading...</p>}
        alt="a good image description"
        height="400px"
        width="600px"
      />
    </>
  );
}

export default App;
```

The features of this library make it easier to handle the loading state of the image.
Unlike the **_react-lazyload_** library, which wraps the `img` tag with an opening and closing tag (`<LazyLoad><img src="" alt="" /></LazyLoad>`), this library makes it seem as though you are still using the `img` tag: you simply replace `img` with `LazyLoadImage`, and all the attributes of the `img` tag can be passed to the `LazyLoadImage` component as props.

The **react-lazy-load-image-component** library performs well when many images are to be rendered: it ensures that only the images that would fit the viewport are fetched, and the more you scroll, the more it fetches, saving the number of network requests that would have been made if all images were fetched at once. The props that can be passed to this component, aside from all `img` tag attributes, include `placeholderSrc`, `threshold`, `effect`, `useIntersectionObserver`, `debounce`/`throttle`, `onLoad`, `beforeLoad`, `afterLoad`, `placeholder`, etc.

### 5. Responsive Images

Generally, _responsiveness_ is a very important aspect of Frontend Development that ensures that all UI components and elements of an application are displayed at the correct sizes across devices. The concept of displaying the correct size of an element with respect to the device's screen size can be applied to the `img` tag with the `srcset` attribute, where different sizes are passed to the `sizes` attribute corresponding to the image URLs passed to the `srcset` attribute.

```html
<img
  src="imageURL-small"
  srcset="imageURL-small 500w, imageURL-medium 1000w, imageURL-large 1500w"
  sizes="(max-width: 600px) 480px, (max-width: 1200px) 800px, 1200px"
  alt="a good image description"
/>
```

The above code block shows a basic application of responsive images, where different image URLs were provided in the `srcset` attribute and their respective sizes were provided, separated by commas, in the `sizes` attribute.
This ensures that the correct size for the device's screen is loaded, which improves rendering speed with respect to the image file size. A default URL is passed to the `src` attribute.

### 6. Progressive Image Loading

The concept of _progressive image loading_ involves rendering a low-quality image placeholder (LQIP) or a blurred version of the original image while it is loading. A typical example of this type of image rendering is seen on **unsplash.com** and **pexels.com**.

![progressive image loading](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjint2yrcu8vvcr5fdf3.png)

I personally recommend rendering images progressively, as it gives your users/visitors a picture of what is expected to load. This improves both UI and UX because it initially occupies a space for the loading image, which prevents Cumulative Layout Shift (CLS), and also provides the user a hint of what is loading.

<u>**Implementation of Progressive Image Loading with BlurHash in React**</u>

> BlurHash is a compact representation of a placeholder for an image.

**Installation:** `pnpm install --save blurhash react-blurhash`

**Basic Structure of the React Blurhash component**

```jsx
<Blurhash
  hash="LEHV6nWB2yk8pyo0adR*.7kCMdnj"
  width={350}
  height={323}
  resolutionX={32}
  resolutionY={32}
  punch={1}
/>
```

The React Blurhash component takes `hash`, `width`, `height`, `resolutionX`, `resolutionY`, and `punch` as props. The `hash` prop's value is a string of the encoded hash of the original image to be rendered. The `width` and `height` props can be a valid CSS string or number value for the width and height of the Blurhash image. `resolutionX` and `resolutionY` control the horizontal and vertical resolution of the blurred image, while `punch` takes care of the contrast of the blurred image.

To generate a BlurHash string, we will simply use the generator on the BlurHash website (**blurha.sh**) for this article.
On **blurha.sh**, scroll down to find the image upload button, upload the image you need the BlurHash string for, and copy the generated hash.

![blurhash website](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m1gxitry80uwxqooxk0f.png)

**Basic Usage**

```jsx
import { useState, useEffect } from "react";
import { Blurhash } from "react-blurhash";

function App() {
  // Hooks must be called inside the component body
  const [imageLoaded, setImageLoaded] = useState(false);
  const [imageError, setImageError] = useState(false);
  const [imageSrc, setImageSrc] = useState(null);

  useEffect(() => {
    const img = new Image();
    img.onload = () => {
      setImageSrc(img.src);
      setImageLoaded(true);
    };
    img.onerror = () => {
      setImageError(true);
    };
    img.src = "imageUrl/relativePath";
  }, []);

  return (
    <div>
      {!imageLoaded && !imageError && (
        <div>
          <Blurhash
            hash="LEHV6nWB2yk8pyo0adR*.7kCMdnj"
            width="100%"
            height="100%"
            resolutionX={32}
            resolutionY={32}
            punch={1}
          />
        </div>
      )}
      {imageError && <div>Error loading image</div>}
      {imageSrc && (
        <img src={imageSrc} alt="a good image description" loading="lazy" />
      )}
    </div>
  );
}

export default App;
```

In the above use case of the React BlurHash component, I manually handled the loading and error states of the image without using a React library. Note that the `useState` and `useEffect` calls live inside the `App` component, since React hooks cannot be called at module level. The code simply uses `useEffect()` to watch the state of the rendering image: while the image is loading, the BlurHash component is shown; when the image has fully loaded, the original image replaces the BlurHash component; and on error, the error state becomes `true` and an error message is displayed instead (in our case _'Error loading image'_).

You may be asking _why all these lines of code just to display an image?_ The desire to better handle image rendering in React got us here, although we can use a React library to save time and reduce the lines of code.
<br>

**Using the _react-lazy-load-image-component_ library with BlurHash**

The _react-lazy-load-image-component_ library works well with BlurHash: you simply pass the Blurhash component to its `placeholder` prop.

```jsx
import { LazyLoadImage } from 'react-lazy-load-image-component';
import { Blurhash } from 'react-blurhash';

function App() {
  return (
    <>
      <h1>Lazy loaded image</h1>
      <LazyLoadImage
        src="imageUrl/relativePath"
        placeholder={
          <Blurhash
            hash="LEHV6nWB2yk8pyo0adR*.7kCMdnj"
            height={400}
            width={600}
            resolutionX={32}
            resolutionY={32}
            punch={1}
          />
        }
        alt="a good image description"
        height="400px"
        width="600px"
      />
    </>
  );
}

export default App;
```

The above code block greatly reduces the lines of code compared to managing the image state manually. Instead of using the `placeholder` prop, you can also observe the loading state of the `LazyLoadImage` component with the `onLoad` prop and update your own state when the image is loaded (e.g., `const [loading, setLoading] = useState(true)` together with `onLoad={() => setLoading(false)}`), which lets you conditionally display the `Blurhash` component based on the loading state of the image. Note, however, that managing the loading state manually via `onLoad` can cause Cumulative Layout Shift (CLS) if the wrapper `div` or `span` containing the `Blurhash` and `LazyLoadImage` components is not properly styled.

### 7. Other Ways Of Rendering Images Efficiently

There are many other good ways of rendering images, which means we have options as developers. These options include, but are not limited to, _Preloading Key Images_, _Using Image CDNs_, _Using Background Images for Decorative Images_, and _Exploring Other Methods of Progressive Image Rendering_.

## Conclusion

Effectively rendering images in your application is crucial for optimizing performance and greatly improving User Experience (UX).
Also, well-rendered images make your User Interface (UI) more charming and a pleasure to visit regularly.

> "When images are properly rendered on the UI of your application, kindly note that the joy I have in my heart as your dearest user and frequent visitor is beyond imagination."

Always enable code analysis tools like ESLint in your coding environment to alert you when you make mistakes like forgetting to include the almighty important `alt` attribute in your `img` tag.

As a Frontend Developer or a UI Engineer, make it a priority to render images following best practices that will greatly improve _how fast_ they are rendered, by first considering the sizes and formats of your images before using them in your application.

Thank you for reading through to this part of the article! Let's keep following best practices while coding! 🚀

## Resources

**1. <u>React lazy-loading libraries</u>**

- [_react-lazyload_](https://www.npmjs.com/package/react-lazyload)
- [_react-lazy-load-image-component_](https://www.npmjs.com/package/react-lazy-load-image-component)

**2. <u>Example websites that use Progressive Image Loading</u>**

- [_unsplash.com_](https://unsplash.com/)
- [_pexels.com_](https://www.pexels.com/)

**3. <u>BlurHash</u>**

- [_React BlurHash GitHub repo_](https://github.com/woltapp/react-blurhash)
- [_BlurHash website: blurha.sh_](https://blurha.sh/)
- [_BlurHash npm package_](https://www.npmjs.com/package/blurhash)

**4. <u>Image format converter</u>**

- [_Squoosh_](https://squoosh.app/)
stan015
1,891,727
Task Unity- Achieve More Together
Task Unity: Revolutionizing Task Management and Collaboration In today's fast-paced world,...
0
2024-06-17T21:34:46
https://dev.to/mdkaifansari04/task-unity-achieve-more-together-2co2
## Task Unity: Revolutionizing Task Management and Collaboration In today's fast-paced world, effective task management and seamless collaboration are key to achieving success in any organization. Introducing **Task Unity**, a cutting-edge task management tool designed to empower teams with multifunctional admin and user dashboards, fostering transparent communication and maximizing productivity. ### Introduction Managing tasks and collaborating efficiently can be challenging, especially for growing teams. **Task Unity** aims to address these challenges by providing a comprehensive solution that streamlines task assignments, encourages teamwork, and offers clear insights into task progress and performance. With a robust tech stack and user-friendly interface, Task Unity is set to transform the way teams work together. ### Key Objectives **Task Unity** is built with the following key objectives in mind: 1. **Efficiency**: Streamline task assignments for clarity and effectiveness. 2. **Collaboration**: Encourage seamless communication and teamwork. 3. **Transparency**: Provide clear insights into task progress and performance. 4. **Productivity**: Empower individuals and teams to achieve peak productivity. ### Prerequisites Before diving into **Task Unity**, ensure you have the following prerequisites installed on your system: - **Node.js**: [Download Node.js](https://nodejs.org/) - **npm (Node Package Manager)**: Ensure the latest version by running `npm install npm@latest -g` - **MongoDB**: [Download MongoDB](https://www.mongodb.com/try/download/community) ### Core Functionality **Task Unity** offers a range of features designed to enhance task management and collaboration: - **User Authentication**: Secure login using JWT. - **Admin Dashboard**: View, edit, delete, and add users; assign tasks; chat with users. - **User Dashboard**: View assigned tasks, mark tasks as completed, chat with admin. - **Profile Management**: Update user profiles. 
- **Task Management**: Detailed task tracking and progress updates. - **Communication**: Built-in chat feature for seamless communication between admin and users. - **Search Functionality**: Quickly find tasks and users. - **Responsive Design**: Accessible on various devices with dark and light mode options. ### Installation Setting up **Task Unity** is straightforward. Follow these steps: 1. **Clone the repository:** ```sh git clone https://github.com/Mdkaif-123/Task-Unity.git ``` 2. **Navigate to the backend folder and install dependencies:** ```sh cd Task-Unity/backend npm install ``` 3. **Navigate to the frontend folder and install dependencies:** ```sh cd ../frontend npm install ``` 4. **Set up your backend `.env` file:** ```sh MONGO_DB_URL="mongodb://127.0.0.1:27017/taskUnityDB" AUTH_SECRET_KEY="thisistheauthsecretkeyforauthenticationpurpose" ``` 5. **Set up your frontend `.env` file:** ```sh REACT_APP_HOST="http://localhost:8000" ``` 6. **Run the backend server:** ```sh cd ../backend npm start ``` 7. **Run the frontend server:** ```sh cd ../frontend npm start ``` ### Technology Stack **Task Unity** leverages a powerful technology stack to deliver an exceptional user experience: - **Frontend**: React, Tailwind CSS, Material Tailwind, Chart.js, Flowbit Components, React Router, Multi Avatar. - **Backend**: Node.js, Express, MongoDB, JWT. 
For more information on these technologies, refer to their documentation: - [React](https://reactjs.org/docs/getting-started.html) - [Tailwind CSS](https://tailwindcss.com/docs) - [Material Tailwind](https://material-tailwind.com/docs) - [Chart.js](https://www.chartjs.org/docs/latest/) - [React Router](https://reactrouter.com/web/guides/quick-start) - [Node.js](https://nodejs.org/en/docs/) - [Express](https://expressjs.com/) - [MongoDB](https://docs.mongodb.com/) - [JWT](https://jwt.io/introduction/) ### Future Scope **Task Unity** is constantly evolving, with exciting future enhancements planned: - **AI Bot Integration**: Enhance task management with AI-driven insights and automation. - **Email and Notification Features**: Stay updated with task progress and important notifications. - **Super Admin Functionality**: Enable advanced admin controls and website customization. - **Courses Platform**: Offer training and resources for users to improve their skills. ### Conclusion **Task Unity** is more than just a task management tool; it's a platform designed to revolutionize how teams collaborate and achieve their goals. By simplifying task management and fostering transparent communication, Task Unity empowers teams to reach new heights of productivity and efficiency. Join us on this journey to redefine teamwork and productivity. For more details and to get started with **Task Unity**, visit our [GitHub repository](https://github.com/Mdkaif-123/Task-Unity) and [live demo](https://task-unity.onrender.com/).
mdkaifansari04
1,891,726
Hello world, we finally decided to come here
A post by Chainbrary Team
0
2024-06-17T21:34:11
https://dev.to/chainbrary/hello-world-we-finally-decided-to-come-here-n53
blockchain, web3
chainbrary
1,891,722
THE ROTATION THING PART 2
HEYYYYY Sorry for being gone for 6 days We left off with how we are going to move this turret to do...
0
2024-06-17T21:23:46
https://dev.to/kevinpalma21/the-rotation-thing-part-2-3nnm
tutorial, learning, design, productivity
HEYYYYY! Sorry for being gone for 6 days. We left off with how we are going to move this turret to do a 360. Well, if you remember from my last post, the other part of the turret I was building, and I have made a few adjustments to it (image1).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vmzxh89hl96toudled8s.png)(image1)

Here I have made the four corners a little thicker and wider, so when I screw it down on the other base, it will hold up nicely and be stable (image2). Then you will see the little sawteeth on the outside of the circle; this is just for appearance. I don't want my turret to look like a Minecraft block. That is why I started to make it more circular, and also because I will be putting gear-like teeth on the side of it.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xq932lsy95mwk605acog.png)(image2)

Now onto some calculations and mathy stuff. First, here is the motor I will be using, so I can know the torque, the power, and some other stuff (image3). I already solved that stuff, and you can check it down below (image4).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mu362k0cl2ztify7kl2k.png)(image3)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nave7rzetdbxpqj7d597.png)(image4)

Alright, now onto how many gear teeth we need, how wide and long they are, and how far apart they sit. These are all the things we need to know. First, I did some research and found out that for projects like this, 60 teeth is a pretty good starting point, so I went with that. The more teeth, the less stress each tooth will experience, and the smoother the turning will be. Good to keep in mind. So I used the following formulas to work out the length and width of these teeth and the spacing between them (image5).
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q9ebrsdicjtzd604f6d6.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tv25g4i0i44dxop7cih3.png) (image5) These formulas are very important when you are making gears. They are the fundamentals but with a few tweaks. So this was the finished product of my rotation base (image6). Bought some new 3D printing material and now the product is being printed as we speak. I did promise a live photo, so you will most definitely see it on the next one for sure. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o00b15e9pqh69bhhw0ff.png)(image6) Thank you for following this journey. Onto the next part, I will start making little gears and making it fit with the motor I will be using or something else. UNTIL NEXT TIME. SEE YALL my EVO people.
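(Appendix for the math-curious: the standard spur-gear relationships behind formulas like the ones in image5 can be computed directly. The 60-tooth count comes from this post; the 150 mm pitch diameter below is just an assumed example for illustration, not the turret's actual dimension.)

```javascript
// Standard metric spur-gear relationships:
//   module m        = pitchDiameter / teeth
//   circular pitch  = PI * m      (tooth-to-tooth spacing along the pitch circle)
//   addendum        = m           (tooth height above the pitch circle)
//   dedendum        = 1.25 * m    (tooth depth below the pitch circle)
//   tooth thickness ≈ circular pitch / 2
function spurGear(teeth, pitchDiameterMm) {
  const m = pitchDiameterMm / teeth;
  return {
    module: m,
    circularPitch: Math.PI * m,
    addendum: m,
    dedendum: 1.25 * m,
    toothThickness: (Math.PI * m) / 2,
  };
}

// 60 teeth (from the post) on an assumed 150 mm pitch diameter:
const gear = spurGear(60, 150);
// module = 2.5 mm, circular pitch ≈ 7.85 mm, addendum = 2.5 mm,
// dedendum = 3.125 mm, tooth thickness ≈ 3.93 mm
```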
kevinpalma21
1,891,721
Encryption And Decryption - Securing Modern Technology
Explainer Encryption converts plaintext to ciphertext using algorithms and keys to ensure...
0
2024-06-17T21:23:01
https://dev.to/maame_deveer/encryption-and-decryption-securing-modern-technology-5026
devchallenge, cschallenge, computerscience, beginners
## Explainer

Encryption converts plaintext to ciphertext using algorithms and keys to ensure data confidentiality. Decryption is the reverse process: converting ciphertext back to plaintext. Together, these processes secure modern technologies against cyber threats.

## Additional Context

Alan Turing was a prominent British mathematician who is widely revered as the father of theoretical computer science and artificial intelligence. During WWII, he helped break the Nazi Enigma encryption, developing the Bombe machine to decode messages. His breakthrough not only significantly influenced the fields of computer science and cryptography but also provided crucial intelligence that shaped Allied strategies during the war. Turing's legacy extends to modern encryption techniques, which are integral to safeguarding today's digital security.
maame_deveer
1,889,465
Are Sync Engines The Future of Web Applications?
Look at the GIF below — it shows a real-time Todo-MVC demo, syncing across windows and smoothly...
27,923
2024-06-17T21:13:15
https://dev.to/isaachagoel/are-sync-engines-the-future-of-web-applications-1bbi
webdev, svelte, javascript, programming
Look at the GIF below — it shows a real-time [Todo-MVC demo](https://todo-replicache-sveltekit.onrender.com/), syncing across windows and smoothly transitioning in and out of offline mode. While it's just a simple demo app, it showcases important, cutting-edge concepts that every web developer should know. This is a [Replicache](https://replicache.dev/) demo app that I ported from an Express backend and web components frontend to SvelteKit to learn about the technology and concepts behind it. I want to share my learnings with you. The source code is available [on Github](https://github.com/isaacHagoel/todo-replicache-sveltekit).

![sveltekit-replicache-demo](https://github.com/isaacHagoel/todo-replicache-sveltekit/assets/20507787/11b5ae10-049d-4cc7-82bf-45d8287701f0)

## Context and motivation

Web applications face some fundamentally hard problems, problems most web frameworks seem to ignore. These problems are so hard that only very few apps actually solve them well, and those apps stand head and shoulders above other apps in their respective space. Here are some such problems I had to deal with in actual commercial apps I worked on:

1. Getting the app to feel snappy even when it talks to the server, even over a slow or patchy network. This applies not only to the initial load time but also to interactions after the app has loaded. [SPAs](https://developer.mozilla.org/en-US/docs/Glossary/SPA) were an early and ultimately insufficient attempt at solving this.
2. Implementing undo/redo and version history for user-generated content (e.g., site building, e-commerce, online course builders).
3. Getting the app to work correctly when open simultaneously by the same user on multiple tabs/devices.
4. Handling long-lived sessions running an old version of the frontend, which users might not want to refresh to avoid losing work.
5. Making collaboration features/multiplayer functionality work correctly and near real-time, including conflict resolution.
I encountered these problems while working on totally normal web applications, nothing too crazy, and I believe most web apps will hit some or all of them as they gain traction. A pattern I noticed in dev teams that start working on a new product is to ignore these problems completely, even if the team is aware of them. The reasoning is usually along the lines of "we'll deal with it when we start actually having these problems." The team would then go on to pick some well-established frameworks (pick your favorite) thinking these tools surely offer solutions to any common problem that may arise. Months later, when the app hits ten thousand active users, reality sinks in: the team has to introduce partial, patchy solutions that add complexity and make the system even more sluggish and buggy, or rewrite core parts (which no one ever does right after launch). Ouch. I felt this pain. The pain is real. Enter "Sync Engine." ## What the hell is a sync engine? Remember I said that some apps address these issues much better than others? Recent famous examples are [Linear](https://linear.app/isaach) and [Figma](https://www.figma.com/). Both have disrupted incredibly competitive markets by being technologically superior. Other examples are [Superhuman](https://superhuman.com/) and a decade prior, [Trello](https://trello.com/). When you look into what they did, you discover that they all converged on very similar patterns, and they all developed their respective implementations in-house. You can read about how they did it (highly recommended) in these links: [Figma](https://www.figma.com/blog/how-figmas-multiplayer-technology-works/), [Linear](https://www.youtube.com/live/WxK11RsLqp4?feature=share&t=2175), [Superhuman](https://blog.superhuman.com/superhuman-is-built-for-speed/), [Trello (series)](https://www.atlassian.com/engineering/sync-architecture). 
At the core of the system, there is always a sync engine that acts as a persistent buffer between the frontend and the backend. At a high level, this is how it works: - The client always reads from and writes to a local store that is provided by the engine. As far as the app code is concerned, it runs locally in memory. - That store is responsible for updating the state optimistically, persisting the data locally in the browser's storage, and syncing it back and forth with the backend, including dealing with potential complications and edge cases. - The backend implements the other half of the engine, to allow pulling and pushing data, notifying the clients when data has changed, persisting the data in a database, etc. Different implementations of sync engines make different tradeoffs, but the basic idea is always the same. ## Not a new idea but... If you've been following trends in the web-dev world, you'd know that sync engines have been a centrepiece in several of them, namely: [progressive web apps](https://web.dev/articles/what-are-pwas), [offline-first apps](https://offlinefirst.org/), and the lately trending term: [local-first software](https://www.inkandswitch.com/local-first/). You might have even looked into some of the databases that offer a built-in sync engine such as [PouchDb](https://pouchdb.com/) or online services that do the same (e.g., [Firestore](https://firebase.google.com/docs/firestore)). I have too, but my general feeling over the last few years has been that none of it is quite hitting the nail on the head. Progressive web apps were about users "installing" shortcuts to websites on their home screens as if they were native apps, despite not needing installation being maybe "the" benefit of the web. "Offline-first" made it sound like offline mode is more important than online, which for 99% of web apps is simply not the case. 
"Local-first" is admittedly the best name so far, but the official [local-first manifesto](https://www.inkandswitch.com/local-first/) talks about peer-to-peer communication and [CRDTs](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) (a super cool idea but one that is rarely used for anything besides collaborative text editing) in a world of full client-server web applications that are trying to solve practical problems like the ones I described above. Ironically, many tools that are part of the current "local-first" wave adopted the name without adopting all the principles. The one that drew my attention and interest the most is called "Replicache." Specifically, I was intrigued by it exactly because it's NOT a self-replicating database or a black-box SaaS service that you have to build your entire app around. Instead, it offers much more control, flexibility, and separation of concerns than any off-the-shelf solution I have encountered in this space. ## What is Replicache? Replicache is a library. On the frontend, it requires very little wiring and effectively functions as a normal global store (think Zustand or a Svelte store). It has a chunk of state (in our example, each list has its own store). It can be mutated using a set of user-defined functions called "mutators" (think reducers) like "addItem", "deleteItem," or anything you want, and exposes a subscribe function (I am simplifying, full API [here](https://doc.replicache.dev/api/classes/Replicache)). Behind this familiar interface lies a robust and performant client-side sync engine that handles: 1. Initial full download of the relevant data to the client. 2. Pulling and pushing "mutations" to and from the backend. A mutation is an event that specifies which mutator was applied, with which parameters (plus some metadata). - When pushing, these changes are applied optimistically on the client, and rolled back if they fail on the server. 
Any other pending changes would be applied on top (rebase). - The sync mechanism also includes queuing changes if the connection is lost, retry mechanisms, applying changes in the right order, and de-duping. 3. Caching everything in memory (performance) and persisting it to the browser storage (specifically IndexedDB) for backup. 4. Since the same storage is accessible from all the tabs of the same application, the engine deals with all the implications of that—like what to do when there was a schema change but some tabs have refreshed and some haven't and are still using the old schema. 5. Keeping all the tabs in sync instantly using a broadcast channel (since relying on the shared storage is not fast enough). 6. Dealing with cases in which the browser decides to wipe out the local storage. You might have noticed that this right here addresses a big chunk of the problems I listed at the top of this post. Being mutations-based also lends itself to features like undo/redo. In order for all of this to work, it's your backend's job to implement the protocol that Replicache defines. Specifically: 1. You need to implement [push](https://doc.replicache.dev/reference/server-push) and [pull](https://doc.replicache.dev/reference/server-pull) APIs. These endpoints need to be able to activate mutators similarly to the frontend (though they don't have to run the same logic). The backend is authoritative, and conflict resolution is done by your code within the mutator implementation. 2. Your database needs to support snapshot isolation and run operations within transactions. 3. The Replicache client polls the server periodically to check for changes, but if you want close to real-time sync between clients, you need to implement a "poke" mechanism, namely a way to notify the clients that something has changed and they need to pull now. 
This could be done via [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events) or [websockets](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket). It's an interesting API design choice—changes are never pushed to the client; the client always pulls them. I believe it is done this way for simplicity and ease of reasoning about the system. One thing for sure: it's good that they didn't make websockets mandatory because that would have made the protocol incompatible with HTTP (server-sent events stream over a normal HTTP connection), which would have required extra infrastructure and presented additional integration challenges. 4. Depending on the [versioning strategy](https://doc.replicache.dev/strategies/overview), you might need to implement additional operations (e.g., createSpace). If it sounds non-trivial to you, you are right. I don't think I fully wrapped my head around all the details of how it operates with the database. I'll need to do a follow-up project in which I totally refactor the database structure and/or add meaningful features to the example (e.g., version history) in order to get closer to fully grokking it. The thing is, I know how valuable this level of control is when building and maintaining real production apps. In my book, spending a week or two thinking deeply about and setting up the core part of your application is a great investment if it creates a strong foundation to build and expand upon. ## Porting a non-trivial example The best (and arguably only) way to learn anything new is by getting your hands dirty—dirty enough to experience some of the tradeoffs and implications that would affect a real app. As I was going over the [examples on the Replicache website](https://doc.replicache.dev/examples/todo), I noticed there were none for Sveltekit. I am a huge Svelte fan since Svelte 3 was released, but only recently started playing with Sveltekit. 
I thought this would be an awesome opportunity to learn by doing and create a useful reference implementation at the same time. Porting an existing codebase to a different technology is educational because, as you translate the code, you are forced to understand and question it. Throughout the process, I experienced multiple eureka moments as things that seemed odd at first clicked into place. ## Learnings #### Sveltekit 1. Sveltekit [doesn't natively support WebSockets](https://github.com/sveltejs/kit/issues/1491), and even though it does support server-sent events, it does so in a [clumsy way](https://stackoverflow.com/questions/74879852/how-can-i-implement-server-sent-events-sse-in-sveltekit). Express supports both nicely. As a result, I used [svelte-sse](https://github.com/razshare/sveltekit-sse) for server-sent events. One somewhat annoying quirk I ran into is that since svelte-sse returns a Svelte store, which my app wasn't subscribing to (the app doesn't need to read the value, just to trigger a pull as I described above), the whole thing was just optimized away by the compiler. I was initially scratching my head about why messages were not coming through. I ended up having to implement a workaround for that behavior. I don't blame the author of the library; they assumed a meaningful value would be sent to the client, which is not the case with 'poke'. 2. SvelteKit's filesystem-based routing, load functions, layouts, and other features allowed for a better-organized codebase and less boilerplate code compared to the original Express backend. Needless to say, on the frontend, Svelte is miles ahead of web components, resulting in a frontend codebase that is smaller and more readable even though it has more functionality (the original example TodoMVC was missing features such as "mark all as complete" and "delete completed"). 3. Overall, I love Sveltekit and plan to keep using it in the future. 
If you haven't tried it, [the official tutorial](https://learn.svelte.dev/tutorial/introducing-sveltekit) is an awesome introduction. ### Replicache Overall, I am super impressed by Replicache and would recommend trying it out. At the basic level (which is all I got to try at this point), it works very well and delivers on all its promises. With that said, here are some general concerns (not todo app related) I have and thoughts related to them: - **Performance-related:** - **Initial load time** (first time, before any data was ever pulled to the client) might be long when there is a lot of data to download (think tens of MBs). Productivity apps in which the user spends a lot of time after the initial load are less sensitive to this, but it is still something to watch for. Potential mitigation: partial sync (e.g., Linear only sends open issues or ones that were closed over the last week instead of sending all issues). - **Chatty network (?)** - Initially, it seemed to me that there was a lot of chatter going back and forth between the client and the server with all the push, pull, and poke calls flying around. On deeper inspection, I realized my intuition was wrong. There is frequent communication, yes, but since the mutations are very compact and the poke calls are tiny (no payload), it amounts to much less than your normal REST/GraphQL app. Also, a browser full reload (refresh button or opening the page again in a new tab/window after it was closed) loads most of the data from the browser's storage and only needs to pull the diffs from the server, which leads me to the next point. - **Coming back after a long period of time offline**: I haven't tested this one, but it seems like a real concern. What happens if I was working offline for a few days making updates while my team was online and also making changes? When I come back online, I could have a huge amount of diffs to push and pull. Additionally, conflict resolution could become super difficult to get right. 
This is a problem for every collaborative app that has an offline mode and is not unique to Replicache. The Replicache docs [warn about this situation](https://doc.replicache.dev/concepts/offline) and propose implementing "the concept of history" as a potential mitigation. - What about **bundle size**? Replicache is [34kb gzipped](https://bundlephobia.com/package/replicache@14.2.2), and for what you get in return, it's easily worth it. - [This page](https://doc.replicache.dev/concepts/performance) on the Replicache website makes me think that, in the general case, performance should be very good. - **Functionality-related:** - Unlike native mobile or desktop apps, it is possible for users to **lose the local copy of their work** because the browser's storage doesn't provide the same guarantees as the device's file system. Browsers can just decide to delete all the app's data under certain conditions. If the user has been online and has work that didn't have a chance to get pushed to the server, that work would be lost in such a case. Again, this problem is not unique to Replicache and affects all web apps that support offline mode, and based on what I read, it is unlikely to affect most users. It's just something to keep in mind. - I was surprised to see that the **schema in the backend database** in the Todo example I ported doesn't have the "proper" relational definitions I would expect from a SQL database. There is no "items" table with fields for "id", "text", or "completed". The reason I would want that to exist is the same reason I want a relational database in the first place—to be able to easily slice and dice the data in my system (which I always missed down the line when I didn't have). I don't think it is a major concern since Replicache is supposed to be backend-agnostic as long as the protocol is implemented according to spec. I might try to refactor the database as a follow-up exercise to see what that means in terms of complexity and ergonomics. 
- I find **version history and undo/redo** super useful and desirable in apps with user-editable content. Regarding undo/redo, there is an [official package](https://github.com/rocicorp/undo), but it seems to [lack support for the multiplayer use case](https://github.com/rocicorp/replicache/issues/1008) (which is where the problems come from). As for version history, the Replicache documentation mentions "the concept of history" but [suggests talking to them](https://doc.replicache.dev/concepts/offline) if the need arises. That makes me think it might not be straightforward to achieve. Another idea for a follow-up task.
- **Collaborative text editing** - the existing conflict resolution approach won't work well for collaborative text editing, which requires [CRDTs](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) or [OT](https://en.wikipedia.org/wiki/Operational_transformation). I wonder how easy it would be to integrate Replicache with something like [Yjs](https://yjs.dev/). There is an [official example repo](https://github.com/rocicorp/replicache-yjs), but I haven't looked into it yet.
- **Scaling-related:**
  - Since the server is stateful (it holds open HTTP connections for server-sent events), I wonder how well it would scale. I've worked on production systems with >100k users that used WebSockets before, so I know it is not that big of a deal, but it is still something to think about.
- **Other:**
  - In theory, Replicache can be **added to existing apps** without rewriting the frontend (as long as the app already uses a similar store). The backend might be trickier. If your database doesn't support snapshot isolation, you are out of luck, and even if it does, the existing schema and your existing endpoints might need some serious rework. If you're going to use it, do it from day one (if you can).
  - Replicache is **not open source** (yet!
see the point below) and is [free only as long as you're small or non-commercial](https://replicache.dev/#pricing). Given the amount of work (>2 years) that went into developing it and the quality of engineering on display, it seems fair. With that said, it makes adopting Replicache more of a commitment compared to picking up a free, open library. If you are a tier 2 or higher paying customer, you get a [source license](https://doc.replicache.dev/howto/source-access) so that if Replicache shuts down for some reason, your app is safe. Another option is to roll your own sync engine, like the big boys (Linear, Figma) have done, but getting to the quality and performance that Replicache offers would be anything but easy or quick.
- **Crazy plot twist** (last-minute edit): As I was about to publish this post, I discovered that Replicache is going to be open sourced in the near future and that its parent company is planning to launch a new sync engine called "Zero". [Here is the official announcement](https://zerosync.dev/). It reads: "We will be open sourcing [Replicache](https://replicache.dev/) and [Reflect](https://reflect.net/). Once Zero is ready, we will encourage users to move." Ironically, Zero seems to be yet another solution that automagically syncs the backend database with the frontend database, which, at least to me personally, seems less attractive (because I want separation of concerns and control). With that said, these guys are experts in this domain and I am just a dude on the internet, so we'll have to wait and see. In the meantime, I plan on playing with Replicache some more.

## Should a sync engine be used for everything?

No, a sync engine shouldn't be used for everything. The good news is that you can have parts of your app using it while other parts still submit forms and wait for the server's response in the conventional manner. SvelteKit and other full-stack frameworks make this integration easy.
Obvious situations where using a sync engine is a bad idea:

1. Optimistic updates make sense only when client changes are highly likely to succeed (with rollbacks being rare) and when the client possesses enough information to predict outcomes. For instance, in an online test where a student's answer must be sent to the server for grading, optimistic updates (and hence a sync engine) wouldn't be feasible. The same applies to critical actions such as placing orders or trading stocks. A good rule of thumb is that any action dependent on the server and incapable of functioning offline should not rely on a sync engine.
2. Any app dealing with huge datasets that cannot fit on users' machines. For example, creating a local-first version of Google, or of an analytics tool that processes gigabytes of data to generate results, is impractical. However, in scenarios where partial synchronisation suffices, a sync engine can still be beneficial. For instance, Google Maps can download and cache maps on client devices to operate offline, without needing high-resolution maps for every location worldwide all the time.

## A word on developer productivity and DX

My impression is that having a sync engine can make DX (developer experience) much nicer. Frontend engineers just work with a normal store whose updates they can subscribe to, and the UI always stays up to date. There is no need to think about fetching anything, or about calling APIs or server actions, for the parts of the app that are governed by the sync engine. On the backend, I can't say much yet. It seems like it won't be harder than a traditional backend, but I can't say for sure.

### Closing thoughts

It's exciting to imagine the future of web apps as planet-scale, realtime multiplayer collaboration tools that work reliably regardless of network conditions, while making the nasty problems this post opened with a thing of the past.
I highly recommend that fellow web developers familiarize themselves with these new concepts, experiment with them, and maybe even contribute. Thanks for reading. Leave a comment if you have any questions or thoughts. Peace.

**P.S.** [This interview](https://youtu.be/cgTIsTWoNkM?si=Sssrbj09Z936QxEf) with Aaron Boodman, the founder of the company that created Replicache, is great. Watch it and thank me later.
isaachagoel
1,891,146
Analyzing Svenskalag Data using DBT and DuckDB
As a youth football coach and data engineer, I have been dreaming of that Moneyball moment to become...
0
2024-06-17T21:11:59
https://dev.to/calleo/analyzing-svenskalag-data-using-dbt-and-duckdb-4lcf
dataengineering, dbt, duckdb, football
As a youth football coach and data engineer, I have been dreaming of that [Moneyball](https://www.imdb.com/title/tt1210166/) moment coming true. But coaching 10-year-old girls, we (the coaching team) are less concerned about batting averages and player valuations. Our primary goal is to get everyone to enjoy football and develop as a player and a teammate. By getting these foundational parts right, we hope that many of today's team members will keep playing football for a long time.

But this doesn't mean that you can't use data to improve. Most clubs these days use standardized software to track attendance and team members over time. This is extremely helpful just to get players to show up to practice and games, but the data can also be used when planning for the season:

* Which days to practice?
* How many teams to register?
* Which level of difficulty (league) to pick?
* If you are bringing in an external coach, which day of the week would be the best one?

There are many more potential questions you might find the answer to, especially these days when teams keep track of scores, shots, running distance, etc. I have been wanting to try DuckDB for a long time, and this seemed like the perfect excuse. Follow along to see how to scrape a website using Python and transform data using DBT. Best of all, it's all done on your local machine using DuckDB for persistence and querying.

## Getting the Data

Within our club, we use [Svenskalag.se](www.svenskalag.se), which has become a very popular system used to manage sports teams in Sweden. This system offers some basic reporting functionality, but you quickly run out of options if you want to do anything other than just see how many training sessions each player has attended. There is no public API available to extract the data, so the only option left is the dirtiest trick in the book: web scraping! Using [Scrapy](https://scrapy.org/) I fetched the data needed (activities and attendance).
Scrapy handled authentication using a form request in a very simple way:

```
yield scrapy.FormRequest(
    login_url,
    formdata={'UserName': username, 'UserPass': password},
    callback=self.parse
)
```

Scrapy relies on XPath to extract data. I admit it, I rarely get those expressions right the first time, so it was a big help to use Chrome Developer Tools to test them. At the beginning I searched for individual elements to extract the data. However, after a while I noticed that all the data I needed was rendered as JavaScript/JSON within script tags.

**Example:**

```
<script>
    var initData = {
        teams: [
            { id: 7335, name: "Soccer Dads" },
            { id: 9495, name: "Soccer Moms" }
        ]
    }
</script>
```

This made things a whole lot easier. By getting the text content from the script tag, I could use [calmjs.parse](https://github.com/calmjs/calmjs.parse) to convert the JavaScript into a Python data structure. Much easier than finding tags and extracting text using XPath.

## Data Modelling

After fetching the data I ended up with JSON objects that I stored in DuckDB. These needed to be transformed into something that could be analyzed more easily. I decided to use [DBT](https://www.getdbt.com/) for this task, together with the DuckDB connector. DuckDB is especially brilliant when you are working locally on a project like this. I had some issues at the beginning, but that was because DBT is very picky about the naming of the profile file (`profiles.yml` and NOT `profiles.yaml`) 🤦

In the DBT profile I configured DuckDB by attaching the database with the raw data and loading the extensions needed (ICU for time zone parsing). After that it felt like any other DBT project I have worked on. As a frequent Snowflake user, I appreciate the simplicity when it comes to handling unstructured data. Turns out DuckDB can do it just as well. I used `UNNEST` to pick apart the JSON payload, which was almost hassle free (I [learnt about 1-based indexing](https://github.com/duckdb/duckdb/issues/2575)).
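As a side note on the script-tag trick above: for payloads as simple as the `initData` example, even a stdlib-only approach can work as a rough stand-in for calmjs.parse, using a regex plus `json.loads`. All names below are illustrative, not from the actual scraper:

```python
import json
import re

def parse_init_data(script_text: str) -> dict:
    """Pull the object assigned to `initData` out of a script tag's text."""
    # Grab everything from the first '{' after `initData =` to the last '}'.
    match = re.search(r"initData\s*=\s*(\{.*\})", script_text, re.DOTALL)
    if not match:
        raise ValueError("initData not found in script text")
    literal = match.group(1)
    # Quote bare object keys (`teams:` -> `"teams":`) so json.loads accepts them.
    literal = re.sub(r"(\w+)\s*:", r'"\1":', literal)
    # Drop trailing commas, which JavaScript allows but JSON forbids.
    literal = re.sub(r",\s*([}\]])", r"\1", literal)
    return json.loads(literal)

script = """
var initData = {
    teams: [
        { id: 7335, name: "Soccer Dads" },
        { id: 9495, name: "Soccer Moms" }
    ]
}
"""
data = parse_init_data(script)
print(data["teams"][0]["name"])  # -> Soccer Dads
```

For anything beyond toy payloads, a real JavaScript parser like calmjs.parse is the safer choice; the key-quoting regex above will mangle string values that contain colons (URLs, timestamps, etc.).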
A positive surprise was the inclusion of `QUALIFY` in DuckDB.

## Analysis

Over time I have gotten to know my coach colleagues pretty well, and they like Microsoft Excel. I recognize this all too well from my day job: you create fancy data models in a data warehouse, just for them to be exported into Microsoft Excel or Google Sheets. But this time I came prepared and built a wide table containing all the data you could imagine, which can then be exported to a CSV file (using DuckDB) with a simple command. Anyone can then open it in Excel and be happy 🙂

## Summary

It is a breeze to get your analytics project up and running using tools such as DBT and DuckDB. And although fragile, web scraping can be a life saver and a tool worth having around. You can find the source code for this project on [GitHub](https://github.com/calleo/svenskalag-analytics). Don't be a stranger, I'd love to hear your feedback. Now let's get back to enjoying the football played at Euro 2024 🎉
calleo
1,891,682
Learning Full-stack Web Development for Beginners Guides: How to Get Started
Today, I am gonna show you how to get started with Programming for the beginner or who does not have...
0
2024-06-17T21:04:59
https://dev.to/fahim_hasan/learning-full-stack-web-development-for-beginners-guides-how-to-get-started-5ghm
Today, I am gonna show you how to get started with programming if you are a beginner or don't have direction. I was in your shoes when I started learning: I didn't know what, where, or how to get started with programming/coding/web development. I read and watched many tutorials and videos, but nothing helped. I fell into the trap of tutorial hell. When I took a course, I could code along with no issue. But when I tried to code on my own, I was stuck and didn't know where to begin. Nothing helped. Why? What was I lacking? For me, it was not having a clear direction; I just watched random tutorials without a clear plan for how to learn to code. Here are 5 things that helped me get out of tutorial hell.

**1. Have a plan:** Having a good plan is very important. Why? Because learning coding/programming or web development cannot be done within a week or just a weekend. Rather, you should have a consistent plan spanning at least 6 months.

**2. Structured material:** Once you have committed to spending at least 6 months on coding, you need a structured path to go from A to Z, instead of just jumping around random tutorials.

**3. Have a good retention system:** When you have good structured material or a course, you need to learn the material, understand it, memorize the concepts, and practice retention/recall very often. That is the key to learning to code. We often hear that you don't need to memorize because we all google 24/7. That is 100% right for someone who has been coding for 3-4 years and knows how to find solutions, but it is bad advice for beginners or those who are stuck in tutorial hell. **Yes, as long as you're not just going through material for the sake of completing it. You have to ensure you're retaining and understanding what you're learning. Ensure you're reviewing previous material.**

**4. Build crappy/tiny projects:** When you are learning HTML and CSS, build small projects like buttons, horizontal and vertical nav bars, hamburger menus, etc. Why build crappy projects? Because they are:
- Small and easy to build
- Momentum builders
- A way to unstuck the stuck

**5. Build, build, build:** Set aside 2 hours of uninterrupted time each day to sit down and build one shitty program 10 times over, each time adding just ONE feature, **with the language you already started to learn.** And whenever adding a new feature feels too hard, don't take 2 steps back. **Take 200 steps back.** Rebuild the entire damn thing. I may have built a basic CRUD Express.js app 100+ times. Yes, it's very boring. But you know what that means? It means I know how to spin up a basic Express.js CRUD server so well that it's **boring** to me. I sometimes do it without looking at the screen.
fahim_hasan
1,891,617
AWS Community Day 2024: A Landmark Event in Kenya's Tech Landscape
Nearly 50 days later, the excitement and inspiration from AWS Community Day 2024 still resonate with...
0
2024-06-17T20:43:43
https://dev.to/aws-builders/aws-community-day-2024-a-landmark-event-in-kenyas-tech-landscape-5073
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f4nnnbekq2oo76g6lwpf.jpg)

Nearly 50 days later, the excitement and inspiration from AWS Community Day 2024 still resonate with me. This event has left a lasting impact on Kenya’s tech community, offering a blend of insightful learnings and the chance to meet incredible people. It was a remarkable milestone for everyone involved. AWS Community Day was the culmination of over 30 virtual meet-ups and 4 physical meet-ups, bringing together a dynamic community that’s growing every day! I joined the AWS User Group in 2021 and became part of the organizing team in 2022. Since then, the community has expanded exponentially to over 3000 members, underscoring the importance of sharing AWS knowledge and advancing cloud computing. Here’s a recap of this inaugural event that brought our vibrant community together. _TL;DR_ Theme: "Learn and Be Curious" The event's theme was inspired by Amazon’s leadership principles, emphasizing the importance of continuous learning and exploring new possibilities. **Keynote Speakers:** 1. Jeff Barr - Chief Evangelist for AWS. [Watch it here](https://www.youtube.com/watch?v=RKB-TKCKJEE) 2. Dr. Aminah Zawedde - PhD, CISA, Permanent Sec. of Uganda's Ministry of ICT. [Watch it here](https://www.youtube.com/watch?v=VCj7Wce7SjY) 3. Eng. John Kipchumba Tanui, MBS - Principal Secretary - State Department for ICT and Digital Economy. [Watch it here](https://www.youtube.com/watch?v=_KSoUUH6VU4) 4. Robin Njiru - AWS Public Sector Lead for Sub-Saharan Africa.
[Watch it here](https://www.youtube.com/watch?v=rk9QBMM7sWQ)

Venue: KCA University

Event by the Numbers:

![AWS Community Day Kenya 2024 by the numbers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a9a02hshzv9ga97lnp49.png)

**Planning the Event:** We began planning in November 2023. Securing the date and venue was the most critical aspect; we initially aimed for April 13th but later settled on Saturday, April 20th, 2024. We had an amazing team that pulled together to get the venue locked in. The organizing committee comprised AWS community builders, well-wishers, and cloud captains leading various key aspects of the event.

**Sponsors:** Sponsors played a key role in ensuring the event was a success. Their generous support, from financial resources to experts for panels and exhibits during the event, helped make our 1st community day one for the books. This support did not come easily, as potential sponsors needed to see the value they would get from the event. We had an AWS Community Day concept paper that communicated the event's aim and the attendee profile we were targeting. Below are our esteemed sponsors who supported us. We could not be more grateful to them for walking with us.

![AWS Community Day Kenya 2024 Sponsors](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nghtmuns1n9xcb6534z5.png)

**Call for Papers and Agenda:** The most crucial aspect of the event was the Call for Papers and the crafting of the agenda. The organizing committee aimed to spark curiosity and provide a platform for learning and for showcasing Kenya and Africa's amazing talent and knowledge. The call for presentations resulted in 38 submissions, of which 28 were accepted.
_Tracks and Submissions:_
- DevOps: 18 submissions, 13 accepted
- Gen AI/ML: 9 submissions, 6 accepted
- Cybersecurity: 5 submissions, all accepted
- Sustainability and Environment: 6 submissions, 4 accepted

Types of Presentations:
- Paper Presentations: 17
- Technical Demonstrations: 21

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/meahlefo57untaghdgl7.png)

Below are the various Presentations and Panels:

**Gen AI/ML:**
1. Farah Abdirahman: Building Generative AI Applications Using Amazon Bedrock
2. Chibuike Nwachukwu: Unleashing RAG: Serverless Q&A with Kendra, Bedrock & Amplify
3. Nicabed Gathaba: Intro to Quantum Computing
4. Shadab Hussain: Unveiling the Quantum Leap in Financial Modeling using AWS Braket
5. Alvin Kamau Ndirangu: Augmenting Human Intelligence with AI

**DevOps:**
1. Peter Muchemi: Building Multitenant Application with AWS
2. Elsie Marion: Containers Don't Contain: Container Security 101
3. Lewis Sawe: Decentralized Authentication with AWS Cognito
4. Kadima Samuel: DevOps: The Big Picture
5. Kelvin Kaoka: AWS SAM For Event-Driven Architectures: Building Serverless Architectures
6. Moracha Jacob: Building Serverless Application With Amplify
7. Chris Otta: AWS IAM Roles for Secure GitOps
8. Daniel Kimani: SDLC Automation
9. Abby Nduta: Demystifying AWS Billing for Beginners
10. Kevin Kiruri: Serverless Architecture on AWS
11. Wanjohi Christopher: Best Practices for Database Migration to AWS: A Technical Deep Dive
12. Evans Kiprotich: Optimizing Your DR Strategy: Leveraging AWS DRS for Seamless Recovery

**Cybersecurity:**
1. Kurtis Bengo: Beyond the Perimeter: Safeguarding Your AWS Cloud Fortress with Battle-Tested Practices
2. Adonijah Kiplimo: Deepfakes: The Looming Threat & Building Cloud Security Defenses
3. Emmanuel Mingle: AWS Certification Paths: Which Certification is Well Suited for My Future Role?
4. Albertini Francis: Hacked!!...Not again!!... A Guide on Reducing Your Attack Surface on AWS Cloud
5.
Zipporah Wachira: A Defense in Depth Approach to Cloud Security

**Sustainability and Environment:**
1. Diana Chepkirui/Clive Kamaliki: IoT & Sustainability
2. Sumaiya Nalukwago: Developing Transferable/Soft Skills for a Sustainable Tech Career
3. Vanessa Alumasa: AWS Cloud: Innovation for Sustainability
4. Judith Soi: Successful AWS Training

These are the various panels that were held:

_Cybersecurity Panel Discussion_
- Host: Nelly Nyadzua
- Experts: Yinka Daramola, Washington, Licio Lentimo, Purity Njeri Gachuhi

_DevOps Panel Discussion_
- Host: Mark Orina
- Experts: Antony Wanyugi, Yinka Daramola, Lemayian Nakolah, Kevin Karanga Kiruri

_Women in Tech Panel Discussion_
- Host: Linet Kendi
- Experts: Zipporah Wachira, Ms. Cherin Onyango, Dr. Aminah Zawedde, Diana Muthoni

_AI/ML Panel Discussion_
- Host: Dr. Kevin Mugoye
- Experts: Mr. David Opondo, Daniells Adebimp, Lawrence Muema, Chris Otta

**AWS DeepRacer League:** The DeepRacer League exemplified the power of teamwork, resilience, and continuous learning. The team organized virtual tracks on the AWS DeepRacer 3D racing simulator, where each team put their models to the test. The rounds were virtual, scored on the single fastest lap, and the racers who were consistent progressed to the final race. The teams that made the cut were:

1. MMU
2. DEKUT
3. KCA
4. TUK
5. Strath
6. emobilis

The final round was held on the day of the community day and saw the winning team receive various gifts and prizes. The winning team was the MMU team. You can watch their presentation on how they were able to win the league: [How to get into AWS Deep Racer & AWS AI & ML Scholarship program](https://www.youtube.com/watch?v=mfPeoIhdekc&t=6s)

Thanks to all the hard work of the AWS DeepRacer team, mentors, competition administrators, and emobilis for making this race happen. AWS Community Day 2024 was a testament to the power of community and the endless possibilities unlocked by learning and curiosity.
The event not only provided a platform for knowledge sharing and skill development but also reinforced the collaborative spirit that defines the AWS User Group - Kenya. We look forward to the next AWS Community Day, where we will once again come together to learn, share, and innovate. Until then, let's keep the spirit of curiosity alive and continue to explore the vast potential of AWS. We are looking forward to AWS Community Day Kenya 2024.

Join our community here: [MeetUps](https://www.meetup.com/aws-user-group-nairobi/) [Twitter](https://twitter.com/aws_UGkenya)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqwapbcpxs0gvqhzt16o.png)
markorina
1,891,708
Class Design Guidelines
Class design guidelines are helpful for designing sound classes. This section summarizes some of the...
0
2024-06-17T20:43:16
https://dev.to/paulike/class-design-guidelines-58gc
java, programming, learning, beginners
Class design guidelines are helpful for designing sound classes. This section summarizes some of the guidelines.

## Cohesion

A class should describe a single entity, and all the class operations should logically fit together to support a coherent purpose. You can use a class for students, for example, but you should not combine students and staff in the same class, because students and staff are different entities. A single entity with many responsibilities can be broken into several classes to separate the responsibilities. The classes **String**, **StringBuilder**, and **StringBuffer** all deal with strings, for example, but have different responsibilities. The **String** class deals with immutable strings, the **StringBuilder** class is for creating mutable strings, and the **StringBuffer** class is similar to **StringBuilder** except that **StringBuffer** contains synchronized methods for updating strings.

## Consistency

Follow standard Java programming style and naming conventions. Choose informative names for classes, data fields, and methods. A popular style is to place the data declaration before the constructor and place constructors before methods. Make the names consistent. It is not a good practice to choose different names for similar operations. For example, the **length()** method returns the size of a **String**, a **StringBuilder**, and a **StringBuffer**. It would be inconsistent if different names were used for this method in these classes. In general, you should consistently provide a public no-arg constructor for constructing a default instance. If a class does not support a no-arg constructor, document the reason. If no constructors are defined explicitly, a public default no-arg constructor with an empty body is assumed. If you want to prevent users from creating an object for a class, you can declare a private constructor in the class, as is the case for the **Math** class.
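The private-constructor pattern mentioned above takes only a few lines. The class below is a made-up example (not one from the text) that follows the same approach as **Math**:

```java
// Sketch of a Math-style non-instantiable utility class.
// TemperatureUtil is a hypothetical name used for illustration only.
final class TemperatureUtil {
    /** Private constructor: clients cannot create instances, as with Math. */
    private TemperatureUtil() {
    }

    /** All functionality is exposed through static methods. */
    static double celsiusToFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```

Because the constructor is private, `new TemperatureUtil()` will not compile outside the class; everything is accessed statically, e.g. `TemperatureUtil.celsiusToFahrenheit(100.0)`.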
## Encapsulation

A class should use the **private** modifier to hide its data from direct access by clients. This makes the class easy to maintain. Provide a getter method only if you want the data field to be readable, and provide a setter method only if you want the data field to be updateable. For example, the **Rational** class provides a getter method for **numerator** and **denominator**, but no setter method, because a **Rational** object is immutable.

## Clarity

Cohesion, consistency, and encapsulation are good guidelines for achieving design clarity. Additionally, a class should have a clear contract that is easy to explain and easy to understand. Users can incorporate classes in many different combinations, orders, and environments. Therefore, you should design a class that imposes no restrictions on how or when the user can use it, design the properties in a way that lets the user set them in any order and with any combination of values, and design methods that function independently of their order of occurrence. For example, the **Loan** class contains the properties **loanAmount**, **numberOfYears**, and **annualInterestRate**. The values of these properties can be set in any order. Methods should be defined intuitively without causing confusion. For example, the **substring(int beginIndex, int endIndex)** method in the **String** class is somewhat confusing. The method returns a substring from **beginIndex** to **endIndex – 1**, rather than to **endIndex**. It would be more intuitive to return a substring from **beginIndex** to **endIndex**. You should not declare a data field that can be derived from other data fields. For example, the following **Person** class has two data fields: **birthDate** and **age**. Since **age** can be derived from **birthDate**, **age** should not be declared as a data field.

```java
public class Person {
  private java.util.Date birthDate;
  private int age;
  ...
}
```

## Completeness

Classes are designed for use by many different customers. In order to be useful in a wide range of applications, a class should provide a variety of ways for customization through properties and methods. For example, the **String** class contains more than 40 methods that are useful for a variety of applications.

## Instance vs. Static

A variable or method that is dependent on a specific instance of the class must be an instance variable or method. A variable that is shared by all the instances of a class should be declared static. For example, the variable **numberOfObjects** in the **CircleWithPrivateDataFields** class shown [here](https://dev.to/paulike/data-field-encapsulation-4i7b) is shared by all the objects of the **CircleWithPrivateDataFields** class and therefore is declared static. A method that is not dependent on a specific instance should be defined as a static method. For instance, the **getNumberOfObjects()** method in **CircleWithPrivateDataFields** is not tied to any specific instance and therefore is defined as a static method. Always reference static variables and methods from a class name (rather than a reference variable) to improve readability and avoid errors. Do not pass a parameter from a constructor to initialize a static data field. It is better to use a setter method to change the static data field. Thus, the following class in (a) is better replaced by (b).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jz2ya446dg79ey3g80qq.png)

Instance and static are integral parts of object-oriented programming. A data field or method is either instance or static. Do not mistakenly overlook static data fields or methods. It is a common design error to define an instance method that should have been static. For example, the **factorial(int n)** method for computing the factorial of **n** should be defined static, because it is independent of any specific instance.
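The shared-counter idea can be sketched as follows; this is a simplified, hypothetical stand-in for the **CircleWithPrivateDataFields** class referenced above, not its actual code:

```java
// Sketch: an instance field (radius) versus a static field shared
// by all instances (numberOfObjects). Class name is illustrative.
class CountedCircle {
    /** Shared by every instance, so it is declared static. */
    private static int numberOfObjects = 0;

    /** Belongs to one specific circle, so it is an instance field. */
    private double radius;

    CountedCircle(double radius) {
        this.radius = radius;
        numberOfObjects++;
    }

    /** Not tied to any instance: static, referenced via the class name. */
    static int getNumberOfObjects() {
        return numberOfObjects;
    }

    double getArea() {
        return radius * radius * Math.PI;
    }
}
```

Note that the counter is read as `CountedCircle.getNumberOfObjects()`, from the class name rather than a reference variable, as the guideline above recommends.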
A constructor is always instance, because it is used to create a specific instance. A static variable or method can be invoked from an instance method, but an instance variable or method cannot be invoked from a static method.

## Inheritance vs. Aggregation

The difference between inheritance and aggregation is the difference between an is-a and a has-a relationship. For example, an apple is a fruit; thus, you would use inheritance to model the relationship between the classes **Apple** and **Fruit**. A person has a name; thus, you would use aggregation to model the relationship between the classes **Person** and **Name**.

## Interfaces vs. Abstract Classes

Both interfaces and abstract classes can be used to specify common behavior for objects. How do you decide whether to use an interface or a class? In general, a strong is-a relationship that clearly describes a parent–child relationship should be modeled using classes. For example, since an orange is a fruit, their relationship should be modeled using class inheritance. A weak is-a relationship, also known as an is-kind-of relationship, indicates that an object possesses a certain property. A weak is-a relationship can be modeled using interfaces. For example, all strings are comparable, so the **String** class implements the **Comparable** interface. A circle or a rectangle is a geometric object, so **Circle** can be designed as a subclass of **GeometricObject**. Circles are different and comparable based on their radii, so **Circle** can implement the **Comparable** interface. Interfaces are more flexible than abstract classes, because a subclass can extend only one superclass but can implement any number of interfaces. However, interfaces cannot contain concrete methods. The virtues of interfaces and abstract classes can be combined by creating an interface with an abstract class that implements it. Then you can use the interface or the abstract class, whichever is convenient.
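The interface-plus-abstract-class combination described in the last paragraph can be sketched in miniature; the names below are illustrative, not taken from the text:

```java
// A weak is-a (is-kind-of) relationship modeled as an interface.
interface Edible {
    String howToEat();
}

// A strong is-a relationship modeled as an abstract class that
// implements the interface and holds shared concrete state.
abstract class Fruit implements Edible {
    private final String name;

    protected Fruit(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}

// A concrete subclass supplies the behavior the interface demands.
class Orange extends Fruit {
    Orange() {
        super("Orange");
    }

    @Override
    public String howToEat() {
        return "Peel it first";
    }
}
```

Client code can then depend on whichever abstraction is convenient: `Edible` when only the behavior matters, or `Fruit` when the shared state is needed.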
paulike
1,764,923
5 great skills for programmers (and everybody else)
In Software development world, where challenges are as diverse as the code itself, the prowess to...
0
2024-06-17T20:42:38
https://dev.to/tecnomage/5-great-skills-for-programmers-and-everybody-else-j3a
programming, softskills, career
In the software development world, where challenges are as diverse as the code itself, the prowess to solve complex problems stands as a defining characteristic of exceptional developers. In a captivating exploration on Dev.to, lpasqualis dissects "The 5 Problem-Solving Skills of Great Software Developers," shedding light on the essential abilities that propel developers to new heights. As we embark on this journey through the intricacies of coding conundrums, let's unravel the key skills that transform good developers into great problem solvers.

* Problem-Solving: Indeed, dissecting complex problems into manageable pieces is like untangling a knotted thread. Whether you’re debugging code or designing an algorithm, this skill is fundamental. It’s akin to solving a puzzle, and the joy of finding that missing piece is what keeps developers going.
* Efficient Laziness: I love that term! It’s all about working smarter, not harder. By automating repetitive tasks, you reclaim time for more creative endeavors. Imagine writing a script to handle mundane chores—like a digital assistant that tidies up your workspace while you focus on the fun stuff!
* Self-Motivation and Independence: Ah, the quiet determination that fuels late-night coding sessions. When you’re deep in thought, wrestling with a bug, it’s your inner fire that keeps you going. Independence matters too; sometimes, the best solutions emerge when you’re alone with your thoughts.
* Perseverance: Bugs are like mischievous gremlins—they pop up unexpectedly. But every squashed bug is a victory. Perseverance is the secret sauce that turns frustration into triumph. It’s the “I won’t give up until this works” attitude that defines great developers.
* Collaboration: While individual skills matter, collaboration stitches the tapestry together. Pair programming, code reviews, and brainstorming sessions—these interactions weave innovation.
A diverse team brings different perspectives, enriching the narrative of software development. In this intricate tapestry of coding challenges, we’ve unraveled the threads that bind great developers. From dissecting problems to automating tasks, from self-motivation to unwavering perseverance, these skills form the warp and weft of our journey. But remember, it’s not a solo endeavor; collaboration adds vibrant hues to our canvas. So, whether you’re debugging code or untangling life’s complexities, embrace these skills—they’re the compass guiding us toward innovation and resilience. 🌟
tecnomage
1,891,707
What are the Benefits of Using ECMAScript Classes Over Traditional Prototype-Based Inheritance?
JavaScript has long utilized prototype-based inheritance as a core mechanism to build reusable code....
0
2024-06-17T20:35:41
https://dev.to/orases1/what-are-the-benefits-of-using-ecmascript-classes-over-traditional-prototype-based-inheritance-2j7h
ecmascript, javascript, webdev, programming
[JavaScript](https://www.javascript.com/) has long utilized prototype-based inheritance as a core mechanism to build reusable code. This traditional approach of leveraging prototypes to define methods and properties that JavaScript objects can inherit has served developers well over the years by offering flexibility and dynamic features that help drive web innovation. But with ECMAScript 2015, also known as ES6, JavaScript embraced a new syntactic feature—classes. These [ECMAScript](https://en.wikipedia.org/wiki/ECMAScript) classes provide a much clearer and more familiar syntax for creating objects and dealing with inheritance, drawing close parallels with classical object-oriented programming languages. ## Understanding Prototype-Based Inheritance In JavaScript, objects utilize prototype-based inheritance, a form of object-oriented programming that lets them get properties and methods from other objects. This dynamic system allows objects to be extended and modified on the fly, leading to a more flexible and simplified coding approach. However, this flexibility may also introduce complexity, particularly in large-scale projects where the prototype chain can become deeply nested and difficult to manage. Among the common challenges of prototype-based inheritance is the confusion surrounding the ‘this’ keyword, which can lead to bugs when it does not point to the object that the programmer expects. Inefficient property look-up times can also occur as JavaScript engines search through long prototype chains to access properties not found on the immediate object. While powerful, this mechanism requires careful management to avoid performance bottlenecks and maintainability issues. ## Introducing ECMAScript Classes ECMAScript classes streamline the process of creating objects and managing inheritance in JavaScript by offering a more straightforward and readable syntax. 
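For contrast with the class syntax discussed next, the prototype-based mechanics described in the previous section look like this in practice (a minimal sketch; `Animal` and `Dog` are illustrative names, not taken from the article):

```javascript
// Pre-ES6 pattern: a constructor function plus manual prototype wiring.
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function () {
  return this.name + ' makes a sound';
};

function Dog(name) {
  Animal.call(this, name);          // the "super" call, done by hand
}
// Manual prototype chaining: easy to get wrong or simply forget.
Dog.prototype = Object.create(Animal.prototype);
Dog.prototype.constructor = Dog;

const rex = new Dog('Rex');
console.log(rex.speak());           // "Rex makes a sound"
console.log(rex instanceof Animal); // true
```

Note the two bookkeeping steps (`Object.create` and restoring `constructor`) that every subclass needs; this is exactly the boilerplate the `class` syntax hides.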
Essentially, they offer a more intuitive and streamlined syntax over the conventional prototype-based inheritance model. Defined using the ‘class’ keyword, these classes streamline object-oriented programming by reducing the need for manual prototype management. For example, a class can be defined simply as ‘class MyClass { constructor() { this.myProperty = 'value'; } }’. ECMAScript classes enhance code readability and structure compared to traditional constructor functions, simplifying both understanding and overall maintenance. They encapsulate the prototype manipulation behind a more traditional object-oriented facade, offering a familiar and intuitive approach to inheritance and object construction. ## Clearer Syntax and Structure ECMAScript classes enhance JavaScript with a syntax that is both cleaner and more structured, akin to classical object-oriented languages. Using the ‘class’ and ‘extends’ keywords, this structured approach makes the code more intuitive and easier to follow, especially for developers familiar with other programming languages. The clear delineation of constructor functions, methods, and inheritance mechanisms significantly improves readability and maintainability. ECMAScript classes allow developers to more clearly interpret the code, predict its behavior, and manage updates, which decreases the chances of errors and makes the development process more efficient. ## Encapsulation and Abstraction ECMAScript classes support encapsulation by letting developers distinguish between private and public members in a given class, ultimately helping to protect and manage access to the data at hand. Using the ‘#’ prefix for private fields, such as ‘#privateData’, classes restrict access to internal properties, ensuring that they can only be manipulated through methods defined within the class itself. This promotes data hiding and security. Getters and setters also provide a controlled interface to an object's data, facilitating better abstraction. 
For example, a setter can validate input before setting a value, and a getter can format output data, thus preserving the internal state while presenting an external interface tailored to specific requirements. This structured approach enhances both the robustness and the integrity of the code. ## Inheritance and Extensibility ECMAScript classes can directly help streamline the overall process of defining and extending objects. For example, creating a subclass is as simple as using the ‘extends’ keyword: ‘class SubClass extends BaseClass { constructor() { super(); } }’. This syntax clearly indicates the inheritance relationship and automatically handles prototype chaining, reducing complexity. Under the hood, when a subclass extends a base class, JavaScript automatically sets up the prototype chain, ensuring that instances of the subclass inherit properties and methods from the base class. This mechanism simplifies code while enhancing its extensibility by allowing for easy modifications and additions to class hierarchies. ## Static Methods and Properties In ECMAScript classes, static methods and properties are defined on the class rather than on instances of the class, meaning they belong to the class itself. Defined using the ‘static’ keyword, these members are typically used for functionality that is common to all instances or that belongs to the class conceptually but does not operate on instance data. For example, a utility function that converts input data or a constant value that’s used across various instances. The benefits of static members include memory efficiency since they’re not replicated across instances and the convenience of shared functionality accessible without instantiating the class. This ultimately makes them ideal for utility functions and constants that support the class's operations. 
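The features discussed above can be combined in one short sketch (the names `Shape` and `Circle` are illustrative, not taken from the article): a private field guarded by a getter/setter pair, a subclass created with `extends`/`super`, and a static member shared by the whole class.

```javascript
// Illustrative sketch: private field, getter/setter validation,
// `extends`/`super`, and a static member in one small hierarchy.
class Shape {
  static unit = 'px';            // belongs to the class, not to instances
  #name;                         // private: unreachable outside the class

  constructor(name) {
    this.#name = name;
  }

  get name() {
    return this.#name;
  }

  set name(value) {
    // The setter validates input before touching the private state.
    if (typeof value !== 'string' || value.length === 0) {
      throw new TypeError('name must be a non-empty string');
    }
    this.#name = value;
  }

  describe() {
    return `${this.#name} (measured in ${Shape.unit})`;
  }
}

class Circle extends Shape {
  constructor(radius) {
    super('circle');             // the base constructor initializes #name
    this.radius = radius;
  }
}

const c = new Circle(3);
console.log(c.describe());       // "circle (measured in px)"
console.log(c instanceof Shape); // true
```

Assigning `c.name = ''` throws, because the setter rejects empty strings while the `#name` field itself stays inaccessible from outside the class.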
## Compatibility and Tooling Support ECMAScript classes are widely supported across modern JavaScript environments, including all major browsers and Node.js, ensuring that [developers](https://orases.com/) can use this syntax without compatibility concerns. They integrate seamlessly with ECMAScript modules and many third-party libraries, facilitating modern web development practices. Furthermore, popular development tools such as Visual Studio Code, WebStorm, and Babel provide robust tooling support for ECMAScript classes. These tools provide functionalities such as code completion, syntax highlighting, and advanced refactoring options, which boost productivity and enhance the development experience when working with classes. ## Performance Considerations ECMAScript classes may offer performance improvements over traditional prototype-based inheritance due to optimizations in modern JavaScript engines. These engines can more efficiently handle class syntax, potentially leading to faster property access and method invocation. However, specific benchmarks and studies vary, with performance gains depending on the context and the complexity of operations. To optimize performance when using ECMAScript classes, developers should focus on minimizing class and method complexities, avoiding excessive inheritance chains, and leveraging static properties where practical. These practices help maintain optimal execution speeds, all while actively reducing runtime overhead.
orases1
1,891,689
Creating a browser extension for Chrome / Edge
We will create a simple extension to explore the power of browser extensions.
0
2024-06-17T20:32:40
https://dev.to/prakashm88/creating-a-browser-extension-for-chrome-edge-3d69
chrome, edge, extension, javascript
--- title: Creating a browser extension for Chrome / Edge published: true description: We will create a simple extension to explore the power of browser extensions. tags: chrome, edge, extension, javascript # cover_image: https://itechgenie.com/myblog/wp-content/uploads/sites/2/2024/06/Browser-extensions-1536x878.jpg # Use a ratio of 100:42 for best results. # published_at: 2024-06-17 20:14 +0000 --- Creating a browser extension has never been easier, thanks to the comprehensive documentation and support provided by browser vendors. Below, I’ll walk through the steps to create a simple extension for both Chrome and Microsoft Edge using Manifest V3. We will use this extension to capture the HTTP requests fired on a given webpage and list them in the extension. **Basics of extensions:** Manifests – A manifest is a JSON file that contains metadata about a browser extension, such as its name, version, permissions, and the files it uses. It serves as the blueprint for the extension, informing the browser about the extension’s capabilities and how it should be loaded. **Key Components of a Manifest File:** Here are the key components typically found in a Manifest V3 file: 1\. Manifest Version: There are different versions of the manifest file, with Manifest V3 being the latest and most widely adopted version. Manifest V3 introduces several changes aimed at improving security, privacy, and performance, with a lot of controversy around it. Read more about the controversies at Ghostery. 2\. Name and Version: These fields define the name and version of the extension. Choose a unique name and version. An excellent guide to version semantics is available here. 3\. Description: A short description of the extension’s functionality. 4\. Action: Defines the default popup and icon for the browser action (e.g., toolbar button). 5\. Background: Specifies the background script that runs in the background and can handle events like network requests and alarms. 6\. 
Content Scripts: Defines scripts and stylesheets to be injected into matching web pages. 7\. Permissions: Lists the permissions the extension needs to operate, such as access to tabs, storage, and specific websites. 8\. Icons: Specifies the icons for the extension in different sizes. For this post I created a simple icon using [Microsoft Designer](https://designer.microsoft.com/). I gave it a simple prompt with the description above and got the image below. Extensions require icons in several sizes for the different places they appear, so I used the [Chrome Extension Icon Generator](https://alexleybourne.github.io/chrome-extension-icon-generator/) to generate the sizes needed. ![](https://itechgenie.com/myblog/wp-content/uploads/sites/2/2024/06/icons-150x150.png) 9\. Web Accessible Resources: Defines which resources can be accessed by web pages. **Create a project structure as follows:** ``` HttpRequestViewer/ |-- manifest.json |-- popup.html |-- popup.js |-- background.js |-- history.html |-- history.js |-- popup.css |-- styles.css |-- icons/ |-- icon.png |-- icon16.png |-- icon32.png |-- icon48.png |-- icon128.png ``` **manifest.json** ``` { "name": "API Request Recorder", "description": "Extension to record all the HTTP requests from a webpage.", "version": "0.0.1", "manifest_version": 3, "host_permissions": ["<all_urls>"], "permissions": ["activeTab", "webRequest", "storage"], "action": { "default_popup": "popup.html", "default_icon": "icons/icon.png" }, "background": { "service_worker": "background.js" }, "icons": { "16": "icons/icon16.png", "32": "icons/icon32.png", "48": "icons/icon48.png", "128": "icons/icon128.png" }, "content_security_policy": { "extension_pages": "script-src 'self'; object-src 'self';" }, "web_accessible_resources": [{ "resources": ["images/*.png"], "matches": ["https://*/*"] }] } ``` **popup.html** The popup offers two options: 1\. A Record button to start recording all the HTTP requests 2\. 
Link to view the history of HTTP requests recorded ``` <!DOCTYPE html> <html> <head> <title>API Request Recorder</title> <link rel="stylesheet" href="popup.css" /> </head> <body> <div class="heading"> <img class="logo" src="icons/icon48.png" /> <h1>API Request Recorder</h1> </div> <button id="startStopRecord">Record</button> <div class="button-group"> <a href="#" id="history">View Requests</a> </div> <script src="popup.js"></script> </body> </html> ``` **popup.js** Two event listeners are registered: one for recording (start / stop) and one for viewing history. The first sends a message to background.js, while the second instructs Chrome to open the history page in a new tab. ``` document.getElementById("startStopRecord").addEventListener("click", () => { chrome.runtime.sendMessage({ action: "startStopRecord" }); }); document.getElementById("history").addEventListener("click", () => { chrome.tabs.create({ url: chrome.runtime.getURL("/history.html") }); }); ``` **history.html** ``` <!DOCTYPE html> <html> <head> <title>History</title> <link rel="stylesheet" href="styles.css" /> </head> <body> <h1>History Page</h1> <table> <thead> <tr> <th>Method</th> <th>URL</th> <th>Body</th> </tr> </thead> <tbody id="recorded-data-body"> <!-- Data will be populated here --> </tbody> </table> <script src="history.js"></script> </body> </html> ``` **history.js** Requests "getRecordedData" from background.js and renders the result as HTML. 
``` document.addEventListener("DOMContentLoaded", () => { chrome.runtime.sendMessage({ action: "getRecordedData" }, (response) => { const tableBody = document.getElementById("recorded-data-body"); response.forEach((record) => { const row = document.createElement("tr"); const urlCell = document.createElement("td"); const methodCell = document.createElement("td"); const bodyCell = document.createElement("td"); urlCell.textContent = record.url; methodCell.textContent = record.method; bodyCell.textContent = record.body; row.appendChild(methodCell); row.appendChild(urlCell); row.appendChild(bodyCell); tableBody.appendChild(row); }); }); }); ``` **background.js** Background JS works as a service worker for this extension, listening for and handling events. The background script cannot directly manipulate the page content, but it can post results back for the popup/history scripts to handle the cosmetic changes. ``` let isRecording = false; let recordedDataList = []; chrome.runtime.onMessage.addListener((message, sender, sendResponse) => { console.log("Obtained message: ", message); if (message.action === "startStopRecord") { if (isRecording) { isRecording = false; console.log("Recording stopped..."); sendResponse({ recorder: { status: "stopped" } }); } else { isRecording = true; console.log("Recording started..."); sendResponse({ recorder: { status: "started" } }); } } else if (message.action === "getRecordedData") { sendResponse(recordedDataList); } else { console.log("Unhandled action ..."); } }); chrome.webRequest.onBeforeRequest.addListener( (details) => { if (isRecording) { let requestBody = ""; if (details.requestBody) { if (details.requestBody.formData) { requestBody = JSON.stringify(details.requestBody.formData); } else if (details.requestBody.raw) { requestBody = new TextDecoder().decode(new Uint8Array(details.requestBody.raw[0].bytes)); } } recordedDataList.push({ url: details.url, method: details.method, body: requestBody, }); 
console.log("Recorded Request:", { url: details.url, method: details.method, body: requestBody, }); } }, { urls: ["<all_urls>"] }, ["requestBody"] ); ``` **Let’s load the Extension** All set, now let’s load the extension and test it. * Open Chrome/Edge and go to chrome://extensions/ or edge://extensions/ based on your browser. * Enable “Developer mode” using the toggle in the top right corner. * Click “Load unpacked” and select the directory of your extension. ![Load extension](https://itechgenie.com/myblog/wp-content/uploads/sites/2/2024/06/load-extention-150x150.png) ![upload extension](https://itechgenie.com/myblog/wp-content/uploads/sites/2/2024/06/extention-loaded-150x150.png) ![upload extension 1](https://itechgenie.com/myblog/wp-content/uploads/sites/2/2024/06/extention-loaded-1-150x150.png) * Your extension should now be loaded, and you can interact with it using the popup. * When you click the “Record” button, it will start logging API requests to the console. [![](https://itechgenie.com/myblog/wp-content/uploads/sites/2/2024/06/extention-loaded-2-150x150.png)](https://itechgenie.com/myblog/wp-content/uploads/sites/2/2024/06/extention-loaded-2.png) * Click the “Record” button again and hit the “View requests” link in the popup to view the history of APIs. I have a sample page (https://itechgenie.com/demos/apitesting/index.html) with 4 API calls, which also loads images based on the API responses. You can see all the requests fired from the page, including the JS, CSS, images, and API calls. ![Console logs](https://itechgenie.com/myblog/wp-content/uploads/sites/2/2024/06/console-logs.png) ![View requests](https://itechgenie.com/myblog/wp-content/uploads/sites/2/2024/06/view-requests.png) Now it’s up to the developer’s imagination to build on these captured request and response data and deliver different experiences. 
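One part of background.js that can be exercised outside the browser is the raw-body decoding: `TextEncoder`/`TextDecoder` are standard globals in modern browsers and Node. The `details` object below is a hand-built stand-in for what `chrome.webRequest.onBeforeRequest` passes to the listener (an assumption for illustration, not a captured payload):

```javascript
// Simulate the shape of the `details` argument the listener receives
// when a request carries a raw (non-form) body.
const payload = new TextEncoder().encode('{"user":"demo"}');
const details = {
  url: 'https://example.com/api',
  method: 'POST',
  requestBody: { raw: [{ bytes: payload.buffer }] },
};

// The same decoding step used in background.js above.
const requestBody = new TextDecoder().decode(
  new Uint8Array(details.requestBody.raw[0].bytes)
);
console.log(requestBody); // {"user":"demo"}
```

Running this in Node or a browser console is a quick way to sanity-check the decoding logic without loading the extension.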
Code is available on GitHub at [HttpRequestViewer](https://github.com/ITechGenie/HttpRequestViewer). The post was originally shared at [ITechGenie.com](https://itechgenie.com/myblog/2024/06/browser-extension-sample-chrome-edge/)
prakashm88
1,888,346
How to Manage Terraform Versions
The simplest method for handling Terraform versions is to use tenv. tenv is a version manager for...
0
2024-06-17T20:32:07
https://dev.to/kvendingoldo/how-to-manage-terraform-versions-2e2l
terraform, opentofu, tutorial, devops
The simplest method for handling Terraform versions is to use [tenv](https://github.com/tofuutils/tenv). [tenv](https://github.com/tofuutils/tenv) is a version manager for Terraform, OpenTofu, Terragrunt, and Atmos, written in Go. This versatile version manager takes the complexity out of version control, saving time otherwise spent on IaC tools’ version management so that developers and DevOps can focus on what matters most - crafting innovative products and driving business value. ## Why do I need a Terraform version manager? With a single Terraform project, installing, upgrading, or switching to tools like OpenTofu is straightforward. However, handling multiple projects with different Terraform versions can be challenging. Regular upgrades and tool switches require careful coordination to maintain functionality and stability across projects. The key challenges: * Version Compatibility: Different projects may need specific Terraform versions, which might not be backward compatible. * Dependency Management: Dependencies for each project must match the Terraform version of that project. * Environment Consistency: It becomes challenging to maintain consistency throughout the development, staging, and production environments. * Tooling and Integration: Various Terraform versions may require modifications to CI/CD pipelines and integrations. **tenv** covers all of the described challenges under the hood, in a single binary that manages Terraform versions transparently. ## 🚀 tenv installation ### MacOS ``` brew install tenv ``` ### Windows ``` choco install tenv ``` ### Linux For Linux, you can install the tenv version manager via packaged binaries (.deb, .rpm, .apk, pkg.tar.zst, .zip or .tar.gz format) from the release page, or via the apk/yay/snap/nix package managers. For more information about the Linux tenv installation, check the [README.md](https://github.com/tofuutils/tenv/blob/main/README.md). 
## Manage Terraform versions via tenv version manager ![manage terraform version](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44cifcpwr84w47lh0dvd.jpg) Once you have the tenv version manager installed, you can use it to install specific versions of Terraform. To install Terraform, do the following: Open a terminal, go to the directory with your Terraform code (if you have any) and execute the following command: ``` tenv tf install ``` Based on the .tf code, tenv automatically detects and installs the necessary version of Terraform. If no version is detected in the sources, the latest version will be installed. If necessary, a specific Terraform version can also be installed. Let's try to install Terraform 1.8.5: ``` tenv tf install 1.8.5 ``` The install command also supports version constraints such as: - `latest` - the latest available stable version - `latest-pre` - the latest available version, including unstable ones - `latest-allowed` or `min-required` - [tenv](https://github.com/tofuutils/tenv) will scan your Terraform files to detect which version is maximally allowed or minimally required. Now, verify the Terraform version with the following command: ``` terraform version ``` That's it. No symlinks, additional commands or downloads required. To read about more installation cases for Terraform, you can check the official [README.md](https://github.com/tofuutils/tenv/blob/main/README.md) file. ## Support Us, Contact Us If you like this post, support us: download tenv, try it out and give us feedback in our official [discussions channel](https://github.com/tofuutils/tenv/discussions)! Leave a star 🌟 [on GitHub](https://github.com/tofuutils/tenv) if you like the tenv version manager.
kvendingoldo
1,891,705
Polling Requests to an API in JavaScript
Polling is a technique that repeatedly requests data from a server at regular intervals until a...
0
2024-06-17T20:26:19
https://dev.to/siddharthssb11/polling-requests-to-an-api-in-javascript-1g2d
javascript, react, api, reactnative
Polling is a technique that repeatedly requests data from a server at regular intervals until a desired response is received or a timeout period elapses. In this article, we will explore how to implement a polling request method in JavaScript to repeatedly hit an API every 5 seconds for up to 300 seconds (5 minutes) or until a successful response is received. ## Understanding Polling Polling involves sending periodic requests to an API to check for an update or a specific response. This can be particularly useful in scenarios where you need to wait for a process to complete on the server and then perform actions based on the result. ## Polling Logic Here is a step-by-step breakdown of the polling logic we will implement: 1. Define the API endpoint and desired success response. 2. Set the polling interval (5 seconds) and maximum polling duration (300 seconds). 3. Use a loop or a recursive function to repeatedly send requests to the API at the specified interval. 4. Check the API response after each request. 5. Stop polling if a successful response is received or the maximum polling duration is reached. ## Implementing Polling in JavaScript Let's implement this logic in JavaScript: Step 1: Define the API Endpoint and Success Response ``` const apiEndpoint = 'https://example.com/api/endpoint'; // Replace with your API endpoint const successResponse = 'success'; // Define what constitutes a success response ``` Step 2: Set Polling Interval and Maximum Duration ``` const pollingInterval = 5000; // 5 seconds in milliseconds const maxPollingDuration = 300000; // 300 seconds (5 minutes) in milliseconds ``` Step 3: Implement the Polling Function We will create a function pollApi that handles the polling logic. This function will use setTimeout to schedule the next API request if necessary. 
``` async function pollApi(apiEndpoint, successResponse, pollingInterval, maxPollingDuration) { const startTime = Date.now(); // Record the start time const makeRequest = async () => { try { const response = await fetch(apiEndpoint); // Make request const data = await response.json(); if (data.status === successResponse) { console.log('Success response received:', data); return; // Stop polling on success response } const elapsedTime = Date.now() - startTime; if (elapsedTime < maxPollingDuration) { setTimeout(makeRequest, pollingInterval); // Schedule next request } else { console.log('Maximum polling duration reached. Stopping polling.'); } } catch (error) { console.error('Error making API request:', error); const elapsedTime = Date.now() - startTime; if (elapsedTime < maxPollingDuration) { setTimeout(makeRequest, pollingInterval); // Schedule next request } else { console.log('Maximum polling duration reached. Stopping polling.'); } } }; makeRequest(); // Start the first request } ``` Step 4: Start Polling Call the pollApi function to start polling the API. ``` pollApi(apiEndpoint, successResponse, pollingInterval, maxPollingDuration); ``` and voilà, it works. ### Recursive Approach over Loop A recursive function is used in the provided solution rather than a loop. The function makeRequest calls itself using setTimeout to create the delay between API requests. This approach leverages JavaScript's asynchronous capabilities to avoid blocking the main thread while waiting for the next request to be made. The recursive aspect comes from the makeRequest function calling itself via setTimeout, allowing it to schedule future executions without blocking the main thread. This approach ensures that the next request is only made after the specified interval, maintaining the polling cadence. 
This recursive approach is effective for polling in JavaScript, ensuring that the application remains responsive while waiting for the desired API response or the maximum polling duration to be reached. ## Conclusion In this article, we demonstrated how to implement a polling request method in JavaScript to repeatedly hit an API at 5-second intervals for up to 300 seconds or until a successful response is received. Polling can be an effective way to wait for asynchronous processes on the server and react to their outcomes in real time. With this approach, you can ensure your application remains responsive and capable of handling real-world scenarios that require periodic data checks like payment status, ETA status, etc.
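As a variation on the approach above, the recursive poller can be wrapped in a Promise so callers can `await` the final result instead of reading console logs. This is a sketch, not part of the original article: `fetchFn` is an injected stand-in for `fetch` so the flow can be tried without a live endpoint.

```javascript
// Promise-based variant of the recursive setTimeout poller.
function pollApiPromise({ fetchFn, successStatus, intervalMs, maxDurationMs }) {
  const startTime = Date.now();
  return new Promise((resolve, reject) => {
    const attempt = async () => {
      try {
        const response = await fetchFn();
        const data = await response.json();
        if (data.status === successStatus) {
          return resolve(data);            // stop polling on success
        }
      } catch (error) {
        // Swallow transient errors and keep retrying until the deadline.
      }
      if (Date.now() - startTime < maxDurationMs) {
        setTimeout(attempt, intervalMs);   // schedule the next request
      } else {
        reject(new Error('Maximum polling duration reached'));
      }
    };
    attempt();                             // start the first request
  });
}

// Stubbed endpoint that succeeds on the third call.
let calls = 0;
const fakeFetch = async () => ({
  json: async () => (++calls < 3 ? { status: 'pending' } : { status: 'success' }),
});

pollApiPromise({
  fetchFn: fakeFetch,
  successStatus: 'success',
  intervalMs: 10,      // short interval so the demo finishes quickly
  maxDurationMs: 1000,
}).then((data) => console.log('Success response received:', data));
```

With a real endpoint you would pass `fetchFn: () => fetch(apiEndpoint)` and the 5-second/300-second values from the article.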
siddharthssb11
1,891,704
shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 1
In this article, I discuss how Blocks page is built on ui.shadcn.com. Blocks page has a lot of...
0
2024-06-17T20:22:24
https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-is-blocks-page-built-part-1-1l1k
javascript, opensource, nextjs, shadcnui
In this article, I discuss how [Blocks page](https://ui.shadcn.com/blocks) is built on [ui.shadcn.com](http://ui.shadcn.com). [Blocks page](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/blocks/page.tsx) uses a lot of utilities, hence I broke this Blocks page analysis down into 5 parts. 1. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 1 2. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 2 (Coming soon) 3. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 3 (Coming soon) 4. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 4 (Coming soon) 5. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 5 (Coming soon) In part 1, we will look at the following: 1. Where to find blocks page code in the shadcn-ui/ui repository? 2. getAllBlockIds function 3. \_getAllBlocks function These functions in turn call other utility functions that will be explained in the other parts. Where to find blocks page code in the shadcn-ui/ui repository? -------------------------------------------------------------- [blocks/page.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/blocks/page.tsx) is where you will find the Blocks page related code in the [shadcn-ui/ui](https://github.com/shadcn-ui/ui/tree/main) repository ![](https://media.licdn.com/dms/image/D4E12AQGtURsQ-dgNeg/article-inline_image-shrink_1500_2232/0/1718654911835?e=1724284800&v=beta&t=PGoUPbRno7TF7ZPMNNvquXUE6f73uUmfykL2ed-0QOw) Just because it has only 10 lines of code does not mean it is a simple page; there is a lot going on behind these lines, especially in [lib/blocks.ts](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L75), but don’t worry, we will understand the utility functions used in depth later in this article and in the other parts as well. 
BlocksPage gets the blocks from a function named [getAllBlockIds()](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L20) which is imported from [lib/blocks](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L75), and these blocks are mapped with a BlockDisplay component that shows them on the Blocks page. Let’s find out what is in getAllBlockIds() ![](https://media.licdn.com/dms/image/D4E12AQEdyKdc4S4WAw/article-inline_image-shrink_1500_2232/0/1718654912544?e=1724284800&v=beta&t=jFtvj-6buDjyHvdirBgML2F0dhPBOldJqmmROkYbmcU) getAllBlockIds function ----------------------- The below code snippet is picked from [lib/blocks.ts](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L20) ```js export async function getAllBlockIds( style: Style["name"] = DEFAULT_BLOCKS_STYLE ) { const blocks = await _getAllBlocks(style) return blocks.map((block) => block.name) } ``` This code snippet is self-explanatory: the style parameter gets the default value `DEFAULT_BLOCKS_STYLE` because on the Blocks page, we call getAllBlockIds without any params, as shown below: ```js const blocks = await getAllBlockIds() ``` But wait, what is the value of `DEFAULT_BLOCKS_STYLE`? At [line 14 in lib/blocks](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L14), you will find the below code: ``` const DEFAULT_BLOCKS_STYLE = "default" satisfies Style["name"] ``` “default” satisfies Style\[“name”\]; Style is from [register/styles](https://github.com/shadcn-ui/ui/blob/main/apps/www/registry/styles.ts#L12). I just admire the quality of the TypeScript written in shadcn-ui/ui. So, \_getAllBlocks gets called with a param named style that is initialized to “default”. So far, the code is straightforward. 
Let’s now understand what is in [\_getAllBlocks](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L75) \_getAllBlocks function ----------------------- The below code snippet is picked from [lib/blocks.ts](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L20) ```js async function _getAllBlocks(style: Style["name"] = DEFAULT_BLOCKS_STYLE) { const index = z.record(registryEntrySchema).parse(Index[style]) return Object.values(index).filter( (block) => block.type === "components:block" ) } ``` Even though getAllBlockIds above calls this function with a parameter, the style parameter still has a default value. ```js const index = z.record(registryEntrySchema).parse(Index[style]) ``` The code above has the following: ### z.record [Record schemas in Zod](https://zod.dev/?id=records) are used to validate types such as Record<string, number>. This is particularly useful for storing or caching items by ID. ### registryEntrySchema [registryEntrySchema](https://github.com/shadcn-ui/ui/blob/main/apps/www/registry/schema.ts#L16) defines a schema for the blocks ```js export const registryEntrySchema = z.object({ name: z.string(), description: z.string().optional(), dependencies: z.array(z.string()).optional(), devDependencies: z.array(z.string()).optional(), registryDependencies: z.array(z.string()).optional(), files: z.array(z.string()), source: z.string().optional(), type: z.enum([ "components:ui", "components:component", "components:example", "components:block", ]), category: z.string().optional(), subcategory: z.string().optional(), chunks: z.array(blockChunkSchema).optional(), }) ``` ### parse(Index\[style\]) [parse](https://zod.dev/?id=parse) is a schema method that checks data is valid. If it is, a value is returned with full type information! Otherwise, an error is thrown. 
Example: ```js const stringSchema = z.string(); stringSchema.parse("fish"); // => returns "fish" stringSchema.parse(12); // throws error ``` [Index](https://github.com/shadcn-ui/ui/blob/06cc0cdf3d080555d26abbe6639f2d7f6341ec73/apps/www/__registry__/index.tsx#L6) is imported from \_registry\_folder and contains all the components used in shadcn-ui/ui. ![](https://media.licdn.com/dms/image/D4E12AQGt9pk2Rk8QgQ/article-inline_image-shrink_1000_1488/0/1718654912965?e=1724284800&v=beta&t=9v-PvJOlqLqvuPBMCS5bMObX-8AYOHJfX0wjIUF6-fI) Looks like this file gets auto generated by [scripts/build-registry.ts](https://github.com/shadcn-ui/ui/blob/main/apps/www/scripts/build-registry.mts) and this is also used in CLI package to add shadcn components into your project, more on this in the upcominhg articles. Basically, we validate Index\[“default”\] against the registry schema to ensure the auto generated code is valid and is ready for further processing such as showing in blocks page. \_getAllBlocks filters the blocks based on the block type as shown below: ```js return Object.values(index).filter( (block) => block.type === "components:block" ) ``` This is how you are able to see components that are specific to Blocks page. Conclusion: ----------- We looked at two important module functions named getAllBlockIds and \_getAllBlocks. I find this code to be pretty self explanatory, I do admire the way zod’s Record schema validations are used on the auto generated registry index json. > _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/) About me: --------- Website: [https://ramunarasinga.com/](https://ramunarasinga.com/) Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/) Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga) Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com) References: ----------- 1. 
[https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/blocks/page.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/blocks/page.tsx) 2. [https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L20](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L20) 3. [https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L75](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L75) 4. [https://github.com/shadcn-ui/ui/blob/main/apps/www/registry/schema.ts#L16](https://github.com/shadcn-ui/ui/blob/main/apps/www/registry/schema.ts#L16)
ramunarasinga
1,891,703
CYBER SPACE HACK PRO A CERTIFIED CRYPTO RECOVERY EXPERT
Ever since my last experience with a fake online investment company where I invested a total amount...
0
2024-06-17T20:22:09
https://dev.to/douglas_gerald_b3c281c1a3/cyber-spece-hack-pro-a-certified-crypto-recovery-expert-4o6o
Ever since my last experience with a fake online investment company where I invested a total amount of $330k worth of BTC, to be in the company's monthly payroll and make some interest, little did I know that I was dealing with a fraudulent company, when it was time for me to make a withdrawal I was being restricted from doing so even when I can still read my money from the dashboard, I got depressed about this I almost gave up on life cause I felt it is over I was dying inside till I eventually opened up to a friend who referred me to a hacker who he claimed was able to recover my lost money, at first I was skeptical about it cause we all know once you give out your seed phrase out nothing can be done to recover your money…well after several days of debating within myself I decided to give it a try and to my greatest surprise he was able to recover my money within three days. I was so happy, thank you so much To the team at Cyber Space hack pro, it would be selfish of me if I didn’t refer this hacker to you…contact him Via Gmail: Cyberspacehackpro. WhatsApp:  +1 (440) 7423096 https://cyberspacehackpro0.wixsite.com/cyberspacehackpro ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mv7iraobdf974r7tyvhf.jpg)
douglas_gerald_b3c281c1a3
1,891,701
Turkish sofa bed
Forget the cramped studio struggles or the guest room that doubles as a storage unit. The Turkish...
0
2024-06-17T20:20:25
https://dev.to/jasontodd220/turkish-sofa-bed-2o7j
javascript, beginners
Forget the cramped studio struggles or the guest room that doubles as a storage unit. The **[Turkish sofa bed](https://sofaonyourchoice.co.uk/product-category/sofa-beds/)** emerges as a space-saving hero, blending practicality with a touch of Ottoman flair. This isn't just any convertible sofa; it's a statement piece that offers comfort, adaptability, and a sprinkle of historical charm.

**A Legacy Woven in Threads of Time**

The Turkish sofa bed, often referred to as a "divan," boasts a rich heritage. Its story begins in the Ottoman Empire, where maximizing space was paramount in both grand palaces and humble abodes. Ottoman designers were pioneers of multifunctional furniture, laying the groundwork for the sofa bed concept we cherish today. Traditional divans were crafted with sturdy wooden frames and luxurious upholstery, reflecting a keen eye for both comfort and lasting quality.

**Modern Twists on a Timeless Design**

Today's Turkish sofa beds maintain the core functionality of their ancestors while incorporating modern design elements and improved mechanisms. Typically available in two or three-seater configurations, they offer plush seating for daytime lounging. With a simple transformation (often a click-clack mechanism, fold-out, or pull-out system), they morph into a comfortable full-size or twin-size bed, perfect for occasional use or hosting overnight guests.

**The Enchanting Allure of a Turkish Sofa Bed**

Here's what sets these beauties apart:

- **Built to Last:** Turkish sofa beds boast robust frames, often constructed from hardwoods like beech or oak. This translates to stability and longevity, making them a wise investment.
- **A Haven for Relaxation:** Well-padded cushions upholstered in a variety of fabrics or even leather provide a luxurious seating experience.
- **Double Duty Delight:** The effortless conversion from sofa to bed is a game-changer. Whether you choose a click-clack mechanism or a fold-out system, the transformation should be smooth and user-friendly.
- **Hidden Treasures:** Many Turkish sofa beds boast secret storage compartments beneath the seat. This ingenious feature allows you to tuck away blankets, pillows, or other essentials, keeping your living space clutter-free.
- **A Touch of Ottoman Majesty:** The aesthetics of Turkish sofa beds often draw inspiration from Ottoman design. Expect rolled arms, decorative studs, or intricate upholstery patterns that lend a touch of elegance to your space.

**Finding Your Perfect Turkish Match**

With a plethora of options available, selecting the ideal Turkish sofa bed requires some thought:

- **Size Matters:** Measure your space and determine the ideal size for both the sofa and bed configurations.
- **Mechanism Matters Too:** Choose a conversion system that feels comfortable and easy for you to operate.
- **Upholstery Options:** Select a fabric or leather that complements your décor and is easy to clean. Consider durability and stain resistance, especially if you have pets or children.
- **Style Speaks Volumes:** Turkish sofa beds come in a spectrum of styles, from modern minimalist to traditionally inspired. Choose one that reflects your overall design aesthetic.
- **Budgeting is Key:** Turkish sofa beds vary in price depending on materials, size, and brand. Set a budget and prioritize features that align with your needs.

**Beyond Functionality: A Feast for the Eyes**

Turkish sofa beds are more than just practical – they're conversation starters. The rich fabrics, intricate details, and often bold colors add a touch of Ottoman grandeur to your living room, guest room, or even a home office. Here are some tips for incorporating a Turkish sofa bed into your décor:

- **Balance is Beautiful:** If your sofa bed is quite large, opt for lighter furniture pieces around it to avoid overwhelming the space.
- **Accessorize with Flair:** Accentuate the Ottoman influence with throw pillows featuring geometric patterns or metallic embroidery.
- **Embrace Boldness:** Don't shy away from vibrant upholstery colors if they suit your taste. Just balance them with neutral tones in other elements of the room.
- **Layer Up the Textures:** Combine the plushness of the sofa bed with textured rugs, throws, or curtains to add visual interest.

**The Multifaceted Marvel: The Turkish Sofa Bed in Action**

The true beauty of the Turkish sofa bed lies in its adaptability. Here are some ways you can make it shine in your home:

- **Small Space Savior:** A Turkish sofa bed offers a perfect solution in studio apartments or compact living spaces. By day, it provides a comfortable seating area, and by night, it transforms into a cozy sleeping space.
- **The Guest Room Chameleon:** A Turkish sofa bed eliminates the need for a dedicated guest bed for occasional guests. It offers a comfortable sleeping option without sacrificing valuable space.
jasontodd220
1,891,691
Case Study: The Rational Class
This section shows how to design the Rational class for representing and processing rational numbers....
0
2024-06-17T20:20:03
https://dev.to/paulike/case-study-the-rational-class-1o41
java, programming, learning, beginners
This section shows how to design the **Rational** class for representing and processing rational numbers. A rational number has a numerator and a denominator in the form **a/b**, where **a** is the numerator and **b** the denominator. For example, **1/3**, **3/4**, and **10/4** are rational numbers. A rational number cannot have a denominator of **0**, but a numerator of **0** is fine. Every integer **i** is equivalent to a rational number **i/1**. Rational numbers are used in exact computations involving fractions—for example, **1/3 = 0.33333**. . . . This number cannot be precisely represented in floating-point format using either the data type **double** or **float**. To obtain the exact result, we must use rational numbers. Java provides data types for integers and floating-point numbers, but not for rational numbers. This section shows how to design a class to represent rational numbers. Since rational numbers share many common features with integers and floating-point numbers, and **Number** is the root class for numeric wrapper classes, it is appropriate to define **Rational** as a subclass of **Number**. Since rational numbers are comparable, the **Rational** class should also implement the **Comparable** interface. Figure below illustrates the **Rational** class and its relationship to the **Number** class and the **Comparable** interface. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xz0fytrs7kgth2wssyv2.png) A rational number consists of a numerator and a denominator. There are many equivalent rational numbers—for example, **1/3 = 2/6 = 3/9 = 4/12**. The numerator and the denominator of **1/3** have no common divisor except **1**, so **1/3** is said to be in _lowest terms_. To reduce a rational number to its lowest terms, you need to find the greatest common divisor (GCD) of the absolute values of its numerator and denominator, then divide both the numerator and denominator by this value. 
You can use the method for computing the GCD of two integers **n** and **d** suggested [here](https://dev.to/paulike/case-studies-on-loops-27l1) (GreatestCommonDivisor.java). The numerator and denominator in a **Rational** object are reduced to their lowest terms. As usual, let us first write a test program to create two **Rational** objects and test their methods. The program below is a test program.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lebatlhjdw61kzkqcc71.png)

The **main** method creates two rational numbers, **r1** and **r2** (lines 7–8), and displays the results of **r1 + r2**, **r1 - r2**, **r1 x r2**, and **r1 / r2** (lines 11–14). To perform **r1 + r2**, invoke **r1.add(r2)** to return a new **Rational** object. Similarly, invoke **r1.subtract(r2)** for **r1 - r2**, **r1.multiply(r2)** for **r1 x r2**, and **r1.divide(r2)** for **r1 / r2**. The **doubleValue()** method displays the double value of **r2** (line 15). The **doubleValue()** method is defined in **java.lang.Number** and overridden in **Rational**.

Note that when a string is concatenated with an object using the plus sign (**+**), the object's string representation from the **toString()** method is used to concatenate with the string. So **r1 + " + " + r2 + " = " + r1.add(r2)** is equivalent to **r1.toString() + " + " + r2.toString() + " = " + r1.add(r2).toString()**.

The **Rational** class is implemented in the program below.

```
package demo;

public class Rational extends Number implements Comparable<Rational> {
  // Data fields for numerator and denominator
  private long numerator = 0;
  private long denominator = 1;

  /** Construct a rational with default properties */
  public Rational() {
    this(0, 1);
  }

  /** Construct a rational with specified numerator and denominator */
  public Rational(long numerator, long denominator) {
    long gcd = gcd(numerator, denominator);
    this.numerator = ((denominator > 0) ? 1 : -1) * numerator / gcd;
    this.denominator = Math.abs(denominator) / gcd;
  }

  /** Find GCD of two numbers */
  private static long gcd(long n, long d) {
    long n1 = Math.abs(n);
    long n2 = Math.abs(d);
    int gcd = 1;

    for (int k = 1; k <= n1 && k <= n2; k++) {
      if (n1 % k == 0 && n2 % k == 0)
        gcd = k;
    }

    return gcd;
  }

  /** Return numerator */
  public long getNumerator() {
    return numerator;
  }

  /** Return denominator */
  public long getDenominator() {
    return denominator;
  }

  /** Add a rational number to this rational */
  public Rational add(Rational secondRational) {
    long n = numerator * secondRational.getDenominator()
        + denominator * secondRational.getNumerator();
    long d = denominator * secondRational.getDenominator();
    return new Rational(n, d);
  }

  /** Subtract a rational number from this rational */
  public Rational subtract(Rational secondRational) {
    long n = numerator * secondRational.getDenominator()
        - denominator * secondRational.getNumerator();
    long d = denominator * secondRational.getDenominator();
    return new Rational(n, d);
  }

  /** Multiply a rational number by this rational */
  public Rational multiply(Rational secondRational) {
    long n = numerator * secondRational.getNumerator();
    long d = denominator * secondRational.getDenominator();
    return new Rational(n, d);
  }

  /** Divide this rational by a rational number */
  public Rational divide(Rational secondRational) {
    long n = numerator * secondRational.getDenominator();
    long d = denominator * secondRational.numerator;
    return new Rational(n, d);
  }

  @Override
  public String toString() {
    if (denominator == 1)
      return numerator + "";
    else
      return numerator + "/" + denominator;
  }

  @Override // Override the equals method in the Object class
  public boolean equals(Object other) {
    if ((this.subtract((Rational)(other))).getNumerator() == 0)
      return true;
    else
      return false;
  }

  @Override // Implement the abstract intValue method in Number
  public int intValue() {
    return (int)doubleValue();
  }

  @Override // Implement the abstract floatValue method in Number
  public float floatValue() {
    return (float)doubleValue();
  }

  @Override // Implement the abstract doubleValue method in Number
  public double doubleValue() {
    return numerator * 1.0 / denominator;
  }

  @Override // Implement the abstract longValue method in Number
  public long longValue() {
    return (long)doubleValue();
  }

  @Override // Implement the compareTo method in Comparable
  public int compareTo(Rational o) {
    if (this.subtract(o).getNumerator() > 0)
      return 1;
    else if (this.subtract(o).getNumerator() < 0)
      return -1;
    else
      return 0;
  }
}
```

The rational number is encapsulated in a **Rational** object. Internally, a rational number is represented in its lowest terms (line 15), and the numerator determines its sign (line 16). The denominator is always positive (line 17).

The **gcd** method (lines 21–32 in the **Rational** class) is private; it is not intended for use by clients. The **gcd** method is only for internal use by the **Rational** class. The **gcd** method is also static, since it is not dependent on any particular **Rational** object.

The **abs(x)** method (lines 22–23 in the **Rational** class) is defined in the **Math** class and returns the absolute value of **x**.

Two **Rational** objects can interact with each other to perform add, subtract, multiply, and divide operations. These methods return a new **Rational** object (lines 45–70).

The methods **toString** and **equals** in the **Object** class are overridden in the **Rational** class (lines 72–86). The **toString()** method returns a string representation of a **Rational** object in the form **numerator/denominator**, or simply **numerator** if **denominator** is **1**. The **equals(Object other)** method returns true if this rational number is equal to the other rational number.

The abstract methods **intValue**, **longValue**, **floatValue**, and **doubleValue** in the **Number** class are implemented in the **Rational** class (lines 88–106). These methods return the **int**, **long**, **float**, and **double** value for this rational number.

The **compareTo(Rational other)** method in the **Comparable** interface is implemented in the **Rational** class (lines 108–116) to compare this rational number to the other rational number.

The getter methods for the properties **numerator** and **denominator** are provided in the **Rational** class, but the setter methods are not provided, so, once a **Rational** object is created, its contents cannot be changed. The **Rational** class is immutable. The **String** class and the wrapper classes for primitive type values are also immutable.

The numerator and denominator are represented using two variables. It is possible to use an array of two integers to represent the numerator and denominator. The signatures of the public methods in the **Rational** class are not changed, although the internal representation of a rational number is changed. This is a good example to illustrate the idea that the data fields of a class should be kept private so as to encapsulate the implementation of the class from the use of the class.

The **Rational** class has serious limitations and can easily overflow. For example, the following code will display an incorrect result, because the denominator is too large.

```
public class Test {
  public static void main(String[] args) {
    Rational r1 = new Rational(1, 123456789);
    Rational r2 = new Rational(1, 123456789);
    Rational r3 = new Rational(1, 123456789);
    System.out.println("r1 * r2 * r3 is " +
      r1.multiply(r2.multiply(r3)));
  }
}
```

```
r1 * r2 * r3 is -1/2204193661661244627
```

To fix it, you can implement the **Rational** class using **BigInteger** for the numerator and denominator.
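For a quick cross-language illustration of why arbitrary-precision integers cure the overflow (this sketch is a comparison in Python, not part of the Java case study), the standard-library `fractions.Fraction` stores the numerator and denominator as unbounded ints, so the very computation that overflows the `long`-based class stays exact:

```python
from fractions import Fraction

# The same computation that overflows the long-based Rational class:
r = Fraction(1, 123456789)
product = r * r * r  # arbitrary-precision arithmetic, always reduced to lowest terms

# The denominator is exactly 123456789**3 — no overflow, no sign corruption.
print(product)
```

A BigInteger-backed Java `Rational` behaves the same way: every intermediate product simply grows as many digits as it needs.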
paulike
1,891,690
Authentication in React: Securing Your Application
Authentication is a fundamental aspect of web development that ensures only authorized users can...
0
2024-06-17T20:18:42
https://dev.to/grace_momah/authentication-in-react-securing-your-application-3h5j
Authentication is a fundamental aspect of web development that ensures only authorized users can access certain functionalities or data within an application. In the context of React, a popular front-end library for building user interfaces, implementing authentication is not only possible but also a common practice.

**Why is Authentication Important?**

Authentication serves several critical purposes:

- **Security**: It protects sensitive data from unauthorized access.
- **User Experience**: It provides a personalized experience for users.
- **Compliance**: It helps meet legal and regulatory requirements for data protection.

**How Can Authentication Be Implemented in React?**

React itself doesn't come with built-in authentication features; however, it provides the necessary tools to integrate authentication mechanisms. Here are some ways to implement authentication in React:

*1. Using Context API for State Management:*

The Context API can be used to create a global state for authentication status, which can be accessed by any component in the app.

```javascript
import React, { createContext, useContext, useState } from 'react';

// Create an AuthContext
const AuthContext = createContext(null);

// Provide AuthContext to the component tree
export const AuthProvider = ({ children }) => {
  const [authUser, setAuthUser] = useState(null);

  const login = (user) => {
    setAuthUser(user);
  };

  const logout = () => {
    setAuthUser(null);
  };

  return (
    <AuthContext.Provider value={{ authUser, login, logout }}>
      {children}
    </AuthContext.Provider>
  );
};

// Use AuthContext in any component
export const useAuth = () => useContext(AuthContext);
```

*2. Integrating Third-Party Authentication Services:*

Services like Firebase Authentication or Auth0 can be integrated into a React app to handle authentication.

```javascript
import { useEffect } from 'react';
import { useAuth } from './AuthProvider';
import firebase from 'firebase/app';
import 'firebase/auth';

const LoginComponent = () => {
  const { login } = useAuth();

  useEffect(() => {
    firebase.auth().onAuthStateChanged((user) => {
      if (user) {
        login(user);
      }
    });
  }, [login]);

  // ...rest of the component
};
```

*3. Protecting Routes with Higher-Order Components:*

Higher-order components (HOCs) can be used to protect routes that require authentication.

```javascript
import React from 'react';
import { Redirect } from 'react-router-dom';
import { useAuth } from './AuthProvider';

export const withAuthProtection = (WrappedComponent) => {
  return (props) => {
    const { authUser } = useAuth();

    if (!authUser) {
      // Redirect to login if not authenticated
      return <Redirect to="/login" />;
    }

    return <WrappedComponent {...props} />;
  };
};
```

*4. Managing Tokens and Sessions:*

React apps often manage JSON Web Tokens (JWTs) or session cookies to maintain user sessions.

```javascript
import axios from 'axios';

axios.interceptors.request.use((config) => {
  const token = localStorage.getItem('token');
  config.headers.Authorization = token ? `Bearer ${token}` : '';
  return config;
});
```

**Conclusion:**

Authentication is not only possible in React but also essential for creating secure and user-friendly applications. By leveraging the Context API, integrating third-party services, protecting routes, and managing tokens and sessions, developers can implement robust authentication systems in their React applications.

Remember that authentication is a complex process that involves both front-end and back-end considerations. It's important to keep security best practices in mind throughout the development process to ensure your application remains secure against potential threats.

With these strategies and examples, you should have a solid understanding of how to approach authentication in your next React project.
Whether you're building a small personal app or a large-scale enterprise solution, proper authentication is key to protecting your users and your data.
grace_momah
1,889,787
☁️ AWS - Email notification system
Technologies: ☁️ AWS: 🪣 S3 bucket, ⚡️ lambda function, 🔔 SNS, 📬 SQS 🤖 IaC: 🏗️ Terraform 🔄...
0
2024-06-17T20:13:55
https://dev.to/sharker3312/aws-email-notification-system-2clo
terraform, aws, python, devops
## Technologies

☁️ AWS: 🪣 S3 bucket, ⚡️ lambda function, 🔔 SNS, 📬 SQS

🤖 IaC: 🏗️ Terraform

🔄 CD: 🛠️ GitLab

**Project summary:** An email notification system built on AWS. Once a file is uploaded to an S3 bucket, it triggers an event that calls a Lambda function and notifies service subscribers that a file has been uploaded.

## 📌Requirements

- AWS [account](https://aws.amazon.com/free/)
- [VsCode](https://code.visualstudio.com/download)
- [GitLab](https://gitlab.com/users/sign_in)

## 🎯Workflow

1️⃣ Source code 📄
2️⃣ Lambda function ⚡️
3️⃣ Deploy to AWS through GitLab 🚀

---

## 1️⃣ Source code 📄

[Gitlab project](https://gitlab.com/sharker2/AWS-Projects/-/tree/main/Notification-system?ref_type=heads)

**Structure**

```
terraform
└── terraform
    ├── bucket
    │   ├── main.tf
    │   └── output.tf
    │   └── variables.tf
    ├── lambda
    │   ├── main.tf
    │   └── output.tf
    │   └── variables.tf
    │   └── lambda_assume_role_policy.json
    │   └── lambda_policy.json
    │   └── lambda_function.py
    │   └── lambda_function.zip
    ├── sns
    │   ├── main.tf
    │   └── output.tf
    │   └── variables.tf
    ├── sqs
    │   ├── main.tf
    │   └── output.tf
    │   └── variables.tf
    ├── main.tf
    └── providers.tf
```

The code is organized into modules dedicated to different AWS services, each designed to serve a specific purpose and to facilitate the management and scalability of the cloud environment. In addition, using Terraform to orchestrate these modules ensures a consistent and controlled deployment of the infrastructure, aligned with best practices for configuration management and agile development in the cloud.

## 2️⃣ Lambda function ⚡️

```python
import json
import boto3

# Initialize AWS clients
s3_client = boto3.client('s3')
sns_client = boto3.client('sns')
sqs_client = boto3.client('sqs')

def lambda_handler(event, context):
    # Define SNS topic ARN and SQS queue URL
    sns_topic_arn = 'arn:aws:sns:us-east-2:087243254862:notifcationsystem-bucket'
    sqs_queue_url = 'https://sqs.us-east-2.amazonaws.com/087243254862/notification'

    # Process S3 event records
    for record in event['Records']:
        # Print the entire event for debugging purposes
        print(event)

        # Extract S3 bucket and object information from the event record
        s3_bucket = record['s3']['bucket']['name']
        s3_key = record['s3']['object']['key']

        # Example: Prepare metadata to send to SQS
        metadata = {
            'bucket': s3_bucket,
            'key': s3_key,
            'timestamp': record['eventTime']
        }

        # Send metadata to SQS queue
        sqs_response = sqs_client.send_message(
            QueueUrl=sqs_queue_url,
            MessageBody=json.dumps(metadata)
        )

        # Example: Prepare notification message to send to SNS
        notification_message = f"New file uploaded to S3 bucket '{s3_bucket}' with key '{s3_key}'"

        # Publish notification message to SNS topic
        sns_response = sns_client.publish(
            TopicArn=sns_topic_arn,
            Message=notification_message,
            Subject="File Upload Notification"
        )

    # Return a success response
    return {
        'statusCode': 200,
        'body': json.dumps('Processing complete')
    }
```

## 3️⃣ GitLab workflow 🚀

- Add pipeline

![Click pipeline](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gtohuu18oh7o1h1gtm66.png)

- 🔑 Add the AWS access key to GitLab secrets (create a dedicated user for this purpose)
- ▶️ Run

![running pipeline](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ainli13x389z5nz0bmv6.png)

---

[![LinkedIn](https://img.shields.io/badge/LinkedIn-%230077B5.svg?logo=linkedin&logoColor=white)](https://linkedin.com/in/lesterdprez)
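The handler's metadata extraction is plain Python, so it can be sanity-checked locally without any AWS credentials by feeding it a hand-built S3 event. The sketch below is illustrative (the helper name and sample values are mine, not from the project) and mirrors the dict the handler sends to SQS:

```python
import json

# A minimal, hand-built S3 "ObjectCreated" event, shaped like the one
# AWS delivers to the Lambda function.
sample_event = {
    "Records": [
        {
            "eventTime": "2024-06-17T20:13:55Z",
            "s3": {
                "bucket": {"name": "notifcationsystem-bucket"},
                "object": {"key": "reports/june.pdf"},
            },
        }
    ]
}

def extract_metadata(record):
    """Mirror the metadata dict the handler builds for SQS."""
    return {
        "bucket": record["s3"]["bucket"]["name"],
        "key": record["s3"]["object"]["key"],
        "timestamp": record["eventTime"],
    }

# Serialize each record's metadata exactly as send_message would receive it.
messages = [json.dumps(extract_metadata(r)) for r in sample_event["Records"]]
print(messages[0])
```

Pulling the parsing out of the handler like this makes the event-shape assumptions easy to unit-test before wiring up the S3 trigger.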
sharker3312
1,891,687
7 MySQL Database Management Labs to Boost Your Skills 💻
The article is about 7 comprehensive MySQL database management lab projects from LabEx. These hands-on tutorials cover a wide range of essential skills, including accessing databases, managing indexes, querying population data, handling user permissions, calculating average salaries, and using PreparedStatement for improved security and performance. Whether you're a beginner or an experienced database administrator, these labs provide a valuable opportunity to enhance your MySQL expertise through practical, step-by-step guidance. The article offers a detailed overview of each lab, complete with links to the corresponding resources, making it a must-read for anyone looking to take their MySQL skills to the next level.
27,755
2024-06-17T20:08:08
https://dev.to/labex/7-mysql-database-management-labs-to-boost-your-skills-4jkf
coding, programming, tutorial, mysql
Dive into the world of MySQL database management with these 7 comprehensive lab projects from LabEx! Whether you're a beginner or an experienced database administrator, these hands-on tutorials will equip you with essential skills to manage and optimize your MySQL databases. 🔍 ## Explore the Power of MySQL Database Management 🔍 ### 1. [Largest Population by Country (Lab)](https://labex.io/labs/301350) In this project, you'll learn how to access a MySQL database, import data, and query the top 10 countries by total population from the city table. ### 2. [Other Basic Operations](https://labex.io/labs/178587) This lab will teach you the crucial concepts of indexing, creating views, and performing backup and recovery operations in MySQL. ### 3. [Managing Database Indexes in MySQL (Lab)](https://labex.io/labs/301274) Discover how to effectively manage indexes in a MySQL database, including adding an index to the title field of the course table in the edusys database. ### 4. [Query Population of All Countries (Lab)](https://labex.io/labs/301388) Enhance your SQL querying skills by learning how to retrieve population data for all countries from a MySQL database. ### 5. [Manage MySQL User Permissions (Lab)](https://labex.io/labs/301430) Dive into the world of user management in MySQL, as you create a new local user named "Rong" and grant them access to the performance_schema database. ### 6. [Average Salaries Per Department (Lab)](https://labex.io/labs/301284) Explore SQL queries to calculate the average salary for each department in a database and display the results in descending order. ### 7. [Modifying the Teacher Table Using PreparedStatement (Lab)](https://labex.io/labs/301362) Learn how to use JDBC and PreparedStatement to delete data from a MySQL database table, focusing on the benefits of PreparedStatement over regular SQL statements for improved security and performance. Dive in and start your MySQL database management journey today! 
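Lab 7 highlights why PreparedStatement beats string-built SQL. The same parameterized-query idea can be sketched with Python's standard-library `sqlite3` (an illustration only — the lab itself uses JDBC and MySQL, and the table contents here are invented):

```python
import sqlite3

# In-memory database standing in for the 'teacher' table from Lab 7.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE teacher (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO teacher VALUES (?, ?)", [(1, "Kim"), (2, "Lee")])

# Parameterized delete: the driver binds user input as data, never as SQL,
# which is the injection-safety benefit PreparedStatement gives you in JDBC.
user_supplied_id = 1
conn.execute("DELETE FROM teacher WHERE id = ?", (user_supplied_id,))

remaining = conn.execute("SELECT name FROM teacher ORDER BY id").fetchall()
print(remaining)  # [('Lee',)]
```

The performance benefit is analogous too: a prepared statement is parsed once and re-executed with new bindings, rather than re-parsed per call.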
🚀 Don't forget to check out the lab links for hands-on practice and skill-building. Happy learning! 🎉 --- ## Want to learn more? - 🌳 Learn the latest [MySQL Skill Trees](https://labex.io/skilltrees/mysql) - 📖 Read More [MySQL Tutorials](https://labex.io/tutorials/category/mysql) - 🚀 Practice thousands of programming labs on [LabEx](https://labex.io) Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄
labby
1,891,679
Discover the truth about Dmitry Gunyashov and his involvement with Freewallet.
Our latest article delves deep into Gunyashov's background, exposing the intricate details of his...
0
2024-06-17T19:48:53
https://dev.to/feofhan/discover-the-truth-about-dmitry-gunyashov-and-his-involvement-with-freewallet-1lcm
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5cxtmc98z50g7b4oddo1.jpg) Our latest article delves deep into Gunyashov's background, exposing the intricate details of his operations and uncovering the hidden connections that have allowed him to continue his fraudulent activities for years. From his early beginnings to his current status, we leave no stone unturned in our quest to reveal the reality behind his public persona. In our investigation, we have compiled comprehensive information that not only highlights Gunyashov’s deceptive tactics but also sheds light on the broader network he operates within. This is a must-read for anyone who wants to understand the full scope of the Freewallet scam and its implications for the cryptocurrency community. Stay informed and protect yourself from potential scams. Visit our main page for the full story and explore other insightful articles that will keep you updated on the latest developments in this ongoing saga. Read the full article on Dmitry Gunyashov here and don’t forget to check out our homepage for more investigative reports and critical updates. Visit dmitrygunyashov.info to stay ahead and stay safe.
feofhan
1,891,684
interactive cylinder cat
I wondered what a cylindrical cat looks like, so I coded one - I achieved the 3D cylinder effect by...
0
2024-06-17T20:01:32
https://dev.to/mavalen/interactive-cylinder-cat-i43
codepen
I wondered what a cylindrical cat looks like, so I coded one - I achieved the 3D cylinder effect by combining different kinds of rotations. You can rotate the cat by clicking / touching on its face or tail, and roll the cat by dragging. The cylinder cat also walks around the screen. {% codepen https://codepen.io/Ma5a/pen/eYaGyry %}
mavalen
1,889,755
🧱 Immutability
Recently at work I had to face a codebase that almost completely lacked...
0
2024-06-17T20:00:00
https://oscarlp6.dev/blogs/immutability/
architecture, cleancode
Recently at work, I had to *face* a codebase that almost entirely lacked immutability. I have been trying to get closer to the concepts and practices that come from the functional world, especially *immutability*, so while making changes to the code I really missed this property, because I believe it has many advantages, which we will cover in this article.

## ❔ What is immutability?

**Immutability** refers to the property of an object whose state cannot be modified after its creation. Instead of changing the existing object, any operation that modifies the object's state returns a new object with the modified state. Here is an example in C#:

```csharp
public class Persona
{
  public string Nombre { get; }
  public int Edad { get; }

  public Persona(string nombre, int edad)
  {
    Nombre = nombre;
    Edad = edad;
  }

  public Persona ConEdadIncrementada()
  {
    return new Persona(this.Nombre, this.Edad + 1);
  }
}

// Usage
var persona = new Persona("Oscar", 30);
var personaMayor = persona.ConEdadIncrementada();

Console.WriteLine(persona.Edad); // 30
Console.WriteLine(personaMayor.Edad); // 31
```

## 🧟 Side effects

Having **side effects** makes it harder to know where things change. I have to go into each method to see what changes it made to the object that was passed in. When an object is **immutable**, any modification results in the creation of a new object instead of changing the existing one. This means I can trust that an object will not change once it has been created, reducing the uncertainty about the state of the data in my application.

## 🧩 Debugging

When **debugging**, you can tell that certain methods will change an object because they return a new one. Without this, I would have to go into the method to see how it changed it. **Immutability** simplifies the debugging process because every transformation of the data is explicit. Instead of tracking which methods may have altered a particular object, I know that transformations are performed in a controlled and predictable way, making it easier to locate and fix errors.

## ⏳ Concurrency

**Immutability** gives you **concurrency** practically for free. When data is immutable, there is no risk of multiple threads modifying the same object at the same time, which can cause errors that are hard to reproduce and fix. Immutability eliminates these problems at the root, allowing threads to work in parallel without the need for explicit synchronization.

## 🧪 Testing

It is easier to replicate **test** cases, since there are no side effects. In a testing environment, immutable objects guarantee that each test starts from a known state and will not be affected by previous tests. This makes it easier to create consistent, repeatable tests, increasing the reliability of test results and the quality of the code.

## 🤔 Conclusion

Although in many cases it makes sense to **mutate objects**, especially in contexts where performance is critical and creating new objects could be costly, **immutability** should be the default mode. The clarity, simplicity, and safety it provides outweigh the disadvantages in most applications. Adopting immutability not only makes code maintenance and debugging easier, but also improves concurrency and test reliability, making software development more predictable and less error-prone.
oscareduardolp6
1,889,760
🧱 Immutability
Recently at work, I had to deal with a codebase that lacked immutability almost entirely. I have been...
0
2024-06-17T20:00:00
https://oscarlp6.dev/en/blogs/immutability/
architecture, cleancode
Recently at work, I had to *deal* with a codebase that lacked immutability almost entirely. I have been trying to get closer to concepts and practices from the functional programming world, especially *immutability*, so when making changes to the code, I missed this feature a lot. I believe it has many advantages, which we will address in this article.

## ❔ What is Immutability?

**Immutability** refers to the property of an object whose state cannot be modified after its creation. Instead of changing the existing object, any operation that modifies the state of the object returns a new object with the modified state. Here is an example in C#:

```csharp
public class Person
{
  public string Name { get; }
  public int Age { get; }

  public Person(string name, int age)
  {
    Name = name;
    Age = age;
  }

  public Person WithIncreasedAge()
  {
    return new Person(this.Name, this.Age + 1);
  }
}

// Usage
var person = new Person("Oscar", 30);
var olderPerson = person.WithIncreasedAge();

Console.WriteLine(person.Age); // 30
Console.WriteLine(olderPerson.Age); // 31
```

## 🧟 Side Effects

Having **side effects** complicates knowing where things change. I have to look into each method to see what changes it made to the passed object. When an object is **immutable**, any modification results in the creation of a new object instead of changing the existing one. This means I can trust that an object will not change once it has been created, thereby reducing the uncertainty about the state of data in my application.

## 🧩 Debugging

When **debugging**, you can know that certain methods will change an object because they return a new one. Without this, I would have to look into the method to see how it changed. **Immutability** simplifies the debugging process because every transformation in the data is explicit. Instead of tracking which methods might have altered a particular object, I know that transformations are done in a controlled and predictable manner, making it easier to locate and fix errors.

## ⏳ Concurrency

**Immutability** allows **concurrency** practically for free. When data is immutable, there is no risk of multiple threads modifying the same object at the same time, which can cause hard-to-reproduce errors. Immutability eliminates these problems at their root, allowing threads to work in parallel without the need for explicit synchronization.

## 🧪 Testing

It's easier to replicate test cases since there are no side effects. In a testing environment, immutable objects ensure that each test starts with a known state and will not be affected by previous tests. This facilitates the creation of consistent and repeatable tests, increasing the reliability of test results and the quality of the code.

## 🤔 Conclusion

Although in many cases it makes sense to **mutate objects**, especially in contexts where performance is critical and creating new objects could be costly, **immutability** should be the default mode. The clarity, simplicity, and safety it provides outweigh the disadvantages in most applications. Adopting immutability not only makes code maintenance and debugging easier but also improves concurrency and test reliability, making software development more predictable and less error-prone.
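For contrast, here is a minimal sketch of the same immutable-by-construction pattern ported to Java. The port, including the class and method names, is my own illustration and is not part of the original article:

```java
// Hypothetical Java port of the article's C# Person example:
// every "modification" returns a new object instead of mutating.
public final class Person {
    private final String name;
    private final int age;

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() { return name; }
    public int getAge() { return age; }

    // Returns a fresh Person; the receiver is left untouched.
    public Person withIncreasedAge() {
        return new Person(name, age + 1);
    }

    public static void main(String[] args) {
        Person person = new Person("Oscar", 30);
        Person older = person.withIncreasedAge();
        System.out.println(person.getAge()); // 30: the original did not change
        System.out.println(older.getAge());  // 31
    }
}
```

Because both fields are `final` and there are no setters, instances of this class can be shared freely between threads without synchronization, which is exactly the concurrency benefit described in the article.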
oscareduardolp6
1,891,681
Do you have an open-source project?
First-Issues is an initiative to curate easy pickings from open-source projects, so developers who’ve...
0
2024-06-17T19:50:28
https://dev.to/aadesh_kulkarni_ff9fad10b/do-you-have-an-open-source-project--538n
opensource, webdev, beginners, javascript
First-Issues is an initiative to curate easy pickings from open-source projects, so developers who’ve never contributed to open source can get started quickly. Open-source maintainers are always looking to get more people involved, but new developers generally think it’s challenging to become a contributor. We believe getting developers to fix super-easy issues removes the barrier to future contributions. This is why https://firstissues.dev/ exists. While we build the MVP, we are on the lookout for open-source projects that are actively maintained and have at least 3 good first issues. If you are an open-source maintainer looking for contributors, please share your GitHub repository to get featured on the app.
aadesh_kulkarni_ff9fad10b
1,891,680
The Evolution of Online Betting Sites
The evolution of online betting sites has been nothing short of revolutionary, transforming the...
0
2024-06-17T19:49:01
https://dev.to/ali_nasir_62c60a7afa3833b/the-evolution-of-online-betting-sites-57bd
cbd, online, bet, games
The evolution of online betting sites has been nothing short of revolutionary, transforming the [betting site](https://shartbazi.com/) industry and offering unprecedented convenience and accessibility to bettors worldwide. First emerging in the late 1990s with basic offerings, these platforms have evolved significantly over the years, driven by advances in technology and changing consumer preferences.

Early Days and Growth

Online betting sites first gained popularity in the late 1990s and early 2000s, fueled by the increasing availability of the internet and improvements in digital infrastructure. These platforms initially offered basic sports betting options, allowing users to place wagers on popular sports events from their desktop computers. The convenience of betting from home and the ability to access a wide range of betting markets quickly attracted a growing number of users.

Expansion of Offerings

As online betting gained traction, betting sites began expanding their offerings beyond traditional sports betting. Today, users can bet on a diverse array of sports, ranging from football and basketball to niche sports and virtual competitions. Moreover, many sites have incorporated comprehensive online casinos featuring a variety of games such as slots, blackjack, roulette, and live dealer games. The introduction of virtual sports and eSports further expanded the betting options, catering to a broader audience with varied interests.

Technological Advancements

Technological advances have played a pivotal role in shaping the evolution of online betting sites. The adoption of mobile technology has been particularly transformative, enabling users to place bets conveniently from their smartphones and tablets. Leading betting sites have developed mobile-responsive platforms and dedicated apps that offer seamless navigation, live streaming of events, and real-time updates on betting markets. This accessibility has made betting more flexible and open than ever before.

Security and Regulation

As online betting sites have grown in popularity, so too has the importance of security and regulation. Reputable platforms prioritize user safety with advanced encryption technologies, secure payment methods, and stringent data-protection measures to safeguard users' personal and financial information. Regulation by respected gambling authorities ensures that betting sites operate ethically and adhere to strict standards of fairness and responsible gambling, providing users with a safe and transparent betting environment.

Future Trends and Innovations

Looking ahead, the future of online betting sites is poised for continued innovation and growth. Emerging trends such as the integration of artificial intelligence (AI) for personalized betting experiences, the rise of cryptocurrency payments for faster transactions and enhanced security, and the development of virtual reality (VR) and augmented reality (AR) technologies for immersive gaming are expected to shape the next generation of betting platforms. Additionally, regulatory frameworks are likely to evolve to address emerging challenges and ensure a safe and fair betting environment for users worldwide.

**Conclusion**

The evolution of online betting sites has changed the way people engage with gambling, offering unprecedented convenience, variety, and security. From humble beginnings to sophisticated platforms offering sports betting, online casinos, and virtual gaming options, these sites continue to improve and adapt to meet the evolving needs of users. By prioritizing technological advances, security measures, and regulatory compliance, online betting sites ensure a safe and enjoyable experience for millions of users globally. As technology continues to advance and consumer preferences evolve, the future promises even more exciting developments in the world of online betting.
ali_nasir_62c60a7afa3833b
1,891,677
<h2>Exploring Advanced Functions in Javascript</h2><p>
Javascript is such a versatile programming language that it can be used on both the front end...
0
2024-06-17T19:42:03
https://dev.to/mypsicobien/explorando-las-funciones-avanzadas-en-javascript-50k0
javascript
<strong>Javascript</strong> is such a versatile programming language that it can be used on both the front end and the back end. However, one of the fundamental pillars of this language is its functions. Functions in Javascript not only enable code reuse, but also play a crucial role in building more efficient and robust applications.</p><h3>Introduction to Functions</h3><p>Functions in Javascript are blocks of code designed to perform a specific task. They can be defined using the <em>function</em> keyword, followed by the function name, a set of parentheses, and finally a block of code enclosed in braces. A basic example:</p><pre><code>function saludar() {
  console.log('¡Hola Mundo!');
}
saludar(); // Prints “¡Hola Mundo!” to the console</code></pre><h3>Anonymous Functions and Arrow Functions</h3><p>In Javascript, you can also create anonymous functions and <em>arrow functions</em>. The latter offer a more compact syntax and are especially useful for one-line functions. For example:</p><pre><code>const saludar = () => console.log('¡Hola Mundo!');
saludar(); // Prints “¡Hola Mundo!” to the console</code></pre><h3>Functions as First-Class Citizens</h3><p>One of the most powerful features of Javascript is that functions are <em>first-class citizens</em>. This means they can be assigned to variables, passed as arguments to other functions, and returned from functions. Here is an example:</p><pre><code>function operar(operacion, a, b) {
  return operacion(a, b);
}
const suma = (x, y) => x + y;
console.log(operar(suma, 5, 3)); // Prints 8</code></pre><h3>Closures</h3><p>An advanced but extremely useful concept in Javascript is the <em>closure</em>. A closure is a function that has access to the variables of its outer scope even after the outer function has finished executing. This allows the creation of private functions and is useful for maintaining state. Here is a simple example of a closure:</p><pre><code>function crearContador() {
  let contador = 0;
  return function() {
    contador++;
    console.log(contador);
  }
}
const incrementar = crearContador();
incrementar(); // Prints 1
incrementar(); // Prints 2</code></pre><h3>Conclusion</h3><p>Functions in Javascript are an extremely versatile and powerful tool. From creating closures to defining anonymous functions and arrow functions, the possibilities are endless. If you want to dig deeper into Javascript functions and learn about their advanced features, I recommend visiting this <a href='https://synzen.org/15-funciones-en-javascript/'>link on #15 Funciones en Javascript</a>, where you will find detailed information and more specific examples.</p>
mypsicobien
1,891,676
Interfaces vs. Abstract Classes
A class can implement multiple interfaces, but it can only extend one superclass. An interface can be...
0
2024-06-17T19:41:25
https://dev.to/paulike/interfaces-vs-abstract-classes-1ipm
java, programming, learning, beginners
A class can implement multiple interfaces, but it can only extend one superclass. An interface can be used more or less the same way as an abstract class, but defining an interface is different from defining an abstract class. The table below summarizes the differences. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dpkjca6zoz6i2fqzzw2c.png) Java allows only _single inheritance_ for class extension but allows _multiple extensions_ for interfaces. For example,

```
public class NewClass extends BaseClass
    implements Interface1, ..., InterfaceN {
  ...
}
```

An interface can inherit other interfaces using the **extends** keyword. Such an interface is called a _subinterface_. For example, **NewInterface** in the following code is a subinterface of **Interface1**, . . . , and **InterfaceN**.

```
public interface NewInterface extends Interface1, ..., InterfaceN {
  // constants and abstract methods
}
```

A class implementing **NewInterface** must implement the abstract methods defined in **NewInterface**, **Interface1**, . . . , and **InterfaceN**. An interface can extend other interfaces but not classes. A class can extend its superclass and implement multiple interfaces. All classes share a single root, the **Object** class, but there is no single root for interfaces. Like a class, an interface also defines a type. A variable of an interface type can reference any instance of the class that implements the interface. If a class implements an interface, the interface is like a superclass for the class. You can use an interface as a data type and cast a variable of an interface type to its subclass, and vice versa. For example, suppose that **c** is an instance of **Class2** in the figure below. **c** is also an instance of **Object**, **Class1**, **Interface1**, **Interface1_1**, **Interface1_2**, **Interface2_1**, and **Interface2_2**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uq4o9l9s7czxfmau4n14.png) Class names are nouns. Interface names may be adjectives or nouns.

Abstract classes and interfaces can both be used to specify common behavior of objects. How do you decide whether to use an interface or a class? In general, a _strong is-a relationship_ that clearly describes a parent-child relationship should be modeled using classes. For example, a Gregorian calendar is a calendar, so the relationship between the class **java.util.GregorianCalendar** and **java.util.Calendar** is modeled using class inheritance. A _weak is-a relationship_, also known as an is-kind-of relationship, indicates that an object possesses a certain property. A weak is-a relationship can be modeled using interfaces. For example, all strings are comparable, so the **String** class implements the **Comparable** interface. In general, interfaces are preferred over abstract classes because an interface can define a common supertype for unrelated classes. Interfaces are more flexible than classes. Consider the **Animal** class. Suppose the **howToEat** method is defined in the **Animal** class, as follows:

```
abstract class Animal {
  public abstract String howToEat();
}
```

Two subclasses of **Animal** are defined as follows:

```
class Chicken extends Animal {
  @Override
  public String howToEat() {
    return "Fry it";
  }
}

class Duck extends Animal {
  @Override
  public String howToEat() {
    return "Roast it";
  }
}
```

Given this inheritance hierarchy, polymorphism enables you to hold a reference to a **Chicken** object or a **Duck** object in a variable of type **Animal**, as in the following code:

```
public static void main(String[] args) {
  Animal animal = new Chicken();
  eat(animal);
  animal = new Duck();
  eat(animal);
}

public static void eat(Animal animal) {
  animal.howToEat();
}
```

The JVM dynamically decides which **howToEat** method to invoke based on the actual object that invokes the method. You can define a subclass of **Animal**. However, there is a restriction: the subclass must be for another animal (e.g., **Turkey**). Interfaces don’t have this restriction. Interfaces give you more flexibility than classes, because you don’t have to make everything fit into one type of class. You may define the **howToEat()** method in an interface and let it serve as a common supertype for other classes. For example,

```
public static void main(String[] args) {
  Edible stuff = new Chicken();
  eat(stuff);
  stuff = new Duck();
  eat(stuff);
  stuff = new Broccoli();
  eat(stuff);
}

public static void eat(Edible stuff) {
  stuff.howToEat();
}

interface Edible {
  public String howToEat();
}

class Chicken implements Edible {
  @Override
  public String howToEat() {
    return "Fry it";
  }
}

class Duck implements Edible {
  @Override
  public String howToEat() {
    return "Roast it";
  }
}

class Broccoli implements Edible {
  @Override
  public String howToEat() {
    return "Stir-fry it";
  }
}
```

To define a class that represents edible objects, simply let the class implement the **Edible** interface. The class is now a subtype of the **Edible** type, and any **Edible** object can be passed to invoke the **howToEat** method.
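For quick experimentation, the **Edible** example can be assembled into a single runnable file. The `EdibleDemo` wrapper class is my addition, as is having `eat` return the `String` (the article's `eat` prints via `howToEat` and returns nothing), so the result is easy to check:

```java
// Self-contained sketch of the article's Edible example:
// unrelated classes (birds and a vegetable) share one supertype.
interface Edible {
    String howToEat();
}

class Chicken implements Edible {
    @Override
    public String howToEat() { return "Fry it"; }
}

class Duck implements Edible {
    @Override
    public String howToEat() { return "Roast it"; }
}

class Broccoli implements Edible {
    @Override
    public String howToEat() { return "Stir-fry it"; }
}

public class EdibleDemo {
    // Returns the string instead of printing, so callers can verify it.
    public static String eat(Edible stuff) {
        return stuff.howToEat();
    }

    public static void main(String[] args) {
        Edible[] menu = { new Chicken(), new Duck(), new Broccoli() };
        for (Edible stuff : menu) {
            // Dynamic dispatch selects each class's own howToEat implementation.
            System.out.println(eat(stuff));
        }
    }
}
```

Note that `Broccoli` could not extend `Animal`, which is exactly why the interface version is the more flexible design.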
paulike
1,891,674
italian bedroom set
For centuries, Italy has been synonymous with exquisite craftsmanship, timeless design, and a flair...
0
2024-06-17T19:37:59
https://dev.to/jasontodd220/italian-bedroom-set-3moi
productivity
For centuries, Italy has been synonymous with exquisite craftsmanship, timeless design, and a flair for the luxurious. This heritage extends to furniture, and Italian bedroom sets stand out as a pinnacle of elegance and comfort. Owning an [Italian bedroom set](https://italianlivingstyle.co.uk/product-category/italian-bedroom-furniture-italian-furniture/) isn't just about acquiring furniture; it's an investment in creating a sanctuary of sophistication and a haven for restful nights. Unveiling the Essence: Quality and Design Italian bedroom sets are renowned for two key aspects: impeccable quality and captivating design. Craftsmanship and Materials: Italian furniture makers are known for their meticulous attention to detail and commitment to using only the finest materials. Solid wood like walnut, cherry, and mahogany are popular choices, ensuring the furniture's longevity and resilience. Top-grain leather, plush fabrics, and exquisite hardware further elevate the aesthetic and provide a touch of indulgence. Design Finesse: Italian design strikes a perfect balance between functionality and artistic flair. From the clean lines of contemporary styles to the ornate details of classic pieces, Italian bedroom sets boast a timeless elegance that transcends trends. A Symphony of Style: Exploring Different Options The beauty of Italian bedroom sets lies in their variety. Whether your taste leans towards modern minimalism or classic grandeur, there's a perfect set waiting to be discovered. Modern Italian Bedroom Sets: Modern Italian design is characterized by clean lines, sleek finishes, and a focus on functionality. Think platform beds with integrated storage, nightstands with hidden drawers, and wardrobes with innovative storage solutions. High-gloss lacquers and metallic accents add a touch of contemporary sophistication. Classic Italian Bedroom Sets: For those who appreciate timeless elegance, classic Italian bedroom sets offer a touch of grandeur. 
Ornate headboards, hand-carved details, and rich finishes like mahogany or cherry create a luxurious and sophisticated atmosphere. Material Marvels: Italian furniture makers are masters of working with various materials. From the warmth of solid wood to the luxurious feel of leather, each material brings its unique character to the bedroom set. Solid Wood: The timeless beauty and durability of solid wood are hallmarks of Italian craftsmanship. Walnut, cherry, and mahogany are popular choices, offering a rich and sophisticated look. Leather: For an extra touch of luxury, consider an Italian bedroom set with leather accents. The soft feel and rich patina of leather elevate the aesthetic and create a warm and inviting atmosphere. Lacquer: High-gloss lacquer finishes add a touch of contemporary flair to modern Italian bedroom sets. Available in a variety of colors, lacquer finishes create a sleek and sophisticated look. Creating Your Sanctuary: Choosing the Perfect Set When selecting an Italian bedroom set, consider these factors to ensure it perfectly complements your style and needs. Size and Layout: Measure your bedroom carefully and choose a set that fits comfortably within the space. Don't forget to account for traffic flow and ensure drawers and wardrobes open freely. Style Preference: Are you drawn to the clean lines of modern design or the ornate details of classic pieces? Choose a set that reflects your personal taste and complements your existing décor. Functionality: Consider your storage needs. Do you require spacious wardrobes with integrated drawers? Opt for a set that offers ample storage solutions to keep your bedroom clutter-free. Material Preference: Solid wood offers timeless elegance, while leather adds a touch of luxury. Choose a material that complements your design style and feels comfortable to the touch. Beyond the Bed: Completing the Italian Dream An Italian bedroom set goes beyond just the bed itself. 
Here are some additional pieces to consider to create a cohesive and luxurious sleeping haven. Nightstands: Nightstands with drawers and shelves provide convenient storage for bedside essentials. Opt for nightstands that complement the style and height of your bed frame. Dressers and Chests: For additional storage, consider adding a dresser or chest. Italian dressers often feature spacious drawers and beautifully crafted details. Wardrobes: Italian wardrobes are known for their functionality and elegance. Choose a wardrobe with ample hanging space, drawers, and shelves to keep your clothes organized and wrinkle-free. Mirrors: Adding a mirror to your bedroom set can create a sense of spaciousness and light. Consider a full-length mirror or a vanity mirror with integrated storage. The Investment in Comfort and Luxury Italian bedroom sets are undoubtedly an investment. However, the quality, style, and timeless elegance they offer make them a worthwhile purchase. Here are some benefits to consider: Durability: Made with high-quality materials and meticulous craftsmanship, Italian bedroom sets are built to last for generations.
jasontodd220
1,891,671
The Cloneable Interface
The Cloneable interface specifies that an object can be cloned. Often it is desirable to create a...
0
2024-06-17T19:26:56
https://dev.to/paulike/the-cloneable-interface-icp
java, programming, learning, beginners
The **Cloneable** interface specifies that an object can be cloned. Often it is desirable to create a copy of an object. To do this, you need to use the **clone** method and understand the **Cloneable** interface. An interface contains constants and abstract methods, but the **Cloneable** interface is a special case. The **Cloneable** interface in the **java.lang** package is defined as follows:

```
package java.lang;

public interface Cloneable {
}
```

This interface is empty. An interface with an empty body is referred to as a _marker interface_. A marker interface does not contain constants or methods. It is used to denote that a class possesses certain desirable properties. A class that implements the **Cloneable** interface is marked cloneable, and its objects can be cloned using the **clone()** method defined in the **Object** class. Many classes in the Java library (e.g., **Date**, **Calendar**, and **ArrayList**) implement **Cloneable**. Thus, the instances of these classes can be cloned. For example, the following code

```
1 Calendar calendar = new GregorianCalendar(2013, 2, 1);
2 Calendar calendar1 = calendar;
3 Calendar calendar2 = (Calendar)calendar.clone();
4 System.out.println("calendar == calendar1 is " +
5   (calendar == calendar1));
6 System.out.println("calendar == calendar2 is " +
7   (calendar == calendar2));
8 System.out.println("calendar.equals(calendar2) is " +
9   calendar.equals(calendar2));
```

displays

```
calendar == calendar1 is true
calendar == calendar2 is false
calendar.equals(calendar2) is true
```

In the preceding code, line 2 copies the reference of **calendar** to **calendar1**, so **calendar** and **calendar1** point to the same **Calendar** object. Line 3 creates a new object that is the clone of **calendar** and assigns the new object’s reference to **calendar2**. **calendar2** and **calendar** are different objects with the same contents. The following code

```
1 ArrayList<Double> list1 = new ArrayList<>();
2 list1.add(1.5);
3 list1.add(2.5);
4 list1.add(3.5);
5 ArrayList<Double> list2 = (ArrayList<Double>)list1.clone();
6 ArrayList<Double> list3 = list1;
7 list2.add(4.5);
8 list3.remove(1.5);
9 System.out.println("list1 is " + list1);
10 System.out.println("list2 is " + list2);
11 System.out.println("list3 is " + list3);
```

displays

```
list1 is [2.5, 3.5]
list2 is [1.5, 2.5, 3.5, 4.5]
list3 is [2.5, 3.5]
```

In the preceding code, line 5 creates a new object that is the clone of **list1** and assigns the new object’s reference to **list2**. **list2** and **list1** are different objects with the same contents. Line 6 copies the reference of **list1** to **list3**, so **list1** and **list3** point to the same **ArrayList** object. Line 7 adds **4.5** into **list2**. Line 8 removes **1.5** from **list3**. Since **list1** and **list3** point to the same **ArrayList**, lines 9 and 11 display the same content. You can clone an array using the **clone** method. For example, the following code

```
1 int[] list1 = {1, 2};
2 int[] list2 = list1.clone();
3 list1[0] = 7;
4 list2[1] = 8;
5 System.out.println("list1 is " + list1[0] + ", " + list1[1]);
6 System.out.println("list2 is " + list2[0] + ", " + list2[1]);
```

displays

```
list1 is 7, 2
list2 is 1, 8
```

To define a custom class that implements the **Cloneable** interface, the class must override the **clone()** method in the **Object** class. The program below defines a class named **House** that implements **Cloneable** and **Comparable**.
```
package demo;

public class House implements Cloneable, Comparable<House> {
  private int id;
  private double area;
  private java.util.Date whenBuilt;

  public House(int id, double area) {
    this.id = id;
    this.area = area;
    whenBuilt = new java.util.Date();
  }

  public int getId() {
    return id;
  }

  public double getArea() {
    return area;
  }

  public java.util.Date getWhenBuilt() {
    return whenBuilt;
  }

  @Override /** Override the protected clone method defined in the
    Object class, and strengthen its accessibility */
  public Object clone() throws CloneNotSupportedException {
    return super.clone();
  }

  @Override // Implement the compareTo method defined in Comparable
  public int compareTo(House o) {
    if (area > o.area)
      return 1;
    else if (area < o.area)
      return -1;
    else
      return 0;
  }
}
```

The **House** class implements the **clone** method (lines 26–28) defined in the **Object** class. The header is:

`protected native Object clone() throws CloneNotSupportedException;`

The keyword **native** indicates that this method is not written in Java but is implemented in the JVM for the native platform. The keyword **protected** restricts the method to be accessed in the same package or in a subclass. For this reason, the **House** class must override the method and change the visibility modifier to **public** so that the method can be used in any package. Since the **clone** method implemented for the native platform in the **Object** class performs the task of cloning objects, the **clone** method in the **House** class simply invokes **super.clone()**. The **clone** method defined in the **Object** class may throw **CloneNotSupportedException**. The **House** class implements the **compareTo** method (lines 32–39) defined in the **Comparable** interface. The method compares the areas of two houses.
You can now create an object of the **House** class and create an identical copy from it, as follows:

```
House house1 = new House(1, 1750.50);
House house2 = (House)house1.clone();
```

**house1** and **house2** are two different objects with identical contents. The **clone** method in the **Object** class copies each field from the original object to the target object. If the field is of a primitive type, its value is copied. For example, the value of **area** (**double** type) is copied from **house1** to **house2**. If the field is of an object, the reference of the field is copied. For example, the field **whenBuilt** is of the **Date** class, so its reference is copied into **house2**, as shown in the figure below. Therefore, **house1.whenBuilt == house2.whenBuilt** is true, although **house1 == house2** is false. This is referred to as a _shallow copy_ rather than a _deep copy_, meaning that if the field is of an object type, the object’s reference is copied rather than its contents.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imf9ehso67hfvpvrbyqm.png)

(a) The default **clone** method performs a shallow copy. (b) The custom **clone** method performs a deep copy.
To perform a deep copy for a **House** object, replace the **clone()** method in lines 27–29 with the following code:

```
public Object clone() throws CloneNotSupportedException {
  // Perform a shallow copy
  House houseClone = (House)super.clone();
  // Deep copy on whenBuilt
  houseClone.whenBuilt = (java.util.Date)(whenBuilt.clone());
  return houseClone;
}
```

or

```
public Object clone() {
  try {
    // Perform a shallow copy
    House houseClone = (House)super.clone();
    // Deep copy on whenBuilt
    houseClone.whenBuilt = (java.util.Date)(whenBuilt.clone());
    return houseClone;
  } catch (CloneNotSupportedException ex) {
    return null;
  }
}
```

Now if you clone a **House** object in the following code:

```
House house1 = new House(1, 1750.50);
House house2 = (House)house1.clone();
```

**house1.whenBuilt** == **house2.whenBuilt** will be **false**. **house1** and **house2** contain two different **Date** objects, as shown in part (b) of the figure above.
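The shallow-versus-deep behavior can be verified with a small self-contained program. The sketch below inlines a simplified **House** (only the fields needed for the demonstration, not the full class from the listing) together with the deep-copy **clone** from the second variant above:

```java
import java.util.Date;

public class CloneDemo {
    static class House implements Cloneable {
        int id;
        double area;
        Date whenBuilt = new Date();

        House(int id, double area) {
            this.id = id;
            this.area = area;
        }

        @Override
        public Object clone() {
            try {
                // Perform a shallow copy first
                House houseClone = (House) super.clone();
                // Then deep-copy the mutable Date field
                houseClone.whenBuilt = (Date) whenBuilt.clone();
                return houseClone;
            } catch (CloneNotSupportedException ex) {
                return null;
            }
        }
    }

    public static void main(String[] args) {
        House house1 = new House(1, 1750.50);
        House house2 = (House) house1.clone();

        System.out.println(house1 == house2);                          // false: distinct objects
        System.out.println(house1.whenBuilt == house2.whenBuilt);      // false: Date deep-copied
        System.out.println(house1.whenBuilt.equals(house2.whenBuilt)); // true: identical contents
    }
}
```

With the shallow `super.clone()` alone, the second line would print `true` instead, since both houses would share one **Date** object.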
paulike
1,891,654
Let's Automate Saving the User on Any Entity [Symfony]
Suppose we have entities with the author attribute; we will see how to factor the...
0
2024-06-17T19:19:51
https://dev.to/aratinau/automatisons-lenregistrement-du-user-sur-nimporte-quelle-entite-4f68
symfony, webdev
Suppose we have entities with an `author` attribute. Let's see how to factor the logic into a single line so that the user is saved automatically. Take this entity, which contains the `author` attribute, as an example.

```php
<?php

namespace App\Entity;

use App\Repository\CategoryRepository;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: CategoryRepository::class)]
class Category
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column]
    private ?int $id = null;

    #[ORM\Column(length: 255)]
    private ?string $name = null;

    #[ORM\ManyToOne]
    #[ORM\JoinColumn(nullable: false)]
    private ?User $author = null;

    public function getId(): ?int
    {
        return $this->id;
    }

    public function getName(): ?string
    {
        return $this->name;
    }

    public function setName(string $name): static
    {
        $this->name = $name;

        return $this;
    }

    public function getAuthor(): ?User
    {
        return $this->author;
    }

    public function setAuthor(?User $author): static
    {
        $this->author = $author;

        return $this;
    }
}
```

Our project also has a dozen other entities that contain the same `author` attribute. A first solution would be to write a controller or a Doctrine listener for each entity.
Like this one, for example:

```php
<?php

namespace App\DoctrineListener;

use App\Entity\Category;
use Doctrine\Bundle\DoctrineBundle\Attribute\AsEntityListener;
use Doctrine\ORM\Events;
use Doctrine\Persistence\Event\LifecycleEventArgs;
use Symfony\Bundle\SecurityBundle\Security;

#[AsEntityListener(event: Events::prePersist, entity: Category::class)]
class CategoryDoctrineListener
{
    public function __construct(
        private Security $security,
    ) {
    }

    public function prePersist(Category $category, LifecycleEventArgs $event)
    {
        $user = $this->security->getUser();

        $category->setAuthor($user);
    }
}
```

Instead of that, we will keep the business logic of `CategoryDoctrineListener` and adapt it to every entity that contains the `author` attribute.

To do this, we will create an interface:

```php
<?php

namespace App\Entity;

interface AuthorInterface
{
    public function setAuthor(?User $author): static;
}
```

We then implement it on every entity that has the `setAuthor` method.

Example with `Category`:

```php
<?php

namespace App\Entity;

use ApiPlatform\Metadata\ApiResource;
use App\Repository\CategoryRepository;
use Doctrine\ORM\Mapping as ORM;
use Symfony\Component\Serializer\Attribute\Groups;

#[ORM\Entity(repositoryClass: CategoryRepository::class)]
#[ApiResource(paginationEnabled: false)]
class Category implements AuthorInterface
// ...
```

Now we can create a Doctrine listener that checks whether the entity implements `AuthorInterface`, using reflection, as follows:

```php
<?php

namespace App\DoctrineListener;

use App\Entity\AuthorInterface;
use Doctrine\Bundle\DoctrineBundle\Attribute\AsDoctrineListener;
use Doctrine\ORM\Event\PrePersistEventArgs;
use Symfony\Bundle\SecurityBundle\Security;

#[AsDoctrineListener('prePersist')]
class AttachAuthorDoctrineListener
{
    public function __construct(
        private Security $security,
    ) {
    }

    public function prePersist(PrePersistEventArgs $event): void
    {
        $entity = $event->getObject();

        $reflectionClass = new \ReflectionClass($entity);
        if (!$reflectionClass->implementsInterface(AuthorInterface::class)) {
            return;
        }

        $user = $this->security->getUser();

        $entity->setAuthor($user);
    }
}
```

And that's it! From now on, you only need to add `implements AuthorInterface` to the entities that need their author recorded 🚀

Also read "how to filter GET results to only the logged-in user": https://dev.to/aratinau/api-platform-filtrer-les-resultats-uniquement-sur-lutilisateur-connecte-1fp6
aratinau
339,836
Implementation of Redis
Github Task Link (https://github.com/samcaspus/red_implementation) Task Report (https://docs.google.c...
0
2020-05-20T11:13:32
https://dev.to/samcaspus/implementation-of-redis-2o76
python, redis, development
Github Task Link (https://github.com/samcaspus/red_implementation)
Task Report (https://docs.google.com/document/d/1elozY9I13iF31C5pHWN0GX16acSuzT_JvxGYcwb6hjY/edit#heading=h.9nvcibv3gama)

## Introduction

Data storage and retrieval are at the heart of any data-related work, so a working implementation of Redis for storing and retrieving key-value pairs is the requirement here. The essence of a key-value store is the ability to store some data, called a value, inside a key; the value can be retrieved later only if we know the specific key it was stored in. Further, when the application ends, the data isn't lost. The application also has a delay system in order to avoid race conditions.

## Aim

The aim of the assignment is to build a working implementation of Redis commands such as:

- GET (https://redis.io/commands/get)
- SET (https://redis.io/commands/set)
- EXPIRE (https://redis.io/commands/expire)
- ZADD (https://redis.io/commands/zadd)
- ZRANK (https://redis.io/commands/zrank)
- ZRANGE (https://redis.io/commands/zrange)

and more if possible.

## Language: Python

The choice of Python is based on the following criteria:

- A working implementation of Redis for storage and retrieval of data from specific keys merges seamlessly with Python.
- Quick implementation.
- Confidence in programming skill using Python.

## Architecture

The program is primarily divided into driver, instructions, and storage verticals, such that implementations of similar operations are clubbed together, which facilitates support for multiple users. The program consists of the following:

- **Redis.py** is the main driver program, which works as a setup for the Redis implementation.
- **FileHandeling.py** is a file in the modules section which handles all reading and writing of data into a JSON file.
- **Instructions.py** contains the logic implementations of each Redis function, such as get, set, zadd, zrange, etc.
- **ExecutionHandler.py** invokes each function call separately using the eval method.
- **Demo.json** — storage of the data is done in the demo.json file.

## Program / Instructions

**Requisites.** Tools/components required are as follows:

- Python 3 or greater
- The json package of Python installed

**Execution.** In order to execute the program, open a terminal and navigate to the clone location:

```
> cd Redis
> python redis.py
```

The program is ready to use.

## Implementation

### Redis.py (driver page)

Redis.py is the driver program that runs every command given by a user, one by one, by sending it to the pages that handle those instructions. The driver page imports json, which is used to store the entire data in a file called demo.json. The page consists of the `run` function, which is invoked by main. `run` creates two classes from the modules folder: Instructions and FileHandeling.

The `read` function reads the JSON data present in the demo.json file:

`self.redisData = self.FileHandeling.read_json_data()`

A never-ending while loop is run, which terminates only on a specific condition from the user, i.e. `exit`. The following code gets invoked, leading to termination of the program:

`if not self.Instructions.behave(self.instructionResponse): break`

`self.instructionResponse` is the keyword which helps the program break (the "error word"). The input instruction is:

`usersInstruction = input(">>> ").split()`

The data is read before every command so that any changes that need to be reflected are known before command execution:

`self.redisData = self.FileHandeling.temp_read()`

The modified data is saved back into JSON format (demo.json) with:

`self.FileHandeling.temp_save(self.redisData)`

Once the program terminates, a final confirmation save of the result is done by:

`self.FileHandeling.write_json_data(self.redisData)`

To run the program, the following code is executed.
An object is created for Redis, and the `run` method starts the application:

```
redisObject = Redis()
redisObject.run()
```

### FileHandeling.py (I/O storage page)

This file focuses on reading/writing data in the demo.json file, which acts as storage for the key-value pairs. It has four methods:

- `read_json_data` uses Python's built-in file handling to read all the content inside demo.json and gives the program its starting state.
- `write_json_data` is a tear-down method which writes all the data back to demo.json in JSON format so that all key-value pairs are saved.
- `temp_save` is used inside the while loop so that every time the user executes a command, the current instance of the data is saved for the next instance.
- `temp_read` is called right after user input so that the program gets a point-in-time snapshot and can decide how to process the request.

### Instructions.py (commands for Redis)

Commands implemented in this application are as follows: append, del, exists, exit, expire, get, getrange, set, zadd, zrange (and zrange withscores), zrank, ttl, lpush, rpush, lindex, linsert, rinsert, llen, rpop, lpop, lrange.

The various commands are explained below:

- **APPEND** allows the user to add more content to the string already present as a value.
- **DEL** deletes any key-value pair. For example, having created the key-value pair `sandeep`, `del sandeep` removes it; the data is deleted permanently from the JSON file.
- **EXIST** is similar to declaring a key with no value assigned to it, e.g. `exist sandeep`.
- **EXIT** terminates the program successfully; during this process, a final invocation ensures all data is saved properly.
- **EXPIRE** gives a TTL (time to live), in seconds, to a particular key-value pair, after which it is gone, e.g. `expire sandeep 10`. Once the time is over, the content is deleted from the JSON file and is no longer accessible.
- **GET** retrieves data from the document; it effectively returns the value stored in the key, e.g. `get sandeep`.
- **GETRANGE** takes a start and an end and returns all values in between at that instance.
- **ZADD, ZRANGE, ZRANGE WITHSCORES, ZRANK** handle pairs: zadd adds pairs together as values, zrange displays the keys within a range, zrange withscores displays both keys and values within a range, and zrank tells the index of an element if it exists.

### Demo.json (storage file)

This is the file where all the content is stored upon saving. A snapshot of the file after having all data saved is shown below.

## Ways to Improve

- The program could be synced via an API for any front-end application (website/app), which would also improve response time.
- Instead of many if-else statements, a dictionary (map) implementation would improve the command lookup from O(n) to O(1).
- The data could be stored in a database such as SQLite or MySQL.

## References

- Redis (https://redis.io/commands/)
- Date time module (https://docs.python.org/3/library/datetime.html)
- Parser module (https://dateutil.readthedocs.io/en/stable/parser.html)
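As an illustration of the dictionary-dispatch improvement suggested above (class and method names here are illustrative sketches, not code from the repository), a minimal in-memory core for SET/GET/EXPIRE could look like this:

```python
import time


class MiniRedis:
    """A tiny in-memory sketch of the SET/GET/EXPIRE commands."""

    def __init__(self):
        self.data = {}    # key -> value
        self.expiry = {}  # key -> absolute expiry timestamp

    def _alive(self, key):
        # Lazily drop the key if its TTL has passed.
        if key in self.expiry and time.time() >= self.expiry[key]:
            self.data.pop(key, None)
            self.expiry.pop(key, None)
        return key in self.data

    def set(self, key, value):
        self.data[key] = value
        self.expiry.pop(key, None)  # a fresh SET clears any old TTL
        return "OK"

    def get(self, key):
        return self.data[key] if self._alive(key) else None

    def expire(self, key, seconds):
        if not self._alive(key):
            return 0
        self.expiry[key] = time.time() + float(seconds)
        return 1

    def execute(self, line):
        # O(1) dictionary dispatch instead of an if-else chain.
        cmd, *args = line.split()
        table = {"set": self.set, "get": self.get, "expire": self.expire}
        handler = table.get(cmd.lower())
        return handler(*args) if handler else f"unknown command: {cmd}"


r = MiniRedis()
print(r.execute("set name sandeep"))  # OK
print(r.execute("get name"))          # sandeep
```

Persisting `self.data` to demo.json on exit, as the report describes, would complete the round trip.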
samcaspus
1,891,660
In-house vs. Outsourcing: Startup Development Guide
Recruiting the right development team for your startup is never an easy process and it is a very...
0
2024-06-17T19:04:59
https://dev.to/cyaniclab/in-house-vs-outsourcing-right-choice-for-startup-development-48fc
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kihtync6rz484ydw488n.png) Recruiting the [right development team for your startup](https://cyaniclab.com/blogs/it-staff-augmentation-process-a-guide-for-startups) is never easy, and it is a very sensitive step. Consider one of the most important decisions you are asked to make: when choosing your strategy, should you invest in developing your internal team, or contract with outside professionals? This decision is not simply about filling seats; it shapes what your enterprise will become. It is a trade-off between the teamwork advantages only an in-house team can offer and the cost savings of outsourcing. But worry not! In this article you will find the pros and cons of both situations. Each approach offers its own advantages and difficulties to monitor, and knowing them will let you make the choice that best fits your startup's vision and mission. ## In-House vs. Outsourcing: Making the Right Choice for Startup Development Choosing between in-house development and outsourcing is a pivotal decision for startup owners in their early stages. Each option presents unique advantages and challenges that must align with the startup's goals and operational strategy. ## Overview of In-House vs. Outsourcing In-house development involves forming an internal team of developers and technical staff dedicated to the startup's projects. Conversely, outsourcing entails partnering with an [offshore Software Development Company](https://cyaniclab.com/software-development) or remote teams to handle technical aspects. ## Challenges in In-House Development Establishing an in-house team can be daunting for startups due to high costs and time constraints.
Recruiting and retaining skilled developers requires substantial investment, often exceeding initial budgetary limits. ## Challenges in Outsourcing Outsourcing introduces its own hurdles, including communication barriers, time-zone discrepancies, and quality-control issues with remote teams. Selecting a compatible outsourcing partner that aligns with the startup's vision can prove challenging. ## Benefits of Outsourcing for Startups Despite these challenges, outsourcing offers significant advantages. Cost efficiency is paramount, sparing startups the infrastructure investments and overhead expenses associated with internal teams. Access to a global talent pool and specialized skills enhances development capabilities. ## Why Could Outsourcing Be Beneficial for Startups? Outsourcing empowers startups with flexibility and scalability, adapting development resources to project demands without permanent staffing adjustments. Accelerated time-to-market ensures a competitive edge, crucial for startup growth and market presence. ## Why Choose Cyaniclab? Cyaniclab specializes in [custom software development for startups](https://cyaniclab.com/startup-development), delivering high-quality solutions globally. Our seasoned developers and project managers streamline communication and collaboration, mitigating common outsourcing challenges. By partnering with Cyaniclab, startups can focus on their core business while leveraging our expertise in remote software development. ## Conclusion The decision between in-house development and outsourcing profoundly impacts startup growth. Evaluating specific needs, budget constraints, and long-term objectives is essential. Whether opting for in-house teams or [IT Project outsourcing](https://cyaniclab.com/it-team-augmentation), the objective remains consistent: fostering sustainable growth amidst competitive business landscapes.
cyaniclab
1,891,657
The Comparable Interface
The Comparable interface defines the compareTo method for comparing objects. Suppose you want to...
0
2024-06-17T19:00:24
https://dev.to/paulike/the-comparable-interface-1ncj
java, programming, learning, beginners
The **Comparable** interface defines the **compareTo** method for comparing objects. Suppose you want to design a generic method to find the larger of two objects of the same type, such as two students, two dates, two circles, two rectangles, or two squares. In order to accomplish this, the two objects must be comparable, so being comparable must be a common behavior of the objects. Java provides the **Comparable** interface for this purpose. The interface is defined as follows:

```
// Interface for comparing objects, defined in java.lang
package java.lang;

public interface Comparable<E> {
  public int compareTo(E o);
}
```

The **compareTo** method determines the order of this object with the specified object **o** and returns a negative integer, zero, or a positive integer if this object is less than, equal to, or greater than **o**. The **Comparable** interface is a generic interface. The generic type **E** is replaced by a concrete type when implementing this interface. Many classes in the Java library implement **Comparable** to define a natural order for objects. The classes **Byte**, **Short**, **Integer**, **Long**, **Float**, **Double**, **Character**, **BigInteger**, **BigDecimal**, **Calendar**, **String**, and **Date** all implement the **Comparable** interface. For example, the **Integer**, **BigInteger**, **String**, and **Date** classes are defined as follows in the Java API:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ngh89llze8d7enqtb599.png)

Thus, numbers are comparable, strings are comparable, and so are dates. You can use the **compareTo** method to compare two numbers, two strings, and two dates.
For example, the following code

```
1  System.out.println(new Integer(3).compareTo(new Integer(5)));
2  System.out.println("ABC".compareTo("ABE"));
3  java.util.Date date1 = new java.util.Date(2013, 1, 1);
4  java.util.Date date2 = new java.util.Date(2012, 1, 1);
5  System.out.println(date1.compareTo(date2));
```

displays

```
-1
-2
1
```

Line 1 displays a negative value since **3** is less than **5**. Line 2 displays a negative value since **ABC** is less than **ABE**. Line 5 displays a positive value since **date1** is greater than **date2**. Let **n** be an **Integer** object, **s** be a **String** object, and **d** be a **Date** object. All the following expressions are **true**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7k02vh2sfxv0xmlcvpi5.png)

Since all **Comparable** objects have the **compareTo** method, the **java.util.Arrays.sort(Object[])** method in the Java API uses the **compareTo** method to compare and sort the objects in an array, provided that the objects are instances of the **Comparable** interface. The program below gives an example of sorting an array of strings and an array of **BigInteger** objects.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6pmnhepx2qxp46bq6uq3.png)

The program creates an array of strings (line 7) and invokes the **sort** method to sort the strings (line 8). The program creates an array of **BigInteger** objects (line 13) and invokes the **sort** method to sort the **BigInteger** objects (line 14). You cannot use the **sort** method to sort an array of **Rectangle** objects, because **Rectangle** does not implement **Comparable**. However, you can define a new rectangle class that implements **Comparable**. The instances of this new class are comparable. Let this new class be named **ComparableRectangle**, as shown in the program below.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bvgna4g717x74o3zluqx.png)

**ComparableRectangle** extends **Rectangle** and implements **Comparable**, as shown in the figure below. The keyword **implements** indicates that **ComparableRectangle** inherits all the constants from the **Comparable** interface and implements the methods in the interface. The **compareTo** method compares the areas of two rectangles. An instance of **ComparableRectangle** is also an instance of **Rectangle**, **GeometricObject**, **Object**, and **Comparable**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7pmxlxx0vren3utr7azm.png)

You can now use the **sort** method to sort an array of **ComparableRectangle** objects, as in the program below.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cvw4qcoc18diivbmjkqj.png)

```
Width: 3.4 Height: 5.4 Area: 18.36
Width: 1.4 Height: 25.4 Area: 35.559999999999995
Width: 7.4 Height: 35.4 Area: 261.96
Width: 13.24 Height: 55.4 Area: 733.496
```

An interface provides another form of generic programming. It would be difficult to use a generic sort method to sort the objects without using an interface in this example, because multiple inheritance would be necessary to inherit **Comparable** and another class, such as **Rectangle**, at the same time. The **Object** class contains the **equals** method, which is intended for the subclasses of the **Object** class to override in order to compare whether the contents of the objects are the same. Suppose that the **Object** class contained the **compareTo** method, as defined in the **Comparable** interface; then the **sort** method could be used to compare a list of **any** objects. Whether a **compareTo** method should be included in the **Object** class is debatable.
Since the **compareTo** method is not defined in the **Object** class, the **Comparable** interface is defined in Java to enable objects to be compared if they are instances of the **Comparable** interface. It is strongly recommended (though not required) that **compareTo** should be consistent with **equals**. That is, for two objects **o1** and **o2**, **o1.compareTo(o2) == 0** if and only if **o1.equals(o2)** is **true**.
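Since the listings above appear only as images, here is a minimal, self-contained sketch in the spirit of **ComparableRectangle** (a plain rectangle is inlined rather than inheriting from a **GeometricObject** hierarchy, so the class and field names are illustrative):

```java
import java.util.Arrays;

public class SortRectangles {
    static class ComparableRectangle implements Comparable<ComparableRectangle> {
        double width, height;

        ComparableRectangle(double width, double height) {
            this.width = width;
            this.height = height;
        }

        double getArea() {
            return width * height;
        }

        @Override // Compare two rectangles by area
        public int compareTo(ComparableRectangle o) {
            return Double.compare(getArea(), o.getArea());
        }

        @Override
        public String toString() {
            return "Width: " + width + " Height: " + height + " Area: " + getArea();
        }
    }

    public static void main(String[] args) {
        ComparableRectangle[] rectangles = {
            new ComparableRectangle(3.4, 5.4),
            new ComparableRectangle(13.24, 55.4),
            new ComparableRectangle(7.4, 35.4),
            new ComparableRectangle(1.4, 25.4)
        };
        Arrays.sort(rectangles); // uses compareTo to order by area
        for (ComparableRectangle r : rectangles)
            System.out.println(r);
    }
}
```

Running `main` prints the rectangles in ascending order of area, matching the sample output shown above.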
paulike
1,891,656
💥Notcoin (NOT) drops by over 15% in price
Since the conclusion of the most recent airdrop claim period on Sunday, the crypto known as Notcoin...
0
2024-06-17T18:56:16
https://dev.to/irmakork/notcoin-not-drops-by-over-15-drop-in-price-ek3
Since the conclusion of the most recent airdrop claim period on Sunday, the crypto known as Notcoin (NOT) has experienced a huge decrease of 18.3%, falling from $0.02071 to $0.0175. As of the time this article was written, the price of Notcoin is $0.01804, and its total market capitalization is $1.85 billion. The trade volume of Notcoin has decreased by more than twenty percent during the course of the past twenty-four hours, reaching a total of eight hundred and ninety-nine million dollars. This decrease is accompanied by a more widespread correction in the market, with Notcoin currently trading at a price that is more than 37% lower than its all-time high of $0.02896. Despite this, it continues to be the 57th largest cryptocurrency by market capitalization, outperforming renowned projects such as Jupiter and zkSync in terms of the buzz around it. Notcoin has acquired a sizeable user base, having 11.5 million holders, with at least 2.5 million confirmed as on-chain participants. This is despite the fact that the crypto has experienced negative growth. Notcoin is a cryptocurrency that is only partially developed. It is based on the TON blockchain and initially acquired popularity by means of an airdrop that was distributed to users of a popular tap-to-earn game that is based on Telegram. Notcoin's rapid acceptance and favorable attention from the cryptocurrency world can be attributed to the widespread success of the game. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p3o4v95e15wcvhnlp6qo.png)
irmakork
1,891,655
#babylonjs Browser MMORPG #indiegamedev Ep21 - Spatial Hash Grid Area of Interest
Hello! I'm working on a POC (proof of concept) implementation of the monsters system. I managed to achieve...
0
2024-06-17T18:54:23
https://dev.to/maiu/babylonjs-browser-mmorpg-indiegamedev-ep21-spatial-hash-grid-area-of-interest-4fmc
babylonjs, indiegamedev, devlog, mmorpg
Hello! I'm working on a POC (proof of concept) implementation of the monsters system. I reached the first milestone, which is loading entities into the engine, and I finally have the occasion to show you how my spatial-hash-grid area of interest works. The engine refreshes the area-of-interest data each second and sends info to the clients to show/remove entities. There are two visibility ranges. The first is smaller (e.g. 50 units) and describes the distance below which an entity should become visible; the second (e.g. 100 units) is larger and is used to hide entities. This way I avoid sending create/remove entity commands when an entity sits right on the edge of visibility, where I would otherwise have to show/hide and resend the entity on each spatial-hash-grid update. Keep in mind that it doesn't work on a radius but on bounding boxes (rectangles), and entity visibility is not symmetrical. This implementation is far more efficient, and in practice the asymmetry doesn't change anything. Eventually I'll tune it to a comfortable range and players won't care about it. {% youtube 1siK2BjyZVg %}
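The two-range hysteresis described above can be sketched in a few lines (function and field names here are illustrative, not taken from my engine's code). An entity is shown when its bounding box comes within the smaller range, and only hidden again once it leaves the larger one:

```javascript
// Axis-aligned bounding-box distance check with show/hide hysteresis.
// showRange < hideRange, so an entity hovering at the edge of visibility
// does not trigger repeated create/remove commands on every grid update.
function updateVisibility(viewer, entity, visibleSet, showRange = 50, hideRange = 100) {
  // Per-axis gap between the two boxes (0 if they overlap on that axis)
  const dx = Math.max(0, Math.abs(viewer.x - entity.x) - (viewer.w + entity.w) / 2);
  const dz = Math.max(0, Math.abs(viewer.z - entity.z) - (viewer.d + entity.d) / 2);

  const isVisible = visibleSet.has(entity.id);
  if (!isVisible && dx <= showRange && dz <= showRange) {
    visibleSet.add(entity.id); // here the engine would send a "create entity" command
    return "show";
  }
  if (isVisible && (dx > hideRange || dz > hideRange)) {
    visibleSet.delete(entity.id); // here it would send a "remove entity" command
    return "hide";
  }
  return "unchanged";
}
```

An entity at, say, 70 units stays in whatever state it was already in: too far to be shown, too close to be hidden, which is exactly the hysteresis band.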
maiu
1,891,653
Closure 🫥
In JavaScript, a closure is a feature where an inner function has access to the outer (enclosing)...
0
2024-06-17T18:51:26
https://dev.to/__khojiakbar__/closure-2oek
javascript, closure
> In JavaScript, a closure is a feature where an inner function has access to the outer (enclosing) function’s variables. This includes the outer function’s parameters, variables declared within the outer function, and variables declared in the global scope.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7nyzf3ttk8ymvoeiyw3c.png)

#### Example:

```
function outerFunction(outVar) {
    let icon = '☺️'
    return function innerFunction(innerVar) {
        let excMark = '!!!'
        console.log('Outer:', outVar)
        console.log('Inner:', innerVar)
        console.log(`Together: ${icon} ${outVar} ${innerVar} ${excMark}`)
    }
}

let newFN = outerFunction('Hello')
newFN('World')

// Outer: Hello
// Inner: World
// Together: ☺️ Hello World !!!
```

# FUNNY EXAMPLES FOR CLOSURE

EXAMPLE 1:

```
function aboutTeller(lie) {
    return {
        tellAbout: function() {
            console.log(lie)
        },
        changeAbout: function(truth) {
            lie = truth
        }
    }
}

const aboutMe = aboutTeller('I am senior developer.');
aboutMe.tellAbout()
aboutMe.changeAbout('I am junior developer.');
aboutMe.tellAbout()

// I am senior developer.
// I am junior developer.
```

EXAMPLE 2:

```
// A collector squirrel
function collectorSquirrel() {
    let nuts = 0;
    return {
        stored: function(numNuts) {
            nuts += numNuts;
            console.log(`Squirrel stored ${numNuts} nuts into the wood.`)
        },
        has: function() {
            console.log(`Squirrel has ${nuts} nuts.`)
        }
    }
}

let mySquirrel = collectorSquirrel()
mySquirrel.stored(3)
mySquirrel.has()

// Squirrel stored 3 nuts into the wood.
// Squirrel has 3 nuts.
```

EXAMPLE 3:

```
// Time traveler
function timeTravel() {
    let time = new Date().getFullYear()
    return {
        travelTo: function(desiredTime) {
            console.log(`Hello! Welcome to ${time + desiredTime}. Have a nice time!`)
        },
        reset: function() {
            time = new Date().getFullYear()
            console.log(time)
        }
    }
}

let thisTime = timeTravel()
thisTime.travelTo(10)
// Hello! Welcome to 2034. Have a nice time!

thisTime.reset()
// 2024
```

EXAMPLE 4:

```
// The Motivational Coach
function motivationalCoach() {
    const phrases = [
        "You're doing great!",
        "Keep up the good work!",
        "You can do it!",
        "Believe in yourself!"
    ];
    return function() {
        let phrase = phrases[Math.floor(Math.random() * phrases.length)];
        console.log(phrase)
    }
}

let motivateMe = motivationalCoach()
motivateMe()
// You can do it!
```
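One more classic pattern in the same spirit (this example is mine, added to the list above): a counter factory. Each call to `makeCounter` captures its own private `count` variable, so independent counters never interfere:

```javascript
function makeCounter() {
  let count = 0; // private: only reachable through the returned closure
  return function () {
    count += 1;
    return count;
  };
}

const a = makeCounter();
const b = makeCounter();
console.log(a(), a(), b()); // 1 2 1
```

Because `count` lives in the closure rather than on an object, nothing outside `makeCounter` can read or reset it; the only way to change it is by calling the counter itself.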
__khojiakbar__
1,891,652
🤯Notcoin (NOT) Faces 11% Decline Post-Airdrop
The well-known cryptocurrency Notcoin (NOT) has seen a significant 18.3% decline, dropping from...
0
2024-06-17T18:51:23
https://dev.to/irmakork/notcoin-not-faces-11-decline-post-airdrop-179f
The well-known cryptocurrency Notcoin (NOT) has seen a significant 18.3% decline, dropping from $0.02071 to $0.0175, since the end of its recent airdrop claim period on Sunday. At the time of writing, Notcoin is trading at $0.01804 with a market cap of $1.85 billion. In the last 24 hours, Notcoin’s trading volume has dipped by more than 20%, settling at $809 million. This decline accompanies a broader market correction, with Notcoin now trading over 37% lower than its peak value of $0.02896. Nonetheless, it remains the 57th largest cryptocurrency by market capitalization, surpassing prominent projects like Jupiter and zkSync in the hype. Despite this downturn, Notcoin has amassed a sizable user base, boasting 11.5 million holders, with at least 2.5 million confirmed as on-chain participants. Notcoin is a partially new crypto that operates on the TON blockchain and gained initial attraction through an airdrop to users of a popular Telegram-based tap-to-earn game. The game’s success led to Notcoin’s rapid adoption and favorable attention from the crypto community. Concurrently, the TON blockchain itself has experienced a surge, recently surpassing its previous all-time high and exceeding $8.08. Active wallet addresses on TON have surpassed 577848, indicating significant ecosystem growth. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ag51tnqrodiplb772mzz.png)
irmakork
1,891,650
🚀3 Altcoins to Watch After the ETH ETF Market Launch: Pepe (PEPE), Optimism (OP) and Rebel Satoshi Arcade (RECQ)
🚀 Ethereum ETF Launch: Altcoins Ready to Shine 🔹 Rebel Satoshi Arcade (RECQ): 5,000% Growth...
0
2024-06-17T18:50:45
https://dev.to/irmakork/3-altcoins-to-watch-after-the-eth-etf-market-launch-pepe-pepe-optimism-op-and-rebel-satoshi-arcade-recq-ldb
🚀 Ethereum ETF Launch: Altcoins Ready to Shine 🔹 Rebel Satoshi Arcade (RECQ): 5,000% Growth Potential Built on Ethereum, RECQ blends memes, NFTs, and GameFi, making it a standout investment. With a presale token price of $0.0044 and an upcoming rich ecosystem, experts predict a monumental 5,000% rally post-launch. 🔹 Pepe (PEPE): Surging Towards New ATHs Pepe has emerged as a top memecoin on Ethereum, surpassing other meme narratives. With expectations of further ATHs amid ETH ETF momentum, it remains a top pick for budget-friendly altcoin investors. 🔹 Optimism (OP): Scaling Ethereum to $10 Optimism addresses Ethereum's scalability with optimistic rollups, gaining traction among developers. Positioned for a rally past $10, it's poised to capitalize on the impending bull run driven by the ETH ETF market. 🔹 Conclusion After the launch of ETH ETFs, Pepe, Optimism, and Rebel Satoshi Arcade stand out as prime altcoins to watch. Positioned for substantial gains in Ethereum's expanding ecosystem, they present lucrative opportunities for savvy investors. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dfazmsrabs6vn1domug9.png)
irmakork
1,891,649
🔥🔥🔥Best Cryptocurrencies to Watch This Week
Exciting Crypto Opportunities This Week 🔹 CYBRO Presale: High-Yield Investments CYBRO is drawing...
0
2024-06-17T18:49:53
https://dev.to/irmakork/best-cryptocurrencies-to-watch-this-week-25o7
Exciting Crypto Opportunities This Week 🔹 CYBRO Presale: High-Yield Investments CYBRO is drawing attention with its exclusive token presale, offering a potential ROI of 1200% at a token price of $0.025. With lucrative staking rewards, airdrops, and more, it's gaining traction among crypto whales and influencers. 🔹 Notcoin (NOT): Strong Uptrend NOT has surged 180.80% in a month and shows positive indicators with an RSI of 54.01 and slight MACD positivity. Trading between $0.0148 to $0.0260, it faces resistance at $0.0329 and support at $0.0105. 🔹 Pepe (PEPE): Overview and Prediction PEPE trades between $0.00001 and $0.00002, showing a 19.19% increase in a month and 817.56% over six months. With an RSI of 56.13, it's poised for impulsive moves. 🔹 Toncoin (TON): Overview and Prediction TON has surged by 266.37% in six months, trading between $6.48 to $7.82. With resistance at $8.52 and support at $5.85, it shows strong upward momentum. 🔹 Solana (SOL): Overview and Prediction SOL ranges from $151.91 to $173.96, with resistance at $185.81 and support at $141.71. Despite recent declines, it has risen by 94.32% in six months. 🔹 Conclusion While NOT, PEPE, TON, and SOL show potential, CYBRO stands out with its upcoming launch and unique yield opportunities on Blast blockchain. Early investors can participate in the presale for favorable terms, positioning CYBRO as a promising contender in the current market. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tsqe6w5zhv9tb5uvx6x3.png)
irmakork
1,891,648
👀XRP Price Recovers Past 100 SMA: Bullish Indicators Ahead?
📉 XRP Price Shows Signs of Recovery XRP experienced losses below $0.4650 but found support near...
0
2024-06-17T18:49:19
https://dev.to/irmakork/xrp-price-recovers-past-100-sma-bullish-indicators-ahead-5nk
📉 XRP Price Shows Signs of Recovery XRP experienced losses below $0.4650 but found support near $0.4600, initiating a recent recovery wave similar to Ethereum. Bulls managed to break above key resistance levels at $0.4680 and $0.4720, and even surpassed the $0.5000 zone. A high of $0.5049 was reached before the price started correcting lower. 📈 Current Price Levels and Resistance Zones Currently, XRP is trading above $0.4850 and the 100-hourly Simple Moving Average. Resistance is anticipated near $0.4950, with a pivotal level at $0.4980. Further up, significant resistance lies around $0.5050, and a close above this could propel XRP towards $0.5250 and potentially $0.5320. Continued momentum might lead to a test of the $0.5500 resistance. 📉 Potential Downside and Support Levels Failure to surpass the $0.4980 resistance may prompt a downside movement. Initial support is near $0.4850 and the 100-hourly SMA. Further down, a major support level awaits at $0.4720. A break below this could trigger bearish momentum towards the $0.460 support in the near term. 📊 Technical Indicators The hourly MACD for XRP/USD is showing bullish momentum, while the hourly RSI is above the 50 level, indicating positive price action. Summary: XRP's recent recovery above key resistance levels suggests bullish potential, with resistance zones identified at $0.4980, $0.5050, and higher. Support levels are seen at $0.4850 and $0.4720, with technical indicators favoring further upside. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwxadxvaxj686tbry3dg.png)
irmakork
1,891,635
Setting up a React environment
React is a widely go-to library built with JavaScript used to build interactive and dynamic...
0
2024-06-17T18:48:51
https://dev.to/josephfabox/setting-up-a-react-environment-7ac
react, vscode, env, frontend
React is a popular go-to JavaScript library for building interactive and dynamic user interfaces. To run any React application, we first need to set up a React development environment. This article provides a step-by-step guide to installing and configuring a functional React development environment. Prerequisites 1. Node.js and npm: React development depends heavily on Node.js and npm (Node Package Manager). They can be downloaded and installed from the official Node.js website [here](https://nodejs.org/). 2. Code Editor: Select a code editor that matches your preferences. Popular options are Visual Studio Code, Sublime Text, and Atom. You can download Visual Studio Code [here](https://code.visualstudio.com/). Using the create-react-app command Step 1: Go to the folder where you want to create the project and open a terminal. Step 2: In the terminal, enter the following command: ``` npx create-react-app <Application_Name> ``` Step 3: Move into the newly created folder by using the command: ``` cd <Application_Name> ``` A default application will be created with a standard project structure and the necessary dependencies. Step 4: To start the application, enter the following command in the terminal: ``` npm start ``` Step 5: The browser will display the following output: ![react description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xoq7oa6qkihz7cj8684i.png) Step 6: Based on your project needs, you may need to install extra dependencies. Popular choices include state management libraries like Redux or MobX, routing libraries such as React Router, or UI component libraries like Material-UI or Ant Design. You can add these dependencies using npm or yarn: `npm install package-name` Conclusion Establishing your development environment for React is the essential first step in creating robust web applications.
Create React App streamlines this process, allowing you to start coding quickly without the hassle of extensive setup. As your expertise grows, you can explore more complex configurations and integrate various libraries to customize your environment to fit your needs. React's extensive ecosystem of tools and libraries makes it an excellent choice for building dynamic and efficient user interfaces. So, get ready to dive into React and start building impressive web applications! With your development environment prepared, you're set to begin your React adventure. Happy coding!
josephfabox
1,891,647
💥Polkadot (DOT) Struggles Near $6.30 – Is Now The Time To Accumulate?
📉 Polkadot (DOT) Faces Technical Challenges Polkadot is grappling with bearish technical indicators...
0
2024-06-17T18:48:49
https://dev.to/irmakork/polkadot-dot-struggles-near-630-is-now-the-time-to-accumulate-1a77
📉 Polkadot (DOT) Faces Technical Challenges Polkadot is grappling with bearish technical indicators as it dips below the Ichimoku Cloud, indicating a clear downtrend. Both the conversion line and baseline hover above the current price, exacerbating negative sentiment. The token struggles to breach the stubborn $7 resistance, recently slipping to $6.16, causing concern among investors. 📊 Oversold Conditions and Potential Bounce Despite the downtrend, Polkadot finds solace near the lower Bollinger Band, hinting at oversold conditions. This could spark a short-term bounce if buying pressure increases. Attention is focused on the $6.20 consolidation zone as a pivotal support level. Holding here might pave the way for a bullish reversal, targeting resistance at $6.30. 📈 Analyst's Bullish Perspective Amidst Bearish Market Amidst the bleak sentiment, analyst Michaël van de Poppe remains optimistic. He sees DOT's descent to critical support as an opportunity to accumulate at a discount. Van de Poppe highlights the growing interest in Real World Assets (RWAs) and the expanding Polkadot ecosystem as catalysts for potential growth. He identifies key support at $5.67-$6.11 and outlines resistance levels at $9.30 and $17.00 for a bullish breakout scenario.
irmakork
1,890,501
SteamVR Overlay with Unity: Create Overlay
Key and Name Overlay needs two strings key and name. key is a unique string among all...
27,740
2024-06-16T18:45:43
https://dev.to/kurohuku/part-3-create-overlay-3em1
unity3d, steamvr, openvr, vr
## Key and Name An overlay needs two strings, `key` and `name`. `key` is a unique string among all overlays. `name` is a human-readable overlay name that is sometimes shown to users. We use `"WatchOverlayKey"` and `"WatchOverlay"`. ```diff void Start() { InitOpenVR(); + var key = "WatchOverlayKey"; + var name = "WatchOverlay"; } ``` ## Overlay handle Create a variable to hold an overlay handle. Just as a file is controlled through a file handle, overlays are controlled through an overlay handle. ```diff void Start() { InitOpenVR(); var key = "WatchOverlayKey"; var name = "WatchOverlay"; + var overlayHandle = OpenVR.k_ulOverlayHandleInvalid; } ``` [OpenVR.k_ulOverlayHandleInvalid](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.OpenVR.html?q=k_ulOverlayHandleInvalid#Valve_VR_OpenVR_k_ulOverlayHandleInvalid) means the overlay has not been created. The overlay handle is of type `ulong`. ## Create an overlay and get a handle Use [CreateOverlay()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_CreateOverlay_System_String_System_String_System_UInt64__) to create an overlay. (Read the [wiki](https://github.com/ValveSoftware/openvr/wiki/IVROverlay::CreateOverlay) for details.) Pass `key`, `name`, and a reference to `overlayHandle` to `CreateOverlay()`. ```diff void Start() { InitOpenVR(); var key = "WatchOverlayKey"; var name = "WatchOverlay"; var overlayHandle = OpenVR.k_ulOverlayHandleInvalid; + var error = OpenVR.Overlay.CreateOverlay(key, name, ref overlayHandle); } ``` `CreateOverlay()` stores the overlay handle in the variable `overlayHandle`. The return value is the overlay creation error code. All overlay errors are defined in [EVROverlayError](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.EVROverlayError.html). ## Error handling Add error handling for overlay creation.
```diff void Start() { InitOpenVR(); var key = "WatchOverlayKey"; var name = "WatchOverlay"; var overlayHandle = OpenVR.k_ulOverlayHandleInvalid; var error = OpenVR.Overlay.CreateOverlay(key, name, ref overlayHandle); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to create overlay: " + error); + } } ``` If there is no error, `EVROverlayError.None` is returned. ## Cleanup overlay Add code to dispose of the overlay when the application exits. ### Move overlay handle Move `overlayHandle` from `Start()` to a class member variable. ```diff public class WatchOverlay : MonoBehaviour { + private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid; private void Start() { InitOpenVR(); var key = "WatchOverlayKey"; var name = "WatchOverlay"; - var overlayHandle = OpenVR.k_ulOverlayHandleInvalid; var error = OpenVR.Overlay.CreateOverlay(key, name, ref overlayHandle); if (error != EVROverlayError.None) { throw new Exception("Failed to create overlay: " + error); } } ... } ``` ### Dispose of overlay The function for overlay cleanup is [DestroyOverlay()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_DestroyOverlay_System_UInt64_). (Read the [wiki](https://github.com/ValveSoftware/openvr/wiki/IVROverlay::DestroyOverlay) for details.) Add `OnApplicationQuit()` with the overlay cleanup code.
```diff public class WatchOverlay : MonoBehaviour { private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid; private void Start() { InitOpenVR(); var key = "WatchOverlayKey"; var name = "WatchOverlay"; var error = OpenVR.Overlay.CreateOverlay(key, name, ref overlayHandle); if (error != EVROverlayError.None) { throw new Exception("Failed to create overlay: " + error); } } + private void OnApplicationQuit() + { + if (overlayHandle != OpenVR.k_ulOverlayHandleInvalid) + { + var error = OpenVR.Overlay.DestroyOverlay(overlayHandle); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to dispose overlay: " + error); + } + } + } private void OnDestroy() { ShutdownOpenVR(); } ... } ``` `DestroyOverlay()` must be called before `Shutdown()`, so we put the overlay cleanup code inside `OnApplicationQuit()`, which is called before `OnDestroy()`. ### Check created overlay Check whether the overlay was created. **Overlay Viewer**, included with SteamVR, is a useful tool for inspecting overlays. Select **Developer > Overlay Viewer** in the SteamVR menu. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k2io4pakb2p8xv20rz3v.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2wgo2cw0qu4w3og9svsd.png) Overlay Viewer displays a list of created overlays. You can see many overlays already created by the SteamVR system. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zlex2o6ksvegf7wnrcsk.png) In this state, run the program from Unity. The overlay key `WatchOverlayKey` is then added to the list. You can see the overlay details by clicking the key. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/358mupcnh97frl3wtwgm.png) The overlay preview is shown in the gray area on the right, but we have not drawn anything to the overlay yet. With this, we have confirmed the overlay is created. Stop the program and close Overlay Viewer.
--- ### Optional: Overlay Viewer location Overlay Viewer is located in the SteamVR install directory. `C:/Program Files (x86)/Steam/steamapps/common/SteamVR/bin/win32/overlay_viewer.exe` We use this tool frequently during development, so I recommend creating a shortcut. --- ## Organize code ### Create overlay Move the overlay creation code into `CreateOverlay()`. It takes `key` and `name` as arguments and returns the created overlay handle. Please note that some variable names are changed during this code organization. ```diff public class WatchOverlay : MonoBehaviour { private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid; private void Start() { InitOpenVR(); - var key = "WatchOverlayKey"; - var name = "WatchOverlay"; - var error = OpenVR.Overlay.CreateOverlay(key, name, ref overlayHandle); - if (error != EVROverlayError.None) - { - throw new Exception("Failed to create overlay: " + error); - } + overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); } ... + private ulong CreateOverlay(string key, string name) { + // Some code changed here. + var handle = OpenVR.k_ulOverlayHandleInvalid; + var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to create overlay: " + error); + } + return handle; + } } ``` ### Cleanup overlay Similarly, move the overlay cleanup code into `DestroyOverlay()`. ```diff public class WatchOverlay : MonoBehaviour { ... private void OnApplicationQuit() { - if (overlayHandle != OpenVR.k_ulOverlayHandleInvalid) - { - var error = OpenVR.Overlay.DestroyOverlay(overlayHandle); - if (error != EVROverlayError.None) - { - throw new Exception("Failed to dispose overlay: " + error); - } - } + DestroyOverlay(overlayHandle); } ... + // overlayHandle -> handle variable name changed. 
+ private void DestroyOverlay(ulong handle) + { + if (handle != OpenVR.k_ulOverlayHandleInvalid) + { + var error = OpenVR.Overlay.DestroyOverlay(handle); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to dispose overlay: " + error); + } + } + } } ``` ## Final code ```cs using UnityEngine; using Valve.VR; using System; public class WatchOverlay : MonoBehaviour { private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid; private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); } private void OnApplicationQuit() { DestroyOverlay(overlayHandle); } private void OnDestroy() { ShutdownOpenVR(); } private void InitOpenVR() { if (OpenVR.System != null) return; var error = EVRInitError.None; OpenVR.Init(ref error, EVRApplicationType.VRApplication_Overlay); if (error != EVRInitError.None) { throw new Exception("Failed to initialize OpenVR: " + error); } } private void ShutdownOpenVR() { if (OpenVR.System != null) { OpenVR.Shutdown(); } } private ulong CreateOverlay(string key, string name) { var handle = OpenVR.k_ulOverlayHandleInvalid; var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle); if (error != EVROverlayError.None) { throw new Exception("Failed to create overlay: " + error); } return handle; } private void DestroyOverlay(ulong handle) { if (handle != OpenVR.k_ulOverlayHandleInvalid) { var error = OpenVR.Overlay.DestroyOverlay(handle); if (error != EVROverlayError.None) { throw new Exception("Failed to dispose overlay: " + error); } } } } ``` With that, we have created and cleaned up an overlay. In the next part, we will draw an image file on the overlay.
kurohuku
1,891,646
💥PEPE Price Analysis: Can Pepe Reclaim Record Highs As Buy Signal Emerges?
🚀 PEPE Price Boosted by SEC Chair's Hint Over the weekend, Pepe coin received a price boost following...
0
2024-06-17T18:48:30
https://dev.to/irmakork/pepe-price-analysis-can-pepe-reclaim-record-highs-as-buy-signal-emerges-48ib
🚀 PEPE Price Boosted by SEC Chair's Hint Over the weekend, Pepe coin received a price boost following hints from SEC Chair about potential approval for spot Ethereum ETF S-1s by summer's end. PEPE traded around $0.0000118 on Monday, marking a 1.4% increase in 24 hours. This revival may embolden PEPE bulls to rally toward previous all-time highs. 📈 PEPE Breaks Out of Ascending Triangle PEPE's price broke out of an ascending triangle on May 21, reaching a new all-time high before profit-taking pulled it back to $0.00001057. Recently, it found support at a critical resistance-turned-support level, reinforced by a bullish falling wedge pattern. The current level aligns with the 0.786 Fibonacci retracement, typically a strong support. 🔍 Future Outlook for PEPE Price Gary Gensler's remarks on Ethereum ETFs have spurred optimism among PEPE investors, aligning with historical trends where PEPE price movements mirrored Ethereum's. Santiment data indicates a rise in PEPE holders, with smaller investors potentially reshaping supply dynamics. As Ethereum-related optimism grows, PEPE could see sustained momentum, potentially crucial for its long-term price performance. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dfwfxs89lwnlx19tuhbk.png)
irmakork
1,891,645
🚀Bitcoin Notes Major Buying Pressure Despite Dip To $66K, Analyst Hints Recovery
📉 BTC Registers Significant Buying Pressure The Bitcoin (BTC) price recently fluctuated, briefly...
0
2024-06-17T18:47:59
https://dev.to/irmakork/bitcoin-notes-major-buying-pressure-despite-dip-to-66k-analyst-hints-recovery-51k1
📉 BTC Registers Significant Buying Pressure The Bitcoin (BTC) price recently fluctuated, briefly dropping to $65,000 before rebounding above $66,000. Despite this volatility, market sentiment remains optimistic due to increased buying pressure, notably on platforms like Huobi Global. BTC is holding above a crucial support level, hinting at a potential recovery. 📈 Ali Martinez Highlights Surge in Buying Activity Crypto analyst Ali Martinez noted a significant surge in buying activity on Huobi Global. Martinez highlighted, “Someone is buying the Bitcoin dip! The BTC Taker Buy Sell Ratio on Huobi Global surged to 545!” This spike in buy pressure indicates bullish sentiment, suggesting a potential upward movement in BTC price. 🌐 Macro Factors and Market Context Michaël van de Poppe provided insight into recent market movements, noting favorable macroeconomic data in traditional markets like gold and USD. Despite this, recent downturns in the crypto markets have persisted. Economic indicators such as CPI data have been closely watched, influencing market sentiment towards risk-on assets like cryptocurrencies. 💼 Federal Reserve and Market Impact The Federal Reserve's recent actions and statements, including Chair Jerome Powell’s speech, have also impacted market expectations. Powell’s comments tempered expectations for imminent rate cuts despite positive economic indicators suggesting potential future positivity. 💡 Potential BTC Price Recovery Despite market turbulence, signs of potential recovery are evident. Martinez emphasized the importance of Bitcoin maintaining above $66,254 to avoid deeper corrections. Additionally, CryptoCon highlighted the 20-week EMA as a critical support level to watch, suggesting cautious optimism amidst ongoing market dynamics. In conclusion, while Bitcoin faces fluctuations and external economic influences, the current market sentiment leans towards potential stabilization and recovery. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oj4qrby6spreb67fyvw4.png)
irmakork
1,891,643
🔥$60k or $80K; Where Bitcoin Price Heading by June End?
📉 Bitcoin Price Faces Downturn Amidst Miner and Whale Sell-Offs For over a week, Bitcoin has...
0
2024-06-17T18:47:11
https://dev.to/irmakork/60k-or-80k-where-bitcoin-price-heading-by-june-end-1gb5
📉 Bitcoin Price Faces Downturn Amidst Miner and Whale Sell-Offs For over a week, Bitcoin has experienced aggressive selling pressure, dropping from $71,947 to $66,197, marking an 8.3% pullback. This decline was influenced by pre-CPI data uncertainty, significant outflows from BTC ETFs, whale distribution, and Bitcoin miners’ capitulation. The price broke key support levels, signaling a continuation of the downtrend. 📊 Bitcoin Trading in a Bearish Reversal Pattern Bitcoin has been consolidating within two parallel trendlines for the past three months, forming a bullish flag pattern—a setup typically seen during strong uptrends to stabilize price action before a higher rally. However, on June 7th, BTC faced a bearish reversal from the upper trendline, indicating potential prolonged consolidation. This downturn led Bitcoin to a 4-week low of $64,936, with the market cap dropping to $1.28 trillion. 💰 Impact of Miner and Whale Activities Crypto trader Alicharts highlighted that Bitcoin miners sold over 1,200 BTC worth $79.20 million recently, contributing to the price correction. Data from CryptoQuant showed a notable increase in miner selling starting June 10, 2024, correlating with Bitcoin's price decline post-halving adjustments. Meanwhile, Bitcoin whales liquidated over 50,000 BTC in the past 10 days, totaling about $3.30 billion, further driving the price downward. 📉 Technical Outlook and Support Levels Sellers breached the combined support of $66,588 and the 50-day EMA slope, suggesting potential further decline. If the breakdown continues, Bitcoin could test as low as $57,000 by the end of June, seeking support from the lower trendline of the flag pattern. Buyers need a breakout above the flag pattern to regain control, potentially triggering a rally towards $90,000. In summary, Bitcoin faces challenges from miner and whale activities amidst broader market uncertainties, prompting caution among investors and traders alike. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s9vo9ioniylpy627hpnd.png)
irmakork
1,891,642
💥dogwifhat Price Analysis: WIF Inside the Coiling Pattern May Push 42% Upswing
📊 Dogwifhat Price Analysis The cryptocurrency market has seen diminished volatility over the weekend,...
0
2024-06-17T18:46:27
https://dev.to/irmakork/dogwifhat-price-analysis-wif-inside-the-coiling-pattern-may-push-42-upswing-2po4
📊 Dogwifhat Price Analysis The cryptocurrency market has seen diminished volatility over the weekend, aiming for stability after a recent downturn. Bitcoin has managed to stay above $65,000 without clear signs of a reversal. However, Solana-based memecoin Dogwifhat has gained 5%, surpassing the $2.5 mark. Will this upward trend continue? 📈 Will Dogwifhat Hold $2 Amid Prolonged Market Consolidation? Dogwifhat has been trading sideways for the past two weeks, bouncing between two converging trendlines. This has formed a symmetrical triangle pattern on the daily chart. Recent Downturn: Dogwifhat's price fell from $4.08 to $2.2, a 45.85% decline, driven by selling pressure partly due to Bitcoin miners' capitulation. Potential Upward Movement: Trader Alicharts pointed out that Bitcoin's average mining cost is $86,668. Historically, Bitcoin's price tends to rise above this cost, indicating a potential bullish trend. With Bitcoin stabilizing above $65,000, Dogwifhat rebounded from the $2.2 support and the triangle’s lower trendline. This resulted in a 9% jump in 48 hours, pushing the price to $2.5. If the pattern holds, Dogwifhat could challenge the triangle’s upper boundary at $3.5, a potential 42% gain. However, the sideways action will persist until the triangle pattern breaks. 📉 Technical Indicators: EMAs: A bearish crossover between the 50-and-100-day Exponential Moving Averages could accelerate selling momentum and extend consolidation above lower support before a bullish bounce.Vortex Indicator: A notable bearish crossover between VI+ (Blue) and VI- (Pink) indicates that bears still have control over the asset. In summary, while Dogwifhat shows signs of a potential upward trend, the sideways action and technical indicators suggest that the market could remain cautious. Investors should watch for a break in the triangle pattern for clearer signals. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0c5u3czq4xsjj6c0ndkh.png)
irmakork
1,891,641
👇3 Reasons Why Bitcoin (BTC) May Retest $70,000 This Week
📈 Bitcoin Mild Resurgence Bitcoin (BTC) is showing a mild resurgence, rising 0.51% in the past 24...
0
2024-06-17T18:45:53
https://dev.to/irmakork/3-reasons-why-bitcoin-btc-may-retest-70000-this-week-jgl
📈 Bitcoin Mild Resurgence Bitcoin (BTC) is showing a mild resurgence, rising 0.51% in the past 24 hours to $66,600.62. While this uptick does not confirm a sustained trend shift, it might indicate the start of a price rebound. 🔍 Top 3 Bitcoin Price Rebound Catalysts Bitcoin has been in a bearish trend since reaching its All-Time High (ATH) of $73,750.07 in March, maintaining a tight range with balanced bull-bear action. Three key factors are crucial for a potential trend shift: Retail and Whale Transactions: Trading Volume: Current trading volume is $12,812,056,073, down by 46.85%. For Bitcoin to retest the $70,000 resistance, this volume needs to increase.Large Transactions: Dropped by 35.45% to $30.39 billion, indicating a decrease in significant capital flow. Social Sentiment: Social sentiment around BTC gauges interest levels. Currently, it plays a critical role in understanding market enthusiasm and potential price movements. Spot Bitcoin ETF Impact: The influence of spot Bitcoin ETFs is significant. Recent outflows, particularly from Grayscale Investments' GBTC, have impacted the market. A shift in this trend could drive Bitcoin back to higher price points. 🚀 How High Can Bitcoin Soar? With Bitcoin about 9.7% below its previous ATH, the expectation is a potential rise above this level soon. Analysts have varying projections: PlanB (S2F Model): Predicts BTC could reach $500,000 by 2025.Robert Kiyosaki ("Rich Dad Poor Dad" Author): Foresees BTC soaring to $350,000 by August. 📊 Short-Term Outlook Bitcoin might retest the $70,000 mark in the coming days, driven by the factors above. Analysts remain optimistic, anticipating a potential rebound and new highs in the near future. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ba77y8c49f6iynd3jmh5.png)
irmakork
1,891,640
👀 ETH/BTC Price Prediction: When Is Ethereum Price Poised to Reach $4,200?
📈 Ethereum Price Surge Ethereum jumped 8% over the weekend, dipping to $3,362 before climbing. By...
0
2024-06-17T18:45:22
https://dev.to/irmakork/ethbtc-price-prediction-when-is-ethereum-price-poised-reach-4200-4e77
📈 Ethereum Price Surge Ethereum jumped 8% over the weekend, dipping to $3,362 before climbing. By Monday, ETH hovered around $3,586, marking a 1.2% 24-hour increase but a 2.7% drop over the past week. Meanwhile, Bitcoin remained in a consolidation zone, awaiting a directional move. 🔍 ETH Price Prediction: Bulls vs. Bears Last week, ETH fell continuously for 8 days from $3,878 to $3,362, breaking critical support at $3,650. Bulls utilized strong support between $3,400 and $3,250 to start an uptrend. ETH now faces a challenge breaking above the $3,600 resistance. If successful, prices could rise to $4,216, with resistance at $3,800 and $3,900. 📊 Bitcoin at Crucial Support Bitcoin is trending around key support at $65,000. A clean break below could drop BTC to $57,000, but strong support could push it to $72,500 or higher, potentially reaching new all-time highs. 📢 Spot ETF Approval Anticipation SEC Chair Gary Gensler hinted at a likely Ethereum ETF approval by the end of summer, possibly causing the weekend price reversal. Ethereum blob usage has been rising, improving Layer 2 transaction efficiency and reducing congestion. 📝 Bottom Line Spot ETFs historically boost crypto prices. As summer ends, the market watches for Ethereum Spot ETF approval. Investors are in a "buy-the-rumor" phase, anticipating ETF approval. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/60s7h42e2d80kxfk4tzh.png)
irmakork
1,891,639
🔥Bitcoin Price: Expert Foresees Continued Bull Market Despite $4B BTC Selloff
The Bitcoin (BTC) price is currently holding just above $66,000 after dropping to $65,000 last week...
0
2024-06-17T18:44:45
https://dev.to/irmakork/bitcoin-price-expert-foresees-continued-bull-market-despite-4b-btc-selloff-2847
The Bitcoin (BTC) price is currently holding just above $66,000 after dropping to $65,000 last week due to significant selloffs by Bitcoin whales and miners. These selloffs amounted to over $4 billion, yet analysts remain optimistic about Bitcoin’s future. 📉 Whales & Miners Selloff On-chain data from Santiment shows Bitcoin whales sold over 50,000 BTC (approx. $3.30 billion) in the ten days before the recent correction. Miners also contributed by selling over 1,200 BTC ($79.20 million). Despite this, analysts believe the bull market is still intact. 📊 Analyst Optimism CryptoQuant CEO Ki Young Ju noted, “Bitcoin traders’ average entry price is $47K. Even with a 27% drop, it can still be considered a bull market.” He remains long-term bullish, suggesting the recent 9% pullback from $71,500 isn’t significant enough to end the bull market. ⛏️ Mining Cost Insights Crypto analyst Ali Martinez highlighted that Bitcoin’s average mining cost is $86,668, stating, “Historically, BTC always surges above its average mining cost!” The recent Halving event reduced block rewards, increasing mining costs but potentially driving prices up due to reduced supply. 🚀 Future Prospects Martinez believes Bitcoin will soon surpass its average mining cost, prompting miners to hold rather than sell, reducing selling pressure and driving prices higher. Despite short-term profit-taking, analysts expect an unprecedented surge once prices exceed mining costs. 📉 Current Market Stats At press time, BTC price is down by 0.39% to $66,004.88 with a market cap of $1.30 trillion. The 24-hour trading volume surged 39.29% to $16.95 billion. Long liquidations exceeded shorts, creating downside pressure, with $5.89 million in long liquidations and $3.93 million in shorts. Overall, while recent selloffs indicate short-term adjustments, the sentiment among analysts remains positive for Bitcoin's long-term trajectory. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mg3omlxti9iqtxg2ox7s.png)
irmakork
1,891,637
🤑Polygon logs 200% hike in THIS key metric: Can it help MATIC?
While Bitcoin (BTC), Ethereum (ETH), and Solana (SOL) have garnered attention due to ETFs and...
0
2024-06-17T18:44:16
https://dev.to/irmakork/polygon-logs-200-hike-in-this-key-metric-can-it-help-matic-3dce
While Bitcoin (BTC), Ethereum (ETH), and Solana (SOL) have garnered attention due to ETFs and memecoins, Polygon (MATIC) has seen significant growth since the beginning of the year. 📈 Activity on the Rise Data shows a 200% increase in daily active addresses since the year began. Despite this, MATIC's price fell by 19%. After a period of sideways movement, MATIC exhibited a bearish trend, forming lower lows and lower highs. For a reversal, MATIC needs to re-test and break past $0.6346 and aim for $0.6886. The RSI at 52.65 indicates some bullish momentum, but the negative Awesome Oscillator (AO) signals weaker short-term movement. 🔗 MATIC’s On-Chain Signs On-chain metrics reveal positives for MATIC. Network growth has increased, indicating more new addresses accumulating MATIC. The increased velocity suggests more trading activity. However, large holders are selling, while retail interest drives the recent growth. 📉 Trader Sentiment Retail spot traders show interest in accumulating MATIC, but futures and options traders are less enthusiastic. Coinglass data reveals a significant drop in Open Interest for MATIC since April 1. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/norgdfmnuzv2nlprd133.png)
irmakork
1,888,602
Are we all prompting wrong? Balancing Creativity and Consistency in RAG.
For a Boston native like myself, there are few things more heartwarming than Artificial Intelligence...
0
2024-06-17T18:44:07
https://dev.to/llmware/are-we-all-prompting-wrong-balancing-creativity-and-consistency-in-rag-20fm
ai, llm, python, rag
For a Boston native like myself, there are few things more heartwarming than Artificial Intelligence understanding the brilliance of _Good Will Hunting_. A few cursory prompts reveal that it views it as a "must-watch tale of redemption and self discovery".

![Chat Will Hunting](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/igxw6cfugs9mb2ivue1i.png)

But a slightly closer look reveals what many users of LLMs have accepted as a given - slight variations on an otherwise consistent topic. This is the result of stochastic generation.

## Stochastic generation 🤖

This is a fairly common term; from online bootcamps to college lectures, students of AI are familiar with this concept. For those who need a quick refresher, here is the 3-step generation loop that many LLMs follow. LLMs are trained using a next-token prediction task, where the model predicts the next token in a sequence based on the previous tokens. This process involves:

1. **Tokenized Input**: The input text is converted into a sequence of numbers (tokens).
2. **Probability Distribution**: The model generates a probability distribution over the possible next tokens.
3. **Sampling Algorithm**: This distribution is passed through a sampling algorithm to select the next token.

The probabilistic elements that this process introduces enable LLMs to generate more captivating dialogue, novel images, and creatively praise award-winning films.

<img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExbHpta3l3aXZicWp4dDhjbjc1b3p5MjVmMnZvcGQ2d3FqMzNnZDF2ZyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/7pLv68ItwBaHS/giphy.gif">

## Randomness and RAG 🎰

When building RAG-based applications, we are often not as concerned with creativity as we are with facts. When dealing with facts, we want as little probability involved as possible. In other words, instead of sampling a probability distribution, it's beneficial to just take the token with the maximum likelihood every time.
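The sampling-versus-greedy distinction can be sketched in a few lines of plain Python. This toy snippet is independent of LLMWARE's API; the logits, function names, and temperature values are illustrative only:

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token(logits, sample=True, temperature=1.0, rng=random):
    """sample=True -> stochastic sampling; sample=False -> greedy top token."""
    probs = softmax(logits, temperature)
    if not sample:
        # Greedy decoding: always take the highest-probability token.
        return max(range(len(probs)), key=probs.__getitem__)
    # Stochastic decoding: draw one index weighted by the distribution.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens
print(pick_token(logits, sample=False))  # always index 0 (greedy)
print(softmax(logits, temperature=0.3))  # sharper than temperature=1.0
```

Running `pick_token` repeatedly with `sample=True` will occasionally return the lower-probability indices, which is exactly the "not-top-token" behavior the demo below measures.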
LLMWARE allows you to explore how random your generated results are, as well as adjust how random you want them to be. Here's a quick demonstration:

## Demo 🙌

**Load the model**

```python
from llmware.models import ModelCatalog

model = ModelCatalog().load_model(
    "bling-stablelm-3b-tool",
    sample=True,
    temperature=0.3,
    get_logits=True,
    max_output=123,
)
```

In the `load_model` method, we make a few important selections. The bling 3B is one of our newest and highest performing models. Setting the `sample` attribute to `True` or `False` will allow you to change between a stochastic approach and a top-token model. The temperature can be an important tool to control the randomness of the output, with lower values making responses more focused and higher values increasing diversity in the generated text. These key settings will allow you to see what kind of approach you want to take when it comes to the probabilistic nature of your model.

**Run a simple inference on some sample text**

```python
# "sample" here holds the source text to analyze (defined elsewhere)
response = model.inference("What is a list of the key points?", sample)
```

This step is where your model is doing the heavy lifting, analyzing and summarizing the loaded-in documents.

**Run a sampling analysis**

```python
sampling_analysis = ModelCatalog().analyze_sampling(response)
print("sampling analysis: ", sampling_analysis)
```

Now you get to see the analytics - giving you a better idea of how heavily your model samples from the lower-probability side of the distribution. This analysis will include what percentage of the tokens selected by the model were also the highest probability output, and will note cases where the not-top-token was selected. In cases where the top token was not selected, the below code will print out the exact entries of the outputs, including their token rank.
```python
for i, entries in enumerate(sampling_analysis["not_top_tokens"]):
    print("sampled choices: ", i, entries)
```

All these tools can help you make an informed decision on whether you want your model to think a little outside the box, or stick to the most likely answer. To see this process in action, check out our YouTube video on consistent LLM output generation.

{% embed https://www.youtube.com/watch?v=iXp1tj-pPjM %}

The full code for this example can be found in our [Github repo](https://github.com/llmware-ai/llmware/blob/main/examples/Models/adjusting_sampling_settings.py). If you have any questions, or would like to learn more about LLMWARE, come to our Discord community. Click [here](https://discord.gg/6nNVdn7A) to join. See you there! 🚀🚀🚀
simon_risman_1991f73692bc
1,891,636
💥XRP Whale Dumps 31M Coins Amid Price Dip, What’s Next?
XRP, the Ripple-backed cryptocurrency, saw significant whale activity as nearly 31 million XRP coins...
0
2024-06-17T18:43:52
https://dev.to/irmakork/xrp-whale-dumps-31m-coins-amid-price-dip-whats-next-3nl1
XRP, the Ripple-backed cryptocurrency, saw significant whale activity as nearly 31 million XRP coins were offloaded to centralized exchanges amid a recent price recovery. This has raised investor concerns about future price movements.

🐋 XRP Whale Activity and Market Impact

Massive token dumps to exchanges are seen as bearish indicators, increasing supply and potentially lowering prices. Data from Whale Alert revealed a whale transferred 30.350 million XRP ($14.53 million) to Bitstamp and another CEX, causing bearish sentiments and increased selling pressure.

📉 Current Market Conditions and Future Prospects

XRP's price is currently $0.4872, with a 24-hour trading volume of $661.7 million, reflecting a -0.38% decline in 24 hours and -2.04% over the past week. Despite the dip, positive news around the XRP Ledger (XRPL) and plans for a new stablecoin, RLUSD, have fostered cautious optimism.

📊 Market Metrics

The futures open interest (OI) increased by 0.21% to $416.1 million, and derivatives volume jumped by 5.16%. The Relative Strength Index (RSI) near 47.04 indicates downside pressure but suggests a potential rebound if it moves into oversold territory.

Overall, while XRP faces current selling pressure, robust market interest and developments within XRPL could lead to future price recovery.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/50cqpg09e2ygqivm59rj.png)
irmakork
1,891,634
🔥Dogecoin: Should You HODL Or Fold? Dev Sounds The Alarm
The world of cryptocurrency can be alluring yet intimidating, especially for newcomers. While...
0
2024-06-17T18:43:22
https://dev.to/irmakork/dogecoin-should-you-hodl-or-fold-dev-sounds-the-alarm-67n
The world of cryptocurrency can be alluring yet intimidating, especially for newcomers. While memecoins like Dogecoin surge in popularity, Dogecoin developer Mishaboar urges caution, reminding everyone that crypto investments are a calculated leap, not a blind jump into digital gold.

⚠️ Dogecoin Analyst: Know The Risks

Mishaboar, a respected figure in the Dogecoin community, recently warned via social media about the volatility of cryptocurrencies. He emphasized the importance of risk assessment: “Crypto is highly volatile and risky. Do not gamble with more than you can afford to lose.” This advice is often overlooked by enthusiastic investors who end up getting burned.

🎲 Educated Gamblers vs. Uninformed Enthusiasts

Mishaboar calls crypto investment "educated gambling." While acknowledging the thrill and potential returns, he stresses the need for education: “It’s okay to gamble, but do it responsibly after understanding the risk/reward ratio.” This echoes Justin Bons, founder of Cyber Capital, who likened memecoin investments to gambling.

🛡️ Protecting Newcomers

Mishaboar's primary concern is protecting newcomers from the crypto landscape's pitfalls. Many new investors buy and trade coins without fully understanding the risks, leaving them vulnerable to manipulation by unscrupulous actors. He aims to equip newcomers with the knowledge to navigate the crypto space safely.

🔍 Transparency and Responsible Innovation

Mishaboar also criticizes the lack of transparency within the crypto industry. He points out that some projects fail to disclose inherent risks, making informed decisions difficult for investors. He advocates for a more responsible approach to crypto development, prioritizing safety and education alongside innovation.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mc3e9s10xoyoiz8wlj5w.png)
irmakork
1,891,633
👇3 Ethereum Crypto Losers of the Week: Should You Buy, Sell, or HODL?
The global cryptocurrency market has been volatile, with major price changes prompting investors to...
0
2024-06-17T18:42:51
https://dev.to/irmakork/3-ethereum-crypto-losers-of-the-week-should-you-buy-sell-or-hodl-3cji
The global cryptocurrency market has been volatile, with major price changes prompting investors to decide whether to buy, sell, or hold (hodl). This article analyzes three Ethereum tokens, Wormhole (W), Floki Inu (FLOKI), and Worldcoin (WLD), which have seen significant losses this week.

🐛 Wormhole (W)

Losses: 34.74%
Current Price: $0.4585
Market Cap: $826.01 million

Wormhole recorded a 34.74% loss this week. EMA and SMA indicators show a downward trend, with selling pressure evident in the MACD Level. The RSI is neutral, and Fibonacci support is at $0.3116 with resistance at $0.7879. More declines are possible before a potential reversal.

🐶 Floki Inu (FLOKI)

Losses: 28.52%
Current Price: $0.0002061
Market Cap: $1.97 billion

Floki Inu fell by 28.52% this week. Both EMA and SMA indicate a bearish trend, with the MACD Level showing selling pressure. The RSI indicates an oversold condition, suggesting a possible buying opportunity. Fibonacci support is at $0.0001608803 and resistance at $0.0003310183, indicating potential stabilization soon.

🌐 Worldcoin (WLD)

Losses: 21.77%
Current Price: $3.49
Market Cap: $838.78 million

Worldcoin experienced a 21.77% decline. EMA and SMA indicators show a downward trend, with the MACD Level indicating selling pressure. The RSI suggests a potential buying opportunity as the token nears oversold territory. Fibonacci support is at $0.3116 with resistance at $0.7879, hinting at potential stabilization or recovery.

💡 Final Thoughts

The market crash has presented opportunities and challenges. For Wormhole (W), strong bearish signals suggest selling. Floki Inu (FLOKI) should be held until clearer signals emerge. Worldcoin (WLD) may be a speculative buy as it could rebound. Always assess your risk tolerance and conduct additional research before making investment decisions. Staying updated and adaptable is crucial in the unpredictable crypto market.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4ss4lx3v71rg5piy4rj4.png)
irmakork
1,891,632
3 Top Altcoins Set For The Rally For The First Time Since March
March was a bullish month for the crypto market, with many cryptocurrencies reaching new heights....
0
2024-06-17T18:42:31
https://dev.to/irmakork/3-top-altcoins-set-for-the-rally-for-the-first-time-since-march-406a
cryptocurrency
March was a bullish month for the crypto market, with many cryptocurrencies reaching new heights. However, the trends have shifted, and bears are now dominating. This presents an opportunity to buy altcoins low and sell when conditions improve. Here are some altcoins to consider before they rally again:

📈 Avalanche (AVAX)

Avalanche's native token, AVAX, has been declining since its March peak of $146.22. Currently priced at $30.20 with a market cap of $11.8 billion, it’s poised for a potential rally as market conditions improve.

🚀 TRON (TRX)

TRON has been slowly rising this month, currently at $0.1152 with a market cap of $10 billion. It reached $0.1422 in March, and analysts predict a possible rally in the upcoming altcoin season, targeting values near its all-time high of $0.3004.

💹 Brett (BRETT)

Brett recently hit an all-time high of $0.1939. Now at $0.1413, it’s 28% below its peak but has high potential for a recovery or even a bigger rally due to increased market demand.

Other altcoins are also poised for a rally once the market recovers, with Avalanche, TRON, and Brett leading the way.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q2zcuy9dsrj4pkx8kzv8.png)
irmakork
1,891,631
Wk 2 Experiment Tracking: MLOPs with DataTalks
In this week, the course discusses how to carry out ML experiments, using the MLflow experiment...
0
2024-06-17T18:42:21
https://dev.to/afrologicinsect/wk-2-experiment-tracking-mlops-with-datatalks-45d2
machinelearning, python, programming, beginners
In this week, the course discusses how to carry out ML experiments, using the __MLflow__ experiment tracking tool. This setup allows you to track and log your machine learning experiments and results.

For this week's assignment, we have been provided with 4 scripts for us to modify to complete the assignment. Here's what the questions are:

<Insert screenshot>

The only setup required now is to install the __mlflow__ library into our __MLOPS_env__ virtual environment as well as download the 4 scripts.

**Q1: Install MLflow**

Like before, from your parent folder (MLOPS), create a __wk2__ sub-directory and navigate into it, download the scripts from [here](https://github.com/DataTalksClub/mlops-zoomcamp/blob/main/cohorts/2024/02-experiment-tracking/homework) and then run the following:

```
mkdir wk2
cd wk2
```

Then activate your virtual environment and install like so:

```
pip install mlflow
mlflow --version
```

=> mlflow, version 2.13.0

Or just create a __jupyter notebook__ to have a compact file for your answers; simply prefix these bash/terminal commands with a `!`.

**Q2. Download and preprocess the data**

To store the datasets, we will create a new sub-directory -

```
## Fetch Data
! mkdir datasets
! curl -o ./datasets/green_tripdata_2023-01.parquet https://d37ci6vzurychx.cloudfront.net/trip-data/green_tripdata_2023-01.parquet
! curl -o ./datasets/green_tripdata_2023-02.parquet https://d37ci6vzurychx.cloudfront.net/trip-data/green_tripdata_2023-02.parquet
! curl -o ./datasets/green_tripdata_2023-03.parquet https://d37ci6vzurychx.cloudfront.net/trip-data/green_tripdata_2023-03.parquet
```

To **preprocess** the datasets, we will use one of the pre-defined scripts - preprocess_data.py

`! python preprocess_data.py --raw_data_path datasets/ --dest_path ./output`

What does this do? This command runs the `preprocess_data.py` Python script with two command-line arguments:

- `--raw_data_path`: Specifies the path to the raw data, which is set to `datasets/`.
- `--dest_path`: Specifies the destination path for the output of the script, which is set to `./output`; the script creates this directory automatically on run and saves the processed data into it.

Now if you run `! ls`, we should have:

```
wk2
|- datasets/
|- output/
|- homework.ipynb
|- hpo.py
|- preprocess_data.py
|- register_model.py
|- train.py
```

Now run `!ls output` and you have 4 pickle files -

```
dv.pkl
test.pkl
train.pkl
val.pkl
```

NB: Notice that the downloaded files follow a `{dataset}_tripdata_YYYY-MM.parquet` naming convention.

**Q3. Train a model with autolog**

The task is to modify the script to enable autologging with MLflow, execute the script and then launch the MLflow UI to check that the experiment run was properly tracked. For brevity, I will just show where modifications were made on the original train.py file.

3.1 Set Experiment Tracking

First, we set up our MLflow server, which allows us to track experiments, store results, and manage the machine learning models.

```
mlflow server --backend-store-uri sqlite:///mlflow.db --default-artifact-root ./artifacts --host 0.0.0.0
```

This command starts an MLflow tracking server with the following configurations:

- `--backend-store-uri sqlite:///mlflow.db`: Sets the backend store to a SQLite database located at `mlflow.db`.
- `--default-artifact-root ./artifacts`: Sets the default location for storing artifacts (like models and plots) to the `./artifacts` directory.
- `--host 0.0.0.0`: Binds the server to all public IPs (`0.0.0.0`), making it accessible from other machines.

Now, we make modifications to the training script.

```
# Set Tracking URI
mlflow.set_tracking_uri("http://127.0.0.1:5000")

# Set the experiment name
mlflow.set_experiment("sklearn-init")
```

This code is used to set up MLflow for tracking experiments:

1. `mlflow.set_tracking_uri("http://127.0.0.1:5000")`: This sets the tracking URI to the local server running at `http://127.0.0.1:5000`, which is where MLflow is listening for incoming tracking data.
2. `mlflow.set_experiment("sklearn-init")`: This sets the name of the experiment to "sklearn-init" in MLflow, under which all runs will be logged.

```
def run_train(data_path: str):
    # Enable autolog
    mlflow.sklearn.autolog()

    with mlflow.start_run():
        <Original training Commands>
```

The `run_train` function is designed to train a machine learning model and log the training process with MLflow:

1. `mlflow.sklearn.autolog()`: Automatically logs MLflow metrics, parameters, and models when training with **scikit-learn**.
2. `with mlflow.start_run()`: Starts a new MLflow run to track the training process within the block.

The placeholder `<Original training Commands>` is where the actual machine learning training commands would be placed. When this function is called with a data path, it will train the model and log all relevant information to MLflow.

Run `mlflow server` from the terminal to launch MLflow; you should see this:

`INFO:waitress:Serving on http://127.0.0.1:5000`

Now run `python train.py` and you should see something like:

```
2024/06/17 18:38:32 INFO mlflow.tracking.fluent: Experiment with name 'sklearn-init' does not exist. Creating a new experiment.
2024/06/17 18:38:32 WARNING mlflow.utils.autologging_utils: You are using an unsupported version of sklearn. If you encounter errors during autologging, try upgrading / downgrading sklearn to a supported version, or try upgrading MLflow.
2024/06/17 18:38:34 WARNING mlflow.sklearn: Failed to log training dataset information to MLflow Tracking. Reason: 'numpy.ndarray' object has no attribute 'toarray'
2024/06/17 18:39:03 WARNING mlflow.utils.autologging_utils: MLflow autologging encountered a warning: "c:\Users\buasc\OneDrive\Desktop\MLOps_24\MLOps_24\Lib\site-packages\_distutils_hack\__init__.py:26: UserWarning: Setuptools is replacing distutils."
c:\Users\buasc\OneDrive\Desktop\MLOps_24\MLOps_24\Lib\site-packages\sklearn\metrics\_regression.py:492: FutureWarning: 'squared' is deprecated in version 1.4 and will be removed in 1.6. To calculate the root mean squared error, use the function 'root_mean_squared_error'.
  warnings.warn(
```

![MLflow server](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jibml1dflqlqnrfltr4a.png)

![min_samples_split](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o6tm405ey98s8w85hfjr.png)

**Q5. Tune model hyperparameters**

Our task is to modify the script _hpo.py_ and make sure that the **validation RMSE** is logged to the tracking server for each run of the hyperparameter optimization (we will need to add a few lines of code to the _objective_ function) and run the script without passing any parameters.

"Note: Don't use autologging for this exercise."

Modifications:

```
def run_optimization(data_path: str, num_trials: int):
    # Enable autolog
    # mlflow.sklearn.autolog()
    with mlflow.start_run():
        <nested commands to retrieve pickle files>

        def objective(params):
            with mlflow.start_run(nested=True):
                <nested commands to generate rmse>
                mlflow.log_metric("rmse", rmse)
    ....
```

Run `python hpo.py` - this one will take a while to run, because we introduce hyper-parameter tuning. You should have something like this.

![hyper-params tuning](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jhthdx517yj8wxunvezc.png)

Now refresh your MLflow ui, you should have:

![random-forest-hyperopt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ifnra6g17yontan1n2sf.png)

Collapse the Run Name, to explore the various results.

![Collapsed Run](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/40czv0401tpayqhspxx7.png)

![rambunctious-hog-867 rmse: 5.335419588556921](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hkty5xnfpnmv8iww0ptq.png)

=> rambunctious-hog-867 (rmse) => rmse: 5.335419588556921

**Q6. Promote the best model to the model registry**

Our task is to update the script _register_model.py_ so that it selects the model with the lowest RMSE on the test set and registers it to the model registry.

Modifications:

```
with mlflow.start_run():
    new_params = {}
    for param in RF_PARAMS:
        # Coerce params to integers
        new_params[param] = int(params[param])
        params[param] = int(params[param])
```

```
# Select the model with the lowest test RMSE
experiment = client.get_experiment_by_name(EXPERIMENT_NAME)
best_run = client.search_runs(
    experiment_ids=experiment.experiment_id,
    run_view_type=ViewType.ACTIVE_ONLY,
    max_results=1,
    order_by=["metrics.test_rmse ASC"]
)[0]
```

1. `client.get_experiment_by_name(EXPERIMENT_NAME)`: Retrieves an experiment object by its name.
2. `client.search_runs(...)`: Searches for runs from the retrieved experiment, filtering for active runs only, limiting the results to one, and ordering them by the "test_rmse" metric in ascending order.

The variable `best_run` will hold the run with the lowest RMSE on the test set, indicating it's potentially the best-performing model.

```
# Register the best model
model_uri = f"runs:/{best_run.info.run_id}/model"
mlflow.register_model(model_uri, "best_random_forest_model")
```

1. `model_uri = f"runs:/{best_run.info.run_id}/model"`: Constructs the URI for the model from the best run's ID.
2. `mlflow.register_model(model_uri, "best_random_forest_model")`: Registers the model with the given URI under the name "best_random_forest_model" in MLflow's model registry, allowing us to version and manage our models systematically.

Run this script as we have done previously and you should have around **5.567** as your best performing RMSE.

That's it! Visit [wk2_submission](https://github.com/AkanimohOD19A/MLOps_24/tree/main/wk2) to review the code. Cheers! Comment below if there are any issues.
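As a quick offline sanity check of the Q6 selection logic: sorting runs by their test RMSE ascending and keeping the first element mirrors what `client.search_runs(..., order_by=["metrics.test_rmse ASC"], max_results=1)` does server-side. The run IDs and metric values below are hypothetical, standing in for MLflow's Run objects:

```python
# Hypothetical run records standing in for MLflow Run objects.
runs = [
    {"run_id": "a1b2", "metrics": {"test_rmse": 5.591}},
    {"run_id": "c3d4", "metrics": {"test_rmse": 5.567}},
    {"run_id": "e5f6", "metrics": {"test_rmse": 5.610}},
]

# Order by test RMSE ascending and keep the single best run,
# mirroring order_by=["metrics.test_rmse ASC"] with max_results=1.
best_run = sorted(runs, key=lambda r: r["metrics"]["test_rmse"])[0]

# Build the registry URI the same way register_model.py does.
model_uri = f"runs:/{best_run['run_id']}/model"
print(model_uri)  # → runs:/c3d4/model
```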
I'll be skipping the solutions for Wk3, as they are extensively covered in this [video tutorial](https://www.loom.com/share/802c8c0b843a4d3bbd9dbea240c3593a).
afrologicinsect
1,391,551
An operation-oriented API using PHP and Symfony
Introduction This post is a brief description about my first book that I have recently...
0
2024-06-17T18:41:25
https://dev.to/icolomina/an-operation-oriented-api-using-php-and-symfony-4p6d
api, symfony, operation, php
### Introduction

This post is a brief description of [my first book](https://www.amazon.com/dp/B0D79KGL7P), which I have recently published on Amazon KDP.

When developing an API, we usually tend to organize our API endpoints using the CRUD approach (the acronym for **CREATE**, **READ**, **UPDATE** and **DELETE**). In other words, we create an endpoint for each CRUD operation. As an example, let's see how we would organize a "blog-post" resource's endpoints:

```shell
GET /blog-post
GET /blog-post/{id}
POST /blog-post
PATCH /blog-post/{id}
DELETE /blog-post/{id}
```

The above endpoints relate to CRUD operations as follows:

- **HTTP GET** endpoints perform the **READ** operation.
- The **HTTP POST** endpoint performs the **CREATE** operation.
- The **HTTP PATCH** endpoint performs the **UPDATE** operation.
- The **HTTP DELETE** endpoint performs the **DELETE** operation.

That approach can be useful for basic operations, but what can we do if we want to represent more complex operations such as "ApproveOrder", "SendPayment", "SyncData", etc.? Let's analyze a way in the next section.

### An operation-oriented approach

In this approach, we consider the operation itself as a resource, so our endpoint would look like this:

```shell
POST https://<domain>/api/operation
```

The above endpoint would allow us to POST any kind of operation, and the logic behind it should perform the operation and return the result to the client. To be able to perform the operation, the endpoint should receive the operation to perform and the data required to perform it. Let's see a payload example:

```json
{
    "operation" : "SendPayment",
    "data" : {
        "from" : "xxxx",
        "to" : "yyyy",
        "amount" : 21.69
    }
}
```

As we can see in the above payload, the **operation** key specifies the operation we want to perform and the **data** key specifies the data required to perform it. After receiving this payload, our code should execute (at least) the following steps:

- Get the operation to execute from the input payload.
- Get the required data to perform the operation and validate it.
- Perform the operation.
- Return the result to the client.

How can Symfony help us code these steps? Let's see in the next sections:

### Get the operation to execute from the input payload

To get the operation to execute based on the received name, we should be able to get the operation from an "operation collection". This collection would receive the operation name and would return the operation handler. To build the collection, we can rely on the following Symfony attributes: [Autoconfigure](https://symfony.com/doc/current/service_container/tags.html#autoconfiguring-tags) and [TaggedIterator](https://symfony.com/doc/current/service_container/tags.html#reference-tagged-services):

- **Autoconfigure**: We can use it to apply a tag to all services which implement a concrete interface.
- **TaggedIterator**: We can use it to easily load a collection with all the services tagged with a specified tag.

```php
#[Autoconfigure(tags: ['operation'])]
interface OperationInterface
{
    public function perform(mixed $data): array;

    public function getName(): string;
}
```

The above interface uses the Autoconfigure attribute to specify that all services which implement such interface will be tagged as "operation" automatically.
```php
class OperationCollection
{
    /**
     * @var array<string, OperationInterface> $availableOperations
     **/
    private array $availableOperations = [];

    public function __construct(
        #[TaggedIterator('operation')]
        private readonly iterable $collection
    ){
        foreach($collection as $operation) {
            $this->availableOperations[$operation->getName()] = $operation;
        }
    }

    public function getOperation(string $name): OperationInterface
    {
        if(!isset($this->availableOperations[$name])) {
            throw new \RuntimeException('Operation not available');
        }

        return $this->availableOperations[$name];
    }
}
```

The above service uses the TaggedIterator attribute to load all services tagged as "operation" into the "$collection" iterable.

```php
class SendPaymentOperation implements OperationInterface
{
    public function perform(mixed $data): array
    {
        // operation logic
    }

    public function getName(): string
    {
        // here we return the operation name, e.g. "SendPayment"
    }
}
```

The above operation implements the OperationInterface, so it will be tagged as "operation". We would also need to get the request payload so that we can access the operation name and pass it to the collection.

```php
class InputData
{
    public function __construct(
        public readonly string $operationName,
        public readonly array $data
    ){}
}
```

```php
$payload = $request->getContent();
$input = $this->serializer->deserialize($payload, InputData::class, 'json');
$operation = $collection->getOperation($input->operationName);
```

The above code snippet uses the [Symfony serializer](https://symfony.com/doc/current/serializer.html) to deserialize the request payload to the InputData class. Then, we can pass the operation name to the collection's getOperation method to get the operation handler.

### Get the required data to perform the operation and validate it

The data required for each operation can vary, so each operation would require a different DTO to represent it.
For instance, let's write a model or DTO to represent the data required for a "SendPayment" operation.

```php
class SendPaymentInput
{
    public function __construct(
        #[NotBlank]
        public readonly string $sender,

        #[NotBlank]
        public readonly string $receiver,

        #[GreaterThan(0)]
        public readonly float $amount
    ){}
}
```

As you can see, the above model requires both sender and receiver to not be empty and the amount to be greater than 0. We will need to use the [Symfony serializer](https://symfony.com/doc/current/serializer.html) to deserialize the input data to the SendPaymentInput and the [Symfony validator](https://symfony.com/doc/current/validation.html) to validate the deserialized input.

Furthermore, we need a way to know that the "SendPayment" operation data must be validated using the above model. To do it, we can add another method to the OperationInterface to specify the data model.

```php
#[Autoconfigure(tags: ['operation'])]
interface OperationInterface
{
    public function perform(mixed $data): array;

    public function getName(): string;

    public function getDataModel(): string;
}
```

Then, we can denormalize the InputData data array to the corresponding operation data model.

```php
$payload = $request->getContent();
$input = $this->serializer->deserialize($payload, InputData::class, 'json');
$operation = $collection->getOperation($input->operationName);

$inputData = null;
if(!empty($input->data)) {
    $inputData = $this->serializer->denormalize($input->data, $operation->getDataModel());
    $this->validator->validate($inputData);
}
```

After the $operation is retrieved from the collection, we can denormalize the InputData data to the operation data model and validate it using the [Symfony validator](https://symfony.com/doc/current/validation.html).

### Perform the operation

Performing the operation is a really easy task. We only have to execute the perform method after checking whether the data is valid.
```php
// Rest of the code

if(!empty($input->data)) {
    $inputData = $this->serializer->denormalize($input->data, $operation->getDataModel());
    $this->validator->validate($inputData);
}

$output = $operation->perform($inputData);
```

### Return the result to the client

Here, we could use the [Symfony JsonResponse](https://symfony.com/doc/current/components/http_foundation.html#creating-a-json-response) class to return the operation result (which is returned as an array) to the client:

```php
return new JsonResponse($output);
```

## Conclusion

I think this can be an attractive option for organizing our APIs, and it can be perfectly compatible with other approaches. Another aspect I like about this approach is that you can focus on business actions, since there is no limit on how you can name your operations (**ApproveOrder**, **GenerateReport**, **ValidateTemplate**, **EmitBill**, ...).

If you want to know more about this, I leave you here the [link of the ebook](https://www.amazon.com/dp/B0D79KGL7P).
icolomina
1,891,629
🚀Top Crypto Gainers & Losers of The Week
The bullishness of the crypto market is waning, with the fear and greed index turning neutral and...
0
2024-06-17T18:38:24
https://dev.to/irmakork/top-crypto-gainers-losers-of-the-week-i0c
The bullishness of the crypto market is waning, with the fear and greed index turning neutral and Bitcoin prices declining. The global market cap and trading volume are also down. Despite this, some cryptocurrencies managed to gain, while others faced significant losses.

📈 Crypto Gainers of The Week

Oasis (ROSE): Oasis surged 19% this week, now at $0.1246. It's still 80% below its all-time high of $0.5964 set two and a half years ago.

Uniswap (UNI): Uniswap climbed from $8.89 to $11.51 this week, marking a 13% increase and a 58% rise over the month.

Notcoin (NOT): Notcoin added 12% this week, now at $0.0212, with an overall profit of 96% since its launch in May 2024.

📉 Crypto Losers of The Week

Wormhole (W): Wormhole saw a 32% drop this week, now at $0.4611, hitting an all-time low of $0.4397 recently.

FLOKI (FLOKI): FLOKI dropped 25% this week, now priced at $0.0002047, after recently hitting an all-time high of $0.0003462.

ORDI (ORDI): ORDI is down 24% this week to $45.67, after previously being at a high of $96.17, marking a 52% drop.

In this volatile market, declines offer buying opportunities, while surges present chances to sell.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mkfmfsxtlkyt6knypet9.png)
irmakork
1,891,628
Machine Learning Roadmap for Beginners ( If you have a Non-CS background like me😉)
So, you’re a non-CS background student like me, trying hard to switch to the machine learning/data...
27,754
2024-06-17T18:37:34
https://dev.to/shemanto_sharkar/machine-learning-roadmap-for-beginners-if-you-have-a-non-cse-background-like-me-7g3
machinelearning, datascience, python, deeplearning
So, you’re a non-CS background student like me, trying hard to switch to the machine learning/data science field because you see how the world is moving towards AI. To be honest, there is nothing wrong with your thinking. Back in my second year of university, I also realized that there were very few opportunities in the subject I was studying. I couldn’t get the opportunities, flexibility, and of course, salaries that I wanted in that field. So, I started to explore other fields with bright futures and opportunities. I decided to move to the data science field, finished a 5-month-long online course, and now I’m doing my last year thesis using the machine learning skills I have learned so far. Let’s be honest, data science/ML is a combination of coding, statistics, math, communication, and understanding business problems. If you’re preparing for a job right now, you need to learn both soft and hard skills. But if you’re a student and want to start learning ML right now, my suggestion is to focus on learning the hard skills (coding, math, doing projects, business theories). I will be writing this blog for students like me who are planning to learn ML alongside their academic studies and apply these skills in their field so that after graduation they can move to the DS/ML field completely. So, let's start...! ### Stage 1: Coding 👨‍💻 You need to start with a programming language. Python and R are the most popular languages in the data science field. You can go with either of them, but my suggestion is to go with Python. Python is really easy to learn and there is nothing that cannot be done with Python—web development, app development, data analysis. Python is like a One-Man Army! I totally love Python. 
**Topics:** - Variables, Numbers, Strings - Lists, Dictionaries, Sets, Tuples - If conditions, For loops - Functions, Lambda Functions - Modules (pip install) - Read, Write files - Exception handling - Classes, Objects These basic topics are enough for beginners to move to the next stage. ### Stage 2: Data Analysis and Visualization 📊 Learn: - Numpy - Pandas - Data Visualization Libraries (Matplotlib and Seaborn) ### Stage 3: Math, Statistics for Machine Learning 📐 **Topics to Learn:** - **Basics:** Descriptive vs. inferential statistics, continuous vs. discrete data, nominal vs. ordinal data - **Linear Algebra:** Vectors, Matrices, Eigenvalues, and Eigenvectors - **Calculus:** Basics of integral and differential calculus - **Basic Plots:** Histograms, pie charts, bar charts, scatter plots, etc. - **Measures of Central Tendency:** Mean, median, mode - **Measures of Dispersion:** Variance, standard deviation - **Probability Basics** - **Distributions:** Normal distribution - **Correlation and Covariance** - **Central Limit Theorem** - **Hypothesis Testing:** p-value, confidence interval, type 1 vs. type 2 error, Z-test ### Stage 4: Exploratory Data Analysis (EDA) 🔍 Time for projects! Use the skills you have learned so far and do some data analysis projects. EDA is extremely important in machine learning as it is necessary for data preprocessing. For datasets, go to Kaggle. ### Stage 5: Machine Learning 🤖 **Machine Learning: Preprocessing** - Handling NA values, outlier treatment, data normalization - One hot encoding, label encoding - Feature engineering - Train-test split - Cross-validation **Machine Learning: Model Building** - **Types of ML:** Supervised, Unsupervised - **Supervised:** Regression vs. 
Classification - Linear models: Linear regression, logistic regression, gradient descent - Nonlinear models (tree-based models): Decision tree, Random forest, XGBoost - **Model Evaluation** - Regression: Mean Squared Error, Mean Absolute Error, MAPE - Classification: Accuracy, Precision-Recall, F1 Score, ROC Curve, Confusion matrix - **Hyperparameter Tuning:** GridSearchCV, RandomSearchCV - **Unsupervised:** K-means, Hierarchical clustering, Dimensionality reduction (PCA) Do at least 2 end-to-end machine learning projects and deployment. For deployment, you can use Streamlit. For advanced learning, you must learn Python web development frameworks—Flask, FastAPI, Django. I suggest FastAPI. --- That’s it! Now you have enough skills to start doing projects and learn further on your own. Some of you may ask, "Shemanto, you didn't write about SQL, MLOps, and other advanced Python concepts." You’re right, I didn’t write them on purpose. You see, ML skill is specific knowledge. You cannot learn it just by watching tutorials. You have to spend less time consuming information and more time: - Digesting - Implementing - Sharing If you keep doing this, you can find out other stuff on your own. I just shared 5 stages that are easy to start with, and I think that is enough for a beginner to say, "I have machine learning skills." **Keep Learning!** 🚀 **🔗[Let's Connect on LinkedIn](https://www.linkedin.com/in/shemanto/)**
shemanto_sharkar
1,891,627
Scroll-snap property Exemple
This is an example of the Scroll-snap property.
0
2024-06-17T18:36:44
https://dev.to/tidycoder/scroll-snap-property-exemple-20eg
codepen
This is an example of the Scroll-snap property. {% codepen https://codepen.io/TidyCoder/pen/YzbYbdz %}
tidycoder
1,891,626
Fenwick Trees Aren't Fiendish
I've been participating in programming contests ever since I was a child. However, I've always had...
0
2024-06-17T18:36:41
https://dev.to/miguelx/how-i-befriended-segment-trees-4p8a
dsa, learning
I've been participating in programming contests ever since I was a child. However, I've always had this fear of advanced data structures and algorithms. To me, anyone discussing tricky combinatorics or Segment trees was like a magician; I would stand quietly next to such people and listen to them in awe, not understanding a damn thing. However, once I finished university, I decided to give all this magic another go and finally become a ~~sorcerer~~ better developer too. One might think, "All these data structures and algorithms (also called DSA, by the way) are rubbish, as they are never used in real programming." However, I have to disagree. You see, even though it's certainly not necessary to notice tricky use cases for most of the advanced DSA out there, they are useful in that they actually teach you how to think and to spot bigger patterns in general. I believe that over the past several years, I've become a quicker thinker simply because I've been practicing a lot of these very puzzles, be it on LeetCode or other websites I'll share with you. Nowadays, I can implement most of the basic algorithms without much thinking, similarly to how people use their native languages when they speak. Hence, I'm of the opinion that learning more advanced DSA may be beneficial to your overall self-confidence. To make this article more interesting, I'm going to take you on a journey with me, so you're in for a treat today. Not only will you learn something new about an advanced data structure, but I'll also give you an idea of how to approach learning DSA in general. ## What is a Fenwick Tree? Do you like trees or do they scare you? What's the first thing that comes to your mind when you hear this word? If your first instinct is those green leafy plants, you're a normal person – stop reading here and enjoy the nature outside. If you thought recursion is the answer, I'll advise you to think a bit more.
Perhaps there are some other kinds of trees that don't imply this concept? Well, I wouldn't be asking you this question if the answer was no. Sometimes it's easier to represent a tree as an array. Maybe you're already familiar with the Heap data structure. In this case, you know what I'm talking about. However, there are other examples as well, and one of them is the Fenwick tree. Before going any further, I advise you to take a look at one amazing [competitive programmer's handbook](https://cses.fi/book/book.pdf) available for free on the Internet. It'd be rather foolish of me to simply teach you something you can read there, especially because Antti Laaksonen, the author, has explained this data structure truly well. So, instead of rambling on and flexing my understanding of the Fenwick tree, I'll take you on a journey I recently took myself while learning this data structure. Still with me? Good, then go to the chapter called "Range Queries" and locate the section "Binary Indexed Tree." Incidentally, it's just another name for the Fenwick tree. Say you have an array `A = [2, 7, 6, 3, 5, 9, 0, 4]` of size _N_, and you need to find the sum from the first element to the fifth one, i.e. the sum of `[2, 7, 6, 3, 5]`. Of course, we can do that in linear time, i.e. _O(M)_, where _M_ is the size of our range, going through each element and summing up the total, but wouldn't it be neat to always get the answer in _O(logN)_ instead? And with a little preprocessing on the initial array, which takes _O(NlogN)_, the Fenwick tree allows us to do just that! What's more, your array doesn't have to stay the same during the whole process. In other words, you can update some of its elements in _O(logN)_ time and still get your answer in _O(logN)_ for any range sum query! Before I continue, I advise you to go through that chapter all by yourself, just like I did a few days ago. I bet you'll be surprised at how much you can understand even on the first go!
Still, there might be some things you can find a little bit tricky; at least, this was the case for me. Read on to find out how I managed to clear my doubts in the end. ## The challenges I faced Honestly, the explanation provided by the handbook was very clear, and I came up with my own implementation in Python pretty quickly: ```py class FenwickTree: def __init__(self, A: list[int]) -> None: self.n = len(A) self.tree = [0] * (self.n + 1) for k in range(1, self.n + 1): self.add(k, A[k - 1]) def add(self, k: int, x: int) -> None: while k <= self.n: self.tree[k] += x k += k & -k def sum(self, k: int) -> int: s = 0 while k >= 1: s += self.tree[k] k -= k & -k return s def range_sum(self, a: int, b: int) -> int: return self.sum(b) - self.sum(a - 1) ``` Basically, the only functions I had to implement by myself were `__init__` and `range_sum`, which are pretty intuitive anyway. Perhaps the hardest thing to understand was the _while_ loops, in which we do the following magic: `k += k & -k` and `k -= k & -k`. What's going on there? Well, first of all, let's get clear on the fact that `k & -k` is just a fancy way of getting the last set bit in our number `k`. For instance, if `k = 22`, then this is what we get in its binary representation: `k = 10110`. To represent `-k` in computer memory, we have to make use of the so-called two's complement code, which you get by inverting all bits of k and adding 1, like so: `-k = 01001 + 1 = 01010`. Now, when you take `k & -k`, it's clear that you'll get the last set bit in `k`: `k & -k = 10110 & 01010 = 00010`. Play around with some other examples to see why it always works. By the way, here's a [nice article](https://leetcode.com/explore/learn/card/bit-manipulation/669/bit-manipulation-concepts/4495/) that explains how negative numbers are stored in a computer. I refer to it whenever I have to clear some doubts. Anyway, why would we have to iteratively perform `k += k & -k` in the `add` method?
Just look at the diagram in the last example from the book. When you update the value `A[2]` of our initial array, there are three values that get updated in our Fenwick tree: `tree[3]`, `tree[4]`, `tree[8]`. And here's how we get their indices: ``` # First iteration k = 3 # Second iteration k = 3 + (3 & -3) = 3 + 1 = 4 # Third iteration k = 4 + (4 & -4) = 4 + 4 = 8 ``` The function `sum` is very similar; the only difference is that summing up requires us to go to the left rather than to the right, which is why we subtract `k & -k` from `k`. For example, to get `sum(7)`, which is the sum of the first 7 elements, we have to add up the following tree values: `tree[7]`, `tree[6]`, `tree[4]`. ``` # First iteration k = 7 # Second iteration k = 7 - (7 & -7) = 7 - 1 = 6 # Third iteration k = 6 - (6 & -6) = 6 - 2 = 4 ``` Well, what I just shared with you is an approach I usually take whenever I don't understand a certain algorithm: I try to go step by step and imitate it using pen and paper. Diagrams and pictures are usually of great help too, which is why I found the examples in the handbook particularly useful. ## How can you deepen your knowledge? Okay, writing and understanding the whole thing is great, but is there something else you could do before moving on? Usually, I'd suggest playing around with the code a bit to make sure you fully understand it. However, I don't think it's very useful here. Instead, it would be better to do some practice problems! Whenever I learn a new algorithm or a data structure, I prefer to practice it on a website called [CSES](https://cses.fi/problemset/list/). For example, you can go to the problem ["Static Range Sum Queries"](https://cses.fi/problemset/task/1646) and solve it using our new data structure. You'll be surprised that the problem that might have looked so scary to you in the past will seem pretty easy now. This feeling of understanding something complex is very satisfying! 
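Before writing a solution for the problem above, a quick way to convince yourself that the implementation works is a brute-force comparison. The snippet below is my own sanity check, not something from the handbook; it re-includes a copy of the `FenwickTree` class from earlier so it runs on its own:

```python
import random

# Self-contained copy of the FenwickTree class from above,
# so this snippet runs on its own.
class FenwickTree:
    def __init__(self, A):
        self.n = len(A)
        self.tree = [0] * (self.n + 1)
        for k in range(1, self.n + 1):
            self.add(k, A[k - 1])

    def add(self, k, x):
        while k <= self.n:
            self.tree[k] += x
            k += k & -k  # jump to the next node that covers index k

    def sum(self, k):
        s = 0
        while k >= 1:
            s += self.tree[k]
            k -= k & -k  # drop the last set bit to move left
        return s

    def range_sum(self, a, b):
        return self.sum(b) - self.sum(a - 1)

# The k & -k trick really does isolate the last set bit:
assert 22 & -22 == 2   # 10110 -> 00010
assert 7 & -7 == 1     # 00111 -> 00001
assert 12 & -12 == 4   # 01100 -> 00100

# Compare every possible range sum against a brute-force loop on random data.
A = [random.randint(-10, 10) for _ in range(50)]
tree = FenwickTree(A)
for a in range(1, len(A) + 1):
    for b in range(a, len(A) + 1):
        assert tree.range_sum(a, b) == sum(A[a - 1:b])
print("all range sums match")
```

If every assertion passes, the tree agrees with the naive approach on all ranges, which is a good sign before submitting anything.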
There are only a few lines you have to add to solve the above problem: ```py from sys import stdin class FenwickTree: # ... n, q = map(int, stdin.readline().split()) A = [int(x) for x in stdin.readline().split()] tree = FenwickTree(A) for _ in range(q): a, b = map(int, stdin.readline().split()) print(tree.range_sum(a, b)) ``` ## Remembering what you've learned Now some of you might be wondering: "How do I make sure I won't forget this new data structure in the future?" I suggest focusing on just one thing: its use case. In other words, the only thing you need to remember is that the Fenwick tree is useful for handling a lot of range sum queries, and it also allows updates to the initial array. Don't worry about memorizing the code, since you can simply save this data structure somewhere and refer to it as a ready tool. Seriously, no one would ever ask you to implement it from scratch, unless you participate in a competitive programming contest that doesn't allow you to look up your code snippets. But let's be honest, is it really a likely situation? Besides, if you truly get what's going on under the hood in the Fenwick tree, you're likely to recreate its implementation simply from your understanding. It's very similar to how we are able to write a letter when we already know what it's going to say. ## What's next? Great! We just learned a pretty advanced data structure, and hopefully, it didn't even feel that scary to you. Now you might feel more confident, ready to confront other advanced DSA, and to practice harder programming problems. Even though some of you might still not be quite convinced as to why all these DSA matter, I'm pretty sure that over time, I'll shed more light on this question and inspire you to become a more well-rounded developer. After all, it's the exact path I'm on right now too.
miguelx
1,891,625
Preventing IDM: A Tactical Guide to Protecting Your Video Content on Your Website
Introduction Ever put your heart and soul into making a stunning video only to see it...
0
2024-06-17T18:35:06
https://dev.to/kareem-khaled/blocking-idm-downloads-a-tactical-guide-to-protecting-your-video-content-on-website-3ilo
security, javascript, webdev
## Introduction Ever put your heart and soul into making a stunning video only to see it being ripped and shared all over the internet without your permission? If you're nodding, then IDM is probably familiar to you. This powerhouse of a tool might be a godsend for users eager to download videos, but it is surely a bane for creators like us who want to protect our treasured content from theft. ## Now, here's the challenge: A website has no direct access to your system's processes, so we can't simply ask, "Hey, is IDM running?" But, fellow creators, fear not! We're about to outsmart IDM a little with some clever code and tactical maneuvering. ## Tell-tale Sign of IDM: The Video Tag Like most download managers, IDM isn't subtle when targeting a video. It leaves a fingerprint in the form of a custom attribute, "**__idm_id__**", on the video tag. This is how IDM keeps track of which videos to snatch. But that very attribute is its Achilles' heel, and we're going to exploit it. ``` function checkForIDM() { const videos = document.getElementsByTagName('video'); for (const video of videos) { if (video.hasAttribute('__idm_id__')) { return true; // IDM is likely running } } return false; // IDM not detected (yet) } ``` ### One of our Lines of Defense: The IDM Detector This JavaScript snippet will check your page for the presence of the "**__idm_id__**" attribute once every second. If it is found, we know IDM is on the prowl. ``` let idmCheckInterval; function startIDMCheck() { idmCheckInterval = setInterval(() => { if (checkForIDM()) { clearInterval(idmCheckInterval); blockContent(); // (see next snippet) } }, 1000); // Check every second } startIDMCheck(); // Begin monitoring immediately ``` ### Block and Redirect: When IDM Strikes If IDM is detected, we need to act fast.
Here's how we'll block the content and give the user clear instructions: ``` function blockContent() { document.body.innerHTML = ` <h1>IDM Detected</h1> <p>Please disable IDM and reload the page to view this content.</p> <button onclick="location.reload()">Reload</button> `; } ``` ### The Bait-and-Switch: Protecting Your Video URL But we're not done yet. Even though we block IDM, it might already have captured your video URL. To counter this, we're going to do a little trickery. 1. When the page loads, embed your video with a fake URL (like some very short, silent video). 2. Start a 5-second timer. 3. If there is no IDM activity within those 5 seconds, we know we are in the clear. 4. Swap in your actual video source. ``` const realVideoSrc = "your-actual-video.mp4"; const fakeVideoSrc = "dummy-video.mp4"; function swapVideoSrc() { const video = document.querySelector('video'); video.src = realVideoSrc; } setTimeout(() => { if (!checkForIDM()) { swapVideoSrc(); } }, 5000); // Replace after 5 seconds if safe ``` ### Don't Forget... This is a good base, but the fight against downloaders is never really won. You may need to tune these techniques from time to time as new tools are invented. ## Let's Chat! I'd love to hear your thoughts on this approach. Have you faced similar challenges? Do you have other tricks up your sleeve? Feel free to drop a comment below or reach out to me via email at [karemkhaled945@gmail.com](mailto:kareem_khaled@t-horizons.com). Let's geek out about video protection together!
kareem-khaled
1,891,624
Say no to console.log!
Do you always use console.log in your projects during development? And even though we will keep...
0
2024-06-17T18:34:53
https://dev.to/alishgiri/say-no-to-consolelog-556n
javascript
Do you always use `console.log` in your projects during development? And even though we will keep using `console.log`, there are other alternatives that will make your development more fun and productive. &nbsp; > NOTE: If you are viewing log in a terminal especially if you are a backend developer then you can try `JSON.stringify(your_data, null, 2)` and use `console.log()` on the result. This will make sure that you don't get `{... value: [Object, Object]}` in the log and the log will also be formatted making it easier to read. &nbsp; ## console.dir() For hierarchical listing of arrays and objects. ```javascript console.dir(["apples", "oranges", "bananas"]); ``` ![console.dir example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ksvxyavsmejk1eb73u5.png) &nbsp; ## console.table() For rows and columns listing of arrays (might not be suitable for objects). ```javascript console.table(["apples", "oranges", "bananas"]); ``` ![console.table array example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zwzq9uf4t643lorq1qbu.png) ```javascript console.table({"a": 1, "b": 2, "c": 3}); ``` ![console.table object example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r45pzliuaaxbx4preo4k.png) &nbsp; ## console.group() ```javascript console.log('This is the top outer level'); console.group('Task 1'); console.log('Task activity 1'); console.log('Task activity 2'); console.groupEnd(); console.group('Task 2'); console.log('Task activity 3'); console.log('Task activity 4'); console.groupEnd(); console.log('Back to the top outer level'); ``` ![console.group example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qomtbv7ifve72l51hino.png) &nbsp; ## console.time() & console.timeEnd() ```javascript try { console.time("record-1"); await someAsyncTask(); } catch (error) { // handle error } finally { console.timeEnd("record-1"); } ``` ![console.time log](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d102ywfw05lmnp69lc0l.png) &nbsp; 
## console.clear() This will clear the console. &nbsp; I hope this was helpful! 🚀
alishgiri
1,891,623
i got followers here a lot and quickly !
this is actually so crazy !!! thank you all so much ! i'll try to not let you down and...
0
2024-06-17T18:33:04
https://dev.to/tonic/i-got-followers-here-a-lot-and-quickly--5b5i
community, devto, opensource
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6voqhc2a3oj60bjniggu.png) this is actually so crazy !!! ## thank you all so much ! i'll try to not let you down and keep the cool oss / ai / enterprise / fintech (not crypto, just normal) projects coming and make life on earth !
tonic
1,891,622
Simple Reddit App
Check out this Pen I made!
0
2024-06-17T18:32:18
https://dev.to/sreenathvadlamudi/simple-reddit-app-3f04
codepen, webdev, javascript, html
Check out this Pen I made! {% codepen https://codepen.io/sreenathvadlamudi/pen/qBGpGaJ %}
sreenathvadlamudi
1,891,621
How to Create a Basic Notes App using React Native and Expo
Using create-expo-app simplifies the setup process for a React Native project. Expo provides a...
0
2024-06-17T18:30:06
https://dev.to/sh20raj/how-to-create-a-basic-notes-app-using-react-native--397d
javascript, androiddev, webdev, android
Using `create-expo-app` simplifies the setup process for a React Native project. Expo provides a managed workflow that handles many of the complex configuration steps for you. Let's build a note-taking app using Expo. ### Step 1: Set Up Your Project 1. Open VS Code and open the terminal (you can open the terminal by pressing `Ctrl + (backtick)`). 2. Ensure you are in your desired directory. If not, navigate to it using `cd your-directory`. 3. Initialize a new Expo project: ```bash npx create-expo-app NotesApp cd NotesApp ``` ### Step 2: Install Dependencies We'll use React Navigation for navigating between screens. Run the following commands in the terminal to install the necessary packages: ```bash npm install @react-navigation/native @react-navigation/native-stack npm install react-native-screens react-native-safe-area-context ``` ### Step 3: Create Screens In the `src` folder, create a directory named `screens`. Inside the `screens` directory, create two files: `HomeScreen.js` and `NoteScreen.js`. 
#### HomeScreen.js ```javascript import React, { useState } from 'react'; import { View, Text, Button, FlatList, TouchableOpacity, StyleSheet } from 'react-native'; const HomeScreen = ({ navigation }) => { const [notes, setNotes] = useState([]); const addNote = () => { navigation.navigate('Note', { saveNote: (note) => setNotes([...notes, note]), }); }; return ( <View style={styles.container}> <Button title="Add Note" onPress={addNote} /> <FlatList data={notes} keyExtractor={(item, index) => index.toString()} renderItem={({ item }) => ( <TouchableOpacity style={styles.note}> <Text>{item}</Text> </TouchableOpacity> )} /> </View> ); }; const styles = StyleSheet.create({ container: { flex: 1, padding: 20, }, note: { padding: 15, borderBottomWidth: 1, borderColor: '#ccc', }, }); export default HomeScreen; ``` #### NoteScreen.js ```javascript import React, { useState } from 'react'; import { View, TextInput, Button, StyleSheet } from 'react-native'; const NoteScreen = ({ route, navigation }) => { const [note, setNote] = useState(''); const saveNote = () => { route.params.saveNote(note); navigation.goBack(); }; return ( <View style={styles.container}> <TextInput style={styles.input} placeholder="Enter note" value={note} onChangeText={setNote} /> <Button title="Save Note" onPress={saveNote} /> </View> ); }; const styles = StyleSheet.create({ container: { flex: 1, padding: 20, }, input: { height: 40, borderColor: '#ccc', borderWidth: 1, marginBottom: 20, paddingLeft: 10, }, }); export default NoteScreen; ``` ### Step 4: Set Up Navigation Modify the `App.js` file to set up the navigation: ```javascript import * as React from 'react'; import { NavigationContainer } from '@react-navigation/native'; import { createNativeStackNavigator } from '@react-navigation/native-stack'; import HomeScreen from './src/screens/HomeScreen'; import NoteScreen from './src/screens/NoteScreen'; const Stack = createNativeStackNavigator(); const App = () => { return ( <NavigationContainer> 
<Stack.Navigator initialRouteName="Home"> <Stack.Screen name="Home" component={HomeScreen} /> <Stack.Screen name="Note" component={NoteScreen} /> </Stack.Navigator> </NavigationContainer> ); }; export default App; ``` ### Step 5: Run the Application Finally, run your application: ```bash npx expo start ``` ### Additional Enhancements 1. **Styling**: Use `StyleSheet` from React Native to style your components more elegantly. 2. **Persistent Storage**: Use libraries like `@react-native-async-storage/async-storage` to persist notes. 3. **Editing Notes**: Implement functionality to edit and delete notes. This setup provides you with a basic note-taking app using Expo. You can further enhance it by adding more features like note categorization, search functionality, or synchronization with a backend service.
sh20raj
1,891,619
Implementing an Email Delivery Service with Cloudflare Workers
Without using an SMTP server or third-party mail forwarding services, only through Cloudflare...
0
2024-06-17T18:28:59
https://dev.to/georgech2/implementing-an-email-delivery-service-with-cloudflare-workers-1mcd
email, javascript, tutorial, programming
Without using an SMTP server or third-party mail forwarding services, you can implement a mail delivery service for free with just Cloudflare Workers and Email Routing. Cloudflare Workers and Email Routing are both available on a free plan, which is more than enough for individual users. ## Prepare Register for a Cloudflare account ## Enable Cloudflare Email Routing Refer to the official documentation to enable it. ## Deploy Workers * clone/download https://github.com/alivefree/cf-email-workers ``` javascript import { Hono } from "hono" import { cors } from "hono/cors" import { EmailMessage } from "cloudflare:email" import { createMimeMessage } from "mimetext" const worker = new Hono(); // cors worker.use('*', (c, next) => { const origins = c.env.ALLOWED_ORIGINS == '*' ? '*' : c.env.ALLOWED_ORIGINS.split(','); const corsMiddleware = cors({ origin: origins }); return corsMiddleware(c, next); }); // Mail sending interface, can be modified to any api you want worker.post('/send', async (c) => { const text = await c.req.text(); const body = JSON.parse(text); if (!body['subject'] || !body['body']) { c.status(400) return c.json({ "status": "error", "message": "Missing subject or body" }) } const msg = createMimeMessage() msg.setSender({ name: c.env.SENDER_NAME, addr: c.env.SENDER_ADDRESS }) msg.setRecipient(c.env.RECIPIENT_ADDRESS) msg.setSubject(body['subject']) msg.addMessage({ contentType: 'text/html', data: body['body'] }) const message = new EmailMessage( c.env.SENDER_ADDRESS, c.env.RECIPIENT_ADDRESS, msg.asRaw() ); try { // The SEB here comes from the send_email configuration in wrangler.toml await c.env.SEB.send(message) } catch (e) { c.status(500) return c.json({ "status": "error", "message": "Email failed to send", "error_details": e.message }); } return c.json({ "status": "success", "message": "Email sent successfully" }); }); export default worker; ``` * Modify the custom configuration (wrangler.toml) ``` toml # The name configuration here corresponds to c.env.SEB.send in the code, which
needs to be synchronized. send_email = [ {type = "send_email", name = "SEB", destination_address = "xxx"}, ] [vars] ALLOWED_ORIGINS = "*" RECIPIENT_ADDRESS = "*" SENDER_ADDRESS = "*" SENDER_NAME = "*" ``` * deploy to Cloudflare Workers ```bash $ npm install -g wrangler $ wrangler login $ wrangler deploy ``` ## Call After successful deployment, other projects can call the API to send emails. API endpoint: `<worker domain>/send` ``` javascript fetch('https://example.workers.dev/send', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ subject: 'subject', body: 'email content' }) }) ``` You can modify the content of the body to customize the email content and make it more aesthetically pleasing. The Email Worker code deployed in this article has been open-sourced on [GitHub](https://github.com/alivefree/cf-email-workers).
georgech2
1,891,618
Connect Spring Boot with MySQL
Hello everyone, In this tutorial I will explain the process I followed in order to connect Spring...
0
2024-06-17T18:27:52
https://dev.to/georgiosdrivas/connect-spring-boot-with-mysql-5cei
mysql, springboot, java, backend
Hello everyone, In this tutorial I will explain the process I followed to connect Spring Boot with MySQL, in order to create an API for my Front-End. ## Prerequisites: - IDE (I use IntelliJ IDEA so this tutorial will be based on that) - MySQL Workbench Click [here](https://github.com/GeorgiosDrivas/valueMe-Backend) for the source code. ## Create a Spring Boot project using Spring Initializr Visit [start.spring.io](https://start.spring.io/) and select: Project: Maven Language: Java Spring Boot: 3.3.0 Fill in the remaining fields with your own values Packaging: JAR Java: 17 As for dependencies, we will need: - MySQL Driver - Spring Web - Spring Data JPA After these, the initializr should look like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/stprilwqd7ysmdto2ggj.png) Click Generate and save the folder in your desired path and extract the folder's content. ## IntelliJ and MySQL configuration First of all, create a database in MySQL. I used MySQL Workbench for this. Even the simplest database will work, just like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/350olca4qzpmobtkr3zl.png) Open the folder's content in your desired IDE. I will cover this tutorial using IntelliJ IDEA. Open the application.properties file, which is located at src/main/resources/application.properties. In this file, we configure the settings that will help us connect to our database. Write these settings in the file: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lfh990gdu59z8hjafoco.png) Replace ${DB_NAME}, ${DB_USER}, ${DB_PASSWORD} with your database's credentials. These settings will help us connect with the database we created: ``` spring.jpa.show-sql=true: ``` This enables the logging of SQL statements generated by Hibernate. When set to true, Hibernate will print the SQL statements to the console.
``` spring.jpa.hibernate.ddl-auto=update: ``` This setting is used to automatically update the database schema to match the entity definitions. The value update means that Hibernate will update the existing schema, adding any new columns or tables required by the entity mappings. ``` logging.level.org.hibernate.SQL=DEBUG: ``` This sets the logging level for the Hibernate SQL logger to DEBUG. It will provide detailed information about the SQL statements being executed. ``` logging.level.org.hibernate.type.descriptor.sql.BasicBinder=TRACE: ``` This sets the logging level for the Hibernate type descriptor SQL binder to TRACE. This will log detailed information about the binding of parameters in SQL statements. ``` spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver: ``` This specifies the JDBC driver class name for MySQL. It tells Spring Boot which driver to use for establishing the connection to the database. ``` spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQLDialect: ``` This sets the Hibernate dialect to MySQLDialect, which is optimized for MySQL. It allows Hibernate to generate SQL statements that are compatible with MySQL. Now, create a sub-package in the main package of your project, and call it "model". Inside, create a class calling it however you want, in my case I will call it Users. 
```
package com.evaluation.evaluationSystem.model;

import jakarta.persistence.*;

@Entity
@Table(name = "users")
public class Users {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    private Long id;

    @Column(name = "email")
    private String email;

    @Column(name = "password")
    private String password;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    public String getPassword() {
        return password;
    }

    public void setPassword(String password) {
        this.password = password;
    }
}
```

In this file we define a JPA entity, Users, which is mapped to the database table users. The class includes fields for id, email, and password that correspond to columns in the users table, so make sure the fields align with the columns of your database.

Moving on, create another sub-package called "controller" and create a file in it.

```
package com.evaluation.evaluationSystem.controller;

import com.evaluation.evaluationSystem.model.Users;
import com.evaluation.evaluationSystem.repository.UserRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

import java.util.List;
import java.util.Optional;

@RestController
public class UsersController {

    @Autowired
    private UserRepository userRepository;

    @GetMapping("/users")
    public List<Users> getUsers(@RequestParam("search") Optional<String> searchParam) {
        return searchParam.map(param -> userRepository.getContainingQuote(param))
                .orElse(userRepository.findAll());
    }
}
```

In this file, we define a RESTful API endpoint (/users) that can optionally filter Users entities based on a search parameter. It uses UserRepository for database interaction and returns results in JSON format thanks to the @RestController annotation. Replace "/users" with any endpoint you want.
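The `searchParam.map(...).orElse(...)` line in the controller is worth a closer look: when the `?search=` query parameter is present, the filtered repository query runs; otherwise every row is returned. The self-contained sketch below shows the same pattern with a plain in-memory list standing in for the repository (the class and variable names here are illustrative, not part of the tutorial's code):

```java
import java.util.List;
import java.util.Optional;

public class SearchParamDemo {

    // Stand-in for the repository call: filter when a search term is present,
    // otherwise return the full list -- the same map/orElse shape as the controller.
    static List<String> getUsers(Optional<String> searchParam, List<String> allUsers) {
        return searchParam
                .map(term -> allUsers.stream()
                        .filter(email -> email.contains(term))
                        .toList())
                .orElse(allUsers);
    }

    public static void main(String[] args) {
        List<String> all = List.of("alice@example.com", "bob@example.com");
        System.out.println(getUsers(Optional.of("alice"), all)); // [alice@example.com]
        System.out.println(getUsers(Optional.empty(), all));     // both users
    }
}
```

Note that `Optional.map` returns an `Optional` wrapping the filtered list, so `orElse` supplies the unfiltered list only when no search term was given — exactly the branching the controller needs without an explicit `if`.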
Create one more (the last) sub-package called "repository" and create an interface in it (be careful: an interface, not a class).

```
package com.evaluation.evaluationSystem.repository;

import com.evaluation.evaluationSystem.model.Users;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

import java.util.List;

public interface UserRepository extends JpaRepository<Users, Long> {

    @Query("SELECT u FROM Users u WHERE u.email LIKE %:word%")
    List<Users> getContainingQuote(@Param("word") String word);
}
```

In this file, we define the query that allows us to retrieve the data from the database. Make sure to edit it based on your needs. The query is written in JPQL (Java Persistence Query Language), a query language defined as part of the Java Persistence API (JPA) specification, used to perform database operations on Java objects and entities.

Your final folder structure should look like this:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zsz8cyc6w2q3rpky3vaz.png)

Now, navigate to the main file (in my case, EvaluationSystemApplication) and run the project. If everything works well, visiting localhost:8080/users (or the endpoint you chose) will display your data from the database. Make sure the users table contains some data first.

## Conclusion

I hope this tutorial helped you. I am new to this environment too, so I am learning as I go. Every comment and suggestion is more than welcome! Feel free to drop a follow on my [GitHub account](https://github.com/GeorgiosDrivas) to stay updated on my journey to develop a full-stack web app using Spring Boot, MySQL and React!
georgiosdrivas
1,891,614
Not All Market Research Studies Need to Have Real-Time/Live Data Reporting!
There’s a surge in demand for “Real Time/Live Data Reporting,” often utilizing Power BI Dashboards as...
0
2024-06-17T18:21:40
https://dev.to/glasgow_insights/not-all-market-research-studies-need-to-have-real-timelive-data-reporting-2il0
realtime
There’s a surge in demand for “Real-Time/Live Data Reporting,” often utilizing Power BI dashboards as part of market research deliverables. Power BI dashboards are fantastic for bringing data to life, allowing users to customize with minimal training. They’re visual, and as they say, “a picture speaks a thousand words.” In the “Age of Data,” such technologies have made data easily accessible and understandable, with real-time updates being a game-changer in data-driven decision-making. Read the full post: [Not All Market Research Studies Need to Have Real-Time/Live Data Reporting](https://www.glasgowinsights.com/blog/not-all-market-research-studies-need-to-have-real-time-live-data-reporting/)

Industries like Retail, BFSI, Healthcare, Travel, Hospitality, Logistics, Manufacturing, and even public entities have embraced this trend. However, not all market research studies may need real-time updates, even continuously run tracker studies.

First, let’s define tracker studies. There are two types of market research studies – trackers and ad hocs. Trackers are continuously run market monitors which provide businesses with a steady stream of intelligence (like Mystery Shopping), while ad hocs (also called need-based, custom, or dip-stick studies) are built to answer specific business questions (like UX studies).

Now, circling back to real-time/live data reporting: for Mystery Shopping studies, data is never reported in real time due to the study’s design construct (a pre-decided survey questionnaire, data analyzed and reported at a collated level and viewed by pre-designated reporting heads, etc.). Investing in real-time reporting should only be done if there’s a genuine need and capability for immediate action. For instance, a company with multiple coffee shops may conduct Mystery Shopping to assess staff performance and customer experience. But does it really need real-time/live data reporting? Before opting for real-time reporting, stakeholders should justify its absolute necessity.
If it is not warranted, Power BI dashboards along with tools like Excel PivotTables and online dashboards can provide efficient monitoring and decision-making capabilities.

A final comment – a dashboard is neither an insight nor a strategy toolkit. Invest wisely to maximize business benefits.

If you found this interesting, please do reach out to us to see how we can help you drive data-driven business growth.

Contact Us

Office No 6, Unit 402, Level 4, Crystal Tower, Business Bay, PO Box 445190, Dubai, United Arab Emirates

Mobile: +971 55 9744360 | Phone: +971 4 566 8869

Website: [www.glasgowinsights.com](https://www.glasgowinsights.com)
glasgow_insights
1,891,616
Top Reasons Why Your Business Needs a Salesforce Data Cloud Consultant
Introduction to Salesforce Data Cloud Consulting We all aim for success and want to see...
0
2024-06-17T18:27:58
https://www.sfapps.info/why-hire-salesforce-data-cloud-consultant/
blog, industries
---
title: Top Reasons Why Your Business Needs a Salesforce Data Cloud Consultant
published: true
date: 2024-06-17 18:19:07 UTC
tags: Blog,Industries
canonical_url: https://www.sfapps.info/why-hire-salesforce-data-cloud-consultant/
---

## Introduction to Salesforce Data Cloud Consulting

We all aim for success and want to see our companies thrive. But with an overwhelming amount of information, how do we manage it all without feeling swamped? The answer lies in using Salesforce Data Cloud. And to make it truly effective, you need the expertise of a Salesforce Data Cloud Consultant.

[Salesforce Data Cloud](https://www.salesforce.com/eu/data/) offers powerful tools for data management, but maximizing its potential often requires specialized expertise. This is where a Salesforce Data Cloud consultant comes in. These professionals can transform how your business manages, analyzes, and utilizes data, driving better decision-making and enhancing overall performance.

Hiring a Salesforce Data Cloud consultant and [Pardot marketing automation consultant](https://www.sfapps.info/salesforce-pardot-consultant-for-professional-services/) can provide your business with a competitive edge. They bring a wealth of knowledge and experience that can help you handle the complexities of data management. By optimizing the use of Salesforce Data Cloud, these consultants ensure that your business processes are more efficient, your data insights are sharper, and your overall strategy is data-driven.

So, let’s explore the key reasons why hiring a Salesforce Data Cloud consultant can be a game-changer for your business.
![Reasons to Hire Salesforce Data Cloud Consultant with Cert](https://www.sfapps.info/wp-content/uploads/2024/06/Reasons-to-Hire-Salesforce-Data-Cloud-Consultant-1-1024x536.png "Reasons to Hire Salesforce Data Cloud Consultant with Cert")

## Reason #1: Expertise and Experience

One of the primary reasons to hire a Salesforce Data Cloud consultant is their expertise and experience. These professionals have in-depth knowledge of Salesforce Data Cloud’s capabilities and best practices. They are well-versed in data integration, management, and analytics, enabling them to tailor solutions that meet your specific business needs. Their experience with various industries and business models allows them to provide insights and strategies that you might not have considered.

### Insight:

To qualify as a Salesforce Data Cloud consultant, individuals must successfully pass a comprehensive [certification exam](https://trailhead.salesforce.com/help?article=Salesforce-Certified-Data-Cloud-Consultant-Exam-Guide). This exam includes 60 multiple-choice and multiple-select questions, along with up to five additional non-scored questions. Candidates are allotted 105 minutes to finish the exam and need to secure a score of at least 62% to pass. The cost for registering for the exam is USD 200, plus any applicable local taxes. If a retake is required, the fee is USD 100, plus applicable taxes. Candidates have the option to take the exam either at a proctored testing center or through an online proctored environment, offering convenient options to suit different needs.

Common reasons to opt for the services of Salesforce Data Cloud consultants include:

- **Deep Understanding of Salesforce Data Cloud:** A consultant’s expertise ensures that you are leveraging the full potential of Salesforce Data Cloud. They understand the intricacies of the platform, including its advanced features and functionalities, which can be complex for those without specialized training.
- **Industry-Specific Insights:** Salesforce Data Cloud consultants often have experience across various industries, allowing them to bring industry-specific insights to your business. This means they can recommend best practices and solutions that are proven to work within your industry.
- **Avoiding Common Pitfalls:** With their extensive experience, consultants can help you avoid common mistakes and pitfalls that can occur during the implementation and use of Salesforce Data Cloud. This can save your business significant time and resources.
- **Optimizing Performance:** A consultant’s expertise ensures that your Salesforce Data Cloud is set up and configured for optimal performance. They can identify and address any issues that might be slowing down your processes or affecting your data quality.

## Reason #2: Tailored Solutions

Another significant advantage of hiring a Salesforce Data Cloud consultant is their ability to provide solutions tailored to your unique business needs. These professionals know that no two businesses are the same and that a one-size-fits-all approach doesn’t work for data management.

- **Customized Implementation:** Salesforce Data Cloud consultants, who often also work as [offshore Salesforce Commerce Cloud consultants](https://www.sfapps.info/benefits-of-hiring-offshore-salesforce-commerce-cloud-consultants/), take the time to understand your specific requirements and business goals. They customize the implementation of Salesforce Data Cloud to ensure it aligns perfectly with your processes and objectives, making sure it fits like a glove.
- **Scalability:** Consultants make sure the solutions they implement can grow with your business. As your business expands, your data needs will change. A skilled consultant will design your Salesforce Data Cloud setup to adapt to these changes, ensuring long-term efficiency and effectiveness.
- **Integration with Existing Systems:** One of the key roles of a consultant is to ensure seamless integration of Salesforce Data Cloud with your existing systems. They have the know-how to connect Salesforce with other tools and platforms your business uses, creating a unified and efficient data ecosystem.
- **Personalized Training and Support:** Consultants don’t just set up the system and leave. They provide personalized training to your staff, ensuring they are comfortable and proficient with the new system. They also offer ongoing support to address any issues or questions that arise, helping your team fully leverage the power of Salesforce Data Cloud.

## Reason #3: Improved Data Quality and Accuracy

Maintaining high data quality and accuracy is essential for making informed business decisions. Salesforce Data Cloud consultants play a crucial role in ensuring that your data is reliable and precise. Common tasks of Data Cloud experts include:

- **Data Cleansing:** Over time, data can become messy and filled with inaccuracies. Consultants perform thorough data cleansing to remove duplicates, correct errors, and standardize formats. This process ensures that the data you rely on is accurate and consistent.
- **Data Validation:** Consultants set up robust data validation rules within Salesforce Data Cloud. These rules automatically check for errors and inconsistencies when data is entered, preventing issues before they can affect your business operations.
- **Enhanced Data Integration:** Integrating data from various sources can often lead to discrepancies and errors. A Salesforce Data Cloud consultant ensures that data integration processes are smooth and error-free, leading to more accurate and reliable data.
- **Regular Audits and Monitoring:** To maintain high data quality over time, consultants establish regular audits and monitoring processes. These checks help identify and address any issues promptly, ensuring your data remains accurate and up-to-date.
- **Training on Best Practices:** Consultants also train your team on best practices for data entry and management. This knowledge transfer helps prevent future data quality issues and ensures your team is equipped to maintain high standards.

Looking to hire a Salesforce Data Cloud Consultant? Get in touch with our parent company!

[Explore More](https://mobilunity.com/tech/hire-salesforce-developers/)

[![](https://www.sfapps.info/wp-content/uploads/2024/05/banner-3-svg.svg)](https://www.sfapps.info/wp-content/uploads/2024/05/banner-2-icon.svg)

## Reason #4: Cost Efficiency

Hiring a Salesforce Data Cloud consultant can lead to significant cost savings for your business. While it may seem like an additional expense, the return on investment can be substantial when considering the efficiency and effectiveness brought by these professionals.

- **Reduced Implementation Time:** Consultants streamline the setup and implementation process, reducing the time and resources required. This minimizes downtime and gets your system up and running quickly, allowing you to start benefiting from Salesforce Data Cloud sooner.
- **Avoiding Costly Mistakes:** With their expertise, consultants help you avoid common pitfalls and mistakes that can lead to costly fixes. They ensure that your system is set up correctly from the start, which can save you money in the long run.
- **Optimized Resource Allocation:** Consultants can identify areas where you might be overspending or underutilizing resources. They help optimize your Salesforce Data Cloud setup to ensure you are getting the most value out of your investment.
- **Scalable Solutions:** Consultants design solutions that grow with your business, preventing the need for expensive overhauls as your data needs expand. This scalability ensures that you only pay for what you need when you need it.
- **Training and Support:** By providing thorough training and ongoing support, consultants reduce the need for additional hiring or external help.
Your team becomes proficient in managing the system, which leads to long-term cost savings.

### Insight:

Did you know that the average hourly rate of a Salesforce consultant in the US is [USD 93.4](https://www.glassdoor.com/Salaries/salesforce-consultant-salary-SRCH_KO0,21.htm) (June 2024, Glassdoor)? And did you know that hiring offshore consultants in Eastern Europe can reduce costs by up to 60% without any loss in quality or in the ability to provide top-level consultancy?

## Reason #5: Enhanced Decision-Making

One of the most significant benefits of hiring a Salesforce Data Cloud consultant is the improvement in decision-making capabilities. Accurate, well-organized data is the backbone of effective business strategies.

- **Actionable Insights:** Consultants help you harness the full potential of your data, turning raw information into actionable insights. They set up advanced analytics and reporting tools that provide clear, meaningful data to inform your decisions.
- **Real-Time Data Access:** With the help of a consultant, you can access real-time data and insights. This immediacy allows for quick decision-making, helping you stay agile and responsive in a fast-paced business environment.
- **Custom Dashboards and Reports:** Consultants create custom dashboards and reports tailored to your specific needs. These tools give you a comprehensive view of your business metrics, enabling you to monitor performance and make data-driven decisions.
- **Predictive Analytics:** Salesforce Data Cloud consultants can implement predictive analytics models that help forecast trends and outcomes. These insights enable you to anticipate market changes and adjust your strategies proactively.
- **Improved Collaboration:** With well-organized and accessible data, your teams can collaborate more effectively. Consultants ensure that data is shared seamlessly across departments, fostering a collaborative environment where everyone is working with the same information.
## Reason #6: Enhanced Customer Experience

Improving customer experience is crucial, and a Salesforce Data Cloud consultant can make a big difference. By using data more effectively, you can offer a more personalized and seamless experience for your customers.

- **Personalized Interactions:** Consultants help you use customer data to create personalized experiences. By analyzing behavior, preferences, and history, you can tailor interactions to individual needs, boosting satisfaction and loyalty.
- **Unified Customer View:** Consultants integrate data from various touchpoints to give you a complete view of each customer, helping your team respond to their needs more effectively.
- **Efficient Customer Service:** With accurate data at their fingertips, your customer service team can resolve issues quickly. Consultants ensure your data systems provide instant access to relevant customer info, reducing response times and improving service quality.
- **Targeted Marketing Campaigns:** By analyzing customer data, consultants help you design and execute targeted marketing campaigns. This means reaching the right audience with the right message at the right time.
- **Continuous Improvement:** Consultants set up systems to capture and analyze customer feedback, helping you refine and improve your products, services, and overall customer experience.

## Reason #7: Compliance and Security

Ensuring data compliance and security is critical in today’s business environment. A Salesforce Data Cloud consultant helps you navigate these complex areas, protecting your business and its data.

- **Regulatory Compliance:** Consultants ensure that your data management practices comply with relevant laws and regulations, such as GDPR or CCPA. They help you implement policies and procedures that safeguard your data and avoid legal pitfalls.
- **Data Security:** Consultants set up robust security measures to protect your data from breaches and unauthorized access.
This includes encryption, access controls, and regular security audits to keep your data safe.

- **Risk Management:** By identifying potential vulnerabilities in your data systems, consultants help you mitigate risks. They create strategies to handle data breaches or losses, ensuring your business is prepared for any scenario.
- **Employee Training:** Consultants also provide training to your staff on best practices for data security and compliance. This helps ensure that everyone in your organization understands their role in protecting sensitive information.

## Reason #8: Strategic Planning and Growth

A Salesforce Data Cloud consultant can play a pivotal role in your strategic planning and growth efforts. By leveraging their expertise, you can create a clear roadmap for the future.

- **Long-Term Vision:** Consultants help you develop a long-term vision for your data strategy, aligning it with your overall business goals. This ensures that your data initiatives support your growth objectives.
- **Scalable Solutions:** As your business grows, your data needs will change. Consultants design solutions that can scale with your business, ensuring that your data infrastructure remains effective and efficient.
- **Innovation and Competitiveness:** By staying updated with the latest trends and technologies, consultants can introduce innovative solutions that keep your business competitive. They help you leverage new features and tools within Salesforce Data Cloud to stay ahead of the curve.
- **Performance Metrics:** Consultants set up key performance indicators (KPIs) to measure the success of your data initiatives. This ongoing assessment helps you track progress, make informed adjustments, and achieve your strategic goals.
- **Change Management:** Implementing new data strategies often involves significant changes. Consultants guide you through this process, helping you manage the transition smoothly and ensuring that your team is on board with the new direction.
Looking for professional help with Salesforce Data Cloud? Get in touch with our parent company!

[Explore More](https://mobilunity.com/tech/hire-salesforce-developers/)

[![](https://www.sfapps.info/wp-content/uploads/2024/05/banner-2-icon.svg)](https://www.sfapps.info/wp-content/uploads/2024/05/banner-2-icon.svg)

## Wrapping Up: Why Your Business Needs a Salesforce Data Cloud Consultant

Hiring a Salesforce Data Cloud consultant can be a transformative step for your business. These experts bring a wealth of knowledge and experience, offering tailored solutions that perfectly fit your unique needs. They ensure your data is clean, accurate, and secure, while also helping you save costs and enhance decision-making.

With the help of a Salesforce certified Data Cloud consultant, you can significantly improve customer experiences, ensure compliance with regulatory standards, and strategically plan for future growth. Salesforce Data Cloud consulting not only optimizes your current data processes but also prepares your business for future challenges and opportunities. Check out the list of the Salesforce [most demanded roles in 2024](https://www.sfapps.info/salesforce-talent-market-changes/) to understand if you need a Salesforce Data Cloud Consultant.

In a world where data is a key driver of success, having a data cloud consultant specialized in Salesforce can make all the difference. By leveraging the expertise of a Salesforce Data Cloud consultant, you position your business to thrive in an increasingly data-driven landscape.

The post [Top Reasons Why Your Business Needs a Salesforce Data Cloud Consultant](https://www.sfapps.info/why-hire-salesforce-data-cloud-consultant/) first appeared on [Salesforce Apps](https://www.sfapps.info).
doriansabitov
1,891,357
Fundamentos do desenvolvedor frontend
Trabalho como frontend desde 2017 e atualmente tenho me especializado em React. Antes disso, atuei...
0
2024-06-17T18:05:37
https://dev.to/lpazzim/fundamentos-do-desenvolvedor-frontend-jo6
frontend, webdev, beginners, career
![web developer in front of the computer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uai9181as4eryvch627f.jpeg)

I have been working as a frontend developer since 2017, and I currently specialize in React. Before that, I worked as a fullstack developer, which gave me the opportunity to work with Delphi 7, C#, C++, Angular, ASP.NET, and a few other technologies. During that period I prioritized delivery, always focusing on the final product and on satisfying the client while staying within the agreed deadline.

Over time, I met technically skilled people who inspired me and supported my technical and professional growth. I became interested in understanding every part of the (frontend) development process of a project.

In this article, I would like to present some topics and subtopics that I consider extremely important, whether you are an experienced developer or just entering the world of programming. These are the topics I find most relevant, in my opinion. If anything is missing, feel free to suggest it in the comments; after all, the goal of this article is to share and also to learn. Below each section, I will leave a link to material where you can start studying each topic.
### JavaScript

- Characteristics of and differences between var, let, and const;
- Primitive types;
- for, while, map, and forEach;
- Promises;
- Functions and arrow functions;
- Closures (advanced);

https://www.w3schools.com/js/

### HTML

- Basic element tags;
- Accessibility;
- Local Storage and Session Storage;

https://www.w3schools.com/html/html_intro.asp

### CSS

- Flexbox and grid;
- Responsive design;

https://web.dev/learn/css?hl=pt

### Algorithms and Data Structures

- Frequency counter pattern;
- Sliding window pattern;
- Recursion;
- Pointers;
- Binary search;
- Linear search;
- Merge sort;
- Quick sort;
- Hash tables;
- Graphs;
- Binary trees;
- Singly linked lists;
- Doubly linked lists;

https://www.udemy.com/course/js-algorithms-and-data-structures-masterclass/

### Design Patterns

- Singleton;
- Observer;
- Decorator;
- Factory;

https://medium.com/better-programming/javascript-design-patterns-25f0faaaa15

Now I will focus a bit on the library I am currently studying and working with. In this section, I believe everyone can choose the framework they like most or are familiar with, whether because they started with it or because they work with it daily.

### React

- DOM and Virtual DOM;
- React Hooks;
- Memoization;
- TypeScript (this could be its own topic, but I left it as a subtopic under React because I believe the two are closely linked);
- webpack and Vite;
- Next.js (SSR);
- Testing (Jest, Vitest, or another testing library);
- Context API and Redux;
- CSS-in-JS (styled-components);

https://react.dev/learn

I have summarized, in a few topics, subjects I consider very important for building a solid foundation and for understanding a bit more of what we do day to day. In upcoming articles, I will bring information and sources with detailed content on the topics above, as well as pointers on where to study and practice some of them.
lpazzim
1,891,553
GxP Compliance: Leveraging Test Automation for Validation and Security
Enterprises in every industry need efficient software. However, some industries are much more...
0
2024-06-17T18:05:25
https://www.opkey.com/blog/gxp-compliance-leveraging-test-automation-for-validation-and-security
gxp, compliance
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p97zawsry3p5tu9jfhpk.png)

Enterprises in every industry need efficient software. However, some industries are much more heavily regulated than others, and this can complicate development and testing timelines. This blog will define what it means to be GxP compliant and examine the impact these compliance requirements have on software testing. We will also explore how recent advances in AI and test automation help accelerate GxP validation.

**What Is GxP Compliance?**

GxP is a single term that refers to a variety of quality standards and regulations, all relating to the healthcare, life sciences, food, beverage, and pharmaceutical fields. GxP compliance implies that a software system adheres to a certain set of best practices and regulatory guidelines. The phrase itself stems from “Good Practices Compliance.” GxP system compliance guarantees that companies adhere to high quality, efficacy, and safety requirements for their processes, products, and documentation. Violating GxP system validation requirements can lead to potential fines, recalls, reputational harm, and legal issues.

These industries are governed by regulatory agencies like:

- The Food and Drug Administration (FDA) (USA)
- The Federal Communications Commission (FCC)
- The International Organization for Standardization (ISO)
- The EU Medical Device Regulation (MDR)
- The European Medicines Agency (EMA)

**What Is GxP Validation?**

Verifying that you’re following these procedures is known as GxP validation. This is a documented procedure that shows a system works within defined limits for a particular intended use. Along with guaranteeing adherence to legal and quality standards, GxP also supports data integrity, product safety, and consumer health protection.

Several crucial phases are involved in the GxP validation process, including:

**Planning:** Planning entails defining the scope and determining what must be validated.
**Specification:** Outlining the conditions and establishing the benchmarks for results.

**Design Qualification (DQ):** The process of recording the system’s design and making sure it complies with all requirements.

**Installation Qualification (IQ):** Verifying that the installation complies with the manufacturer’s criteria.

**Operational Qualification (OQ):** Proving the system works as intended under predetermined circumstances.

**Performance Qualification (PQ):** Demonstrating that the system operates reliably and efficiently under typical circumstances.

**Revalidation:** This ensures that the system continues to be compliant.

**Why Is GxP Compliance and Validation Important?**

Government entities and agencies regularly monitor and enforce GxP compliance via audits, inspections, and certification requirements monitoring. As a result, digital firms in regulated industries must do due diligence to understand all legal obligations before launching a product. Below are several reasons GxP is so important:

The GxP systems validation process includes identifying and managing risks associated with important functions, equipment, and systems in order to ensure product safety and quality. A commitment to compliance reduces failures and deviations that could lead to quality issues, product recalls, or consumer harm. GxP computer system validation guarantees that testing, production, and distribution processes operate as expected. Often, lives are at stake (e.g., a defective pharmaceutical could put a patient’s life at risk).

**Who Are Major Stakeholders in GxP System Compliance?**

GxP computer system validation is a shared responsibility for everyone in the company in question. Below is a list of key players:

**Executive leadership and management:** Management is responsible for developing the overall compliance strategy, allocating resources, implementing procedures, and pushing for a compliance culture.
**Quality Assurance (QA) department:** QA ensures that systems and procedures adhere to regulatory requirements and quality standards. QA personnel are responsible for computer system validation, document control, training, auditing, and inspections.

**IT department:** The IT department is responsible for implementing, maintaining, and supporting GxP systems. IT collaboration with other departments ensures that systems are effectively implemented, evaluated, and maintained, and that they meet cybersecurity standards.

**Testers:** Testers are dedicated to the validation process. They create validation plans, build test scripts, carry out validation testing, and document the results. They verify that computer systems meet all set requirements and work properly in a controlled environment.

**Regulatory Affairs team:** Regulatory Affairs analyzes and interprets applicable rules and laws to ensure that GxP systems meet regulatory standards.

**Why Is Test Automation Vital for Effective GxP Compliance?**

Testing plays a critical role in establishing and maintaining GxP compliance through continuous validation. However, due to time and budget restrictions, testing to a satisfactory degree can be challenging for many firms. The stringent standards of GxP validation can be difficult to meet with manual testing methods, which are labor-intensive and prone to errors. A growing number of businesses in regulated sectors are using test automation to achieve GxP compliance at their desired coverage and accuracy levels. You can make better use of your resources by automating time-consuming and repetitive operations.

**Significance of Testing for Healthcare Apps**

- Lack of testing can result in a security breach, which can in turn result in hefty fines under compliance standards.
- Issues in medical data can cause practitioners to make incorrect decisions.
- Inadequate UI/UX design testing can make a program difficult for its users.
- Your app must be tested to work on multiple devices and operating systems.
- Problems with medical device software connectivity can compromise patients’ personal details.

**How Test Automation Tools Like Opkey Improve Your GxP Validation Process**

Businesses in regulated industries have few options to build GxP-compliant software. However, this can be a huge burden due to changing standards, staffing shortages, and strict timelines. Therefore, organizations must employ AI-based testing tools to meet their growing testing needs.

Sparta Systems automated their TrackWise CSV processes with Opkey, reduced validation time by 50%, and enabled in-sprint test automation:

“The testing challenge grows in tandem due to strict regulations. If these systems were tough to test in the days of quarterly release cycles, what is the situation now that software is evolving in a matter of days... or even minutes?”

Opkey complies with some of the most stringent IT security practices in the world.

**Opkey Compliance**

Opkey streamlines compliance management by providing a consistent platform for assessing and approving all development activities. Opkey is proven to speed up software validation and assure overall GxP compliance. Book a demo to find out why leading life sciences companies like Sparta Systems have already chosen Opkey to improve their software testing and computer system validation.
johnste39558689