1,889,062
darvinmonteraswebsiteorg.godaddysites.com
A post by Darvin Monteras
0
2024-06-14T22:12:04
https://dev.to/metadarvinmonteras/darvinmonteraswebsiteorggodaddysitescom-4nhh
[![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h07wf9fo507u10mq7u9t.jpg)](https://darvinmonteraswebsiteorg.godaddysites.com)
darvinmonterasgodaddysitescom
1,499,601
Create A Vim Plugin For Your Next Programming Language, Structure, and syntax highlight.
Vim is a text-based editor, opensource, it is also an improved version of the old vi UNIX, Vim has...
0
2024-06-14T22:09:34
https://dev.to/ezpzdevelopement/create-a-vim-plugin-for-your-next-programming-language-structure-and-syntax-highlight-4gd1
communicationprotoco, vim, extension, programminglanguages
![Create A Vim Plugin For Your Next Programming Language, Structure, and syntax highlight.](https://cdn-images-1.medium.com/max/1024/1*FzAafbTm2QR9L3Nw2yrG6Q.png)

Vim is an open-source, text-based editor and an improved version of the old UNIX vi. It has many features, including multi-level undo, syntax highlighting, command-line history, on-line help, spell checking, filename completion, block operations, a scripting language, and more. As for compatibility, Vim runs under MS-Windows (XP, Vista, 7, 8, 10), macOS, Haiku, VMS, and almost every UNIX-based OS.

In today's post, I would like to show you how to write your own Vim extension for a new programming language. I wrote this plugin with the help of my two coworkers [Imen](https://github.com/imen-ben) and [Djamel](https://github.com/theLegend98).

### Introduction

First, let me introduce you to IOP (_Intersec Object Packer_): a method to serialize data in different communication protocols, inspired by [Google Protocol Buffers](https://developers.google.com/protocol-buffers/docs/overview). IOP syntax looks like D language syntax; all of this is according to the [IOP official documentation](https://intersec.github.io/lib-common/lib-common/iop/base.html).

### **The Structure of a Vim Plugin**

When we start working on our extension, we create a root folder named [vim-iop](https://github.com/abderrahmaneMustapha/vim-iop), which is exactly the name we picked for our Vim extension. This directory will contain four other important folders:

- autoload: a technique for deferring the loading of your plugin's code until it's required; in our case, we will implement the autocomplete feature in this folder.
- ftdetect: or file type detection, whose clear purpose is to figure out what file type a given file is.
- ftplugin: contains scripts that run automatically when Vim detects a file opened or created by a user; in our case, this folder will contain the logic to implement indentation.
- syntax: contains the script that implements syntax highlighting.

### Detect File Type

In this section, we add the code to set the file type for IOP files. But first, our root folder **vim-iop** must look like this:

```
vim-iop
------- ftplugin
------- ftdetect
------- syntax
------- autoload
```

Next, we create a new file, ftdetect/[_iop.vim_](https://github.com/abderrahmaneMustapha/vim-iop/blob/main/ftdetect/iop.vim), and add the code below to it:

```
" ftdetect/iop.vim
autocmd BufNewFile,BufRead *.iop setfiletype iop
```

### Syntax Highlight

In this section, we will write some Vim script, plus some regex, to add the syntax highlighting feature to our Vim extension. Before we start coding, I want to mention that IOP has basic types (int, uint, long, ulong, byte, ubyte, and more) plus four complex types: struct, class, union, and enum. If you want to learn more about these types, make sure to check this [_link_](https://intersec.github.io/lib-common/lib-common/iop/base.html).
So in the code below, we add the logic to highlight the IOP types mentioned in this part of the [IOP documentation](https://intersec.github.io/lib-common/lib-common/iop/base.html).

```
"syntax/iop.vim
syntax keyword iopComplexType class enum union struct module nextgroup=iopComplexTypeName skipwhite
syntax keyword iopBasicTypes int uint long ulong xml
syntax keyword iopBasicTypes byte ubyte short ushort void
syntax keyword iopBasicTypes bool double string bytes

" complex type names
syntax match iopComplexTypeName "\w\+" contained
```

As you can see, we have **iopComplexType** and **iopBasicTypes**; these two groups contain the different complex and basic types of IOP. We also tell our extension that each complex type is followed by a name (`nextgroup`) and that white space should be skipped (`skipwhite`). After this, we need to tell our Vim extension to highlight these types by adding the code below at the bottom of syntax/iop.vim.

```
"syntax/iop.vim
highlight link iopComplexType Keyword
highlight link iopBasicTypes Type
```

In the end, after adding this extension to our Vim setup, we will see something like this.

![](https://cdn-images-1.medium.com/max/1024/1*clJKJaBcuSz9fDn60PlOPQ.png)

IOP syntax also contains decorators; we are going to write some regular expressions to highlight them, so just add the code below to our syntax/iop.vim file.
```
"syntax/iop.vim
syntax match iopDecorator /^\s*@/ nextgroup=iopDecoratorFunction
syntax match iopDecoratorFunction contained /\h[a-zA-Z0-9_.]*/
```

In the first line of the code above, we tell our Vim extension that a decorator can start with zero or more white-space characters followed by an "@" (/^\s*@/). The `nextgroup` keyword means that after the "@" comes the decorator's name, which can contain alphabet letters (upper or lower case), numbers, and the two special characters "_" and ".". Next, we tell our Vim extension to highlight the decorators.

```
"syntax/iop.vim
highlight link iopDecoratorFunction Function
```

This is an example of what we will see in our Vim setup.

![](https://cdn-images-1.medium.com/max/1024/1*abXRlK8jyTHfgncmEu23GQ.png)

If you want the complete implementation of the vim-iop syntax highlighting, make sure to check this [link](https://github.com/abderrahmaneMustapha/vim-iop/tree/main/syntax). That's it for now; in the next post, I will show you how to add autocomplete and indentation.

### **References:**

- [Learn Vimscript the Hard Way](https://learnvimscriptthehardway.stevelosh.com/)
- [How To Use Vundle to Manage Vim Plugins on a Linux VPS | DigitalOcean](https://www.digitalocean.com/community/tutorials/how-to-use-vundle-to-manage-vim-plugins-on-a-linux-vps)
- [IOP](https://intersec.github.io/lib-common/lib-common/iop/base.html)
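To see the rules fire end to end, here is a toy `.iop` file exercising the highlighted constructs (a sketch I put together from the linked IOP documentation; the `@ctype` attribute and field names are my assumptions, not code from the post):

```
@ctype(user__t)
struct User {
    int    id;
    string name;
};
```

Opening this file in Vim with the plugin installed should color `struct` via iopComplexType, `User` via the contained name match, `int`/`string` via iopBasicTypes, and the `@ctype` line via the decorator rules.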
ezpzdevelopement
1,888,584
How I created a simple cross-multiplication for entrepreneurs
Hello, first post on DEV for this professional account, it will be written in English and French to...
0
2024-06-14T22:02:14
https://dev.to/conceptweb/how-i-created-a-simple-cross-multiplication-for-entrepreneurs-435l
nuxt, webdev, javascript, productivity
Hello! This is the first post on DEV for this professional account; it will be written in English and French to facilitate reading.

## Why I created this

I've been a **freelancer** since April 2023, and I **invoice by the hour according to my tasks**. So I used to use a classic website, or even a calculator, to get my result for each task. Except that it was taking me a bit longer than it should, and I kept thinking it **could be an even simpler and faster process**. So, as a web developer, I **designed this web interface to meet my needs**. **It's accessible to everyone, and free**. The magic of the Internet.

{% embed https://conceptweb.agency/tools/produit-en-croix/ %}

## What is a cross-multiplication?

> In mathematics, specifically in elementary arithmetic and elementary algebra, given an equation between two fractions or rational expressions, one can cross-multiply to simplify the equation or determine the value of a variable.

![example of a cross-multiplication](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u8qdb1dvuy2ndoqkhoce.png)

That's all. Thanks, Wikipedia. 🙌

{% embed https://en.wikipedia.org/wiki/Cross-multiplication %}

## How I created this tool

The [website](https://conceptweb.agency) runs on **[Nuxt](https://nuxt.com)**, and I use [SASS](https://sass-lang.com) for styling. The script is quite simple, written in two parts: one to calculate the total, and the other to transform "_4h23_" into "_263m_".

{% embed https://youtu.be/MOiKPESzvaQ %}

### How I store data

All data is stored via _[localStorage](https://developer.mozilla.org/en-US/docs/Web/API/Window/localStorage)_ on the client. This makes it easier to have both values (time and price) saved when you return to the page.
```javascript
watch([defaultMinutes, defaultHourlyRate, defaultTimePerMinutes, result], () => {
  localStorage.setItem('defaultMinutes', defaultMinutes.value);
  localStorage.setItem('defaultHourlyRate', defaultHourlyRate.value);
});

onMounted(() => {
  if (localStorage.getItem('defaultMinutes')) {
    defaultMinutes.value = localStorage.getItem('defaultMinutes');
  }
  if (localStorage.getItem('defaultHourlyRate')) {
    defaultHourlyRate.value = localStorage.getItem('defaultHourlyRate');
  }
});
```

When the page is loaded (`onMounted`), we read the localStorage values, if available, and restore them. Nothing mysterious! 🧙‍♂️

### Converting hours into minutes

Okay, this part takes a bit longer to explain, but if you understand a bit of code logic, you'll see quite easily how I do it. First I simply check that defaultTimePerMinutes is not undefined, then a few other conditions to avoid errors, and then we check whether the letter "h" is present to split the value into two parts. We convert the hours part, i.e. the value to the left of the "h", into minutes, and add the two parts to obtain a total in minutes.

```javascript
watch([defaultMinutes, defaultHourlyRate, defaultTimePerMinutes], () => {
  if (!defaultTimePerMinutes.value) {
    result.value = null;
    return;
  }
  if (isNaN(defaultTimePerMinutes.value) && !defaultTimePerMinutes.value.includes('h')) {
    result.value = null;
    totalMinutes.value = null;
    return;
  }
  // if defaultTimePerMinutes contains a value of type "1h30"
  if (defaultTimePerMinutes.value.includes('h')) {
    let [hours, minutes] = defaultTimePerMinutes.value.split('h');
    minutes = minutes ? parseInt(minutes) : 0;
    let totalHours = hours * 60;
    totalMinutes.value = totalHours + minutes;
  } else {
    totalMinutes.value = defaultTimePerMinutes.value;
  }
  result.value = (totalMinutes.value * defaultHourlyRate.value) / defaultMinutes.value;
  result.value = result.value.toFixed(2);
  // note: check the ref's .value, not the ref object itself (which is always truthy)
  if (totalMinutes.value === 0 || !totalMinutes.value) {
    totalMinutes.value = null;
  }
});
```

___

Don't hesitate to give me feedback, or even to suggest another tool like this one that would be great to set up for calculations in a professional environment. It's always interesting to know what's missing on the Internet!

{% cta https://conceptweb.agency/tools/produit-en-croix %} Test the cross-multiplication → {% endcta %}
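To make the arithmetic easy to follow in isolation, the conversion and cross-multiplication above can be distilled into a standalone function (a sketch; the function name and parameters are mine, not from the component):

```javascript
// Parse a time like "4h23" (or plain minutes) into total minutes,
// then cross-multiply against an hourly rate.
function priceForTask(time, hourlyRate, perMinutes = 60) {
  const raw = String(time);
  let totalMinutes;
  if (raw.includes('h')) {
    const [hours, minutes] = raw.split('h');
    totalMinutes = parseInt(hours, 10) * 60 + (minutes ? parseInt(minutes, 10) : 0);
  } else {
    totalMinutes = Number(raw);
  }
  // cross-multiplication: perMinutes minutes cost hourlyRate, so totalMinutes cost ?
  return ((totalMinutes * hourlyRate) / perMinutes).toFixed(2);
}

console.log(priceForTask('4h23', 60)); // "4h23" = 263 minutes at 60/h → "263.00"
console.log(priceForTask('90', 50));   // 90 minutes at 50/h → "75.00"
```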
thomasbnt
1,889,059
Why Are GitLab Runners So Slow?
So there I was, Monday morning, coffee in hand, ready to tackle the day. I had a bunch of CI/CD...
0
2024-06-14T22:01:53
https://dev.to/spiritus/why-gitlab-runners-are-so-slow-4ic9
devops, gitlab, cicd
So there I was, Monday morning, coffee in hand, ready to tackle the day. I had a bunch of CI/CD pipelines to run and a to-do list longer than a CVS receipt. Feeling optimistic, I hit "run" on my pipelines and leaned back, expecting them to zoom through like a Ferrari on an open road.

Instead, it felt like I had strapped a rocket to a tortoise. My pipelines were crawling slower than my grandma on her morning walk. I sighed, took another sip of my now-lukewarm coffee, and wondered what was going on. This couldn't just be a one-time thing, right? As I stared at the progress bar moving at a glacial pace, I decided to get to the bottom of this mystery. And let me tell you, what I found was quite the rabbit hole.

**1. GitLab's pricing strategy**

First off, I discovered that GitLab's pricing strategy isn't exactly in our favor. You see, GitLab charges for runner usage by the minute. The longer your pipelines take, the more you pay. It was like finding out that the pizza place around the corner purposely slows down their oven to make you buy more drinks while you wait. With this kind of setup, GitLab doesn't really have a reason to speed things up. More time means more money for them. Clever, but not great for my productivity.

**2. Shared runners and high demand**

Next, I learned that GitLab runners are shared among all users. Imagine you're at an all-you-can-eat buffet, but every time you reach for the food, a hundred other hands are grabbing at it too. During peak times, the demand for these runners spikes, causing a traffic jam that makes rush hour look like a breeze. My jobs were stuck in a queue, waiting for their turn like kids at a popular amusement park ride. No wonder everything was so slow!

**3. Limited resources of GitLab runners**

To add insult to injury, these GitLab runners are hosted on Google Cloud Platform (GCP) with limited resources. We're talking **2 vCPUs** and a few gigabytes of RAM.
It's like trying to win a race on a tricycle while everyone else is on motorcycles. These limited resources just can't handle the intensive workloads that CI/CD pipelines often require, causing even more delays.

After my deep dive into the world of GitLab runners, I realized that the slow performance wasn't just a minor inconvenience; it was a systemic issue. GitLab's pricing strategy, shared-runner congestion, and limited resources were all conspiring against my productivity. But hey, every problem has a solution, right?

That's where [Cloud-Runner](https://cloud-runner.com) comes in. I stumbled upon this gem while searching for alternatives and decided to give it a try. And let me tell you, it was like swapping a bicycle for a jet engine. My pipelines started zipping through tasks faster than my morning espresso can cool down. No more waiting in line, no more watching progress bars creep along – just pure, unadulterated speed.
spiritus
1,889,057
Day 971 : The Upgrade
liner notes: Professional : Still working on trying to get this visa. The organizer is saying that...
0
2024-06-14T21:54:39
https://dev.to/dwane/day-971-the-upgrade-5212
hiphop, code, coding, lifelongdev
_liner notes_:

- Professional: Still working on trying to get this visa. The organizer is saying that the document that the visa application site is saying is required is not required. haha. We'll try again on Monday. I responded to some community questions. Set up tasks for the next sprint. Refactored a feature in an application to use a new Client SDK. Not a bad day.

- Personal: Picked up some tracks on Bandcamp and got my social media posts ready. Went through more tracks for the radio show. Did some more refactoring on the highlight clip creator. I can now have two lines on the title slides. Ended the night watching an episode of "The Boys". I forgot how wild that show is. Oh, I forgot, I bought a new camera for when I travel. It's an upgrade to a camera that I use a lot. There's also a new device that will help transfer files off of the camera. That's one of my pain points with the camera, so I think this will help.

![A dark blue, black and purple background with a starry night sky](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbsishjglx2ejc9p80cw.jpg)

Going to finish putting together the tracks for the radio show. I want to put together some packages so I can post them on my way to the station tomorrow. There's also an online order I need to place as well. After all that, I want to work on upgrading a previous side project to a new version of the framework I used. Hopefully that goes smoothly. Looking to end the night watching an episode of "The Boys".

Have a great night and weekend!

peace piece

Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com

{% youtube DqROS_ub6Q4 %}
dwane
1,889,056
Unique combinations of a string
This one is a little tricky and certainly depends on how one interprets the question. Write a...
27,729
2024-06-14T21:50:00
https://dev.to/johnscode/unique-combinations-of-a-string-ekj
programming, go, interview, interviewquestions
This one is a little tricky and certainly depends on how one interprets the question.

Write a golang function to find the unique combinations of a string.

I am taking the problem in a simple form: I will produce the combinations one would get assuming the string is left in order; that is to say, I am not considering combinations that would occur from any reordering of the characters of the original string.

```go
// Requires Go 1.21+ for the standard-library "slices" package.
func FindUniqueCombinations(str string) []string {
	// Convert the string to a slice of runes
	runes := []rune(str)

	// Store unique combos
	var combos = []string{}

	// Generate combos recursively
	var searchForCombos func(start int, currentCombo []rune)
	searchForCombos = func(start int, currentCombo []rune) {
		comboAsStr := string(currentCombo)

		// Skip duplicates and the empty string
		if !slices.Contains(combos, comboAsStr) && len(comboAsStr) > 0 {
			// Add the current combo to the result
			combos = append(combos, comboAsStr)
		}

		// Generate combos for the remaining characters
		for i := start; i < len(runes); i++ {
			currentCombo = append(currentCombo, runes[i])
			searchForCombos(i+1, currentCombo)
			currentCombo = currentCombo[:len(currentCombo)-1]
		}
	}

	searchForCombos(0, []rune{})
	return combos
}
```

You can see that we've chosen a solution that uses recursion. The approach is pretty simple: given a string, start from the first character and generate the combinations that are possible by incrementing across the remaining characters, making sure not to include any combinations that have already been generated.

How can we make it better? What would you change if you wanted to consider reordering characters to get all permutations? Put your comments and suggestions below. Thanks!

_The code for this post and all posts in this series can be found [here](https://github.com/johnscode/gocodingchallenges)_
johnscode
1,889,055
This is why I prefer Massed Compute over RunPod
I just installed our SUPIR APP on RunPod : https://runpod.io?ref=1aka98lq SUPIR 1 Click Windows,...
0
2024-06-14T21:48:49
https://dev.to/furkangozukara/this-is-why-i-prefer-massed-compute-over-runpod-3hig
beginners, tutorial, ai, news
<p style="margin-left:0px;">I just installed our SUPIR APP on RunPod: <a target="_blank" rel="noopener noreferrer" href="https://runpod.io/?ref=1aka98lq"><u>https://runpod.io?ref=1aka98lq</u></a></p>
<p style="margin-left:0px;"><a target="_blank" rel="noopener noreferrer" href="https://www.patreon.com/posts/99176057"><u>SUPIR 1 Click Windows, RunPod / Massed Compute / Linux Installer &amp; Free Kaggle Notebook</u></a></p>
<p style="margin-left:0px;">I used identical RTX 4090 Pods from different countries.</p>
<p style="margin-left:0px;">The first Pod was too slow while installing, so I terminated and deleted it.</p>
<p style="margin-left:0px;">Then I started 2 different RTX 4090 Pods from different countries and began installing on both of them.</p>
<p style="margin-left:0px;">One of the Pods gave an error during the install for no apparent reason. The other one went smoothly and worked: <a target="_blank" rel="noopener noreferrer" href="https://www.patreon.com/file?h=106218325&amp;i=19332535"><u>runpod failed.png</u></a></p>
<p style="margin-left:0px;">So you see, out of 3 RTX 4090 Pods, only 1 worked properly.</p>
<p style="margin-left:0px;">This is why I now prefer Massed Compute and am making tutorials and scripts for Massed Compute as well, in addition to RunPod and Windows: <a target="_blank" rel="noopener noreferrer" href="https://vm.massedcompute.com/signup?linkId=lp_034338&amp;sourceId=secourses&amp;tenantId=massed-compute"><u>https://vm.massedcompute.com/signup?linkId=lp_034338&amp;sourceId=secourses&amp;tenantId=massed-compute</u></a></p>
<p style="margin-left:0px;">I have never seen such an error on Massed Compute yet; both disk speed and internet speed are always fast.</p>
<p style="margin-left:0px;">We also have an amazing coupon code for Massed Compute to use both A6000 and A6000 ALT GPUs there for only 31 cents per hour.</p>
<p style="margin-left:0px;">An A6000 GPU on RunPod is minimum 69 cents per hour in Community Cloud.</p>
<p
style="margin-left:0px;">The coupon code is: SECourses</p>
<p style="margin-left:auto;"><img class="image_resized" style="height:auto;width:680px;" src="https://miro.medium.com/v2/resize:fit:1313/1*zQUXKjlA5sILdA9axOd0Rg.png" alt="" width="700" height="495"></p>
<p style="margin-left:auto;"><img class="image_resized" style="height:auto;width:680px;" src="https://miro.medium.com/v2/resize:fit:1313/1*zm53hiSfbXlwnbCeb5Ln3g.png" alt="" width="700" height="197"></p>
furkangozukara
1,889,052
Security news weekly round-up - 14th June 2024
Weekly review of top security news between June 7, 2024, and June 14, 2024
6,540
2024-06-14T21:47:45
https://dev.to/ziizium/security-news-weekly-round-up-14th-june-2024-8ac
security
---
title: Security news weekly round-up - 14th June 2024
published: true
description: Weekly review of top security news between June 7, 2024, and June 14, 2024
tags: security
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/0jupjut8w3h9mjwm8m57.jpg
series: Security news weekly round-up
---

## __Introduction__

Hello, and welcome to our security news weekly review here on DEV. In this edition, 80% of the articles are about _malware_, and 20% are about _vulnerabilities_. So, everyone, let's get started.

<hr/>

## [Malicious VSCode extensions with millions of installs discovered](https://www.bleepingcomputer.com/news/security/malicious-vscode-extensions-with-millions-of-installs-discovered/)

As a developer, this can be tough to handle because you have more things to worry about when coding than whether an extension could be malicious. Nonetheless, you should know this exists and hope Microsoft puts stricter policies in place for the extensions that find their way to the VSC Marketplace. My advice: install only the extensions that you really need in VSCode. The following is an excerpt from the article:

> VSCode extensions are an abused and exposed attack vertical, with zero visibility, high impact, and high risk. This issue poses a direct threat to organizations and deserves the security community’s attention.

## [More_eggs Malware Disguised as Resumes Targets Recruiters in Phishing Attack](https://thehackernews.com/2024/06/moreeggs-malware-disguised-as-resumes.html)

Irrespective of your job, don't think "Who would target me? I mean, I don't offer any value." No, no, no, don't think that. As detailed in the article, the would-be victim was a recruiter, but the attack was not successful. What's more, beware of using pirated software, because it could be a lure to get something dangerous onto your computer system.
Read the following excerpt, and take time to read the full article linked above:

> More_eggs campaigns are still active and their operators continue to use social engineering tactics such as posing to be job applicants who are looking to apply for a particular role and luring victims (specifically recruiters) to download their malware

## [Phishing emails abuse Windows search protocol to push malicious scripts](https://www.bleepingcomputer.com/news/security/phishing-emails-abuse-windows-search-protocol-to-push-malicious-scripts/)

This is a dangerous combination: phishing, plus a legitimate Windows feature, and finally, a malicious script. We might as well refer to this as a nightmare. Armed with this knowledge, be wary of downloading HTML attachments from your email. What's more, here is an excerpt from the article:

> The recent attacks described in the Trustwave report start with a malicious email carrying an HTML attachment disguised as an invoice document placed within a small ZIP archive. The ZIP helps evade security/AV scanners that may not parse archives for malicious content.
>
> The HTML file uses the \<meta http-equiv= "refresh"\> tag to cause the browser to automatically open a malicious URL when the HTML document is opened.

## [New Cross-Platform Malware 'Noodle RAT' Targets Windows and Linux Systems](https://thehackernews.com/2024/06/new-cross-platform-malware-noodle-rat.html)

It's scary when malware is cross-platform, especially when it targets two popular operating systems used by millions of people. The excerpt below is a quick overview of how the malware works.

> The Windows version of Noodle RAT, an in-memory modular backdoor, has been put to use by hacking crews like Iron Tiger and Calypso. Launched via a loader due to its shellcode foundations, it supports commands to download/upload files, run additional types of malware, function as a TCP proxy, and even delete itself.
## [Ransomware attackers quickly weaponize PHP vulnerability with 9.8 severity rating](https://arstechnica.com/security/2024/06/thousands-of-servers-infected-with-ransomware-via-critical-php-vulnerability/)

[The bug was first reported on June 7, 2024](https://arstechnica.com/security/2024/06/php-vulnerability-allows-attackers-to-run-malicious-code-on-windows-servers/). Now, a week later, some have fallen victim to threat actors taking advantage of the vulnerability. I encourage you to read the article, starting with the excerpt below. It briefly explains how the bug works.

> CVE-2024-4577 affects PHP only when it runs in a mode known as CGI, in which a web server parses HTTP requests and passes them to a PHP script for processing. Even when PHP isn’t set to CGI mode, however, the vulnerability may still be exploitable when PHP executables such as php.exe and php-cgi.exe are in directories that are accessible by the web server.

## __Credits__

Cover photo by [Debby Hudson on Unsplash](https://unsplash.com/@hudsoncrafted).

<hr>

That's it for this week, and I'll see you next time.
ziizium
1,881,750
Playwright with Python - A Quick Guide
In this guide, we'll see how to set up a basic Playwright project using Python and Pytest. Then, how...
0
2024-06-14T21:45:54
https://dev.to/lrenzi/playwright-with-python-a-quick-guide-356p
playwright, testing, python, e2e
In this guide, we'll see how to set up a basic Playwright project using Python and Pytest. Then, how to implement the Page Object Pattern and a few other things. This guide requires a basic knowledge of Python.

## Preconditions

The packages required are:

- `playwright`
- `pytest-playwright`
- `pytest-xdist`

## The folder structure

The base folder structure for our project:

```
/pages
/tests
conftest.py
requirements.txt
```

## Installation

We add the required packages to the requirements.txt file:

**requirements.txt**

```
playwright>=1.44.0
pytest-playwright>=0.5.0
pytest-xdist>=3.6.1
```

Then we run the following command:

`pip install -r requirements.txt`

Note: instead of `pip`, `pip3` might be required depending on the OS. Then:

`playwright install`

This command will install the required browsers.

## Adding a Basic Test

A basic login test using Playwright:

**/tests/test_login.py**

```python
from playwright.sync_api import Page, expect

def test_login_success(page: Page):
    page.goto('https://react-redux.realworld.io/#/login')
    page.get_by_placeholder('Email').type('test_playwright_login@test.com')
    page.get_by_placeholder('Password').type('Test123456')
    page.get_by_role('button', name='Sign in').click()
    expect(page.get_by_role('link', name='test_playwright_login')).to_be_visible()
```

### Running this test

This runs a single test in headed mode. Playwright runs in headless mode by default.
`pytest -k test_login_success --headed` ### Running all the tests `pytest` ### Selecting the browsers `pytest --browser webkit --browser firefox` ## Implementing the Page Object Pattern To start using the POM we add: **pages/login_page.py** ```python from playwright.sync_api import Page class Login: def __init__(self, page: Page): self.page = page self.email_input = page.get_by_placeholder('Email') self.password_input = page.get_by_placeholder('Password') self.signin_button = page.get_by_role('button', name='Sign in') def goto(self): self.page.goto('/#/login') ``` **pages/navbar_page.py** ```python from playwright.sync_api import Page class Navbar: def __init__(self, page: Page): self.page = page def user_link(self, username: str): return self.page.get_by_role('link', name=username) ``` We add the base URL of our app to the conftest.py as a fixture like this: **conftest.py** ```python import pytest @pytest.fixture(scope='session') def base_url(): return 'https://react-redux.realworld.io/' ``` And now the test looks like this: **tests/test_login.py** ```python from playwright.sync_api import Page, expect from pages.login_page import Login from pages.navbar_page import Navbar def test_login_success(page: Page): login = Login(page) navbar = Navbar(page) login.goto() login.email_input.type('test_playwright_login@test.com') login.password_input.type('Test123456') login.signin_button.click() expect(navbar.user_link('test_playwright_login')).to_be_visible() ``` ### Using Pytest fixtures to Instantiate the Page Objects Instead of instantiating the page objects in each test, we use [Pytest fixtures](https://docs.pytest.org/en/latest/explanation/fixtures.html). 
We add the following to conftest.py **conftest.py** ```python import pytest from playwright.sync_api import Page from pages.login_page import Login from pages.navbar_page import Navbar @pytest.fixture(scope='session') def base_url(): return 'https://react-redux.realworld.io/' @pytest.fixture def page(page: Page) -> Page: timeout = 10000 page.set_default_navigation_timeout(timeout) page.set_default_timeout(timeout) return page @pytest.fixture def login(page) -> Login: return Login(page) @pytest.fixture def navbar(page) -> Navbar: return Navbar(page) ``` **Note**: we use the "page" fixture to define timeouts. Now the test looks like this: **tests/test_login.py** ```python from playwright.sync_api import expect def test_login_success(login, navbar): login.goto() login.email_input.type('test_playwright_login@test.com') login.password_input.type('Test123456') login.signin_button.click() expect(navbar.user_link('test_playwright_login')).to_be_visible() ``` ## Removing User Data Values from the Test We create a users.py file to store user's data, just one user for now. The DictObject is a utility class to access dictionary values using object notation. It could be moved elsewhere, but for now, we keep it here. 
**users.py** ```python import json class DictObject(object): def __init__(self, dict_): self.__dict__.update(dict_) @classmethod def from_dict(cls, d): return json.loads(json.dumps(d), object_hook=DictObject) USERS = DictObject.from_dict({ 'user_01': { 'username': 'test_playwright_login', 'email': 'test_playwright_login@test.com', 'password': 'Test123456' } }) ``` And we update the test: **tests/test_login.py** ```python from playwright.sync_api import expect from users import USERS def test_login_success(login, navbar): login.goto() login.email_input.type(USERS.user_01.email) login.password_input.type(USERS.user_01.password) login.signin_button.click() expect(navbar.user_link(USERS.user_01.username)).to_be_visible() ``` ## Running tests in parallel To run the tests in parallel we use pytest-xdist. We should already have it installed by this point. `pytest -n 5` Where `-n` is the number of workers. ## Managing Environment Data Usually, we want the tests to run in different environments. So we want to set the base URL based on the selected env. We add these changes to the conftest.py file: **conftest.py** ```python def pytest_addoption(parser): parser.addoption("--env", action="store", default="staging") @pytest.fixture(scope='session', autouse=True) def env_name(request): return request.config.getoption("--env") @pytest.fixture(scope='session') def base_url(env_name): if env_name == 'staging': return 'https://react-redux.realworld.io/' elif env_name == 'production': return 'https://react-redux.production.realworld.io/' else: exit('Please provide a valid environment') ``` After this, we can pass as an argument the environment name like this: `pytest --env staging` ## Defining Global Before and After Test ```python @pytest.fixture(scope="function", autouse=True) def before_each_after_each(page: Page, base_url): # The code here runs before each test print('before the test runs') # Go to the starting url before each test. 
page.goto(base_url) yield # This code runs after each test print('after the test runs') ``` This fixture can be added to the conftest.py and applied to every test. Or it can be defined inside a single test module and be applied only to the tests inside that module. ## Tagging the Tests To tag the tests we can use Pytest [marker feature](https://docs.pytest.org/en/stable/example/markers.html#marking-test-functions-and-selecting-them-for-a-run). ```python import pytest @pytest.mark.login def test_login_success(): # ... ``` We must register our custom markers in the pytest.ini file (a new file added in the root folder). **pytest.ini** ```ini [pytest] markers = login: mark test as a login test. slow: mark test as slow. ``` And to run a custom marker/tag we use: `pytest -m login` ## Tooling ### [Codegen](https://playwright.dev/python/docs/codegen-intro#running-codegen) This generates a test capturing actions in real time. It's useful for generating locators and assertions. But if we are using POM, then the generated code needs to be refactored into it. ### [Playwright Inspector](https://playwright.dev/python/docs/running-tests#debugging-tests) This is a debugger util that enables running a test step by step, among other things. ### [Trace Viewer](https://playwright.dev/python/docs/trace-viewer-intro) The trace viewer records the result of a test so it can be reviewed later with a live preview of each action performed. This is super useful, especially when running tests from CI.
lrenzi
1,889,051
Automate NSE Stock Prices in Google Sheets with Ease!
"The stock market is filled with individuals who know the price of everything, but the value of...
0
2024-06-14T21:40:00
https://dev.to/vikranth3140/automate-nse-stock-prices-in-google-sheets-with-ease-3mop
_"The stock market is filled with individuals who know the price of everything, but the value of nothing."_ — Philip Fisher Have you ever wished for a seamless way to track real-time stock prices and financial data for NSE stocks directly in Google Sheets? Well, we did too! This led us on a journey to create a powerful Google Sheets script that fetches real-time stock prices, market cap, P/E ratio, and much more, using the [GOOGLEFINANCE](https://www.google.com/finance/) function. Here’s our story and how you can use it to make your financial tracking a breeze. --- ### The Idea It all started with a simple desire: to keep an eye on our favorite NSE stocks without jumping between multiple websites. We wanted everything in one place, preferably in Google Sheets, where we could easily manipulate and analyze the data. The idea was straightforward: automate the fetching of real-time stock data into Google Sheets, and that's exactly what we set out to achieve. ### The Journey Our journey began with exploring the capabilities of the `GOOGLEFINANCE` function in Google Sheets. We discovered that while `GOOGLEFINANCE` supports various attributes like price, volume, and P/E ratio, it required us to manually enter each stock symbol and formula. This was tedious and time-consuming. We needed a way to automate this process. ### The Solution We decided to create a Google Apps Script that would: 1. **Fetch real-time stock prices and financial data** for NSE stocks. 2. **Automatically append `NSE:` to stock symbols** to ensure accurate results. 3. **Clear previous data** to keep the sheet tidy. 4. **Populate a predefined list of stock symbols** for easy setup. ### How to Use Our Script We’ve made it incredibly easy for you to get started with our script. Follow these simple steps: 1. **Open your Google Sheets document.** 2. **Go to `Extensions` > `Apps Script`.** 3. **Copy and paste the code** from our `Code.js` file into the Apps Script editor. 4. 
**Save the script** by clicking the disk icon or pressing `Ctrl + S`. 5. **Close the Apps Script editor and refresh your Google Sheets document.** 6. **Enter stock symbols (without `NSE:`)** starting from cell A3. 7. **Click on `Stock Prices` > `Update Prices`** to fetch and display stock data. ### Features Our script fetches a wealth of data, including: - Real-time price - Percentage change - Volume - High and low prices - Open price - Market capitalization - Average daily trading volume - P/E ratio - Earnings per share - 52-week high and low - Previous day's closing price - Number of outstanding shares - Trade time - Data delay ### Visual Guide Here’s a snapshot of what your Google Sheets setup will look like: ![Working Example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b6qmefe6oec171izxhnl.png) **Description:** - The left side shows the stock symbols entered in column A. - The top menu shows the `Stock Prices` menu with the `Update Prices` option. ### Pre-configured Sheet To make things even simpler, we’ve prepared a pre-configured Google Sheet with the script already set up. You can view and make a copy of it [here](https://docs.google.com/spreadsheets/d/1lFifrj-Tz-uy5HfSLb8w6gkfa67wKtm-XMkU9gA29qk/edit?usp=sharing). The App Script is included, so you don't need to worry about setting up the code. ### Video Tutorial For those who prefer a visual guide, we’ve got you covered! Check out our detailed video tutorial on how to use the script [here](https://drive.google.com/file/d/1IUSCFHQpC6hRwfGvGHgsxXfry0T5IXRh/view?usp=sharing). ### Going Beyond We didn’t stop at fetching data. We also created an additional script, `Populate.js`, to automatically populate a list of stock symbols. This script can be incredibly handy if you want to start with a predefined list of stocks. ### Steps to Use Populate.js 1. **Open your Google Sheets document.** 2. **Go to `Extensions` > `Apps Script`.** 3. 
**Copy and paste the code** from our `Populate.js` file into the Apps Script editor. 4. **Save the script** by clicking the disk icon or pressing `Ctrl + S`. 5. **Close the Apps Script editor and refresh your Google Sheets document.** 6. **Run the `populateStocks` function** to automatically populate your list of NSE stocks starting from cell A3. ### The Joy of Automation This project was a joy to work on. The idea of automating something as tedious as manually entering stock data brought a smile to our faces. We hope it brings the same joy and convenience to you. Check out the Github repo here - [Vikranth3140/NSE-Stock-Prices-Automation](https://github.com/Vikranth3140/NSE-Stock-Prices-Automation) Feel free to explore, tweak, and expand on our scripts. Automation can make financial tracking not just easier but also a lot more fun! ### Conclusion We hope our journey and this script inspire you to automate and simplify your own tasks. Happy tracking, and may your investments flourish! --- Thank you for joining us on this journey. If you have any questions or feedback, we’d love to hear from you. Happy automating and investing!
vikranth3140
1,889,050
Securing the Cloud #31
Welcome to the 31st edition of the Securing the Cloud Newsletter! We've taken two weeks off while...
26,823
2024-06-14T21:28:56
https://community.aws/content/2ht2dH6zPI79h1swGUhPPIZuE3f/securing-the-cloud-31
Welcome to the 31st edition of the Securing the Cloud Newsletter! We've taken two weeks off while travelling for two different conferences. The week of June 3rd we were in Las Vegas for Cisco Live. This week we were in Philadelphia for AWS re:Inforce 2024. Both events were amazing and we were able to spend a lot of time with the community talking networking, cloud, security, and Gen AI. So, in this issue, we dive into the latest trends and insights in cloud security with a bit of what came out of re:Inforce. Plus, we explore career development and share some valuable learning resources. Additionally, we feature insightful perspectives from our community members. Let's go! ## Technical Stuff From CiscoLive and re:Inforce * [Unleashing Cloud Power with Cisco and AWS](https://www.tiktok.com/@thecloudsecurityguy/video/7380465741331434795) - Du'An and I presented this 20 minute talk at the AWS booth last week in Las Vegas. We were really excited to help people like us, with a background in Cisco Networking, to bridge that knowledge to the Cloud. Enjoy the video! * [Introducing Amazon GuardDuty Malware Protection for Amazon S3 | AWS News Blog](https://brandonjcarroll.com/links/9zrhf) - Amazon GuardDuty Malware Protection for Amazon S3 now detects malicious file uploads, adding to its existing capabilities for Amazon EBS volumes. This was an announcement made at re:Inforce this week in case you missed it. Users can easily enable this service in the GuardDuty console and configure advanced malware protection measures such as object tagging and event-based actions. For more details on how to enhance your organization's security with GuardDuty Malware Protection for Amazon S3. Check out the article for the full details. ## Career Corner * [AWS re:Invent 2024 All Builders Welcome | Amazon Web Services](https://brandonjcarroll.com/links/lslcw) - Ok, this share is in the Career Corner today because I realized many of you may not be familiar with the program. 
At AWS re:Inforce we had several builders early in their career who were mentored and brought to re:Inforce with the All Builders Welcome program. This is a program where AWS is empowering underrepresented technologists in the early stages of their careers by providing grants to attend certain events. AWS is also doing this for AWS re:Invent in December 2024, offering opportunities to learn, network, and grow in the tech industry. Read the landing page for the re:Invent specific program where it describes the AWS commitment to fostering diversity and inclusion while bridging the gap in the tech space, inviting those interested to apply for the grant and join the next generation of technical leaders. It's a pretty cool opportunity that you might want to give a shot. ## Learning and Education * [Exam Updates, Beta Exams, and New Certifications | Coming Soon to AWS Certification | AWS](https://brandonjcarroll.com/links/jjmzz) - Ok, this normally wouldn't be in this section because it's not really an article that teaches you something. It's here because it shows the two new certifications that AWS announced at re:Inforce and I couldn't find an article that went into more details. Anyhow, check them out. They aren't available yet, but keep them on your radar. * AWS Certified AI Practitioner beta exam * AWS Certified Machine Learning Engineer - Associate beta exam ## Community Voice Here are a few things going on in the community. 1. [Incognito Authentication | CarriageReturn.Nl](https://carriagereturn.nl/aws/alb/basic/auth/cognito/2024/05/21/incognito-basic-auth.html) - Learn how to implement a shared password authentication for a web service using ALB and Lambda from this article, which details the challenges faced and solutions adopted in a step-by-step manner. Explore the author's journey in setting up secure authentication in the cloud and the insights gained along the way. 2. [MITRE ATT&CK Cloud Matrix: New Techniques & Why You Should Care. 
Part I | Mitigant](https://www.mitigant.io/blog/mitre-att-ck-cloud-matrix-new-techniques-why-you-should-care-part-i) - The MITRE ATT&CK Framework v.14, released in October 2023, introduces over 18 new techniques crucial for modern cybersecurity defenses, with two notable additions in the IaaS section for enterprises. Exploring these techniques sheds light on how attackers exploit vulnerabilities in cloud systems and emphasizes the importance of staying updated and implementing effective detection strategies. For a deeper dive into cloud threat detection and mitigation strategies, read more at https://www.mitigant.io/sign-up. 3. [MITRE ATT&CK Cloud Matrix: New Techniques & Why You Should Care - Part II | Mitigant](https://www.mitigant.io/blog/mitre-att-ck-cloud-matrix-new-techniques-why-you-should-care-part-ii) - The MITRE ATT&CK Framework v.14 introduces new techniques like Log Enumeration to address challenges in cloud attack detection. Explore how the framework, along with suggested mitigation strategies, can help defend against evolving threats in cloud environments in the full article. 4. [Learn to Build RAG Application using AWS Bedrock and LangChain](https://somilgupta.hashnode.dev/learn-to-build-rag-application-using-aws-bedrock-and-langchain) - Explore the world of Retrieval-Augmented Generation (RAG) in natural language processing and machine learning. Discover how RAG enhances language models by bridging gaps in data sources, offering accurate responses, and fostering innovation, as demonstrated through a step-by-step guide to building a RAG application in this insightful article. Thanks for reading this week's edition. We encourage you to subscribe, share, and leave your comments on this edition of the newsletter. Happy Labbing!
8carroll
1,889,049
The next generation of GitHub profile stats
There are several great tools and guides out there for collecting statistics on your GitHub...
0
2024-06-14T21:21:59
https://dev.to/lukehagar/the-next-generation-of-github-profile-stats-1nh8
github, opensource, stats4nerds, automation
There are several great tools and guides out there for collecting statistics on your GitHub profile - [anuraghazra/github-readme-stats](https://github.com/anuraghazra/github-readme-stats) - [hoangsonww/Profile-Readme-Cards](https://github.com/hoangsonww/Profile-Readme-Cards) - [omsimos/github-stats](https://github.com/omsimos/github-stats) - [sitepoint guide](https://www.sitepoint.com/github-profile-readme/) - [brunobritodev/awesome-github-stats](https://github.com/brunobritodev/awesome-github-stats) But none of those tools offer the freedom, customization, detail, privacy, or level of flair I would prefer. So a while back I set about to implement an entirely new system of stats collection and showcasing. Today I am happy to share how I am currently generating graphics like this one: ![Luke Hagar's GitHub stats](https://raw.githubusercontent.com/LukeHagar/github-stats-remotion/main/out/readme.gif) This graphic is generated in two parts: * First there is a [GitHub action](https://github.com/LukeHagar/stats-action) that pulls the stats on a cron schedule using the GitHub SDK and a personal access token and saves the stats to a JSON file in the repo * Second there is a [remotion GitHub action](https://github.com/LukeHagar/github-stats-remotion) that consumes that JSON file and generates various GIFs that I have designed ahead of time to showcase the stats however I want. I have configured [my repo as a template repository](https://github.com/LukeHagar/stats) so anyone can easily use the template to collect stats for their own GitHub profiles. I'm excited to see what things the community builds from here!
lukehagar
1,889,048
Function Multi-Versioning: The Swiss Army Knife of Code
Hey tech tribe! 🗡️ Let’s talk about something that’s got all the versatility of a Swiss Army knife –...
0
2024-06-14T21:21:45
https://dev.to/yuktimulani/function-multi-versioning-the-swiss-army-knife-of-code-5gdf
gcc, afmv
Hey tech tribe! 🗡️ Let’s talk about something that’s got all the versatility of a Swiss Army knife – Function Multi-Versioning (FMV). I know I have talked about it before, but trust me, it is different this time. It’s not just a fancy term; it’s a game-changer for developers juggling different CPU architectures. Imagine you’re at a cook-off, and you’ve got to make a dish that impresses every judge with their unique tastes. That’s what FMV does for your code. You write a function, and FMV makes sure it’s the best darn function for every kind of CPU it might run on. Whether it’s x86-64 or aarch64, FMV has got you covered. It’s like having a single knife that can slice, dice, chop, and julienne – all with perfect precision! 🔪 There are a few ways to use FMV. You can manually create different versions of your functions, or let the compiler do the heavy lifting with something called automatic cloning. It’s like having a sous-chef who not only preps all the ingredients but also makes sure every dish is seasoned to perfection. Our ultimate goal? To implement Automatic Function Multi-Versioning (AFMV) in GCC for aarch64. It’s the next evolution of FMV, making sure that your software isn’t just good, but great, no matter where it runs. So, here’s to FMV – the Swiss Army knife in your coding toolkit, ensuring your code is always sharp and ready to impress! 🥳
yuktimulani
1,889,046
Weekly Updates - June 14, 2024
Hi everyone! Hope you had a good week! Time for our weekly announcements: ❗Important announcement...
0
2024-06-14T21:15:16
https://dev.to/couchbase/weekly-updates-june-14-2024-2kcn
couchbase, community, rag, learning
Hi everyone! Hope you had a good week! Time for our weekly announcements: - ❗**Important announcement regarding the Community Hub** - We are continuously working hard to create and grow a cohesive community using platforms you are familiar with and enjoy using. With that being said, we are announcing today the **retirement of the Couchbase Community Hub**. We realized that we will be able to create a better community by focusing our efforts on [_Discord_](https://bit.ly/3JGCeUg) and the [_Forum_](https://www.couchbase.com/forums/) and continuously work on improving your experiences on both platforms. <br> - 📖 **New Blog: Learn more about the Couchbase Ambassador Program** - Are you excited about Couchbase, want to learn more and share your excitement with your own communities? Apply to be an Ambassador! [*Read about the Ambassador program here >>*](https://www.couchbase.com/blog/couchbase-ambassador-program/) <br> - 📺 **New Video: Using RAG for PDF Vector Searching** Using LangChain and OpenAI, see how you can upload a PDF file to Couchbase and then search the text in "chat" mode. [*You can watch the video here >>*](https://www.youtube.com/watch?v=aMpJVsJZECc) <br> - 🎓 **Free 90-minute hands-on course for developers coming to a time zone near you!** Check out our FREE hands-on course for Developers. In just 90 minutes with a dedicated instructor, you'll learn everything you need to get started with Couchbase Capella and kick off your Couchbase certification journey. [*Find a course that suits your time zone here >>*](https://www.couchbase.com/couchbase-capella-test-drive/?utm_campaign=adaptive-apps&utm_medium=event&utm_source=meetup&utm_content=webinar&utm_term=developer) Have a great weekend everyone! 👋 And to all of my fellow ⚽ fans, happy Euro 2024 kickoff!
carrieke
1,889,045
Prevents Double Payment Using Idempotent API
Two brothers wanted to start a business but struggled to set up an online payment system. So, they...
0
2024-06-14T21:14:41
https://dev.to/palwashakmalik/prevents-double-payment-using-idempotent-api-141n
systemdesign, fullstack, systemdesigninterview, softwareengineering
> Two brothers wanted to start a business but struggled to set up an online payment system. So, they decided to create their own service, calling it Stripe. ## The Double Payment Problem As Stripe's user base grew, they encountered issues with double payments, where users were accidentally charged twice for the same transaction. Here are the main reasons for this: 1. Server Error The server might fail while processing a request, leaving the client unsure if the transaction was successful. Retrying could lead to double payment. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pa9wwzoqt4h6f76dge5h.png) 2. Network Error The server processes the request, but a network failure prevents the response from reaching the client. Again, the client doesn't know if the request succeeded, so retrying might result in double payment. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nkzt5rch8dsb7wrgckwt.png) ## Idempotent API To solve this issue, Stripe developed an idempotent API, ensuring that a request can be safely retried multiple times without side effects. Here's how it works: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7wusa53uqyacb46rn75d.png) 1. Idempotency keys Each request includes a unique idempotency key (a UUID) in its HTTP header. This key is used to track if the request has already been processed. If the request is new, it gets processed and the key is stored. If the request has been processed before, the cached response is returned. Idempotency keys are stored in an in-memory database and are removed after 24 hours to reduce storage costs. 2. Retrying Failed Requests ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0avh932v54v49ice142x.png) To prevent server overload, Stripe uses an exponential backoff algorithm with jitter. 
This means adding increasing delays with some randomness between retries to avoid overwhelming the server with simultaneous requests. By implementing these strategies, Stripe effectively prevents double payments and ensures reliable transaction processing.
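The key-lookup flow described above can be sketched in a few lines. This is a toy illustration rather than Stripe's actual implementation: the dict stands in for the in-memory key database (minus the 24-hour expiry), and `charge` stands in for real payment processing:

```python
import random
import uuid

# Toy in-memory key store; the real one is a shared in-memory database
# whose entries expire after 24 hours to limit storage costs.
_responses = {}

def charge(idempotency_key, amount):
    """Process a payment at most once per idempotency key."""
    if idempotency_key in _responses:
        # Key seen before: return the cached response, charge nothing.
        return _responses[idempotency_key]
    response = {"status": "charged", "amount": amount, "charge_id": str(uuid.uuid4())}
    _responses[idempotency_key] = response  # record before replying
    return response

def backoff_with_jitter(attempt, base=0.5, cap=30.0):
    """Seconds to wait before retry number `attempt`: exponential growth,
    capped, with full jitter so retries don't arrive in synchronized bursts."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

key = str(uuid.uuid4())    # client generates one key per logical payment
first = charge(key, 100)
retry = charge(key, 100)   # e.g. the same request resent after a network error
print(first == retry)      # prints True: the retry did not double-charge
```

A client that hit a server or network error would sleep `backoff_with_jitter(n)` seconds before retry `n`, resending the exact same idempotency key each time.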
palwashakmalik
1,889,044
Automatic Function Multi-Versioning: The Lazy Programmer’s Dream!
Hey there, code wizards! 🧙‍♂️ Imagine a world where you write a piece of code, sit back, and the...
0
2024-06-14T21:14:08
https://dev.to/yuktimulani/automatic-function-multi-versioning-the-lazy-programmers-dream-lbn
afmv, multiversioning, assembly, aarch64
Hey there, code wizards! 🧙‍♂️ Imagine a world where you write a piece of code, sit back, and the computer magically makes it work on all kinds of different hardware. Sounds like a dream, right? Well, say hello to Automatic Function Multi-Versioning (AFMV) – the lazy programmer’s dream come true! 🌟 In our quest to make GCC (GNU Compiler Collection) smarter, we’re implementing AFMV for the aarch64 architecture. What does this mean? It’s like having a genie who clones your functions into multiple versions, each tailored for different micro-architectural features. You don’t have to lift a finger (well, except for typing the initial code, but let’s not get picky!). Here’s the deal: normally, you’d have to write different versions of your functions to cater to various CPU features. It’s like making different kinds of pizzas for people with different tastes. One for the spicy lovers, one for the cheese enthusiasts, and one for the pineapple-on-pizza weirdos. 🍕 With AFMV, the compiler does this for you! It’s the ultimate life hack for programmers, reducing the time spent tweaking code for different hardware. So, next time you’re coding, remember that AFMV is like having your own personal genie in a bottle. Just make sure not to wish for infinite loops – those never end well! Thank you for reading!! Until next time happy coding!!🚀
yuktimulani
1,889,043
Why TypeScript is Better than JavaScript
Introduction TypeScript, a superset of JavaScript developed by Microsoft, enhances JavaScript by...
0
2024-06-14T21:12:40
https://dev.to/hussain101/why-typescript-is-better-than-javascript-477k
javascript, typescript, webdev, programming
**Introduction** TypeScript, a superset of JavaScript developed by Microsoft, enhances JavaScript by adding static types and powerful features. It is particularly beneficial for larger, more complex projects. Below, we delve into why TypeScript offers a superior development experience compared to plain JavaScript, with specific examples illustrating its advantages. **1. Static Type Checking** JavaScript Example: ```javascript function add(first, second) { return first + second; } add(5, "10"); ``` In JavaScript, the lack of type safety can lead to unexpected results like the example above, where numbers and strings are concatenated instead of numerically added. TypeScript Example: ```typescript function add(first: number, second: number): number { return first + second; } add(5, "10"); ``` TypeScript prevents this error at compile time, ensuring that the types match the function’s expectations. **2. Improved Code Quality and Understandability** TypeScript interfaces and types enhance documentation: ```typescript interface User { name: string; age: number; } function greet(user: User) { console.log(`Hello, ${user.name}!`); } greet({ name: "Alice", age: 30 }); // Clearly defined object structure ``` Interfaces in TypeScript clarify what object structure functions expect, which serves as in-code documentation. **3. Enhanced IDE Support** Autocomplete and Error Highlighting in IDEs: While working in an IDE like Visual Studio Code, TypeScript provides autocomplete suggestions and immediately highlights errors when you type them, vastly improving development speed and reducing bugs. **4. Advanced Features** TypeScript Generics Example: ```typescript function identity<T>(arg: T): T { return arg; } let output = identity<string>("myString"); // Type of output is string ``` **5. Scalability** Refactoring Example: Imagine renaming a deeply integrated property of an object that appears across many files. ```typescript interface User { firstName: string; // Renaming this property will update it across all files in a TypeScript project. } ``` **6. 
Community and Ecosystem** Usage in Major Frameworks: Frameworks like Angular adopt TypeScript out of the box, and many React and Vue.js projects adopt TypeScript for its robustness. **7. Gradual Adoption** Example of Mixed Project: ```typescript // In a TypeScript file import { calculate } from './math.js'; // Importing JavaScript file into TypeScript file console.log(calculate('5', 3)); // This integration helps in gradually shifting to TypeScript. ``` You can mix TypeScript and JavaScript files in a project, allowing incremental adoption without the need for a full rewrite. **Conclusion** TypeScript provides an array of tools and features that promote better coding practices, easier maintenance, and robust application structure. Its integration with development environments, alongside static typing and advanced features like generics and interfaces, drastically reduces common JavaScript errors and simplifies handling large code bases. Whether for new projects or upgrading existing ones, TypeScript stands out as a strategic choice for developers aiming for scalability, maintainability, and enhanced productivity.
hussain101
1,889,042
Networking and Sockets: Endianness
In my previous article, we introduced some basic concepts about networking and sockets. We discussed...
27,728
2024-06-14T21:11:45
https://www.kungfudev.com/blog/2024/06/14/network-sockets-endianness
linux, rust, networking, socket
In my previous article, we introduced some basic concepts about networking and sockets. We discussed different models and created a simple example of network communication using sockets, specifically starting with `Stream TCP` sockets. We started with this socket type and family because it is one of the most common implementations and the one that the majority of developers work with most frequently. **Stream TCP sockets are widely used due to their reliability and ability to establish a connection-oriented communication channel**. This makes them ideal for scenarios where data integrity and order are crucial, such as web browsing, email, and file transfers. By understanding the fundamentals of Stream TCP sockets, developers can build robust network applications and troubleshoot issues more effectively. In this article, we will continue working with this socket type and dive a bit deeper into socket programming. So far, we have seen how to create a listener socket as a server and how to use another socket to connect to it. Now, we will explore one key detail about this communication, the `endianness`! ## Endianness ![Image from Hackday](https://www.kungfudev.com/_next/image?url=%2Fimages%2Fblog%2Fendianness_had.jpeg&w=2048&q=75) Endianness refers to the order in which bytes are arranged in memory. In a `big-endian` system, the most significant byte is stored at the smallest memory address, while in a `little-endian` system, the least significant byte is stored first. Understanding endianness is **"crucial"** in socket programming because data transmitted over a network may need to be converted between different endianness formats to ensure proper interpretation by different systems. This conversion ensures that the data remains consistent and accurate, regardless of the underlying architecture of the communicating devices. In this example, a 32-bit value 1,200,000 (hex `0x00124F80`) shows endianness. 
In a big-endian system, bytes are stored as `0x00 0x12 0x4F 0x80` from lowest to highest address. In a little-endian system, the order is reversed: `0x80 0x4F 0x12 0x00`. ```txt Endianness (Order of bytes in memory) Big-Endian System +-----------------+-----------------+-----------------+------------------+ | Most Significant| | | Least Significant| | Byte | | | Byte | | (MSB) | | | (LSB) | +-----------------+-----------------+-----------------+------------------+ | 0x00 | 0x12 | 0x4F | 0x80 | +-----------------+-----------------+-----------------+------------------+ | Address: 0x00 | Address: 0x01 | Address: 0x02 | Address: 0x03 | +-----------------+-----------------+-----------------+------------------+ Little-Endian System +------------------+-----------------+-----------------+------------------+ | Least Significant| | | Most Significant | | Byte | | | Byte | | (LSB) | | | (MSB) | +------------------+-----------------+-----------------+------------------+ | 0x80 | 0x4F | 0x12 | 0x00 | +------------------+-----------------+-----------------+------------------+ | Address: 0x00 | Address: 0x01 | Address: 0x02 | Address: 0x03 | +------------------+-----------------+-----------------+------------------+ ``` Imagine you receive `0x00124F80`, and you don’t know if they are in big-endian or little-endian format. If you interpret these bytes using the wrong endianness, you’ll end up with a completely different value. In `big-endian` format, the most significant byte comes first, so `0x00124F80` remains the same `(decimal value: 1,200,000)`. However, in `little-endian` format, the bytes are reversed, and the value would be `0x804F1200` `(decimal value: 2152665600)`. This discrepancy can lead to significant errors in data processing, making it essential to handle endianness correctly. To illustrate the concept of endianness with a simpler decimal example, consider the 32-bit value 574. In a big-endian system, the most significant digits are stored first. 
For 574, the digits are 5, 7, and 4. A big-endian system stores them as 500 (5 x 100), 70 (7 x 10), and 4 (4 x 1), in that order. A little-endian system stores the digits in reverse order: 4 (4 x 1), 70 (7 x 10), and 500 (5 x 100). This reversal can cause the value to be interpreted incorrectly.

### Network byte order

This is why machines must agree on endianness when communicating, so that data is interpreted correctly. For example, in internet communication, standards like `RFC 1700` and `RFC 9293` define the use of big-endian format, also known as network byte order. This standardization ensures that all devices on the network interpret the data consistently, preventing errors and miscommunication.

> RFC 1700:
> The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order ...
>
> RFC 9293:
> Source Address: the IPv4 source address in network byte order
> Destination Address: the IPv4 destination address in network byte order
>
> https://datatracker.ietf.org/doc/html/rfc768
> https://datatracker.ietf.org/doc/html/rfc1700

### Machine agreements

If your application defines a protocol that specifies byte order, both the sender and receiver must adhere to it. If the protocol specifies that certain fields in the data should be in network byte order, then you must convert those fields accordingly.

Many application data formats that rely on `plain text`, such as `JSON` and `XML`, are either byte-order independent (treating data as strings of bytes) or have their own specifications for encoding and decoding multi-byte values. Endianness only applies to `multi-byte` values: it affects how the bytes of larger data types (like integers and floating-point numbers) are ordered, while single-byte values are unaffected.

For example, in Rust, the crates `bincode` and `postcard` use `little-endian` by default.
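Rust's standard integer methods make the discussion above easy to verify by hand. This short sketch (an addition for illustration, using only the standard library) round-trips the 1,200,000 example and shows the garbage value you get when the wrong byte order is assumed:

```rust
fn main() {
    let value: u32 = 1_200_000; // hex 0x00124F80

    // Serializing the same value in both byte orders.
    assert_eq!(value.to_be_bytes(), [0x00, 0x12, 0x4F, 0x80]);
    assert_eq!(value.to_le_bytes(), [0x80, 0x4F, 0x12, 0x00]);

    // Decoding big-endian bytes with the wrong assumption yields
    // the garbage value from the text: 0x804F1200.
    let wire = value.to_be_bytes();
    assert_eq!(u32::from_be_bytes(wire), 1_200_000);     // correct
    assert_eq!(u32::from_le_bytes(wire), 2_152_665_600); // wrong order

    println!("byte-order round-trips verified");
}
```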
When you serialize data with these crates, multi-byte values are ordered in little-endian format unless specified otherwise.

### Socket address for binding

If you have read or watched anything about socket programming, it is almost certain that you have encountered some `C` examples. For instance, you might have seen something like the following code. This is because, as we learned earlier, IP addresses and ports need to be in big-endian format:

```c
address.sin_family = AF_INET;
address.sin_addr.s_addr = htonl(...);
address.sin_port = htons(PORT);

bind(sockfd, (struct sockaddr *)&address, sizeof(address))
...
```

The functions `htons` and `htonl` are part of a family of `C` helpers that convert `multi-byte` values between host and network byte order. These functions ensure that data is correctly interpreted regardless of the underlying system's endianness.

```txt
The htonl() function converts the unsigned integer hostlong from host byte order to network byte order.

The htons() function converts the unsigned short integer hostshort from host byte order to network byte order.

The ntohl() function converts the unsigned integer netlong from network byte order to host byte order.

The ntohs() function converts the unsigned short integer netshort from network byte order to host byte order.
```

Modern programming languages and frameworks typically handle these conversions for you and also offer specialized functions to manage them. For example, if we revisit our previous socket server example in Rust, we see the following when creating the address:

```rust
let server_address = SockaddrIn::from_str("127.0.0.1:8080").expect("...");
```

In that line, we don't have to deal with byte ordering ourselves, thanks to the implementation. However, we can inspect the address to determine the default byte ordering on our machine, which is little-endian in my case, and convert it to big-endian, as required for the IP address and port.
```rust
// Create a socket address
let sock_addr = SockaddrIn::from_str("127.0.0.1:6797").expect("...");
let sockaddr: sockaddr_in = sock_addr.into();

println!("sockaddr: {:?}", sockaddr);
println!("s_addr Default: {}", sockaddr.sin_addr.s_addr);
// big endian
println!("s_addr be: {:?}", sockaddr.sin_addr.s_addr.to_be());
// little endian
println!("s_addr le: {:?}", sockaddr.sin_addr.s_addr.to_le());
```

When we run this code, we get the following output:

```txt
$ cargo run --bin addr
sockaddr: sockaddr_in { sin_len: 16, sin_family: 2, sin_port: 36122, sin_addr: in_addr { s_addr: 16777343 }, sin_zero: [0, 0, 0, 0, 0, 0, 0, 0] }
s_addr Default: 16777343
s_addr be: 2130706433
s_addr le: 16777343
```

As mentioned, although Rust and other modern programming languages handle byte ordering at some level and offer convenient abstractions, it's always valuable to understand these foundational concepts.

## Practical Example: Handling Endianness in Client-Server Communication

Now that we know about byte ordering, we are going to build a simple example to illustrate what we have covered so far: a small program that receives a file over a socket. The client will first send the size of the file, and then it will send the file's data. Note that this will be a straightforward implementation, keeping the code simple to illustrate the concept.

### Server

We are going to reuse the code from the last article for creating a socket server of type stream and INET family. We will wrap it in a simple function that sets up the socket server and returns the file descriptor. This keeps the implementation clean and lets us reuse it in all future examples.
The function is named `create_tcp_server_socket`, and it is quite simple, as shown below: ```rust pub fn create_tcp_server_socket(addr: &str) -> Result<OwnedFd, nix::Error> { let socket_fd = socket( nix::sys::socket::AddressFamily::Inet, // Socket family nix::sys::socket::SockType::Stream, // Socket type nix::sys::socket::SockFlag::empty(), None, )?; // Create a socket address let sock_addr = SockaddrIn::from_str(addr).expect("..."); // Bind the socket to the address bind(socket_fd.as_raw_fd(), &sock_addr)?; // Listen for incoming connections let backlog = Backlog::new(1).expect("..."); listen(&socket_fd, backlog)?; Ok(socket_fd) } ``` Now we can use that function to create a server socket and accept incoming connections. ```rust fn main() { let socket_fd = create_tcp_server_socket("127.0.0.1:8000").expect("..."); // Accept incoming connections let conn_fd = accept(socket_fd.as_raw_fd()).expect("..."); } ``` Next, after accepting the connection, we expect to receive the size of the file that the client will transmit over the network. We expect to receive a 32-bit unsigned integer, so we prepare a 4-byte buffer to receive the size. We then read from the socket and put the data in the buffer: ```rust // Receive the size of the file let mut size_buf = [0; 4]; recv(conn_fd, &mut size_buf, MsgFlags::empty()).expect("..."); let file_size = u32::from_ne_bytes(size_buf); println!("File size: {}", file_size); ``` If you notice, we are using `from_ne_bytes` here. This function assumes that the bytes are arranged in the native endianness of our machine `(ne stands for native endianness)`. Therefore, we expect that the value is using the same endianness as our machine. Finally, we will use that size to create an in-memory buffer to receive the file’s data over the connection and print the bytes read. 
As mentioned, this is a naive and basic implementation just to illustrate the case: ```rust // Receive the file data let mut file_buf = vec![0; file_size as usize]; let bytes_read = recv(conn_fd, &mut file_buf, MsgFlags::empty()).expect("..."); println!("File data bytes read: {:?}", bytes_read); ``` Full code: ```rust use nix::sys::socket::{accept, recv, MsgFlags}; use socket_net::server::create_tcp_server_socket; use std::os::fd::AsRawFd; fn main() { let socket_fd = create_tcp_server_socket("127.0.0.1:8000").expect("..."); // Accept incoming connections let conn_fd = accept(socket_fd.as_raw_fd()).expect("..."); // Receive the size of the file let mut size_buf = [0; 4]; recv(conn_fd, &mut size_buf, MsgFlags::empty()).expect("..."); let file_size = u32::from_ne_bytes(size_buf); println!("File size: {}", file_size); // Receive the file data let mut file_buf = vec![0; file_size as usize]; let bytes_read = recv(conn_fd, &mut file_buf, MsgFlags::empty()).expect("..."); println!("File data bytes read: {:?}", bytes_read); } ``` ### Client For our client, the logic is again pretty simple and similar to what we had before, but with some subtle additions, like reading the file and sending its size before sending the actual data. ```rust ... socket creation and connection // Read the file into a buffer let buffer = std::fs::read("./src/data/data.txt").expect("Failed to read file"); // send the size of the file to the server let size: u32 = buffer.len() as u32; send( socket_fd.as_raw_fd(), &size.to_ne_bytes(), MsgFlags::empty()).expect("..."); // send the file to the server send(socket_fd.as_raw_fd(), &buffer, MsgFlags::empty()).expect("..."); ``` Two things to notice: as we mentioned earlier, we create a u32 for the file size and we are using the function `to_ne_bytes` to send bytes arranged in the native endianness, which in my case is little-endian. The `data.txt` file is a simple text file containing some lorem ipsum data. 
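One caveat the simple implementation glosses over: `recv` (like any stream read) may return fewer bytes than requested, so robust code loops until the full payload has arrived. Here is an illustrative sketch (an addition, not from the original flow) using std's `Read` trait, with an in-memory reader standing in for the socket; std's `read_exact` does the same job:

```rust
use std::io::Read;

// Keep reading until `buf` is completely filled, or fail if the
// peer closes the stream early. A single read may legally return
// fewer bytes than requested.
fn read_exact_loop<R: Read>(reader: &mut R, buf: &mut [u8]) -> std::io::Result<()> {
    let mut filled = 0;
    while filled < buf.len() {
        let n = reader.read(&mut buf[filled..])?;
        if n == 0 {
            return Err(std::io::Error::new(
                std::io::ErrorKind::UnexpectedEof,
                "peer closed before sending the full payload",
            ));
        }
        filled += n;
    }
    Ok(())
}

fn main() {
    // An in-memory reader stands in for the socket: the big-endian
    // length prefix for a 574-byte file.
    let mut data: &[u8] = &[0x00, 0x00, 0x02, 0x3E];
    let mut buf = [0u8; 4];
    read_exact_loop(&mut data, &mut buf).unwrap();
    assert_eq!(u32::from_be_bytes(buf), 574);
    println!("read {} bytes", buf.len());
}
```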
We can inspect its size using the `stat` command:

```sh
$ stat -c%s ./src/data/data.txt
574
```

If you are wondering why we have to deal with endianness for the size but not for the file content itself, remember what we learned earlier:

> Many application data formats that rely on plain text are either byte order independent (treating data as strings of bytes) or ...

### Running

Now we can run our server and client to see the output and understand how it works. The size of the file is 574 bytes, and we send the entire file.

```sh
# Server
$ cargo run --bin tcp-file-server
Socket file descriptor: 3
Socket bound to address: 127.0.0.1:8000
File size: 574
File data bytes read: 574
```

```sh
# Client
$ cargo run --bin tcp-file-client
Socket file descriptor: 3
Sending file size: 574
Sending file data
```

So far, so good, right? The behavior is as expected: we send a file of 574 bytes, and the server receives it. This works because both the client and server use `ne` (native endianness), and since they are on the same machine, both are little-endian.

But what if the client uses a different endianness? What if it sends the size in `big-endian`, for example? We can simulate this by modifying one line in the client so that it sends the data in big-endian instead of little-endian. For that, we use `to_be_bytes` (`be` stands for big-endian):

```rust
send(
    socket_fd.as_raw_fd(),
    &size.to_be_bytes(),
    MsgFlags::empty()).expect("...");
```

If we run our programs again, we will see why understanding endianness is important. From the client's perspective, we are sending the same value, just in a different endianness. However, if you look at our server, you will notice the issue:

```sh
# Client
$ cargo run --bin tcp-file-client
Socket file descriptor: 3
Sending file size: 574
Sending file data
```

The server still treats the size as if it arrives in the same endianness, which is little-endian.
But it is not little-endian anymore; it is big-endian. This results in a totally different and incorrect value. In this simple case, it causes the server to allocate much more memory than needed for the buffer to receive the file. You can imagine that in more complex scenarios, this issue could have a much larger and more serious impact. ```sh # Server $ cargo run --bin tcp-file-server Socket file descriptor: 3 Socket bound to address: 127.0.0.1:8000 File size: 1040318464 File data bytes read: 574 ``` And if we were using our native OS bit size, like a 64-bit integer in my case, and used a u64 type for the size variable in both the client and server, the issue could be even worse. We can modify the following lines in the client and server and see the result: ```rust // server let mut size_buf = [0; 8]; ... let file_size = u64::from_ne_bytes(size_buf); // client let size: u64 = buffer.len() as u64; ``` After making these changes, let’s run the server and client again: ```sh # Server $ cargo run --bin tcp-file-server Socket file descriptor: 3 Socket bound to address: 127.0.0.1:8000 File size: 4468133780304953344 memory allocation of 4468133780304953344 bytes failed Aborted (core dumped) ``` In this case, the multi-byte representation of the integer is larger, and the value representation becomes colossal in the wrong byte order. Here, the server tries to allocate about 4.4 exabytes of memory, which is far more than what’s available on any current machine. This illustrates how critical it is to handle endianness correctly, as incorrect handling can lead to severe issues like memory allocation failures and program crashes. You can find the code for this example and future ones in this [repo](https://github.com/douglasmakey/socket_net). ## To Conclude Understanding endianness is crucial for developing robust network applications. As demonstrated, even a simple task like sending a file over a network can lead to significant issues if endianness is not handled correctly. 
By adhering to standards and properly managing byte order, we ensure that data is accurately interpreted across different systems, preventing errors and enhancing the reliability of our applications. Modern programming languages like Rust provide helpful abstractions, but a solid grasp of these foundational concepts allows developers to troubleshoot and optimize their code more effectively. Always be mindful of endianness when working with multi-byte values in network communication to avoid potential pitfalls.

Thank you for reading along. This blog is part of my learning journey, and your feedback is highly valued. There's more to explore and share about sockets and networking, so stay tuned for upcoming posts. Your insights and experiences are welcome as we learn and grow together in this domain.

**Happy coding!**
douglasmakey
1,889,041
The Juggling Act of Multiple Architectures
Hey, folks! 🤹‍♂️ Ever tried juggling flaming swords while riding a unicycle? No? Well, that’s pretty...
0
2024-06-14T21:09:58
https://dev.to/yuktimulani/the-juggling-act-of-multiple-architectures-2h19
fmv, lmv, afmv
Hey, folks! 🤹‍♂️ Ever tried juggling flaming swords while riding a unicycle? No? Well, that's pretty much what handling multiple computer architectures feels like!

In our Software Portability and Optimization course, we're diving deep into the world of x86-64 and aarch64 architectures. They're like the Batman and Robin of computing, each with their own unique set of tricks and tools. These architectures come with a smorgasbord of micro-architectural features. Think of these as special moves in a video game: from cryptography acceleration to SIMD capabilities, each feature is like an extra power-up. But here's the kicker: not all processors have all the features, so coding for them is like trying to write instructions for a treasure hunt in a mansion, not knowing if everyone has the same map!

If you write software that relies on a feature not available on a user's CPU, it's game over. Literally. The software crashes. 💥 So, developers have to juggle: make the software flexible enough to run on any machine while also taking full advantage of any available superpowers.

Multi-versioning comes to the rescue! 🦸‍♂️ It's like preparing multiple versions of our treasure hunt instructions, each tailored for different maps. Whether it's Library Multi-Versioning (LMV) duplicating whole libraries or Function Multi-Versioning (FMV) creating multiple versions of functions, it ensures that no matter what CPU the user has, they get the best possible performance. Now that's some serious juggling!

Thank you for reading. Until next time, happy coding!! 🚀
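To make FMV less abstract, here is a small, hypothetical Rust sketch of the same idea done by hand (compilers like GCC can automate this with function attributes such as `target_clones`): detect at runtime which features the CPU actually has, and dispatch to a matching implementation instead of crashing. The `sum_chunked` "optimized" path is a stand-in; a real version would use AVX2 intrinsics.

```rust
// Baseline implementation that runs on any CPU.
fn sum_scalar(data: &[u32]) -> u32 {
    data.iter().sum()
}

// Stand-in for an "optimized" version; a real one would use AVX2
// intrinsics. Chunking keeps this example portable and runnable.
fn sum_chunked(data: &[u32]) -> u32 {
    data.chunks(4).map(|c| c.iter().sum::<u32>()).sum()
}

// Hand-rolled function multi-versioning: pick an implementation at
// runtime based on the features the CPU actually reports.
fn sum_dispatch(data: &[u32]) -> u32 {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            return sum_chunked(data);
        }
    }
    sum_scalar(data)
}

fn main() {
    let data: Vec<u32> = (1..=100).collect();
    // Both paths compute the same answer; only the speed differs.
    println!("sum = {}", sum_dispatch(&data));
}
```

Either path returns the same result, so the program works everywhere; the dispatch only decides how fast it gets there.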
yuktimulani
1,888,813
Creathievity: Can your ideas be stolen? (& what to do)
If someone takes your idea, you still have your idea, right? So they didn't really steal it—they...
0
2024-06-14T21:07:52
https://dev.to/drpraze/creathievity-can-your-ideas-be-stolen-what-to-do-1ibl
learning, discuss, developer, startup
**If someone takes your idea, you still have your idea, right? So they didn't really steal it—they copied it.**

I used "steal" in the title because it's catchy, but let's talk about the real issue.

**How to Stop People from Copying Your Idea**

People will always copy good ideas. Here's how to stop them from copying yours:

DON'T TELL ANYONE: Don't seek feedback, don't seek constructive input, partners, help, support, investors, or funding. Just keep it to yourself, and your idea will be secure, safe, and die with you.

But you don't want that, right? You want to share your idea but not have it copied?

**The Reality of Sharing Ideas**

What's the point of sharing if you must control how others use the idea? If you don't want the idea to die with you, why worry about people copying it?

When people copy you, it means you're doing something impactful. Do you really want them to stop copying you? You should fear the opposite. If no one is copying your ideas, your ideas suck.

**Nothing is _Really_ 100% Original**

Where did you get the ideas? Likely a mix of other people's ideas. The originality of an idea is proportional to the degree to which it was remixed. Anyone with similar inputs can come up with it.

The Wright brothers and Whitehead both created planes independently. Newton and Leibniz developed calculus without ever meeting. Great ideas are truths about solving problems, and NO ONE OWNS THE TRUTH.

**It's Not The Copycats You Should Worry About**

Copycats are limited. They might understand how the idea works but not why it works. When it breaks, they won't know how to fix it.

Ideas evolve. If you want your solution today to still be valuable in 10 years, you'll have to adjust a few things. Copycats can't keep up with continuous creativity. While they're catching up, you're already ahead. You can't beat innovators like Vitalik Buterin or Sam Altman by copying them—they're always moving forward.

**What You Should Worry About**

Competition.
If competitors get vital details and significantly improve on the idea, it can affect your business. Smart competitors can iterate and improve based on your architecture. The difference is your vision and execution style. They're not "stealing your idea"—they're being creative and competitive. To beat them, you have to be a better executor. Thomas Edison didn't invent the lightbulb from scratch. He perfected existing ideas.

It's selfish to hate somebody because they took an idea you think is yours and did something better for the world with it than you could.

In the end, you should pay less attention to competitors to keep your creativity fresh, so that you can be more visionary instead of putting out reactions to other people's visions.

**The Only Thing That's Yours**

The only thing that's truly yours is your expression of an idea—your tangible creations. This article is an expression of my ideas. I can protect my content from plagiarism, but I can't claim exclusive rights to the discussion on whether ideas can be stolen.

Inventions, recipes, and formulas can be patented, as they are tangible expressions. Protect these, not the idea. Consult legal advice to safeguard your work, and beware of people who might cheat you.

But if someone's expression of an idea is better than yours, improve and compete. It's survival of the fittest.

If you liked this article, it was picked out from my newsletter. You can subscribe to receive great lessons and ideas on building & selling cool projects in your inbox for free: https://tbk.beehiiv.com
drpraze
1,889,040
shadcn-ui/ui codebase analysis: Tasks example explained.
In this article, we will learn about Tasks example in shadcn-ui/ui. This article consists of the...
0
2024-06-14T21:07:12
https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-tasks-example-explained-2pm7
javascript, opensource, react, nextjs
In this article, we will learn about the [Tasks](https://github.com/shadcn-ui/ui/tree/main/apps/www/app/(app)/examples/tasks) example in shadcn-ui/ui. This article consists of the following sections:

![](https://media.licdn.com/dms/image/D4E12AQHYDefNT6XRFQ/article-inline_image-shrink_1000_1488/0/1718398649517?e=1723680000&v=beta&t=u4_70Q5-5fU01bpQm-refO6JdI2aGVXKlDv_8HhCxJg)

1. Where is the tasks folder located?
2. What is in the tasks folder?
3. Components used in the tasks example.

Where is the tasks folder located?
------------------------------

shadcn-ui/ui uses the app router, and the [tasks folder](https://github.com/shadcn-ui/ui/tree/main/apps/www/app/(app)/examples/tasks) is located in the [examples](https://github.com/shadcn-ui/ui/tree/main/apps/www/app/(app)/examples) folder, which lives in [(app)](https://github.com/shadcn-ui/ui/tree/main/apps/www/app/(app)), [a route group in Next.js](https://medium.com/@ramu.narasinga_61050/app-app-route-group-in-shadcn-ui-ui-098a5a594e0c).

![](https://media.licdn.com/dms/image/D4E12AQEsgep0F8a3-Q/article-inline_image-shrink_1500_2232/0/1718398648077?e=1723680000&v=beta&t=6GLcRPKOYFknxnZM_detLhAI411culCU7e-FP7XyF7Q)

What is in the tasks folder?
------------------------

As you can see from the above image, we have a components folder, a data folder, and page.tsx. [page.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/examples/tasks/page.tsx) is loaded in place of [{children} in examples/layout.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/examples/layout.tsx#L55).
Below is the code picked from tasks/page.tsx:

```tsx
import { promises as fs } from "fs"
import path from "path"
import { Metadata } from "next"
import Image from "next/image"
import { z } from "zod"

import { columns } from "./components/columns"
import { DataTable } from "./components/data-table"
import { UserNav } from "./components/user-nav"
import { taskSchema } from "./data/schema"

export const metadata: Metadata = {
  title: "Tasks",
  description: "A task and issue tracker build using Tanstack Table.",
}

// Simulate a database read for tasks.
async function getTasks() {
  const data = await fs.readFile(
    path.join(process.cwd(), "app/(app)/examples/tasks/data/tasks.json")
  )

  const tasks = JSON.parse(data.toString())

  return z.array(taskSchema).parse(tasks)
}

export default async function TaskPage() {
  const tasks = await getTasks()

  return (
    <>
      <div className="md:hidden">
        <Image
          src="/examples/tasks-light.png"
          width={1280}
          height={998}
          alt="Playground"
          className="block dark:hidden"
        />
        <Image
          src="/examples/tasks-dark.png"
          width={1280}
          height={998}
          alt="Playground"
          className="hidden dark:block"
        />
      </div>
      <div className="hidden h-full flex-1 flex-col space-y-8 p-8 md:flex">
        <div className="flex items-center justify-between space-y-2">
          <div>
            <h2 className="text-2xl font-bold tracking-tight">Welcome back!</h2>
            <p className="text-muted-foreground">
              Here&apos;s a list of your tasks for this month!
            </p>
          </div>
          <div className="flex items-center space-x-2">
            <UserNav />
          </div>
        </div>
        <DataTable data={tasks} columns={columns} />
      </div>
    </>
  )
}
```

Components used in tasks example.
---------------------------------

To find out the components used in this tasks example, we can simply look at the imports at the top of the page.
```
import { columns } from "./components/columns"
import { DataTable } from "./components/data-table"
import { UserNav } from "./components/user-nav"
import { taskSchema } from "./data/schema"
```

Do not forget the [modular components](https://github.com/shadcn-ui/ui/tree/main/apps/www/app/(app)/examples/tasks/components) inside the tasks folder.

![](https://media.licdn.com/dms/image/D4E12AQGBA9SshUsSmA/article-inline_image-shrink_1000_1488/0/1718398649435?e=1723680000&v=beta&t=89278ZYDrj0JMsjiLb1vgfTP4uSxgi-Y2bRxsbIA15w)

> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://github.com/Ramu-Narasinga/build-from-scratch) _and give it a star if you like it. Solve challenges to build shadcn-ui/ui from scratch. If you are stuck or need help, a_ [_solution is available_](https://tthroo.com/build-from-scratch)_._

About me:
---------

Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)

Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)

Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)

Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)

References:
-----------

1. [https://github.com/shadcn-ui/ui/tree/main/apps/www/app/(app)/examples/tasks](https://github.com/shadcn-ui/ui/tree/main/apps/www/app/(app)/examples/tasks)
ramunarasinga
1,889,038
SIMD and SVE: The Superheroes of Speed!
Hey there, fellow tech enthusiasts! 👋 Let’s talk about some true unsung heroes of the computing...
0
2024-06-14T21:05:59
https://dev.to/yuktimulani/simd-and-sve-the-superheroes-of-speed-566d
simd, sve, superheroes
Hey there, fellow tech enthusiasts! 👋 Let’s talk about some true unsung heroes of the computing world – SIMD and SVE. No, they’re not the latest Marvel superheroes (though they might as well be!). These acronyms stand for Single Instruction, Multiple Data (SIMD) and Scalable Vector Extensions (SVE). If you’re scratching your head wondering what they do, imagine a superhero with the power to handle multiple tasks simultaneously with lightning speed! ⚡️ SIMD is like having a superhero team where each member can handle a different piece of the puzzle simultaneously, boosting performance without breaking a sweat. It’s a lifesaver when you’re dealing with large datasets, making your computer blaze through operations that would otherwise take forever. But wait, there’s more! Enter SVE and its sidekick SVE2, extensions designed to make our CPUs even more flexible and powerful. They’re like upgrading your superhero team with new gadgets and powers, allowing them to tackle an even broader range of challenges. 🚀 So next time your computer zips through a complex task, give a nod to SIMD and SVE – the unsung heroes behind the scenes!
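To make "single instruction, multiple data" concrete, here is a tiny illustrative Rust sketch (an addition for illustration, not tied to any specific library): one operation applied across many elements at once. With optimizations enabled (`cargo build --release`), compilers will typically auto-vectorize a loop like this into SSE/AVX instructions on x86-64 or NEON instructions on aarch64.

```rust
// Add two slices elementwise. Each iteration performs the same
// operation on independent data, which is exactly the shape of
// work that SIMD units (and compiler auto-vectorization) excel at.
fn add_elementwise(a: &[f32], b: &[f32], out: &mut [f32]) {
    for ((o, &x), &y) in out.iter_mut().zip(a).zip(b) {
        *o = x + y;
    }
}

fn main() {
    let a = vec![1.0_f32; 8];
    let b: Vec<f32> = (0..8).map(|i| i as f32).collect();
    let mut out = vec![0.0_f32; 8];
    add_elementwise(&a, &b, &mut out);
    println!("{:?}", out); // [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
}
```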
yuktimulani
1,889,037
Litlyx | The One-Line Code Analytics | Plug anywhere!
What is Litlyx? Litlyx is a simple, open-source analytics tool designed to help businesses...
0
2024-06-14T21:05:36
https://dev.to/litlyx/litlyx-the-one-line-code-analytics-plug-anywhere-e6h
javascript, typescript, vue, nextjs
## What is Litlyx?

Litlyx is a simple, open-source analytics tool designed to help businesses and developers track KPIs effortlessly. With its lightweight size (<4kb) and real-time capabilities, [Litlyx](https://litlyx.com) makes it simple to monitor and analyze your data with minimal setup.

## What Was the Problem?

In today's data-driven world, businesses often struggle with complex documentation and expensive analytics tools that require extensive setup and maintenance. These tools can be difficult to customize, and many lack the ability to provide real-time insights or comprehensive KPI tracking with high customization.

## What is the Solution?

Litlyx offers a powerful yet easy-to-use analytics platform. With just one line of code, you can start tracking over 10 KPIs, making it accessible to businesses of all sizes. Litlyx combines high customization, real-time data analysis, and affordability to provide a superior analytics experience.

## Easy Setup for Litlyx

One of the standout features of Litlyx is its incredibly simple setup process. By embedding a single line of code into your website or application, you can start collecting valuable data immediately, in real time. It plugs natively into all JS/TS frameworks (e.g. Vue, Next, Nuxt, Bun, Angular, React, and many more).

## Features of Litlyx

- **Real-Time Capabilities**: Get immediate insights into your data with real-time tracking and reporting.
- **Track Over 10 KPIs**: Monitor essential performance indicators effortlessly.
- **Anonymous (Privacy First)**: All collected data is anonymous and GDPR-compliant.
- **High Customization**: Tailor the tool to meet your specific needs and preferences with custom events.
- **AI Data Analyst Assistant**: Leverage AI to analyze your data for you and provide actionable insights.
- **Report Generation**: Create detailed reports to share with stakeholders and make informed decisions.
- **Affordable Pricing**: Access all these powerful features without breaking the bank.
## Open Source Litlyx is proudly open-source on [Github](https://github.com/Litlyx/litlyx)! Leave a star ✩! This ensures that the tool remains up-to-date, secure, and adaptable to the ever-changing needs of businesses. Being open-source also means that you have complete transparency and control over the software, giving you the flexibility to modify and self-host it if you want. --- With Litlyx, tracking and analyzing your business data has never been easier or more efficient. Experience the benefits of real-time insights, extensive customization, and a user-friendly setup—all at an affordable price. Start using Litlyx today and take your analytics to the next level FOR FREE!
litlyx
1,889,035
One Byte Explainer - Event loop
Hello 👋 Let's start with Event loop One-Byte Explainer: The Event Loop is like a traffic...
27,721
2024-06-14T21:03:15
https://dev.to/imkarthikeyan/one-byte-explainer-event-loop-nlf
cschallenge, devchallenge, javascript, webdev
Hello 👋

Let's start with the **Event loop**

## One-Byte Explainer:

The **Event Loop** is like a traffic cop for JavaScript's code, managing a lane (callback queue) where tasks wait. The cop (Event Loop) processes tasks one by one, checking the call stack and ensuring everything runs smoothly.

## Demystifying JS: Event Loop in Action

The **Event Loop** manages asynchronous operations, allowing programs to continue running without getting stuck. Let's look at a code example and its explanation:

```javascript
console.log("data"); // Output: "data" (immediately)

for (let i = 0; i < 10; i++) {
  console.log(i); // Output: 0, 1, 2, ..., 9 (immediately, one by one)
}

setTimeout(() => {
  console.log('timer operations'); // Scheduled for later execution
}, 10); // Delay of at least 10 milliseconds

console.log("after setTimeout"); // Runs before the setTimeout callback
```

> setTimeout is available in JavaScript environments, but it is not part of the language itself. When you call setTimeout in your JavaScript code, the browser's JavaScript engine creates a timer in the Web API.

Here is the flow for better understanding:

![flow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cgkzmyhzwy7frn1e9jjc.png)

1. "data" and the loop outputs from 0 to 9 are printed immediately.
2. Once the loop execution is done, the setTimeout callback is scheduled.
3. The timer starts as part of the Web API.
4. Meanwhile, the event loop keeps an eye on the callback queue for any results from asynchronous operations.
5. Synchronous operations continue, printing "after setTimeout".
6. After 10 milliseconds, the timer completes and the setTimeout callback is placed in the callback queue.
7. The event loop checks the call stack, and once it's empty, it pushes the first item in the queue to the stack for execution, printing "timer operations".
Thank you for reading , See you in the next blog ![final](https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExbTd1c2I3NGFrYmxvYnB3ZGJqMnphYWM4eDhrODd2NHp0bHk0dms3MiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/lOJKLVYkNDWN8GoPoA/giphy.gif)
imkarthikeyan
1,889,036
UI UX Designer Interview Questions..!
Here are most asked questions while giving interview for UI UX Designer position : Q.1 : Tell us...
0
2024-06-14T20:58:38
https://dev.to/iam_divs/ui-ux-designer-position-interview-questions-3ej2
webdev, ui, interview, javascript
Here are the most frequently asked questions in interviews for a UI UX Designer position: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hoejsudc776nlfr84pvc.png) **Q.1 : Tell us about yourself ?** Ans: This answer includes a short intro of yourself and a summary of your work experience and projects. Other forms this question might take: Why are you interested in UX? How did you get started in UX? Tell me a little bit more about your background. **Q.2 : What is UX design?** Ans : UX design stands for User Experience design. It's all about making products (like websites, apps, or even physical items) easy and enjoyable for people to use. UX designers focus on understanding how users interact with a product and then design it in a way that meets their needs and preferences. It's like making sure a road trip is smooth and fun by planning the route, making sure the car is comfy, and putting the snacks where they're easy to reach! Other forms this question might take: Why should we hire a UX designer? What's the value of UX design? How do you define UX? **Q.3 : Tell me about some of your favorite examples of good UX.** Ans : This answer describes a user's experience: what a user should experience when they visit our website or application. For example: Google Search : Google's search engine is a classic example of excellent UX design. It's simple, fast, and incredibly effective. The search bar is prominently placed, and the results are easy to scan and understand. **Q.4 : What is the difference between UI and UX?** Ans: Think of UI (User Interface) as the look and feel of a house—how the rooms are laid out, where the furniture is placed, and what colors are used on the walls. It's all about the visual design and how things appear to the user. UX (User Experience), on the other hand, is like the overall experience of living in that house. 
It's not just about how it looks, but also how easy it is to move around, whether everything is convenient to use, and if it meets your needs effectively. UX focuses on how users feel and interact with the product or service as a whole. "In short, UI is about the visuals, while UX is about the overall user experience. They work together to create products or services that are both visually appealing and easy and enjoyable to use." Other forms this question might take: What's the difference between a UX designer and a graphic designer? How is UX design different from visual design? What sets UX apart from other design disciplines? **Q.5 : Walk me through your workflow?** Ans : Show your work to the interviewer.
iam_divs
1,889,034
Is Data Science Over? What's Changed in 2024?
Recently, there have been thought-provoking questions about the future of data science. Let's delve...
0
2024-06-14T20:55:17
https://dev.to/edulon/is-data-science-over-whats-changed-in-2024-563n
datascience, programming, beginners, career
Recently, there have been thought-provoking questions about the future of data science. Let's delve into this topic and explore how the field has evolved based on current trends and advancements up to 2024. 🌐 Continuous Evolution: Technological Advancements and Emerging Tools Instead of fading away, data science is transforming itself. By 2024, significant strides have been made in technologies like AutoML, which streamlines the development of machine learning models without necessitating extensive expertise. For instance, the application of AutoKeras for automating the creation of intricate deep learning models: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sdyul5pbsottgzyahyof.png) 📉 Challenges and New Paradigms: Ethics and Data Privacy In confronting fresh ethical challenges such as algorithmic bias and data privacy, regulatory frameworks like GDPR persist in shaping data science practices. In 2024, a focus on ethics and transparency remains paramount. Here's an example of implementing ethical practices within machine learning models: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bm8upn2eig9midimigre.png) 🧠 The Role of Automation and Low-Code: Democratization of Knowledge Despite the streamlining of certain tasks in data science through platforms for low-code development and automation, human expertise remains crucial for interpreting findings and implementing strategic insights. Here's an example of performing exploratory data analysis using Pandas and visualizing data with Matplotlib: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0rnjit5ueyud1ejs69ts.png) 🚀 Conclusion: The Bright Future of Data Science As of 2024, data science isn't facing obsolescence; instead, it is adapting and flourishing with advancements in technology, an enhanced emphasis on ethics and privacy, and more accessible knowledge. 
Embracing these transformations and continually evolving with the field is crucial for success. How do you envision these changes influencing your work or future prospects in data science? Share your insights and experiences! 🔍📈💬
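The exploratory-analysis step shown in the screenshots above can be sketched in a few lines of Pandas; the column names and values here are hypothetical stand-ins, since the article's actual code appears only as images.

```python
import pandas as pd

# Hypothetical data standing in for a real load such as pd.read_csv("sales.csv")
df = pd.DataFrame({
    "region": ["north", "south", "north", "east"],
    "revenue": [120.0, 80.0, 150.0, 95.0],
})

# Basic exploratory summaries: overall statistics and a per-group aggregate
print(df.describe())
print(df.groupby("region")["revenue"].mean())
```

From here, a call like `df.plot()` (backed by Matplotlib) would turn these summaries into charts, as in the article's visualization step.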
edulon
1,882,373
Arc (Bite-size Article)
Introduction What web browser do you usually use? Currently, I primarily use Brave as my...
0
2024-06-14T20:49:36
https://dev.to/koshirok096/arc-bite-size-article-4nl5
browser, arcbrowser
# Introduction What web browser do you usually use? Currently, I primarily use **Brave** as my main browser and **[Sidekick](https://dev.to/koshirok096/sidekick-browser-enhancing-productivity-with-a-fresh-web-experience-3b46)** as my secondary browser. But recently, I have been trying out a new web browser called **Arc**. While there are still many things I am figuring out, I find Arc to be more user-friendly and useful than Sidekick at this point. I am considering making Arc my secondary, or even my main browser. That's how innovative and unique I find it. In this article, I will share my current impressions of the Arc browser. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1rz8txoogmrp5memg798.png) # What is Arc? [Arc](https://arc.net/) is a relatively new web browser developed by a startup called **The Browser Company** and released in 2022. Arc aims to function as an operating system for the web, integrating web browsing with built-in applications and features. Many of its functions and design elements are innovative. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ht9p20l8fd1ydaw3necz.png) # My Impressions So Far I have only used Arc for a short period, so my understanding is still limited, but here are some points I've noticed so far: ## Spaces Arc has a unique feature called **Spaces**. In Arc, tabs can be placed in Spaces, which are separate areas that you can use for different purposes. For example, I have Spaces for General, Study, Programming, and Work. I am still experimenting with how many Spaces to create and how to organize them, and I find it interesting. Initially, it felt quite strange, but as I used it more, I found it to be a very intuitive and useful feature. # Organizing Frequently Used Apps in Favorites The left sidebar of the browser contains **Favorites**, **Pinned Tabs**, and **Today Tabs**. 
For detailed explanations, you can refer to the official website, but this navigation layout is very user-friendly and convenient. In Sidekick, I could also organize frequently used apps like Notion and Slack on the left side, but there were limitations depending on the plan. Arc's Favorites seem to operate similarly, but without restrictions at the moment. I have multiple apps like ChatGPT, Notion, Slack, and WhatsApp organized here, making it very convenient. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/72wn2hvm6cp6q3n72btf.png) # Tabs That Disappear After a Certain Time In Arc, you can pin tabs and place them in labeled areas in the sidebar. One particularly unique feature is that <u>unpinned tabs will disappear after a certain period</u> (this can be changed or disabled in settings, and disappeared tabs can be retrieved from the "Archived Tabs" section). When I first learned about this feature, I found it quite odd, but a few days later I understood that it makes sense. I have a bad habit of accumulating tabs out of inertia, which often results in a cluttered and hard-to-use browser. Arc's "disappearing tabs" feature has the potential to significantly enhance the user experience by addressing this issue. While I'm not fully accustomed to it yet, I plan to get the hang of it. # Seamlessly Handling Downloaded Files in the Browser One small but convenient feature is the ability to <u>handle downloaded files directly from Arc</u>. As a Mac user, I can manage downloaded files without opening Finder, directly from the "**Library**" button in the lower-left corner. For example, if I want to download an image from a website and compress it with [Tinypng](https://tinypng.com/), this feature allows for quick and easy handling. # Chromium-Based When considering switching browsers, it's crucial for me to be able to use **Chrome extensions** in the new browser. Like Sidekick, Arc is built on **Chromium**, so Chrome extensions are supported. 
This makes it easy for Chrome users to transition. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5zml2e6pt9lza9ajytm.png) # Conclusion While there are still many aspects I'm getting used to and sometimes find inconvenient, I believe Arc is a highly useful browser with a lot of potential. I plan to continue using it and exploring its innovative features. If this article has piqued your interest, I encourage you to give Arc a try. Thank you for reading!
koshirok096
1,889,030
Random Thoughts on Implementing Reactions
I was randomly thinking of how we'd integrate stuff like Mastodon, Twitter, Facebook posts, YouTube...
0
2024-06-14T20:48:09
https://dev.to/grim/random-thoughts-on-implementing-reactions-cg0
opensource, integration, pidgin, purple
I was randomly thinking of how we'd integrate stuff like Mastodon, Twitter, Facebook posts, YouTube videos, and so on into Pidgin 3. Now I'm not saying **I'm** planning on implementing these, but thinking about these helps determine how the interface should work to make them possible while still supporting traditional chat applications. Replies/comments will be easy as they'll be built into libpurple. The protocol would just treat each post or whatever as a channel or group direct message and then everything else is business as usual. However reactions might get weird because some of these only allow a few different reactions. For Mastodon and Twitter this is star/like, YouTube is thumbs up and thumbs down, and Facebook has its normal 6 or whatever reactions. Reactions are something we _need_ to have and it's a topic I haven't actually spent much time thinking about, but it's going to be coming up soon (tm), so I've been slowly working the problem. In traditional chat applications you can usually react with anything from the [Unicode Emoji list](https://www.unicode.org/emoji/charts/full-emoji-list.html) which would have made this whole thing easy if that were the only thing you could react with. However, other chat networks have custom emoji that can be used as reactions too. Discord in particular makes this kind of funky because some servers won't allow custom emojis and other servers only let you use their custom emojis on their server. With that background now, it becomes obvious that the only real way to determine what reactions are available for a given message is to ask the protocol plugin itself on a per message basis. This means that we will have to add additional API to do this. This isn't anything major, and protocols that have a static list of elements, like Mastodon, Twitter, YouTube, etc will just have a simple function that can return the available reactions, whereas something like Discord can go and ask the server what's available if need be. 
Now that the UI knows what reactions the user can use, it can then pass it on to the protocol which will actually pass it on to the server, doing what it needs to do. We still need to figure out the exact representation of reactions, which will _probably_ just be an `id`, `icon-name`, and `description`, but since most reactions are repurposed emoji, we'll probably have to wait until those are sorted out before moving forward. At any rate we now have some ideas on how to move forward here which is always a good place to be :)
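As a language-agnostic sketch of the per-message lookup described above (in Python rather than libpurple's C, with hypothetical names throughout), the idea might look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reaction:
    # The three fields the post speculates a reaction record would carry
    id: str
    icon_name: str
    description: str

# Protocols with a static reaction set can just return a constant list...
MASTODON_REACTIONS = [Reaction("star", "starred-symbolic", "Favorite")]

def available_reactions(protocol: str, message_id: str) -> list[Reaction]:
    """Ask the protocol plugin what reactions a given message supports."""
    if protocol == "mastodon":
        return MASTODON_REACTIONS
    if protocol == "youtube":
        return [Reaction("up", "thumbs-up-symbolic", "Thumbs up"),
                Reaction("down", "thumbs-down-symbolic", "Thumbs down")]
    # ...while a protocol like Discord would query the server per message here.
    return []
```

The key design point is that the UI never hardcodes a reaction list; it always asks the protocol, so a static list and a server round-trip look identical from the UI's side.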
grim
1,889,028
Sending GitHub Secrets to Docker Apps on VMs Using adnanh/webhooks
In this tutorial, you'll learn how to securely send GitHub secrets to a Docker application running on...
0
2024-06-14T20:37:02
https://dev.to/burgossrodrigo/sending-github-secrets-to-docker-apps-on-vms-using-adnanhwebhooks-1jdo
github, webhook, docker, pipeline
In this tutorial, you'll learn how to securely send GitHub secrets to a Docker application running on a virtual machine (VM) using the adnanh/webhooks tool. We'll walk through setting up the GitHub Actions workflow, configuring the webhook, and creating the bash script to handle the incoming data and restart the Docker container. **Prerequisites** - A GitHub repository - A VM with Docker installed - adnanh/webhook installed on your VM - GitHub secrets configured for your project **Step 1: Set Up GitHub Actions Workflow** Create a GitHub Actions workflow file in your repository. This workflow will be triggered on every push to the main branch and will send secrets to the webhook on your VM. ``` name: Send Secrets to Webhook on: push: branches: - main jobs: send-secrets: runs-on: ubuntu-latest steps: - name: Checkout repository uses: actions/checkout@v2 - name: Trigger Webhook env: YOUR_SECRET_1: ${{ secrets.YOUR_SECRET_1 }} YOUR_SECRET_2: ${{ secrets.YOUR_SECRET_2 }} run: | echo "YOUR_SECRET_1: $YOUR_SECRET_1" echo "YOUR_SECRET_2: $YOUR_SECRET_2" curl -X POST http://<YOUR_VM_IP>:9000/hooks/your-hook-id \ -H 'Content-Type: application/json' \ -d '{ "YOUR_SECRET_1": "'"$YOUR_SECRET_1"'", "YOUR_SECRET_2": "'"$YOUR_SECRET_2"'" }' ``` **Step 2: Configure Webhook on VM** On your VM, create a webhook configuration file. This file tells the webhook tool how to handle incoming requests and pass the secrets to the environment variables in your script. ``` [ { "id": "your-hook-id", "execute-command": "/path/to/your/script.sh", "pass-environment-to-command": [ { "source": "payload", "name": "YOUR_SECRET_1", "envname": "YOUR_SECRET_1" }, { "source": "payload", "name": "YOUR_SECRET_2", "envname": "YOUR_SECRET_2" } ] } ] ``` **Step 3: Create the Bash Script** Create a bash script that will be executed by the webhook. This script will pull the latest changes from the repository, build the Docker image, and restart the Docker container with the new secrets. 
``` #!/bin/bash set -e # Set the repository's directory REPO_DIR="/path/to/your/repository" # Set the Docker container and image names CONTAINER_NAME="your_container_name" IMAGE_NAME="your_image_name" # Change to the repository directory cd "$REPO_DIR" || { echo "Failed to change directory to $REPO_DIR"; exit 1; } # Pull the latest changes from the main branch git pull origin main || { echo "Failed to pull from the main branch"; exit 1; } echo "Successfully pulled from the main branch." # Build the Docker image docker build -t "$IMAGE_NAME" . || { echo "Docker build failed"; exit 1; } echo "Docker build successful." # Stop the old container if it exists if [ "$(docker ps -aq -f name="$CONTAINER_NAME")" ]; then docker stop "$CONTAINER_NAME" || { echo "Failed to stop container"; exit 1; } docker rm "$CONTAINER_NAME" || { echo "Failed to remove container"; exit 1; } fi # Echoing ENV variables for debugging purposes echo "START ECHOING" echo "YOUR_SECRET_1: $YOUR_SECRET_1" echo "YOUR_SECRET_2: $YOUR_SECRET_2" echo "FINISHED ECHOING" # Start a new container with the updated image and pass the environment variables docker run -d \ --name "$CONTAINER_NAME" \ -e YOUR_SECRET_1="$YOUR_SECRET_1" \ -e YOUR_SECRET_2="$YOUR_SECRET_2" \ "$IMAGE_NAME" || { echo "Failed to restart the container"; exit 1; } echo "Container restarted successfully." ``` **Step 4: Start the Webhook on Your VM** Start the webhook tool with the configuration file you created. `webhook -hooks /path/to/your/webhook-config.json -verbose` **Step 5: Push Changes to GitHub** Push any changes to your GitHub repository. This will trigger the GitHub Actions workflow, sending the secrets to your VM, where the webhook will handle them and restart your Docker container with the updated environment variables. **Final touch** 1 - Try to run _adnanh/webhooks_ as a linux system service. 
2 - Make sure your `pipeline.sh` is executable with `chmod +x pipeline.sh` **Conclusion** By following these steps, you can securely send GitHub secrets to a Docker application running on a VM using adnanh/webhooks. This setup allows you to keep sensitive information out of your codebase while ensuring your applications have the necessary secrets to run correctly.
burgossrodrigo
1,888,891
How I Was Accepted into Two Scientific Initiation Programs: One Funded and One Voluntary
Introduction Scientific Initiation (IC) is a program designed to integrate undergraduate students into...
0
2024-06-14T20:36:46
https://dev.to/ianevictoria/como-fui-aprovada-em-duas-iniciacoes-cientificas-uma-com-bolsa-e-outra-voluntaria-1pf1
guia, pesquisa, cientifica, ia
## Introduction Scientific Initiation (IC) is a program designed to integrate undergraduate students into the academic research environment. It allows students to develop scientific skills, learn research methodologies, and contribute to advancing knowledge in their field of study. An IC can be done with a scholarship, where the student receives financial support, or on a voluntary basis, without pay. In this article, I share my experience, some tips, and the reasons why I recommend taking part in an IC. ## My Path into Scientific Initiation Science has always fascinated me and sparked my interest. From a very young age, I have followed people on social media who do research and science communication, especially women, who are my inspiration. I was captivated when I started following [Patricia Honorato](https://www.instagram.com/ipatriciahonorato?igsh=MW50enB6Z2RtaWRzOA==), who discussed the use of artificial intelligence (AI) in the fight against cancer. She also took part in Scientific Initiation (IC), and I thought: "That's what I want to do - research, innovate, help people." However, I didn't know how to get into an IC. So, that same day, I opened X (formerly Twitter) to search for "how to get into an IC", looking for accounts from studytwt students. But when I opened the app, I came across a post from the dear [Pin](https://twitter.com/Pinheiro314), who had emailed a professor and received a reply about a volunteer position. At that moment, it clicked. I decided to take the same initiative, adapting it to my goal. I read the call for the **Voluntary Scientific Initiation Program - PIBIV**, the only one available at the time. I then emailed three professors and caught the attention of all of them. After talking with them, I chose the advisor with projects in Data Engineering and AI, areas that interest me. 
Before joining, for two months I attended weekly meetings and carried out tasks to stand out, always delivering more than was asked. This caught the advisor's attention, and he invited me into the Voluntary Scientific Initiation Program - PIBIV, which I accepted immediately and with great joy. After this experience, I learned the importance of keeping an eye on the calls for scholarship programs. I was aware that, beyond being a learning opportunity, getting a scholarship would mean significant financial support, considering the time and effort that research and writing demand. As soon as the call for the **Scientific Initiation Program - PIBIC** was published, I applied without hesitation. I drafted a meticulous project, investing many hours to ensure its quality. I submitted it to the university and, the very next day, I received the news that I had been paired with a PhD in the field of Artificial Intelligence. She proposed a few small changes to the project and presented it to the committee. A week later, I was notified that I was among the ten students selected for the scholarship. It was a moment of immense happiness; I realized that my dedication had paid off, since I had waited months for that chance and, when it came, I seized it with all my strength. The following day, I shared the news with my studytwt friends and received several messages, both on the post and in DMs, asking for tips and guidance. It was on X that I had the epiphany that showed me a way to achieve what I wanted so much, which inspired me to write this article as a way of giving thanks and of helping those who want the same. ## Who Can Do Scientific Initiation #### 1. Undergraduate Students Scientific Initiation (IC) is traditionally associated with undergraduate students, giving them the opportunity to get involved in research projects from the first years of their academic training. 
#### 2. High School and Middle School Students The participation of high school and middle school students in IC projects is rare, but possible through specific programs designed to spark an early interest in science. Universities, research institutes, and foundations run initiatives such as **Pre-Scientific Initiation**, aimed at young talents who show great potential in scientific areas. These programs introduce students to the research environment, stimulating their academic and scientific development. ## How to Land a Scientific Initiation 1. **Maintain good academic performance**: You don't need to be a genius or an outlier, but good grades and a solid academic record can set you apart in the IC selection process. 2. **Identify your interests**: Think about the areas that interest you most and where you would like to do research. After all, you will study a given subject intensely, so why not study something you enjoy? 3. **Research advisors**: Look for professors and researchers working in those areas. Check their projects and publications. It is very important to find an advisor who knows the area you want to research, so that you actually receive good guidance. 4. **Send emails**: Contact these professors, introduce yourself, talk about your interest in the area, and ask about IC opportunities. Show your enthusiasm and willingness to learn. Don't be shy; be determined to get what you want. 5. **Networking**: Attend academic events, lectures, and seminars. Meeting people in the field can open doors to IC opportunities. After all, you will be among these people, and the barrier of misinformation falls away once you talk to those who have already been through it. 6. 
**Prepare for the selection process**: If you are looking for a funded IC, keep an eye on the calls published periodically by universities - watch the academic calendar - and by funding agencies. Read the requirements carefully and prepare the necessary documents, including a detailed research project. 7. **Put real effort into the project**: Take the time to write a solid, well-structured project. In both cases, never deliver the minimum; always deliver your best. This matters even more for a funded IC: don't do a half-hearted job, because you will be competing with other people, so keep that in mind. It shows your commitment and increases your chances of being selected. 8. **Follow the process**: After applying, keep track of the institution's notifications about the progress of the process and the results. 9. **After approval**: Keep an eye on the call and on your email. If you are approved, check the call for the documents you must submit. Pay close attention here, because if you don't do everything correctly, you may lose the chance you fought so hard for. ## Benefits Taking part in a Scientific Initiation (IC) offers several important benefits for anyone who wants to write an academic paper. Some of the main gains include: 1. **Paid Work:** Students approved in the selection process for a funded IC receive financial support while carrying out the research - amounts may vary according to the researcher's level of training, depending on the call, funding agency, or university. 2. **Certificate of Participation:** Upon completing an IC, it is common to receive a certificate attesting to your participation in and contribution to the research. 3. **Accumulated Hours:** Many institutions recognize IC participation as a complementary activity, counting hours that can enrich your academic record. 4. 
**Opportunities to Speak:** Taking part in an IC often opens doors to presenting your results at academic conferences and scientific events, providing visibility and networking in academia. ## Who Are the Funding Agencies? Now you know that if you want an IC, besides watching the calls published regularly by universities, you also need to follow the calls from funding agencies. So it is important to know these agencies. In Brazil, some of the **main funding agencies** are: - **CNPq (National Council for Scientific and Technological Development)**: One of the federal government's main agencies for supporting scientific and technological research. CNPq offers several scholarship modalities, including Scientific Initiation (PIBIC). - **CAPES (Coordination for the Improvement of Higher Education Personnel)**: Linked to the Ministry of Education, CAPES is responsible for expanding and consolidating stricto sensu graduate programs (master's and doctoral degrees) across all Brazilian states, in addition to offering Scientific Initiation scholarships. 📌 There are also state funding agencies that support Scientific Initiation. Check the state agencies as well for regional opportunities. ## Why I Recommend Scientific Initiation I recommend Scientific Initiation not only for those who want an academic or research career, but also for those with other professional aspirations. Some of the reasons: 1. **Development of Scientific Skills**: Taking part in an IC allows students to develop skills critical to research, such as data analysis, scientific writing, and presenting results. 2. **Hands-on Experience**: An IC provides a practical opportunity to apply the theoretical knowledge acquired during the degree, which is valuable for any career. 3. 
**Expanded Networking**: Working on research projects lets you meet professionals and academics in your field, which can open doors to future opportunities. 4. **A Résumé Differentiator**: Having IC experience on your résumé is a big advantage, whether for graduate program applications or for the job market. 5. **Personal Development**: An IC helps build discipline, responsibility, and time-management skills, which are essential in any profession. ## Conclusion Getting into a Scientific Initiation may seem challenging at first, but with dedication everything falls into place. Always remember to maintain good academic performance, identify your interests, research advisors, send emails, attend academic events, and keep an eye on the calls from scholarship programs and funding agencies. Don't underestimate the power of networking and of preparing a solid research project. All the tips I mentioned above were fundamental to my journey, and I hope they are useful to you as well. ## References 1. [CNPq official website](https://www.cnpq.br/) 2. [CAPES official website](https://www.capes.gov.br/) 3. [Iniciação Cientifica no Ensino Médio](https://querobolsa.com.br/revista/iniciacao-cientifica-ensino-medio) 4. [Programas de iniciação científica para o ensino médio no Brasil](http://pepsic.bvsalud.org/scielo.php?script=sci_arttext&pid=S1809-89082015000100004) ---
ianevictoria
1,889,026
P vs NP Problem
The P vs NP problem is one of the most interesting and unanswered questions in computer science. The...
0
2024-06-14T20:27:55
https://dev.to/syedmuhammadaliraza/p-vs-np-problem-4hai
cschallenge, devchallenge, challenge, devto
The P vs NP problem is one of the most interesting open questions in computer science. It asks whether every problem whose solution can be checked quickly (in polynomial time, the class NP) can also be solved quickly (in polynomial time, the class P). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t2lwrqdbyrdj6bdfbtxy.png) ### Defining P and NP 1. **Class P (Polynomial Time)**: - These problems can be solved efficiently - in time polynomial in the input size. - Example: Sorting a list of numbers using an algorithm like QuickSort or MergeSort. 2. **Class NP (Nondeterministic Polynomial Time)**: - Finding a solution to these problems may be hard, but once a candidate solution is given, it can be verified quickly. - Example: Traveling Salesman Problem (TSP) - given a tour, checking its total length is fast, but finding the shortest tour is hard. ### The Central Question P vs NP asks: if we can quickly verify solutions to problems, can we also quickly find those solutions? Formally, is P equal to NP? ### Why does it matter? Resolving P vs NP has profound implications: 1. **Cryptography**: - The security of cryptographic systems like RSA depends on the difficulty of problems like prime factorization. If P equals NP, these problems could be solved quickly, compromising the security of data worldwide. 2. **Optimization and AI**: - Many complex optimization problems in logistics, scheduling, and artificial intelligence are NP-complete. If P equals NP, it would transform these industries by allowing such problems to be solved efficiently. 3. **Scientific Research**: - Fields such as biology (protein folding), chemistry (molecular interactions), and physics (simulating quantum systems) face many problems classified as NP. Solving them efficiently would accelerate scientific discovery. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rsnrhg549i6jwtmiexfv.png) ### Current Perspectives Despite several decades of research, P vs NP remains unresolved. Many computer scientists believe that P is not equal to NP - that some problems are genuinely hard to solve, even though their solutions are easy to check. The question is one of the seven Millennium Prize Problems, with a $1 million reward for a correct proof. ### The Broader Impact Settling whether P equals NP goes beyond theoretical curiosity. It challenges our fundamental understanding of problem solving and the limits of computation. If P equals NP, it would open a new frontier of algorithmic capability, making problems that are currently intractable solvable. ### Conclusion The P vs NP problem sits at the heart of theoretical computer science and has important practical implications. A resolution would not only advance our theoretical understanding but would affect many real-world applications, from cybersecurity to scientific research. Until then, it remains a fascinating mystery that continues to inspire and challenge the brightest minds in the field.
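The check-fast/solve-slow asymmetry at the core of P vs NP can be illustrated with subset sum, a classic NP-complete problem: verifying a proposed subset takes linear time, while the naive search tries all 2^n subsets. A small illustrative sketch (not the article's code):

```python
from itertools import combinations

def verify(numbers, target, candidate):
    """Polynomial-time check: is the candidate subset a valid certificate?"""
    return all(x in numbers for x in candidate) and sum(candidate) == target

def solve(numbers, target):
    """Exponential-time search: try every one of the 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset sums to the target
```

If P = NP, some polynomial-time algorithm would replace the brute-force `solve` for every problem whose `verify` runs in polynomial time.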
syedmuhammadaliraza
1,889,023
How I Developed a Classic Snake Game Using Python and Pygame
As a lawyer who has ventured into the exciting world of programming, I've always looked for ways to...
0
2024-06-14T20:24:01
https://dev.to/codecounsel/how-i-developed-a-classic-snake-game-using-python-and-pygame-3ll3
beginners
As a lawyer who has ventured into the exciting world of programming, I've always looked for ways to bridge my new coding skills with fun, engaging projects. That's why I decided to recreate the classic Snake game, giving me a fantastic opportunity to explore game development with Python. This post details my journey in developing the Snake game and reflects on the learning opportunities it presented. The "Classic Snake Game" allows players to control a snake, aiming to eat apples that randomly appear on the screen. Each apple eaten increases the snake's length and the game's speed, making it progressively more challenging. The game was developed using Python and Pygame, a set of Python modules designed for writing video games. To bring this project to life, I relied on Python for its straightforward syntax and Pygame for its ability to handle game-specific tasks like rendering graphics and playing sounds. During the game's development, I encountered several challenges, particularly around handling game states and user input. One significant hurdle was ensuring that the game accurately detected collisions and updated the game state accordingly. I resolved this by carefully structuring the game loop and collision detection logic, which was crucial for maintaining the flow and integrity of the game. Key Features - Dynamic Collision Detection: The game checks for collisions not just between the snake and the apples, but also between the snake and its own body, adding complexity and challenge. - Audio Feedback: Incorporating sound effects like a crunch when the snake eats an apple and a crash when it collides with itself, enhancing the user experience. - Score Display and Game Over Screen: A scoreboard updates in real-time, and a game over screen offers options to restart or quit, making the game user-friendly. This project deepened my understanding of Python programming and game development concepts. 
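The collision detection and game-loop structuring described above can be sketched independently of Pygame. This is an illustrative reconstruction, not the game's actual code — the names (`step`, the grid size) are assumptions; the snake is a list of (x, y) grid cells with the head at index 0:

```python
# A minimal, Pygame-independent sketch of the collision logic:
# one tick of the game loop on a grid.
GRID_W, GRID_H = 20, 20

def step(snake, direction, apple):
    """Advance one tick. Returns (new_snake, ate_apple, game_over)."""
    dx, dy = direction
    head = (snake[0][0] + dx, snake[0][1] + dy)
    # Hitting a wall or the snake's own body ends the game.
    # (A common simplification: the current tail cell still counts this tick.)
    hit_wall = not (0 <= head[0] < GRID_W and 0 <= head[1] < GRID_H)
    hit_self = head in snake
    if hit_wall or hit_self:
        return snake, False, True
    ate = head == apple
    # Grow by keeping the tail when an apple is eaten; drop it otherwise.
    body = snake if ate else snake[:-1]
    return [head] + body, ate, False

snake = [(5, 5), (4, 5), (3, 5)]
snake, ate, over = step(snake, (1, 0), apple=(6, 5))
print(len(snake), ate, over)  # 4 True False -> the snake grew
```

In the real game, each tick would also redraw the screen, play the crunch/crash sounds, and update the score — but keeping the state transition pure like this makes the collision rules easy to test.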
I learned the importance of managing game states and user interactions in real-time, which are critical skills in any software development field. The "Snake Game" is in its initial phase, and there are numerous enhancements that could be implemented, such as: - Increasing Game Complexity: Adding more levels or obstacles. - Enhancing User Interface: Introducing custom themes and graphics to make the game visually appealing. - Multiplayer Functionality: Allowing more than one player to compete on the same screen. I invite everyone to check out the code on my GitHub repository (https://github.com/codecounsel/snake_game1/tree/main) and contribute suggestions for improvements or new features. Your feedback is vital in helping this project evolve! Developing the "Snake Game" was an enriching experience that combined my passion for programming with the joy of game development. I am eager to continue enhancing the game, adding features, and improving the design. I look forward to any feedback that can help take this project to the next level!
codecounsel
1,879,557
Higher Order Functions
I learned the basics of coding in a bootcamp setting. We learned how to do things, in what felt like,...
0
2024-06-14T20:23:57
https://dev.to/lylethorne/higher-order-functions-cke
I learned the basics of coding in a bootcamp setting. We learned how to do things in what felt like the hard way first. Looking back, they were teaching us how to understand what was going on 'under the hood' before we started to use the built-in higher order functions. Higher order functions (HOF) allow for smoother coding by letting us reduce and reuse code. HOF are functions that accept one or more functions as parameters (known as 'callbacks'), or that return a function as their result. Below we have an example of what a callback can look like ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/06f18z8df4ah2gilhd9t.png) Our first function, called 'display', takes a single parameter val and logs it to the console using console.log(val). The second function takes three parameters: num1, num2, and a callback. It calculates the sum of num1 and num2, storing the result in the variable 'add'. It then calls the callback function, passing 'add' as an argument. The last line calls the sum function with the arguments 2, 4, and the display function. After running, 6 is logged to the console. ## HOF with arrays The map(), filter(), and reduce() methods are used fairly commonly, so I am going to show you some I use less frequently - but still find interesting. sort() is a method called on an array that returns the elements sorted, mutating the original array. sort() converts the elements into strings and then sorts them in ascending order, 'comparing their sequences of [UTF-16](https://developer.mozilla.org/en-US/docs/Glossary/Code_unit) code units values' (MDN, Array.prototype.sort()). Below is an example of sort() ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g90rn5e8wwuwu2dfmgjk.png) First, we declare an array named 'stringArray' containing four strings: "apple", "cherry", "apricot", and "banana". Then, the sort() method is called on 'stringArray'. 
This method sorts the elements of the array in place and returns the sorted array. Remember! By default, the sort() method sorts the elements as strings in ascending order, based on their UTF-16 code unit values. The sorted array is ["apple", "apricot", "banana", "cherry"]. Here is another example of sort(), this time with numbers ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xbvx47vxkuqyzrdg2wt5.png) First, we initialize an array called numberArray with the elements [66, 1, 19, 333, 42]. Then, we define a comparison function named 'compareNumbers'. This function takes two arguments, a and b, and returns the result of a - b. This result determines the order of the elements: If a - b is negative, a comes before b. If a - b is positive, a comes after b. If a - b is zero, a and b are considered equal in terms of order. The first time, sort() is called on numberArray without any arguments. Remember: by default, sort() converts the elements to strings and sorts them lexicographically (based on the Unicode code point values of the characters). This is why the array is sorted as [1, 19, 333, 42, 66] instead of numerically. Finally, the sort() method is called with the compareNumbers function as an argument. This causes the array to be sorted numerically in ascending order: [1, 19, 42, 66, 333]. ## HOF with objects Object.entries() is a function that creates a new array from an object. It takes each key and its corresponding value from the object, puts the pair into a little array together, followed by the next corresponding pair in another array, and so on until the object is finished. Below is an example of Object.entries() ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipu0dxgvhrjlf95uhsip.png) First, we initialize an object 'user' with two properties: firstName with the value 'Lyle' and pets with the value 2. 
Then the Object.entries() method is called with the 'user' object as its argument. This method returns an array of the object's own property key-value pairs: [['firstName', 'Lyle'], ['pets', 2]]. ## Conclusion Since completing the JS portion of the bootcamp, I've found myself starting to write out the code that map(), filter(), or another HOF accomplishes before I remember to use them. However, spending time hunting for a missing or misplaced piece of syntax within loops or large if-else chains is inefficient time management and taxing on our brain's energy. HOF are smaller blocks of code that allow developers to keep their code DRY, make code more legible, and make it easier to debug. Practicing new methods is an important way to strengthen skills and helps streamline coding. ## Sites I found helpful Prasad, Sobit, "Higher Order Functions in JavaScript – Explained with Practical Examples", _Free Code Camp_, 01/03/2023, https://www.freecodecamp.org/news/higher-order-functions-in-javascript-explained/ , 06/2024 MDN, Array.prototype.sort(), https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort , 06/2024 MDN, Code Unit, https://developer.mozilla.org/en-US/docs/Glossary/Code_unit , 06/2024
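For convenience, here is a transcription of the screenshot examples discussed above (variable names follow the text; the exact formatting in the screenshots may differ):

```javascript
// The callback example: sum() computes a value, then hands it to
// whatever callback it was given.
function display(val) {
  console.log(val);
}

function sum(num1, num2, callback) {
  const add = num1 + num2;
  callback(add);
}

sum(2, 4, display); // logs 6

// sort() on strings: ascending by UTF-16 code units, mutating in place.
const stringArray = ["apple", "cherry", "apricot", "banana"];
stringArray.sort();
console.log(stringArray); // ["apple", "apricot", "banana", "cherry"]

// sort() on numbers: lexicographic by default, numeric with a comparator.
const numberArray = [66, 1, 19, 333, 42];
function compareNumbers(a, b) {
  return a - b;
}
console.log([...numberArray].sort());               // [1, 19, 333, 42, 66]
console.log([...numberArray].sort(compareNumbers)); // [1, 19, 42, 66, 333]

// Object.entries() turns an object's key/value pairs into little arrays.
const user = { firstName: "Lyle", pets: 2 };
console.log(Object.entries(user)); // [["firstName", "Lyle"], ["pets", 2]]
```

(Note the spread copies before sorting the numbers, so both results can be shown from the same original array.)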
lylethorne
1,889,022
Three Months at Kali Academy: An Immersion in Open Source.
My name is Patrice Kalwira and I am a fullstack web developer. This article is a summary of...
0
2024-06-14T20:23:14
https://dev.to/patricekalwira/resume-de-mon-stage-a-kali-academy-une-immersion-dans-lopen-source-2ok3
opensource, webdev, programming, career
My name is Patrice Kalwira and I am a fullstack web developer. This article is a summary of my journey as an intern at the open-source organization Kali Academy, based in Goma, in the eastern Democratic Republic of the Congo. For three months, I had the exceptional opportunity to take part in an intensive internship at Kali Academy, a dynamic organization specialized in open source. This enriching experience allowed me to dive deep into open source and discover Kali Academy, its various projects and programs. More than that, it was a total immersion in the world of the Wikimedia Foundation and its projects, to which I also brought my modest contribution. **What is Kali Academy?** Kali Academy is an academy that aims to promote open-source values in underrepresented regions, notably in Africa. The key values of open source are sharing, collaboration and, of course, contribution. Its main audience is students and recent graduates, without forgetting the wider community, which also needs to be made aware of open source. **Internship program** Our internship lasted three months, and each month had its own main focus. **1. First month: Hacking, Linux and Kali Academy projects** - **How to become a good hacker** Since we all aspired to become hackers, our journey began with training on how to become a hacker (even though most people confuse hackers with crackers). We learned the attitudes to adopt. - **Introduction to Linux** Most real hackers use Linux as their operating system. So even if we won't necessarily use Linux every day, we are expected to know a few commands. - **Presentation of Kali Academy projects** We had sessions on how to find an exciting open-source project and how to contribute to it. 
Our first exercise in this regard was to go to [Kali Academy's GitHub](https://github.com/kaliacad/) and pick a few issues to work on. **2. Second month: Hacking with MediaWiki** This month first aimed at familiarizing the interns with the MediaWiki engine, which powers almost all Wikimedia applications. This part was crucial because it allowed us to start making technical contributions (writing code) directly to Wikimedia projects. The projects we spent the most time on were Wikipedia and Wikidata. There were several others, but these two deserve a mention here. During this month, we built [a small search engine](https://github.com/kaliacad/WD-Vanila) that retrieves data from Wikidata using the Wikidata API. This project allowed us to put into practice what we had learned at an online event about Wikidata. ![Kali Academy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zthormknkzu2cu46f1r8.jpg) **3. Third month: Final project** After those two months of learning, the last month was about putting everything we had learned into practice by building applications. For our final project, in a group of four, we developed [an application that uses artificial intelligence](https://github.com/kaliacad/wikidataqueriIA) to generate SPARQL queries from natural language. The project is, of course, open source, and if you feel like it, contribute freely. One of the highlights of this internship was our active participation in various hackathons and workshops organized by the Wikimedia Foundation, Kiwix, and others. These events were ideal opportunities to apply the skills we had acquired in practical, collaborative settings. We worked on a variety of projects, from improving existing Wikimedia tools to building innovative new applications. 
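To give a flavor of what the final project produces, here is a hypothetical sketch of turning a simple request into a SPARQL query for the Wikidata Query Service. This is an illustration only, not the actual code of the wikidataqueriIA project — `build_query` and the example IDs (P31 "instance of", Q515 "city") were chosen for the demo:

```python
# Illustrative only: build a SPARQL query of the kind the Wikidata
# Query Service accepts, for "items whose <property> is <value>".
def build_query(property_id, value_id, label_lang="en", limit=10):
    """Return a SPARQL query listing items where property_id = value_id."""
    return f"""
SELECT ?item ?itemLabel WHERE {{
  ?item wdt:{property_id} wd:{value_id} .
  SERVICE wikibase:label {{ bd:serviceParam wikibase:language "{label_lang}". }}
}}
LIMIT {limit}
""".strip()

# "Give me cities" -> instance of (P31) city (Q515)
print(build_query("P31", "Q515"))
```

The project's AI layer does the hard part — mapping free-form human language onto the right property and item identifiers before a query like this can be emitted.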
These hackathons also allowed us to network with industry professionals and receive valuable advice for our personal and professional development. ![Kali Academy](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6tkmv3jly653lqd5ngg3.jpg) Here is a non-exhaustive list of activities we had the honor of taking part in: - [Wishaton March 2024](https://meta.wikimedia.org/wiki/Event:WishathonMarch2024) - [Wikidata Leveling Up 2024](https://www.wikidata.org/wiki/Wikidata:Events/Leveling_Up_Days_2024) - [WMA-MediaWiki 101: Becoming a MediaWiki Hacker](https://meta.wikimedia.org/wiki/Event:WMA-MediaWiki_101:Becoming_a_MediaWiki_Hacker) - Many activities organized by Wikimedia DRC **Conclusion** This internship at Kali Academy was a transformative experience, offering a deep immersion into the world of open source, solid technical skills, and the opportunity to work on innovative projects with real-world applications. The knowledge and skills acquired during these three months are invaluable and open many doors to future opportunities. Kali Academy provided an exceptional learning environment that prepared us to become active, innovative contributors in the open-source field. There was much more to write, but this is just a brief summary. If you have any questions, feel free to ask them in the comments. I thank the entire Kali Academy administration, all my fellow interns, and all open-source contributors in general, whatever the project.
patricekalwira
1,889,020
DevOps Beginner and Enthusiast
I started my DevOps journey on 4 February 2024 and it has been pretty cool so far. Currently learning Git...
0
2024-06-14T20:22:13
https://dev.to/zaaviel_geraldkabir_6509/devops-beginner-and-enthusist-n96
I started my DevOps journey on 4 February 2024 and it has been pretty cool so far. I'm currently learning Git and Linux.
zaaviel_geraldkabir_6509
1,889,019
Ebook - From Zero to J.A.R.V.I.S. - Building Your Personalized Assistant with AI and Python
Hello once again, devs! Today I'm bringing you an e-book I put together during a DIO bootcamp ...
0
2024-06-14T20:21:49
https://dev.to/carlos-cgs/ebook-do-zero-ao-jarvis-criando-seu-assistente-personalizado-com-ia-e-python-doo
Hello once again, devs! Today I'm bringing you an e-book I put together during a DIO bootcamp, which showed me the whole path to producing e-books in a simple, professional way. Much gratitude. In this e-book I teach, step by step and hands-on, how to recreate J.A.R.V.I.S., Iron Man's personal assistant. In the J.A.R.V.I.S. project I put into practice concepts from the Alura course with Google on integrating the Gemini APIs, along with many Python concepts I picked up during my developer journey. J.A.R.V.I.S. works like Alexa or Siri, responding to voice commands through the speech_recognition library. It has a search command that uses a Google API to connect directly to Google Gemini and fetch the requested answers. Jarvis can also launch several programs on your computer, such as Calculator, Paint, Word, Excel, etc., using the OS library to interact directly with your operating system. All of these features interact with the user, understanding what is said and replying with audio responses. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9s411cy5scv94rtdj9cn.png) Here is the link to the post on my LinkedIn: https://www.linkedin.com/posts/carlos-cgs_do-zero-ao-jarvis-carlos-cgs-activity-7202264831473684480-HX9y?utm_source=share&utm_medium=member_desktop _Let's spread knowledge and share everything we learn!_ Find me on GitHub: https://github.com/Carlos-CGS Find me on LinkedIn: https://www.linkedin.com/in/carlos-cgs/
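As an illustration of the command-dispatch idea described above — mapping recognized voice phrases to programs and launching them through the operating system — here is a minimal sketch. The names (`COMMANDS`, `resolve`, `launch`) and the Windows program names are hypothetical, not the e-book's actual code:

```python
# Illustrative sketch of phrase -> program dispatch. The real assistant
# would feed `resolve` with text from the speech_recognition library.
import subprocess

COMMANDS = {
    "calculator": "calc",   # assumed Windows program names
    "paint": "mspaint",
    "notepad": "notepad",
}

def resolve(phrase):
    """Return the program to launch for a recognized phrase, or None."""
    for keyword, program in COMMANDS.items():
        if keyword in phrase.lower():
            return program
    return None

def launch(phrase):
    program = resolve(phrase)
    if program is not None:
        subprocess.Popen([program])  # hand the command to the OS
    return program

print(resolve("open the calculator please"))  # calc
```

Keeping the phrase-to-program mapping separate from the launching step makes it easy to test the dispatch logic without actually opening any windows.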
carlos-cgs
1,889,018
One Byte Explainer: Homogeneous coordinates
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-14T20:21:12
https://dev.to/efpage/one-byte-explainer-homogeneous-coordinates-4cbj
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer To display a 3D scene on a 2D screen, each point needs to be projected. This can involve multiple steps. Homogeneous coordinates, introduced by A. F. Möbius in 1827, provide a way to combine the whole transformation pipeline into a single matrix operation.
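For example, a 3D translation by $(t_x, t_y, t_z)$ — which is not a linear map in ordinary coordinates — becomes a single matrix product once each point carries a fourth homogeneous coordinate:

```latex
\begin{pmatrix} x' \\ y' \\ z' \\ 1 \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
=
\begin{pmatrix} x + t_x \\ y + t_y \\ z + t_z \\ 1 \end{pmatrix}
```

Since rotations, scaling, and perspective projection can all be written as 4×4 matrices of this kind, the whole model–view–projection pipeline collapses into one precomputed matrix applied per point.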
efpage
1,889,010
Object Oriented Programming (OOP) Made Simple!
Object-Oriented Programming (OOP) is a way of writing computer programs that focuses on creating...
0
2024-06-14T20:21:02
https://dev.to/jaid28/object-oriented-programming-oop-made-simple-4k9f
oop, java
Object-Oriented Programming (OOP) is a way of writing computer programs that focuses on creating objects that contain both data (attributes) and functionality (methods). Example: **Car Object** Imagine we want to model a car in a computer program using OOP principles. 1. **Class Definition:** - A class is like a blueprint or template for creating objects. For our car example, the class defines what a car is and what it can do.

```python
class Car:
    def __init__(self, make, model, year):
        self.make = make
        self.model = model
        self.year = year
        self.odometer = 0  # Initial odometer reading

    def drive(self, distance):
        self.odometer += distance
        print(f"The {self.year} {self.make} {self.model} has driven {distance} miles.")
```

- In this example, Car is the class that represents a car. - The __init__ method (constructor) initializes a new car object with attributes like make, model, year, and odometer. - The drive method simulates driving the car and updates the odometer. 2. **Creating Objects (Instances):** - An object is an instance of a class. We can create multiple cars (objects) based on our Car class.

```python
# Create instances (objects) of the Car class
my_car = Car("Toyota", "Camry", 2022)
your_car = Car("Honda", "Accord", 2020)

# Use the objects (instances)
my_car.drive(100)
your_car.drive(50)
```

- my_car and your_car are objects (instances) of the Car class. - We can use the drive method on each car object to simulate driving and update their odometers. 3. **Encapsulation:** - Encapsulation is the mechanism of bundling data together with the methods that operate on it, while hiding the implementation details. 4. **Inheritance and Polymorphism:** - Inheritance allows us to create a new class based on an existing class, inheriting its attributes and methods. For example, we could create an ElectricCar class that inherits from Car and adds additional features like charge_battery. - Polymorphism allows objects to be treated as instances of their parent class. 
For example, both Car and ElectricCar objects can be treated as Car objects when a method like drive is called. **Summary:** - A class is a blueprint for creating objects. - An object is an instance of a class. - Attributes are data stored in objects (e.g., make, model). - Methods are functions that operate on objects (e.g., drive). **I'll explain more details about inheritance and polymorphism in the next article.** If you have any doubts, please write them in the comments and give me feedback! Thanks!
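As a sketch of the ElectricCar idea mentioned above — building on the article's Car class (print omitted for brevity); the body of charge_battery and the battery bookkeeping are illustrative assumptions:

```python
# Inheritance: ElectricCar reuses Car's attributes and drive() logic.
class Car:
    def __init__(self, make, model, year):
        self.make = make
        self.model = model
        self.year = year
        self.odometer = 0

    def drive(self, distance):
        self.odometer += distance


class ElectricCar(Car):
    def __init__(self, make, model, year, battery_kwh):
        super().__init__(make, model, year)  # inherit Car's attributes
        self.battery_kwh = battery_kwh
        self.charge = 100  # percent

    def charge_battery(self):
        self.charge = 100

    def drive(self, distance):
        super().drive(distance)  # reuse Car's odometer logic
        self.charge = max(0, self.charge - distance // 4)  # toy drain model


# Polymorphism: both objects can be driven through the same Car interface.
cars = [Car("Toyota", "Camry", 2022), ElectricCar("Tesla", "Model 3", 2023, 75)]
for car in cars:
    car.drive(100)
print([car.odometer for car in cars])  # [100, 100]
```

The loop at the end is the polymorphism point: it never needs to know which cars are electric, yet the overridden drive() still drains the battery.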
jaid28
1,888,956
Deploying NextJS apps with PipeOps
It's Day 2 of Pipeops HackOps 1.0 and participants are building at full speed 🚀. Just look at the...
0
2024-06-14T20:14:06
https://dev.to/orunto/pipeops-with-nextjs-144l
nextjs, pipeops, hackathon, pnpm
It's Day 2 of [**PipeOps HackOps 1.0**](https://pipeops.io/hackathon) and participants are building at full speed 🚀. Just look at the [**leaderboard**](https://pipeops.io/hackathon/leaderboard) 😮‍💨 However, PipeOps is a new platform, and as such the builders are still trying to find their footing with it. In light of that, here's a quick guide to deploying NextJS apps on PipeOps - Start a new project and select Web Project as the project type ![Create a new project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fia3umh7tjocktzct5oe.png) ![Select Web Project as project type](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1dzmvd4745w8ymwjqvrk.png) - Select your preferred git provider, the account/organization your project repo is on, and your project repo ![Select your preferred git provider, the account/organization your project repo is on and your project repo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wdf993z5ip7lcmj5f8vf.png) - Hit proceed on the project summary - In the project build settings, make sure Framework is set to NextJS and the build method is set to Nixpack ![Framework is set to NextJS build method is set to Nixpack](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rjuffsrcqskjepluph8e.png) - Hit 'Deploy project' and watch the magic happen - Your NextJS app is now live! ![The NextJS project is now live](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5w417iifwv60jqlfropg.png) <br> ## Troubleshooting ### Error 503 One of the more common errors participants have run into when deploying NextJS apps is the `503 Error` with their live links. This issue has been addressed by the team, but here's a rundown in case you missed it 👇 In your project's build settings, change the build method to Heroku BuildPack and that should settle it. 
![Build Settings with build method set to Heroku BuildPack](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c2k9bel1gv6dflgtjf1k.png) ### pnpm If you're a space-saving weirdo (like me 🤡) who uses pnpm, you need to make sure the following is in order - Set the Lifecycle command to `pnpm build` ![Lifecycle command in build settings set to pnpm build](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6qfk8813gfygxn6p0abj.png) - When using Heroku BuildPack, specify your pnpm version by running the following commands in your local environment and committing the resulting change to your `package.json` file before building
```
corepack enable
corepack use pnpm@*
```
**Note:** The terminal must be running as administrator ### Image optimization If you're getting `Error 404` or `Error 502` when trying to load images, you probably have an issue with image optimization, which can easily be resolved by installing _sharp_
```
npm i sharp
pnpm add sharp
```
Thanks for reading, happy building! See you at the finals 🫵
orunto
1,888,955
5 Captivating Coding Challenges to Elevate Your Front-End Skills 🚀
The article is about a collection of five captivating coding challenges from the LabEx platform, designed to help front-end developers elevate their skills in CSS, React, and responsive layout design. The challenges cover a wide range of topics, including creating a visually appealing business card, building a React-powered drag-and-drop puzzle game, implementing a responsive Flexbox-based card layout, designing a responsive dice layout, and creating a dynamic sticky tab bar. Each challenge is presented with a detailed description, a link to the corresponding LabEx lab, and an engaging, emoji-infused narrative to inspire and guide the reader. Whether you're a seasoned pro or a budding coder, this article promises to ignite your passion for front-end development and help you showcase your expertise through these captivating coding adventures.
27,689
2024-06-14T20:09:53
https://dev.to/labex/5-captivating-coding-challenges-to-elevate-your-front-end-skills-f7m
css, coding, programming, tutorial
Unlock your full potential as a front-end developer with this curated collection of five engaging coding challenges from the LabEx platform. Whether you're a seasoned pro or a budding coder, these hands-on projects will push the boundaries of your CSS, React, and layout skills, leaving you with a polished portfolio and a newfound confidence in your abilities. 🧑‍💻 ## 1. Create Visually Appealing Business Card (Challenge) CSS styling is the backbone of front-end development, and this challenge invites you to showcase your expertise by helping Jackson design a stunning user business card. [Dive in and get creative!](https://labex.io/labs/300114) ## 2. Building a React Drag-and-Drop Puzzle Game (Challenge) Embark on a captivating journey as you build a React-powered drag-and-drop puzzle game. This project is perfect for beginners looking to master React components, state management, and user interaction handling. 🧩 [Let the puzzling begin!](https://labex.io/labs/299486) ## 3. Responsive Flexible Card Layout (Challenge) Flex your CSS Flexbox skills and create a responsive card layout that adapts seamlessly to different screen sizes and orientations. This challenge will push your front-end prowess to new heights. 📱 [Unlock the power of Flexbox!](https://labex.io/labs/300066) ## 4. Responsive Dice Layout with Flexbox (Challenge) The Flex layout in CSS3 has become the go-to choice for front-end page layout, and this test will challenge you to implement the classic dice layout effect using the power of Flexbox. 🎲 [Roll the dice and showcase your skills!](https://labex.io/labs/300061) ## 5. Implement Dynamic Sticky Tab Bar (Challenge) Dive into the world of dynamic user interfaces and tackle the task of creating a fixed top bar for a course website. The bar should remain in its original position until the user's scrolling height exceeds its height, at which point it should be fixed at the top of the page. 
📚 [Stick the landing with this challenge!](https://labex.io/labs/299844) Embark on these captivating coding adventures and elevate your front-end prowess to new heights. 🚀 Unleash your creativity, sharpen your skills, and showcase your expertise with these LabEx challenges. Happy coding! --- ## Want to learn more? - 🚀 Practice thousands of programming labs on [LabEx](https://labex.io) - 🌳 Learn the latest programming skills on [LabEx Skill Trees](https://labex.io/skilltrees/css) - 📖 Read more programming tutorials on [LabEx Tutorials](https://labex.io/tutorials/category/css) Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄
labby
1,887,426
🙅 Why I don't use AI as my copilot 🤖
Jesus, take the wheel. 🚗 And Github Copilot, take the IDE. 💻 Github says, 92% of US devs are using...
0
2024-06-14T20:08:54
https://dev.to/middleware/why-i-dont-use-ai-as-my-copilot-47k3
webdev, javascript, programming, ai
**Jesus, take the wheel. 🚗 And Github Copilot, take the IDE. 💻** **[Github says](https://github.blog/2023-06-13-survey-reveals-ais-impact-on-the-developer-experience/)**, 92% of US devs are using Copilot. What. Seriously?! When did you hear 92% of any demographic using a singular thing? Unless of course... you say **100%** of all people who ever existed, consumed di-hydrogen monoxide. _(There's exactly one way this line gets dark. Don't go there. 👀)_ Join me on a quick journey as I talk about: - [What my fears actually are.](#what-im-actually-worried-about) - [How other experienced devs feel about this.](#from-the-experienced-devs-point-of-view) - [Maybe I'm worried about nothing?](#lets-take-a-step-back-for-a-moment) - [And how WE are responsible for how we use LLMs!](#where-the-heck-does-llmai-fit-in) ### 🔥 When the machines took over the world in 2024 After a quick Google search, it seems like most devs are using AI assistance in writing code. **I’d be lying** if I said I haven’t used AI to write code at all. Of course, I have. I don’t live under a rock. I've seen devs get oddly comfortable with the idea of sharing their code related data with third party cloud services, which are often not SOC2 (or something similar) certified, and make vague unprovable claims of privacy at best. Things like Github Copilot (and Copilot chat), Bito.ai, and several other AI code extensions on the VS Code marketplace have more than 30 million installs. Crazy! 🤯 And then there's me. **I’ve not made AI assistance** a part of my regular code workflow. A couple of times I’ve taken help from GPT to get some boilerplate written, sure. But those times are an exception. Things like Github Copilot, or any kind of code review, code generation tool, PR creation, or commit-assistance, isn’t a part of my IDE or CLI flow. Maybe it’ll change with time. We’ll see. 
> ## “But… why?” ![but why](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bpc1fc4t5a10j0i5r3wj.gif) ### 😟 What I'm ACTUALLY worried about The answer is simple. 👇 #### 1. I fear my programming skills will get rusty I am concerned that the way I write and read code will suffer if I get too used to AI assistance. - I’m concerned I’ll begin to **overlook** flaws in code that I can catch otherwise. - That I’ll start to take AI generated code for **granted**. - And looking up APIs, built-in methods, or other documentation will start to feel like a **chore**. I fear… that I’ll start slipping. #### 2. I'm not comfortable enough with sharing all of my code with a third party service Companies can be damn smart about inferring things from the data you provide. Sometimes they'll know about things that [your family won't know about](https://www.forbes.com/sites/kashmirhill/2012/02/16/how-target-figured-out-a-teen-girl-was-pregnant-before-her-father-did/). The idea that sensitive business logic may get leaked to a third party service, which may eventually be used to make inferences I'm not comfortable with or just... straight-up leak? I mean, software gets hacked all the time. I think I'm being very reasonable thinking that I don't want to expose something as sensitive as code in an unrestricted manner to a third party company. Even if that company is Microsoft, [because even they f*ck up](https://www.wired.com/story/total-recall-windows-recall-ai/). ### 👀 From the Experienced Devs' point of view This isn’t a take that is unique to me either! #### 1. More-experienced devs tend to **not want to lean** on “crutches” to write their code. I’ve even had the pleasure to work with senior devs who didn’t want to use colored themes on their IDEs because they thought it’ll hurt their ability to scan, read, or debug code! (that was a bit much for me too) After all, “programming skills” is a lot **more than just writing code**. #### 2. 
Older devs have seen all kinds of **software get hacked**, data leaked, etc. I mean, when [haveibeenpwned.com](https://haveibeenpwned.com/) sends you emails about your credentials, emails, and other data leaks every year for over 10 years... MANY TIMES from [billion](https://en.wikipedia.org/wiki/2017_Equifax_data_breach) [dollar](https://www.news18.com/india/indias-biggest-data-leak-so-far-covid-19-test-info-of-81-5cr-citizens-with-icmr-up-for-sale-exclusive-8637743.html) [corporations](https://www.nytimes.com/2017/10/03/technology/yahoo-hack-3-billion-users.html)... When you hear **"When you're not paying for the product, you are the product"** for the bazillionth time, which is then backed by yet another company selling that data to some third party... Yeah... it gets tiring. And it gets real easy to just disconnect as many wires as you can and go back to the stone age. > ![old matt damon](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/03pmbpqix6e199ewey5t.gif) > “Older devs”? Am I… Am I getting old? > Nah, I’m still 22 and this is 2016 or something… right? Right?? **Btw, the answer to the question in the title is 👆 this.** Congrats! The post is over! Over to the next one… Buuuuut… if you want to continue reading… ![joey theres more](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jk0ywd7yjqukeokd0t2d.gif) ### 🚶 Let's take a step back for a moment... #### I think my fears may be exaggerated. > _Let's keep the whole data privacy angle aside for now, because that's a whole other topic on its own that I feel rather passionately about._ I personally don’t have enough data to empirically say that using AI assistance will bring about the doom I fear… That it’ll downgrade me from what I am today to an [SDE1](https://dev.to/middleware/going-from-sde1-to-sde2-and-beyond-what-it-actually-takes-1cld). But I’ve seen patterns. - I’ve seen AI-generated sub-par quality code go through code reviews and end up on the `main` branch. 
- I’ve seen library functions being used without properly understanding what they do, or what alternatives exist, just because an LLM generated them.
- I’ve even seen code generated to solve a problem for which a utility function already existed in the codebase, but wasn’t used because knowing this utility existed was a lot more work than asking GPT to generate it for you.

### 💎 ~~Diamonds are~~ Bad code is forever

**“Wait a damn minute… I’ve seen this movie before!”**

![dejavu](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fvun4635neb2dflv5yto.gif)

- LLMs are a pretty new thing… but 💩 code has been **eternal!**
- Every. Single. Dev. Ever. Has used library functions without fully **understanding** them or looking at alternatives. You and I are both guilty of that. (What? You thought `Array.prototype.sort` was the best way to sort anything? It’s just sufficient in most cases!)
- A piece of logic gets reinvented (re-copy-pasted) all the damn time! Just that before it used to be from **StackOverflow**, now it’s from ChatGPT.

### 🤷 So, what’s the big fuss about?

> "Will using ChatGPT make me a bad programmer?"

I think, no. The takeaway is that you just need to care about what you build. Take **pride** in what you **build**.

### 🤖 Where the heck does LLM/AI fit in?

**LLMs are not inherently evil.** In fact, they can be pretty damn useful if used responsibly:

- **Quality Code:** An LLM might handle edge cases that a less diligent developer wouldn’t consider.
- **Comprehensive Tests:** LLMs might write tests that are more comprehensive than what some devs would produce.
- **Comprehensive Types:** It might even write types more "completely" than an average dev would write on their own, or even have the skill to write.

However, the responsibility lies with the developer to ensure that the code output is guarded and well-monitored. Someone who doesn’t care would have done a shoddy job at any point in history. The presence of LLMs doesn’t change that.
### 😎 The art of actually giving a f*ck

There's a lot of devs out there who don't care. But you’re no such dev. **You DO care.** Else you wouldn’t be here on dev.to learning from people’s experiences.

I recently wrote about what new devs should care about to grow in their career. It’s a LOT MORE than just code.

{% embed https://dev.to/middleware/going-from-sde1-to-sde2-and-beyond-what-it-actually-takes-1cld %}

**Maybe I’ll introduce some AI in my VSCode.** I think it’s a matter of when, instead of **if**.

What’s more important is… as long as I care about making sure my programming output is **readable, performant, high quality, and easily reviewable**, I think I’ll be fine, and so will you.

---

### 👇 P.S.

If you want an example of something I care deeply about, and has both great code 💪 and… less than excellent code 🤣, take a look at our **open-source** repo!

It’s something that lets you spot how long it takes for you to deliver your code, how many times PRs get stuck in a review loop, and just generally how great your team ships code.

{% embed https://github.com/middlewarehq/middleware %}
jayantbh
1,888,954
How to setup Stripe subscriptions with Django and React
what we will be building Recently I added a subscription model to one of my projects,...
0
2024-06-14T20:06:46
https://dev.to/ato_deshi/how-to-setup-stripe-subscriptions-with-django-and-react-3p8p
webdev, react, python, stripe
## What we will be building

Recently I added a subscription model to one of my projects, https://www.cvforge.app/. In this app users can create and download resumes, taking advantage of premade examples to get started. Some of these features are exclusively available to Pro members:

![Upgrade button CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9pt153433v8l6n6qpxx5.png)

New users are welcomed by an ‘upgrade’ button next to their user button. Clicking this will redirect them to a Stripe Checkout page, like so:

![Checkout page CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3aj8c2usk5rwlhtymvq.png)

After a user has subscribed they will have access to all features.

## Getting started

For this tutorial we will be using Django and React with the Stripe Python SDK, but nothing here is specific to those languages or frameworks, so feel free to use whatever you prefer.

## Setting up Stripe

1. Create an account over at https://stripe.com/
2. Install the stripe-cli by following the official guide https://docs.stripe.com/stripe-cli
3. Enable test mode.
4. Get your test secret key from the dashboard; it should start with `sk_test_`.
5. Next we create a product in Stripe; make sure it is set to recurring since we are working with subscriptions.
6. Finally we retrieve the price_id by clicking the 3 dots and selecting ‘Copy price ID’:

![Stripe dashboard CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/covurux1hod900vzx8d3.png)

## Creating a Checkout Session

If you do not have a Django project set up, please check out Django’s guide on getting started https://docs.djangoproject.com/en/5.0/intro/. From here on I will assume you have a running Django project.

Let’s start by installing the Stripe SDK.

`pip install stripe`

Next we will create a new app to handle everything payment related.
`python manage.py startapp payments`

Here we will create the appropriate models:

```python
from django.db import models
from django.contrib.auth.models import User


class Customer(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='customer')
    source_id = models.CharField(max_length=255, blank=True, null=True)


class Subscription(models.Model):
    customer = models.ForeignKey(Customer, on_delete=models.CASCADE, related_name='subscriptions')
    source_id = models.CharField(max_length=255, blank=True, null=True)
    currency = models.CharField(max_length=255, blank=True, null=True)
    amount = models.FloatField(blank=True, null=True)
    started_at = models.DateTimeField(blank=True, null=True)
    ended_at = models.DateTimeField(blank=True, null=True)
    canceled_at = models.DateTimeField(blank=True, null=True)
    status = models.CharField(max_length=255, blank=True, null=True)
```

Both of these models mirror objects in Stripe’s system; `source_id` holds the id of the corresponding Stripe object. We use these models to link a subscription to a user in our system.
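One detail worth noting about these fields: Stripe reports amounts in the smallest currency unit (e.g. cents) and dates as Unix timestamps, so the raw values need converting before they fit the `amount` and datetime fields above. A quick plain-Python sketch of those two conversions (the helper names are mine, not part of the Stripe SDK):

```python
from datetime import datetime, timezone


def stripe_amount_to_units(amount_cents):
    # Stripe reports amounts in the smallest currency unit (e.g. cents)
    return amount_cents / 100


def stripe_ts_to_datetime(ts):
    # Stripe reports dates as Unix epoch seconds
    return datetime.fromtimestamp(ts, tz=timezone.utc)


print(stripe_amount_to_units(1200))              # 12.0
print(stripe_ts_to_datetime(1704067200).date())  # 2024-01-01
```

The webhook handlers further down perform exactly these two conversions when populating `amount` and `started_at`.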
Next we will create a checkout view:

```python
from rest_framework.decorators import api_view
from rest_framework.response import Response
from rest_framework import status
from django.conf import settings
import stripe

from .models import Customer


@api_view(['POST'])
def checkout_session_view(request):
    try:
        stripe.api_key = '<stripe-api-key>'

        # check if the user already has a customer associated, otherwise create it
        if not hasattr(request.user, 'customer'):
            stripe_customer = stripe.Customer.create(email=request.user.email)
            customer = Customer.objects.create(user=request.user, source_id=stripe_customer.id)
            customer.save()

        current_subscription = request.user.customer.subscriptions.last()

        # our app does not support multiple subscriptions per user
        if current_subscription and current_subscription.status == 'active':
            return Response({'error': 'You already have an active subscription'}, status=status.HTTP_400_BAD_REQUEST)

        session = stripe.checkout.Session.create(
            line_items=[
                {
                    'price': '<stripe-price-id>',
                    'quantity': 1,
                },
            ],
            mode='subscription',
            success_url=settings.CLIENT_URL + '/checkout/success',
            cancel_url=settings.CLIENT_URL + '/checkout/canceled',
            automatic_tax={'enabled': True},
            customer=request.user.customer.source_id,
            customer_update={
                'address': 'auto'  # let stripe handle the address
            }
        )
    except Exception as e:
        return Response({'error': str(e)}, status=status.HTTP_400_BAD_REQUEST)

    return Response({'url': session.url}, status=status.HTTP_201_CREATED)
```

As you can see I respond with a URL. You can of course also respond with a redirect directly, but I like handling all of this in the frontend.

Finally we will add this endpoint to our urls:

```python
from django.urls import path

from .views import checkout_session_view

urlpatterns = [
    path('checkout/session/', checkout_session_view, name='checkout-session')
]
```

Now we will create the aforementioned upgrade button.
For this I am using React with react-query in combination with Orval, but you can of course use anything you would like for this:

```react
function UpgradeButton() {
  const { mutate, isPending } = useCheckoutSessionCreate({
    mutation: {
      onSuccess: (data) => {
        window.location.href = data.url;
      },
      onError: (error) => {
        // handle error..
      },
    },
  });

  function handleClick() {
    mutate({ data: {} })
  }

  return (
    <Button onClick={handleClick} disabled={isPending}>
      <CrownIcon className="w-4 h-4 mr-2" />
      Upgrade
    </Button>
  )
}
```

_Note: don’t forget to add a success and canceled page for after checkout_

## Handling Webhooks

We will use Stripe’s webhooks to keep our models in sync with Stripe. Retrieve the Stripe webhook secret from the Stripe dashboard.

Add a second endpoint to handle the Stripe webhooks:

```python
@api_view(['POST'])
def checkout_webhook_view(request):
    stripe.api_key = '<stripe-api-key>'
    payload = request.body
    sig_header = request.META['HTTP_STRIPE_SIGNATURE']
    endpoint_secret = '<stripe-webhook-secret>'

    # Verify that the request comes from stripe
    try:
        event = stripe.Webhook.construct_event(payload, sig_header, endpoint_secret)
    except ValueError:
        return Response(status=status.HTTP_400_BAD_REQUEST)
    except stripe.error.SignatureVerificationError:
        return Response(status=status.HTTP_400_BAD_REQUEST)

    if event['type'] == 'customer.subscription.created':
        handle_subscription_created(event)
    elif event['type'] in ['customer.subscription.updated', 'customer.subscription.deleted']:
        handle_subscription_updated(event)

    return Response(status=status.HTTP_200_OK)
```

For handling the actual events I like to create a separate services.py file, where I have two handlers: one for creating and one for updating or deleting:

```python
from datetime import datetime

import stripe

from .models import Customer, Subscription


def handle_subscription_created(event):
    stripe_subscription = stripe.Subscription.retrieve(event['data']['object']['id'])
    customer = Customer.objects.get(source_id=stripe_subscription.customer)

    subscription = Subscription.objects.create(
        customer=customer,
        source_id=stripe_subscription.id,
        status=stripe_subscription.status,
        currency=stripe_subscription.currency,
        amount=stripe_subscription.plan.amount / 100,
        started_at=datetime.fromtimestamp(stripe_subscription.created),
    )
    subscription.save()

    return subscription


def handle_subscription_updated(event):
    stripe_subscription = stripe.Subscription.retrieve(event['data']['object']['id'])
    subscription = Subscription.objects.get(source_id=stripe_subscription.id)

    subscription.status = stripe_subscription.status
    if stripe_subscription.canceled_at:
        subscription.canceled_at = datetime.fromtimestamp(stripe_subscription.canceled_at)
    if stripe_subscription.ended_at:
        subscription.ended_at = datetime.fromtimestamp(stripe_subscription.ended_at)
    subscription.save()

    return subscription
```

When a user starts a subscription by going through checkout we will receive a `customer.subscription.created` event, which we will use to create the subscription in our system. When a user cancels or renews their subscription we will receive a `customer.subscription.updated` event, which we will use to update that subscription. And finally, when the billing period of an expired subscription ends we will receive a `customer.subscription.deleted` event, at which point the subscription will no longer be set to ‘active’.

Make sure to add the new view to your urls:

```python
from django.urls import path

from .views import checkout_session_view, checkout_webhook_view

urlpatterns = [
    path('checkout/session/', checkout_session_view, name='checkout-session'),
    path('checkout/webhook', checkout_webhook_view, name='checkout-webhook'),
]
```

And update your settings.py to stop append slash from being enforced:

`APPEND_SLASH = False`

You can now test this by enabling test mode and starting the stripe-cli, making sure it points to your webhook url.
`stripe listen --forward-to localhost:8000/api/checkout/webhook`

Now go through your checkout flow in the frontend. You can use the test credit card `4242 4242 4242 4242` with a valid expiry date in the future. Your endpoint should pick up on the webhook events and update your models accordingly. Consider creating an `admin.py` to easily monitor your models.

## Updating the frontend

Finally we will update our frontend to hide and disable certain features if the user’s subscription status is not active. Keep in mind to always provide clear feedback. For example, instead of disabling a button, have it trigger a dialog communicating that it is a paid feature.

```react
function DownloadButton() {
  const { subscription } = useUser(); // custom hook to retrieve user details

  function handleDownload() {
    // do something...
  }

  function handleCheckout() {
    // do something...
  }

  if (subscription.status === 'active') {
    return (
      <Button onClick={handleDownload}>
        <DownloadIcon className="w-4 h-4 mr-2" />
        Download
      </Button>
    )
  }

  return (
    <AlertDialog>
      <AlertDialogTrigger asChild>
        <Button>
          <DownloadIcon className="w-4 h-4 mr-2" />
          Download
        </Button>
      </AlertDialogTrigger>
      <AlertDialogContent>
        <AlertDialogHeader>
          <AlertDialogTitle>This is a paid feature</AlertDialogTitle>
          <AlertDialogDescription>
            In order to use this feature, you need to upgrade to our Pro plan.
          </AlertDialogDescription>
        </AlertDialogHeader>
        <AlertDialogFooter>
          <AlertDialogCancel>
            Close
          </AlertDialogCancel>
          <AlertDialogAction onClick={handleCheckout}>
            <CrownIcon className="w-4 h-4 mr-2" />
            Upgrade Now
          </AlertDialogAction>
        </AlertDialogFooter>
      </AlertDialogContent>
    </AlertDialog>
  )
}
```

As you can see I render the same download button, but if the user does not have an active subscription it triggers a dialog instead.

## Letting users manage their subscriptions

As a user you want to be able to cancel or renew your subscription and download your invoices. For this Stripe has a prebuilt portal we can link to.
![Pro plan CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f1c3y5ayhuauroihftxo.png)

Let’s create another endpoint for this:

```python
from django.http import JsonResponse


@api_view(['POST'])
def customer_portal_view(request):
    stripe.api_key = '<stripe-api-key>'

    try:
        session = stripe.billing_portal.Session.create(
            customer=request.user.customer.source_id,
            return_url=settings.CLIENT_URL + '/settings',
        )
    except Exception as e:
        return JsonResponse({'error': str(e)}, status=400)

    return JsonResponse({'url': session.url}, status=201)
```

Make sure to enable the portal in your Stripe settings https://dashboard.stripe.com/settings/billing/portal

Again we respond with a URL and let the frontend handle it, but of course you can also simply respond with a redirect here. We make use of the `customer.source_id` that we create when a checkout session is created.

We update our urls once more:

```python
from django.urls import path

from .views import checkout_session_view, checkout_webhook_view, customer_portal_view

urlpatterns = [
    path('checkout/session/', checkout_session_view, name='checkout-session'),
    path('checkout/webhook', checkout_webhook_view, name='checkout-webhook'),
    path('customer/portal/', customer_portal_view, name='customer-portal'),
]
```

In the frontend we create a card as shown above, with a button which will redirect the user. Make sure to only render this for users that have an active subscription.

```react
function CustomerPortal() {
  const { subscription } = useUser();
  const { mutate, isPending } = useCustomerPortalCreate({
    mutation: {
      onSuccess(data) {
        // redirect page
        window.location.href = data.url;
      },
      onError(error) {
        // handle error...
      },
    },
  });

  function handlePortal() {
    mutate({ data: {} })
  }

  if (subscription.status !== 'active') {
    return null;
  }

  return (
    <Card>
      <CardHeader>
        <CardTitle>
          <CrownIcon className="w-5 h-5 mr-2" />
          plan
        </CardTitle>
        <CardDescription>
          You will be redirected to our customer portal where you can cancel or renew your subscription and view your invoices
        </CardDescription>
      </CardHeader>
      <CardFooter>
        <Button onClick={handlePortal} disabled={isPending} variant="secondary">
          Manage Subscription
          <SquareArrowOutUpRightIcon className="w-4 h-4 ml-2" />
        </Button>
      </CardFooter>
    </Card>
  )
}
```

## Moving to production

1. Go to your product page in the Stripe dashboard and copy over your product to live mode (see screenshot)
2. Next disable test mode
3. Retrieve your production secret key and webhook secret
4. Find your product and copy the price_id again like we did for test mode

![Stripe products CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bx34jovoqm1hhzcij5ok.png)

Make sure your application takes advantage of these new values in your production environment. Congratulations, you now have a working subscription in your project!

## Final notes

You are now ready to go, but there are some potential edge cases you want to consider handling.

## Canceled subscriptions that are still active

When a user cancels their subscription they still have access till the end of the subscription period; you might want to inform the user of this. Stripe provides us with some additional values, which we can add to our models.

First we update our models:

```python
class Subscription(models.Model):
    # ...
    cancel_at_period_end = models.BooleanField(default=False)
    cancel_at = models.DateTimeField(blank=True, null=True)
```

`cancel_at_period_end` indicates that our subscription will expire at the end of the billing period, and `cancel_at` indicates at what date this will happen.
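To make the period-end behavior concrete: while a cancellation is pending, Stripe keeps the subscription's `status` as `active` and only sets `cancel_at_period_end` and `cancel_at`; the status itself flips once the `customer.subscription.deleted` event arrives at period end. A small plain-Python sketch of the resulting UI decision (function name is hypothetical, not part of the models or components in this post):

```python
from datetime import datetime, timezone


def subscription_banner(status, cancel_at_period_end, cancel_at):
    """Decide what, if anything, to tell the user about a pending cancellation."""
    if status != 'active':
        return None  # no active subscription: the UI shows the upgrade button instead
    if cancel_at_period_end and cancel_at:
        # pending cancellation: still active, but ending at cancel_at
        return f"Your subscription will end on {cancel_at:%B %d, %Y}"
    return None  # active with no pending cancellation


print(subscription_banner('active', True, datetime(2024, 7, 1, tzinfo=timezone.utc)))
# Your subscription will end on July 01, 2024
```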
In our services we should also handle this to ensure these values are updated accordingly:

```python
def handle_subscription_updated(event):
    # ...
    subscription.cancel_at_period_end = stripe_subscription.cancel_at_period_end
    subscription.cancel_at = datetime.fromtimestamp(
        stripe_subscription.cancel_at) if stripe_subscription.cancel_at else None
    # ...
```

Next, in our frontend we can handle it as such, where we render an alert if `cancel_at_period_end` is set:

```react
subscription.cancel_at_period_end && (
  <Alert>
    <AlertCircleIcon className="h-4 w-4" />
    <AlertTitle>Your subscription has been canceled</AlertTitle>
    <AlertDescription>
      You can continue using our services till the end of your current billing period.
      Your subscription will end on {format(new Date(subscription.cancel_at), 'MMMM dd, yyyy')}
    </AlertDescription>
  </Alert>
)
```

Which should look something like this:

![Your subscription was canceled CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/brpracifrwh1favev2by.png)

## Verified emails

It’s important that users have verified email addresses when they subscribe to your service. Depending on how your auth is implemented you might have to check this before allowing a user to create a checkout session. Keep this in mind!

---

That is it! Thank you so much for following along with this guide, I hope you found it helpful. If you have any questions reach out to me on [X/Twitter](https://x.com/atodeshi)

To see this example in action, or if you are simply in need of a resume, go check out my latest project over at [CV Forge](https://www.cvforge.app/)!
ato_deshi
1,888,953
How to setup Stripe subscriptions with Django and React
what we will be building Recently I added a subscription model to one of my projects,...
0
2024-06-14T20:06:46
https://dev.to/ato_deshi/how-to-setup-stripe-subscriptions-with-django-and-react-fe2
webdev, react, python, stripe
## what we will be building Recently I added a subscription model to one of my projects, https://www.cvforge.app/, in this app users can create and download resumes, taking advantage of premade examples to help you get started. Some of these features are exclusively available to Pro members: ![Upgrade button CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9pt153433v8l6n6qpxx5.png) New users are welcomed by an ‘upgrade’ button next to their user button. Clicking this will redirect them to a Stripe Checkout page, like so: ![Checkout page CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3aj8c2usk5rwlhtymvq.png) After a user has subscribed they will have access to all features. ## Getting started For this tutorial we will be using Django and React with the stripe Python sdk, but nothing here is specific to those languages or frameworks so feel free to use what ever you prefer. ## Setting up Stripe 1. Create an account over at https://stripe.com/ 2. Install the stripe-cli by following the official guide https://docs.stripe.com/stripe-cli 3. Enable test mode. 4. Get your test secret key from the dashboard, it should start with `sk_test_`. 5. Next we create a product in Stripe, make sure it is set to recurring since we are working with subscriptions. 6. Finally we retrieve the price_id by clicking the 3 dots and selecting ‘Copy price ID’: ![Stripe dashboard CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/covurux1hod900vzx8d3.png) ## Creating a Checkout Session If you do not have a Django project setup please checkout Django’s guide on getting started https://docs.djangoproject.com/en/5.0/intro/, from here on I will assume you have a running django project. Let’s start by installing the stripe sdk. `pip install stripe` Next we will create a new app to handle everything payment related. 
`python manage.py startapp payments` In here we will create the appropriate models: ```python class Customer(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE, related_name='customer') source_id = models.CharField(max_length=255, blank=True, null=True) class Subscription(models.Model): customer = models.ForeignKey(Customer, on_delete=models.CASCADE, related_name='subscriptions') source_id = models.CharField(max_length=255, blank=True, null=True) currency = models.CharField(max_length=255, blank=True, null=True) amount = models.FloatField(blank=True, null=True) started_at = models.DateTimeField(blank=True, null=True) ended_at = models.DateTimeField(blank=True, null=True) canceled_at = models.DateTimeField(blank=True, null=True) status = models.CharField(max_length=255, blank=True, null=True) ``` Both of these models will be modeling models in stripe’s system, source_id here refers to the ids in the corresponding stripe models. We use these models to link a subscription to a user in our system. 
Next we will create a checkout view: ```python from rest_framework.decorators import api_view from rest_framework.response import Response from rest_framework import status from django.conf import settings import stripe @api_view(['POST']) def checkout_session_view(request): try: stripe.api_key = '<stripe-api-key>' # check if user already has a customer assosciated, otherwise create it if not hasattr(request.user, 'customer'): stripe_customer = stripe.Customer.create(email=request.user.email) customer = Customer.objects.create(user=request.user, source_id=stripe_customer.id) customer.save() current_subscription = request.user.customer.subscriptions.last() # our app does not support multiple subscriptions per user if current_subscription and current_subscription.status == 'active': return Response({'error': 'You already have an active subscription'}, status=status.HTTP_400_BAD_REQUEST) session = stripe.checkout.Session.create( line_items=[ { 'price': '<stripe-price-id>', 'quantity': 1, }, ], mode='subscription', success_url=settings.CLIENT_URL + '/checkout/success', cancel_url=settings.CLIENT_URL + '/checkout/canceled', automatic_tax={'enabled': True}, customer=request.user.customer.source_id, customer_update={ 'address': 'auto' # let stripe handle the address } ) except Exception as e: return Response({'error': str(e)}, status=status.HTTP_400_BAD_REQUEST) return Response({'url': session.url}, status=status.HTTP_201_CREATED) ``` As you can see I respond with a url, you can of course also respond with a redirect directly, I however like handeling all of this in the frontend. Finally we will add these endpoints to our urls: ```python from django.urls import path from .views import checkout_session_view urlpatterns = [ path('checkout/session/', checkout_session_view, name='checkout-session') ] ``` Now we will create the before mentioned upgrade button. 
For this I am using React with react-query in combination with Orval, but you can of course use anything you would like for this: ```react function UpgradeButton() { const { mutate, isPending }= useCheckoutSessionCreate({ mutation: { onSuccess: (data) => { window.location.href = data.url; }, onError: (error) => { // handle error.. }, }, }); function handleClick() { mutate({ data: {} }) } return ( <Button onClick={handleClick} disabled={isPending}> <CrownIcon className="w-4 h-4 mr-2" /> Upgrade </Button> ) } ``` _Note: don’t forget to add a success and canceled page for after checkout_ ## Handeling Webhooks We will use stripe’s webhooks to keep our models in sync with stripe. Retrieve the stripe webhook secret from the stripe dashboard. Add a second endpoint to handle the stripe webhooks: ```python @api_view(['POST']) def checkout_webhook_view(request): stripe.api_key = '<stripe-api-key>' payload = request.body sig_header = request.META['HTTP_STRIPE_SIGNATURE'] endpoint_secret = 'stripe-webhook-secret' # Verify that the request comes from stripe try: event = stripe.Webhook.construct_event(payload, sig_header, endpoint_secret) except ValueError: return Response(status=status.HTTP_400_BAD_REQUEST) except stripe.error.SignatureVerificationError: return Response(status=status.HTTP_400_BAD_REQUEST) if event['type'] == 'customer.subscription.created': handle_subscription_created(event) elif event['type'] in ['customer.subscription.updated', 'customer.subscription.deleted']: handle_subscription_updated(event) return Response(status=status.HTTP_200_OK) ``` For handeling the actual events I like to create a seperate services.py file, where I have two handlers, one for creating and one for updating or deleting: ```python def handle_subscription_created(event): stripe_subscription = stripe.Subscription.retrieve(event['data']['object']['id']) customer = Customer.objects.get(source_id=stripe_subscription.customer) subscription = Subscription.objects.create( customer=customer, 
source_id=stripe_subscription.id, status=stripe_subscription.status, currency=stripe_subscription.currency, amount=stripe_subscription.plan.amount / 100, started_at=datetime.fromtimestamp(stripe_subscription.created), ) subscription.save() return subscription def handle_subscription_updated(event): stripe_subscription = stripe.Subscription.retrieve(event['data']['object']['id']) subscription = Subscription.objects.get(source_id=stripe_subscription.id) subscription.status = stripe_subscription.status if stripe_subscription.canceled_at: subscription.canceled_at = datetime.fromtimestamp(stripe_subscription.canceled_at) if subscription.ended_at: subscription.ended_at = datetime.fromtimestamp(stripe_subscription.ended_at) subscription.save() return subscription ``` When a user starts a subscription by going through checkout we will receive a `customer.subscription.created` event, which we will use to create the subscription in our system. When a user cancels or renews their subscription we will receive a `customer.subscription.updated` event, which we will use to update that subscription. And finally when the billing period of an expired subscription ends we will receive an `customer.subscription.deleted` event at which point the subscription will no longer be set to ‘active’. Make sure to add the new view to your urls: ```python from django.urls import path from .views import checkout_session_view, checkout_webhook_view urlpatterns = [ path('checkout/session/', checkout_session_view, name='checkout-session'), path('checkout/webhook', checkout_webhook_view, name='checkout-webhook'), ] ``` And update your settings.py to stop append slash from being enforced: `APPEND_SLASH = False` You can now test this by enabling test mode and starting the stripe-cli and make sure it points to your webhook url. 
`stripe listen — forward-to localhost:8000/api/checkout/webhook` Now go through your checkout flow in the frontend, you can use the test credit card `4242 4242 4242 4242` with a valid expired date in the future. Your endpoint should pick up on the webhook events and update your models accordingly. Consider creating an `admin.py` to easily monitor your models. ## Updating the frontend Finally we will update our frontend to hide and disable certain features if the users subscription status is not active. Keep in mind to always provide clear feedback. For example instead of disabling a button have it trigger a dialog to communicate something being a paid feature. ```react function DownloadButton() { const { subscription } = useUser(); // custom hook to retrieve user details function handleDownload() { // do something... } function handleCheckout() { // do something... } if (subscription.status === 'active') { <Button onClick={handleDownload}> <DownloadIcon className="w-4 h-4 mr-2" /> Download </Button> } return ( <AlertDialog> <AlertDialogTrigger asChild> <Button> <DownloadIcon className="w-4 h-4 mr-2" /> Download </Button> </AlertDialogTrigger> <AlertDialogContent> <AlertDialogHeader> <AlertDialogTitle>This is a paid feature</AlertDialogTitle> <AlertDialogDescription> In order to use this feature, you need to upgrade to our Pro plan. </AlertDialogDescription> </AlertDialogHeader> <AlertDialogFooter> <AlertDialogCancel> Close </AlertDialogCancel> <AlertDialogAction onClick={handleCheckout}> <CrownIcon className="w-4 h-4 mr-2" /> Upgrade Now </AlertDialogAction> </AlertDialogFooter> </AlertDialogContent> </AlertDialog> ) } ``` As you can see I render the same download button, but if the user does not have an active subscription it triggers a dialog instead. ## Letting users manage their subscriptions As user you want to cancel or renew your subscription and download your invoices, for this stripe has a prebuild portal we can link to. 
![Pro plan CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f1c3y5ayhuauroihftxo.png)

Let's create another endpoint for this:

```python
def customer_portal(request):
    stripe.api_key = '<stripe-api-key>'

    try:
        session = stripe.billing_portal.Session.create(
            customer=request.user.customer.source_id,
            return_url=settings.CLIENT_URL + '/settings',
        )
    except Exception as e:
        return JsonResponse({'error': str(e)}, status=400)

    return JsonResponse({'url': session.url}, status=201)
```

Make sure to enable the portal in your Stripe settings: https://dashboard.stripe.com/settings/billing/portal

Again we will respond with a URL and let the frontend handle it, but of course you can also simply respond with a redirect here. We make use of the `customer.source_id` that we create when a checkout session is created.

We update our urls once more:

```python
from django.urls import path

from .views import checkout_session_view, checkout_webhook_view, customer_portal_view

urlpatterns = [
    path('checkout/session/', checkout_session_view, name='checkout-session'),
    path('checkout/webhook', checkout_webhook_view, name='checkout-webhook'),
    path('customer/portal/', customer_portal_view, name='customer-portal'),
]
```

In the frontend we create a card as shown above, with a button which will redirect the user. Make sure to only render this for users that have an active subscription.

```react
function CustomerPortal() {
  const { subscription } = useUser();

  const { mutate, isPending } = useCustomerPortalCreate({
    mutation: {
      onSuccess(data) {
        // redirect page
        window.location.href = data.url;
      },
      onError(error) {
        // handle error...
      },
    },
  });

  function handlePortal() {
    mutate({ data: {} });
  }

  if (subscription.status !== 'active') {
    return null;
  }

  return (
    <Card>
      <CardHeader>
        <CardTitle>
          <CrownIcon className="w-5 h-5 mr-2" />
          plan
        </CardTitle>
        <CardDescription>
          You will be redirected to our customer portal where you can cancel or
          renew your subscription and view your invoices
        </CardDescription>
      </CardHeader>
      <CardFooter>
        <Button onClick={handlePortal} disabled={isPending} variant="secondary">
          Manage Subscription
          <SquareArrowOutUpRightIcon className="w-4 h-4 ml-2" />
        </Button>
      </CardFooter>
    </Card>
  );
}
```

## Moving to production

1. Go to your product page in the Stripe dashboard and copy over your product to live mode (see screenshot)
2. Next disable test mode
3. Retrieve your production secret key and webhook secret
4. Find your product and copy the price_id again like we did for test mode

![Stripe products CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bx34jovoqm1hhzcij5ok.png)

Make sure your application takes advantage of these new values in your production environment. Congratulations, you now have a working subscription in your project!

## Final notes

You are now ready to go, but there are some potential edge cases you want to consider handling.

## Canceled subscriptions that are still active

When a user cancels their subscription they still have access till the end of the subscription period, and you might want to inform the user of this. Stripe provides us with some additional values, which we can add to our models.

First we update our models:

```python
class Subscription(models.Model):
    # ...
    cancel_at_period_end = models.BooleanField(default=False)
    cancel_at = models.DateTimeField(blank=True, null=True)
```

`cancel_at_period_end` indicates that our subscription will expire at the end of the billing period and `cancel_at` indicates at what date this will happen.

In our services we should also handle this to ensure these values are updated accordingly:

```python
def handle_subscription_updated(event):
    # ...
    subscription.cancel_at_period_end = stripe_subscription.cancel_at_period_end
    subscription.cancel_at = datetime.fromtimestamp(
        stripe_subscription.cancel_at) if stripe_subscription.cancel_at else None
    # ...
```

Next in our frontend we can handle it as such, where we render an alert if `cancel_at_period_end` is set:

```react
subscription.cancel_at_period_end && (
  <Alert>
    <AlertCircleIcon className="h-4 w-4" />
    <AlertTitle>Your subscription has been canceled</AlertTitle>
    <AlertDescription>
      You can continue using our services till the end of your current billing
      period. Your subscription will end on{' '}
      {format(new Date(subscription.cancel_at), 'MMMM dd, yyyy')}
    </AlertDescription>
  </Alert>
)
```

Which should look something like this:

![Your subscription was canceled CV Forge](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/brpracifrwh1favev2by.png)

## Verified emails

It's important that users have verified email addresses when they subscribe to your service. Depending on how your auth is implemented you might have to check on this before allowing a user to create a checkout session. Keep this in mind!

---

That is it! Thank you so much for following along with this guide, I hope you found it helpful. If you have any questions reach out to me on [X/Twitter](https://x.com/atodeshi)

To see this example in action or if you are simply in need of a resume go checkout my latest project over at [CV Forge](https://www.cvforge.app/)!
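One last footnote on webhooks: the `checkout_webhook_view` mentioned in the urls should verify Stripe's signature before trusting an event. The `stripe` SDK handles this for you via `stripe.Webhook.construct_event`, but to illustrate what that check amounts to, here is a stdlib-only sketch of Stripe's `v1` scheme (an HMAC-SHA256 over `"{timestamp}.{payload}"` keyed with your webhook secret). The function name and tolerance default are illustrative, not part of the project above:

```python
import hashlib
import hmac
import time


def verify_stripe_signature(payload: bytes, sig_header: str, secret: str,
                            tolerance: int = 300) -> bool:
    """Sketch of Stripe's v1 webhook signature check (normally done for you
    by stripe.Webhook.construct_event). Returns True when the signature is
    valid and the timestamp is within `tolerance` seconds."""
    # Header looks like "t=1492774577,v1=5257a8..."
    parts = dict(item.split('=', 1) for item in sig_header.split(','))
    timestamp, signature = parts['t'], parts['v1']
    # The signed payload is "<timestamp>.<raw request body>"
    signed_payload = f'{timestamp}.'.encode() + payload
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    # Reject stale timestamps to limit replay attacks
    return abs(time.time() - int(timestamp)) <= tolerance
```

In practice, prefer the SDK helper in your view; this sketch only shows the mechanism, so you know what breaks when signature verification fails.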
ato_deshi
1,888,952
What happens if you include the CSS <link> tag inside the <body> of an HTML document instead of the <head>?
Well, this question is beginner-friendly but invites thoughtful consideration before giving an...
0
2024-06-14T20:04:45
https://dev.to/george_kingi/what-happens-if-you-include-the-css-tag-inside-the-of-an-html-document-instead-of-the--1f41
Well, this question is beginner-friendly but invites thoughtful consideration before giving an answer.
george_kingi
1,888,946
How to support user custom domains with Next.js
A tutorial guide for supporting user custom domains as a feature in your Next.js app, with an example...
0
2024-06-14T20:03:00
https://approximated.app/guides/nextjs-custom-domains-guide/
webdev, javascript, nextjs, react
**A tutorial guide for supporting user custom domains as a feature in your Next.js app, with an [example github repo](https://github.com/Approximated-Inc/nextjs-custom-domains).**

This guide is meant to show you how to allow users of your Next.js app to connect their own domains and load up content specific to them. Custom domains - sometimes called vanity domains, whitelabel domains, or customer domains - are a key feature of many apps. If you need to manage custom domains with SSL for SAAS, marketplaces, blog hosts, email service providers, or other platforms, then this is the guide for you.

**What you'll learn in this guide:**

- **Domains.** We'll briefly go over web domains, how they work, and how they can be used as custom domains with your Next.js app.
- **Networking.** For your Next.js app to handle custom domain requests, they first need to reach your server(s).
- **Pages Router.** Once the request reaches your Next.js app, we show how you can differentiate custom domains with the Pages Router.
- **App Router.** Alternatively, we show you a few different methods you can use to differentiate custom domains with the App Router.
- **Prerendering.** We talk a bit about what prerendering is, and how it can impact custom domains.
- **Bonus!** How to automate the networking and SSL management easily with [Approximated](https://approximated.app).

## Web domains and how they work

You're likely pretty familiar with web domains, but we'll go over a few of the specifics of how domains work that can sometimes cause confusion.

### Registrars and ownership

Domains are bought and managed with registrars like Hover, GoDaddy, Namecheap, and many others. This is where a domain is officially registered, and it may or may not be where its DNS is managed. For our purposes, we can assume that it will be the user that owns their own domain, which they would like to point at your app.

### DNS and pointing domains

Domains don't do much unless they're pointed at a target server.
This is done with DNS, which tells devices where to send requests for a domain, using DNS records. These records are set at the domain's DNS provider, which may or may not be the same as the registrar.

### Apex Domains vs Subdomains

Apex domains are the root, naked domain with no subdomain attached. For instance, example.com is an apex domain. A subdomain is what we call it when we prepend something to an apex domain. For example, sub.example.com or sub2.sub1.example.com are both subdomains. Technically, subdomains are domains. For the purpose of this guide though, when we use the word domain, we're referring to an apex domain.

### SSL, TLS, and encrypting requests

When a request is sent out over the internet, there are various points where its contents could be inspected and tampered with unless they're encrypted. This is especially important for any private data such as ecommerce or personal information, and may be legally required in many cases. It's generally a very good idea for all requests to be encrypted, in order to prevent censorship, request injections, snooping and other tampering.

SSL certificates - which are technically TLS certificates now, but often still called by the old name - are the standard by which browsers and other applications encrypt these requests. Using them requires a server with a provisioned SSL certificate for each domain. We'll go more into detail on this in the networking section below.

### Networking and SSL for custom domains on Next.js

In order for your app to serve custom domains, the requests for each custom domain need to be networked to reach your server(s). For those requests to be secured, you'll also need to provide and manage an SSL certificate for each custom domain.

**Networking - getting the request to your server**

Each custom domain will need to have DNS records set to point requests at your app - either directly or indirectly.
There are two options for how you can go about this, each with significant trade-offs that you should know about:

**DNS A records pointed at an IPv4 address**

DNS A records can only be pointed at an IPv4 address, formatted as x.x.x.x, where each x can be any value between 0 and 255. It's important to note that apex domains - those with no subdomain (including www) - can only be pointed with A records. They cannot be pointed with CNAME records.

This is the easiest and most universal method, but it also carries the risk that if you ever need to change that IPv4 address, you'll need to have each user change their DNS individually.

**CNAME records pointed through an intermediate domain**

CNAME records cannot point an apex domain and cannot point at an IPv4 address, but they can point subdomains at any other domain or subdomain. For example, you could point subdomain.example.com at test.com and requests would be sent to wherever test.com is pointing.

If it's acceptable in your use case to require that custom domains have a subdomain, then you can do the following:

1. Point the A record for a subdomain that you own at your server IPv4 address.
2. When a custom domain is created, have your user point a CNAME record at the subdomain above.
3. Requests for the custom domain will reach wherever the intermediate subdomain is pointed.
4. This has the advantage of allowing you to re-point all of the custom domains at once, by changing the A record for the intermediate subdomain.

The disadvantage here is that those custom domains cannot be apex domains and must have a subdomain.

### SSL Certificates - securing many custom domains

You've likely secured your app already with SSL certificates, but unfortunately those will only apply to requests going directly to that domain. SSL certificates cannot be shared amongst apex domains.
You're going to need SSL certificates for each individual custom domain that points at your app, to encrypt requests as well as ensure security and privacy for that domain specifically.

In order to do that you'll need an automated system to provision SSL certificates automatically when custom domains are created, including:

- Requesting the generation of an SSL certificate from a Certificate Authority.
- Accepting and appropriately completing the verification challenge from the Certificate Authority.
- Renewing within a reasonable window before the certificate expiration date (typically 3 months).
- Avoiding going over rate limits with Certificate Authorities.
- Checking for certificates that have been revoked early by Certificate Authorities and replacing them.
- Monitoring all of it to catch and avoid the many edge cases and unexpected issues that can cause customer facing downtime.

For those that need to build this in-house, we recommend Caddy as your starting point. However, this is a complex topic and could easily be many guides worth of content. Building reliable SSL management, distributed networking, and monitoring in-house for more than a few domains can be difficult even with the help of amazing tools like Caddy. There are a wide variety of obscure pitfalls that often come at the worst times, and will usually cause customer-facing downtime. So just be aware that it will be an ongoing effort.

To keep this tutorial from exploding in length, we won't be covering building this in-house here. If you'd prefer to have a managed service handle this for you, please see our bonus section below.

## Routing Options

There are multiple ways to handle routing in Next.js, depending on how you want your app to work. We cover the Pages Router and the App Router here, since those are the most typical approaches. Some aspects are quite different between the two, and each has its own set of trade-offs. If you haven't already chosen one, you can find a quick comparison here.
## The Next.js Pages Router for Custom Domains

The Pages Router was the default way to route requests in Next.js until version 13, when they introduced the App Router. If your app makes use of the Pages Router, here are some options for you to handle custom domains.

### Pages Router: Middleware

You can add middleware to the Pages Router that can capture requests before they reach your page, preparing requests or performing redirects. Next.js recommends smaller operations when using middleware, such as rewriting a path, rather than complex data fetching or extensive session management. Depending on your app, this may be enough for your custom domain needs.

Here's an example of a simple middleware that could make some change based on the custom domain before continuing on:

```typescript
// Example repo file: middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  /**
   * Check if there's a header with the custom domain,
   * and if not just use the host header.
   * If you're using approximated.app the default is to
   * inject the header 'apx-incoming-host' with the custom domain.
   */
  const domain = request.headers.has('apx-incoming-host')
    ? request.headers.get('apx-incoming-host')
    : request.headers.get('host');

  // do something with the "domain"

  const response = NextResponse.next({
    request: {
      headers: request.headers,
    },
  });

  return response;
}
```

### Pages Router: Using getServerSideProps

If you're using the Pages Router, using the getServerSideProps approach may be the best choice for you. This method runs domain checking code on the server before rendering your pages, which may be important for some applications, and lets you pass logic as props to your page components.

**Note:** getServerSideProps can only be used on top-level pages, not inside components. For component-level server logic, consider using the app router with React Server Components.
Here's an example of using getServerSideProps to generate a different response based on the custom domain:

```typescript
// Example repo file: pages/index.ts
import type { InferGetServerSidePropsType, GetServerSideProps } from 'next';
import { Inter } from 'next/font/google';

const inter = Inter({ subsets: ['latin'] });

type ApproximatedPage = {
  domain: string;
}

export const getServerSideProps = (async ({ req }) => {
  /**
   * Check if there's a header with the custom domain,
   * and if not just use the host header.
   * If you're using approximated.app the default is to
   * inject the header 'apx-incoming-host' with the custom domain.
   */
  const domain =
    req.headers['apx-incoming-host'] ||
    req.headers.host ||
    process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN;

  // do something with the "domain"

  return { props: { domain } }
}) as GetServerSideProps<ApproximatedPage>;

export default function Home({
  domain,
}: InferGetServerSidePropsType<typeof getServerSideProps>) {
  return (
    <main
      className={`flex min-h-screen flex-col items-center justify-between p-24 ${inter.className}`}
    >
      {domain === process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN
        ? 'Welcome to the primary domain'
        : `Welcome to the custom domain ${domain}`}
    </main>
  )
}
```

### Pages Router: Client-side pages

Client-side pages can also handle domain information, utilizing window.location.hostname or window.location.host wrapped in a state callback. This can be useful if you're loading data or content from an API after an initial client-side render, based on the custom domain. However, note that this can be pretty easily spoofed, so don't rely on this alone for authentication or authorization.
Here's an example of using the window.location.hostname on the client side to do something different when we detect a custom domain:

```typescript
// Example repo file: pages/page-csr.ts
import { useEffect, useState } from 'react';
import { Inter } from 'next/font/google';

const inter = Inter({ subsets: ['latin'] });

export default function Page() {
  const [domain, setDomain] = useState<string>(
    String(process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN)
  );

  useEffect(() => {
    // NOTE: consider the difference between `window.location.host` (includes port) and `window.location.hostname` (only host domain name)
    const pageDomain =
      typeof window !== 'undefined'
        ? window.location.hostname
        : process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN;
    setDomain(String(pageDomain));
  }, []);

  return (
    <main
      className={`flex min-h-screen flex-col items-center justify-between p-24 ${inter.className}`}
    >
      {domain === process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN
        ? 'Welcome to the primary domain'
        : `Welcome to the subdomain ${domain}`}
    </main>
  )
}
```

## Next.js App Router for Custom Domains (React Server Components)

When using the app router, you can run server-side code in your components. This allows you to check request details at the component level, which can be useful for conditional rendering based on the request domain, especially when coupled with authentication.

### App Router: API Routes

API routes can perform similar functions to middleware, using the domain from the header in the rest of your code. Here's an example of how you could use an API route to potentially do something different when a custom domain is detected.
```typescript
// Example repo file: pages/api/host.ts
// Next.js API route support: https://nextjs.org/docs/api-routes/introduction
import type { NextApiRequest, NextApiResponse } from 'next'

type ResponseData = {
  message: string
}

export default function handler(
  req: NextApiRequest,
  res: NextApiResponse<ResponseData>
) {
  /**
   * Check if there's a header with the custom domain,
   * and if not just use the host header.
   * If you're using approximated.app the default is to
   * inject the header 'apx-incoming-host' with the custom domain.
   */
  const domain =
    req.headers['apx-incoming-host'] ||
    req.headers.host ||
    process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN;

  // do something with the "domain"

  res.status(200).json({ message: `Hello from ${domain}` });
}
```

### App Router: Route Handlers

The new app directory allows the use of route handlers, which can access the domain from request headers, similar to other routing methods. Here's an example of using the App Router route handlers to do something different for each custom domain:

```typescript
// Example repo file: app/app-hosts/route.ts
import { type NextRequest } from 'next/server';

export const dynamic = 'force-dynamic';

export async function GET(request: NextRequest) {
  /**
   * Check if there's a header with the custom domain,
   * and if not just use the host header.
   * If you're using approximated.app the default is to
   * inject the header 'apx-incoming-host' with the custom domain.
   */
  const domain = request.headers.has('apx-incoming-host')
    ? request.headers.get('apx-incoming-host')
    : request.headers.get('host');

  return Response.json({ message: `Hello from ${domain}` });
}
```

### App Router: App Pages

**Note:** Not to be confused with the Pages Router, App Pages are something separate.

Server components in the app directory can make use of server-side functions to help manage domain-specific logic.
Here's an example of using server side functions in App Pages to render based on the custom domain:

```typescript
// Example repo file: app/ssr-page/page.tsx
import { headers } from 'next/headers';

export default function Page() {
  /**
   * Check if there's a header with the custom domain,
   * and if not just use the host header.
   * If you're using approximated.app the default is to
   * inject the header 'apx-incoming-host' with the custom domain.
   */
  const domain = headers().has('apx-incoming-host')
    ? headers().get('apx-incoming-host')
    : headers().get('host');

  return (
    <h1>
      {domain === process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN
        ? 'Welcome to the primary domain'
        : `Welcome to the custom domain ${domain}`}
    </h1>
  )
}
```

### App Router: Client-Side Pages

For client-side pages in the app directory, window.location.hostname or window.location.host can be used as long as the page uses "use client". Note that this is happening client-side, so you may be more limited in what you can do. This is often useful for small template changes, or calling an API to get data or content for a specific custom domain.

Here's an example of using window.location.hostname to get the current domain and do something different:

```typescript
// Example repo file: app/csr-page/page.tsx
'use client';

export default function Page() {
  // NOTE: consider the difference between `window.location.host` (includes port) and `window.location.hostname` (only host domain name)
  const domain =
    typeof window !== 'undefined'
      ? window.location.host
      : process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN;

  return (
    <h1>
      {domain === process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN
        ? 'Welcome to the primary domain'
        : `Welcome to the subdomain ${domain}`}
    </h1>
  )
}
```

## Pre-rendering for Custom Domains with Next.js

It is possible to pre-render pages that use a custom domain or otherwise render unique content based on a custom domain. With the right tools this can be quite easy with little or no app modification required.
In other cases this may require significant reimagining of your app architecture, and may need complex logic with frequent re-rendering. This could include fetching the list of all custom domains, creating a specific .env for each, running npm run build, and deploying the static output. This process would potentially need to be repeated for each client and whenever new custom domains are added.

### Bonus: Automating networking and SSL with Approximated

Remember how we said above that building a system for managing SSL certificates at scale is difficult, expensive, and error prone? Well, we know that from personal (painful) experience. It's why we built [Approximated](https://approximated.app) - so that you don't have to. And at a price you can afford, starting at $20/month instead of thousands.

Approximated is your fully managed solution for automating custom domains and SSL certificates. We stand up a dedicated reverse proxy cluster, with edge nodes globally distributed around the world, for each and every customer. You get a dedicated IPv4 address with each cluster so that you have the option of allowing your users to point their apex domains. Custom domain traffic goes through the cluster node nearest your user and straight to your app.

The best part? Your cluster will provision and fully manage SSL certificates automatically along the way. No need to worry about them ever again. All you need to do is call our simple API from your app when you want to create, read, update, or delete a custom domain and it will handle the rest.

So what does that look like in our example repo?
It can be as simple as an API endpoint like this:

```typescript
// in pages/api/createVirtualHost.ts
import type { NextApiRequest, NextApiResponse } from 'next';

interface ApiResponse {
  error?: string;
  data?: any;
}

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse<ApiResponse>
): Promise<void> {
  if (req.method === 'POST') {
    try {
      const apiKey = process.env.APPROXIMATED_API_KEY; // Accessing the API key from environment variable
      const primaryDomain = process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN;
      const response = await fetch('https://cloud.approximated.app/api/vhosts', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Accept': 'application/json',
          'Api-Key': apiKey || '' // Using the API key from .env
        },
        body: JSON.stringify({
          incoming_address: req.body.incoming_address,
          target_address: primaryDomain || ''
        })
      });

      const data = await response.json();

      if (response.ok) {
        res.status(200).json(data);
      } else {
        res.status(response.status).json({ error: data || 'Error creating virtual host' });
      }
    } catch (error) {
      res.status(500).json({ error: 'Failed to create virtual host due to an unexpected error' });
    }
  } else {
    res.setHeader('Allow', ['POST']);
    res.status(405).end('Method Not Allowed');
  }
}
```

In the example above, our endpoint is sending a POST request to the Approximated API with two fields: the custom domain (incoming_address) and the domain of our example app (target_address).

Here's a simple page with a form for adding a custom domain to Approximated, using the endpoint above. It also detects whether we're loading the page from the primary domain, or a custom domain.

```typescript
// in pages/index.tsx
import type { InferGetServerSidePropsType, GetServerSideProps } from 'next';
import { Inter } from 'next/font/google';
import { useState, FormEvent } from 'react';

const inter = Inter({ subsets: ['latin'] });

type ApproximatedPage = {
  domain: string;
}

export const getServerSideProps = (async ({ req }) => {
  /**
   * Check if there's a header with the custom domain,
   * and if not just use the host header.
   * If you're using approximated.app the default is to
   * inject the header 'apx-incoming-host' with the custom domain.
   */
  const domain =
    req.headers['apx-incoming-host'] ||
    req.headers.host ||
    process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN;

  // do something with the "domain"

  return { props: { domain } }
}) as GetServerSideProps<ApproximatedPage>;

export default function Home({
  domain,
}: InferGetServerSidePropsType<typeof getServerSideProps>) {
  return (
    <main
      className={`flex min-h-screen flex-col items-center gap-8 p-24 ${inter.className}`}
    >
      {domain === process.env.NEXT_PUBLIC_APP_PRIMARY_DOMAIN
        ? 'Welcome to the primary domain'
        : `Welcome to the custom domain ${domain}`}
      <DomainForm />
    </main>
  )
}

interface DomainFormData {
  incoming_address: string;
}

const DomainForm: React.FC = () => {
  const [incoming_address, setDomain] = useState<string>('');
  const [errors, setErrors] = useState<object | null>(null); // State to hold response errors
  const [success, setSuccess] = useState<string | null>(null); // State to hold success message
  const [dnsMessage, setDnsMessage] = useState<string | null>(null); // State to hold DNS message

  const handleSubmit = async (event: FormEvent) => {
    event.preventDefault();
    setErrors(null); // Reset errors on new submission
    setSuccess(null); // Reset message on new submission
    setDnsMessage(null); // Reset DNS message on new submission

    const formData: DomainFormData = { incoming_address };

    const response = await fetch('/api/createVirtualHost', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(formData)
    });

    const data = await response.json();

    if (!response.ok) {
      console.log(data);
      // Assuming the error message is in the `message` field
      if (data.error === 'Unauthorized') {
        setErrors({
          'unauthorized': 'Unauthorized - incorrect or missing Approximated API key.',
          'help': 'Check your env variables for APPROXIMATED_API_KEY.'
        });
      } else {
        setErrors(data.error.errors || data.error || { 'unknown': 'An unknown error occurred' });
      }
      return;
    }

    setSuccess(data.data.incoming_address) // Handle the response as needed
    setDnsMessage(data.data.user_message)
  };

  return (
    <form className="text-center" onSubmit={handleSubmit}>
      <div className="mb-2">
        <label htmlFor="incoming_address">Connect a custom domain</label>
      </div>
      <input
        type="text"
        id="incoming_address"
        value={incoming_address}
        onChange={(e) => setDomain(e.target.value)}
        required
        className="text-black text-left px-2 py-1 text-xs"
      />
      <button className="ml-2 border border-white rounded-md px-2 py-1 text-xs" type="submit">Submit</button>
      {dnsMessage && <div className="mt-4 mx-auto max-w-xl text-sm mb-2">{dnsMessage}</div>}
      {success && (
        <div className="mt-2 mx-auto max-w-4xl">
          Once the DNS for the custom domain is set,{' '}
          <a href={'https://' + success} target="_blank" className="text-green-600 underline">Click here to view it.</a>
        </div>
      )}
      {errors && Object.entries(errors).map(([key, message]) => (
        <p key={key} className="text-red-500 mt-2">{message}</p>
      ))}
    </form>
  );
}
```

This is all it takes to fully automate custom domains with Approximated, and have reliable SSL certificate management at any scale!

Between these two files, we're able to submit a custom domain to an API endpoint which then calls the Approximated API. It returns either a successful response with DNS instructions, or displays errors such as duplicate custom domains.

You'll likely want to do more than this in a real world scenario, depending on your app. Things like validation, or adding custom domains to a database can be implemented however you see fit. Approximated is agnostic to the rest of your app - it will happily coexist with whatever stack or implementation details you choose now and in the future.
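Since the `apx-incoming-host`-then-`host` check appears in nearly every snippet above, it may be worth factoring into a small helper. A sketch (the function name, structural type, and fallback parameter are illustrative, not part of the example repo):

```typescript
// Minimal structural type so this works with both the Fetch API Headers
// object used by the App Router and any object exposing get(name).
type HeaderGetter = { get(name: string): string | null };

// Resolve the effective domain for a request: prefer the
// 'apx-incoming-host' header injected by Approximated, then the standard
// Host header, then a fallback (in the guide this would be
// NEXT_PUBLIC_APP_PRIMARY_DOMAIN).
export function getCustomDomain(headers: HeaderGetter, fallback = ''): string {
  return headers.get('apx-incoming-host') ?? headers.get('host') ?? fallback;
}
```

Any of the route handlers or server components above could then call `getCustomDomain(request.headers)` (or `getCustomDomain(headers())`) instead of repeating the ternary.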
apxcarter
1,888,949
Python phase 3 moringa revamp
What I've Learned in Four Weeks: From Python Basics to SQLAlchemy Introduction Over the past four...
0
2024-06-14T20:02:45
https://dev.to/wathikaeng/python-phase-3-moringa-revamp-3e12
python, backenddevelopment, dsa, learning
What I've Learned in Four Weeks: From Python Basics to SQLAlchemy

**<u>Introduction</u>**

Over the past four weeks, I've embarked on an exciting journey to master Python programming. Starting from the basics, I gradually delved into more advanced topics, culminating in the use of SQLAlchemy for database interactions. In this blog post, I will share my learning journey, highlighting key concepts and showcasing some code samples to illustrate what I've learned.

**<u>Week 1: Getting Started with Python</u>**

My journey began with the fundamentals of Python. During this week, I focused on understanding the syntax and basic constructs of the language.

Key Concepts

Variables and Data Types: I learned about different data types such as integers, floats, strings, and booleans.
Control Structures: I explored if-else statements, for and while loops, and how to use them to control the flow of my programs.
Functions: I understood the importance of functions for code reusability and modularity.

Code Sample: Basic Python

```python
def greet(name):
    return f"Hello, {name}!"

print(greet("World"))
```

**<u>Week 2: Data Structures and Libraries</u>**

In the second week, I focused on Python's built-in data structures and some essential libraries.

Key Concepts

Lists, Tuples, and Dictionaries: I learned how to store and manipulate collections of data.
List Comprehensions: I discovered a concise way to create lists.
Libraries: I got introduced to libraries like math and datetime.

Code Sample: List Comprehensions

```python
squares = [x**2 for x in range(10)]
print(squares)
```

**<u>Week 3: Object-Oriented Programming (OOP)</u>**

The third week was dedicated to understanding the principles of object-oriented programming in Python.

Key Concepts

Classes and Objects: I learned how to define classes and create objects.
Inheritance: I explored how to inherit attributes and methods from other classes.
Polymorphism and Encapsulation: I understood how to design flexible and reusable code.
Code Sample: OOP in Python

```python
class Animal:
    def __init__(self, name):
        self.name = name

    def speak(self):
        raise NotImplementedError("Subclasses must implement this method")

class Dog(Animal):
    def speak(self):
        return f"{self.name} says Woof!"

dog = Dog("Buddy")
print(dog.speak())
```

**<u>Week 4: Working with Databases using SQLAlchemy</u>**

In the final week, I delved into SQLAlchemy, a powerful ORM (Object-Relational Mapping) library for Python, which allowed me to interact with databases seamlessly.

Key Concepts

Engine and Sessions: I learned how to set up the engine and create sessions to interact with the database.
Models and Schemas: I explored how to define models and map them to database tables.
CRUD Operations: I practised performing Create, Read, Update, and Delete operations using SQLAlchemy.

Code Sample: SQLAlchemy Basics

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

# Create an engine
engine = create_engine('sqlite:///example.db', echo=True)

# Define a base class for declarative class definitions
Base = declarative_base()

# Define a model
class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    age = Column(Integer)

# Create the table
Base.metadata.create_all(engine)

# Create a new session
Session = sessionmaker(bind=engine)
session = Session()

# Add a new user
new_user = User(name='Alice', age=30)
session.add(new_user)
session.commit()

# Query the database
users = session.query(User).all()
for user in users:
    print(user.name, user.age)
```

**Conclusion**

The past four weeks have been incredibly rewarding. Starting from the basics of Python, progressing through data structures and OOP, and finally mastering SQLAlchemy has equipped me with a solid foundation for future projects.
Each step has been a building block, and I am excited to continue expanding my knowledge and applying these skills to real-world applications. If you're just starting with Python, I encourage you to take it one step at a time and enjoy the learning process. Happy coding!
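As a footnote to the Week 4 section: the sample above covers the C and R of CRUD, so here is a sketch of the update and delete operations against the same `User` model (using an in-memory SQLite database so it leaves no file behind; this assumes SQLAlchemy 1.4+ is installed):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

# Same model as the Week 4 sample, but against an in-memory database
engine = create_engine('sqlite:///:memory:')
Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    age = Column(Integer)

Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()

session.add(User(name='Alice', age=30))
session.commit()

# Update: load the row, change an attribute, commit
alice = session.query(User).filter_by(name='Alice').first()
alice.age = 31
session.commit()

# Delete: remove the object and commit
session.delete(alice)
session.commit()

print(session.query(User).count())  # 0
```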
wathikaeng
1,888,947
THE HACK ANGEL IS THE BEST CRYPTO EXPERT TO RECOVER ALL YOUR LOST BTC
Hello everyone, I'd like to use this medium to express my gratitude to THE HACK ANGEL for assisting...
0
2024-06-14T20:00:58
https://dev.to/rutger_gast_2da45c71fab37/the-hack-angel-is-the-best-crypto-expert-to-recover-all-your-lost-btc-bgb
cryptocurrency, webdev, discuss
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tw8zy06m3c4lg264yic9.jpeg) Hello everyone, I'd like to use this medium to express my gratitude to THE HACK ANGEL for assisting me in recovering my stolen crypto worth $3200,000 through their hacking skills. It was skeptical, but it worked, and I got my money back. I'm so glad I discovered them  because I thought I'd never get my money back from those fake online investment websites. You can also reach them via.  Email. hackangel@cyberdude.com WHATsAP: +1 203,309,3359 Web: https://thehackangels.com 
rutger_gast_2da45c71fab37
1,888,943
Azure Components
The core architectural components of Azure. In this summary, I'm just going to quickly wizz through...
0
2024-06-14T19:55:34
https://dev.to/favesdigital24/azure-components-2j78
The core architectural components of Azure. In this summary, I'm going to quickly whizz through four core components of Azure: paired regions, Availability Zones, resource groups, and the Azure Resource Manager.

## Paired Regions

Azure operates data centers all over the world, in many different locations and regions. A region is an area within a geography that contains one or more data centers. Regional pairs allow Azure to stagger platform updates and planned maintenance, ensuring that only one region of the pair is updated at a given time. This provides optimal application availability and minimizes recovery time in the event of a disaster.

## Availability Zones

Availability Zones protect applications and data from data center failures. Each Availability Zone is a unique physical location within an Azure region, and each zone is supported by one or more data centers equipped with their own independent power, cooling, and networking infrastructure. Availability Zones help ensure resiliency in each enabled Azure region.

## Azure Resource Groups

A resource group is like a folder used to host all the resources that make up an overall Azure solution, or any resources that need to be managed as a group. The administrator decides, based on need, how to allocate resources to resource groups within Azure. Resources from multiple resource groups can also interact with each other.

## Azure Resource Manager

Resource Manager provides a consistent management layer for all Azure resources, along with security, auditing, and tagging features that you can use to manage your resources once they've been deployed into Azure. Using Resource Manager, you can deploy, manage, and monitor all the Azure resources for a solution as one group.
favesdigital24
1,888,942
Laravel Scope to get locations nearest the user location
In many web applications, particularly those that involve mapping or location-based services, it is...
0
2024-06-14T19:55:03
https://dev.to/johndivam/laravel-scope-to-get-locations-nearest-the-user-location-2fk5
laravel, maps, coding
In many web applications, particularly those that involve mapping or location-based services, it is crucial to retrieve and display the locations closest to a user's current position.

When dealing with geographical coordinates, calculating the distance between two points on the Earth's surface can be complex due to the spherical nature of the Earth. The Haversine formula is commonly used for this purpose: it calculates the distance between two points on a sphere given their longitudes and latitudes.

## **Setting Up the Scope**

```
namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Farm extends Model
{
    public function scopeClosest($query, $lat, $long)
    {
        $distanceQuery = "
            ( 6371 * acos(
                cos( radians(?) )
                * cos( radians( farms.lat ) )
                * cos( radians( farms.long ) - radians(?) )
                + sin( radians(?) )
                * sin( radians( farms.lat ) )
            ) ) AS distance
        ";

        return $query->select('farms.*')
            ->selectRaw($distanceQuery, [$lat, $long, $lat])
            ->orderBy('distance', 'asc');
    }
}
```

## Breaking Down the Scope

**SQL Query:** The Haversine formula is used within a raw SQL query to calculate the distance.

- **6371** is the Earth's radius in kilometers. If you need the distance in miles, replace it with 3959.
- **radians(?)** converts degrees to radians for the latitude and longitude.

**selectRaw**: This method allows us to include raw expressions in our query. The placeholders (?) are replaced with the provided latitude and longitude values.

**orderBy**: Finally, we sort the results by the calculated distance in ascending order, so the closest farms appear first.

## Using the Scope

```
$lat = 37.7749; // User's latitude or use $request->lat
$long = -122.4194; // User's longitude or use $request->long

$nearestFarm = Farm::closest($lat, $long)->first();
```

Because the scope orders farms by proximity, `->first()` returns the single closest Farm; call `->get()` instead to retrieve a collection of Farm models ordered by their proximity to the user's location.
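As a sanity check for the raw SQL, the same Haversine calculation can be reproduced in a few lines of plain Python (illustrative only, not part of the Laravel code; the sin²-based form below is mathematically equivalent to the acos form in the query):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371  # Earth's radius in km; use 3959 for miles
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    # Equivalent to r * acos(...) from the SQL, but numerically
    # more stable for very small distances
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# San Francisco -> Los Angeles: roughly 559 km
print(round(haversine_km(37.7749, -122.4194, 34.0522, -118.2437)))
```

If the ordering your scope produces ever looks wrong, comparing a few of its `distance` values against this function is a quick way to spot a mixed-up latitude/longitude binding.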
johndivam
1,888,941
How to Become a Professional .NET Developer in 2024
Nowadays there is tons of information out there and finding the correct one is not an easy task. When...
0
2024-06-14T19:54:13
https://dev.to/turalsuleymani/how-to-become-a-professional-net-developer-in-2024-2nip
csharp, roadmap, dotnet
Nowadays there is tons of information out there and finding the correct one is not an easy task. When you start your journey in programming and want to have a comprehensive guide it is always better to look for a guideline that will walk you through the full process. This article will help you have a roadmap that will teach you what is important and what to learn. PS: you can download a roadmap from the [GitHub repository](https://github.com/TuralSuleymani/DecodeBytes/tree/tutorial/dotnet-roadmap). Everything starts from Software fundamentals ![software fundamentals](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3tlhgke97zmd2mkr0zgr.png) Software fundamentals are the building blocks that all software is made from. They provide a foundation for understanding how computers work, how programs are written, and how to solve problems with code. Just like learning the alphabet is essential before you can write a novel, understanding software fundamentals is essential before you can become a proficient programmer. Software fundamentals cover some basic aspects of programming knowledge. **Algorithms**: An algorithm is a set of step-by-step instructions that tell a computer how to solve a problem. Learning about algorithms helps you break down complex problems into smaller, manageable steps that can be translated into code. **Data Structures**: Data structures are specialized formats for organizing and storing data in a computer's memory. Understanding different data structures (like arrays, lists, and trees) helps you choose the right way to store and access information in your programs. **Programming Paradigms:** Having a basic understanding of Object-oriented programming, Procedural Programming, and Functional programming will help you in the future when you dive into the details of these paradigms. Software fundamentals cover concepts like data types, control flows, variables, and operators. These are essential building blocks for any programming language. 
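These building blocks — data types, variables, operators, and control flow — look much the same in most languages. A quick illustration in Python (Python rather than C#, purely so the snippet is self-contained; every name here is invented for the example):

```python
# Data types
age = 30             # integer
price = 9.99         # float
initial = "A"        # a single character (a one-char string in Python)
is_active = True     # boolean
name = "Alice"       # string

# Operators
total = price * 2            # arithmetic
is_adult = age >= 18         # comparison
ok = is_adult and is_active  # logical

# Control flow: a conditional and a loop
if is_adult:
    label = "adult"
else:
    label = "minor"

squares = []
for n in range(5):  # repeat a block a fixed number of times
    squares.append(n * n)

print(name, label, squares)
```

The same program translated into C# would swap syntax but keep every concept intact, which is exactly why these fundamentals are worth learning before any particular language.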
**Data Types:** Data types define the kind of data a variable can hold and the operations that can be performed on it. Common data types include:

1. Integers (whole numbers)
2. Floats (numbers with decimal points)
3. Characters (single letters or symbols)
4. Booleans (logical values of true or false)
5. Strings (sequences of characters)

**Variables:** Variables are named containers that store data in a computer's memory. You can think of them as labeled boxes where you can keep different types of information.

**Operators:** Operators are symbols that perform operations on data. There are different categories of operators, including:

**Arithmetic operators** (+, -, *, /) for performing mathematical calculations.
**Comparison operators** (==, !=, <, >, <=, >=) for comparing values.
**Logical operators** (&&, ||, !) for performing logical operations on Boolean values.
**Assignment operators** (=, +=, -=, *=, /=) for assigning values to variables and performing calculations at the same time.

**Control Flow:** Control flow statements dictate the order in which program instructions are executed. They allow programs to make decisions and repeat tasks based on certain conditions. Common control flow statements include:

**Conditional statements** (if, else, elif): These statements allow you to execute different code blocks based on whether a condition is true or false.
**Loops** (for, while): Loops allow you to repeat a block of code a certain number of times or until a condition is met.

Software fundamentals encompass a broader range than just the core programming concepts. When it comes to web development, understanding these web essentials is crucial.

**Web Protocols:** These are the rules that govern how data is transmitted over the web. A key protocol you'll encounter is:

**HTTP (Hypertext Transfer Protocol):** This is the foundation of communication between web browsers and servers. It defines how requests are made, how data is formatted, and how responses are sent back.
## Basics of HTML and CSS

**HTML (HyperText Markup Language):** HTML is the language used to structure the content of a web page. It defines the document layout using tags that tell the browser what kind of content is being displayed (headings, paragraphs, images, etc.).

**CSS (Cascading Style Sheets):** CSS controls the visual presentation of a web page. It allows you to style elements like fonts, colors, backgrounds, and layouts.

## Web Server and Client-Server Model

**Web Server:** A web server is a computer program that stores web pages and delivers them to web browsers when requested. Think of it as a library that holds all the information for your website.

**Client-Server Model:** This model describes the interaction between a web browser (client) and a web server. The client (your browser) sends a request (e.g., visiting a website) to the server, and the server processes the request and sends back a response (the web page you see).

## Request-Response Process

This process outlines how communication happens between a browser and a server:

**Request:** The user enters a URL in the browser or clicks on a link. The browser translates this into an HTTP request containing information like the requested URL and any additional data (e.g., form submissions).

**Server:** The browser sends the request to the web server specified in the URL.

**Processing:** The server receives the request, processes it (retrieving the requested webpage or performing an action), and generates a response.

**Response:** The server sends an HTTP response back to the browser. This response includes the requested data (the HTML content) and additional information like status codes.

**Rendering:** The browser receives the response, interprets the HTML code, and displays the web page according to the included styling (CSS).

Understanding these web essentials alongside software fundamentals provides a solid foundation for building web applications.
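Since HTTP messages are just structured text, the request-response cycle described above can be sketched without any network at all. In this illustration the "server response" is a canned string (a real exchange travels over a socket):

```python
# The request a browser would send for http://example.com/
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

# A canned server response (what would normally come back over the wire)
raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body><h1>Hello</h1></body></html>"
)

# Parse it the way a browser does: status line, headers, then the body to render
head, body = raw_response.split("\r\n\r\n", 1)
status_line, *header_lines = head.split("\r\n")
headers = dict(line.split(": ", 1) for line in header_lines)

print(status_line)              # status line with the status code
print(headers["Content-Type"])  # tells the browser how to interpret the body
print(body)                     # the HTML the browser renders
```

Seeing the raw text makes the roles of status codes and headers concrete before you ever touch a web framework.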
With this knowledge, you can move on to learning more advanced web development concepts and programming languages. Having a solid grasp of software fundamentals will give you a strong foundation for learning any programming language. It will help you write cleaner, more efficient code, solve problems more effectively, and ultimately become a better programmer.

## Learning .NET essentials

Before diving into the details of C#, it is better to understand the framework and the runtime itself.

![dotnet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o22opl54qgojejn85gpu.png)

Understanding the underlying concepts of the .NET runtime can definitely be beneficial before diving into C#. Here are the core concepts.

**CLR (Common Language Runtime):** The CLR is the heart of the .NET framework. It's the virtual environment that manages the execution of .NET code. It handles tasks like memory management, security, and garbage collection.

**Manifest:** The manifest is a file within a .NET assembly (like a .dll file) that contains metadata about the assembly itself. It includes information like the assembly name, version, dependencies on other assemblies, and security information.

**Metadata:** Metadata is essentially data about your program. In the .NET world, it describes the types, methods, and resources defined in your code. The CLR uses this metadata to understand the structure of your program and execute it correctly.

**IL Code (Intermediate Language):** C# code doesn't directly run on the computer's hardware. Instead, the C# compiler translates your code into a special set of instructions called IL (Intermediate Language). This IL code is designed to be portable and run on any system with a compatible CLR.

**JIT (Just-In-Time compilation):** The JIT compiler is a component of the CLR that translates IL code into machine code (native code) specific to the processor architecture it's running on.
This translation happens at runtime, which is why it's called Just-In-Time compilation. **FCL/BCL (Framework Class Library/Base Class Library):** These terms are often used interchangeably. It's a collection of pre-written classes and functionalities that provide common operations like file access, database interaction, networking, and more. These libraries save you time by providing pre-built solutions for common programming tasks. **CLI (Common Language Infrastructure)**: This is a broader specification that defines the standards for creating and running programs on the .NET platform. It includes the CLR, libraries, and tools for building and deploying .NET applications. Understanding these concepts will give you a deeper appreciation of how C# code works behind the scenes. You can start by learning the basic concepts of C# syntax, variables, data types, and control flow. This will give you a foundation for understanding how to write code. Then, you can delve deeper into the .NET runtime concepts to solidify your knowledge of the platform. ## Learning Csharp ![csharp](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xbdrwhlblht6c84q6oh1.png) **C# Fundamentals:** This is the foundation. Make sure you grasp core concepts like variables, data types (int, string, bool, etc.), operators (+, -, *, /), control flow statements (if/else, for, while), and basic building blocks like classes, records, and structs. **OOP Implementation**: Object-oriented programming(OOP) is a key paradigm in .NET. Here, you'll learn about encapsulation (data hiding), inheritance (creating child classes from parent classes), polymorphism (objects responding differently to the same message), and abstraction (focusing on what an object does rather than how). **Interfaces**: Interfaces define contracts that classes must adhere to. They are crucial for loose coupling (relying on functionality rather than specific implementations) and promoting code flexibility. 
**Delegates and Events:** Delegates are function pointers that allow you to pass methods as arguments. Events are a communication mechanism between objects, notifying interested parties when something happens.

**Generics:** Generics let you create code that can work with various data types without sacrificing type safety. This promotes code reusability and maintainability.

**Exception Handling:** Exceptions are errors that occur during program execution. Learn how to handle them gracefully using try/catch blocks to prevent application crashes and provide informative error messages.

**Method Extensions:** Method extensions allow you to add functionality to existing classes without modifying the original code. This is a powerful technique for improving code readability and maintainability.

**Entity Framework Core (EF Core):** EF Core is an Object-Relational Mapper (ORM) that simplifies data access in .NET applications. It allows you to work with databases using C# objects, reducing the need for raw SQL queries.

**LINQ:** Language Integrated Query (LINQ) is a powerful syntax for querying data in various sources (databases, collections, XML) using C# syntax. This makes data manipulation more concise and readable.

**Functional Programming:** Functional programming emphasizes immutability (data doesn't change) and pure functions (always return the same output for the same input). While not core to .NET, understanding these concepts can improve code clarity and maintainability.

**Data Representations:** Learn how to work with data in different formats like XML (Extensible Markup Language), JSON (JavaScript Object Notation), and files. This is essential for data exchange and persistence.

**Concurrency:** Concurrency deals with handling multiple tasks or operations happening simultaneously.
It includes asynchronous programming (performing tasks without blocking the main thread), parallel programming (executing multiple tasks concurrently), and multithreading (using multiple threads for improved performance).

**Synchronization and Thread Safety:** When working with multiple threads, it's crucial to synchronize access to shared resources to avoid race conditions (unexpected outcomes due to uncoordinated access) and deadlocks (threads waiting for each other indefinitely). Learn about concepts like context switching, synchronization primitives (locks, mutexes), and how to achieve thread-safe code.

**Task Parallel Library (TPL):** TPL is a set of classes and tools in .NET that simplify parallel programming tasks. It provides a higher-level abstraction compared to directly managing threads.

This roadmap covers a solid foundation for becoming a .NET developer. Remember to practice and build projects to solidify your understanding. There are many online resources and tutorials available to help you on your journey!

## .NET Technologies

![dotnet tech](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s78a3kwylren4hugthds.png)

**ASP.NET Core:** This is the core framework for building modern web applications in .NET. It's open-source, cross-platform (runs on Windows, macOS, and Linux), and high-performance. It provides a foundation for building various web applications using different UI paradigms.

**ASP.NET Core Web API:** This is a sub-framework within ASP.NET Core specifically designed for building web APIs. It excels at creating RESTful APIs that provide data and functionality to other applications (mobile apps, single-page applications, etc.) in a structured way.

**ASP.NET Core Razor Pages:** Razor Pages are a lightweight model for building web applications within ASP.NET Core. They combine HTML with C# code (using Razor syntax) to create dynamic web pages that handle user interactions and data access.
It's a good option for simpler web applications or quick prototypes. **ASP.NET Core MVC:** Model-View-Controller (MVC) is a classic architectural pattern for building web applications. ASP.NET Core MVC provides a structured way to separate concerns between models (data), views (presentation), and controllers (handling user requests). This offers more control and flexibility compared to Razor Pages for complex applications. **SignalR**: SignalR is a real-time communication library for ASP.NET Core. It enables bi-directional communication between a server and clients (web browsers, mobile apps), allowing for features like live updates, chat applications, and collaborative editing. **WPF (Windows Presentation Foundation)**: WPF is a UI framework for building desktop applications with rich visuals and user experiences. It's specifically designed for the Windows platform and allows for creating visually stunning applications with custom controls and animations. **Blazor**: Blazor is a relatively new UI framework that allows you to build web UI with C# instead of JavaScript. It offers two main models: Blazor WebAssembly for single-page applications with a focus on web performance and Blazor Server for server-side rendering with real-time updates. ## Choosing the right technology The best technology for your project depends on your specific needs. Here's a quick guide. - For web APIs: Use ASP.NET Core Web API. - For simple web applications or prototypes: Consider Razor Pages. - For complex web applications with a clear separation of concerns: Choose ASP.NET Core MVC. - For real-time communication features: Integrate SignalR. - For desktop applications on Windows: Use WPF. - For web UI with C# instead of JavaScript: Explore Blazor (consider WebAssembly or Server, depending on your needs). ## Database skills for .NET developers Database skills vary depending on your projects. 
I have worked on different projects, and some of them had really deep-dive requirements related to databases. ![database](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cwh0f0tw8nlpzbch9f09.png) ## Relational Database Topics **Joins**: This is a fundamental concept for retrieving data from multiple related tables in a relational database. Different join types (inner join, left join, etc.) allow you to specify how rows from different tables should be matched and retrieved based on relationships. **Common Table Expressions (CTEs):** These are temporary named result sets defined within a SQL query. They can be used to simplify complex queries and improve readability by breaking down the logic into smaller, reusable steps. **Pivoting**: This technique transforms data from rows to columns, often used for data summarization and reporting. It allows you to present data in a different format for easier analysis. **Views**: Views are virtual tables based on underlying tables or other views. They provide a customized way to expose data to users without granting direct access to the base tables. **Functions**: These are reusable blocks of SQL code that perform specific operations. They can take parameters and return a value, improving code reusability and modularity. **Stored Procedures**: Stored procedures are precompiled SQL code stored on the database server. They can accept parameters, execute complex logic, and improve performance for frequently used operations. **DDL (Data Definition Language) Operations**: These are SQL statements used to define the structure of the database, like creating tables, columns, indexes, and constraints. **Indexes**: Indexes are special data structures that speed up data retrieval by organizing data in a specific order. Choosing the right indexes can significantly improve query performance. 
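Many of these relational topics can be tried out with nothing but Python's built-in sqlite3 module. A small sketch — the tables and data are invented for the example — showing DDL, an index, an inner join, and grouping with an aggregate function:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the schema
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, "
            "customer_id INTEGER, amount REAL)")
# An index on the join column speeds up lookups on larger tables
cur.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Alice"), (2, "Bob")])
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, 50.0), (2, 1, 25.0), (3, 2, 10.0)])

# INNER JOIN matches rows across tables; GROUP BY + SUM aggregates per customer
rows = cur.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    INNER JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('Alice', 75.0), ('Bob', 10.0)]
```

The same statements run unchanged against most relational databases, so an in-memory scratchpad like this is a cheap way to practice before touching a production SQL Server or PostgreSQL instance.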
**Grouping and Aggregation:** Grouping allows you to categorize data based on a specific column and then use aggregate functions (SUM, COUNT, AVG, etc.) to summarize data within those groups.

**Working with XML/JSON:** Relational databases can store and manipulate XML and JSON data. This allows for data exchange with other systems and working with semi-structured data within the relational model.

## NoSQL Topics

**Flexible Data Models:** Unlike relational databases with a fixed schema, NoSQL databases offer flexible data models that can accommodate data with varying structures. This is useful for storing data that doesn't fit neatly into a relational structure.

**Scalability:** NoSQL databases are often designed to scale horizontally, meaning you can add more servers to handle increased data volume and user requests. This is in contrast to the vertical scaling (upgrading hardware) common in relational databases.

**High Availability:** NoSQL databases often prioritize high availability, ensuring minimal downtime and continuous operation even during server failures. They achieve this through techniques like replication (copying data across multiple servers).

**Eventual Consistency:** Unlike relational databases that guarantee data consistency across all replicas immediately (ACID properties), NoSQL databases may exhibit eventual consistency. This means data changes may take some time to propagate across all replicas, but eventually, all copies will be consistent.

**Different Types of NoSQL Databases:** There are various NoSQL database types, each with its strengths:

**Document Databases:** Store data as JSON-like documents and are good for managing hierarchical data. (e.g., MongoDB)

**Key-Value Stores:** Offer very fast lookups based on a unique key and are ideal for simple data like user profiles or shopping cart items. (e.g., Redis)

**Column Family Stores:** Organize data by columns instead of rows and are efficient for handling large datasets with frequently accessed columns. (e.g., Cassandra)

**Graph Databases:** Store data as nodes (entities) and relationships between them, ideal for representing social networks or recommendation systems. (e.g., Neo4j)

Understanding both relational and NoSQL databases equips you as a .NET developer to choose the right tool for the job based on your project's specific needs.

## Messaging and Streaming Tools for .NET Developers

These tools facilitate asynchronous communication between different parts of an application or between microservices. They enable applications to send and receive messages reliably and efficiently, decoupling message senders from receivers. This improves scalability, fault tolerance, and loose coupling in distributed systems.

**Azure Service Bus (ASB):** A cloud-based messaging solution from Microsoft that offers queues, topics, event hubs, and relay services. It integrates well with other Azure services and provides a reliable and scalable messaging platform for .NET applications.

**RabbitMQ:** A popular open-source message broker that implements the Advanced Message Queuing Protocol (AMQP). It's known for its flexibility, reliability, and ease of use. There are client libraries available for .NET development.

**MassTransit:** A powerful open-source service bus specifically designed for .NET development. It simplifies building applications that leverage message queues and integrates well with various message brokers like RabbitMQ and Azure Service Bus.

**Apache Kafka:** A distributed streaming platform originally developed by LinkedIn. It excels at handling high-volume, real-time data streams and offers features like message persistence, partitioning, and replication. There are .NET client libraries available for interacting with Kafka clusters.
![messaging](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hjd2j7vvlvl7ovch59wo.png)

**AWS Kinesis:** A managed service by Amazon Web Services for handling real-time data streams. It offers various components like Kinesis Data Streams for ingesting data, Kinesis Firehose for delivering data to other destinations (e.g., S3), and Kinesis Data Analytics for real-time data processing. There are .NET SDKs available for working with AWS Kinesis.

## Choosing the Right Tool

The best tool depends on your specific needs. Here's a basic guide.

- For cloud-based messaging with tight integration to Azure services: consider Azure Service Bus.
- For open-source flexibility and AMQP support: explore RabbitMQ with a .NET client library like MassTransit.
- For high-volume, real-time data streams with .NET-centric development: look into MassTransit or Apache Kafka with a .NET client library.
- For real-time data streaming within the AWS cloud: utilize AWS Kinesis and its .NET SDK.

## Containerization and orchestration

Docker and Kubernetes play a significant role in the lives of modern .NET developers, especially those working on cloud-based deployments or microservices architectures. Here's how these tools impact .NET development.

## Containerization with Docker

**Packaging and Deployment:** Docker allows you to package your .NET application (code, dependencies, libraries) into a lightweight, portable unit called a container. This containerized application can then be deployed consistently across different environments (development, testing, production) regardless of the underlying operating system. This simplifies deployment and streamlines the development workflow.

**Isolation and Consistency:** Each Docker container runs in isolation from other containers, ensuring that applications don't interfere with each other's resources or dependencies. This promotes consistency and predictability in application behavior across environments.
**Version Control and CI/CD:** Docker images can be version controlled using Docker Hub or private registries. This allows for easy rollbacks to previous versions if needed and integrates well with Continuous Integration/Continuous Delivery (CI/CD) pipelines for automated builds and deployments.

## Orchestration with Kubernetes

**Managing Multiple Containers:** While Docker excels at building individual containers, Kubernetes shines in managing and orchestrating deployments of multiple containers that work together as a system (often called microservices). It automates tasks like container scaling, load balancing, and health checks, ensuring a highly available and scalable application.

**Declarative Configuration:** Kubernetes uses a declarative approach, where you define the desired state of your application (number of replicas, resource allocation) and Kubernetes takes care of achieving and maintaining that state. This simplifies deployment management and reduces configuration errors.

**Cloud Agnostic:** Kubernetes is designed to be cloud-agnostic, meaning you can deploy your containerized .NET application on various cloud platforms (Azure Kubernetes Service, Amazon Elastic Kubernetes Service, Google Kubernetes Engine) or even on-premises deployments with tools like Rancher.

![docker and kubernetes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/suvbhej8lwlbd9g2qc4l.png)

## Benefits for .NET Developers

**Faster Development Cycles:** Containerization and orchestration streamline development workflows by simplifying deployment and management. Developers can focus on writing code and less on infrastructure configuration.

**Improved Scalability:** Scaling your .NET application becomes easier with Kubernetes. You can define how many instances of your containerized application to run based on demand, allowing for efficient resource utilization.
**Increased Reliability:** Kubernetes offers features like self-healing and automatic rollbacks, improving the overall reliability and resilience of your .NET application.

## Learning Curve

While Docker and Kubernetes offer significant benefits, there's a learning curve involved in understanding and effectively using them. However, many resources and tutorials are available to help .NET developers get started with containerization and orchestration. Docker and Kubernetes are powerful tools that can significantly enhance the development and deployment experience for .NET developers. By leveraging containerization and orchestration, you can build and deploy modern, scalable, and reliable .NET applications.

## Version control and testing

**Version Control:** Git is a distributed version control system (DVCS) that allows you to track changes to your codebase over time. It creates a history of all changes made, enabling you to revert to previous versions if necessary, collaborate with other developers effectively, and maintain a clear record of project evolution.

**Collaboration:** Git facilitates seamless collaboration between developers working on the same project. Multiple developers can work on different parts of the codebase simultaneously, merge their changes without conflicts, and manage different branches for new features or bug fixes.

**Branching and Merging:** Git's branching feature allows you to create isolated copies of the codebase (branches) to experiment with new features or bug fixes without affecting the main codebase (master branch). When ready, you can merge changes from the feature branch into the master branch.

**Code Sharing:** Platforms like GitHub or Azure Repos built on top of Git enable code sharing among developers and communities. You can share your code publicly or privately, collaborate on open-source projects, and access a vast repository of existing code and libraries.
## Testing (Unit, e2e, and Integration)

**Quality Assurance:** Testing is an essential practice for ensuring the quality, reliability, and functionality of your .NET application. It helps identify bugs, regressions, and potential problems early in the development process, saving time and effort in the long run.

![git](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aqmb7alyrsfqsfo0794x.png)

## Types of Testing

**Unit Testing:** Unit tests focus on isolated units of code (methods, classes) within an application. They verify the expected behavior of these individual units with specific inputs and outputs. Unit tests are typically written by developers using frameworks like NUnit, xUnit, or MSTest.

**Integration Testing:** Integration tests assess how different modules or components of an application interact with each other. They ensure proper data flow and communication between these components. Integration tests can involve more complex setups than unit tests.

**End-to-End Testing (e2e Testing):** E2e tests simulate real-world user interactions with the entire application, verifying its overall functionality from beginning to end. They often involve testing user interfaces, database interactions, and external APIs. Tools like Selenium or Cypress are popular for e2e testing.

## Conclusion

Being a .NET developer is not just about learning C# language syntax. It requires many more tools and techniques. You should understand the development ecosystem rather than sticking to a concrete language. It requires a lot of time and passion. So, keep learning!

Oh, by the way, here is my tutorial about it :)

{% embed https://youtu.be/34ZKd8Or5ks?si=n2eLTeaG3W3dGmct %}

**Want to dive deeper?** I regularly share my senior-level expertise on my [TuralSuleymaniTech](https://www.youtube.com/@TuralSuleymaniTech) YouTube channel, breaking down complex topics like .NET, Microservices, Apache Kafka, Javascript, Software Design, Node.js, and more into easy-to-understand explanations.
Join us and level up your skills!
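Circling back to the testing section above: to make the unit-testing idea concrete, here is a minimal sketch using Python's built-in `unittest` (the `cart_total` function is a hypothetical unit under test, invented for illustration; in .NET the same pattern applies with NUnit, xUnit, or MSTest):

```python
import unittest

# Hypothetical unit under test (not from the article): a tiny price calculator.
def cart_total(prices, discount=0.0):
    """Sum item prices and apply a fractional discount between 0 and 1."""
    if not 0.0 <= discount <= 1.0:
        raise ValueError("discount must be between 0 and 1")
    return round(sum(prices) * (1.0 - discount), 2)

class CartTotalTests(unittest.TestCase):
    # Each test isolates one behaviour with specific inputs and expected outputs.
    def test_total_without_discount(self):
        self.assertEqual(cart_total([10.0, 5.5]), 15.5)

    def test_total_with_discount(self):
        self.assertEqual(cart_total([100.0], discount=0.25), 75.0)

    def test_invalid_discount_raises(self):
        with self.assertRaises(ValueError):
            cart_total([1.0], discount=2.0)
```

Run with `python -m unittest <file>`. Integration and e2e tests follow the same assert-on-expected-behaviour shape, but exercise larger slices of the system.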
turalsuleymani
1,888,939
Virtualization 256 chars
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-14T19:45:13
https://dev.to/fmalk/virtualization-256-chars-444k
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer Virtualization is a technique where a system is run in a virtual environment, as in not run by actual hardware or by its usual host system. The virtual system is designed to be indistinguishable from the real setup by anything running on it. ## Additional Context Many success cases of virtualization are very well known, such as cloud computing, containerization, and emulation; some lesser-known examples of virtualization are distributed file systems and work over remote desktops.
fmalk
1,888,938
Building a custom chat bot
heres the full code of the repo changed the dataset...
0
2024-06-14T19:44:52
https://dev.to/abdullah_ali_eb8b6b0c2208/building-an-custom-chat-bot-7fk
ai, machinelearning, webdev, tutorial
_here's the full code of the repo; change the dataset first: https://github.com/Abdullaah-Ali/chatbot_ 1. Setting Up the Environment: Before diving into the development process, it's essential to set up the environment properly. First, ensure you have the necessary libraries installed. You'll need Keras, Numpy, and other dependencies. You can install them using pip: ``` pip install keras numpy ``` 2. Understanding the Dataset: A well-prepared training dataset is crucial for the success of your chatbot. Ensure your dataset is formatted correctly and preprocessed for training. The quality of your dataset will significantly impact the performance of the chatbot. 3. The Keras Functional API: This API provides a flexible way to define complex models. Designing the architecture of your chatbot model involves understanding the structure of the neural network. You'll need to decide on the number of layers, the type of layers, and the connections between them. Each layer serves a specific purpose, such as processing input data, extracting features, and generating output responses. 4. With the architecture defined, it's time to implement the model using the Keras Functional API. Start by coding the layers according to the design. Then, compile the model by specifying the appropriate loss function and optimizer. Split the dataset into training and validation sets to evaluate the model's performance during training. ``` # input_text, pooled_output, label_encoder, padded_sequences and encoded_answers are defined earlier in the repository's full script output = Dense(len(label_encoder.classes_), activation='softmax')(pooled_output) # Define the model model = Model(inputs=input_text, outputs=output) # Compile the model model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # Train the model model.fit(padded_sequences, encoded_answers, epochs=600, batch_size=200) ``` Once the architecture is defined, run the script on your machine and your model will be trained. **here's the full code: https://github.com/Abdullaah-Ali/chatbot**
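The preprocessing mentioned in step 2 — turning answer strings into integer class labels and token lists into fixed-length sequences — can be sketched without any Keras dependency. The helpers below are illustrative stand-ins for `LabelEncoder` and `pad_sequences`, not code from the repository:

```python
def encode_labels(answers):
    """Map each distinct answer string to an integer class id (LabelEncoder-style)."""
    classes = sorted(set(answers))
    index = {label: i for i, label in enumerate(classes)}
    return [index[a] for a in answers], classes

def pad_sequence(token_ids, maxlen, pad_value=0):
    """Truncate or left-pad a list of token ids to a fixed length."""
    token_ids = token_ids[:maxlen]
    return [pad_value] * (maxlen - len(token_ids)) + token_ids

encoded, classes = encode_labels(["hi there", "bye", "hi there"])
print(encoded, classes)
print(pad_sequence([4, 8, 15], maxlen=5))
```

The integer labels are what the `sparse_categorical_crossentropy` loss in the training snippet expects, and the fixed-length sequences are what `model.fit` receives as `padded_sequences`.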
abdullah_ali_eb8b6b0c2208
1,888,936
Computer Vision: Transforming the Future
Introduction Computer vision is a field of artificial intelligence that trains computers...
27,673
2024-06-14T19:41:33
https://dev.to/rapidinnovation/computer-vision-transforming-the-future-3k8g
## Introduction Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos and deep learning models, machines can accurately identify and classify objects — and then react to what they “see.” ## Types of Computer Vision Computer vision encompasses several technologies and techniques designed to mimic human vision using artificial systems. Here’s a look at some of the primary types: ## How Computer Vision is Implemented Implementing computer vision involves several steps, starting from data collection to deploying a fully functional system. The process is intricate and requires careful planning and execution to ensure the system is efficient and effective. ## Benefits of Computer Vision Computer vision technology has significantly transformed various industries by automating tasks that traditionally required human vision. This automation leads to increased efficiency and productivity, enhanced security, improved user experience, and valuable data insights and analytics. ## Challenges in Computer Vision Despite its vast potential, computer vision faces several challenges, including data privacy concerns, high resource requirements, accuracy and reliability issues, and ethical and legal implications. ## Future of Computer Vision The future of computer vision is poised for significant technological advancements, expanding application areas, and integration with AI and IoT. These advancements promise to transform various industries and enhance the capabilities of computer vision systems. 
## Real-World Examples of Computer Vision Computer vision is being applied in numerous real-world scenarios across different industries, demonstrating its versatility and impact. Examples include autonomous vehicles, healthcare diagnostics, retail industry applications, and smart city technologies. ## Why Choose Rapid Innovation for Implementation and Development Rapid Innovation is increasingly becoming a preferred choice for businesses looking to stay competitive in the fast-evolving technological landscape. The company's focus on cutting-edge technologies like AI and blockchain ensures that they are not just keeping up with the current trends but are also setting benchmarks in innovative implementations. ## Conclusion Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Its applications are widespread and growing rapidly, making it an indispensable tool in the advancement of AI and automation technologies. The future of computer vision looks promising with several trends likely to shape its trajectory. Drive innovation with intelligent AI and secure blockchain technology! 🌟 Check out how we can help your business grow! [Blockchain App Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa) [AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa) ## URLs * <https://www.rapidinnovation.io/post/what-is-computer-vision-the-complete-guide-for-2024> ## Hashtags #ComputerVision #AI #DeepLearning #AutonomousVehicles #SmartCities
rapidinnovation
1,888,935
Greedy algorithm
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-14T19:41:26
https://dev.to/skomfi/greedy-algorithm-3ghd
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* ## Explainer A greedy algorithm is like a kid in a candy store: it grabs the best candy (local optimum) it sees without thinking ahead, hoping to end up with the most candy (global optimum). It’s used in problems like finding the shortest path or scheduling tasks. ## Additional Context Greedy but effective <!-- Thanks for participating! -->
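To make the "grab the best candy in sight" idea concrete, here is the classic greedy solution to interval scheduling (choosing as many non-overlapping tasks as possible). The locally greedy choice — always take the task that finishes earliest — happens to be globally optimal for this particular problem:

```python
def max_non_overlapping(tasks):
    """Greedy interval scheduling: repeatedly take the task that finishes
    earliest, skipping any task that overlaps the last one chosen."""
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(tasks, key=lambda t: t[1]):
        if start >= last_end:  # no overlap with the previous pick
            chosen.append((start, end))
            last_end = end
    return chosen

tasks = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(max_non_overlapping(tasks))  # → [(1, 4), (5, 7), (8, 11)]
```

Note the caveat in the analogy: for many other problems (general coin change, for instance) the greedy kid does not end up with the most candy, and dynamic programming is needed instead.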
skomfi
1,888,931
Azure Virtual Machine (VM) and Its Deployment
Azure virtual machines These are on-demand, scalable computing resources offered by...
0
2024-06-14T19:36:17
https://dev.to/franklin_onuegbu/azure-virtual-machine-vm-and-its-deployment-3ikh
azure, cloudcomputing, virtualmachine, microsoft
# Azure virtual machines These are on-demand, scalable computing resources offered by Azure. ## Purpose: Azure virtual machines (VMs) give you more control over your computing environment than other choices. You can use them for various purposes. **Development and test:** Quickly create a VM with specific configurations for coding and testing applications. **Cloud applications:** Run applications on VMs in Azure, scaling up or down as needed. **Extended datacenter:** Connect VMs in an Azure virtual network to your organization’s network. ## Considerations Before Creating a VM **Resource Names:** Decide on resource names. **Location:** Choose where to store resources. **VM Size:** Select an appropriate VM size. **Operating System:** Decide which OS the VM will run. **Configuration:** Plan how to configure the VM after it starts. **Related Resources:** Consider other resources the VM needs. ## Billing and Resources - When you create a VM, you also create supporting resources (e.g., virtual network, NIC). - These resources have their own costs, so be aware of them. - For example, a VM’s supporting resources include a virtual network and a Network Interface Card (NIC). ## Deploying a Virtual Machine 1. **Sign in to Azure:** Access the [Azure portal](https://portal.azure.com/). If you don’t have an Azure subscription, create a [free account](https://azure.microsoft.com/en-us/free/?WT.mc_id=A261C142F). 2. **Create a Virtual Machine:** - Search for “virtual machines” in the Azure portal. - Select “Virtual machines” under Services. 
![test](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gvrso0mygedqkrqvho0e.png) - Click “Create” and choose “Azure virtual machine.” ![Img](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/elcy19eysurr9rqpzybg.jpg) ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/igly6a7vmh6c743thwm4.png) - Select an existing Resource Group or create a new one (we will name it Example) ![Naming the RG](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ty6d985qox2uh9olzr3.png) - Provide a name for your VM (e.g., “Example-VM”). ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vqb0jqplugtu88hlxgwf.png) ![Naming](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yr98wkrxhr3ybd0xipn2.png) - Choose the Azure region that is right for you and your customers. Not all VM sizes are available in all regions. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pbt9pqzijcgs6ejurhi9.png) - Availability options & Availability Zone: Select the availability preferences. For this blog we will choose "no infrastructure redundancy required" ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e6k24ysvg8wz013ox3pu.png) - Select “Windows 11 Pro, version 22H2 - x64 Gen2 (free services eligible)” as the image. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0dz03rtgyjv8wcf6nj4.png) - Set up an administrator account (username and password). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qw01qnvvgeamfm8o9ow9.png) - Allow RDP (port 3389). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ep13dc5livebervp2oo.png) - Review and create the VM. ![review](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uhlgd4ujs2pv8pmqddem.png) We still have the Storage, Networking, Management, and Tags sections, but for this blog we will leave them all at their defaults, so no changes are needed, and we will finalize the deployment. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/naia9viwakvf8krd9bgt.png) Now we will connect to the VM we just deployed. ![connect](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iuqf38o6gv839geiq268.png) **Download RDP file** Click on Download RDP file. Open the file; it will prompt a dialog box. Click Connect and enter the admin username and password of the VM. ![Download](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s74glbb3qg61453c5lka.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d1n99fr6tmrk98lpqh4p.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sua0nmkvkx1maov1rznv.png) I’ve walked you through the process of creating a virtual machine in Microsoft Azure. You should have a solid grasp of the settings and configuration options. Virtual machines offer scalable computing resources and a highly flexible approach to deploying and managing your applications in the cloud.
franklin_onuegbu
1,888,929
945. Minimum Increment to Make Array Unique
945. Minimum Increment to Make Array Unique Medium You are given an integer array nums. In one...
27,523
2024-06-14T19:33:41
https://dev.to/mdarifulhaque/945-minimum-increment-to-make-array-unique-4am8
php, leetcode, algorithms, programming
945\. Minimum Increment to Make Array Unique Medium You are given an integer array `nums`. In one move, you can pick an index `i` where `0 <= i < nums.length` and increment `nums[i]` by `1`. Return _the minimum number of moves to make every value in `nums` **unique**_. The test cases are generated so that the answer fits in a 32-bit integer. **Example 1:** - **Input:** nums = [1,2,2] - **Output:** 1 - **Explanation:** After 1 move, the array could be [1, 2, 3]. **Example 2:** - **Input:** nums = [3,2,1,2,1,7] - **Output:** 6 - **Explanation:** After 6 moves, the array could be [3, 4, 1, 2, 5, 7]. It can be shown with 5 or less moves that it is impossible for the array to have all unique values. **Constraints:** - <code>1 <= nums.length <= 10<sup>5</sup></code> - <code>0 <= nums[i] <= 10<sup>5</sup></code> **Solution:** ``` class Solution { /** * @param Integer[] $nums * @return Integer */ function minIncrementForUnique($nums) { $ans = 0; $minAvailable = 0; sort($nums); foreach ($nums as $num) { $ans += max($minAvailable - $num, 0); $minAvailable = max($minAvailable, $num) + 1; } return $ans; } } ``` **Contact Links** - **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)** - **[GitHub](https://github.com/mah-shamim)**
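For readers who don't use PHP, the same greedy sweep ports directly to Python (an equivalent sketch of the solution above, not the author's original):

```python
def min_increment_for_unique(nums):
    """Sort, then sweep: each value must be at least one more than the
    largest value already used, so pay the difference in increments."""
    moves = 0
    min_available = 0
    for num in sorted(nums):
        moves += max(min_available - num, 0)
        min_available = max(min_available, num) + 1
    return moves

print(min_increment_for_unique([1, 2, 2]))           # → 1
print(min_increment_for_unique([3, 2, 1, 2, 1, 7]))  # → 6
```

Sorting costs O(n log n) and the sweep is O(n), matching the PHP version's complexity.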
mdarifulhaque
1,888,904
Spring Data
Spring Data, Spring Framework'ün bir alt projesidir ve çeşitli veri depoları (relational databases,...
0
2024-06-14T19:18:06
https://dev.to/mustafacam/spring-data-473a
Spring Data is a sub-project of the Spring Framework that aims to make it easier to work with a variety of data stores (relational databases, NoSQL, key-value stores, etc.). Spring Data provides a comprehensive infrastructure for building data access layers in a simpler, faster, and more consistent way. The project contains several modules, each optimized for working with a specific type of data store. ### Core Features of Spring Data 1. **Repository Abstraction**: Provides a common interface for data access, with store-specific implementations. 2. **Query Methods**: Derives queries automatically from method names (e.g., `findByLastName`). 3. **Custom Queries**: The ability to define custom queries using JPQL, SQL, or store-specific query languages. 4. **Pagination and Sorting**: Paging and sorting support for database queries. 5. **Auditing**: Automatically tracks data changes (such as creation and update timestamps). 6. **Transaction Management**: Transactional management to preserve data integrity. 7. **Geospatial Support**: The ability to work with geographic data. ### Spring Data Modules Spring Data offers a set of modules that make it easier to work with different data stores. Here are some of the important ones: 1. **Spring Data JPA**: Used for working with the Java Persistence API (JPA). 2. **Spring Data MongoDB**: Used for working with the MongoDB NoSQL database. 3. **Spring Data Redis**: Used for working with the Redis key-value store. 4. **Spring Data Cassandra**: Used for working with the Apache Cassandra NoSQL database. 5. **Spring Data Elasticsearch**: Used for working with the Elasticsearch search and analytics engine. 6. **Spring Data Neo4j**: Used for working with the Neo4j graph database. 7. **Spring Data JDBC**: Used for working directly with JDBC. 
### A Simple Example Using Spring Data Below is an example showing how to set up a database connection and perform data operations using Spring Data JPA. ### Maven Dependencies ```xml <dependencies> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> <scope>runtime</scope> </dependency> </dependencies> ``` ### Application Properties (application.properties) ```properties spring.datasource.url=jdbc:h2:mem:testdb spring.datasource.driverClassName=org.h2.Driver spring.datasource.username=sa spring.datasource.password=password spring.h2.console.enabled=true spring.jpa.hibernate.ddl-auto=update ``` ### Entity Class (Employee.java) ```java import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; @Entity public class Employee { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; private String name; private int age; // Getter and setter methods } ``` ### Repository Interface (EmployeeRepository.java) ```java import org.springframework.data.repository.CrudRepository; import java.util.List; public interface EmployeeRepository extends CrudRepository<Employee, Long> { List<Employee> findByName(String name); } ``` ### Service Class (EmployeeService.java) ```java import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import java.util.List; @Service public class EmployeeService { @Autowired private EmployeeRepository employeeRepository; public Employee saveEmployee(Employee employee) { return 
employeeRepository.save(employee); } public List<Employee> getEmployeesByName(String name) { return employeeRepository.findByName(name); } public Iterable<Employee> getAllEmployees() { return employeeRepository.findAll(); } } ``` ### Controller Class (EmployeeController.java) ```java import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.bind.annotation.*; import java.util.List; @RestController @RequestMapping("/employees") public class EmployeeController { @Autowired private EmployeeService employeeService; @PostMapping public Employee createEmployee(@RequestBody Employee employee) { return employeeService.saveEmployee(employee); } @GetMapping("/{name}") public List<Employee> getEmployeesByName(@PathVariable String name) { return employeeService.getEmployeesByName(name); } @GetMapping public Iterable<Employee> getAllEmployees() { return employeeService.getAllEmployees(); } } ``` ### Spring Boot Application (SpringDataApplication.java) ```java import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class SpringDataApplication { public static void main(String[] args) { SpringApplication.run(SpringDataApplication.class, args); } } ``` ### Notes 1. **Dependencies**: The Spring Boot starters and the H2 database dependency are declared. 2. **Application Properties**: Connection details and other configuration for the H2 database are set. 3. **Entity Class**: The `Employee` class is mapped to a database table using the `@Entity` and `@Id` annotations. 4. **Repository Interface**: The `EmployeeRepository` interface extends Spring Data's `CrudRepository` to provide the basic CRUD operations; a custom query is also defined via the `findByName` method. 5. **Service Class**: The `EmployeeService` class contains the business logic and performs data operations using `EmployeeRepository`. 6. **Controller Class**: The `EmployeeController` class defines the RESTful API endpoints. 7. 
**Spring Boot Application**: The `SpringDataApplication` class starts the Spring Boot application. ### Advantages of Spring Data 1. **Ease of Use**: Perform CRUD operations quickly thanks to the repository interfaces. 2. **Productivity**: Automatic query derivation, plus the ability to write complex queries with JPQL. 3. **Extensibility**: Define customized repository interfaces and methods. 4. **Integration**: Fully compatible with Spring Boot, with auto-configuration and simple setup. 5. **Database Independence**: Work easily with different types of databases. Spring Data simplifies and speeds up data access operations, and especially when used together with Spring Boot it enables rapid application development. Thanks to Spring Data's modular structure, you can develop against different data stores using the same simple and consistent API.
mustafacam
1,886,530
Exploring Microsoft Azure AI Capabilities Using React, Github Actions, Azure Static Apps and Azure AI
Lately, I've been dedicated to learning Microsoft Azure, and I've been using the Microsoft Learn...
0
2024-06-14T19:32:05
https://dev.to/yutee_okon/exploring-microsoft-azure-ai-capabilities-using-react-github-actions-azure-static-apps-and-azure-ai-4d8g
azure, cloud, ai, azuredevops
Lately, I've been dedicated to learning Microsoft Azure, and I've been using the Microsoft Learn platform extensively. I recently came across a challenge that requires utilizing Azure AI computer vision capabilities, specifically Azure AI Vision and Azure OpenAI cognitive services, to integrate image analysis and generation features into a product. To complete this challenge, Microsoft Learn recommends that one should have experience using JavaScript and React or similar frameworks, experience using GitHub and Visual Studio Code, and familiarity with REST APIs. While my grasp of JavaScript is quite solid and I know a little about React, I have not had any real development experience with it, so my React code might look funny; bear with me. By the way, if this challenge sounds interesting to you and you want to attempt it, but you are not confident of your frontend development skills, do not be discouraged, as I was not. In fact, my limited knowledge was even more motivation to challenge myself. You can go through the challenge [here](https://learn.microsoft.com/en-us/training/modules/challenge-project-add-image-analysis-generation-to-app/). Like me, you might not get everything right, but do not hesitate to share your progress. So, following the challenge requirements, I created a new Azure Static Web App resource and then built a CI/CD pipeline to deploy a React application on Azure, using the Azure Static Web Apps service and GitHub Actions. This was my first time trying to automate a deployment pipeline, and of course I struggled with this a bit. Next, I set up a React application and fixed up the GUI, spending a lot of time in the React docs and GitHub Copilot. ![ui built with react](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hjpanxroyw82qqa984u2.png) Up next, I had to implement the image analysis feature. After going back and forth with the documentation and some code tweaking, I arrived at this. 
![image analysis code using azure computer vision](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wu95cpcpq991z6tkfgow.png) The code is quite straightforward, but just a little explanation: __Lines 7 - 11:__ Define useful variables. __Lines 12 - 15:__ Create an array that holds the query parameters for the analysis feedback that is required from the Azure AI engine. Here, I need just the caption for the image, but there are many more options that can be requested. __Lines 17 - 26:__ Declare an asynchronous function. This function takes in the image URL to be analysed as an argument, and then makes a request to the Azure AI engine with the image URL, the features, and the content type. Finally, the function returns the result from the API call for processing and display on my user interface. My version of the challenge has a fully functional image analysis capability, but I was unable to complete the image generation functionality because Azure OpenAI is not available in my region. To test the app, clone the repository and in the project directory run: ##### `npm start` This will launch the React app. The repository can be found [here](https://github.com/yutee/cloud-ai). If you find it interesting, do not hesitate to leave a star and perhaps contribute to it. There are several other improvements that could be made; these improvements include: - Image generation functionality using OpenAI/Azure OpenAI - Better security measures within the codebase - Error handling - Client-side authentication - User interface micro-interactions - Add possible improvements and features to this README. Any others you can think of are welcome. Happy hacking!
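For readers who want the request shape without reading the screenshot, here is a small helper that assembles an equivalent call, written in Python for brevity. The endpoint path and `api-version` are assumptions based on the Azure AI Vision image-analysis REST API at the time of writing — verify them against your own resource's documentation — and the endpoint/key values are placeholders:

```python
def build_analyze_request(endpoint, key, image_url, features=("caption",)):
    """Assemble the URL, headers, and JSON body for an Azure AI Vision
    image-analysis call. Path and api-version are assumptions -- check
    them against your resource's documentation."""
    url = (
        f"{endpoint.rstrip('/')}/computervision/imageanalysis:analyze"
        f"?api-version=2023-10-01&features={','.join(features)}"
    )
    headers = {
        "Ocp-Apim-Subscription-Key": key,   # your resource key
        "Content-Type": "application/json",
    }
    body = {"url": image_url}  # the image URL to analyse
    return url, headers, body

# Placeholder endpoint and key; send with requests.post(url, headers=headers, json=body)
url, headers, body = build_analyze_request(
    "https://example.cognitiveservices.azure.com", "YOUR-KEY",
    "https://example.com/photo.jpg",
)
print(url)
```

The `features` tuple plays the same role as the query-parameter array described in lines 12-15 above: add more feature names to request more than the caption.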
yutee_okon
1,888,927
First program/CCademy/CS101
CookBook.py is a program that I wrote (my first program ever written) that takes user input and...
0
2024-06-14T19:28:55
https://dev.to/murphea/first-programccademycs101-2pj4
CookBook.py is a program that I wrote (my first program ever written) that takes user input and returns recipes (ingredient lists) with any allergies the user may have factored into the returned recipes. I wanted to create a program that I could use daily that could automatically factor allergies, or foods I want to avoid, into recipes that would be possible for me to make or eat. The program uses if/elif/else conditions to run code that combs through a list of recipes and their corresponding ingredients. https://github.com/Murphea/project-1.git ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/udp6dg4x8jggq7adp4bi.png) In honest conclusion, this program took me longer to write than I had thought it would. I found myself getting discouraged by the project and struggled to motivate myself to keep plugging away at it. But I always came back to my desire to write, learn, and build something that, perfect or not, I can be proud of. And this project allowed me a space to accomplish that and grow my desire to keep learning.
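The allergy filtering described above can be sketched roughly like this — the recipe data and function name are illustrative, not taken from the CookBook.py repository:

```python
# Illustrative recipe data -- not the repository's actual dataset.
RECIPES = {
    "pancakes": ["flour", "milk", "egg", "butter"],
    "omelette": ["egg", "cheese", "butter"],
    "fruit salad": ["apple", "banana", "orange"],
}

def safe_recipes(allergies):
    """Return only the recipes whose ingredients avoid every listed allergen."""
    allergens = {a.strip().lower() for a in allergies}
    return {
        name: ingredients
        for name, ingredients in RECIPES.items()
        if not allergens.intersection(ingredients)
    }

print(safe_recipes(["egg"]))  # only the fruit salad survives
```

A dict comprehension with a set intersection replaces a long chain of if/elif/else checks, which also makes it easy to add new recipes without touching the filtering logic.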
murphea
1,888,926
Top-Tier Software Solutions for Business Growth!
Code.Store: Elevate Your Business with Curated Software Solutions In the ever-evolving digital...
0
2024-06-14T19:26:15
https://dev.to/code-store/top-tier-software-solutions-for-business-growth-5gei
tooling, devops, development
[Code.Store:](https://code.store) Elevate Your Business with Curated Software Solutions In the ever-evolving digital landscape, businesses require advanced tools to stay competitive and grow. [Code.Store](https://code.store) is your premier destination for meticulously curated software solutions tailored to meet the diverse needs of modern enterprises. Our offerings span e-commerce optimization, robust cybersecurity, advanced artificial intelligence, comprehensive data analytics, and scalable cloud computing, all designed to empower your business with top-tier tools. E-Commerce Optimization: Enhance Your Online Presence Navigate the competitive e-commerce landscape with ease. [Code.Store](https://code.store) provides a range of solutions to streamline operations and elevate customer experiences. Our platform offers seamless payment processing, efficient inventory management, sophisticated CRM systems, and dynamic marketing automation tools, ensuring your online store operates smoothly and efficiently. Cybersecurity: Protect Your Digital Assets In an era of sophisticated cyber threats, safeguarding your digital assets is crucial. [Code.Store](https://code.store) delivers robust cybersecurity solutions, including advanced threat detection, endpoint protection, encryption tools, and secure access management. Integrate these measures to protect your data, systems, and networks from cyberattacks and data breaches. Artificial Intelligence: Achieve New Efficiency Levels Artificial intelligence is transforming business operations, enabling smarter decision-making and enhanced automation. [Code.Store](https://code.store) features a variety of AI-driven solutions, such as machine learning platforms, natural language processing tools, predictive analytics, and AI-powered customer service bots. These technologies help businesses analyze data effectively, personalize customer interactions, and automate routine tasks. 
Data Analytics: Convert Data into Actionable Insights Effective data management is vital for informed decision-making. [Code.Store](https://code.store) offers comprehensive data analytics solutions to help businesses collect, analyze, and visualize data. Our tools include data integration platforms, business intelligence software, and advanced analytics solutions, providing actionable insights that drive strategic initiatives. Cloud Computing: Seamlessly Scale Your Operations Embrace the power of cloud computing to revolutionize your business operations. [Code.Store](https://code.store) provides access to leading cloud platforms and services, including cloud storage, hosting, and software applications. These solutions enable seamless scaling, enhanced collaboration, and reduced IT costs. Commitment to Quality and Innovation At [Code.Store](https://code.store), quality assurance is our priority. Every software solution undergoes rigorous evaluation to meet high standards of performance, security, and reliability. We continuously update our marketplace with the latest technologies, ensuring you always have access to cutting-edge tools. Join [Code.Store](https://code.store) Today [Code.Store](https://code.store) is more than a marketplace; it’s a partner in your digital transformation journey. Access the best software solutions to enhance operations, protect assets, and drive growth. Visit Code.Store to explore our offerings and start your journey towards digital excellence today. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6p7rz5f229nsdqdd8ve.JPG)
code-store
1,888,925
VIRTUAL MACHINE-VM PROVISION ON AZURE PORTAL
WHAT ARE VIRTUAL MACHINES? The easiest way to think of a virtual machine (VM) would be as a computer...
0
2024-06-14T19:26:01
https://dev.to/presh1/virtual-machine-vm-provision-on-azure-portal-2d7e
signintoazure, createvirtualmachine, connecttovirtualmachine
**WHAT ARE VIRTUAL MACHINES?** The easiest way to think of a virtual machine (VM) would be as a computer within a computer. Current technology and processing power now allow for easy creation of virtual computing environments within a “host on-prem” computer. VMs may be more sophisticated or less sophisticated compared to the host computer depending on configuration demands. VMs are not physical but virtual. To create a Virtual Machine on an Azure portal, the following are the steps. **SIGN IN TO AZURE PORTAL** Its totally impossible to provision a VM on an Azure portal without logging in to the portal. You cant log in to the portal without having a an Azure Account/login detail. Azure account must be created if not created earlier and login details provided at the portal gateway as below ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jg93tgil9dp3t2o4fzms.jpeg) **CREATE A VIRTUAL MACHINE** **1.Enter The Virtual Machine:** After gaining access to the Azure portal, the next step is to get a way to start up the creation a the VM. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dsfufhfm3667izabg5ap.jpeg) There are 3 ways to start up the VM creation * By typing "Virtual Machine" in the search bar as in 1 in the diagram above * By clicking on "Create Resource" as in 2 in the image above * By clicking on "Virtual Machine " icon as in 3 in the image above **2. In the Virtual Machines Page,** select Create (arrow 1) and then Azure virtual machine (arrow 2), then Create a virtual machine page opens. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8v30it5urrtwdclomlp.jpeg) **3. Under Instance Details:** Enter the desired name for the Virtual machine name eg PreshVM (arrow 4 below), the desired Regions by proximity (arrow 5), Availability Option (arrow 6), Security Type (arrow 7), desired image of the windows (arrow 8), architecture of the virtual machine (arrow 9). 
All these basic configurations depend on your requirements; if in doubt, the others can be left as defaults. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l62urchy070iri1219bq.jpeg) **4. Under Administrator Account:** Provide a customized username, such as PreshVM, and a password. The password must be at least 12 characters long and meet the defined complexity requirements. Confirm the password by typing the same password again in the confirmation field. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wji604r52hznryxeqvb2.jpeg) **5. Under Inbound Port Rules:** As shown in the diagram above, choose Allow selected ports to control access to the VM, then select RDP (3389) and HTTP (80) from the drop-down. These set the protocols and port rules for incoming traffic. **6. Licensing:** Check the box to confirm Windows licensing eligibility. Note that steps 1 to 6 cover the basic configuration of the VM. All other options can be left at their default settings. Click NEXT to look through the remaining configuration options under **DISK, NETWORKING, MANAGEMENT, MONITORING, ADVANCED, TAG** for any changes you may want; if none, the default settings are a reasonable configuration too. **7. Review And Create:** Select this button at the bottom of the page (the last submenu after TAG). It reviews all the selections and options you have made and displays the cost implications while it validates the configuration. The system will then ask for your consent to CREATE the VM, as below ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s63wjraae6gzqedp3bv7.jpeg) **8. Create:** On selecting the CREATE button, the system starts the deployment of the virtual machine. It will take a couple of minutes to complete. 
At this stage, the other resources associated with the VM are created alongside it, and the list is displayed ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g2so8gd8nh33hz9srshv.jpeg) On completing this task, the image below is displayed, and you can be sure that your deployment is complete. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5al1lydpqsixkuvedurp.jpeg) AFTER DEPLOYMENT, SELECT **GO TO RESOURCE**. This allows you to check the configuration of the virtual machine or any resource you just created. **CONNECTING TO VIRTUAL MACHINE** Create a remote desktop connection to the virtual machine. These directions tell you how to connect to your VM from a Windows computer. 1. On the overview page for your virtual machine, select Connect ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfjwrphhb5995gqubf71.jpg) 2. Click on DOWNLOAD RDP FILE 3. Select "Keep" when the browser prompts you about the download. Open the downloaded RDP file and click Connect when prompted. In the Windows Security window, select More choices and then Use a different account. Type the username as localhost\username, enter the password you created for the virtual machine, and then click OK. You may receive a certificate warning during the sign-in process. Click Yes or Continue to create the connection.
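For reference, the same deployment can also be scripted with the Azure CLI instead of the portal. This is only a rough sketch: the resource group name, location, admin username, and password placeholder below are illustrative, not taken from the walkthrough above.

```shell
# Sign in to Azure (opens a browser for authentication)
az login

# Create a resource group to hold the VM and its associated resources
az group create --name PreshRG --location westeurope

# Create the Windows VM with an admin username and password
az vm create \
  --resource-group PreshRG \
  --name PreshVM \
  --image Win2022Datacenter \
  --admin-username preshadmin \
  --admin-password '<at-least-12-character-password>'

# Allow the inbound ports selected in the walkthrough: RDP (3389) and HTTP (80)
az vm open-port --resource-group PreshRG --name PreshVM --port 3389 --priority 1010
az vm open-port --resource-group PreshRG --name PreshVM --port 80 --priority 1020
```

After `az vm create` completes, it prints the public IP address you can use for the remote desktop connection.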
presh1
1,886,190
Multi-Region Async Table Broadcast with YugabyteDB xCluster 1-to-N replication
Using PostgreSQL, you can add asynchronous read replicas to speed up reads from regions far from the...
0
2024-06-14T19:19:01
https://dev.to/yugabyte/multi-region-async-table-broadcast-with-yugabytedb-xcluster-1-to-n-replication-3d6f
yugabytedb, distributed, postgres, database
Using **PostgreSQL**, you can add **asynchronous read replicas** to speed up reads from regions far from the primary one (the one handling reads and writes) and without affecting the primary's latency. This works when the reads can tolerate some staleness. If communication with the primary fails, you can choose to continue providing those reads even if the staleness increases. **YugabyteDB** distributes reads and writes across the cluster by sharding tables and indexes into tablets, which are **Raft groups**, based on their respective primary key and indexed columns. You may decide to place the Raft leaders in a specific region to reduce the latency for the main user's region and have all Raft followers in other regions. When reading from one follower, it is not guaranteed to get the latest changes because it needs a **quorum** to get a **consensus** on the current state. However, when setting a read-time in the near past, typically 15 or 30 seconds, for a read-only transaction, one follower knows if it received all writes as of this point in time. This means that we can **read from a follower**, which may be closer, with reduced latency, when accepting a defined staleness. However, having followers in remote regions may impact the write latency, as the leader waits for the acknowledgment of the majority of followers. When your goal is to allow reads with some staleness, asynchronous replication is sufficient and does not impact the primary. YugabyteDB offers three possibilities for read-only transactions that accept some staleness. - **Raft followers** in the primary cluster of the YugabyteDB Universe: They participate in the quorum. A network partition may affect the latency for the write workload and could impact availability in the event of additional failures. - **Read Replica** in additional nodes of the same Universe: The YugabyteDB cluster can be extended with extra Raft peers in a remote region. 
These peers receive writes asynchronously and do not participate in the quorum. They serve as a complete data copy to reduce read latency from a remote region. However, they need to be connected to the cluster and cannot be used for disaster recovery. - **xCluster replica** in another YugabyteDB universe: On a per-table basis, the write-ahead log (WAL) can be fetched by another cluster to maintain an asynchronous copy. This has no impact on the primary cluster and remains available in case of a network partition. It can be used for disaster recovery or to maintain availability for read workloads as long as the increased staleness is acceptable. The xCluster replication is ideal for replicating a single table across multiple regions without impacting the primary database's availability. It also enhances read performance and availability. Consider the following scenario: You have a basic table that stores messages, similar to a social media timeline. You require uninterrupted availability and fast performance for all users globally. It's understood that users may not see the most recent message immediately because they perceive it more as a chat system, with send/receive, rather than a traditional database. ## Primary cluster For this lab, I created a simple cluster using Docker. 
```sh # create a network docker network create yb # start the first node (RF=1) in lab.eu.zone1 docker run --name eu1 --network yb --hostname eu1 -d \ yugabytedb/yugabyte \ yugabyted start --cloud_location=lab.eu.zone1 \ --fault_tolerance=zone --background=false until docker exec eu1 postgres/bin/pg_isready -h eu1 ; do sleep 1 ; done | uniq # start two more nodes (RF=3) in lab.eu.zone2 and lab.eu.zone3 for i in {2..3} do docker run --name eu$i --network yb --hostname eu$i -d \ yugabytedb/yugabyte \ yugabyted start --cloud_location=lab.eu.zone$i \ --join eu1 --fault_tolerance=zone --background=false until docker exec eu$i postgres/bin/pg_isready -h eu$i ; do sleep 1 ; done | uniq done ``` I have designed a table to store messages along with their timestamps. ```sh docker exec eu1 ysqlsh -h eu1 -c " create table messages ( primary key (tim, seq) , tim timestamptz default now() , seq bigint generated always as identity ( cache 1000 ) , message text ) " ``` ## Get information from the primary universe To set up the replication, I will need the source cluster identifier, the table definition to create on the target, and the identifiers of the tables to replicate. 
```sh # UUID of the primary cluster primary=$( docker exec eu1 yb-admin --master_addresses eu1:7100 get_universe_config | jq -r '.clusterUuid' | tee /dev/stderr ) # DDL to create the table on the target ddl=$( docker exec eu1 postgres/bin/ysql_dump -h eu1 -t messages | tee /dev/stderr ) ## Table IDs to replicate with xCluster oids=$( docker exec eu1 yb-admin --master_addresses eu1:7100 \ list_tables include_db_type include_table_id include_table_type | grep -E '^ysql.yugabyte.messages .* table$' | tee /dev/stderr | awk '{print $(NF-1)}' | tee /dev/stderr | paste -sd, ; ) ``` ## Asynchronous Replicas for xCluster For each region (US, AU, JP), I need to start a new cluster with `yugabyted create`, get its cluster identifier with `yb-admin get_universe_config`, create the table with the DDL extracted from the primary, and then set up the replication using `yb-admin setup_universe_replication`. I've configured the nodes to listen on all interfaces using the `--advertise_address=0.0.0.0` option. This is easier for my tests: when I disconnect one container, it continues to run because the `yb-master` and `yb-servers` are still receiving their heartbeats through the localhost interface but they are isolated from other containers. 
```sh for replica in us au jp do # start the cluster docker run --name $replica --network yb --hostname $replica -d \ yugabytedb/yugabyte \ yugabyted start --advertise_address=0.0.0.0 \ --cloud_location=lab.$replica.zone1 \ --fault_tolerance=zone --background=false tgt=$( docker exec $replica yb-admin --master_addresses localhost:7100 \ get_universe_config | jq -r '.clusterUuid' | tee /dev/stderr ) until docker exec $replica postgres/bin/pg_isready -h localhost ; do sleep 1 ; done | uniq # create the table echo "$ddl" | docker exec $replica ysqlsh -h localhost -e # setup the replication docker exec $replica yb-admin --master_addresses localhost:7100 \ setup_universe_replication $primary eu1 "$oids" # show replication status sleep 5 docker exec $replica yb-admin --master_addresses localhost:7100 \ get_replication_status done ``` You may decide to add more nodes to make the replicas resilient to one failure. The replication target is a regular cluster. You can also replicate to the same cluster if this makes sense for you. ## Testing the xCluster configuration I can insert new messages on the primary: ```sql # on the primary docker exec eu1 ysqlsh -h eu1 -ec "insert into messages(message) values ('hello')" docker exec eu2 ysqlsh -h eu2 -ec "select * from messages order by 1,2,3" ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kv8h5exyqyu6q4hja4yp.png) The messages are visible on each read replica ```sql # on the replicas docker exec us ysqlsh -h localhost -ec "select * from messages order by 1,2,3" docker exec au ysqlsh -h localhost -ec "select * from messages order by 1,2,3" docker exec jp ysqlsh -h localhost -ec "select * from messages order by 1,2,3" ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/efryhwj1xhmv5mi86qv0.png) If one region is isolated from the regions, the primary cluster is still available for reads and writes, synchronizing to the available regions. 
From the isolated region, reads are still available but stale. ```sh docker network disconnect yb jp docker exec eu1 ysqlsh -h eu1 -ec "insert into messages(message) values ('hello')" docker exec au ysqlsh -h localhost -ec "select * from messages order by 1,2,3" docker exec jp ysqlsh -h localhost -ec "select * from messages order by 1,2,3" ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w0fbt1og6a0p285sl302.png) When the network is back, the replication continues: ```sh docker network connect yb jp docker exec jp ysqlsh -h localhost -ec "select * from messages order by 1,2,3" ``` After reconnecting, the gap will be resolved after a few seconds: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t4bmsej7bpxcbegis0ok.png) ## Comparing with Read Replicas To show the difference, I have set up read replicas in all regions. ```sh for replica in us au jp do # start the read replica docker run --name rr-$replica --network yb --hostname rr-$replica -d \ yugabytedb/yugabyte \ yugabyted start --read_replica --join eu1.yb \ --cloud_location=lab.$replica.zone2 \ --fault_tolerance=zone --background=false done # configure for read replica docker exec eu1 \ yugabyted configure_read_replica new ``` As it is still the same YugabyteDB universe, and replicating all tables, I can see the read replicas with the same number of tablet peers, but no leaders ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nvq57y7lkvz2t51xnm1q.png) The detail of tablets shows an additional READ_REPLICA role in the Raft groups. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9iupuefdk3nu9ev27lqv.png) This method is simpler. You can connect to the read replica nodes to perform both reads and writes. These actions will be directed to the primary nodes, unless you specify the transaction as read-only and permit follower reads, so that they can be served locally. 
However, even if it can read locally, this approach only functions when there is network connectivity to the primary nodes. If there is a network partition, the primary nodes are not affected, which is the main advantage compared to a stretched cluster where followers participate in the quorum, but the read replicas become inaccessible. It is also interesting to look at the timestamps of the write operations. I've run the following to dump the WAL that contained my 'hello' message: ```sh for wal in $(grep -r hello /root/var/data/yb-data/tserver) do /home/yugabyte/bin/log-dump $wal done 2>/dev/null | grep -b1 WRITE_OP ``` In `eu1`, `eu2`, `eu3`, `rr-jp`, `rr-au`, and `rr-us` I see all the same timestamps because it is the same universe with its Hybrid Logical Clock: ``` 3017- hybrid_time: HT{ days: 19891 time: 08:00:26.018685 } 3072: op_type: WRITE_OP 3092- size: 202 -- 3801- hybrid_time: HT{ days: 19891 time: 08:01:38.036026 } 3856: op_type: WRITE_OP 3876- size: 202 ``` However, xCluster is more like logical replication and the timestamp is the target cluster's HLC when the replication occurred: In `jp`: ``` 4593- hybrid_time: HT{ days: 19891 time: 08:00:26.034767 } 4648: op_type: WRITE_OP 4668- size: 161 -- 5244- hybrid_time: HT{ days: 19891 time: 08:01:48.792729 } 5299: op_type: WRITE_OP 5319- size: 161 ``` In `au`: ``` 4593- hybrid_time: HT{ days: 19891 time: 08:00:26.098103 } 4648: op_type: WRITE_OP 4668- size: 161 -- 5244- hybrid_time: HT{ days: 19891 time: 08:01:38.086656 } 5299: op_type: WRITE_OP 5319- size: 161 ``` In `us`: ``` 4593- hybrid_time: HT{ days: 19891 time: 08:00:26.094410 } 4648: op_type: WRITE_OP 4668- size: 161 -- 5244- hybrid_time: HT{ days: 19891 time: 08:01:38.081754 } 5299: op_type: WRITE_OP 5319- size: 161 ``` The timestamps are all different and we can see that `jp` was disconnected about 10 seconds. 
## How xCluster replication works
When aiming for higher availability by accepting some staleness, similar to eventual consistency, xCluster replication is the appropriate method. However, it's important to understand how it functions. xCluster replication involves fetching changes from the primary database and applying them to the local table in a different database. Therefore, only read-only operations should be performed on it. Modifying the table can cause the replica to diverge from the primary unless two-way replication has been specified. For instance, deleting all rows in the replica will not synchronize them again, unless there are modifications in the primary that replicate them once more. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3iylx57dxk5r9xrji9u7.png) It's important to note that although it looks like traditional logical replication, xCluster bypasses the SQL layer on the replica. This means that triggers and constraints are not fired or checked, as the data was already validated on the source. Additionally, the indexes are replicated rather than maintained by the replica. This is in contrast to PostgreSQL logical replication, which has scalability limitations and often requires dropping indexes and foreign keys to catch up after initialization on large tables. Thanks to its two-layered architecture, YugabyteDB combines the benefits of physical replication (bypassing the SQL layer on the destination) and logical replication (the ability to replicate a single table, as shown in this example) and provides multiple modes: synchronous, to a quorum, or asynchronous, by pushing the changes to read replicas, or pulling them from xCluster replicas.
franckpachot
1,888,903
Why I (mostly) never write `const fn = () => {}`
A short article here on a subjective/objective manner. Compare this: const justAConstant =...
0
2024-06-14T19:16:21
https://dev.to/latobibor/why-i-mostly-never-write-const-fn--3j2i
javascript, codeux
A short article here on a subjective/objective matter. Compare this:

```javascript
const justAConstant = 1244430;
const conditionalValue = justAConstant > 13 ? 'yes' : 'no';
const updateResult = updateInDb(conditionalValue);

const updateInDb = value => fetch.post('/url', JSON.stringify(value));
```

with this:

```javascript
const justAConstant = 1244430;
const conditionalValue = justAConstant > 13 ? 'yes' : 'no';
const updateResult = updateInDb(conditionalValue);

function updateInDb(value) {
  return fetch.post('/url', JSON.stringify(value));
}
```

Let's start with the trivial matter: while `function` declarations are hoisted and you can place them wherever you want (I like to define functions that solve partial problems under the function that solves the main problem), the order of constant declarations matters. The first example will throw this error: `ReferenceError: can't access lexical declaration 'updateInDb' before initialization`. This is, as I said, trivial and very easy to fix. However, there's something more appealing to me in the second version: the separation of `values` and `functions that operate over values`. A quick glance at the second example will immediately show me where I can find the business logic, because the two look different.

> In UX design we plan for the ease of traversing the UI: things that are different must look different.

While your preference for **named functions** vs. **anonymous functions saved as constants** is subjective, it is good to know that the categorization of syntactical elements can be objectively confusing or clear.
latobibor
1,888,894
Transaction against concurrency
Let's build a small app. Say we want to register users; for the stack, Express.js +...
0
2024-06-14T19:13:11
https://dev.to/zayniddindev/transaction-against-concurrency-18hf
concurrency, transactions, postgres
Let's build a small app. Say we want to register users; for the stack, Express.js + PostgreSQL:

```js
const express = require("express");
const pg = require("pg").Pool;

const app = express();
app.use(express.json());

const dbClient = new pg({ host: "localhost", user: "root", password: "1234", database: "test" });

async function checkEmailExist(email) {
  const users = await dbClient.query(
    `SELECT id FROM users WHERE email = $1 LIMIT 1`,
    [email]
  );
  return users.rowCount > 0;
}

async function registerUser(email) {
  return await dbClient.query(
    `INSERT INTO users (email) VALUES ($1) RETURNING id`,
    [email]
  );
}

app.post("/register", async (req, res) => {
  // get email from request body
  const { email } = req.body;

  // check if email is not taken
  const emailExist = await checkEmailExist(email);
  if (emailExist) return res.status(400).json({ error: "Email already exist" });

  // register new user
  await registerUser(email);

  return res.status(201).json({ data: "User registered successfully!" });
});

async function main() {
  try {
    await dbClient.connect();
    app.listen(3000, () => console.log("OLD server is running on port 3000"));
  } catch (error) {
    console.error(error);
  }
}

main();
```

At first everything looks good. The app launched and thousands of users signed up. Then you take a look at the database and notice that some emails are duplicated. WTH! At this point _concurrency_ comes to mind, and you try to reproduce the bug (the script below). That is, suppose the `checkEmailExist` function takes some amount of time N to run, and other requests arriving at the /register endpoint within that interval also get executed. Concurrency! 
And sure enough, you have duplicates:

```js
const axios = require("axios");

async function register() {
  return await axios.post("http://localhost:3000/register", {
    email: "johndoe@gmail.com",
  });
}

(async () => {
  for (let i = 0; i < 5; i++) {
    register()
      .then((res) => console.log(res.data))
      .catch((err) => console.error(err.response?.data));
  }
})();

// Surprise! We have 5 johndoe@gmail.com's in db
```

How do we fix it? **Transactions**! We will need a _wrapper_ function:

```js
async function $transaction(f) {
  try {
    // start db transaction
    await dbClient.query("BEGIN");
    await dbClient.query("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE");

    // execute main function
    const result = await f();

    // commit db transaction
    await dbClient.query("COMMIT");

    return result;
  } catch (error) {
    await dbClient.query("ROLLBACK");
    throw error;
  }
}
```

The `checkEmailExist` and `registerUser` functions stay the same; we change the route handler slightly:

```js
app.post("/register", async (req, res) => {
  // get email from request body
  const { email } = req.body;

  try {
    await $transaction(async () => {
      // check if email is not taken
      const emailExist = await checkEmailExist(email);
      if (emailExist) {
        throw "Email already exist";
      }

      // register new user
      await registerUser(email);
    });

    return res.json({ data: "User registered successfully!" });
  } catch (error) {
    console.error(error);
    return res.json({ error });
  }
});
```

And just like that, the problem is solved! How do I test it? _Simulate_ the `checkEmailExist` function running slowly, for example by adding the following:

```js
// simulate delay
await new Promise((resolve) => setTimeout(resolve, 1000));
```

[Project code](https://github.com/zayniddindev/transaction-against-concurrency) | [Telegram channel](https://t.me/scriptjs)
zayniddindev
1,888,902
Matthew Danchak on the Importance of Self-Care for Mental Health
In today's fast-paced world, the concept of self-care has become more important than ever. With the...
0
2024-06-14T19:13:01
https://dev.to/matthewdanchak/matthew-danchak-on-the-importance-of-self-care-for-mental-health-28oj
selfcare, mentalhealthatters, wellnessjourney, mindfulness
In today's fast-paced world, the concept of self-care has become more important than ever. With the constant demands of work, family, and social obligations, it's easy to neglect our own well-being. [Matthew Danchak](https://www.f6s.com/member/matthew-danchak), a prominent mental health advocate, underscores the crucial role self-care plays in maintaining mental health. In this blog, we'll explore his insights on self-care and why it's essential for our mental well-being. ## Understanding Self-Care Self-care isn't just about pampering yourself with spa days and indulgent treats, although those can certainly be part of it. According to Matthew Danchak, self-care encompasses a broad range of activities that help you maintain your physical, emotional, and mental health. It's about making choices that prioritize your well-being and prevent burnout. ## The Connection Between Self-Care and Mental Health [Matthew Danchak](https://wikialpha.org/wiki/Matthew_Danchak) emphasizes that self-care is not a luxury but a necessity. It's a proactive approach to managing stress and preventing mental health issues. When we neglect self-care, we can become overwhelmed, leading to anxiety, depression, and other mental health problems. By taking time for ourselves, we can recharge and build resilience against life's challenges. ## Practical Self-Care Strategies 1. **Establish a Routine**: One of the simplest yet most effective self-care practices is to establish a daily routine. Danchak suggests starting and ending your day with activities that promote relaxation and well-being. This could include meditation, journaling, or simply enjoying a quiet cup of tea. 2. **Stay Active**: Physical activity is a powerful tool for maintaining mental health. Regular exercise releases endorphins, which are natural mood lifters. Danchak advises finding an activity you enjoy, whether it's yoga, walking, or dancing, and making it a regular part of your life. 3. 
**Prioritize Sleep**: Quality sleep is fundamental to mental health. Without adequate rest, our brains can't function properly, leading to increased stress and emotional instability. Danchak recommends creating a sleep-friendly environment by keeping your bedroom cool, dark, and quiet, and establishing a regular sleep schedule. 4. **Connect with Others**: Social connections are vital for our mental well-being. Spending time with loved ones, joining a club, or participating in community activities can provide a sense of belonging and support. Danchak highlights the importance of surrounding yourself with positive influences and reaching out for help when needed. 5. **Set Boundaries**: Learning to say no is a critical aspect of self-care. Matthew Danchak points out that overcommitting can lead to stress and burnout. It's important to recognize your limits and set boundaries to protect your mental health. This might mean declining invitations or delegating tasks to others. ## Mindfulness and Mental Health Mindfulness, the practice of being present in the moment, is another key component of self-care. Matthew Danchak advocates for incorporating mindfulness into daily life as a way to reduce stress and enhance emotional regulation. Simple practices like mindful breathing, meditation, or even mindful eating can help you stay grounded and focused. ## Overcoming Barriers to Self-Care Despite its importance, many people struggle to prioritize self-care. Danchak acknowledges that societal pressures and personal guilt can make self-care seem selfish or indulgent. However, he stresses that taking care of yourself is not only beneficial to you but also to those around you. When you're mentally and emotionally healthy, you're better equipped to support others. ## Self-Care as a Long-Term Commitment Finally, Matthew Danchak reminds us that self-care is a lifelong journey, not a one-time fix. It's about consistently making choices that enhance your well-being. 
This might involve regular check-ins with yourself to assess your needs and adjust your self-care practices accordingly. ## Conclusion Matthew Danchak's insights on self-care highlight its critical role in maintaining mental health. By incorporating practical self-care strategies into our daily lives, we can build resilience, reduce stress, and improve our overall well-being. Remember, self-care is not a luxury—it's a necessity. Prioritize yourself, and you'll find that you're better able to navigate life's challenges with strength and grace.
matthewdanchak
1,888,899
Golang - Unmarshal a JSON message with a multi-type property
Go provides a convenient library that makes parsing a JSON message simple. Just call the...
0
2024-06-14T19:04:12
https://dev.to/cjr29/golang-unmarshal-a-json-message-with-a-multi-type-property-115i
go, json, unmarshal
Go provides a convenient library that makes parsing a JSON message simple. Just call `json.Unmarshal(msg, &parsedStructure)`. It takes the JSON msg and parses out the individual properties defined by the parsedStructure. It works perfectly unless you have a property that is not well-behaved in the message. For example, in my case, the message from a weather station can be parsed into a structure called WeatherData. The one glitch in the parsing is that a property in the message does not conform to a single data type. Specifically, the Channel value can appear as a letter, e.g. "A", or as a numeral, e.g. 1. Since json.Unmarshal() uses the data types of the properties of the destination structure to determine how to parse the message, it can only handle one data type. So, if I want the final result of the parsed message to include a string value for the Channel, it works fine until it encounters a message with a field like this: `"channel": 1`. Since it is expecting a string value for "channel" in the WeatherData structure, it fails when it sees the numeral 1 instead of "A". How do we deal with exceptions like that? The Go JSON library includes an interface with an UnmarshalJSON() method that allows you to write a dedicated function to handle the case of a special type. Unfortunately, it can only be applied to a type as a method. To make it work, I created a special structure called CustomChannel that has only one property, Channel, as a string. Then, I wrote a new UnmarshalJSON() function per the interface that handles Channel both as a string and as an int. The json.Unmarshal() function invokes the CustomChannel method when it gets to the Channel property rather than trying to parse it simply as a string. When my custom UnmarshalJSON function returns, it has converted an integer value to a string where needed, or passed the string through unchanged if that was what the original message contained. 
Since I want to work with a string value for Channel, I created a separate structure, WeatherDataRaw, for the raw parsed message with the CustomChannel structure, and a final structure WeatherData that I will work with in the program to write to a file or a database. Code snippets of the structures and message handling code are shown below. You can see that the function handling an individual message calls json.Unmarshal(), and the interface method is then invoked to handle the CustomChannel property. A helper function retrieves the string value of Channel from the WeatherDataRaw structure once it is processed so it can be stored in the final WeatherData structure. 
```
var (
	incoming WeatherDataRaw
	outgoing WeatherData
)

type WeatherDataRaw struct {
	Time          string        `json:"time"`          // "2024-06-11 10:33:52"
	Model         string        `json:"model"`         // "Acurite-5n1"
	Message_type  int           `json:"message_type"`  // 56
	Id            int           `json:"id"`            // 1997
	Channel       CustomChannel `json:"channel"`       // "A" or 1
	Sequence_num  int           `json:"sequence_num"`  // 0
	Battery_ok    int           `json:"battery_ok"`    // 1
	Wind_avg_mi_h float64       `json:"wind_avg_mi_h"` // 4.73634
	Temperature_F float64       `json:"temperature_F"` // 69.4
	Humidity      float64       `json:"humidity"`      // Can appear as integer or a decimal value
	Mic           string        `json:"mic"`           // "CHECKSUM"
}

type CustomChannel struct {
	Channel string
}

func (cc *CustomChannel) channel() string {
	return cc.Channel
}

type WeatherData struct {
	Time          string  `json:"time"`          // "2024-06-11 10:33:52"
	Model         string  `json:"model"`         // "Acurite-5n1"
	Message_type  int     `json:"message_type"`  // 56
	Id            int     `json:"id"`            // 1997
	Channel       string  `json:"channel"`       // "A" or 1
	Sequence_num  int     `json:"sequence_num"`  // 0
	Battery_ok    int     `json:"battery_ok"`    // 1
	Wind_avg_mi_h float64 `json:"wind_avg_mi_h"` // 4.73634
	Temperature_F float64 `json:"temperature_F"` // 69.4
	Humidity      float64 `json:"humidity"`      // Can appear as integer or a decimal value
	Mic           string  `json:"mic"`           // "CHECKSUM"
}

var messageHandler1 mqtt.MessageHandler = func(client mqtt.Client, msg mqtt.Message) {
	log.Printf("Received message: %s from topic: %s\n", msg.Payload(), msg.Topic())
	// Sometimes, JSON for channel returns an integer instead of a letter. Check and convert to string.
	err := json.Unmarshal(msg.Payload(), &incoming)
	if err != nil {
		log.Fatalf("Unable to unmarshal JSON due to %s", err)
	}
	copyWDRtoWD()
	printWeatherData(outgoing, "home")
}
```
 I'm sure there are a dozen other ways to achieve this, but after spending hours reading the library descriptions and postings online, this was the method that made the most sense to me. The code is posted on GitHub for a weather dashboard project I'm working on. Feel free to check it out and comment. It is still in the early stages and no GUI has been implemented yet, but this is a side project that will progress over time.
cjr29
1,888,897
How Internet Search Works
When we search for any website over the internet, there are many processes that occur to get our...
0
2024-06-14T18:52:17
https://dev.to/jay818/how-internet-search-works-5fma
internet, google, howitworks, career
When we search for any website over the internet, many processes occur to get our desired result. Here are the steps:

1. **DNS Resolution**
2. **Establishing a TCP connection**
3. **Sending an HTTP Request**

**DNS Resolution** - This step involves converting the domain name to the `IP Address`. When someone enters a URL in the browser, a request is sent to the DNS servers, which reply with the IP address. The IP address tells us where the server that hosts the particular website is located.

![DNS Resolution](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5asgfh2e3jo3t0f9bx8.png)

**Establishing a TCP Connection** - After we get the IP address, our browser initiates a TCP connection with the web server at that IP address. This process involves a 3-way handshake:

1. SYN (synchronize) packet - Our browser sends a SYN packet to the server to initiate the connection. It is the first packet sent from client to server. It contains several fields such as the source port, destination port, and sequence number (used to ensure data integrity).
2. SYN-ACK (server to client) - The server responds with a SYN-ACK packet, acknowledging the client's SYN and providing its own initial sequence number.
3. ACK (client to server) - The client sends an ACK packet back to the server, acknowledging the server's SYN-ACK. This completes the handshake, and the connection is established.

An established TCP connection ensures that data is transmitted reliably. But TCP can't dictate how data is structured and interpreted, so we use HTTP on top of TCP. HTTP (Hypertext Transfer Protocol) operates on top of TCP. Since HTTP is a stateless protocol, it relies on TCP to maintain the order of data transmission. Without TCP's sequencing mechanism, HTTP couldn't guarantee that web pages and resources are delivered and rendered correctly.

**Sending an HTTP Request** - After establishing the TCP connection, our browser sends the HTTP request to the server. The request is processed on the server, which sends back the response that is then rendered by the browser.

`Extras`

1. HTTP is stateless, meaning each request-response cycle is independent of previous interactions. It follows a request-response model.
2. WebSocket is also a communication protocol that provides full-duplex communication over a single TCP connection. It enables real-time, bidirectional communication between a client (such as a web browser) and a server.
3. TCP is a Transport-layer protocol, while HTTP and WebSockets are Application-layer protocols.
jay818
1,888,895
HIRING A QUALIFIED CRYPTO RECOVERY EXPERT - HACK SAVVY TECHNOLOG
THEIR CONTACT INFORMATION ARE LISTED BELOW. Mail them via: contactus@hacksavvytechnology. com Mail...
0
2024-06-14T18:48:58
https://dev.to/thiago_santiago_379292670/hiring-a-qualified-crypto-recovery-expert-hack-savvy-technolog-4deg
THEIR CONTACT INFORMATION ARE LISTED BELOW. Mail them via: contactus@hacksavvytechnology. com Mail them via: Support@hacksavvytechrecovery. com WhatsApp No: +7 999 829‑50‑38, website: https:// hacksavvytechrecovery. com Investing in Bitcoin can be both exhilarating and risky. For many, the attempt at quick gains can overshadow the potential risks, leading to unfortunate encounters with scammers and fraudsters. I, too, fell victim to the promises of easy wealth, only to find myself ensnared in a web of deceit and financial loss. It was January 2024 when I stumbled upon an enticing opportunity, a chance to multiply my investments in Bitcoin exponentially. The promise of unrealistically high returns clouded my judgment, blinding me to the warning signs that should have been glaringly obvious. In my eagerness to seize this supposed golden opportunity, I neglected to conduct thorough due diligence, a mistake that would cost me dearly. As the weeks passed, my initial excitement gave way to growing unease as I realized that something was amiss. Requests for additional funds, vague explanations, and a general lack of transparency left me feeling increasingly apprehensive. Despite my growing doubts, I continued to invest, clinging to the hope that my fortunes would soon turn. my hopes were shattered when the truth finally emerged. I had fallen victim to a sophisticated cryptocurrency scam, and my hard-earned money had vanished into the depths of the digital abyss. The realization hit me like a sledgehammer, leaving me feeling betrayed, angry, and helpless. I found myself adrift in a sea of despair, unsure of where to turn or how to reclaim what was rightfully mine. The prevailing belief that recovering stolen bitcoins was a futile endeavor only added to my sense of hopelessness, leaving me resigned to the notion that my funds were gone for good. But then, a glimmer of hope appeared on the horizon in the form of a Reddit post. 
A user shared their experience with HACK SAVVY TECHNOLOGY, a company specializing in the recovery of stolen cryptocurrency. Skeptical yet desperate, I decided to reach out to them, clinging to the slim possibility that they might be able to help me. To my astonishment, HACK SAVVY TECHNOLOGY responded swiftly, offering reassurance and guidance every step of the way. Their professionalism, expertise, and dedication instilled a newfound sense of hope within me, dispelling the doubts and fears that had plagued my mind for so long. Through meticulous investigation and relentless pursuit, HACK SAVVY TECHNOLOGY was able to trace the path of my stolen bitcoins, unraveling the intricate web of deception woven by the scammers. With each passing day, their progress filled me with renewed optimism, until finally, the moment of truth arrived. I watched in disbelief as my lost bitcoins were returned to me, like a phoenix rising from the ashes of despair. The sense of relief and gratitude that washed over me was indescribable, a testament to the invaluable assistance provided by HACK SAVVY TECHNOLOGY . My experience serves as a cautionary tale for anyone considering investing in Bitcoin or other cryptocurrencies. While the profit potential is undeniably enticing, it is crucial to exercise caution and vigilance at all times. Trusting blindly in promises of easy wealth can lead to devastating consequences, but with the right support and guidance, recovery is not only possible but achievable. As I reflect on my journey, I am filled with gratitude for the opportunity to reclaim what was rightfully mine. Thanks to the unwavering dedication of HACK SAVVY TECHNOLOGY, I have emerged from this ordeal stronger, wiser, and more resilient than ever before. Kindly find their contact details above. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/irehc18wogqnznxizcmy.jpeg)
thiago_santiago_379292670
1,880,619
Exploring Common JavaScript Methods
JavaScript is a versatile language known for its rich set of methods that make it powerful for web...
0
2024-06-14T18:48:08
https://dev.to/jlotti2/exploring-common-javascript-methods-35np
webdev, javascript, beginners, tutorial
JavaScript is a versatile language known for its rich set of methods that make it powerful for web development. These methods provide developers with a wide range of functionalities to manipulate data, interact with the DOM, and perform various tasks efficiently. In this blog, we'll explore four common JavaScript methods, discussing their key concepts, when to use them, and why they are essential in web development.

**Array.map()**

The map() method creates a new array by applying a function to each element of the original array. The map() method is often used when you need to transform each element of an array without mutating the original array. It's particularly useful for applying the same operation to every element in an array and generating a new array with the modified values.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xryzzo9yt2rii40hm10k.png)

**Array.reduce()**

The reduce() method executes a reducer function on each element of the array, resulting in a single output value. When you need to perform calculations or aggregations on array elements and produce a single result, reduce() is the go-to method. It's useful for tasks like summing up values, calculating averages, or flattening arrays.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b3ufi6jv9ns1zhk83x8g.png)

**Array.forEach()**

The forEach() method executes a provided function once for each array element. Use forEach() when you want to perform an operation on each element of an array without creating a new array. It's suitable for scenarios where you need to iterate over elements and perform actions like logging, updating the UI, or making API calls.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x9z9v92g47w9529pt9tx.png)

**Array.includes()**

The includes() method determines whether an array includes a certain value among its entries, returning true or false as appropriate. When you need to check if an array contains a specific element, includes() comes in handy. It's useful for conditionally executing code based on the presence or absence of an element in an array.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uaourayud6pmz6gi3kq3.png)

In conclusion, these common JavaScript methods play a crucial role in modern web development. They provide developers with powerful tools for manipulating arrays, iterating over elements, and performing various operations efficiently. By understanding when and how to use each method, developers can write cleaner, more concise code and tackle complex tasks with ease. Whether you're transforming data, filtering elements, or aggregating values, these methods empower you to write more expressive and maintainable JavaScript code.
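The screenshots above aren't copyable, so here is a small runnable sketch of all four methods together (the price data is made up purely for illustration):

```javascript
const prices = [10, 20, 30];

// map(): build a new array by transforming each element (the original is untouched)
const withTax = prices.map((p) => p * 1.1);

// reduce(): collapse the array into a single value - here, a sum
const total = prices.reduce((sum, p) => sum + p, 0);

// forEach(): run a side effect per element; returns nothing
const labels = [];
prices.forEach((p) => labels.push(`$${p}`));

// includes(): membership check, returns true or false
const hasTwenty = prices.includes(20);

console.log(total, labels, hasTwenty); // 60 [ '$10', '$20', '$30' ] true
```

Note how only map() produces a transformed copy; forEach() exists purely for its side effects, which is why it returns `undefined`.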
jlotti2
1,888,893
The Dancebet Site
Dancebet is a Persian-language online betting site that began operating in 1398 (2019-20 in the Gregorian calendar)...
0
2024-06-14T18:44:10
https://dev.to/simin197/syt-dns-bt-1f68
[Dancebet](https://dancebet.pro) is a Persian-language online betting site that began operating in 1398 (2019-20 in the Gregorian calendar). The site offers a variety of services in sports betting, online casino, live games, and match prediction.

Some of Dancebet's features include:

- Simple, easy-to-use interface: the site is easy to use for both beginner and professional users.
- A wide range of games: Dancebet offers a broad selection of online casino games, including slots, roulette, blackjack, poker, and more.
- Support for various payment methods: users can deposit to or withdraw from their accounts via bank cards, Perfect Money, cryptocurrencies, and more.
- Bonuses and prizes: Dancebet offers its users numerous bonuses and prizes, including a welcome bonus and sports and casino bonuses.
- 24/7 support: the Dancebet support team is available around the clock, seven days a week, to answer questions and resolve users' issues.

Some of Dancebet's advantages include:

- Credibility and licensing: Dancebet holds the licenses required to offer online betting services.
- High security: the site uses strong security measures to protect its users' information.
- Fast payouts: deposits and withdrawals on Dancebet are processed quickly and instantly.
- Persian-language support: all sections of the site are offered in Persian, so Persian-speaking users can easily use its services.
simin197
1,887,240
How to Create Deploy and Connect to a Virtual Machine on Azure
Step 1: Set up your Azure Account Step 2: Create a Virtual Machine Step 3: Connect to your Virtual...
0
2024-06-14T18:38:14
https://dev.to/florence_8042063da11e29d1/step-by-step-guide-to-create-deploy-and-connect-to-a-virtual-machine-on-azure-1b00
azure, virtualmachine, cloudcomputing, resourcegroup
Step 1: Set up your Azure Account
Step 2: Create a Virtual Machine
Step 3: Connect to your Virtual Machine

Step 1: Set Up Your Azure Account

Sign Up or Log In: If you don't have an Azure account, sign up at Azure Free Account. If you already have an account, log in at the Azure Portal.

Azure Subscription: Ensure you have an active subscription. If you're new to Azure, you might start with a free trial, which includes credits for initial usage.

Step 2: Create a Virtual Machine

Navigate to Azure Portal: Once logged in, go to the Azure Portal dashboard.

Create a Resource: Click on "Create a resource" in the left-hand menu.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4237g9pef8udsr0ep28i.png)

Select Virtual Machine: In the "New" section, search for "Virtual Machine" and select it.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/id2r8690vila5uwgh71e.png)

Configure Basics:

Subscription: Select your Azure subscription.

Resource Group: Create a new resource group or select an existing one.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekms5qm6rwagk52l0yim.png)

Virtual Machine Name: Enter a name for your VM.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88947spiv9ne5rfwopm3.png)

Region: Choose the region from the dropdown where you want your VM to be hosted.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7oczfbkr888b9wnnt491.png)

Choose the availability option and availability zone.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pq37he6gzt4zixcdjijh.png)

Image: Select the operating system or application for the virtual machine from the list of available operating systems.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vrxmzxfoh73z5t904uxh.png)

Size: Choose the appropriate VM size based on your requirements (CPU, RAM, etc.).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1mkp7ewvlogs7kdjl109.png)

Authentication Type: Choose "Password" and create a username and password for logging in to the VM.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zld86n2mx66eohha4183.png)

Disk Options: Configure the OS disk and any additional data disks if needed. Default settings usually suffice for general purposes.

Networking: Configure networking settings. The default options are usually fine for a basic setup. Ensure "Public IP" is enabled to allow remote access to your VM.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uwrispezli4eynbjl95q.png)

Management: Configure monitoring, identity, and backup options. Default settings are typically sufficient for a basic VM.

Review and Create: Review your configuration settings. Click "Create" to deploy the VM.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rc3afj027lgfhcbhxjxr.png)

Step 3: Connect to Your Windows 11 VM

Go to Virtual Machines: Once the deployment is complete, click "Go to Resource".

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5zg216hnnx3r6hcmyry0.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vo3163s9jpndg9zuid5w.png)

Select Your VM: Find and select the Windows Virtual Machine you created from the list.

Connect: On the VM's overview page, click on the "Connect" button at the top.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6a9q4xj4z3usko8n364r.png)

Choose "RDP" (Remote Desktop Protocol) as the connection method.

Download RDP File:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ua1qqyqp94evpleokowd.png)

Click "Download RDP File" and open it with Remote Desktop, then click "Connect".

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o7yo55wc90z1wxl1oe4a.png)

Log In: Enter the username and password you created during the VM setup process.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xuqwqmuckezlikctwz81.png)

You should now be connected to your Windows 11 Virtual Machine.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/idkkqrstfxg94vf3kie7.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xn1apuq9jw4d8h9jkfke.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lqd1z5k2jb93lug47rfw.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rdwb0xfeqqkkq7mdgnv9.png)
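The same portal steps can also be scripted with the Azure CLI. This is a hedged sketch, not part of the tutorial above: the resource names, region, size, and credentials are placeholders to substitute with your own, and the image alias shown is a Windows Server one (Windows 11 desktop images instead use an image URN from the `MicrosoftWindowsDesktop` publisher):

```shell
# Create a resource group (name and region are placeholders)
az group create --name myResourceGroup --location eastus

# Create the VM; adjust image, size, and credentials as needed
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Win2022Datacenter \
  --size Standard_B2s \
  --admin-username azureuser \
  --admin-password '<YourP@ssw0rd>'

# Open the RDP port (3389) so you can connect with Remote Desktop
az vm open-port --resource-group myResourceGroup --name myVM --port 3389
```

Scripting the deployment this way makes it repeatable, which is handy once you move past one-off experiments in the portal.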
florence_8042063da11e29d1
1,888,892
Eclipse Shortcuts for Java Developers
Here are the top 30 Eclipse shortcuts for Java programmers: F3: Jumps to include file or variable...
0
2024-06-14T18:36:31
https://dev.to/marudhu99/eclipse-shortcuts-for-java-developers-1la5
java, eclipse, shortcuts, springboot
Here are the top Eclipse shortcuts for Java programmers:

- F3: Jumps to an include file or a variable declaration/definition.
- Alt + Left and Alt + Right: Navigate back and forward through source files.
- Ctrl + Space: Content assist for proposing methods, member variables, and more.
- Ctrl + 3: Quick access to views, perspectives, and more.
- Ctrl + M: Maximizes the current view or editor (can be toggled).
- Ctrl + Shift + /: Insert a block comment; remove it again with Ctrl + Shift + \.
- Ctrl + Shift + T: Open an element, with wildcard support.
- Ctrl + F7: Switch to the next view; iterate with Ctrl + Shift + F7.
- Ctrl + Alt + H: Opens the call hierarchy.
- Ctrl + O: Open the Quick Outline view.
- Ctrl + F: Find in the current editor.
- Ctrl + 1: Quick Fix.
- Ctrl + H: Search.
- Ctrl + L: Go to line.
- Alt + Shift + W: Show a class in the Package Explorer.
- Ctrl + Shift + Up and Down: Navigate from member to member (variables and methods).
- Ctrl + K and Ctrl + Shift + K: Find next/previous.
- F4: See the type hierarchy of a class.
- Ctrl + F4 or Ctrl + W: Close the current file (add Shift to close all files).
- Ctrl + T: Toggle between supertype and subtype.
- Ctrl + E: Go to other open editors.
- Ctrl + .: Move to the next problem in a file; Ctrl + , for the previous problem.
- Ctrl + Shift + G: Search the workspace for references to the selected method or variable.
- Ctrl + Shift + L: View the listing of all Eclipse keyboard shortcuts.
- Alt + Shift + J: Add Javadoc at any place in a Java source file.
- Ctrl + Shift + P: Find the matching closing brace.
- Alt + Shift + X, Q: Run an Ant build file using keyboard shortcuts in Eclipse.
- Ctrl + Shift + F: Autoformat the source code.

These shortcuts can significantly improve your productivity while working in Eclipse.

YouTube video link: [40+ Eclipse keyboard shortcuts for java developers](https://youtu.be/uwDmr8zpsaY?si=dcCmdFsVCW6JU4ZX)
marudhu99
1,888,658
Requestly Update – May, 2024
Hey Requestlians 👋, Here is some progress we made in May and a quick sneak peek of what we’re up...
0
2024-06-14T18:30:00
https://requestly.com/blog/requestly-update-may-2024/
Hey Requestlians 👋,

Here is some progress we made in May and a quick sneak peek of what we're up to.

## 🚀 Introducing a new pricing option – Lite Plan

For a limited time, we're offering a Lite Plan that includes the following:

- Access to all HTTP Modification Types
- GraphQL Modifications
- Overriding & Stubbing API Responses
- Injecting Scripts
- Unlimited Header Modifications
- Up to 5 API Mocks

Check out the Lite Plan here → [https://requestly.io/pricing](https://requestly.io/pricing)

## 🪴 Manifest V3 Rollout (Huge Thanks)

Last month, we shared the beta version with the MV3 changes with you. We asked for feedback on our beta track, and we really appreciate the support, feedback, and bug reports. We have rolled out the MV3 version in a controlled manner, and as I type this post, all new users of Requestly and 10% of our user base are already on MV3. We plan to complete the rollout by the end of this month.

Refer to this GitHub issue for more details – [https://github.com/requestly/requestly/issues/1690](https://github.com/requestly/requestly/issues/1690)

## 🤖 Introducing RequestBot – AI Assistant in Requestly

We've introduced RequestBot, aka the "Ask AI" feature. You can ask RequestBot anything about Requestly, and it can help you quickly understand what Requestly can help with. Here are some helpful prompts that you can try:

- How to Override API Responses?
- How to record an HTTP Session?
- How to solve a CORS Issue?
- How to record an HTTP status code?

Check out the blog post here – https://requestly.com/blog/introducing-requestbot-ai-assistant-in-requestly/

## 👩‍💻 Best ModHeader Alternative

Lately, a lot of ModHeader users are moving to Requestly. ModHeader is in the news for injecting ads, and their recent release made the product very unstable. If you are looking for an alternative, Requestly can be a great option, and with the Lite Plan, you get unlimited HTTP header modifications.

Check out this → https://requestly.com/alternatives/a-better-and-well-documented-alternate-to-modheader/

We also support using Requestly in automation, so if you need Requestly for modifying API responses or HTTP headers, contact us.

## 💫 The FrontendBytes Podcast – A Deep Dive into Software Testing & Challenges in Testing Ads

We had the pleasure of speaking with Kenton DeAngeli, a seasoned veteran in the AdTech industry with over 20 years of experience. He took us through his journey into software testing, shedding light on his work's challenges and the software testing industry. Here are some of the things we covered:

- How did Kenton get into testing, and what challenges did he face in his early career?
- What is the most challenging part of the software testing role?
- Developers and QA relationship - when developers can't reproduce bugs
- Resources to become a good software tester
- How do you hire a good software tester?
- Good interview questions & red flags in interviews
- Rapid-fire questions

Check out the podcast here → [https://frontendbytes.substack.com/p/deep-dive-into-software-testing-role](https://frontendbytes.substack.com/p/deep-dive-into-software-testing-role)

## ❤️ Community Love

Check out this great post by Robert Hostak, who used Requestly to switch Supabase environments in his WeWeb project and shared his experience with the WeWeb community.

Here is the post link – [https://community.weweb.io/t/supabase-switch-between-production-and-staging-in-one-click/8972](https://community.weweb.io/t/supabase-switch-between-production-and-staging-in-one-click/8972)

## ⭐️ Checkout our Awesome Resources

- [Github](https://github.com/requestly/requestly/)
- [Chrome Store](https://chromewebstore.google.com/detail/requestly-intercept-modif/mdnleldcmiljblolnjhpnblkcekpdkpa)
- [LinkedIn](https://www.linkedin.com/company/requestly)
- [Twitter](https://x.com/RequestlyIO)
- [Reddit](https://www.reddit.com/r/requestly/)
- [Documentation](https://developers.requestly.com/)
- [Learning Center](https://requestly.com/academy/requestly-academy-basics/)

## 🎁 Get $25 in credits & Get featured in our next newsletter

We love interacting with our customers, and getting feedback is a critical part of how we build this amazing product.

- Give a shoutout on X (Twitter) and get $25 in Requestly credits
- Rate us on the Chrome Store, and we will feature the best review in the next newsletter

Please send an email to sachin [at] requestly.io with the tweet link / Chrome Store review screenshot.

That's all for this month. Happy developing!
requestlyio
1,888,890
Integration testing of Redux-components with MSW
Introduction I recently started working on integration tests for a React application with...
0
2024-06-14T18:26:46
https://dev.to/janbe30/integration-testing-of-redux-components-with-msw-4emb
react, testing, webdev, javascript
## Introduction

I recently started working on integration tests for a React application with Redux-connected components, specifically Redux Toolkit. I quickly learned that a common tool to use is the [Mock Service Worker (MSW) library](https://mswjs.io/) to mock network requests; this library allows you to intercept outgoing network requests and mock their responses, making it super easy to test the implementation details of your components.

Setting up MSW in an application is quick and easy - after installing the library, you set up the server, integrate it with your testing environment (Jest, Vitest, ...), and create the handlers to mock your requests and corresponding responses. Sounds easy enough, right? Well, not so much if you're using Axios! :upside_down_face:

## The problem

After setting up MSW and creating the handlers, I started running into issues. The requests were being intercepted by MSW, but my tests kept failing because the Redux state was not being updated. I knew my Redux store and component were working as expected and the requests were being dispatched just fine, yet my tests kept failing. Was something else misconfigured somewhere?! I could _not_ find any problems _anywhere_ in my codebase. I spent hours debugging and asking around hopelessly until I noticed something interesting... the response data from the successful requests was empty, yet the error response was `undefined`. So the data was being lost somewhere between MSW and the Axios request in my Redux slice. Hmmmm.

## Issue discovered

This discovery led me to wonder if these two libraries were even compatible, and sure enough, the [first Google result](https://github.com/mswjs/msw/issues/2026) shed some light on the issue. MSW and Axios _are_ compatible; however, there are discrepancies in the way the two libraries handle network requests, causing the response body to be dropped or unrecognized in the Axios request.
## Solution

A good solution that worked for me was to mock Axios to use native `fetch` while testing, as mentioned by someone in the GitHub thread above. There are several ways to do this; this quick, simple solution lets you keep the benefits of Axios in your actual application.

_jest.setup.js or jest.polyfills.js (depending on your setup)_

```
import axios from 'axios';

// Globals needed for MSW
const { TextEncoder, TextDecoder } = require('node:util');
const { ReadableStream } = require('node:stream/web'); // ReadableStream lives here, not in node:util
Reflect.set(global, 'TextEncoder', TextEncoder);
Reflect.set(global, 'TextDecoder', TextDecoder);
Reflect.set(global, 'ReadableStream', ReadableStream);

const { Request, Response, Headers, FormData } = require('undici');
Reflect.set(global, 'Request', Request);
Reflect.set(global, 'Response', Response);
Reflect.set(global, 'Headers', Headers);
Reflect.set(global, 'FormData', FormData);

// Create a custom adapter that uses fetch
axios.defaults.adapter = async (config) => {
  const { url, method, data, headers } = config;

  // Convert axios config to fetch config
  const response = await fetch(url, {
    method,
    body: data ? JSON.stringify(data) : undefined,
    headers: {
      'Content-Type': 'application/json',
      ...headers,
    },
  });

  const responseData = await response.json();

  return {
    data: responseData,
    status: response.status,
    statusText: response.statusText,
    headers: response.headers,
    config,
  };
};
```

_Example Redux slice_

```
import { createAsyncThunk, createSlice } from '@reduxjs/toolkit';
import axios from 'axios';

export const getMyData = createAsyncThunk(
  'example/getMyData',
  async (location) => {
    const response = await axios.get(`https://google.com/get/mydata`, {
      headers: {
        'x-api-key': 'YOUR_API_KEY',
      },
    });
    return response.data;
  },
);

/* Initial state */
/* Slice fn */
```

_Example MSW Handlers_

```
import { http, HttpResponse } from 'msw';

http.get(`${baseURL}/get/mydata`, ({ request }) => {
  console.log('Captured a "GET /get/mydata"', request.method, request.url);
  return HttpResponse.json('123ABCDE', { status: 200 });
}),
```

_Start and stop MSW in SetupTests.js_

```
import { server } from './mocks/server';

// Establish API mocking before all tests.
beforeAll(() => {
  server.listen();
});

// Reset any request handlers that we may add during the tests,
// so they don't affect other tests.
afterEach(() => server.resetHandlers());

// Clean up after the tests are finished.
afterAll(() => {
  server.close();
});
```

Hope this helps somebody else in their testing endeavors!
janbe30
1,888,888
Create Api Endpoint in Next.js
I recently learned I could create an api endpoint in nextjs without having to use node/express. I'll...
0
2024-06-14T18:20:57
https://dev.to/jahkamso/create-api-endpoint-in-nextjs-4b6f
I recently learned I could create an API endpoint in Next.js without having to use Node/Express. I'll share what I've learned and how you can start creating API endpoints in Next.js.

First you need to initialise a Next app

![nextjs setup command](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2mvagvmbkjw6iau7fb5y.png)

![nextjs setup options](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/91pnccpy3e4pxyyo79vp.png)

Once the new project is created, you should have something like this when you open up your project👇🏻

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4xageagoj2hpfznxns3.png)

Now, it's time to create some folders (Next.js uses folders for routing). Create 2 folders inside the `app` folder with a new file called `hello.js`. It should look something like this👇🏻

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/krah9yl87mdw7tyxl395.png)

It's time to throw the boring stuff in the trash

![trash throw gif](https://media.giphy.com/media/NHs9GJQzKh3uU/giphy.gif?cid=790b7611kcnx6h3ferjyhhsnuqd2iuwdvv2myqdssrjldirz&ep=v1_gifs_search&rid=giphy.gif&ct=g)

Let's create our first endpoint, which is going to be a GET request. First, we need to import some things from `next/server`👇🏻

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/41x0m2fnpnlih7k7wtnq.png)

These two imports basically tell Next.js that we want to be able to send a response and make a request, similar to the request and response arguments in Node.js.

Now, let's create a simple function for our GET request. Here's how you do it👇🏻

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cf57vjnsfaxysefo8vpx.png)

To test if this works, start your project by typing `npm run dev` in the terminal. Now, head over to Postman or use a VS Code extension called Thunder Client, and then add this URL `http://localhost:3000/api/endpoints`.

You should have something like this👇🏻

![nextjs get request thunderclient](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/it26i8zx1tgxd7zze50g.png)

Great! Now we've successfully created a GET request - now what? Well, it's time to learn how to create a POST request to be able to get information from users on the frontend. To do this, we basically repeat the same thing with some little tweaks, as shown below👇🏻

![nextjs get request thunderclient](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70bpy5wo2r5x0zqgjcam.png)

Now, let's try making a POST request in Postman/Thunder Client. The result should be something like this👇🏻

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/39d7bwyyb0e8qu97rdc8.png)

Awesome work! You now know the basics of creating endpoints in Next.js

![clapping gif](https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExczk4YzJseHdtbW5heTA5aWppb2MxYnd2emk5YzZ5ZTY1ZjBpanQ0eCZlcD12MV9naWZzX3NlYXJjaCZjdD1n/l3q2XhfQ8oCkm1Ts4/giphy.gif)
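Since the code in the post is shown only as screenshots, here is a sketch of what a route handler along those lines could look like. The file path and response payloads are my assumptions based on the URL used in the post (`/api/endpoints`), not a copy of the screenshots; handlers can return the standard web `Response` as shown here, or use `NextResponse` from `next/server` as the post does:

```javascript
// app/api/endpoints/route.js - folder names assumed from the URL used in the post.
// In the App Router, exporting GET/POST from route.js maps HTTP methods to handlers.

// GET http://localhost:3000/api/endpoints
export async function GET() {
  return Response.json({ message: "Hello from the API!" });
}

// POST http://localhost:3000/api/endpoints
export async function POST(request) {
  const body = await request.json(); // JSON payload sent by the client
  return Response.json({ received: body }, { status: 201 });
}
```

Hitting the URL in Postman or Thunder Client exercises these exported functions, which is exactly what the screenshots above demonstrate.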
jahkamso
1,888,871
Understanding Change Streams in MongoDB
In the dynamic landscape of modern applications, real-time data processing has become essential....
0
2024-06-14T18:15:22
https://dev.to/nichetti/understanding-change-streams-in-mongodb-2c9f
In the dynamic landscape of modern applications, real-time data processing has become essential. MongoDB, a popular NoSQL database, offers a powerful feature called Change Streams, enabling applications to track changes in the database in real-time. This article explores what Change Streams are, their benefits, and how to implement them in MongoDB using Java. ## What are Change Streams? Change Streams in MongoDB allow applications to listen to real-time data changes on a collection, database, or deployment level. This feature leverages MongoDB’s replication capabilities to provide a continuous, reliable stream of data changes, making it ideal for building reactive applications, audit trails, real-time analytics, and more. **Benefits of Using Change Streams** 1. **Real-Time Data Processing:** Applications can respond instantly to data changes, ensuring timely updates and actions. 2. **Scalability:** Change Streams can be used across distributed systems, handling large volumes of data efficiently. 3. **Simplicity:** Implementing Change Streams is straightforward, reducing the complexity of managing custom polling mechanisms. 4. **Granularity:** They offer fine-grained control, allowing you to listen to changes at various levels – from a single collection to an entire deployment. ## How Do Change Streams Work? Change Streams work by leveraging MongoDB’s oplog (operations log) in replica sets. When an operation (insert, update, delete, etc.) is performed on the database, it is recorded in the oplog. Change Streams tap into this oplog to emit events corresponding to these operations. ## Implementing Change Streams **1. Set Up a MongoDB Replica Set** • Ensure your MongoDB instance is running as a replica set. If you're using a standalone MongoDB instance, convert it to a replica set: ```sh mongod --replSet rs0 ``` • Initialize the replica set: ```js rs.initiate() ``` **2. 
Connecting to MongoDB and Watching a Change Stream** • Add MongoDB Java Driver Dependency: If you are using Maven, add the following dependency to your pom.xml: ```xml <dependency> <groupId>org.mongodb</groupId> <artifactId>mongodb-driver-sync</artifactId> <version>4.4.0</version> </dependency> ``` • Watch Change Streams on a Collection: ```java import com.mongodb.client.MongoClient; import com.mongodb.client.MongoClients; import com.mongodb.client.MongoCollection; import com.mongodb.client.MongoDatabase; import com.mongodb.client.model.changestream.ChangeStreamDocument; import com.mongodb.client.model.changestream.OperationType; import org.bson.Document; public class ChangeStreamExample { public static void main(String[] args) { String uri = "mongodb://localhost:27017"; try (MongoClient mongoClient = MongoClients.create(uri)) { MongoDatabase database = mongoClient.getDatabase("exampleDB"); MongoCollection<Document> collection = database.getCollection("exampleCollection"); collection.watch().forEach((ChangeStreamDocument<Document> change) -> { System.out.println("Change detected: " + change); if (change.getOperationType() == OperationType.INSERT) { System.out.println("Document inserted: " + change.getFullDocument()); } else if (change.getOperationType() == OperationType.UPDATE) { System.out.println("Document updated: " + change.getUpdateDescription()); } else if (change.getOperationType() == OperationType.DELETE) { System.out.println("Document deleted: " + change.getDocumentKey()); } }); System.out.println("Watching for changes..."); } } } ``` **3. Handling Different Change Events** • Change Streams can capture various types of changes, including inserts, updates, deletes, and replacements. 
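For reference, each `ChangeStreamDocument` handled in these examples corresponds to a raw change event shaped roughly like this. This is an illustrative insert event: the field names follow MongoDB's change event format, while the values and the abbreviated `_id` resume token are made up:

```js
{
  "_id": { "_data": "8263..." },          // resume token (abbreviated here)
  "operationType": "insert",
  "clusterTime": { "$timestamp": { "t": 1718388000, "i": 1 } },
  "ns": { "db": "exampleDB", "coll": "exampleCollection" },
  "documentKey": { "_id": "6668f0..." },
  "fullDocument": { "_id": "6668f0...", "status": "active", "name": "example" }
}
```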
Here’s how to handle different event types: ```java collection.watch().forEach((ChangeStreamDocument<Document> change) -> { switch (change.getOperationType()) { case INSERT: System.out.println("Document inserted: " + change.getFullDocument()); break; case UPDATE: System.out.println("Document updated: " + change.getUpdateDescription()); break; case DELETE: System.out.println("Document deleted: " + change.getDocumentKey()); break; default: System.out.println("Other operation: " + change); } }); ``` **4. Filtering Change Streams** • You can filter Change Streams to only listen to specific changes using aggregation pipelines. ```java import com.mongodb.client.model.Aggregates; import com.mongodb.client.model.Filters; import org.bson.conversions.Bson; import java.util.Arrays; import java.util.List; List<Bson> pipeline = Arrays.asList( Aggregates.match(Filters.eq("fullDocument.status", "active")) ); collection.watch(pipeline).forEach((ChangeStreamDocument<Document> change) -> { System.out.println("Filtered change detected: " + change); }); ``` **5. Watching Changes at the Database Level** (To watch changes across all collections in a database) ```java database.watch().forEach((ChangeStreamDocument<Document> change) -> { System.out.println("Change detected in database: " + change); }); ``` **6. Watching Changes at the Deployment Level** (To watch changes across the entire deployment) ```java mongoClient.watch().forEach((ChangeStreamDocument<Document> change) -> { System.out.println("Change detected in deployment: " + change); }); ``` ## Conclusion MongoDB Change Streams provide a robust mechanism for building real-time, reactive applications. By leveraging this feature, developers can efficiently track and respond to data changes, enabling a wide range of use cases from live notifications to real-time analytics. With the simplicity and power of Change Streams, MongoDB continues to be a strong contender in the world of modern databases.
nichetti
1,888,836
How to create a windows 11 Virtual Machine (VM) on Azure
Table of Contents Introduction Step 1. Login to Azure Portal Step 2. Select/click Virtual...
0
2024-06-14T18:13:50
https://dev.to/yuddy/how-to-create-a-windows-11-virtual-machine-vm-on-azure-114l
**Table of Contents** Introduction Step 1. Login to Azure Portal Step 2. Select/click Virtual Machine Step 3. Create Azure Virtual Machine Step 4. Create new Resource Group Step 5. Basic Tab: Fill all the Virtual Machine Instance Details Step 6. Disk Tab: Fill all the Disk fields Step 7. Network Tab: Fill in Network Fields Step 8. Management Tab: Fill in Management Fields Step 9. Monitoring Tab: Set Monitoring Fields Step 10. Advanced Tab: Set Advanced Fields Step 11. Tags Tab: Set Tag Fields Step 12. Review and Create Tab Step 13. Connect to VM Step 14. Download RDP Step 15. Next, Next and Accept Step 16. VM is ready Step 17. Deleting A Resource Group Introduction A Virtual Machine is one of the resources Azure offers to help organizations and individuals execute various tasks without relying on a local computer. It simply means having a full computer system running in the cloud, with many advantages over carrying or stationing a physical computer system. Being virtual, a single computer system can be accessed by multiple people all over the globe, quite unlike a stationed physical computer. Shortly we shall be taking some steps to learn how Virtual Machines can be deployed via the Azure platform. **Step 1. Login to Azure Portal** Open a browser and go to portal.azure.com. Fill in your registered username and password, then submit your entries. A successful login lands you in the Azure portal, where various tasks can be executed. Also make sure you have an Azure subscription to enable the creation of a VM. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k6knivkzax5cqc5c5lwt.png) **Step 2. Select/click Virtual Machine** There are 3 ways to locate Virtual Machine in the portal (via search bar/resource group/portal tab). Go to the search bar, type in virtual machine, and select Virtual Machine from the dropdown list. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bayakno8bztpysbq1x5q.jpg) **Step 3. 
Create Azure Virtual Machine** Locate the Create button at the top left and select Azure virtual machine from the dropdown. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pzdq24xqnsshfw5rsady.jpg) **Step 4. Basic Tab: Create new Resource Group** **Notice:** There is a note explaining how each instance field works; an info icon appears after each field label. Hover the cursor over the icon to read it. Also, all fields labeled with a red asterisk are compulsory and must be filled. Select an already created resource group, or create a new one if none exists: click New, type in the new resource group name, and click OK. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wtdjdzkctx4k0yvkro49.jpg) **Step 5. Basic Tab: Fill all the Virtual Machine Instance Details** - Virtual Machine Name: A valid name can contain alphanumeric characters and some special characters, excluding underscore (_). E.g. Denz-VM, Denz4, etc. - Region: Select a desired data center region from the dropdown, according to your architectural plan. - Availability options: Select how you want availability to be handled (e.g. no infrastructure redundancy required, etc.) - Availability zone: Select your zone(s). - Security type: Select how you want to secure the VM. - Image: Select either a Linux-type OS or a Windows-type OS (e.g. Windows 11 Pro). - VM architecture: Usually x64. - Size: Select from the dropdown as available in the selected region. - Enable Hibernation: Check if you want it, otherwise ignore. - Username: Type in the VM username. - Password: Type in the VM password. - Confirm Password: Retype the VM password for confirmation. - Public inbound ports: Allow selected ports. - Select inbound ports: Select how you want the VM to be accessed (e.g. the RDP port allows remote access). - I confirm I have an eligible Windows 10/11 license with multi-tenant hosting rights: Check this box and click Next to move to the Disks tab. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7u4quv9l2y6qrz64p834.jpg) **Step 6. Disk Tab: Fill all the Disk fields** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q2f6a2rew9sx4prnox9i.jpg) **Step 7. Network Tab: Fill in Network Fields** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vn7u3pjo7fkpfjaenero.jpg) **Step 8. Management Tab: Fill in Management Fields** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sfe1cxmdljoi662jisvi.jpg) **Step 9. Monitoring Tab: Set Monitoring Fields** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1cjl1j46gxzdgv3oos4u.jpg) **Step 10. Advanced Tab: Set Advanced Fields** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ovc90vkg63hmg2ltk8rf.jpg) **Step 11. Tags Tab: Set Tag Fields** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ndtrtn1au4ob1ogi21kq.jpg) **Step 12. Review and Create Tab** Review all the fields to ensure they are entered and selected correctly according to your architectural plan. Then click the Create button to create the VM. Validation runs on each tab; if no errors are found, the VM is created successfully. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s9cvhx7u3spe6wr4j87v.jpg) **Step 13. Connect to VM** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fdoasdfgyyol7btsosro.jpg) **Step 14. Download RDP** Here you download the RDP file to your local PC, open the file, and set up the remote access connection. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1v8ypopodzvf94sglx9.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mtz8p78d8xfssgbtkx05.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xpw8tssbn7kvp9qila3k.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nj6au5m860d4wnzekq87.jpg) **Step 15. Next, Next and Accept** Follow the instructions and accept the VM connection. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f31qydrkrw21tbh8myv2.png) **Step 16. VM is ready** At this point the VM is ready to be used. You have successfully created and deployed a VM that can be accessed across the globe. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s8m73bvkctjgscf9ky0.png) **Step 17. Deleting A Resource Group** Delete any resource you don't want running; otherwise it will keep building unwanted cost. To clear this VM resource, close the remotely accessed VM, then: - Go to Resource groups, locate and open the resource group housing the VM. - Select all the installed resources, scroll up and click the Delete resource group tab, then confirm by re-entering the resource group name and clicking Delete. Wait for it to delete all the installed resources. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/40egz2ib2icsy7cv7p61.jpg)
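The portal steps above can also be scripted with the Azure CLI. Assuming the CLI is installed and you are logged in (`az login`), an equivalent flow looks roughly like this; the resource names come from the article's examples, while the image URN and size are assumptions to be verified against your region:

```sh
# Create the resource group (Step 4)
az group create --name Denz-RG --location eastus

# Create the Windows 11 VM (Steps 5-12); the image URN below is an assumption,
# verify it with: az vm image list --publisher MicrosoftWindowsDesktop --all --output table
az vm create \
  --resource-group Denz-RG \
  --name Denz-VM \
  --image MicrosoftWindowsDesktop:windows-11:win11-23h2-pro:latest \
  --size Standard_D2s_v3 \
  --admin-username azureuser \
  --admin-password '<your-strong-password>'

# Open the RDP port so the VM can be reached remotely (Step 13)
az vm open-port --resource-group Denz-RG --name Denz-VM --port 3389

# Clean up everything when done (Step 17)
az group delete --name Denz-RG --yes --no-wait
```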
yuddy
1,888,862
How to use the POST method in React JS
This code is a React component that renders a form for creating a new post. Here's a detailed...
0
2024-06-14T17:54:08
https://dev.to/04anilr/how-to-use-of-post-method-in-react-js-k99
This code is a React component that renders a form for creating a new post. Here's a detailed description of the code: ### Imports 1. **React**: Importing React to use JSX and other React functionalities. 2. **axios**: Importing Axios for making HTTP requests. ```jsx import React from "react"; import axios from "axios"; export default function Create() { const submitHandler = (event) => { event.preventDefault(); const title = event.target.title.value; const body = event.target.body.value; const author = event.target.author.value; const data = { title, body, author }; axios .post("url", data) .then((response) => { console.log(response); event.target.reset(); }) .catch((error) => { console.log(error); }); }; return ( <> <div className="container"> <div className="row"> <div className="col-lg-8 col-md-10 mx-auto"> <p>Create a post......?</p> <form className="sentMessage" id="contactForm" onSubmit={submitHandler} > <div className="control-group"> <div className="form-group floating-label-form-group controls"> <label>Title</label> <input type="text" className="form-control" placeholder="Title" id="title" required name="title" /> <p className="help-block text-danger"></p> </div> </div> <div className="control-group"> <div className="form-group floating-label-form-group controls mt-3"> <label>Body</label> <textarea id="body" className="form-control" name="body" placeholder="Body" ></textarea> <p className="help-block text-danger"></p> </div> </div> <div className="control-group"> <div className="form-group floating-label-form-group controls mt-3"> <label>Author</label> <input type="text" className="form-control" placeholder="Author" id="author" required name="author" /> <p className="help-block text-danger"></p> </div> </div> <br /> <div id="success"></div> <button type="submit" className="btn btn-primary" id="sendMessageButton" > Send </button> </form> </div> </div> </div> </> ); } ``` ### Function Component: `Create` The component is defined as a function named 
`Create`. #### `submitHandler` Function - **Purpose**: This function handles the form submission. - **Parameters**: Takes an `event` object as a parameter. - **Functionality**: - Prevents the default form submission behavior using `event.preventDefault()`. - Retrieves the values from the form fields (title, body, and author). - Constructs a `data` object with these values. - Makes a POST request to a specified URL using Axios, sending the `data` object. - If the request is successful, logs the response and resets the form fields. - If the request fails, logs the error. #### JSX Structure The component returns a JSX fragment (`<>...</>`) containing the following elements: - **Container Div**: A div with the class `container` to contain the form. - **Row Div**: A div with the class `row` to structure the layout. - **Column Div**: A div with classes `col-lg-8`, `col-md-10`, and `mx-auto` to center the form within the container. - **Paragraph**: A paragraph element with the text "Create a post......?". - **Form**: A form element with: - Class `sentMessage` - ID `contactForm` - `onSubmit` handler set to `submitHandler` #### Form Fields The form contains three input fields and a submit button: 1. **Title Input**: - Type: `text` - Class: `form-control` - Placeholder: "Title" - ID: `title` - Required: `true` - Name: `title` 2. **Body Textarea**: - Class: `form-control` - Placeholder: "Body" - ID: `body` - Name: `body` 3. **Author Input**: - Type: `text` - Class: `form-control` - Placeholder: "Author" - ID: `author` - Required: `true` - Name: `author` 4. **Submit Button**: - Type: `submit` - Class: `btn btn-primary` - ID: `sendMessageButton` - Text: "Send" ### Summary This React component provides a form for creating a post with a title, body, and author. When the form is submitted, the `submitHandler` function collects the input data and sends it to a specified URL using Axios. 
If the submission is successful, the form is reset; otherwise, any errors are logged to the console.
04anilr
1,888,868
Creative Full Screen Showcase Slider
This CodePen demo features a stunning full-screen showcase slider built using Swiper.js. It...
0
2024-06-14T18:12:24
https://dev.to/creative_salahu/creative-full-screen-showcase-slider-3597
codepen
This CodePen demo features a stunning full-screen showcase slider built using Swiper.js. It highlights various creative projects with immersive background images and parallax effects. Each slide includes dynamic headings and subtitles, and there are smooth transitions between slides. Features: - Full-screen slides with beautiful background images - Parallax scrolling for a dynamic visual experience - Responsive design for optimal viewing on all devices - Navigation controls and pagination for easy browsing - Autoplay functionality to cycle through the slides automatically Perfect for showcasing portfolios, products, or any visual content in a captivating way. {% codepen https://codepen.io/CreativeSalahu/pen/pompzNW %}
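The features listed map onto Swiper's initialization options. The sketch below is not the pen's actual code (selectors and timings are placeholders), just a rough picture of the configuration involved:

```javascript
// Hypothetical Swiper setup matching the listed features
const swiper = new Swiper(".swiper", {
  parallax: true,                          // parallax scrolling on slide content
  loop: true,
  autoplay: { delay: 5000 },               // cycle slides automatically
  pagination: { el: ".swiper-pagination", clickable: true },
  navigation: { nextEl: ".swiper-button-next", prevEl: ".swiper-button-prev" },
});
```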
creative_salahu
1,888,867
(AUBSS) Receives Full Accreditation From BILIM-STANDARD, The Independent Accreditation Agency Of Kyrgyz Republic
American University of Business and Social Sciences (AUBSS) is proud to announce that it has...
0
2024-06-14T18:11:28
https://dev.to/aubss_edu/aubss-receives-full-accreditation-from-bilim-standard-the-independent-accreditation-agency-of-kyrgyz-republic-4ik2
education, news, qahe, aubss
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q95s2eo6dgdzyycifjx3.png) American University of Business and Social Sciences (AUBSS) is proud to announce that it has achieved full accreditation from The Public Foundation “Independent Accreditation Agency ‘BILIM-STANDARD’” of the Kyrgyz Republic. This prestigious accreditation body operates internationally and holds memberships in prominent organizations such as the Asia-Pacific Quality Assurance Network (APQN), the International Network of Quality Assurance Agencies in Higher Education (INQAAHE), and the Eurasian Association for the Assessment of Quality in Education. The accreditation by BILIM-STANDARD is a testament to AUBSS’s commitment to upholding the highest standards of quality in education. It recognizes the university’s dedication to providing exceptional academic programs and services to its students. AUBSS offers a range of programs including the Honorary Doctorate Degree, Mini-MBA, MBA, DBA, and Distinguished Professorship Award. These programs are designed to cater to the diverse needs of professionals seeking career advancement and personal development. Candidates interested in pursuing the Honorary Doctorate Degree, Mini-MBA, MBA, DBA, or the Distinguished Professorship Award are invited to contact AUBSS for further details. They can reach out to the university by emailing admission@aubss.edu.pl or visiting the official website at www.aubss.university. This accreditation milestone solidifies AUBSS’s position as a leading institution in business and social sciences education. The university remains dedicated to providing a transformative learning experience and empowering individuals to excel in their chosen fields.
aubss_edu
1,866,762
Docker Compose for a Full-Stack Application with React, Node.js, and PostgreSQL
The Premise So you've built a Full Stack application that you got working as you wanted,...
0
2024-06-14T18:11:17
https://dev.to/snigdho611/docker-compose-for-a-full-stack-application-with-react-nodejs-and-postgresql-3kdl
docker, node, postgres, react
### The Premise So you've built a Full Stack application that you got working as you wanted, and want to show it off. However, dependencies and environments make it so that it only runs on your device. Well, as you may already know, Docker Compose can take care of that. Let's start going through how this can be done without further ado. This tutorial is for those who have some experience creating applications and servers, and some basic knowledge of Docker as well. #### TL;DR The source code can be found [here on Github](https://github.com/snigdho611/docker-compose-react-nodejs-postgres). To get this project up and running, follow these steps: 1. Make sure you have Docker installed in your system. For installation steps, follow the official guides: 1. For **[Mac](https://docs.docker.com/desktop/install/mac-install/)** 2. For **[Ubuntu](https://docs.docker.com/engine/install/ubuntu/)** 3. For **[Windows](https://docs.docker.com/desktop/install/linux-install/)** 2. Clone the repository into your device 3. Open a terminal from the cloned project's directory (where the `docker-compose.yml` file is present) 4. Run the command: `docker compose up` That's all! That should get the project up and running. To see the output, you can access `http://127.0.0.1:4172` from the browser and you should find a web page with a list of users. This entire system, with the client, server & database, runs inside Docker and is accessible from your machine. Here is a detailed explanation of what is going on. #### **1. Introduction** [Docker](https://docs.docker.com/) at its core is a platform as a service that uses OS-level virtualization to deploy/deliver software in packages called containers. This brings various advantages, such as cross-platform consistency, flexibility, and scalability. [Docker Compose](https://docs.docker.com/compose/) is a tool for defining and running multi-container applications. 
It is the key to unlocking a streamlined and efficient development and deployment experience. #### **2. Using Docker and Docker Compose** When it comes to working with Full Stack Applications, i.e. ones that involve more than one set of technologies integrated into one fully fledged system, Docker can be fairly overwhelming to configure from scratch. It is not made any easier by the fact that there are various types of environment dependencies for each particular technology, which only raises the risk of errors at the deployment level. **Note:** The `.env` file in the same directory as `docker-compose.yml` will contain certain variables that will be used in the docker compose file. They will be accessed whenever the `${<VARIABLE_NAME>}` notation is used. This example will work with PostgreSQL as the database, a very minimal Node/Express JS server and React JS as the client side application. #### **3. Individual Containers** The following section goes into a breakdown of how the `docker-compose.yml` file works with the individual `Dockerfile`. Let's take a look at the docker-compose file first. We have a key called `services` at the very top, which defines the different applications/services we want to get running. As this is a `.yml` file, it is important to remember that indentations are crucial. Let's dive into the first service defined in this docker compose file, the database. ##### **1. Database** First of all, the database needs to be set up and running in order for the server to be able to connect to it. The database does not need any Dockerfile in this particular instance; however, it can be done with a Dockerfile too. Let's go through the configurations. 
*`docker-compose.yml`* ```yml postgres: container_name: database ports: - "5431:5432" image: postgres environment: POSTGRES_USER: "${POSTGRES_USER}" POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}" POSTGRES_DB: ${POSTGRES_DB} volumes: - ./docker_test_db:/var/lib/postgresql/data healthcheck: test: ["CMD-SHELL", "sh -c 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}'"] interval: 5s timeout: 60s retries: 5 start_period: 80s ``` #### Explanation - ***postgres***: used to identify the service that the section of the compose file is for - ***container_name***: the name of the service/container that we have chosen - ***ports***: maps the host port (making it accessible from outside) to the port being used by the application in Docker. - ***image***: defines the Docker image that will be required to make this container functional and running - ***environment***: defines variables for the environment of this particular service. For example, for this PostgreSQL service, we will be defining a `POSTGRES_USER`, `POSTGRES_PASSWORD` and `POSTGRES_DB`. They're all being assigned with the values in the `.env`. - ***volumes***: This particular key is for when we want to create a container that can **_persist_** data. This means that ordinarily, when a Docker container goes down, so does any updated data on it. Using volumes, we are mapping a particular directory of our local machine with a directory of the container. In this case, that's the directory where postgres is reading the data from for this database. - ***healthcheck***: when required, certain services will need to check if their state is functional or not. For example, PostgreSQL has a behavior of restarting a few times at launch, before finally being functional. For this reason, the healthcheck lets Docker Compose inform other services when it is fully functional. 
The few properties below healthcheck are doing the following: - ***test***: runs particular commands for the service to run checks - ***interval***: amount of time docker compose will wait before running a check again - ***timeout***: amount of time a single check will run before it times out without a response or fails - ***retries***: total number of times docker compose will retry the healthcheck for a positive response before declaring it a failed check - ***start_period***: specifies the amount of time to wait before starting health checks ##### **2. Server** *`Dockerfile`* ```Dockerfile FROM node:18 WORKDIR /server COPY src/ /server/src COPY prisma/ /server/prisma COPY package.json /server RUN npm install RUN npx prisma generate ``` **Explanation** ***FROM*** - tells Docker what image is going to be required to build the container. For this example, it's Node JS (version 18) ***WORKDIR*** - sets the current working directory for subsequent instructions in the Dockerfile. The `server` directory will be created for this container in Docker's environment ***COPY*** - separated by a space, this command tells Docker to copy files/folders ***from the local environment to the Docker environment***. The code above is saying that all the contents in the src and prisma folders need to be copied to the `/server/src` & `/server/prisma` folders in Docker, and package.json to be copied to the `server` directory's root. ***RUN*** - executes commands in the terminal. The commands in the code above will install the necessary node modules, and also generate a prisma client for interacting with the database (it will be needed for seeding the database initially). 
*`docker-compose.yml`* ```yml server: container_name: server build: context: ./server dockerfile: Dockerfile ports: - "7999:8000" command: bash -c "npx prisma migrate reset --force && npm start" environment: DATABASE_URL: "${DATABASE_URL}" PORT: "${SERVER_PORT}" depends_on: postgres: condition: service_healthy ``` **Explanation** ***build***: defines the build context for the container. This can contain steps to build the container, or a path to a Dockerfile that has the instructions written. The ***context*** key directs the path, and the ***dockerfile*** key contains the name of the Dockerfile. ***command***: executes commands according to the instructions that are given. This particular command is executed to first make migrations to the database and seed it, and then start the server. ***environment***: contains the key-value pairs for the environment, which are available in the .env file at the root directory. `DATABASE_URL` and `PORT` both contain corresponding values in the .env file. ***depends_on***: checks whether the dependent container is up, running and functional. This has various properties, but in this example, it is checking the `service_healthy` flag of our postgres container. The `server` container will only start if this flag is returned as `true` by the PostgreSQL ***healthcheck***. ##### **3. Client** *`Dockerfile`* ```Dockerfile FROM node:18 ARG VITE_SERVER_URL=http://127.0.0.1:7999 ENV VITE_SERVER_URL=$VITE_SERVER_URL WORKDIR /client COPY public/ /client/public COPY src/ /client/src COPY index.html /client/ COPY package.json /client/ COPY vite.config.js /client/ RUN npm install RUN npm run build ``` **Explanation** Note: *The commands for `client` are very similar to those already explained above for `server`* ***ARG***: defines a variable that is later passed to the ***ENV*** instruction ***ENV***: Assigns a key value pair into the context of the Docker environment for the container to run. 
This essentially contains the domain of the API that will be fired from the client later. *`docker-compose.yml`* ```yml client: container_name: client build: context: ./client dockerfile: Dockerfile command: bash -c "npm run preview" ports: - "4172:4173" depends_on: - server ``` **Explanation** Note: *The commands for `client` are very similar to those already explained above for `server` and `postgres`* This tutorial provides a basic understanding of using Docker Compose to manage a full-stack application. Explore the code and docker-compose.yml file for further details. The source code can be found [here on Github](https://github.com/snigdho611/docker-compose-react-nodejs-postgres).
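For completeness, the `.env` file referenced throughout would contain entries along these lines. The variable names are the ones used in the compose file above; the values shown are placeholders, not taken from the repository:

```sh
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_DB=docker_test_db
# the host is the compose service name, reachable on Docker's internal network
DATABASE_URL=postgresql://postgres:changeme@postgres:5432/docker_test_db
SERVER_PORT=8000
```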
snigdho611
1,888,865
Guide to Blockchain Interoperability
Blockchain technology has undoubtedly revolutionized various industries by providing a decentralized...
0
2024-06-14T18:08:00
https://dev.to/davidking/guide-to-blockchain-interoperability-ff8
blockchain, web3, cryptocurrency, learning
Blockchain technology has undoubtedly revolutionized various industries by providing a decentralized and secure platform for transactions and data management. However, as the number of blockchain networks continues to grow, the need for interoperability between these networks becomes increasingly critical. In this guide, we'll explore the concept of blockchain interoperability, its importance, challenges, approaches, real-world use cases, future trends, and considerations for developers and businesses. ## Understanding Blockchain Networks Blockchain networks are decentralized digital ledgers that record transactions in a secure and transparent manner. Examples include Bitcoin, Ethereum, and Hyperledger, each with its own consensus mechanisms and governance structures. However, these networks often operate in isolation, limiting their potential for collaboration and synergy. ## What is Blockchain Interoperability? Blockchain interoperability involves enabling communication and data exchange between disparate blockchain networks. This can take various forms, including protocol-level interoperability, platform interoperability, and cross-chain interoperability. By establishing interoperability standards and protocols, blockchain networks can seamlessly share information and assets, unlocking new possibilities for innovation and value creation. ## Challenges to Interoperability Despite its potential benefits, achieving blockchain interoperability presents several challenges. These include scalability issues, disparities in consensus mechanisms, security concerns, and regulatory hurdles. Overcoming these challenges requires innovative solutions and collaboration across the blockchain ecosystem. ## Approaches to Achieving Interoperability Several approaches have been proposed to address the challenges of blockchain interoperability. These include atomic swaps, sidechains, cross-chain communication protocols, and interoperability standards. 
Projects such as Cosmos, Polkadot, Aion, and Wanchain are pioneering efforts to facilitate interoperability between different blockchain networks.

## Prominent Projects and Initiatives

Cosmos, Polkadot, Aion, and Wanchain are among the leading projects and initiatives focused on blockchain interoperability. These platforms leverage unique architectures and technologies to enable seamless communication and data exchange between disparate blockchain networks, laying the foundation for a more interconnected and interoperable blockchain ecosystem.

## Real-World Use Cases

Blockchain interoperability has numerous real-world applications across various industries. For example, it can facilitate cross-border payments, streamline supply chain management, enable interoperable decentralized finance (DeFi) platforms, and enhance identity management systems. By bridging different blockchain networks, interoperability unlocks new opportunities for efficiency, transparency, and collaboration.

## Future Trends and Developments

The future of blockchain interoperability holds significant promise, with ongoing advancements in technology and standards. As interoperability solutions evolve and mature, we can expect to see greater integration between blockchain networks, enhanced scalability, and improved security. These developments will drive further innovation and adoption of blockchain technology across industries.

## Considerations for Developers and Businesses

Developers and businesses looking to leverage blockchain interoperability should carefully consider various factors. These include the compatibility of interoperability solutions with existing systems, integration challenges, and the potential risks and opportunities associated with cross-chain transactions. By understanding these considerations, organizations can effectively harness the power of blockchain interoperability to drive innovation and growth.
## Conclusion

Blockchain interoperability is a crucial enabler of collaboration, innovation, and value creation in the blockchain ecosystem. By enabling seamless communication and data exchange between disparate blockchain networks, interoperability unlocks new opportunities for efficiency, transparency, and collaboration across industries. As technology advances and standards evolve, the future of blockchain interoperability looks brighter than ever, promising a more interconnected and interoperable blockchain ecosystem for the benefit of all.
davidking
1,888,864
Introduction to HTML: The Backbone of the Web by Michael Savage
In the realm of web development, HTML stands as the fundamental building block, a cornerstone...
0
2024-06-14T17:59:58
https://dev.to/savagenewcanaan/introduction-to-html-the-backbone-of-the-web-48m
html
<p style="text-align: justify;">In the realm of web development, HTML stands as the fundamental building block, a cornerstone technology that shapes the very essence of the internet. HTML, or HyperText Markup Language, is the standard language used to create and design web pages and web applications. Despite the advent of numerous web technologies, HTML remains indispensable, offering a robust, flexible, and straightforward approach to web design.</p> <h2 style="text-align: justify;">What is HTML?</h2> <p style="text-align: justify;">HTML is a markup language, which means it uses tags to annotate text, images, and other content to be displayed in a web browser. These tags define the structure and layout of a web page, enabling browsers to interpret and render the content as intended. HTML is not a programming language but a descriptive language that provides context to the data it encloses.</p> <h3 style="text-align: justify;">The Evolution of HTML</h3> <p style="text-align: justify;">The development of <a href="https://en.wikipedia.org/wiki/HTML">HTML</a> has been a dynamic journey since its inception in 1991 by Tim Berners-Lee, the father of the World Wide Web. The first version, HTML 1.0, was a simple, text-based format used for sharing documents over the internet. As the web grew, so did the need for more complex and interactive features, leading to successive versions with enhanced capabilities.</p> <p style="text-align: justify;">HTML 2.0, released in 1995, laid the groundwork for more standardized web pages. HTML 3.2, introduced in 1997, added support for features like tables and applets. HTML 4.01, released in 1999, was a significant leap, incorporating stylesheets (CSS) and scripting (JavaScript) to create more dynamic and visually appealing web pages.</p> <p style="text-align: justify;">The most recent iteration, HTML5, launched in 2014, represents a substantial advancement. 
It brings native support for multimedia elements (like audio and video), new semantic tags, enhanced accessibility features, and improved performance. HTML5 has been instrumental in fostering a more interactive and engaging web experience.</p> <h3 style="text-align: justify;">Core Components of HTML</h3> <p style="text-align: justify;">Understanding HTML requires familiarity with its basic components:</p> <ol style="text-align: justify;"> <li> <p><strong>Tags</strong>: HTML uses a set of predefined tags to structure content. Tags are enclosed in angle brackets (e.g., <code>&lt;html&gt;</code>, <code>&lt;head&gt;</code>, <code>&lt;body&gt;</code>). Most tags have an opening tag and a closing tag, with content placed between them.</p> </li> <li> <p><strong>Elements</strong>: An HTML element consists of an opening tag, content, and a closing tag. For example, <code>&lt;p&gt;This is a paragraph.&lt;/p&gt;</code> defines a paragraph element.</p> </li> <li> <p><strong>Attributes</strong>: Tags can have attributes that provide additional information about an element. Attributes are placed within the opening tag and usually come in name/value pairs (e.g., <code>&lt;a href="https://www.example.com"&gt;Link&lt;/a&gt;</code>).</p> </li> <li> <p><strong>Document Structure</strong>: A typical HTML document follows a standard structure:</p> <ul> <li><code>&lt;!DOCTYPE html&gt;</code> declaration to specify the HTML version.</li> <li><code>&lt;html&gt;</code> root element enclosing the entire document.</li> <li><code>&lt;head&gt;</code> element containing meta-information, title, and links to stylesheets.</li> <li><code>&lt;body&gt;</code> element housing the visible content of the web page.</li> </ul> </li> </ol> <h3 style="text-align: justify;">Why HTML is Essential</h3> <p style="text-align: justify;">HTML's significance lies in its simplicity, universality, and compatibility. It provides a platform-independent way to display content across various devices and browsers. 
Its straightforward syntax makes it accessible to beginners, yet its extensive capabilities allow for the creation of complex web applications.</p> <p style="text-align: justify;">Furthermore, HTML's semantic tags enhance search engine optimization (SEO) and accessibility, making content more understandable to both search engines and assistive technologies. This semantic approach also improves the maintainability and scalability of web projects.</p> <h2 style="text-align: justify;">The Future of HTML</h2> <p style="text-align: justify;">HTML continues to evolve, driven by the needs of modern web development. The integration of HTML with CSS for styling and JavaScript for interactivity creates a powerful trifecta that underpins the vast majority of websites today. As technologies like WebAssembly, Progressive Web Apps (PWAs), and the WebXR API for augmented and virtual reality gain traction, HTML will adapt to support these innovations, ensuring its relevance for years to come.</p> <p style="text-align: justify;">HTML is much more than a mere language; it is the backbone of the web, enabling the creation of diverse and dynamic online experiences. Whether you are a novice developer embarking on your web development journey or an experienced programmer, understanding HTML is crucial to mastering the art and science of building the web.</p>
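<p style="text-align: justify;">Putting these pieces together, a minimal (hypothetical) HTML document combining the tags, elements, attributes, and document structure described above might look like this:</p>

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <!-- Meta-information and the page title live in the head -->
    <title>Example Page</title>
  </head>
  <body>
    <!-- Visible content lives in the body -->
    <h1>Hello, web!</h1>
    <p>This paragraph contains a <a href="https://www.example.com">link</a>,
       showing an attribute (href) in an opening tag.</p>
  </body>
</html>
```

<p style="text-align: justify;">Saved as an <code>.html</code> file and opened in any browser, this skeleton renders the same way across platforms, illustrating the platform independence discussed above.</p>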
savagenewcanaan
1,888,858
Rapid profit machine review in 2024
The Rapid Profit Machine (RPM) is a program that teaches affiliate marketing with a focus on using...
0
2024-06-14T17:51:09
https://dev.to/trustreviews/rapid-profit-machine-review-in-2024-1nld
review
The Rapid Profit Machine (RPM) is a program that teaches affiliate marketing with a focus on using solo ads for traffic generation. While it offers some basic training on affiliate marketing, there are red flags to consider, such as the upselling of additional products and the potential for promoting spammy offers. Here's a summary of what you might find helpful to know about RPM:

* **Pros:** Free to start, offers basic affiliate marketing training
* **Cons:** Upsells, potential for promoting low-quality products, focus on solo ads (which can be expensive and ineffective)

If you're interested in learning affiliate marketing, there are many free and reputable resources available online. You can also consider checking out other affiliate marketing programs that don't rely on solo ads.

[view full content](https://thedailylifereview.com/review-content-01)
trustreviews
1,888,856
Optimize Your PC to Enjoy Strategy Games
The world of video games has evolved by leaps and bounds in recent years, leading...
0
2024-06-14T17:47:32
https://dev.to/carlos_eduardoriveraurb/optimiza-tu-pc-para-disfrutar-de-juegos-de-estrategia-2e3c
The world of video games has evolved by leaps and bounds in recent years, driving developers to create masterpieces full of action, stunning graphics, and innovative game mechanics. However, to enjoy these games to the fullest, it is crucial to have a machine that meets the necessary **system requirements**.

Some players make the mistake of not checking whether their PC can handle the demands of a new game before buying it. This can lead to a frustrating gaming experience, marked by low frame rates, choppy graphics, and endless loading times. To avoid this, it is always a good idea to verify that your machine is up to the task before installing any new title.

One of the recent gems in the strategy genre is Kingdom Wars 2: Battles. This game combines base building, resource management, and tactical combat, taking you to a medieval world full of intrigue and epic battles. But how can you make sure your PC can handle this title smoothly?

## Minimum and Recommended Requirements

| Requirement | Minimum | Recommended |
| --- | --- | --- |
| Processor | Intel i5 | Intel i7 |
| Memory | 6 GB RAM | 8 GB RAM |
| Graphics card | NVIDIA GTX 660 | NVIDIA GTX 970 |
| Storage | 10 GB | 15 GB |

These basic specifications can give you an idea of what you need, but determining compatibility can be trickier than it seems. Fortunately, there are online tools that can automatically check whether your PC meets the requirements for a specific game. If you have any doubts about your machine's capabilities, I recommend reviewing a complete guide with precise, easy-to-follow steps to evaluate whether your PC is powerful enough to run Kingdom Wars 2: Battles.
Remember, nothing is more disappointing than being excited about a new game only to discover that your machine can't handle it. Do your research, upgrade your hardware if necessary, and dive into the exciting world of strategy games.
carlos_eduardoriveraurb
1,888,855
What exactly is middleware?
Middleware is software that acts as an intermediary between different components of a system. It...
0
2024-06-14T17:45:51
https://dev.to/samuel_kelv/what-exactly-is-middleware-193m
devchallenge, cschallenge, computerscience, beginners
Middleware is software that acts as an intermediary between different components of a system. It facilitates communication, data processing, and coordination between applications, databases, and other software elements. Middleware provides a consistent interface, abstracts away complexity, and enables different parts of a system to work together seamlessly.
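The pattern is easiest to see in code. Below is a minimal, framework-free sketch in JavaScript (the `compose` helper and the middleware names are illustrative, not any specific library's API): each middleware receives a shared context and a `next` function, does its work, and hands control to the next component in the chain.

```javascript
// Each middleware: (ctx, next) => void. It can read/write the shared
// context, then call next() to pass control down the chain.
const logger = (ctx, next) => { ctx.log.push("request:" + ctx.path); next(); };
const auth = (ctx, next) => { ctx.user = "guest"; next(); };
const handler = (ctx) => { ctx.body = `hello ${ctx.user}`; };

// compose() turns a list of middlewares into one pipeline function.
function compose(middlewares) {
  return (ctx) => {
    const dispatch = (n) => {
      const fn = middlewares[n];
      if (fn) fn(ctx, () => dispatch(n + 1));
    };
    dispatch(0);
  };
}

const app = compose([logger, auth, handler]);
const ctx = { path: "/home", log: [] };
app(ctx);
console.log(ctx.body); // "hello guest"
```

The same shape — a chain of components that each see the request, transform it, and pass it along — is what web frameworks, message brokers, and integration layers provide at larger scale.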
samuel_kelv
1,888,854
Cookies in Javascript
Cookies are small pieces of data stored on the client's browser by a website. They are used to...
0
2024-06-14T17:45:38
https://dev.to/zeeshanali0704/cookies-in-javascript-5fbl
Cookies are small pieces of data stored on the client's browser by a website. They are used to remember information about the user between page loads or sessions. Here's a detailed explanation of cookies in JavaScript and the differences between HTTP and HTTPS cookies.

**Cookies in JavaScript**

### Creating and Setting Cookies

To create and set a cookie in JavaScript, you can use the `document.cookie` property. Here's an example:

```
// Set a cookie
document.cookie = "username=JohnDoe; expires=Fri, 31 Dec 2024 12:00:00 UTC; path=/";
```

- `username=JohnDoe`: The name and value of the cookie.
- `expires`: The expiration date and time. If not set, the cookie will be a session cookie and will be deleted when the browser is closed.
- `path=/`: The path where the cookie is accessible. If set to `/`, the cookie is accessible within the entire domain.

### Reading Cookies

To read cookies, you can access `document.cookie`, which returns all cookies as a single string:

```
// Read all cookies
const cookies = document.cookie; // "username=JohnDoe; otherCookie=SomeValue"
```

To get a specific cookie value, you may need to parse this string:

```
// Get a specific cookie value
function getCookie(name) {
  const value = `; ${document.cookie}`;
  const parts = value.split(`; ${name}=`);
  if (parts.length === 2) return parts.pop().split(';').shift();
}
```

### Deleting Cookies

To delete a cookie, set its expiration date to a past date:

```
// Delete a cookie
document.cookie = "username=; expires=Thu, 01 Jan 1970 00:00:00 UTC; path=/;";
```

**HTTP vs. HTTPS Cookies**

### HTTP Cookies

- **Unsecured transmission**: HTTP cookies are transmitted over regular HTTP connections. This means the data, including the cookie information, is not encrypted and can be intercepted by attackers.
- **Set with the `Set-Cookie` header**: When a server responds to an HTTP request, it can include a `Set-Cookie` header to store a cookie in the browser.
- **Basic usage**: Suitable for non-sensitive data or during development in a secure environment.
### HTTPS Cookies

- **Secured transmission**: HTTPS cookies are transmitted over HTTPS connections, ensuring that the data is encrypted during transmission. This prevents eavesdropping and man-in-the-middle attacks.
- **`Secure` attribute**: When setting cookies for HTTPS, you can use the `Secure` attribute to ensure the cookie is only sent over HTTPS:

```
Set-Cookie: name=value; Secure
```

- **`HttpOnly` attribute**: This attribute can be added to cookies to prevent JavaScript from accessing them, enhancing security against XSS (Cross-Site Scripting) attacks:

```
Set-Cookie: name=value; HttpOnly
```

### Differences

- **Security**:
  - HTTP: Data is not encrypted, making it vulnerable to interception.
  - HTTPS: Data is encrypted, providing secure transmission.
- **Attributes**:
  - `Secure`: Ensures cookies are only sent over HTTPS.
  - `HttpOnly`: Prevents access to cookies via JavaScript, providing additional security.
- **Usage**:
  - HTTP cookies: Suitable for non-sensitive data or during development.
  - HTTPS cookies: Recommended for sensitive data and production environments to ensure data security.

### Example

Here's an example of setting a secure and HttpOnly cookie:

```
Set-Cookie: sessionToken=abc123; Secure; HttpOnly; Path=/; Expires=Fri, 31 Dec 2024 12:00:00 GMT;
```

- `sessionToken=abc123`: The name and value of the cookie.
- `Secure`: The cookie is only sent over HTTPS.
- `HttpOnly`: The cookie cannot be accessed via JavaScript.
- `Path=/`: The cookie is accessible within the entire domain.
- `Expires`: The expiration date and time.

By understanding and correctly implementing cookies, you can manage user sessions, store user preferences, and enhance the security of your web applications.
zeeshanali0704
1,888,852
Internship Report at Kali Academy
Acknowledgements I would like to thank Mr. Abel Mbula, CEO of Kali Academy, and Ms. Patience Kavira...
0
2024-06-14T17:36:32
https://dev.to/cub_ger24/rapport-de-stage-chez-kali-academy-2gn4
github, opensource, react, wikipedia
**Acknowledgements**

I would like to thank Mr. Abel Mbula, CEO of Kali Academy, and Ms. Patience Kavira for the opportunity to take part in this internship, as well as the trainers, Mr. Delord Wayire, and my colleagues from the March 2024 cohort of Kali Academy, especially my partner Firmin Finva and my teammates on the Wikidata Query AI project, Landry Bitege, Patrice Kalwira, and Espoir Birusha, for their collaboration, support, and knowledge sharing.

**Introduction**

This report presents my internship experience at Kali Academy in Goma, Democratic Republic of the Congo. The internship ran from March 11 to June 14, 2024, and was an opportunity to discover the world of open-source development and hacker culture.

**First lesson: The hacker attitude**

The first month of the internship was devoted to introducing the concept of the hacker, based on the text "What is the Hacker & Hacker Attitude" by Eric Steven Raymond. This text clarified the fundamental distinction between hackers and crackers. Hackers are technology enthusiasts who build and share tools and resources, while crackers are individuals driven by malicious intent.

The hacker attitude rests on five key principles:

* **The world is full of fascinating problems waiting to be solved.**
* **No problem should ever have to be solved twice.**
* **Boredom and drudgery are evil.**
* **Freedom is good.**
* **Attitude is no substitute for competence.**

By adopting these principles, hackers develop a proactive and creative approach to solving problems and contributing to technological progress.

**Learn In Public**

The second month of the internship was devoted to the "Learn In Public" concept. This concept encourages sharing your knowledge and learning continuously by creating educational content and taking part in events. The goal is not to focus on external recognition, but rather on personal improvement and helping others.

**Contributing to an issue on GitHub**

The final month of the internship was devoted to learning the steps to follow to contribute to an issue on GitHub. The key steps are:

1. **Make sure the issue is still open**
2. **Read the instructions**
3. **Fork the repository**
4. **Clone the repository locally**
5. **Create a local branch**
6. **Make the necessary changes**
7. **Commit and push**
8. **Create a Pull Request (PR)**
9. **Follow up on comments and make any requested changes**
10. **Wait for approval and merge**

By following these steps, contributors can make changes to open-source projects and take part in their development.

This internship at Kali Academy was an enriching experience that allowed me to discover hacker culture and the principles of open-source development. I also learned to share my knowledge and contribute to collaborative projects. I am grateful to the trainers and the members of the Kali Academy community for their welcome and their knowledge sharing.

**Next steps**

I plan to put the knowledge acquired during this internship into practice by continuing to learn about open-source development and contributing to projects I am passionate about. I also want to share my knowledge with others by creating educational content and taking part in events.

**Second lesson: Choosing an open-source project and learning the basics of Git**

The second day of the internship was devoted to two important topics:

**1. Choosing an open-source project**

Selecting an open-source project to contribute to is a crucial step for getting involved in the open-source community.
To make an informed choice, it is recommended to consider the following points:

* **Personal interests:** Choose a project aligned with your interests, whether a specific technology area, a favorite programming language, or a cause you care about.
* **Skill level:** Select a project that matches your skill level. Some projects are better suited to beginners, while others require more advanced skills.
* **Project size:** Consider the size of the project and its community. Smaller projects can offer a more collaborative experience, while larger projects may offer learning and contribution opportunities at a bigger scale.
* **Open issues:** Browse the list of open issues on GitHub to identify tasks to work on. Some projects label issues suitable for new contributors, which can be a good starting point.

**Resources for finding open-source projects:**

* **GitHub Explore:** Browse the "Explore" section on GitHub to discover popular and trending projects in various fields.
* **GitHub Topics:** Use the "Topics" feature on GitHub to search for projects by category, programming language, or technology.
* **Hacktoberfest:** Take part in events like Hacktoberfest, which encourage contributing to open-source projects during October.
* **Dedicated websites:** Check out websites like Open Source Friday, Up For Grabs, CodeTriage, or First Timers Only, which list open-source projects that welcome new contributors.
* **Social networks and forums:** Follow open-source accounts and communities on social media, and take part in forums such as Reddit r/opensource to discover interesting projects.

**2. Learning the basics of Git**

Git is a version control system that is essential to open-source development. It tracks changes made to files, enables collaboration with other contributors, and manages different versions of the code.

The internship covered the fundamentals of using Git, including:

* **Creating a branch:** `git checkout -b branch_name`
* **Navigating files and directories:** `cd`, `ls`, `pwd`
* **Creating, copying, moving, and deleting files and directories:** `touch`, `mkdir`, `cp`, `mv`, `rm`
* **Displaying file contents:** `cat`, `more`, `less`
* **Finding files:** `find`
* **Changing file permissions:** `chmod`
* **Useful commands:** `grep`, `wc`, `head`, `tail`
* **Using the Nano editor:** `nano`
* **Using the Vim editor:** `vim`

**Resources for learning Git:**

* [https://www.freecodecamp.org/news/git-and-github-for-beginners/](https://www.freecodecamp.org/news/git-and-github-for-beginners/)
* [https://docs.gitlab.com/ee/ci/migration/github_actions.html](https://docs.gitlab.com/ee/ci/migration/github_actions.html)
* [https://support.atlassian.com/jira-cloud-administration/docs/link-github-workflows-and-deployments-to-jira-issues/](https://support.atlassian.com/jira-cloud-administration/docs/link-github-workflows-and-deployments-to-jira-issues/)
* [https://www.atlassian.com/git/tutorials/comparing-workflows](https://www.atlassian.com/git/tutorials/comparing-workflows)
* [https://youtu.be/RGOj5yH7evk](https://youtu.be/RGOj5yH7evk)

The first two months of the internship introduced the fundamental principles of open-source development, the importance of contributing to existing projects, and the essential tools for getting involved in the community.
Choosing a suitable project and learning the basics of Git are important preliminary steps for getting started as an open-source contributor.

**Third month: Contributing to an open-source project and learning Wikidata**

The third month of the internship was devoted to putting the acquired knowledge into practice. The interns had the opportunity to:

**1. Contribute to an open-source project**

Building on the notions learned during the first two days, the interns chose an open-source project they wanted to contribute to. They followed these steps:

* **Fork the project:** They created a copy of the project on their own GitHub account.
* **Clone the project locally:** They downloaded the project to their computer.
* **Make the changes:** They made changes to the project's code, following the project's guidelines and needs.
* **Test the changes:** They tested their changes to make sure they did not cause problems.
* **Push the changes:** They sent their changes to their fork on GitHub.
* **Create a Pull Request:** They submitted a merge request so that their changes could be integrated into the original project.

**2. Learn the basics of Wikidata**

Wikidata is a collaborative database for storing and querying structured information. The interns discovered:

* **The structure of Wikidata:** They understood how data is organized as subject-predicate-object triples.
* **The SPARQL language:** They learned to use SPARQL to query Wikidata's data.
* **Examples of SPARQL queries:** They practiced writing queries to retrieve specific information from Wikidata.
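As a small illustration of the kind of query practiced, the following sketch (against the public Wikidata endpoint; `wd:Q5` is the item for "human" and `wdt:P31` the "instance of" property) retrieves ten humans with their English labels:

```sparql
# Ten items that are instances of human (Q5), with English labels
SELECT ?person ?personLabel WHERE {
  ?person wdt:P31 wd:Q5.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
```

Each triple pattern in the WHERE clause mirrors the subject-predicate-object structure described above.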
**Resources for learning Wikidata:**

* [https://query.wikidata.org/](https://query.wikidata.org/)
* [https://www.wikidata.org/wiki/Wikidata:SPARQL_tutorial](https://www.wikidata.org/wiki/Wikidata:SPARQL_tutorial)
* [https://www.wikidata.org/wiki/Q31386861](https://www.wikidata.org/wiki/Q31386861)

**Conclusion**

The internship at Kali Academy allowed the participants to acquire fundamental knowledge of open-source development and the use of Wikidata. They were able to put what they learned into practice by contributing to an open-source project and exploring SPARQL queries. This internship was an enriching and motivating experience for the interns, encouraging them to get more involved in the open-source community and to continue learning in this field.

Gérard Cubaka Bisimwa
https://ger-cub.github.io/portfoliogerard/
cub_ger24
1,888,851
What are props in ReactJs?
In React.js, props (short for properties) are the way to pass data from a parent component to a child...
0
2024-06-14T17:36:08
https://dev.to/mojahidulislam11/what-are-props-in-reactjs-3d0a
In React.js, props (short for properties) are the way to pass data from a parent component to a child component. This allows for a unidirectional data flow, where the data flows from the parent to the child, and not the other way around.

Suppose we have a parent component named `Dad` and a child component named `MySelf`. Inside the `MySelf` component, we have another child component called `SpecialPerson`. The goal is for the `Dad` component to send a gift to the `SpecialPerson` component.

**Dad Component:**

The `Dad` component wants to send a gift to the `SpecialPerson` component. To do this, it passes a prop called `gift` to the `MySelf` component.

```
<MySelf gift="Gold Ring" />
```

**MySelf Component:**

The `MySelf` component receives the `gift` prop as an object in its function parameter.

```
const MySelf = ({ gift }) => {
  // Now the `MySelf` component can access the `gift` prop
  return (
    <div>
      <p>I received a {gift} from my dad!</p>
      <SpecialPerson gift={gift} />
    </div>
  );
};
```

**SpecialPerson Component:**

The `MySelf` component then passes the `gift` prop to the `SpecialPerson` component.

```
const SpecialPerson = ({ gift }) => {
  // The `SpecialPerson` component can now access the `gift` prop
  return (
    <div>
      <p>Wow, I received a {gift} from my dad through MySelf!</p>
    </div>
  );
};
```
mojahidulislam11
1,888,850
Use Kafka in your Web Api
Hello guys! Today I challenged myself to learn a little bit about event streaming, so I created an...
0
2024-06-14T17:34:26
https://dev.to/vzldev/use-kafka-in-your-web-api-38ha
kafka, dotnet, csharp, tutorial
Hello guys! Today I challenged myself to learn a little bit about event streaming, so I created an example using Kafka.

## What is event streaming?

Event streaming is the practice of capturing data in real-time from event sources like databases, sensors, mobile devices, cloud services, and software applications in the form of streams of events; storing these event streams durably for later retrieval; manipulating, processing, and reacting to the event streams in real-time as well as retrospectively; and routing the event streams to different destination technologies as needed. (https://kafka.apache.org/intro)

## What is Kafka?

Kafka is an event streaming platform that allows you to:

- Write/read data.
- Store data.
- Process data.

## Key Kafka Concepts

- **Event**: events, in a nutshell, are the data that we'll store. An event can also be called a record or a message. When you read or write data to Kafka, you do this in the form of events.
- **Topic**: the place where the events will be stored.
- **Producers**: the client applications that publish (write) events to Kafka.
- **Consumers**: the client applications that subscribe to (read and process) events.

In my project, I made changes to my EventsAPI so that when a user registers for an event, some data is stored in a topic, which a new LogsAPI will read and store in a database.

## Using Kafka step by step

## Step 1

To start using Kafka, I recommend pulling a Kafka container image and running it in a Docker container. After you pull the image, run it.
## Step 2

In your producer and consumer APIs, install the following NuGet package:

- Confluent.Kafka

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gzk8wzosc2mvlxgnqx48.png)

## Step 3

Update your producer API to write data to a Kafka topic:

```
[Route("[controller]")]
[ApiController]
public class EventEnrollmentController : ControllerBase
{
    protected IEventsRepository _repository;
    private readonly IProducer<string, string> _producer;

    public EventEnrollmentController(IEventsRepository repository)
    {
        _repository = repository;
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
        _producer = new ProducerBuilder<string, string>(config).Build();
    }

    [HttpPut("{eventId}/register")]
    public async Task<ActionResult<bool>> AddUserToEvent(Guid userId, Guid eventId)
    {
        Event evento = await _repository.GetEventById(eventId);
        if (evento == null)
        {
            return NotFound();
        }

        evento.Users.Add(userId);

        LogEntry log = new LogEntry()
        {
            Id = Guid.NewGuid(),
            EventTitle = evento.Title,
            UserId = userId,
            RegistrationTime = DateTime.UtcNow,
        };

        await _producer.ProduceAsync("user-event-registrations", new Message<string, string>
        {
            Key = "UserEventRegistered",
            Value = JsonConvert.SerializeObject(log)
        });

        return await _repository.UpdateEvent(evento);
    }
}
```

In this case, a topic called _user-event-registrations_ is created.
## Step 4

On your consumer API, make the following changes so it will consume data from the _user-event-registrations_ topic:

```
public class LogService : BackgroundService
{
    private readonly IServiceProvider _serviceProvider;
    private readonly IConsumer<string, string> _consumer;
    private readonly ILogger<LogService> _logger;

    public LogService(IServiceProvider serviceProvider, ILogger<LogService> logger)
    {
        _serviceProvider = serviceProvider;
        _logger = logger;
        var config = new ConsumerConfig
        {
            GroupId = "log-group",
            BootstrapServers = "localhost:9092",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };
        _consumer = new ConsumerBuilder<string, string>(config).Build();
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _consumer.Subscribe("user-event-registrations");

        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                var consumeResult = _consumer.Consume(stoppingToken);
                var registration = JsonConvert.DeserializeObject<LogEntry>(consumeResult.Message.Value);

                using (var scope = _serviceProvider.CreateScope())
                {
                    var context = scope.ServiceProvider.GetRequiredService<LogContext>();
                    context.LogEntries.Add(registration);
                    await context.SaveChangesAsync(stoppingToken);
                }
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Error processing Kafka message.");
            }
        }
    }
}
```

Don't forget to add the dependencies to your Program.cs file.

And that's it, guys: a simple use case of event streaming using Kafka. I hope you liked it; stay tuned for more!
vzldev
1,888,849
Creative CSS Border
Check out this Pen I made!
0
2024-06-14T17:34:21
https://dev.to/kemiowoyele1/creative-css-border-1cme
codepen
Check out this Pen I made! {% codepen https://codepen.io/frontend-magic/pen/MWdONMo %}
kemiowoyele1
1,888,847
Java Journey: From Novice to Ninja
Introduction to Java Importance of Java Java is one of the most popular and...
0
2024-06-14T17:32:00
https://dev.to/tutorialq/java-journey-from-novice-to-ninja-3527
java, javaroadmap, programming, javacollections
## Introduction to Java

### Importance of Java
Java is one of the most popular and widely used programming languages in the world. Its platform independence, robust performance, and extensive libraries make it a top choice for developing a wide range of applications, from mobile apps and enterprise software to web services and scientific applications.

### Key Features of Java:
- **Platform Independence**: Java's "write once, run anywhere" capability is made possible through the Java Virtual Machine (JVM).
- **Object-Oriented**: Encourages modular, reusable code.
- **Robust and Secure**: Strong memory management and security features.
- **Multithreading**: Built-in support for concurrent programming.
- **Rich API**: Extensive set of libraries for everything from data structures to networking.
- **Community Support**: Large, active community providing frameworks, tools, and resources.

### Uses of Java
Java is used in a variety of domains, including:
- **Enterprise Applications**: Banking, finance, e-commerce (e.g., web servers, transaction processing).
- **Mobile Applications**: Android apps (Java is the primary language for Android development).
- **Web Applications**: Server-side technologies such as JavaServer Pages (JSP) and Servlets.
- **Scientific Applications**: High-performance applications using Java's computational capabilities.
- **Big Data**: Hadoop and other big data technologies are written in Java.
- **Embedded Systems**: Java ME for small devices and embedded systems.

### Top-Tier Companies Using Java
Several leading companies rely on Java for their critical systems and applications:
- **Google**: Uses Java for Android app development.
- **Amazon**: Backend services and infrastructure.
- **Netflix**: Microservices architecture.
- **LinkedIn**: Server-side development.
- **Uber**: Backend systems.
- **Airbnb**: Data infrastructure and web services.

## Java Learning Roadmap
The roadmap is divided into three main levels: Basic, Intermediate, and Advanced. Each level builds on the previous one, ensuring a comprehensive understanding of Java programming.

### Basic Level

#### Introduction to Java
- Java history and evolution
- Setting up the development environment (JDK, IDEs like IntelliJ IDEA, Eclipse)

#### Basic Syntax and Constructs
- Keywords in Java
- Variables and Data Types
- Operators and Expressions
- Control Structures (if-else, switch, loops)
- In-Depth Analysis of Strings in Java

#### Object-Oriented Programming (OOP)
- Classes and Objects
- Methods
- Constructors
- Inheritance
- Polymorphism
- Encapsulation
- Abstraction

#### Basic I/O
- Reading from and writing to the console
- Working with files (FileReader, FileWriter)

#### Exception Handling
- try-catch blocks
- Throwing exceptions
- Custom exceptions

#### Collections Framework
- Lists (ArrayList, LinkedList)
- Sets (HashSet, TreeSet)
- Maps (HashMap, TreeMap)
- Queue
- Stack

### Intermediate Level

#### Advanced OOP Concepts
- Interfaces and Abstract Classes
- Inner Classes
- Static and Final Keywords

#### Generics
- Generic Classes and Methods
- Bounded Type Parameters
- Wildcards

#### Asynchronous Programming
- Futures and CompletableFutures

#### Advanced Concepts
- Multithreading and Concurrency
  - Thread lifecycle
  - Creating threads (Runnable, Thread class)
  - Synchronization
  - Concurrency utilities (Executors, Future, Callable, Locks)
- Streams and Lambda Expressions
  - Introduction to Functional Programming
  - Lambda Expressions
  - Stream API (map, filter, reduce)
  - Collectors
  - Functional Programming Best Practices
- File I/O and Serialization
  - Advanced file handling (NIO)
  - Object Serialization and Deserialization
- Networking
  - Sockets and ServerSockets
  - HTTP requests and responses
  - Building simple web servers
- Databases
  - JDBC (Java Database Connectivity)
  - Connecting to databases
  - Executing SQL queries
  - ORM with Hibernate

### Advanced Level

#### Design Patterns
- Creational (Singleton, Factory, Builder)
- Structural (Adapter, Decorator, Proxy, Composite, Facade)
- Behavioral (Observer, Strategy, Command, Template Method)

#### Java Memory Management
- Garbage Collection
- Memory Leaks
- Profiling and Monitoring Tools (JVisualVM, YourKit)

#### JVM Internals
- JVM architecture
- Class loading mechanism
- Just-In-Time (JIT) compiler

#### Web Development with Java
- Servlets and JSP
- JavaServer Faces (JSF)
- RESTful Web Services (JAX-RS, Spring REST)
- WebSockets

#### Security
- Java Security API
- Encryption/Decryption (JCA, Bouncy Castle)
- Secure Coding Practices

#### Testing
- Unit Testing with JUnit and TestNG
- Mocking with Mockito
- Integration Testing
- Test-Driven Development (TDD)

## Conclusion
Learning Java is a journey that starts with understanding the basics and gradually moves towards mastering advanced concepts and technologies. This roadmap is designed to guide you through this journey, ensuring you build a solid foundation before tackling more complex topics. By following this roadmap, you'll gain the skills needed to develop robust, scalable, and high-performance applications using Java. Java's versatility and widespread adoption make it an essential skill for developers in various domains. Whether you're just starting or looking to deepen your existing knowledge, this roadmap provides a comprehensive path to becoming proficient in Java.
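Design patterns in the roadmap above are language-neutral in spirit. As a quick illustration of how compact one can be outside Java, here is a thread-safe Singleton sketched in Go using `sync.Once` (the `config` type and its field are invented purely for the example):

```go
package main

import (
	"fmt"
	"sync"
)

// config is a hypothetical shared resource guarded by the singleton.
type config struct {
	env string
}

var (
	instance *config
	once     sync.Once
)

// getInstance lazily creates the single config instance. sync.Once
// guarantees the initializer runs exactly once, even when getInstance
// is called from many goroutines concurrently.
func getInstance() *config {
	once.Do(func() {
		instance = &config{env: "production"}
	})
	return instance
}

func main() {
	a := getInstance()
	b := getInstance()
	fmt.Println(a == b) // both calls return the same pointer
}
```

In Java the equivalent is usually an enum singleton or a `static` holder class; the Go version trades class machinery for a package-level variable plus a synchronization primitive.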
tutorialq
1,888,846
Top 10 Most Famous and Efficient AI Tools Available in the Market
Artificial Intelligence (AI) tools have become indispensable in various industries, from healthcare...
0
2024-06-14T17:30:23
https://dev.to/futuristicgeeks/top-10-most-famous-and-efficient-ai-tools-available-in-the-market-10jf
webdev, ai, machinelearning, programming
Artificial Intelligence (AI) tools have become indispensable in various industries, from healthcare and finance to marketing and customer service. These tools enhance productivity, provide deep insights, and streamline complex processes. In this article, we will explore the top 10 most famous and efficient AI tools available in the market today, highlighting their features, applications, and benefits.

**1. TensorFlow**
TensorFlow, developed by Google Brain, is an open-source machine learning framework widely used for building and deploying machine learning models. It supports a range of deep learning and neural network algorithms.

**2. PyTorch**
Developed by Facebook’s AI Research lab, PyTorch is another leading open-source machine learning library that is particularly popular for deep learning applications.

**3. IBM Watson**
IBM Watson is a suite of AI tools and services aimed at business applications. It provides a range of AI capabilities through cloud-based APIs.

**4. Microsoft Azure AI**
Microsoft Azure AI provides a suite of AI services and tools for building, training, and deploying machine learning models through Azure’s cloud infrastructure.

**5. Google Cloud AI**
Google Cloud AI offers a wide array of machine learning and AI tools powered by Google’s expertise in AI and cloud computing.

Check out our latest article for a detailed report on the complete list of top AI tools: https://futuristicgeeks.com/top-10-most-famous-and-efficient-ai-tools-available-in-the-market/

Stay ahead with the latest tech insights by following us and visiting our website for more in-depth articles: [FuturisticGeeks](https://futuristicgeeks.com). Don't miss out on staying updated with the top AI tools and trends!
futuristicgeeks
1,888,844
Unique characters in a string
This is the first in a series of coding challenges for golang developers. We'll start easy, so let's...
27,729
2024-06-14T17:27:00
https://dev.to/johnscode/simple-golang-coding-question-1-4hlg
interviewquestions, go, programming, career
This is the first in a series of coding challenges for golang developers. We'll start easy, so let's get started.

Write a golang function to compute the maximum number of unique characters in a string.

```
package main

import (
	"fmt"
)

func main() {
	str := "Hello, World!"
	maxUnique := maxUniqueCharacters(str)
	fmt.Printf("The string \"%s\" contains %d unique characters.\n", str, maxUnique)
}

func maxUniqueCharacters(str string) int {
	charSet := make(map[rune]bool)
	for _, char := range str {
		charSet[char] = true
	}
	return len(charSet)
}
```

The solution uses a `map[rune]bool` to record the presence of each rune. Setting the value for a rune that is already present is a no-op, so duplicate occurrences are effectively ignored. Once the string has been fully scanned, the length of the map is the number of unique characters in the string.

How can this be improved? Post your thoughts in the comments. Thanks!

_The code for this post and all posts in this series can be found [here](https://github.com/johnscode/gocodingchallenges)_
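One possible improvement, offered as a sketch rather than the canonical answer: when the input is known to be ASCII, a fixed-size array can replace the map entirely, avoiding per-rune hashing and map allocation (the function name is my own):

```go
package main

import "fmt"

// uniqueASCII counts distinct characters without a map, assuming
// ASCII-only input: a 128-slot boolean array is allocation-free
// and each lookup is a direct index instead of a hash.
func uniqueASCII(s string) int {
	var seen [128]bool
	count := 0
	for _, c := range s {
		if c < 128 && !seen[c] {
			seen[c] = true
			count++
		}
	}
	return count
}

func main() {
	fmt.Println(uniqueASCII("Hello, World!")) // prints 10
}
```

For full Unicode input the original `map[rune]bool` (or the slightly leaner `map[rune]struct{}`, whose values occupy zero bytes) remains the right tool; the array trick only pays off when the character range is bounded.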
johnscode
1,888,834
Crypto Commerce - Future of E-Commerce?
Problem Tax and death are the two things in the world you cannot avoid! I disagree, both...
0
2024-06-14T17:26:33
https://dev.to/savyjs/crypto-commerce-future-of-e-commerce-hlp
## Problem
Tax and death are the two things in the world you cannot avoid! I disagree; both of them could be avoidable, or at least the first one. Imagine being able to sell goods to people without paying any irrational tax.

Also, some countries control their people and build walls between nations. This is unacceptable. You have the right to trade freely and connect with people all over the world whenever you want.

## Solution
Cryptocurrencies could be our solution. Despite the non-refundable nature of most cryptocurrencies, they could make humans truly free in terms of trading and payment. Humans built a form of currency no government can control.

## Features
1. Selling goods/services with cryptocurrency
2. Managing inventory
3. Online presence of products
4. Connecting to different markets
5. Converting prices to different cryptocurrencies
6. Payers can use a wide range of cryptocurrencies
7. Sellers can receive their money quickly in their wallet

## Challenges
1. The seller needs to trust the POS mechanism
2. The payer needs to trust the seller
3. Settlement must be fast
4. The seller should guarantee their product
5. The seller should reward customers who pay via crypto

## Advantages
The best advantage of using crypto commerce is reducing taxes. Second, most cryptocurrencies have investment potential. Although sellers can receive stablecoins like USDT, receiving crypto such as BTC or ETH could have long-term benefits.

## POV: Customers
Why should someone trust sellers who only accept crypto? While fiat gives all the rights to the buyer, crypto gives rights to the receiver. In this case, the seller has a tough time building trust. I think building a culture of paying via crypto requires the following steps:

- **Reward crypto payers**: If we are selling our products via cryptocurrencies to reduce taxes, it's fair to share some of this saving with the buyer. For example, if you sell your product for $100 and typically pay 13% tax ($13), you should sell it for around $93.50 when accepting crypto. This way, both you and the buyer benefit, splitting the savings: $6.50 for you and $6.50 for the buyer who trusted you. A win-win situation.
- **Start small**: Teach your customers that paying via crypto is safe; let them pay with crypto coins and reward them. Give them the feeling that they are pioneers of a new, modern era of money and currency, but keep the experience risk-free. Start with your very cheapest products and repeat this type of experience for your customers.
- **Provide guarantees and alternatives**: Offer all legal information to your customers, show them who you are, and maintain a trustworthy site. Join trust-building platforms, release videos, and present yourself on social media. Take all necessary steps to build trust. Start accepting crypto as an alternative payment method with rewards. Offer PayPal or other traditional forms of payment, so customers feel free to choose their preferred payment method. Never force them to choose cryptocurrency.
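The $100 / 13% split described above is easy to generalize. Here is a small Go sketch of the arithmetic (the function name and the even-split policy are just one way to model it):

```go
package main

import "fmt"

// cryptoPrice splits the tax saving evenly between seller and buyer,
// following the article's example: the buyer pays the list price
// minus half the avoided tax, and the seller keeps the other half.
func cryptoPrice(listPrice, taxRate float64) (buyerPays, sellerBonus float64) {
	saving := listPrice * taxRate // tax that would otherwise be paid
	buyerPays = listPrice - saving/2
	sellerBonus = saving / 2
	return
}

func main() {
	price, bonus := cryptoPrice(100, 0.13)
	// With the article's numbers this prints $93.50 and $6.50.
	fmt.Printf("buyer pays $%.2f, seller keeps an extra $%.2f\n", price, bonus)
}
```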
savyjs
1,888,845
Dr. Stephen Nkem Asek Awarded Doctor Of Social Sciences (Honoris Causa) By AUBSS And QAHE
The American University of Business and Social Sciences (AUBSS) and the International Association...
0
2024-06-14T17:25:41
https://dev.to/aubss_edu/dr-stephen-nkem-asek-awarded-doctor-of-social-sciences-honoris-causa-by-aubss-and-qahe-1732
news, education, top, aubss
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oz4no5gs14atyscf9lc6.jpg) The American University of Business and Social Sciences (AUBSS) and the International Association for Quality Assurance in Pre-Tertiary and Higher Education (QAHE) are delighted to announce that Stephen Nkem Asek has been honored with the prestigious Doctor of Social Sciences (Honoris Causa) degree. Mr. Asek, a highly accomplished professional with over 17 years of experience in the development and humanitarian fields, has made significant contributions to peacebuilding, conflict analysis, and the promotion of social cohesion. He has demonstrated exceptional leadership and expertise throughout his career, particularly in fragile and conflict-affected settings. The Doctor of Social Sciences (Honoris Causa) award recognizes Mr. Asek’s outstanding achievements and his unwavering commitment to advancing social change and conflict resolution. His expertise in conflict sensitivity, mediation, and social behavioral change interventions has had a transformative impact on communities and organizations globally. In addition to his remarkable professional accomplishments, Mr. Asek holds a wealth of educational qualifications, including an MSc in Education for Sustainability, a BSc in Political Science, and a Diploma in Resilient Markets: Market Systems Development for Fragile and Conflict-Affected Settings (FCAS). He has also received certifications in project management, dialogue and mediation, and various UN trainings on safety, ethics, and prevention of sexual harassment and abuse. The conferment of the Doctor of Social Sciences (Honoris Causa) upon Mr. Asek is a testament to his exemplary contributions to the field and his dedication to promoting peace and sustainable development. AUBSS takes immense pride in recognizing Mr. Asek’s remarkable achievements and looks forward to his continued endeavors in creating positive social impact. 
About AUBSS: The American University of Business and Social Sciences (AUBSS) is a renowned institution committed to providing high-quality education in the fields of business and social sciences. AUBSS offers a range of programs designed to equip students with the knowledge and skills necessary to excel in their chosen careers.
aubss_edu
1,888,840
Buy Verified Paxful Account
https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are...
0
2024-06-14T17:13:29
https://dev.to/lekocot138/buy-verified-paxful-account-4ced
react, python, ai, productivity
https://dmhelpshop.com/product/buy-verified-paxful-account/
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t5b6etfvguny27hja1rx.png)

Buy Verified Paxful Account
There are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.

Moreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.

Lastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.

Buy a US verified Paxful account from the best place, dmhelpshop
Why do we declare this website the best place to buy a US verified Paxful account? Because our company is established to provide all account services in the USA (our main target) and even the whole world. With this in mind we create Paxful accounts and customize our accounts professionally with real documents.

If you want to buy a US verified Paxful account you should contact us fast, because our accounts are:

Email verified
Phone number verified
Selfie and KYC verified
SSN (social security no.) verified
Tax ID and passport verified
Sometimes driving license verified
MasterCard attached and verified
Used only genuine and real documents
100% access of the account
All documents provided for customer security

What is a Verified Paxful Account?
In today’s expanding landscape of online transactions, ensuring security and reliability has become paramount. Given this context, Paxful has quickly risen as a prominent peer-to-peer Bitcoin marketplace, catering to individuals and businesses seeking trusted platforms for cryptocurrency trading.

In light of the prevalent digital scams and frauds, it is only natural for people to exercise caution when partaking in online transactions. As a result, the concept of a verified account has gained immense significance, serving as a critical feature for numerous online platforms. Paxful recognizes this need and provides a safe haven for users, streamlining their cryptocurrency buying and selling experience.

For individuals and businesses alike, a verified Paxful account emerges as an appealing choice, offering a secure and reliable environment in the ever-expanding world of digital transactions.

Verified Paxful Accounts are essential for establishing credibility and trust among users who want to transact securely on the platform. They serve as evidence that a user is a reliable seller or buyer, verifying their legitimacy.

But what constitutes a verified account, and how can one obtain this status on Paxful? In this exploration of verified Paxful accounts, we will unravel the significance they hold, why they are crucial, and shed light on the process behind their activation, providing a comprehensive understanding of how they function.

Why should you Buy a Verified Paxful Account?
There are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.

Moreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.

Lastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.

What is a Paxful Account?
Paxful and various other platforms consistently release updates that not only address security vulnerabilities but also enhance usability by introducing new features.

In line with this, our old accounts have recently undergone upgrades, ensuring that if you purchase an old verified Paxful account from dmhelpshop.com, you will gain access to an account with an impressive history and advanced features. This ensures a seamless and enhanced experience for all users, making it a worthwhile option for everyone.

Is it safe to buy Paxful Verified Accounts?
Buying on Paxful is a secure choice for everyone. However, the level of trust amplifies when purchasing from Paxful verified accounts. These accounts belong to sellers who have undergone rigorous scrutiny by Paxful. When you buy a verified Paxful account, you are automatically designated as a verified user. Hence, purchasing from a Paxful verified account ensures a high level of credibility and utmost reliability.

PAXFUL, a widely known peer-to-peer cryptocurrency trading platform, has gained significant popularity as a go-to website for purchasing Bitcoin and other cryptocurrencies. It is important to note, however, that while Paxful may not be the most secure option available, its reputation is considerably less problematic compared to many other marketplaces.

This brings us to the question: is it safe to purchase Paxful Verified Accounts? Top Paxful reviews offer mixed opinions, suggesting that caution should be exercised. Therefore, users are advised to conduct thorough research and consider all aspects before proceeding with any transactions on Paxful.

How Do I Get a 100% Real Verified Paxful Account?
Paxful, a renowned peer-to-peer cryptocurrency marketplace, offers users the opportunity to conveniently buy and sell a wide range of cryptocurrencies. Given its growing popularity, both individuals and businesses are seeking to establish verified accounts on this platform.

However, the process of creating a verified Paxful account can be intimidating, particularly considering the escalating prevalence of online scams and fraudulent practices. This verification procedure necessitates users to furnish personal information and vital documents, posing potential risks if not conducted meticulously.

In this comprehensive guide, we will delve into the necessary steps to create a legitimate and verified Paxful account. Our discussion will revolve around the verification process and provide valuable tips to safely navigate through it.

Moreover, we will emphasize the utmost importance of maintaining the security of personal information when creating a verified account. Furthermore, we will shed light on common pitfalls to steer clear of, such as using counterfeit documents or attempting to bypass the verification process.

Whether you are new to Paxful or an experienced user, this guide aims to equip everyone with the knowledge they need to establish a secure and authentic presence on the platform.

Benefits Of Verified Paxful Accounts
Verified Paxful accounts offer numerous advantages compared to regular Paxful accounts. One notable advantage is that verified accounts contribute to building trust within the community.

Verification, although a rigorous process, is essential for peer-to-peer transactions. This is why all Paxful accounts undergo verification after registration. When customers within the community possess confidence and trust, they can conveniently and securely exchange cash for Bitcoin or Ethereum instantly.

Paxful accounts, trusted and verified by sellers globally, serve as a testament to their unwavering commitment towards their business or passion, ensuring exceptional customer service at all times. Headquartered in Africa, Paxful holds the distinction of being the world’s pioneering peer-to-peer bitcoin marketplace. Spearheaded by its founder, Ray Youssef, Paxful continues to lead the way in revolutionizing the digital exchange landscape.

Paxful has emerged as a favored platform for digital currency trading, catering to a diverse audience. One of Paxful’s key features is its direct peer-to-peer trading system, eliminating the need for intermediaries or cryptocurrency exchanges. By leveraging Paxful’s escrow system, users can trade securely and confidently.

What sets Paxful apart is its commitment to identity verification, ensuring a trustworthy environment for buyers and sellers alike. With these user-centric qualities, Paxful has successfully established itself as a leading platform for hassle-free digital currency transactions, appealing to a wide range of individuals seeking a reliable and convenient trading experience.

How does Paxful ensure risk-free transactions and trading?
Engage in safe online financial activities by prioritizing verified accounts to reduce the risk of fraud. Platforms like Paxful implement stringent identity and address verification measures to protect users from scammers and ensure credibility.

With verified accounts, users can trade with confidence, knowing they are interacting with legitimate individuals or entities. By fostering trust through verified accounts, Paxful strengthens the integrity of its ecosystem, making it a secure space for financial transactions for all users.

Experience seamless transactions by obtaining a verified Paxful account. Verification signals a user’s dedication to the platform’s guidelines, leading to the prestigious badge of trust. This trust not only expedites trades but also reduces transaction scrutiny. Additionally, verified users unlock exclusive features enhancing efficiency on Paxful. Elevate your trading experience with Verified Paxful Accounts today.

In the ever-changing realm of online trading and transactions, selecting a platform with minimal fees is paramount for optimizing returns. This choice not only enhances your financial capabilities but also facilitates more frequent trading while safeguarding gains.

Examining the details of fee configurations reveals Paxful as a frontrunner in cost-effectiveness. Acquire a verified level-3 USA Paxful account from usasmmonline.com for a secure transaction experience. Invest in verified Paxful accounts to take advantage of a leading platform in the online trading landscape.

How do old Paxful accounts ensure so many advantages?
Explore the boundless opportunities that Verified Paxful accounts present for businesses looking to venture into the digital currency realm, as companies globally witness heightened profits and expansion. These success stories underline the myriad advantages of Paxful’s user-friendly interface, minimal fees, and robust trading tools, demonstrating its relevance across various sectors.

Businesses benefit from efficient transaction processing and cost-effective solutions, making Paxful a significant player in facilitating financial operations. Acquire a USA Paxful account effortlessly at a competitive rate from usasmmonline.com and unlock access to a world of possibilities.

Experience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.

Paxful’s verified accounts not only offer reliability within the trading community but also serve as a testament to the platform’s ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today.

Why does Paxful keep security measures a top priority?
In today’s digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.

Safeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all.

Conclusion
Investing in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin.

The initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.

In conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller’s history and reviews before making any transactions.

Moreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions.

Contact Us / 24 Hours Reply
Telegram: dmhelpshop
WhatsApp: +1 (980) 277-2786
Skype: dmhelpshop
Email: dmhelpshop@gmail.com
lekocot138
1,888,839
Understanding CI/CD: A Beginner's Guide to Continuous Integration and Continuous Deployment
Introduction to CI/CD In the world of software development, Continuous Integration and...
0
2024-06-14T17:10:57
https://dev.to/dipakahirav/understanding-cicd-a-beginners-guide-to-continuous-integration-and-continuous-deployment-1n1m
devops, cicd, learning, beginners
#### Introduction to CI/CD

In the world of software development, Continuous Integration and Continuous Deployment (CI/CD) have become essential practices for delivering high-quality software at a rapid pace. If you're new to CI/CD, this guide will help you understand its basics, importance, and benefits. Let's dive into the world of CI/CD and see how it can transform your development workflow.

Please subscribe to my [YouTube channel](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1) to support my channel and get more web development tutorials.

#### What is CI/CD?

CI/CD stands for Continuous Integration and Continuous Deployment. It is a set of practices and tools designed to automate and streamline the process of integrating and deploying code changes. CI/CD aims to ensure that software is always in a releasable state, reducing the risk of integration issues and speeding up the development process.

**Continuous Integration (CI)** is the practice of regularly merging code changes into a central repository. Each merge triggers an automated build and test process to identify any integration issues early. This helps maintain code quality and ensures that the software can be released at any time.

**Continuous Deployment (CD)** extends CI by automating the deployment of code changes to production. Once the code passes all stages of the CI pipeline, it is automatically deployed to the production environment. This allows for frequent and reliable software releases.

#### Importance of CI/CD

Implementing CI/CD practices offers several advantages:

- **Ensures Releasable Software:** CI/CD ensures that the software is always in a state where it can be released to customers.
- **Reduces Integration Issues:** Regular integration of code changes helps detect and fix issues early, reducing the risk of major integration problems.
- **Speeds Up Development:** Automation of build, test, and deployment processes accelerates the development cycle, allowing for faster delivery of new features and bug fixes.

#### Benefits of CI/CD

1. **Faster Development Cycles:** CI/CD enables more frequent code integrations and deployments, reducing the time it takes to deliver new features and updates to customers.
2. **Improved Code Quality:** Automated testing ensures that code changes meet quality standards before they are integrated and deployed. This leads to more stable and reliable software.
3. **Enhanced Collaboration:** CI/CD fosters better collaboration among developers by allowing them to work on different parts of the application simultaneously. It reduces the chances of conflicting changes and improves overall team productivity.

#### Key Concepts in CI/CD

- **Continuous Integration (CI):** Developers regularly merge their code changes into a central repository, triggering automated builds and tests. This practice helps identify and address bugs more quickly, improving software quality.
- **Continuous Deployment (CD):** Every code change that passes all stages of the production pipeline is automatically released to customers. This automation allows for frequent and reliable software releases, reducing the time and effort required for manual deployments.
- **Continuous Delivery:** Continuous Delivery is similar to Continuous Deployment but with a key difference: code changes are automatically prepared for a release to production but require manual approval before deployment. This practice balances the need for automation with the need for human oversight.

#### CI/CD Pipeline Overview

A typical CI/CD pipeline consists of several stages:

1. **Source Code Management:** Code is stored in a version control system (e.g., Git). Developers merge their changes into the main branch.
2. **Build:** The code is compiled and built into executable artifacts. Automated tests are run to ensure the build is successful.
3. **Testing:** Various types of automated tests (unit tests, integration tests, etc.) are executed to validate the code changes.
4. **Deployment:** If all tests pass, the code is deployed to a staging environment for further testing. Once approved, it is deployed to production.
5. **Monitoring:** The deployed application is monitored for performance and errors. Feedback is collected and used to improve the CI/CD process.

#### Conclusion

CI/CD is a powerful set of practices that can significantly improve the efficiency and quality of your software development process. By automating the integration, testing, and deployment of code changes, CI/CD enables faster development cycles, improved code quality, and enhanced collaboration among developers. As you embark on your CI/CD journey, remember that the key to success is to start small, iterate, and continuously improve your pipeline.

Stay tuned for the next post in this series, where we'll dive into setting up your first CI/CD environment and explore some of the best tools and practices to get you started.

Feel free to leave your comments or questions below. If you found this guide helpful, please share it with your peers and follow me for more web development tutorials.

Happy coding!

### Follow and Subscribe:

- **Website**: [Dipak Ahirav](https://www.dipakahirav.com)
- **Email**: dipaksahirav@gmail.com
- **Instagram**: [devdivewithdipak](https://www.instagram.com/devdivewithdipak)
- **YouTube**: [devDive with Dipak](https://www.youtube.com/@DevDivewithDipak?sub_confirmation=1)
- **LinkedIn**: [Dipak Ahirav](https://www.linkedin.com/in/dipak-ahirav-606bba128)
dipakahirav
1,888,838
Create an API Gateway using Ocelot
Hello guys! Today I challenged myself to learn a little bit about Api Gateway, so I created one in my...
0
2024-06-14T17:10:30
https://dev.to/vzldev/create-a-api-gateway-using-ocelot-3nh3
apigateway, ocelot, dotnet, csharp
Hello guys! Today I challenged myself to learn a little bit about API Gateways, so I created one in my project using Ocelot.

## What is an API Gateway?

An API gateway is a server that acts as an intermediary between clients and backend services. It functions as a reverse proxy to accept all application programming interface (API) calls, aggregate the various services required to fulfill them, and return the appropriate result.

## What is Ocelot?

Ocelot is an open-source API Gateway built on the .NET platform. It is designed to manage and secure the flow of traffic between clients and backend services, particularly in microservices architectures. Ocelot provides a range of features to facilitate the handling of API requests and responses.

## How to create an API Gateway using Ocelot step by step?

## Step 1

**First of all, you need to set up two different APIs.** In my project I created a UsersAPI and an EventsAPI, with the following endpoints for the two controllers:

- https://localhost:7235/Events
- https://localhost:7239/Users

## Step 2

Next, you'll need to create a new API project for the gateway — do not add any controllers to it.

## Step 3

Add the Ocelot package to your gateway API.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x4xmrzucgjscs6vqieu7.png)

## Step 4

Create a file named **ocelot.json** in your gateway API. Here's mine:

```
{
  "Routes": [
    {
      "DownstreamPathTemplate": "/users/{everything}",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        { "Host": "localhost", "Port": 7239 }
      ],
      "UpstreamPathTemplate": "/api/users/{everything}",
      "UpstreamHttpMethod": [ "GET", "POST", "PUT", "DELETE" ]
    },
    {
      "DownstreamPathTemplate": "/events/{everything}",
      "DownstreamScheme": "https",
      "DownstreamHostAndPorts": [
        { "Host": "localhost", "Port": 7235 }
      ],
      "UpstreamPathTemplate": "/api/events/{everything}",
      "UpstreamHttpMethod": [ "GET", "POST", "PUT", "DELETE" ]
    }
  ],
  "GlobalConfiguration": {
    "BaseUrl": "https://localhost:7174"
  }
}
```

- In the `GlobalConfiguration` section, `BaseUrl` is the URL of the API Gateway. The gateway runs on this base URL, and users interact with it.
- The `UpstreamPathTemplate` section represents the API Gateway endpoint — the public endpoint users call.
- The `DownstreamPathTemplate` section represents the backend API endpoint.

With the above configuration, when you hit the endpoint http://localhost:7174/api/events, the request will be redirected to the API endpoint http://localhost:7235/Events.

## Step 5

In your Program.cs file, add the following:

```
// Requires: using Ocelot.DependencyInjection; and using Ocelot.Middleware;
builder.Configuration.AddJsonFile("ocelot.json", optional: false, reloadOnChange: true);
builder.Services.AddOcelot(builder.Configuration);

// After builder.Build(), register the Ocelot middleware (UseOcelot returns a Task):
await app.UseOcelot();
```

## Step 6

Try it yourself and enjoy!

And that's it guys — a simple use case of an API gateway with Ocelot. I hope you liked it; stay tuned for more!
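To see concretely what the ocelot.json routing does, here is a small Python sketch of the upstream-to-downstream rewrite (a simplified illustration of the routing idea, not Ocelot's actual matching code; the hosts and ports are the ones from the configuration above):

```python
# Simplified sketch of the upstream -> downstream rewrite from ocelot.json.
# A real gateway also forwards the HTTP method, headers, and body.
ROUTES = [
    {"upstream": "/api/users/",  "downstream": "https://localhost:7239/users/"},
    {"upstream": "/api/events/", "downstream": "https://localhost:7235/events/"},
]

def rewrite(path):
    """Map a gateway path to a backend URL, as the {everything} templates do."""
    for route in ROUTES:
        if path.startswith(route["upstream"]):
            # {everything} captures the remainder of the path.
            return route["downstream"] + path[len(route["upstream"]):]
    return None  # no route matched -> the gateway would return 404

print(rewrite("/api/events/42"))  # https://localhost:7235/events/42
```

The `{everything}` placeholder is what lets one route entry cover every sub-path of a backend controller.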
vzldev
1,888,837
How much does hourly SEO Copywriting cost in European Countries?
In European Regions like the Netherlands including Eastern Europe, In Western Europe, Northern...
0
2024-06-14T17:08:00
https://dev.to/seosiri/how-much-does-hourly-seo-copywriting-cost-in-european-countries-3ibl
eu, seo, copywriting, contentwriting
In European regions — the Netherlands along with Eastern, Western, Northern, and Southern European countries (EU) — the average hourly SEO copywriting rate (remote, freelance) starts from €30, €70, and €100, depending on the region. This English SEO copywriting rate varies (increases or decreases) due to:

- The SEO copywriter's skill set.
- The length of the content.
- The content niche (keyword competition).
- The freelance copywriter's geographic location.
- The SEO copywriter's years of experience and completed SEO copywriting projects.

Okay, let me explain [how much hourly SEO copywriting costs in European countries, like the Netherlands](https://www.seosiri.com/2021/12/seo-copywriting-cost-in-eu.html).

#seocopywriting #copywriting #seo
seosiri