id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,883,735 | [Game of Purpose] Day 23 | Today I created my first Blueprint Function and Macro. It sets movement mode based on provided enum... | 27,434 | 2024-06-10T22:45:18 | https://dev.to/humberd/game-of-purpose-day-23-1jif | gamedev | Today I created my first Blueprint Function and Macro. It sets the movement mode based on the provided enum `Drone State`.

I also made the drone fly up and down by pressing Space or Left Ctrl.
At the end, I turn the Drone's engine on and off. When it's off (by pressing 2), it completely stops (for now). When it's on (by pressing 1), it can fly.
{% embed https://youtu.be/0wpDQcguCL4 %} | humberd |
1,883,734 | Machine Learning Expert Specializing in Customized Jewelry Solutions | Hi, I’m Bay Ray, a passionate Machine Learning expert dedicated to revolutionizing the world of... | 0 | 2024-06-10T22:43:00 | https://dev.to/ray_bay_e2beb6849a7781754/machine-learning-expert-specializing-in-customized-jewelry-solutions-58oo | Hi, I’m Bay Ray, a passionate Machine Learning expert dedicated to revolutionizing the world of customized jewelry, especially diamond pieces. With a robust background in artificial intelligence and data science, I focus on creating products that personalize and enhance the jewelry shopping experience.
My expertise lies in developing cutting-edge machine learning models that analyze customer preferences and design bespoke jewelry pieces tailored to individual tastes. By [leveraging AI](https://openai.com/blog/chatgpt/), I transform raw data into actionable insights, allowing for a seamless and personalized shopping experience. My innovative approach not only enhances customer satisfaction but also drives efficiency and creativity in the jewelry industry.
Throughout my career, I’ve had the privilege of collaborating with leading jewelry brands and tech companies, spearheading projects that bridge the gap between tradition and technology. My work has resulted in sophisticated recommendation systems, virtual try-on applications, and predictive analytics tools that elevate the process of designing and selecting custom jewelry.
I hold a degree in Computer Science with a specialization in Machine Learning from a prestigious university. This academic foundation, combined with hands-on experience, positions me at the forefront of AI-driven customization in the jewelry sector. I’m also an active contributor to several industry publications and frequently speak at conferences, sharing my insights on the future of AI in luxury goods and retail. I work for [RK Diamonds](https://www.rkdiamondjewelry.com/).
When I’m not developing groundbreaking algorithms, I enjoy exploring new trends in jewelry design and experimenting with the latest AI technologies. My commitment to continuous learning and innovation ensures that I remain a leading force in the intersection of machine learning and jewelry customization.
My vision is to create a world where every piece of jewelry tells a unique story, crafted precisely to reflect the individual style and preferences of its owner. With deep expertise and unwavering dedication, I’m transforming the jewelry industry, one algorithm at a time.
 | ray_bay_e2beb6849a7781754 | |
1,883,724 | git commit -m is a lie! How to commit like a pro | When you first started learning git, you probably learned that the way to commit something is by... | 0 | 2024-06-10T22:21:30 | https://dev.to/andrews1022/git-commit-m-is-a-lie-how-to-commit-like-a-pro-l1 | webdev, git, vscode, beginners | When you first started learning git, you probably learned that the way to commit something is by using `git commit -m "your message"`. This is fine as a beginner, but once you start working in a professional environment, you'll quickly realize that using `-m` is insufficient. In this article, I'll cover some different ways to commit your changes, as well as some handy git tricks.
The `-m` flag is the most basic way to commit your changes. It's fine for small changes, but it's not very useful for larger commits. When you use `-m`, you're limited to a single line of text, which can make it difficult to explain what you're doing. For example, if you're fixing a bug, you might want to explain what the bug was and how you fixed it. With `-m`, you're limited to something like this:
```bash
git commit -m "Fixed bug"
```
This is not ideal as it doesn't give much information. If you want to provide more information, you can use the `-m` flag multiple times:
```bash
git commit -m "Fixed bug" -m "The bug was caused by a missing semicolon"
```
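If you're curious what this actually produces, here's a quick throwaway-repo experiment (file names and messages are made up for illustration): the first `-m` becomes the subject line and the second becomes the body, separated by a blank line.

```bash
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo "hello" > file.txt
git add file.txt
git commit -q -m "Fixed bug" -m "The bug was caused by a missing semicolon"

# Print the full commit message: subject, blank line, body
git log -1 --format=%B
```

Each additional `-m` adds another paragraph to the commit body.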
If you haven't seen this pattern before, you can think of it like this:
```bash
git commit -m "Title" -m "Description"
```
Instead, what would be even better is to write a multi-line commit message. Doing this allows you to provide not only a title and description, but also a much more detailed explanation of why you made the changes that you did, along with any other relevant information.
The simplest way of doing this is to configure your editor to be the default editor for git. Listed below, I've included a few examples of how to do this for some popular editors:
```bash
# For VS Code
git config --global core.editor "code --wait"
# For Sublime Text
git config --global core.editor "subl -n -w"
# For Vim
git config --global core.editor "vim"
# For Emacs
git config --global core.editor "emacs"
```
Configuring your editor in this way not only provides you with the benefit of being able to write a more detailed commit message, but it also allows you to easily update your global git configuration file.
Now let's say it's time to commit something. Since you've configured your editor to be the default editor for git, you can run the commit command without the `-m` flag:
```bash
git commit
```
This will open up your editor, which for VS Code would look something like this:

Something else to note is when you're working on a team, it's likely you'll be following a commit message convention. This is especially true when you're working on a project with multiple people with an issue / project management tool such as Jira. It's convention to start your commit message with the Jira ticket number, followed by a colon, and then the title of your commit. It would look something like this:

This is looking a lot better! Once you're done writing, save and close the file, and your changes will be committed.
But, why? Why would you want to commit changes like this? There are a few reasons:
1. **Clarity**: Writing a multi-line commit message allows you to provide a clear and concise explanation of what you're doing. This is useful when you're working on a team, as it allows your teammates to quickly understand why you made the changes that you did. This is especially true when conducting code reviews on pull requests.
2. **History**: When you write a detailed commit message, you're creating a history of your changes. This can be useful if you ever need to go back and see why you made a particular change.
3. **Documentation**: A good commit message can serve as documentation for your code. If you ever need to go back and see why you made a particular change, you can look at the commit message to get a better understanding of what you were thinking at the time.
Another commit flag you may have used is `-a`. This flag is only semi-useful: it only commits changes to files that git is already tracking. If you've created a new file as part of your current commit, the `-a` flag won't pick it up, so you'll still need to stage it first with the `add` command.
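To see the limitation concretely, here's a throwaway-repo sketch (file names are made up): the modified tracked file is committed by `-a`, while the brand-new file is left behind.

```bash
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.com"
git config user.name "You"

echo "one" > tracked.txt
git add tracked.txt
git commit -q -m "add tracked.txt"

echo "two" >> tracked.txt   # modify a file git already tracks
echo "new" > untracked.txt  # create a brand-new file

git commit -a -q -m "commit with -a"
git status --porcelain      # untracked.txt is still not committed
```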
With that in mind, I created a simple git alias for myself that will add all files to the staging area and then run the commit command without any flags. What is a git alias you may ask? A git alias is a way to create a shortcut for a git command, or a series of git commands.
To create a git alias, you'll need to add them to your global git configuration file. Since you've already configured your editor to be the default editor for git, you can run the following command to open up your global git configuration file:
```bash
git config --global -e
```
Now, add the following alias to the file:
```bash
[alias]
ac = !git add . && git commit
```
Now, when you want to commit changes, you can run the following command:
```bash
git ac
```
I use this alias all the time, and it's a nice little time saver. Here are some other git aliases that I've created for myself that you might find useful:
```bash
[alias]
cg = !git config --global -e
cpom = !git checkout main && git pull origin main
mm = !git merge main
ru = !git remote update
dasm = !git branch | grep -v "main" | xargs git branch -D
```
As a quick side note, if you're not sure what the last command is doing, it's deleting all local branches except for the `main` branch. This is useful if you're working on a project with a lot of branches and you're lazy like me and forget to delete them once they're merged into `main`.
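If you want to convince yourself before wiring it into an alias, you can run the same pipeline by hand in a throwaway repo (branch names are made up). One caveat worth knowing: `grep -v "main"` filters by substring, so a branch named `maintenance` would also be deleted.

```bash
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git config user.email "you@example.com"
git config user.name "You"
git commit -q --allow-empty -m "init"

git branch feature/a
git branch feature/b

# Delete every local branch except the current "main"
git branch | grep -v "main" | xargs git branch -D

git branch  # only main remains
```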
I also created this extremely simple repo on GitHub that you can see the difference in the commits:
https://github.com/andrews1022/commit-example/commits/main/
I hope you found this article helpful! Let me know if you have any questions, or leave a comment if you have any other git tips and tricks you'd like to share.
| andrews1022 |
1,883,657 | My story creating a new React meta-framework | I have a great idea for a new hobby project! As I am a React developer, I'll use it again for... | 0 | 2024-06-10T22:20:51 | https://dev.to/lazarv/my-story-creating-a-new-react-meta-framework-732 | react, vite, rsc | I have a great idea for a new hobby project! As I am a React developer, I'll use it again for implementing this project, that's an easy choice. I use Next.js at work. It's fine, but there are some pain points in using it, so I don't want to go with it for my own project. I struggle with it during work, so why should I struggle in my spare time. I want to have some fun!
I love Vite! When I used Vite for a React SPA app, with React Router and react-query, I liked it a lot! It was blazing-fast! No need to wait for any Webpack bundling, just using ES modules, it was heavenly. I started to use Vite when I started experimenting with streaming server-side rendering using React Suspense. I also created a micro-frontend system based on Vite and a self-hosted fork of esm.sh instead of Module Federation, but that's another story.
But this Vite setup I had for an SPA didn't work for server-side rendering as I wanted it to. Should I use Remix then? Remix now supports Vite! Tough choice. I like some parts of Next.js since Next.js 13. More precisely, I like the new React server-side rendering features a lot. React Server Components are awesome! Having complete control over React hydration and over which parts of the rendering are server vs client is great. Using server actions with Next.js is also fantastic! I like how close this new way is to traditional web development. I like that links and forms drive navigation and mutation, and that state management is shared between the client and the server using standard solutions, like search params and RESTish routing, even some cookies if needed. Web development is again so much easier and fun this way. But I don't want to have all the caching problems of Next.js. I don't want bundles and waiting. So what should I use again? Remix doesn't support these. What about Astro? I like Astro, and I would possibly use it for a more content-driven project, but this time it's not my best choice. I only want to use pure React. I heard there's a new minimal React framework, Waku. I want to have even less setup than that!
To summarize, what will I need or want to use for this project?
- Vite
- server-side rendering
- React Server Components
- client components
- server actions
- full access to the HTTP context
- a minimal setup, no boilerplate
- fast dev server, build and deploy
Is there a framework like that? No. I worked on some libraries and frameworks in the past, like JayData or odata-v4-server, so if I were to create a new React framework with the requirements I have, how should it be done? That's a tough question, as there's no documentation for these React features yet, just some tests in the React repo, and while Next.js is open source, the codebase is huge! I love to experiment with undocumented tech; I loved learning the intricacies of the unknown and then tweaking it to the max to create an awesome, jaw-dropping experience: porting Wolfenstein 3D to the web from scratch, directly porting some C/C++ code to JavaScript while learning a lot, or using WebAssembly to create a DOOM web runtime environment to play any WAD files from DOOM I or II, even other DOOM-engine games like Heretic. I loved re-engineering these retro games to make them playable in the browser or on my phone or tablet. I also took a dive into the Elder Scrolls series to remake it in the browser! It was so much fun! So, without any hesitation, I started to create some proofs of concept.
RSC is easy. You just need to call `renderToReadableStream` and send it to the browser. But what about hydration and client components? Now that's a challenge! You need to render your React tree to the RSC payload in a thread with the "react-server" module resolution condition. This payload will include all the static HTML nodes and references to client components and server actions. To render client components properly, you need to have a snapshot. That's the RSC payload, with the static HTML nodes. Then, you need to get a new React tree from this. To get that, you need to use `createFromReadableStream` to transform your RSC stream into a React tree, then use a different type of `renderToReadableStream`, this time not from the *react-server-dom-webpack* package, but from *react-dom*. While doing this, you need to send your RSC payload in parallel with the HTML content to hydrate your React application on the client as you stream the server-side rendered content. You also need to have all the references ready to tag client components with the `react.client.reference` symbol and, if you also want server actions, with the `react.server.reference` symbol. So you need to transform your files differently for multiple environments during rendering, and you also need to manage all the module resolution for these. Take this with all the workarounds you need to make this work with Vite. So after creating some Vite transform plugins and dealing with the complex rendering (not just a simple `renderToString` like in the old days), you're good to go, right?
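As a rough, non-runnable sketch of the two-pass flow described above (the two passes live in separate module environments, so this can't be a single file; the import paths and wiring here are schematic, not a working implementation):

```jsx
// Pass 1 — runs under the "react-server" module resolution condition.
// Produces the RSC payload: static HTML nodes plus client/server references.
import { renderToReadableStream as renderRSC } from "react-server-dom-webpack/server";
const rscStream = renderRSC(<App />, clientManifest);

// Pass 2 — runs in a normal (non react-server) environment.
// Turns the payload back into a React tree, then streams HTML from it.
import { createFromReadableStream } from "react-server-dom-webpack/client";
import { renderToReadableStream as renderHTML } from "react-dom/server";
const tree = await createFromReadableStream(rscStreamCopy);
const htmlStream = await renderHTML(tree);

// Send htmlStream to the browser, and the RSC payload alongside it,
// so the client can hydrate against the same snapshot.
```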
A framework is much more than making some proof of concept work. It's much more than achieving something in a sandbox. It's very similar to game development, or at least to my own game development experience, as I always worked on the rendering engine for games rather than as a game designer or content creator (with some tiny exceptions). After you finish the gameplay, you need to start building all the other parts, like menus, configuration management, etc. A framework is not just a rendering engine and some core functionality. It has to provide a great developer experience. I had this partially because of Vite. If I hadn't had Vite, my job would have been much harder. But I had an idea. A vision.
When I experienced node.js for the first time, I was amazed. I already worked as a frontend engineer, so node.js for me was a gate to another world, to another dimension. I was able to use all my expertise with JavaScript on the backend. I had worked a little bit at this point with other backend solutions, like PHP or ASP.NET. But despite its quirks, I loved JavaScript (I know, I know...). So node.js was the best thing to continue with. What made node.js awesome? `node server.js`. If you had a JavaScript file of any size, be it a tiny script or a large backend for a complex architecture, in the end it's just running a simple `node` command to start your server.
`react-server ./App.jsx`. I wanted to have this. So badly! All the frameworks handle your entrypoint differently. Next.js enforces its own file-based routing. It's so much more opinionated compared to React. Remix has a long setup to follow. I just wanted an `App.jsx` and a CLI tool to run it as my entrypoint, taking away all the project setup work that every project needs at the start.
I imagined this "Get started" (I also love *pnpm* btw):
```
pnpm add @lazarv/react-server
pnpm exec react-server ./App.jsx
```
No need for anything else. Simple as that. Not even a configuration file. Not even installing React directly! Just running the Vite development server, like I would do it for an SPA using `npx vite` with an `index.html` in place. I was amazed by a Vite demo Evan You did more than 2 years ago. It was minimal. It was elegant. It was awesome! I wanted to have the simplicity of node.js and Vite, just for React.
I also have to mention a very important presentation. It changed how I look at RSCs and the new way of React server-side rendering. It was Dan Abramov’s RemixConf 2023 presentation titled: “React from Another Dimension”. If you haven’t watched it yet, do it. RSC clicked for me watching it! Also, the title was great, as I listen to Liquid Tension Experiment a lot! :)
My hobby project grew out of its cooking pot. Maybe it’s some sort of megalomania. But I always let the scope grow as I dream big. In all my works, I have a much larger target in the end than at the start of that project. But if I’m committed to that project, I will finish and achieve my vision no matter what.
If you share the same view on frameworks as I do, you can try it out for yourself. Just head over to https://github.com/lazarv/react-server and give as much feedback as possible. I also sort of completed my hobby project. But not the original one. I used my own framework to create the documentation site for it. Check it out at https://react-server.dev. I hope you will also have fun trying it out! | lazarv |
1,883,729 | Run any code from Vim | Do you want to quickly run and test any code without leaving Vim and without even opening a... | 0 | 2024-06-10T22:13:08 | https://dev.to/woland/run-any-code-from-vim-1i2o | Do you want to quickly run and test any code without leaving Vim and without even opening a `:terminal`?
There is a very simple way of achieving this. All we need is a runner function, a run command and a mapping to call it.
**Here is an example:**
```vim
function! CodeRunner()
if &ft ==# 'python'
execute 'RPy'
endif
endfunction
```
**and the run command:**
```vim
command! RPy :!python3 %
```
**and the mapping:**
```vim
nnoremap <F12> :call CodeRunner()<CR>
```
**I prefer to wrap the mapping and the run command inside an augroup:**
```vim
augroup code_runner
au!
au FileType python command! RPy :!python3 %
nnoremap <F12> :call CodeRunner()<CR>
augroup end
```
Place the above snippets in your .vimrc and source it. Now, every time you’re inside a python file, you can just press F12 to run the code and see its output.
**You can add other languages to the function and to the augroup too:**
```vim
"===[ Execute From Vim ]==="
function! CodeRunner()
if &ft ==# 'python'
execute 'RPy'
elseif &ft ==# 'sh'
execute 'RB'
elseif &ft ==# 'javascript'
execute 'RJs'
elseif &ft ==# 'php'
execute 'RPHP'
elseif &ft ==# 'go'
execute 'RGo'
endif
endfunction
```
**And the corresponding augroup:**
```vim
augroup code_runner
au!
au FileType python command! RPy :!python3 %
au FileType sh command! RB :!bash %
au FileType javascript command! RJs :!node %
au FileType go command! RGo :!go run %
au FileType php command! RPHP :!php %
nnoremap <F12> :call CodeRunner()<CR>
augroup end
```
With this, you can run Bash, Python, JavaScript, Go and PHP code with the F12 key.
**A similar method can be used for compiled languages, such as C.**
**Example**:
```vim
function! CompileAndRun()
let current_file = expand('%')
let file_name = fnamemodify(current_file, ':t:r')
let compile_cmd = 'gcc ' . current_file . ' -o ' . file_name . ' && ./' . file_name
execute '!'.compile_cmd
endfunction
```
The function above will compile the C code inside the current buffer using gcc, and then execute the binary output.
**Naturally, we need a corresponding mapping:**
```vim
nnoremap <F8> :call CompileAndRun()<CR>
```
More arguments can be added to `compile_cmd` to serve your needs.
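For example, turning on warnings and optimizations (the exact flags here are just one possible choice):

```vim
let compile_cmd = 'gcc -Wall -Wextra -O2 ' . current_file . ' -o ' . file_name . ' && ./' . file_name
```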
**One last addition that I recommend, is a shortcut for clearing the terminal:**
```vim
nnoremap \l :!clear<CR><CR>
```
Once your terminal becomes full of output, just press \l (that is, backslash followed by a lowercase L) in normal mode to clear it.
| woland | |
1,883,728 | Essential Guide to Business Loans for Startups | Starting a business is an exciting venture, startups can provide the ss, and tips for approval. | 0 | 2024-06-10T22:12:53 | https://dev.to/muhammad_alijaffer/essential-guide-to-business-loans-for-startups-5g1c |

Starting a business is an exciting venture, startups can provide the ss, and[ tips for approval](https://aliloans.site/the-essential-guide-to-business-loans-for-startups/). | muhammad_alijaffer | |
1,878,576 | Simplifying Local Development with Docker, mkcert, DNSMasq and Traefik. | Hello dev.to community! I've created a project called wayofdev/docker-shared-services that my team... | 0 | 2024-06-10T22:10:55 | https://dev.to/lotyp/simplifying-local-development-with-docker-mkcert-dnsmasq-and-traefik-3k57 | devops, tutorial, ssl, webdev | Hello dev.to community!
I've created a project called [wayofdev/docker-shared-services](https://github.com/wayofdev/docker-shared-services) that my team and I use to streamline our local development.
It simplifies the setup for Dockerized projects on macOS and Linux, and I’m excited to share it with the community. Let's dive into how it can help you and your team.
<br>
## 🗂️ Table of Contents
- [Key Features](#key-features)
- [Problem](#problem)
- [The Solution: Docker Shared Services](#the-solution-docker-shared-services)
- [Requirements](#requirements)
- [Quick Start Guide (macOS)](#quick-start-guide-macos)
- [Connecting Your Projects to Shared Services](#connecting-your-projects-to-shared-services)
- [Example: Spin-up Laravel Sail Project](#example-spinup-laravel-sail-project)
- [Example: Want to See a Ready-Made Template?](#example-want-to-see-a-readymade-template)
- [Linux](#linux)
- [Conclusion](#conclusion)
<br>
## Key Features 🌟
By implementing this approach, you will have:
- **Automated Local DNS and SSL Setup**: No more manual edits to `/etc/hosts` or dealing with self-signed certificate warnings. You'll have an automated solution for DNS and SSL.
- **Consistent Development Environment**: All team members will have the same setup, reducing environment-related bugs and making onboarding new developers faster and easier.
- **Elimination of Port Conflicts**: Using Traefik, you can avoid port conflicts entirely, allowing multiple dockerized projects to run concurrently without issues.
- **User-Friendly Local URLs**: Access your projects via custom local domains like `project.docker` instead of `localhost:8000`, improving the overall development experience.
- **Simplified CORS and Cookie Management**: With SSL in place for local domains, configuring CORS and Cookies will mirror production settings, reducing debugging time.
- **Enhanced Testing Environment**: Test OAuth, Secure Cookies, and HTTPS APIs locally, ensuring they work exactly as they will in production.
- **Improved Service Discovery and Routing**: Traefik provides automatic service discovery and routing, making it easier to manage and access various services within your Docker network.
- **Ease of Integration with Existing Projects**: Quickly connect your existing Docker projects to the `docker-shared-services` setup, leveraging DNSMasq and Traefik for better service management.
<br>
## Problem 🤔
### → Common Challenges in Local Development
1. **Manual DNS Configuration**:
Developers often need to update their `/etc/hosts` file to direct traffic for local domains like `yourproject.local` or `yourproject.domain.local` to `127.0.0.1`. This is tedious and requires admin access.
2. **Lack of SSL Support**:
Local domains often lack SSL, making it hard to test secure connections and leading to issues with OAuth providers, CORS, and cookies.
3. **Port Conflicts**:
Forwarding Docker service ports to the host machine can cause conflicts, especially when multiple services use common ports like 80 or 443.
4. **Cumbersome Hostnames**:
Using hostnames like `localhost:8000` for local projects is not user-friendly and complicates development.
5. **Complex CORS and Cookie Configuration**:
Without SSL and proper DNS setup, configuring CORS and Cookies becomes more challenging.
6. **Consistency Across Team Members**:
When a team is working on the same project, maintaining consistency in the local development environment across different machines is challenging. Differences in setup can lead to issues that are hard to debug and resolve, slowing down the development process.
### → The Hard Way
Traditionally, setting up a local development environment with SSL involves a series of manual steps:
- Generating and trusting self-signed certificates in the system
- Editing the `/etc/hosts` file
- Setting up and configuring Dockerized projects with custom ports
- Solving CORS and SSL related problems between local services
These steps are time-consuming and prone to errors, leading to a frustrating development experience.
<br>
## The Solution: Docker Shared Services 🐳
The [wayofdev/docker-shared-services](https://github.com/wayofdev/docker-shared-services) simplifies local development by providing a Docker-powered environment that integrates Traefik, mkcert, and DNSMasq. This setup offers:
- Automated DNS and SSL support
- A consistent development environment across different machines
- Simplified service discovery and routing using Traefik
- Elimination of port conflicts on the host network
- The ability to test your Dockerized projects in an environment that closely mimics real-world scenarios, ensuring that your applications behave as expected before deployment
### → Key Components
1. **Traefik**:
Traefik acts as a reverse proxy and load balancer, providing routing for your Docker services. It integrates with Docker via docker-compose labeling functionality to automatically discover and route traffic to your services.
2. **mkcert**:
[mkcert](https://github.com/FiloSottile/mkcert) generates locally-trusted development certificates, enabling SSL support for your local domains.
3. **DNSMasq**:
[Dnsmasq](https://thekelleys.org.uk/dnsmasq/doc.html) provides lightweight DNS and DHCP services, allowing you to use custom local domains without editing the `/etc/hosts` file.
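Under the hood, the DNSMasq part of this boils down to a single wildcard rule like the following (illustrative; the actual configuration ships with the repository):

```conf
# Answer every *.docker query with the loopback address,
# so Traefik listening on 127.0.0.1 receives all local project traffic
address=/docker/127.0.0.1
```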
### → Going Further
You can go further and pack this project into an Ansible setup. This way, you can automate the provisioning of macOS or Linux hosts, ensuring that all necessary tools and configurations are in place. This not only saves time but also ensures consistency across different environments.
<br>
## Requirements 🚩
- **macOS** Monterey+ or **Linux**
- Tested on Ubuntu 22.04, but should also work on Debian or Arch variants
- **Docker** 26.0 or newer
- [How To Install and Use Docker on Ubuntu 22.04](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-22-04)
- [How To Install Docker Desktop on Mac](https://docs.docker.com/desktop/install/mac-install/)
- **Homebrew** (macOS only): Install via [brew.sh](https://brew.sh/)
- Installed [mkcert](https://github.com/FiloSottile/mkcert) binary in system
- See full installation instructions in their official [README.md](https://github.com/FiloSottile/mkcert)
- Quick installation on macOS: `brew install mkcert nss`
<br>
## Quick Start Guide (macOS) 🚀
1. **Install Homebrew** (if not installed):
If [Homebrew](https://brew.sh) is not already installed, run the following command:
```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
2. **Install Docker** (if not installed):
Set up Docker Desktop via Homebrew:
```bash
brew install --cask docker
```
3. **Install `mkcert` and `nss`:**
`mkcert` is a tool that creates locally-trusted development certificates, and `nss` provides support for mkcert certificates in Firefox.
```bash
brew install mkcert nss
```
4. **Create shared project directory:**
This repository should be run once per machine, so let's create a shared directory for this project:
```bash
mkdir -p ~/projects/infra && cd ~/projects/infra
```
5. **Clone this repository:**
```bash
git clone \
git@github.com:wayofdev/docker-shared-services.git \
~/projects/infra/docker-shared-services && \
cd ~/projects/infra/docker-shared-services
```
6. **Create `.env` file:**
Generate a default `.env` file, which contains configuration settings for the project.
```bash
make env
```
Open this file and read the notes inside to make any necessary changes to fit your setup.
7. **Install root certificate** and generate default project certs:
This step installs the root certificate into your system's trust store and generates default SSL certificates for your local domains, which are listed in the `.env` file, under the variable `TLS_DOMAINS`.
```bash
make cert-install
```
> **Note:**
>
> Currently, on macOS, you may need to enter your password several times to allow `mkcert` to install the root certificate.
> **This is a one-time operation** and details can be found in this upstream GitHub [issue](https://github.com/FiloSottile/mkcert/issues/415).
8. **Run this project:**
Start the Docker services defined in the repository.
```bash
make up
```
9. **Check that all Docker services are running:**
Ensure Docker is running and services are up by using the `make ps` and `make logs` commands.
```bash
make ps
make logs
```
10. **Add custom DNS resolver to your system:**
This allows macOS to understand that `*.docker` domains should be resolved by a custom resolver via `127.0.0.1`, where our DNSMasq, which runs inside Docker, will handle all DNS requests.
```bash
sudo sh -c 'echo "nameserver 127.0.0.1" > /etc/resolver/docker'
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder
```
You can check that DNS was added by running:
```bash
scutil --dns
```
Example output:
```bash
resolver #8
domain : docker
nameserver[0] : 127.0.0.1
flags : Request A records, Request AAAA records
reach : 0x00030002 (Reachable,Local Address,Directly Reachable Address)
```
> **Note:**
>
> Instead of creating the `/etc/resolver/docker` file, you can add `127.0.0.1` to your macOS DNS Servers in your Ethernet or Wi-Fi settings.
>
> Go to System Preferences → Network → Wi-Fi → Details → DNS and add `127.0.0.1` as the first DNS server.
>
> This allows you to do it one time, and if you need to create a new local domain, for example `*.mac`, in the future, it will be automatically resolved without creating a separate `/etc/resolver/mac` file.
11. **Ping `router.docker` to check if DNS is working:**
Ensure that the DNS setup is functioning correctly.
```bash
ping router.docker -c 3
ping any-domain.docker -c 3
```
12. **Access Traefik dashboard:**
Open [https://router.docker](https://router.docker).
You should see the Traefik Dashboard:

### → Outcome
At this point, you should have a working local development environment with DNS and SSL support ready to be used with your projects.
Services will be running under a shared Docker network called `network.ss`, and all projects or microservices that will share the same [Docker network](https://docs.docker.com/network/) will be visible to Traefik. The local DNS, served by DNSMasq, will be available on `*.docker` domains.
<br>
## Connecting Your Projects to Shared Services 🔌
To connect your projects to the shared services, configure your project's `docker-compose.yaml` file to connect to the shared network and Traefik.
This project comes with an example Portainer service, which also starts by default with the `make up` command. You can check the [`docker-compose.yaml`](https://github.com/wayofdev/docker-shared-services/blob/master/docker-compose.yaml) to see how Traefik labels and the shared network are used to spin up Portainer on the <https://ui.docker> host, which supports SSL by default.
### → Sample Configuration
Your project should use the shared Docker network `network.ss` and Traefik labels to expose services to the outside world.
1. **Change your project's `docker-compose.yaml` file:**
```diff
---
services:
web:
image: wayofdev/nginx:k8s-alpine-latest
restart: on-failure
+ networks:
+ - default
+ - shared
volumes:
- ./app:/app:rw,cached
+ labels:
+ - traefik.enable=true
+ - traefik.http.routers.api-my-project-secure.rule=Host(`api.my-project.docker`)
+ - traefik.http.routers.api-my-project-secure.entrypoints=websecure
+ - traefik.http.routers.api-my-project-secure.tls=true
+ - traefik.http.services.api-my-project-secure.loadbalancer.server.port=8880
+ - traefik.docker.network=network.ss
networks:
+ shared:
+ external: true
+ name: network.ss
+ default:
+ name: project.my-project
```
In this configuration, we added the shared network and Traefik labels to the web service. These labels help Traefik route the traffic to the service based on the specified rules.
2. **Generate SSL certs for your project:**
Go to the `docker-shared-services` directory:
```bash
cd ~/projects/infra/docker-shared-services
```
Edit the `.env` file to add your custom domain:
```bash
nano .env
```
It should look something like this:
```bash
TLS_DOMAINS="ui.docker router.docker *.my-project.docker"
```
Generate SSL certificates:
```bash
make cert-install restart
```
<br>
## Example: Spin-up Laravel Sail Project 🚀
Let's walk through an example of setting up a Laravel project using Sail and integrating it with the `docker-shared-services`.
1. Create an example Laravel project based on Sail:
```bash
curl -s "https://laravel.build/example-app" | bash
```
2. Open the `docker-compose.yaml` file of the `example-app` project and make adjustments:
```diff
services:
laravel.test:
build:
context: ./vendor/laravel/sail/runtimes/8.3
dockerfile: Dockerfile
args:
WWWGROUP: '${WWWGROUP}'
image: sail-8.3/app
- extra_hosts:
- - 'host.docker.internal:host-gateway'
ports:
- - '${APP_PORT:-80}:80'
- - '${VITE_PORT:-5173}:${VITE_PORT:-5173}'
networks:
- sail
+ - shared
depends_on:
- ...
+ labels:
+ - traefik.enable=true
+ - traefik.http.routers.test-laravel-app-secure.rule=Host(`api.laravel-app.docker`)
+ - traefik.http.routers.test-laravel-app-secure.entrypoints=websecure
+ - traefik.http.routers.test-laravel-app-secure.tls=true
+ - traefik.http.services.test-laravel-app-secure.loadbalancer.server.port=80
+ - traefik.docker.network=network.ss
mailpit:
image: 'axllent/mailpit:latest'
networks:
- sail
+ - shared
ports:
- - '${FORWARD_MAILPIT_PORT:-1025}:1025'
- - '${FORWARD_MAILPIT_DASHBOARD_PORT:-8025}:8025'
+ labels:
+ - traefik.enable=true
+ - traefik.http.routers.mail-laravel-app-secure.rule=Host(`mail.laravel-app.docker`)
+ - traefik.http.routers.mail-laravel-app-secure.entrypoints=websecure
+ - traefik.http.routers.mail-laravel-app-secure.tls=true
+ - traefik.http.services.mail-laravel-app-secure.loadbalancer.server.port=8025
+ - traefik.docker.network=network.ss
networks:
sail:
driver: bridge
+ shared:
+ external: true
+ name: network.ss
```
These changes connect the Laravel app and Mailpit docker services to the shared network and expose them via Traefik.
3. **Run the Laravel project:**
Navigate to the `example-app` directory and start the services using Sail.
```bash
./vendor/bin/sail up -d
```
4. **Check Traefik routers**:
Open <https://router.docker/dashboard/#/http/routers> and check that there are two routers:
* `Host(api.laravel-app.docker)` → `test-laravel-app-secure@docker`
* `Host(mail.laravel-app.docker)` → `mail-laravel-app-secure@docker`

5. **Check the setup**:
Ensure that your Laravel application and Mailpit services are running correctly by accessing their respective domains:
* **Laravel app** — https://api.laravel-app.docker
* **Mailpit** — https://mail.laravel-app.docker
At this point, your Laravel project is integrated with the `wayofdev/docker-shared-services`, utilizing DNS and SSL support for local development.
<br>
## Example: Want to See a Ready-Made Template? 👀
If you come from the PHP or Laravel world, or if you want to see how a complete project can be integrated with `docker-shared-services`, check out [wayofdev/laravel-starter-tpl](https://github.com/wayofdev/laravel-starter-tpl).
This Dockerized Laravel starter template works seamlessly with [wayofdev/docker-shared-services](https://github.com/wayofdev/docker-shared-services), providing a foundation with integrated DNS and SSL support, and can show you `the way` to implement the patterns described in this article in your own projects.
<br>
## Linux 🐧
Linux Users, you are also covered!
If you're using Ubuntu or another Linux distribution, I've included a [Linux Quick Start Guide](https://github.com/wayofdev/docker-shared-services?tab=readme-ov-file#-quick-start-guide-linux) in the README.md of the `wayofdev/docker-shared-services` project.
This guide provides step-by-step instructions to help you set up the environment on a Linux system, ensuring you can have the same streamlined development experience as described in the [macOS Quick Start Guide](#-quick-start-guide-macos) provided in this article.
<br>
## Conclusion ✨
In conclusion, the [wayofdev/docker-shared-services](https://github.com/wayofdev/docker-shared-services) project offers a streamlined and automated solution for managing local development environments using Docker, mkcert, DNSMasq, and Traefik.
By integrating these tools into your workflow, you can ensure that your local environment closely mirrors production, leading to more reliable and predictable outcomes when deploying applications.
### → What do you think? Would you approach this differently?
I hope this guide has been helpful in demonstrating the potential benefits of using this approach for your local development needs. If you have any suggestions or a better way of achieving this setup, please feel free to share your thoughts.
Feel free to fork this repository and adjust it to your needs, or use it as a template.
Thank you for reading, and I hope you find this setup as beneficial as I have.
Happy coding!
<br>
| lotyp |
336,278 | useFakeAsync | You can see the hook takes a few simple parameters, including the familiar pairing of a callback func... | 0 | 2020-05-15T21:42:02 | https://reacthooks.dev/useFakeAsync/ | react, typescript |
The hook takes a few simple parameters, including the familiar pairing of a callback function and a delay in milliseconds. This follows the shape of JavaScript's setTimeout and setInterval methods.
```TypeScript
import { useEffect, useState } from 'react';

enum FakeAsyncState {
  PENDING = 'PENDING',
  COMPLETE = 'COMPLETE',
  ERROR = 'ERROR',
}

export const useFakeAsync: Function = (
  callback: Function,
  delay: number = 3000,
  shouldError: boolean = false,
  chaos: boolean = false
) => {
  const [state, setState] = useState<FakeAsyncState>(FakeAsyncState.PENDING);

  useEffect(() => {
    let timer: NodeJS.Timeout;
    // chaos mode fails randomly about half the time; otherwise honour shouldError
    const fail = chaos ? Math.random() <= 0.5 : shouldError;

    if (fail) {
      // simulate a failed async call: end in the error state after the delay
      timer = setTimeout(() => {
        setState(FakeAsyncState.ERROR);
      }, delay);
    } else {
      // simulate a successful async call: complete, then fire the callback
      timer = setTimeout(() => {
        setState(FakeAsyncState.COMPLETE);
        callback();
      }, delay);
    }

    return () => clearTimeout(timer);
  }, [delay, callback, chaos, shouldError]);

  return [state];
};
```
The hook also takes a 'shouldError' parameter so that an error condition can be forced.
The fourth parameter is a little more interesting, 'chaos'. I added this to randomise a success or error condition.
The state returned by the hook mimics a promise: it can be pending, complete, or in an error condition.
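The decision itself is easy to reason about in isolation. The sketch below (plain JavaScript; `resolveFakeState` is a helper name invented here, not part of the hook) extracts the fail/complete branch, with the random draw passed in as a parameter so the behaviour is deterministic:

```javascript
const FakeAsyncState = {
  PENDING: 'PENDING',
  COMPLETE: 'COMPLETE',
  ERROR: 'ERROR',
};

// Hypothetical helper (not part of the hook): isolates the fail/complete branch.
function resolveFakeState(shouldError, chaos, randomDraw) {
  // chaos mode ignores shouldError and fails on roughly half of the draws
  const fail = chaos ? randomDraw <= 0.5 : shouldError;
  return fail ? FakeAsyncState.ERROR : FakeAsyncState.COMPLETE;
}

console.log(resolveFakeState(true, false, 0.9)); // ERROR: shouldError forces a failure
console.log(resolveFakeState(false, true, 0.3)); // ERROR: chaos draw of 0.3 is <= 0.5
console.log(resolveFakeState(false, true, 0.9)); // COMPLETE: chaos draw of 0.9 is > 0.5
```

With the draw injected, the chaos path can be unit-tested without stubbing `Math.random`.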
Hopefully this will help with testing behaviour across components, and with avoiding those inevitable bugs that creep in when integrating a UI with an API, such as stutters between loading and success states.
That's all! Go checkout the code on [GitHub](https://github.com/anthonyhumphreys/hooks/) or install my handy hooks library from [npm](https://www.npmjs.com/package/@anthonyhumphreys/hooks)
This post was for day 1 of my [#100DaysOfCode](https://twitter.com/hashtag/100DaysOfCode) challenge. [Follow me on Twitter](https://twitter.com/aphumphreys) for more.
| anthonyhumphreys |
1,883,727 | Fast Approval Loans | Types of Fast Approval Loans Payday Loans: Short-term loans designed to provide immediate cash,... | 0 | 2024-06-10T22:10:03 | https://dev.to/muhammad_alijaffer/fast-approval-loans-1682 |

[Types of Fast Approval Loans](https://aliloans.site/fast-approval-loans-quick-funding-for-your-immediate-needs/)
Payday Loans: Short-term loans designed to provide immediate cash, typically due on your next payday. They are easy to obtain but often come with high interest rates.
Personal Loans: Unsecured loans that can be used for various purposes. Some lenders offer quick approval and funding within a few days.
Online Loans: Many online lenders specialize in fast approval loans, offering streamlined applications and rapid funding.
Credit Card Cash Advances: If you have a credit card, you can quickly access funds via a cash advance. However, these often come with high fees and interest rates.
| muhammad_alijaffer | |
1,883,726 | everything-ai 4.0.0: build up your local AI power today | What is everything-ai? 🤖 everything-ai is natively a multi-tasking agent, 100% local, that... | 0 | 2024-06-10T22:09:42 | https://dev.to/astrabert/everything-ai-400-build-up-your-local-ai-power-today-5g19 | python, docker, ai, opensource | ## What is everything-ai?
🤖 everything-ai is **natively a multi-tasking agent**, 100% local, that is able to perform several AI-related tasks
## What's new?
🚀 I am more than thrilled to introduce some new functionalities that were added since the last release:
- 🦙 `llama.cpp-and-qdrant`: Chat with your PDFs (backed by Qdrant as vector database) through Hugging Face GGUF models running within llama.cpp
- 💬 `build-your-llm`: you can now create a customizable chat LLM to interact with your Qdrant database with the power of Anthropic, Groq, OpenAI and Cohere models, just providing an API key! You can also set the temperature and the max number of output tokens
- 🧬 `protein-folding` interface now shows the 3D structure of a protein along with its molecular one, no more static backbone images!
- 🏋️ `autotrain` now supports direct upload of config.yml file
- 🤗 Small fixes in `retrieval-text-generation` RAG pipeline
## How can you use all of these features?

You just need a `docker compose up`!🐋

## Where can I find everything I need?
Get the source code (and leave a little ⭐ while you're there):
https://github.com/AstraBert/everything-ai
Get a quick-start with the documentation:
https://astrabert.github.io/everything-ai/
## Credits and inspiration
Shout-outs to Hugging Face, Gradio, Docker, AI at Meta, Abhishek Thakur, Qdrant, LangChain and Supabase for making all of this possible!
Inspired by: Jan, Cheshire Cat AI, LM Studio, Ollama and other awesome local AI solutions! | astrabert |
1,883,725 | Guide to Home Equity Loans | Home equity loans can be a valuable financial tool for homeowners looking to leverage the equity... | 0 | 2024-06-10T22:07:49 | https://dev.to/muhammad_alijaffer/guide-to-home-equity-loans-26f6 |

Home equity loans can be a valuable financial tool for homeowners looking to leverage the equity built up in their properties. These loans allow you to borrow against the value of your home, providing a lump sum of cash that can be used for various purposes. Here’s an in-depth look at home equity loans, including their benefits, how they work, and tips for getting the best terms.
[MORE](https://aliloans.site/the-comprehensive-guide-to-home-equity-loans/)
| muhammad_alijaffer | |
1,883,723 | Comprehensive Guide to Bad Credit Loans | Securing a loan with a low credit score can be challenging, but bad credit loans offer a lifeline for... | 0 | 2024-06-10T22:04:46 | https://dev.to/muhammad_alijaffer/comprehensive-guide-to-bad-credit-loans-22hp | Securing a loan with a low credit score can be challenging, but bad credit loans offer a lifeline for those who need financial assistance despite their credit history. This detailed guide explores the types of bad credit loans, their benefits, the application process, and strategies for approval, helping you make informed decisions and improve your financial health.
[MORE](https://aliloans.site/the-comprehensive-guide-to-bad-credit-loans/) | muhammad_alijaffer | |
330,811 | [Reminder-1] var, let, const | I am writing this article at first for me :). Why ? Just to have a fast reminder about them. But i ho... | 6,516 | 2020-05-09T06:19:21 | https://dev.to/flozero/reminder-var-let-const-h4c | javascript, vue | I am writing this article at first for me :). Why ?
Just to have a fast reminder about them. But i hope it can help you too !
We will speak about some JavaScript concepts here:
- scope
- block
We can use `let` and `const` since ES2015 (also known as ES6). I am assuming you know what `"use strict"` is: it forces you to declare a variable before using it. Without it, older code will still run and you can end up with some weird issues.
## SCOPE
- `The scope is the portion of code where the variable is visible.`
- In JavaScript, only functions create a new scope (arrow functions too). That means if you try:
```js
function greet() {
  var hi = "hello"
}
greet()
console.log(hi) // ReferenceError: hi is not defined
```
- If a global variable has the same name, the variable inside the function will shadow the global one (**shadowing**).
- A variable can be referenced before the line that declares it: it will still be found, because JS uses hoisting, which moves all `var` declarations to the top of their scope (always declare your variables properly anyway).
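To make that concrete, here is a small standalone sketch (not from the original article) showing that a hoisted `var` reads as `undefined` before its declaration, while a `let` in the same position throws:

```javascript
function hoistingDemo() {
  // `var greeting` below is hoisted: reading it here yields undefined, no crash.
  const varBefore = typeof greeting;
  var greeting = "hello";

  // `let` is also hoisted, but stays unusable until its declaration line,
  // so reading it earlier throws a ReferenceError.
  let letBefore;
  try {
    letBefore = notYet;
  } catch (e) {
    letBefore = e.constructor.name;
  }
  let notYet = "later";

  return { varBefore, letBefore };
}

console.log(hoistingDemo()); // { varBefore: 'undefined', letBefore: 'ReferenceError' }
```

The `let` case is the so-called temporal dead zone: the binding exists from the top of the scope but cannot be read until its declaration line.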
## BLOCK
- A block is identified by a pair of curly braces, but except for functions it doesn't create a new scope (for `var`).
That means:
```js
if (true) {
var hi = "hello"
}
console.log(hi) // will be "hello" (var is not block-scoped, remember hoisting)
```
## LET
- With `let` you now have a variable that is scoped even inside a block.
Remember:
```js
if (true) {
let hi = "hello"
}
console.log(hi) // ReferenceError: hi is not defined
```
- A top-level `let` doesn't create a property on the global object (`window`/`globalThis`), unlike `var`.
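One way to verify this (an illustrative sketch; indirect `eval` is used only so the snippet behaves like top-level browser code even when run as a Node script):

```javascript
// Run the declarations in the true global scope via indirect eval.
(0, eval)('var fromVar = "I am global";');
(0, eval)('let fromLet = "I am not";');

console.log(globalThis.fromVar); // "I am global": var attached itself to the global object
console.log(typeof globalThis.fromLet); // "undefined": let never becomes a global-object property
```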
## CONST
`const` can also be scoped to a block (remember the `if`), so it will not be available outside.
A `const` cannot be reassigned. But if the `const` variable holds an object, that object can still be mutated. You can block this behavior by wrapping your object with `Object.freeze({})`. Remember that freeze only freezes the first level of your object, not the nested ones.
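Here is a small illustration of both points (assuming sloppy mode, where writing to a frozen property is silently ignored; in strict mode it throws, which the `try`/`catch` absorbs):

```javascript
const user = Object.freeze({
  name: "Ada",
  address: { city: "London" }, // nested object: NOT frozen by the shallow freeze
});

try {
  user.name = "Grace"; // ignored in sloppy mode, TypeError in strict mode
} catch (e) {
  // strict mode lands here; either way the property is unchanged
}
console.log(user.name); // "Ada": the first level is frozen

user.address.city = "Paris"; // nested mutation is allowed: freeze is shallow
console.log(user.address.city); // "Paris"
```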
And that's it. See you in the next reminder.
| flozero |
1,883,722 | CSS Snake. | A blog written in 3rd person. Well, why not... London based CSS Artist, Ben Evans, has... | 27,670 | 2024-06-10T22:03:12 | https://css-artist.blogspot.com/2024/05/is-collision-detection-with-css-possible.html | cssart, cssgame, css |
## A blog written in 3rd person. Well, why not...
London based CSS Artist, Ben Evans, has had some lofty ideas in the past and this one follows that trend... His current aim is to make, or even remake, some form of the Nokia classic: Snake. Using only CSS. He's not currently sure if this is possible but he would love you to join him on his journey to finding out...
Ben has previously built CSS concepts with movement grids, the peak of this endeavour being a 3D maze game (which unfortunately doesn't like Safari, or any iOS browser, because they are all Safari really):
{% codepen https://codepen.io/ivorjetski/pen/poQpveN %}
Like most of Ben's CodePen's, there is also a video detailing the creation process on his YouTube Channel. This one can be found here:
{% youtube https://www.youtube.com/watch?v=Z0VHM4KcMFQ %}
Ben, also known as ivorjetski, has already set to work on Snake. So far he's spent far too long drawing the background entirely with CSS:

He thought it would be fun to recreate the old Windows 10 default lockscreen in CSS. He was probably wrong about this.
He tried to keep the background down to as few lines of CSS as possible. It's about 1000. Which isn't bad considering the crazy amount of texture that he needed to create for the rocks.
Ben realised he could create the texture using text! 😮 Surely this is where the word TEXTure comes from, he thought to himself, not silly enough to write it in a blog. Little did he know his thoughts could be heard by the alter-ego journalist.
Ben exclaimed: "It's pseudo classes, with the content declaration filled with random ascii shapes. Like this:"
```
&:after, &:before {
content: '●◖● ●◖● .● ● ◖●◖● ● ◖-● ●◖◖● ●◖ ●.● ●◖●◖● ●● ●.●◖●●';
}
```
He has also designed a super simple movement grid for it. And then, for some reason, converted it into this pointless chess like thing. Perhaps so it would make sense when sharing it on CodePen:
{% codepen https://codepen.io/ivorjetski/pen/LYawGQZ %}
This does everything Ben requires for a good basis to build from, and he has managed to get the code down to about 100 lines (not including his CSS drawn initials). He is pretty pleased with how this works. The directional buttons are actually all the same, but rotate themselves, putting themselves in the required position for left, right, up and down. Depending on what ID is shown in relation to what grid square the user is in. There is also a basic boundary detection in place, that Ben would need for the Snake game to work.
Stay tuned for more...
| ivorjetski |
1,884,418 | Unnecessary upload evasion with lftp mirrors | Originally published on peateasea.de. I’ve been using lftp’s reverse mirror feature for years to... | 0 | 2024-06-12T07:42:00 | https://peateasea.de/unnecessary-upload-evasion-with-lftp-mirrors/ | linux, devops | ---
title: Unnecessary upload evasion with lftp mirrors
published: true
date: 2024-06-10 22:00:00 UTC
tags: Linux,DevOps
canonical_url: https://peateasea.de/unnecessary-upload-evasion-with-lftp-mirrors/
cover_image: https://peateasea.de/assets/images/avoid-old-file-upload.png
---
*Originally published on [peateasea.de](https://peateasea.de/unnecessary-upload-evasion-with-lftp-mirrors/).*
I’ve been using `lftp`’s reverse mirror feature for years to upload files to my blog. I’d never worked out how to avoid repeated file uploads. Until now.
## A bit of background
[Jekyll](https://jekyllrb.com/) creates and populates a directory called `_site` when building the production version of a static site. To upload the files in the `_site` directory to my hosting provider, I’ve been using an `lftp` script like this:<sup id="fnref:cannot-use-rsync" role="doc-noteref"><a href="#fn:cannot-use-rsync" rel="footnote">1</a></sup>
```
open sftp://<username>:not-a-password@ftp.<domain-name>
mirror -v --delete --reverse _site/ /public_html/
```
The `open` command opens a connection<sup id="fnref:captain-obvious" role="doc-noteref"><a href="#fn:captain-obvious" rel="footnote">2</a></sup> to the FTP server that my hosting service provides for my domain and logs me in. Since the protocol is SFTP, the connection uses SSH and I can use an SSH key for authentication. This is great because then I don’t need to use a password.
Note that the `not-a-password` component is an important placeholder: it’s there because a password is still expected before the `@` symbol even though authentication uses public key encryption. The authentication mechanism thus ignores it. I chose this placeholder value to remind me that it isn’t a password and that I shouldn’t ever put one here.
The `mirror` command normally _downloads_ files from an upstream (usually remote) source to a local system. Thus the usual mirror process _pulls_ remote files from upstream. Using the `--reverse` option swaps the sense of the mirror mechanism and files are instead _uploaded_ (i.e. pushed) to the upstream system.
In the case I describe here, I push all files from within the `_site/` directory to the `/public_html/` directory on my hosting service.
When [reading the documentation](https://lftp.yar.ru/lftp-man.html) you will see terms like “source” and “target”. When mirroring to a local system then the “source” is the remote system and the “target” is the local system. With the `--reverse` option, these are swapped and the local system is now the “source” and the upstream system is the “target”.
The `--delete` option removes any files from the target system which are not present in the source. In our case, this is anything within the `/public_html/` directory tree. This ensures that if I delete or rename a file, it isn’t still floating around on the production system, which might confuse someone in the future.
The `-v` option turns on the first level of verbosity so that I can get feedback about what’s happening when mirroring the files to the upstream system.
## The problem
So what’s the issue? Well, each time I mirror the site to production, the `lftp` script re-transfers _all_ my files. In particular, `lftp` _removes_ each file from upstream before uploading it again. This happens even if the files haven’t changed. Here’s what I mean:
```shell
$ lftp -f deploy_site.lftp
Removing old file `feed.xml'
Transferring file `feed.xml'
Removing old file `index.html'
Transferring file `index.html'
Removing old file `sitemap.xml'
Transferring file `sitemap.xml'
Removing old file `about/index.html'
Transferring file `about/index.html'
Removing old file `add-favicon-to-mm-jekyll-site/index.html'
Transferring file `add-favicon-to-mm-jekyll-site/index.html'
<snip>
```
Not only is this annoying, but it’s a waste of network resources and time. I’d tried to get `lftp` to only upload changed files in the past, but never seemed to have found the right incantation. Until today. Today, I finally found the information I needed to make this work.
## The solution
If you read the [`lftp` man page](https://lftp.yar.ru/lftp-man.html), you’ll find in the `mirror` section the `--only-newer` option. Adding this option to the `mirror` command mentioned earlier, we get
```
mirror -v --only-newer --delete --reverse _site/ /public_html/
```
Using this command you’ll find that _it still transfers all files upstream_. Gah! Why doesn’t this work?
Today I managed to stumble upon why this is so. [An answer](https://stackoverflow.com/a/15320869) to the StackOverflow question [Why lftp mirror –only-newer does not transfer “only newer” file?](https://stackoverflow.com/q/11490145) mentions a subtlety noted on [Matthieu Bouthours’ blog](https://www.bouthors.fr/wiki/doku.php?id=en:linux:synchro_lftp) and seemingly not mentioned anywhere else:
> When uploading, it is not possible to set the date/time on the files uploaded, that’s why `--ignore-time` is needed.
Therefore, as mentioned in the [StackOverflow answer](https://stackoverflow.com/q/11490145):
> [I]f you use the flag combination `--only-newer` and `--ignore-time` you can achieve decent backup properties, in such a way that all files that differ in size are replaced. Of course it doesn’t help if you really need to rely on time-synchronization but if it is just to perform a regular backup of data, it’ll do the job.
Updating the `mirror` command like so:
```
mirror -v --only-newer --ignore-time --delete --reverse _site/ /public_html/
```
fixes the issue and only uploads new or newly changed files, which is the desired behaviour. Yay! :partying_face:
Implementing this change in my build scripts reduced build and deployment times from 5.5 minutes to 2 minutes. That’s more than halved the time! Brilliant!
## A word of caution
There’s a caveat here, though. If a file is changed and just so happens to be of the same size as its counterpart upstream, it won’t be transferred. One needs to bear this in mind.
To be honest, I’d prefer to use `rsync` because it generates checksums of the files to detect file changes. Then I could be more certain that my scripts upload only newer files and don’t transfer older ones unnecessarily. However, until I have that option, this will do the job nicely.
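To illustrate why a size-only comparison can miss an edit, here is a standalone JavaScript sketch (not part of the lftp workflow; the toy weighted-sum checksum merely stands in for rsync's real, much stronger checksums):

```javascript
// Toy weighted-sum checksum; stands in for rsync's real checksums.
const checksum = (s) =>
  [...s].reduce((sum, ch, i) => sum + ch.charCodeAt(0) * (i + 1), 0);

// Two "files" with identical byte length but different content.
const oldFile = "deploy: version 1.0.3";
const newFile = "deploy: version 1.0.4";

console.log(oldFile.length === newFile.length); // true: a size-only comparison skips the upload
console.log(checksum(oldFile) === checksum(newFile)); // false: a checksum catches the change
```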
1. Unfortunately my hosting service doesn’t allow `rsync` (at least not at my service level) and hence I can’t use a more sophisticated synchronisation mechanism. [↩](#fnref:cannot-use-rsync)
2. Thank you, [Captain Obvious!](https://peateasea.de/assets/images/captain-obvious-imgur-com.gif) [↩](#fnref:captain-obvious) | peateasea |
1,847,949 | Server Side Components in React: The Future of Rendering? | Introduction React is one of the most popular and versatile frontend frameworks, used to... | 0 | 2024-06-10T21:59:36 | https://dev.to/vitorrios1001/server-side-components-no-react-o-futuro-da-renderizacao-2b03 | serversidecomponents, nextjs, react, frontend | ## Introduction
React is one of the most popular and versatile frontend frameworks, used to build interactive and dynamic user interfaces. Recently, a new concept was introduced to the React community: Server Side Components (SSC). This article explores Server Side Components in React, analyzing their potential performance advantages and how they may revolutionize web development.
## What are Server Side Components?
Server Side Components represent an innovative approach to rendering components on the server in React applications. Unlike traditional Server Side Rendering (SSR) and Static Site Generation (SSG), SSC allow component logic to run on the server, with a communication and loading model that reduces the bundle size and the volume of data transferred between server and client.
## Benefits of Server Side Components
### 1. **Less Code on the Client**
SSC allow only the necessary logic to be sent to the client, significantly less than fully rendered components or even React's traditional dynamic components. This results in faster load times and better application performance.
### 2. **Optimized Loading Performance**
By running most of the logic on the server, SSC reduce the amount of JavaScript that needs to be downloaded, parsed, and executed on the client. This is especially beneficial for users on devices with limited processing power or slow internet connections.
### 3. **Improved User Experience**
With less code sent to the client and logic executed on the server, users perceive a more responsive interface and shorter waits for interactions. This significantly improves the user experience, especially in complex applications.
## Differences Between SSC and Other Approaches
### Comparison with SSR and SSG
- **SSR (Server Side Rendering):** Renders the HTML on the server for each request, sending a ready-made page to the browser. While this improves SEO, it can increase server load and user-facing response time.
- **SSG (Static Site Generation):** Generates static HTML pages at build time. These pages are served quickly from CDNs, but can be less dynamic and personalized.
- **SSC (Server Side Components):** Combines the best of both worlds, allowing dynamic components with server-side execution and optimized communication with the client.
### Advantages over Client-Side Rendering
Traditional client-side rendering involves sending all the necessary JavaScript to the browser, which can cause significant delays before the page is visible and interactive. By reducing the processing load on the client, SSC can offer faster startup and snappier interaction.
## Potential Impact on Web Development
Adopting SSC can change how developers think about web application architecture. With this approach, it becomes possible to:
- **Decouple the UI from State:** This allows a more flexible and modular architecture, where server-side state and logic can be modified independently of the UI.
- **Reduce Client-Side Complexity:** It simplifies client code, reducing the need for specific optimizations and improving maintainability.
- **Improve SEO and Performance:** Since components are pre-processed on the server, content is immediately available to search engines and to users.
## Conclusion
Server Side Components in React represent a promising evolution in how web applications are built and served. By reducing the JavaScript load required on the client and executing efficiently on the server, SSC offer a potential route to improving both performance and user experience. While the technology is still emerging, its potential to influence the future of web development is significant and deserves close attention from the developer community.
| vitorrios1001 |
1,883,721 | Desiree Dove Realtor,Berkshire Hathaway HomeServices PenFed Realty | With a passion for real estate and a commitment to excellence, I bring a wealth of knowledge and... | 0 | 2024-06-10T21:59:04 | https://dev.to/desireedoverealtor/desiree-dove-realtorberkshire-hathaway-homeservices-penfed-realty-k43 |

With a passion for real estate and a commitment to excellence, I bring a wealth of knowledge and expertise to every client interaction. As a licensed real estate professional, I am deeply invested in helping you navigate the complexities of the market with ease and confidence. Whether you're buying, selling, or investing, I prioritize your goals and tailor my approach to meet your unique needs. With a focus on clear communication and unwavering integrity, I strive to be your trusted advisor every step of the way. Together, let's make your real estate dreams a reality—I'm here for you in all things home.
Desiree Dove Realtor,Berkshire Hathaway HomeServices PenFed Realty
Address: 3106 Plank Rd, Fredericksburg, VA 22407
Phone: 540-842-6340
Website: http://vahomedeals.com
Contact email: dove4homes@gmail.com
Visit Us:
[Desiree Dove Realtor,Berkshire Hathaway HomeServices PenFed Realty LinkedIn](https://www.linkedin.com/in/desiree-dove-8705022b)
[Desiree Dove Realtor,Berkshire Hathaway HomeServices PenFed Realty BBB](https://www.bbb.org/us/va/fredericksburg/profile/real-estate-agent/desiree-dove-berkshire-hathaway-homeservices-0603-63418341)
Desiree Dove Realtor, BHHS PenFed Realty:
Real Estate Buying, Selling Consultant, and Representation in Fredericksburg, VA | desireedoverealtor | |
1,883,719 | How To avoid Getting Scammed | The scammer could not google the name of the United Nations Secretary-General. I wrote an article... | 0 | 2024-06-10T21:33:46 | https://dev.to/scofieldidehen/how-to-avoid-getting-scammed-11o0 | beginners, security, programming, productivity | The scammer could not google the name of the United Nations Secretary-General.
I wrote an article on how to recognize a phishing email.
You can read it here and share your experience if you have encountered such emails before.
[Read](https://lnkd.in/dg_NTikG) | scofieldidehen |
1,883,669 | HOW SWIFT HACK EXPERT HELPED ME RECOVER MY STOLEN BITCOIN | Greetings my name is Trista Ayers, I’m writing this testimony to all those who have suffered... | 0 | 2024-06-10T21:05:28 | https://dev.to/trista_ayers_e928d4ddeb36/how-swift-hack-expert-helped-me-recover-my-stolen-bitcoin-5333 | Greetings my name is Trista Ayers, I’m writing this testimony to all those who have suffered financial losses as a result of a gang of fraudulent brokers impersonating bitcoin and Forex traders. I’m aware that a lot of individuals have lost money on bitcoin due to the misconception that it cannot be traced. My belief was that my $440,000 in bitcoin and USDT was gone forever until I came across an article about SWIFT HACK EXPERT, a reliable money recovery expert who can locate and retrieve lost cryptocurrency rapidly. I contacted SWIFT HACK EXPERT straight away and after providing them with all the information, I was able to enter my restricted account and get my bitcoins back in my wallet. Send an email to them via: swift1@cyberservices.com or WhatsApp: +1-252-228-9013 | trista_ayers_e928d4ddeb36 | |
1,883,668 | Understanding Weatherstack's API Pricing and Real-Time Weather Data | Weather data is essential for a wide array of industries, from agriculture and transportation to... | 0 | 2024-06-10T21:01:36 | https://dev.to/sameeranthony/understanding-weatherstacks-api-pricing-and-real-time-weather-data-41i0 | | Weather data is essential for a wide array of industries, from agriculture and transportation to tourism and retail. Access to accurate and up-to-date weather information can make a significant difference in decision-making processes and operational efficiency. For businesses and developers seeking reliable weather data, Weatherstack offers a comprehensive API solution that provides real-time weather information. In this article, we will delve into Weatherstack's API pricing structure and explore the benefits of accessing real-time weather data.
## Weatherstack's API Pricing:
Weatherstack offers flexible pricing plans tailored to the diverse needs of businesses and developers. The pricing structure is designed to accommodate varying levels of usage, ensuring affordability and scalability. Here's an overview of [Weatherstack API pricing](https://weatherstack.com/) tiers:
**1. Free Tier:** Weatherstack provides a generous free tier option that allows users to access basic weather data at no cost. This tier is ideal for individuals or small-scale projects looking to experiment with the API or incorporate weather data into personal applications.
**2. Basic Plan:** For users requiring more extensive weather data and higher usage limits, Weatherstack offers a Basic plan starting at a competitive monthly rate. This plan includes access to additional features such as historical weather data, hourly forecasts, and SSL encryption for secure data transmission.
**3. Professional Plan:** The Professional plan is tailored for businesses and enterprises with demanding weather data requirements. It offers advanced features such as real-time weather updates, global weather coverage, and priority support. Pricing for the Professional plan is based on usage volume and specific customization options.
**4. Enterprise Solutions:** For large-scale deployments and custom integration needs, Weatherstack provides enterprise solutions tailored to the unique requirements of each organization. These solutions offer dedicated infrastructure, customizable data endpoints, and service-level agreements (SLAs) to ensure reliability and performance.
Regardless of the chosen pricing tier, Weatherstack's API provides access to a wealth of weather data sourced from trusted meteorological sources worldwide. From current weather conditions and forecasts to historical climate data, users can leverage Weatherstack's API to gain valuable insights and enhance decision-making processes.
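To make this concrete, here is a minimal sketch of how a client might call the current-weather endpoint. The endpoint path follows Weatherstack's documented pattern, but the access key, city, and response values below are placeholders, and the summarizer assumes the documented response shape:

```javascript
// Build a request URL for Weatherstack's "current" endpoint.
// The access key and query city are placeholders, not real credentials.
function buildCurrentWeatherUrl(accessKey, query) {
  const params = new URLSearchParams({ access_key: accessKey, query });
  return `http://api.weatherstack.com/current?${params.toString()}`;
}

// Pull out the fields most dashboards need from a response payload.
function summarizeCurrentWeather(payload) {
  const { location, current } = payload;
  return {
    city: location.name,
    temperature: current.temperature,
    description: current.weather_descriptions[0],
  };
}

// Illustrative payload mirroring the documented response shape:
const sample = {
  location: { name: "London" },
  current: { temperature: 18, weather_descriptions: ["Partly cloudy"] },
};
console.log(buildCurrentWeatherUrl("YOUR_ACCESS_KEY", "London"));
console.log(summarizeCurrentWeather(sample));
```

In a real application you would pass the URL to `fetch` on a server, keeping the access key out of client-side code, since anything shipped to the browser is visible to anyone inspecting network traffic.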
**Benefits of Real-Time Weather Data:**
Access to real-time weather data offers numerous benefits across various industries and applications:
**1. Improved Decision Making:** Real-time weather updates enable businesses to make informed decisions based on current weather conditions. Whether it's optimizing supply chain logistics, scheduling outdoor events, or managing energy resources, accurate weather data is essential for mitigating risks and maximizing efficiency.
**2. Enhanced Safety and Security:** For industries such as aviation, maritime, and emergency management, real-time weather information is critical for ensuring the safety and security of personnel and assets. By monitoring weather patterns in real-time, organizations can proactively respond to adverse conditions and minimize potential disruptions.
**3. Precision Agriculture:** In agriculture, real-time weather data is instrumental in optimizing crop management practices and maximizing yields. By monitoring factors such as temperature, precipitation, and soil moisture levels, farmers can make timely decisions regarding irrigation, planting, and harvesting activities.
**4. Personalized User Experiences:** For mobile applications and online platforms, integrating real-time weather data can enhance user experiences and engagement. Whether it's providing personalized weather forecasts, recommending outdoor activities, or offering weather-based alerts, access to up-to-date weather information adds value to consumer-facing products and services.
In conclusion, Weatherstack's API offers a reliable and cost-effective solution for accessing [real-time weather data](https://weatherstack.com/documentation). With flexible pricing plans and a comprehensive range of features, Weatherstack empowers businesses and developers to leverage weather information for various applications and industries. By harnessing the power of real-time weather data, organizations can enhance decision-making processes, improve operational efficiency, and deliver superior user experiences.
| sameeranthony | |
1,883,667 | Create CSS Animations with AI | In a recent project, our development team faced the challenge of creating micro-interaction... | 0 | 2024-06-10T21:00:04 | https://dev.to/max_prehoda_9cb09ea7c8d07/create-css-animations-with-ai-5b4o | ai, webdev, design, css | In a recent project, our development team faced the challenge of creating micro-interaction animations for Autt's landing page. Traditionally, implementing CSS animations requires some manual tinkering and extensive trial and error. However, the team discovered AI CSS Animations, a powerful tool that revolutionized their workflow.
[AI CSS Animations](https://www.aicssanimations.com/)
By utilizing AI CSS Animations, our developers generated sophisticated CSS transitions effortlessly based on text or voice prompts. This eliminated the need for manual coding, allowing the team to focus on other critical aspects of the project. The results were impressive, with captivating hover effects, smooth scroll animations, and eye-catching transitions that enhanced the user experience.
AI CSS Animations proved to be a game-changer, empowering the team to bring their creative visions to life efficiently. The tool's intuitive interface and seamless integration streamlined the development process, saving countless hours of effort. As more developers adopt this revolutionary tool, it is poised to transform web development, enabling the creation of captivating and dynamic interfaces with unprecedented ease and efficiency. | max_prehoda_9cb09ea7c8d07 |
1,883,662 | shadcn-ui/ui codebase analysis: examples route explained. | In this article, we will learn about examples app route in shadcn-ui/ui. This article consists of the... | 0 | 2024-06-10T20:43:03 | https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-examples-route-explained-58mk | javascript, opensource, nextjs, shadcn | In this article, we will learn about the examples app [route in shadcn-ui/ui](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/examples/layout.tsx). This article consists of the following sections:
1. Where is examples folder located?
2. What is in examples/layout.tsx?
3. Difference between examples/layout.tsx and (app)/page.tsx
4. Sub-routes in examples folder
Where is examples folder located?
---------------------------------
Shadcn-ui/ui uses app router and [examples](https://github.com/shadcn-ui/ui/tree/main/apps/www/app/(app)/examples) folder is located in [(app)](https://github.com/shadcn-ui/ui/tree/main/apps/www/app/(app)), [a route group in Next.js](https://medium.com/@ramu.narasinga_61050/app-app-route-group-in-shadcn-ui-ui-098a5a594e0c).

What is in examples/layout.tsx?
-------------------------------
In any layout.tsx in Next.js, we put common layout elements such as headers, footers, and sidebars; the layout also renders children, which is where page.tsx gets rendered.
The following code snippet is picked from [shadcn-ui/ui](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/examples/layout.tsx):
```js
import { Metadata } from "next"
import Link from "next/link"
import { cn } from "@/lib/utils"
import { Announcement } from "@/components/announcement"
import { ExamplesNav } from "@/components/examples-nav"
import {
PageActions,
PageHeader,
PageHeaderDescription,
PageHeaderHeading,
} from "@/components/page-header"
import { buttonVariants } from "@/registry/new-york/ui/button"
export const metadata: Metadata = {
title: "Examples",
description: "Check out some examples app built using the components.",
}
interface ExamplesLayoutProps {
children: React.ReactNode
}
export default function ExamplesLayout({ children }: ExamplesLayoutProps) {
return (
<div className="container relative">
<PageHeader>
<Announcement />
<PageHeaderHeading className="hidden md:block">
Check out some examples
</PageHeaderHeading>
<PageHeaderHeading className="md:hidden">Examples</PageHeaderHeading>
<PageHeaderDescription>
Dashboard, cards, authentication. Some examples built using the
components. Use this as a guide to build your own.
</PageHeaderDescription>
<PageActions>
<Link href="/docs" className={cn(buttonVariants(), "rounded-[6px]")}>
Get Started
</Link>
<Link
href="/components"
className={cn(
buttonVariants({ variant: "outline" }),
"rounded-[6px]"
)}
>
Components
</Link>
</PageActions>
</PageHeader>
<section>
<ExamplesNav />
<div className="overflow-hidden rounded-[0.5rem] border bg-background shadow">
{children}
</div>
</section>
</div>
)
}
```
It contains the PageHeader, ExamplesNav, and children.
{children} renders the sub-routes within the examples folder.
Difference between examples/layout.tsx and (app)/page.tsx
---------------------------------------------------------
The following code snippet is picked from shadcn-ui/ui
```js
import Image from "next/image"
import Link from "next/link"
import { siteConfig } from "@/config/site"
import { cn } from "@/lib/utils"
import { Announcement } from "@/components/announcement"
import { ExamplesNav } from "@/components/examples-nav"
import { Icons } from "@/components/icons"
import {
PageActions,
PageHeader,
PageHeaderDescription,
PageHeaderHeading,
} from "@/components/page-header"
import { buttonVariants } from "@/registry/new-york/ui/button"
import MailPage from "@/app/(app)/examples/mail/page"
export default function IndexPage() {
return (
<div className="container relative">
<PageHeader>
<Announcement />
<PageHeaderHeading>Build your component library</PageHeaderHeading>
<PageHeaderDescription>
Beautifully designed components that you can copy and paste into your
apps. Accessible. Customizable. Open Source.
</PageHeaderDescription>
<PageActions>
<Link href="/docs" className={cn(buttonVariants())}>
Get Started
</Link>
<Link
target="_blank"
rel="noreferrer"
href={siteConfig.links.github}
className={cn(buttonVariants({ variant: "outline" }))}
>
<Icons.gitHub className="mr-2 h-4 w-4" />
GitHub
</Link>
</PageActions>
</PageHeader>
<ExamplesNav className="[&>a:first-child]:text-primary" />
<section className="overflow-hidden rounded-lg border bg-background shadow-md md:hidden md:shadow-xl">
<Image
src="/examples/mail-dark.png"
width={1280}
height={727}
alt="Mail"
className="hidden dark:block"
/>
<Image
src="/examples/mail-light.png"
width={1280}
height={727}
alt="Mail"
className="block dark:hidden"
/>
</section>
<section className="hidden md:block">
<div className="overflow-hidden rounded-lg border bg-background shadow">
<MailPage />
</div>
</section>
</div>
)
}
```
There are a few differences between (app)/page.tsx and examples/layout.tsx:
1. PageHeader stays the same
2. Hero section content is changed
3. Instead of {children} at the end of the file, (app)/page.tsx contains the code below
```html
<section className="overflow-hidden rounded-lg border bg-background shadow-md md:hidden md:shadow-xl">
<Image
src="/examples/mail-dark.png"
width={1280}
height={727}
alt="Mail"
className="hidden dark:block"
/>
<Image
src="/examples/mail-light.png"
width={1280}
height={727}
alt="Mail"
className="block dark:hidden"
/>
</section>
<section className="hidden md:block">
<div className="overflow-hidden rounded-lg border bg-background shadow">
<MailPage />
</div>
</section>
```
This is how you are able to see the Mail example without any change in the URL when you visit [ui.shadcn.com](http://ui.shadcn.com): it directly loads the MailPage component. Interesting.
Sub-routes in examples folder
-----------------------------
There are sub routes in examples as shown below:

These are the folders used in the examples nav shown in the below image:

> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://github.com/Ramu-Narasinga/build-from-scratch) _and give it a star if you like it._ [_Solve challenges_](https://tthroo.com/) _to build shadcn-ui/ui from scratch. If you are stuck or need help?_ [_solution is available_](https://tthroo.com/build-from-scratch)_._
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/examples/layout.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/examples/layout.tsx) | ramunarasinga |
1,883,661 | REACT features | There are so much to learn about React, that I can’t even begin to explain. I am still trying to... | 0 | 2024-06-10T20:40:41 | https://dev.to/daphneynep/react-features-2kld | | There is so much to learn about React that I can’t even begin to explain it all. I am still trying to figure out if I like it better than JavaScript. But I do know I like the fact that you can use components like building blocks to build our web application. We can write code that looks similar to HTML, which makes it very convenient and easier to understand. There are a lot of great aspects of React that I like so far, but the one I find most useful is being able to return more than one element from a single function by using components. For example:
```js
function multiply(){
  return(
    10 * 2
    2 * 5
  )
}
```
When we are using a regular function, we can only have one return value, so the example above will not work. But in the React version below, you can see that we can return multiple elements from a single return statement by using components. Wrapping them in a div tag lets us return more than one element at once.
```js
function App(){
  return (
    <div>
      <h1>Hi</h1>
      <h3>Bye</h3>
    </div>
  )
}
```
This makes so much more sense to me, because I struggle a bit with JavaScript. In JavaScript, I have to write so many functions to get different return values, and it can get a little confusing for me.
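As a plain-JavaScript aside (not React itself), the same "wrap several things into one" idea works in ordinary functions too: a function still returns once, but that single return value can bundle several results, just like the div bundles several elements:

```javascript
// One return statement, several results: bundle them in an array,
// just as the React example bundles <h1> and <h3> inside one <div>.
function multiplyBoth() {
  return [10 * 2, 2 * 5]; // a single return carrying two values
}

const [first, second] = multiplyBoth();
console.log(first, second); // 20 10
```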
Learning how to use React is still new to me, and I know that with more practice I will have a better understanding of it. What is your favorite feature in React?
| daphneynep | |
330,401 | On Productivity | Personal Productivity vs Generativity – the difference between your team's outcomes with you, vs without you | 0 | 2020-05-08T13:58:11 | https://odone.io/posts/2020-05-08-on-productivity.html | career, productivity, learning | ---
title: On Productivity
description: Personal Productivity vs Generativity – the difference between your team's outcomes with you, vs without you
author: Riccardo
tags: Career, Productivity, Learning
canonical_url: https://odone.io/posts/2020-05-08-on-productivity.html
---
You can keep reading here or [jump to my blog](https://odone.io/posts/2020-05-08-on-productivity.html) to get the full experience, including the wonderful pink, blue and white palette.
---
In my early university years I had reserved space on the wall for diplomas: I was in full exploratory mode. Among others I had certifications for cooking classes, public speaking, international cooperation, freestyle snowboarding, beer brewing.
I never became great at anything and I kept bouncing to the next strange thing on the list. However, the variety equipped me with many different lenses through which I could interpret and interact with the world around me.
Later on, when studies got more difficult, I bumped into the [80/20 principle](https://en.wikipedia.org/wiki/Pareto_principle):
> For many events, roughly 80% of the effects come from 20% of the causes.
It showed me how powerful is to focus on the right twenty percent. Or even better, it taught me not to waste time on the wrong eighty percent. Up until then I was just jumping into things. But armed with the 80/20 principle, I started to include meta-thinking into all my activities.
When I landed my first job, I thought I had the perfect productivity system. I could not understand why people wouldn't do things the way I was. As a result, my first few attempts at leading featured me forcing practices on the team. Unsurprisingly enough, it did not translate to great teamwork.
Recently, I was lucky to bump into Jessica Kerr's ["Generativity"](https://blog.jessitron.com/2019/08/11/generativity/):
> Generativity – the difference between your team's outcomes with you, vs without you.
I heard it all the time that a team is more than the sum of its parts. Still, I kept focusing on my own output, thus encouraging everybody else to do the same.
In practice, I turned the team into the sum of its parts. Even worse, focusing on extreme personal productivity is a strong incentive for meaningless work: more garbage in, more garbage out ([GIGO](https://en.wikipedia.org/wiki/Garbage_in,_garbage_out)).
Generativity, on the other hand, focuses on "outcomes" and not output. Three productive people make three units of garbage, three generative people provide combinatorial units of value.
That's why, I don't want to be a rockstar anymore. Yes, it does feel great to extinguish fires. However, of all the great times I had working in teams, none of them was because I worked with a ninja who moved a lot of tickets to done. Not once.
In the [rhymes of Rayden](https://www.youtube.com/watch?v=WMRxOdaoIVs), the Italian rapper:
> Ciò che conta non sono le vittorie ma avere una persona con cui condividerle.
>
> What counts is not the victories but having someone to celebrate with.
Generativity provides both a chance for success and a team to celebrate with. I'm not going back!
---
[Jessitron](https://jessitron.com/), thank you so much for sharing such a powerful concept together with a clear definition. Being able to give it a name makes all the difference in the world. Looking forward to steal more wisdom!
---
Get the latest content via email from me personally. Reply with your thoughts. Let's learn from each other. Subscribe to my [PinkLetter](https://odone.io#newsletter)! | riccardoodone |
1,883,660 | 🌟 Comprehensive Guide to Firebase Cloud Messaging in React with Vite 🚀 | Firebase Cloud Messaging (FCM) is a powerful tool for sending notifications and messages to your... | 27,668 | 2024-06-10T20:39:55 | https://dev.to/shahharsh/comprehensive-guide-to-firebase-cloud-messaging-in-react-with-vite-abm | react, firebase, frontend, vite | Firebase Cloud Messaging (FCM) is a powerful tool for sending notifications and messages to your users, keeping them engaged and informed. This guide will take you through the process of setting up FCM in your React application using Vite, and we'll also cover how to obtain the necessary FCM keys to test notifications. Let's get started! 🎉
## Table of Contents 📑
1. [Introduction to Firebase Cloud Messaging](#introduction-to-firebase-cloud-messaging)
2. [Creating a Firebase Project 🔥](#creating-a-firebase-project)
3. [Setting Up a Vite + React Project 🛠️](#setting-up-a-vite--react-project)
4. [Integrating Firebase in React 📲](#integrating-firebase-in-react)
- [Installing Firebase SDK 📦](#installing-firebase-sdk)
- [Configuring Firebase in React ⚙️](#configuring-firebase-in-react)
- [Requesting Notification Permissions 🚦](#requesting-notification-permissions)
- [Handling Token Generation 🔑](#handling-token-generation)
5. [Obtaining FCM Key in React 🔑](#obtaining-fcm-key-in-react)
- [Obtaining VAPID Key from Firebase 🔐](#obtaining-vapid-key-from-firebase)
6. [Sending Notifications with FCM 📧](#sending-notifications-with-fcm)
- [Using Firebase Console 🖥️](#using-firebase-console)
- [Sending Notifications Programmatically 💻](#sending-notifications-programmatically)
7. [Receiving Notifications in React 📥](#receiving-notifications-in-react)
- [Handling Foreground Notifications 🌐](#handling-foreground-notifications)
- [Handling Background Notifications 🕶️](#handling-background-notifications)
8. [Best Practices and Security 🛡️](#best-practices-and-security)
9. [Troubleshooting and Common Issues 🐞](#troubleshooting-and-common-issues)
10. [Conclusion 🎉](#conclusion)
## Introduction to Firebase Cloud Messaging
Firebase Cloud Messaging allows you to send notifications and messages across platforms like Android, iOS, and the web. It’s an effective way to keep users engaged with timely updates and notifications. 🚀
## Creating a Firebase Project 🔥
1. **Visit Firebase Console:** Go to the [Firebase Console](https://console.firebase.google.com/).
2. **Create a New Project:** Click "Add project" and follow the prompts to create a new project. 🎯
3. **Enable Cloud Messaging:** In Project Settings > Cloud Messaging, ensure Cloud Messaging is enabled. 📬
4. **Add a Web App:** Register your web app in the Firebase console and note the configuration details provided.
## Setting Up a Vite + React Project 🛠️
Vite is a modern build tool that enhances the development experience. Here’s how to set up a Vite + React project:
1. **Create a Vite Project:** Open your terminal and run:
```bash
npm create vite@latest my-fcm-app -- --template react
cd my-fcm-app
npm install
```
2. **Start the Development Server:** Run:
```bash
npm run dev
```
This starts a development server and opens your React app in the browser. 🎉
## Integrating Firebase in React 📲
### Installing Firebase SDK 📦
First, install the Firebase SDK:
```bash
npm install firebase
```
### Configuring Firebase in React ⚙️
Create a `firebase.js` file in the `src` directory:
```javascript
// src/firebase.js
import { initializeApp } from "firebase/app";
import { getMessaging } from "firebase/messaging";
const firebaseConfig = {
apiKey: "YOUR_API_KEY",
authDomain: "YOUR_PROJECT_ID.firebaseapp.com",
projectId: "YOUR_PROJECT_ID",
storageBucket: "YOUR_PROJECT_ID.appspot.com",
messagingSenderId: "YOUR_MESSAGING_SENDER_ID",
appId: "YOUR_APP_ID",
};
const app = initializeApp(firebaseConfig);
const messaging = getMessaging(app);
export { messaging };
```
Replace placeholders with your Firebase configuration values. 🧩
### Requesting Notification Permissions 🚦
Create a `Notification.js` component to request notification permissions:
```javascript
// src/Notification.jsx
import React, { useEffect } from 'react';
import { messaging } from './firebase';
import { getToken } from "firebase/messaging";
const Notification = () => {
useEffect(() => {
const requestPermission = async () => {
try {
const token = await getToken(messaging, { vapidKey: 'YOUR_VAPID_KEY' });
if (token) {
console.log('Token generated:', token);
// Send this token to your server to store it for later use
} else {
console.log('No registration token available.');
}
} catch (err) {
console.error('Error getting token:', err);
}
};
requestPermission();
}, []);
return <div>Notification Setup 🚀</div>;
};
export default Notification;
```
Replace `'YOUR_VAPID_KEY'` with your VAPID key from the Firebase console. 🔑
### Handling Token Generation 🔑
Ensure the token is securely sent to your server for later use:
```javascript
// src/Notification.jsx
if (token) {
console.log('Token generated:', token);
fetch('/save-token', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ token }),
});
}
```
## Obtaining FCM Key in React 🔑
To test notifications, you need the FCM key, also known as the Server Key. Here’s how to obtain it:
### 1. Access Firebase Console
1. Go to the [Firebase Console](https://console.firebase.google.com/).
2. Log in with your Google account and select your project.
### 2. Navigate to Project Settings
1. Click on the gear icon ⚙️ next to "Project Overview" in the left-hand menu.
2. Select "Project settings" from the dropdown menu.
### 3. Find Cloud Messaging Settings
1. Click on the **Cloud Messaging** tab.
2. Under the **"Project credentials"** section, locate the **Server key**.
3. Copy the Server key to use it for testing.
### Using FCM Key in React
Store this key securely and use it for testing notifications in your React application.
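As an illustration of how that Server key is used, the sketch below assembles (but does not send) a request to FCM's legacy HTTP endpoint. The key and device token are placeholders, and Google has been phasing out the legacy server-key API in favor of the HTTP v1 API, so check which one your project still supports before relying on this:

```javascript
// Assemble a test request for FCM's legacy HTTP API.
// SERVER_KEY and DEVICE_TOKEN are placeholders you must replace.
function buildFcmRequest(serverKey, deviceToken) {
  return {
    url: "https://fcm.googleapis.com/fcm/send",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `key=${serverKey}`,
      },
      body: JSON.stringify({
        to: deviceToken,
        notification: { title: "Test", body: "Hello from FCM!" },
      }),
    },
  };
}

const req = buildFcmRequest("SERVER_KEY", "DEVICE_TOKEN");
// To actually send it: fetch(req.url, req.options)
console.log(req.url);
```

Keep the Server key on the backend only; embedding it in the React bundle would let anyone send notifications on your behalf.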
## Sending Notifications with FCM 📧
### Using Firebase Console 🖥️
1. **Go to Cloud Messaging:** In the Firebase Console, navigate to Cloud Messaging.
2. **Compose a Message:** Click "Send your first message" and fill in the details.
3. **Target Users:** Choose your web app and specify the users or topics.
4. **Send the Message:** Click "Send message." 🚀
### Sending Notifications Programmatically 💻
To send notifications programmatically, use the Firebase Admin SDK:
```javascript
// Example with Node.js
const admin = require('firebase-admin');
admin.initializeApp({
credential: admin.credential.applicationDefault(),
});
const message = {
notification: {
title: 'Hello!',
body: 'You have a new message.',
},
token: 'DEVICE_REGISTRATION_TOKEN',
};
admin.messaging().send(message)
.then(response => {
console.log('Message sent:', response);
})
.catch(error => {
console.error('Error sending message:', error);
});
```
Replace `'DEVICE_REGISTRATION_TOKEN'` with the token you received in your React app. 🔄
## Receiving Notifications in React 📥
### Handling Foreground Notifications 🌐
Listen for notifications while the app is open using `onMessage`:
```javascript
// src/Notification.jsx
import { onMessage } from "firebase/messaging";
useEffect(() => {
const unsubscribe = onMessage(messaging, (payload) => {
console.log('Message received. ', payload);
// Customize notification display here
});
return () => {
unsubscribe();
};
}, []);
```
### Handling Background Notifications 🕶️
Create `firebase-messaging-sw.js` in the `public` folder:
```javascript
// public/firebase-messaging-sw.js
importScripts('https://www.gstatic.com/firebasejs/8.10.0/firebase-app.js');
importScripts('https://www.gstatic.com/firebasejs/8.10.0/firebase-messaging.js');
firebase.initializeApp({
apiKey: "YOUR_API_KEY",
authDomain: "YOUR_PROJECT_ID.firebaseapp.com",
projectId: "YOUR_PROJECT_ID",
storageBucket: "YOUR_PROJECT_ID.appspot.com",
messagingSenderId: "YOUR_MESSAGING_SENDER_ID",
appId: "YOUR_APP_ID",
});
const messaging = firebase.messaging();
messaging.onBackgroundMessage((payload) => {
console.log('Received background message ', payload);
// Customize notification here
});
```
Register the service worker in `main.jsx`:
```javascript
// src/main.jsx
if ('serviceWorker' in navigator) {
navigator.serviceWorker.register('/firebase-messaging-sw.js')
.then((registration) => {
console.log('Service Worker registered with scope:', registration.scope);
})
.catch((error) => {
console.error('Service Worker registration failed:', error);
});
}
```
## Best Practices and Security 🛡️
1. **Secure Your Tokens:** Ensure messaging tokens are securely transmitted and stored.
2. **Handle Permissions Wisely:** Clearly communicate why you need notification permissions to build trust.
3. **Optimize Notification Delivery:** Avoid spamming users with irrelevant notifications. Target notifications for relevance and timeliness. 📆
## Troubleshooting and Common Issues 🐞
1. **Token Issues:** Ensure the Firebase configuration is correct and you are using the correct VAPID key.
2. **Service Worker Issues:** Make sure `firebase-messaging-sw.js` is correctly located in the `public` directory.
3. **No Notifications:** Verify notification permissions and check if they are blocked by the browser or OS.
### Debugging the MIME Type Error
- **Error Explanation:** The error means `firebase-messaging-sw.js` was served with the `text/html` MIME type, which usually points to a 404: the file was not found, so the server returned an HTML error page instead of the script.
- **Solution Steps:**
- Ensure the file exists in the `public` directory.
- Check the path and file name in your registration code.
- Verify access to the file in your browser.
## Conclusion 🎉
Integrating Firebase Cloud Messaging with your Vite-powered React application allows you to engage users with notifications. By following this guide, you can set up and test notifications effectively. Happy coding! 📲🚀
---
**Ready to notify? Dive in and make sure your app keeps users informed and engaged! 💬📲** | shahharsh |
1,883,659 | Future-proof software development with the Azure Serverless Modulith | When developing modern software solutions, architects and developers are faced with the challenge of... | 0 | 2024-06-10T20:39:36 | https://dev.to/florianlenz/future-proof-software-development-with-the-azure-serverless-modulith-12bl | serverless, azure, azurefunctions, eventdriven | When developing modern software solutions, architects and developers are faced with the challenge of designing systems that are scalable, maintainable and cost-effective. Traditionally, monolithic architectures offer the advantage of simplicity in development and deployment, but often reach their limits in terms of scalability and maintainability. Microservices solve many of these problems, but bring their own challenges such as increased complexity and difficult coordination.
## What is a monolith?
Monolithic architectures are characterized by the fact that all components of an application are closely linked and provided as a single unit. This can simplify development and deployment, but leads to difficulties in scalability and maintainability as the application grows.
## Modularity in the monolith
Modularity allows a monolith to be divided into clearly delineated modules. These modules can be developed and maintained independently of each other, which improves maintainability and enables more targeted scaling of individual modules.
## The Serverless Modulith: an innovative solution
The Serverless Modulith combines the advantages of modularity with the strengths of serverless computing. Each module is provided as an independent serverless function, which offers several advantages:
**Scalability**: Each module can be scaled independently, based on specific requirements and load.
**Cost efficiency**: Thanks to usage-based billing, companies only pay for the computing time they actually use.
**Reduced complexity**: Developers can focus on the business logic as the cloud provider manages the infrastructure.
**Faster time-to-market**: Individual modules can be developed and deployed independently, which speeds up the introduction of new functions.
## Architecture overview
A serverless modulith with Azure Functions is a structured, modularized application in which each functional unit is deployed as an independent serverless function. Here is an example of such a structure:
**API Gateway**: An Azure API Management Gateway serves as a central point for all incoming requests and forwards them to the corresponding Azure Functions. It handles authentication, authorization and rate limiting.
**Logic**: Each module of the application is implemented as a separate Azure Function. These modules can perform tasks such as data processing, API endpoints, background tasks or event-driven processes.
**Communication**: The modules communicate with each other via Azure Service Bus or Azure Event Grid to enable loose coupling and asynchronous processing.
**Database**: Azure Cosmos DB, Azure SQL Database or Azure Table Storage can be used as central databases that the individual modules access.

## Challenges when using serverless moduliths
Although the serverless modulith offers many advantages, there are also challenges that need to be considered:
### Vendor lock-in
When using cloud-based serverless services such as Azure Functions, AWS Lambda or Google Cloud Functions, a certain degree of vendor lock-in is unavoidable. Architecture and implementation depend heavily on the specific services and APIs of the chosen cloud provider. However, it is important to emphasize that vendor lock-in is not inherently bad. The use of standard software solutions such as Office 365 or Teams also leads to vendor lock-in. It is crucial that companies carefully consider whether this dependency could be problematic in the future.
One approach to minimizing risk is to use hybrid solutions such as Docker containers in a serverless environment on Azure. These containers can be operated in different cloud environments or even on-premises, which increases flexibility and reduces the risk of lock-in.
### Cold starts
Cold starts are another common problem with serverless architectures. These occur when functions are reactivated after a period of inactivity and cause additional latency. Modern hosting plans such as the Azure Functions Premium Plan or AWS Lambda Provisioned Concurrency offer solutions that significantly reduce this problem. These plans make it possible to keep a certain number of instances warm, which minimizes request latency and improves response times.
## Conclusion
The Serverless Modulith is an innovative solution for modern software development requirements. It combines the advantages of modularity and serverless computing and enables companies to work more flexibly, cost-effectively and scalably. At the same time, the complexity of infrastructure management is reduced. Despite the challenges, such as vendor lock-in and cold starts, the advantages outweigh the disadvantages, especially if suitable strategies are implemented to overcome these challenges.
## Resources:
- [Azure Serverless Modulith](https://www.florian-lenz.io/blog/serverless-modular-monolithen-der-wegbereiter-fuer-agile-entwicklungen)
- [Azure Serverless Modulith Video](https://www.youtube.com/watch?v=5kbJSH-dKKo) | florianlenz |
1,883,612 | Enforcing 'noImplicitAny' on callbacks to generic functions | One of the first errors every TypeScript developer encounters is the famous noImplicitAny, which... | 0 | 2024-06-10T20:38:24 | https://dev.to/seasonedcc/enforcing-noimplicitany-on-callbacks-to-generic-functions-7f1 | typescript | One of the first errors every TypeScript developer encounters is the famous `noImplicitAny`, which happens when you don't specify a type for a variable. For example:
```ts
function fn(s) {
// ^? Parameter 's' implicitly has an 'any' type.
console.log(s.subtr(3));
}
```
This is a great thing because it forces you to think about the types of your variables and functions, helping you avoid bugs.
## The problem
When developing your own functions that receive a callback, you might lose that error checking if you specify the type of the callback as `(...args: any[]) => any`. This type indicates that the function can receive any number of arguments of any type.
```ts
// @ts-expect-error
function fn(a) {}
const wrapper = (f: (...args: any[]) => any) => f
const test = wrapper((a) => {})
// ^? any
// TS won't complain
```
If you don't know, `(...args: any[]) => any` is one of the most popular ways to type a function. The `Function` primitive is not recommended because it can accept anything that is callable, such as a constructor, a callable object, or even a non-callable class such as `Map`.
But this use case creates a bad developer experience because you're losing one of the basic benefits of the type system. An `any` can easily go unnoticed and cause bugs further down the line.
## The quest
Last week, I started a [Twitter thread](https://twitter.com/gugaguichard/status/1798802769996136459) about this and asked for help from some friends and TS experts.
I was working on our library [composable-functions](https://github.com/seasonedcc/composable-functions) on an [issue by @danielweinmann](https://github.com/seasonedcc/composable-functions/issues/146) that was experiencing this problem.
[The problem](https://tsplay.dev/NVX0xW) seemed impossible and the solutions would require some trade-offs.
- Forbidding the use of `any` in the callback's arguments was reasonable since the library I was working on is for strict TypeScript projects and `any` is probably out of the question. However, it would be a breaking change for the users and quite strict.
- Swapping `(...args: any[]) => any` for `(...args: never[]) => unknown`. This was suggested by our contributor @jly36963 and comes from a thread on the [TS repo itself](https://github.com/microsoft/TypeScript/issues/37595#issuecomment-604103234). It seems nice because if you don't declare the types, the arguments are gonna be `never`, so you can't use them within the function. There was one concern, though: this solution causes problems with default arguments, as you can read in the last comment on that thread.
This is when I decided to go bold and try that `Function` primitive!
## The solution
The solution was to use it in the function parameter but restrict it anywhere else down the line. We only need to enforce `noImplicitAny` in the declaration of the callback, but we have other ways to enforce a narrower type, such as `(...args: any[]) => any`, elsewhere.
The diff was quite simple:
```diff
-function composable<T extends (...args: any[]) => any>(fn: T): Composable<T> {
+function composable<T extends Function>(
+ fn: T,
+): Composable<T extends (...args: any[]) => any ? T : never> {
```
And it worked! The error was back and the user would have to specify the types of the arguments of the callback.
If the user tried to use something which isn't an actual function, they would get a `Composable<never>` which is impossible to work with.
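Putting the pieces together, here is a self-contained sketch of the trick. Note that `Composable` is reduced here to the bare function type, which is my simplification and not the library's real definition:

```typescript
// The trick from the PR, reduced to a self-contained sketch.
// `Composable` is simplified to the bare function type here; the real
// library wraps more behavior around it.
type Composable<T> = T;

function composable<T extends Function>(
  fn: T,
): Composable<T extends (...args: any[]) => any ? T : never> {
  return fn as unknown as Composable<
    T extends (...args: any[]) => any ? T : never
  >;
}

// Annotated parameters keep their types and compile fine:
const double = composable((n: number) => n * 2);

// Under `noImplicitAny`, an untyped callback now errors as desired:
//   const bad = composable((a) => a);
//   //                      ^ Parameter 'a' implicitly has an 'any' type.

console.log(double(21)); // 42
```

The conditional type is what turns non-functions into `Composable<never>`, while `T extends Function` keeps `noImplicitAny` checking alive on the callback's parameters.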
You can check the [final PR here](https://github.com/seasonedcc/composable-functions/pull/147)
## Conclusion
There were two motivations to write this post:
- I couldn't find any information about this on the internet.
- I love simple solutions that require you to think outside the box and break some rule as long as you cover up for them.
I hope this post can help someone else who is struggling with the same problem. | gugaguichard |
1,883,658 | Overcoming Childhood Struggles and Embracing a Career in Software Engineering | I’ve struggled with studying and grasping things quickly for as long as I can remember. As a kid, I... | 0 | 2024-06-10T20:34:27 | https://dev.to/bmoreinspiring/overcoming-childhood-struggles-and-embracing-a-career-in-software-engineering-63c | learning, webdev, beginners | I’ve struggled with studying and grasping things quickly for as long as I can remember. As a kid, I just couldn’t sit still for hours on end, hitting the books or doing things that didn’t interest me. High school was tough; I barely scraped through my classes and even had to deal with a health scare during my senior year. I missed the last two months of school, which was a relief in some ways, but it made graduating on time a real challenge.
But even when school felt like a mountain I couldn’t climb, technology was my escape. I got into online gaming and tech early on, mostly because of my cousin, who was a huge influence on me. I looked up to him a lot, even if I had a funny way of showing it sometimes. My passion for tech allowed me to skip college and jump straight into a tech career. I didn’t bother with certifications; my learning came from diving in headfirst, getting hands-on experience, and being hungry to learn more.
### Discovering a Passion for Programming
Programming has always fascinated me. The idea of coming up with an idea and turning it into something real that works is amazing. That’s why I’m writing this blog post today. I’ve always dreamed of becoming a software engineer. Despite tinkering with programming for years, I’ve never truly mastered it the way I wanted to.
A lot of this stems from my childhood struggles with studying. I’m constantly worried about failing a coding interview, not knowing key terminologies, and dealing with imposter syndrome. These fears, born out of my early experiences, have stuck with me and often held me back from fully committing to this career.
### Taking the Plunge: My Commitment to Software Engineering
Today, I’m making a promise to myself: I’m going to become a software engineer. This journey requires mental toughness, a solid learning structure, and the determination to push past the fears that have been holding me back.
This isn’t just about picking up new skills; it’s about proving to myself that I can overcome my past struggles and build a successful career in software engineering.
### A Message to Future Employers
To any future employers reading this, I want you to know that I’m ready to work hard and learn everything I need to succeed. My unconventional path has given me resilience, adaptability, and a relentless drive to achieve my goals. I’m excited to bring these qualities to your team and grow as a software engineer.
I truly believe that this career can transform my life and I hope to inspire others who have faced similar challenges. If you’ve battled with anxiety and self-doubt because of childhood experiences, I want you to know it’s never too late to chase your dreams and create a new path for yourself.
### Looking Ahead: The Importance of Consistency
From this point forward, my goal is to stay consistent, no matter what doubts or fears come up. This blog will be my space to document the journey, reflect on the challenges, and celebrate the milestones.
Here’s to embracing the unknown, overcoming past fears, and stepping into a future filled with potential. My journey to becoming a software engineer starts now, and I’m ready for the challenge.
---
This post marks a big step for me—a declaration that I’m committed to a career in software engineering. Follow along as I navigate this path, share the lessons I learn, and celebrate the wins. Let’s face the future with determination and a willingness to learn together. | bmoreinspiring |
1,883,655 | LLM is unfair | Well... At least, that's how I feel. As a non-native English speaker, inefficiency arises even... | 0 | 2024-06-10T20:33:14 | https://dev.to/swimmingpolar/llm-is-unfair-53p3 | llm, cat, promptengineering, language | Well... At least, that's how I feel.
As a non-native English speaker, inefficiency arises even before starting to write instructions or prompts. Why? Obviously, the whole data corpus fed into AI models was mostly written in English. So what? It seems quite a simple job these days. Why don't I go ahead and use one of many AI translators and voila!
Actually, it isn't just about the translation. It's more of a contextual problem. In the LLM world, this context matters. The more I get my hands on various AI products, the more I get the feeling that they (AIs) and I don't get along very well. Yes, sometimes we are not on the same page. I don't know where we missed each other, but somewhere in the middle of the conversation, we walked across each other and didn't notice at all.
This happens simply because, in essence, cultures differ. While they seem to mean the same thing on the surface, there are cases where they do not. For a barely satisfying example, "what's up?" can be interpreted as a casual greeting, while this is not the case in my country. I'm familiar with the phrase when "what" is actually "going" on in my language. The comparison may not satisfy everyone, but it should definitely convey what I'm trying to say.
Nuance. This subtle difference is what creates the aforementioned inefficiency. If you are not aware of this, you will forever be getting slightly off but sufficing answers that you don't get to recognize what's wrong. What if you are aware of this? I don't know why Ouroboros comes to my mind. I can't get away from questions.
- Is this what I want to ask? Are you sure?
- Will this be seen enough as what I intended to ask LLMs?
Just whining about having a hard time configuring the LLM to get better responses. So I figured I'd vent a bit. | swimmingpolar |
1,883,656 | LEGITIMATE CRYPTO RECOVERY AGENCY-SPYWARE CYBER | Choosing Spyware Cyber was one of the best decisions I've ever made. From the moment I reached out to... | 0 | 2024-06-10T20:26:50 | https://dev.to/joan_anneliss_ca8f1e9fdad/legitimate-crypto-recovery-agency-spyware-cyber-290a | recover, recovery, sucess | Choosing Spyware Cyber was one of the best decisions I've ever made. From the moment I reached out to them, I knew I was in good hands. Their professionalism, expertise, and unwavering commitment to their clients set them apart from the rest. When I found myself entangled in a web of deception orchestrated by an imposter posing as a legitimate crypto investment company, I felt lost and betrayed. Months of relentless payments to a fraudulent intermediary broker had left me feeling helpless and disillusioned. It was a devastating realization that shook me to the core. However, hope emerged when my trusted uncle recommended Spyware Cyber. With his reassurance and their stellar reputation, I decided to reach out to them for help. And I am so grateful that I did. From the outset, Spyware Cyber approached my case with a sense of urgency and determination. They meticulously analyzed the evidence I provided and quickly uncovered the truth – I had been duped by a counterfeit website. But instead of dwelling on the past, they focused on finding a solution. In just three days, Spyware Cyber managed to retrieve an incredible 95% of the funds that had been stolen from me. It was a monumental success that exceeded all my expectations. Their tireless efforts and unwavering dedication turned what seemed like an insurmountable loss into a story of hope and redemption. But what truly sets Spyware Cyber apart is their unwavering commitment to their clients' well-being. Throughout the entire process, they provided me with constant support and reassurance, guiding me every step of the way. 
Their empathy and professionalism made all the difference during a time of great distress. I cannot thank Spyware Cyber enough for their outstanding service and unwavering support. As I share my story with others, I hope to inspire anyone who has fallen victim to fraud to seek help and never lose hope. With Spyware Cyber by your side, justice prevails, and there is light at the end of the tunnel. You can also reach Spyware Cyber and bear witness to my testimony:
WhatsApp:+1 878 271 4102
Telegram: Spyware Cyber
Email:spyware@cybegal.com
web:www.cybegal.com
| joan_anneliss_ca8f1e9fdad |
1,883,651 | Studies in Quality Assurance (QA) - Development Methodologies | Waterfall: Waterfall is a sequential methodology where each phase must be completed before the... | 0 | 2024-06-10T20:22:20 | https://dev.to/julianoquites/estudos-em-quality-assurance-qa-metodologias-de-desenvolvimento-a4p | scrum, kanban, agile, qa | **Waterfall**: Waterfall is a sequential methodology where each phase must be completed before the next one begins. It is ideal for projects with well-defined requirements and little need for change.
- Advantages of Waterfall:
- Clear, predictable structure from the start of the project.
- Detailed documentation at every phase of the process.
- Disadvantages of Waterfall:
- Difficulty handling changes to requirements during development.
- Long development cycles without partial deliveries, increasing the risk of late adaptations.
**Agile**: Agile is a methodology focused on fast, incremental deliveries. Based on the Agile Manifesto, it prioritizes collaboration, flexibility, and customer satisfaction.
**Feature-based development**: Development is organized around user stories, which are grouped into epics (Epics). The cycle follows the steps: Epic → Story → Develop → Test → Deploy → Review, lasting from two weeks to two months.
- Advantages of Agile:
- Constant, effective communication between the team and stakeholders.
- Testing at every phase of development, resulting in better product quality.
- Disadvantages of Agile:
- Frequent context switching can lead to rapid burnout.
- Fragmented deliveries can introduce unexpected bugs and integration gaps.
**Scrum**: Known as "Agile on steroids", Scrum is an agile approach focused on small teams of 6 to 9 members. Tasks are organized into sprints of 2 to 4 weeks, with 2 weeks being the most common.
**Roles and Practices in Scrum**:
- Scrum Master: Facilitates the process and removes impediments.
- Stand Up: Daily morning meeting to align on progress.
- Measuring Difficulty: Tasks are estimated with values such as 1, 2, 3, 5, 8, 13, indicating relative difficulty.
**Kanban/Lean**: Kanban, inspired by Lean, aims to eliminate waste and continuously improve the workflow.
**Kanban Principles**:
- Eliminate waste: Focus on efficiency and effective use of resources.
- Test early: Early identification of problems.
- Small deliveries: Frequent releases of small improvements.
- Flow management: Continuous monitoring of progress.
- Visible progress: Transparency through the use of Kanban boards.
- Work-in-progress (WIP) limit: Avoid overloading the team, with no fixed sprints. | julianoquites |
1,883,637 | Creating Accessible Web Applications: A Developer’s Guide | Hey there! Let's talk about making your web apps not just good, but great—for everyone. Yes, I’m... | 0 | 2024-06-10T20:21:22 | https://dev.to/buildwebcrumbs/creating-accessible-web-applications-a-developers-guide-49ik | a11y, webdev, beginners, programming | Hey there! Let's talk about making your web apps not just good, but great—for everyone. Yes, I’m talking about making them accessible. Ensuring that your applications are accessible is very important. **Why?** Because it’s all about inclusivity.
Imagine this: What if you threw a party and some of your friends couldn’t get through the door? That’s exactly what happens when web applications aren’t accessible—**some users get left out**. This guide is here to walk you through the essentials of web accessibility, so you can open that door wide for everyone, making sure no one misses out on the digital party!
Here is what we will be talking about:
- **Understanding Web Accessibility**
- **Key Principles of Accessibility**
- **Practical Steps to Improve Accessibility**
- **Testing for Accessibility**
- **Conclusion**
---
## Understanding Web Accessibility
**What is Web Accessibility?**
Web accessibility means that websites, tools, and technologies are designed and developed so that people with disabilities can use them. More specifically, people can perceive, understand, navigate, and interact with the web, and contribute to the web.
**Why is it Important?**
- **Ethical Responsibility**: Ensuring equal access to information and functionalities.
- **Legal Compliance**: Adhering to various international standards and laws like the ADA (Americans with Disabilities Act) and WCAG (Web Content Accessibility Guidelines).
- **Broader Reach**: Expanding your user base to include people with disabilities, which can enhance brand image and increase potential market share.
---
## Key Principles of Accessibility
The WCAG outlines four principles that provide the foundation for web accessibility: Perceivable, Operable, Understandable, and Robust (POUR). Here’s how you can apply these principles:
1. **Perceivable**: Information and user interface components must be presentable to users in ways they can perceive.
- Provide text alternatives for non-text content.
- Use captions and other alternatives for multimedia.
- Create content that can be presented in different ways without losing information or structure (e.g., simpler layout).
2. **Operable**: User interface components and navigation must be operable.
- Make all functionality available from a keyboard.
- Give users enough time to read and use content.
- Do not use content that causes seizures and physical reactions.
3. **Understandable**: Information and the operation of the user interface must be understandable.
- Make text readable and understandable.
- Make content appear and operate in predictable ways.
- Help users avoid and correct mistakes.
4. **Robust**: Content must be robust enough that it can be interpreted reliably by a wide variety of user agents, including assistive technologies.
- Maximize compatibility with current and future user tools.
---
## Practical Steps to Improve Accessibility
### 1. Developing with Accessibility in Mind
- **Semantic HTML**: Use HTML5 semantic elements like `<article>`, `<aside>`, `<figure>`, `<footer>`, `<header>`, `<nav>`, and `<section>` to enhance the meaning of the information on your web pages.
- **ARIA (Accessible Rich Internet Applications)**: Use ARIA roles and properties to enhance the accessibility of your web content, especially dynamic content and advanced user interface controls developed with Ajax, HTML, JavaScript, and related technologies.
- **Accessible Forms**: Label elements clearly, use fieldset and legend for grouping, and ensure every form element has a tab order.
- **Keyboard Navigation**: Ensure that all interactive elements are operable through keyboard interfaces.
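The keyboard-navigation point can be reduced to a small, testable rule: a custom control should respond to the same keys a native `<button>` does (Enter and Space). Here's a minimal TypeScript sketch; the helper name is ours, not part of any standard API:

```typescript
// Keys that should activate a button-like custom control, mirroring
// native <button> behavior. KeyboardEvent.key reports Space as " ".
const ACTIVATION_KEYS = new Set(["Enter", " "]);

// Pure helper: decide whether a keydown's key should trigger the control.
function shouldActivate(key: string): boolean {
  return ACTIVATION_KEYS.has(key);
}

// Wiring it to a DOM element would look like this (not run here):
//   el.setAttribute("role", "button");
//   el.setAttribute("tabindex", "0");
//   el.addEventListener("keydown", (e) => {
//     if (shouldActivate(e.key)) el.click();
//   });

console.log(shouldActivate("Enter"), shouldActivate("Tab")); // true false
```

Keeping the decision in a pure function like this also makes the accessibility behavior trivially unit-testable.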
### 2. Testing for Accessibility
- **Automated Testing Tools**: Use tools like Axe, Lighthouse, and WAVE to find and fix potential accessibility issues.
- **Maintain and Update Accessibility Practices**: Web technologies are always evolving, and so are accessibility standards. Regularly update your knowledge and your site.
- **User Testing**: While simulations and automated tests are valuable, real feedback makes a significant difference. **If possible, involve people with disabilities in your testing process**. They can provide firsthand insights into the usability of your application, often uncovering issues that automated tests and simulations might miss. This direct feedback is invaluable in creating truly accessible and user-friendly web environments.
---
## Conclusion
Creating accessible web applications is a continuous effort that enhances user experience and broadens your audience.
By following the guidelines and steps outlined in this guide, we developers can contribute to a more inclusive digital world.
**Remember, accessibility is not a feature; it's a fundamental aspect of web development**.
Start incorporating these accessibility guidelines into your projects today. Let's make the web accessible for everyone!
---
At WebCrumbs, we believe that technology should empower all individuals, regardless of their physical limitations or disabilities.
{% cta https://www.webcrumbs.org/waitlist %} JOIN US {% endcta %}
--- | opensourcee |
1,883,624 | Migrating from CRA to Vite | The much-loved (or hated) Create React App (CRA) package has been unmaintained since 2023, see more... | 0 | 2024-06-10T20:16:53 | https://dev.to/mrtinsvitor/migrando-do-cra-para-vite-1201 | javascript, react, braziliandevs, vite | The much-loved (or hated) Create React App (CRA) package has been unmaintained since 2023, [see more here](https://github.com/reactjs/react.dev/pull/5487#issuecomment-1409720741). This means it will no longer receive any updates, making it more obsolete by the day. Although it is still a valid option for some cases, it is no longer recommended for new applications, nor for existing ones. The React documentation itself no longer recommends using Create React App. The React team strongly recommends meta-frameworks such as Next.js or Remix for new projects.
If it doesn't make sense, or your team doesn't have enough resources to migrate to a more complex framework such as Next.js, it may be more interesting to consider simpler options, such as Vite.
## What is Vite?
Vite, "fast" or "quickly" in French, is a modern, fast build tool that integrates with React. It offers a faster, more efficient, and easier-to-use development experience than Create React App. Learn more about [Vite here](https://vitejs.dev/).
### 1. Installing dependencies and updating package.json
First, we need to install a few dependencies required to use Vite.
```bash
npm install --save-dev vite @vitejs/plugin-react vite-tsconfig-paths vite-plugin-svgr
```
Next, update the scripts in your `package.json` to use Vite.
```json
{
"scripts": {
"start": "vite",
"build": "tsc && vite build",
"serve": "vite preview"
}
}
```
### 2. Removing Create React App
Now we need to completely remove Create React App from your project. To do that, we can simply uninstall the `react-scripts` package with the command:
```bash
npm uninstall react-scripts
```
If you are using TypeScript, also remove the `react-app-env.d.ts` file, which should be at the root of your project or in the `src` folder.
### 3. Configuring Vite
To configure Vite, create a file called `vite.config.ts` at the root of your project and add the following code:
```typescript
// vite.config.ts
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";
import svgr from "vite-plugin-svgr";
import tsconfigPaths from "vite-tsconfig-paths";
export default defineConfig({
plugins: [react(), svgr(), tsconfigPaths()],
  // If you want to keep an environment similar to Create React App, you can add the following settings:
server: {
host: "localhost",
port: 3000,
},
base: "/",
build: {
    outDir: "build", // Vite outputs the build to the "dist" folder by default
},
});
```
If you are using TypeScript, create a `vite-env.d.ts` file in your `src` folder to define Vite's declarations for TypeScript:
```typescript
// ./src/vite-env.d.ts
/// <reference types="vite/client" />
```
### 4. Updating index.html
Finally, you need to move the `index.html` file to the root of your project. It is probably in the `public` folder, as is the default in Create React App. You also need to remove all references to the `%PUBLIC_URL%` variable.
For example, in the case of the favicon link:
```html
<!-- Remove -->
<link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
<!-- Add -->
<link rel="icon" href="favicon.ico" />
```
---
With that, you have what you need to run your React project with Vite. However, it is common to run into a few errors if you are migrating a larger, more complex application. Below are some of the most common problems and how to solve them.
### Environment variables
By default, when we create a React project with CRA, we can access variables through the `process.env` object. However, with Vite we cannot access these variables the same way, because Vite exposes its variables on the `import.meta.env` object. To solve this problem, just add the following code to your `vite.config.ts`:
```typescript
// vite.config.ts
// note: `mode` comes from the defineConfig callback
export default defineConfig(({ mode }) => ({
  ...
  define: {
    'process.env': {},
    'process.env.NODE_ENV': JSON.stringify(mode),
  },
  ...
}));
```
Additionally, with CRA we usually declare environment variables with the `REACT_APP_` prefix, which is not what Vite expects. Vite expects environment variables to be declared with the `VITE_` prefix. To solve this, you need to add `vite-plugin-environment`.
```bash
npm install --save-dev vite-plugin-environment
```
And add it to your `vite.config.ts`:
```typescript
// vite.config.ts
import EnvironmentPlugin from "vite-plugin-environment";
export default defineConfig({
plugins: [
...
EnvironmentPlugin('all', { prefix: 'REACT_APP_' }),
],
...
});
```
However, it is recommended to do it the way Vite defines: using `import.meta.env` to access the variables and the `VITE_` prefix. But if your project is very large and the effort to make that change is high, then you can use the options above.
### .js or .ts files
You may run into some errors when migrating to Vite if your project has React components with a `.js` or `.ts` extension. Vite expects files containing JSX to use the `.jsx` or `.tsx` extension. Renaming the files is recommended to avoid problems, but you can work around this with the following plugin:
```typescript
// vite.config.ts
import { defineConfig, transformWithEsbuild } from 'vite';
export default defineConfig(() => ({
plugins: [
...
{
name: 'treat-js-files-as-jsx',
async transform(code, id) {
if (!id.match(/src\/.*\.js$/)) return null;
return transformWithEsbuild(code, id, {
loader: 'jsx',
jsx: 'automatic',
});
},
},
],
...
}));
```
## Conclusion
Migrating a CRA application to Vite may seem challenging at first, but the benefits in performance and development simplicity are worth the effort. By following the steps in this guide, you will be able to carry out the migration more smoothly and enjoy the advantages Vite offers.
I hope this guide was helpful for migrating your CRA project to Vite. Good luck!
| mrtinsvitor |
1,883,623 | Generics in Typescript and how to use them | Sometimes I wonder whether I love or hate Typescript. I see the benefits of implementing it in the... | 0 | 2024-06-10T20:13:24 | https://www.oh-no.ooo/articles/generics-in-typescript-and-how-to-use-them | typescript, javascript, webdev, learning | Sometimes I wonder whether I love or hate Typescript. I see the benefits of implementing it in the majority of frontend solutions, but I struggle with the syntax and the mumbo jumbo names that don't tell me anything. "Generics" would be one of those, as the name kindly suggests too, it's a name that could mean anything 🤷 When I find myself in the situation where I think I have grasped something but I haven't quite, the best solution is to try to write something about it to bring in clarity!
## So what are Generics?
Generics in TypeScript provide a way to create reusable data structures (arrays, stacks, queues, linked lists etc.), components and functions that can work with different data types.
Take for example:
```typescript
interface Dictionary<T> {
[key: string]: T;
}
```
With generics you can create a container type that can hold different types of values based on needs, e.g.:
```typescript
const myDictionary: Dictionary<number> = {
age: 30,
height: 180,
};
const myOtherDictionary: Dictionary<string> = {
name: 'John',
city: 'New York',
};
```
This is useful when you want to create a **flexible and type-safe container**, such as a dictionary or a set. You'll be guaranteed, whether the structure will accommodate strings or numbers, that it will follow the interface `Dictionary` and expect an object with keys whose value will reflect the data you use it with (=== if you have `Dictionary<string>` the values of the keys can only be `string`, if you have `Dictionary<number>` the values of the keys can only be `number`).
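Dictionaries aren't the only containers that benefit. The data structures mentioned at the start (stacks, queues, linked lists) follow the same pattern; here's a small sketch of a generic stack, assuming nothing beyond plain TypeScript:

```typescript
// A type-safe container: whatever type goes in is the type that comes out.
class Stack<T> {
  private items: T[] = [];

  push(item: T): void {
    this.items.push(item);
  }

  pop(): T | undefined {
    return this.items.pop();
  }

  peek(): T | undefined {
    return this.items[this.items.length - 1];
  }
}

const numberStack = new Stack<number>();
numberStack.push(1);
numberStack.push(2);

// numberStack.push('three'); // Error: 'string' is not assignable to 'number'

console.log(numberStack.pop()); // 2
```

The compiler now knows that everything popped off `numberStack` is a `number`, without a single type assertion at the call site.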
Let's make another practical example; let's say you want to have a function that allows you to print whatever type of array in a particular format, like an ordered list. In this case, **the type of the data passed is not relevant to you**, you only care that it's presented in an ordered list.
```typescript
function printArray<T>(array: T[]): string {
const items: string =
array.map(item => `<li>${item}</li>`).join('');
return `<ol>${items}</ol>`;
}
```
Now you can use this function to format the arrays before appending them to a div:
```typescript
function printArray<T>(array: T[]): string {
const items: string =
array.map(item => `<li>${item}</li>`).join('');
return `<ol>${items}</ol>`;
}
// An example function to append HTML content to a div element
function appendToDiv(
containerId: string,
content: string
): void {
const container = document.getElementById(containerId);
if (container) {
container.innerHTML = content;
}
}
// Our data types
const numbers: number[] = [1, 2, 3, 4, 5];
const strings: string[] = ['apple', 'banana', 'orange'];
// Our data types formatted through the printArray function
const numbersListHTML: string = printArray(numbers);
const stringsListHTML: string = printArray(strings);
// Append the lists to a hypothetical div container
// that shows the list
appendToDiv('orderedLists', numbersListHTML);
appendToDiv('orderedLists', stringsListHTML);
```
In the example, `printArray` doesn't care whether the array inputted is a numeric or a string one, but it makes sure to guarantee type safety based on the data type you pass to it.
Not bad, right?
<hr/>
## What is a somewhat more practical application of generics in the real-life coding world?
Generics can be useful when working with promises and asynchronous operations, allowing you to specify the type of the resolved value.
```typescript
// Function that returns a promise to fetch data from an API
function fetchData<T>(url: string): Promise<T> {
return fetch(url)
.then(response => {
if (!response.ok) {
throw new Error('Network response was not ok');
}
return response.json() as Promise<T>;
})
.catch(error => {
console.error('Error fetching data:', error);
throw error;
});
}
// Usage
interface UserData {
id: number;
name: string;
email: string;
}
const apiUrl = 'https://api.example.com/users/1';
fetchData<UserData>(apiUrl)
.then(data => {
// Here, 'data' will be of type UserData
console.log('User Data:', data);
})
.catch(error => {
console.error('Error fetching user data:', error);
});
```
While this example is not perfect, it offers us a great way to dynamically query various URLs, providing for each case the type we expect to receive: we query `https://api.example.com/users/1` and expect a response containing user information, so we provide the interface `UserData` to the function `fetchData<UserData>(apiUrl)`, and if the request doesn't fail, our response will already use the correct interface. Neat!
<hr />
## Challenges with Generics
Not all that glitters is gold, and the same can be said about generics. While they offer **great versatility**, you can imagine **they also add a lot of complexity to a codebase**, as they require developers to pay closer attention to the data flow and make sure things don't go wrong unexpectedly.
Together with the steeper learning curve for developers, there are also various limitations of type inference that might make generics limiting in practical applications. No idea about those limits yet, will let you know when I stumble on them :D
## Long story short
<mark>If you use them, make sure it makes sense why you're using them and that they actually bring benefits to what you're doing.</mark> **If you're working on an npm package that needs to accommodate different solutions** (say, a table with columns that can sort according to the data type provided, where you might not know ahead of time what values will be entered), **generics might very well be what you're looking for**! Bear in mind, however, that they will always create an extra layer of complexity that you and all your teammates need to account for!
<br />
## Sources and inspiration
- Various prompts from ChatGPT
- <a href="https://www.typescriptlang.org/docs/handbook/2/generics.html" target="_blank">TypeScript Generics Documentation</a>
- <a href="https://www.digitalocean.com/community/tutorials/how-to-use-generics-in-typescript" target="_blank">How to use Generics in TypeScript</a> by <a href="https://www.digitalocean.com/community/users/jonathancardoso" target="_blank">Jonathan Cardoso</a> via <a href="https://www.digitalocean.com/" target="_blank">Digital Ocean</a>
- Cover: <a href="https://www.freepik.com/free-psd/concept-mobile-application-cloud-services-male-character-sits-big-cloud-sign-with-phone-3d-illustration_26314276.htm" target="_blank">Concept mobile application and cloud services character</a> by <a href="https://www.freepik.com" target="_blank">Freepik</a>, <a href="https://www.freepik.com/free-vector/realistic-white-monochrome-background_16223330.htm" target="_blank">Realistic white monochrome background</a> by <a href="https://www.freepik.com" target="_blank">Freepik</a>
<hr />
Originally posted in <a href="https://oh-no.ooo">oh-no.ooo</a> (<a href="https://www.oh-no.ooo/articles/generics-in-typescript-and-how-to-use-them">Generics in Typescript and how to use them</a>), my personal website. | mahdava |
1,883,622 | 🤔PEPE: Is ‘buying the dip’ a good move to make? | 📉 PEPE Price Drop: Pepe (PEPE) has recently experienced a 32.6% drop, retracing to $0.00001131 after... | 0 | 2024-06-10T20:12:37 | https://dev.to/irmakork/pepe-is-buying-the-dip-a-good-move-to-make-133 |
📉 PEPE Price Drop: Pepe (PEPE) has recently experienced a 32.6% drop, retracing to $0.00001131 after reaching its all-time high of $0.00001724. The current price is $0.00001264, up 3.17% in the last 24 hours but down 16.74% over the past week.
🔄 Support Level and Buying Opportunity: PEPE has pulled back to a confluence point at the $0.00001131 support level, which aligns with an ascending trendline that has been retested several times in the past two months. This retracement presents a potential buying opportunity for traders looking to capitalize on the dip and position themselves for the next surge.
💹 Market Metrics:
Market Cap: $5.3 billion, up 3.17% in the last 24 hours.
Trading Volume: $860 million, down 35.8% in the last 24 hours.
📈 Increased User Activity:
Active Addresses: Surge in daily active addresses and transaction volumes over the past few weeks, with peaks surpassing 200,000 active addresses in a 24-hour period.
Transaction Volume: Data indicates a higher volume of transactions in profit, suggesting increased user activity and potential accumulation.
🔍 Technical Indicators:
Stochastic RSI: Oversold at press time, signaling a potential price reversal.
MACD Histogram: Crossed above the signal line, indicating a potential bullish crossover.
💡 Conclusion:
PEPE's current dip presents a potential buying opportunity due to:
Surge in active addresses and transaction volumes.
Strong support along the ascending trendline.
Oversold conditions on the Stochastic RSI.
However, if the support fails to hold, further dips in price may occur. Investors should weigh these factors before making any decisions.

| irmakork | |
1,883,621 | 🐳Ethereum: Analyzing whether $4.8K is in sight for ETH | 📉 Ethereum Price Drop: Ethereum (ETH) bears dominated last week, with the price dropping over 2%. At... | 0 | 2024-06-10T20:12:10 | https://dev.to/irmakork/ethereum-analyzing-whether-48k-is-in-sight-for-eth-2fbo |
📉 Ethereum Price Drop: Ethereum (ETH) bears dominated last week, with the price dropping over 2%. At the time of writing, ETH is trading at $3,687.02 with a market cap of $442 billion.
🚀 Potential Breakout: ETH is testing a key resistance level, and a breakout above this could spark a massive bull rally in the coming weeks or months. According to popular crypto analyst Milkybull, this might be the last chance to buy ETH under $3.7k in this cycle.
📊 Market Sentiment and On-Chain Data:
Buying Pressure: CryptoQuant data shows low net deposits of ETH on exchanges compared to the last seven days’ average, indicating high buying pressure.
Selling Sentiment: ETH’s Coinbase Premium is red, reflecting dominant selling sentiment among U.S. investors.
📈 Odds of a Bull Rally:
Market Bottom: Glassnode data suggests ETH is near its market bottom, per the Pi Cycle Top indicator, increasing the chances of a bullish momentum in the coming days. If bullish momentum occurs, ETH might reach $4.8k soon.
🔍 Technical Indicators:
MACD: The MACD indicates a bearish crossover, favoring sellers.
RSI: The Relative Strength Index (RSI) is bearish, remaining below its neutral mark.
CMF: The Chaikin Money Flow (CMF) has turned bullish, moving towards the neutral mark in recent days.
Conclusion:
Ethereum's current dip presents a potential buying opportunity if it breaks through the key resistance level, which could trigger a massive bull rally. While some indicators hint at a continued decline, other data points suggest an imminent price increase. Investors should weigh these factors and consider their risk tolerance before making a decision.

| irmakork | |
1,883,620 | 🔥Cardano faces resistance at $0.44-$0.49: Will ADA drop to $0.42? | 🔻 Cardano (ADA) Decline: Cardano has lost 6.11% of its gains in the past 30 days. The Global In/Out... | 0 | 2024-06-10T20:11:48 | https://dev.to/irmakork/cardano-faces-resistance-at-044-049-will-ada-drop-to-042-1431 |
🔻 Cardano (ADA) Decline: Cardano has lost 6.11% of its gains in the past 30 days. The Global In/Out of Money (GIOM) indicator from IntoTheBlock suggests ADA could face further declines.
📊 GIOM Indicator Insights:
Addresses in Loss: 402,720 addresses accumulated 6.39 billion ADA between $0.44 and $0.49. This cohort is currently at a loss.
Resistance Level: These holders might sell once ADA reaches these levels, potentially creating resistance and causing a retracement to $0.42 or even $0.40 if selling pressure intensifies.
📉 Current Price and Transactions: ADA is currently trading at $0.43. Despite the price decrease, large transactions on the network increased by 11.32% in the last 24 hours. However, this rise does not necessarily indicate buying pressure; it could signal token movements between wallets or sell-offs.
📉 Active Addresses: Active addresses on the Cardano network were around 35,000 on June 8 but have since decreased to 32,100. This decline indicates reduced participation in transactions, correlating with ADA's price drop.
📈 Correlation with Network Activity: Historically, ADA’s price has correlated with active addresses. For example, when active addresses reached 39,000 on June 7, ADA’s price jumped to $0.48. A further decline in active addresses could drive ADA’s price down.
🔍 Open Interest (OI): OI measures traders' open contracts linked to a cryptocurrency. For ADA, OI has decreased, indicating traders are closing positions. If this trend continues, ADA’s price might drop below $0.42.
👥 ADA Holders: The total number of ADA holders is 4.47 million, the same as in April, indicating that Cardano has struggled to attract new buyers.
Conclusion:
Cardano is facing potential declines with strong resistance levels at $0.44 to $0.49. A decrease in active addresses and open interest suggests bearish sentiment, potentially pushing ADA’s price below $0.42. Investors should closely monitor these metrics and market trends before making decisions.

| irmakork | |
1,883,619 | 🔥Crypto Prices Today June 10: BTC & Altcoins Regain Momentum, NOT & OM Top Gainers | Today’s crypto market saw a notable uptick after a weekend dip, with Bitcoin edging towards $70K and... | 0 | 2024-06-10T20:11:25 | https://dev.to/irmakork/crypto-prices-today-june-10-btc-altcoins-regain-momentum-not-om-top-gainers-51dh |
Today’s crypto market saw a notable uptick after a weekend dip, with Bitcoin edging towards $70K and altcoins like Ethereum, Solana, and XRP posting gains of 0.5-2%. Meme coins like Pepe (PEPE) and Shiba Inu (SHIB) followed suit, while others like Notcoin (NOT) and MANTRA (OM) emerged as top gainers.
📈 Market Overview:
Global crypto market cap rose by 0.90% to $2.55 trillion.
Total market volume dropped by 9.05% to $48.48 billion, indicating a potential slowdown in trading activity.
🔝 Top Crypto Prices Today:
Bitcoin (BTC): Gained 0.51% to $69,601.19, with a market cap of $1.37 trillion.
Ethereum (ETH): Rose 0.23% to $3,680.45, holding a market cap of $442.24 billion.
Solana (SOL): Increased by 0.90% to $159.66.
XRP: Jumped 1.31% to $0.4996.
💹 DOGE & SHIB Prices:
Dogecoin (DOGE): Slipped marginally by 0.02% to $0.1459.
Shiba Inu (SHIB): Rose 1.11% to $0.00002338.
Pepe Coin (PEPE): Surged 4.53% to $0.00001275.
🚀 Top Crypto Gainers Today:
MANTRA (OM): Rallied 18.99% to $1.08.
Oasis (ROSE): Surged 14.07% to $0.12.
Chiliz (CHZ): Up 7.37% to $0.1268.
Notcoin (NOT): Jumped 6.51% to $0.01913.
📉 Top Crypto Losers Today:
Lido DAO (LDO): Slipped 2.64% to $1.88.
Cronos (CRO): Dipped 2.10% to $0.1092.
Uniswap (UNI): Fell 1.90% to $9.77.
NEAR Protocol (NEAR): Down 1.84% to $6.51.
📉 Hourly Market Volatility: BTC, ETH, and other coins experienced price fluctuations, indicating increased market volatility throughout the day. | irmakork | |
1,883,618 | 🚀Notcoin Gearing Up for a Fresh Bullish Spell; Can NOT Price Surge by 50% to Form a New ATH? | Despite the overall sluggish market trend, Notcoin (NOT) has shown signs of strength, attempting to... | 0 | 2024-06-10T20:11:06 | https://dev.to/irmakork/notcoin-gearing-up-for-a-fresh-bullish-spell-can-not-price-surge-by-50-to-form-a-new-ath-3nd2 |
Despite the overall sluggish market trend, Notcoin (NOT) has shown signs of strength, attempting to stabilize and initiate an upswing. While trading activity remains subdued, hopes of a bullish trend towards a new all-time high (ATH) persist.
Currently, NOT price is trading within a falling wedge pattern, indicating the potential for a bullish breakout. However, technical indicators suggest a minor pullback may precede this breakout. The Ichimoku cloud remains bearish, while the stochastic RSI suggests a short-term pullback towards the lower support around $0.016.
The MACD remains bullish, hinting at a potential upswing in the near future. Additionally, as Bitcoin consolidates, the overall market, including Notcoin, may experience a small squeeze. However, whether this leads to a fresh bullish spell depends on the strength of the bulls.
To form a new ATH above $0.03, bulls need to show strength, particularly around the key resistance zone near $0.025. Failure to do so may result in continued consolidation within a range, with minor price movements.

| irmakork | |
1,883,594 | Be careful! Freewallet app may scam you! | Many people seek a free wallet to send and receive crypto assets without incurring any fees. However,... | 0 | 2024-06-10T18:38:29 | https://dev.to/feofhan/be-careful-freewallet-app-may-scam-you-363i | Many people seek a free wallet to send and receive crypto assets without incurring any fees. However, it’s important to understand that this is impossible—using any blockchain always involves some fees. But that’s not the main concern.
If you're looking for a free wallet, there's a high chance you'll come across the site Freewallet. Be warned: the app available on this site is a scam. Users who install this wallet find that the administration blocks their ability to withdraw assets. Avoid using the Freewallet app at all costs—it's a scam!

**Freewallet app is not free!**
The administration selectively blocks user accounts, demanding documents for KYC/AML verification. In the end, these documents are rejected, preventing the client from withdrawing their assets.
This is essentially a scam, though the administration’s actions appear legal, allowing them to steal assets without facing legal repercussions.
Users and businesses affected by KYC fraud on Freewallet can face significant issues, including unauthorized transactions, loss of funds, and compromised personal information.
You can find more information about the Freewallet app scam and the ways to get coins back here.
If you have experienced similar problems with Freewallet, we urge you to spread the word. Share your story with us at freewallet-report@tutanota.com.
Never use Freewallet app. It’s not a free wallet. It’s a total scam!
| feofhan | |
1,883,617 | SQLite in the Cloud: Scalable Solutions for Data Management | Introduction In recent years, the proliferation of cloud computing has revolutionized the way... | 0 | 2024-06-10T20:10:28 | https://dev.to/brianmk/sqlite-in-the-cloud-scalable-solutions-for-data-management-3plg | sql, database | Introduction
In recent years, the proliferation of cloud computing has revolutionized the way developers approach data management. Traditionally, SQLite has been synonymous with embedded databases in mobile and desktop applications. However, its lightweight nature and simplicity make it an attractive option for cloud-based solutions as well. In this article, we will explore the use of SQLite in the cloud and discuss scalable solutions for efficient data management.
## Understanding SQLite

SQLite is a self-contained, serverless, zero-configuration, transactional SQL database engine. It is widely known for its simplicity, reliability, and small footprint, making it a popular choice for embedded systems and standalone applications. Unlike client-server database management systems like MySQL or PostgreSQL, SQLite operates directly on the disk and does not require a separate server process.
## Challenges in Cloud Data Management

While SQLite excels in scenarios where simplicity and low resource consumption are paramount, its suitability for cloud-based applications has historically been questioned due to scalability concerns. Cloud environments typically handle large volumes of data and require robust scalability and concurrency features, areas where SQLite has traditionally been perceived as lacking.
## Scalable Solutions with SQLite in the Cloud

Despite its perceived limitations, SQLite can be effectively utilized in cloud environments with the implementation of certain strategies and best practices:
### Data Sharding

- One approach to scaling SQLite in the cloud is data sharding, where the dataset is horizontally partitioned across multiple SQLite databases.
- Each shard can be hosted on a separate node or instance within the cloud environment, allowing for parallel query processing and improved scalability.
- Developers can implement custom sharding logic based on specific criteria such as user IDs, geographical locations, or time intervals.
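The sharding approach can be sketched in a few lines of Python using the built-in `sqlite3` module (the modulo-based routing, the in-memory databases, and the `tasks` table are illustrative assumptions, not a prescribed design):

```python
import sqlite3

NUM_SHARDS = 4

# One in-memory database per shard; in production each shard would be
# a separate file or live on a separate node.
shards = {i: sqlite3.connect(":memory:") for i in range(NUM_SHARDS)}
for conn in shards.values():
    conn.execute("CREATE TABLE tasks (user_id INTEGER, title TEXT)")

def shard_for(user_id: int) -> sqlite3.Connection:
    # Simple modulo routing: every user consistently maps to one shard
    return shards[user_id % NUM_SHARDS]

def save_task(user_id: int, title: str) -> None:
    conn = shard_for(user_id)
    conn.execute("INSERT INTO tasks VALUES (?, ?)", (user_id, title))
    conn.commit()

def tasks_for(user_id: int) -> list:
    conn = shard_for(user_id)
    rows = conn.execute(
        "SELECT title FROM tasks WHERE user_id = ?", (user_id,)
    ).fetchall()
    return [r[0] for r in rows]

save_task(7, "write report")
save_task(3, "review PR")
print(tasks_for(7))  # ['write report']
```

Because the routing function is deterministic, reads and writes for a given user always land on the same shard, while different users' workloads spread across databases.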
### Replication and Load Balancing

- Replication involves maintaining multiple copies of the database across different nodes to ensure high availability and fault tolerance.
- Load balancers distribute incoming requests across these replicated instances, preventing any single node from becoming a bottleneck.
- SQLite's support for read-only replicas makes it well-suited for scenarios where read-heavy workloads need to be distributed across multiple nodes.
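A naive version of this idea can be sketched in Python (purely illustrative: real-world SQLite replication is typically file-level, for example via a tool such as Litestream, and the round-robin balancer here is deliberately minimal):

```python
import itertools
import sqlite3

# Two "read replicas" holding the same data; here we just build two
# identical in-memory databases instead of copying files between nodes.
def make_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('ada'), ('grace')")
    conn.commit()
    return conn

replicas = [make_db(), make_db()]
rr = itertools.cycle(replicas)  # naive round-robin "load balancer"

def read_users() -> list:
    conn = next(rr)  # each read goes to the next replica in turn
    rows = conn.execute("SELECT name FROM users ORDER BY name").fetchall()
    return [r[0] for r in rows]

print(read_users())  # served by replica 0
print(read_users())  # served by replica 1
```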
### Caching and In-Memory Operations

- Leveraging in-memory databases or caching mechanisms can significantly improve the performance of SQLite in cloud environments.
- Frequently accessed data can be cached in memory using tools like Redis or Memcached, reducing disk I/O overhead and speeding up query execution.
- Developers should carefully identify hotspots in their application and employ caching strategies accordingly to maximize performance gains.
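As a minimal sketch of a read-through cache (using Python's standard `functools.lru_cache` in place of an external cache such as Redis, an illustrative substitution):

```python
import sqlite3
from functools import lru_cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [(1, "keyboard"), (2, "mouse")])
conn.commit()

@lru_cache(maxsize=128)
def product_name(product_id: int) -> str:
    # First call for an id hits SQLite; repeated calls are served from memory
    row = conn.execute("SELECT name FROM products WHERE id = ?",
                       (product_id,)).fetchone()
    return row[0] if row else ""

print(product_name(1))  # keyboard (read from SQLite)
print(product_name(1))  # keyboard (served from the cache)
```

In a real service the cache key would be the query plus its parameters, and entries would be invalidated when the underlying rows change.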
### Asynchronous Task Queues

- Asynchronous task queues such as Celery or RabbitMQ can be used to offload long-running database operations from the main application thread.
- By decoupling database operations from request handling, developers can improve responsiveness and scalability without sacrificing performance.
- Tasks can be processed in the background, allowing the application to continue serving requests uninterrupted.
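The idea can be sketched with Python's standard library (a `queue.Queue` plus a worker thread standing in for a full task queue like Celery; the `events` table and messages are made up for illustration):

```python
import queue
import sqlite3
import threading

# Write operations are pushed onto a queue and executed by a background
# worker, so the request-handling side never blocks on the database.
write_queue = queue.Queue()
processed = []  # records what the worker has written, for observability

def worker() -> None:
    conn = sqlite3.connect(":memory:")  # worker owns its own connection
    conn.execute("CREATE TABLE events (note TEXT)")
    while True:
        note = write_queue.get()
        if note is None:  # sentinel tells the worker to stop
            break
        conn.execute("INSERT INTO events VALUES (?)", (note,))
        conn.commit()
        processed.append(note)

t = threading.Thread(target=worker)
t.start()

# The "web request" side just enqueues and returns immediately
write_queue.put("user signed up")
write_queue.put("project created")

write_queue.put(None)  # shut down once the queued work drains
t.join()
print(processed)  # ['user signed up', 'project created']
```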
## Case Study: SQLite in a SaaS Application

To illustrate the practical implementation of SQLite in a cloud-based environment, let's consider a hypothetical Software-as-a-Service (SaaS) application that utilizes SQLite for data storage:

### Scenario

- Our SaaS application provides project management services to clients, allowing them to create, organize, and collaborate on various projects.
- Each project consists of multiple tasks, comments, and attachments, all of which need to be stored and accessed efficiently.

### Architecture

- The application is deployed on a cloud platform such as Amazon Web Services (AWS) or Microsoft Azure, using a microservices architecture.
- SQLite databases are sharded based on the tenant ID, with each tenant having its dedicated database instance.
- Replication is implemented to ensure high availability and fault tolerance, with read-only replicas serving read-heavy queries.

### Benefits

- SQLite's lightweight nature and ease of deployment make it a cost-effective choice for startups and small businesses.
- The application can scale horizontally by adding more shards or replicas as the user base grows, without significant architectural changes.
- Despite handling thousands of concurrent users, the application maintains low latency and high throughput, thanks to efficient data management strategies.
## Conclusion

SQLite, once considered primarily for embedded systems and standalone applications, has evolved to address the scalability requirements of modern cloud-based environments. By employing techniques such as data sharding, replication, caching, and asynchronous task processing, developers can leverage SQLite to build scalable and efficient cloud applications. As cloud computing continues to dominate the software landscape, SQLite remains a compelling choice for developers seeking simplicity without compromising on performance or scalability.
1,883,564 | My Eight-Month Journey as a Self-Taught Developer | After almost two years working as an auditor, I knew I needed a change. I wasn't satisfied with my... | 0 | 2024-06-10T20:03:46 | https://dev.to/migueldonado/my-eight-month-journey-as-a-self-taught-developer-5ah9 | beginners, career, learning, productivity | After almost two years working as an auditor, I knew I **needed a change**. I wasn't satisfied with my job and felt I had a lot more potential that I could tap into. That feeling wasn't new.
Since the last year of my bachelor's (BBA), I had the idea of studying for a Master's in Big Data. I thought that a couple of years working would allow me to save money for that goal. After two years working, I realized that I had the money, but I didn't have the programming skills to be eligible for any programs. During this time, I had been doing a few programming courses, but nothing really serious.
That's when I realized that if I wanted things to be different, I had to do different things. As Jim Rohn said, *"When someone asks me how the next two years will be, I answer that they will be like the last two years unless they change something."*
I didn't leave the job; I decided I would work full-time for a living while **working part-time toward my goal.**
And here I am, after eight months working part-time toward my goal of becoming a developer. I've learned a lot during this time and have made many mistakes on my self-taught journey. I'll surely continue making mistakes because that's part of the learning process.
I have two months left in my audit job before leaving it to start a Master's in Big Data in September 2024. Looking back, I feel proud of my commitment to becoming part of the developer community. It has allowed me to discover a field that has become one of my passions and a main driver of my curiosity.
The purpose of this blog is to share with the community the learning process of a newbie developer and to serve as a journal where I write down my feelings and thoughts along this journey. Since I’ve drawn inspiration and help from others' stories, I hope mine can be useful for someone. | migueldonado |
1,883,616 | Top Mold Removal & Remediation Services in Oakville | Mold can be a silent but dangerous intruder in your home, posing significant health risks and... | 0 | 2024-06-10T19:59:29 | https://dev.to/johnson321/top-mold-removal-remediation-services-in-oakville-4947 | webdev |

Mold can be a silent but dangerous intruder in your home, posing significant health risks and potentially causing extensive property damage. For residents of Oakville, finding reliable and effective mold removal and remediation services is crucial. This article explores the top mold removal and remediation services in Oakville, highlighting the importance of professional intervention to ensure a safe and healthy living environment.
## Understanding the Importance of Mold Removal
Mold thrives in damp, humid environments, often hidden in places like basements, bathrooms, and behind walls. While some molds are harmless, others can cause allergic reactions, respiratory issues, and other health problems. Black mold, in particular, is known for its toxic effects. Therefore, prompt and thorough mold removal is essential to safeguard your family's health and preserve your property's integrity.
## Top Mold Removal & Remediation Services in Oakville

### GreenTech Mold Removal
GreenTech Mold Removal is renowned for its eco-friendly approach to mold remediation. Using non-toxic, biodegradable products, they ensure that mold is eradicated without compromising the environment or your health. Their comprehensive services include mold inspection, testing, removal, and prevention strategies.
### Pure Maintenance
Pure Maintenance utilizes a patented dry fog technology that not only eliminates mold but also prevents its recurrence. This innovative method is highly effective in reaching mold hidden in hard-to-access areas. Their team of certified professionals provides fast, reliable service, ensuring your home is mold-free in no time.
### PuroClean Restoration Services
PuroClean is a trusted name in the restoration industry, offering expert mold remediation services. Their technicians are IICRC certified, and they use advanced equipment to detect and remove mold thoroughly. PuroClean’s commitment to customer satisfaction and quality service makes them a top choice in Oakville.
### MoldTech
MoldTech offers a holistic approach to mold remediation, addressing not only the visible mold but also the underlying moisture issues that cause mold growth. Their services include detailed mold inspections, air quality testing, and customized remediation plans. With years of experience, MoldTech ensures a thorough and lasting solution to mold problems.
### Paul Davis Restoration
Paul Davis Restoration provides comprehensive mold remediation services, from initial inspection to final cleanup. Their certified technicians use state-of-the-art equipment and industry-leading techniques to remove mold and restore affected areas. Paul Davis is known for its rapid response and effective solutions, making them a reliable choice for Oakville residents.
## Choosing the Right Mold Removal Service
When selecting a mold removal and remediation service, consider the following factors:
- **Certification and Experience:** Ensure the company has certified and experienced technicians who follow industry standards.
- **Inspection and Testing:** Look for services that offer thorough inspections and mold testing to identify the extent of the problem.
- **Remediation Techniques:** Choose a company that uses effective, safe, and environmentally friendly remediation methods.
- **Customer Reviews:** Check customer reviews and testimonials to gauge the company’s reputation and reliability.
- **Guarantees and Warranties:** Opt for services that provide guarantees or warranties on their work, ensuring peace of mind.
## Conclusion
[Oakville Mold Removal Services](https://servekings.ca/mold-removal/oakville/) are critical to maintaining a healthy home environment. Oakville residents have access to several top-notch services that offer professional, effective solutions to mold problems. By choosing the right service provider, you can protect your home and health from the dangers of mold. Don't wait until mold becomes a severe issue—take action today and ensure a mold-free living space for you and your family.
| johnson321 |
1,883,615 | I made a application, which shows you a film🍿 based on your mood. | Hello everyone, I made a small application, the idea of which is to show a selection of films... | 0 | 2024-06-10T19:51:40 | https://dev.to/artemowandrei/i-made-a-application-which-shows-you-a-film-based-on-your-mood-332 | webdev, webapp, javascript, development | Hello everyone,
I made a small application, the idea of which is to show a selection of films based on the user's mood.
try it here: https://moodwatch.vercel.app
I need feedback: what would you like to see in this app, what bugs need to be fixed, or what features need to be added?
thanks | artemowandrei |
1,883,614 | The Evolution of API Development Styles: The GraphQL Architecture | In the changing landscape of modern application development, the selection of API architecture style... | 0 | 2024-06-10T19:48:12 | https://dev.to/paltadash/the-evolution-of-api-development-styles-the-graphql-architecture-3jjl | In the changing landscape of modern application development, the selection of API architecture style has become a critical factor in ensuring the efficiency, scalability and adaptability of solutions. While traditional approaches have been widely used, an increasingly popular alternative is the use of GraphQL.
## What is GraphQL?

1. **Type system:** GraphQL has a type system that defines how data is structured in the API, making it easy to explore and understand the API.
2. **Flexible queries:** Clients can perform complex, nested queries to get the data they need, rather than having to make multiple requests to different endpoints.
3. **Execution environment:** GraphQL has a server-side execution environment that interprets client queries, retrieves the necessary data and returns a response in JSON format.
4. **API evolution:** GraphQL facilitates API evolution, since clients can continue to use the same API even if the server adds or modifies fields, as long as it does not remove fields that clients are already using.
## Benefits of GraphQL API Architecture

- **Efficiency in data usage:** GraphQL allows clients to get only the data they need, which reduces network traffic and improves performance.
- **Flexibility and control:** Clients can request exactly the data they need, giving them more control over the information they receive.
- **API evolution:** GraphQL makes it easy to evolve APIs without breaking compatibility with existing clients.
- **Integrated documentation:** GraphQL's type system provides integrated API documentation, making it easy to explore and use.
- **Reduced complexity:** GraphQL simplifies the architecture by eliminating the need for multiple endpoints and response formats.
## GraphQL Use Cases

- **Web and mobile applications:** GraphQL enables clients to get only the data they need, improving performance and user experience.
- **Microservices:** GraphQL can act as an integration layer between multiple microservices, simplifying the interaction between them.
- **IoT applications:** GraphQL can be useful for managing communication and data flow between IoT devices and the core application.
- **Large-scale data applications:** GraphQL facilitates querying and manipulating large data sets, without the need for multiple requests.
## Example

### Schema Definition
Suppose we have a simple API to manage a list of books. We will start by defining the GraphQL schema:
```
type Book {
id: ID!
title: String!
author: String!
published: Int
}
type Query {
books: [Book]
book(id: ID!): Book
}
type Mutation {
createBook(title: String!, author: String!, published: Int): Book
updateBook(id: ID!, title: String, author: String, published: Int): Book
deleteBook(id: ID!): Book
}
```
In this schema, we have:

- **Type `Book`:** Represents a book, with fields such as `id`, `title`, `author` and `published`.
- **Type `Query`:** Defines query operations, such as getting a list of books or a book by its ID (`book`).
- **Type `Mutation`:** Defines mutation operations, such as creating a new book (`createBook`), updating an existing book (`updateBook`) and deleting a book (`deleteBook`).
### Query Example
Suppose we want to get a list of books with their titles and authors:
```
query {
books {
id
title
author
}
}
```
The response would look something similar to this:
```
{
"data": {
"books": [
{
"id": "1",
"title": "The Great Gatsby",
"author": "F. Scott Fitzgerald"
},
{
"id": "2",
"title": "To Kill a Mockingbird",
"author": "Harper Lee"
},
{
"id": "3",
"title": "1984",
"author": "George Orwell"
}
]
}
}
```
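The `Mutation` operations defined in the schema can be exercised the same way; for example, a sketch of creating a book (the field values here are illustrative):

```
mutation {
  createBook(title: "Brave New World", author: "Aldous Huxley", published: 1932) {
    id
    title
  }
}
```

If the server resolves it successfully, the response would contain the new book's `id` and `title` under `data.createBook`, mirroring the shape of the selection set.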
| paltadash | |
1,883,085 | Nucleoid: Reasoning Engine for Neuro-Symbolic AI | Nucleoid is a reasoning engine for Neuro-Symbolic AI, implementing symbolic AI through declarative (logic-based) programming. Neuro-symbolic AI combines neural networks (which excel in pattern recognition and data-driven tasks) with symbolic AI (which focuses on reasoning and rule-based problem solving) to create systems that can both interpret complex data and understand abstract concepts. | 0 | 2024-06-10T19:43:55 | https://github.com/NucleoidAI/Nucleoid | ai, showdev, node, javascript | ---
description: Nucleoid is a reasoning engine for Neuro-Symbolic AI, implementing symbolic AI through declarative (logic-based) programming. Neuro-symbolic AI combines neural networks (which excel in pattern recognition and data-driven tasks) with symbolic AI (which focuses on reasoning and rule-based problem solving) to create systems that can both interpret complex data and understand abstract concepts.
---
Nucleoid is a Declarative (Logic) Runtime Environment, which is a type of Symbolic AI used as the reasoning engine in Neuro-Symbolic AI. The Nucleoid runtime tracks statements given in JavaScript syntax and creates relationships between variables, objects, functions, etc. in a logic graph. In brief, the runtime translates your business logic into a fully working application by managing JavaScript state as well as storing it in the built-in data store, so that your application doesn't require an external database or anything else.

### Neural Networks: The Learning Component
Neural networks in Neuro-Symbolic AI are adept at learning patterns, relationships, and features from large datasets. These networks excel in tasks that involve classification, prediction, and pattern recognition, making them invaluable for processing unstructured data, such as images, text, and audio. Neural networks, through their learning capabilities, can generalize from examples to understand complex data structures and nuances in the data.
### Symbolic AI: The Reasoning Component
The symbolic component of Neuro-Symbolic AI focuses on logic, rules, and symbolic representations of knowledge. Unlike neural networks that learn from data, symbolic AI uses predefined rules and knowledge bases to perform reasoning, make inferences, and understand relationships between entities. This aspect of AI is transparent, interpretable, and capable of explaining its decisions and reasoning processes in a way that humans can understand.
<br/>

#### Declarative Logic in Symbolic Reasoning
Declarative logic is a subset of declarative programming, a style of building programs that expresses the logic of a computation without describing its control flow. In declarative logic, you state the facts and rules that define the problem domain. The runtime environment or the system itself figures out how to satisfy those conditions or how to apply those rules to reach a conclusion. This contrasts with imperative programming, where the developer writes code that describes the exact steps to achieve a goal.
Symbolic reasoning refers to the process of using symbols to represent problems and applying logical rules to manipulate these symbols and derive conclusions or solutions. In AI and computer science, it involves using symbolic representations for entities and actions, enabling the system to perform logical inferences, decision making, and problem-solving based on the rules and knowledge encoded in the symbols.
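To make the symbolic side concrete, here is a tiny forward-chaining sketch in plain JavaScript (an illustration only, not Nucleoid's API): facts are symbols, a rule maps matching facts to new facts, and the engine applies rules until nothing new can be derived:

```javascript
// Minimal forward-chaining inference: derive new facts from rules until fixpoint.
const facts = new Set(["human(socrates)"]);
const rules = [
  // "All humans are mortal"
  { if: (f) => f.startsWith("human("), then: (f) => f.replace("human", "mortal") },
];

function infer(facts, rules) {
  let changed = true;
  while (changed) {
    changed = false;
    for (const rule of rules) {
      for (const fact of [...facts]) {
        if (rule.if(fact)) {
          const derived = rule.then(fact);
          if (!facts.has(derived)) {
            facts.add(derived); // new knowledge: keep iterating
            changed = true;
          }
        }
      }
    }
  }
  return facts;
}

console.log(infer(facts, rules).has("mortal(socrates)")); // true
```

Running this derives `mortal(socrates)` from `human(socrates)`, mirroring the classic syllogism used in the Hello World below.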
By integrating Nucleoid into Neuro-Symbolic AI, the system benefits from enhanced interpretability and reliability. The declarative logic and rules defined in Nucleoid provide clear explanations for the AI's decisions, making it easier for users to understand and trust the system's outputs. Furthermore, the explicit reasoning capabilities help ensure that decisions are made based on logical principles, adding a layer of reliability and consistency to the AI's behavior.

## Hello World :zap:
```javascript
User:
> 'Define human with name'
Assistant:
> 'The user requested to define a human with a name using a declarative class'
---
// Ubiquitous Code:
'use declarative';
class Human {
name: string;
constructor(name: string) {
this.name = name;
}
}
```
```javascript
User:
> 'All humans are mortal'
Assistant:
> 'All humans were declared as mortal.'
---
// Ubiquitous Code:
'use declarative';
$Human.mortal = true;
```
```javascript
User:
> 'Create a human with name "Socrates"'
Assistant:
> 'A human named 'Socrates' was successfully created.'
---
// Ubiquitous Code:
'use imperative';
var human1 = new Human('Socrates');
// Result: {"id":"human1","name":"Socrates","mortal":true}
```
```javascript
User:
> 'Is human with name 'Socrates' mortal?'
Assistant:
> 'The human named 'Socrates' was determined to be mortal.'
---
// Ubiquitous Code:
'use imperative';
var human1 = new Human('Socrates');
human1.mortal;
// Result: true
```
> :bulb: **This executes the business logic and stores the state in the data store. `const app = nucleoid()` manages internal data store without requiring an external database.**
Learn more at [nucleoid.com/docs/get-started](https://nucleoid.com/docs/get-started)
### Under the hood: Declarative (Logic) Runtime Environment
Nucleoid is an implementation of symbolic AI for declarative (logic) programming at runtime. As mentioned, the declarative runtime environment manages JavaScript state and stores each transaction in the built-in data store by declaratively rerendering JavaScript statements and building a knowledge graph (base) as well as an execution plan.
<p align="center">
<img src="https://cdn.nucleoid.com/media/taxonomy.png" width="450" alt="Nucleoid's Taxonomy"/>
</p>
The declarative runtime isolates the behavior definition of a program from its technical instructions and executes declarative statements, which represent logical intention without carrying any technical detail. In this paradigm, there is no segregation of what is or isn't data; instead, the focus is on how a piece of data (a declarative statement) relates to the others, so that any type of data, including business rules, can be added without requiring additional actions such as compiling, configuring, or restarting, as a result of this plasticity. This approach also opens up the possibility of storing data in the same box as the programming runtime.
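As a toy illustration of this re-rendering idea (plain JavaScript, not Nucleoid's actual implementation), declared statements can be kept and re-evaluated whenever a variable they depend on changes:

```javascript
// Toy sketch of declarative re-rendering: statements are retained, and when a
// dependency changes, every statement that depends on it is re-run.
const state = { x: 1 };
const statements = []; // each entry: { deps: [...variable names], run: fn }

function declare(deps, run) {
  statements.push({ deps, run });
  run(state); // evaluate once when declared
}

function set(name, value) {
  state[name] = value;
  // Re-render every statement that depends on the changed variable.
  for (const s of statements) {
    if (s.deps.includes(name)) s.run(state);
  }
}

declare(["x"], (s) => { s.y = s.x + 1; }); // declare y in terms of x
set("x", 10);
console.log(state.y); // 11: y followed x without an explicit recompute
```

A real runtime also persists each statement and builds a dependency graph, so the order of re-evaluation is derived rather than hard-coded as it is here.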
<div align="center">
<table>
<tr>
<th>
<img src="https://cdn.nucleoid.com/media/diagram1.png" width="225" alt="Logical Diagram 1"/>
</th>
<th>
<img src="https://cdn.nucleoid.com/media/diagram2.png" width="275" alt="Logical Diagram 2"/>
</th>
</tr>
</table>
</div>
In short, the main objective of the project is to manage both data and logic under the same runtime. The declarative programming paradigm used by Nucleoid allows developers to focus on the business logic of the application while the runtime manages the technical details. This allows for faster development and reduces the amount of code that needs to be written. Additionally, the sharding feature can help distribute the load across multiple instances, which can further improve the performance of the system.
## Benchmark
This is a comparison of our sample order app in Nucleoid IDE against MySQL and Postgres, using the Express.js and Sequelize libraries.
<img src="https://cdn.nucleoid.com/media/benchmark.png" alt="Benchmark" width="550"/>
> Performance benchmark happened in t2.micro of AWS EC2 instance and both databases had dedicated servers with <u>no indexes and default configurations</u>.
This does not necessarily mean the Nucleoid runtime is faster than MySQL or Postgres; rather, databases require constant maintenance by DBA teams (indexing, caching, purging, etc.), whereas Nucleoid tries to solve this problem by managing logic and data internally. As seen in the chart, for applications of average complexity, Nucleoid's performance is close to linear because of its on-chain data store and in-memory computing model, as well as its limited IO.
---
<center>
<b>⭐️ Star us on GitHub for the support</b>
</center>
Thanks to declarative logic programming, we have a brand-new approach to Neuro-Symbolic AI. As we continue to explore the potential of this AI architecture, we welcome all kinds of contributions!
<p align="center">
<img src="https://cdn.nucleoid.com/media/nobel.png" alt="Nobel" />
</p>
<center>
Join us at
<br/>
<a href="https://github.com/NucleoidAI/Nucleoid">https://github.com/NucleoidAI/Nucleoid</a>
</center>
---
{% embed https://github.com/NucleoidAI/Nucleoid %}
| canmingir |
1,883,282 | CodeBehind 2.6 Released | A new version of the CodeBehind framework has been released by Elanat. This is version 2.6 of this... | 0 | 2024-06-10T19:27:09 | https://dev.to/elanatframework/codebehind-26-released-1dne | news, dotnet, backend, github | A new version of the [CodeBehind framework](https://github.com/elanatframework/Code_behind) has been released by [Elanat](https://elanat.net). This is version 2.6 of this back-end framework. In version 2.6, the main focus is on dependency injection.
## Controller class constructor and Model class constructor
Before we explain the details about the constructor of the Controller class and the Model class, we must say that the constructor of the Controller class and the Model class has nothing to do with the CodeBehind constructor.
Please read the following article about the CodeBehind constructor:
[CodeBehind Constructor](https://github.com/elanatframework/Code_behind/blob/elanat_framework/doc/constructor_method.md)
As you know, in the modern MVC architecture in the CodeBehind framework, there is no need to configure the Controller in the Route, and the requests reach the View first. Now, in the CodeBehind framework, a possibility has been added to be able to set values of Controller classes and Model classes in the constructor methods of the View.
Example
View (Default.aspx)
```html
@page
@controller MyController()
@controllerconstructor(context.Request.Form)
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>Controller class constructor</title>
</head>
<body>
<form method="post" action="/">
<label for="txt_TextBox">TextBox</label>
<input name="txt_TextBox" id="txt_TextBox" type="text" />
<br>
<input name="btn_Button" type="submit" value="Click to send data" />
</form>
</body>
</html>
```
The HTML code above is a CodeBehind Framework View page that has a button that submits a textbox. The `@controllerconstructor` variable passes the `context.Request.Form` value to the constructor of the Controller class.
Controller
```csharp
using CodeBehind;
using Microsoft.AspNetCore.Http; // for IFormCollection
public partial class MyController : CodeBehindController
{
private readonly IFormCollection _Form;
public MyController(IFormCollection Form = null)
{
_Form = Form;
}
public void CodeBehindConstructor()
{
if (!string.IsNullOrEmpty(_Form["btn_Button"]))
btn_Button_Click();
}
private void btn_Button_Click()
{
string TextBoxValue = _Form["txt_TextBox"];
Write(TextBoxValue);
IgnoreViewAndModel = true;
}
}
```
The `MyController` class has a constructor that accepts an `IFormCollection` object that is used to access form data. When the Controller is called, the `_Form` field, which is private and read-only, is initialized in the constructor method of `MyController`.
The `_Form` field is initialized with the `context.Request.Form` input argument from the View page. This is a dependency injection.
The `CodeBehindConstructor` method is called when the page loads and checks if the button has been clicked. If the button is clicked, it calls the `btn_Button_Click` method, which reads the value of the text box and writes it to the page. The `IgnoreViewAndModel` property is set to true, which clears the contents of the View page and displays only the textbox string in the output.
> Note: Considering that in the CodeBehind framework it is possible to configure the Controller in the Route and also to call a controller, the controller class must have a constructor without arguments. For this reason, we set the IFormCollection parameter in the constructor method equal to null.
## Define constructor method class by Attribute in View
You can call constructor method of Controller class and Model class on View pages.
**Razor syntax**
To call the constructor method of the Controller class in View, the string `@controllerconstructor` must be written and then the input arguments should be placed between parentheses.
Example:
`@controllerconstructor(26, "my text", 'c')`
To call the constructor method of the Model class in View, the string `@modelconstructor` must be written and then the input arguments should be placed between parentheses.
Example:
`@modelconstructor(26, "my text", 'c')`
**Standard syntax**
To call the constructor method of the Controller class in View, the `controllerconstructor` string must be written then the equals character must be added, and the input arguments should be placed between parentheses and must be placed between the double quotation marks (`"`). If the input arguments contain quotation marks (`"`), you must use the code `"` instead of the quotation marks.
Example:
`<%@ page ... controllerconstructor="(26, "my text", 'c')" ... %>`
To call the constructor method of the Model class in View, the `modelconstructor` string must be written then the equals character must be added, and the input arguments should be placed between parentheses and must be placed between the double quotation marks (`"`). If the input arguments contain quotation marks (`"`), you must use the code `"` instead of the quotation marks.
Example:
`<%@ page ... modelconstructor="(26, "my text", 'c')" ... %>`
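As a sketch of the Model side (the class and property names here are invented for illustration, assuming the same `CodeBehindModel` base class pattern used elsewhere in the framework), a Model receiving a constructor argument from `@modelconstructor` could look like this:

```csharp
using CodeBehind;

public partial class MyModel : CodeBehindModel
{
    public string Title;

    // Arguments passed from the View via @modelconstructor(...) arrive here.
    public MyModel(string Title = "")
    {
        this.Title = Title;
    }
}
```

As with the Controller, giving each parameter a default value keeps an argument-free constructor available for cases where the Model is configured in the Route instead.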
## Improved detection of View page attributes in standard syntax
In this version, we have improved the processing of standard syntax attributes. From now on, you don't need to separate the attributes with the space character; from this version onwards, you can use a tab character or a line break instead of a space.
Example:
```html
<%@ Page
Controller="MyController"
Model="MyModel"
%>
```
### Related links
CodeBehind on GitHub:
https://github.com/elanatframework/Code_behind
CodeBehind in NuGet:
https://www.nuget.org/packages/CodeBehind/
CodeBehind page:
https://elanat.net/page_content/code_behind | elanatframework |
1,883,610 | Demystifying Observability 2.0 | Our systems have gotten complex. Like really complex. Organizations have mostly shifted from... | 0 | 2024-06-10T19:21:13 | https://open.substack.com/pub/geekingoutpodcast/p/demystifying-observability-20 | observability, opentelemetry, cloudnative, apm |

Our systems have gotten complex. Like _really_ complex. Organizations have mostly shifted from monoliths to microservices. They’ve embraced the Cloud, and with it, Kubernetes (PS: happy 10th b-day to Kubernetes!) and all sorts of other cloud native tools that help run the things that we’ve grown accustomed to having in our tech-dependent lives: access to government services, social media, airline booking, shopping, streaming services, and so on.
As our systems get more and more complex, engineers need a way to understand them when things go 💩, so that services can be restored in a timely manner.
Enter [Observability](https://adri-v.medium.com/list/unpacking-observability-be1835c6dd23), which helps with just that. Observability has been around for a while now, and it’s been really exciting to see so many organizations embarking on their respective observability journeys.
Now, if you’ve been following the interwebs, you may have heard some rumblings about **_Observability 2.0_**. Cool. But what is it _really_, and how does it differ from Observability 1.0? Well, you’ve come to the right place. Sit back, relax, and let me take you on a journey.
## **Defining Observability**
Before we get into Observability 1.0 vs 2.0, let’s start with a definition of Observability, also known as o11y to us folks who sometimes get lazy and don’t want to write out the whole word. 🙃 (For the uninitiated: o11y == the 11 letters between “o” and “y” in “Observability”.)
The “classic” definition of Observability comes from [control theory](https://en.wikipedia.org/wiki/Observability):
> Observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs.
>
> _– Rudolf E. Kálmán_
[This definition](https://hazelweakly.me/blog/redefining-observability/#observability:-control-theory) was popularized by [Charity Majors](https://x.com/mipsytipsy).
I love this definition, and I’ve used it for many years, including in my very [first blog post on Observability](https://storiesfromtheherd.com/unpacking-observability-a-beginners-guide-833258a0591f), and more recently, in my [O’Reilly Observability video course](https://learning.oreilly.com/videos/fundamentals-of-observability/0636920926597/).
That being said, there’s a refinement to the definition of Observability that I’ve been embracing of late, which was coined by my good friend, [Hazel Weakly](https://www.linkedin.com/in/hazelweakly/), who has an [amazing blog post on redefining Observability](https://hazelweakly.me/blog/redefining-observability). (Hazel is also incredibly smart and super astute and you should totally [follow her on LinkedIn](https://www.linkedin.com/in/hazelweakly/)):
> Observability is the process through which one develops the ability to ask meaningful questions, get useful answers, and act effectively on what you learn.
>
> _– Hazel Weakly_
It’s so simple, and so elegant, and I love it. Also, it applies to both Observability 1.0 and 2.0, and does not hold us back from continuing to refine Observability.
Okay, now that we’ve gotten the basics out of the way, let’s tackle this 1.0 vs. 2.0 business.
I set out to write this piece because I’ve found myself talking a lot about Observability 2.0 recently, including last week on [Whitney Lee’s Enlightning show](https://www.youtube.com/live/WiFoSr54kjM?si=5edzpyd33RKcp2te), and in an [upcoming episode of The Cloud Gambit](https://www.linkedin.com/posts/adrianavillela_observability-otel-bouldering-activity-7204876506731335680-ACL2). After all this talking about it, I wanted a place to jot down my thoughts, and to also share them with y’all. I honestly thought it would be a straight regurgitation of what I’d already said. But then I asked Hazel to look over this piece, and her feedback encouraged me to think about this further, thereby refining some of my understanding and thoughts around this. Which is awesome, because it’s so fitting, given that I’m talking about the evolution of our understanding of Observability!

### **Observability 1.0**
When Observability burst onto the scene, it was still a very APM-dominated world. Many APM vendors, sensing that Observability was becoming an Actual Thing, pivoted to Observability. This pivot, however, was mostly in name only, in much the same way that many organizations pivoted from Ops to DevOps (or SRE or Platform Engineering) in name only. New name, but business as usual. And perhaps we can’t blame them for that. These are paradigm shifts and paradigm shifts are often hard to swallow. You’ve gotta start somewhere, and maybe a name change is as good a place as any.
So, time for the big reveal…**_Observability 1.0 is APM._** But more specifically, what is Observability 1.0? Observability 1.0 is focused on:
#### **1- How you operate your code**
This means that it’s more of an Ops concern, and not so much of an Everyone concern.
#### **2- Known unknowns**
Also known as "predictable shit happens". We know the usual things that go wrong with our systems, and [we put dashboards in place to represent all of the things that we know can go wrong with our systems](https://faun.pub/observability-mythbusters-observability-anti-patterns-2b3062405b54) (and for which we know the fixes), so that we can keep an eye on things if they go sideways.
#### **3- Multiple sources of truth**
These “sources of truth” are [traces, metrics, and logs](https://medium.com/dzerolabs/observability-journey-understanding-logs-events-traces-and-spans-836524d63172), also known as “The Three Pillars”. I actually hate that term, because it implies that these things are siloed from one another (more on that later). I much prefer the term “signal”. A signal is anything that gives you data.
I suppose that the whole Three Pillars thing kind of makes sense for Observability 1.0, where traces, metrics, and logs were often not correlated. This is especially true since, in the early days of Observability, we didn’t really have a common language for even talking about these signals. Each vendor had their own standard, and that may or may not have included a way to correlate the three signals.
I also want to add that there was much more of an emphasis on logs and metrics, because that’s just something that developers and operators are familiar with. Traces have been around, but were not very widely used.
### **Observability 2.0**
So now that we know what Observability 1.0 is all about, let’s look at how it differs from Observability 2.0.
First things first. Credit where credit is due. The term “Observability 2.0” was coined by [Charity Majors](https://x.com/mipsytipsy). Observability 2.0 is the acknowledgment that Observability, like all things tech and non-tech, continues to evolve. The evolution to Observability 2.0 is the recognition that we made a decent stab at Observability (i.e. 1.0), but unfortunately, it didn’t really fulfill the promise of the definition of Observability that we saw earlier on. No problem, because things are constantly evolving.
So what makes Observability 2.0 different from 1.0? It has the following characteristics:
#### **1- It’s focused not only on how you operate your code, but also on how you develop your code**
This means that Observability is part of the [systems development lifecycle (SDLC)](https://en.wikipedia.org/wiki/Systems_development_life_cycle), and is therefore a concern of developers, QAs, and SREs. How?
**[Developers instrument their code](https://youtube.com/shorts/GIF61VCIqQI?feature=share)** so that they can troubleshoot it during development. 🤯 Instrumentation is the process of adding code to software to generate telemetry signals for Observability purposes. Software engineers already rely on logging for troubleshooting (hello, “print” statements?), so why not add traces and metrics into the mix?
**Quality Assurance (QA) analysts leverage instrumented code during testing**. When they encounter a bug, QAs can use telemetry data to enable them to troubleshoot code and file more detailed bug reports to developers. Or, if they’re unable to troubleshoot the code with the telemetry provided, it means that the system has not been sufficiently instrumented. Again, they go back to developers with that information so that developers can add more instrumentation to the code.
**QAs further take advantage of instrumented code by [creating trace-based tests](https://thenewstack.io/trace-based-testing-the-next-step-in-observability/) (TBT) for integration testing.** In a nutshell, TBT leverages traces to create integration tests. For anyone interested in seeing TBT in action, the [OpenTelemetry Demo leverages TBT](https://github.com/open-telemetry/opentelemetry-demo/tree/main/test) using the opens source version of [Tracetest](https://tracetest.io/).
**SREs leverage instrumented code to create [service-level objectives (SLOs)](https://thenewstack.io/translating-failures-into-service-level-objectives/)**. SLOs help us answer the question, "What is the reliability goal of this service?" SLOs are based on [Service Level Indicators](https://youtu.be/Mgzt4bq0JU4?si=lXigHRUPmoiMtd7Q) (SLIs), which are themselves based on metrics. Metrics that were instrumented by your developer! 🤯 SREs can create alerts based on these SLOs, so that when an SLO is breached, they're notified right away. Furthermore, since the SLO is ultimately tied to a metric (via an SLI), which is correlated to a trace (more on signal correlation shortly), the SRE knows where to start looking when an issue arises in production.
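As a concrete (and entirely made-up) illustration of that chain, here is how an availability SLI might be computed from metrics and checked against a 99.9% SLO:

```python
# Toy SLI/SLO check; the request counts are invented for illustration.
total_requests = 100_000
failed_requests = 250

# SLI: fraction of successful requests over the measurement window
sli = (total_requests - failed_requests) / total_requests  # 0.9975
slo = 0.999  # reliability goal for the service

breached = sli < slo  # True -> the alert fires, and correlated traces show where to look
print(f"SLI={sli:.4f}, SLO={slo}, breached={breached}")
```

In practice the counts come from your instrumented metrics, and the alert links back to the traces behind the failing requests.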
**[CI/CD pipelines are instrumented](https://thenewstack.io/how-to-observe-your-ci-cd-pipelines-with-opentelemetry/).** CI/CD pipelines are the backbone of the modern SDLC. They are responsible for packaging and delivering code to production in a timely manner. When they fail, we can't get code into production, which means angry users. Nobody likes angry users. Ever. Therefore, having observable CI/CD pipelines allows us to address pipeline failures in a more timely manner, to help alleviate software delivery bottlenecks.
#### **2- It’s focused on unknown unknowns**
Also known as “unpredictable shit happens”. Let’s face it, you can’t know every problem that there’s ever going to be. This is especially true in the world of microservices, where services interact with each other in such weird and unpredictable ways because…well, we users tend to use systems in very weird and unpredictable ways! 🤯 [Traditional dashboards can’t save you](https://faun.pub/observability-mythbusters-observability-anti-patterns-2b3062405b54), but [SLO-based alerts can](https://youtu.be/5maHQiElGgY?si=FF7ZO8QN6ivnt_BZ).
#### **3- It’s focused on a single source of truth: events**
Wait…what? What about traces, metrics, and logs? Well, traces, metrics, and logs are all types of _events_. An event is information about a thing that happened. They are structured (think JSON-like) and timestamped. **_Traces, metrics, and logs are therefore different types of events that serve different and important purposes, each contributing to the Observability story._** Furthermore, they're all _correlated_. Instead of Three Pillars, they're more like [the three strands that make up a braid](https://thenewstack.io/modern-observability-is-a-single-braid-of-data/) (shoutout to my teammate [Ted Young](https://twitter.com/tedsuo) for this analogy).
In addition, we now have a common standard for defining and correlating traces, metrics, and logs: [OpenTelemetry](https://opentelemetry.io/). Most Observability vendors are all in on OpenTelemetry, which means that it has become the de-facto standard for instrumenting code (and also the second most popular CNCF project in terms of contributions 🎉). It also means that these vendors all ingest the same data, and it’s up to how those vendors render the data that differentiates them from one other.
I also want to add that in this Observability story, we place traces front and center, since they help give us that end-to-end picture of what happens when someone does a thing to a system, with metrics and logs serving as supporting actors which add useful details to that picture. And of course, everything is correlated.
### **Final thoughts**
Observability has come a long way from its early days, and Observability 2.0 is the acknowledgement that Observability is evolving, and most importantly, that we’re getting closer and closer to fulfilling the promise of Observability itself.
I can’t wait to see what the future has in store!
Now, please enjoy this photo of my rat Katie, enjoying some hangtime in the pocket of my husband’s bathrobe. 💜

Until next time, peace, love, and code. ✌️💜👩💻
| avillela |
1,883,604 | How to Deploy Flutter on Upsun | Recently, a ticket came through regarding a user who wanted to deploy a Node.js frontend, but... | 0 | 2024-06-10T19:19:09 | https://dev.to/upsun/how-to-deploy-flutter-on-upsun-3m1g | flutter, devops, upsun, nix | Recently, a ticket came through regarding a user who wanted to deploy a Node.js frontend, but alongside a Flutter backend, and it that was something that was possible on Platform.sh/Upsun.
Youbetcha!
## Deploying
To get started locally (I was on a Mac), I ran:
```bash
brew install --cask flutter
```
I found a (now archived, but still useful) example repository at https://github.com/flutter/gallery, which showcases a number of example views and components.
```bash
git clone git@github.com:flutter/gallery.git && cd gallery
mkdir .upsun
touch .upsun/config.yaml
```
Create a new project (`upsun project:create`) and connect to the remote when prompted. Then edit the `.upsun/config.yaml` file to contain the following:
```yaml
applications:
gallery:
variables:
env:
FLUTTER_VERSION_DL: "3.22.2"
source:
root: "gallery"
type: "nodejs:20"
hooks:
build: |
set -eux
curl -s \
https://storage.googleapis.com/flutter_infra_release/releases/stable/linux/flutter_linux_$FLUTTER_VERSION_DL-stable.tar.xz \
-o flutter.tar.xz
tar -xf $PLATFORM_APP_DIR/flutter.tar.xz -C .
export PATH="$PLATFORM_APP_DIR/flutter/bin:$PATH"
flutter build web
web:
locations:
/:
root: build/web
index:
- index.html
expires: 2m
scripts: false
allow: true
rules:
static\/*:
expires: 365d
routes:
"https://{default}/":
id: gallery
type: upstream
primary: true
upstream: "gallery:http"
"https://www.{default}":
id: gallery-redirect
type: redirect
to: "https://{default}/"
```
Flutter is downloaded during the build hook, so the choice of `type` is largely arbitrary at this point. In this example, the most recent version (3.22.2) is used and set in the environment variable `FLUTTER_VERSION_DL` so that it can be downloaded. Finally, the command `flutter build web` downloads dependencies and builds the application.
With this configuration file now in hand, we can commit and push to Upsun:
```bash
git add . && git commit -m "Upsunify the example."
upsun push
```
And that's it! The configuration shown here works for many Flutter apps, including all of the examples in https://github.com/gskinnerTeam/flutter_vignettes.
## Next steps
1. **Caching the install.** Upsun provides a super flexible build process that's both completely configurable _and_ smart enough to understand when a pushed commit actually requires rebuilding an application. That assumption won't hold true for our download of Flutter, though: we will re-download Flutter every time a commit requires a new build image, even if we want to continue using the same version of Flutter. This build hook could be improved by utilizing Upsun's build cache to check whether the pinned version has changed and otherwise reuse a cached download, which will save us some time.
2. **Nix.** A great thing that's come out recently on Upsun is support for Nix via our composable image syntax. In the example so far, we used the legacy `type: "nodejs:20"` syntax, which pulls one of our maintained image versions. Instead, we could configure the project like this:
```yaml
applications:
gallery:
source:
root: "gallery"
stack:
- "flutter"
```
Where the `stack` entry `"flutter"` pulls [the most recent version from Nix Packages](https://search.nixos.org/packages?channel=24.05&show=flutter&from=0&size=50&sort=relevance&type=packages&query=flutter).
Seeing as Upsun Nix support doesn't yet allow for the kind of version pinning we might want in Beta, we'll just have to hold off on that experiment until next time!
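Coming back to the caching idea from step 1, here is a sketch of what a cached download could look like (untested against a real project; the `PLATFORM_CACHE_DIR` location is an assumption, so check which writable, persistent directory your build hook actually gets):

```shell
# Sketch: keep the downloaded SDK archive between builds so a rebuild with the
# same FLUTTER_VERSION_DL skips the download entirely.
fetch_flutter() {
  version="$1"
  cache_dir="${PLATFORM_CACHE_DIR:-/tmp/build-cache}/flutter"
  archive="$cache_dir/flutter_$version.tar.xz"
  mkdir -p "$cache_dir"
  if [ ! -f "$archive" ]; then
    # Cache miss: download once, keep the archive for later builds.
    curl -s "https://storage.googleapis.com/flutter_infra_release/releases/stable/linux/flutter_linux_${version}-stable.tar.xz" \
      -o "$archive"
  fi
  echo "$archive"
}

# In the build hook, the curl + tar steps would then become:
#   tar -xf "$(fetch_flutter "$FLUTTER_VERSION_DL")" -C .
```

Bumping `FLUTTER_VERSION_DL` produces a new archive name, so a version change naturally invalidates the cache.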
Until then, happy deploying!
Chad C. | chadwcarlson |
1,883,603 | 1051. Height Checker | 1051. Height Checker Easy A school is trying to take an annual photo of all the students. The... | 27,523 | 2024-06-10T19:18:28 | https://dev.to/mdarifulhaque/1051-height-checker-ai | php, leetcode, algorithms, programming | 1051\. Height Checker
Easy
A school is trying to take an annual photo of all the students. The students are asked to stand in a single file line in non-decreasing order by height. Let this ordering be represented by the integer array expected where expected[i] is the expected height of the ith student in line.
You are given an integer array heights representing the current order that the students are standing in. Each heights[i] is the height of the ith student in line (0-indexed).
Return the number of indices where heights[i] != expected[i].
**Example 1:**
- **Input:** heights = [1,1,4,2,1,3]
- **Output:** 3
- **Explanation:**
```
heights: [1,1,4,2,1,3]
expected: [1,1,1,2,3,4]
Indices 2, 4, and 5 do not match.
```
**Example 2:**
- **Input:** heights = [5,1,2,3,4]
- **Output:** 5
- **Explanation:**
```
heights: [5,1,2,3,4]
expected: [1,2,3,4,5]
All indices do not match.
```
**Example 3:**
- **Input:** heights = [1,2,3,4,5]
- **Output:** 0
- **Explanation:**
```
heights: [1,2,3,4,5]
expected: [1,2,3,4,5]
All indices match.
```
**Constraints:**
- <code>1 <= heights.length <= 100</code>
- <code>1 <= heights[i] <= 100</code>
**Solution:** Since every height is between 1 and 100, we can build the expected (sorted) order with counting sort instead of calling `sort()`: count the occurrences of each height, then walk the original array while emitting heights in ascending order, counting the positions where the two orders differ. This runs in O(n + k) time, with k = 100.

```php
class Solution {
/**
* @param Integer[] $heights
* @return Integer
*/
function heightChecker($heights) {
$ans = 0;
$currentHeight = 1;
$count = array_fill(0, 101, 0);
foreach ($heights as $height) {
$count[$height]++;
}
foreach ($heights as $height) {
while ($count[$currentHeight] == 0) {
$currentHeight++;
}
if ($height != $currentHeight) {
$ans++;
}
$count[$currentHeight]--;
}
return $ans;
}
}
```
**Contact Links**
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)** | mdarifulhaque |
1,883,602 | How to Create a storage account with high availability | Storage account with high availability. Creating a highly available Azure Storage account... | 0 | 2024-06-10T19:18:21 | https://dev.to/ajayi/how-to-create-a-storage-account-with-high-availability-h5 | tutorial, beginners, cloud, azure | ## Storage account with high availability

Creating a highly available Azure Storage account involves configuring settings to ensure data durability and accessibility even in the event of hardware failures or data center outages.
Steps to create a storage account with high availability:

**Step 1:** Create a storage account. (Note: see the previous post on how to create a storage account.)

**Step 2:** On your storage account deployment page, select Go to resource.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cadm2lb9vlttlm4s5hyh.png)

**Step 3:** In the storage account, under the Data management section, select the Redundancy blade.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ceeje1mgaqm99p2nsmm0.png)

**Step 4:** Ensure Read-access geo-redundant storage (RA-GRS) is selected.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0inlh0cqi8d2f9bfltw6.png)

**Step 5:** Back in your storage account, under the Settings section, select the Configuration blade.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ttr1elhgmnfyhm9a4gyh.png)

**Step 6:** Ensure the Allow Blob anonymous access setting is set to Enabled, then select Save.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2lbm4q8btmy11v1zrsyf.png)
## Conclusion

By carefully selecting the appropriate replication options, configuring networking settings, and enabling data protection features, you can create a highly available Azure Storage account. This ensures your data remains durable, accessible, and secure, providing robust support for your applications even in the face of hardware failures or regional outages.
| ajayi |
1,883,601 | Real-time Applications with NestJS and WebSockets | Real-time applications have become an integral part of modern web development, enabling instant... | 0 | 2024-06-10T19:10:48 | https://dev.to/ezilemdodana/real-time-applications-with-nestjs-and-websockets-5afk | nestjs, backend, typescript, webdev | Real-time applications have become an integral part of modern web development, enabling instant communication and dynamic interactions. NestJS, with its powerful framework and support for WebSockets, makes it easy to build such applications. In this article, we'll explore how to create real-time applications with NestJS and WebSockets, covering key concepts, implementation steps, and best practices.
**Why Use WebSockets?**
WebSockets provide a full-duplex communication channel over a single TCP connection, allowing real-time data exchange between the client and server. Unlike traditional HTTP requests, WebSockets maintain an open connection, enabling instant updates without the need for polling.
**Setting Up a NestJS Project**
First, create a new NestJS project:
```
nest new real-time-app
```
Navigate to the project directory:
```
cd real-time-app
```
**Installing WebSocket Dependencies**
NestJS provides a WebSocket module out of the box. Install the necessary package:
```
npm install @nestjs/websockets @nestjs/platform-socket.io
```
**Creating a WebSocket Gateway**
A gateway in NestJS acts as a controller for WebSocket events. Create a new gateway using the NestJS CLI:
```
nest generate gateway chat
```
This will create a new chat.gateway.ts file. Let's define our gateway to handle real-time chat messages:
```
import {
SubscribeMessage,
WebSocketGateway,
OnGatewayInit,
OnGatewayConnection,
OnGatewayDisconnect,
WebSocketServer,
} from '@nestjs/websockets';
import { Logger } from '@nestjs/common';
import { Socket, Server } from 'socket.io';
@WebSocketGateway()
export class ChatGateway implements OnGatewayInit, OnGatewayConnection, OnGatewayDisconnect {
@WebSocketServer() server: Server;
private logger: Logger = new Logger('ChatGateway');
@SubscribeMessage('message')
handleMessage(client: Socket, payload: string): void {
this.server.emit('message', payload);
}
afterInit(server: Server) {
this.logger.log('Init');
}
handleConnection(client: Socket, ...args: any[]) {
this.logger.log(`Client connected: ${client.id}`);
}
handleDisconnect(client: Socket) {
this.logger.log(`Client disconnected: ${client.id}`);
}
}
```
**Explanation**
- @WebSocketGateway(): Decorator to mark the class as a WebSocket gateway
- @WebSocketServer(): Decorator to inject the WebSocket server instance
- handleMessage(): Method to handle incoming messages and broadcast them to all connected clients
- afterInit(): Lifecycle hook called after the gateway is initialized
- handleConnection(): Lifecycle hook called when a client connects
- handleDisconnect(): Lifecycle hook called when a client disconnects
**Handling Events on the Client-Side**
To handle WebSocket events on the client side, you can use the socket.io-client library. Install it using:
```
npm install socket.io-client
```
Here’s an example of a simple client-side implementation in an HTML file:
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Chat App</title>
<script src="https://cdn.socket.io/4.0.1/socket.io.min.js"></script>
<script>
document.addEventListener('DOMContentLoaded', () => {
const socket = io('http://localhost:3000');
const form = document.getElementById('form');
const input = document.getElementById('input');
const messages = document.getElementById('messages');
form.addEventListener('submit', (e) => {
e.preventDefault();
if (input.value) {
socket.emit('message', input.value);
input.value = '';
}
});
socket.on('message', (message) => {
const item = document.createElement('li');
item.textContent = message;
messages.appendChild(item);
});
});
</script>
</head>
<body>
<ul id="messages"></ul>
<form id="form" action="">
<input id="input" autocomplete="off" /><button>Send</button>
</form>
</body>
</html>
```
**Explanation**
- Socket.IO: The socket.io-client library is used to handle WebSocket connections
- Connecting to the Server: io('http://localhost:3000') establishes a connection to the WebSocket server
- Sending Messages: The form submission event sends messages to the server using socket.emit('message', input.value)
- Receiving Messages: The socket.on('message') event handler updates the DOM with new messages
**Best Practices**
1. Authentication and Authorization: Ensure that WebSocket connections are authenticated and authorized. Use middlewares or guards to secure your WebSocket endpoints
2. Error Handling: Implement proper error handling for your WebSocket events to ensure robustness
3. Scalability: Use tools like Redis for pub/sub patterns if you need to scale your WebSocket server
4. Monitoring and Logging: Integrate monitoring tools to keep track of WebSocket connections and events
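As a rough illustration of the error-handling point above, a message handler can be wrapped so that a thrown exception is reported back to the caller instead of crashing the handler. The sketch below is framework-agnostic plain JavaScript rather than NestJS-specific code; the `safeHandler` name and the `{ ok, data, error }` payload shape are illustrative choices, not part of any API.

```javascript
// Wrap a message handler so any thrown error is returned
// as an { ok: false, error } payload instead of propagating.
function safeHandler(handler) {
  return (payload) => {
    try {
      return { ok: true, data: handler(payload) };
    } catch (err) {
      return { ok: false, error: err.message };
    }
  };
}

// Example: a handler that rejects empty or non-string payloads.
const handleMessage = safeHandler((payload) => {
  if (typeof payload !== 'string' || payload.trim() === '') {
    throw new Error('Message must be a non-empty string');
  }
  return payload.trim();
});

console.log(handleMessage('hello ')); // { ok: true, data: 'hello' }
console.log(handleMessage(''));       // { ok: false, error: '...' }
```

In a real gateway you would emit the error payload back to the offending client (for example with `client.emit`) rather than broadcasting it.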
**Conclusion**
Building real-time applications with NestJS and WebSockets is straightforward and efficient. By leveraging NestJS's WebSocket module, you can create highly interactive and responsive applications. Follow best practices for authentication, error handling, and scalability to ensure your real-time application is robust and secure.
**My way is not the only way!**
| ezilemdodana |
1,883,600 | Keeping Your Mind On Coding, While Life Distracts You | We've all been there. You stare at the blinking cursor, your favorite IDE (Integrated Development... | 0 | 2024-06-10T19:09:28 | https://dev.to/dommieh97/keeping-your-mind-on-coding-while-life-distracts-you-17lb | webdev, programming, productivity | We've all been there. You stare at the blinking cursor, your favorite IDE (Integrated Development Environment) mocking you with its blank canvas. The coffee's cold, that bug you were wrestling with feels like it has a PhD in hiding, and the siren song of social media beckons from the other tab. Let's face it, staying focused on coding when life gets busy can feel like scaling a sheer cliff face.
## The Mental Health Maze
Life throws a lot our way. Work deadlines, family obligations, the ever-present undercurrent of global news – it's enough to make anyone's brain feel like a scrambled egg. When mental health takes a dip, focus becomes the first casualty. That complex algorithm you were dissecting suddenly resembles a bowl of alphabet soup.
## The Work-Life Tightrope Walk
Work itself can be a double-edged sword. A demanding job can leave you mentally drained, with little energy left for your personal coding projects. Conversely, the pressure to stay relevant in a competitive market can lead to guilt trips when you take a break to recharge.
## Taking a Breather: A Coder's Lifeline
Here's the truth: stepping away from the code is not a sign of weakness, it's a sign of self-awareness. Just like any athlete, coders need rest and recovery to perform at their best. Take a walk, spend time with loved ones, pursue a completely different hobby. A refreshed mind will return to those coding challenges with renewed vigor.
## The Job Market Jungle: It's Still Out There
The tech industry can feel like a jungle at times, with a million applicants vying for every open position. But here's the secret most experienced coders know: there are still fantastic opportunities out there. Taking a break to prioritize your well-being won't make you unemployable. A clear head and a focused approach will land you that dream job much faster than burnout ever could.
Remember, coding is a marathon, not a sprint. Pace yourself, take care of your mental health, and don't be afraid to hit the pause button when you need it. The coding world will be waiting for you when you come back, ready to tackle those challenges with renewed focus and a clearer mind. | dommieh97 |
1,883,599 | Mappings 1.16.5 | Where can I find mappings.srg for Forge 1.16.5? | 0 | 2024-06-10T19:03:22 | https://dev.to/simpatae_db4017929eb0f42/mappings-1165-4c6p | forge | Where can I find mappings.srg for Forge 1.16.5?
| simpatae_db4017929eb0f42 |
1,883,598 | Breaking Free from Analysis Paralysis | Pet project Ever found yourself paralyzed by endless choices when starting a new project?... | 0 | 2024-06-10T18:56:41 | https://swiderski.tech/2024-06-07-breaking-free-from-analysis-paralysis/ | procrastination, programming, go, productivity | ## Pet project
Ever found yourself paralyzed by endless choices when starting a new project? I recently did, and it all began with binge-watching way too much [ThePrimeagen](https://www.youtube.com/@ThePrimeTimeagen/featured). Inspired, I decided to learn GoLang, thinking it would be the perfect next step for my half-baked Vim skills. I had this pet project idea simmering for months, and GoLang seemed like a solid choice for the backend. But as I soon discovered, the journey from idea to implementation is riddled with obstacles, especially when you’re not a seasoned backend developer.
## Paralysis
There are a lot of questions when working on the backend:
- Where to host it?
- What DB to use?
- How will I use Kafka with it, and where to put the AI engine and blockchain?
I started by creating a new project in Todoist (no way I’m using Jira for a pet project) and started adding tasks: set repo, write hello world app, add REST API endpoint, set CI, deploy... to where?
I do have a VPS that I sometimes use for pet projects, but it’s not an all-in-one solution, more like a remote Linux I can run things on. I need to host the DB somewhere, have monitoring and alerting, a notification service, a messaging queue… I don’t want to do it all. I don’t know how to do it all. What’s the point of even starting a project when I don’t know the basics?
Let me start googling for…
## Service Vendors
There are three standard choices: AWS, Google, and Microsoft, which provide all possible services out of the box. I used AWS, and I don't like it, no particular reason, just the look and feel. Google, as for an Android dev, looks familiar to me, but Google can kill the whole platform next week, I'll pass. Microsoft seems nice, and we use it in our current work project. Is it a good choice? I don’t want to be vendor-locked from the very beginning of the project. Or regret the choice, migrate to something else…
> "Do you pine for the nice days of minix-1.1, when men were men and wrote their own device drivers?" - Linus Torvalds
I won’t be writing my own DB, and I don’t even want to host and administer it. I don’t want to write my own OAuth or keep users' data. I need 3rd party vendors: Firebase for notifications, an OAuth provider with SDKs for mobile and web, and a DB provider.
Which DB to use? There is about a gazillion of those, in various paradigms. Later I may need an event store or message queue. Even more, decisions to make.

## Stop it, get some help
What am I doing here, I simply want to build an app.
For now, I decided to screw all this and build it locally as much as possible. When I'm satisfied with the results, I’ll happily pay someone to figure out how to put it online. I can go pretty far with the Docker Compose setup I made in 3 minutes. **And it works on my machine**. Good enough for the next few months :)
I’ve spent way too much time trying to find the best hosting/DB. And even more when trying to make it work. Comparing free/paid plans, calculating how much horsepower and space I will need. Going through tutorials, YT videos, and setup guides.
I don’t even have a concrete plan of what and how I want to implement it yet. But I’m burning time finding a nice place to put it online. There’s like a 95% chance it won’t even leave my computer. And it’s still gonna be worth it if I learn a new language or other tech.
But I don’t. I want to build an app, not fiddle with infra. **Infra is my kryptonite.**
It’s easy to fall into flashy vendor websites after attending meetups and conferences where their logos are omnipresent. It’s not a bad thing, it’s business.
But it obfuscates the true need I have. I want to build a working software solution and have fun learning new tech along the way.
### Procrastination
Since I don’t know the language well or have a concrete app design in my head (or documented like an adult), it’s easy to escape into googling all possible tools I may or may not need for this project. Doing endless research rather than working on the project itself. The project is doomed to fail that way. I’m losing my motivation momentum on comparing `Azure` to `Digital Ocean`. My app should have a working API, data models, and DB collections by now. Or at least a directory structure. Having a high-level design wouldn’t hurt.
But no, let’s compare OAuth providers when I don’t even have endpoints.
How is it that when I write code, I divide the problem into smaller pieces and conquer them one by one, but when I’m about to start a whole new greenfield project, I fall into this trap? I try everything new at once, losing sight of the actual problem I want to solve. Is it that when working in a team, other devs are subconsciously holding my horses, before I go too deep into details? Or I feel more responsible in my daily job, while a pet project may never be finished, and it’s not there to earn any money (so it should cost little to build).
Maybe I should change my approach and treat a pet project seriously. As an investment. Plan it better, and see potential future income. But will it be fun and a learning experience or just another job?
It’s hard to merge those two faces of software development. One where I’m being paid for doing my job, which may not always be fun and joyful. And the other, where I don’t have to be paid, but I need to have fun. I will enjoy it more when the thing I’m building works and even has users. But ultimately, it’s the journey that matters in pet projects.
Starting the journey with options paralysis is often its end.
### Oops I did it again
While writing this post, I decided to test a few writing apps. I mainly used [iA Writer](https://ia.net/writer), but I also downloaded [Paper](https://papereditor.app/), [Ulysses](https://www.ulysses.app/), and [Scrivener](https://www.literatureandlatte.com/scrivener/overview). Distraction-free writing is amazing. I can configure my Vim and Obsidian to look and feel similar. Wait… What am I doing?
Instead of writing this post, I was doing tutorials and watching comparisons between writing software.
I did it again. The same thing I did when starting the backend project.
Is it procrastination that drives me more into simple, non-measurable tasks, like “doing research” and “checking out tools” rather than doing real work? It reminds me of when I had exams and suddenly house chores became exciting.
How to escape that?
## Tips
I came up with a few tips to help me stay focused:
- **Start Small**: Begin with a minimal viable product (MVP) to avoid getting overwhelmed.
- **Set Clear Goals**: Define specific, achievable milestones for your project.
- **Use Local Development**: Focus on building and testing locally before worrying about deployment.
- **Leverage Simple Tools**: Use straightforward tools and services to avoid decision paralysis.
- **Time-Box Research**: Allocate a fixed amount of time for research to prevent endless comparisons.
- **Iterate Quickly**: Regularly review and refine your project to maintain momentum.
This should help me keep motivated and make progress. With just a slight scent of pet project being another job :)
## Conclusion
I think I’ve made the same mistake multiple times in my life. It starts with a pet project idea, but instead of working on the unique part of the solution, I dive into checking tools for the most commonly solved problems. I want to pick the best one while having zero experience with the topic, and I’m scared of making a wrong decision. Selecting the wrong tool would feed my Impostor Syndrome, and destroy the joy of creating something cool.
Later I started to think that picking ANY tool would be just fine, and I wasted hours. I’m losing the creative urge to build something; the momentum was wasted on useless work, and I’ve built nothing. And I’m already tired.
I hope that noticing my tendencies and acting on them fast, using the tips I put above will save me from doing it again.
Have you been there?
| asvid |
1,883,609 | Salário Do Programador Brasileiro 2024: Confira Essa Pesquisa | A “Pesquisa Salarial de Programadores Brasileiros 2024” vem em sua quarta edição realizada pelo canal... | 0 | 2024-06-23T13:50:48 | https://guiadeti.com.br/salario-programador-brasileiro-2024-confira/ | notícias, empregabilidade, mercadodeti, preparaçãoparaomerca | ---
title: Salário Do Programador Brasileiro 2024: Confira Essa Pesquisa
published: true
date: 2024-06-10 18:54:26 UTC
tags: Notícias,empregabilidade,mercadodeti,preparaçãoparaomerca
canonical_url: https://guiadeti.com.br/salario-programador-brasileiro-2024-confira/
---
The "Brazilian Programmer Salary Survey 2024" (Pesquisa Salarial de Programadores Brasileiros 2024) is the fourth edition run by the Código Fonte TV channel, dedicated to portraying the current state of the job market for Brazilian programmers, including those living and working abroad.

With data collected via a form between February 3 and June 4, 2024, the survey gathered responses from 15,049 professionals, bringing valuable insights into salary trends and employment conditions in this evolving sector.
## Brazilian Programmer Salary Survey 2024

The "Brazilian Programmer Salary Survey 2024" is the fourth edition run by the Código Fonte TV channel, aiming to provide a detailed analysis of the job market for programmers in Brazil and for those living and working abroad.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xne4nvuhpghby7dbr2y2.jpg)

_Image from the survey page_

Conducted between February 3 and June 4, 2024, the survey gathered 15,049 participants, offering a broad view of many aspects of a development career.

### Key Findings from the Survey

#### Salary Satisfaction and Education

Among participants, 41.7% expressed satisfaction with their salaries, while 63.8% of respondents hold at least a completed bachelor's degree, highlighting how highly qualified professionals in the field are.

#### Experience and Technology Use

About 16% of programmers have between 4 and 6 years of experience, pointing to a market of relatively experienced professionals.

Notably, 83.6% of participants use artificial intelligence in their programming work, signaling a growing trend toward adopting advanced technologies.

#### Views on Artificial Intelligence

Despite the high adoption of AI, 53.7% of developers believe artificial intelligence will not replace programmers in the future, showing optimism about the stability of their careers.

#### Mental Health and Remote Work

One worrying figure is that 72.7% of respondents reported currently or previously suffering from anxiety, reflecting the pressures of the field.

On the other hand, 65.7% of programmers work remotely, enabling greater flexibility in the work environment.

### International Mobility and Career Opportunities

Only 1.9% of developers live outside Brazil, yet 12.5% work for foreign companies, indicating a significant connection to the global market.

30.9% found opportunities through LinkedIn, and 37.4% have made a career transition into development.

### Salary Data and Comparisons

The survey also provides detailed average salaries by career level and area of work, along with comparisons by gender, offering a valuable perspective on equity and income distribution in the sector.

41.6% of programmers have taken part in more than eight hiring processes, highlighting the competitiveness and ongoing opportunities in the development market.

The survey brings many other relevant data points. Be sure to check out the full survey through the link at the end of this post!
## The Programmer

The programmer profession involves writing code to build software for a wide range of applications, from simple apps to complex operating systems.

As a field at the heart of the technological revolution, programming is fundamental to nearly every sector of modern industry, offering innovative solutions to problems old and new.

### Mastering Programming Languages

An effective programmer should have a solid command of at least one programming language. Some of the most common include JavaScript, Python, Ruby, and Java.

Each language has its own quirks and areas of application, so it is essential to choose the ones best aligned with your goals and projects.

### Development Tools

Programmers need to become familiar with various development tools, such as integrated development environments (IDEs), version control systems like Git, and collaborative development platforms like GitHub. These tools are essential for writing, testing, and maintaining code efficiently.

### A Diversity of Sectors

Programmers can work in a wide range of sectors, including information technology, healthcare, finance, education, and many others.

Each sector presents unique challenges and requirements, which may call for specialization or specific knowledge of particular technologies or frameworks.

### Programming Specializations

Within programming there are several specializations, such as front-end development (focused on visuals and user interaction), back-end development (centered on servers and databases), and mobile app development. Each of these areas demands a specific set of skills and technical knowledge.

### Growth Opportunities

A programming career offers many opportunities for professional growth. With experience, a programmer can move into technical leadership positions such as software architect or IT project manager.

Many programmers choose to become freelancers or start their own businesses, exploring the flexibility and independence the profession offers.

## Código Fonte TV

[Código Fonte TV](https://www.youtube.com/@codigofontetv) is a popular YouTube channel known for its dynamic, informative approach to programming, software development, and the latest trends in information technology.

The channel has established itself as a valuable source of knowledge, offering tutorials, reviews, and discussions that help both beginners and experienced developers sharpen their skills and keep up with the rapid changes in the technology sector.

### Hands-on Tutorials and Learning Guides

Código Fonte TV provides a wide range of tutorials covering many programming languages, frameworks, and development tools.

The tutorials are designed to be accessible, so programmers of every skill level can learn new techniques and solve common programming problems effectively.

### Technology Reviews and News

The channel regularly publishes detailed reviews of new technologies, software, and tools emerging on the market.

These reviews help viewers better understand which tools to pick for their projects and how the latest innovations may impact software development.

### Debates and Discussions

Código Fonte TV is also known for hosting debates and discussions on current topics in the tech world, such as cybersecurity, AI ethics, and the future of work in technology.

These discussions are fundamental to fostering a deeper understanding of the challenges and opportunities in the field.

### Community Interaction

The channel encourages interaction, letting viewers ask questions, suggest topics, and join discussions in the comments. This interaction enriches the learning experience and lets the community grow its knowledge collaboratively.

## Survey link ⬇️

The [Brazilian Programmer Salary Survey 2024](https://pesquisa.codigofonte.com.br/2024) can be viewed on the Código Fonte TV website.

## Share the survey with your network to spread the knowledge!

Enjoyed this content about the 2024 Brazilian Programmer Salary Survey by Código Fonte TV? Then share it with everyone!

The post [Salário Do Programador Brasileiro 2024: Confira Essa Pesquisa](https://guiadeti.com.br/salario-programador-brasileiro-2024-confira/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,883,597 | Master Nodemailer: The Ultimate Guide to Sending Emails from Node.js | Are you a Node.js developer looking to integrate email capabilities into your applications? Look no... | 0 | 2024-06-10T18:41:32 | https://blog.learnhub.africa/2024/06/10/master-nodemailer-the-ultimate-guide-to-sending-emails-from-node-js/ | webdev, node, beginners, programming |
Are you a Node.js developer looking to integrate email capabilities into your applications? Look no further than Nodemailer - the powerful and flexible Node.js module that simplifies sending emails from your server.

## Want to get started with Nodemailer? [A Beginner’s Guide to Nodemailer](https://blog.learnhub.africa/2024/06/07/a-beginners-guide-to-nodemailer/)

In this comprehensive guide, we'll explore the world of Nodemailer, covering everything from basic setup to advanced configurations, so you have all the tools you need to master email delivery in your Node.js projects.
## Introduction to Nodemailer
Nodemailer is an open-source Node.js module that easily allows you to send emails from your server. Whether you need to communicate with users, send notifications, or handle transactional emails, Nodemailer has covered you.
With its simple yet robust API, Nodemailer abstracts away the complexities of email delivery, allowing you to focus on building amazing applications.

## [Build an Advanced Contact Form in React with Nodemailer](https://blog.learnhub.africa/2024/02/02/build-an-advanced-contact-form-in-react-with-nodemailer/)
## Getting Started with Nodemailer
Before we dive into the nitty-gritty details, let's set up our Node.js environment and install Nodemailer. First, ensure your system has installed Node.js and npm (Node Package Manager). You can verify this by running the following commands in your terminal:
```
node -v
npm -v
```
If both commands return their respective versions, you're good to go. Otherwise, visit the official Node.js website and download the appropriate installer for your operating system.
Next, create a new directory for your project and navigate into it using the following commands:
```
mkdir nodemailer-project
cd nodemailer-project
```
Once inside the project directory, initialize a new Node.js project by running:
```
npm init -y
```
This command will create a `package.json` file, keeping track of your project's dependencies.
Now, it's time to install Nodemailer. Run the following command to add Nodemailer to your project:
```
npm install nodemailer
```
Nodemailer is now installed and ready to be used in your Node.js application.
## Sending Your First Email with Nodemailer
With Nodemailer set up, let's dive into sending our first email. Create a new file, e.g., `app.js`, and add the following code:
```javascript
const nodemailer = require('nodemailer');
// Create a transporter object
const transporter = nodemailer.createTransport({
service: 'gmail', // Use Gmail as the email service
auth: {
user: 'your-email@gmail.com', // Your Gmail email address
pass: 'your-email-password' // Your Gmail password
}
});
// Define the email options
const mailOptions = {
from: 'your-email@gmail.com', // Sender's email address
to: 'recipient@example.com', // Recipient's email address
subject: 'Hello from Nodemailer', // Subject line
text: 'This is a test email sent using Nodemailer!' // Plain text body
};
// Send the email
transporter.sendMail(mailOptions, (error, info) => {
if (error) {
console.log(error);
} else {
console.log('Email sent: ' + info.response);
}
});
```
In this example, we first import the Nodemailer module and create a transporter object using the `createTransport` method. We specify `'gmail'` as the email service and provide our Gmail email address and password for authentication.
How do you secure your personal mail service? Find out in this guide

## [Securing Nodemailer with Proper Authentication](https://blog.learnhub.africa/2023/10/27/securing-nodemailer-with-proper-authentication/)
Next, we define the email options using the `mailOptions` object. This object includes the sender's and recipient's email addresses, subject line, and email body.
Finally, we call the `sendMail` method of the transporter object, passing in the `mailOptions` object.
Nodemailer will handle the email delivery process, and depending on the outcome, we'll receive a success or error message in the console.
Remember that using your actual Gmail credentials in your code is not recommended for production environments, as it poses a security risk. Instead, you should use environment variables or a secure credentials management system.
## Configuring Nodemailer with Gmail
While the previous example demonstrates sending emails using Nodemailer and Gmail, an additional step is required to ensure reliable delivery. Gmail has implemented security measures that may block emails from untrusted sources, including your Node.js application.
To overcome this obstacle, you must configure your Google Cloud Platform (GCP) account and enable the Gmail API. This process involves creating a new GCP project, enabling the Gmail API, and generating OAuth 2.0 credentials. Don't worry; we'll guide you through the entire process step by step.
If you don’t have a [Google Cloud Platform](https://console.cloud.google.com/home) account, set one up as a prerequisite. Once you have that setup, create a new project by clicking on the dropdown menu in the upper left corner.

Select the New Project option:

In the next window, we must give our project a name. Pick whatever you like, but we will continue with our **NodemailerProject** name. For the location property, you can leave it as No organization.

It may take a few seconds for the project to be set up, but after that you will be able to see this screen:

Open up the navigation menu by clicking the three dashed lines in the top left corner and select **APIs and Services:**

To use Nodemailer and Gmail, we will have to use OAuth2. If you aren’t familiar with OAuth, it is an open protocol for authorization. I won’t get into the specifics here as it is unnecessary, but if you want to understand more, go [here](https://oauth.net/2/).
First, we will have to configure our OAuth Consent Screen:

If you are not a G-Suite member, the only option available will be External for User Type.

After clicking create, the next screen requires us to fill out the application’s information (our server):

Fill in your email in the User support email field and also in the Developer contact information field. Clicking Save and Continue will bring us to the Scopes phase of this configuration. Skip this phase, as it is irrelevant to us, and head into the Test Users phase.

Here, add yourself as a user, click Save, and continue.
## **How to Configure Your OAuth Settings**
In this phase, we will create OAuth credentials for use with Nodemailer. Go to the Credentials tab above the OAuth Consent Screen. Click on the plus (➕) sign with the text **Create Credentials** and choose OAuth Client ID.

In the Application type dropdown menu, choose **Web Application**:

In the **Authorized Redirect URIs** section, make sure to add OAuth2 Playground ([https://developers.google.com/oauthplayground](https://developers.google.com/oauthplayground/)) as we will use it to get one of the keys that were mentioned at the beginning of this article.

After clicking Create, you will receive your client ID and client secret. **Keep these to yourself and never expose them in any way, shape, or form**.
**Get Your OAuth Refresh Token**
To get the refresh token, which we will use within the transporter object in Nodemailer, we need to go to the OAuth2 Playground. We approved this URI for this specific purpose at an earlier stage.
1. Click on the gear icon to the right (which is OAuth2 Configuration) and check the checkbox to use your own OAuth2 Credentials:

2. Look over to the left side of the website and you will see a list of services. Scroll down until you see Gmail API v1.

3. Click **Authorize APIs**
You will be given a screen to login to any of your Gmail accounts. Choose the one you listed as a Test user.
4. The next screen will let you know that Google still hasn’t verified this application, but this is ok since we haven’t submitted it for verification. Click continue.

5. On the next screen, you will be asked to grant your project permission to interact with your Gmail account. Do so.

6. Once done, you will be redirected back to the OAuth Playground. An authorization code is in the menu to the left.
Click on the blue button labeled **Exchange authorization code for tokens**.
The fields for the refresh token and the access token will now be filled.
7. **Update your Nodemailer code**: Replace the `auth` object in your Nodemailer transporter configuration with the following:
```javascript
auth: {
type: 'OAuth2',
user: 'your-email@gmail.com', // Your Gmail email address
clientId: 'your-client-id', // OAuth 2.0 client ID
clientSecret: 'your-client-secret', // OAuth 2.0 client secret
refreshToken: 'your-refresh-token' // OAuth 2.0 refresh token
}
```
By following these steps, you'll ensure that your Node.js application is authorized to send emails through the Gmail service, improving the reliability and deliverability of your emails.
## Advanced Nodemailer Features
Nodemailer offers many advanced features and configurations to enhance your email delivery capabilities. Here are a few notable examples:
1. **HTML Email Templates**: Nodemailer supports sending HTML-formatted emails, allowing you to create visually appealing and responsive email templates.
2. **Attachments**: You can attach files, such as documents or images, to your emails using Nodemailer's built-in attachment handling.
3. **Embedded Images**: Nodemailer enables you to embed images directly within the email body, enhancing the visual appeal of your messages.
4. **Custom Email Headers**: Customize email headers to include additional metadata, such as X-headers or custom tracking parameters.
5. **Email Previews**: Nodemailer allows users to preview email templates before sending them, ensuring they look exactly as intended.
6. **Templating Engines**: Integrate popular templating engines, like EJS or Handlebars, to create dynamic and reusable email templates.
7. **Scheduling and Queueing**: Implement email scheduling and queueing mechanisms to manage high-volume email delivery effectively.
8. **Email Testing and Debugging**: Utilize Nodemailer's built-in testing and debugging tools to identify and resolve issues during development and deployment.
These are just a few examples of Nodemailer's advanced features. As you delve deeper into the email delivery world, you'll discover even more powerful capabilities tailored to your specific project requirements.
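As a quick illustration of the first four features, here is a minimal, hypothetical `mailOptions` object. The field names (`html`, `attachments`, `cid`, `headers`) are Nodemailer's documented message options, while the addresses and file paths are placeholders you would replace with your own:

```javascript
// Hypothetical message options combining an HTML body, a regular attachment,
// an embedded image referenced by Content-ID, and a custom header.
// You would pass this object to transporter.sendMail() like the earlier example.
const mailOptions = {
  from: 'your-email@gmail.com',
  to: 'recipient@example.com',
  subject: 'Monthly report',
  html: '<h1>Hello!</h1><p>See the attached report.</p><img src="cid:logo@app"/>',
  attachments: [
    { filename: 'report.pdf', path: './report.pdf' },             // file attachment
    { filename: 'logo.png', path: './logo.png', cid: 'logo@app' } // embedded image
  ],
  headers: { 'X-Campaign-Id': 'june-report' }                     // custom X-header
};

console.log(Object.keys(mailOptions).join(', '));
```

The `cid` value ties the attachment to the `<img src="cid:...">` reference in the HTML body, which is how Nodemailer embeds images inline.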
## Conclusion
Nodemailer is a versatile and indispensable tool for Node.js developers seeking to integrate email functionality into their applications. With its straightforward setup, beginner-friendly API, and advanced features, Nodemailer empowers you to deliver engaging and reliable email experiences to your users.
Following this guide, you've learned how to set up Nodemailer, configure it with Gmail, and leverage its advanced capabilities to enhance your email delivery pipeline.
Whether you're building a simple notification system or a complex transactional email platform, Nodemailer has everything you need to succeed.
So, what are you waiting for? Dive into the world of Nodemailer, master email delivery in Node.js, and take your applications to new heights!
## Resource
* [connecting nodemailer to Gmail](https://www.freecodecamp.org/news/use-nodemailer-to-send-emails-from-your-node-js-server/)
[](https://www.freecodecamp.org/news/use-nodemailer-to-send-emails-from-your-node-js-server/)
| scofieldidehen |
1,883,596 | Day 15 of 30 of JavaScript | Hey reader👋 Hope you are doing well😊 In the last post we have seen about objects in JavaScript. In... | 0 | 2024-06-10T18:40:56 | https://dev.to/akshat0610/day-15-of-30-of-javascript-2a4c | webdev, javascript, beginners, tutorial | Hey reader👋 Hope you are doing well😊
In the last post we learned about objects in JavaScript. In this post we are going to learn about the `this` keyword in JavaScript. We will start from the very basics and take it to an advanced level.
So let's get started🔥
## `this` Keyword
In JavaScript, the `this` keyword is a special identifier that refers to the context in which the current code is executing. Its behavior can vary depending on the mode (strict or non-strict) and the type of function call.
## Different Contexts for `this`
**Global Context (or default binding):**
In the global execution context (outside of any function), `this` refers to the global object. In a browser, the global object is `window`.

**`this` in a Function (Default)**
In a function, the global object is the default binding for `this`.

**`this` in event handlers**
In HTML event handlers, `this` refers to the HTML element that received the event.

**Object Method Binding**
When a function is called as a method of an object, `this` refers to the object the method is called on.

**Arrow Functions**
Arrow functions do not have their own `this` context. Instead, `this` is lexically inherited from the surrounding non-arrow function or global context.
In regular functions the `this` keyword represents the object that called the function, which could be the window, the document, a button or whatever.
With arrow functions the `this` keyword always represents the object that defined the arrow function.

Here the function is called when the button is clicked, so `this` represents the button.

In this case `this` refers to the owner of the function, i.e. the window object.
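The difference can be seen side by side in a small sketch (hypothetical object):

```javascript
// A regular function gets `this` from its caller; an arrow function
// keeps the `this` of the scope where it was written (here: module scope).
const team = {
  name: 'Dev Team',
  regular: function () { return this; },
  arrow: () => this
};

console.log(team.regular() === team); // → true  (called on `team`)
console.log(team.arrow() === team);   // → false (inherited the outer `this`)
```

Even though `arrow` is a property of `team`, it never sees `team` as `this`, because arrow functions ignore how they are called.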
**Explicit Function Binding**
**`call()` Method-:**
The `call()` method is a predefined JavaScript method. It can be used to invoke (call) a method with an owner object as an argument (parameter).

This example calls the fullName method of person, using it on person1.
**`apply()` Method-:**
The `apply()` method is similar to the `call()` method.The difference is-: the `call()` method takes arguments separately whereas the `apply()` method takes arguments as an array.


**`bind()` Method-:**
With the `bind()` method, an object can borrow a method from another object.

Here the member object borrows the fullname method from the person object.
Sometimes the `bind()` method has to be used to prevent losing `this`.

Inside the `setTimeout` callback, `this` no longer refers to obj. Instead, it refers to the global object (window in browsers) in non-strict mode or is undefined in strict mode.

The value of `this` can be lost in callbacks because the context in which the callback is executed might differ from the context in which it was defined. This happens due to how JavaScript handles function calls and the default binding of `this`. To address this, you can use `bind`, arrow functions, or store `this` in a variable to preserve the intended context.
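The three fixes mentioned above can be compared in one hypothetical sketch:

```javascript
// Three ways to keep `this` pointing at `obj` inside a callback.
const obj = {
  value: 42,
  withBind() { return function () { return this.value; }.bind(this); },
  withArrow() { return () => this.value; },
  withSelf() {
    const self = this; // store `this` in a variable
    return function () { return self.value; };
  }
};

console.log(obj.withBind()());  // → 42
console.log(obj.withArrow()()); // → 42
console.log(obj.withSelf()());  // → 42
```

Each method returns a callback that, even when invoked later with no object context (as `setTimeout` would invoke it), still reads `value` from `obj`.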
Understanding how `this` works is crucial for writing correct and predictable JavaScript code, especially in the context of object-oriented programming, event handling, and functional programming paradigms.
I hope you have understood this blog. Don't forget to follow me and leave some reaction for this post.
Thank you 🩵 | akshat0610 |
1,883,556 | Introduction to Git | Quality control is critical, and developers work in small teams using Git for version... | 27,667 | 2024-06-10T18:39:39 | https://dev.to/aws-builders/introduction-to-git-ga9 | git, github, vcs, scm | Quality control is critical, and developers work in small teams using Git for version control.
Introduction to Git
{% embed https://www.youtube.com/watch?v=9uGS1ak_FGg %}
**What is version control?**
- Version control system (VCS) is a program or set of programs that tracks changes to a collection of files.
- Another goal is to allow several team members to work on a project, even on the same files, at the same time without affecting each other's work.
- Another name for a VCS is a software configuration management (SCM) system.
- To learn more about git and [official documentation](https://git-scm.com/)
**With VCS**
- You can see who made the changes and their comments at the time of committing files
- Retrieve past versions of the entire project
- Create branches
- Attach a tag to a version—for example, to mark a new release.
**Distributed version control**
Earlier instances of VCSes, including CVS and Subversion (SVN), used a centralized server to store a project's history. This centralization meant that the one server was also potentially a single point of failure.
**Git is distributed**, which means that a project's complete history is stored both on the client and on the server. You can edit files without a network connection, check them in locally, and sync with the server when a connection becomes available.
**Git Terminology**
- Repository (repo): The directory, located at the top level of a working tree, where Git keeps all the history and metadata for a project. Repositories are almost always referred to as repos.
- Commit: When used as a verb, commit means to make a commit object.
- Branch: A branch is a named series of linked commits. The most recent commit on a branch is called the head. The default branch, which is created when you initialize a repository, is called main (historically, master). The head of the current branch is named HEAD.
- Remote: A remote is a named reference to another Git repository. When you create a repo, Git creates a remote named origin that is the default remote for push and pull operations.
**Git clients**: Besides the Git command line, different GUIs are available for Git:
- GitHub Desktop
- Visual Studio Code
**Differences between Git and GitHub**
| Git | GitHub |
| ---------------- |------------- |
| Git is a distributed version control system (DVCS) that multiple developers and other contributors can use to work on a project. | GitHub is a cloud platform that uses Git as its core technology. GitHub acts as the remote repository. |
**Key features provided by GitHub include:**
- Issues
- Discussions
- Pull requests
- Notifications
- Labels
- Actions
- Forks
- Projects
**Try out** - https://learn.microsoft.com/en-us/training/modules/intro-to-git/2-exercise-configure-git
**References :**
1. [Introduction to GitHub](https://learn.microsoft.com/en-us/training/modules/introduction-to-github/)
2. [Getting Started with GitHub](https://docs.github.com/en/get-started)
**Basic Git commands**
- git status : git status displays the state of the working tree
- git add : git add is the command you use to tell Git to start keeping track of changes in certain files. You'll use git add to stage changes to prepare for a commit. All changes in files that have been added but not yet committed are stored in the staging area.
- git commit : git commit takes the changes in the staging area and records them as a new commit in the repository's history.
- git log : The git log command allows you to see information about previous commits.
- git help : Each command comes with its own help page, too. You can find these help pages by typing `git <command> --help`. For example, `git commit --help` brings up a page that tells you more about the git commit command and how to use it.
**References:**
- [Every day git](https://git-scm.com/docs/everyday)
💬 If you enjoyed reading this blog post and found it informative, please take a moment to share your thoughts by leaving a review and liking it 😀 and follow me on [dev.to](https://dev.to/srinivasuluparanduru) , [linkedin ](https://www.linkedin.com/in/srinivasuluparanduru)and [buy me a coffee](https://buymeacoffee.com/srinivasuluparanduru)
| srinivasuluparanduru |
1,883,595 | React Native and OpenAI | Hello. I've just developed the simple application using OpenAI / ChatGPT and had a lot of fun and... | 0 | 2024-06-10T18:38:44 | https://dev.to/oivoodoo/react-native-and-openai-4e9a | reactnative, openai, mobileapp, react | ---
title: React Native and OpenAI
published: true
description:
tags: reactnative,openai,mobileapp,react
---
Hello.
I've just developed the simple application using OpenAI / ChatGPT and had a lot of fun and how easy it was.
[App preview](https://www.youtube.com/watch?v=4NjaXtJHTwM)
I know the app is really a joke, but I liked the ability to pass an image and ask specific questions about it.
For me it was a first experience, and it is probably obvious to a lot of developers who are already using AI models in their applications.
Let's look at the communication function, and it will be clear enough what the app does.
```typescript
const sendMessage = async (image_url: string, language: AppLanguage) => {
try {
const openai = new OpenAI({
apiKey: OPENAI_API_KEY, // This is the default and can be omitted
});
const { data: completion } = await openai.chat.completions
.create({
model: "gpt-4-turbo",
stream: false,
messages: [
{
role: "system",
content: `
You are a palm reader, explain by reading palm the user background, life, future, fate. Please write at least 10 sentences about each topic:
- user line of life
- user line of heart
- user line of mind
- user line of fate
- user extra information that we need to mention
Only provide a RFC8259 compliant JSON response:
[
{
"line_life": "user line of life",
"line_heart": "user line of heart",
"line_mind": "user line of mind",
"line_fate": "user line of fate",
"line_extra": "user extra information"
}
]`,
},
{
role: "user",
content: [
{
type: "text",
text: `
- user line of life, 10+ sentences
- user line of heart, 10+ sentences
- user line of mind, 10+ sentences
- user line of fate, 10+ sentences
- user extra information that we need to mention, 10+ sentences
`,
},
{
type: "text",
text: `Use language ${language} for answers.`,
},
{
type: "text",
text: 'Describe my palm but in format { "line_heart": "text...", "line_life": "text...", "line_mind": "text...", "line_fate": "text...", "line_extra": "text..." }',
},
{
type: "image_url",
image_url: {
url: `data:image/jpeg;base64,${image_url}`,
},
},
],
},
],
})
.withResponse();
const replies = completion.choices
.flatMap((choice) => {
try {
if (choice.message.content === null || choice.message.content === undefined) {
return [];
} else {
const messages = JSON.parse(choice.message.content);
return messages;
}
} catch (error) {
Sentry.Native.captureException(error);
return [];
}
})
.reduce((acc, cur) => ({ ...acc, ...cur }), {});
return replies;
} catch (error) {
Sentry.Native.captureException(error);
}
};
```
As you can see, I used specific prompts to get the responses in JSON and passed the image to analyze it. I also localized the app into en, es, de, fr, and ru by telling ChatGPT to respond in a specific language, and it works out of the box. It's really math magic. :)
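The parsing step at the end of `sendMessage` can be tried on its own with mock data. The structure below only imitates a completion response, it is not a real API call:

```javascript
// Each choice's message content is expected to be a JSON array of objects;
// malformed or empty choices are skipped, and the objects are merged into one.
const completion = {
  choices: [
    { message: { content: '[{"line_life": "Long and steady."}]' } },
    { message: { content: '[{"line_heart": "Open and warm."}]' } },
    { message: { content: null } } // skipped by the null check
  ]
};

const replies = completion.choices
  .flatMap((choice) => {
    try {
      if (choice.message.content === null || choice.message.content === undefined) {
        return [];
      }
      return JSON.parse(choice.message.content);
    } catch (error) {
      return []; // the app also reports this error to Sentry
    }
  })
  .reduce((acc, cur) => ({ ...acc, ...cur }), {});

console.log(replies); // → { line_life: 'Long and steady.', line_heart: 'Open and warm.' }
```

The `flatMap`/`reduce` pair flattens every parsed array and merges the objects, so a single object with all the palm-reading fields comes out, no matter how the model split its answer across choices.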
Of course I will publish the app, but the huge problem for now is that calling `gpt-4-turbo` is really expensive, so the app will probably stay up for a while and then I will need to disable it.
| oivoodoo |
1,847,840 | Different ways to get a Job! | Getting a job can be a daunting and frustrating process, but with the right approach and strategies,... | 0 | 2024-06-10T18:37:47 | https://dev.to/avinash201199/different-wasy-to-get-a-job-2i8i | job, softwareengineering, developers, programming | Getting a job can be a daunting and frustrating process, but with the right approach and strategies, you can increase your chances of success.
We already know about the traditional way to apply for jobs, which is by exploring job boards. We can easily find job openings on companies' career portals or by exploring third-party job boards. However, today we will discuss some alternative and effective ways to find jobs!
## Top Strategies for Landing Your Dream Job ✅
## 1- Cold Outreach:
While unconventional, cold outreach can sometimes pay off. Research companies you're interested in and send personalized emails or letters expressing your interest and qualifications. This approach can help you stand out and potentially land an opportunity that may not have been advertised.
Make a list of companies where you want to work. Go to their career portals and find HR contacts or hiring managers. You can also reach out to founders and co-founders of small startups for jobs using cold emails!
Here are some startup companies list where you can try your luck:
## Startup Lists to enquire for jobs
1- [𝐋𝐢𝐬𝐭 𝐨𝐟 𝐅𝐮𝐧𝐝𝐞𝐝 𝐒𝐭𝐚𝐫𝐭𝐮𝐩𝐬 𝐢𝐧 𝐁𝐚𝐧𝐠𝐚𝐥𝐨𝐫𝐞 𝐅𝐨𝐫 𝟐𝟎𝟐𝟒](https://atozplacementkit.substack.com/p/025)
2- [**Hyderabad Startup List!**](https://atozplacementkit.substack.com/p/hyderabad-startup-list)
3- [**Gurgaon Startup List for Job Search**](https://atozplacementkit.substack.com/p/gurgaon-startup-list-for-job-search)
4- [**Pune Startup List for job seekers**](https://atozplacementkit.substack.com/p/pune-startup-list-for-job-seekers)
5- [𝐌𝐮𝐦𝐛𝐚𝐢 𝐒𝐭𝐚𝐫𝐭𝐮𝐩 𝐋𝐢𝐬𝐭](https://atozplacementkit.substack.com/p/b97)
**Here’s what you need to do:**
**1- Visit the Company's Website**: Each startup on the list has a link to its website. Navigate to their careers page to explore current job openings and apply with your updated resume.
**2- Cold Email for Job Inquiry**: If you cannot find a career section, don't hesitate to reach out via a cold email. Most company websites provide contact details in the footer section.
**3- Personalize Your Message**: Before sending a cold email, research the company thoroughly. Craft a personalized message explaining why you are interested in working with them and how your skills align with their needs.
_Taking these steps can significantly increase your chances of landing a role in one of these dynamic and innovative startups._
**Template for Job Inquiry:**
**Subject: Inquiry Regarding [Job Title] Opportunity**
Dear [Hiring Manager's Name],
I hope this email finds you well. My name is [Your Name], and I am writing to express my interest in the [Job Title] position listed on [where you found the job posting].
With a background in [your relevant skills or experiences], I am confident in my ability to contribute to your team's success. I am particularly drawn to [specific aspects of the job or company], and I am eager to bring my skills in [mention a relevant skill] to support [Company Name]'s goals.
I have attached my resume for your reference. I would greatly appreciate the opportunity to discuss how my experience aligns with the needs of your team. Please let me know if there is a convenient time for a brief call or meeting.
Thank you for considering my application. I look forward to the possibility of discussing how my skills can benefit [Company Name].
Best regards,
[Your Full Name]
[Your Resume Attached]
[Your Contact Information]
_Use the above template and customize it for yourself to reach out to hiring managers for job inquiries. You might not get instant results, but keep trying!_
## 2. Open Source Contribution
Contributing to open-source projects can be a strategic move to land your dream job. It elevates your technical skills through real-world coding experience, problem-solving, and collaboration. Your contributions become a public portfolio, showcasing your abilities to potential employers. Furthermore, being an active contributor increases your visibility within the developer community, leading to networking opportunities and potential endorsements. Contributing also demonstrates your passion for the field and initiative, qualities valued by many employers. To maximize these benefits, focus on high-quality contributions in projects aligned with your career goals, and actively network with the project's community. By strategically leveraging open source, you can significantly enhance your job search.

**Here is a list of famous open-source programs**
1. [Digital Ocean Hacktoberfest](https://hacktoberfest.digitalocean.com/)
2. [Google Summer of Code (GSoC)](https://summerofcode.withgoogle.com/)
3. [MLH Fellowship](https://fellowship.mlh.io/)
4. [Google Season of Docs (GSoD)](https://developers.google.com/season-of-docs)
5. [Outreachy](https://www.outreachy.org/)
6. [Season of KDE](https://season.kde.org/)
7. [Open Mainframe Project Mentorship Program](https://www.openmainframeproject.org/projects/mentorship-program)
8. [FOSSASIA Codeheat](https://codeheat.fossasia.org/)
9. [Linux Kernel Mentorship Program](https://kernelmentor.org/)
10. [Redox OS Summer of Code](https://www.redox-os.org/rsoc/)
11. [Open Summer of Code](https://summerofcode.be/)
12. [Free Software Foundation (FSF) Internship Program](https://www.fsf.org/resources/internships)
13. [GirlScript Summer of Code (GSSoC)](https://www.gssoc.tech/)
Open source contributions provide job seekers with a unique platform to build their skills, gain recognition, and connect with mentors. Participating in open source projects allows individuals to work on real-world applications, enhance their technical capabilities, and contribute to meaningful initiatives.
## 3. Networking
Networking can significantly enhance job seekers' prospects by providing access to hidden job opportunities often filled through referrals. Establishing connections with industry professionals allows job seekers to gain valuable insights, advice, and mentorship, guiding them through their career path more effectively. Networking helps build credibility and trust, as referrals from personal connections add weight to job applications. Staying informed about industry trends through networking keeps candidates competitive and relevant.

Participate in conferences, seminars, meetups, or industry events related to your field of interest. These events provide excellent opportunities to meet professionals, learn about job openings, and make valuable connections.
You can explore **[meetup.com](https://www.meetup.com/)** for free meetup events. It is one of the most popular platforms for finding all sorts of local meetups, including those focused on technology. You can search for specific technology-related keywords or browse through the technology category to find relevant events in your area.
## 4. Referral
Asking for a referral on LinkedIn is a good way to get closer to your desired job. Though it doesn't guarantee you the job, it increases the chance that your application will be seen by a recruiter or hiring manager. Referrals show your talents for a specific position to potential employers: current employees can vouch for you and your skills, making a referral a golden ticket in job seeking. Read this article [How to ask for a referral on LinkedIn](https://dev.to/avinash201199/how-to-ask-for-a-referral-on-linkedin-1g7e) to learn more.

_The best way to ask for a referral is to connect with your alumni on LinkedIn, or you can also connect with your seniors for a referral!_
There are also some websites that provide referrals. Although there may be paid subscriptions, you can try the free plans to get referrals once a week or according to their free plan.
## 5. Jobs Boards
We already know about the traditional way to apply for jobs, which is by exploring job boards.
Here are some popular job boards:
[Indeed](https://www.indeed.com)
[LinkedIn Jobs](https://www.linkedin.com/jobs)
[Glassdoor](https://www.glassdoor.com)
[Monster](https://www.monster.com)
[SimplyHired](https://www.simplyhired.com)
[CareerBuilder](https://www.careerbuilder.com)
[ZipRecruiter](https://www.ziprecruiter.com)
[Dice (for tech jobs)](https://www.dice.com)
[AngelList (for startups)](https://angel.co)
[FlexJobs (for remote and flexible jobs)](https://www.flexjobs.com)
## Before applying for a Job, you need to do a few things
**1. Having a Good Resume Template with Updated Details:**
- Your resume serves as your first impression to potential employers, so it's essential to ensure it's well-crafted and up-to-date.
- Make sure your contact information, educational background, and work experience are current and accurate.
- Tailor your resume to the specific internship you're applying for, highlighting relevant skills and experiences.
Below, I have attached an ATS-friendly resume template. Download and edit it for yourself!
{% embed https://docs.google.com/document/d/1MZkBSJG1TPvMwHBo2yuMjQ6CokWpZPNx/edit?usp=drive_link %}
**2. Good Projects in Your Resume:**
- Including projects on your resume demonstrates your practical skills and shows employers what you're capable of beyond theoretical knowledge.
- Choose projects that are relevant to the job you're applying for and showcase your ability to solve problems, work in a team, or demonstrate creativity.
_**Note:** It is always recommended to create unique projects based on real-world problems. I have included some projects in the repository below for suggestions._
{% embed https://github.com/avinash201199/Projects %}
**3- Experience Section in Your Resume:**
- While jobs are often sought after to gain experience, any relevant experience you have, such as part-time jobs, volunteer work, or extracurricular activities, can be valuable.
- Highlight any relevant skills or accomplishments from past experiences that demonstrate your ability to succeed in the internship role.
_If you don't have any experience, you can opt for virtual internships from top companies and add them to your resume from the Forage website._
{% embed https://www.theforage.com/ %}
**4- LinkedIn Profile Optimization:**
- LinkedIn is a powerful tool for networking and showcasing your professional profile.
- Ensure your LinkedIn profile is complete and up-to-date, including a professional photo, headline, summary, and detailed descriptions of your education, experience, and skills.
- Connect with professionals in your field of interest, join relevant groups, and engage with content to expand your network.
_You can read the article below to optimize your LinkedIn profile._
{% embed https://apnajourney.com/linkedin-workshops/ %}
**5- GitHub Profile for Developers:**
- For those in technical fields, a GitHub profile can be a valuable asset to showcase your coding skills and contributions to projects.
- Keep your GitHub profile updated with your latest projects, contributions to open-source projects, and any other relevant code samples.
_In this repository, I have attached some good profile readme templates which you can use to optimize your GitHub profile_
{% embed https://github.com/avinash201199/profile-readme-templates %}
Once you have completed the above steps, it's time to start searching for job openings. You can join this [Telegram channel](https://t.me/offcampusjobsupdatess) for regular jobs , internships and resources updates!
Reach out to me on [LinkedIn](https://www.linkedin.com/in/avinash-singh-071b79175/) for any help! If you have anything to add, you can drop it in the comment section!
| avinash201199 |
1,883,592 | Developing My Developer Voice | Have you ever looked at seemingly unrelated technologies and wondered how they might connect? That's... | 0 | 2024-06-10T18:32:43 | https://dev.to/statueofdavid/developing-my-developer-voice-3aoi | noob, general, firstpost, softwareengineering | Have you ever looked at seemingly unrelated technologies and wondered how they might connect? That's a feeling I get all the time. In 2023, I explored this by connecting a raspberry pi and a Billy Bass Fish puppet and then attempted to run a chat llm on that raspberry pi. While the results were, shall we say, unique, the project solidified my fascination with connections and the importance of documenting the journey.
Why am I joining the Dev.to community?
I see this platform as a way to sharpen my communication skills as a developer. Writing for a public audience will force me to be clear, concise, and well-organized in my explanations. In my opinion, technical documentation is a crucial component of being a professional developer, and I believe consistent posting here will help me refine this skill.
Beyond communication skills, Dev.to allows me to create a record of my knowledge. Documenting my journey as a continuous learner through blog posts will serve as a reference point for the future. Additionally, building a public knowledge base on Dev.to can establish me as a resourceful developer within the community.
So, feel free to comment with your thoughts, share your own experiences, or suggest topics you'd like me to explore. You can also check out my portfolio at https://david.declared.space to learn more about my background. | statueofdavid |
1,883,590 | Best Gynecologist in Hyderabad | Dr.Himabindu Annamraju | Dr. Himabindu is the best Gynecologist in Hyderabad. She is an Obstetrician and laparoscopic surgeon... | 0 | 2024-06-10T18:29:13 | https://dev.to/drhimaraju/best-gynecologist-in-hyderabad-drhimabindu-annamraju-28g2 | bestgynecologistinhyderabad, drhimabinduannamraju, rainbowhospital, pcos | Dr. Himabindu is the best Gynecologist in Hyderabad. She is an Obstetrician and laparoscopic surgeon at Rainbow Hospital, Financial District, Nanakramguda, Hyderabad. She has over 19 years of experience in all aspects of obstetrics and gynecology including her role as a consultant obstetrician & gynecologist at Buckinghamshire Healthcare NHS Trust in the UK.Make an appointment. | drhimaraju |
1,883,297 | Key AI Terminologies | What is Artificial Intelligence? Artificial Intelligence (AI) refers to the simulation of... | 0 | 2024-06-10T18:28:19 | https://dev.to/ak_23/key-ai-terminologies-1ncc | ai, learning, beginners | ### What is Artificial Intelligence?
Artificial Intelligence (AI) refers to the simulation of human intelligence in machines. These machines are programmed to think like humans and mimic their actions, learning from experience, adjusting to new inputs, and performing tasks that typically require human intelligence.
### Key AI Terminologies
**Machine Learning (ML)**
- **Definition:** A subset of AI that uses algorithms and statistical models to enable computers to improve their performance on a task through experience.
- **Example:** Think of Netflix's recommendation system that learns your preferences and suggests movies you might like.
**Deep Learning (DL)**
- **Definition:** A subset of machine learning that uses neural networks with many layers (hence "deep") to analyze various factors of data.
- **Example:** Voice assistants like Siri and Alexa, which understand and respond to voice commands.
**Neural Network**
- **Definition:** A series of algorithms that recognize underlying relationships in a set of data through a process that mimics how the human brain operates.
- **Example:** Image recognition systems that identify objects in photos.
**Natural Language Processing (NLP)**
- **Definition:** A branch of AI that helps computers understand, interpret, and respond to human language.
- **Example:** Chatbots that answer customer service queries.
**Algorithm**
- **Definition:** A set of rules or instructions given to an AI, computer, or other machines to help it learn on its own.
- **Example:** The algorithms used by search engines to rank web pages.
**Supervised Learning**
- **Definition:** A type of machine learning where the model is trained on labeled data, meaning the input comes with the correct output.
- **Example:** Email filtering systems that classify emails as spam or not based on past examples.
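To make the "labeled data" idea concrete, here is a tiny, hedged sketch: a one-nearest-neighbor classifier over hypothetical email features (link count, occurrences of the word "free"). Real spam filters use far richer features and models; this only illustrates learning from labeled examples.

```python
# Toy supervised learning: 1-nearest-neighbor on labeled examples.
# Features (hypothetical): [number of links, count of the word "free"].
training_data = [
    ([5, 3], "spam"),
    ([4, 4], "spam"),
    ([0, 0], "not spam"),
    ([1, 0], "not spam"),
]

def predict(features):
    """Label a new email with the label of its closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_data, key=lambda pair: distance(pair[0], features))
    return label

print(predict([6, 2]))  # close to the spam examples -> "spam"
print(predict([0, 1]))  # close to the not-spam examples -> "not spam"
```

The "training" here is simply storing the labeled examples; prediction compares a new input against them, which is the essence of supervised learning.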
**Unsupervised Learning**
- **Definition:** A type of machine learning where the model is given data without explicit instructions on what to do with it. The system tries to learn the patterns and structure from the data.
- **Example:** Clustering customers based on purchasing behavior without prior labels.
**Reinforcement Learning**
- **Definition:** A type of machine learning where an agent learns to make decisions by taking actions in an environment to maximize cumulative reward.
- **Example:** Self-driving cars that learn to navigate roads through trial and error.
**Computer Vision**
- **Definition:** A field of AI that trains computers to interpret and make decisions based on visual data from the world.
- **Example:** Security cameras that detect and alert for suspicious activities.
**Data Mining**
- **Definition:** The process of discovering patterns and knowledge from large amounts of data.
- **Example:** Market basket analysis in retail to find products that are frequently bought together.
### Practical Tips for Beginners
- **Start with Basics:** Understanding basic programming and statistics is crucial. Python is great for AI due to its simplicity and the availability of libraries.
- **Online Courses:** Enroll in courses from platforms like Coursera, Udacity, or edX. They offer structured learning paths.
- **Projects and Practice:** Apply what you learn through small projects. Kaggle competitions are a great way to practice real-world data science problems.
- **Stay Updated:** Follow AI news, blogs, and research papers to stay current with the latest developments.
### Conclusion
Stepping into the world of AI can be daunting, but with a solid understanding of these basic terminologies, you’re well on your way. Remember, the key is to keep learning and experimenting. AI is a rapidly evolving field, and there's always something new to discover. Happy learning!
### Inspirational Quote
"AI is not just a technology; it’s a different way of thinking about everything." — Mark Cuban
| ak_23 |
1,883,573 | Keep your business code separate from the rest | One of the best things you can do for a codebase is to keep the business logic separate from... | 0 | 2024-06-10T18:28:10 | https://dev.to/seasonedcc/keep-your-business-code-separate-from-the-rest-4ek2 | cleancode, architecture, productivity, programming | One of the best things you can do for a codebase is to keep the business logic separate from everything else.
Imagine you have an app from 20 years ago, written in an old framework no one uses anymore. Thousands of people rely on it for their jobs, because it is the only app that deeply understands how this particular industry works.
The user experience could be so much better but the app works and no competitor beat them yet. Now you're tasked with bringing this giant to the modern days by adopting current technologies to improve the overall experience.
You take a quick look at the code, and being a seasoned dev with a sharp sword, you prepare for war. You know it's going to take your team a long time and it's going to be messy. Why?
Because most of your time will be spent untangling the logic that needs to stay from the parts that need to go. And what needs to stay is the business logic. The rest is old framework code no one cares about.
Now, while you untangle this mess, are you going to leave another one for future generations? Of course not. Because you know the current framework you're using now will undoubtedly be replaced by something else in a decade or so, you will do your best to keep the business code separate from everything else.
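As an illustrative sketch of that separation (the function names and request shape here are hypothetical, not from any real app): the business rule lives in a plain function with no framework imports, while a thin adapter wires it to whatever framework happens to be current.

```python
# Business logic: plain functions, no framework imports.
# If we migrate frameworks tomorrow, this module moves with us untouched.
def invoice_total(line_items, tax_rate):
    """Core business rule: sum (quantity, price) line items and apply tax."""
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1 + tax_rate), 2)

# Framework adapter: the only part we'd rewrite in a migration.
# (Hypothetical request/response shapes standing in for any web framework.)
def invoice_total_handler(request: dict) -> dict:
    total = invoice_total(request["line_items"], request["tax_rate"])
    return {"status": 200, "body": {"total": total}}

print(invoice_total_handler({"line_items": [(2, 9.99), (1, 5.00)], "tax_rate": 0.1}))
```

The business function knows nothing about HTTP, the framework, or storage, so it can move to any new stack unchanged.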
To do so, ask yourself the following question at all times:
> ## If I were to migrate to a completely different framework tomorrow, would I be able to take the business logic with me?
If the answer is "yes", you're writing code that lasts. If not, you're [borrowing from the future](https://en.wikipedia.org/wiki/Technical_debt) and some refactoring is in order. | danielweinmann |
1,883,587 | Inside the Box: May Community Update | Hi there 👋 With a new month upon us, it's time for a fresh issue of "Inside the Box" with all of... | 26,773 | 2024-06-10T18:27:48 | https://dev.to/codesandboxio/inside-the-box-may-community-update-223l | webdev, product, community | Hi there 👋
With a new month upon us, it's time for a fresh issue of "Inside the Box" with all of last month's highlights.
Let's go!
## Latest Product News
**[Storybook add-on](https://codesandbox.io/blog/announcing-the-codesandbox-storybook-add-on) 🧩** — We're making every story come to life with the [CodeSandbox Storybook add-on](https://github.com/codesandbox/storybook-addon)! After configuring the add-on on your Storybook instance, you will see a button to open that story in a Sandbox, which is great for [design system teams](https://codesandbox.io/blog/how-to-use-codesandbox-with-your-design-system) and sharing bug repros.

**[CodeSandbox is now SOC 2 compliant](https://codesandbox.io/blog/codesandbox-is-now-soc-2-compliant)** ✅ — This milestone reflects how we handle and process customers' data securely, meeting key security standards.

**Unlimited public Sandboxes** 🚀 — After countless chats with our users, we decided to update our Free plan so it now includes unlimited public Sandboxes, 5 private Sandboxes and private npm.
**Manual VM hibernation 🧊** — We have added an option to manually hibernate a VM from the editor's menu. This may be useful in cases where you want to immediately stop VM runtime.

**VM credit spend meter 🧮** — We have added a UI element to the bottom left of the dashboard that displays VM credit expenditure in reference to the allocated quota. This makes it easier to monitor your spending.

**Public VM templates for free** — We have updated the logic of our public templates running in VMs so that their runtime is not billed to the workspace that created the template. This way, we're empowering everyone to create and share powerful templates with the community at no extra cost.
---
## Web Bytes
**[How we scale our microVM infrastructure using low-latency memory decompression](https://codesandbox.io/blog/how-we-scale-our-microvm-infrastructure-using-low-latency-memory-decompression)** — If you have a knack for infrastructure, you probably don't want to miss this blog post by our Co-Founder Ives, who explains how we scale our infrastructure to handle 2.5 million microVM resumes per month.

**[JS Heroes 2024](https://x.com/jsheroes/status/1795818857065759218)** — It's all good vibes after this year's JS Heroes conference, founded by our Product Engineer Alex Moldovan. Lots of familiar faces around and many demos run in CodeSandbox. A sight to behold!
---
## Thank You 🖤
We hope you enjoyed this issue of Inside the Box!
We are curious about what **you** feel is missing from these newsletters and what **you** would like us to add to the next one!
What should we bring next? Tell us on [our community space](https://www.codesandbox.community/)! | filipeslima |
1,883,589 | Cryptocurrency Recovery Experts Contact A1 wizard Hackes | Cryptocurrency Recovery Experts Contact A1 wizard Hackes "I had suffered a significant loss in the... | 0 | 2024-06-10T18:27:25 | https://dev.to/edward_johnson_ba9dc50dd8/cryptocurrecny-recovery-experts-contact-a1-wizard-hackes-22g0 | cryptocurrency | Cryptocurrency Recovery Experts Contact A1 wizard Hackes
"I had suffered a significant loss in the cryptocurrency market a few months ago and was struggling to recover." I had invested $598,500 in cryptocurrency with a company that I subsequently discovered online, which I ended up learning it's a standard Crypto scam company. A1 wizard Hackes went above and beyond to assist me in recovering my losses. Their services are superb, and the crew is fantastic, with excellent communication and results. A1 wizard Hackes is highly recommended; look no further. Incase you are a victim of such predicament Please ensure also to reach out to them for help via Contact details Below
E-mail : A1wizardhackes @ cyberservices . com
whatsApp : +1 678 439 9760
Telegram : @ A1wizardhackes
| edward_johnson_ba9dc50dd8 |
1,883,586 | Navigating Automotive Excellence: The Significance of Workshop Manuals in PDF Format | In the digital age, Workshop Manuals in PDF format have revolutionized the way automotive enthusiasts... | 0 | 2024-06-10T18:24:28 | https://dev.to/downwork_manuals_bf1d9720/navigating-automotive-excellence-the-significance-of-workshop-manuals-in-pdf-format-bh6 | In the digital age, Workshop Manuals in PDF format have revolutionized the way automotive enthusiasts and professionals approach vehicle maintenance and repair. These comprehensive guides provide detailed instructions, technical specifications, and troubleshooting advice tailored to specific vehicle makes and models, all conveniently accessible in a portable and versatile digital format. In this article, we'll explore the significance of **[Workshop Manuals in PDF](https://downloadworkshopmanuals.com/)** format, their benefits, contents, and how they empower individuals to master the art of automotive care.
**The Evolution of Workshop Manuals: Embracing PDF Format**
With the transition to digital resources, Workshop Manuals in PDF format have emerged as a preferred choice for automotive enthusiasts and professionals. PDF format offers several advantages over traditional printed manuals, including portability, searchability, and ease of access across various devices. This shift to digital has democratized automotive knowledge, making it more accessible and convenient for users worldwide.
**Understanding the Contents of Workshop Manuals in PDF Format**
Workshop Manuals in PDF format cover a comprehensive range of topics essential for vehicle maintenance and repair. Here’s a breakdown of the typical contents found within these manuals:
- **Introduction and Overview:** Provides an introduction to the manual, safety precautions, and essential tools required for automotive tasks.
- **Maintenance Procedures:** Detailed instructions for routine maintenance tasks such as oil changes, fluid checks, and filter replacements, tailored to specific vehicle makes and models.
- **Technical Specifications:** Comprehensive specifications, including torque values, fluid capacities, and wiring diagrams necessary for precise repairs and adjustments.
- **Diagnostic Procedures:** Instructions for diagnosing various mechanical and electrical issues, complete with troubleshooting charts and diagnostic codes.
- **Repair Procedures:** Step-by-step guides for repairing and replacing components ranging from engines and transmissions to brakes and suspension systems.
- **Specialized Procedures:** Advanced procedures for tasks such as engine overhauls, transmission rebuilds, and complex electrical system troubleshooting.
**Benefits of Workshop Manuals in PDF Format**
Workshop Manuals in PDF format offer numerous benefits that enhance the automotive experience for users:
- **Portability:** PDF format allows users to access manuals on various devices, including smartphones, tablets, and laptops, enabling them to reference information anytime, anywhere.
- **Search Functionality:** PDF manuals often include search functions, allowing users to quickly find specific information without flipping through pages manually.
- **Cost Savings:** By empowering individuals to perform their own maintenance and repairs, Workshop Manuals in PDF format help save significant costs associated with professional services.
- **Environmental Impact:** Digital manuals reduce paper usage, making them a more environmentally friendly option compared to printed versions.
- **Regular Updates:** Digital formats can be updated more easily than printed manuals, ensuring that users have access to the most current information and procedures.
**Utilizing Workshop Manuals in PDF Format Effectively**
To maximize the benefits of Workshop Manuals in PDF format, users should consider the following tips:
- **Familiarization:** Take the time to familiarize yourself with the manual’s layout, structure, and contents before starting any automotive task.
- **Follow Procedures:** Adhere to the step-by-step procedures outlined in the manual, ensuring that each task is performed meticulously and in the correct sequence.
- **Invest in Quality Tools:** Use high-quality tools and equipment as recommended in the manual to facilitate smooth and efficient automotive maintenance and repairs.
- **Stay Informed:** Stay informed about any updates or revisions to the manual to ensure that you have access to the most current information and procedures.
- **Safety First:** Always prioritize safety by following the recommended safety precautions and guidelines provided in the manual.
**Conclusion**
Workshop Manuals in PDF format represent a significant advancement in the world of automotive maintenance and repair. By providing comprehensive guidance in a portable and accessible format, these manuals empower individuals to master the art of automotive care, ensuring the longevity and performance of their vehicles. Whether performing routine maintenance or tackling complex repairs, Workshop Manuals in PDF format are indispensable companions on the journey to automotive excellence.
Embracing the knowledge contained within these manuals not only enhances your understanding of automotive mechanics but also fosters a sense of confidence and accomplishment. So, the next time you face an automotive task, turn to your Workshop Manual in PDF format and unlock the full potential of your automotive expertise. With these manuals by your side, you are well-equipped to navigate any automotive challenge that comes your way.
| downwork_manuals_bf1d9720 | |
1,883,583 | Organization schemes for note taking | Let's delve into various organization schemes for effective note-taking, along with practical... | 0 | 2024-06-10T18:22:57 | https://dev.to/charudatta10/organization-schemes-for-note-taking-j18 | note, beginners, tutorial, orgnization | Let's delve into various organization schemes for effective note-taking, along with practical examples:
1. **File and Folder Organization:**
- **Files**: Think of each note as a separate file (e.g., "MeetingNotes.md" or "ProjectIdeas.txt").
- **Folders (or Notebooks)**: Create folders to group related notes. For example:
- **Inbox**: A default folder for quick notes or temporary storage.
- **Archive**: Store completed or less frequently accessed notes.
- **Projects**: Organize notes by specific projects or topics (e.g., "ProjectA," "Personal," "Work").
- **Reference**: For long-term reference material (e.g., research papers, manuals).
2. **Labels and Tags:**
- **Labels**: Assign descriptive labels to notes (e.g., "Urgent," "Important," "Review").
- **Tags**: Use tags for cross-referencing (e.g., "Meeting," "Ideas," "Code Snippets").
3. **Metadata**:
- **Creation Date**: Include timestamps to track when notes were created.
- **Author**: Useful for collaborative notes (e.g., "Created by John").
- **Location**: If relevant (e.g., "Conference Notes - Seattle").
4. **Links**:
- **Internal Links**: Create hyperlinks within notes to connect related content.
- **External Links**: Reference web pages, articles, or other resources.
**Example Folder Structure**:
- **Inbox**:
- Unprocessed notes go here initially.
- Regularly review and move them to appropriate folders.
- **Archive**:
- Completed or historical notes.
- Rarely accessed but still valuable.
- **Projects**:
- Subfolders for each project or topic.
- E.g., "ProjectA," "ProjectB," "Personal," "Work."
- **Reference**:
- Subfolders for specific types (e.g., "Research," "Tutorials").
- Store long-term reference material.
Remember, adapt these structures to your needs and preferences. Consistency is key! 📂📝✨
| charudatta10 |
1,883,575 | Specialty Fats and Oils Market: Forecast and Key Growth Drivers for 2024-2033 | The global specialty fats and oils market, valued at US$ 54,811.7 Mn in 2024, is projected to grow at... | 0 | 2024-06-10T18:18:35 | https://dev.to/swara_353df25d291824ff9ee/specialty-fats-and-oils-market-forecast-and-key-growth-drivers-for-2024-2033-3ghp | The global specialty fats and oils market, valued at US$ 54,811.7 Mn in 2024, is projected to grow at a CAGR of 4.6%, reaching around US$ 82,174.3 Mn by 2033. Specialty fats and oils, essential nutrients composed mainly of triglycerides, are tailored for specific applications in the food, personal care, and other industries. They serve as substitutes for common fats like cocoa butter and milk fats. The market has seen significant growth due to increasing demand for nutritional and clean-label functional foods, with top four countries accounting for 27.5% of the market share in 2023.
Key Trends in the Specialty Fats and Oils Market
1. Rising Demand for Health-Oriented Products
The increasing focus on health and wellness is driving demand for specialty fats and oils that offer nutritional benefits. Consumers are seeking products with healthier profiles, such as those high in unsaturated fats and low in trans fats.
2. Growth in Clean-Label and Natural Products
There is a strong consumer preference for clean-label products that are free from artificial additives and preservatives. This trend is boosting the market for specialty fats and oils perceived as natural and healthier alternatives.
3. Expansion of Functional Foods Segment
The specialty fats and oils market is benefiting from the growth of functional foods, which provide additional health benefits beyond basic nutrition. These fats and oils are used to enhance the nutritional profile and functional properties of foods.
4. Technological Innovations
Advancements in processing technologies are leading to the production of high-quality specialty fats and oils. Innovations in extraction and refining processes are improving product functionality and expanding application possibilities.
5. Increased Use in Industrial Applications
Specialty fats and oils are finding extensive use in various industrial applications, including food and beverages, personal care, and cosmetics. Their unique properties make them ideal for diverse uses such as emulsifiers, texture enhancers, and stabilizers.
6. Development of Free-From Foods
The growing demand for "free-from" foods, such as gluten-free, dairy-free, and allergen-free products, is creating new opportunities for specialty fats and oils. These niche markets are expanding rapidly as consumers with specific dietary needs seek suitable alternatives.
7. Sustainability and Ethical Sourcing
Environmental sustainability and ethical sourcing are becoming increasingly important in the specialty fats and oils market. Consumers and companies are focusing on sustainable sourcing practices, such as certified sustainable palm oil, to reduce environmental impact and promote ethical practices.
8. Regulatory Support for Healthier Ingredients
Supportive regulatory frameworks are promoting the use of healthier fats and oils in food products. Regulations that limit trans fats and encourage the use of healthier alternatives are driving market growth.
In a nutshell, the Persistence Market Research report is a must-read for start-ups, industry players, investors, researchers, consultants, business strategists, and all those who are looking to understand this industry. Get a glance at the report at- https://www.persistencemarketresearch.com/market-research/specialty-fats-and-oils-market.asp
Key Players in the Specialty Fats and Oils Market
- Cargill, Incorporated
- Wilmar International Limited
- AAK AB
- Bunge Limited
- IOI Group
- Fuji Oil Co., Ltd.
- Mewah International Inc.
- Archer Daniels Midland Company
- Musim Mas Group
- The Nisshin OilliO Group, Ltd.
Market Segmentation
By Type
The specialty fats and oils market is segmented by type into specialty fats and specialty oils. Specialty fats include cocoa butter substitutes, milk fat replacers, and other tailored fats used in various applications. Specialty oils cover coconut oil, palm kernel oil, olive oil, and others, each offering unique properties for different industrial uses.
By Application
Segmentation by application highlights the diverse uses of specialty fats and oils across multiple industries. In the food and beverage industry, they are used in bakery products, confectionery, dairy products, and processed foods. In the personal care and cosmetics industry, these fats and oils are key ingredients in products like lotions, creams, and makeup. Additionally, they find applications in pharmaceuticals and other industrial sectors.
By Source
The market can also be segmented based on the source of the fats and oils. Plant-based sources, such as palm, coconut, and olive, are prominent, while animal-based sources, like milk fats, are also significant. The choice of source often depends on the desired properties and applications of the fats and oils.
By Geography
Geographical segmentation divides the market into regions such as North America, Europe, Asia-Pacific, Latin America, and the Middle East & Africa. Each region presents unique growth opportunities and challenges based on factors like consumer preferences, regulatory environments, and economic conditions. For instance, the Asia-Pacific region is experiencing rapid growth due to rising urbanization and increasing disposable incomes.
By Functionality
Segmentation by functionality focuses on the specific roles that specialty fats and oils play in end products. This includes emulsification, texture enhancement, flavor improvement, and nutritional enrichment. Different functionalities are critical for meeting the specific requirements of various applications, driving innovation and product development in the market.
Country-wise Insights
United States
The United States represents a significant market for specialty fats and oils, driven by a high demand for processed and convenience foods. The growing trend towards clean-label and health-oriented products further boosts the market. The presence of major food processing companies and advancements in food technology contribute to market growth. Regulatory support for healthier food ingredients also plays a crucial role in market expansion.
China
China is a rapidly growing market for specialty fats and oils, fueled by increasing urbanization and rising disposable incomes. The demand for high-quality, nutritional food products is on the rise, along with a growing preference for functional foods. The expanding middle class and changing dietary habits are significant drivers. Additionally, China's large population and growing food industry present vast opportunities for market growth.
India
In India, the specialty fats and oils market is expanding due to increasing consumer awareness about health and nutrition. The country's large vegetarian population drives the demand for plant-based fats and oils. Growth in the bakery and confectionery sectors, along with rising disposable incomes, supports market development. Moreover, the traditional use of specialty oils like coconut and palm oil in Indian cuisine boosts the market.
Japan
Japan's specialty fats and oils market is characterized by a high demand for premium and functional food products. The country's aging population drives the need for nutritional and health-enhancing ingredients. Innovations in food technology and a strong focus on quality and safety standards support market growth. Additionally, Japan's well-established food industry and consumer preference for clean-label products are key factors.
Germany
Germany is a leading market for specialty fats and oils in Europe, driven by a robust food processing industry and high consumer awareness regarding health and wellness. The demand for organic and non-GMO products is strong, and regulatory frameworks support the use of healthier fats and oils. Germany's focus on sustainability and environmental concerns also influences the market, encouraging the adoption of sustainable sourcing practices.
Brazil
Brazil's specialty fats and oils market is growing due to increasing urbanization and rising consumer incomes. The demand for processed and convenience foods is on the rise, along with a growing preference for healthier food options. Brazil's significant agricultural base provides ample raw materials for the production of specialty fats and oils. Additionally, the expanding food industry and export opportunities contribute to market growth.
Indonesia
Indonesia is a major player in the specialty fats and oils market, primarily due to its extensive palm oil production. The country's strong agricultural sector supports the availability of raw materials. Rising domestic consumption of processed foods and increasing exports of specialty fats and oils drive market growth. Moreover, government initiatives to support sustainable palm oil production enhance the market's prospects.
United Kingdom
The United Kingdom has a well-established specialty fats and oils market, supported by a high demand for bakery and confectionery products. The growing trend towards clean-label, organic, and non-GMO products boosts the market. The UK's advanced food processing industry and consumer preference for premium, health-oriented products drive innovation and market growth. Additionally, regulatory frameworks promoting healthier food ingredients support market development.
Future Outlook
The future of the specialty fats and oils market looks promising, driven by increasing consumer demand for nutritional, clean-label, and functional foods. Technological advancements and innovations in food processing are expected to enhance product quality and expand application possibilities. Rising health consciousness and higher living standards, particularly in emerging markets, will continue to fuel market growth. Additionally, a strong focus on sustainability and regulatory support for healthier food ingredients will play a crucial role in shaping the market's trajectory. Overall, the specialty fats and oils market is poised for steady growth, with significant opportunities for expansion and innovation.
Our Blog-
https://www.scoop.it/topic/persistence-market-research-by-swarabarad53-gmail-com
https://www.manchesterprofessionals.co.uk/articles/my?page=1
About Persistence Market Research:
Business intelligence is the foundation of every business model employed by Persistence Market Research. Multi-dimensional sources are being put to work, which include big data, customer experience analytics, and real-time data collection. Thus, working on micros by Persistence Market Research helps companies overcome their macro business challenges.
Persistence Market Research is always way ahead of its time. In other words, it tables market solutions by stepping into the companies’/clients’ shoes much before they themselves have a sneak pick into the market. The pro-active approach followed by experts at Persistence Market Research helps companies/clients lay their hands on techno-commercial insights beforehand, so that the subsequent course of action could be simplified on their part.
Contact:
Persistence Market Research
Teerth Technospace, Unit B-704
Survey Number - 103, Baner
Mumbai Bangalore Highway
Pune 411045 India
Email: sales@persistencemarketresearch.com
Web: https://www.persistencemarketresearch.com
LinkedIn | Twitter
| swara_353df25d291824ff9ee | |
1,883,567 | Automating RDS CA (Certificate Authority) Using AWS Lambda | In the last few days, we have dealt with the challenge of updating all our RDS CA services, running... | 0 | 2024-06-10T18:16:47 | https://dev.to/lrdplopes/automating-rds-ca-certificate-authority-using-aws-lambda-3kb7 | aws, python, devops, tutorial | In the last few days, we have dealt with the challenge of updating the CA certificates on all our RDS services, which span more than three hundred instances. AWS guidance is to apply the update manually, since it is a production-critical change; however, doing that by hand is frustrating. That's why we came up with the idea of deploying a `Lambda` function to handle it.
‼️ **<u>Disclaimer:</u>** As the data is sensitive, the examples will be explained using fictitious scenarios. I hope you grasp the main idea. If not, please reach out. It will be a pleasure to hear from you.
---
### Why this is essential
Managing SSL/TLS certificates is crucial for maintaining the security of any database system. AWS RDS uses CA certificates to secure connections, and these certificates need regular updates to prevent service disruptions. Manually updating these certificates across multiple instances can be time-consuming and error-prone. Automating this process with AWS Lambda can save time and reduce the risk of human error. We hope to guide you through creating a Lambda function to automate RDS CA certificate updates.
### Concept
The primary goal is to provide a step-by-step guide to automate the update of RDS CA certificates using an AWS Lambda function. The solution will iterate over specific RDS instances, verify their eligibility for updates, and apply the new CA certificate. This ensures that your RDS instances remain secure with minimal manual intervention.
### Challenges
1. **Handling Multiple Instances:** Many environments consist of numerous RDS instances spread across different namespaces. Updating each instance manually is impractical.
2. **Rate Limiting (Throttling):** AWS imposes rate limits on API calls. When updating multiple instances, exceeding these limits can cause errors.
3. **Ensuring Idempotency:** The function must handle retries gracefully and ensure that updates are applied correctly without causing inconsistencies.
4. **Namespace and Identifier Matching:** Properly identifying which instances need updates based on their identifiers and namespaces requires careful string handling.
### Step-by-Step Guide
#### Prerequisites
- An AWS account with appropriate permissions.
- Basic understanding of AWS Lambda and IAM roles.
- Python and Boto3 library installed.
#### Setting Up IAM Roles
Create an IAM role for your Lambda function with permissions to:
- Describe RDS instances.
- Modify RDS instances.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"rds:DescribeDBInstances",
"rds:ModifyDBInstance"
],
"Resource": "*"
}
]
}
```
#### Writing the Lambda Function
Create a new Lambda function and attach the IAM role created in the previous step. Use the following code to automate the CA certificate updates:
```
import boto3
import logging
import time

from botocore.exceptions import ClientError

# Logging configuration
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# AWS region where your RDS instances are located
REGION_NAME = "<REGION>"

# New CA certificate ID
NEW_CA_CERTIFICATE_ID = "<CERTIFICATE>"

# Create an RDS client
rds_client = boto3.client('rds', region_name=REGION_NAME)

def lambda_handler(event, context):
    namespaces = event.get('namespaces', [])
    if not namespaces:
        logger.error("The event does not provide a namespace.")
        return {
            'statusCode': 400,
            'body': 'The event does not provide a namespace.'
        }

    for namespace in namespaces:
        logger.info(f"Updating instances for namespace: {namespace}")

        paginator = rds_client.get_paginator('describe_db_instances')
        page_iterator = paginator.paginate()

        for page in page_iterator:
            for instance in page['DBInstances']:
                instance_id = instance['DBInstanceIdentifier']

                # Check that the instance belongs to the namespace and its
                # identifier ends with -0, -1 or -2
                if instance_id.startswith(namespace) and instance_id.split('-')[-1] in ['0', '1', '2']:
                    try:
                        rds_client.modify_db_instance(
                            DBInstanceIdentifier=instance_id,
                            CACertificateIdentifier=NEW_CA_CERTIFICATE_ID,
                            ApplyImmediately=True
                        )
                        logger.info(f"Successfully updated CA certificate for instance: {instance_id}")
                    except ClientError as e:
                        # boto3 surfaces throttling as a ClientError with a
                        # "Throttling" error code, not a dedicated exception
                        # class on the RDS client
                        if e.response['Error']['Code'] not in ('Throttling', 'ThrottlingException'):
                            logger.error(f"Error updating CA certificate for instance: {instance_id}")
                            logger.error(e)
                            continue
                        logger.warning(f"Throttling encountered for instance: {instance_id}. Retrying...")
                        time.sleep(5)  # Wait for a few seconds before retrying
                        try:
                            # Retry the operation
                            rds_client.modify_db_instance(
                                DBInstanceIdentifier=instance_id,
                                CACertificateIdentifier=NEW_CA_CERTIFICATE_ID,
                                ApplyImmediately=True
                            )
                            logger.info(f"Successfully updated CA certificate for instance: {instance_id} after retry")
                        except Exception as retry_e:
                            logger.error(f"Error updating CA certificate for instance: {instance_id} after retry")
                            logger.error(retry_e)
                    except Exception as e:
                        logger.error(f"Error updating CA certificate for instance: {instance_id}")
                        logger.error(e)

    return {
        'statusCode': 200,
        'body': 'CA certificate update completed.'
    }
```
#### ‼️ Detailed Code Explanation
- **Initialization:** Set up logging, region, and the RDS client.
- **Handler Function:** The `lambda_handler` function reads namespaces from the event, iterates over them, and processes each RDS instance.
- **Namespace Iteration:** For each namespace, the function paginates through RDS instances and checks if their identifiers match the required pattern (-0, -1, -2).
>
In our use case, the instances are called, for example, `namespace-mock-0` | `namespace-mock-1` | `namespace-mock-2` -- and so on.
- **Updating CA Certificates:** For each matching instance, the function attempts to update the CA certificate. If throttling errors occur, it retries after a delay.
- **Logging and Error Handling:** Logs are used extensively to track progress and errors. The function handles throttling by retrying the operation after a short delay.
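The retry logic described above can also be factored into a small reusable helper. Below is an illustrative sketch with exponential backoff; `flaky_call` is a hypothetical stand-in for the `modify_db_instance` call, not part of the article's Lambda:

```python
import time

def retry_with_backoff(operation, max_attempts=3, base_delay=1.0):
    """Call `operation`, retrying with exponential backoff on failure."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts, propagate the last error
            time.sleep(base_delay * (2 ** attempt))  # e.g. 1s, 2s, 4s, ...

# Hypothetical flaky operation: fails twice with a throttling-style error,
# then succeeds on the third call
calls = {'n': 0}

def flaky_call():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError("Throttling")
    return "ok"
```

With a helper like this, the Lambda body stays focused on the RDS logic while throttling retries are handled in one place.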
### Configuring the Lambda Function
- **Memory:** Allocate enough memory to handle the load. Start with 512MB and adjust based on performance.
- **Timeout:** Set an appropriate timeout value. A higher timeout (e.g., 5 minutes) may be necessary for environments with many instances.
- **Environment Variables:** Use environment variables to store configuration values such as REGION_NAME and NEW_CA_CERTIFICATE_ID.
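As a sketch of that last point, the two constants from the code above could be read from environment variables instead of being hard-coded. The fallback CA identifier shown here is only an example value:

```python
import os

# Read configuration from Lambda environment variables, with fallbacks for
# local experimentation (the default CA id below is illustrative)
REGION_NAME = os.environ.get("REGION_NAME", "us-east-1")
NEW_CA_CERTIFICATE_ID = os.environ.get("NEW_CA_CERTIFICATE_ID", "rds-ca-rsa2048-g1")
```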
### Setting Up Event Triggers
You can trigger the Lambda function manually or set up an automated trigger using Amazon CloudWatch Events to run it periodically. _(PS: in our use case, running it manually was the correct option.)_
### Testing the Function
Create a test event in the Lambda console with a JSON payload like the following (add as many namespaces as you need):
```
{
  "namespaces": ["namespace-mock-test", "namespace-mock-stg", "namespace-mock-prd"]
}
```
Run the test and monitor the logs in CloudWatch to verify that the instances are being updated as expected.
## At the end of the day
Automating maintenance tasks such as updating RDS CA certificates can greatly improve operational efficiency and reduce the chance of human error. AWS Lambda, with its serverless architecture, is a great way to automate tasks like these. The solution in this article automates the update process and handles common issues like throttling, making the update process robust and reliable. Using this kind of automation can help you easily keep your database environments secure and reliable. | lrdplopes |
1,881,884 | Redis For Beginners | Basic Overview 📖 Why Redis is needed ?? Ans: In contemporary application development,... | 0 | 2024-06-10T18:14:05 | https://dev.to/jemmyasjd/redis-for-beginners-1lnl | ## **Basic Overview 📖**
- **Why is Redis needed?**
**Ans:** In contemporary application development, Redis plays a critical role by serving as a highly efficient in-memory data store. Its primary function is to cache computed data, which significantly reduces server load caused by excessive database queries. This caching mechanism ensures that frequently accessed data can be retrieved quickly, thus minimizing response times.
By integrating Redis, applications achieve two key optimizations:
**Reduced Query Load:** Redis stores frequently requested data in memory, alleviating the need for repeated queries to the primary database. This not only reduces the workload on the database server but also enhances overall system performance by avoiding bottlenecks associated with frequent data retrieval operations.
**Enhanced Response Time:** When data is cached in Redis, it can be returned to the user almost instantaneously. This rapid data access is crucial for applications requiring real-time responses, thereby improving user experience and making the application more responsive and efficient.
- **Define Redis:** Redis is an open-source, in-memory data store that can be used as a database, cache, or message broker. It is often used for caching web pages and reducing the load on servers. Redis runs on port **6379** by default.
- Redis provides various data types to store data, including Strings, Sets, Lists, Hashes, etc.
---------------------------------------------
## **Redis Architecture: ⚙️**

- **Cache Lookup:** Upon receiving a request, the system first checks Redis to see if the requested data is already cached. If the data is found (cache hit), it is immediately returned to the client, ensuring a fast response.
- **Cache Miss Handling:** If the data is not found in Redis (cache miss), the system then queries the persistent database to retrieve the necessary data. This ensures that the data retrieval process continues, albeit at a slightly slower pace compared to a cache hit.
- **Cache Priming:** Once the data is fetched from the persistent database, it is not only returned to the client but also stored in Redis. This step, known as priming the cache, ensures that subsequent requests for the same data can be served quickly from the cache, improving overall system performance.
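These three steps form the classic cache-aside pattern. The following minimal Python sketch illustrates the flow with a plain dict standing in for Redis and a function call standing in for the database (all names are illustrative):

```python
cache = {}                 # stand-in for Redis
db_queries = {'count': 0}  # track how often we hit the "database"

def query_database(key):
    db_queries['count'] += 1
    return f"value-for-{key}"

def get(key):
    if key in cache:              # 1. cache lookup (hit): return immediately
        return cache[key]
    value = query_database(key)   # 2. cache miss: query the persistent database
    cache[key] = value            # 3. prime the cache for subsequent requests
    return value
```

Calling `get("user:1")` twice hits the "database" only once; the second call is served straight from the cache.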
------------------------------------------------------------
## **Installation ⚒️**
- First we create the Redis container in Docker. For that you must have Docker Desktop installed. You may visit my blog to learn how to install Docker: [Click here](https://dev.to/jemmyasjd/setting-up-the-multi-container-application-with-docker-5gdj)
- Open Docker Desktop, then open a command line and paste the following command:
`docker run -d --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest`
- As soon as you run the command, you will find a container named "redis-stack" running in Docker Desktop.

- Open Chrome and go to **localhost:8001**; if you get the following output, your Redis installation is successful:

- Now run the following command to get into the container shell:
`docker exec -it redis-stack bash`, then run `redis-cli --version` to get the Redis version:

- Run the `redis-cli` command to connect to the Redis server

- Now you are ready to write and execute Redis commands.
--------------------------------------
## **Setting up in VS Code in order to code along**
- Create a folder in VS Code and run `npm init -y` in the terminal to initialize the project. You then have to install the ioredis package to use Redis with Node.js. Run the following command to do it:
`npm i ioredis`
- Make a file named client.js and write the code shown below: [code](https://github.com/jemmyasjd/redis/)

-------------------------------------------------
## **Commands for Strings:**
-> **set key value:** e.g. `set name redis` : sets the data
-> **get key:** e.g. `get name` => "redis" : gets the data

-> **set key value nx:** Here `nx` is a special option that sets the key only if it has not already been cached before.
-> **mget key1 key2 key3:** gets multiple cached values at once
-> **mset key1 "value1" key2 "value2":** sets multiple cached values at once

-> **incr key:** increments the value by 1
-> **incrby key n:** increments the value by n

-> **expire key nsec:** deletes the key after n seconds
- Implementing string data type in nodejs: [code](https://github.com/jemmyasjd/redis/)

- Additional Commands :
-> SETRANGE key offset value
-> GETRANGE key start end
-> SUBSTR key start end
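To make the `expire` semantics concrete, here is a tiny in-memory sketch (plain Python, not Redis): a dict holds each value together with an optional expiry timestamp, and expired keys are lazily deleted on access, similar to how Redis expires keys on read:

```python
import time

store = {}  # key -> (value, expiry timestamp or None)

def set_key(key, value, ttl_seconds=None):
    # Record when the key should expire; None means "never"
    expires_at = time.monotonic() + ttl_seconds if ttl_seconds else None
    store[key] = (value, expires_at)

def get_key(key):
    if key not in store:
        return None
    value, expires_at = store[key]
    if expires_at is not None and time.monotonic() >= expires_at:
        del store[key]  # lazily delete the expired key on access
        return None
    return value
```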
-------------------------------------------------
## **Commands for List:**
-> **lpush key value:** pushes the element onto the list from the left side
-> **rpush key value:** pushes the element onto the list from the right side
-> **lrange key start stop:** returns the elements in the given range
-> **llen key:** returns the length of the list

-> **lpop key:** removes and returns the head of the list

-> **del key:** deletes the list

- Similarly, there are other commands for operations on the right side, such as rpush, rpop, etc.
- Additional Commands :
-> LMOVE source destination <LEFT | RIGHT> <LEFT | RIGHT> : atomically moves elements from one list to another.
-> LRANGE key start stop : extracts a range of elements from a list.
-> LTRIM key start stop : reduces a list to the specified range of elements.
- Implementing list data type in nodejs: [code](https://github.com/jemmyasjd/redis/)

-------------------------------------------------
## **Commands for Set:**
-> **sadd key value:** adds the value to the set
-> **srem key value:** removes the value from the set
-> **sismember key value:** checks whether the value is present in the set
-> **sinter key1 key2:** gets the common elements between two sets
-> **smembers key:** gets all the elements in the set


---------------------------------
## **Commands for Hash:**
-> **hset key field value field value:** adds the data to the hash
-> **hget key field:** gets the value of a particular field
-> **hgetall key:** gets all the fields and values of the key
-> **hmget key field1 field2:** gets the values of multiple fields of the key
-> **hincrby key field n:** increments the field's value by n
- Implementing hash data type in nodejs: [code](https://github.com/jemmyasjd/redis/)

----------------------------------
## **Commands for Sorted Set:**
-> **zadd key score value:** adds a member, sorted by ascending score
-> **zrange key start stop:** gets the values in the given range
-> **zrevrange key start stop:** gets the values in reverse order
-> **zrank key value:** gets the rank of the given value
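The score-based ordering behind these commands can be sketched in plain Python. This illustrative toy keeps a dict of member-to-score and sorts on read (real Redis sorted sets use a skiplist, and this sketch does not handle negative indexes):

```python
scores = {}  # member -> score

def zadd(member, score):
    scores[member] = score

def zrange(start, stop):
    # Members ordered by ascending score, like ZRANGE key start stop
    ordered = sorted(scores, key=lambda m: scores[m])
    return ordered[start:stop + 1]

def zrank(member):
    # Position of the member in score order, like ZRANK
    return zrange(0, len(scores) - 1).index(member)
```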
-----------------------------------------------------------
## **Practical Example of Fetching todo list:**
### [**Code**](https://github.com/jemmyasjd/redis/) :

- Before implementing Redis :

- After implementing Redis:

- Thus we can see that after implementing Redis, the time taken to fetch the todos is much lower than before. This is because the data is already cached and is returned directly from Redis.
---------------------------
## **Conclusion**
In this blog, we delved into the essential role of Redis in modern application development. By serving as a high-performance in-memory data store, Redis significantly reduces server load and enhances response times through effective caching mechanisms. We explored Redis's architecture, which efficiently handles cache lookups, cache misses, and cache priming to ensure swift data retrieval and improved system performance. Furthermore, we provided detailed installation steps, essential commands for various Redis data types, and practical examples to illustrate its implementation in Node.js. The practical comparison demonstrated how integrating Redis dramatically reduces data fetching times, highlighting its value in optimizing application performance and user experience. For those interested in hands-on implementation, the provided GitHub repository offers a comprehensive guide to getting started with Redis.
| jemmyasjd | |
1,883,572 | Golden chance to earn a lot of money | I am looking for a non-technical partner to start a business on an AI platform. The ideal... | 0 | 2024-06-10T18:12:09 | https://dev.to/devmaster03/golden-chance-to-earn-a-lot-of-money-ail | I am looking for a non-technical partner to start a business on an AI platform. The ideal collaborator should have their own computer and be able to keep it running 24/7.
Candidates must be from the United States, United Kingdom, Canada, Australia, Mexico, or Argentina and have work authorization in their respective country. This opportunity ensures a passive income of $500 per week.
If interested, please leave a message here or contact me at johnfranklin.web29@gmail.com.
I look forward to hearing from you.
| devmaster03 | |
1,883,570 | ** Navigating the Sands of Knowledge: Skills for Technological Development: An Odyssey in the Desert of Dune **🌕 | Hi Chiquis! 👋🏻 Ready to conquer the digital world? In the current era, where technology... | 0 | 2024-06-10T18:11:23 | https://dev.to/orlidev/-navegando-por-las-arenas-del-conocimiento-habilidades-para-el-desarrollo-tecnologico-una-odisea-en-el-desierto-de-dune--436h | webdev, tutorial, role, beginners | Hi Chiquis! 👋🏻 Ready to conquer the digital world? In the current era, where technology dominates everything, specialized skills are the key to success.🧑🏻 Are you passionate about building software, designing websites, developing games, or navigating data?

Get ready to discover the skills that will make you shine like a sandworm under the sun of Tatooine!👾
In the arid world of Dune, 👽 the spice, a substance as valuable as it is dangerous, drives technology and survival. Similarly, in today's technological landscape, specialized skills are the spice that will let you navigate the sands of knowledge and become a master of your own destiny.
Skills for different technology sectors in the era of Dune 👨🚀
In the vast universe of technology, every specialist is like a noble house in the world of Dune, each with its own skills and strategies to survive and thrive in the arid landscape of development and innovation. Each technology role has unique skills that make it a vital cog in the machinery of the digital world.
1. Software Engineer: The architects of Arrakis 🌌
Software engineers are the architects of computer systems, designing and building the digital infrastructure our lives rest on. They are like the architects of Arrakeen, planning and erecting the cities that house imperial technology. Software engineers need a variety of skills to excel in their field. These include:
- Deep knowledge of programming languages: Mastering languages such as Java, Python, C++, C#, and JavaScript is fundamental.
- Problem-solving skills: They must be able to identify and solve problems efficiently. Logical thinking, solving complex problems, capacity for abstraction.
- Effective communication: Clear, concise communication is key to working with teams and clients.
- Time management: The ability to manage time effectively makes it possible to meet project deadlines.
- Adaptability and continuous learning: Technology is constantly evolving, so they must stay up to date and continually learn new skills.
Like Paul Atreides, the protagonist of Dune, a software engineer must be adaptable, able to learn quickly, and have a strategic vision for solving complex problems. Paul is able to adapt to life on Arrakis and lead the Fremen thanks to his ability to learn and his strategic vision. Similarly, a software engineer needs to adapt to different programming languages and technologies, learn new skills quickly, and have a strategic vision to solve complex problems and develop efficient solutions.

Another example: ☄️
Software Engineer (House Atreides): Like the Atreides, software engineers must be visionary leaders and strategists. They must have:
+ Responsibility for projects.
+ Design and production of software components.
+ Mastery of programming languages such as Java, JavaScript, Python, C, and C++.
+ Management of requirements and development environments.
2. Web Developer: The weavers of the web 🌠
Web developers are the weavers of the web, creating the interfaces and experiences that bring websites and web applications to life. They are like the Ixian weavers, using their mastery of technology to create digital tapestries that capture attention.
Web developers need specific skills to build efficient websites and web applications:
- Knowledge of HTML/CSS: Fundamental for structuring and designing websites.
- Analytical skills: To understand and optimize website performance.
- JavaScript skills: To add interactivity to websites; web frameworks (React, Angular), UX/UI design.
- Interpersonal skills: To work in a team and understand the client's needs.
- Testing and debugging skills: To ensure that websites work correctly.
Just like the spice navigators in Dune, who must understand and navigate the vast desert of Arrakis, a web developer must be able to navigate and understand the vast world of the web and its technologies. The spice navigators use the spice to expand their consciousness and navigate through space. Similarly, a web developer uses their knowledge and skills to navigate the world of the web and build efficient websites and web applications.
Another example: 🛰️
- Web Developer (House Harkonnen): Cunning and adaptable, web developers must:
- Plan and organize resources.
- Track progress and solve problems during development.
3. Game Developer: The masters of immersion 🚀
Game developers are the masters of immersion, creating virtual worlds that captivate the imagination and transport players to new realities. They are like the Master Sandworm Handlers, controlling gigantic creatures of technology to shape unique gaming experiences.
Game developers need a combination of technical and creative skills:
+ Narrative ability: To create engaging stories and characters, interactive storytelling.
+ Skills: Programming languages (C++, C#, Unity, Java, Unreal Engine), game design, 3D graphics.
+ Knowledge of psychology: To understand what motivates players and how they react to in-game challenges.
+ Technical skills: Knowledge of programming, graphics, physics, artificial intelligence, among others.

Just like the Fremen in Dune, who must adapt and survive in the hostile environment of Arrakis, a game developer must be able to adapt and thrive in the challenging world of game development. The Fremen manage to survive in the desert of Arrakis thanks to their resilience, ingenuity, and knowledge of their environment. Similarly, a game developer needs to be resilient, creative, and have deep knowledge of programming and game design to create engaging, successful games.
Another example: 🔭
- Game Developer (House Ordos): Innovative and creative, game developers need:
- Creativity to design immersive worlds and characters.
- Technical knowledge of game engines and physics.
4. App Developer: The builders of tools 📡
App developers are the builders of tools, creating mobile and desktop applications that solve problems and improve our lives. They are like the Fremen, using their knowledge of the desert to create ingenious tools that let them survive and thrive in a hostile environment.
App developers need specific skills to build efficient, attractive applications:
- Creativity: To innovate and bring in new ideas.
- Skills: Programming languages (Java, Kotlin, Swift), mobile frameworks (React Native, Flutter), mobile interface design.
- Perseverance: To overcome the challenges and obstacles that arise during app development.
- Organization: To plan and effectively manage the app's development.
- Intelligence: To solve complex problems and develop efficient solutions.
- Flexibility: To adapt to change and constantly learn new skills.
Just like the Bene Gesserit in Dune, who must be flexible, intelligent, and persevering to achieve their goals, an app developer must possess these qualities to build successful applications. The Bene Gesserit are known for their intelligence, flexibility, and perseverance, qualities that are also essential for an app developer.
Another example: 🌐
+ App Developer (Fremen): Resilient and self-sufficient, app developers must be capable of:
+ Self-management and adaptability across different mobile platforms.
+ Knowledge of UX/UI to create intuitive applications.
5. Cybersecurity: The guardians of the Empire 🗡️
Cybersecurity professionals are the guardians of the Empire, protecting computer systems from malicious threats and attacks. Like the Mentats, who are experts in analysis and strategy, cybersecurity professionals must be able to analyze and respond to security threats.
Cybersecurity professionals need specific skills to protect systems and data:
- Technical knowledge of networks and computer systems: To understand and protect systems.
- Skills: Programming languages (Python, C++, Linux). Networking and information security, cryptography, penetration testing, malware analysis.
- Vulnerability and risk analysis skills: To identify and assess potential vulnerabilities.
- Experience with security tools: To protect systems and data.
- Understanding of cryptography and data security: To protect the integrity and confidentiality of data.
- Incident detection and response skills: To detect, investigate, and respond to security incidents.

Mentats are humans trained to carry out complex analysis and strategy tasks, just as cybersecurity professionals need to be able to analyze security threats and develop strategies to protect systems and data.
Cybersecurity and the Fremen: an interesting analogy ⚔️
The Fremen, native inhabitants of the desert planet Arrakis, are known for their resilience, adaptability, and deep knowledge of the environment they live in. They are masters of survival and of the efficient use of resources, especially water, which is extremely scarce in their world.
Similarly, cybersecurity professionals must be resilient and adaptable in the face of a hostile, constantly changing environment: cyberspace. They must have deep knowledge of the systems and networks they protect, identifying vulnerabilities and anticipating attacks before they happen. Just as the Fremen use every drop of water with the utmost care, cybersecurity experts must manage information-security resources efficiently, ensuring that every protective measure is applied optimally to safeguard an organization's most valuable assets.
Moreover, the Fremen are strategic and understand the importance of the spice 'melange',🛸 which is vital for controlling the universe of Dune. In the context of cybersecurity, the 'spice' can be seen as the critical data and confidential information that security professionals must protect at all costs. The Fremen's ability to navigate the dangerous desert of Arrakis resembles the ability of cybersecurity professionals to navigate the complex threat landscape and protect against the invisible dangers of cyberspace. In short, cybersecurity is like the Fremen of Dune: resilient, strategic, and essential for survival in an environment full of challenges and dangers.
6. Artificial Intelligence and Machine Learning (AI & ML): The visionaries of the future ✨
AI and ML experts are the visionaries of the future, using intelligent algorithms to create machines that learn, reason, and make decisions. They are like the Space Navigators, using their prescience and navigation skills to guide humanity through the vast universe of technology.
AI and ML professionals need specific skills to develop and apply efficient algorithms:
- Probability and statistics: To understand and apply the fundamentals of machine learning algorithms.
- Skills: Programming languages (Python, Java, Julia, R, Haskell). Mathematics, statistics, machine learning, deep learning, neural networks.
- Multivariable calculus and linear algebra: To understand and apply the mathematical foundations of machine learning algorithms.
- Programming: To implement and optimize machine learning algorithms.
- Data exploration: To understand and prepare data for machine learning.
- Database management: To store and retrieve data efficiently.
Just like the spice melange in Dune, which enhances the abilities and perception of those who consume it, AI and ML enhance machines' ability to understand and learn from data. The spice melange is essential for space travel in Dune because it expands consciousness and improves the abilities of those who consume it. Similarly, AI and ML allow machines to learn from data and improve their performance.

Another example: 🌟
+ Artificial Intelligence and Machine Learning (Guild Navigators): Like the Navigators, these specialists must foresee and model the future with:
+ Advanced algorithms and predictive models.
+ Data analysis to train and improve intelligent systems.
7. Data Science: The sages of information 🌑
Data scientists are the sages of information, extracting valuable knowledge and patterns from large datasets. They are like the Planetologists, studying the sands of Arrakis to understand its secrets and unlock its potential.
Data scientists, like the Mentats, must be the analytical, calculating brains, with:
- Statistics and mathematics to interpret data.
- Skills: Statistics, data mining, data visualization, machine learning, Python, R.
- Programming to manipulate and visualize information.
It is important to note that there are other significant technology roles that are strategically important to organizations' success. Here are some examples:
+ Business Intelligence (BI) Analyst: Turns data into information that supports strategic decision-making.
+ Business Intelligence (BI) Architect or Developer: Designs and develops BI solutions to improve understanding of the business.
+ Data Engineer: Specializes in designing, building, and maintaining data management systems.
+ Database Architect: Responsible for creating and managing complex databases.
+ Security Engineer: Protects information systems against threats and vulnerabilities.
+ Security Administration Specialist: Manages information security policies and procedures.
+ Internet of Things (IoT) Designer/Developer/Engineer: Creates devices and systems that connect and interact over the Internet.
+ UI/UX Developer and Designer: Focuses on user experience and user interface to create attractive, functional products.
+ Change Manager: Leads and manages change within organizations, especially in digital transformation projects.
+ IoT Engineer and Designer: Works on developing solutions based on the Internet of Things.
+ Transformation Consultant: Helps companies navigate the digital transformation process.
These roles are crucial today, and their demand is expected to grow as technologies such as the cloud, 👾 artificial intelligence, and big data continue to evolve and transform the industry.
Conclusion 👩🏾🚀
In the game of Dune, each faction has its own special units and strategies. Similarly, each role in technology development requires a unique set of skills and knowledge to navigate the desert of innovation and control the production of the most valuable 'spice' of our time: technology.
Like Paul Muad'Dib navigating the sands of Dune, 🎮 you too can become the master of your own destiny in the tech world. Identify the skills you are passionate about, develop your knowledge, and become an agent of change in the digital era. Remember, the road to mastery is long and challenging, but with determination and the right tools you can reach your goals and become a leader in your field.

Remember that, just as in Dune, 🕹️ the road to success in any field requires perseverance, adaptability, and constant learning! I hope this post inspires you and helps you better understand the skills needed in tech, seen through the universe of Dune!
🚀 Did you enjoy it? Share your thoughts.
For the full article, visit: https://lnkd.in/ewtCN2Mn
https://lnkd.in/eAjM_Smy 👩💻 https://lnkd.in/eKvu-BHe
https://dev.to/orlidev Don't miss it!
References:
Images created with: Copilot (microsoft.com)
#PorUnMillonDeAmigos #LinkedIn #Hiring #DesarrolloDeSoftware #Programacion #Networking #Tecnologia #Empleo #Roles


| orlidev |
1,883,569 | Defending the Script Length Rule in vue-mess-detector | Hey everyone, I've been receiving a lot of feedback on vue-mess-detector, particularly concerning... | 0 | 2024-06-10T18:11:08 | https://dev.to/rrd/defending-the-script-length-rule-in-vue-mess-detector-12hp | webdev, vue | Hey everyone,
I've been receiving a lot of feedback on vue-mess-detector, particularly concerning the rule for script length. I understand this rule is a bit controversial, but I wanted to take a moment to explain why I believe it’s not just valid, but essential for writing maintainable code.
## Why Script Length Matters
**Readability**: Long scripts can be difficult to navigate. When scripts are too lengthy, it becomes harder to find specific pieces of code, understand their context, and figure out how different parts interact. Keeping scripts shorter enhances readability and makes life easier for developers, especially those who are new to the project.
**Maintainability**: A shorter script is easier to maintain. Changes are less likely to introduce bugs when the codebase is segmented into smaller, more manageable chunks. This is especially important for large projects where multiple developers might be working on different parts of the same file.
**Testing**: Smaller scripts are easier to test. When functions and components are more concise, unit tests can be more targeted and effective. This leads to more reliable code and a smoother development process.
**Separation of Concerns**: Enforcing a script length limit encourages developers to break their code into smaller, more focused modules. This aligns with the principle of separation of concerns, making the codebase more modular and reusable. For me *this* is the main reason.
## Addressing the Controversy
I understand that some developers feel restricted by this rule. It can seem like an unnecessary limitation, especially when dealing with complex features that naturally require more lines of code. However, I believe that this rule pushes us to think critically about our code structure and encourages best practices in software development.
Rather than viewing the script length rule as a constraint, I see it as a guideline that promotes better coding habits. It’s about finding the balance between flexibility and structure, ensuring that our projects remain clean, efficient, and easy to work with over time.
## Conclusion
While the script length rule in vue-mess-detector might spark debate, I stand by its validity. It's a tool designed to foster maintainable and readable code, which ultimately benefits all developers involved in a project.
I’m always open to feedback and discussions, so feel free to share your thoughts on this. Let's keep improving our code together!
Slap into the code! | rrd |
1,883,568 | Fix SHAP Multiclass Summary Plot - Downgrade to v0.44.1 from 0.45.0 | Here's a helpful tip for anyone using #SHAP (SHapley Additive exPlanations): If you're trying to use... | 0 | 2024-06-10T18:10:37 | https://dev.to/omranic/fix-shap-multiclass-summary-plot-downgrade-to-v0441-from-0450-27m | shap, explainableai, machinelearning | Here's a helpful tip for anyone using #SHAP (SHapley Additive exPlanations):
If you're trying to use `shap.summary_plot(shap_values, X_train_tfidf_dense, plot_type="bar")` for a multiclass summary and it keeps showing a dotted interaction plot regardless of the plot type you pass, it's likely a bug in `v0.45.0`.
Downgrading to v0.44.1 fixed it for me. Hope this saves someone time! 👍 #ExplainableAI
---
For those who don't know what SHAP is, it's a framework that can explain why your #MachineLearning model made that specific prediction and what features contributed to that output.
SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions.
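To make the "Shapley values from game theory" part concrete, here is a toy exact computation (pure standard library; the two-feature "model" payoffs are invented for illustration, and real SHAP uses much faster approximations): each player's Shapley value is its marginal contribution averaged over every ordering in which the coalition can form.

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over all orderings of the players."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orderings) for p, t in totals.items()}

# Toy payoffs: feature A alone contributes 10, B alone 6, together 20
v = {frozenset(): 0, frozenset({"A"}): 10,
     frozenset({"B"}): 6, frozenset({"A", "B"}): 20}.get
phi = shapley_values(["A", "B"], v)  # credit fairly split: A gets 12, B gets 8
```

Note how the values sum to the full coalition's payoff (20), which is the "additive" part of SHAP's explanation.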
---
Below is a straightforward code to demonstrate the issue:
```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Create a synthetic dataset
X, y = make_classification(n_samples=100, n_features=5, n_informative=3, n_redundant=1, n_clusters_per_class=1, n_classes=3, random_state=42)
features = [f"Feature {i}" for i in range(X.shape[1])]
X = pd.DataFrame(X, columns=features)
# Train a RandomForest model
model = RandomForestClassifier(n_estimators=50, random_state=42)
model.fit(X, y)
# Create the SHAP Explainer
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Plot SHAP values for each class
shap.summary_plot(shap_values, X, plot_type="bar", class_names=['Class 0', 'Class 1', 'Class 2'])
```
---
Here are the screenshots for both versions:

 | omranic |
1,883,563 | ACID Transactions | Have you ever heard the phrase "ACID Transactions"? If yes, this post is for you 😎 If not, now you... | 0 | 2024-06-10T18:07:07 | https://www.linkedin.com/pulse/acid-transactions-loc-nguyen-ewizc/ | sql, acid, transactions | Have you ever heard the phrase "ACID Transactions"?
- If yes, this post is for you 😎
- If not, now you know it 😁
## What are transactions?
In everyday life, a transaction happens when **you sell and I buy**, or vice versa. It is the exchange between two parties who each give the other something of equal value: money, or anything else worth the same price.
In SQL, the term has a different meaning: a transaction **is a unit or sequence of work performed on a database**. It can be issued manually by the user or automatically by software. For example:

<figCaption>Unit Transaction</figCaption>

<figCaption>Sequence Transaction</figCaption>
## What are ACID transactions?
**ACID** is an acronym for the set of four key properties of a transaction:
- **Atomicity (A)**
- **Consistency (C)**
- **Isolation (I)**
- **Durability (D)**
### Atomicity
Ensures that a transaction is either fully committed or not executed at all. In other words, **if any part of the transaction fails, the entire transaction is rolled back, and no changes are applied to the database**. This property maintains the accuracy and reliability of the database by treating the transaction as an indivisible unit of work.
### Consistency
**Ensures that the database remains valid both before and after a transaction**. In simpler terms, the database schema must adhere to all constraints and rules. If any transaction violates these constraints, it is rolled back, preserving the database’s integrity and ensuring accurate and reliable data.
### Isolation
**Ensures that each transaction operates independently of other transactions**, which means that a transaction’s effects should only become visible to other transactions after it has been committed.
### Durability
**Ensures that any changes made to the database during a transaction are permanent, even in the event of system failure.** Once a transaction is committed, its effects persist, even if the system crashes or experiences power loss.
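These properties are easy to see in action with a tiny sketch using Python's built-in sqlite3 module (the `accounts` table and balances here are made up for illustration): either both updates of a money transfer apply, or neither does.

```python
import sqlite3

# In-memory database with two accounts; the CHECK constraint enforces consistency
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money between accounts atomically."""
    try:
        with conn:  # opens a transaction: commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
        return True
    except sqlite3.IntegrityError:
        return False  # constraint violated -> the whole transaction was rolled back

ok = transfer(conn, "alice", "bob", 500)  # fails: alice cannot go negative
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
```

After the failed transfer, `balances` is unchanged (`{"alice": 100, "bob": 50}`): the first `UPDATE` was rolled back along with everything else in the transaction.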
## ACID Transactions Use Case
You can apply it to systems that require high reliability and consistency like:
- Banking
- Healthcare
- E-commerce
## Summary
This is what I know about ACID transactions. Hope you can get some extra knowledge through my post.
Happy coding 😎! | locnguyenpv |
1,883,566 | Navigating the Tech Space: Essential Steps, Possible Challenges, and Solutions for Beginners | ** Introduction ** Look, I know the average people in the world, wants to dive into the... | 0 | 2024-06-10T18:03:57 | https://dev.to/asharfdevv/navigating-the-tech-space-essential-steps-possible-challenges-and-solutions-for-beginners-4m0o | beginners, programming, javascript | **
## Introduction
**
Look, I know the average people in the world, wants to dive into the technology world but, starting a journey in the tech world can be both exciting and daunting. With so much to learn and explore, it's easy to feel overwhelmed. This article aims to guide you through the essential steps to get started, identify potential challenges they may face, and offer practical solutions to overcome them. Understanding what the tech industry entails and the key areas within it is the first step to finding your path.
**
## Essential Steps to Get Started
**
**Step 1: Identify Your Interest**
Before diving into learning, take some time to explore different fields in tech. there are many categories and subcategories in the tech, Do you enjoy solving problems with code? Or maybe analyzing data to find patterns? Finding what excites you will make your learning journey more enjoyable and sustainable. Spend time researching different areas within tech. Some of the popular fields include Software development, Data Science, Cybersecurity, Artificial Intelligence, Cloud Computing, Web-Development and UI/UX Design. Each of these fields has its own unique set of skills (languages) tools and methodologies.
Below are tables that show some popular fields along with their associated languages and frameworks that can help you navigate your interests.

**Step 2: Learn the Basics**
Once you've identified your interest, start learning the basics. Familiarize yourself with the fundamental skills and knowledge required for your chosen field. At this stage, many people find it daunting and discouraging to learn the basics. It is best practice to start with a specific language or skill set rather than trying to learn too much at once.
For example, if you're interested in web development, you might start by learning HTML, CSS, and JavaScript respectively. Ensure you have an adequate understanding of these basics before moving on to frameworks such as React.js, Vue.js, or Angular.js. By building a strong foundation, you'll be better equipped to tackle more advanced concepts and technologies later on.
Another example is if you're interested in data science. Begin by learning the basics of Python programming and essential libraries like NumPy and Pandas. Once you have a good grasp of these, you can progress to more complex topics such as machine learning with Scikit-Learn or deep learning with TensorFlow and PyTorch. Starting with the basics ensures you have the necessary groundwork to understand and effectively use more advanced tools and techniques in your field.
This can be done through free and paid resources, such as online courses, tutorials, and books. For example, if you're interested in software development, you might start by learning a programming language like Python or JavaScript.
**Step 3: Build a Strong Foundation**
Practice is key to building a strong foundation. Work on small projects to apply what you've learned. Joining online communities and forums can also provide support and additional resources. Engaging with others who share your interests can help you stay motivated and learn faster.
**To build a strong foundation:**
- **Work on Projects:** Start with simple projects that align with your learning. For example, create a personal website, develop a basic app, or analyze a dataset. This helps reinforce your skills and provides practical experience.
- **Participate in Coding Challenges:** Websites like LeetCode, HackerRank, and Codewars offer coding challenges that can help improve your problem-solving skills and proficiency in programming.
- **Engage in Peer Learning:** Join online communities, forums, and study groups where you can collaborate, share knowledge, and seek feedback. There are many free online communities you can join such as Reddit, Quora even Linkedin, and many more. Engaging with others can provide new perspectives and accelerate your learning.
- **Seek Feedback:** Regularly seek feedback on your projects and code from more experienced developers. Constructive criticism can help you identify areas for improvement and enhance your skills. Join developer communities by taking part in online forums, discussion boards, and sites like Stack Overflow, Reddit (r/learnprogramming), and GitHub. This will help you get feedback on your projects from other developers. Find a mentor through LinkedIn or coding bootcamps for regular advice and feedback. Go to meetups, hackathons, and workshops to show your work and get helpful criticism from peers and experts. Share your code on GitHub, GitLab, or Bitbucket and invite others to review it. Use tools like CodeClimate, SonarQube, ESLint, or Pylint for real-time feedback on code quality and style. By doing these things, you can improve your coding skills and get valuable insights from experienced developers.
**Step 4: Gain Hands-on Experience**
Look for internships or entry-level jobs that provide hands-on experience. These opportunities allow you to apply your skills in real-world scenarios. Additionally, consider contributing to open-source projects to gain practical experience and improve your skills. For programming, seek internships at tech companies or contribute to open-source projects on GitHub. In data analysis, join data science competitions on platforms like Kaggle or intern with companies that handle large datasets. UI/UX designers can find internships at design agencies, participate in hackathons, and showcase their work on Behance or Dribbble. For cybersecurity, apply for roles at cybersecurity firms or participate in Capture The Flag (CTF) competitions. Web developers should seek internships at web development agencies or work on freelance projects through Upwork. AI enthusiasts can join AI competitions or intern at AI-focused companies. Software developers should look for internships at software companies and contribute to open-source projects. By actively pursuing these opportunities, you can enhance your skills and build a strong portfolio.
**
## Possible Challenges and Solutions
**
**Challenge 1: Information Overload**
With so much information available, it's easy to feel overwhelmed. To avoid this, focus on one topic at a time and set small, achievable goals. Break down your learning into manageable chunks. For instance, instead of trying to master an entire programming language in one go, start by learning the syntax and basic operations. Once you're comfortable with those, move on to more complex concepts like data structures or algorithms.
Additionally, create a structured learning plan. Allocate specific times for studying each topic, and take regular breaks to avoid burnout. Use a variety of resources, such as tutorials, books, and online courses, to keep your learning engaging and diverse.
Remember, learning is a marathon, not a sprint. Celebrate your small victories along the way, and don’t hesitate to revisit topics if you feel uncertain. By managing your learning process and staying organized, you can overcome the challenge of information overload and make steady progress in your studies.
**Challenge 2: Keeping Up with Rapid Changes**
The tech industry evolves quickly, and keeping up can be challenging. Stay updated by following tech news, joining professional groups, and participating in continuous learning. Subscribe to reputable tech blogs, podcasts, and YouTube channels to stay informed about the latest trends and advancements. Engage in webinars, workshops, and online courses to continuously upgrade your skills. Joining professional associations or online communities, such as GitHub, Reddit, Discord channel, Stack Overflow, or LinkedIn groups, can also provide valuable insights and resources to help you stay current. Additionally, consider setting aside regular time each week to review new developments in your field, experiment with new tools, and read up on emerging technologies. Being proactive in your learning approach will help you remain adaptable and competitive in the ever-evolving tech landscape.
**Challenge 3: Building a Network**
Networking is crucial for career growth, but building a network can be intimidating. Attend local meetups, engage on social media, and participate in online forums to connect with like-minded individuals. Look for industry conferences, hackathons, and networking events in your area where you can meet professionals face-to-face. Online platforms like LinkedIn, Twitter, and professional forums are excellent for engaging with experts and peers. Actively participate in discussions, share your own insights, and offer help to others. By contributing to the community, you'll build meaningful relationships that can lead to mentorship, collaboration opportunities, and career advancement.
When engaging in conversations online, don’t hesitate to comment and share your thoughts. Always showcase what you’ve learned by uploading your work or insights, especially on platforms like Twitter or Discord. Use trending hashtags to increase visibility. Believe me, you will later connect with people across the globe. By actively participating and sharing your progress, you'll attract the attention of like-minded professionals and expand your network internationally.
** Conclusion**
Starting a tech career is a journey filled with learning and growth.
> As Nelson Mandela once said, "It always seems impossible until it's done."
By following these essential steps and addressing potential challenges, you can navigate the tech space successfully. Stay persistent, keep learning, and remember that every expert was once a beginner. Always remember that every expert was once a beginner, so trust in your abilities and keep pushing forward. With dedication and perseverance, you'll undoubtedly pave your path to success in the tech industry.
| asharfdevv |
1,883,565 | useEffect React and TypeScript | useEffect React and TypeScript | 27,665 | 2024-06-10T18:00:08 | https://www.johnatanortiz.tech/blog/useeffect-react-and-typescript | # React + Typescript
Using `useEffect` in a React component with TypeScript involves ensuring that your effect functions, dependencies, and cleanup functions are properly typed. Here’s how you can do this:
### Basic Example
Here's a simple example of using `useEffect` with TypeScript:
```tsx
import React, { useEffect, useState } from 'react';
const MyComponent: React.FC = () => {
const [data, setData] = useState<string | null>(null);
useEffect(() => {
// Fetching data
fetch('https://api.example.com/data')
.then(response => response.json())
.then(data => setData(data));
// Cleanup (if necessary)
return () => {
console.log('Cleanup if necessary');
};
}, []); // Dependency array
return <div>Data: {data}</div>;
};
export default MyComponent;
```
### Typing the State
When using state, you need to specify the type of the state variable. In the example above, `useState<string | null>` specifies that the state can be a string or null.
### Example with Dependencies
If you have dependencies, you should also type them:
```tsx
import React, { useEffect, useState } from 'react';
const MyComponent: React.FC = () => {
const [count, setCount] = useState<number>(0);
useEffect(() => {
document.title = `Count: ${count}`;
}, [count]); // Re-run the effect only if count changes
return (
<div>
<p>Count: {count}</p>
<button onClick={() => setCount(count + 1)}>Increment</button>
</div>
);
};
export default MyComponent;
```
### Cleanup Function
Here’s an example with a cleanup function, typically used for subscriptions or timers:
```tsx
import React, { useEffect, useState } from 'react';
const TimerComponent: React.FC = () => {
const [seconds, setSeconds] = useState<number>(0);
useEffect(() => {
const interval = setInterval(() => {
setSeconds(prev => prev + 1);
}, 1000);
return () => {
clearInterval(interval);
};
}, []); // Run once on mount and cleanup on unmount
return <div>Seconds: {seconds}</div>;
};
export default TimerComponent;
```
### Fetching Data with TypeScript
For data fetching, you can define the type of the data you expect:
```tsx
import React, { useEffect, useState } from 'react';
interface Data {
id: number;
name: string;
}
const FetchDataComponent: React.FC = () => {
const [data, setData] = useState<Data | null>(null);
useEffect(() => {
fetch('https://api.example.com/data')
.then(response => response.json())
.then((data: Data) => setData(data))
.catch(error => console.error(error));
return () => {
// Any necessary cleanup
};
}, []); // Empty dependency array ensures this runs once on mount
return <div>{data ? `Name: ${data.name}` : 'Loading...'}</div>;
};
export default FetchDataComponent;
```
### Handling Subscriptions
If you’re dealing with subscriptions, you can type the cleanup function appropriately:
```tsx
import React, { useEffect, useState } from 'react';
const SubscriptionComponent: React.FC = () => {
const [message, setMessage] = useState<string>('');
useEffect(() => {
const subscription = subscribeToMessages((newMessage: string) => {
setMessage(newMessage);
});
return () => {
subscription.unsubscribe();
};
}, []); // Empty array ensures the effect runs only once
return <div>Message: {message}</div>;
};
function subscribeToMessages(callback: (message: string) => void) {
// Dummy subscription function
const interval = setInterval(() => {
callback('New message received');
}, 1000);
return {
unsubscribe: () => clearInterval(interval),
};
}
export default SubscriptionComponent;
```
### Summary
1. **Type the state variables**: Ensure that the state types are correctly defined.
2. **Type the data**: When fetching data, define the type of the data you expect to receive.
3. **Type the cleanup function**: Ensure that any cleanup functions are typed correctly.
By following these guidelines, you can effectively use `useEffect` in your TypeScript React components. | johnatan_stevenortizsal | |
1,883,562 | Open-Source Exploitation | Hi folks, This is not my title, but the title of a presentation I saw few days ago. I wanted to... | 0 | 2024-06-10T17:57:45 | https://dev.to/dagnelies/open-source-exploitation-2eh4 | watercooler, opensource, techtalks | Hi folks,
This is not my title, but the title of a presentation I saw a few days ago. I wanted to share it with you because I think it is important for the open-source community as a whole.
{% embed https://www.youtube.com/watch?v=9YQgNDLFYq8 %}
For my part, I agree completely to what he said. What about you? | dagnelies |
1,883,561 | Dreamin' in Color Countdown | Check out this Pen I made! | 0 | 2024-06-10T17:57:21 | https://dev.to/arbrazil/dreamin-in-color-countdown-3ldh | codepen | Check out this Pen I made!
{% codepen https://codepen.io/arbrazil/pen/qBwgNmM %} | arbrazil |
1,883,560 | Smart Cities: How Tech is Revolutionizing Urban Living | Intelligent Transportation Systems Advanced traffic management and real-time data integration... | 0 | 2024-06-10T17:53:27 | https://dev.to/bingecoder89/smart-cities-how-tech-is-revolutionizing-urban-living-1l75 | webdev, javascript, devops, ai | 1. **Intelligent Transportation Systems**
- Advanced traffic management and real-time data integration reduce congestion and improve public transport efficiency.
2. **Energy Efficiency and Smart Grids**
- Smart grids and renewable energy sources optimize energy consumption, reduce waste, and lower costs.
3. **IoT and Sensor Networks**
- IoT devices and sensors monitor environmental conditions, infrastructure, and resources to enhance urban management.
4. **Smart Buildings**
- Automated systems in buildings improve energy efficiency, safety, and comfort through centralized control.
5. **Sustainable Waste Management**
- Smart waste collection systems use data to optimize routes and schedules, reducing costs and environmental impact.
6. **Public Safety and Security**
- Surveillance systems and data analytics improve emergency response times and enhance crime prevention strategies.
7. **E-Government and Citizen Services**
- Digital platforms streamline interactions between citizens and municipal services, increasing accessibility and transparency.
8. **Healthcare and Telemedicine**
- Remote monitoring and telehealth services provide efficient healthcare solutions, improving access and reducing strain on urban hospitals.
9. **Smart Water Management**
- Advanced monitoring and management systems ensure efficient use of water resources and early detection of leaks.
10. **Urban Planning and Development**
- Data analytics and modeling tools support sustainable urban development and help city planners make informed decisions.
Happy Learning 🎉 | bingecoder89 |
1,883,559 | PYTHON P-3 PROJECT | ==>Creating a URL shortener is a great project that leverages databases, ORMs, and database... | 0 | 2024-06-10T17:52:15 | https://dev.to/victor_wangari_6e6143475e/python-p-3-project-250o | ==>Creating a URL shortener is a great project that leverages databases, ORMs, and database diagrams effectively. Here’s how you can approach building a URL shortener using Python with SQLite3 and SQLAlchemy.
**Project: URL Shortener**

**Features**

1. _Shorten URLs_
   - Accept long URLs and generate short URLs
   - Ensure unique short URLs
   - Optionally, allow custom short URLs
2. _Redirect Short URLs_
   - Redirect users to the original URL when they visit the short URL
3. _URL Analytics_
   - Track the number of visits to each short URL
4. _User Management (Optional)_
   - Allow users to register and manage their short URLs
   - Implement user authentication (login/logout)
5. _Database Diagram_
   - Visual representation of the database schema using tools like **dbdiagram.io** or **SQLAlchemy's** built-in features

**Steps to Implement**

1. _Set Up the Project_
   - Create a new directory for your project
   - Set up a virtual environment
   - Install the necessary dependencies: **flask**, **sqlalchemy**, **flask_sqlalchemy**, **flask_migrate**
2. _Database Design_
   - Define the database schema using SQLAlchemy models
   - Create tables for URLs and Users (if user management is implemented)
3. _Database Diagram_
   - Use a tool like **dbdiagram.io** to create a visual representation of your database schema
   - Alternatively, use SQLAlchemy to generate the schema and visualize it with a library like **ERAlchemy**
4. _URL Shortening Logic_
   - Implement logic to generate short URLs
   - Handle collisions to ensure short URLs stay unique
5. _Redirect Logic_
   - Implement a route that redirects short URLs to their original URLs
6. _Analytics_
   - Track and store analytics data such as visit counts and access dates
7. _User Management (Optional)_
   - Implement user registration and login using Flask-Login or Flask-Security
   - Allow users to manage their short URLs
8. _Testing and Deployment_
   - Write unit tests for your application
   - Deploy the application using a service like Heroku or AWS
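For the URL shortening logic in step 4, one common approach is to hash the long URL into a short base62 code and re-hash with a salt if the code is already taken. Here's a standard-library-only sketch (the helper names are my own, and the `taken` set stands in for a uniqueness check against your database):

```python
import hashlib
import string

ALPHABET = string.digits + string.ascii_letters  # base62 characters

def short_code(long_url: str, length: int = 7, salt: str = "") -> str:
    """Derive a deterministic base62 code from the URL; the salt lets us
    produce a different code for the same URL when resolving collisions."""
    digest = hashlib.sha256((salt + long_url).encode()).digest()
    n = int.from_bytes(digest[:8], "big")
    code = []
    for _ in range(length):
        n, r = divmod(n, 62)
        code.append(ALPHABET[r])
    return "".join(code)

def shorten(long_url: str, taken: set) -> str:
    """Retry with a growing salt until the code is free."""
    salt = ""
    while (code := short_code(long_url, salt=salt)) in taken:
        salt += "x"
    taken.add(code)
    return code
```

In the real app, `taken` would be a `SELECT` against your `urls` table, and the code/URL pair would be inserted inside the same transaction to keep uniqueness guaranteed.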
| victor_wangari_6e6143475e | |
1,883,532 | Is it Possible to Use Grafana Without Prometheus? | Grafana is widely known for its powerful data visualization capabilities, often used in conjunction... | 0 | 2024-06-10T17:50:03 | https://signoz.io/guides/is-it-possible-to-use-grafana-without-prometheus/ | devops, prometheus, grafana, observability | Grafana is widely known for its powerful data visualization capabilities, often used in conjunction with Prometheus for monitoring and metrics collection. However, many users wonder if it is possible to use Grafana without Prometheus. The answer is yes! Grafana's versatility allows it to support a wide range of data sources, making it a flexible tool for various use cases.
In this article, we will explore the possibility and benefits of using Grafana without Prometheus, and provide a step-by-step guide on setting up Grafana with MySQL as the data source.
## Understanding Grafana
Grafana is an open-source platform for monitoring and observability that excels in visualizing data from a variety of sources. It offers a highly customizable dashboarding experience, allowing users to create, explore, and share dashboards that display data from different systems.
## Common Data Sources Supported by Grafana
Grafana supports numerous data sources out-of-the-box, including:
- MySQL
- InfluxDB
- Elasticsearch
- PostgreSQL
- Prometheus
- Graphite
- Cloudwatch
- Azure Monitor
These integrations enable users to visualize data from various databases and services, making Grafana a versatile tool for many applications.
## Benefits of Using Grafana Without Prometheus
- **Flexibility in Data Source Selection**: Grafana's support for multiple data sources means you can choose the one that best fits your needs, whether it's a SQL database, a time-series database, or a cloud service.
- **Simplified Architecture**: By using Grafana with a data source like MySQL, you can simplify your monitoring architecture, which might be beneficial for specific use cases.
- **Cost Savings**: Depending on your requirements, using a single data source can reduce complexity and costs associated with managing multiple monitoring tools.
## Practical Example: Setting Up Grafana with MySQL
Let's dive into a practical example of setting up Grafana with MySQL as the data source.
**Step 1: Install Grafana**
Download and install Grafana from the [official website](https://grafana.com/docs/grafana/latest/setup-grafana/installation/mac/). Follow the installation instructions for your operating system.
**Step 2: Set Up MySQL and Create a Sample Database**
Install MySQL on your system if it's not already installed.
Create a sample database and table for storing your data:
```sql
CREATE DATABASE sample_db;
USE sample_db;
CREATE TABLE metrics (
id INT AUTO_INCREMENT PRIMARY KEY,
metric_name VARCHAR(255) NOT NULL,
value FLOAT NOT NULL,
timestamp DATETIME DEFAULT CURRENT_TIMESTAMP
);
INSERT INTO metrics (metric_name, value) VALUES ('cpu_usage', 23.5), ('memory_usage', 55.1);
```

**Step 3: Configure Grafana to Connect to MySQL**
- Open Grafana in your browser (default URL is http://localhost:3000).
- Log in with the default credentials (admin/admin); you can read a detailed login guide [here](https://grafana.com/docs/grafana/latest/setup-grafana/sign-in-to-grafana/)).

- Navigate to Configuration > Data Sources and click Add data source.

- Select MySQL from the list of data sources.
- Enter the necessary connection details (e.g., host, database name, user, and password) and click Save & Test to verify the connection.
- The host URL will most likely be `localhost:3306`
- Database name: `sample_db`
- Enter your database username and password in the authentication section.

**Step 4: Create a Simple Dashboard Using MySQL Data**
- Navigate to Create > Dashboard and click Add new panel.

- Select your MySQL data source.
- Write a simple SQL query to fetch data from your sample table:
```sql
SELECT
UNIX_TIMESTAMP(timestamp) as time_sec,
value as value,
metric_name as metric
FROM metrics
```
Customize the visualization as needed and save your dashboard.

## Real-World Use Cases
Many organizations leverage Grafana with data sources other than Prometheus to meet specific needs:
- Web Application Monitoring: Using MySQL or PostgreSQL to track user interactions and performance metrics.
- Business Analytics: Visualizing data from SQL databases to monitor KPIs and other business metrics.
- IoT Data Visualization: Utilizing InfluxDB to collect and visualize sensor data.
## Conclusion
Grafana is a versatile tool that can be used with a variety of data sources beyond Prometheus. This flexibility allows users to tailor their monitoring and visualization setup to their specific needs, simplifying architecture and potentially reducing costs.
For a deeper comparison between Prometheus and Grafana, you can check out this [comprehensive article](https://signoz.io/comparisons/prometheus-vs-grafana/).
Have you used Grafana with a data source other than Prometheus? Share your experiences and questions in the comments below. | yuvraajsj18 |
1,883,558 | Robot input by html & css & javascript | add me on : Codepen: https://codepen.io/hussein009 github... | 0 | 2024-06-10T17:48:18 | https://dev.to/hussein09/robot-input-by-html-css-javascript-2ok9 | codepen, javascript, html, css | add me on :
Codepen:
https://codepen.io/hussein009
github :
https://github.com/hussein-009
https://heylink.me/hussein009
{% codepen https://codepen.io/hussein009/pen/mdYwRzN %} | hussein09 |
1,883,557 | Getting started on AWS DeepRacer🏎️ | At the beginning of March 2024, I had never created a machine learning model. However, by the end of... | 0 | 2024-06-10T17:48:11 | https://dev.to/muhammedsalie/getting-started-on-aws-deepracer-a02 | genain, ai, machinelearning, futureofwork | At the beginning of March 2024, I had never created a machine learning model. However, by the end of April, my AWS DeepRacer model ranked in the top 50 in the Middle East & Africa Region competition, earning me a $99 Amazon gift card. Here are some of the lessons I learned along the way.
**Getting Started**
If you don't have an AWS account yet, you'll need to create one. Then, search for DeepRacer and click on "Get Started" in the left menu.
**Setting Up Your First Model and Choosing Training Algorithm**
Give your model an exciting name, like 'super fast model.' On the next page, select the time trial option.
I recommend starting with the PPO training algorithm. While SAC can provide a more optimized model, it only works with a continuous action space, an option available on the next page.
As for hyperparameters, the default settings are typically sufficient and are designed to work well for most use cases. If your model begins to plateau after several iterations, it might be worth tweaking these settings.
**Writing a Reward Function**
Check out the provided reward function examples. I began with the "follow the center line" model and made a minor adjustment for the first training session.
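
For reference, the "follow the center line" idea looks roughly like the sample below. The parameter names come from the standard DeepRacer `params` dictionary, and the band thresholds mirror the AWS sample's defaults — these thresholds are exactly the kind of thing to tweak as your minor adjustment:

```python
def reward_function(params):
    """Reward the agent for staying close to the center line."""
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the center line, as fractions of the track width
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0   # very close to center: full reward
    elif distance_from_center <= marker_2:
        reward = 0.5
    elif distance_from_center <= marker_3:
        reward = 0.1
    else:
        reward = 1e-3  # likely off track: near-zero reward

    return float(reward)
```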

**Your First Training Session**
After completing the first training session, evaluate your model on a track. Take note of its strengths and, more importantly, its weaknesses. Now, the fun part begins.

**Rinse and Repeat**
If you're satisfied with your model's performance, clone it to build on its existing knowledge. For the remaining training sessions, setting the maximum time under stop conditions to 60 to 120 minutes should suffice. If it's too short, the model won't have sufficient learning time; if it's too long, overfitting becomes a concern.
Unless your model's performance deteriorates after a training session, keep cloning your most recent model to build on its existing learning.
**Enjoy the Experience!**
DeepRacer provides a fun and competitive way to get started with machine learning. However, it can become expensive, so be sure to monitor your AWS account billing regularly or try and get AWS credits which helped me, and delete any models you no longer need to keep costs down. See you at the finish line. | muhammedsalie |
1,883,443 | Step-by-Step Guide to Typesafe Translations in Next.js Without Third-Party Tools | Multilingual support is a critical feature for modern web applications. In this guide, we will walk... | 0 | 2024-06-10T17:44:10 | https://dev.to/ryanmabrouk/step-by-step-guide-to-typesafe-translations-in-nextjs-without-third-party-tools-2mii | nextjs, tutorial, typescript, programming | Multilingual support is a critical feature for modern web applications. In this guide, we will walk through how to implement a typesafe translation system in a Next.js application without relying on any third-party libraries. This approach ensures that our translations are robust and maintainable.
## Step 1: Define Translation Data
Create a directory named **locales** at the root of your project to store your translation files. Inside this directory, create JSON files for each language you want to support. For example, **en.json** for English and **es.json** for Spanish.
locales/en.json:
```json
{
"greeting": "Hello",
"farewell": "Goodbye"
}
```
locales/es.json:
```json
{
"greeting": "Hola",
"farewell": "Adiós"
}
```
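What actually makes this setup typesafe is deriving a key type from one of the dictionaries. The standalone sketch below inlines the English dictionary as a `const` object where the real code would import `en.json` (with `resolveJsonModule` enabled in `tsconfig.json`); with it, a misspelled key fails at compile time:

```typescript
// Stand-in for `import en from "./locales/en.json"` (inlined here for the sketch)
const en = {
  greeting: "Hello",
  farewell: "Goodbye",
} as const;

// Every dictionary must provide exactly these keys
type TranslationKey = keyof typeof en;

// A tiny typesafe lookup: `t(es, "greting")` would be a compile-time error
function t(dict: Record<TranslationKey, string>, key: TranslationKey): string {
  return dict[key];
}

// The Spanish dictionary is checked against the English key set
const es: Record<TranslationKey, string> = {
  greeting: "Hola",
  farewell: "Adiós",
};

console.log(t(es, "greeting")); // "Hola"
```

In the real app, `typeof` applied to the imported JSON module plays the role of `typeof en` here.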
## Step 2: Create Server Action getTranslation
Define a server action named `getTranslation.ts` to fetch translation data. Here, we utilize an object called dictionaries, mapping language codes to functions that dynamically import the corresponding JSON file.
getTranslation.ts:
```typescript
"use server";
const dictionaries = {
  en: () => import("./locales/en.json").then((module) => module.default),
  es: () => import("./locales/es.json").then((module) => module.default),
};
export default async function getTranslation() {
  // Optional: getPreferredLang is a helper you define elsewhere (e.g. reading
  // the user's locale from a cookie); we fall back to English without it.
  const { defaultLang, error } = await getPreferredLang();
  const lang = await dictionaries?.[defaultLang ?? "en"]?.();
  return {
    lang,
    error,
  };
}
```
## Step 3: Create useTranslation Hook
In this example, I used React Query for managing state, but feel free to use any other state management solution of your choice. The main objective remains constant: crafting a versatile hook responsible for fetching translations, thereby enabling its use as a client-side state.
useTranslation.ts:
```typescript
"use client";
import getTranslation from "@/translation/getTranslation";
import { useQuery } from "@tanstack/react-query";
export default function useTranslation() {
  return useQuery({
    queryKey: ["lang"],
    queryFn: () => getTranslation(),
  });
}
```
## Step 4: Integrate Translation in Your Components
Now, let's use our custom hook in a Next.js client component.

You can also use the `getTranslation()` server action directly in your server components.

| ryanmabrouk |
1,883,608 | Curso De SAS Gratuito Para Iniciantes: 500 Vagas Disponíveis | Participe do curso online e gratuito “SAS Guide para Iniciantes” e comece sua introdução à tecnologia... | 0 | 2024-06-23T13:50:53 | https://guiadeti.com.br/curso-sas-gratuito-iniciantes-500-vagas/ | cursogratuito, analisededados, cursosgratuitos, dados | ---
title: Curso De SAS Gratuito Para Iniciantes: 500 Vagas Disponíveis
published: true
date: 2024-06-10 17:38:48 UTC
tags: CursoGratuito,analisededados,cursosgratuitos,dados
canonical_url: https://guiadeti.com.br/curso-sas-gratuito-iniciantes-500-vagas/
---
Join the free online course “SAS Guide para Iniciantes” and begin your introduction to data analysis technology and SAS tools.
This course offers a path to new job-market opportunities by teaching the foundations needed in a constantly expanding field.
Note that access is limited to the first 500 participants who register on launch day, due to the capacity of the virtual room.
Take this chance to sharpen your data analysis skills by getting to know the Statistical Analysis System!
## SAS Guide para Iniciantes
The free online course “SAS Guide para Iniciantes” (SAS Guide for Beginners) is an excellent gateway for anyone who wants to explore the world of data analysis and Statistical Analysis System technologies.

_Image from the registration page_
Designed to expand participants' opportunities in the job market, this course requires no prior technology knowledge and is open to anyone with curiosity and an interest in learning more about data analysis.
### Course Details and Registration
The Statistical Analysis System course will take place on June 15 at 8:30 a.m. Note that spots are limited to the first 500 registrants who access the virtual room on the day of the event.
Participation is open to professionals from business and technical areas, students, self-taught learners, and anyone who wants to acquire practical data analysis skills.
### Content and Learning
Participants in “SAS Guide para Iniciantes” will be introduced to the Statistical Analysis System, covering general data analysis concepts and exploring the SAS Enterprise Guide tool.
The course focuses on the basic structures for importing and transforming data into useful information, making it possible to build business flows (ETL) in a fully graphical way.
The hands-on content aims to equip participants with the skills they need to start exploring and understanding the field of data analysis.
### Exclusive Benefits for Participants
Note that only those who attend the course will have access to the training recording and the discounts offered.
These benefits are designed to encourage active participation and ensure that registrants can review the material and continue their learning journey even after the course ends.
## Statistical Analysis System
SAS (Statistical Analysis System) is an advanced software suite developed by SAS Institute for data analysis, business intelligence, predictive analytics, and more.
Since its development at North Carolina State University in the 1970s, SAS has been widely used by organizations to improve performance and make data-driven decisions.
### Data Analysis and Statistics
SAS is known for its data processing capabilities and statistical functionality.
It offers a wide variety of statistical procedures for exploratory data analysis, regression, multivariate analysis, and more.
### Business Intelligence (BI)
SAS software provides Business Intelligence solutions that help companies turn raw data into intelligible information.
With tools for reporting, interactive data analysis, and data visualization, SAS helps organizations understand their operations, markets, and customers more deeply.
### Predictive Analytics and Machine Learning
The Statistical Analysis System is also used to develop predictive models and machine learning algorithms. The platform offers advanced capabilities for predictive modeling, optimization, and simulation, allowing companies to anticipate future changes and make proactive decisions.
### Support for Multiple Platforms and Languages
SAS runs on Windows, Unix, and mainframe environments, and its programming language is known for being extensively documented and supported, allowing great flexibility and integration with other technologies and easing deployment across different IT infrastructures.
### SAS Viya – The Cloud Platform
SAS Viya is the Statistical Analysis System's cloud analytics platform, providing high performance and scalability for advanced analytics, artificial intelligence, and machine learning tasks.
Viya was designed to be open and interoperable, allowing the use of public APIs and integration with other programming languages such as Python, R, Java, and Lua.
### Barriers to Entry and Cost
Although the Statistical Analysis System offers many benefits, it is often criticized for its high licensing costs compared to open-source tools such as R and Python. In addition, the learning curve for SAS programming can be steep for beginners.
### Ethical Considerations in Data Analysis
Like any powerful data analysis tool, using the Statistical Analysis System carries ethical responsibilities, especially regarding data privacy and the potential for decisions based on models that may perpetuate existing biases.
## SAS Education
SAS Education is the educational division of SAS Institute, dedicated to providing training and educational resources on the use of Statistical Analysis System software and its applications in data analysis, statistics, and business intelligence.
### Training Courses
SAS Education offers a wide variety of courses, covering everything from Statistical Analysis System fundamentals to advanced topics in predictive analytics and artificial intelligence.
These courses are available in several formats, including in-person training, live online sessions, and on-demand courses, to meet students' different learning needs.
### Certifications Offered
One of the institution's most important features is its certification program. By earning SAS certifications, professionals can validate their skills and knowledge in using Statistical Analysis System tools and technologies.
The certifications are recognized worldwide and can significantly increase certificate holders' career opportunities.
### Learning Materials and Tutorials
The school provides a rich library of learning materials, video tutorials, and extensive documentation. These resources are designed to help users learn at their own pace and deepen their understanding in specific areas of interest.
### Developing Data Skills
In an increasingly data-driven world, proficient data analysis skills are essential.
This institution's education equips individuals with those skills, preparing them to tackle complex challenges and make evidence-based decisions in their professional careers.
## Registration link ⬇️
[Registration for the SAS Guide para Iniciantes course](https://sas.zoom.us/webinar/register/WN_w9BsfMm4SBK9PMbthY6Bnw#/registration) must be completed through the form.
## Share this learning opportunity with your network!
Did you enjoy this content about the introductory Statistical Analysis System course? Then share it with everyone!
The post [Curso De SAS Gratuito Para Iniciantes: 500 Vagas Disponíveis](https://guiadeti.com.br/curso-sas-gratuito-iniciantes-500-vagas/) first appeared on [Guia de TI](https://guiadeti.com.br). | guiadeti
1,883,555 | Buy Verified Paxful Account | https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are... | 0 | 2024-06-10T17:37:45 | https://dev.to/kayajo3925/buy-verified-paxful-account-2mnp | react, python, ai, devops | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-paxful-account/\n\n\n\n\nBuy Verified Paxful Account\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, Buy verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to Buy Verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with. Buy Verified Paxful Account.\n\nBuy US verified paxful account from the best place dmhelpshop\nWhy we declared this website as the best place to buy US verified paxful account? Because, our company is established for providing the all account services in the USA (our main target) and even in the whole world. With this in mind we create paxful account and customize our accounts as professional with the real documents. Buy Verified Paxful Account.\n\nIf you want to buy US verified paxful account you should have to contact fast with us. 
Because our accounts are-\n\nEmail verified\nPhone number verified\nSelfie and KYC verified\nSSN (social security no.) verified\nTax ID and passport verified\nSometimes driving license verified\nMasterCard attached and verified\nUsed only genuine and real documents\n100% access of the account\nAll documents provided for customer security\nWhat is Verified Paxful Account?\nIn today’s expanding landscape of online transactions, ensuring security and reliability has become paramount. Given this context, Paxful has quickly risen as a prominent peer-to-peer Bitcoin marketplace, catering to individuals and businesses seeking trusted platforms for cryptocurrency trading.\n\nIn light of the prevalent digital scams and frauds, it is only natural for people to exercise caution when partaking in online transactions. As a result, the concept of a verified account has gained immense significance, serving as a critical feature for numerous online platforms. Paxful recognizes this need and provides a safe haven for users, streamlining their cryptocurrency buying and selling experience.\n\nFor individuals and businesses alike, Buy verified Paxful account emerges as an appealing choice, offering a secure and reliable environment in the ever-expanding world of digital transactions. Buy Verified Paxful Account.\n\nVerified Paxful Accounts are essential for establishing credibility and trust among users who want to transact securely on the platform. They serve as evidence that a user is a reliable seller or buyer, verifying their legitimacy.\n\nBut what constitutes a verified account, and how can one obtain this status on Paxful? In this exploration of verified Paxful accounts, we will unravel the significance they hold, why they are crucial, and shed light on the process behind their activation, providing a comprehensive understanding of how they function. 
Buy verified Paxful account.\n\n \n\nWhy should to Buy Verified Paxful Account?\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence. Buy Verified Paxful Account.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.\n\n \n\nWhat is a Paxful Account\nPaxful and various other platforms consistently release updates that not only address security vulnerabilities but also enhance usability by introducing new features. Buy Verified Paxful Account.\n\nIn line with this, our old accounts have recently undergone upgrades, ensuring that if you purchase an old buy Verified Paxful account from dmhelpshop.com, you will gain access to an account with an impressive history and advanced features. This ensures a seamless and enhanced experience for all users, making it a worthwhile option for everyone.\n\n \n\nIs it safe to buy Paxful Verified Accounts?\nBuying on Paxful is a secure choice for everyone. However, the level of trust amplifies when purchasing from Paxful verified accounts. 
These accounts belong to sellers who have undergone rigorous scrutiny by Paxful. Buy verified Paxful account, you are automatically designated as a verified account. Hence, purchasing from a Paxful verified account ensures a high level of credibility and utmost reliability. Buy Verified Paxful Account.\n\nPAXFUL, a widely known peer-to-peer cryptocurrency trading platform, has gained significant popularity as a go-to website for purchasing Bitcoin and other cryptocurrencies. It is important to note, however, that while Paxful may not be the most secure option available, its reputation is considerably less problematic compared to many other marketplaces. Buy Verified Paxful Account.\n\nThis brings us to the question: is it safe to purchase Paxful Verified Accounts? Top Paxful reviews offer mixed opinions, suggesting that caution should be exercised. Therefore, users are advised to conduct thorough research and consider all aspects before proceeding with any transactions on Paxful.\n\n \n\nHow Do I Get 100% Real Verified Paxful Accoun?\nPaxful, a renowned peer-to-peer cryptocurrency marketplace, offers users the opportunity to conveniently buy and sell a wide range of cryptocurrencies. Given its growing popularity, both individuals and businesses are seeking to establish verified accounts on this platform.\n\nHowever, the process of creating a verified Paxful account can be intimidating, particularly considering the escalating prevalence of online scams and fraudulent practices. This verification procedure necessitates users to furnish personal information and vital documents, posing potential risks if not conducted meticulously.\n\nIn this comprehensive guide, we will delve into the necessary steps to create a legitimate and verified Paxful account. 
Our discussion will revolve around the verification process and provide valuable tips to safely navigate through it.\n\nMoreover, we will emphasize the utmost importance of maintaining the security of personal information when creating a verified account. Furthermore, we will shed light on common pitfalls to steer clear of, such as using counterfeit documents or attempting to bypass the verification process.\n\nWhether you are new to Paxful or an experienced user, this engaging paragraph aims to equip everyone with the knowledge they need to establish a secure and authentic presence on the platform.\n\nBenefits Of Verified Paxful Accounts\nVerified Paxful accounts offer numerous advantages compared to regular Paxful accounts. One notable advantage is that verified accounts contribute to building trust within the community.\n\nVerification, although a rigorous process, is essential for peer-to-peer transactions. This is why all Paxful accounts undergo verification after registration. When customers within the community possess confidence and trust, they can conveniently and securely exchange cash for Bitcoin or Ethereum instantly. Buy Verified Paxful Account.\n\nPaxful accounts, trusted and verified by sellers globally, serve as a testament to their unwavering commitment towards their business or passion, ensuring exceptional customer service at all times. Headquartered in Africa, Paxful holds the distinction of being the world’s pioneering peer-to-peer bitcoin marketplace. Spearheaded by its founder, Ray Youssef, Paxful continues to lead the way in revolutionizing the digital exchange landscape.\n\nPaxful has emerged as a favored platform for digital currency trading, catering to a diverse audience. One of Paxful’s key features is its direct peer-to-peer trading system, eliminating the need for intermediaries or cryptocurrency exchanges. 
By leveraging Paxful’s escrow system, users can trade securely and confidently.\n\nWhat sets Paxful apart is its commitment to identity verification, ensuring a trustworthy environment for buyers and sellers alike. With these user-centric qualities, Paxful has successfully established itself as a leading platform for hassle-free digital currency transactions, appealing to a wide range of individuals seeking a reliable and convenient trading experience. Buy Verified Paxful Account.\n\n \n\nHow paxful ensure risk-free transaction and trading?\nEngage in safe online financial activities by prioritizing verified accounts to reduce the risk of fraud. Platforms like Paxfu implement stringent identity and address verification measures to protect users from scammers and ensure credibility.\n\nWith verified accounts, users can trade with confidence, knowing they are interacting with legitimate individuals or entities. By fostering trust through verified accounts, Paxful strengthens the integrity of its ecosystem, making it a secure space for financial transactions for all users. Buy Verified Paxful Account.\n\nExperience seamless transactions by obtaining a verified Paxful account. Verification signals a user’s dedication to the platform’s guidelines, leading to the prestigious badge of trust. This trust not only expedites trades but also reduces transaction scrutiny. Additionally, verified users unlock exclusive features enhancing efficiency on Paxful. Elevate your trading experience with Verified Paxful Accounts today.\n\nIn the ever-changing realm of online trading and transactions, selecting a platform with minimal fees is paramount for optimizing returns. This choice not only enhances your financial capabilities but also facilitates more frequent trading while safeguarding gains. Buy Verified Paxful Account.\n\nExamining the details of fee configurations reveals Paxful as a frontrunner in cost-effectiveness. 
Acquire a verified level-3 USA Paxful account from usasmmonline.com for a secure transaction experience. Invest in verified Paxful accounts to take advantage of a leading platform in the online trading landscape.\n\n \n\nHow Old Paxful ensures a lot of Advantages?\n\nExplore the boundless opportunities that Verified Paxful accounts present for businesses looking to venture into the digital currency realm, as companies globally witness heightened profits and expansion. These success stories underline the myriad advantages of Paxful’s user-friendly interface, minimal fees, and robust trading tools, demonstrating its relevance across various sectors.\n\nBusinesses benefit from efficient transaction processing and cost-effective solutions, making Paxful a significant player in facilitating financial operations. Acquire a USA Paxful account effortlessly at a competitive rate from usasmmonline.com and unlock access to a world of possibilities. Buy Verified Paxful Account.\n\nExperience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.\n\nPaxful’s verified accounts not only offer reliability within the trading community but also serve as a testament to the platform’s ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today. Buy Verified Paxful Account.\n\n \n\nWhy paxful keep the security measures at the top priority?\nIn today’s digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. 
It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.\n\nSafeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all. Buy Verified Paxful Account.\n\nConclusion\nInvesting in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin. Buy Verified Paxful Account.\n\nThe initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.\n\nIn conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller’s history and reviews before making any transactions.\n\nMoreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions. Buy Verified Paxful Account.\n\n \n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | kayajo3925 |
1,883,554 | PACX ⁓ Create a table | We have previously talked about what does it mean to manually manipulate the Dataverse data model,... | 0 | 2024-06-10T17:34:43 | https://dev.to/_neronotte/pacx-create-a-table-1lgo | powerplatform, dataverse, opensource, tools | [We have previously talked about what does it mean to manually manipulate the Dataverse data model](https://dev.to/_neronotte/pacx-data-model-manipulation-579e), and the benefits of data model scripting.
We'll now deep dive on what it means to create tables, columns and relations via PACX.
---
[Creating a table with PACX](https://github.com/neronotte/Greg.Xrm.Command/wiki/pacx-table-create) is quite easy.
As a **prerequisite**, you must connect to a Dataverse environment via `pacx auth create` or `pacx auth select`, and optionally you can select a default solution via `pacx solution setDefault`.
Then you can just type
```Powershell
pacx table create --name "My Table"
```
to create a new table named "My Table" in the context of the solution set as default via `pacx solution setDefault`. If no solution is set as default, the `--solution` argument is mandatory.
The table schema name is generated automatically by extracting only letters, numbers, and underscores from the display name, converting them to lowercase, and prefixing the result with the solution publisher's prefix.
In this case, if the publisher prefix is `greg`, the generated schema name will be `greg_mytable`.
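As a rough illustration only (PACX performs this derivation internally; the function name here is hypothetical), the rule can be sketched in a few lines of JavaScript:

```javascript
// Hypothetical sketch of the schema-name derivation described above.
// Keeps only letters, digits and underscores from the display name,
// lowercases the result and prepends the publisher prefix.
function deriveSchemaName(displayName, publisherPrefix) {
  const cleaned = displayName.replace(/[^A-Za-z0-9_]/g, "").toLowerCase();
  return `${publisherPrefix}_${cleaned}`;
}

console.log(deriveSchemaName("My Table", "greg")); // greg_mytable
```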
> The command can be also run using the alias `pacx create table`
The following default conventions apply automatically:
- **DisplayCollectionName** (aka the plural name of the entity) can be specified using `--plural` argument. If not specified, it is inferred automatically by pluralizing the display name (automatic pluralization is done via [Pluralize.NET](https://github.com/sarathkcm/Pluralize.NET), and is currently supported only if the base language is English - 1033).
- **Description** can be specified using `--description` argument, otherwise is left empty.
- **SchemaName** can be specified using `--schemaName` argument, otherwise is inferred from the display name as described above.
- **OwnershipType** can be specified using `--ownership` argument, otherwise is set as `UserOwned`.
- **IsActivity** by default is `false`, can be changed via `--isActivity` argument
- **IsAvailableOffline** by default is `false` unless the table is an activity table, or the `--offline` argument is specified
- **IsValidForQueue** by default is `false` unless the table is an activity table, or the `--queue` argument is specified
- **IsConnectionsEnabled** by default is `false` unless the table is an activity table, or the `--connection` argument is specified
- **HasNotes** by default is `false` unless the table is an activity table, or the `--notes` argument is specified
- **HasFeedback** by default is `false` unless the table is an activity table, or the `--feedback` argument is specified
- **IsAuditEnabled** by default is `true`, but can be overridden via `--audit false` argument
About the table **primary attribute**:
- You can set it as an **Autonumber** field via `--primaryAttributeAutoNumberFormat` argument (`-paan`). If not specified, it is assumed to be plain text.
- The **display name**
- if the table is an activity, is fixed to `Subject`
- otherwise, it can be specified via `--primaryAttributeName` (`-pan`) argument.
- if not specified
- if the primary attribute is an autonumber, it's set by default to `Code`
- otherwise it's set to `Name`
- The **requirement level**
- can be specified via `--primaryAttributeRequiredLevel` (`-par`).
- if not specified
- if it's an autonumber, it's set by default to `None`
- otherwise it's set by default to `ApplicationRequired`
- The **max length** is _100_ by default, unless specified via `--primaryAttributeMaxLength` (`-palen`) argument
- The description can be set via `--primaryAttributeDescription` argument (`-pad`). If not specified, is left empty
[Take a look at the official documentation](https://github.com/neronotte/Greg.Xrm.Command/wiki/pacx-table-create) to get the list of all the command arguments.
Once created, you can use the following command
```Powershell
pacx table exportMetadata --table greg_mytable -r
```
to [generate and open a JSON representation](https://github.com/neronotte/Greg.Xrm.Command/wiki/pacx-table-exportMetadata) of the table metadata.
---
Hope this helps you start using PACX in your daily activities. In the next articles, we'll take a deeper dive into working with _columns_.
| _neronotte |
1,883,553 | How to make Email Template Using HTML | Would you like to learn how to create an email template using HTML and style it with CSS? Usually,... | 0 | 2024-06-10T17:29:06 | https://dev.to/yasminsardar/create-email-template-using-html-and-css-only-3phm | tutorial, html, webdev, beginners | Would you like to learn how to create an email template using HTML and style it with CSS?
Usually, you can make an email template with ordinary HTML elements. But for Gmail, you'll need to build it with `<table>`, `<tr>`, and `<td>` elements.
Otherwise, your email template won't function properly when shared in Gmail.
Start with a main-container `<table>`, then nest a `tr > td > table` structure inside it. There is no fixed set of `tr`, `td`, and `table` elements you must create per section;
each section is built separately from `tr`, `td`, and `table` elements, and how many you need depends on your template design.
Once the HTML structure is complete, style it with inline CSS. External and internal stylesheets are risky because many email clients strip or ignore them.
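To make the pattern concrete, here is a minimal, illustrative skeleton; the class name and content are placeholders rather than part of any real template:

```html
<!-- Minimal sketch: an outer main-container table, with each
     section built from its own tr > td > table block and styled
     with inline CSS. Class names are illustrative placeholders. -->
<table class="main-container" width="100%">
  <tr>
    <td>
      <!-- one section -->
      <table width="100%">
        <tr>
          <td style="padding: 20px; font-family: sans-serif;">
            Section content goes here.
          </td>
        </tr>
      </table>
    </td>
  </tr>
</table>
```

Repeat the inner `tr > td > table` block once per section of your template.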
Here's a simple demo with one section, along with the code:
```html
<!-- Three Columns -->
<tr>
<td>
<table width="100%">
<tr>
<td class="three-con" style="background-color: #ffffff">
<table class="con">
<tr>
<td class="padding">
<table class="content">
<tr>
<td>
<a href="#"
><img
src="images/feature1.png"
width="150"
alt=""
/></a>
                          </td>
                        </tr>
                        <tr>
                          <td>
                            <p style="font-size: 17px; font-weight: bold">
                              Avoid Medicine
                            </p>
                            <p>
                              Orange juice is a concentrated source of
                              vitamin C, a water-soluble vitamin that
                              doubles as a powerful antioxidant and plays
                              a central role in immune function.
                            </p>
                          </td>
                        </tr>
</table>
</td>
</tr>
</table>
<table class="con">
<tr>
<td class="padding">
<table class="content">
<tr>
<td>
<a href="#"
><img
src="images/feature2.png"
width="150"
alt=""
/></a>
                          </td>
                        </tr>
                        <tr>
                          <td>
                            <p style="font-size: 17px; font-weight: bold">
                              Prevent Kidney Stones
                            </p>
                            <p>
                              Kidney stones are small mineral deposits
                              that accumulate in your kidneys, often
                              causing symptoms like severe pain, nausea,
                              or blood in your urine.
                            </p>
                          </td>
                        </tr>
</table>
</td>
</tr>
</table>
<table class="con">
<tr>
<td class="padding">
<table class="content">
<tr>
<td>
<a href="#"
><img
src="images/feature3.png"
width="150"
alt=""
/></a>
                          </td>
                        </tr>
                        <tr>
                          <td>
                            <p style="font-size: 17px; font-weight: bold">
                              Improve Heart Health
                            </p>
                            <p>
                              Orange juice has also been shown to increase
                              levels of “good” HDL cholesterol in people
                              with elevated levels — which could improve
                              heart health.
                            </p>
                          </td>
                        </tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
```
I understand it might seem a bit messy and confusing, but that's how email templates are typically created. This is just one section with three columns, each column containing an image, a heading, and a paragraph.
Take a look at the image below; it'll give you a clearer understanding.

And here is the full email template code:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Two_2</title>
<style>
body {
margin: 0;
padding: 0;
background-color: #cccccc;
}
table {
border-spacing: 0;
}
td {
padding: 0;
}
img {
border: 0;
}
.container {
width: 100%;
table-layout: fixed;
background-color: #cccccc;
}
.contain {
background-color: #1d3c45;
margin: 0 auto;
width: 100%;
max-width: 600px;
border-spacing: 0;
font-family: sans-serif;
color: #1d3c45;
}
.two-col {
font-size: 0;
text-align: center;
}
.two-col .col {
width: 100%;
display: inline-block;
max-width: 300px;
vertical-align: top;
text-align: center;
}
.three-con {
text-align: center;
font-size: 0;
padding: 35px 0 20px;
}
.three-con .con {
width: 100%;
max-width: 200px;
display: inline-block;
}
.three-con .padding {
padding: 15px;
}
.three-con .content {
display: inline-block;
font-size: 17px;
line-height: 20px;
}
.two-con.last {
padding: 35px 0;
background-color: #ffffff;
}
.two-con .padding {
padding: 10px;
}
.two-con .content {
font-size: 17px;
line-height: 20px;
text-align: left;
}
.two-con .column {
width: 100%;
max-width: 280px;
}
.two-con .column:first-child {
padding-left: 20px;
}
.column {
display: inline-block;
}
.footer {
text-align: center;
padding: 35px 0;
}
</style>
</head>
<body>
<center class="container">
<table class="contain">
<tr>
<td style="background-color: #ef721e; height: 4px"></td>
</tr>
<!-- Header -->
<tr>
<td style="padding: 14px 0 4px">
<table width="100%">
<tr>
<td class="two-col">
<table class="col">
<tr>
<td style="padding: 0 62px 10px">
<a href="#"
><img
src="images/logo.png"
width="180"
alt="Logo"
title="logo"
/></a>
</td>
</tr>
</table>
<table class="col">
<tr>
<td style="padding: 10px 68px">
<a href="#"
><img
src="images/fb.png"
width="32"
style="border-radius: 50%; padding-left: 5px"
/></a>
<a href="#"
><img
src="images/ig.png"
width="32"
style="border-radius: 50%; padding-left: 5px"
/></a>
<a href="#"
><img
src="images/in.png"
width="32"
style="border-radius: 50%; padding-left: 5px"
/></a>
<a href="#"
><img
src="images/tw.png"
width="32"
style="border-radius: 50%; padding-left: 5px"
/></a>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
<!-- Banner -->
<tr>
<td>
<table width="100%">
<tr>
<td>
<a href="#"
><img
src="images/banner.png"
alt=" "
width="600 "
style="max-width: 100%"
/>
</a>
</td>
</tr>
</table>
</td>
</tr>
<!-- Three Columns -->
<tr>
<td>
<table width="100%">
<tr>
<td class="three-con" style="background-color: #ffffff">
<table class="con">
<tr>
<td class="padding">
<table class="content">
<tr>
<td>
<a href="#"
><img
src="images/feature1.png"
width="150"
alt=""
/></a>
                            </td>
                          </tr>
                          <tr>
                            <td>
                              <p style="font-size: 17px; font-weight: bold">
                                Avoid Medicine
                              </p>
                              <p>
                                Orange juice is a concentrated source of
                                vitamin C, a water-soluble vitamin that
                                doubles as a powerful antioxidant and plays
                                a central role in immune function.
                              </p>
                            </td>
                          </tr>
</table>
</td>
</tr>
</table>
<table class="con">
<tr>
<td class="padding">
<table class="content">
<tr>
<td>
<a href="#"
><img
src="images/feature2.png"
width="150"
alt=""
/></a>
                            </td>
                          </tr>
                          <tr>
                            <td>
                              <p style="font-size: 17px; font-weight: bold">
                                Prevent Kidney Stones
                              </p>
                              <p>
                                Kidney stones are small mineral deposits
                                that accumulate in your kidneys, often
                                causing symptoms like severe pain, nausea,
                                or blood in your urine.
                              </p>
                            </td>
                          </tr>
</table>
</td>
</tr>
</table>
<table class="con">
<tr>
<td class="padding">
<table class="content">
<tr>
<td>
<a href="#"
><img
src="images/feature3.png"
width="150"
alt=""
/></a>
                            </td>
                          </tr>
                          <tr>
                            <td>
                              <p style="font-size: 17px; font-weight: bold">
                                Improve Heart Health
                              </p>
                              <p>
                                Orange juice has also been shown to increase
                                levels of “good” HDL cholesterol in people
                                with elevated levels — which could improve
                                heart health.
                              </p>
                            </td>
                          </tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
<!-- Two section -->
<tr>
<td style="background-color: #1d3c45; height: 4px"></td>
</tr>
<tr>
<td>
<table width="100%">
<tr>
<td class="two-con last">
<table class="column">
<tr>
<td class="padding">
<table class="content">
<tr>
<td>
<p
style="
font-size: 17px;
font-weight: bold;
display: inline-block;
"
>
Rich in Several Important Nutrients
</p>
<p style="display: inline-block">
An 8-ounce (240-ml) serving of orange juice
provides approximately
</p>
<p style="display: inline-block">
Not to mention, it’s an excellent source of the
mineral potassium, which regulates blood
pressure, prevents bone loss, and protects
against heart disease and stroke.
</p>
</td>
</tr>
</table>
</td>
</tr>
</table>
<table class="column">
<tr>
<td class="padding">
<table class="content">
<tr>
<td>
<a href="#"
><img
src="images/orange.gif"
width="260px"
style="max-width: 260px"
/></a>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td>
<table width="100%">
<tr>
<td class="two-con last">
<table class="column">
<tr>
<td class="padding">
<table class="content">
<tr>
<td>
<a href="#"
><img
src="images/orange2.gif"
width="260px"
style="max-width: 260px"
/></a>
</td>
</tr>
</table>
</td>
</tr>
</table>
<table class="column">
<tr>
<td class="padding">
<table class="content">
<tr>
<td>
<p
style="
font-size: 17px;
font-weight: bold;
display: inline-block;
"
>
May Decrease Inflammation
</p>
<p style="display: inline-block">
Some studies suggest that orange juice could
decrease inflammation and problems tied to it.
</p>
<p style="display: inline-block">
Orange juice may help decrease markers of
inflammation, which could help reduce your risk
of chronic disease.
</p>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
<tr>
<td style="background-color: #1d3c45; height: 4px"></td>
</tr>
<!-- Footer -->
<tr>
<td>
<table width="100%">
<tr>
<td>
<table width="100%" class="footer">
<tr>
<td class="padding">
<a href="#"
><img
src="images/logo.png"
alt="logo"
title="logo"
width="180px"
style="border-radius: 10px"
/></a>
</td>
</tr>
<tr>
<td style="padding: 10px 68px">
<a href="#"
><img
src="images/fb.png"
width="32"
style="border-radius: 50%; padding-left: 5px"
/></a>
<a href="#"
><img
src="images/in.png"
width="32"
style="border-radius: 50%; padding-left: 5px"
/></a>
<a href="#"
><img
src="images/ig.png"
width="32"
style="border-radius: 50%; padding-left: 5px"
/></a>
<a href="#"
><img
src="images/tw.png"
width="32"
style="border-radius: 50%; padding-left: 5px"
/></a>
<p style="color: #ffffff; font-size: 15px">
307 S. Main St. Suite 202 Bentonville, AR 72712, USA
</p>
<p style="color: #ffffff; font-size: 15px">
© 2011-2022, MADE GROCERY. All rights reserved
</p>
</td>
</tr>
</table>
</td>
</tr>
</table>
</td>
</tr>
        <tr>
          <td style="background-color: #ef721e; height: 4px"></td>
        </tr>
</table>
</center>
</body>
</html>
```
This will work well on any device, adjusting to different screen sizes easily.
And the output:

I hope this post helps you learn how to create an email template using HTML and CSS.
If you need an email template, feel free to check out my Fiverr profile! https://www.fiverr.com/yasminsardar/do-responsive-html-email-templates-design
| yasminsardar |
1,883,552 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-10T17:28:43 | https://dev.to/kayajo3925/buy-verified-cash-app-account-2ong | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n" | kayajo3925 |
1,883,550 | Designed by JavaScript without tree | add me on : Codepen: https://codepen.io/hussein009 github... | 0 | 2024-06-10T17:26:17 | https://dev.to/hussein09/designed-by-javascript-without-tree-f2o | codepen, javascript, beginners, programming | add me on :
Codepen:
https://codepen.io/hussein009
GitHub:
https://github.com/hussein-009
https://heylink.me/hussein009
{% codepen https://codepen.io/hussein009/pen/MWdEwRM %} | hussein09 |
1,883,549 | Exhibition Stand Design Company in Nuremberg | Bellatrum is one of the best three exhibition stand companies in Nuremberg. We have more than 25... | 0 | 2024-06-10T17:25:45 | https://dev.to/bellatrum/exhibition-stand-design-company-in-nuremberg-coo | Bellatrum is one of the best three exhibition stand companies in Nuremberg. We have more than 25 years of experience in designing and building exhibition stands for trade shows in Nuremberg.
From BIOFACH to Interzoo to SMTConnect, we have covered all the [top exhibitions in Nuremberg](https://bellatrum.com/stand-designs-in-nuremberg/). Our technologically advanced exhibition stands have established our dominance in the exhibition industry. As an experienced stand design company in Nuremberg, we understand that every business has different requirements. That’s why we offer different types and sizes of exhibition stands.
Get ready to present your brand and network with the stakeholders at the upcoming trade show in Nuremberg with our personalised [exhibition stand services](https://bellatrum.com/custom-exhibition-stand/). | bellatrum | |
1,883,536 | Generating replies using Groq and Gemma in NestJS | Introduction In this blog post, I demonstrated generating replies with Groq SDK and Gemma... | 27,661 | 2024-06-10T17:11:59 | https://www.blueskyconnie.com/groq-sdk-and-gemma-for-generating-replie-in-nestjs-2/ | generativeai, tutorial, nestjs, groq | ## Introduction
In this blog post, I demonstrated generating replies with the Groq SDK and the Gemma 7B model. Buyers can provide ratings and comments on sales transactions on auction sites like eBay. When the feedback is negative, the seller must reply promptly to resolve the dispute. This demo aims to generate responses in the same language as the buyer's feedback, according to its tone (positive, neutral, or negative) and topics. The chatbot and the user engage in multi-turn conversations to obtain the feedback's language, sentiment, and topics. Finally, the model generates the final reply to keep customers happy.
### Generate Groq API Key
Log in to Groq Cloud and navigate to https://console.groq.com/keys to generate an API key.
### Create a new NestJS Project
```bash
nest new nestjs-groq-customer-feedback
```
### Install dependencies
```bash
npm i --save-exact @nestjs/swagger @nestjs/throttler dotenv compression helmet class-validator class-transformer groq-sdk
```
### Generate a Feedback Module
```bash
nest g mo advisoryFeedback
nest g co advisoryFeedback/presenters/http/advisoryFeedback --flat
nest g s advisoryFeedback/application/advisoryFeedback --flat
nest g s advisoryFeedback/application/advisoryFeedbackPromptChainingService --flat
```
Create an `AdvisoryFeedbackModule` module, a controller, a service for the API, and another service to build chained prompts.
### Define GROQ environment variables
```
// .env.example
PORT=3001
GROQ_API_KEY=<groq api key>
GROQ_MODEL=gemma-7b-it
```
Copy `.env.example` to `.env`, and replace `GROQ_API_KEY` and `GROQ_MODEL` with the actual API Key and the Gemma model.
- PORT - port number of the NestJS application
- GROQ_API_KEY - API Key of GROQ
- GROQ_MODEL - GROQ model and I used Gemma 7B in this demo
Add `.env` to the `.gitignore` file to prevent accidentally committing the Groq API Key to the GitHub repo.
### Add configuration files
The project has 3 configuration files. `validate.config.ts` validates the payload is valid before any request can route to the controller to execute.
```typescript
// validate.config.ts
import { ValidationPipe } from '@nestjs/common';
export const validateConfig = new ValidationPipe({
whitelist: true,
stopAtFirstError: true,
forbidUnknownValues: false,
});
```
`env.config.ts` extracts the environment variables from process.env and stores the values in the env object.
```typescript
import dotenv from 'dotenv';
dotenv.config();
export const env = {
PORT: parseInt(process.env.PORT || '3001'),
GROQ: {
API_KEY: process.env.GROQ_API_KEY || '',
MODEL_NAME: process.env.GROQ_MODEL || 'llama3-8b-8192',
},
};
```
`throttler.config.ts` defines the rate limit of the API
```typescript
// throttler.config.ts
import { ThrottlerModule } from '@nestjs/throttler';
export const throttlerConfig = ThrottlerModule.forRoot([
{
ttl: 60000,
limit: 10,
},
]);
```
Each route allows ten requests in 60,000 milliseconds or 1 minute.
### Bootstrap the application
```typescript
// bootstrap.ts
export class Bootstrap {
private app: NestExpressApplication;
async initApp() {
this.app = await NestFactory.create(AppModule);
}
enableCors() {
this.app.enableCors();
}
setupMiddleware() {
this.app.use(express.json({ limit: '1000kb' }));
this.app.use(express.urlencoded({ extended: false }));
this.app.use(compression());
this.app.use(helmet());
}
setupGlobalPipe() {
this.app.useGlobalPipes(validateConfig);
}
async startApp() {
await this.app.listen(env.PORT);
}
setupSwagger() {
const config = new DocumentBuilder()
.setTitle('ESG Advisory Feedback with Groq and Gemma')
      .setDescription('Integrate with Groq to improve ESG advisory feedback by prompt chaining')
.setVersion('1.0')
.addTag('Groq, Gemma, Prompt Chaining')
.build();
const document = SwaggerModule.createDocument(this.app, config);
SwaggerModule.setup('api', this.app, document);
}
}
```
Added a Bootstrap class to set up Swagger, middleware, global validation, CORS, and, finally, the application start.
```typescript
// main.ts
import { env } from '~configs/env.config';
import { Bootstrap } from '~core/bootstrap';
async function bootstrap() {
const bootstrap = new Bootstrap();
await bootstrap.initApp();
bootstrap.enableCors();
bootstrap.setupMiddleware();
bootstrap.setupGlobalPipe();
bootstrap.setupSwagger();
await bootstrap.startApp();
}
bootstrap()
.then(() => console.log(`The application starts successfully at port ${env.PORT}`))
.catch((error) => console.error(error));
```
The bootstrap function enabled CORS, registered middleware to the application, set up Swagger documentation, and validated payloads using a global pipe.
I have laid down the groundwork, and the next step is to add an endpoint to receive payload for generating replies with prompt chaining.
### Define Feedback DTO
```typescript
// feedback.dto.ts
import { IsNotEmpty, IsString } from 'class-validator';
export class FeedbackDto {
@IsString()
@IsNotEmpty()
prompt: string;
}
```
`FeedbackDto` accepts a prompt, which is customer feedback.
### Construct Gemma Model
```typescript
// groq.constant.ts
export const GROQ_CHAT_MODEL = 'GROQ_CHAT_MODEL';
```
```typescript
// groq.provider.ts
import { Provider } from '@nestjs/common';
import { GROQ_CHAT_MODEL } from '../constants/groq.constant';
import Groq from 'groq-sdk';
import { env } from '~configs/env.config';
export const GroqChatModelProvider: Provider<Groq.Chat> = {
provide: GROQ_CHAT_MODEL,
useFactory: () => new Groq({ apiKey: env.GROQ.API_KEY }).chat,
};
```
`GroqChatModelProvider` registers the Groq chat client (created with the API key) under the `GROQ_CHAT_MODEL` token, so services can inject it to chat with the Gemma model.
### Implement Reply Service
```typescript
// groq.config.ts
import { ChatCompletionCreateParamsNonStreaming } from 'groq-sdk/resources/chat/completions';
import { env } from '~configs/env.config';
export const MODEL_CONFIG: Omit<ChatCompletionCreateParamsNonStreaming, 'messages'> = {
model: env.GROQ.MODEL_NAME,
temperature: 0.5,
max_tokens: 1024,
top_p: 0.5,
stream: false,
};
```
```typescript
// sentiment-analysis.type.ts
export type SentimentAnalysis = {
sentiment: 'POSITIVE' | 'NEUTRAL' | 'NEGATIVE';
topic: string;
};
```
```typescript
// advisory-feedback-prompt-chaining.service.ts
// Omit the import statements
@Injectable()
export class AdvisoryFeedbackPromptChainingService {
private readonly logger = new Logger(AdvisoryFeedbackPromptChainingService.name);
private chatbot = this.groq.completions;
constructor(@Inject(GROQ_CHAT_MODEL) private groq: Groq.Chat) {}
async generateReply(feedback: string): Promise<string> {
try {
const instruction = `You are a professional ESG advisor who can reply in the same language as the customer's feedback.
The reply is short and should also address the sentiment and topics of the feedback.`;
const messages: ChatCompletionMessageParam[] = [
{
role: 'system',
content: instruction,
},
{
role: 'user',
content: `Please identify the language used in the feedback. Give me the language name, and nothing else.
If the language is Chinese, please specify Traditional Chinese or Simplified Chinese.
If you do not know the language, give 'Unknown'.
Feedback: ${feedback}
`,
},
];
const response = await this.chatbot.create({
...MODEL_CONFIG,
messages,
});
const language = response.choices?.[0]?.message?.content || '';
this.logger.log(language);
messages.push(
{ role: 'assistant', content: language },
{
role: 'user',
content: `Identify the sentiment and topic of feedback and return the JSON output { "sentiment": 'POSITIVE' | 'NEUTRAL' | 'NEGATIVE', "topic": string }.`,
},
);
const analysis = await this.chatbot.create({
...MODEL_CONFIG,
messages,
});
const jsonAnalysis = JSON.parse(analysis.choices?.[0]?.message?.content || '') as SentimentAnalysis;
const { sentiment, topic } = jsonAnalysis;
this.logger.log(`sentiment -> ${sentiment}, topic -> ${topic}`);
const chainedPrompt = `The customer wrote a ${sentiment} feedback about ${topic} in ${language}. Please give a short reply.`;
messages.push(
{ role: 'assistant', content: `The sentiment is ${sentiment} and the topics are ${topic}` },
{ role: 'user', content: chainedPrompt },
);
this.logger.log(chainedPrompt);
this.logger.log(messages);
const result = await this.chatbot.create({
...MODEL_CONFIG,
messages,
});
const text = result.choices[0]?.message?.content || '';
this.logger.log(`text -> ${text}`);
return text;
} catch (ex) {
console.error(ex);
throw ex;
}
}
}
```
`AdvisoryFeedbackPromptChainingService` injects a chat model in the constructor.
- groq - the injected Groq Chat API, which provides an assistant to answer the user's queries.
- generateReply - In this method, a user asked the chat model about the language, sentiment and topics of the feedback. Then, the assistant gave the answers according to the instructions of the prompts. Next, I manually appended the queries and answers to the messages array to update the chat history. It was important because the chatbot referred to previous conversations to form the correct context to answer future questions. Finally, the chatbot generated replies in the same language based on sentiment and topics.
```typescript
const response = await this.chatbot.create({
...MODEL_CONFIG,
messages,
});
const language = response.choices?.[0]?.message?.content || '';
messages.push(
{ role: 'assistant', content: language },
{
role: 'user',
content: `Identify the sentiment and topic of feedback and return the JSON output { "sentiment": 'POSITIVE' | 'NEUTRAL' | 'NEGATIVE', "topic": string }.`,
},
);
```
`this.chatbot.create` returned the language; I then appended it, together with the next user query, to the messages array.
The process for generating replies ended with `generateReply` producing the text output. The method asked questions iteratively and wrote a descriptive prompt for the LLM to draft a reply that was polite and addressed the needs of the customer.
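To make this chat-history bookkeeping concrete, here is a framework-free sketch in plain TypeScript. `askModel` is a hypothetical stub standing in for `this.chatbot.create` (no Groq API call is made), so only the growth of the `messages` array across the three turns is shown:

```typescript
type Role = 'system' | 'user' | 'assistant';
interface ChatMessage { role: Role; content: string; }

// Stub standing in for this.chatbot.create(); a real call would hit the Groq API.
function askModel(messages: ChatMessage[]): string {
  return `answer #${messages.filter((m) => m.role === 'user').length}`;
}

const messages: ChatMessage[] = [
  { role: 'system', content: 'You are a professional ESG advisor.' },
  { role: 'user', content: 'Identify the language of the feedback.' },
];

// Turn 1: language. Append the assistant's answer before the next question,
// otherwise the model loses the context of the earlier turns.
const language = askModel(messages);
messages.push({ role: 'assistant', content: language });

// Turn 2: sentiment and topic.
messages.push({ role: 'user', content: 'Identify the sentiment and topic.' });
const analysis = askModel(messages);
messages.push({ role: 'assistant', content: analysis });

// Turn 3: the final reply, built on the full conversation so far.
messages.push({ role: 'user', content: 'Please give a short reply.' });
const reply = askModel(messages);

console.log(messages.length, reply); // 6 answer #3
```

The key point is that each answer is pushed back into the history before the next question, so every turn carries the full context forward.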
```typescript
// advisory-feedback.service.ts
// Omit the import statements to save space
@Injectable()
export class AdvisoryFeedbackService {
constructor(private promptChainingService: AdvisoryFeedbackPromptChainingService) {}
generateReply(prompt: string): Promise<string> {
return this.promptChainingService.generateReply(prompt);
}
}
```
`AdvisoryFeedbackService` injects `AdvisoryFeedbackPromptChainingService` and delegates to it to generate a reply from the prompt.
### Implement Advisory Feedback Controller
```typescript
// advisory-feedback.controller.ts
// Omit the import statements to save space
@Controller('esg-advisory-feedback')
export class AdvisoryFeedbackController {
constructor(private service: AdvisoryFeedbackService) {}
@Post()
generateReply(@Body() dto: FeedbackDto): Promise<string> {
return this.service.generateReply(dto.prompt);
}
}
```
The `AdvisoryFeedbackController` injects `AdvisoryFeedbackService` using Groq SDK and Gemma 7B model. The endpoint invokes the method to generate a reply from the prompt.
- /esg-advisory-feedback - generate a reply from a prompt
### Module Registration
The `AdvisoryFeedbackModule` provides `AdvisoryFeedbackPromptChainingService`, `AdvisoryFeedbackService` and `GroqChatModelProvider`. The module has one controller that is `AdvisoryFeedbackController`.
```typescript
// advisory-feedback.module.ts
// Omit the import statements due to brevity reason
@Module({
controllers: [AdvisoryFeedbackController],
providers: [GroqChatModelProvider, AdvisoryFeedbackPromptChainingService, AdvisoryFeedbackService],
})
export class AdvisoryFeedbackModule {}
```
Import AdvisoryFeedbackModule into AppModule.
```typescript
// app.module.ts
@Module({
imports: [throttlerConfig, AdvisoryFeedbackModule],
controllers: [AppController],
providers: [
{
provide: APP_GUARD,
useClass: ThrottlerGuard,
},
],
})
export class AppModule {}
```
### Test the endpoints
I can test the endpoints with cURL, Postman or Swagger documentation after launching the application.
```bash
npm run start:dev
```
The URL of the Swagger documentation is http://localhost:3001/api.
In cURL
```bash
curl --location 'http://localhost:3001/esg-advisory-feedback' \
--header 'Content-Type: application/json' \
--data '{
"prompt": "Looking ahead, the needs of our customers will increasingly be defined by sustainable choices. ESG reporting through diginex has brought us uniformity, transparency and direction. It provides us with a framework to be able to demonstrate to all stakeholders - customers, employees, and investors - what we are doing and to be open and transparent."
}'
```
### Dockerize the application
```
// .dockerignore
.git
.gitignore
node_modules/
dist/
Dockerfile
.dockerignore
npm-debug.log
```
Create a `.dockerignore` file for Docker to ignore some files and directories.
```
// Dockerfile
# Use an official Node.js runtime as the base image
FROM node:20-alpine
# Set the working directory in the container
WORKDIR /app
# Copy package.json and package-lock.json to the working directory
COPY package*.json ./
# Install the dependencies
RUN npm install
# Copy the rest of the application code to the working directory
COPY . .
# Expose a port (if your application listens on a specific port)
EXPOSE 3001
# Define the command to run your application
CMD [ "npm", "run", "start:dev"]
```
I added the Dockerfile that installs the dependencies, copies the source code, and starts the NestJS application in watch mode on port 3001.
```yaml
// docker-compose.yaml
version: '3.8'
services:
backend:
build:
context: .
dockerfile: Dockerfile
environment:
- PORT=${PORT}
- GROQ_API_KEY=${GROQ_API_KEY}
- GROQ_MODEL=${GROQ_MODEL}
ports:
- "${PORT}:${PORT}"
networks:
- ai
restart: unless-stopped
networks:
ai:
```
I added the `docker-compose.yaml` in the current folder, which was responsible for creating the NestJS application container.
### Launch the Docker application
```bash
docker-compose up
```
Navigate to http://localhost:3001/api to read and execute the API.
This concludes my blog post about using the Groq SDK and the Gemma 7B model to generate replies regardless of the language the feedback is written in. Generating replies with Generative AI reduces the effort a writer needs to compose a polite reply to any customer. I hope you like the content and continue to follow my learning experience in Angular, NestJS, Generative AI, and other technologies.
## Resources:
- Github Repo: https://github.com/railsstudent/fullstack-genai-prompt-chaining-customer-feedback/tree/main/nestjs-groq-customer-feedback
- Groq Chat Completion: https://console.groq.com/docs/text-chat
- Groq Cookbook: https://github.com/groq/groq-api-cookbook
| railsstudent |
1,883,632 | Secure by default: How WarpStream’s BYOC deployment model secures the most sensitive workloads | by Caleb Grillo Fundamentals of BYOC WarpStream’s Zero Disk Architecture Typically,... | 0 | 2024-06-11T17:45:04 | https://dev.to/warpstream/secure-by-default-how-warpstreams-byoc-deployment-model-secures-the-most-sensitive-workloads-257d | apachekafka, dataengineering, datastreaming | ---
title: "Secure by default: How WarpStream’s BYOC deployment model secures the most sensitive workloads"
published: true
date: 2024-06-10 17:21:30 UTC
tags: apachekafka,dataengineering,datastreaming
canonical_url:
---
by Caleb Grillo
### Fundamentals of BYOC

WarpStream’s Zero Disk Architecture
Typically, cloud data infrastructure products follow one of two deployment models:
1. **Fully self-managed,** where the customer purchases a software license and support but is ultimately responsible for deploying and managing the software themselves.
2. **Fully-hosted SaaS model** in which the vendor manages all the infrastructure in their own cloud environment and the customer simply receives an endpoint.
The Bring Your Own Cloud (BYOC) deployment model is a hybrid approach to cloud infrastructure that strikes a balance between these two extremes. Generally, it works like this: the software is split into two different components, a “data plane” (compute + storage) and a “control plane”. The control plane runs in the provider’s environment, and the data plane runs in the customer’s environment.
This deployment model has several benefits:
- **Data privacy:** Because the data never leaves your environment, you have greater control over who has access to it and under what circumstances.
- **Data sovereignty:** Data is always stored on resources that you control, so you don’t need to worry about data finding its way to geographical regions where it shouldn’t be.
- **Compliance:** The data plane is deployed in the customer environment, so strict compliance requirements can be fulfilled, and all traffic can be audited.
- **Cost optimization:** Because the infrastructure runs in your environment, you can control factors like instance types, networking configurations, and storage classes to optimize costs. You can also take advantage of committed use discounts, reserved instances, and savings plans to further optimize your costs. And perhaps most importantly for a networking-heavy system like Apache Kafka®, the combination of this deployment model and WarpStream’s Zero Disk Architecture eliminates virtually all networking fees, which can often account for more than 80% of the TCO of a traditional Kafka deployment.
- **Control:** You have control over the infrastructure that you deploy the software on, so you can choose your own networking topology, instance types, security settings, and storage services that you use.
BYOC makes a lot of sense for mission-critical, data-intensive, and networking-heavy systems like Kafka where throughput is often measured in the hundreds or even thousands of MiBs per second. But historically, BYOC for Kafka has been limited to niche use cases because Kafka (and other equivalent systems) are so stateful and difficult to manage that remotely administering them is almost impossible.
### The problem with BYOC for Kafka
Kafka and its derivatives have stateful architectures, with local disks that store partitions that need to be actively managed in order to prevent a variety of issues like: hot partitions, unbalanced storage, under-replicated topic-partitions, etc. This is why there are so many vendors offering a fully-managed Kafka solution, but relatively few that offer a BYOC variant. Managing Kafka in your own environment is difficult enough, but managing Kafka in someone _else’s_ environment is even more challenging.
Since Kafka clusters need to be constantly managed, existing BYOC deployment models for Kafka require providing the vendor with high level access to your environment so their personnel can keep the cluster healthy and mitigate incidents when they inevitably occur. The BYOC vendor often has the ability to manage a huge range of cloud infrastructure, including security policies and resources for your VPC, service accounts, subnetworks, IAM roles, firewall rules, and storage buckets.
But wouldn’t it be better if external access wasn’t required at all?
### Zero Access BYOC, secure by default
WarpStream’s primary deployment model is BYOC, but it works a little bit differently from the rest. Unlike most BYOC deployment models, WarpStream was designed to operate with _no access_ to the environment that the Agents and object storage are running in. The only requirement for running the WarpStream Agents is that they have permission to access an object storage bucket in which they can store data, and that they have the ability to establish an outbound connection to the WarpStream Cloud control plane. That’s it. No IAM roles or permissive security policies are required.
This is possible because WarpStream was designed from the ground up with a [Zero Disk Architecture](https://www.warpstream.com/blog/zero-disks-is-better-for-kafka) with not only full separation of compute and storage, but also separation of [_data_ _from metadata_](https://docs.warpstream.com/warpstream/overview/architecture)_._ This architecture makes managing the WarpStream Agents trivial. The Agents are just stateless compute, so there are no leader elections, no partition rebalances, no disk resizing, and no manual operations required to keep the cluster healthy. WarpStream clusters can be seamlessly scaled in, out, up, or down, with virtually no effort, just like a traditional web server.
WarpStream leverages this Zero Disk Architecture to provide a very high level of service with very little external control by using a shared responsibility model that separates storage, compute, and metadata.

The cloud provider manages the storage, the customer manages the stateless compute (i.e., the WarpStream Agents), and WarpStream Cloud manages the metadata / consensus layer. This means that only metadata is transferred from your environment to WarpStream’s, and no raw data _ever_ leaves your environment.
Of course, this zero-access BYOC deployment model does have a tradeoff: WarpStream users _are responsible_ for managing their own (stateless) compute. Fortunately, this is the one thing that _everyone_ running software in the cloud knows how to do: deploy and scale stateless containers! Of course, we do our best to make this easy by providing infrastructure as code primitives like our [Terraform Provider](https://registry.terraform.io/providers/warpstreamlabs/warpstream/latest/docs) and [Helm chart](https://github.com/warpstreamlabs/charts/tree/main/charts/warpstream-agent).
In exchange for assuming responsibility for deploying and managing the stateless Agents, WarpStream’s users get a deployment model that exposes them to _far less risk_ of a data breach than any other cloud-native model. By design, there is _no way_ for WarpStream Cloud to access your data, even if WarpStream’s cloud account was breached by a hostile actor, or WarpStream was compelled by a government agency.
In fact, WarpStream’s security model is so strong that we even have customers using it for their production workloads in AWS GovCloud regions. While no system can credibly claim to be 100% safe, WarpStream’s design lends itself to a stronger security posture than any of the BYOC products that came before.
### Zero trust makes BYOC safe
WarpStream makes the BYOC model safer and more secure than any alternative. This was a deliberate design choice that was made possible by WarpStream’s Zero Disk Architecture. With a zero-trust BYOC model, our customers truly get the best of both worlds: an (almost) fully managed user experience, but with all of the cost and security benefits of running on their own infrastructure.
To learn more about WarpStream’s secure-by-default BYOC deployment model, [contact us](https://www.warpstream.com/contact-us). Or, if you’re ready to get started, you can [sign up](https://console.warpstream.com/signup) and get up and running with WarpStream in just a few minutes. No credit card is required to get started, and your first $400 is on us. | warpstream |
1,883,545 | MOBILTEL - Mobiltel la voz de tu empresa | Mobiltel, a Mexican telecommunications company founded in 2009, emerged from the merger of two... | 0 | 2024-06-10T17:16:26 | https://dev.to/mobiltel/mobiltel-mobiltel-la-voz-de-tu-empresa-5aen | telecommunication, crm, it, saas | Mobiltel, a Mexican telecommunications company founded in 2009, emerged from the merger of two experienced integrator groups with over 15 years of expertise in voice and data services. Specializing in VoIP cloud solutions, Mobiltel offers a comprehensive suite of services including Virtual PBX, Virtual Numbers, SIP Trunking, CRM, SMS, and Software Call Centers, providing innovative and reliable telecommunications solutions to its clients.
[https://mobiltel.com.mx/conmutador-virtual](https://mobiltel.com.mx/conmutador-virtual) | mobiltel |
1,878,137 | Enhancing Angular Reactive Forms with a Custom Image Uploader | Introduction In this article, I'll guide you through building a special component that... | 27,664 | 2024-06-10T17:14:18 | https://dev.to/cezar-plescan/level-up-your-angular-code-a-transformative-user-profile-editor-project-part-3-refactoring-image-uploading-3ndi | angular, tutorial, fileupload, refactoring | ## Introduction
In this article, I'll guide you through building a special component that makes **uploading images** in forms much easier. This component won't just make the form templates look cleaner, it'll also work seamlessly with the existing reactive forms setup, letting you treat image uploads just like any other form field.
Throughout this article, I'll walk you through the following steps:
- **Creating the component**: Generate the new component and migrate the existing code.
- **Displaying the saved image**: Implement the logic to display the currently saved image.
- **Selecting a new image**: Add functionality to allow users to choose and preview a new image before uploading it.
- **Form integration**: Integrate the custom component with Angular reactive forms system.
- **Removing hardcoded paths**: Refactor the code to avoid hardcoding image URLs and instead fetch them from the server.
- **Custom upload button**: Create a custom upload button for a consistent look across browsers.
_**A quick note**: Before we begin, I'd like to remind you that this article builds on concepts and code introduced in previous articles of this series. If you're new here, I highly recommend that you check out those articles first to get up to speed. You can find the starting point for the code I'll be working with in the `13.validation-error-directive` branch of the https://github.com/cezar-plescan/user-profile-editor/tree/13.validation-error-directive repository._
## Identifying the current issues
Let's take a look at our form in the `user-profile.component.ts` file, which includes controls for name, email, address and avatar:{% embed https://gist.github.com/cezar-plescan/fae1e74f125eb1977d4dea190b642fed %}When I examine the template in `user-profile.component.html` I can easily identify the `input` elements associated with the first 3 controls, thanks to the `formControlName` directive provided by Angular's Reactive Forms Module.{% embed https://gist.github.com/cezar-plescan/f8295980e88b84f78a517949356d7eff %}These `input` elements provide both the display and modification of the form control value. However, when it comes to images, my current approach separates these functionalities.{% embed https://gist.github.com/cezar-plescan/628de75a97870355c0b0f407eebf1bc3 %}I rely on an `<img>` tag to display the existing image, but for uploading a new image, I use a separate `<input type="file"/>` element. This fragmented approach makes the code less maintainable and creates an unintuitive user experience. From my point of view, users should have a clear and consistent way to interact with an image in a form, whether they're viewing the current image or selecting a new one for upload.
To address this, I'll create a dedicated Angular component to manage both image display and upload. I want this component to be able to receive the `formControlName` directive, just like any other form control.
But why do I create a component instead of a directive, like I did for the validation errors? While directives are excellent for manipulating existing elements, a component provides both a view (for displaying the image and selecting a new one for uploading) and its associated logic.
By extracting this functionality into a reusable component, we'll achieve some advantages:
- **Simplified templates**: our form templates become cleaner and more focused on the overall structure.
- **Single Responsibility Principle**: the component adheres to SRP, as it encapsulates all the logic and behavior related to image handling, keeping the parent component concerns separate.
- **Reusability**: we can easily use this component in any form throughout our application.
Let's explore the steps involved in creating this dedicated image form control component.
## Implementing the new component
### Create the new component files
1. Generate the component: I begin by generating a component named **`image-form-control`** within the `src/app/shared/components` folder, using the Angular CLI: `ng generate component image-form-control`.
2. Migrate Existing Code: Next, I'll transfer the relevant code from the form template `user-profile.component.html` into `image-form-control.component.html`. Similarly, I'll move the associated methods (`getAvatarFullUrl` and `onImageSelected`) from the `user-profile.component.ts` component into `image-form-control.component.ts`. Remember to remove these methods from the `UserProfileComponent` class.{% embed https://gist.github.com/cezar-plescan/0fd0ae9addfc7f196960339042a348f4 %}{% embed https://gist.github.com/cezar-plescan/9c6f6db642b5670665f36280036eaa7b %}
3. Integrate the component: Now, I'll update the form template in `user-profile.component.html` to use the `ImageFormControlComponent`:{% embed https://gist.github.com/cezar-plescan/f18fe573fd43460e7997a2d0209ff3a7 %}
At this stage the application is broken. We still have some work to do to make the new component fully functional.
The first visible error is that the `form` property doesn't exist in the `ImageFormControlComponent`. Additionally, we need to instruct the component to interact with the reactive form. In the following sections I'll describe how to resolve these challenges and make the component a fully functional form control.
I'll break down the implementation into two main parts:
- displaying the saved image
- selecting a new image to upload
_**Note**: This new component is essentially a **custom form control**. If you're new to this topic, I highly recommend you to refer to this [comprehensive guide](https://blog.angular-university.io/angular-custom-form-controls/) from the Angular University blog. My implementation will simply follow the steps from this guide._
### Displaying the image
Let's take a closer look at how our component is used in the form template:{% embed https://gist.github.com/cezar-plescan/f18fe573fd43460e7997a2d0209ff3a7 %}We notice that it only has one input property, `formControlName="avatar"`, without any direct reference to the form or the avatar form control itself. Stay with me to explore how to connect our component to the Reactive Forms setup.
For now, I'll just use the `<img>` tag in our template:{% embed https://gist.github.com/cezar-plescan/f623f614a7fbf63a0a3df3d04518f5ff %}Now I need to define how the `imgSrc` property is set in `ImageFormControlComponent` class. Following the [Angular custom form controls guide](https://blog.angular-university.io/angular-custom-form-controls/), I need to perform several steps:
1. declare the `imgSrc` property in the component class
2. the component class should implement the `ControlValueAccessor` interface; this helps to communicate with the reactive forms system
3. implement the `writeValue` method (I'll ignore the other methods for now); this method will be called by Angular to set the initial image source when the form loads
4. register the component in the Dependency Injection system; this tells Angular that this component should act like a form control{% embed https://gist.github.com/cezar-plescan/261fd279afdecf8582a592700dae01af %}Now, our component can read the avatar value from the form and display the image.
**Note**: The `writeValue` method contains a hardcoded string for the images path. This is not ideal and I'll address it in a later section.
To understand how our custom form control integrates with Angular reactive forms, let's dive a bit deeper into the internals.
#### NG_VALUE_ACCESSOR injection token
Let's see why we need this token and how it is used internally. I'll jump straight into the `formControlName` directive [source code](https://github.com/angular/angular/blob/17.3.3/packages/forms/src/directives/reactive_directives/form_control_name.ts#L133). The relevant line is in the constructor definition:
`@Optional() @Self() @Inject(NG_VALUE_ACCESSOR) valueAccessors: ControlValueAccessor[]`
What does this actually mean?
**`NG_VALUE_ACCESSOR`** is an injection token provided by Angular. Its purpose is to act as a lookup key for finding the appropriate `ControlValueAccessor` implementation for a given form control. In the component `providers` array, we're essentially saying, "Hey Angular, when you encounter a form control that needs a `ControlValueAccessor`, use my `ImageFormControlComponent` as the implementation." When Angular processes the `formControlName` directive, it looks for a `NG_VALUE_ACCESSOR` provider in the injector hierarchy. Since we've registered our component as the provider, Angular knows how to use the component methods (`writeValue`, `registerOnChange`, etc.) to interact with the form control.
The `formControlName` directive, when applied to an element (in our case, the `app-image-form-control` component), uses Angular's dependency injection system to look for providers of the `NG_VALUE_ACCESSOR` token on that specific element itself. It's not looking for a global provider or one in a parent component. This is indicated by the `@Self()` decorator on the injection site in the directive's constructor.
When the `formControlName` directive is applied to our component, it finds this provider and uses it to get an instance of the `ControlValueAccessor`. Since we've provided our component itself, the directive gets a direct reference to it.
#### Executing the `writeValue` method
Here is a simplified process of how the method is called internally:
1. Form Control Setup: When we create a `FormControl` in our component (either directly or through `FormBuilder`) and bind it to our custom form control using `formControlName` in the template, Angular establishes a connection between them.
2. Initial Value Setting: If we've provided an initial value to the `FormControl` (e.g., through the `value` property or `setValue` method), Angular will call the `writeValue` method on our custom form control (the `ImageFormControlComponent`) to set that initial value.
3. Value Changes: Whenever the value of the `FormControl` changes (e.g., through user input, `setValue`, or `patchValue`), Angular will again call `writeValue` on our component to update it with the new value. This ensures that the UI element stays in sync with the form control's value.
### Selecting an image to upload
The current template contains only the `<img>` tag. For selecting an image file to upload, I have to add the `<input type="file"/>` element back into the template:{% embed https://gist.github.com/cezar-plescan/7fb0bfc28907e6707958c525a6070aea %}I've also included the handler `onImageSelected` for the `change` event, which will be triggered when the user selects a file. This is necessary to display the selected image in place of the original one, before uploading it to the server. Here is its implementation:{% embed https://gist.github.com/cezar-plescan/2191632bcf5fcaaa2518fdbfa5df8aa9 %}I want to upload the image to the server only when the user explicitly submits the form. Why? Because the user could change their mind after selecting an image; I don't want to send the image to the server immediately, only when the user is ready to save the form.
#### Serving files locally
The selected image file doesn't exist on the server yet and you might wonder where it is served from. The answer lies in the `URL.createObjectURL(file)` method of the browser's API. This method takes a `File` object (or a `Blob`) and generates a URL string. This URL doesn't point to a physical file on our server; instead, it references the file data directly in the browser's memory.
The generated URL is a special type called a blob URL (e.g., `blob:http://localhost:4200/d9856eeb-2405-4388-8894-064e56c254a8`). We can use this URL as the `src` attribute of an `<img>` tag to display the selected image in the browser without needing to upload it to a server first. You can verify this by inspecting the `<img>` element in DevTools after selecting an image file; you'll notice the `src` attribute value has a format similar to the example above.
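To see this API in isolation, here is a minimal, framework-free sketch. The `Blob` stands in for the `File` the user selects, and recent Node.js versions expose these globals as well:

```typescript
// Minimal illustration of the blob URL mechanism (a browser API; recent
// Node.js versions also expose Blob and URL.createObjectURL globally).
const file = new Blob(['fake image bytes'], { type: 'image/png' });

// The returned string references the data held in memory, not a server
// resource, e.g. "blob:http://localhost:4200/d9856eeb-...".
const blobUrl = URL.createObjectURL(file);
console.log(blobUrl.startsWith('blob:')); // true

// Release it once the image is no longer displayed.
URL.revokeObjectURL(blobUrl);
```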
#### Revoking blob URLs
Blob URLs are temporary. They only exist as long as the document in which they were created remains open. Once the document is closed or navigated away from, the blob URL is automatically revoked by the browser. It's a good practice to release blob URLs when they are no longer needed using `URL.revokeObjectURL()`. This helps avoid potential memory leaks, especially if you're dealing with large image files. Let's see how to incorporate this method in our component:{% embed https://gist.github.com/cezar-plescan/7c217c43c1bc8e56e5db74f2d3205f31 %}To verify that the blob URLs are indeed being discarded, follow these steps:
- select an image file
- go to DevTools and open the image source URL in a new tab
- select another image from the form
- go to the previously opened tab and refresh it
- you should no longer be able to view the previous image, confirming that the blob URL has been revoked
#### Notify the form about the new image selection
At this stage, you might have noticed that the Save and Reset buttons don't become active after selecting a new image. This isn't what we expect. When typing into the text input fields, the buttons become active, and we expect the same behavior when selecting a new image. This is because the form doesn't yet know that we've changed the image.
Let's see how to address this. The reactive forms system provides a way to notify the form control that its value has changed by calling a function that Angular registers with our custom form control. The [Angular custom form controls guide](https://blog.angular-university.io/angular-custom-form-controls/) provides a detailed explanation of this process.
Here is the updated `image-form-control.component.ts` file:{% embed https://gist.github.com/cezar-plescan/883aae40f4dbf8a01262a665169d745f %}I've also added a template reference variable for the `<input>` element, which is used in the component as a `ViewChild`:{% embed https://gist.github.com/cezar-plescan/ae83b1b7d5530cb8668c3b919c8a6e78 %}In the `user-profile.component.ts` file I've removed all references to the `fileInput` property, as it's now managed entirely by the `ImageFormControlComponent`; two methods were affected:{% embed https://gist.github.com/cezar-plescan/09648e8210f43f98231e8bb08b0a0c86 %}
Now I'll explain the key changes step by step:
##### `registerOnChange` method
Angular calls the `registerOnChange` method, part of the `ControlValueAccessor` interface, internally. The only action required here is to store a reference to the internal function that will be used to notify the form control when its value changes:{% embed https://gist.github.com/cezar-plescan/08f4632817db8027af1b134b9da369b5 %}Now, we need to call this stored function whenever the user selects a new image. In the `onImageSelected` method I've added `this.onChange?.(file)` which will internally notify the form control about the change. This results in the form buttons becoming enabled after selecting an image.
##### Clear the image filename after upload
There's one refinement to make here. Currently, after the form is submitted with a new image, the filename remains displayed in the file input element. To fix this, we need to clear the file input value after the form is submitted and the image is uploaded. To achieve this, we have to modify the `writeValue` method to also clear the file input.
First, we need access to the file input element in the template. I've attached a template reference variable to the element: `<input #fileInput ... />`. Then, to access it in the component, I've used the `@ViewChild` decorator: `@ViewChild('fileInput', {static: true}) protected fileInput!: ElementRef<HTMLInputElement>`.
Finally, to clear the filename, I've simply set its value to an empty string within the `writeValue` method: `this.fileInput.nativeElement.value = ''`.
With these changes, the image upload component is now integrated with the reactive form.
## Improvements
In this section I'll explore some enhancements we can make to our image form control component, taking its functionality to a new level.
### Custom upload button
The current implementation of selecting an image to upload is rendered differently across browsers. There's no way to consistently style a plain input of type file. A solution is to hide the default input element and create a custom button that triggers it behind the scenes.
Here is the updated template:{% embed https://gist.github.com/cezar-plescan/4d58915892e796e2aa31d82a6542b001 %}I've added some styling to the component elements:{% embed https://gist.github.com/cezar-plescan/98bb011bd94febe0f0cb7965286caf89 %}Don't forget to import the necessary Angular Material modules `MatButtonModule` and `MatIconModule` into the component.
The trick is that the custom button delegates the click event to the hidden file input. With this approach we can fully customize the appearance of the upload button. If necessary, the filename of the selected image can be extracted and displayed, using additional logic in the template and the component, but I've omitted that for simplicity.
### Remove hardcoded path for image URLs
In the previous implementation, I've hardcoded the path for image URLs within the component. However, it's not ideal to have the client-side responsible for constructing URLs. Ideally, the server should provide the complete URL for each image.
There are several advantages of this approach:
- **Environment Flexibility**: By dynamically generating URLs, our application becomes more adaptable to different environments (development, staging, production) without requiring manual changes to hardcoded paths.
- **Improved Maintainability**: Centralizing URL generation on the server makes it easier to manage and update image paths if needed.
To address this, I've updated the `server.ts` file. When the server sends a response containing the user data, it will include the full URL for the avatar image, while keeping only the filename stored in the `db.json` file:{% embed https://gist.github.com/cezar-plescan/fedb25330cc9340706eaa116c0c43861 %}With this change, the `writeValue` method in the component becomes simpler:{% embed https://gist.github.com/cezar-plescan/c26f580ebf32af30258701f12184e4a7 %}Remember to restart the server to see these changes.
## Additional Resources
Here are some additional resources that will help you dive deeper into the concepts covered in this article.
#### Custom form control
- [Angular Custom Form Controls - Complete Guide](https://blog.angular-university.io/angular-custom-form-controls/): A comprehensive guide to creating custom form controls in Angular, covering the ControlValueAccessor interface, validation, and more.
- [Never Again Be Confused When Implementing ControlValueAccessor in Angular Forms](https://angularindepth.com/posts/1055/never-again-be-confused-when-implementing-controlvalueaccessor-in-angular-forms): An in-depth article explaining the intricacies of the ControlValueAccessor interface and how to use it effectively.
- [Custom Form Controls in Angular](https://medium.com/@oluwaetosin/custom-form-controls-in-angular-d5a63d1a1d60): A practical guide with examples on building various types of custom form controls.
#### File uploading
- [Angular file upload](https://blog.angular-university.io/angular-file-upload/): A detailed tutorial on file uploads in Angular, including how to create custom upload buttons.
- [How to upload files in HTML?](https://imagekit.io/blog/how-to-upload-files-in-html/): A comprehensive guide to file uploads in HTML, covering the basics of file input elements, JavaScript interactions, and best practices.
- [A comprehensive overview of the File API and its capabilities for file selection and reading](https://developer.mozilla.org/en-US/docs/Web/API/File_API/Using_files_from_web_applications)
#### Custom upload button
- [How to create a custom file upload button using HTML, CSS, and JavaScript](https://dev.to/faddalibrahim/how-to-create-a-custom-file-upload-button-using-html-css-and-javascript-1c03): A step-by-step guide with code examples on building a custom upload button.
- [Styling an input type="file" button](https://stackoverflow.com/questions/572768/styling-an-input-type-file-button): A Stack Overflow discussion on different ways to style the file input button.
## Conclusion: Building powerful forms with custom controls
In this article, we've taken a significant step towards mastering Angular forms by creating a reusable `ImageFormControlComponent`. We've seen how to:
- Transform a basic image upload element into a full-fledged form control.
- Integrate it seamlessly with Angular's reactive forms system.
- Handle image display, selection, and preview.
- Ensure efficient memory management with blob URLs.
- Address common challenges like cross-browser styling inconsistencies.
This custom component not only improves our code but also enhances the user experience by providing a clear and consistent way to manage image uploads within your forms.
I encourage you to experiment with the code from this article and explore the possibilities for further enhancements. You can find the code for this project at: https://github.com/cezar-plescan/user-profile-editor/tree/14.image-form-control.
If you have any questions, suggestions, or experiences you'd like to share, please leave a comment below! Let's continue the conversation and learn together. | cezar-plescan |
1,883,544 | Explorando ORM: Facilitando o Desenvolvimento com Bancos de Dados | O desenvolvimento de aplicações modernas frequentemente exige uma interação eficiente e segura com... | 0 | 2024-06-10T17:14:14 | https://dev.to/iamthiago/explorando-orm-facilitando-o-desenvolvimento-com-bancos-de-dados-33m9 | database, orm, tutorial, developer | Modern application development frequently demands efficient and secure interaction with databases. One of the most powerful tools available to developers for achieving this efficiency is the ORM (Object-Relational Mapping). In this article, we'll explore what an ORM is, its advantages and disadvantages, and how it can transform your development process.
## What Is an ORM?
ORM, or Object-Relational Mapping, is a programming technique that eases the conversion of data between incompatible systems using object orientation. In simpler terms, an ORM lets you manipulate a database using the programming language of your choice, as if you were manipulating objects in your code.
For example, instead of writing complex SQL queries to interact with the database, you can use methods and properties of classes defined in your code. These classes are mapped to database tables, and their instances are mapped to rows within those tables.
## Advantages of ORM
### 1. **Increased Productivity**
One of the biggest advantages of an ORM is productivity. Developers can work with databases in a more natural way, using their object-oriented programming skills. This eliminates the need to write long, complex SQL queries, letting them focus on business logic.
### 2. **Easier Maintenance**
With an ORM, changes to the database schema are easier to manage. Changes to tables can be reflected in the corresponding classes, often without manually altering SQL code.
### 3. **Security**
ORMs help prevent SQL injection attacks, one of the most common vulnerabilities in web applications. Since queries are generated automatically by the ORM framework, the risk of injecting malicious code is significantly reduced.
### 4. **Portability**
ORMs abstract away the logic specific to the underlying database. This means you can switch from one database to another (for example, from MySQL to PostgreSQL) with minimal changes to your code.
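To make the mapping concrete, here is a tiny hand-rolled sketch of the idea. It is illustrative only, not a real ORM; production frameworks also handle escaping, relations, change tracking, and much more:

```typescript
// Sketch of what an ORM does under the hood: a class is mapped to a table,
// an instance to a row. Illustrative only, not a real ORM.
class User {
  constructor(
    public id: number,
    public name: string,
  ) {}
}

// A minimal "mapper" that turns an object into an INSERT statement.
function insertSql(table: string, row: Record<string, unknown>): string {
  const cols = Object.keys(row).join(', ');
  const vals = Object.values(row)
    .map((v) => (typeof v === 'number' ? String(v) : `'${String(v)}'`))
    .join(', ');
  return `INSERT INTO ${table} (${cols}) VALUES (${vals});`;
}

const user = new User(1, 'Ada');
console.log(insertSql('users', { id: user.id, name: user.name }));
// INSERT INTO users (id, name) VALUES (1, 'Ada');
```

With a real ORM, you would never write this mapper yourself; you would simply call something like `save(user)` and the framework would generate an equivalent statement for you.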
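A simplified sketch of the difference: the naive string concatenation that an ORM replaces, versus the placeholder-plus-parameters form it generates (illustrative only; real drivers bind the parameters at the database level):

```typescript
// Vulnerable: user input is concatenated straight into the SQL string.
function naiveQuery(name: string): string {
  return `SELECT * FROM users WHERE name = '${name}';`;
}

// What an ORM emits instead: a placeholder in the SQL text, with the
// value travelling separately as a bound parameter.
function parameterizedQuery(name: string): { sql: string; params: string[] } {
  return { sql: 'SELECT * FROM users WHERE name = ?;', params: [name] };
}

const evil = "x'; DROP TABLE users; --";
console.log(naiveQuery(evil)); // the injected SQL ends up inside the statement
console.log(parameterizedQuery(evil)); // SQL text is unchanged; input stays data
```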
## Disadvantages of ORM
### 1. **Performance**
Although ORMs offer many advantages, they can introduce performance overhead. Because the ORM must map objects to tables and vice versa, operations can be slower than hand-optimized SQL queries.
### 2. **Learning Curve**
For developers used to SQL, learning an ORM can be challenging. Understanding how the ORM maps objects to the database and how it generates queries can take a significant amount of time.
### 3. **Hidden Complexity**
ORMs can hide the complexity of database operations, which can lead to problems that are hard to diagnose and solve. Performance issues or bugs can be more complicated to debug when the ORM's abstraction layer is involved.
## Popular ORM Examples
There are several ORM frameworks available, each with its own features and capabilities. Some of the most popular include:
- **Hibernate** (Java)
- **Entity Framework** (C#)
- **Django ORM** (Python)
- **SQLAlchemy** (Python)
- **ActiveRecord** (Ruby)
## Conclusion
ORMs are powerful tools that can increase productivity and security when developing applications that interact with databases. However, like any tool, they come with their own limitations and challenges. By understanding their advantages and disadvantages, developers can make better-informed decisions about when and how to use ORMs in their projects.
If you want to learn more about development and best practices, check out my GitHub profile: [IamThiago-IT](https://github.com/IamThiago-IT). There you'll find several projects and resources that can help you improve your skills as a developer.
Thanks for reading, and I hope this article has clarified the concept and usefulness of ORMs in software development. See you next time! | iamthiago
1,883,359 | Deploying Remix-Vite on Lambda using Pulumi | A bit of context Remix is a very cool React-based framework that makes the final jump back... | 0 | 2024-06-10T15:08:35 | https://dev.to/gautierblandin/deploying-remix-vite-on-lambda-using-pulumi-41oj | ## A bit of context
[Remix](https://remix.run/) is a very cool React-based framework that makes the final jump back from the browser to the server. After starting with SPAs that fully ran in the browser, [Next.js](https://nextjs.org/) got the idea of rendering React components in the server, reducing the initial load time and improving crawlability.
Remix takes this a step further: while Next.js cannot render dynamic content on the server, Remix can. As a user, this means even faster loading times for any kind of dynamic content, and as a developer, you don't need to think about server-side vs client-side components. You just write React code, and it works.
- - -
## Deploying Remix
Let's create a Remix project, build it, and see what we get.
```bash
npx create-remix@latest remix-aws-tutorial -y
cd remix-aws-tutorial
npm run build
```
The build directory now has the following content:
```none
├── client
│ ├── assets
│ │ ├── _index-B17S9f7F.js
│ │ ├── components-BAmE7OwT.js
│ │ ├── entry.client-jPehgn16.js
│ │ ├── jsx-runtime-56DGgGmo.js
│ │ ├── manifest-3ad53534.js
│ │ └── root-LChrk_Sm.js
│ └── favicon.ico
└── server
└── index.js
```
The server directory contains a single file with all the server code. It is capable of understanding everything about an HTTP request, loading the right data, and sending an HTML document back to the client.
The client directory contains all the static assets that the client needs to load after receiving the initial HTML response.
In order to deploy our Remix application, we can start to understand what we're going to need:
* Something to host and serve the static assets
* A way to run the server
* A way to send HTTP requests to the server and transmit the responses to the client
- - -
## Understanding the architecture
Fortunately for us, our requirements map neatly to a simple serverless architecture hosted on AWS.
* S3 is a great service for hosting and serving static assets
* Lambda functions can run our server code
* API Gateway handles HTTP requests and can forward them to Lambda
* CloudFront can map between S3 and API Gateway, and provides caching and other CDN features
Visually, the architecture we're going to implement looks like this:

- - -
## Creating a working server bundle
When Remix is built, it creates a single index.js file that contains all the server code, with a handler using the [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API), which is not directly compatible with the payload format of API Gateway.
In order to make it compatible, we need an adapter. There is an officially maintained adapter that is compatible with API Gateway HTTP APIs, called `@remix-run/architect`. Let's install it.
```bash
npm install @remix-run/architect
```
Now, let's create a server.ts file that will be the entry point for the server.
```typescript
import { createRequestHandler } from '@remix-run/architect';
import * as build from './build/server/index.js';
export const handler = createRequestHandler({
build,
});
```
Additionally, API Gateway forwards the stage name inside the request path, so we need to update the adapter code to remove the stage name before calling the Remix handler. We're going to assume that the stage name is "dev", but you can change it to whatever you want. We also need to install the `aws-lambda` type definitions for the handler signature:
```bash
npm install -D @types/aws-lambda
```
Update server.ts to remove the stage name at runtime:
```typescript
import { createRequestHandler } from '@remix-run/architect';
import * as build from './build/server/index.js';
import { APIGatewayProxyHandlerV2 } from 'aws-lambda';
const requestHandler = createRequestHandler({
build,
});
export const handler = (...args: Parameters<APIGatewayProxyHandlerV2>) => {
const [apiGatewayEvent, ...rest] = args;
apiGatewayEvent.rawPath = apiGatewayEvent.rawPath.replace(/^\/dev/, '');
apiGatewayEvent.requestContext.http.path = apiGatewayEvent.requestContext.http.path.replace(/^\/dev/, '');
return requestHandler(apiGatewayEvent, ...rest);
};
```
We are now ready to bundle our adapter and remix server into a single file that we'll deploy to Lambda. Let's install esbuild:
```bash
npm install esbuild
```
And create a build.cjs file to configure the build process:
```javascript
const esbuild = require('esbuild');
esbuild
.build({
entryPoints: ['server.ts'],
bundle: true, // Bundle all dependencies into one file
platform: 'node',
target: 'node20',
external: ['node:stream'], // Keep Node.js built-ins external
outfile: 'build/lambda/index.cjs',
sourcemap: true,
format: 'cjs',
})
.catch(() => process.exit(1));
```
Finally, we need to install architect dependencies for the build process to work:
```bash
npm install -D @aws-sdk/client-apigatewaymanagementapi @aws-sdk/client-dynamodb @aws-sdk/client-sns @aws-sdk/client-sqs @aws-sdk/client-ssm @aws-sdk/lib-dynamodb
```
Let's build our lambda function handler:
```bash
node build.cjs
```
The build directory now contains a lambda/index.cjs file, with the whole code of our server, ready to be deployed!
- - -
## Declaring the infrastructure
### Setting up Pulumi
Pulumi is an Infrastructure-as-Code (IaC) tool that enables us to write infrastructure code directly in TypeScript. It supports multiple cloud providers and does not rely on CloudFormation templates, instead calling the AWS APIs directly under the hood. If you wish to use a different IaC tool for this section, anything supporting AWS will work; you'll simply need to adapt the resource declarations.
First, we'll need to [install the Pulumi CLI](https://www.pulumi.com/docs/install/).
Once that is done, let's create an infrastructure directory, and initialize a Pulumi project:
```bash
mkdir infrastructure
cd infrastructure
pulumi new aws-typescript
```
We also need to configure our AWS credentials to be able to deploy our infrastructure:
```bash
pulumi config set aws:accessKey <your-access-key>
pulumi config set aws:secretKey <your-secret-key> --secret
```
Finally, we need to login to the Pulumi CLI so that we can deploy using the Pulumi engine:
```bash
pulumi login
```
With everything set up, we can proceed to write the infrastructure code.
- - -
### Creating an S3 bucket for the static assets
The first thing we need is an S3 bucket for our static assets. It will need public ACLs and CORS rules to allow our CloudFront distribution to access the bucket.
We also need to install the @pulumi/synced-folder package to automatically sync the build/client directory and the S3 bucket.
Let's create all of this:
```bash
npm install @pulumi/synced-folder
```
```typescript
// index.ts
import * as aws from '@pulumi/aws';
import * as synced from '@pulumi/synced-folder';
export const bucket = new aws.s3.Bucket('bucket', {
corsRules: [
{
allowedOrigins: ['*'],
allowedMethods: ['GET', 'HEAD'],
allowedHeaders: [],
exposeHeaders: [],
maxAgeSeconds: 300,
},
],
});
// Disable block all public access
const blockPublicAcls = new aws.s3.BucketPublicAccessBlock('public-access-block', {
bucket: bucket.bucket,
blockPublicAcls: false,
});
// Needed to allow public-read ACL on the objects
const ownershipControls = new aws.s3.BucketOwnershipControls('ownership-controls', {
bucket: bucket.bucket,
rule: {
objectOwnership: 'ObjectWriter',
},
});
// Automatically sync the client directory to the S3 bucket
new synced.S3BucketFolder(
'synced-folder',
{
path: '../build/client',
bucketName: bucket.bucket,
acl: 'public-read',
},
{ dependsOn: [ownershipControls, blockPublicAcls] },
);
```
Now, let's deploy our infrastructure:
```bash
pulumi up
```
Once the deployment is done, by logging into the AWS console, we can see the S3 bucket with all the build/client content.
Let's make our server now!
- - -
### Creating a lambda function for the server
Let's continue editing our index.ts file to add a lambda function that will act as our server.
We will need:
* A role to give to the lambda function
* The managed Basic Execution Role policy
* The lambda function itself
```typescript
// Add the import at the top of the file
import * as pulumi from '@pulumi/pulumi';
const lambdaRole = new aws.iam.Role('lambdaRole', {
assumeRolePolicy: {
Version: '2012-10-17',
Statement: [
{
Action: 'sts:AssumeRole',
Principal: {
Service: 'lambda.amazonaws.com',
},
Effect: 'Allow',
Sid: '',
},
],
},
});
new aws.iam.RolePolicyAttachment('lambdaRoleAttachment', {
role: lambdaRole,
policyArn: aws.iam.ManagedPolicy.AWSLambdaBasicExecutionRole,
});
const lambda = new aws.lambda.Function('lambdaFunction', {
code: new pulumi.asset.AssetArchive({
'.': new pulumi.asset.FileArchive('../build/lambda'),
}),
runtime: aws.lambda.Runtime.NodeJS20dX,
role: lambdaRole.arn,
handler: 'index.handler',
});
export const lambdaArn = lambda.arn;
```
If you want, you can deploy and test your lambda function to test it manually.
```bash
pulumi up
```
Once the deployment is done, you can test your lambda function by calling it from the AWS console with the following mock payload:
```json
{
"version": "2.0",
"routeKey": "$default",
"rawPath": "/dev",
"rawQueryString": "",
"headers": {
"Header1": "value1",
"Header2": "value2"
},
"queryStringParameters": {},
"requestContext": {
"accountId": "123456789012",
"apiId": "api-id",
"domainName": "id.execute-api.us-east-1.amazonaws.com",
"domainPrefix": "id",
"http": {
"method": "GET",
"path": "/dev",
"protocol": "HTTP/1.1",
"sourceIp": "IP",
"userAgent": "agent"
},
"requestId": "id",
"routeKey": "$default",
"stage": "$default",
"time": "12/Mar/2020:19:03:58 +0000",
"timeEpoch": 1583348638390
},
"pathParameters": {
"parameter1": "value1"
},
"isBase64Encoded": false,
"stageVariables": {
"stageVariable1": "value1",
"stageVariable2": "value2"
}
}
```
You should get a status 200 response with an HTML body.
- - -
### Creating an API Gateway for the server
Only two more steps to go! For the API Gateway, we're going to need the following components:
* The API Gateway
* A resource policy that enables API Gateway to trigger the lambda function
* A lambda integration
* A route
* An API Stage
Let's write this, continuing to expand index.ts:
```typescript
// It's preferable to move this const to the top of the file
// If you've decided to use something else than dev for the stack/stage name,
// make sure to update the server.ts code accordingly
const stack = pulumi.getStack();
const apigw = new aws.apigatewayv2.Api('httpApiGateway', {
protocolType: 'HTTP',
});
new aws.lambda.Permission('lambdaPermission', {
action: 'lambda:InvokeFunction',
principal: 'apigateway.amazonaws.com',
function: lambda,
sourceArn: pulumi.interpolate`${apigw.executionArn}/*/*`,
});
const integration = new aws.apigatewayv2.Integration('lambdaIntegration', {
apiId: apigw.id,
integrationType: 'AWS_PROXY',
integrationUri: lambda.arn,
payloadFormatVersion: '2.0',
});
const route = new aws.apigatewayv2.Route('apiRoute', {
apiId: apigw.id,
routeKey: '$default',
target: pulumi.interpolate`integrations/${integration.id}`,
});
const stage = new aws.apigatewayv2.Stage('apiStage', {
apiId: apigw.id,
name: stack,
routeSettings: [
{
routeKey: route.routeKey,
throttlingBurstLimit: 5000,
throttlingRateLimit: 10000,
},
],
autoDeploy: true,
});
export const httpApiEndpoint = pulumi.interpolate`${apigw.apiEndpoint}/${stage.name}`;
```
Once again, we can deploy our infrastructure:
```bash
pulumi up
```
And simply open the endpoint in our browser to test it.
Hurray, it works!

However, our static assets are not yet being loaded:

Let's fix that by moving to the last step: creating a CloudFront distribution.
- - -
### Creating a CloudFront distribution
CloudFront distributions are made of two main components: origins, and behaviors.
Origins are the sources of content served by the distribution. In our case, they are the S3 bucket that we configured previously, and the API Gateway that links to our lambda function.
Behaviors define the rules that apply to the requests coming to the distribution, including routing between origins, and caching behavior.
To recap, we're going to need:
* A CloudFront distribution
* An S3 origin
* A custom origin for API Gateway
* A behavior for the default route
* A behavior for the /favicon.ico route
* A behavior for the /assets/\* route
* An origin access control to allow CloudFront to access the S3 bucket
* A resource policy on the S3 bucket to allow CloudFront to access the S3 bucket
Let's create all of this:
```typescript
// Add the import at the top of the file
import * as url from 'url';
// These UUIDs are defined by AWS; you can find them in the CloudFront managed policies documentation
const cachingDisabledPolicyId = '4135ea2d-6df8-44a3-9df3-4b5a84be39ad';
const cachingOptimizedPolicyId = '658327ea-f89d-4fab-a63d-7e88639e58f6';
const allViewerExceptHostHeaderPolicyId = 'b689b0a8-53d0-40ab-baf2-68738e2966ac';

const cloudfrontOAC = new aws.cloudfront.OriginAccessControl('cloudfrontOAC', {
  originAccessControlOriginType: 's3',
  signingBehavior: 'always',
  signingProtocol: 'sigv4',
});

const distribution = new aws.cloudfront.Distribution('distribution', {
  enabled: true,
  httpVersion: 'http2',
  origins: [
    {
      originId: 'S3Origin',
      domainName: bucket.bucketDomainName,
      originAccessControlId: cloudfrontOAC.id,
    },
    {
      originId: 'APIGatewayOrigin',
      // CloudFront expects a bare hostname, so we strip the protocol from the endpoint URL
      domainName: pulumi.interpolate`${httpApiEndpoint.apply((endpoint) => url.parse(endpoint).hostname)}`,
      // The API Gateway stage name must be part of the origin path
      originPath: pulumi.interpolate`/${stack}`,
      customOriginConfig: {
        httpPort: 80,
        httpsPort: 443,
        originProtocolPolicy: 'https-only',
        originSslProtocols: ['TLSv1.2'],
      },
    },
  ],
  defaultRootObject: '',
  defaultCacheBehavior: {
    allowedMethods: ['DELETE', 'GET', 'HEAD', 'OPTIONS', 'PATCH', 'POST', 'PUT'],
    cachedMethods: ['GET', 'HEAD', 'OPTIONS'],
    compress: false,
    cachePolicyId: cachingDisabledPolicyId,
    originRequestPolicyId: allViewerExceptHostHeaderPolicyId,
    targetOriginId: 'APIGatewayOrigin',
    viewerProtocolPolicy: 'redirect-to-https',
  },
  orderedCacheBehaviors: [
    {
      pathPattern: '/favicon.ico',
      allowedMethods: ['GET', 'HEAD'],
      cachedMethods: ['GET', 'HEAD'],
      compress: true,
      cachePolicyId: cachingOptimizedPolicyId,
      targetOriginId: 'S3Origin',
      viewerProtocolPolicy: 'redirect-to-https',
    },
    {
      pathPattern: '/assets/*',
      allowedMethods: ['GET', 'HEAD'],
      cachedMethods: ['GET', 'HEAD'],
      compress: true,
      cachePolicyId: cachingOptimizedPolicyId,
      targetOriginId: 'S3Origin',
      viewerProtocolPolicy: 'redirect-to-https',
    },
  ],
  restrictions: {
    geoRestriction: {
      restrictionType: 'none',
    },
  },
  viewerCertificate: {
    cloudfrontDefaultCertificate: true,
  },
});

new aws.s3.BucketPolicy('allowCloudFrontBucketPolicy', {
  bucket: bucket.bucket,
  policy: {
    Version: '2012-10-17',
    Statement: [
      {
        Sid: 'AllowCloudFrontServicePrincipalRead',
        Effect: 'Allow',
        Principal: {
          Service: 'cloudfront.amazonaws.com',
        },
        Action: ['s3:GetObject'],
        Resource: pulumi.interpolate`${bucket.arn}/*`,
        Condition: {
          StringEquals: {
            'AWS:SourceArn': distribution.arn,
          },
        },
      },
    ],
  },
});
export const distributionAddress = pulumi.interpolate`https://${distribution.domainName}`;
```
Let's do our final deploy! It may take a few minutes for the CloudFront distribution to be created.
```bash
pulumi up
```
We can now visit the distribution address to see the result.
Nothing spectacular has been added compared to the HTTP-endpoint version, as we haven't configured any styling or dynamic behavior yet, but this time everything loads!

We now have a working Remix application hosted on AWS. If we update anything, we have a simple, three-step deploy process:
* Build the Remix application:
```bash
npx remix vite:build
```
* Bundle it using esbuild:
```bash
node build.cjs
```
* Deploy the infrastructure:
```bash
pulumi up
```
I recommend encapsulating this in a single build-deploy script:
```json
"scripts": {
  "build-deploy": "remix vite:build && node build.cjs && cd infrastructure && pulumi up"
}
```
- - -
## Recap and next steps
In this article, we've learned how to:
* Build Remix into a working server bundle
* Architect a simple serverless application
* Deploy it to AWS using Pulumi
So, what's next?
* Add some styling to the application. I recommend using Tailwind, which is very easy to [install on Remix.](https://tailwindcss.com/docs/guides/remix)
* Add a custom domain using Route53
* Create a prod stage in Pulumi
* Build an app you'll be proud of!
The full code that results from this article is available at [https://github.com/gautierblandin/remix-lambda-starter](https://github.com/gautierblandin/remix-lambda-starter).
- - -
## Further improvements
When bundling the server, we've had to use esbuild. I have tried for a few hours to make it work using Vite directly, but no amount of tinkering with rollup plugins made it work. If you manage to make it work using Vite, please let me know! I'm reachable at [gautier.blandin.dev@gmail.com](mailto:gautier.blandin.dev@gmail.com)