| column | type | min | max |
|---|---|---|---|
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
id: 1,887,948
title: Backend Developer Skills You Must Know in 2024
description: A backend developer is a person who is responsible for the server-side application logic and...
collection_id: 0
published_timestamp: 2024-06-17T02:10:00
canonical_url: https://dev.to/thekarlesi/backend-developer-skills-you-must-know-in-2024-4mi8
tag_list: html, webdev, beginners, programming
A backend developer is responsible for the server-side application logic and for integrating the front-end elements built by a web designer into the application. They also own the security and data of the application, and usually work alongside a front-end developer and the web designer. Before we continue, if you are struggling with web development or job interviews, [DM me now](https://x.com/thekarlesi) and I'll get back to you.

## Responsibilities of a Backend Developer

- Writing reusable, testable, and efficient code
- Designing and implementing low-latency, high-availability, and performant applications
- Integrating user-facing elements developed by front-end developers with server-side logic
- Gathering and refining specifications and requirements based on technical needs
- Creating and maintaining software documentation
- Researching and recommending software development best practices
- Staying up-to-date with new technology trends

## Becoming a backend developer

Becoming a backend developer typically involves the following steps:

### Learn programming basics

To become a backend developer, you should have a good understanding of at least one programming language, such as Python, Java, or PHP. All three are object-oriented, which makes them approachable for beginners; you can pick up other languages such as C++ or C# later on. The most important thing is to learn the basics of programming: variables, loops, functions, and classes. You can learn these by taking an online course or reading a book. Good free courses can be found on YouTube, such as the ones by freeCodeCamp and The Net Ninja.

### Learn databases and SQL

Backend developers need to be familiar with databases and SQL (Structured Query Language), which is used to communicate with databases.
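The programming basics named above — variables, loops, functions, and classes — fit in a few lines. Here is a minimal sketch in JavaScript (the names are invented for illustration; Python, Java, or PHP would look very similar):

```javascript
// A variable, a loop, a function, and a class -- the basics in one place.
// Illustrative names only; not from the article.
class Greeter {
  constructor(name) { this.name = name; }    // class with a constructor
  greet() { return `Hello, ${this.name}!`; } // method
}

function countVowels(text) {
  let count = 0;                             // variable
  for (const ch of text.toLowerCase()) {     // loop
    if ('aeiou'.includes(ch)) count++;
  }
  return count;
}

console.log(new Greeter('backend dev').greet()); // prints "Hello, backend dev!"
console.log(countVowels('backend'));             // prints 2
```

Any introductory course will cover these same four ideas; the language you pick matters far less than getting comfortable with them.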
### Familiarize yourself with web frameworks

Backend developers often use web frameworks such as Flask, Django, or Express to build web applications. You don't need to be an expert in all of them, but it's a good idea to have a basic understanding of how they work.

### Understand APIs

Backend developers also need a good understanding of APIs (Application Programming Interfaces), which are used to connect different parts of an application.

### Learn about server infrastructure

Backend developers need to be familiar with server infrastructure and how to deploy and maintain applications on servers.

### Practice, practice, practice

As with any skill, practice is crucial to becoming a good backend developer. Start by building small projects and gradually work your way up to more complex applications.

### Network with other developers

Networking with other developers can help you learn from their experiences and stay up-to-date with the latest trends and technologies. You can also find a mentor who can help you learn the ropes and answer any questions you may have. This also warms you up to the idea of working in a team, which is an important skill for backend developers. A good network of developers can also help you find a job as a backend developer.

### Stay up-to-date with the latest technologies

The technology landscape is constantly changing, so it's important to stay up-to-date with the latest technologies and tools used in backend development. Read articles and watch videos about them; a good place to start is the official documentation of the tools you're interested in. Another good source is Twitter: follow developers and companies that are active in the backend development community, as well as hashtags such as #backenddev, #backend, and #backenddevelopment.
Overall, becoming a backend developer requires a combination of programming skills, database knowledge, web framework expertise, and server infrastructure understanding. With practice, dedication, and a willingness to keep learning, you can become a successful backend developer.

Happy Coding!
Karl

P.S. If you are struggling with web development or job interviews, [DM me now](https://x.com/thekarlesi) and I'll get back to you.
user_username: thekarlesi

id: 1,890,710
title: Setting up Vim for PHP and Laravel in 2024
description: I am going to mostly document my own journey of setting up Vim (not Neovim!) for PHP and Laravel. Feel...
collection_id: 0
published_timestamp: 2024-06-17T02:05:54
canonical_url: https://dev.to/woland/setting-up-vim-for-php-and-laravel-in-2024-1pl7
tag_list: vim, coc, php, laravel
I am going to mostly document my own journey of setting up Vim (not Neovim!) for PHP and Laravel. Feel free to follow along. We're going to use Vim + VimPlug + CoC.nvim, so we need to have Node.js and npm installed. It goes without saying that you should have Vim 8+ as well; the current version of Vim as of this writing is 9.1.

So let's start from a clean `~/.vimrc`.

#### Let's place this block, which automatically installs VimPlug, at the top of our `~/.vimrc`:

```vim
function! VimplugInstaller()
  let data_dir = has('nvim') ? stdpath('data') . '/site' : '~/.vim'
  if empty(glob(data_dir . '/autoload/plug.vim'))
    silent execute '!curl -fLo '.data_dir.'/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim'
    autocmd VimEnter * PlugInstall --sync | source $MYVIMRC
  endif
endfunction
call VimplugInstaller()
```

#### Followed by several essential plugins:

```vim
call plug#begin()
Plug 'neoclide/coc.nvim', {'branch': 'release'}
Plug 'https://github.com/preservim/tagbar.git'
Plug 'vim-php/phpctags', {'for': 'php'}
Plug 'honza/vim-snippets'
Plug 'SirVer/ultisnips'
Plug 'tpope/vim-commentary'
Plug 'sheerun/vim-polyglot'
Plug 'jwalton512/vim-blade'
call plug#end()
```

The most important one is coc.nvim. The rest are for various enhancements that we'll go over soon, but before we do, let's define some extensions for coc.nvim and configure the other plugins.

```vim
"===[ Coc.nvim ]==="
inoremap <silent><expr> <CR> coc#pum#visible() ? coc#_select_confirm() : "\<C-g>u\<CR>"

" Use <Tab> to trigger completion and navigate to the next completion item
function! CheckBackspace() abort
  let col = col('.') - 1
  return !col || getline('.')[col - 1] =~# '\s'
endfunction

inoremap <silent><expr> <Tab>
      \ coc#pum#visible() ? coc#pum#next(1) :
      \ CheckBackspace() ? "\<Tab>" :
      \ coc#refresh()
inoremap <expr><S-Tab> coc#pum#visible() ? coc#pum#prev(1) : "\<C-h>"

" Enhanced <CR> for pairs (supersedes the plain <CR> mapping above)
inoremap <silent><expr> <CR> coc#pum#visible() ? coc#pum#confirm()
      \: "\<C-g>u\<CR>\<c-r>=coc#on_enter()\<CR>"

" Disable < expansion for pairs
autocmd FileType * let b:coc_pairs_disabled = ["<"]

"===[ Coc-Explorer ]==="
" Set up coc-explorer to open in the current directory
let g:coc_explorer_global_mirror = 0
let g:coc_explorer_disable_default_keybindings = 1
let g:coc_explorer_global_root = 'current'
nnoremap <space>e <Cmd>CocCommand explorer<CR>

"===[ Coc Global Extensions ]==="
let g:coc_global_extensions = [
      \ 'coc-html',
      \ 'coc-css',
      \ 'coc-pairs',
      \ 'coc-emmet',
      \ 'coc-explorer',
      \ 'coc-phpactor',
      \ 'coc-blade'
      \ ]

" GoTo code navigation
nmap <silent> gd <Plug>(coc-definition)
nmap <silent> gy <Plug>(coc-type-definition)
nmap <silent> gi <Plug>(coc-implementation)
nmap <silent> gr <Plug>(coc-references)

" Use `[g` and `]g` to navigate diagnostics
" Use `:CocDiagnostics` to get all diagnostics of the current buffer in the location list
nmap <silent> [g <Plug>(coc-diagnostic-prev)
nmap <silent> ]g <Plug>(coc-diagnostic-next)

" List code actions available for the current buffer
nmap <leader>ca <Plug>(coc-codeaction)

nnoremap <silent> K :call ShowDocumentation()<CR>
function! ShowDocumentation()
  if CocAction('hasProvider', 'hover')
    call CocActionAsync('doHover')
  else
    call feedkeys('K', 'in')
  endif
endfunction

" Add `:Format` command to format the current buffer
command! -nargs=0 Format :call CocActionAsync('format')

" Formatting selected code
vmap <leader>f <Plug>(coc-format-selected)

" coc-pairs: disable characters for specified filetypes
autocmd FileType markdown let b:coc_pairs_disabled = ['txt']
```

The above code configures tab completion, auto pairs, the coc-explorer file explorer, and other keybindings to get the most out of coc.nvim. In the Coc Global Extensions section, we define extensions to be installed automatically. I have decided to use phpactor; you may swap it for `coc-intelephense`.
#### Let's also configure the snippets

We can manage our snippets with coc.nvim too, but I chose to use UltiSnips for that.

```vim
"===[ SNIPPETS ]==="
" Use Ctrl-j to trigger the snippets; the default was <Tab>, but that conflicts
" with the completion trigger (see :h keycodes to change this to something else).
" Use Ctrl-j and Ctrl-k to move visually within the snippet that was just triggered.
" Ctrl-l lists the available snippets.
let g:UltiSnipsExpandTrigger='<C-j>'
let g:UltiSnipsListSnippets='<C-l>'
let g:UltiSnipsJumpForwardTrigger='<C-j>'
let g:UltiSnipsJumpBackwardTrigger='<C-k>'
let g:copypath_copy_to_unnamed_register = 1
```

Save and quit Vim, open it again, and wait for the plugins and extensions to install. If you get prompted by an error message, just press Enter and wait for the installation to finish.

##### It's better to install the Laravel extension manually:

`:CocInstall @yaegassy/coc-laravel`

#### Now let's configure the coc extensions

Issue the `:CocConfig` command from Vim's command mode and paste the following inside the file:

```json
{
  "suggest.noselect": true,
  "explorer.icon.enableNerdfont": true,
  "explorer.file.reveal.auto": true,
  "languageserver": {
    "phpactor": {
      "command": "phpactor",
      "args": ["language-server"],
      "trace.server": "verbose",
      "filetypes": ["php"]
    }
  }
}
```

If you choose to use a local installation of phpactor, you can just remove `coc-phpactor` with `:CocUninstall coc-phpactor`, modify `~/.vim/coc-settings.json` to include a path to the phpactor executable, and remove the `languageserver` block entirely.
```json
{
  "phpactor.enable": true,
  "phpactor.path": "/usr/bin/phpactor"
}
```

#### The keybindings

The only keybindings you should know, besides the usual Vim keybindings, are:

| Key | Action |
|---|---|
| Tab | completion |
| gd | definition |
| gy | type definition |
| gi | implementation |
| gr | references |
| [g | diagnostics prev |
| ]g | diagnostics next |
| `<Leader>ca` | code actions |
| K | hover docs |
| `<Leader>f` (in visual mode) | format code |
| Ctrl-j | expand snippet & move to next placeholder |
| Ctrl-k | move to previous placeholder |
| Ctrl-l | list snippets |
| Space-e | open explorer |

Now we should have an efficient and comfortable setup for working with PHP and Laravel.

#### Let's set up several other things for convenience

#### Theme

Add these plugins to the plugin list and install them with `:PlugInstall` after sourcing the `~/.vimrc`:

```vim
Plug 'ryanoasis/vim-devicons'
Plug 'catppuccin/vim', { 'as': 'catppuccin' }
```

#### Set up Vim to use the popular catppuccin theme:

```vim
"===[ Theme ]==="
set background=dark
" Inspect $TERM instead of t_Co
if &term =~ '256color'
  " Enable true (24-bit) colors instead of (8-bit) 256 colors.
  if has('termguicolors')
    let &t_8f = "\<Esc>[38;2;%lu;%lu;%lum"
    let &t_8b = "\<Esc>[48;2;%lu;%lu;%lum"
    let &t_TI = ""
    let &t_TE = ""
    set termguicolors
    set mouse=a
    colorscheme catppuccin_mocha
  endif
else
  colorscheme catppuccin_mocha
  let &t_TI = ""
  let &t_TE = ""
endif

if has('gui_running')
  set mouse=a
  set guicursor+=a:blinkon0
  set guifont=FiraCodeNerdFont\ 12
  colorscheme catppuccin_mocha
endif

"===[ Cursor Shape ]==="
let &t_SI = "\<Esc>[6 q"
let &t_SR = "\<Esc>[4 q"
let &t_EI = "\<Esc>[2 q"
```

The above block makes Vim use termgui colors (in a termgui-capable terminal) and use different cursor shapes for different modes. Make sure your `$TERM` variable reports `xterm-256color`.

In the screenshot below, you can see phpactor, coc-explorer, and tagbar at work.
I've left an error in the syntax to show you what the inline diagnostics look like.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m4utqvw2yzsuov37xbe3.png)

It would do you well to familiarize yourself with the coc.nvim documentation and commands. For example, running `:CocDiagnostics` will open a drawer with all the errors found by coc.nvim.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xmpf6v3ehn7qfo75q9bc.png)

Pressing Enter on each item will jump you to the line containing the error. I could not find a decent phpdoc plugin for Vim, but besides that, this setup is very effective. You can find my full config on GitHub, [Woland's Vim](https://github.com/wolandark/wim/tree/Devel), and my personal `~/.vim` [here](https://github.com/wolandark/.vim). If you have any notes or questions, please leave a comment.

# Where to go from here

Start using the setup and tweak it to your needs, little by little. This is not a drop-in config; to put it plainly, it's just a kickstarter.

#### Some general notes:

- Familiarize yourself with Vim's buffers.
- NEVER `:q` Vim unless you're done for the day.
- Learn about `:vimgrep` and `:lvimgrep`.
- Integrate fzf, ack, ag, or rg into your workflow (you need to quickly search large projects).

# Possible Q&A

**Can I use this with Nvim?** Yes, you just have to paste all the code into `~/.config/nvim/init.vim`.

**Why not use Nvim with lspconfig and cmp etc.?** Because it takes way longer to configure, and the results aren't better than what coc provides with so much ease.
user_username: woland

id: 1,890,709
title: How to Create a Loading Bar Using HTML, CSS, and JavaScript
description: Loading bars are essential UI elements that indicate progress to users during operations like file...
collection_id: 0
published_timestamp: 2024-06-17T02:04:55
canonical_url: https://raajaryan.tech/how-to-create-a-loading-bar-using-html-css-and-javascript
tag_list: javascript, beginners, tutorial, programming
Loading bars are essential UI elements that indicate progress to users during operations like file uploads, downloads, or data processing. In this tutorial, we'll walk through creating a simple yet effective loading bar using HTML, CSS, and JavaScript. This project is ideal for beginners looking to enhance their front-end development skills or for seasoned developers wanting to refresh their knowledge.

### Project Overview

In this project, we'll create a loading bar that simulates the progress of an operation. We'll use:

- **HTML**: for the structure of our loading bar.
- **CSS**: for styling and animating the loading bar.
- **JavaScript**: to control and update the progress of the loading bar.

### Step-by-Step Guide

#### 1. Setting Up Your Project

First, create a new directory for your project and set up the following files:

- **index.html**: Contains the HTML structure.
- **style.css**: Manages the appearance and animation.
- **script.js**: Handles the logic to update the loading bar.

#### 2. HTML Structure

In your `index.html` file, set up the basic structure for the loading bar:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Loading Bar Project</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <div class="loading-bar-container">
    <div class="loading-bar" id="loadingBar"></div>
  </div>
  <script src="script.js"></script>
</body>
</html>
```

#### 3. CSS Styling

Next, style your loading bar using `style.css`. Here's an example of how you can style and animate the loading bar:

```css
body {
  display: flex;
  justify-content: center;
  align-items: center;
  height: 100vh;
  margin: 0;
  background-color: #f0f0f0;
}

.loading-bar-container {
  width: 300px;
  height: 20px;
  background-color: #ddd;
  border-radius: 10px;
  overflow: hidden;
}

.loading-bar {
  width: 0%;
  height: 100%;
  background-color: #4caf50;
  transition: width 0.3s ease;
}
```

#### 4. JavaScript Logic

Now, add functionality to update the loading progress dynamically in `script.js`:

```javascript
// Get the loading bar element
const loadingBar = document.getElementById('loadingBar');

// Function to simulate loading progress
function simulateProgress() {
  let width = 0;
  const interval = setInterval(() => {
    if (width >= 100) {
      clearInterval(interval);
    } else {
      width++;
      loadingBar.style.width = width + '%';
    }
  }, 30); // Adjust speed of progress bar here (in milliseconds)
}

// Call simulateProgress to start the loading animation
simulateProgress();
```

### Conclusion

Congratulations! You've successfully built a loading bar project using HTML, CSS, and JavaScript. This project not only enhances your understanding of front-end development fundamentals but also equips you with essential skills for creating dynamic user interfaces.

Feel free to customize the project further by adding features like different colors, responsive design adjustments, or integration with backend operations for real-world applications. Stay tuned for more tutorials and projects to deepen your skills in web development. Happy coding!

---
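The interval above only simulates progress. When the bar tracks a real operation — say, bytes received during a download — the width can be derived from actual numbers instead. A small sketch (the `toPercent` and `updateBar` helpers are not part of the tutorial, just one way to wire it up; they reuse the `loadingBar` element id from the HTML above):

```javascript
// Derive the bar's width from real progress data (e.g. bytes loaded)
// rather than a timer. toPercent is pure, so it is easy to test.
function toPercent(loaded, total) {
  if (!total) return 0; // avoid division by zero when the size is unknown
  return Math.min(100, Math.round((loaded / total) * 100));
}

function updateBar(loaded, total) {
  const pct = toPercent(loaded, total);
  document.getElementById('loadingBar').style.width = pct + '%';
  return pct;
}

// Example: drive the bar from an XMLHttpRequest progress event.
// xhr.onprogress = (e) => { if (e.lengthComputable) updateBar(e.loaded, e.total); };
```

Keeping the percentage math separate from the DOM update makes the logic unit-testable and reusable for uploads, downloads, or multi-step tasks alike.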
user_username: raajaryan

id: 1,890,708
title: OK, so where to next ?
description: I'm here to build my very own Arch OS . . . Read the article . . . . Might I ask where are the...
collection_id: 0
published_timestamp: 2024-06-17T02:01:58
canonical_url: https://dev.to/m_veans_0bba1003b422acfa/ok-so-where-to-next--4b2g
I'm here to build my very own Arch OS . . . Read the article . . . . Might I ask where are the building plans ? Cheers
user_username: m_veans_0bba1003b422acfa

id: 1,890,707
title: Visual ChatGPT: Revolutionizing Conversational AI
description: Introduction The realm of artificial intelligence (AI) has been evolving...
collection_id: 27,673
published_timestamp: 2024-06-17T02:01:45
canonical_url: https://dev.to/rapidinnovation/visual-chatgpt-revolutionizing-conversational-ai-4cn7
## Introduction

The realm of artificial intelligence (AI) has been evolving rapidly, transforming how we interact with technology on a daily basis. One of the most significant advancements in this field is the development of conversational AI, which has revolutionized the way businesses and consumers communicate. This introduction will explore the basics of conversational AI and delve into the latest innovation in this area: Visual ChatGPT.

## What is Visual ChatGPT?

Visual ChatGPT represents an advanced iteration of AI models that combines the capabilities of natural language processing (NLP) with computer vision. This integration allows the model to understand and generate responses based not only on text input but also on visual data. Essentially, Visual ChatGPT can analyze images, comprehend the context, and engage in a dialogue about the visual content it perceives.

## How Does Visual ChatGPT Work?

Visual ChatGPT combines advanced natural language processing (NLP) with computer vision to interact with users through both text and images. This technology allows the AI to understand and respond to queries that involve visual content, making it significantly more interactive and versatile than traditional text-based models.

## Types of Applications for Visual ChatGPT

Visual ChatGPT is revolutionizing various industries by enabling more interactive and intuitive user interfaces. Here, we explore its applications in customer service, healthcare, e-commerce, and education.

## Benefits of Visual ChatGPT

Visual ChatGPT represents an evolution in AI communication systems, combining the capabilities of traditional chatbots with advanced visual understanding. This technology allows users to interact with AI through images, enhancing the user experience in various applications such as customer service, education, and online shopping.
## Challenges in Developing Visual ChatGPT

Developing a Visual ChatGPT, which combines the capabilities of AI-driven chatbots with advanced image processing technologies, presents a unique set of challenges. This integration aims to create a system that can understand and respond to textual and visual inputs in a coherent and contextually appropriate manner.

## Future of Visual ChatGPT

The future of Visual ChatGPT looks promising as it stands at the intersection of AI's evolution in understanding and generating human-like responses based on visual inputs. As technology progresses, we can anticipate more sophisticated applications of Visual ChatGPT across various sectors including healthcare, automotive, education, and customer service.

## Real-World Examples of Visual ChatGPT

Visual ChatGPT is already making its mark across various industries with real-world applications that highlight its potential. For instance, in the retail sector, companies like Sephora are using visual AI chatbots to offer beauty advice to customers. In the automotive industry, companies like Tesla are integrating visual chatbot features into their customer service.

## In-depth Explanations

Understanding the AI mechanisms, visual data processing, and language understanding involved is crucial for grasping the full potential of Visual ChatGPT. These technologies enable the AI to perform complex tasks that require a nuanced understanding of both text and images.

## Comparisons & Contrasts

Visual ChatGPT and traditional chatbots represent two generations of chatbot technology, each with its unique capabilities and use cases. Visual ChatGPT can handle a broader range of queries, including those that involve understanding and generating responses based on images, making it a significant step forward from traditional chatbots.
## Why Choose Rapid Innovation for Implementation and Development

Choosing Rapid Innovation for the implementation and development of technology projects, especially those involving cutting-edge technologies like AI and blockchain, offers significant advantages. Rapid Innovation allows companies to stay agile, adapt to changes quickly, and reduce the time to market.

## Conclusion

In conclusion, Visual ChatGPT represents a significant advancement in the field of artificial intelligence, particularly in the integration of visual data processing with natural language understanding. This technology allows for a more intuitive interaction between humans and machines, offering enhanced capabilities in image understanding and interaction.

Drive innovation with intelligent AI and secure blockchain technology! 🌟 Check out how we can help your business grow!

[Blockchain App Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)

## URLs

* <http://www.rapidinnovation.io/post/visual-chatgpt-the-next-frontier-of-conversational-ai>

## Hashtags

#ConversationalAI #VisualChatGPT #AIInnovation #CustomerServiceAI #FutureOfAI
user_username: rapidinnovation

id: 1,890,706
title: Top 9 Kanban Board Templates for 2024
description: Kanban Board Templates In the fast-paced and competitive business world, the pressing need...
collection_id: 0
published_timestamp: 2024-06-17T02:00:24
canonical_url: https://dev.to/bryany/top-kanban-board-templates-for-2024-23mj
tag_list: kanban, management
## Kanban Board Templates

In the fast-paced and competitive business world, the pressing need for [**efficient project management tools**](https://www.leiga.com/feature) to streamline processes and enhance productivity is more crucial than ever. Teams across industries are constantly seeking tools that can streamline processes, enhance productivity, and ultimately bring ideas to life efficiently. One such tool that has gained significant traction is the [**Kanban Board Template**](https://www.leiga.com/use-case-developer?utm_source=community&utm_medium=devto&utm_content=17-killer-tools-web-apps-to-boost-your-productivity-in-2024-5enp).

![](https://static.leiga.com/pms/image/2024/0617/6f0a7ffde5e0417382a11a9f9458d2e6/image.png)

Take, for instance, a typical software development project. The team might use a Kanban board to visualize the **flow of work**. This board, divided into sections like 'To-Do', 'In Progress', and 'Done', serves as a clear indicator of project status at each stage. The essence of a Kanban Board Template lies in its simplicity and ability to visually organize workflows, making it a go-to project management tool for improving team collaboration and task management. The [**real-time update feature**](https://www.leiga.com/feature#automations) allows for accurate tracking of project progression and facilitates quicker decision-making.

![](https://static.leiga.com/pms/image/2024/0529/3531f08b0db34156a02a950685b7e073/image.png)

You might be wondering how this tool functions to streamline your workflow while adhering to Agile methodologies, and what advantages it brings to your software development or content production processes.
Stay tuned as we delve deeper into the essentials of a Kanban board in the next section. As we believe, understanding is the first step towards implementation!

## Essence of a Kanban Board Template

The Kanban Board Template is a project management tool designed to visualize work, limit work-in-progress, and maximize efficiency. The board's structure consists of columns, where each column signifies a status in your workflow. Usually, these statuses include 'To-Do', 'In Progress', and 'Done', but they can be tailored to fit the specific needs of your team or project.

![](https://static.leiga.com/pms/image/2024/0530/281774c1ecc44325a4dcee81ea2af177/image.png)

At its core, a Kanban Board Template facilitates workflow visualization and management through its drag-and-drop feature, allowing tasks to move seamlessly across columns representing different stages, from 'To-Do' to 'Done'. The Kanban Board Template is primarily composed of the following elements:

1. **Columns:** These represent the stages in your workflow. The basic Kanban Board Template consists of three columns, 'To-Do', 'In Progress', and 'Done', but can be further customized to your project needs.
2. **Cards:** Each task is represented by a card on the board. Cards move from one column to another as they progress through the workflow. Typically, a card contains details such as the task description, assigned team members, and deadlines.
3. **Work In Progress (WIP) Limits:** A key feature that helps manage flow by limiting the amount of work in any column. It prevents overloading of tasks and helps maintain a balanced workflow.
4. **Swimlanes:** Horizontal rows on a Kanban board, used to visually separate different types of tasks or projects. They allow teams to manage multiple projects on the same board.
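The elements above map naturally onto a tiny data structure. As an illustrative sketch in JavaScript (the column names and the WIP limit of 3 are invented, and no real tool's API is being modeled), a board is just named columns holding cards, with the WIP limit enforced on every move:

```javascript
// Columns holding cards, each with a WIP limit; moveCard refuses moves
// that would exceed the destination column's limit. Names are illustrative.
function createBoard() {
  return {
    'To-Do':       { wipLimit: Infinity, cards: [] },
    'In Progress': { wipLimit: 3,        cards: [] },
    'Done':        { wipLimit: Infinity, cards: [] },
  };
}

function moveCard(board, card, from, to) {
  const dst = board[to];
  if (dst.cards.length >= dst.wipLimit) return false; // WIP limit reached
  board[from].cards = board[from].cards.filter((c) => c !== card);
  dst.cards.push(card);
  return true;
}

// Usage sketch: the fourth move is refused because 'In Progress' is full.
const board = createBoard();
board['To-Do'].cards.push('task-1', 'task-2', 'task-3', 'task-4');
moveCard(board, 'task-1', 'To-Do', 'In Progress'); // true
moveCard(board, 'task-2', 'To-Do', 'In Progress'); // true
moveCard(board, 'task-3', 'To-Do', 'In Progress'); // true
moveCard(board, 'task-4', 'To-Do', 'In Progress'); // false
```

Real tools layer swimlanes, deadlines, and assignees on top of this, but the guarded move is the behavioral core that keeps a column from overloading.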
In application scenarios ranging from software development and marketing campaigns to hardware development, Kanban board templates provide a visual management board to organize tasks and enhance real-time updates and collaboration, ensuring streamlined and efficient project progression.

Let's take software development as an example. Developers can use an effective Kanban Board Template to organize tasks, track bugs, and visualize sprints, making the process transparent and easy to communicate for everyone involved. As work progresses, everyone on the team can see the status of each task and who's working on what. Overdue tasks are glaringly obvious on a Kanban board, allowing the team to adjust workloads and priorities accordingly.

## Effectiveness of Kanban Board Template in Project Management

The benefits of a good Kanban Board Template in project management are manifold. Here are five key advantages:

1. **Streamlined Workflow:** A Kanban Board Template simplifies the workflow by visually dividing tasks based on their statuses. This easy-to-understand format allows teams to identify bottlenecks or blockers, enabling a smoother and more [**efficient workflow**](https://www.leiga.com/feature#team-resource).
2. **Improved Collaboration:** The template provides a shared view of the project's status. Displaying the tasks and their current status makes team members aware of each other's work and encourages shared responsibility.
3. **Enhanced Transparency:** The visual nature of a Kanban board promotes transparency across all levels of an organization. Any team member or stakeholder can quickly understand the state of a project just by looking at the board.
4. **Better Workload Management:** Work-In-Progress (WIP) limits ensure that teams or individuals are not overwhelmed with too many tasks at once, promoting a [**balanced distribution of work**](https://guide.leiga.com/project/workload).
5. **Real-time Data and Insights:** Project managers can easily track progress and identify potential issues in real time. This immediate access helps prevent major setbacks and facilitates more informed decision-making.

## Project Management Kanban Board Template

Successful project management is all about staying organized, keeping a clear focus, and enabling seamless collaboration within teams. Across industries, standard and customized Kanban Board Templates serve as the foundation to achieve these objectives efficiently.

![](https://static.leiga.com/pms/image/2024/0530/2e27b20de6da4d1f9849dc5cf911f5d3/cc7fd943-239c-4013-89e7-50a1ed7c09b8.jpeg)

A well-structured project management Kanban Board Template performs numerous roles. It helps streamline processes, enabling teams to monitor each task's progress and manage interdependencies effectively. The visual layout simplifies project planning, execution, and tracking, eliminating confusion and increasing transparency. Moreover, the [**customizable features**](https://guide.leiga.com/project/create/template) of the board templates allow them to fit the project's specifics from onset to closure. Prioritizing tasks becomes straightforward, resulting in improved productivity and quicker turnarounds.

These templates also serve as collaboration hubs. Every member can comprehensively understand the project's progress, pending tasks, and future steps. Such transparency fosters effective communication, harmony, and synergy within teams.
Through their organization, visualization, and collaborative features, project management boards lay the groundwork for efficient, successful, and hassle-free project completion. As we progress, we will dive deeper into the distinctive characteristics and uses of different templates tailored to varied project requirements.

# Scrum Board

An intuitive Scrum template is designed to enhance project management efficiency. It breaks down projects into a series of tasks and sprints based on the core concepts of the Scrum methodology. This template also includes process management tools that help teams maintain efficient workflows throughout the project.

![](https://static.leiga.com/pms/image/2024/0617/97cdd2e2e58f468abd3e4ce96869cb75/image.png)

Its primary functions include:

1. **Task Management:** The Scrum template allows admins to create and assign tasks, ensuring the project progresses as expected within a sprint.
2. **Sprint Management:** You can create and manage sprints to complete a set of specific tasks within a defined period.
3. **Workflow Management:** Teams can more effectively track task progress, determine dependencies between tasks, and receive notifications for alerts and problem-solving.

The advantages of the Scrum template are reflected in the following ways:

1. **Efficiency:** By breaking down projects into a series of manageable tasks, teams can work more efficiently.
2. **Transparency:** Detailed task and sprint information in the Scrum board keeps all team members informed about the project's progress and next steps, promoting transparency and collaboration.
3. **Flexibility:** The template is highly customizable, allowing project managers to adapt it to different project needs.

This particular Kanban Board Template serves as an excellent tool for managing complex projects using the Scrum methodology.
From task management to sprint tracking, this tool increases efficiency, transparency, and flexibility in any project management scenario. # Kanban Board The Kanban Board Template is an instrumental tool in the realm of project management, offering a visual representation of a project's workflow. Patterned on the principles of Kanban methodology, it is designed to facilitate efficient design, planning, execution, and monitoring of tasks. ![](https://static.leiga.com/pms/image/2024/0617/ce8267f805864286bd35baedb0cf94e7/image.png) With Kanban boards, project management turns into a more straightforward, controlled process. [**Kanban boards**](https://www.leiga.com/feature#agile-board) provide an exceptional platform for various industries and professions, including SaaS companies, manufacturers, and software developers. By leveraging the flexibility of Kanban boards, these industries have experienced substantial improvements in their operations. ![](https://static.leiga.com/pms/image/2024/0617/f6097737739b494b8cf4fe04a582c4de/image.png) Advantages of the Kanban Board Template: * [**Visual Workflow Management**](https://guide.leiga.com/project/workload): It presents an at-a-glance understanding of the project's status, promoting transparency and understanding among all stakeholders. * **Efficient Bottleneck Management:** The board identifies bottlenecks in real-time, allowing teams to address them promptly. * **Limiting Work-In-Progress:** Teams stay more focused on current tasks, which consequently improves productivity. * **Customizability:** The template is adaptable for easy modifications to fit any project or process's unique requirements. Application Scenarios: * **Software Development:** A well-designed Kanban Board Template is indispensable in software development processes to organize tasks, particularly when using agile methodologies. * **Marketing Campaigns:** It allows marketing teams to track the progress of different campaign tasks visually. 
* **Event Planning:** It provides a chronological display of event planning tasks, ensuring easy coordination and execution. # Software Development Board The 'Software Development Board' is an absolute game-changer that empowers businesses with advanced capabilities to develop software in a well-organized, systematic, and efficient manner. This feature is nestled under the ‘Project Module,’ providing an overview of the software development process and making project management effortless. ![](https://static.leiga.com/pms/image/2024/0617/b7d9972743054c6189e770890df4006a/image.png) Advantages: 1. **Clear Overview & Simplified Workflow:** A visual overview of the development process makes it easy to track progress, identify bottlenecks, and manage workflows. 2. **Improved Collaboration:** The board promotes effective collaboration among team members, enabling them to communicate, share updates, and manage tasks. 3. **Real-time Updates:** Real-time tracking allows immediate adjustment and optimization of processes, reducing lag time. 4. **Increased Productivity:** Utilizing the board to manage tasks and deadlines staves off confusion and miscommunication, thereby increasing productivity. Application Scenarios: 1. **Agile Development:** The board is crucial in managing agile software development projects, where visual management plays a key role. 2. **Cross-functional Teams:** For projects that involve cross-department collaboration, the board provides a shared view of progress and task allocation. 3. **Remote Working Scenarios:** The board comes in particularly handy in remote working scenarios, where team members are geographically dispersed yet need to work together on software development tasks. The 'Software Development Board' is a significant feature that elevates software development and project management to new heights of efficiency and productivity. Whether it’s about tracking, communication, or task management, this feature covers it all with ease. 
![](https://static.leiga.com/pms/image/2024/0530/a37d52a456864f889d5feaba1ca72e1f/image.png) # Work Requests Board The 'Work Request Board' template serves as a virtual command center for addressing customer feedback and service inquiries. This exceptional feature handles all types of work requests, fostering a unique link between your team and customers. It delivers a seamless, well-coordinated, and streamlined method for managing work requests and enables faster response times within an effective workflow. ![](https://static.leiga.com/pms/image/2024/0617/b61ddbbe2eec4b5ca267d64a11104339/image.png) Key Features and Benefits: 1. **Unified Processing:** This feature consolidates all customer feedback and inquiries into a single location, facilitating tracking and management. 2. **Prompt Response:** Equipped with smart response capabilities, it enables rapid addressing of customer complaints, inquiries, or feedback, hence boosting customer satisfaction levels. 3. **Transparent Workflow:** It promotes transparency in workflow, ensuring every team member is in sync, thereby fostering a collaborative work culture. Application Scenarios: 1. **Customer Service:** Highly effective in customer-centric operations where request management is crucial. 2. **Project Management:** Applicable in scenarios where stakeholders' feedback and recommendations need processing for improved project execution. 3. **Software Development:** Proves instrumental in user feedback management in software development, resulting in product enhancement. With the implementation of the 'Work Request Board', teams can streamline their work-request management, ensure a quick response, cultivate a closely-knit team environment across projects, and take customer relations to greater heights. Do note that each application might require custom adjustments to fit specific use cases better. 
# HR Interview Board The 'HR Interview Board' is an indispensable asset for tracking candidates' progress as they navigate the recruitment process. Much like a radar in pilot navigation, this detailed dashboard offers a comprehensive overview of each candidate's status, enabling you to easily keep up with your recruitment pipeline. ![](https://static.leiga.com/pms/image/2024/0617/0ab80f1e4281461aa703a641b819cba0/image.png) Key Features and Benefits: 1. **Streamlined Tracking:** This feature consolidates all candidate details and their progress into a centralized system, making tracking effortless and efficient. 2. **Status Updates:** Equipped with real-time update capabilities, it facilitates a quick overview of which stage each candidate is at in the interview process, reducing time delays and enhancing decision-making processes. 3. **Transparent Workflow:** The HR Interview Board promotes transparency, keeping everyone in the HR team on the same page and underpinning a cooperative hiring culture. Application Scenarios: 1. **Recruitment Management:** The HR Interview Board excels in recruitment setups where candidate tracking is critical, making the recruitment process more organized and less time-consuming. 2. **HR Teams:** This feature can be used efficiently by HR teams to stay updated on the hiring pipeline and candidate progress. 3. **Talent Acquisition:** It can be instrumental in creating efficient talent acquisition strategies by providing insights into the recruitment funnel. Implementing the 'HR Interview Board' allows teams to efficiently manage candidate information, ensure instant updates, foster a collaborative culture in HR operations, and enhance the overall recruitment process. Remember that each application may need custom modifications to fit specific use cases better. # OKR Management Board The 'OKR Management Board' is vital for planning, tracking, and reviewing OKRs (Objectives and Key Results) efficiently. 
With comprehensive functionality, you can connect development tasks to OKRs to maintain alignment between your daily work and strategic objectives. ![](https://static.leiga.com/pms/image/2024/0617/44e4e077003043828ca0dd8e684a1683/image.png) Key Features and Benefits: 1. **Efficient Planning:** This feature allows for smooth, streamlined planning of Objectives and Key Results. It centralizes all OKR details in one system, making planning simple and effective. 2. **Advanced Tracking:** With real-time update capabilities, the 'OKR Management Board' allows for easy tracking of OKR progress, ensuring milestones are achieved promptly. 3. **Results Review:** This board facilitates a quick and efficient review of key results, allowing teams to recognize achievements and areas for improvement. 4. **Task-Objective Alignment:** By linking development tasks to OKRs, it ensures daily operations are in line with strategic objectives. Application Scenarios: 1. **Performance Management:** This feature is highly beneficial in situations where effective tracking and management of OKRs is a priority. 2. **Strategic Planning:** Applicable for teams aiming to align their strategic planning with daily operations. 3. **Project Management:** It can aid in ensuring projects and tasks are run in alignment with established objectives and key results. The 'OKR Management Board' can revolutionize the way teams plan, track, and review OKRs, and ensure alignment with strategic objectives. Each application might require custom alterations to best fit specific use cases. # Bug Management Board The 'Bug Management Board' is specifically designed for managing product defects through dedicated projects, facilitating increased efficiency and focus for testing teams. ![](https://static.leiga.com/pms/image/2024/0617/e2f70e517a9f4148b8553d613ccac0d4/image.png) Key Features and Benefits: 1. 
**Defect Management:** This feature centralizes all defect information in one system, drastically simplifying the process of bug tracking and resolution. 2. **Automated Updates:** Real-time updates allow for timely tracking of bug statuses, speeding up the overall testing process and enhancing productivity. 3. **Project-Defect Alignment:** Managing defects through separate projects ensures that testing teams can focus on specific areas, thus increasing accuracy and decreasing resolution times. Application Scenarios: 1. **Defect Resolution:** Highly beneficial in situations where tracking, managing, and resolving bugs in the product are priorities. 2. **Testing Optimization:** This can be implemented by testing teams looking to enhance efficiency and precision. 3. **Project Management:** It can assist testing teams in segregating bugs by project, thereby enabling a more streamlined and efficient resolution process. The 'Bug Management Board' can transform the way defects are managed and resolved, leading to a more productive and effective testing process. However, as per the needs of a specific scenario, custom alterations may be required for optimal utility. ![](https://static.leiga.com/pms/image/2024/0530/d7f85423290c4b04866d1488fcee4da5/image.png) # Hardware Development Board The 'Hardware Development Board' is ingeniously designed for managing the hardware development process, streamlining the execution of tasks, and facilitating efficiency. ![](https://static.leiga.com/pms/image/2024/0617/283306c09f9d4ea2b7372bb1b32b585a/image.png) Key Features and Benefits: 1. **Process Design:** This feature offers a structured approach to managing all facets of hardware development, thus making the process more streamlined and efficient. 2. **Real-Time Updates:** This facilitates real-time tracking of hardware development milestones, ensuring that deadlines are met and progress is steady. 3. 
**Task-Hardware Alignment:** Assigning specific tasks to hardware components ensures that each piece is developed in line with the overall plan. Application Scenarios: 1. **Hardware Development:** This board is invaluable where complex hardware development needs to be managed and tracked. 2. **Project Coordination:** It can be immensely useful for project managers to visually track and manage their hardware development tasks. 3. **Product Management:** It can aid in ensuring products involving hardware components are developed in a timely and efficient manner. The 'Hardware Development Board' can revolutionize the way hardware development tasks are managed, keeping all stakeholders in sync with the progress. While its versatile capabilities cater to a wide range of needs, some customization may be required to fit specific use cases. # Content Management Board The 'Content Management Board' is meticulously designed to manage all stages of content production, from the earliest draft to final delivery. ![](https://static.leiga.com/pms/image/2024/0617/2bff1aa166ad4b58854fed3d8a7fb8c0/image.png) Key Features and Benefits: 1. **Content Life-cycle Management:** This feature enables the seamless coordination of all stages of content production, from ideation to distribution, making the process more streamlined. 2. **Real-Time Updates:** This allows for real-time tracking of content status, ensuring deadlines are met and progress is tracked efficiently. 3. **Task-Content Alignment:** Assigning particular tasks to individual pieces of content ensures the smooth transition of content through the different stages of the lifecycle. Application Scenarios: 1. **Content Development:** This board is incredibly useful where systematic planning, tracking, and delivery of content are required. 2. **Editorial Planning:** It can be hugely beneficial for content editors and managers to visually track and manage the content lifecycle. 3. 
**Team Collaboration:** It can aid in ensuring teamwork, where everyone is kept informed about the status of content pieces, from draft to delivery. The 'Content Management Board' can revolutionize the way content is managed and delivered, ensuring coherence among all stakeholders. While it is broadly applicable, some alterations may be required to fit specific use cases. ## More Than Just Free Kanban Board Templates Navigating through the rigorous and multifaceted domain of project management—be it drafting content, tracking bugs, coordinating hardware development, or any task in between—demands a comprehensive project management tool. The Kanban Board Template stands out as an intuitive, versatile solution promoting organization, visualization, and efficiency. In this pursuit of streamlined workflow, Kanban Boards have emerged as an intuitive and efficient tool, demonstrating commendable versatility across a range of project spheres. The magic of the Kanban Board Template lies in its simplicity and flexibility. It serves as a visual depiction of work in progress, tasks to be done, those that have been completed, and areas that require immediate attention. Its visual clarity improves task management and effectively streamlines the process flow. In the project management landscape, there now exists an array of sophisticated tools, each designed to cater to these specific needs. Their promise lies in offering an all-encompassing solution that meets your project management requirements on a single platform, eliminating the need to jump across several applications. The representation of tasks is no longer just about allocation – it is about managing processes at every stage of the project management lifecycle. ![](https://static.leiga.com/pms/image/2024/0510/31b210d6c28444cdae3dbd22d691eeb4/image.png) Ready to elevate your project management practices to new heights? 
Explore how the Leiga project management platform can revolutionize your workflow with customizable Kanban Board Templates tailored to fit your unique project needs. Visit [**Leiga**](https://app.leiga.com/sso/register?referrer=https%3A%2F%2Fwww.leiga.com%2F) today and unlock your project's full potential. Do you want to [**learn more**](https://www.leiga.com/blog) about the agile development process? Check out our regularly updated, free expert advice on mastering your project management skills!
bryany
1,890,705
An intro to Machine Learning
Intro So what is machine learning? Machine learning is the process of teaching machines to do a...
0
2024-06-17T01:44:49
https://dev.to/walkera1/an-intro-to-machine-learning-2i7j
Intro So what is machine learning? Machine learning is the process of teaching machines to do a specific task, or even having a machine teach itself to do a task! Wouldn't that just lead to a robot uprising? Maybe, but probably not. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nwqwnmwg6c84589ihyzv.png) Wouldn’t it be Easier to Train a Human? 1. Machines are better at processing large amounts of information quickly Yes, it takes a while to learn to code and to teach a machine a task that is obvious to a human; machines were never taught right from wrong or how to ride a bike. But once you train a machine to do a simple task, it can think much faster than a human. Machines are also easily scalable: if you need another one, you can make a copy, and because machine brains are made of code, they can be copied perfectly. 2. Machines make fewer errors and are not influenced by biases Training a machine to make no mistakes takes a while, but once it is done, the machine will never forget. Machines cannot define the meaning of life, but assembling the same car one thousand times is simple once the task is broken down. Machines are hyper-specialized at their specific tasks; they are specifically programmed and built to accomplish that task. They can do the task ten times and check for errors a hundred times, every time. They also did not inherit the complex emotions that come from having a flesh body and brain, so they carry no biases of their own to influence their decisions and cause mistakes. All in all, machines are very good at doing the task they were made for, and very bad at anything else. 3. Flexibility Machines are hyper-specialists, but that does not mean they are incapable of change. If the market changes and the specific task changes with it, your once-reliable machine will not be able to keep up. But have no fear: machines are not as afraid of change as people are. 
If there is a small change in the input and output of your task, all you have to do is retrain your machine with slightly different specifications. 4. Safety: machines can do dangerous jobs and take the risk away from human lives So far I have just been talking about data processing and information management, but machines can handle any sort of job they have the body for. Take dangerous jobs: lifting heavy objects, dealing with dangerous customers, or even handling radioactive materials! Machines are able to take up jobs that pose a risk to humans or are just plain uncomfortable. ATMs and online banking have made it easier for customers to access their finances and have replaced the cashier job that always seems to be getting robbed in the movies. Machines can handle radioactive materials during a spill, and machines do not get cancer the way humans have after dealing with radioactive materials. 5. Security bonus Along with checking for errors many times a second, machines can also check for security risks or plain old attacks. Cyber attackers are getting more specialized and covering their tracks better every day, but one little thing out of place that might be missed by a human can be checked and run through a security test every single time by a machine. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9yq044fztnb0ggduxt10.png) Will This Lead to the Terminator Movie? 1. No. “Machines just increase the amount of things we can do without thinking” - Leto II from God Emperor of Dune. Machines and computers are really advanced tools; they make tasks easier for those who use them. Machines and tools have been around forever, and they have never turned around to kill us. There are guns, of course, but those are just very efficient killing tools, not something that is inherently lethal to humans. Computers will never evolve and get angry at their creators. 2. 
History of automation and machine automation. We have used machines to automate tasks since at least 1943, when the Colossus computer was built by the British to break German encryption in WWII, giving them the upper hand in winning the war. Even as early as 27,000 BCE, people in South Korea used sinkers to anchor fishing nets, so that one person did not have to hold a net down the whole time; they could set up multiple fishing nets and do their job much more efficiently. 3. Automation will not make killer robots, but it will enable rich people to make more money All in all, machines are not thinking beings, and they cannot make the decision to murder people. But there is the possibility of making a machine that is more efficient than a gun, and of course machines that make rich people richer. How do you Train a Machine? What kinds of AI learning algorithms are there? And what are they good for? Supervised learning models take in a lot of labeled data to arrive at a conclusion that is already known. First the model makes its own guess, then it evaluates its performance against the correct answer, every time. This model is good for object recognition, say if you want a machine to tell you whether an image contains a dog. At first it has no idea what a dog is, but after seeing many images of dogs it starts to learn to recognize them. Does the image have ears? Is it fluffy? Does it have a tail? Pros: This type has very high accuracy and is easy to evaluate. Cons: This type requires a lot of labeled data, which means a human has to go through a lot of data and label it, which can be time-consuming and expensive. Reinforcement learning models are somewhat what they sound like: you let a machine explore and interact with its environment and give it rewards and penalties based on its actions. This model has been used to teach machines to play video games. 
Insert your AI into a virtual environment, let it play around, and set very specific parameters for rewards and penalties. This one takes longer to learn its task, but once it does, it will be very good at it. Pros: Can learn complex behaviors that may be difficult even to teach to a human; can handle dynamic environments and may be used for decision-making. Cons: Needs careful tuning of hyperparameters; does not generalize tasks well to new environments. Conclusion In conclusion, machine learning can be a great option for automating certain tasks. And no, machines will not make the decision to eliminate humans like in the movie The Terminator. Sources: [What jobs are being taken by machines](https://www.computerhope.com/issues/ch001799.htm) [Five machine learning types](https://www.ibm.com/blog/machine-learning-types/) [10 most popular AI models](https://dzone.com/articles/top-10-most-popular-ai-models) [How do machines learn?](https://jphmpdirect.com/how-do-machines-learn-part-i-they-train/amp/) [Machine training snipette](https://medium.com/snipette/machine-training-54f5386f304b) [How did Skynet come to have automated](https://www.reddit.com/r/Terminator/comments/lyzvg5/how_did_skynet_come_to_have_the_automated/)
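The supervised-learning loop described earlier (make a guess, compare it to the known label, adjust) can be sketched in a few lines of plain Python. The perceptron model and the toy dataset below are illustrative assumptions, not something from the article:

```python
# Minimal supervised-learning sketch: a perceptron learns to label points
# as above (1) or below (0) the line y = x from labeled examples.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x, y), label in zip(samples, labels):
            guess = 1 if w0 * x + w1 * y + b > 0 else 0  # make a guess
            err = label - guess                           # compare to the known answer
            w0 += lr * err * x                            # adjust the weights
            w1 += lr * err * y
            b += lr * err
    return w0, w1, b

# Labeled training data: 1 if the point lies above y = x, else 0
samples = [(0, 1), (1, 2), (2, 3), (1, 0), (2, 1), (3, 2)]
labels = [1, 1, 1, 0, 0, 0]
w0, w1, b = train_perceptron(samples, labels)

def predict(x, y):
    return 1 if w0 * x + w1 * y + b > 0 else 0

print(predict(0, 5), predict(5, 0))  # 1 0
```

Note the trade-off the article mentions: every training point here had to be labeled by hand, which is exactly the labeling cost that makes supervised learning expensive at scale.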
walkera1
1,890,704
Elasticsearch vs OpenSearch: A Technical Guide for Choosing Your Open Source Search Platform
Introduction Since its release under the Apache 2.0 open-source license in 2010,...
0
2024-06-17T01:43:07
https://dev.to/a_lucas/elasticsearch-vs-opensearch-a-technical-guide-for-choosing-your-open-source-search-platform-511e
## Introduction Since its release under the Apache 2.0 open-source license in 2010, Elasticsearch has rapidly emerged as the top choice for enterprise search engines worldwide. Often deployed with Logstash and Kibana, collectively known as the Elastic Stack, it enables a myriad of log analysis use cases such as application observability, security log analysis, and user behavior understanding.<br /> In 2015, Amazon Web Services launched Amazon Elasticsearch Service (Amazon ES) under the open-source license: a managed service that let AWS customers launch scalable Elasticsearch clusters, attach data sources to cluster endpoints, and perform load, processing, analysis, or visualization operations in the cloud.<br /> Despite its popularity, a dispute arose, with Elastic NV opposing Amazon's use of its product and trademarks, which culminated in a trademark infringement and false advertising lawsuit against the tech giant in 2019.<br /> The lawsuit was settled in 2021, leading to two significant developments:

- In January 2021, Elastic NV announced that starting with version 7.11, Elasticsearch would be licensed under the Server Side Public License (SSPL) and the Elastic License, preventing companies from offering Elasticsearch services without partnering directly with Elastic.
- By April 2021, Amazon introduced a new open-source search engine project by forking Elasticsearch's last open-source version (7.10.2). Due to trademark limitations, the new project was named OpenSearch. Amazon also offers this as a cloud service called "Amazon OpenSearch Service."

This article will dive deeper into how these search solutions have evolved and diverged in the past three years, helping you decide which is better suited for your enterprise search applications. 
### Code Example: Running Elasticsearch 7.10.2 with Docker Compose

```
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
```

## Comparing Elasticsearch and OpenSearch

### Common Ground

OpenSearch started as an advanced fork of Elasticsearch, meaning the core functionalities such as search, analytics, and dashboards are the same for both applications. This foundation makes them both suitable for website search, enterprise search, and log analytics use cases.

### Differences to Consider

#### License Changes

Elasticsearch was available under the Apache 2.0 license until its switch to SSPL and Elastic licensing. The SSPL differs from conventional open-source licenses by requiring that anyone offering the licensed software as a service must make the entire source code available.

#### Community Contributions

Both Elasticsearch and OpenSearch maintain active and loyal communities. This support indicates neither will fall behind in functionality or become unsupported anytime soon. Elastic's community is mature, whereas OpenSearch's forum is still growing.

#### Feature Sets

OpenSearch, initially forked from Elasticsearch 7.10.2, had to strip out any features incompatible with the Apache 2.0 license, which included disabling paid features and removing the X-Pack codebase.

#### Security Aspects

While both Elasticsearch and OpenSearch offer substantial security features, OpenSearch includes many of these features by default, whereas some are only accessible to paying Elasticsearch customers.

#### Performance Metrics

Elastic has positioned Elasticsearch as having superior performance metrics across key indicators, highlighting faster text querying and sorting capabilities alongside better storage efficiency. 
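Because OpenSearch was forked from Elasticsearch 7.10.2, the two engines still share the same core Query DSL: a request body written once can be POSTed to either engine's `_search` endpoint. A minimal sketch of building such a shared query body in Python (the index name, field names, and ports are illustrative assumptions, not from the article):

```python
import json

# A bool query combining full-text match with a time-range filter.
# This Query DSL shape is understood by both Elasticsearch and OpenSearch.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"message": "error"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
        }
    },
    "size": 10,
}

body = json.dumps(query)

# The identical body could target either engine (hypothetical local endpoints):
endpoints = [
    "http://localhost:9200/logs/_search",  # Elasticsearch
    "http://localhost:9201/logs/_search",  # OpenSearch, mapped to another port
]
for url in endpoints:
    print("POST", url, "->", body[:40], "...")
```

The practical consequence is that migrating simple search workloads between the two engines is often a matter of changing the endpoint, not rewriting queries; divergence shows up mainly in newer, engine-specific features.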
<a name="b2ORm"></a> #### Pricing Variance Though both Elasticsearch and OpenSearch can be self-managed and downloaded for free, hosting and managing these platforms come at the cost of your resources. Both can also be utilized via managed services like Alibaba Cloud Elasticsearch, often with added benefits and easier management. <a name="uqbr4"></a> ### Code Example: Installing OpenSearch Dashboards ``` docker pull opensearchproject/opensearch-dashboards:1.0.0 docker run -p 5601:5601 opensearchproject/opensearch-dashboards:1.0.0 ``` <a name="rSWHV"></a> ## Exploring Alibaba Cloud Elasticsearch Alibaba Cloud provides a robust Elasticsearch service, which you can leverage for your search and analytics needs. Their service simplifies the process of setting up, managing, and scaling Elasticsearch instances.<br /> To explore this further, visit the [Alibaba Cloud Elasticsearch product page](https://www.alibabacloud.com/en/product/elasticsearch).<br /> Ready to start your journey with Elasticsearch on Alibaba Cloud? Explore our tailored Cloud solutions and services to take the first step towards transforming your data into a visual masterpiece. Please[ [Click here]](https://c.tb.cn/F3.bTfFpS), Embark on Your 30-Day Free Trial.<br />
a_lucas
1,890,701
Sorting game
Noun Adjective pretty beautiful long short happy family mom dad brother pencil baby
0
2024-06-17T01:32:28
https://dev.to/_eb2b23cba5eb9d65d879b/sorting-game-60c
Noun | Adjective

Words to sort: pretty, beautiful, long, short, happy, family, mom, dad, brother, pencil, baby
_eb2b23cba5eb9d65d879b
1,890,700
SQLynx,Best Web-Based SQL Editor
SQLynx is a new database IDE, designed to meet the needs of professional SQL developers. You can...
0
2024-06-17T01:32:21
https://dev.to/concerate/sqlynxbest-web-based-sql-editor-pag
SQLynx is a new database IDE designed to meet the needs of professional SQL developers. You can execute queries in multiple modes, and a local history keeps track of your activity and protects your work from being lost. You can jump to any table or view by name, or navigate to it directly from the SQL code. The tool gives you detailed insight into the behavior of your queries and of the database engine so that you can optimize your queries. SQLynx also lets you write SQL code more quickly by providing context-sensitive code completion; completion can detect the table structure, foreign keys, and even database objects created in the code you are editing. SQLynx can be installed in your own Linux, macOS, and Windows environments. Download: http://www.sqlynx.com/en/#/home/probation/SQLynx
concerate
1,890,698
Data structure
ctrl +/ or \ 모델링 : 논리적 모델링, row, tuple, record field, attribute, variable, feature 단일값속성...
0
2024-06-17T01:26:40
https://dev.to/suhngeun_yang/data-structure-kan
ctrl + / or \
Modeling: logical modeling
row = tuple = record; field = attribute = variable = feature
single-valued attribute <-> multi-valued attribute; simple attribute <-> composite attribute
What is a key: a value satisfying uniqueness and minimality, used when looking up a value in a relation?
What is a key: an attribute that can distinguish tuples
Key types: super key, candidate key, primary key, alternate key, foreign key
Number of non-duplicate rows in a table: cardinality
Mapping cardinality: 1:1, 1:n, n:m
degree
Constraints: unique, not null, check, default, primary
regression: cause and result
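The constraint keywords listed in these notes (unique, not null, check, default, primary key) can be tried out quickly with Python's built-in sqlite3 module; the table and column names below are made-up examples, not from the notes:

```python
import sqlite3

# In-memory database for experimentation (table/column names are illustrative)
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE student (
        id    INTEGER PRIMARY KEY,           -- primary key: unique + not null
        email TEXT NOT NULL UNIQUE,          -- a candidate key enforced via UNIQUE
        age   INTEGER CHECK (age >= 0),      -- check constraint
        grade TEXT DEFAULT 'freshman'        -- default value when omitted
    )
""")
conn.execute("INSERT INTO student (id, email, age) VALUES (1, 'a@x.com', 20)")

# Cardinality = the number of rows (tuples) in the relation
cardinality = conn.execute("SELECT COUNT(*) FROM student").fetchone()[0]
print(cardinality)  # 1

# Violating UNIQUE (duplicate email) is rejected by the engine
try:
    conn.execute("INSERT INTO student (id, email, age) VALUES (2, 'a@x.com', 21)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Note that `degree` from the notes corresponds here to the number of attributes (4 columns), while cardinality is the number of tuples.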
suhngeun_yang
1,890,577
How to Publish API Route File in Laravel 11
In Laravel 11, API route management takes a slightly different approach compared to earlier versions....
0
2024-06-16T21:01:27
https://dev.to/msnmongare/how-to-publish-api-route-file-in-laravel-11-40df
laravel, webdev, beginners, programming
In Laravel 11, API route management takes a slightly different approach compared to earlier versions. New projects lack a dedicated API route file by default, streamlining the setup for applications without public-facing APIs. But fret not! If your project demands an API, publishing the necessary file and setting up Laravel Sanctum for authentication is a breeze. **1. Install the API Package** Open your terminal and navigate to your Laravel project's root directory. Run the following Artisan command: `php artisan install:api` What this command does:

- Creates the `routes/api.php` file, where your API routes will reside.
- Installs Laravel Sanctum, a package offering token-based authentication for your API.
- Generates relevant migrations for Sanctum.
- Adds a configuration file (`config/sanctum.php`) for Sanctum settings.

**2. Define Your API Routes** With the `routes/api.php` file in place, you can start defining your API endpoints using Laravel's routing mechanisms. Here's an example:

```
<?php

use Illuminate\Auth\Middleware\Authenticate;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

Route::get('/user', function (Request $request) {
    return $request->user();
})->middleware(Authenticate::using('sanctum'));
```

By following these steps, you've successfully published the API route file in your Laravel 11 application and laid the foundation for building secure and efficient API endpoints. This approach not only simplifies the initial setup but also ensures a well-structured and organized codebase for your project.
msnmongare
1,890,697
Python version of Commodity Futures Intertemporal Bollinger Hedge Strategy
The previously written intertemporal arbitrage strategy requires manual input of the hedging spread...
0
2024-06-17T01:24:52
https://dev.to/fmzquant/python-version-of-commodity-futures-intertemporal-bollinger-hedge-strategy-c6g
python, strategy, cryptocurrency, fmzquant
The previously written intertemporal arbitrage strategy requires manual input of the hedging spread for opening and closing positions. Judging the price difference is more subjective. In this article, we will change the previous hedging strategy to the strategy of using the BOLL indicator to open and close positions. ``` class Hedge: 'Hedging control class' def __init__(self, q, e, initAccount, symbolA, symbolB, maPeriod, atrRatio, opAmount): self.q = q self.initAccount = initAccount self.status = 0 self.symbolA = symbolA self.symbolB = symbolB self.e = e self.isBusy = False self.maPeriod = maPeriod self.atrRatio = atrRatio self.opAmount = opAmount self.records = [] self.preBarTime = 0 def poll(self): if (self.isBusy or not exchange.IO("status")) or not ext.IsTrading(self.symbolA): Sleep(1000) return insDetailA = exchange.SetContractType(self.symbolA) if not insDetailA: return recordsA = exchange.GetRecords() if not recordsA: return insDetailB = exchange.SetContractType(self.symbolB) if not insDetailB: return recordsB = exchange.GetRecords() if not recordsB: return # Calculate the spread price K line if recordsA[-1]["Time"] != recordsB[-1]["Time"]: return minL = min(len(recordsA), len(recordsB)) rA = recordsA.copy() rB = recordsB.copy() rA.reverse() rB.reverse() count = 0 arrDiff = [] for i in range(minL): arrDiff.append(rB[i]["Close"] - rA[i]["Close"]) arrDiff.reverse() if len(arrDiff) < self.maPeriod: return # Calculate Bollinger Bands indicator boll = TA.BOLL(arrDiff, self.maPeriod, self.atrRatio) ext.PlotLine("upper trail", boll[0][-2], recordsA[-2]["Time"]) ext.PlotLine("middle trail", boll[1][-2], recordsA[-2]["Time"]) ext.PlotLine("lower trail", boll[2][-2], recordsA[-2]["Time"]) ext.PlotLine("Closing price spread", arrDiff[-2], recordsA[-2]["Time"]) LogStatus(_D(), "upper trail:", boll[0][-1], "\n", "middle trail:", boll[1][-1], "\n", "lower trail:", boll[2][-1], "\n", "Current closing price spread:", arrDiff[-1]) action = 0 # Signal trigger if self.status 
== 0: if arrDiff[-1] > boll[0][-1]: Log("Open position A buy B sell", ", A latest price:", recordsA[-1]["Close"], ", B latest price:", recordsB[-1]["Close"], "#FF0000") action = 2 # Add chart markers ext.PlotFlag(recordsA[-1]["Time"], "A buy B sell", "Positive") elif arrDiff[-1] < boll[2][-1]: Log("Open position A sell B buy", ", A latest price:", recordsA[-1]["Close"], ", B latest price:", recordsB[-1]["Close"], "#FF0000") action = 1 # Add chart markers ext.PlotFlag(recordsA[-1]["Time"], "A sell B buy", "Negative") elif self.status == 1 and arrDiff[-1] > boll[1][-1]: Log("Close position A buy B sell", ", A latest price:", recordsA[-1]["Close"], ", B latest price:", recordsB[-1]["Close"], "#FF0000") action = 2 # Add chart markers ext.PlotFlag(recordsA[-1]["Time"], "A buy B sell", "Close Negative") elif self.status == 2 and arrDiff[-1] < boll[1][-1]: Log("Close position A sell B buy", ", A latest price:", recordsA[-1]["Close"], ", B latest price:", recordsB[-1]["Close"], "#FF0000") action = 1 # Add chart markers ext.PlotFlag(recordsA[-1]["Time"], "A sell B buy", "Close Positive") # Execute specific instructions if action == 0: return self.isBusy = True tasks = [] if action == 1: tasks.append([self.symbolA, "sell" if self.status == 0 else "closebuy"]) tasks.append([self.symbolB, "buy" if self.status == 0 else "closesell"]) elif action == 2: tasks.append([self.symbolA, "buy" if self.status == 0 else "closesell"]) tasks.append([self.symbolB, "sell" if self.status == 0 else "closebuy"]) def callBack(task, ret): def callBack(task, ret): self.isBusy = False if task["action"] == "sell": self.status = 2 elif task["action"] == "buy": self.status = 1 else: self.status = 0 account = _C(exchange.GetAccount) LogProfit(account["Balance"] - self.initAccount["Balance"], account) self.q.pushTask(self.e, tasks[1][0], tasks[1][1], self.opAmount, callBack) self.q.pushTask(self.e, tasks[0][0], tasks[0][1], self.opAmount, callBack) def main(): SetErrorFilter("ready|login|timeout") 
Log("Connecting to the trading server...") while not exchange.IO("status"): Sleep(1000) Log("Successfully connected to the trading server") initAccount = _C(exchange.GetAccount) Log(initAccount) def callBack(task, ret): Log(task["desc"], "success" if ret else "failure") q = ext.NewTaskQueue(callBack) p = ext.NewPositionManager() if CoverAll: Log("Start closing all remaining positions...") p.CoverAll() Log("Operation complete") t = Hedge(q, exchange, initAccount, SA, SB, MAPeriod, ATRRatio, OpAmount) while True: q.poll() t.poll() ``` Strategy parameter setting: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e7ccytuc86t0i6cnpx26.png) The overall strategy framework is basically the same as the [Python version of commodity futures intertemporal hedging strategy](https://www.fmz.com/digest-topic/5740), except that the corresponding BOLL indicator parameters are added. While the strategy is running, it fetches the K-line data of the two contracts and computes the close-price spread between them; the spread array is then fed to the TA.BOLL function to calculate the Bollinger Bands. When the spread breaks above the upper Bollinger rail, a hedge position is opened in one direction; when it breaks below the lower rail, the opposite hedge position is opened. While holding a position, the position is closed when the spread crosses back to the middle rail. Backtest: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uhvluolpeudbm1lzip8a.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdtqtqp4nntqsj9utbco.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnsb1r986h8jz656ovre.png) This article is for study purposes only. Complete strategy: https://www.fmz.com/strategy/213826 From: https://www.fmz.com/digest-topic/5780
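The strategy delegates the band computation to FMZ's TA.BOLL. Here is a rough stdlib-only sketch of what such a call computes: a simple moving average plus/minus a multiplier times the standard deviation over a window, with the returned (upper, middle, lower) ordering matching how boll[0], boll[1], boll[2] are indexed in the strategy. The exact details of FMZ's implementation (e.g. which standard-deviation flavor it uses) are assumptions here.

```python
import statistics

def boll(series, period, mult):
    """Bollinger Bands over a price/spread series.

    Returns (upper, middle, lower) lists aligned with `series`.
    Entries before `period` points have accumulated are None,
    mirroring typical TA libraries. Population standard deviation
    is assumed here.
    """
    upper, middle, lower = [], [], []
    for i in range(len(series)):
        if i + 1 < period:
            upper.append(None); middle.append(None); lower.append(None)
            continue
        window = series[i + 1 - period:i + 1]
        ma = sum(window) / period                  # middle rail: simple MA
        sd = statistics.pstdev(window)             # dispersion of the window
        middle.append(ma)
        upper.append(ma + mult * sd)               # upper rail
        lower.append(ma - mult * sd)               # lower rail
    return upper, middle, lower

# Toy spread series with a burst at the end
spread = [0.0, 1.0, 0.0, -1.0, 0.0, 5.0]
up, mid, low = boll(spread, period=5, mult=2.0)
```

In the strategy, the last elements of these three lists play the role of boll[0][-1], boll[1][-1], and boll[2][-1] when deciding whether the spread has crossed a rail.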
fmzquant
1,890,696
NestJS and TypeORM — Efficient Schema-Level Multi-Tenancy with Auto Generated Migrations: A DX Approach
When developing a B2B SaaS application, maintaining strict data isolation while ensuring developer...
0
2024-06-17T01:24:34
https://dev.to/logeek/nestjs-and-typeorm-efficient-schema-level-multi-tenancy-with-auto-generated-migrations-a-dx-approach-jla
nestjs, typeorm, postgres, backend
When developing a B2B SaaS application, maintaining strict data isolation while ensuring developer productivity is paramount. Schema-based multi-tenancy in NestJS with TypeORM and PostgreSQL provides a balanced approach to achieve this, ensuring high security without compromising efficiency. This blog will guide you through setting up a schema-based multi-tenancy system with automatically generated migrations, ensuring your database schema stays in sync with your entity definitions at all times. ### Why Schema-Based Multi-Tenancy? ### The Problem Traditional multi-tenancy approaches, such as the pooling strategy (using a `tenantId` key for scoping requests), often complicate queries and increase the risk of data leaks due to developer oversight. Furthermore, as the number of tenants grows, database performance can degrade, and operations like data extraction or restoration for individual tenants become cumbersome. ### The Solution Schema-based multi-tenancy partitions data by creating a separate schema for each tenant within the same database instance. This setup inherently scopes queries to the tenant’s schema, improving security and simplifying development. ### Setting Up the Groundwork ### Repository Structure Organize your repository with clear distinctions between public and tenant-specific modules. Here’s a simplified structure: ``` src/ modules/ public/ entities/ migrations/ tenant/ entities/ migrations/ tenancy/ tenancy.module.ts tenancy.middleware.ts tenancy.utils.ts ``` ### TypeORM Configuration Configure TypeORM for both public and tenant schemas. Public entities and migrations are straightforward, while tenant-specific configurations must ensure the correct schema is targeted. 
```typescript // public-orm.config.ts export const publicConfig: DataSourceOptions = { type: 'postgres', host: process.env.DB_HOST, port: +process.env.DB_PORT, username: process.env.DB_USERNAME, password: process.env.DB_PASSWORD, database: process.env.DB_NAME, entities: [__dirname + '/../modules/public/entities/*.entity{.ts,.js}'], migrations: [__dirname + '/../modules/public/migrations/*{.ts,.js}'], synchronize: false, }; // tenant-orm.config.ts export const tenantConfig: DataSourceOptions = { ...publicConfig, entities: [__dirname + '/../modules/tenant/entities/*.entity{.ts,.js}'], migrations: [__dirname + '/../modules/tenant/migrations/*{.ts,.js}'], }; ``` ### Tenancy Utils We also set up a simple caching system for our Data Sources; this prevents us from having to instantiate new connections on each incoming request. ```typescript // tenancy.utils.ts import { DataSource, DataSourceOptions } from "typeorm"; import { MAX_CONNECTION_POOL_SIZE } from "../../env"; import { tenantConfig } from "../../tenant-orm.config"; export const tenantConnections: { [schemaName: string]: DataSource } = {}; export async function getTenantConnection( tenantId: string ): Promise<DataSource> { const connectionName = `tenant_${tenantId}`; if (tenantConnections[connectionName]) { const connection = tenantConnections[connectionName]; return connection; } else { const dataSource = new DataSource({ ...tenantConfig, name: connectionName, schema: connectionName, poolSize: MAX_CONNECTION_POOL_SIZE, } as DataSourceOptions); await dataSource.initialize(); tenantConnections[connectionName] = dataSource; return dataSource; } } ``` ### Creating Tenants Creating a tenant involves generating a new schema, running tenant-specific migrations, and setting up tenant-specific connections. 
Here’s an overview of the process in the `TenantsService`: ```typescript // tenants.service.ts import { Injectable } from '@nestjs/common'; import { InjectDataSource } from '@nestjs/typeorm'; import { DataSource } from 'typeorm'; import { Tenant } from './entities/tenant.entity'; @Injectable() export class TenantsService { constructor(@InjectDataSource() private dataSource: DataSource) {} async createTenant(tenantDto: CreateTenantDto): Promise<Tenant> { const tenant = new Tenant(); tenant.name = tenantDto.name; await this.dataSource.getRepository(Tenant).save(tenant); const schemaName = `tenant_${tenant.id}`; await this.dataSource.query(`CREATE SCHEMA ${schemaName}`); // Run migrations for the new schema await this.runMigrations(schemaName); return tenant; } private async runMigrations(schemaName: string) { const tenantConfig = { ...this.dataSource.options, schema: schemaName, }; const tenantDataSource = new DataSource(tenantConfig); await tenantDataSource.initialize(); await tenantDataSource.runMigrations(); await tenantDataSource.destroy(); } } ``` ### Handling Requests ### Middleware for Tenant Identification A middleware extracts the tenant ID from the request header and adds it to the request object for downstream processing. ```typescript // tenancy.middleware.ts import { Injectable, NestMiddleware } from '@nestjs/common'; import { Request, Response, NextFunction } from 'express'; @Injectable() export class TenancyMiddleware implements NestMiddleware { use(req: Request, res: Response, next: NextFunction) { const tenantId = req.headers['x-tenant-id']; if (!tenantId) { return res.status(400).send('Tenant ID is missing'); } req['tenantId'] = tenantId; next(); } } ``` ### Dynamic Connections ### Dependency Injection In your NestJS application, set up dependency injection to provide tenant-specific database connections. This ensures that the correct database connection is used based on the tenant ID. 
```typescript // tenancy.module.ts import { Global, Module, Scope } from "@nestjs/common"; import { REQUEST } from "@nestjs/core"; import { Request } from "express"; import { CONNECTION } from "./tenancy.symbols"; import { getTenantConnection } from "./tenancy.utils"; /** * Note that because of Scope Hierarchy, all injectors of this * provider will be request-scoped by default. Hence there is * no need for example to specify that a consuming tenant-level * service is itself request-scoped. * https://docs.nestjs.com/fundamentals/injection-scopes#scope-hierarchy */ const connectionFactory = { provide: CONNECTION, scope: Scope.REQUEST, useFactory: async (request: Request) => { const { tenantId } = request; if (tenantId) { const connection = await getTenantConnection(tenantId); const queryRunner = await connection.createQueryRunner(); await queryRunner.connect(); return queryRunner.manager; } return null; }, inject: [REQUEST], }; @Global() @Module({ providers: [connectionFactory], exports: [CONNECTION], }) export class TenancyModule {} ``` Services that handle tenant-specific operations must dynamically establish connections using the tenant ID. ```typescript @Injectable({ scope: Scope.REQUEST }) export class CatsService { private catRepository: Repository<Cat>; constructor( @Inject(REQUEST) private readonly request: Request, @Inject(CONNECTION) private readonly connection: DataSource ) { this.catRepository = connection.getRepository(Cat); } async createCat(catDto: CreateCatDto): Promise<Cat> { const cat = new Cat(); cat.name = catDto.name; return this.catRepository.save(cat); } async getAllCats(): Promise<Cat[]> { return this.catRepository.find(); } } ``` ### Automatic Migration Generation This is the real secret sauce of peak DX. Preventing hours of pulling out hair, trying to figure out inconsistencies caused by manually written migrations. 
Automating the generation of migrations for both public and tenant schemas ensures consistency between the database and entity states. Below is a script to generate migrations for public and tenant entities: ```typescript // generateMigrations.ts import { DataSource, DataSourceOptions } from 'typeorm'; import { publicConfig } from './orm.config'; import { tenantConfig } from './tenant-orm.config'; import { camelCase } from 'typeorm/util/StringUtils'; const defaultSchema = 'tenant_default'; if (process.argv.length < 3) { console.log('Please choose either public or tenanted'); process.exit(1); } const environment = process.argv[2]; const type = process.argv[3]; if (['dev', 'development'].includes(environment)) { process.env.NODE_ENV = 'development'; } else if (['test', 'testing'].includes(environment)) { process.env.NODE_ENV = 'test'; } else if (['prod', 'production'].includes(environment)) { process.env.NODE_ENV = 'production'; } else { console.log('Please choose either dev/development, test/testing, or prod/production'); process.exit(1); } let dataSource: DataSource = null; if (type === 'public') { dataSource = new DataSource(publicConfig as DataSourceOptions); } else if (type === 'tenanted') { dataSource = new DataSource({ ...tenantConfig, schema: defaultSchema } as DataSourceOptions); } else { console.log('Please choose either public or tenanted'); process.exit(1); } generateMigrations().then(({ upSqls, downSqls }) => { console.log('Migration generated successfully'); const fileContent = getTemplate('migration', new Date().getTime(), upSqls, downSqls.reverse()); const fs = require('fs'); const path = require('path'); const filePath = path.join(__dirname, `../migrations/${type}/${new Date().getTime()}-migration.ts`); fs.writeFileSync(filePath, fileContent); process.exit(0); }).catch((error) => { console.log(error); process.exit(1); }); function queryParams(parameters: any[] | undefined): string { if (!parameters || !parameters.length) { return ''; } return `, 
${JSON.stringify(parameters)}`; } async function generateMigrations() { await dataSource.initialize(); const logs = await dataSource.driver.createSchemaBuilder().log(); let upSqls: string[] = []; let downSqls: string[] = []; logs.upQueries.forEach((upQuery) => { upSqls.push( `await queryRunner.query(\`${upQuery.query.replace(/`/g, '\\`').replace(new RegExp(defaultSchema, 'g'), '${schema}')}\`${queryParams(upQuery.parameters)});` ); }); logs.downQueries.forEach((downQuery) => { downSqls.push( `await queryRunner.query(\`${downQuery.query.replace(/`/g, '\\`').replace(new RegExp(defaultSchema, 'g'), '${schema}')}\`${queryParams(downQuery.parameters)});` ); }); return { upSqls, downSqls }; } function getTemplate(name: string, timestamp: number, upSqls: string[], downSqls: string[]): string { const migrationName = `${camelCase(name, true)}${timestamp}`; return `import { MigrationInterface, QueryRunner } from 'typeorm'; import { PostgresConnectionOptions } from 'typeorm/driver/postgres/PostgresConnectionOptions'; export class ${migrationName} implements MigrationInterface { name = '${migrationName}' public async up(queryRunner: QueryRunner): Promise<void> { const { schema } = queryRunner.connection.options as PostgresConnectionOptions; ${upSqls.join('\n')} } public async down(queryRunner: QueryRunner): Promise<void> { const { schema } = queryRunner.connection.options as PostgresConnectionOptions; ${downSqls.join('\n')} } } `; } ``` Finally, add an npm script in package.json ``` "migration:generate": "ts-node src/generateMigrations.ts" ``` ```bash bun run migration:generate dev tenanted ``` While auto-generated migrations are a godsend, one must always double-check the changes manually, writing appropriate data migrations if necessary. TypeORM CAN NOT be blindly trusted. ### Conclusion Schema-based multi-tenancy, combined with automated migration generation, streamlines the development process while ensuring data security and isolation. 
By dynamically establishing tenant-specific connections and automating migration processes, we reduce the risk of human error and maintain consistency between the codebase and database schema. This approach allows for scalable and maintainable multi-tenant applications, providing a robust foundation for B2B SaaS solutions. With the setup detailed above, you’re equipped to handle tenant-specific data management efficiently and securely. ### Credits This blog post was inspired by [Thomas van den Berg’s article on schema-based multi-tenancy with NestJS, TypeORM, and PostgreSQL](https://thomasvds.com/schema-based-multitenancy-with-nest-js-type-orm-and-postgres-sql/). Thomas’s insights and detailed explanations provided a solid foundation for understanding and implementing schema-based multi-tenancy, which has greatly contributed to the development of this guide.
logeek
1,890,682
Pyenv for Beginners.
Pyenv is a tool that simplifies using Python on your computer, especially when you...
0
2024-06-17T01:19:47
https://dev.to/theleanz/pyenv-para-iniciantes-3jkl
pyenv, python, ptbr
Pyenv is a tool that simplifies using Python on your computer, especially when you work on different projects that may require distinct Python versions. With just a few commands in the terminal, you can install several versions of Python and switch between them easily. This brings more consistency and stability to your projects, allowing each project to use the specific Python version it needs. **HOW TO INSTALL:** To begin, open PowerShell as administrator and make sure Windows allows script execution. Paste this command into PowerShell: ``` Set-ExecutionPolicy RemoteSigned -Scope CurrentUser ``` Now let's install Pyenv. Paste the following command: ``` Invoke-WebRequest -UseBasicParsing -Uri "https://raw.githubusercontent.com/pyenv-win/pyenv-win/master/pyenv-win/install-pyenv-win.ps1" -OutFile "./install-pyenv-win.ps1"; .\install-pyenv-win.ps1 ``` On some Windows computers, an error message may appear saying that Windows could not add the environment variables. If that happens, paste this command: ``` $pyenvPath = "$HOME\.pyenv\pyenv-win\bin" $shimsPath = "$HOME\.pyenv\pyenv-win\shims" ``` ``` [System.Environment]::SetEnvironmentVariable('PYENV', "$HOME\.pyenv\pyenv-win\", [System.EnvironmentVariableTarget]::User) [System.Environment]::SetEnvironmentVariable('Path', "$pyenvPath;$shimsPath;" + [System.Environment]::GetEnvironmentVariable('Path', [System.EnvironmentVariableTarget]::User), [System.EnvironmentVariableTarget]::User) ``` Done. Close PowerShell and open it again. Check that Pyenv was installed correctly with the command: ``` pyenv --version ``` It will return the current Pyenv version if everything is working correctly. 
To update Pyenv and its plugins, first use: ``` pyenv update ``` Now we can list all the Python versions available for download: ``` pyenv install --list ``` This will return something like this: ``` :: [Info] :: Mirror: https://www.python.org/ftp/python 3.12.1 3.12.2-win32 3.12.2-arm 3.12.2 3.12.3-win32 3.12.3-arm 3.12.3 3.13.0a1-win32 3.13.0a1-arm 3.13.0a1 3.13.0a2-win32 3.13.0a2-arm 3.13.0a2 ``` To install a specific version, use: ``` pyenv install 3.12.3 ``` To use this version globally: ``` pyenv global 3.12.3 ``` Or, if you want to set a version only for a specific project, go to the project root and use the command: ``` pyenv local 3.12.3 ``` To see the versions you have already downloaded: ``` pyenv versions ``` At this link you can find the complete Pyenv documentation if you have further questions: > https://pypi.org/project/pyenv-win/ And that's it. See you next time, and happy studying.
theleanz
1,889,656
The loss functions for Neural Network in PyTorch
A loss function is the function which can get the mean(average) of the sum of the losses(differences)...
0
2024-06-17T00:55:05
https://dev.to/hyperkai/the-loss-functions-for-neural-network-in-pytorch-hf9
pytorch, lossfunction, neuralnetwork, function
A loss function is a function which computes the mean(average) of the sum of the losses(differences) between a model's predictions and true values(train or test data) to optimize a model during training or to evaluate how good a model is during testing. *A loss function is also called a Cost Function or Error Function. There are popular loss functions as shown below: (1) L1 Loss: - can compute the mean(average) of the sum of the absolute losses(differences) between a model's predictions and true values(train and test data). - 's formula is as shown below: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/20qsszmsf0bjexmh8p16.png) - 's pros are as shown below: - Less sensitive to outliers. - We can easily compare the losses because they are only made absolute, so their range is not large. - 's cons are as shown below: - The absolute loss is not differentiable at 0 according to [this post](https://math.stackexchange.com/questions/991475/why-is-the-absolute-value-function-not-differentiable-at-x-0) and [this post](https://datascience.stackexchange.com/questions/71380/if-mean-absolute-loss-is-not-differentiable-how-it-can-be-used-in-neural-networ). - is used for a regression model. - is also called Mean Absolute Error(MAE). - is [L1Loss()](https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html#torch.nn.L1Loss) in PyTorch. (2) L2 Loss: - can compute the mean(average) of the sum of the squared losses(differences) between a model's predictions and true values(train and test data). - 's formula is as shown below: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63wr94q5w1dkr132hiwq.png) - 's pros are as shown below: - All squared losses are differentiable. - 's cons are as shown below: - Sensitive to outliers. - We cannot easily compare the losses because they are squared, so their range is large. - is used for a regression model. - is also called Mean Squared Error(MSE). 
- is [MSELoss()](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html#torch.nn.MSELoss) in PyTorch. (3) Huber Loss: - does a computation similar to either L1 Loss or L2 Loss depending on how the absolute losses(differences) between a model's predictions and true values(train and test data) compare with the `delta` which you set. *Memos: - `delta` is 1.0 by default. - Be careful, the computation is not exactly the same as L1 Loss or L2 Loss according to the formulas below. - 's formula is as shown below. *The 1st case is the L2 Loss-like one and the 2nd case is the L1 Loss-like one: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l90k9w1cswr768d9mzor.png) - 's pros are as shown below: - Less sensitive to outliers. - All losses are differentiable. - We can more easily compare the losses than with L2 Loss because only small losses are squared, so their range is smaller than with L2 Loss. - 's cons are as shown below: - The computation is heavier than L1 Loss and L2 Loss because the formula is more complex. - is used for a regression model. - is [HuberLoss()](https://pytorch.org/docs/stable/generated/torch.nn.HuberLoss.html#torch.nn.HuberLoss) in PyTorch. - with `delta` of 1.0 is the same as Smooth L1 Loss which is [SmoothL1Loss()](https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html) in PyTorch. (4) BCE(Binary Cross Entropy) Loss: - can compute the mean(average) of the sum of the losses(differences) between a model's binary predictions and true binary values(train and test data). - 's formula is as shown below: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jigp72l8v0qakuwvp1km.png) - is used for Binary Classification. *Binary Classification is the technique of classifying data into two classes. - is also called Binary Cross Entropy or Log(Logarithmic) Loss. - is [BCELoss()](https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html#torch.nn.BCELoss) in PyTorch. 
*Memos: - There is also [BCEWithLogitsLoss()](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) which is the combination of [BCE Loss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) and the [Sigmoid Activation Function](https://pytorch.org/docs/stable/generated/torch.nn.Sigmoid.html) in PyTorch. - Combining the Sigmoid activation with BCE Loss makes the computation more numerically stable. (5) Cross Entropy Loss: - can compute the mean(average) of the sum of the losses(differences) between a model's predictions and true values(train and test data). *The predicted probabilities are between 0 and 1. - 's formula is as shown below: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhp1a7wp6jbxwjir8l9i.png) - is used for Multiclass Classification and Computer Vision. *Memos: - Multiclass Classification is the technique of classifying data into multiple classes. - Computer vision is the technology which enables a computer to understand objects. - is [CrossEntropyLoss()](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) in PyTorch.
hyperkai
1,890,671
Data Dynamics: Exploring Information Theory Essentials
Information theory studies efficient data transmition, processing, and storage. Key ideas are entropy...
0
2024-06-17T00:45:04
https://dev.to/kevinbg012/data-dynamics-exploring-information-theory-essentials-1eoa
devchallenge, cschallenge, computerscience, beginners
Information theory studies efficient data transmission, processing, and storage. Key ideas are entropy (uncertainty), data compression (saving space), channel capacity (max info transmission), noise (interference), and error correction (fixing mistakes).
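Entropy, the first of these ideas, can be illustrated with a stdlib-only Python sketch that computes the Shannon entropy (in bits) of an observed symbol sequence; the function name and toy data here are illustrative, not from the article:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Shannon entropy H = -sum(p * log2(p)) of the empirical distribution."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

fair = "HTHTHTHT"    # 50/50 outcomes: maximum uncertainty, 1 bit per symbol
biased = "HHHHHHHT"  # 7/8 vs 1/8: less uncertainty, so fewer bits needed
```

A fair coin yields exactly 1 bit per symbol, a biased coin less; this is also why biased data compresses better.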
kevinbg012
1,890,678
Building a Weather Application with OpenWeatherMap API
Introduction In today's fast-paced world, having access to accurate and up-to-date weather...
0
2024-06-17T00:34:40
https://dev.to/kartikmehta8/building-a-weather-application-with-openweathermap-api-3ldh
javascript, beginners, programming, tutorial
## Introduction In today's fast-paced world, having access to accurate and up-to-date weather information is crucial. Whether you are planning a trip or simply want to know what to wear for the day, a weather application can be an essential tool. With the OpenWeatherMap API, building a weather application has become easier and more efficient. Let's explore the advantages, disadvantages, and features of using this API for your weather application. ## Advantages of Using OpenWeatherMap API 1. **Ease of Integration:** The OpenWeatherMap API provides a simple and user-friendly interface, making it easy for developers to incorporate weather data into their applications. 2. **Comprehensive Weather Data:** It offers a wide range of weather data, including current weather conditions, daily and hourly forecasts, historical data, and more. 3. **Support for Multiple Programming Languages:** This makes it accessible to a larger developer community, enhancing its usability across different platforms. ## Disadvantages of Using OpenWeatherMap API 1. **Limitations on API Calls:** The free version of the API has a limit on the number of calls that can be made per minute, which can be a hindrance for applications that require frequent updates. ## Features of OpenWeatherMap API 1. **Location-Based Weather Data:** Provides weather updates based on specific geographic locations. 2. **Diverse Meteorological Information:** Includes data on precipitation, wind, UV index, and air quality index. 3. **Customization Options:** Allows users to choose the type of data they want to receive, such as Celsius or Fahrenheit for temperature. 4. **Multilingual Support:** Offers data in multiple languages, catering to a global audience. 
### Example of Integrating OpenWeatherMap API in JavaScript ```javascript // Fetch weather data using OpenWeatherMap API fetch('https://api.openweathermap.org/data/2.5/weather?q=London&appid=YOUR_API_KEY') .then(response => response.json()) .then(data => console.log(data)) .catch(error => console.error('Error:', error)); ``` This simple JavaScript code snippet demonstrates how to make a basic API call to OpenWeatherMap to retrieve current weather data for London. Make sure to replace `YOUR_API_KEY` with your actual API key from OpenWeatherMap. ## Conclusion In conclusion, the OpenWeatherMap API is a valuable tool for building a weather application. It offers easy integration, a diverse range of weather data, and useful features to enhance user experience. However, it's important to consider the limitations of the free version when incorporating it into your application. Overall, with the OpenWeatherMap API, you can create a user-friendly and accurate weather application that meets the needs of your users.
kartikmehta8
1,890,675
Install PHP 8.3 and config Apache in Ubuntu 22.04
Install PHP: sudo apt install php8.3 Enter fullscreen mode Exit...
0
2024-06-17T00:33:48
https://dev.to/automata/install-php-83-and-to-config-apache-in-ubuntu-2204-602
## Install PHP: ``` sudo apt install php8.3 ``` ## Install the Apache PHP module: ``` apt install libapache2-mod-php ``` ## Configure the Apache PHP Module By default, the PHP module is automatically enabled; if for some reason it is not enabled, you should run: ``` a2enmod php8.3 ``` To update the PHP module configuration you must modify the files `/etc/apache2/mods-enabled/php8.3.conf` and `/etc/apache2/mods-enabled/php8.*.load`. Then you must restart the Apache server by running the following command: ``` systemctl restart apache2.service ``` ## Confirm the Installation ``` php -v ``` ## Optional Packages ### PHP-CLI Run PHP scripts via the CLI ``` apt install php-cli ``` ### PHP-CGI Execute PHP scripts without installing the Apache PHP module ``` sudo apt install php-cgi ``` ### PHP-MySQL ``` apt install php-mysql ``` ### PHP-PgSQL ``` apt install php-pgsql ```
automata
1,890,676
React Native (Native Modules)
I have the Insta360 Office Enterprise SDK. With its use, please help me achieve connecting and...
0
2024-06-17T00:26:42
https://dev.to/gokulrajalp/react-native-native-modules-4dj0
reactnative, android, javascript, kotlin
I have the Insta360 Office Enterprise SDK. Using it, please help me achieve **connecting to and controlling the Insta360 camera from my React Native Android app,** including starting a recording, stopping it, and retrieving the recorded video.
gokulrajalp
1,890,541
Twilio Challenge: Can You Beat Gemini? AI-Powered Game of Tic Tac Toe
This is a submission for the Twilio Challenge What I Built I built an interactive...
0
2024-06-17T00:20:22
https://dev.to/thatcoolguy/twilio-challenge-can-you-beat-gemini-tic-tac-toe-1em8
devchallenge, twiliochallenge, ai, twilio
*This is a submission for the [Twilio Challenge](https://dev.to/challenges/twilio)* ## What I Built I built an interactive AI-powered game of Tic-Tac-Toe called "Can You Beat Gemini?" This project allows players to test their Tic-Tac-Toe skills against Gemini AI, providing a challenging and fun experience. ### Features **AI Integration**: The game uses the Gemini AI model to analyze the game state and predict the best possible moves to challenge the player effectively. **Real-Time Interaction**: Players interact with the game through digits (1-9). The game captures the player's move and processes it. The AI then responds with its move. This interaction is facilitated using Twilio's Programmable Voice capabilities, making the gameplay seamless and engaging. **Dynamic Game Board**: The game state is visually represented, updating in real time as moves are made. This ensures players can keep track of their own and the AI's moves. ## Demo You can try it out by scanning the QR code in the image or by texting the code **join front-trace** to the number **+14155238886** on WhatsApp. Next, call **+1 (806) 615-3056** to play the Tic Tac Toe game. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o39n4watwqkwm2utv81g.png) ### Video of How it Works {% embed https://vimeo.com/960750355?share=copy %} In the video above, a call is made to **+1 (806) 615-3056** to begin the game. The user selects a number from 1-9 to make their move. Gemini AI then analyzes the game state and responds with its move. The game continues with the user and AI taking turns until a winner emerges. ### Source Code {% github ThatCoolGuyyy/Tic-Tac-Toe-Gemini-Twilio %} ## Twilio and AI I used Twilio to get input from the user in digits (1-9), each representing a cell on the Tic-Tac-Toe board. Then, I used **Gemini AI** to play against the human user. 
Finally, I used the **Twilio Programmable Messaging (WhatsApp Sandbox)** to send the updated board when both AI and human players make their moves. I also used **Twilio Functions** to host the code. ## Additional Prize Categories **Twilio Times Two** - The project uses **Twilio Programmable Messaging (WhatsApp Sandbox)**, **Twilio Programmable Voice** and, **Twilio Functions.** **Entertaining Endeavors** - The project creatively combines a classic game with advanced AI technology, making it not just a simple Tic Tac Toe game but an engaging battle of wits against an AI opponent. By leveraging Twilio to receive user inputs and send real-time game updates via WhatsApp, the project ensures continuous engagement and keeps the players entertained throughout the game.
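The digit-to-move flow described above can be sketched in plain Node.js. This is an illustrative reconstruction, not the project's actual source: the real handler runs inside a Twilio Function and would typically use the TwiML helper library, but the XML shape returned to Twilio is the same, and the function names here are made up.

```javascript
// Illustrative sketch of the voice flow: a keypad digit (1-9) becomes a
// Tic-Tac-Toe move, and the reply is TwiML asking for the next digit.
// Hand-built XML stands in for Twilio's VoiceResponse helper.
function applyMove(board, digit, mark) {
  const i = Number(digit) - 1;                 // digits 1-9 map to cells 0-8
  if (!(i >= 0 && i < 9) || board[i] !== null) return null; // reject bad moves
  const next = board.slice();                  // never mutate the old board
  next[i] = mark;
  return next;
}

function twimlPrompt(text) {
  // <Gather numDigits="1"> collects one DTMF digit and posts it back
  return `<Response><Gather numDigits="1" action="/move"><Say>${text}</Say></Gather></Response>`;
}

// usage: the caller presses 5, taking the center cell
let board = Array(9).fill(null);
board = applyMove(board, "5", "X");
console.log(board[4]);                          // "X"
console.log(twimlPrompt("Your move. Press 1 through 9."));
```

In the real project, the `/move` webhook would pass the updated board to Gemini for its counter-move before replying.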
thatcoolguy
1,890,288
What is state in React?
We write components in React, A LOT of components, and many times we want them to undergo visual...
0
2024-06-16T12:35:17
https://dev.to/ark7/what-is-state-in-react-4fe9
webdev, javascript, beginners, programming
We write components in React, **A LOT** of components, and many times we want them to undergo visual changes as a result of user or computer interactions. For this purpose, a component needs to “remember” things about itself. This is where state comes in. **_State is a component’s memory._** Any exciting application you build is likely to change over the time the user is exploring it. The changes could be as basic as _toggling a dropdown menu_ or as complex as _fetching data from an API._ React provides primitives to manipulate the state of your apps, more specifically components, to make them dynamic. In this lesson, we will learn how to use **state in React**. In React, state is a _built-in object_ that allows components to create and manage their own data. It is an essential concept for building dynamic and interactive user interfaces, enabling components to keep track of information that may change over time and re-render in response to these changes. ## Key Characteristics of State 1. **Mutable:** Unlike props, which are read-only, state is mutable and can be changed. Changes to state trigger re-renders of the component to reflect the new state. 2. **Private**: State is local to the component and cannot be directly accessed or modified by other components. It is intended to encapsulate data that is specific to the component’s functionality. 3. **Managed Internally:** State is managed within the component itself, often initialized in the constructor (for class components) or using hooks (for functional components). ## Understanding the useState Hook in React The _useState_ hook is a fundamental hook in React that allows functional components to manage state. It is used to add state variables to functional components, enabling them to handle dynamic data and trigger re-renders when the state changes. 
## Syntax and Usage The _useState_ hook is called inside a functional component and returns an array with two elements: - **State Variable:** The current value of the state. - **State Setter Function:** A function that updates the state and triggers a re-render. Here is the syntax for using the _useState_ hook: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aniq9jh7cn22bgxcyh7x.png) - _stateValue_: The current state value. - _setStateValue_: The function used to update the state. - _initialValue_: The initial value of the state, which can be any type (string, number, array, object, etc.). ## Example: Counter Component ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lc2fjq4plo4zq2b1xcpu.png) **In this example:** - `const [count, setCount] = useState(0);` initializes a state variable `count` with an initial value of 0. - `setCount` is used to update the `count` state when the button is clicked. ## Best Practices for useState 1. Initialize State with Proper Default Values: - Ensure you initialize state with meaningful default values to avoid issues during rendering. 2. Avoid Complex State Logic: - If a state variable requires complex logic or computations to update, consider using the useReducer hook instead of useState. 3. Use Functions for State Updates Based on Previous State: - When updating state based on the previous state value, pass a function to the state setter to ensure you get the latest state. ``` setCount(prevCount => prevCount + 1); ``` 4. Keep State Local and Manageable: - Try to keep state management as local as possible within components. For global state management, consider using state management libraries like Redux or the Context API. The useState hook is a powerful and essential tool for managing state in React functional components. 
By understanding and utilizing useState effectively, you can create dynamic and interactive components that respond to user input and changing data.
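Since the syntax and Counter examples above are embedded as images, here is a runnable plain-JavaScript toy model of the `[value, setter]` pair and the functional-updater form. This only illustrates the pattern useState exposes; it is not how React implements it.

```javascript
// Toy model of useState: returns a getter and a setter. The setter accepts
// either a plain value or an updater function of the previous state.
// This is NOT React's implementation -- just the pattern useState exposes.
function createUseState(initialValue) {
  let state = initialValue;
  const setState = (update) => {
    state = typeof update === "function" ? update(state) : update;
  };
  return [() => state, setState];
}

// usage: mirrors `const [count, setCount] = useState(0)`
const [getCount, setCount] = createUseState(0);
setCount(1);                   // set a plain value
setCount(prev => prev + 1);    // update based on the previous state
console.log(getCount());       // 2
```

The updater form is why best practice 3 above works: it always receives the latest state, even when several updates are queued.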
ark7
1,890,672
Elevate Your Online Presence with DEV and Herow.io's Website Creation Tools
As a software developer and active member of the DEV Community, I've long recognized the importance...
0
2024-06-17T00:20:03
https://dev.to/kathleenwood/website-creation-tool-on-herow-230m
webdev, javascript, beginners, programming
As a software developer and active member of the DEV Community, I've long recognized the importance of having a strong online presence. Whether you're showcasing your projects, sharing your expertise, or connecting with fellow developers, having a well-designed, user-friendly website is crucial. That's why I was thrilled to discover the powerful website creation apps featured on [Herow.io](https://herow.io/app-category/website-creation/), a comprehensive app discovery platform that has become an indispensable resource in my digital toolkit. **Unlocking the Full Potential of DEV.to** DEV.to is an incredible platform that empowers developers like myself to build an audience, collaborate on projects, and grow our skills. But as valuable as the DEV Community is, having a dedicated website to complement your presence on the platform can take your online visibility to new heights. This is where Herow.io's website creation apps come into play. By leveraging tools like Wix, Squarespace, and WordPress, I've been able to quickly and easily build professional-looking websites that seamlessly integrate with my DEV.to profile. These platforms offer a wide range of customization options, intuitive drag-and-drop editors, and robust e-commerce capabilities – allowing me to create a truly unique and engaging online home for my development work. **Streamlining the Website Building Process** One of the biggest challenges I often faced when it came to building a website was the sheer time and effort required. As a busy developer, I simply didn't have the bandwidth to get bogged down in the technical details of web design and development. But with Herow.io's curated selection of website creation tools, the process has become infinitely more streamlined and efficient. These platforms handle the heavy lifting, from hosting and domain management to SEO optimization and mobile responsiveness. 
This allows me to focus on the content and branding, safe in the knowledge that the technical aspects are being taken care of. **Leveraging AI-Powered Enhancements** But Herow.io's website creation offerings don't stop at just the basics. The platform also features a range of AI-powered tools that can further enhance the functionality and performance of my online presence. For example, I've been able to leverage AI-driven website builders to generate custom, mobile-optimized layouts tailored to my specific needs. And [Herow.io's AI-powered content creation apps](https://herow.io/app-category/website-creation/) have helped me craft more engaging, SEO-friendly copy and visuals to captivate my audience. **Unlocking New Opportunities for Growth** By combining the power of DEV.to's vibrant developer community with the website creation tools available on Herow.io, I've been able to unlock new opportunities for growth and visibility in the digital landscape. Not only have I been able to showcase my projects, share my knowledge, and connect with fellow developers more effectively, but I've also been able to leverage my online presence to attract potential employers, clients, and collaborators. The seamless integration between these two platforms has been a game-changer, allowing me to present a cohesive, professional, and impactful digital identity. If you're a developer looking to elevate your online presence and take your career to new heights, I highly recommend exploring the synergies between DEV.to and Herow.io's website creation offerings. With these powerful tools at your fingertips, the possibilities for growth and success are truly endless.
kathleenwood
1,890,670
UI vs UX: Understanding the Difference
Exploring the Digital Canvas: Decoding UI In the intricate tapestry of digital design, the User...
0
2024-06-17T00:17:41
https://dev.to/dev_king_22/ui-vs-ux-understanding-the-difference-4541
webdev, ui, ux, website
**Exploring the Digital Canvas: Decoding UI** In the intricate tapestry of digital design, the User Interface (UI) serves as the forefront of user interaction. It's the visual aspect of a product that users engage with, akin to an artist's canvas that displays a blend of colors, shapes, and textures. UI is where functionality meets aesthetics, creating a harmonious balance that facilitates user interaction. **Decoding UI (User Interface)** A. **Definition:** At its essence, UI is the sum of all visual components that users interact with on any digital platform. It's the graphical layout of an application—it includes the buttons users click on, the text they read, the images they view, and all other items they interact with. This includes screen layout, transitions, interface animations, and every single micro-interaction. Any sort of visual element, interaction, or animation must all be designed. B. **Focus:** The focus of UI is on the look and feel of the product interface. Designers aim to create interfaces that are not only visually appealing but also conducive to easy navigation and user efficiency. It's about crafting an environment that feels intuitive and reduces the learning curve for users. C. **Components:** The components of UI are varied and multifaceted. They range from screens that provide context and space for content to buttons that afford interaction; from icons that communicate function and process to typography that ensures readability and understanding. D. **Goal:** The goal of UI is to create an aesthetically appealing interface that resonates with users and encourages engagement. It's about making sure that every visual element feels like it belongs on the canvas and contributes to a seamless user journey. UI is not just about creating something beautiful; it's about creating a space where beauty meets purpose. 
A well-designed UI not only looks good but feels right to the user, providing a pathway for them to achieve their goals with ease and satisfaction. **Unraveling UX (User Experience)** While UI is the canvas, UX is the journey across it. User Experience (UX) is about how users interact with and experience a product. It's not just about creating something that looks good but ensuring it also works well and meets the users' needs. A. **Definition:** UX encompasses all aspects of the end-user's interaction with the company, its services, and its products. It's about understanding the users' needs, values, abilities, and limitations. It also takes into account business goals and objectives to create a seamless marriage between user satisfaction and business success. B. **Focus:** The focus of UX is on optimizing a product for effective and enjoyable use. UX designers work to enhance customer satisfaction by improving the usability, accessibility, and pleasure provided in the interaction with the product. C. **Components:** UX involves a lot of research to understand user behavior, followed by design iteration to create the best user flow. Components include usability testing, interaction design, content strategy, and user research. D. **Goal:** The goal of UX is to create products that provide meaningful and relevant experiences to users. This involves integrating the product's design with aspects of branding, usability, function, and even storytelling. **The Convergence of UI and UX** UI and UX are often spoken of together because they are deeply interconnected. While UI focuses on the product's looks, UX focuses on the feel. However, they both aim to create a product that is not only beautiful but also functional. A. Both UI and UX designers work closely to ensure that every aspect of the product aligns with the users' needs. B. They share a common goal: to provide a positive experience for the user. 
**The Collective Impact of UI and UX** The collective impact of UI and UX is profound. Together they create a product that is not only aesthetically pleasing but also functionally sound. A. A well-designed UI can attract users to a product, but it's the UX that keeps them there. B. The collective goal of UI/UX design is to lead users across a digital landscape in a way that feels intuitive and natural while also delivering on all functional requirements. **Conclusion: The Artistry of UI** In the realm of digital product design, UI stands as a pivotal element that shapes the visual journey of users. It is the artistry behind the screens, buttons, and icons that users interact with. A well-executed UI design not only captivates users with its aesthetic appeal but also facilitates a smooth and intuitive navigation experience. The success of UI lies in its ability to translate a product's functionality into a visually coherent and engaging interface that resonates with users, encouraging interaction and exploration. **Conclusion: The Journey of UX** UX is the compass that guides users through the digital landscape. It encompasses every facet of a user's interaction with a product, from the initial discovery to the final touchpoint. UX design is not merely about usability; it's about crafting an experience that feels personal, efficient, and rewarding. The essence of UX lies in its focus on user satisfaction, achieved through meticulous research, empathetic design, and continuous optimization. A product with a superior UX is one that seamlessly blends utility with delight, ensuring that users not only achieve their goals but enjoy the journey. Together, UI and UX form the cornerstone of digital design, each contributing uniquely to the creation of products that are not just functional but also meaningful. They are the silent narrators of a brand's story, inviting users into a world where every click and swipe leads to a fulfilling experience.
dev_king_22
1,892,740
How to Create Excel Spreadsheets with Styling Options Using JavaScript
TLDR: An Open Source fork of SheetJS lets you create and style Excel spreadsheets with JavaScript. ...
0
2024-06-20T19:07:51
https://www.davegray.codes/posts/how-to-create-excel-spreadsheets-with-styling-options-using-javascript
javascript, excel, xlsx
--- title: How to Create Excel Spreadsheets with Styling Options Using JavaScript published: true date: 2024-06-17 00:00:00 UTC tags: javascript,js,excel,xlsx canonical_url: https://www.davegray.codes/posts/how-to-create-excel-spreadsheets-with-styling-options-using-javascript cover_image: https://raw.githubusercontent.com/gitdagray/my-blogposts/main/images/how-to-create-excel-spreadsheets-with-styling-options-using-javascript.png --- **TLDR:** An Open Source fork of SheetJS lets you create and style Excel spreadsheets with JavaScript. ## Creating XLSX Files An XLSX file is a Microsoft Excel spreadsheet. I previously documented [How to Download xlsx Files from a Next.js Route Handler](https://www.davegray.codes/posts/how-to-download-xlsx-files-from-a-nextjs-route-handler). In that blog post, I used the [xlsx](https://docs.sheetjs.com/docs/getting-started/installation/frameworks) package, which is also known as SheetJS. However, I discovered a limitation of the community edition of the xlsx package: It did not allow the row and cell styling that my stakeholders desired in their final product. The PRO edition does allow styling, but I looked for an open source solution and found it. ## xlsx-js-style [xlsx-js-style](https://www.npmjs.com/package/xlsx-js-style) is a fork of SheetJS combined with code from a couple of other open source projects that were adding styles to SheetJS. The only drawback I see is that the last version was published 2 years ago, but it does not have the vulnerability I warned about if you install the [xlsx dependency](https://www.davegray.codes/posts/how-to-download-xlsx-files-from-a-nextjs-route-handler#xlsx-dependency) directly from npm. [xlsx-js-style](https://www.npmjs.com/package/xlsx-js-style) allows you to create Excel spreadsheets with JavaScript and style the cells with borders, colors, alignment, and font styles. 
## Add xlsx-js-style to Your Project Install xlsx-js-style: ```bash npm i xlsx-js-style ``` Next, import xlsx-js-style into your project: ```ts import XLSX from "xlsx-js-style" ``` You can use this package in any JavaScript or TypeScript project. ## Creating and Styling the XLSX Worksheet Here's an example of how to use the xlsx-js-style package: ```ts // define your headers const headers = [ "FirstName", "LastName", "Email", ] // set column widths const colWidths = [ { wch: 30 }, { wch: 30 }, { wch: 50 }, ] // get the data const userData = await getUserData() // early return if no data if (!userData || !userData[0]) { return null } // set header row height // consider if you have vertical headers const headerRowHeight = [ { hpt: 80 }, ] // Dynamically set row height: one entry per data row const dataRowHeight = Array.from({ length: userData.length }, () => ({ hpt: 30 })) // Combine header row height and data row height const rowHeight = [...headerRowHeight, ...dataRowHeight] // Create a new worksheet: const worksheet = XLSX.utils.json_to_sheet([]) // Assign widths to columns worksheet['!cols'] = colWidths // Assign height to rows worksheet['!rows'] = rowHeight // Enable auto-filter for columns worksheet['!autofilter'] = { ref: "A1:C1" } // Add the headers to the worksheet: XLSX.utils.sheet_add_aoa(worksheet, [headers]) // add data to sheet XLSX.utils.sheet_add_json(worksheet, userData, { skipHeader: true, origin: -1 }) // get size of sheet const range = XLSX.utils.decode_range(worksheet["!ref"] ?? 
"") const rowCount = range.e.r const columnCount = range.e.c // Add formatting by looping through data in sheet for (let row = 0; row <= rowCount; row++) { for (let col = 0; col <= columnCount; col++) { const cellRef = XLSX.utils.encode_cell({ r: row, c: col }) // Add this format to every cell worksheet[cellRef].s = { alignment: { horizontal: "left", wrapText: true, }, } // vertical header - 1st column only if (row === 0 && col === 0) { worksheet[cellRef].s = { //spreads in previous cell settings ...worksheet[cellRef].s, alignment: { horizontal: "center", vertical: "center", wrapText: false, textRotation: 180, }, } } // Format headers bold if (row === 0) { worksheet[cellRef].s = { //spreads in previous cell settings ...worksheet[cellRef].s, font: { bold: true }, } } } } return worksheet // After this, add worksheet to workbook, download, etc. // See docs and my previous article ``` ## Other Styles Available I only applied a few of the styles available in the above example. Checkout all of the available style settings on the [npm page for xlsx-js-style](https://www.npmjs.com/package/xlsx-js-style). ## Learning More: - Read my previous article on [How to Download xlsx Files](https://www.davegray.codes/posts/how-to-download-xlsx-files-from-a-nextjs-route-handler) if you want to create these files in a Node.js backend and request them from your frontend. My example is with Next.js, but you could just set it up in Node.js without Next.js. - If you read the previous article or watch the video below, remember to replace the `xlsx` dependency with the `xlsx-js-style` dependency I discussed in this article. Enjoy creating Excel spreadsheets with JavaScript! <hr /> ## Let's Connect! Hi, I'm Dave. I work as a full-time developer, instructor and creator. If you enjoyed this article, you might enjoy my other content, too. 
**My Stuff:** [Courses, Cheat Sheets, Roadmaps](https://courses.davegray.codes/) **My Blog:** [davegray.codes](https://www.davegray.codes/) **YouTube:** [@davegrayteachescode](https://www.youtube.com/davegrayteachescode) **X:** [@yesdavidgray](https://x.com/yesdavidgray) **GitHub:** [gitdagray](https://github.com/gitdagray) **LinkedIn:** [/in/davidagray](https://www.linkedin.com/in/davidagray/) **Patreon:** [Join my Support Team!](https://patreon.com/davegray) **Buy Me A Coffee:** [You will have my sincere gratitude](https://www.buymeacoffee.com/davegray) Thank you for joining me on this journey. Dave
gitdagray
1,890,849
🥙 Callgentive UI - Home-Cooked Software Development
Think of each app as a chat box, but not in text format. The responses are all `widgets`, and requests are just clicks, scrolls, and inputs...
0
2024-06-17T06:22:09
https://docs.callgent.com/blog/generative-ui-home-cooked-software-development
industrytrends, architecturepatterns, productdesign
--- slug: generative-ui-home-cooked-software-development title: 🥙 Callgentive UI - Home-Cooked Software Development published: true description: Think of each app as a chat box, but not in text format. The responses are all `widgets`, and requests are just clicks, scrolls, and inputs... keywords: [Home-Cooked Software, Generative UI, Generative Design, Callgentive UI] date: 2024-06-17 00:00:00 UTC tags: IndustryTrends,ArchitecturePatterns,ProductDesign canonical_url: https://docs.callgent.com/blog/generative-ui-home-cooked-software-development --- AI Large Language Models (LLMs) are ushering in a new golden age of [Home-Cooked Software](https://maggieappleton.com/home-cooked-software), and Callgent is providing the solid infrastructure and solutions to make it happen. Let's take a look at how. ## What is Generative UI? From now on, forget the traditional software paradigm. Imagine if we treated every interaction between a user and an app as a chat, with the only difference being that the content of the chat is not text, but rather a collection of widgets. ![textual chat => widget chat](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gr0gfpcl747ah8x4ii4d.png) Users interact with widgets to send chat requests to AI agents, which then decide how to process the request, encapsulate the result in widgets, and present it to the user in a certain way. The underlying layer of the chat is still textual + agents, but the interaction between the user and the app now takes place through a visual widget UI instead of text. :::tip From the end-user's perspective, a Generative App is no different from a traditional app (although a Generative App may appear more personalized). ::: ## Callgentive UI Here is how Callgent leverages generative UI, which we call `Callgentive UI`. 
A picture is worth a thousand words: ![Callgentive UI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fnqp9p7wnvea5v44l735.png) With predefined business services and standard widgets, the `Home-Cooked Software` works as follows: - The landing pages are a generative UI of widgets generated from initializing agent prompts. - As the user interacts with the widgets, the interactions are transformed into a chat request. - The backend Callgent maps the chat requests to business service invocations and gets response data from the services. - Callgent turns the response data into a set of widgets and presents it to the user. ### How does it work? Callgent's core functionality is native function calling to orchestrate business services. Callgent has server endpoints and client endpoints to adapt to various invocation scenarios. Specifically, a widget-based user interface is just one more type of client endpoint, adapted for human interactions.
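The step where widget interactions become chat requests could look roughly like the sketch below. All field names here are illustrative guesses, not Callgent's actual wire format.

```javascript
// Sketch: transform a widget interaction event into a chat-style request
// for the backend agent. Every field name here is hypothetical.
function widgetEventToChatRequest(event) {
  const value = event.value !== undefined
    ? ` with value ${JSON.stringify(event.value)}`
    : "";
  return {
    role: "user",
    content: `${event.type} on widget "${event.widgetId}"${value}`,
    meta: event, // keep the raw event so the agent can inspect it
  };
}

// usage: a click on a "buy" button becomes a chat request
const req = widgetEventToChatRequest({ type: "click", widgetId: "buy-button" });
console.log(req.content); // 'click on widget "buy-button"'
```

The agent's reply would then travel the opposite direction: structured response data rendered back into widgets rather than text.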
callgent
1,890,668
Microservices Architecture
In a microservices architecture, services communicate with each other primarily through network calls...
0
2024-06-16T23:55:48
https://dev.to/oloruntobi600/microservices-architecture-2nnp
In a microservices architecture, services communicate with each other primarily through network calls over HTTP or other lightweight protocols. There are several common patterns for interservice communication: **RESTful APIs:** Services expose REST endpoints that other services can invoke to request or send data. This is one of the most common and straightforward methods of communication. **Messaging Queues:** Services can use messaging queues like RabbitMQ, Kafka, or AWS SQS to send messages asynchronously. This approach decouples services and enables reliable asynchronous communication. **Service Mesh:** A service mesh like Istio or Linkerd provides a dedicated infrastructure layer for handling service-to-service communication, including load balancing, service discovery, and security features. **gRPC:** A modern, high-performance RPC (Remote Procedure Call) framework that can be used for synchronous communication between microservices, especially when performance and efficiency are critical. **Event-Driven Architecture:** Services can communicate via events and event streams. This allows for decoupled and highly scalable interactions where services react to events rather than direct requests. **Monolithic vs. Microservice Architecture** **Monolithic Architecture:** **Structure:** A single, unified application where all functionalities are tightly coupled and deployed as a single unit. **Scaling:** Scaling requires scaling the entire application, which may not be efficient if only certain components need scaling. **Development:** Easier to develop initially as everything is in one place, but maintenance and scaling can become complex as the application grows. **Technology:** Uses a single technology stack for the entire application. **Microservice Architecture:** **Structure:** Composed of multiple small services, each responsible for a specific business function and deployed independently. **Scaling:** Each service can be scaled independently based on demand, allowing for more efficient resource utilization. 
**Development:** Encourages decentralized development, enabling teams to work independently on services using diverse technology stacks. **Complexity:** Introduces complexities like service discovery, distributed data management, and network latency. **Resilience:** Services can fail independently without affecting the entire system, enhancing overall resilience. **Synchronous Communication in Microservices** Synchronous communication involves direct request-response interactions between services. Key aspects include: **Protocol:** Typically HTTP-based communication where the client sends a request to the server and waits for a response. **Advantages:** Simple to implement and debug, suitable for use cases where an immediate response is required. **Challenges:** Can lead to increased coupling between services, making it harder to scale independently or handle failures gracefully. **Asynchronous Communication in Microservices** Asynchronous communication involves services interacting via messages or events without waiting for an immediate response. Benefits include: **Decoupling:** Services are loosely coupled, enabling independent development, deployment, and scaling. **Scalability:** Supports handling bursts of traffic and processing tasks in the background without blocking the main flow. **Reliability:** Reduces the impact of failures or downtime in downstream services by buffering messages and processing them when services are available. **Event-Driven Development in Microservices** Event-Driven Development (EDD) focuses on handling events and messages as the primary means of communication and coordination between services. Key principles include: **Event Production:** Services produce events when specific actions or changes occur, signaling something of interest to other services. **Event Consumption:** Services consume events and react accordingly, updating their own state or triggering further actions. 
**Loose Coupling:** Services are decoupled because they only need to react to events they're interested in, without direct dependencies on other services' internal workings. **Scalability:** Event-driven architectures can be highly scalable as services can independently consume and react to events. In conclusion, microservices architectures emphasize decentralized communication, allowing for flexibility, scalability, and resilience compared to monolithic architectures. Synchronous and asynchronous communication patterns, along with event-driven development principles, are instrumental in achieving these architectural goals.
oloruntobi600
1,890,666
2 Tips to make your Github Repository Stand Out
Have you ever wondered, "How can I make my GitHub repository stand out?" Well, here are 2 tips to do...
27,749
2024-06-16T23:52:17
https://dev.to/nhelchitnis/2-tips-to-make-your-github-repository-stand-out-n3k
tips, beginners
Have you ever wondered, "How can I make my GitHub repository stand out?" Well, here are 2 tips to do just that. ### Tip One: Get Rid of Everything You Are Not Using Getting rid of things you are not using keeps people from clicking on something and then just leaving. ### Tip Two: Have a Brief but In-Depth Description A brief but in-depth description will bring more people to your repository to star and fork it. Well, that's it. See you next time, bye!
nhelchitnis
1,890,664
BITCOIN, USDT, RECOVERY SERVICES FOR FROZEN INVESTMENT ARE MADE POSSIBLE BY OPTIMISTIC HACKER GAIUS. A CYBER RECOVERY MASTER
A post by Dan Avery
0
2024-06-16T23:50:09
https://dev.to/dan_avery_96212ad4756b10f/bitcoin-usdt-recovery-services-for-frozen-investment-are-made-possible-by-optimistic-hacker-gaius-a-cyber-recovery-master-405g
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ui91fedj0uy0g2bqojxq.jpeg)
dan_avery_96212ad4756b10f
1,890,662
i got 100x % increase in my side projects in just 9 months - here's how i did it
invest in yourself before anywhere else 💰 investing in the share market can give you returns of...
0
2024-06-16T23:48:16
https://dev.to/darkinventor/i-got-100x-increase-in-my-side-projects-in-just-9-months-heres-how-i-did-it-37g4
sideprojects, webdev, javascript, programming
**invest in yourself before anywhere else 💰**

investing in the share market can give you returns of 12-20%, but **investing in yourself can give you returns of 100% to infinite.** yes, you read it right. there are no limits.

if you are still wondering, let me share my own story. here’s how much i have been making over the last few months:

- **february**: $2.2k
- **march**: $5.4k
- **april**: $4k
- **may**: $4.4k

`this is a 130% increase in monthly income compared to what i was making 5 months ago`

here's how i did it: i was consistently making $2k per month before i took a 5-month vacation to india to upskill and educate myself. i made very little during that time, almost around $500 per month, which was a steep decrease compared to what i was making before. but now, after 5 months, i've already reached what i was making before (from $500 back to $2k). even better, i am making almost 120% more than what i was consistently making. i've been averaging $4k per month continuously for the last 3 months.

from decreasing income on average from $2k/mo to $500/mo, to moving back to making $2k/mo on average, to moving it to around $4k/mo (from $500 to $4k, which is a rapid increase of 700%).

----

> investing in the share market is really good; it can give you a return of around 10-15% easily. but to gain more immediate and extremely high interest, invest in yourself.

----

i paid $2k for my ticket to india, which was covered by the money i made in 5 months while i was there. this means the trip to india was literally free for me. the only loss was i couldn’t save $1k/mo for 5 months (my necessary spending costs me only $1k/mo to survive), which i could have while living here in canada. it also meant that i lost the opportunity of saving $5k in india. but who cares? my income jumped from $2k to $4k, which is literally double what i was used to making.

my risk of not saving $5k in 5 months allowed me to invest time in myself, travel, and educate myself. because of all this, i covered $5k and saved that amount in just 1.5 months rather than in 5 months.

i learned this year to: invest in myself, take risks, and take long vacations (they're like mini-retirements). making more money requires more education, investment in yourself, and most importantly, vacations and time with your family.

note: the numbers i am making per month are not consistent. the moment i start becoming lazy, my income stops coming in, and the number starts decreasing back to $2k.

ps- the most important lesson of the last 9 months is that you shouldn’t leave a full-time job to make your startup and wallet successful. humans do well when they are with their family and have their job. having family and a job gives security, peace, and satisfaction, which in turn allows us to perform well.

pps- i am not here to fight. i can be wrong, and you might not agree with some of the things i am saying here. FYI, these monthly numbers i shared are a combined income from my full-time job ($2k/mo) and what my side projects are making.

----

Bonus- here's the song i am listening to right now, [click here to check it out.](https://www.youtube.com/watch?v=zhPayDylYSc&ab_channel=buildspace)

- kathan
darkinventor
1,890,657
Making a Multiplayer WEB RPG - Part 10: Customizing Characters
Hi All, I've been working on character customization systems. It is nearly complete and includes a...
27,892
2024-06-16T23:37:38
https://dev.to/orion3d/making-a-multiplayer-web-rpg-part-10-customizing-characters-2nhd
gamedev, babylonjs, colyseus
Hi All, I've been working on character customization systems. It is nearly complete and includes a big performance boost compared to the previous method.

---

I have 2 types of items (I’m sure none of this is new to you guys, but sharing is caring 🙂):

- **dynamic items** that I attach via code to the relevant bone (weapon, hat, shields, etc…)
- **embedded items**, which are items rigged in Blender (head & armor currently); not much choice there, as a skirt, for example, needs to move with the legs (not possible with the dynamic method)

So I now have one single VAT running that is used by all my “humanoid” characters. Currently, I can mix and match the following very easily:

- 20 materials,
- 6 heads,
- 3 dynamic items (weapon/shield/helm,head)
- 2 embedded items (white robe, purple robe)

The next step is being able to customize the material for the “embedded” items; currently I apply one material to the full mesh (including the embedded items). Performance is still good (50-60fps / 40 draw calls approx).

Here are a few shots:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/68r1rffxm159va91g3i9.jpeg)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qt09kp6cxnnnbijkh5hk.jpeg)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ugpgc1nqg30jykud9ha.jpeg)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f8tka15tn61r3mmy6wpn.png)
orion3d
1,890,654
Heroes of DDD: BEING perspective. What am I?
Cover sources: Heroes of Might and Magic III (Ubisoft) and Heroes III Board Game (Archon...
27,739
2024-06-16T23:27:43
https://dev.to/mateusznowak/heroes-of-ddd-being-perspective-what-am-i-3jgc
ddd, oop, complexity, kotlin
<figcaption>Cover sources: Heroes of Might and Magic III (Ubisoft) and Heroes III Board Game (Archon Studio).</figcaption> </br>

## 🫣 BEING perspective: what am I?

Creatures in Heroes III have their own names and specific levels. Each belongs to a different faction, but there are also neutral creatures. Some creatures can be upgraded. Each creature has a defined recruitment cost and a base growth rate (how many can be recruited each week). Creatures have specific stats (like attack, defense, hit points etc.), which are especially important during combats.

### 📜 Playing a board game? Start with the instructions!

When you focus on nouns, data structures and database tables for creatures, what clarifying questions can you ask based on this description to create a better "model"? And what answers are you likely to get?

- _Question_: What does a creature have? _Answer_: A name...
- _Question_: How many characters can a creature's name have? _Answer_: It varies... probably up to 50 to keep the UI from breaking.
- _Question_: How do we define the level? _Answer_: A number from 1 to 7.
- _Question_: How many upgrades can a creature have? _Answer_: From 0 to 1.

Since we did so well with defining "what a creature has," we could continue and draw out the entire database, but see what King Julien has to say about that:

</br>

{% youtube wqRnt1uwEjY %}

The only advantage of this approach is that I didn't waste much time on it. ChatGPT did the work quickly and beautifully laid out the tables. Such programmers will likely be replaced by AI soon. But you want to stay in the industry, right? So keep reading.

![Heroes3_DatabaseModel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhqw9egzlu5nrozcdqkt.png)

<figcaption>One huge database model for the entire Heroes III created by ChatGPT. But what's the point?
In college, I would probably get an A grade for this, but in practice, it deserves no more than a D.</figcaption> </br>

**Starting a project with a database schema is like opening a board game (by the way, a Heroes III version in this form was recently released), looking at the contents: "OK, I have 50 cards, and I have a board"... and trying to play without reading the instructions.**

It's no surprise that with a similar approach to programming (without understanding the domain and its processes), we'll apply a solution suitable for CRUD everywhere — most board games also typically have cards and a board...

From the database tables diagram, do you already know what you need to do first? Or where you can parallelize the work of developers? Can separate teams work on it? Or what dependencies or user interface designs are still missing? And finally: does it bring you closer to understanding the business processes? Do you know why creatures have levels, or what is the point of upgrading them?

First, let's focus on understanding the essence of the problem. We'll deal with the rest, like database details, later.

### 😌 Relax, relax... I do OOP and have classes, not tables!

So how do we model an "object-oriented" creature? Below, you can see the implementation in Kotlin and the object parsed into JSON (with example values).
```kotlin
data class Creature(
    val id: String,
    val level: Int,
    val faction: Faction,
    val growth: Int,
    val upgrades: Set<Creature>,
    val cost: Resources,
    val attack: Int,
    val defense: Int,
    val damage: Range,
    val health: Int,
    val speed: Int,
    val shots: Int = 0,
    val size: Int = 1,
    val spells: Set<Spell>,
    val abilities: Set<SpecialAbility>,
)
```

```json
{
  "id": "Angel",
  "level": 7,
  "faction": "castle",
  "growth": 1,
  "upgrades": ["Archangels"],
  "cost": {
    "gold": 3000,
    "crystal": 1,
    "wood": 0,
    "ore": 0,
    "sulfur": 0,
    "mercury": 0,
    "gems": 0
  },
  "attack": 20,
  "defense": 20,
  "damage": { "low": 30, "high": 50 },
  "health": 200,
  "speed": 12,
  "shots": 0,
  "size": 1,
  "spells": [],
  "abilities": [
    { "type": "HATE", "creatures": ["Devil", "ArchDevil"] },
    { "type": "ConstRaisesMorale", "amount": 1 }
  ]
}
```

</br>

### 🔴 Accidental complexity and all tests red

Are the variables below well-named? Yes. Do all these attributes belong to the creature or are they related to it? Yes. So, is there anything wrong with this solution? Before we answer that question, let's listen to two dialogues between a programmer and domain experts:

- Expert #1: "A hero ALWAYS belongs to a player."
- Programmer: "Are you sure? Could there ever be a case where a hero doesn't belong to a player?"
- Expert #1: "No, no... a hero on the map is always under some player's flag. That will NEVER change."

Satisfied, you create the perfect model that meets business requirements, in collaboration with experts, feeling that this is truly Clean Code:

```kotlin
data class Hero(
    val id: HeroId,
    val player: PlayerId
)
```

After some time, you talk to another expert:

- Expert #2: "In the Tavern, we buy a hero who doesn't belong to any player." 🤯
- Programmer: "What? But Expert #1 said that a hero ALWAYS belongs to a player."
- Expert #2: "He must be wrong... I've been working here longer and I know better."

What happens to your code in such a case? You introduce a modification because the code must reflect the business logic.
```kotlin
data class Hero(
    val id: HeroId,
    val player: PlayerId?, // null only in tavern
    val cost: Resources // cost of hiring a hero
)
```

So, it's finally a small change — just adding the possibility of `null` in the `player` field. Is that the end of the work? Not at all! Now, all the tests that created a Hero instance are failing and need to be updated (which means they no longer protect you from regression). Additionally, everywhere you reference `hero.player`, you have to introduce an `if` statement to check against null.

![EyesGif](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o0upov55p6hpv80wm5y9.gif)

<figcaption>How do such changes hit your project?</figcaption> </br>

It's still manageable if you're using Kotlin (or another language with sophisticated [Null Safety](https://kotlinlang.org/docs/null-safety.html)) and the compiler alerts you, rather than causing a bug in production.

**Even if you manage to handle this, you now want to merge the changes and... bam (as my 1-year-old son says)! It turns out another developer has already overwritten your changes, and we have a merge conflict! Your morale and work efficiency drop like a hero's army in Heroes III when you mix different factions. Wouldn't it be simpler, instead of modifying, to apply the Open-Closed Principle at the architecture level and have two separate models? Models that even separate teams of developers can work on without any issues.**

Let's see an example below.

![Heroes3 Hero Bounded Context](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/439pwl1apb8prvpyttks.png)

<figcaption>Actually, splitting the model increases the number of classes, but ultimately creates more models with less complexity.</figcaption> </br>

### 🐉 Bad habits — here be dragons!

Different attributes describing a Hero are needed in the context of the tavern compared to when exploring the adventure map.
Simply add another namespace/module, or whatever it's called in your environment, and create two separate classes. Nothing is limiting you here. Unless, of course, you're still thinking in terms of the `Hero` table and adding a nullable column, or the relationships between tables. This is how we were taught from the beginning, and fighting bad habits is the hardest part. Those habits are like dangerous dragons, but if you defeat them — you and your project will receive greater rewards than after beating a Dragon Utopia.

![DragonUtopiaHeroes 3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0hqe1zth8j3ginjl9x0q.gif)

<figcaption>Dragon Utopia — In Heroes III, if you want to acquire rare artifacts, you must defeat the dragons guarding them.</figcaption> </br>

**When you hear conflicting information from experts about the same noun, trying to reconcile it in a single model introduces what's known as accidental complexity.**

Now, every programmer, even one who knows almost the entire system (but not the tavern and hero hiring), will ask you: "Dude, why do I have to check for null here?" And that's the least dangerous form of this problem... both of you will just waste some time. As a result, such problems exist only in the code and make the domain itself harder to understand than it really is.

Is one of the experts lying to you or incompetent? Not at all! In the tavern, you can buy a hero, but you can't buy any of the heroes moving on the map, because they belong to you or another player. Such inconsistencies in "business" statements are a clear sign that we should now talk about behaviors rather than move forward with nouns. The noun "hero" is the same, but the behaviors — the operations that can be performed on the object — are completely different, depending on the context.

---

If you want to actively participate in modeling the solution or just don't want to miss the upcoming parts, sign up for my mailing list [HERE](https://subscribepage.io/mateusznowak).
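The two-model split can be sketched in a few lines; this sketch uses Java records (which map almost one-to-one to Kotlin data classes), and the field names are illustrative assumptions rather than the article's final model:

```java
// Tavern context: a hero for sale has a hiring cost but no owner yet.
// (Field names are assumptions for this sketch.)
record TavernHero(String id, int goldCost) {}

// Adventure-map context: a hero on the map ALWAYS belongs to a player,
// so this type simply cannot represent an ownerless hero — no nulls, no ifs.
record MapHero(String id, String playerId) {}

class Tavern {
    // Hiring crosses the context boundary: a tavern model goes in,
    // a map model bound to the buying player comes out.
    static MapHero hire(TavernHero forSale, String playerId) {
        return new MapHero(forSale.id(), playerId);
    }
}
```

Neither model needs a nullable `player` field, and teams owning each context can evolve their type independently.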
mateusznowak
1,890,524
How to automate provisioning in Proxmox Using Cloud images
There are many ways to create virtual machines. In this article, we aim to leverage cloud images as...
0
2024-06-16T23:24:56
https://dev.to/caiocampoos/how-to-automate-provisioning-in-proxmox-using-cloud-images-7do
proxmox, cloud, virtualization, cloudinit
There are many ways to create virtual machines. In this article, we aim to leverage cloud images as our primary method for automating provisioning on Proxmox.

## Cloud Images

Cloud images are pre-configured disk images designed to be used in virtualized environments, such as cloud infrastructure or virtual machine hosts. These images include a minimal operating system installation and are optimized for quick deployment and scalability. They support quick configuration tools; we will be using cloud-init in our example.

## Script

You'll need a Proxmox Virtual Environment node with SSH access. In our script, we will perform some actions to download and prepare a cloud image.

- Variables and packages

First, we set up some variables and install some useful packages we will be using in the script.

```bash
### variables
VM_TEMPLATE_ID=999
TEMPLATE_NAME='ubuntu-2204-template'
UBUNTU_IMAGE='ubuntu-22.04-server-cloudimg-amd64-disk-kvm.img'
UBUNTU_IMAGE_QCOW2='ubuntu-22.04.qcow2'
USERNAME='cap'
PASSWORD='12345'
MEMORY='4096'
CPUS='2'

apt update -y && apt install nano wget curl libguestfs-tools -y
```

- Idempotency

Run a couple of commands to make our script idempotent. We want a quick way of iterating, so we will be deleting the template in case of changes.

```
# remove old image
rm -rfv ${UBUNTU_IMAGE}

# remove old template container - WILL DESTROY COMPLETELY
qm destroy ${VM_TEMPLATE_ID} --destroy-unreferenced-disks 1 --purge 1
```

- Download the image:

```
# download new image
wget http://cloud-images.ubuntu.com/releases/22.04/release/${UBUNTU_IMAGE}
```

- Customize the image

Add the QEMU guest agent to the image so we don't need to install it later. Here, you can use virt-customize to add other tools and have them pre-installed on the image.

```bash
virt-customize -a ${UBUNTU_IMAGE} --install qemu-guest-agent
```

The QEMU guest agent is a daemon installed in the guest that helps us get information about our VMs in the Proxmox environment.
See more documentation on [PVE docs](https://pve.proxmox.com/wiki/Qemu-guest-agent).

- Change the image extension

Change the image file's extension from img to qcow2; if you don't rename it, QEMU will not work in some versions of Proxmox.

```bash
mv ${UBUNTU_IMAGE} ${UBUNTU_IMAGE_QCOW2}
```

- Resize

Now we resize the image to a generic size. It doesn't really matter now; we will configure each clone based on the template later.

- Create the VM

Again, the resource values here are not really important. Set up a generic value for memory and CPU. We also set up a network interface. Another key configuration is the storage controller; we use virtio-scsi as it covers more use cases. To learn more about virtualization of 'physical' devices and the emulated storage controllers of the virtio family, go to [Virtio](https://docs.oasis-open.org/virtio/virtio/v1.1/virtio-v1.1.html)

```bash
qm create ${VM_TEMPLATE_ID} --memory ${MEMORY} --cores ${CPUS} --net0 virtio,bridge=vmbr0 --name ${TEMPLATE_NAME} --scsihw virtio-scsi-pci
```

After creating the VM, we need to configure it and convert it to a template so it can be cloned.
- Configure the VM

We will perform, in order, the following configuration:

- Import the image to a disk and attach it to the VM
- Attach a cloud-init drive to the VM via the IDE interface
- Configure VGA output for the console on serial0
- Set up DHCP for the network so we have a connection
- Enable the QEMU agent
- Set the default user and password
- Add SSH keys so we can have SSH access; it will use the same SSH keys from the host on which you execute the script

```bash
qm set ${VM_TEMPLATE_ID} --scsi0 local-lvm:0,import-from=/root/${UBUNTU_IMAGE_QCOW2}
qm set ${VM_TEMPLATE_ID} --ide2 local-lvm:cloudinit
qm set ${VM_TEMPLATE_ID} --boot order=scsi0
qm set ${VM_TEMPLATE_ID} --serial0 socket --vga serial0
qm set ${VM_TEMPLATE_ID} --ipconfig0 ip=dhcp
qm set ${VM_TEMPLATE_ID} --agent enabled=1
qm set ${VM_TEMPLATE_ID} -ciuser ${USERNAME}
qm set ${VM_TEMPLATE_ID} -cipassword ${PASSWORD}
qm set ${VM_TEMPLATE_ID} --sshkeys ~/.ssh/authorized_keys
```

- Convert to a template

In the last step, we convert this VM to a template. A template is a VM with a frozen state used as a base for many machines. It provides a stable starting point and makes provisioning automation easy.

```bash
qm template ${VM_TEMPLATE_ID}
```

The complete script can be found in my homelab [github](https://github.com/caiocampoos/homelab-k8s). To run it over SSH, we can pipe the script via ssh by running:

```bash
cat ubuntu-22.04.sh | ssh root@your-PVE-ip /bin/bash
```

The result should be a template on your Proxmox node:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0d4ul5s13832l02af2hh.gif)

Now you can run any number of copies; we can use `qm set` to configure any variable on the clone that we want to change from the template.
```bash
# clone
qm clone ${VM_TEMPLATE_ID} ${VM_ID} --name ${VM_NAME}

# configure the vm
qm set ${VM_ID} --scsi1 local-lvm:40
qm set ${VM_ID} --memory ${MEMORY}
qm set ${VM_ID} --cores ${CPUS}
qm set ${VM_ID} --boot order=scsi0
qm set ${VM_ID} --serial0 socket --vga serial0
qm set ${VM_ID} --ipconfig0 ip=dhcp
qm set ${VM_ID} --agent enabled=1

# start
qm start ${VM_ID}
```

Running this again over SSH should create a new VM based on the template:

```bash
cat clone-templates.sh | ssh root@your-PVE-ip /bin/bash
```

It's as fast as it gets: in less than 30 seconds you have a VM running:

![VM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5c3igfvkwcfvp353nxmr.png)

Now that we know how to create VMs based on templates, we can explore more options for remote code execution for provisioning.

Socials: [Twitter](https://twitter.com/caiocbrr) [Linkedin](https://www.linkedin.com/in/caio-campos-borges-rosa-392588143/) [Github](https://github.com/caiocampoos)

Photo by <a href="https://unsplash.com/@arnosenoner?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Arno Senoner</a> on <a href="https://unsplash.com/photos/blue-and-brown-metal-bridge-yqu6tJkSQ_k?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
caiocampoos
1,890,515
Parallelism and Concurrency 101
Let's explore concepts of parallelism and concurrency.
0
2024-06-16T23:10:46
https://dev.to/hugaomarques/paralelismo-e-concorrencia-101-2pgc
java, concurrency, parallelism, ptbr
---
title: Parallelism and Concurrency 101
published: true
description: Let's explore concepts of parallelism and concurrency.
tags: #java #concurrency #parallelism #ptbr
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-16 19:13 +0000
---

> The term 101 is used here in the US to denote something for beginners, for people just starting out. It's common for the first courses in college to be named <topic 101>.

Recently, I have been writing a lot on X about some lessons learned while dealing with concurrent programming in Java. This post is the first in a series documenting what I learned during this period. The subject is vast, full of nuances and "it depends". It's good to start with the basics so we have a common terminology. Shall we?

## Concurrency and Parallelism

**Concurrency** refers to a system's ability to handle multiple tasks at the same time. It does not necessarily mean these tasks are being executed simultaneously, but rather that the system can switch quickly between tasks, so it looks like they are happening at the same time. An example of concurrency is how your OS shares CPU time between several programs. This gives the impression that they are all running at the same time (maybe they are, if you have several cores), but in reality your CPU is just VERY fast at switching whose turn it is on the CPU.

**Parallelism**, on the other hand, refers to the simultaneous execution of multiple tasks. This requires multiple CPU cores, where each core can execute a different task at the same time. Parallelism is about doing several things at the same time. An example would be the case where you are running the several programs above and they really are running in parallel: each one runs in isolation on its own CPU or core without having to "yield" its turn.
<figure>
  <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/liy4nb34fxlz22pucrf9.png" alt="Concurrent vs parallel processing"/>
  <figcaption>*Concurrent vs parallel processing. Source: Baeldung CS.*</figcaption>
</figure>

Notice in the example above how the CPU jumps from task to task in concurrent processing. This is done so fast that it feels like the tasks are executed at the same time. In parallel processing, each core is responsible for one task, and those tasks really do happen at the same time.

Applications can be:

1. Neither concurrent nor parallel.
2. Concurrent, but not parallel.
3. Parallel, but not concurrent.
4. Concurrent and parallel.

#### Neither concurrent nor parallel

Imagine a program that runs sequentially on a single thread. Each command executes in sequence until the end.

#### Concurrent, but not parallel

An application can be concurrent without being parallel. This means it can manage and quickly switch between several tasks, but does not execute them simultaneously. Example: a time-sharing system on a single CPU core. Here, the operating system alternates between different tasks so quickly that it seems they are all running at the same time. In reality, the processor is executing only one task at a time.

#### Parallel, but not concurrent

An application can be parallel without being concurrent. This means it executes several tasks simultaneously, but those tasks are not managed in a way that lets them cooperate or compete for the same resource. Example: an application performing massively parallel computations on a computer cluster, where each node runs a different computation independently. There is no interaction between the tasks while they are being executed.

#### Concurrent and parallel

An application can be concurrent and parallel at the same time.
This means it not only manages several tasks at the same time, but also executes some of them simultaneously on multiple cores or processors. Example: a web server handling multiple client requests. It can use threads to manage several requests at the same time while it is busy doing IO and, if it is running on a multi-core system, it can process several of these requests simultaneously (parallelism).

### Benefits

The classic example of the benefit of concurrency is that your CPU can work on something else while the task it was executing waits for some other resource. For example, imagine you are downloading data over the network. Most of that work is IO (input/output) and does not need the CPU. In the meantime, instead of waiting, the CPU can jump to another task.

<figure>
  <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ztbdcjd5eymaczc0h4sg.png" alt="Concurrent processing vs IO"/>
  <figcaption>*The CPU alternates between tasks while waiting for IO. Source: Baeldung CS.*</figcaption>
</figure>

In the case of parallelism, the benefit is more direct. If I have 10 tasks, instead of executing them one at a time, I can execute N tasks, where N is the number of cores in my CPU. 😣 Tiring, but cool, right? Hold on, there's more...

## IO-intensive vs CPU-intensive

When dealing with concurrency and parallelism, it is always important to understand whether our task is *IO-intensive* or *CPU-intensive*. An *IO-intensive* operation will normally be limited by the system's input/output speed. For example: consuming data from a microservice, saving a file to disk, communicating with the database. A *CPU-intensive* operation will be limited by the CPU's computation and processing speed. Examples: performing mathematical calculations, transforming data during a service call, the database processing a read query.
#### And what do *CPU-intensive* and *IO-intensive* have to do with concurrency and parallelism?

If your application is heavily *IO-intensive*, it will benefit a lot from concurrency. That way, every time your application is using input/output devices, the CPU can switch context and go do something else.

<figure>
  <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6lgrq5gm7cktfbrk2zjt.png" alt="IO bound"/>
  <figcaption>*IO-intensive task. Source: leimao.github.io*</figcaption>
</figure>

If your application is *CPU-intensive*, concurrency will not help you much. Why? Well, your computation needs the CPU to finish. If there is a context switch, the computation stops until the context comes back to your application. However, you benefit a lot from parallelism, because each computation can run without interruption on its own core. The more cores, the more executions we can do at the same time.

<figure>
  <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/241en8elo5a505canvn8.png" alt="CPU bound"/>
  <figcaption>*CPU-intensive task. Source: leimao.github.io*</figcaption>
</figure>

## What now?

If you made it this far, congratulations 😉! There wasn't much code in this article because whether code is concurrent and/or parallel depends a lot on where it is being executed. The concepts above are pretty cool and very important for the next lessons. For a long time I didn't understand what "IO-intensive" activities were 🤓. Knowing these fundamentals helps you, for example, get an idea of the number of threads you want to use and/or whether you should use a `parallelStream()` or a `thread pool`. But those are topics for the next chapter! If you liked it, give it a thumbs up and send a question for us to answer soon. Happy coding! 💻

p.s. When I was about to publish this text, I realized that the master fidelissauro has a SUPER complete version on this subject. I put the link in the references.
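To make the CPU-intensive case concrete, here is a small Java sketch using `parallelStream()`-style parallelism over a `LongStream`; the prime-counting workload and the limit are illustrative choices, not a benchmark.

```java
import java.util.stream.LongStream;

class PrimeCount {
    // A purely CPU-intensive task: no IO, just arithmetic.
    static boolean isPrime(long n) {
        if (n < 2) return false;
        for (long d = 2; d * d <= n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    // Sequential version: one core does all the work.
    static long countSequential(long limit) {
        return LongStream.rangeClosed(2, limit)
                .filter(PrimeCount::isPrime)
                .count();
    }

    // Parallel version: the range is split across the common ForkJoinPool,
    // so each core runs its chunk without yielding. This is the case where
    // parallelism (not mere concurrency) pays off.
    static long countParallel(long limit) {
        return LongStream.rangeClosed(2, limit)
                .parallel()
                .filter(PrimeCount::isPrime)
                .count();
    }
}
```

Both versions return the same answer; for large limits on a multi-core machine, the parallel one should finish faster, while for an IO-intensive task it would mostly keep extra cores idle.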
## References

* https://leimao.github.io/blog/Python-Concurrency-High-Level/
* https://www.baeldung.com/cs/concurrency-vs-parallelism#:~:text=Simultaneously%20executing%20processes%20and%20threads,terms%20for%20the%20parallelism%20taxonomy.
* https://medium.com/@peterlee2068/concurrency-and-parallelism-in-java-f625bc9b0ca4
* https://fidelissauro.dev/concorrencia-paralelismo/
hugaomarques
1,890,649
Heroes of DDD: Is a "good" domain model the Holy Grail?
Cover: https://www.youtube.com/watch?v=DzWUt2GSZ6g 🗿 Model - what is it? A model is not a...
27,739
2024-06-16T23:08:05
https://dev.to/mateusznowak/heroes-of-domain-driven-design-looking-for-the-holy-grail-and-the-good-model-42a2
domaindrivendesign, ddd, model, boundedcontext
<figcaption>Cover: https://www.youtube.com/watch?v=DzWUt2GSZ6g</figcaption>

## 🗿 Model - what is it?

A model is not a copy of the real world, but a representation made for a specific purpose: to solve a particular problem and answer specific questions. As programmers, we constantly create abstractions in order to understand and implement the project domain and requirements. However, you use various models (or abstractions) every day, perhaps without even realizing it. When you drive a car, the lines forming the road on the navigation system are not the actual street you are driving on, but they answer the question: "How do I get to my destination as quickly as possible?"

### 🪑 It depends on the point of view

So, what makes a "good model"? It depends on who is asking. When I ordered a custom kitchen, I received two models: one technical drawing (on the left) and another in 3D (on the right). If I want to see how the colors match and imagine whether the arrangement of the appliances will be ergonomic — the 3D model is suitable. However, for the person who has to cut and assemble the furniture, it will be useless because it does not provide the exact measurements. This is an example of two different representations of the same reality — both useful, but in different contexts.

![Kitchen model](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gg6nkinzkulje4fmb4cj.png)

<figcaption>The model is applicable only in a defined context and useful for a certain purpose.</figcaption>

#### 🗡️ Heroes III — Who attacks first?

**Do we already know what a "good model" is? We can still repeat the standard consultant's answer like a mantra: "it depends." But it's important to know — what does it depend on? It depends on what the question is.** You don't have to look far for an accurate example, as the following screenshot from Heroes III provides it, where we can see two representations of the same "reality": the battle between armies of creatures.
Of course, the word "reality" is not entirely appropriate in the context of a game, but Domain-Driven Design methods can be applied here as well. Nowadays, it is natural — various businesses (e.g., job listing portals, online stores, banks, etc.) are so closely tied to software that it would be misleading to say that as programmers, we only model something "real" that exists outside of computers. ![Heroes3 Battle Queue](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rdgxct0d71cac17v5b8i.png) <figcaption>A given model is specialized to answer specific questions.</figcaption> </br> The presented screen shows two different models of the same battle situation, but answering different questions. ##### ⬢ Model #1: Battlefield Composed of hexagons (similar to squares in chess) on which creatures move. A glance at this model can answer questions such as: - What is the Angel's movement range? The currently dimmed hexagons indicate where the highlighted Angel can fly or attack a creature standing on them — in this case, only the Dragon. - How many creatures has the player lost? In the bottom-right corner, you can see that one group of Angels has already fallen. - Where is the best spot to attack with the Dragon (as it can attack 2 adjacent creatures)? ##### ⏹ Model #2: Battle Queue Visible at the bottom, depicting the upcoming moves of creatures and the battle rounds. ![Heroes3 Queue Only](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i13cqywmxy3bgxgwqw8r.png) Here we have a very specialized model, serving practically to answer one question: - In what order will the creatures move? Thanks to this representation, we have a clear answer, which we could extract from the previous model, but with much more effort. It would require us to analyze the stats of each creature (their speed), which still wouldn't be enough because factors like artifacts owned by the hero or cast spells also influence the move order. 
Additionally, a creature can wait each turn, placing it at the end of that round — without a visible queue, each player would have to remember this. The model is much simpler than the battlefield and contains less information (e.g., we don't need to know the creatures' attack strength, because it doesn't affect the move order), but it serves its purpose better. In Event Sourcing terminology, one could say that the Battle Queue model is a projection of events such as `CreatureMoved`, `CreatureAttacked`, `CreatureWaited`, `SpellCasted`, etc.

##### 🤖 Experience -> Automation

In the original version of the game, only the battlefield view was available (without the initiative bar). Subsequent expansions and mods added this feature based on player experiences. This is an example of automating a process that players previously had to perform in their heads, which cost them time and took enjoyment away from the game. Maybe in your company there are Excel sheets that waste a lot of time and money until someone (like you) automates them? If you have the opportunity, it's worth spending a day outside the IT department listening to the complaints of the people whose work your software is supposed to simplify. You might find HotSpots faster than by sticking notes during EventStorming!

#### 🕳️ Have we found the Holy Grail?

Is it possible to illustrate all contexts with a single model? This is often what tutorials or documentation for a given framework show you (after all, such a place cannot summarize all the books on best practices). Unfortunately, even developers with many years of experience still try to do this in applications with complex business processes. And it works, until you eventually start getting lost in the jungle of if-statements. They then excuse themselves with a misunderstood "pragmatism" and say, "we'll fix it later" — but you know that "later" never comes.
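Returning to the Battle Queue for a moment: in code, the projection mentioned above is just a fold of events into a read model. The sketch below is a minimal illustration — the event names, their shapes, and the ordering rules are simplified assumptions, not actual game code:

```javascript
// A minimal sketch of the Battle Queue as a projection (fold) over battle events.
// Event names and fields are simplified assumptions for illustration only.
const initialQueue = [];

function project(queue, event) {
  switch (event.type) {
    case "BattleStarted":
      // Creatures enter the queue ordered by speed, fastest first.
      return [...event.creatures]
        .sort((a, b) => b.speed - a.speed)
        .map(c => c.name);
    case "CreatureMoved":
    case "CreatureAttacked":
      // The acting creature leaves the queue for this round.
      return queue.filter(name => name !== event.creature);
    case "CreatureWaited":
      // A waiting creature is pushed to the end of the current round.
      return [...queue.filter(name => name !== event.creature), event.creature];
    default:
      return queue; // events like SpellCasted need not affect the queue at all
  }
}

const events = [
  { type: "BattleStarted", creatures: [
    { name: "Angel", speed: 12 },
    { name: "Dragon", speed: 11 },
    { name: "Pikeman", speed: 4 },
  ] },
  { type: "CreatureWaited", creature: "Angel" },
];

const queue = events.reduce(project, initialQueue);
console.log(queue); // ["Dragon", "Pikeman", "Angel"]
```

Note how the projection keeps only what its one question needs: names and order, nothing about attack strength.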
**That's why you should invest time now to expand your toolkit and choose the right solutions for a specific class of problem. And be sure to apply them in practice — only then will you grasp their full potential!**

![Heroes3 Holy Grail](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m0gbbhlml58cytj3g60y.png)
</br>

The methods I will present are not a Holy Grail that is certain to save every project. The most important thing is to be aware of alternative approaches to those you usually use, and to be able to choose the right one in a given context — solving a specific problem with the most appropriate available tool.

> If you don't know that a power drill exists, you'll screw everything in with a screwdriver until someone tells you about its existence.

### 🗄️ Categorizing models

**In your bedroom dresser, the most important thing is to know what you will find in each drawer. Once you know that, you can apply the appropriate strategy to each one: you neatly fold your trousers, but you likely toss your socks in loosely.**

Anyone who has ever tried to create one huge "one-size-fits-all" model consisting of hundreds of tables (or a dresser with one giant drawer) has sooner or later failed — as have the programmers who came later to maintain that software. Let's focus on dividing into logical business subdomains rather than technical layers or microservices. This way, each of these problems can be solved in a simpler (and probably cheaper) manner, and developers can avoid cognitive overload.

![Drawer_BoundedContexts](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m7llijpc1ys14czrviaf.png)
<figcaption>At least you know where to search.</figcaption>
</br>

A model can't exist without boundaries (just like drawers set boundaries in a dresser), because then it would merely mirror the real world. All models are wrong, but some are useful in a specific context (like the kitchen designs mentioned earlier).
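The drawer analogy maps directly onto code: the same creature can be represented by two deliberately different shapes, one per context. The field names below are hypothetical, chosen only to contrast the two Heroes III models discussed earlier:

```javascript
// Two representations of the same Angel, each limited to what its context needs.
// Field names are illustrative assumptions, not actual game data structures.

// Battlefield context: position and combat stats matter.
const battlefieldAngel = {
  name: "Angel",
  hex: { q: 3, r: 5 },   // position on the hexagonal grid
  attack: 20,
  defense: 20,
  movementRange: 12,
};

// Battle Queue context: only identity and turn order matter.
const queueAngel = {
  name: "Angel",
  positionInQueue: 0,
};

// Neither object is "the" Angel; each answers only its own questions.
console.log(Object.keys(battlefieldAngel).length > Object.keys(queueAngel).length); // true
```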
**To find the right model for a certain context, we need to look at the problem from different perspectives.**

---

If you want to actively participate in modeling the solution, or simply don't want to miss the upcoming parts, sign up for my mailing list [HERE](https://subscribepage.io/mateusznowak).
mateusznowak
1,889,718
Design patterns: Part Two - A brief explanation of structural pattern
The picture was taken from boredpanda. Hello everyone! Starting my second post with one...
0
2024-06-16T23:06:30
https://dev.to/andres_paladines/design-patterns-part-two-a-brief-explanation-of-structural-pattern-4e95
architecture, designpatterns, javascript, swift
The picture was taken from [boredpanda](https://www.boredpanda.com/digital-art-cats-brutalist-architecture-cats-of-brutalism/).

### Hello everyone!

Starting my second post with one of the most basic but never unnecessary topics: design patterns. The [previous post](https://dev.to/andres_paladines/design-patterns-part-one-a-brief-explanation-of-creational-pattern-5507) covered `Creational Patterns`, and this one will focus on `Structural Patterns`. Let's list them again:

```
.
├── Creational Patterns (previous post)
│   ├── Singleton
│   ├── Factory Method
│   ├── Builder
│   ├── Abstract Factory
│   ├── Prototype
├── Structural Patterns (current post)
│   ├── Adapter
│   ├── Decorator
│   ├── Facade
│   ├── Composite
│   ├── Proxy
├── Behavioral Patterns (Working on it)
│   ├── Observer
│   ├── Strategy
│   ├── Command
│   ├── Chain of Responsibility
│   ├── Mediator
│   ├── State
│   ├── Template Method
.
```

# Structural Patterns

This post focuses on `Structural Patterns` and how to build them in Swift and JavaScript. So, let's start!

## What do we mean by "Structural Pattern"?

Structural patterns describe how classes and objects are composed to form larger structures. They help ensure that if one part of a system changes, the entire structure doesn't need to change, and they facilitate design by identifying simple ways to realize relationships among entities.

---

## Adapter

It allows incompatible interfaces to work together by acting as a bridge.

**Use case:** Adapting a UIKit view inside SwiftUI.
#### Swift (Adapting UIKit to SwiftUI):

```swift
// A plain UIKit view wrapping a UILabel
class UILabelView: UIView {
    private let label: UILabel

    init(text: String) {
        self.label = UILabel()
        super.init(frame: .zero)
        self.label.text = text
        self.label.textAlignment = .center
        self.addSubview(label)
        label.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            label.centerXAnchor.constraint(equalTo: self.centerXAnchor),
            label.centerYAnchor.constraint(equalTo: self.centerYAnchor)
        ])
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented.")
    }
}

// The adapter: UIViewRepresentable bridges the UIKit view into SwiftUI
struct LabelView: UIViewRepresentable {
    let text: String

    func makeUIView(context: Context) -> UILabelView {
        UILabelView(text: text)
    }

    func updateUIView(_ uiView: UILabelView, context: Context) {}
}
```

**Use case:** Adapting a Firebase logger to add extra features when it cannot be extended directly (open-closed principle: extend object functionality instead of editing it).

#### JavaScript:

```javascript
class FirebaseAnalytics {
  logEvent(name, parameters) {
    console.log(`Logging event to Firebase: ${name}, parameters: ${JSON.stringify(parameters)}`);
  }
}

class FirebaseAdapter {
  constructor() {
    this.firebaseAnalytics = new FirebaseAnalytics();
  }

  trackEvent(name, properties) {
    this.firebaseAnalytics.logEvent(name, properties);
  }

  trackUser(name, email) {
    this.trackEvent(name, { method: email });
  }
}

// Usage
const analyticsService = new FirebaseAdapter();
analyticsService.trackEvent("user_signup", { method: "email" });
analyticsService.trackUser("John", "john.doe@example.com");
```

---

## Decorator

Adds extra responsibilities to an object dynamically without altering its core functionality.

**Use case:** Adding different logging levels (info, debug, error) dynamically.
#### Swift: ```swift protocol Logger { func log(message: String) } class BasicLogger: Logger { func log(message: String) { print(message) } } class LoggerDecorator: Logger { private let wrapped: Logger init(_ wrapped: Logger) { self.wrapped = wrapped } func log(message: String) { wrapped.log(message: message) } } class InfoLogger: LoggerDecorator { override func log(message: String) { super.log(message: "INFO: \(message)") } } class ErrorLogger: LoggerDecorator { override func log(message: String) { super.log(message: "ERROR: \(message)") } } // Usage let logger: Logger = InfoLogger(ErrorLogger(BasicLogger())) let loggerInfo: Logger = InfoLogger(BasicLogger()) logger.log(message: "An error occurred") //ERROR: INFO: An error occurred loggerInfo.log(message: "An info message was sent") //INFO: An info message was sent ``` #### JavaScript: ```javascript class BasicLogger { log(message) { console.log(message); } } class LoggerDecorator { constructor(wrapped) { this.wrapped = wrapped; } log(message) { this.wrapped.log(message); } } class InfoLogger extends LoggerDecorator { log(message) { super.log(`INFO: ${message}`); } } class ErrorLogger extends LoggerDecorator { log(message) { super.log(`ERROR: ${message}`); } } // Usage const logger = new ErrorLogger(new InfoLogger(new BasicLogger())); const loggerInfo = new InfoLogger(new BasicLogger()); logger.log("An error occurred"); loggerInfo.log("This is an info message"); ``` --- ## Facade With this, we create a simple interface that covers a complex object or objects, making it/them easier to use. **Use case:** Providing a simplified interface to a complex set of APIs. 
#### Swift:

```swift
class ComplexSystemA {
    func operationA1() { print("Operation A1") }
    func operationA2() { print("Operation A2") }
}

class ComplexSystemB {
    func operationB1() { print("Operation B1") }
    func operationB2() { print("Operation B2") }
}

class Facade {
    private let systemA = ComplexSystemA()
    private let systemB = ComplexSystemB()

    func operationOne() {
        systemA.operationA1()
        systemB.operationB1()
    }

    func operationTwo() {
        systemA.operationA2()
        systemB.operationB2()
    }
}

// Usage
let facade = Facade()
facade.operationOne()
```

#### JavaScript:

```javascript
class ComplexSystemA {
  operationA1() { console.log("Operation A1"); }
  operationA2() { console.log("Operation A2"); }
}

class ComplexSystemB {
  operationB1() { console.log("Operation B1"); }
  operationB2() { console.log("Operation B2"); }
}

class Facade {
  constructor() {
    this.systemA = new ComplexSystemA();
    this.systemB = new ComplexSystemB();
  }

  simplifiedOperationOne() {
    this.systemA.operationA1();
    this.systemB.operationB1();
  }
}

// Usage
const facade = new Facade();
facade.simplifiedOperationOne();
```

---

## Composite

This pattern allows us to compose a tree of objects to represent part-whole hierarchies.

**Use case:** Implementing a File System Structure: In a file system, files and directories can be treated uniformly. Both can be added, removed, or displayed. Directories can contain files or other directories.

#### Swift:

```swift
//The protocol will define a common interface for leaf and composite objects.
protocol FileSystemComponent {
    var name: String { get }
    func display(indent: String)
}

//File represents the leaf objects
class File: FileSystemComponent {
    var name: String

    init(name: String) {
        self.name = name
    }

    func display(indent: String) {
        print("\(indent)- \(name)")
    }
}

//Directory represents the composite objects
class Directory: FileSystemComponent {
    var name: String
    //`components` holds further leaf or composite objects
    private var components = [FileSystemComponent]()

    init(name: String) {
        self.name = name
    }

    func add(component: FileSystemComponent) {
        components.append(component)
    }

    func display(indent: String) {
        print("\(indent)+ \(name)")
        for component in components {
            component.display(indent: indent + "  ")
        }
    }
}

//Finally, the usage of these objects:
let file1 = File(name: "File1.txt")
let file2 = File(name: "File2.txt")
let file3 = File(name: "File3.txt")

let directory1 = Directory(name: "Directory1")
directory1.add(component: file1)

let directory2 = Directory(name: "Directory2")
directory2.add(component: file2)
directory2.add(component: file3)

let rootDirectory = Directory(name: "RootDirectory")
rootDirectory.add(component: directory1)
rootDirectory.add(component: directory2)

rootDirectory.display(indent: "|")

/*
The result will be the following:
|+ RootDirectory
|  + Directory1
|    - File1.txt
|  + Directory2
|    - File2.txt
|    - File3.txt
*/
```

#### JavaScript:

```javascript
class FileSystemComponent {
  constructor(name) {
    this.name = name;
  }

  display(indent) {
    throw new Error("This method must be overridden!");
  }
}

class File extends FileSystemComponent {
  constructor(name) {
    super(name);
  }

  display(indent) {
    console.log(`${indent}- ${this.name}`);
  }
}

class Directory extends FileSystemComponent {
  constructor(name) {
    super(name);
    this.components = [];
  }

  add(component) {
    this.components.push(component);
  }

  display(indent) {
    console.log(`${indent}+ ${this.name}`);
    this.components.forEach(component => {
      component.display(indent + "  ");
    });
  }
}

const file1 = new File("File1.txt");
const file2 = new File("File2.txt");
const file3 = new File("File3.txt");

const directory1 = new Directory("Directory1");
directory1.add(file1);

const directory2 = new Directory("Directory2");
directory2.add(file2);
directory2.add(file3);

const rootDirectory = new Directory("RootDirectory");
rootDirectory.add(directory1);
rootDirectory.add(directory2);

rootDirectory.display("|");

/*
The result will be the following:
|+ RootDirectory
|  + Directory1
|    - File1.txt
|  + Directory2
|    - File2.txt
|    - File3.txt
*/
```

---

## Proxy

This pattern provides controlled access to an object by creating a proxy object that performs actions such as access control, lazy initialization, logging, or caching.

**Use case:** An image is downloaded from a URL, cached on the device, and displayed in a SwiftUI view.

#### Swift:

```swift
protocol Image {
    func display() -> UIImage?
}

class RealImage: Image {
    private var filename: String
    private var url: URL
    private var image: UIImage?

    init(filename: String, url: URL) {
        self.filename = filename
        self.url = url
        self.image = loadImageFromDisk(filename: filename)
        if self.image == nil {
            self.image = loadImageFromURL(url: url)
        }
    }

    private func loadImageFromDisk(filename: String) -> UIImage? {
        let path = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let filePath = path.appendingPathComponent(filename)
        return UIImage(contentsOfFile: filePath.path)
    }

    private func loadImageFromURL(url: URL) -> UIImage? {
        guard let data = try? Data(contentsOf: url), let image = UIImage(data: data) else {
            return nil
        }
        saveImageToDisk(image: image, filename: filename)
        return image
    }

    private func saveImageToDisk(image: UIImage, filename: String) {
        let path = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let filePath = path.appendingPathComponent(filename)
        if let data = image.jpegData(compressionQuality: 1.0) {
            try? data.write(to: filePath)
        }
    }

    func display() -> UIImage?
{
        return image
    }
}

class ProxyImage: Image {
    private var realImage: RealImage?
    private var filename: String
    private var url: URL

    init(filename: String, url: URL) {
        self.filename = filename
        self.url = url
    }

    func display() -> UIImage? {
        if realImage == nil {
            realImage = RealImage(filename: filename, url: url)
        }
        return realImage?.display()
    }
}

//Presenting ProxyImage in a SwiftUI view
struct ContentView: View {
    @State private var image: UIImage? = nil
    private let proxyImage = ProxyImage(filename: "kitty.jpg", url: URL(string: "https://www.boredpanda.com/blog/wp-content/uploads/2021/06/CK9sRAEMEo1-png__605.jpg")!)

    var body: some View {
        VStack {
            if let image = image {
                Image(uiImage: image)
                    .resizable()
                    .aspectRatio(contentMode: .fit)
                    .frame(width: 300, height: 300)
            } else {
                Text("Loading image...")
                    .onAppear {
                        self.image = self.proxyImage.display()
                    }
            }
        }
    }
}

#Preview {
    ContentView()
}
```

#### Here we'll create an entire web page, to make it understandable and testable at once.

#### JavaScript:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Proxy Pattern Example</title>
    <style>
        img {
            max-width: 300px;
            height: auto;
        }
    </style>
</head>
<body>
    <div id="app">
        <p>Loading image...</p>
    </div>
    <script>
        class Image {
            display(callback) {
                throw new Error("This method must be overridden!");
            }
        }

        class RealImage extends Image {
            // callback is a constructor parameter so the image can be
            // delivered asynchronously once it has loaded
            constructor(filename, url, callback) {
                super();
                this.filename = filename;
                this.url = url;
                this.image = this.loadImageFromLocalStorage(filename);
                if (!this.image) {
                    this.loadImageFromURL(url, (img) => {
                        this.image = img;
                        this.saveImageToLocalStorage(filename, img.src);
                        callback(img);
                    });
                } else {
                    callback(this.image);
                }
            }

            loadImageFromLocalStorage(filename) {
                const imageData = localStorage.getItem(filename);
                if (imageData) {
                    const image = new window.Image();
                    image.src = imageData;
                    return image;
                }
                return null;
            }

            loadImageFromURL(url, callback) {
                const xhr = new XMLHttpRequest();
                xhr.open('GET', url, true);
                xhr.responseType = 'blob';
                xhr.onload = function () {
                    if (xhr.status === 200) {
                        const reader = new FileReader();
                        reader.onloadend = function () {
                            const image = new window.Image();
                            image.src = reader.result;
                            callback(image);
                        }
                        reader.readAsDataURL(xhr.response);
                    }
                };
                xhr.send();
            }

            saveImageToLocalStorage(filename, data) {
                localStorage.setItem(filename, data);
            }

            display(callback) {
                if (this.image) {
                    callback(this.image);
                }
            }
        }

        class ProxyImage extends Image {
            constructor(filename, url) {
                super();
                this.realImage = null;
                this.filename = filename;
                this.url = url;
            }

            display(callback) {
                if (this.realImage === null) {
                    this.realImage = new RealImage(this.filename, this.url, callback);
                } else {
                    this.realImage.display(callback);
                }
            }
        }

        document.addEventListener("DOMContentLoaded", () => {
            const proxyImage = new ProxyImage("example.jpg", "./gato2.jpg");
            proxyImage.display((imageElement) => {
                const app = document.getElementById("app");
                app.innerHTML = ""; // Clear the loading text
                app.appendChild(imageElement);
            });
        });
    </script>
</body>
</html>
```

These examples show how to implement and use each of the specified structural patterns in Swift and JavaScript, with use cases. The next post will focus on `Behavioral Patterns`.

See you next time!
andres_paladines
1,890,652
[Game of Purpose] Day 29
Today I was trying to recreate helicopter flying with my drone. I had many problems. First was that I...
27,434
2024-06-16T23:02:32
https://dev.to/humberd/game-of-purpose-day-29-315o
gamedev
Today I was trying to recreate helicopter flying with my drone. I had many problems. The first was that I wanted to separate the mesh from the flying functionality, but it turned out that `Child Actor` caused so many problems that I had to abandon the separation.

I also imported the drone model into Blender, and it turned out that I had to export it with swapped axes. When I first imported the drone model downloaded from the Internet, it was weirdly rotated, so I created a Blueprint for that mesh and fixed the rotation manually in Unreal. Since I could re-export the same mesh from Blender with fixed axes, the Blueprint workaround was no longer needed.
humberd
1,890,651
Vite vs. Create React App (CRA): Qual Escolher?
Desenvolvedores que começam um novo projeto em React muitas vezes enfrentam a decisão entre Vite e...
0
2024-06-16T23:02:16
https://dev.to/julianoalvescode/vite-vs-create-react-app-cra-qual-escolher-3f91
vite, react, javascript
Developers starting a new React project often face the decision between Vite and Create React App (CRA). Both have their merits, but which one is right for you? Let's explore!

## ⚡ Vite: The New Generation

Vite is a modern bundler that offers an ultra-fast development experience. Created by the author of Vue.js, Vite uses ESM (ECMAScript Modules) to provide a fast development workflow.

Advantages of Vite:

- **Fast Reloads**: Thanks to ultra-fast HMR (Hot Module Replacement), Vite reloads only the modules that changed, making development much faster.
- **Efficient Builds**: Uses Rollup under the hood for optimized production builds.
- **Simplicity:** Less configuration, more productivity.
- **Native ESM Support**: Leverages native ESM support in modern browsers for superior development performance.

## When to Use:

- Projects that demand development speed.
- Applications that benefit from a more optimized build.
- Modern development with ESM support.

## ⚙️ Create React App: The Classic

**Create React App (CRA)** is a widely used tool for setting up React projects without manually configuring Webpack and Babel. It is maintained by the React team and is ideal for beginners.

**Advantages of CRA:**

- Ease of Use: Minimal configuration. Ideal for React beginners.
- Standard: Maintained by the React team, ensuring compatibility.
- Extensive Documentation: Abundant resources for learning and support.

## When to Use:

- Beginners who are learning React.
- Projects where initial simplicity matters more than maximum development performance.
- Teams that need a reliable, well-documented setup.

I hope this post helps you choose the tool that best fits your needs. Let's code!
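For reference, here is how each tool scaffolds a React project, together with a minimal `vite.config.js` of the shape the official React template generates (commands as documented by each tool):

```javascript
// Scaffolding a new project:
//   npm create vite@latest my-app -- --template react   (Vite)
//   npx create-react-app my-app                         (CRA)

// vite.config.js — a minimal config for a React project with Vite
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'

export default defineConfig({
  plugins: [react()],
})
```

With CRA there is no equivalent config file to edit; the build setup stays hidden unless you `eject`.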
julianoalvescode
1,890,650
White Label Exchange Software: The Future of Trading Platforms
Introduction The concept of white label exchange software has become...
27,673
2024-06-16T23:00:28
https://dev.to/rapidinnovation/white-label-exchange-software-the-future-of-trading-platforms-2gpn
## Introduction The concept of white label exchange software has become increasingly significant in the financial and technology sectors. This software solution allows businesses to set up and run their own branded trading platforms for various assets, including cryptocurrencies, stocks, and commodities. By using white label software, companies can launch their exchange platforms without having to develop the technology from scratch, enabling a focus on branding and customer service instead. ## What is White Label Exchange Software? White Label Exchange Software is a ready-made product designed to facilitate the rapid deployment of a cryptocurrency exchange without the need to develop the technology from scratch. This type of software is typically developed by specialized companies and sold to other businesses, which can then brand and customize it as their own. ## How Does White Label Exchange Software Work? White label exchange software is a ready-made platform that allows businesses to launch their own cryptocurrency exchange without having to develop the software from scratch. This type of software is designed to be customizable, enabling companies to apply their own branding and make specific modifications to suit their needs. ## Types of White Label Exchange Software White label exchange software varies significantly in terms of functionality, customization, and the specific markets they cater to. Generally, these can be categorized into three main types: cryptocurrency exchanges, decentralized exchanges (DEX), and hybrid exchanges. ## Features of White Label Exchange Software White label exchange software offers a plethora of features that are designed to facilitate the rapid deployment and management of cryptocurrency exchanges. These platforms are typically developed by seasoned vendors and are customized and branded by various businesses to make them appear as their own. 
## Benefits of Using White Label Exchange Software Using white label exchange software comes with numerous benefits, making it an attractive option for businesses looking to enter the cryptocurrency market. One of the primary advantages is the speed of deployment. White label solutions are pre-built platforms that can be customized and branded as per the business requirements, significantly reducing the time to market compared to developing an exchange from scratch. ## Challenges in Implementing White Label Exchange Software Implementing white label exchange software comes with its set of challenges that businesses need to navigate. These challenges can range from integration issues with existing systems to meeting regulatory requirements in different jurisdictions. ## Real-World Examples of White Label Exchange Software White label exchange software has become a pivotal tool in the cryptocurrency market, allowing businesses to launch their own branded trading platforms quickly and efficiently. This technology is used by several well-known platforms in the industry. ## Future Trends in White Label Exchange Software The future of white label exchange software looks promising, with several trends poised to redefine the landscape. One of the most significant trends is the integration of emerging technologies such as blockchain, artificial intelligence (AI), and the Internet of Things (IoT). ## Why Choose Rapid Innovation for Implementation and Development In today's fast-paced technological landscape, choosing rapid innovation for implementation and development is essential for businesses to stay competitive and meet the evolving needs of consumers. Rapid innovation involves the quick ideation, testing, and deployment of technologies to address immediate market demands and opportunities. ## Conclusion White label exchanges are poised to play a pivotal role in the future of financial markets, particularly in the cryptocurrency sector. 
These platforms, which allow companies to launch their own branded exchange without having to develop the technology from scratch, are becoming increasingly popular due to their cost-effectiveness and efficiency.

Drive innovation with intelligent AI and secure blockchain technology! 🌟 Check out how we can help your business grow!

[Blockchain App Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)

[AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)

## URLs

* <https://www.rapidinnovation.io/post/white-label-exchange-software-features-constraints>

## Hashtags

#WhiteLabelSoftware #CryptoExchange #FintechInnovation #BlockchainTechnology #DigitalAssets
rapidinnovation
1,846,754
Heroes of DDD: Prologue & knowledge crunching
Cover photo source: Heroes of Might and Magic III, Ubisoft 🧠💪 In this series we...
27,739
2024-06-16T22:52:11
https://dev.to/mateusznowak/heroes-of-domain-driven-design-prologue-f37
eventmodeling, modularization, ddd, eventstorming
<figcaption>Cover photo source: Heroes of Might and Magic III, Ubisoft</figcaption>
</br>

## 🧠💪 In this series we will:

- Discover how modularization influences business development and opens up opportunities for new products.
- Translate sticky notes from EventStorming and Event Modeling directly into working code.
- Explore rapid, day-by-day enhancement of your business process modeling skills.
- Improve project quality using methods better than code review.
- Consider diverse stakeholder perspectives such as UX/UI, frontend, backend, and analytics when modeling processes by means of Event Modeling.
- Generate unit tests from the appropriate model notation with the help of ChatGPT.
- Keep code complexity aligned with the business process being modeled.
- Implement the Decider pattern to express business logic in a functional style.
- Familiarize yourself with various modeling practices that will inspire your further growth.
- Revolutionize your programming experience forever.

Could I describe all of this using examples like a cinema or a shopping cart? Sure! But why keep repeating examples from the same domains again and again (others have done it well in books and at conferences)...? It's time to enter the world of heroes, magic, elves, and other fantastic creatures.

![Event Modeling of the recruitment process in Heroes domain](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/11ih4rsf51ikv95mfs5k.png)
<figcaption> Event Modeling allows us to tell a story like frames in a movie and connect different layers of the system on a single diagram: user actions, interface designs, REST APIs, data storage, etc. This ensures that there are no gaps in the requirements and that everyone involved in the project is on the same page. Illustrated dependencies show where work can proceed in parallel. The result is a design that we translate directly into code, eliminating misunderstandings and time-consuming discussions during code reviews.
</figcaption>
</br>

Above, you can see a piece of Event Modeling of the autonomous recruitment model in Heroes III. But how did we get to this point!? And what does "model" mean, especially "autonomous"!? After all, no one recruits Angels at the very beginning of the game... Similarly, no one knows right away what a given business process looks like and what abstractions to use to express it. Exhausted Hero! It's time to leave the tavern and dive into the world of might (proven patterns and heuristics) and magic (also known as intuition) of modeling in the spirit of Domain-Driven Design. 🧙‍♂️

---

Recently, after a few years' break, I launched Heroes III and immediately began to analyze how I would program something similar, based on my experience from commercial projects (disclaimer: I don't develop games, but I automate business processes such as publishing job advertisements, working in a call center, in a bank, etc.).

![AstrologesProclaimed](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u4vm137bgybt1whtk2h4.png)
</br>

Flashbacks from my first commercial projects and the mistakes I made also returned, as I used to start implementation without a proper process of learning and planning. Although it is just a game (but the best I've ever played!), you can find in it many analogies to real business processes, which I will explore together with you (starting with this post) and translate into code.

## 👁 The curse of knowledge

> A hero's stats affect their troops (creatures) in combat, and the combat result affects the hero's army (because of lost creatures).
> You can hire a hero in the tavern, which can be built in the town. You recruit creatures in unit dwellings, which may or may not be in the town. The tavern works the same way. Creature availability renews every week, unless the astrologers proclaim a week of plague. Of course, you also need resources to buy them, which are collected on the map...
> Oh, and don't forget to develop your hero!

This is just a snippet of the requirements you would get if you asked a "domain expert" about the processes in the world of Heroes III. Imagine how confused my wife was when I wanted her to play with me for the first time and tried, as an "expert", to explain such complicated game rules. She gave up after a while... And that was just the beginning. It's very similar when the so-called business tries to explain its project requirements to us...

**That's why our task (and responsibility as Software Engineers) is to carry out the process of knowledge crunching, which means extracting domain knowledge from those who have it. We need to learn to ask the right questions at the right time to avoid being overwhelmed by the experts' knowledge.**

Fortunately, there are proven methods for this, developed by the Domain-Driven Design community. I will use some of them, and you can find more on [GitHub DDD Crew](https://github.com/ddd-crew). This post is the first in a series. In the following posts, we will level up in the Mage Guild and dive deeper into the mentioned techniques.

## ♞ Why Heroes III?

I want to show you my current state of knowledge, how I execute projects, and my way of thinking, as well as my engineering practices. I will dive into the specifics of the methods used in separate posts. Even though the domain is literally fantastical, for educational purposes it is much closer to real-world projects than the examples you often see at conferences, such as "cinema", or the "ecommerce" example repeated again and again in many books about DDD and Event Sourcing. Every cinema and shopping cart operates differently, so when we invent business requirements, we shape not only the model but also the reality. Here, the reality is already defined by the rules of the game, and we must discover and properly model it, just like in a real project.
Although, of course, many great books and presentations by extremely competent people have been created around those standard examples, and I learned a lot from them too. It's worth checking out the examples by Oskar Dudycz (a leading expert on Event Sourcing), where he models a shopping cart in many different ways: [.NET](https://github.com/oskardudycz/EventSourcing.NetCore), [NodeJS](https://github.com/oskardudycz/EventSourcing.NodeJS), [Java](https://github.com/oskardudycz/EventSourcing.JVM).

## 🏋️ Training the intuition

I invite you to join this campaign, where we will model the world of Heroes III together, using practices such as DDD and Event Modeling. Does such "fun" really make sense? The quotation below answers this perfectly:

> Often in the modeling process, an unexpected flash of intuition appears, and someone nearby may feel like you're pulling a rabbit out of a hat. This still-unexplained puzzle of intuition is most likely unconscious knowledge derived from experience. We must openly admit that intuition favors those who have had the chance to be exposed to various cases. So, the more cases you see, the better you model. ~ Jakub Pilimon & Sławomir Sobótka, translated from Polish (DomainDrivers.pl)

I want to build that intuition with you, so that when you encounter an analogous case in a real project, your brain immediately knows how to react. If you want to participate actively and not miss the upcoming parts, sign up for my mailing list [HERE](https://subscribepage.io/mateusznowak). Or, if you speak Polish, sign up [HERE](https://nakodach.pl/lista-mailingowa/).
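As a small teaser of what lies ahead, the Decider pattern mentioned in the series outline can be expressed as two pure functions: decide (command to events) and evolve (state plus event to new state). The recruitment command, event, and state names below are my own illustrative guesses, not the model the series will actually discover through Event Modeling.

```python
from dataclasses import dataclass, replace

# Hypothetical Heroes-recruitment types; names are illustrative only.

@dataclass(frozen=True)
class Dwelling:
    available: int  # creatures left to recruit this week

@dataclass(frozen=True)
class RecruitCreatures:  # command
    quantity: int

@dataclass(frozen=True)
class CreaturesRecruited:  # event
    quantity: int

def decide(command: RecruitCreatures, state: Dwelling) -> list:
    """Pure decision: emit events, or nothing if the command is invalid."""
    if 0 < command.quantity <= state.available:
        return [CreaturesRecruited(command.quantity)]
    return []  # not enough creatures available: recruit nothing

def evolve(state: Dwelling, event: CreaturesRecruited) -> Dwelling:
    """Pure evolution: fold an event into a new state."""
    return replace(state, available=state.available - event.quantity)

state = Dwelling(available=5)
for event in decide(RecruitCreatures(quantity=3), state):
    state = evolve(state, event)
print(state)  # Dwelling(available=2)
```

Because both functions are pure, tests reduce to plain given/when/then assertions on values, which is exactly what makes this style attractive for modeling business rules.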
*Author: mateusznowak*
---

# Short notes on the following Concepts

*by kennyig · 2024-06-16 · https://dev.to/kennyig/short-notes-on-the-following-concepts-5g5f · tags: cloud*
1. What is Virtualization?
2. Scalability
3. Agility
4. High Availability
5. Fault Tolerance
6. Global Reach
7. What is the difference between Elasticity and Scalability?

## 1. **What is virtualization?**

Virtualization is the practice of creating or spinning up virtualized versions of computing resources, such as operating systems, servers, storage devices, or networks, rather than binding those resources to specific physical hardware. Virtualization technology enables multiple operating systems and applications to run simultaneously and independently on a single physical machine: multiple virtual machines (VMs) share the same CPU, memory, storage, and other resources without conflicting with one another.

Virtualization lets you create virtual environments that pool resources and reduce IT infrastructure costs while increasing the accessibility, manageability, and efficiency of system resources, optimizing service delivery and performance. It allows organizations to deploy multiple applications and system snapshots and to use resources, storage, and data more efficiently and with less complexity, which improves scalability and speeds up data processing.

Some of the potential benefits of virtualization include:

- Better resource management
- Cost savings on hardware
- Increased flexibility
- Communication between, and isolation of, virtual machines
- Easy migration of applications
- Load balancing
- Less downtime thanks to saved VM snapshots

## 2. What is scalability in cloud computing?

In cloud computing, scalability is the capacity of cloud infrastructure to adjust its resources in response to fluctuating demand. You do not need to wait for complicated hardware upgrades or lengthy processes to expand or shrink resources such as processing power, storage, or memory on the fly.
Below is a summary of the main elements of cloud scalability:

- **Dynamic adjustment:** You can scale your computing power to match your workload by adjusting cloud resources up or down as needed.
- **Cost-effectiveness:** Because you only pay for the resources you use, you do not have to spend money on extra hardware that sits idle when demand is low.
- **Flexibility:** Cloud scalability makes it simple to adapt to unplanned growth, seasonal demands, or traffic spikes, ensuring that your applications can manage the strain.

Cloud scaling primarily comes in two flavors:

- **Vertical scaling (scaling up/down):** increasing or decreasing a single server instance's resources, such as RAM or CPU.
- **Horizontal scaling (scaling out/in):** dividing the workload across several machines by adding or removing complete server instances.

## 3. What is Agility in cloud computing?

In cloud computing, agility is the capacity of an enterprise to create, test, and deploy applications in a cloud environment rapidly. It is, in essence, the flexibility and nimbleness that come from using cloud resources. In a traditional on-premises environment, installing and managing IT infrastructure can be a slow and laborious process. Cloud computing removes these obstacles by providing:

- **On-demand resources:** Cloud providers offer an enormous pool of compute resources, such as memory, processing power, and storage, that can be provisioned and de-provisioned instantly. This removes the need for protracted hardware acquisition cycles.
- **Automated processes:** Many cloud platforms include tools for automating deployment, configuration management, and scaling, freeing IT staff to focus on higher-level work.
- **Elasticity:** The ability to quickly scale cloud resources up or down in response to changing demand.
This frees companies from worrying about infrastructure constraints when they test new apps or handle spikes in demand. Some advantages of cloud agility:

- **Faster time to market:** Companies gain a competitive edge by getting new ideas and services to customers more quickly.
- **Enhanced innovation:** The ease of development and testing in the cloud fosters a culture of experimentation and creativity.
- **Lower costs:** Companies do not pay for hardware that is not being used, because they only pay for the resources they consume.

## 4. Explain High Availability

In cloud computing, high availability (HA) describes a system's capacity to keep functioning and remain reachable for prolonged periods, even in the event of hardware or software failures. It is essential for mission-critical applications that cannot tolerate downtime, since any disruption could cause major financial loss or service interruption. How high availability works:

- **Redundancy:** HA systems use redundant resources: multiple servers, storage devices, and network connections. When a component fails, another takes over seamlessly, minimizing downtime.
- **Automated failover:** HA designs detect problems and switch to a backup resource automatically, with the least possible disturbance to ongoing operations.
- **Minimal downtime:** The aim of HA is nearly constant uptime, often expressed in "nines". For example, a system with four nines, or 99.99% uptime, would have only about 53 minutes of downtime per year (525,960 minutes × 0.0001 ≈ 53).

To ensure high availability, cloud providers offer a range of tools and services, such as:

- **Virtual machines (VMs):** Cloud platforms let you set up and launch VMs on several hosts, so an application can be swiftly moved to another VM if one fails.
- **Load balancing:** By distributing incoming traffic among several servers, load balancing keeps any one server from becoming overworked and possibly failing.
- **Disaster Recovery (DR):** This goes beyond hardware failures and involves replicating data and applications across geographically dispersed sites, guaranteeing that your applications can be promptly restored in the event of a widespread outage or natural disaster.

High availability is a critical consideration for businesses that rely on continuous operation and minimal downtime. By implementing HA strategies in the cloud, you can ensure your applications are always up and running, fostering a better customer experience and business continuity.

## 5. Fault Tolerance?

In cloud computing, fault tolerance is about designing a system that continues to function normally even when some components break or malfunction. Like a backup plan for your infrastructure, it guarantees that your apps keep working in the face of technical difficulties. How fault tolerance works in a cloud environment:

- **Continuous operation:** The fundamental idea is to keep services running even during software or hardware malfunctions. This is particularly crucial for vital applications, since downtime can be expensive.
- **Redundancy is crucial:** Redundancy in cloud fault tolerance means having backups for critical components: additional servers with mirrored data, extra storage devices, or redundant network connections.
- **Automatic failover:** Ideally, the system transitions to a backup resource automatically when a fault occurs, reducing downtime and guaranteeing a seamless experience for users.

Various methods are employed to accomplish fault tolerance in the cloud:

- **Replication:** making duplicates of your apps and data on different servers, so that everything keeps functioning even if one server dies, because another can take over.
- **Load balancing:** distributing workloads among several servers so that no single server becomes overworked and fails.
- **Self-healing mechanisms:** some cloud platforms include automated mechanisms that identify and resolve issues without human assistance, further strengthening the robustness of the system.

Fault tolerance has many advantages in cloud computing:

- **Reduced downtime:** guards against interruptions and service failures, guaranteeing that users can always access your apps.
- **Improved reliability:** reduces the likelihood of failures, enhancing the overall dependability of your cloud infrastructure.
- **Improved business continuity:** helps companies keep operating even in the face of unanticipated events, reducing potential losses.

By building fault-tolerance techniques into your cloud environment, you create a more durable and resilient system and ensure your users can access and rely on your applications.

## 6. Global Reach

In cloud computing, "global reach" refers to the ability of cloud providers to make their infrastructure and services available over a wide geographic area. Businesses benefit in several ways:

- **Market expansion:** Cloud computing lets companies enter new international markets effortlessly. Thanks to data centers positioned strategically across multiple regions, users can access data and apps with minimal latency, no matter where they are physically located.
- **Global scalability:** Cloud resources scale easily, adjusting to changing demand across many locations. Companies can respond to spikes in their user base or traffic in particular areas without building and maintaining physical infrastructure in each one.
- **Lower costs:** Companies avoid the large upfront investment and continuous upkeep of building and running multiple data centers.
The infrastructure is handled by cloud providers, who offer a pay-as-you-go model that scales to meet your demands.

- **Enhanced compliance:** Cloud providers operate data centers that meet various local data-privacy laws, making it easier for companies with worldwide operations to store and process data in compliance with local rules and regulations.

Major cloud providers exercise their worldwide reach in the following ways:

- **Distributed data centers:** Cloud giants such as Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS) have built global networks of data centers, guaranteeing that users can access resources with minimal latency wherever they are.
- **Content Delivery Networks (CDNs):** By distributing content among geographically separated servers, these networks let users worldwide load pages more quickly.
- **Regional availability zones:** Within each region, cloud providers frequently offer several availability zones: geographically separated data centers that provide redundancy and fault tolerance in the event of an outage.

For companies of all sizes, global reach is revolutionary. It gives them the ability to take on global competition, win new customers, and maximize the performance and cost-efficiency of their cloud deployments.

## 7. What is the difference between Elasticity and Scalability?

Elasticity and scalability in cloud computing both deal with fluctuations in demand, but they react differently and suit different situations. The main distinctions:

## Scalability

- **Focus:** Planned resource changes in response to projected increases or decreases in workload.
- **Action:** Usually manual or based on pre-established limits. You ask the cloud provider to release unused resources (scaling down) or request more resources (scaling up).
- **Timeline:** Depending on the complexity and the cloud provider's procedures, scaling may take some time to implement.
- **Perfect for:** Meeting growing demands for processing power or data storage, or a steady increase in the user base.
- **Example:** A business expects the holidays to bring a spike in online sales, so it proactively scales up its server capacity in advance to handle the extra traffic.

## Elasticity

- **Focus:** Resources adapt automatically to real-time variations in demand.
- **Action:** Automatic scaling according to preset criteria and triggers; the cloud system itself scales resources up or down as the workload changes.
- **Timeframe:** Elasticity takes effect in minutes or even seconds, far faster than planned scaling.
- **Perfect for:** Managing sporadic variations in resource needs, workload surges, and erratic traffic spikes.
- **Example:** A trending topic causes a rapid spike in visitors to a social networking app. Elasticity ramps up resources automatically to maintain performance, with no human intervention.

A comparison: scalability is like building with Lego. You design your structure and add or swap bricks (resources) as needed. Elasticity is like a rubber band: it adapts automatically to different levels of tension (demand) without being physically altered.

To sum up, cloud computing offers both scalability and elasticity as valuable capabilities. Scalability helps you prepare for future growth, while elasticity makes sure you can handle unforeseen spikes in demand. Ideally, you combine the two to maximize both performance and cost-efficiency in your cloud environment.
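The elastic behavior described above, automatic scaling driven by preset triggers, can be sketched as a toy simulation in Python. Nothing here is a real provider API: the class, thresholds, and metric are illustrative assumptions.

```python
# Toy autoscaler: scales a fleet out/in based on average CPU load.
# Thresholds and limits are illustrative, not any cloud provider's defaults.

class Autoscaler:
    def __init__(self, min_instances=1, max_instances=10,
                 scale_out_at=70.0, scale_in_at=30.0):
        self.instances = min_instances
        self.min_instances = min_instances
        self.max_instances = max_instances
        self.scale_out_at = scale_out_at  # avg CPU % that triggers scale-out
        self.scale_in_at = scale_in_at    # avg CPU % that triggers scale-in

    def observe(self, avg_cpu_percent):
        """React to one metric sample, like a cloud autoscaling policy would."""
        if avg_cpu_percent > self.scale_out_at and self.instances < self.max_instances:
            self.instances += 1   # elasticity: add capacity automatically
        elif avg_cpu_percent < self.scale_in_at and self.instances > self.min_instances:
            self.instances -= 1   # release idle capacity to save cost
        return self.instances

scaler = Autoscaler()
for cpu in [20, 85, 90, 92, 50, 10, 5]:   # a traffic spike, then a lull
    print(f"cpu={cpu:>3}% -> instances={scaler.observe(cpu)}")
```

During the simulated spike the fleet grows to four instances, then shrinks back as load falls, all without manual intervention: this is the elasticity side. The `min_instances`/`max_instances` bounds are the scalability side, the planned capacity you decide on in advance.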
*Author: kennyig*
---

# Hexagonal Architecture Explained: How to Transform Your Software Development

*by pmafra · 2024-06-16 · https://dev.to/pmafra/arquitetura-hexagonal-explicada-como-transformar-seu-desenvolvimento-de-software-43d9 · tags: architecture, systemdesign, beginners, learning*
<!-- TOC start (generated with https://github.com/derlin/bitdowntoc) -->
- [Introduction](#introduction)
- [Life cycle of a project](#life-cycle-of-a-project)
- [Architecture vs Software Design](#architecture-vs-software-design)
- [Hexagonal Architecture](#hexagonal-architecture)
- [Hexagonal vs Clean vs Onion](#hexagonal-vs-clean-vs-onion)
<!-- TOC end -->

# Introduction

Broadly speaking, every developer's job is to "solve problems using software as a tool". Simple as it sounds, this sentence has two sides: business complexity, the rules for solving the problem; and technical complexity, the technology wired into the business to make it work. Often, however, what we see are applications so tightly intertwined that it is no longer possible to tell what is business complexity and what is technical complexity.

Consider the following scenario: Company X wants to track user events in its app, such as 'account_created', 'home_viewed', and 'payment_finished', and send them to a CRM platform, Company Y, to build detailed profiles and improve its marketing strategies. To do so, it uses Company Y's SDK and its 'send_event' method, called at every point in the system where an event occurs. Two months later, as the cost of using Company Y's system rises, Company X decides to migrate to Company Z. Would replacing Y's SDK with Z's be a simple process in this case? We can easily conclude that it would not. Y's code ended up coupled throughout the system, "polluting" the business complexity. Replacing it would be an arduous, drawn-out process. Hexagonal architecture, also called "Ports and Adapters", exists precisely to prevent this from happening.
Along these lines, here are some important traits of a well-designed architecture:

- Sustainable growth: the software must be scalable and simple to improve, so that over time it pays back its own investment;
- The software should be shaped by us, not by the framework: we must mitigate the coupling between technical and business complexity;
- It should be replaceable without much effort.

In other words, we can think of architecture as the future of a piece of software.

# Life cycle of a project

Even so, whether from a lack of architectural knowledge or because of complex market changes, we often see software grow in a very unsustainable way. Below is the life cycle of many projects today.

**<u>Phase 1:</u>** The project begins. The database, sign-up flows, authentication, server, views, controllers, etc. are implemented.

**<u>Phase 2:</u>** Business rules are added to sign-up, an ACL is implemented for authorization, logging is added, APIs are created and consumed.

**<u>Phase 3:</u>** Traffic grows sharply; the application is scaled vertically (hardware upgrades); we start working with caching, consuming third-party APIs, and implementing partners' business rules.

**<u>Phase 4:</u>** More traffic, more upgrades, more database queries that start creating bottlenecks; a v2 of the API is implemented.

**<u>Phase 5:</u>** The software is scaled horizontally, with uploads moved across multiple machines (we start working with S3); a lot of refactoring is needed to pull off the horizontal scaling; autoscaling and a CI/CD pipeline are created.

**<u>Phase 6:</u>** We start using GraphQL for some APIs; bugs appear because of the format changes; logs spread across multiple machines become a problem and we start thinking about centralizing them; an integration with a CRM platform is built.
**<u>Phase 7:</u>** CRM data starts becoming inconsistent; we start thinking about containerizing the application, which means rethinking CI/CD, memory, and logs, and shedding legacy code.

**<u>Phase 8:</u>** We move to microservices, initially sharing databases; tracing problems appear, the system slows down, costs soar.

**<u>Phase 9:</u>** Plain containers no longer suffice and we move to Kubernetes; CI/CD has to be rebuilt yet again; resilience problems appear; we start using messaging without much experience and begin losing messages; software consultancies are hired to help.

**<u>Phase 10:</u>** Use your imagination!

---

Looking at these phases, we can see that software does not go bad overnight, but one day at a time. The system starts to feel like legacy code, full of technical debt that nobody wants to touch. Some reflections on this life cycle:

- Lack of vision: we cannot naively assume that software will always stay simple and small. Sooner or later it grows;
- Poorly defined boundaries: there was no clear separation between what is business and what is framework, which created coupling and made it hard to swap or add components;
- No preparation for scale: at the beginning everything lives on the same server (cache, logs, uploads). But when we start to scale, first one way and then the other, we have to redo a great deal. We must therefore think about and prepare for scale from day one;
- No limits on constant optimization pressure: the software will always need new features on tight deadlines, and if we do not define an acceptable limit, we will create and carry ever more technical debt;
- Inability to handle abrupt change: e.g., the company switched its payment gateway or CRM platform. Can we easily swap out the implementation?
We can see that, with a vision of the future from day one, we can create anti-corruption layers so that changes neither affect the business nor make our work harder.

# Architecture vs Software Design

It is also worth highlighting the difference between these two concepts, which are often treated as the same thing. Below is a definition I find both concise and complete, summarized and translated from the following article: https://eximia.co/quais-sao-as-diferencas-entre-arquitetura-e-design-de-software/

---

_"Activities related to software architecture are always design activities. However, not all design activities are about architecture. The primary goal of software architecture is to ensure that the system meets its quality attributes, high-level constraints, and business goals. Any design decision that has no relation to this goal is not architectural. Design decisions about a component that are not 'visible' outside of it generally are not either."_

---

In other words, we can think of architecture as what lets us deliver software with quality and efficiency and guarantees that the business will work: something more abstract. Design is how we make it work: something more concrete. Many design decisions have nothing to do with architecture, yet they can still influence architectural decisions.
An example:

- "All systems must generate logs and store them in a central place": **architectural view**
- "To make that possible, we will write logs to stdout instead of to a file": **design view**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ktrnq77voix6v6vxsxxd.png)

# Hexagonal Architecture

According to Alistair Cockburn, Hexagonal Architecture can be defined as follows:

---

_"Allow an application to equally be driven by users, programs, automated test or batch scripts, and to be developed and tested in isolation from its eventual run-time devices and databases."_

---

**Note:** the original article is highly recommended: https://alistair.cockburn.us/hexagonal-architecture/

In short, it means building software while isolating how people and systems access it, and how it accesses other systems. The following image illustrates the concept:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/62i0w88qlwlmms0afg3p.png)

Looking at the image, the heart of our application, the business, sits at the center of the hexagon. To communicate with external actors (both clients and servers, abstracted behind their respective adapters) we create interfaces (ports), which prevent the application (business) from being directly wired to an adapter, and vice versa. Note that the hexagon is just an arbitrary shape chosen for the picture; it could be any other shape. The point is that we do not want to mix business complexity with technical complexity.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cvwl2b8ecmcxefvk4jyu.png)

Along these lines, we can componentize the integration with all sorts of external actors: logs, cache, database, message queues, clients (REST, gRPC, GraphQL), making them easy to add and replace through "Ports" and "Adapters".
More concretely, we need one of the SOLID principles, the **Dependency Inversion Principle**, to achieve this separation:

- High-level modules (business) must not depend on low-level modules (external actors); both must depend on abstractions.
- Abstractions must not depend on details, and vice versa.

What we do here is inversion of control. For example, if I instantiate a class B directly inside a class A, I create coupling. Instead, I receive it through the constructor. And to avoid coupling the two classes, A's constructor should receive an interface, which represents B indirectly.

These concepts completely change the way we develop quality software.

# Hexagonal vs Clean vs Onion

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a1zbc5pcxjamho2ropva.png)

Finally, we can tell apart these often-confused concepts. Unlike Clean and Onion Architecture, hexagonal architecture does not prescribe how code should be organized or which layers and folder structures to use. It only defines the principle of separating the heart of the application from external actors. We do not necessarily have to adopt a well-known structure (Clean/Onion), as long as we follow some organizing rule that keeps to the essence of the concepts discussed in this article. Of course, adopting structures that have already been studied and are widely known certainly makes the process much easier.
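As an illustration of the Ports and Adapters idea (and the Dependency Inversion Principle behind it), here is a minimal Python sketch echoing the CRM scenario from the introduction. All names here (EventPublisher, CompanyYAdapter, CheckoutService) are hypothetical; this is not a real SDK.

```python
from abc import ABC, abstractmethod

# Port: the abstraction the business depends on.
class EventPublisher(ABC):
    @abstractmethod
    def send_event(self, name: str, payload: dict) -> None: ...

# Adapters: one per external CRM vendor. Only these know vendor details.
class CompanyYAdapter(EventPublisher):
    def send_event(self, name, payload):
        print(f"[Company Y SDK] {name}: {payload}")

class CompanyZAdapter(EventPublisher):
    def send_event(self, name, payload):
        print(f"[Company Z SDK] {name}: {payload}")

# Business code: receives the port via the constructor (dependency inversion),
# so migrating from Company Y to Company Z touches a single line at wiring time.
class CheckoutService:
    def __init__(self, events: EventPublisher):
        self.events = events

    def finish_payment(self, order_id: str):
        # ...business rules live here, free of vendor code...
        self.events.send_event("payment_finished", {"order_id": order_id})

service = CheckoutService(CompanyZAdapter())  # migration = changing this line
service.finish_payment("order-42")
```

The same port also makes the business logic trivially testable: in tests you pass a fake adapter that records events instead of calling a real SDK, which is exactly the isolation Cockburn's definition asks for.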
*Author: pmafra*
---

# Overcoming Imposter Syndrome as a Beginner in Tech

*by udoka033 · 2024-06-16 · https://dev.to/udoka033/overcoming-imposter-syndrome-as-a-beginner-in-tech-20f0 · tags: webdev, javascript, beginners, tutorial*
In this realm of technology, where innovation and expertise thrive, many of us have faced a nagging sense of inadequacy at some point in our journey. It's a phenomenon known as imposter syndrome.

## What Imposter Syndrome Means

A persistent feeling that, despite our accomplishments, we don't truly belong, that our successes are merely flukes, and that any recognition we receive is undeserved. Have you ever found yourself questioning your abilities, doubting whether you're truly cut out for the tech world? Have you downplayed your achievements, attributing them to luck rather than skill and hard work? If so, you're not alone. In this article, I will share my personal experience, insights, and strategies for overcoming these self-limiting beliefs as a beginner in tech.

## My Story

Transitioning from a non-tech background to [front-end development](https://dev.to/udoka033/a-beginners-guide-to-front-end-development-4hjj) was no easy feat. It was filled with fear, self-doubt, feelings of inadequacy, and intimidation. There were so many concepts and technologies to learn, and varying roadmaps to follow. I was confused until a friend gave me an Udemy course by [Colt Steele](https://www.udemy.com/user/coltsteele/), which built a strong foundation for my front-end coding. I not only gained valuable skills but also built beginner projects.

I was doing okay, except that I didn't feel it internally. My projects were getting interesting and my skills were sharpening, but I wasn't acknowledging them, and that was imposter syndrome. Every line of code seemed like a potential mistake, and every completed project felt like a stroke of luck. Every tiny bug made me doubt myself, so I had to look for a way out. There were major issues I started having when I developed this "fraud" syndrome, as the [National Institutes of Health](https://www.ncbi.nlm.nih.gov/books/NBK585058/) refers to it; these issues are explained in the next section.
## Consequences of Imposter Syndrome

- **Low confidence:** A common challenge for tech-industry newbies. The continuous self-doubt that comes with imposter syndrome lowers self-confidence and creates reluctance to take on new challenges.
- **Fear of failure:** People experiencing imposter syndrome, in tech and every other field, often have an excessive fear of making mistakes, driven by perfectionism: they want everything to be perfect. It is fine to strive for excellence, but the focus should be on the overall outcome rather than the "how".
- **Procrastination:** A product of fear, which leads to delayed tasks and a general loss of momentum.
- **Stress and anxiety:** Continuous worry about being exposed as a fraud raises stress levels.
- **Career stagnation:** Avoiding opportunities for advancement out of a feeling of being undeserving or unqualified.
- **Burnout:** Overworking to prove oneself can lead to burnout and a breakdown in health.
- **Isolation:** Withdrawing in the belief that everyone else is more competent.

## How to Overcome Imposter Syndrome

- **Learn to celebrate your little efforts**

This might seem insignificant, but try to appreciate your successes, no matter how small. Appreciate the concept you learned, the tutorial video you finished, the project you led, and the post you shared, because they play an integral role in building [confidence](https://dictionary.cambridge.org/dictionary/english/confidence) in your tech skills. Understand that every success brings you a step closer to the future you envisage.

- **Practice Learned Optimism**

People who believe that bad events have specific, rather than pervasive, causes are more optimistic. I gained this insight from Martin Seligman's [Learned Optimism book](https://positivepsychology.com/learned-optimism/). Practicing optimism means looking at your mistakes and imperfections in a new light. This optimistic style of explaining good events internally increases self-esteem.
This doesn't mean you shouldn't take responsibility for your actions, but you should acknowledge your strengths and work towards developing them.
- **Have a Growth Mindset** A growth mindset is the belief that skills and intelligence can be enhanced through persistent effort and determination. This is another crucial coping strategy for imposter syndrome. Instead of viewing challenges as threats to your competence, see them as opportunities to learn and improve. When faced with complex tasks, approach them with curiosity and a willingness to learn rather than a fear of failure.
- **Collaborate and Seek Feedback** Instead of viewing team members as competition, collaborate with them and improve your skill set. Seek feedback and act on it, using it to improve your skills rather than viewing it as a reflection of your ability. Feedback is invaluable for personal and professional growth, and it helps build mental resilience.
- **Show Your Work** "You don't have to be a genius. Be an amateur. Amateurs are not afraid to make mistakes in public. Even as a professional, the best way to flourish is to retain an amateur spirit and embrace uncertainty." These words from Austin Kleon in his book [Show Your Work](https://pdfcoffee.com/show-your-work-pdf-free.html) stood out to me the most. It is important to share that mini-project, that small milestone reached, that video you edited. Learn to share the process, not only the products. This way you can overcome perfectionism and improve your self-belief as a tech professional.

## One Final Note

Always strive for success despite imposter syndrome. Overcoming self-doubt in tech might be difficult, but with constant practice the process gets easier. Celebrate your small achievements, practice optimism, develop a growth mindset, collaborate and seek feedback, and finally, show your work. Imposter syndrome is different for everyone, but this is my personal experience, research, and insight. 
Share how you overcame imposter syndrome in your career, work, or school in the comment section. Please like, comment, and follow for more web development and tech-related topics.
*By udoka033*

---

# Creation of Resource Group in MS Azure

*Published 2024-06-16 at https://dev.to/kennyig/creation-of-resource-group-in-ms-azure-5f9i. Tags: cloud, beginners, computerscience, azure*
## Steps to Create a Resource Group on Azure

1. Log in to MS Azure.
2. Search for "resource group".

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eivmwu1dvp729uk1ay4j.png)

Then select **Resource groups**.

3. Click on **+ Create**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/882jt5u4fda5p2u103uf.png)

The following create box comes up.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/69h96mhz2derdy23i335.png)

4. Type the name of your resource group, select your region, then click **Review + create**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ak4hqy1fsraw0b8vw3lp.png)

5. Click **Create** to complete the creation of your resource group.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hoi5rr14tqfei6trergf.png)

6. Click **Go to resource group** to launch your resource group.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vhmqrbm97i5moim5qpqz.png)
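If you prefer the command line, the same resource group can be created with the Azure CLI. This is a minimal sketch, assuming the CLI is installed and you are signed in; the group name and region below are placeholders, not values from the steps above:

```shell
# Sign in to Azure (opens a browser window for authentication)
az login

# Create a resource group named "my-resource-group" in the East US region
az group create --name my-resource-group --location eastus

# Confirm the resource group exists
az group show --name my-resource-group --output table
```

The portal and the CLI produce the same result; the CLI is handy for scripting and repeatable setups.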
*By kennyig*

---

# Career Transition at 31: How I Became a Front-End Developer

*Published 2024-06-16 at https://artcodes.dev/blog/career-transition*
### My Journey with Programming

It all began in the second half of 2013, when I was 24 years old. At the time, I was working in marketing and had started my digital marketing agency just a few months earlier. I was working for small businesses in Rio de Janeiro, providing services like social media management, sponsored link campaign management, and occasionally some design and creative services with the help of partners and colleagues.

During this time, I noticed that many of my clients lacked a strong digital marketing strategy. Most of them wanted to invest in advertising and social media, but they didn't have a website to consolidate their products and services and converge traffic from these channels. It was then, identifying an opportunity to offer website development services, that I began to consider learning programming.

I had always been the go-to person in my family for solving technology-related problems, often "hired" by my parents and sisters to explain why a program wouldn't open or why the computer was running slowly. But until then, despite my strong interest in IT, I had the perception that the field was very scientific and required extensive knowledge of mathematics. Since I hadn't excelled in these areas in my school life, it never occurred to me to work in IT. However, a little research was enough to dispel that misconception. Yes, there was a technical aspect to programming, but most developers weren't doing complex calculations all the time. So my preconceptions faded away and turned into great curiosity and interest.

I started studying JavaScript, HTML, and CSS on YouTube and on the [Codecademy](https://codecademy.com) platform. Learning on my own at that time was quite challenging. Occasionally I had questions, and I didn't know how to find answers, nor did I have anybody close who worked in the field who could help me. That's when I decided to enroll in an on-site course at a specialized school. 
I took a web design course where I learned the basics of HTML, CSS, JavaScript, and PHP. After that, I was able to make my first commercial proposal applying programming work. At that time, Facebook allowed business pages to have tabs (I don't know if they still do), which were essentially HTML documents within fan pages. My first job was to create a digital menu in one of these tabs for one of my clients. The work was simple and not portfolio-worthy, but I vividly remember the satisfying feeling of seeing the finished code and the menu published. That feeling sparked a desire to keep doing more and more website and programming work.

The agency grew, and I had new business partners and more clients, which was great for my marketing career and my company. Over time, we closed some website projects. During that time, we used WordPress extensively, which was a quick and effective way to solve most of our problems and those of our clients. WordPress is a very good content management platform for creating simple institutional websites. It has a large marketplace, and its code is written in PHP, JavaScript, HTML, and CSS. Since there are many templates available, my role was mostly making some CSS and JavaScript adjustments, and I rarely needed to do any PHP logic.

Throughout this time, my desire to work with code remained strong. I continued studying online while juggling my work at the agency. Some excellent resources I found were [Brad Traversy's YouTube channel](https://www.youtube.com/c/TraversyMedia), [Curso em Vídeo](https://www.youtube.com/c/CursoemV%C3%ADdeo), the [Treehouse](https://teamtreehouse.com/) platform, and some instructors on Udemy, where I collected dozens of courses. I consumed these sources as a hobby and only when there was a need for a project at the agency.

---

### The Turning Point

The turning point came in mid-2020, when I decided to make a career transition. 
At the time, I was 30 years old and no longer working at my agency but at a company in Florianópolis: RD Station, a MarTech startup. Working there was an extremely positive experience that changed my career. I worked as a Customer Success Manager (CSM), providing consultative support to agencies that used their software.

Working in a SaaS (Software as a Service) business made me excited about programming again. Even though it was in a different area, I was able to absorb some of the dynamics of teams, squads, and agile methodologies. This atmosphere made me want to switch to the product and engineering area. I even considered working as a Product Manager for a while, as it made more sense given my experience and background. But in the end, what I really wanted was to become a developer. My motivation was primarily the desire and pleasure of building things: scripts, pages, and products. The same feeling I had when I completed my first project.

Additionally, there was a shortage of developers in the market. The high demand was evident, and I saw more and more schools and boot camps offering courses in the field. I also thought a lot about how I was "getting older" and that this might mean fewer opportunities in the market (more on that later), making a career switch more challenging. On top of that, although I liked RD a lot, I no longer found as much pleasure in working with digital marketing and the agency market. All of this created a sense of urgency that led me to make the decision and, most importantly, create a plan to change my career.

### The Plan

I needed a strategy. I studied a bit about people who had pivoted their careers, read posts like the one I'm writing now, watched some videos, and outlined some actions to take:

**1 - Communicate my decision and ask for help, including at my job**

One of the most important things I did was verbalize my career change decision and ask for help from people who were in the same process as me or were already developers. 
This helped me in two ways:

- It created a sense of commitment and a clear goal. By making my goal public, my team, friends, and family supported me on this journey. This step was fundamental to my success (more on that later).
- At 31 years old, I didn't have as much time to make mistakes anymore, so asking for help was a way to find shortcuts more quickly.

I know it sounds obvious, but sometimes we tend to try to solve everything on our own, whether out of shyness or some form of pride and ego. Well... spoiler alert: as a developer, you will need to ask for help many times, especially early in your career (just like in any other career, I would say).

During this stage, many friends and people gave me study tips, advice on what to do to stand out, and what mattered to become a developer. I am very grateful to all of them 🙏🏻 With this information, I was able to outline the next steps more clearly.

**2 - Go back to studying. But what to study? What really makes a developer good?**

One of the things I applied here was the Pareto principle, the famous 80/20 rule. I wanted to find the 20% of topics I needed to study that would give me an 80% chance of doing well in a job interview: finding the levers, as Renato Martins, my coordinator at RD at the time, taught me (thanks, Rê ❤️).

Most of the content I found and advice I received pointed to (no surprises here):

- Programming logic
- Data structures
- Algorithms
- HTML, CSS, and JavaScript

In the past, I had already studied a bit of the last topic, so I focused on the first three. That's when I rediscovered an excellent learning resource, Harvard's [CS50x course](https://cs50.harvard.edu/x/2022/). I had heard about it in 2015 and watched a couple of lectures, but didn't continue with the studies at the time. Now it would be essential. CS50 is an introductory course in Harvard's Computer Science curriculum. The course is free and of high quality! 
It is very didactic, with lectures lasting about 1 hour and 45 minutes and practical exercises that are done online in an integrated development environment (IDE). The code you write in these exercises is validated by a series of automated tests, but what's cool is that the answers are not provided anywhere (although you can find them if you really want to on their Discord channels and forums... but that's not the idea, right?). The professor discusses the exercises but does not give the answers (at least at the time I took it). The course covers basic topics in logic, data structures, and algorithms, using the C and Python languages. I focused on this course for about three months, which helped me a lot. Learning how to fend for yourself and, most importantly, knowing how to research your difficulties is a fundamental part of being a developer.

In addition to CS50, I also studied a bit of CSS and Ruby. Here's why:

Ruby is the main language used in RD's systems. At the time, I wanted to transition from the CS field to Engineering, so I also watched some videos from [OneBitCode](https://www.youtube.com/c/OneBitCode) to learn Ruby and Ruby on Rails (a framework). I recommend getting to know the language. I found it really cool at the time, but I ended up focusing on JavaScript because it had a much larger job market.

CSS was a significant area of improvement for me. Despite having studied and worked on some projects where I wrote a lot of CSS, I didn't feel confident. That's when I discovered [Kevin Powell's YouTube channel](https://www.youtube.com/kepowob) and [Josh Comeau's blog](https://www.joshwcomeau.com/). These guys are awesome and still help me to this day.

**3 - Projects and Portfolio**

After a few weeks of studying, I needed to put into practice everything I was learning. 
Looking for project ideas, I found some platforms that helped me practice:

- [Frontend Mentor](https://www.frontendmentor.io/) - This site is excellent for creating projects for your portfolio. The idea is to replicate the layouts they provide and then publish your solution on the platform. There, you can receive feedback on your code and also see solutions from other developers.
- [Dev Projects](https://www.codementor.io/projects) - Similar to Frontend Mentor, but not limited to front-end; you'll find ideas for back-end and mobile projects too.
- [CodeWars](https://www.codewars.com/) and [HackerRank](https://www.hackerrank.com/) - Two platforms for practicing logic and algorithm exercises. Very cool; I still use them for studying and practice.

**4 - Job Applications**

After a few months, toward the end of 2020, I started looking for opportunities within RD. I wanted to move from the CS field to Engineering or Support to work as a developer. Both my team and my managers were supportive (lots of love 😍). I even participated in an internal job application process, but I wasn't successful. Shortly thereafter, I started applying for positions outside of RD.

In early 2021, my sister Gabriela referred me to a job at one of the companies in the group where she works, [Puzzl Software House](https://www.puzzl.com.br/). The hiring process was quite straightforward: I sent my resume and GitHub with some projects, then had a few interviews. And then, in April 2021, after approximately a year of executing my plan, I officially began my career as a junior front-end developer 🎊🎊🎊

### Conclusion

It's important to emphasize that the entire transition process was neither quick nor easy. I felt anxious and distressed for several months, and I had to learn to cope with these emotions and try to alleviate them. As I mentioned earlier, communicating my career change was crucial. 
Thanks to this action, I was able to find a position at a wonderful company, because my sister knew what I was looking for. If you're in a career transition phase, build your support network and seek out the levers that will propel you forward.

My only expectation with this post is to inspire other people who are trying to become developers, whether they're just starting their careers or transitioning. However, the plan I outlined and described in this article considers only my scenario. Please don't take the steps listed here as a definitive guide, although they may serve you. Think about your situation, your strengths and weaknesses, and create your own plan.

### One more thing...

#### English

In the IT field, English is indeed important. It's not just another item to add to your resume; you will really need to read and write in English daily. Fortunately, I had the tremendous privilege of studying English early in my life, and that gave me a significant advantage in my studies, as many high-quality resources were not yet available in Portuguese.

#### About Age

In practice, age doesn't matter. It doesn't matter if you're starting at 30, 40, or 50. If you have the desire, go for it! Take it easy and with soul (as a good friend once said). In my journey at Puzzl, I was able to use my age and other professional experiences as leverage. Being more experienced and having other professional backgrounds allowed me to advance in my career plan, and I even got promoted (but that's a story for another article).

#### Beware of False Promises and Advertising

The IT and Product fields are in high demand. I see many courses and influencers promising to take you from zero to a high-paying job in a short time. For the vast majority, this won't happen.

#### Leave Your Ego Behind, Especially If You're Coming from Another Field

The fact is that you will work with younger people who are much better than you, especially in technical terms. 
Instead of competing, try to add value, complementing their technical skills with your knowledge from other areas and your life experience.

#### [Access here](https://artcodes.dev/dev-library) a list of my learning sources with the content I used in this career transition process and still use today. ;)

I hope I have helped! Thank you for reading this far.

Bye,
*By artcodes*

---

# SteamVR Overlay with Unity: Draw Camera Output

*Published 2024-06-16 at https://dev.to/kurohuku/part-7-draw-camera-output-d34. Tags: unity3d, steamvr, openvr, vr*
## Prepare camera output

Before displaying the current time on the watch overlay, try displaying the camera output of a simple 3D scene.

### Prepare scene

In the hierarchy window, create the following game objects.

- Camera
- 3D Object > Cube
- Light > Directional Light

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svfvgq2tekdqbv6f8ik8.png)

Arrange the objects so that the camera captures the cube. A rough arrangement is fine.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o68gx2fx4qc7yv7y7ucj.png)

### Rotate the cube

We want animated camera output, so create a script to rotate the cube. Create **Rotate.cs** inside the Scripts folder, then copy the following code.

```cs
using UnityEngine;

public class Rotate : MonoBehaviour
{
    void Update()
    {
        transform.Rotate(0, 0.5f, 0);
    }
}
```

Attach `Rotate.cs` to the Cube object in the scene.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5k8zw57wo9o9hdlg07l.png)

Run the program and check that the cube is rotating.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dl68i5sm0o18wm9126vy.gif)

The test scene setup is now done.

### Add camera reference

Add a camera member variable to `WatchOverlay.cs`.

```diff
public class WatchOverlay : MonoBehaviour
{
+   public Camera camera;
    private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid;
    [Range(0, 0.5f)] public float size;
    [Range(-0.2f, 0.2f)] public float x;
    [Range(-0.2f, 0.2f)] public float y;
    [Range(-0.2f, 0.2f)] public float z;
    [Range(0, 360)] public int rotationX;
    [Range(0, 360)] public int rotationY;
    [Range(0, 360)] public int rotationZ;
    ...
```

Set the scene `camera` to the camera variable of the `WatchOverlay` object in the Unity inspector.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x5eqmjq9qguzk5tl2od4.png)

### Remove image file code

To use the camera output, remove the image file code. 
```diff
private void Start()
{
    InitOpenVR();
    overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

-   var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
-   SetOverlayFromFile(overlayHandle, filePath);

    SetOverlaySize(overlayHandle, size);
    ShowOverlay(overlayHandle);
}
```

### Create Render Texture

Create a new `RenderTextures` folder inside `Assets` in the project window. Right-click the `RenderTextures` **folder > Create > RenderTexture** to create a new render texture asset, and name it `WatchRenderTexture`. We will write the scene camera output to this render texture asset.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hzdwjzel3azgtrlt5t3s.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7y99e6w1c54jcvz3zhvs.png)

### Write camera output to the RenderTexture asset

On the hierarchy, click the `Camera` object and open the inspector. Drag `WatchRenderTexture` from the project window to the camera's **Target Texture** property in the inspector.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g8h7zc8k881byeamdn1g.png)

The camera output is now written into the `WatchRenderTexture` asset.

### RenderTexture setting

On the project window, click the `WatchRenderTexture` asset and open the inspector. Change **Size** to **512 x 512**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9jv8ibnjxox1bxcyfb37.png)

---

### Optional: When you want to create the RenderTexture by script

If you want to create the RenderTexture in code without creating a RenderTexture asset, you can write it as below.

```diff
public class WatchOverlay : MonoBehaviour
{
    public Camera camera;
+   private RenderTexture renderTexture;
    private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid;
    ...

    private void Start()
    {
        InitOpenVR();
        overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

+       // Set camera.targetTexture to write the camera output to the render texture.
+       renderTexture = new RenderTexture(512, 512, 16, RenderTextureFormat.ARGBFloat);
+       camera.targetTexture = renderTexture;

        SetOverlaySize(overlayHandle, size);
        ShowOverlay(overlayHandle);
    }
    ...
```

---

### Add Render Texture variable

Add a member variable to store the RenderTexture.

```diff
public class WatchOverlay : MonoBehaviour
{
    public Camera camera;
+   public RenderTexture renderTexture;
    private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid;
    ...
```

Open the `WatchOverlay` game object inspector, and set the `WatchRenderTexture` asset to the `renderTexture` variable.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b1j6c5kzvr94josrfkk7.png)

### Wait for texture creation

Wait for the render texture to be ready to use.

```diff
private void Update()
{
    var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
    if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
    {
        var position = new Vector3(x, y, z);
        var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
        SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
    }

+   if (!renderTexture.IsCreated())
+   {
+       return;
+   }
+
+   // add render code here
}
```

### Get a native texture pointer

The texture data passed to OpenVR must belong to the underlying graphics API, such as DirectX or OpenGL, which works beneath the Unity API layer. We can get a native texture pointer to access the low-level API texture data with Unity's [GetNativeTexturePtr()](https://docs.unity3d.com/ScriptReference/Texture.GetNativeTexturePtr.html). Call `GetNativeTexturePtr()` in `Update()` to get the texture reference that will be passed to the OpenVR API. 
```diff
private void Update()
{
    var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
    if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
    {
        var position = new Vector3(x, y, z);
        var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
        SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
    }

    if (!renderTexture.IsCreated())
    {
        return;
    }

+   var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
}
```

---

### Optional: Sync with the rendering thread

The [GetNativeTexturePtr() documentation](https://docs.unity3d.com/ScriptReference/Texture.GetNativeTexturePtr.html) recommends against calling it every frame for performance reasons. However, we call it in `Update()` in this tutorial, because rendering the texture to the overlay without syncing to the rendering thread sometimes crashes the program.

---

### Create an OpenVR texture

Create a texture variable of the OpenVR texture data type `Texture_t`.

```diff
private void Update()
{
    var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
    if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
    {
        var position = new Vector3(x, y, z);
        var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
        SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
    }

    if (!renderTexture.IsCreated())
    {
        return;
    }

    var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
+   var texture = new Texture_t
+   {
+       eColorSpace = EColorSpace.Auto,
+       eType = ETextureType.DirectX,
+       handle = nativeTexturePtr
+   };
}
```

Set the graphics API type in `eType`. Since DirectX is the default graphics API in [this tutorial environment](https://medium.com/@kurohuku/bb99a334d897), we will assume the graphics API is DirectX from now on. Set the native texture pointer to `handle`. The camera output is written into this texture. 
---

### Optional: If you want to support APIs other than DirectX

You can detect the current graphics API with `SystemInfo.graphicsDeviceType`.

```cs
switch (SystemInfo.graphicsDeviceType)
{
    case GraphicsDeviceType.Direct3D11:
        texture.eType = ETextureType.DirectX;
        break;
    case GraphicsDeviceType.Direct3D12:
        texture.eType = ETextureType.DirectX12;
        break;
    case GraphicsDeviceType.OpenGLES2:
    case GraphicsDeviceType.OpenGLES3:
    case GraphicsDeviceType.OpenGLCore:
        texture.eType = ETextureType.OpenGL;
        break;
    case GraphicsDeviceType.Vulkan:
        texture.eType = ETextureType.Vulkan;
        break;
}
```

You can check the program's behavior with APIs other than Direct3D11 by unchecking **Project Settings > Player > Other Settings > Auto Graphics API for Windows**, and adding the desired graphics API to **Graphics APIs for Windows**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ct2b2t0mgc9i3uzxlwki.png)

https://docs.unity3d.com/Manual/GraphicsAPIs.html

---

### Draw texture to the overlay

Draw the texture to the overlay with [SetOverlayTexture()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_SetOverlayTexture_System_UInt64_Valve_VR_Texture_t__) (read the [wiki](https://github.com/ValveSoftware/openvr/wiki/IVROverlay::SetOverlayTexture) for details). Pass the `Texture_t` texture we created before. 
```diff
private void Update()
{
    var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
    if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
    {
        var position = new Vector3(x, y, z);
        var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
        SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
    }

    if (!renderTexture.IsCreated())
    {
        return;
    }

    var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
    var texture = new Texture_t
    {
        eColorSpace = EColorSpace.Auto,
        eType = ETextureType.DirectX,
        handle = nativeTexturePtr
    };

+   var error = OpenVR.Overlay.SetOverlayTexture(overlayHandle, ref texture);
+   if (error != EVROverlayError.None)
+   {
+       throw new Exception("Failed to draw texture: " + error);
+   }
}
```

Run the program. Make sure the camera output is shown on the overlay.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fe6od0c2hjdyslcarvpa.gif)

### Flip vertical

The camera output is vertically flipped, so we have to flip it back. This happens because the V-axis direction of the texture UV coordinate system differs between Unity and DirectX: Unity's V-axis faces upwards while DirectX's faces downwards.

https://docs.unity3d.com/Manual/SL-PlatformDifferences.html

There are different ways to fix this issue. Here, we use OpenVR's [SetOverlayTextureBounds()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_SetOverlayTextureBounds_System_UInt64_Valve_VR_VRTextureBounds_t__) to vertically flip the V-axis. 
(read the [wiki](https://github.com/ValveSoftware/openvr/wiki/IVROverlay::SetOverlayTextureBounds) for details)

```diff
private void Start()
{
    InitOpenVR();
    overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

+   var bounds = new VRTextureBounds_t
+   {
+       uMin = 0,
+       uMax = 1,
+       vMin = 1,
+       vMax = 0
+   };
+   var error = OpenVR.Overlay.SetOverlayTextureBounds(overlayHandle, ref bounds);
+   if (error != EVROverlayError.None)
+   {
+       throw new Exception("Failed to flip texture: " + error);
+   }

    SetOverlaySize(overlayHandle, size);
    ShowOverlay(overlayHandle);
}
```

This flips the texture vertically. Run the program and check that the texture is flipped.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c5d8okjs15h6xrgzd48q.png)

---

### Optional: Support for APIs other than DirectX

In this tutorial, we assume the graphics API is DirectX, so we always flip the texture vertically. If you want to support other APIs, check `graphicsDeviceType` and handle each case accordingly (for example, don't flip the texture when `graphicsDeviceType` is OpenGL), similar to the earlier section **"Optional: If you want to support APIs other than DirectX"**.

---

## Create a canvas to display the current time

We have displayed the camera output on the overlay. Next, let's create the watch. Remove the Cube and Directional Light; we won't use them again.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z40gyltipma9qpzk7fpg.png)

Open the Camera inspector, and click Reset on the Transform component to **reset the position to (0, 0, 0)**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhufr7b9o4tcjbqkxfxl.png)

Create the objects below in the scene.

- **UI > Canvas**
- **UI > Text - TextMeshPro** inside the Canvas object

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q1l1gy2245ovd7xpc8pa.png)

When you create a TextMeshPro object for the first time, the dialog below appears; click the **Import TMP Essentials** button. 
Close the dialog after importing.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mj5h6nwgxpsyy3weg9s7.png)

Open the Canvas inspector, and change **Render Mode** to **Screen Space - Camera**. Then drag the camera object from the hierarchy to the **Render Camera** property.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/09ljxni2ejjy781qo9l1.png)

Open the Text (TMP) object inspector, and center the text vertically and horizontally with **Alignment**. Enter "00:00:00" as the text.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yglkgi403rzs8hjhhwd6.png)

Select the Camera object, and set **Clear Flags** to **Solid Color**. Click the **Background** color, and make sure **A (alpha) is 0**. This makes the camera background transparent so that only the current time is displayed.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4bo3ozn59qnasr3o6lm6.png)

Select the Canvas object, and set the **Plane Distance** to **10**. This makes editing easier in the Editor.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ix3jd19pwijfpqz5rent.png)

Select Text (TMP), click the anchor setting (the rectangle at the top left of the Rect Transform component), and select the blue crossed arrows at the bottom right (stretch in both directions).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t3nrfkdjg89zkwhvjb44.png)

Set Left, Top, Right, and Bottom to 0.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c297lus7lh0m51txjj5t.png)

Scroll the inspector, and set the **TextMeshPro - Text (UI)** component's **Font Size** to **70**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aj9slgkva6ey3gdivxr0.png)

Run the program. Check that the current time is displayed on the left wrist. Adjust the position and font size if it is off. 
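The watch label in the next step is built with C#'s zero-padded numeric format specifiers (`{value:00}`). Since this formatting is plain .NET and independent of Unity, it can be sanity-checked in a standalone program; `FormatTime` is a hypothetical helper name used only for this illustration:

```csharp
using System;

public class TimeFormatCheck
{
    // Zero-pad each time component to two digits, e.g. 9 -> "09"
    public static string FormatTime(int hour, int minute, int second)
    {
        return $"{hour:00}:{minute:00}:{second:00}";
    }

    public static void Main()
    {
        Console.WriteLine(FormatTime(9, 5, 3));    // prints "09:05:03"
        Console.WriteLine(FormatTime(23, 59, 59)); // prints "23:59:59"
    }
}
```

The `00` custom numeric format pads single-digit values with a leading zero, which keeps the watch display a fixed width.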
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/17kc6l9idwduinfmudtp.png)

## Activate the watch

Create a new script `Watch.cs` inside the `Scripts` folder and copy in the following code.

```cs
using UnityEngine;
using System;
using TMPro;

public class Watch : MonoBehaviour
{
    private TextMeshProUGUI label;

    void Start()
    {
        label = GetComponent<TextMeshProUGUI>();
    }

    void Update()
    {
        var hour = DateTime.Now.Hour;
        var minute = DateTime.Now.Minute;
        var second = DateTime.Now.Second;
        label.text = $"{hour:00}:{minute:00}:{second:00}";
    }
}
```

Add `Watch.cs` to the **Text (TMP)** object in the scene.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kxuuks6uqxoxvuchuf1w.png)

Run the program. It should display the current time.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uc6zajeuud37mp3rwt5k.png)

## Organize code

Let's organize `WatchOverlay.cs`.

### Flip overlay vertically

Move the flipping code into `FlipOverlayVertical()`.

```diff
private void Start()
{
    InitOpenVR();
    overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");
-   var bounds = new VRTextureBounds_t
-   {
-       uMin = 0,
-       uMax = 1,
-       vMin = 1,
-       vMax = 0
-   };
-   var error = OpenVR.Overlay.SetOverlayTextureBounds(overlayHandle, ref bounds);
-   if (error != EVROverlayError.None)
-   {
-       throw new Exception("Failed to flip texture: " + error);
-   }
+   FlipOverlayVertical(overlayHandle);
    SetOverlaySize(overlayHandle, size);
    ShowOverlay(overlayHandle);
}

...

+ private void FlipOverlayVertical(ulong handle)
+ {
+     var bounds = new VRTextureBounds_t
+     {
+         uMin = 0,
+         uMax = 1,
+         vMin = 1,
+         vMax = 0
+     };
+
+     var error = OpenVR.Overlay.SetOverlayTextureBounds(handle, ref bounds);
+     if (error != EVROverlayError.None)
+     {
+         throw new Exception("Failed to flip texture: " + error);
+     }
+ }
```

### Draw RenderTexture

Move the drawing code into `SetOverlayRenderTexture()`.
```diff
private void Update()
{
    var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
    if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
    {
        var position = new Vector3(x, y, z);
        var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
        SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
    }

-   var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
-   var texture = new Texture_t
-   {
-       eColorSpace = EColorSpace.Auto,
-       eType = ETextureType.DirectX,
-       handle = nativeTexturePtr
-   };
-   var error = OpenVR.Overlay.SetOverlayTexture(overlayHandle, ref texture);
-   if (error != EVROverlayError.None)
-   {
-       throw new Exception("Failed to draw texture: " + error);
-   }
+   SetOverlayRenderTexture(overlayHandle, renderTexture);
}

...

+ private void SetOverlayRenderTexture(ulong handle, RenderTexture renderTexture)
+ {
+     if (!renderTexture.IsCreated()) return;
+
+     var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
+     var texture = new Texture_t
+     {
+         eColorSpace = EColorSpace.Auto,
+         eType = ETextureType.DirectX,
+         handle = nativeTexturePtr
+     };
+     var error = OpenVR.Overlay.SetOverlayTexture(handle, ref texture);
+     if (error != EVROverlayError.None)
+     {
+         throw new Exception("Failed to draw texture: " + error);
+     }
+ }
```

## Final code

```cs
using UnityEngine;
using Valve.VR;
using System;

public class WatchOverlay : MonoBehaviour
{
    public Camera camera;
    public RenderTexture renderTexture;
    private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid;

    [Range(0, 0.5f)] public float size;
    [Range(-0.2f, 0.2f)] public float x;
    [Range(-0.2f, 0.2f)] public float y;
    [Range(-0.2f, 0.2f)] public float z;
    [Range(0, 360)] public int rotationX;
    [Range(0, 360)] public int rotationY;
    [Range(0, 360)] public int rotationZ;

    private void Start()
    {
        InitOpenVR();
        overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");
        FlipOverlayVertical(overlayHandle);
        SetOverlaySize(overlayHandle, size);
        ShowOverlay(overlayHandle);
    }

    private void Update()
    {
        var position = new Vector3(x, y, z);
        var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
        var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
        if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
        {
            SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
        }
        SetOverlayRenderTexture(overlayHandle, renderTexture);
    }

    private void OnApplicationQuit()
    {
        DestroyOverlay(overlayHandle);
    }

    private void OnDestroy()
    {
        ShutdownOpenVR();
    }

    private void InitOpenVR()
    {
        if (OpenVR.System != null) return;

        var error = EVRInitError.None;
        OpenVR.Init(ref error, EVRApplicationType.VRApplication_Overlay);
        if (error != EVRInitError.None)
        {
            throw new Exception("Failed to initialize OpenVR: " + error);
        }
    }

    private void ShutdownOpenVR()
    {
        if (OpenVR.System != null)
        {
            OpenVR.Shutdown();
        }
    }

    private ulong CreateOverlay(string key, string name)
    {
        var handle = OpenVR.k_ulOverlayHandleInvalid;
        var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to create overlay: " + error);
        }
        return handle;
    }

    private void DestroyOverlay(ulong handle)
    {
        if (handle != OpenVR.k_ulOverlayHandleInvalid)
        {
            var error = OpenVR.Overlay.DestroyOverlay(handle);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to dispose overlay: " + error);
            }
        }
    }

    private void SetOverlayFromFile(ulong handle, string path)
    {
        var error = OpenVR.Overlay.SetOverlayFromFile(handle, path);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to draw image file: " + error);
        }
    }

    private void ShowOverlay(ulong handle)
    {
        var error = OpenVR.Overlay.ShowOverlay(handle);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to show overlay: " + error);
        }
    }

    private void SetOverlaySize(ulong handle, float size)
    {
        var error =
OpenVR.Overlay.SetOverlayWidthInMeters(handle, size);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to set overlay size: " + error);
        }
    }

    private void SetOverlayTransformAbsolute(ulong handle, Vector3 position, Quaternion rotation)
    {
        var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation);
        var matrix = rigidTransform.ToHmdMatrix34();
        var error = OpenVR.Overlay.SetOverlayTransformAbsolute(handle, ETrackingUniverseOrigin.TrackingUniverseStanding, ref matrix);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to set overlay position: " + error);
        }
    }

    private void SetOverlayTransformRelative(ulong handle, uint deviceIndex, Vector3 position, Quaternion rotation)
    {
        var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation);
        var matrix = rigidTransform.ToHmdMatrix34();
        var error = OpenVR.Overlay.SetOverlayTransformTrackedDeviceRelative(handle, deviceIndex, ref matrix);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to set overlay position: " + error);
        }
    }

    private void FlipOverlayVertical(ulong handle)
    {
        var bounds = new VRTextureBounds_t
        {
            uMin = 0,
            uMax = 1,
            vMin = 1,
            vMax = 0
        };

        var error = OpenVR.Overlay.SetOverlayTextureBounds(handle, ref bounds);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to flip texture: " + error);
        }
    }

    private void SetOverlayRenderTexture(ulong handle, RenderTexture renderTexture)
    {
        if (!renderTexture.IsCreated()) return;

        var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
        var texture = new Texture_t
        {
            eColorSpace = EColorSpace.Auto,
            eType = ETextureType.DirectX,
            handle = nativeTexturePtr
        };
        var error = OpenVR.Overlay.SetOverlayTexture(handle, ref texture);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to draw texture: " + error);
        }
    }
}
```

Finally, we have displayed the current time on the left wrist. In the next part, we will build the dashboard settings screen to switch which hand displays the overlay.
*Author: kurohuku*
*Published 2024-06-16*
---
title: Enterprise-Level Authentication in a Containerized Environment for NextJS 13
published: true
tags: mysql,nextjs,docker,keycloak
canonical_url: https://ulasozdemir.com.tr/enterprise-level-authentication-in-a-containerized-environment-for-nextjs-13
---

![](https://cdn-images-1.medium.com/max/1024/1*6gADbtz6UDoy39QMH97ksw.png)

_Enterprise-Level Authentication in a Containerized Environment for NextJS 13_

### TL;DR

[https://github.com/ozdemirrulass/keycloak-nextjs-mysql-docker](https://github.com/ozdemirrulass/keycloak-nextjs-mysql-docker)

This article provides a step-by-step guide to Keycloak + NextJS 13 authentication and containerization. By the end of this article you will be able to:

1. Set up a Keycloak server backed by a MySQL database for authentication.
2. Integrate Keycloak with a Next.js 13 application for user authentication.
3. Implement authentication flows such as login and registration in your Next.js app.
4. Containerize your Keycloak and Next.js applications using Docker for easy deployment and scalability.
5. Understand best practices for managing authentication tokens and sessions in a containerized environment.

From start to finish we will use Docker Compose to build our application in a containerized environment. Because the requirements and configurations of production and development environments differ, a multi-environment setup is strongly advised; for educational purposes, however, we will work only on the development environment, and the resulting source code will be easy to extend into a multi-environment setup.

### Prerequisites

I did my best to keep this article as simple as possible, but to get the most out of it you should have some prior knowledge of Next.js, Docker, and authentication concepts.
Tools and software: Docker, TypeScript, NodeJS, Docker Compose, Keycloak

### Setting Up the Development Environment

Let's start by creating a new project directory named keycloak-nextjs-docker-tutorial and opening it in your favorite IDE. I'll be using VS Code.

![](https://cdn-images-1.medium.com/max/1024/0*jrHpFS0rdzkahKuK)

As mentioned earlier, we will use the Docker Compose tool to create our containers. For those unfamiliar with it: Compose is simply **a tool for defining and running multi-container applications**. To run containers, we must define their properties in a YAML file.

> By default, Docker Compose looks for a file named docker-compose.yml or docker-compose.yaml in the current directory. However, you can specify a different file using the -f or --file option when running Docker Compose commands. For example, if your Compose file is named my-compose.yml, you can use the following command:

```
docker-compose -f my-compose.yml up
```

We are building a development environment, so let's create a file named docker-compose.dev.yml to define our container properties, and a .env file to hold some environment values.

![](https://cdn-images-1.medium.com/max/556/0*qU2pRpRnxLmbEyd5)

A typical docker-compose.yml file contains the following definitions:

- Version: This field specifies the version of the Docker Compose file format being used. It's usually the first line of the file and defines the schema of the file.

```
version: '3.8'
```

- Services: This section defines the containers that make up your application. Each service must be identified by a unique name, and its configuration goes under that name.

```
services:
  keycloak:
    ...
  mysql:
    ...
  next-app:
    ...
```

> Be careful while working on YAML or YML files. YAML files use indentation to indicate the structure of the data.
The recommended indentation is two spaces per level, but any width works as long as it is consistent across the file.

- **Container Configuration**: Within each service definition, you can configure various aspects of the container, such as:

1. **Image**: Specifies the Docker image to use for the container.
2. **Build**: Specifies the path to the Dockerfile if the image needs to be built.
3. **Ports**: Maps ports from the container to the host machine.
4. **Volumes**: Mounts volumes from the host machine into the container.
5. **Environment Variables**: Sets environment variables for the container.
6. **Dependencies**: Defines dependencies on other services within the same docker-compose.yml file.
7. **Networks**: Configures networking options for the container.
8. **Command**: Overrides the default command specified in the Docker image.
9. **Healthcheck**: Configures a health check for the container.

Having said that, let's start configuring our containers.

### MySQL

Let's start with the database. We need a database for our Keycloak service to store user data. I'll be using MySQL for this tutorial, but after completing it I encourage you to try building the same structure with different database technologies.

```
version: '3.8'
services:
  mysql:
    container_name: mysql
    image: "mysql:${MYSQL_VERSION}"
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
    environment:
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - ./mysql_data:/var/lib/mysql
```

We've defined a MySQL container named mysql and a database named keycloak. The credentials for the keycloak database are username keycloak and password password, and the root user's password is also password.

> It is strongly advised to check [https://hub.docker.com/\_/mysql](https://hub.docker.com/_/mysql) and learn more about MySQL environment variables.
> I'd like to draw your attention to the usage of ${MYSQL\_VERSION} in the Docker Compose file. This is an example of variable substitution, or interpolation. The ${VARIABLE\_NAME} syntax references environment variables defined either in the shell environment or in an .env file. Docker Compose automatically detects the .env file in the same directory.

Open the .env file we created earlier in the /keycloak-nextjs-docker-tutorial directory and add the following variable.

```
MYSQL_VERSION=8.0
```

I'll be using [mysql:8.0](https://hub.docker.com/layers/library/mysql/8.0)

Another very important part of this service definition is:

```
volumes:
  - ./mysql_data:/var/lib/mysql
```

> This setup is commonly used to persist data generated by the MySQL database even if the container is stopped or removed. It ensures that data stored within the container's /var/lib/mysql directory is kept on your local machine and remains accessible across container restarts or recreations.

### Keycloak

In this part we will add the Keycloak service to our Compose definitions and ensure connectivity with our database. Open the docker-compose.dev.yml file we created earlier in /keycloak-nextjs-docker-tutorial and add the following definitions after the mysql service.

```
keycloak:
  container_name: keycloak
  image: "quay.io/keycloak/keycloak:${KC_VERSION}"
  command: ["start-dev"]
  restart: unless-stopped
  depends_on:
    - mysql
  healthcheck:
    test: ["CMD", "curl", "--fail", "http://localhost:8080"]
  environment:
    - KC_DB=mysql
    - KC_DB_USERNAME=keycloak
    - KC_DB_PASSWORD=password
    - KC_DB_URL=jdbc:mysql://mysql:3306/keycloak
    - KC_FEATURES=${KC_FEATURES}
    - KEYCLOAK_ADMIN=admin
    - KEYCLOAK_ADMIN_PASSWORD=${KC_PASSWORD}
  ports:
    - ${KC_PORT}:8080
```

As you can see, we have a few variables here, just like in our mysql service. Let's quickly open the .env file we created earlier in the /keycloak-nextjs-docker-tutorial directory and add the related variables.
```
KC_VERSION=17.0.1
KC_PORT=8080
KC_FEATURES=account2,admin2,account-api,token-exchange
KC_PASSWORD=keycloak
```

> Visit [https://quay.io/repository/keycloak/keycloak](https://quay.io/repository/keycloak/keycloak) to check other versions of Keycloak.

There is another very important feature of Docker I'd like to draw your attention to:

- KC\_DB\_URL=jdbc:mysql://mysql:3306/keycloak

Here we specify a database connection URL using the Java Database Connectivity (JDBC) API, but notice that instead of using localhost, a domain, or an IP address, we are using the container name!

> When using Docker Compose to define multiple services, Docker Compose creates a bridge network by default and assigns a unique name to each container within that network. By specifying a container name in Docker Compose, you are giving that container a human-readable alias within the context of the Docker network.

Good job! 👏 We've just prepared a Docker Compose environment for Keycloak backed by a MySQL database. Let's pause here, run our containers, and take a closer look at what we have accomplished.

To build and run our containers, execute the following command in the same directory as the docker-compose.dev.yml file, which in our case is /keycloak-nextjs-docker-tutorial.

```
docker compose -f docker-compose.dev.yml up -d
```

Before executing the command, I'd like to explain the -f and -d flags.

- The -f flag stands for "file". As mentioned earlier, if your Compose file is not named docker-compose.yml/yaml, you must specify the filename.
- The -d flag stands for "detached mode". When you use this flag, Docker Compose runs the containers in the background, allowing you to continue using the terminal for other tasks. If you don't use the **-d** flag, containers start in the foreground, and the logs from the containers are streamed to your terminal.
This means you'll see the output of each container's STDOUT and STDERR directly in your terminal window.

If you followed the previous steps precisely, you should see something like this in your terminal:

![](https://cdn-images-1.medium.com/max/852/0*vo2UCU6BZTvt58h2)

PS: Container names must be unique, and since I already had a container named mysql on my system, I named my MySQL container mysqlk.

To see which containers are running, open your terminal and execute

```
docker ps
```

You'll see two containers:

![](https://cdn-images-1.medium.com/max/1024/0*hDND9bUkLW0EaixU)

> 0.0.0.0:8080->8080/tcp

This is another important feature of Docker. We mentioned it earlier, but it's worth repeating: it represents the port mapping. It indicates that port 8080 on the host machine is mapped to port 8080 in the container, so any traffic directed to port 8080 on the host machine will be forwarded to the corresponding port in the Docker container. When you try to reach the service from your host machine, use the port on the left side; when another service in the same network tries to reach this service, it should use the port on the right side of the :. It is simply [hostPort]:[containerPort].

Let's check whether our Keycloak and MySQL integration is working. Visit [localhost:8080](http://127.0.0.1:8080) in your web browser. You should see the following screen:

![](https://cdn-images-1.medium.com/max/1024/0*jxJn5My231UNQpmI)

It seems Keycloak is running fine. Let's log in with the credentials we defined in the .env file earlier. Click Administration Console and log in with the following credentials:

```
Username: admin
Password: keycloak
```

![](https://cdn-images-1.medium.com/max/1024/0*yDEeqBSrbNAvTNfp)

You will see the Keycloak dashboard after a successful login.

![](https://cdn-images-1.medium.com/max/1024/0*qOzm3eGLSEmI6-V5)

Perfect!
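To make the [hostPort]:[containerPort] notation concrete, here is a small illustrative TypeScript sketch. It is not a Docker API; the function name and parsing logic are hypothetical and only demonstrate how the PORTS column printed by `docker ps` breaks down:

```typescript
// Illustrative only: unpack a docker ps PORTS entry such as
// "0.0.0.0:8080->8080/tcp" into its host and container halves.
interface PortMapping {
  hostPort: number;      // left of "->": where the host listens
  containerPort: number; // right of "->": where the service listens inside
  protocol: string;      // after the slash, e.g. "tcp"
}

function parsePortMapping(ports: string): PortMapping {
  const match = ports.match(/:(\d+)->(\d+)\/(\w+)/);
  if (!match) throw new Error(`Unrecognized mapping: ${ports}`);
  return {
    hostPort: Number(match[1]),
    containerPort: Number(match[2]),
    protocol: match[3],
  };
}
```

For example, `parsePortMapping("0.0.0.0:3000->8080/tcp")` yields host port 3000 and container port 8080, matching the rule above: the host-side port for traffic from your machine, the container-side port for traffic from services on the same Docker network.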
Before we proceed further, let me demonstrate how to access container terminals and execute commands within Docker environments.

```
docker exec -it <container_id_or_name> <command>
```

This is the syntax for executing a command in a container. For example, if you have a container running a bash shell and its ID is abcdef123456, you can access its terminal like this:

```
docker exec -it abcdef123456 bash
```

This command opens an interactive terminal (-it) within the specified container (abcdef123456) running the bash shell. You can replace bash with any other command you want to execute within the container.

We will be working inside the mysql container, so it must be:

```
docker exec -it mysql bash
```

> If you named your containers differently, you can check the container names by executing the docker ps command in your terminal.

After executing the bash command, the user@host indicator will be replaced with bash-5.1#, like this:

![](https://cdn-images-1.medium.com/max/1024/0*0YClAqo9pN8hkz5V)

This means that you've successfully accessed the bash shell of your container. Now let's connect to the MySQL database and check the tables and records.

```
mysql -u keycloak -p
```

After you execute this command in the bash shell, it will ask for the password, which we specified in docker-compose.dev.yml as

```
MYSQL_USER: keycloak
MYSQL_PASSWORD: password
```

![](https://cdn-images-1.medium.com/max/1024/0*cQDzvsDUCKknw3ux)

Let's check which databases exist in our mysql service. Execute the following query in the MySQL monitor:

```
show databases;
```

![](https://cdn-images-1.medium.com/max/1024/0*BnwLz6RKK2CKLy2Q)

You will see three databases. Two of them are special default MySQL databases: the information\_schema database provides metadata about MySQL server objects, while the performance\_schema database offers insights into server performance metrics. The important one for us is the keycloak database. It was created exactly as we defined it with MYSQL\_DATABASE: keycloak in our Compose file.
To see the contents, execute the following commands in order:

```
use keycloak;
show tables;
```

This shows a list of tables which our Keycloak service created automatically.

![](https://cdn-images-1.medium.com/max/1024/0*CSwFCFi9aBkh8zNx)

Execute exit; to leave the MySQL monitor, and execute exit in the bash shell to leave the MySQL container's bash shell.

👏 Congratulations on our achievements so far! Yet, there's still more to accomplish! Before we proceed, I'd like to introduce the concept of multi-tenancy. Understanding it will help you understand and operate Keycloak better!

A "tenant" typically refers to an individual or organization that has its own distinct set of users, data, and configuration settings within the shared software environment. Multitenancy refers to a software architecture where a single instance of the software serves multiple clients (tenants), keeping their data and configurations separate while sharing the same underlying infrastructure.

![](https://cdn-images-1.medium.com/max/1024/0*pVigesTH7bjM8XDA)

I strongly advise you to read more on multi-tenancy and understand the concept and its types.

> In Keycloak's architecture, realms essentially serve as tenants.

> There is a beautiful explanation of [**handling multitenant organization with Keycloak**](https://documentation.cloud-iam.com/how-to-guides/multitenant-with-keycloak.html) in the documentation of cloud-iam.

We will go with the second approach described there, for simplicity and consistency reasons. Let's create our first realm.

![](https://cdn-images-1.medium.com/max/1024/0*eBiT7GTqyA1Kyul0)

![](https://cdn-images-1.medium.com/max/1024/0*-9p9cAQE83YB3VSu)

![](https://cdn-images-1.medium.com/max/1024/0*GyR3HWOTsZt8iNUx)

Perfect! We just created our first realm. Let's create a client for our NextJS application now!
![](https://cdn-images-1.medium.com/max/1024/0*4UtQo-PQbZ1Bq5FH)

![](https://cdn-images-1.medium.com/max/1024/0*wXrxDvDKshgn2B6_)

![](https://cdn-images-1.medium.com/max/1024/0*znVvWDgn0eZBoBEZ)

![](https://cdn-images-1.medium.com/max/1024/0*SL1GXz0MD8cW0Wad)

If all goes well, you should see the following screen:

![](https://cdn-images-1.medium.com/max/1024/0*JAntrEvHzCimxFBP)

Congrats 🥳 We have a realm and a client for our NextJS application. Before we proceed further with client configuration and access settings, we will create and containerize a NextJS application! 💃🕺

> _"If you're not a disruptor, you will be disrupted." — John Chambers_

So;

> _"Mr. Gorbachev, tear down this wall!" — Reagan_

Open your terminal in the root of our project, which is /keycloak-nextjs-docker-tutorial, and execute this command:

```
docker compose -f docker-compose.dev.yml down
```

This command stops and removes all of the containers defined in our Compose file, as well as the Docker network.

![](https://cdn-images-1.medium.com/max/1011/0*CPvDZYf7qVvNpRG0)

Did we do all this for nothing!? Of course NOT! Open the project directory in your favorite IDE and take a look at the folder structure of the project.

![](https://cdn-images-1.medium.com/max/1024/0*_HYjY4ue9AyuOiVP)

Docker Compose and our MySQL service left us a little present. Well, actually, we asked them for it… Do you remember this part of the docker-compose.dev.yml file?

```
volumes:
  - ./mysql_data:/var/lib/mysql
```

Thanks to this definition, whenever we bring the service up again, MySQL will mount the latest state and contents of our databases. We will be reunited with our precious data, but first let's create our NextJS application and containerize it!
### NextJS

Open the project directory /keycloak-nextjs-docker-tutorial in your terminal and execute the following command to create a NextJS app:

```
npx create-next-app@latest
```

![](https://cdn-images-1.medium.com/max/1024/0*8djrFk4lQ4lzKvWe)

Once again we will open the project folder using our IDE.

![](https://cdn-images-1.medium.com/max/1024/0*1inOjyWe7yhZ3GXk)

Here is our Next application. Isn't it adorable? No! Because it is not containerized. Let's get to work then!

But before we work on our application, we should add the /next-app directory to the workspace, otherwise ESLint will throw errors.

![](https://cdn-images-1.medium.com/max/1024/0*5hmZwuNwr4ZLMiCl)

File -> Add Folder to Workspace -> (choose the next-app directory) and click Add.

Now let's open the next.config.mjs file in our /next-app directory and add a property inside the nextConfig const:

```
output: "standalone",
```

![](https://cdn-images-1.medium.com/max/1024/0*PHVxTpNRyZjjarxz)

This setting creates a self-contained build of our application. It is particularly useful for deploying a Next.js application in environments where we want to avoid installing Node.js dependencies directly on the server, such as when using Docker or deploying to a serverless environment. We will see the benefit of this when we write a production environment.

Until now we built and ran existing images pulled from their registries. This time we will create a Docker image for our NextJS application. First create a new file named dev.Dockerfile inside the /next-app directory and open it in the IDE. Before writing anything, let's first understand what a Dockerfile is, then break it down together.

> A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. It has a simple syntax and consists of a series of instructions that are executed in order to build a Docker image.
```
# Base Image: Typically, a Dockerfile starts with a base image upon which
# you build your application.
# This is specified using the FROM instruction.
FROM node:18-alpine

# If you set WORKDIR /app, then any commands or file operations
# will be relative to the /app directory.
WORKDIR /app

# Install dependencies based on the preferred package manager.
# Copy Application Files: You copy the application code or files
# into the image using the COPY or ADD instruction.
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
  if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
  elif [ -f package-lock.json ]; then npm ci; \
  elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i; \
  # Allow install without lockfile, so the example works
  # even without Node.js installed locally
  else echo "Warning: Lockfile not found. It is recommended to commit lockfiles to version control." && yarn install; \
  fi

COPY . .

# Next.js collects completely anonymous telemetry data about
# general usage. Learn more here: https://nextjs.org/telemetry
# Comment the following line to enable telemetry at run time
ENV NEXT_TELEMETRY_DISABLED 1

# Note: We could expose ports here, but instead
# Compose will handle that for us.

# Start Next.js in development mode based on the
# preferred package manager
CMD \
  if [ -f yarn.lock ]; then yarn dev; \
  elif [ -f package-lock.json ]; then npm run dev; \
  elif [ -f pnpm-lock.yaml ]; then pnpm dev; \
  else npm run dev; \
  fi
```

Beautiful. Now we will add this Dockerfile to our Compose definitions so that the Compose tool can build the image for us exactly as we defined it. Add the following block under services in docker-compose.dev.yml.
```
next-app:
  container_name: next-app
  build:
    context: ./next-app
    dockerfile: dev.Dockerfile
  restart: unless-stopped
  environment:
    - NODE_ENV=development
  volumes:
    - ./next-app:/app
    - /app/node_modules
  ports:
    - 3000:3000
```

> The context definition under the build parameter defines the location of our Dockerfile.

Before building and running the containers, I'd like to mention one more concept.

**Network Isolation:** Until now we've been using the default bridge network which Docker built for us. But let's think about this for a second. When working in a multi-service environment, Docker automatically creates a bridge network and gives each container **within that network a unique name**. What does this mean? Does it mean that when we add the NextJS application to this Compose file, MySQL will be reachable from NextJS? The answer is yes! But do we want that? What are we supposed to do with MySQL in a NextJS front-end client? Nothing. Let's cut the bonds, then!

What we have to do is isolate the connectivity between our services, as in this diagram, by creating custom networks:

![](https://cdn-images-1.medium.com/max/1024/0*vntoyKDr2sGtWvKi)

Open docker-compose.dev.yml once again, go to the very bottom, and add the following definition.

```
networks:
  frontend-network:
  keycloak-network:
```

By adding this definition, we've created two new networks for our environment. As you can see, I did not specify any properties for the networks because the default network type is bridge.

> Bridge networks allow containers to communicate with each other, and also provide external access if ports are exposed.

And this is exactly what we want. The previous definition only creates the networks. To be able to use them, we must attach our containers to our new networks.
```
version: '3.8'
services:
  mysql:
    container_name: mysqlk
    image: "mysql:${MYSQL_VERSION}"
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
    environment:
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
      MYSQL_ROOT_PASSWORD: password
    volumes:
      - ./mysql_data:/var/lib/mysql
    networks:
      - keycloak-network
  keycloak:
    container_name: keycloak
    image: "quay.io/keycloak/keycloak:${KC_VERSION}"
    command: ["start-dev"]
    restart: unless-stopped
    depends_on:
      - mysql
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080"]
    environment:
      - KC_DB=mysql
      - KC_DB_USERNAME=keycloak
      - KC_DB_PASSWORD=password
      - KC_DB_URL=jdbc:mysql://mysqlk:3306/keycloak
      - KC_FEATURES=${KC_FEATURES}
      - KEYCLOAK_ADMIN=admin
      - KEYCLOAK_ADMIN_PASSWORD=${KC_PASSWORD}
    ports:
      - ${KC_PORT}:8080
    networks:
      - keycloak-network
      - frontend-network
  next-app:
    container_name: next-app
    build:
      context: ./next-app
      dockerfile: dev.Dockerfile
    restart: unless-stopped
    environment:
      - NODE_ENV=development
    volumes:
      - ./next-app:/app
      - /app/node_modules
    ports:
      - 3000:3000
    networks:
      - frontend-network
networks:
  frontend-network:
  keycloak-network:
```

As you can see in the current version of our Compose file, I added the networks at the bottom of each service.

It's time to see what we've done. Open the terminal in the root directory of our project and run the following command to build the images.

```
docker compose -f docker-compose.dev.yml build
```

This builds the images you defined in the Compose file.

> You must rebuild the image after any change to the Dockerfile.

It's time to run our containers.

```
docker compose -f docker-compose.dev.yml up -d
```

> If you run the up command without using the build command, Compose checks whether a pre-built image with the same name exists. If it does, Compose uses that image. If it doesn't, Compose builds the image first and then starts the container with it.

Keycloak will take a bit longer to start than the other services.
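Because Keycloak comes up slowly, it can be handy to wait for it before running anything that depends on it. The sketch below is a hypothetical TypeScript helper, not part of this project's code: it mirrors what the Compose healthcheck does with curl, retrying a probe until the service answers. The probe function is injected so the retry logic works without a live server; on Node 18+ a real probe could be `async () => (await fetch("http://localhost:8080")).ok`.

```typescript
// Hypothetical readiness helper: retry a health probe until it succeeds
// or we run out of attempts, mirroring the Compose healthcheck idea.
type Probe = () => Promise<boolean>;

async function waitForService(
  probe: Probe,
  retries = 30,
  delayMs = 1000
): Promise<boolean> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      if (await probe()) return true; // service answered: ready
    } catch {
      // connection refused etc.: treat as "not ready yet"
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs)); // back off
  }
  return false; // gave up after all retries
}
```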
In the meantime, we can check on our next-app. Visit localhost:3000 in your browser.

![](https://cdn-images-1.medium.com/max/1024/0*1P7wVMsWjkxRihoo)

Nice work! Let's make sure hot reloading works as expected and the container picks up the changes we make on our host machine through the volume. Open /next-app/src/app/page.tsx and replace the content with the following code:

```
export default function Home() {
  return (
    <main>
      <div>It Works!</div>
    </main>
  );
}
```

Visit localhost:3000 and you should see the changes!

![](https://cdn-images-1.medium.com/max/1024/0*dzXYw8ld_tONc6j8)

Now it's time to integrate Keycloak authentication into our Next.js application. We will be using [NextAuth](https://next-auth.js.org/). Install the next-auth package:

```
npm install next-auth
```

Create a new directory named types under the /next-app directory, and inside it a file named node-env.d.ts:

```
// /next-app/types/node-env.d.ts
declare namespace NodeJS {
  export interface ProcessEnv {
    NEXT_PUBLIC_KEYCLOAK_CLIENT_ID: string
    KEYCLOAK_CLIENT_SECRET: string
    NEXT_LOCAL_KEYCLOAK_URL: string
    NEXT_PUBLIC_KEYCLOAK_REALM: string
    NEXT_CONTAINER_KEYCLOAK_ENDPOINT: string
  }
}
```

Now it's time to create the authentication routes.
```
import { AuthOptions } from "next-auth";
import NextAuth from "next-auth/next";
import KeycloakProvider from "next-auth/providers/keycloak";

export const authOptions: AuthOptions = {
  providers: [
    KeycloakProvider({
      jwks_endpoint: `${process.env.NEXT_CONTAINER_KEYCLOAK_ENDPOINT}/realms/myrealm/protocol/openid-connect/certs`,
      wellKnown: undefined,
      clientId: process.env.NEXT_PUBLIC_KEYCLOAK_CLIENT_ID,
      clientSecret: process.env.KEYCLOAK_CLIENT_SECRET,
      issuer: `${process.env.NEXT_LOCAL_KEYCLOAK_URL}/realms/${process.env.NEXT_PUBLIC_KEYCLOAK_REALM}`,
      authorization: {
        params: {
          scope: "openid email profile",
        },
        url: `${process.env.NEXT_LOCAL_KEYCLOAK_URL}/realms/myrealm/protocol/openid-connect/auth`,
      },
      token: `${process.env.NEXT_CONTAINER_KEYCLOAK_ENDPOINT}/realms/myrealm/protocol/openid-connect/token`,
      userinfo: `${process.env.NEXT_CONTAINER_KEYCLOAK_ENDPOINT}/realms/myrealm/protocol/openid-connect/userinfo`,
    }),
  ],
};

const handler = NextAuth(authOptions);
export { handler as GET, handler as POST };
```

Go back to the root directory of your next-app and create an environment file named .env.local:

```
NEXT_PUBLIC_KEYCLOAK_REALM=<realm-name>
NEXT_PUBLIC_KEYCLOAK_CLIENT_ID=<client-name>
KEYCLOAK_CLIENT_SECRET=<secret-from-keycloak-client>
NEXTAUTH_SECRET=<create-using-openssl>
NEXT_LOCAL_KEYCLOAK_URL="http://localhost:8080"
NEXT_CONTAINER_KEYCLOAK_ENDPOINT="http://keycloak:8080"
```

> For local network connectivity we use two different Keycloak URLs: one the browser can reach and one reachable from inside the Docker network (used for the JWKS endpoint). In most development environments both would be the same.

As I mentioned before, Keycloak still has the Realm and the Client we created before. Visit localhost:8080, sign in, and select the Realm we created earlier:

![](https://cdn-images-1.medium.com/max/1024/0*_QTmgA6gMp-Qlh8x)

Go to Clients and select the client we created earlier.
![](https://cdn-images-1.medium.com/max/1024/0*c_QG1pV5wRf5WpST)

Go to Access Settings and define the Home URL and Valid redirect URIs:

![](https://cdn-images-1.medium.com/max/1024/0*rYBpJYP5dOiCY-4H)

> Valid redirect URIs: valid URI patterns a browser can redirect to after a successful login or logout. Simple wildcards are allowed, such as '[http://example.com/](http://example.com/*)'. Relative paths can be specified too, such as /my/relative/path/. Relative paths are relative to the client root URL, or if none is specified, the auth server root URL is used. For SAML, you must set valid URI patterns if you are relying on the consumer service URL embedded with the login request.

> Home URL: default URL to use when the auth server needs to redirect or link back to the client.

Save and go to Credentials.

![](https://cdn-images-1.medium.com/max/1024/0*1M40sncmvH1aigs1)

Copy the Client secret and paste it in as the KEYCLOAK\_CLIENT\_SECRET value:

```
KEYCLOAK_CLIENT_SECRET=<secret-from-keycloak-client>
```

For NEXTAUTH\_SECRET, create a secret by running the following command and add it to .env.local. This secret is used to sign and encrypt cookies.

```
openssl rand -base64 32
```

In my case the final .env.local file looks like this:

```
NEXT_LOCAL_KEYCLOAK_URL="http://localhost:8080"
NEXT_CONTAINER_KEYCLOAK_ENDPOINT="http://keycloak:8080"
NEXT_PUBLIC_KEYCLOAK_REALM="myrealm"
NEXT_PUBLIC_KEYCLOAK_CLIENT_ID="next-app"
KEYCLOAK_CLIENT_SECRET="71ikzeN5p0fEwdHW6Hw5jOmlRvRIEtgO"
NEXTAUTH_SECRET="MdiNiCNlDcBP8fUmANd9ARPIB+tlKV/oy3m88W2bTHk="
```

Create a new folder named components under /next-app/src and create the following components inside the directory.
```
//next-app/src/components/Login.tsx
"use client"
import { signIn } from "next-auth/react";

export default function Login() {
  return <button onClick={() => signIn("keycloak")}>
    Signin with keycloak
  </button>
}

//next-app/src/components/Logout.tsx
"use client"
import { signOut } from "next-auth/react";

export default function Logout() {
  return <button onClick={() => signOut()}>
    Signout of keycloak
  </button>
}
```

Go to the page.tsx file at /next-app/src/app/page.tsx and replace the content with the following block:

```
import { getServerSession } from 'next-auth'
import { authOptions } from './api/auth/[...nextauth]/route'
import Login from '../components/Login'
import Logout from '../components/Logout'

export default async function Home() {
  const session = await getServerSession(authOptions)

  if (session) {
    return <div>
      <div>Your name is {session.user?.name}</div>
      <div><Logout /></div>
    </div>
  }

  return (
    <div>
      <Login />
    </div>
  )
}
```

We need a user to test our client.

![](https://cdn-images-1.medium.com/max/1024/0*wwSIj1XkoTnsizfT)

![](https://cdn-images-1.medium.com/max/1024/0*tnRZpP1OBR5F-BtW)

Create the user and go to the Credentials section to set a password for the user we just created.

![](https://cdn-images-1.medium.com/max/1024/0*7YFQzGpG6ArN7Jrh)

![](https://cdn-images-1.medium.com/max/1024/0*ExGHnBGXwH1zP0Xb)

Now it's time to test our authentication flow. Go to our Next.js app at localhost:3000.

![](https://cdn-images-1.medium.com/max/1024/0*DJwJMMBJrHQt30Os)

![](https://cdn-images-1.medium.com/max/1024/0*mlVa_sg_56WcpaDp)

![](https://cdn-images-1.medium.com/max/1024/0*r74oPxIdJl6RPUtG)

Congratulations! We have successfully implemented enterprise-level authentication in a containerized environment for a Next.js 13 front-end.
For federated logout you may want to check this discussion on GitHub: [https://github.com/nextauthjs/next-auth/discussions/3938](https://github.com/nextauthjs/next-auth/discussions/3938)

And here is how you can activate user registration:

![](https://cdn-images-1.medium.com/max/1024/0*eqrIkVl5wfqH5Vld)

![](https://cdn-images-1.medium.com/max/1024/0*5CMZJiVgJLngLRnd)

![](https://cdn-images-1.medium.com/max/1024/0*O6oW_CS7mX7nC0vC)

I'll also write about how to integrate it with your backend service and how to monitor your application and Keycloak using Prometheus and Grafana as soon as possible. It will also cover securing your Next.js routes using Keycloak's role-based access control.

### Thank you, see you soon!

_Originally published at_ [_https://ulasozdemir.com.tr_](https://ulasozdemir.com.tr/enterprise-level-authentication-in-a-containerized-environment-for-nextjs-13) _on June 12, 2024._
ozdemirrulass
1,890,606
Hashing
Hashing transforms input data into a fixed-size string using a hash function, essential for fast data...
0
2024-06-16T22:03:10
https://dev.to/willysambora/hashing-4nh9
devchallenge, cschallenge, computerscience, beginners
Hashing transforms input data into a fixed-size string using a hash function, essential for fast data retrieval, efficient storage, and ensuring data integrity in applications like hash tables, databases, and cryptography.
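As a minimal illustration of those properties (a sketch using Python's standard-library `hashlib` with SHA-256; any hash function behaves analogously):

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the fixed-size SHA-256 digest (64 hex chars) of arbitrary input."""
    return hashlib.sha256(data).hexdigest()

# Fixed-size output regardless of input length
short = digest(b"hello")
long_ = digest(b"hello" * 10_000)
assert len(short) == len(long_) == 64

# Integrity: any change to the input, however small, changes the digest
assert digest(b"hello") != digest(b"hellO")

print(short)  # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

Note that fast hashes like SHA-256 suit integrity checks and hash tables; password storage should instead use a slow, salted scheme.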
willysambora
1,886,861
Top 5 things you should be aware of when dealing with AI models
1. Privacy breaches – Who are you sharing your personal information with? You probably already know...
0
2024-06-16T22:00:00
https://devot.team/blog/ai-privacy
ai, datasecurity, discuss
**1. Privacy breaches – Who are you sharing your personal information with?**

You probably already know that training AI requires large amounts of data, which often includes private information. The more data we use, the more accurate and better the model is. The largest models use huge amounts of data, and their creators very often do not want to disclose what data the models were trained on.

When ChatGPT was first launched, Samsung ran into significant problems because their employees used ChatGPT at work. It later turned out that this data was used to train the model and became publicly available.

Also, many of us use Slack, right? You may not be aware of it, but Slack, by default, uses your messages for model training, and the only way to opt out is to email Slack stating that you do not want your data to be used for training the model.

This raises important concerns about data collection practices and the need to ensure that personal and sensitive information is handled responsibly.

**2. Discrimination and bias in models - Did you check your facts?**

The data used to train AI models can inherently contain biases, which may lead to discriminatory outcomes in AI decision-making. The algorithm itself can also be designed to give answers that its owners consider more "suitable" for them. The best-known example is Google Gemini, which caused a huge uproar because users who asked for a picture of the Pope received an image of a black or female Pope, and users who asked for a picture of a German soldier from 1945 received images of African Americans and Asians.

Although this may seem like an amusing oversight with pictures, the problem runs much deeper. Pushing certain viewpoints can be dangerous, because people already use chatbots to seek information, which they later treat as fact without verification. The developers creating these models certainly possess the technical expertise, but addressing bias is an ethical issue. And who holds the responsibility to deal with it?

**3. Data manipulation - Have you heard about the Nazi chatbot?**

The quality of an AI model directly depends on the quality of the data it is trained on. And as we said, data can be manipulated. Unlike the female Pope, the problem here is not the code itself or the developers who wrote it; the issue is that training data can be manipulated by users. The first case I heard about, some 5-6 years ago, was Microsoft's "Tay" chatbot. It was supposed to be an AI that would converse with users on Twitter, but within just 24 hours it started making statements like "Hitler was right" and "I hate Jews." Users found an exploit and convinced the poor bot to become a Nazi.

Recently, there was a similar incident with Reddit. When it became known that Reddit data would be used for training models, users deliberately responded with offensive and inaccurate content to influence the AI's training.

"Garbage in, garbage out" (GIGO) is a long-known term in analytics, but with the popularity of AI it has become even more relevant. What does it mean? The idea is that the outputs are only as good as the quality of the inputs. In healthcare, this can be a problem if, for example, diagnostic data is poorly labeled or not labeled at all, leading the AI model to draw incorrect conclusions or provide incorrect diagnoses. The healthcare industry, in particular, exemplifies the critical need for accurate data practices to ensure patient safety and effective treatments.

**4. Data leaks and data breaches - Is your company prepared for AI technologies?**

Despite increased prioritization and budget allocation for AI system security (94% of IT leaders set aside a dedicated AI security budget for 2024), 77% of companies have still experienced security breaches related to AI. Even with significant investments, only 61% of IT leaders believe their budgets are adequate to prevent potential cyberattacks.

**5. Deepfakes - Is the Nigerian prince trying to contact you?**

This may not directly affect those of us developing and working with AI, but I believe it will become a very big problem for all users of modern technologies and the Internet. On one hand, we will see more and more fake news and other false content, and it will become increasingly difficult to find accurate and original material. On the other hand, I am convinced that the "Nigerian prince" will soon become very convincing and will try to scam as many people as possible. Personally, I am worried that it will become very easy to manipulate someone's voice, and soon their video as well, which could be really unpleasant and dangerous.

What impressed me the most was the podcast between Joe Rogan and Steve Jobs that never happened. The models learned from all the material available about the two of them, and the model itself created the conversation. With a super-realistic text-to-voice model, it sounded like they were truly talking.

It's difficult to say what awaits us in the future. Every so often, a newer and better model with incredible capabilities emerges, and we joke internally that ChatGPT 5 will probably have "out of the box" solutions for our clients. And in a year or two or three, who knows, maybe Skynet from Terminator will actually come to life.

This article is part of a bigger [blog](https://devot.team/blog/ai-privacy) where we cover the topic of privacy and data security in the age of AI.
ana_klari_e98cbb26da5af3
1,854,421
Dev: Security
A Security Developer, also known as a Security Engineer or Application Security Engineer, is a...
27,373
2024-06-16T22:00:00
https://dev.to/r4nd3l/dev-security-1fj0
security, developer
A **Security Developer**, also known as a Security Engineer or Application Security Engineer, is a professional responsible for ensuring the security and integrity of software systems, applications, and networks. Here's a detailed description of the role:

1. **Understanding of Security Principles:**
   - Security Developers have a deep understanding of cybersecurity principles, secure coding practices, encryption techniques, authentication mechanisms, access control, and security protocols.
   - They stay updated with the latest cybersecurity threats, vulnerabilities, attack vectors, and mitigation strategies to proactively address security risks.

2. **Threat Modeling and Risk Assessment:**
   - Security Developers conduct threat modeling exercises to identify potential security threats, attack scenarios, and vulnerabilities in software designs and architectures.
   - They perform risk assessments to prioritize security risks based on their impact, likelihood, and severity, and develop mitigation strategies to address high-risk areas.

3. **Secure Software Development Lifecycle (SDLC):**
   - Security Developers integrate security into the software development lifecycle (SDLC) by incorporating security requirements, design reviews, code analysis, security testing, and security training into development processes.
   - They collaborate with cross-functional teams, including developers, QA engineers, architects, and project managers, to ensure security is considered at every stage of the development process.

4. **Secure Coding Practices:**
   - Security Developers follow secure coding practices to prevent common security vulnerabilities, such as injection attacks (SQL injection, XSS), broken authentication, sensitive data exposure, insecure direct object references, and security misconfigurations.
   - They adhere to coding standards, guidelines, and best practices recommended by industry frameworks like OWASP (Open Web Application Security Project) and CERT (Computer Emergency Response Team).

5. **Vulnerability Management:**
   - Security Developers perform vulnerability assessments and penetration testing to identify security weaknesses, misconfigurations, and software flaws in applications, APIs, and infrastructure components.
   - They use automated scanning tools, manual testing techniques, and ethical hacking methodologies to discover vulnerabilities and recommend remediation actions.

6. **Security Tooling and Automation:**
   - Security Developers utilize security tools, utilities, and frameworks to automate security tasks, monitor system activity, detect anomalies, and respond to security incidents in real-time.
   - They configure and manage security solutions such as web application firewalls (WAFs), intrusion detection systems (IDS), security information and event management (SIEM) systems, and endpoint protection platforms (EPP).

7. **Secure Authentication and Authorization:**
   - Security Developers implement robust authentication mechanisms, multi-factor authentication (MFA), password policies, and session management techniques to ensure secure user authentication and authorization.
   - They integrate identity and access management (IAM) solutions, single sign-on (SSO) providers, and OAuth/OpenID Connect protocols to manage user identities and permissions effectively.

8. **Data Protection and Encryption:**
   - Security Developers apply encryption algorithms, cryptographic protocols, and data masking techniques to protect sensitive data at rest, in transit, and in use.
   - They implement encryption libraries, SSL/TLS protocols, secure key management practices, and data anonymization methods to safeguard confidential information and prevent data breaches.

9. **Incident Response and Forensics:**
   - Security Developers develop incident response plans, playbooks, and procedures to handle security incidents, data breaches, and cyber attacks effectively.
   - They conduct forensic investigations, root cause analysis, and post-incident reviews to understand the impact of security incidents, identify lessons learned, and implement corrective actions to prevent future incidents.

10. **Compliance and Regulatory Requirements:**
    - Security Developers ensure compliance with industry regulations, data protection laws, privacy standards, and security frameworks such as GDPR (General Data Protection Regulation), HIPAA (Health Insurance Portability and Accountability Act), PCI DSS (Payment Card Industry Data Security Standard), and ISO 27001.
    - They perform security audits, assessments, and certification processes to demonstrate compliance with applicable security standards and regulatory requirements.

In summary, a Security Developer plays a crucial role in protecting organizations' assets, data, and infrastructure from cybersecurity threats and vulnerabilities. By applying security best practices, conducting risk assessments, implementing security controls, and fostering a security-aware culture, they contribute to building resilient and secure software systems that withstand evolving cyber threats and ensure the confidentiality, integrity, and availability of digital assets.
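To make the secure coding practices in point 4 concrete, here is a minimal sketch (Python with the stdlib `sqlite3` module; the table and values are invented for illustration) contrasting string-built SQL, which is open to injection, with a parameterized query:

```python
import sqlite3

# Toy table for illustration only
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "' OR '1'='1"

# UNSAFE: concatenating user input into the SQL text lets it rewrite the query
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(unsafe)  # the injected OR clause matches every row: [('alice', 'admin')]

# SAFE: a parameterized query treats the input strictly as data, never as SQL
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)  # no user is literally named "' OR '1'='1": []
```

The same principle applies in any language or database driver: keep the SQL text constant and pass user input only as bound parameters.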
r4nd3l
1,890,605
Building Go packages for Windows on ARM
Intro Recently, Qualcomm announced list of laptops that will have their new Snapdragon X...
0
2024-06-16T21:59:09
https://dev.to/x1unix/building-go-packages-for-windows-on-arm-1721
go, cgo, windowsonarm
## Intro

Recently, Qualcomm announced a [list of laptops](https://www.qualcomm.com/products/mobile/snapdragon/pcs-and-tablets/laptop-device-finder) that will ship with their new Snapdragon X Elite chip, which is claimed to match the performance of Apple's M3 processor. This means that in the near future the market will see more and more Windows on ARM laptops, and we as developers should be prepared for that.

This article is structured in Q&A sections to briefly explain the topic.

{%- # TOC start -%}

- [WoA can emulate x86, can't it?](#woa-can-emulate-x86-cant-it)
- [Doesn't Go already support cross compilation?](#doesnt-go-already-support-cross-compilation)
- [Okay, doesn't MinGW already cover that?](#okay-doesnt-mingw-already-cover-that)
- [So how do I build a program with CGO for WoA?](#so-how-do-i-build-a-program-with-cgo-for-woa)
- [Is there a more convenient way to use llvm-mingw?](#is-there-a-more-convenient-way-to-use-llvmmingw)
- [How to use it?](#how-to-use-it)
  * [Result](#result)
  * [How can I integrate this into my CI?](#how-can-i-integrate-this-into-my-ci)

{%- # TOC end -%}

## WoA can emulate x86, can't it?

Although Windows on ARM (WoA) supports x86 and x86-64 emulation, some features like vector instructions (e.g. AVX2) might not be supported, and emulation brings some performance penalty. People who use libraries like [simdjson](https://github.com/minio/simdjson-go) might be at risk.

## Doesn't Go already support cross compilation?

Go offers out-of-the-box cross compilation support, including ARM64 for Windows, but this doesn't cover CGO. That means any program that uses CGO libraries like [sqlite3](https://github.com/mattn/go-sqlite3) will require a C and/or C++ toolchain for cross-compilation.

## Okay, doesn't MinGW already cover that?

The most popular solution for cross-compilation on Windows and Linux is MinGW. Unfortunately, MinGW doesn't support the ARM64 target. The [MSYS2](https://www.msys2.org/wiki/arm64/) toolchain provides ARM support, but it is available only on Windows.
That means that in order to build programs for Windows on ARM from a Linux machine, a different option has to be considered.

## So how do I build a program with CGO for WoA?

Don't worry, this case is already covered. There is an LLVM-based alternative toolchain that supports WoA: [llvm-mingw](https://github.com/mstorsjo/llvm-mingw). Unfortunately, it's not available in most distros except [Arch Linux](https://aur.archlinux.org/packages/mingw-w64-llvm), but you can still download prebuilt binaries from the releases page.

## Is there a more convenient way to use llvm-mingw?

Yes, sure! There is a special [Docker image](https://github.com/x1unix/docker-go-mingw/) with toolchains for all Windows architectures: amd64, x86 and **arm64**. The image is called `x1unix/go-mingw` and offers arm64 target support since Go `1.21`. The image uses MinGW for the x86 targets and llvm-mingw for the arm64 target.

## How to use it?

It's quite simple: just pull the image and call the `go build` command with the `GOARCH=arm64` environment variable set.
For example, let's take a simple WinAPI CGO example from [here](https://github.com/x1unix/docker-go-mingw/tree/master/example/hello) and build it using the Docker image:

{% spoiler Go code %}

```go
package main

/*
#cgo LDFLAGS: -lkernel32
#include <windows.h>
#include <stdio.h>

// Function to show a MessageBox using WinAPI
void hello() {
    SYSTEM_INFO si;
    ZeroMemory(&si, sizeof(SYSTEM_INFO));
    GetSystemInfo(&si);

    char *arch;
    switch (si.wProcessorArchitecture) {
    case PROCESSOR_ARCHITECTURE_AMD64:
        arch = "AMD64";
        break;
    case PROCESSOR_ARCHITECTURE_INTEL:
        arch = "x86";
        break;
    case PROCESSOR_ARCHITECTURE_ARM:
        arch = "ARM";
        break;
    case PROCESSOR_ARCHITECTURE_ARM64:
        arch = "ARM64";
        break;
    case PROCESSOR_ARCHITECTURE_IA64:
        arch = "IA";
        break;
    default:
        arch = "Unknown";
        break;
    }

    char message[30];
    sprintf(message, "Hello from CGO on %s", arch);
    MessageBox(NULL, message, "Hello World", MB_OK);
}
*/
import "C"

import "fmt"

func main() {
    fmt.Println("Calling C function to open a MessageBox...")
    C.hello()
}
```

{% endspoiler %}

```shell
# Go version to use. WoA supported since Go 1.21.
export GO_VERSION=1.22

docker run --rm -it -e GOARCH=arm64 \
  -v .:/go/work -w /go/work \
  x1unix/go-mingw:$GO_VERSION \
  go build -o hello.exe .
```

### Result

After building the program, let's try to run it inside a VM with Windows on ARM. Parallels Workstation is used for this example.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8f7pzmjw3d90ig1a7wk7.png)

### How can I integrate this into my CI?

As this is a plain Docker image, it can easily be used both in GitHub Actions and GitLab CI. Please check the CI templates [here](https://github.com/x1unix/docker-go-mingw/tree/master/example/ci).
x1unix
1,890,609
How to use Novu with Nuxt.js
In this article, I will guide you through integrating Novu, an open source notification infrastructure, with Nuxt.js, the popular Vue.js framework.
0
2024-06-16T21:59:00
https://dev.to/michaldziuba03/how-to-use-novu-with-nuxtjs-m19
javascript, vue, nuxt, novu
---
title: How to use Novu with Nuxt.js
published: true
description: In this article, I will guide you through integrating Novu, an open source notification infrastructure, with Nuxt.js, the popular Vue.js framework.
tags: javascript,vue,nuxt,novu
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hgnzhvdrfgdxhmy3ha0j.png
published_at: 2024-06-16 21:59 +0000
---

In this article, I will guide you through integrating Novu, an open source notification infrastructure, with Nuxt.js, the popular Vue.js framework.

### TL;DR

We will integrate the Novu notification center with HMAC encryption enabled, and then we will trigger a notification in Nuxt.js using the Novu SDK. The full code is available in my [GitHub repository](https://github.com/michaldziuba03/novu-nuxt).

## Novu dashboard

Let's say you have defined a Novu [workflow](https://docs.novu.co/workflows/notification-workflows). If you click the "Trigger notification" button, you will find the `subscriberId` of your test user. In Novu, a **Subscriber** is an entity that receives notifications, so in a real app you would use the users from your database as [subscribers](https://docs.novu.co/subscribers/subscribers).

![Novu testing workflow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/svovwhl3uogi7fda1b2r.png)

You can find your credentials on the "Settings" page. The **Application ID** can be stored safely on the client side, but the **API Key** must stay secret and be stored server-side only.

![Novu credentials](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j2y323im44xav25dlo04.png)

For security reasons, enable HMAC encryption for the **Novu In-App** integration on the "Integration Store" page. With HMAC enabled, the subscriberId is hashed with the secret API key, which prevents malicious actors from impersonating users.
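For intuition, that subscriber hash is a standard SHA-256 HMAC of the subscriberId keyed with the secret API key. A quick language-neutral sketch in Python (placeholder values; in the app itself the digest is computed server-side with Node's `crypto.createHmac`):

```python
import hashlib
import hmac

# Placeholder values for illustration; in production the API key lives only on the server
api_key = "your-novu-api-key"
subscriber_id = "subscriber-123"

subscriber_hash = hmac.new(
    api_key.encode(), subscriber_id.encode(), hashlib.sha256
).hexdigest()

# The client sends subscriber_id together with subscriber_hash; Novu recomputes
# the HMAC with the same API key and rejects the session on a mismatch, so a user
# who doesn't know the key cannot forge a valid hash for someone else's id.
print(subscriber_hash)  # 64 hex characters
```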
![Novu HMAC](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/orumm68d4pldl7gra0z6.png)

## Nuxt.js setup

Create a new Nuxt.js project by running the code snippet below:

```sh
npx nuxi@latest init app
```

Install the Novu notification center integration for Vue.js:

```sh
npm install @novu/notification-center-vue
```

Create a `.env` file and paste your values from the Novu dashboard:

```sh
NOVU_SUBSRIBER_ID='<YOUR_SUBSCRIBER_ID>'
NOVU_API_KEY='<YOUR_NOVU_API_KEY>'
VITE_NOVU_APP_ID='<YOUR_NOVU_APP_ID>'
```

I use TailwindCSS for styling - you can find the Nuxt.js configuration guide in the official [Tailwind docs](https://tailwindcss.com/docs/guides/nuxtjs).

Now, define a **client-side** plugin:

> plugins/novu.client.ts

```ts
import NotificationCenterPlugin from '@novu/notification-center-vue';
import '@novu/notification-center-vue/dist/style.css';

export default defineNuxtPlugin((nuxtApp) => {
  nuxtApp.vueApp.use(NotificationCenterPlugin);
});
```

> Nuxt.js automatically recognizes the .client suffix and ensures that the plugin is only included in the client-side bundle.

In the next step, we wrap the Novu notification center in a [client-side](https://nuxt.com/docs/guide/directory-structure/components#client-components) component. We simulate a request for user session data: we want to obtain the subscriberId (the user's id from the database) and the HMAC hash generated on the server. We will add the missing API endpoint handler later.
> components/NotificationCenter.client.vue

```vue
<script lang="ts" setup>
const { data, status } = await useAsyncData('session', () =>
  $fetch('/api/user')
)

const applicationIdentifier = import.meta.env.VITE_NOVU_APP_ID;

const sessionLoaded = () => {
  console.log('Notification center session loaded successfully!');
};
</script>

<template>
  <NotificationCenterComponent
    v-if="status == 'success' && data"
    v-slot="slot"
    :subscriber-id="data.userId"
    :subscriber-hash="data.hmacHash"
    :application-identifier="applicationIdentifier"
    :session-loaded="sessionLoaded"
  >
    <button
      class="relative w-8 h-8 m-auto"
      :class="{'before:bg-red-700 before:w-2.5 before:h-2.5 before:rounded-full before:absolute before:top-0 before:right-0': slot.unseenCount > 0}"
    >
      <svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
        <path d="M6 8a6 6 0 0 1 12 0c0 7 3 9 3 9H3s3-2 3-9"/>
        <path d="M10.3 21a1.94 1.94 0 0 0 3.4 0"/>
      </svg>
    </button>
  </NotificationCenterComponent>
</template>
```

Now put our notification center component inside the header:

> Header.vue

```vue
<template>
  <Container>
    <header class="w-full pt-6 lg:pt-8 flex gap-4 items-center justify-between">
      <h1 class="font-black text-lg">App</h1>
      <nav class="flex items-center gap-8 font-medium text-sm">
        <nuxt-link to="/">Home</nuxt-link>
        <nuxt-link to="/about">About</nuxt-link>
        <NotificationCenter />
      </nav>
    </header>
  </Container>
</template>
```

> Container.vue

```vue
<template>
  <div class="px-6 mx-auto max-w-[1200px]">
    <slot />
  </div>
</template>
```

Create a hero section with a button to trigger the notification:

> Trigger.vue

```vue
<script setup>
function triggerNotification() {
  fetch('/api/trigger', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
  });
}
</script>

<template>
  <Container>
    <div class="py-24 md:py-32 flex flex-col items-center">
      <a
        href="https://docs.novu.co/getting-started/send-your-first-notification"
        target="_blank"
        class="px-3 py-1 text-sm font-medium border border-[#e500c0] text-[#e500c0] rounded-full text-center mb-4">
        Check Novu docs 🚀
      </a>
      <h1 class="text-5xl md:text-6xl font-extrabold text-center">Trigger notification</h1>
      <p class="text-center text-slate-900 text-lg mt-4">
        Click the trigger button and then check the notification center by clicking the bell.
      </p>
      <div class="flex justify-center gap-4 mt-8">
        <button @click="triggerNotification" class="bg-black hover:bg-black/80 text-white font-medium px-8 py-3 rounded-md">Trigger</button>
      </div>
    </div>
  </Container>
</template>
```

## Server-side integration

Install the Novu Node.js SDK (the **SDK** is intended for use exclusively in server-side applications):

```sh
npm install @novu/node
```

Now, add our missing API handlers:

> server/api/trigger

```ts
import { Novu } from '@novu/node';

const novu = new Novu({
  apiKey: process.env.NOVU_API_KEY,
});

export default defineEventHandler(async (event) => {
  console.log(`Received event: ${event}`);

  await novu.trigger('sample-workflow', {
    to: {
      subscriberId: process.env.NOVU_SUBSRIBER_ID!,
    },
    payload: {}
  });
});
```

Last step: hash the `subscriberId` with HMAC and return the user data. In a real app you would fetch the currently authenticated user from the database.

> server/api/user

```ts
import { createHmac } from "crypto";

export default defineEventHandler(async (event) => {
  console.log(`Received event: ${event}`);

  const userId = process.env.NOVU_SUBSRIBER_ID!;
  const hmacHash = createHmac("sha256", process.env.NOVU_API_KEY!)
    .update(userId)
    .digest("hex");

  return {
    userId,
    hmacHash,
  }
});
```

## What next?

If you want to customize your notification center, there is the [Novu Headless Library](https://docs.novu.co/guides/headless-notification-center-guide). Novu Headless is a plain-JS library that abstracts the underlying real-time WebSocket events and REST API calls.
You can also define your workflows programmatically (a code-first approach) using Novu Echo. Echo supports Nuxt.js out of the box, so check it out:

- [Novu Echo](https://docs.novu.co/echo/quickstart)
- [Novu Echo with Nuxt](https://docs.novu.co/echo/sdk/frameworks/nuxt)
- [How to send notifications with Nuxt.js and Vuemail](https://docs.novu.co/guides/echo-guides/echo-nuxt-vuemail)
michaldziuba03
1,884,496
Setting Up a Secure Wazuh Environment by AWS EC2
Introduction: Wazuh is an open-source security platform that aims to help organizations monitor and...
0
2024-06-16T21:52:01
https://dev.to/zahraajawad/setting-up-a-secure-wazuh-environment-by-aws-ec2-487o
aws, cloudcomputing, cloudnative, community
**Introduction:**

Wazuh is an open-source security platform that helps organizations monitor and manage security incidents, detect threats, and ensure compliance with security regulations. It provides capabilities for intrusion detection, log analysis, vulnerability detection, and threat intelligence, and is trusted and used by thousands of organizations around the world. With its powerful features and flexible capabilities, it has become a preferred choice for those looking for a reliable, flexible open-source security platform. Wazuh combines threat detection and response capabilities, and can be integrated and customized to meet the needs of different organizations.

**Features of Wazuh**

**Protect endpoints and cloud workloads:** Wazuh provides continuous protection for on-premises and cloud-based environments, ensuring they are protected from potential threats.

**Integration and scalability:** Wazuh can integrate with other security tools and expand its capabilities based on an organization's needs.

**Security monitoring and log data analysis:** Wazuh enables effective threat detection and response through log data analysis.

**Compliance monitoring and incident response:** Wazuh supports regulatory compliance monitoring and cloud security.

**Security tools, log control, and incident control:** Wazuh provides tools for security management, incident control, and log control.
**Installing Wazuh on AWS** **Step 1: "Launch instance"** After logging into the AWS account, select the EC2 service through Services or via the search box: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lywxncnc6ujxcxz3kn6h.png) Click on **Launch instance** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtla1bl93xk2djvvub1x.png) Under **Name and tags**, enter a name to identify your instance. For this tutorial, name the instance (Wazuh) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ijq3vv6mhttoj8xjvksg.png) Under **Application and OS Images**: from **Quick Start**, choose an AMI that meets your needs. Here we choose Ubuntu (which is free tier eligible) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2ux3jnkpswkqu3v0crpe.png) Under **Instance type**: choose the type of instance; here we choose **t2.medium**. (**It is recommended that the minimum specifications be 2 vCPUs and 4 GiB of memory, which are available on t2.medium or larger instances.**) 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0y05act8f1n9rcjx5aun.png) Under **Key pair (login)**: choose a key pair ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/95ys0kycmd8nyvear88n.png) or create a new key pair: give a name to the key pair, then click **Create key pair** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c3ewo70jj1qj0mq8chxf.png) Under **Network settings**, under **Firewall (security groups)**: choose **Create security group**, then check the boxes to **Allow SSH traffic** and to **Allow HTTPS traffic from the internet** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nb46x4h16wy2qiwlfkb1.png) Leave all other configurations as they are (default settings). In the **Summary** panel, review your instance configuration and then choose **Launch instance**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mnve6n8fbasfs4fdyi1k.png) The launch of the instance is successfully initiated; to see the instance, click on its ID: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7ckehjzh6dcqyl52n14.png) Your instance will first be **Pending**, and will then go into the **Running** state. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e2emsizf2dt2doqrb0wr.png) **Step 2: "Connect to the instance"** To connect to your instance, select the instance and choose **Connect**. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/70lfweizuhrosg1nz9it.png) There are many ways to connect to EC2; here we will choose the **SSH client**. 
After selecting the **"SSH Client"** section, copy and execute the following commands in the terminal as per the following steps: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9z2mqj4h5fjnmmaejxy2.png) Open a terminal (here we use Git Bash) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71ior9mzuloqrqbkzh1s.png) Change the directory with the `cd` (change directory) command to wherever you downloaded your .pem file (key pair). In this article, the .pem file is stored in the downloads folder, so execute the `cd` command to change the path to the location of the key: `cd Download/` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/umwp9n4xyjhh6nn6cetd.png) Execute the following commands sequentially: 1. `chmod 400 [key pair name].pem` 2. `ssh -i /path/key-pair-name.pem instance-user-name@instance-public-dns-name` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/miqxnqplr8g5qq2tco3y.png) After the command is executed you will be prompted to type "yes" to continue with the connection ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dve2oipn2gyfusfj6dxa.png) And that's it! Now we're logged in to our AWS instance. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9he8g5joohem1nscaqjb.png) Before installing Wazuh on the EC2 instance, we get root permission by executing the `sudo -i` command ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k2fnf271bs1mat0qm9l1.png) Executing `sudo -i` switches you to the root user on Linux. It gives you full administrator (root) privileges, allowing you to perform commands and operations that require root access. 
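The two connection commands above can be sketched in shell with placeholder values (the key file name and instance address below are examples of my own, not values from this tutorial — substitute the key pair you downloaded and your instance's public DNS):

```shell
KEY_FILE="wazuh-key.pem"   # placeholder: your downloaded key pair file
INSTANCE="ubuntu@ec2-203-0-113-10.compute-1.amazonaws.com"   # placeholder: your instance's public DNS

# Print the two commands to run, in order:
# 1) restrict the key's permissions so ssh will accept it,
# 2) open the SSH session to the instance.
echo "chmod 400 $KEY_FILE"
echo "ssh -i $KEY_FILE $INSTANCE"
```

Run the printed commands (rather than just echoing them) to connect; note that `ubuntu` is the default login user for Ubuntu AMIs.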
**Step 3: "Install Wazuh"** Now to install Wazuh, execute the command `curl -sO https://packages.wazuh.com/4.7/wazuh-install.sh && sudo bash ./wazuh-install.sh -a -i` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zdm9hfnvxbh8o9qhvgg8.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzqjrdfvn86nrkowk9cg.png) After some time has passed, the Wazuh installation is complete and we can log in using the username and password that we obtained during the installation: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pugwlhhuo3pgqawx1lv3.png) **Step 4: "Log in to Wazuh"** - Return to the AWS account and click on the instance - Then click on the details box - Then click on the public IPv4 address to open it in a browser. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/17r8f74xjqnv8z00vcud.png) After opening the public IPv4 address in the browser, the following page appears, so we click on **Advanced** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hy9rv9529x7degsewrv3.png) then click below: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vnp83f25vtc14r7dtavl.png) The Wazuh setup steps have been completed successfully ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m4apkdxerr8tidzys1zx.png) We now log in using the username and password that were obtained during the installation process: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/58v1khj6kq8r54an3izt.png) You have successfully logged into Wazuh ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o6ng5iqsq1pnicayl7sx.png) **References:** https://documentation.wazuh.com/current/installation-guide/wazuh-server/index.html https://dev.to/zahraajawad/our-wordpress-site-installation-by-aws-34jn
zahraajawad
1,890,599
Elanat Brings Web-Forms Back to ASP.NET Core!
The Elanat team will soon bring back the former Microsoft Web-Forms in ASP.NET Standard with a new...
0
2024-06-16T21:49:54
https://elanat.net/content/96/We%20will%20soon%20bring%20Web-Forms%20to%20ASP.NET%20Core.html
news, dotnet, backend, frontend
The [Elanat team](https://elanat.net) will soon bring the former Microsoft Web-Forms from ASP.NET Standard, with a new and powerful architecture, to ASP.NET Core. The new Web-Forms will be added to the [CodeBehind Framework](https://github.com/elanatframework/Code_behind). Over time, the Web-Forms architecture for developing websites with ASP.NET fell out of fashion due to its problems and limitations. But the Elanat team wants to bring Web-Forms back to ASP.NET Core by presenting a revolutionary new architecture. Elanat has incorporated new improvements into the former Web-Forms architecture and will surprise web developers. The team has tried to revive this architecture in a new and efficient way, and has done comprehensive studies to eliminate all the weaknesses of the former Web-Forms; significant improvements can therefore be seen in the new Web-Forms. With the addition of the new Web-Forms to ASP.NET Core, you will see another revolutionary idea from the Elanat team. This is an important step in the evolution of web development! Problems that existed in the former Web-Forms: - The volume of data sent to the server was large - Data was constantly returned to the server (postback) - ViewState enlarged the page, resulting in slower loading and more bandwidth usage - Lack of control over the HTML - High pressure on the server due to the presence of web controls The new Web-Forms that the Elanat team will provide will be combined with the MVC architecture and will give developers high maneuverability. This structure offers new and more flexible solutions for website development and allows developers to control their view pages more effectively and with higher quality, without worrying about the performance of the new Web-Forms. In this new Web-Forms architecture, the front-end is completely abandoned, and it will give a new meaning to the web. 
**Wait for this big surprise!** ### Related links CodeBehind on GitHub: https://github.com/elanatframework/Code_behind CodeBehind in NuGet: https://www.nuget.org/packages/CodeBehind/ CodeBehind page: https://elanat.net/page_content/code_behind
elanatframework
1,890,604
Short notes on the following Concepts
What is Virtualization? Scalability Agility High Availability Fault Tolerant Global Reach What is...
0
2024-06-16T21:45:22
https://dev.to/kennyig/short-notes-on-the-following-concepts-4no2
cloud, beginners, computerscience, azure
1. What is Virtualization? 2. Scalability 3. Agility 4. High Availability 5. Fault Tolerance 6. Global Reach 7. What is the difference between Elasticity and Scalability ## 1. **What is virtualization?** Virtualization is the practice of creating or spinning up virtualized systems (operating systems, servers, storage devices, or networks) rather than tying these digital resources to specific physical hardware. Virtualization technology enables multiple operating systems and applications to run simultaneously and independently on a single physical machine. This allows multiple virtual machines (VMs) to share the same CPU, memory, storage, and other resources while avoiding resource conflicts. Virtualization allows the creation of virtual environments that pool resources and reduce IT infrastructure costs while increasing the accessibility, manageability, and efficiency of system resources, optimizing service delivery and enhancing performance. Hence, virtualization allows organizations to deploy and use multiple applications and system snapshots, and to optimize resource utilization, data storage, and accessibility with less complexity, altogether improving scalability and making data processing faster and more efficient. Some of the potential benefits of virtualization include better resource management, cost savings on hardware, increased flexibility, virtual machine communication and isolation, easy migration of apps, load balancing, and less downtime thanks to saved server instances. ## 2. What is scalability in cloud computing? In cloud computing, scalability is the capacity of a cloud infrastructure to modify its resources in response to fluctuating demands. This implies that you will not need to wait for complicated hardware upgrades or lengthy processes to expand or decrease resources like processing power, storage, or memory on the fly. 
Below is a summary of the main elements of cloud scalability: Dynamic Adjustment: You can scale your computing power to match your workload by adjusting cloud resources up or down as needed. Cost-effectiveness: You do not have to spend money on extra hardware that sits around when demand is low because you only pay for the resources you utilize. Flexibility: Cloud scalability makes it simple to adapt to unplanned growth, seasonal demands, or traffic spikes, ensuring that your applications can manage the strain. Cloud scaling primarily comes in two flavors: Increasing or decreasing a single server instance's resources, like RAM or CPU, is known as vertical scaling (scaling up/down). Horizontal scaling, often known as scaling out, is the process of dividing the workload over several computers by adding or deleting complete server instances. ## 3. What is Agility in cloud computing? In cloud computing, agility is the capacity of an enterprise to create, test, and implement applications in a cloud environment rapidly. It is basically the flexibility and nimbleness that come from using cloud resources. Consider this: installing and managing IT infrastructure can be a laborious and slow procedure in a traditional on-premises environment. These obstacles are eliminated by cloud computing by providing: On-Demand Resources: Cloud service providers provide an enormous pool of instantaneously provisionable and de-provisionable computer resources, such as memory, processing power, and storage. This removes the requirement for protracted cycles in hardware acquisition. Automated Processes: A lot of cloud platforms come with capabilities for scaling, configuration management, and deployment automation. This allows IT personnel to concentrate on more advanced duties. Elasticity: The ability to quickly scale up or down cloud resources in response to changing demand. 
This frees companies from worrying about infrastructure constraints so they may test new apps or handle spikes in demand. The following are some advantages of cloud agility: Faster Time to Market: Companies get a competitive edge by being able to provide new concepts and services to consumers more quickly. Enhanced Innovation: A culture of experimentation and creativity is fostered by the cloud's ease of development and testing. Lower Costs: Companies do not have to pay for hardware that is not being used because they only pay for the resources they use. ## 4. Explain High Availability In cloud computing, high availability (HA) describes a system's capacity to continue functioning and being available for prolonged periods of time, even in the event of hardware or software problems. It is essential for applications that are mission-critical and cannot withstand downtime, since any disruption might result in major financial loss or disruptions to services. An examination of high availability's operation is provided below: Redundancy: Multiple servers, storage devices, and network connections are examples of redundant resources that are used by HA systems. There is less downtime when a component fails since another one takes over smoothly. Automated Failover: HA designs are built with automated failover in mind. This implies that, in the case of a failure, the system will identify the problem and, with the least amount of disturbance to continuing activities, will immediately transition to a backup resource. Minimal Downtime: Achieving nearly constant uptime, sometimes expressed in "nines," is the aim of HA. For example, a system with four nines, or 99.99% uptime, would only have about 53 minutes of downtime a year. To ensure high availability, cloud providers provide a range of tools and services, such as: Virtual machines (VMs): You can set up and launch VMs on several hosts using cloud platforms. The program can be swiftly moved to another VM in case one fails. 
By distributing incoming traffic among several servers, load balancing keeps any one server from getting overworked and possibly failing. Disaster Recovery (DR): This extends beyond hardware malfunctions and entails data and application replication between geographically dispersed sites. This guarantees that your applications can be promptly restored in the event of a widespread outage or natural disaster. High availability is a critical consideration for businesses that rely on continuous operation and minimal downtime. By implementing HA strategies in the cloud, you can ensure your applications are always up and running, fostering better customer experience and business continuity. ## 5. Fault Tolerance? In cloud computing, fault tolerance is all about designing a system that can continue to function normally even in the event that certain components break or malfunction. Similar to having an infrastructure backup plan, it guarantees that your apps continue to function even in the event of technical difficulties. The following describes how fault tolerance functions in a cloud environment: Continuous Operation: The fundamental idea is to keep services running even in the event of software or hardware malfunctions. This is particularly crucial for vital applications since downtime can be expensive. Redundancy is Crucial: Redundancy in cloud fault tolerance refers to the presence of backups for critical components. This can entail redundant network connections, additional servers with mirrored data, or extra storage devices. Automatic Failover: Ideally, the system should automatically transition to a backup resource in the event of a fault. This reduces downtime and guarantees a seamless user transition. To accomplish fault tolerance in the cloud, various methods are employed: Replication is the process of making duplicates of your apps and data on different servers. Everything can continue to function even if one server dies because the other one can take over. 
By distributing workloads among several servers, load balancing keeps any one server from getting overworked and possibly failing. Self-Healing Mechanisms: A few cloud platforms have automated mechanisms that are able to identify and resolve issues without the need for human assistance. This strengthens the robustness of the system even further. Fault tolerance has many advantages in cloud computing. Reduced Downtime: Guards against interruptions and failures in service, guaranteeing that users can always access your apps. Enhances the overall dependability of your cloud infrastructure by reducing the likelihood of failures. Improved Business Continuity: Reduces possible losses by assisting companies in continuing to operate even in the face of unanticipated events. You may create a more durable and resilient system and guarantee that your users can access and rely on your applications by implementing fault tolerance techniques into your cloud environment. ## 6. Global Reach The ability of cloud providers to make their infrastructure and services available over a wide geographic area is referred to as "global reach" in the context of cloud computing. Businesses can benefit from this in a number of ways. Market Expansion: Cloud computing enables companies to effortlessly enter new international markets. Users may access data and apps with minimal latency, no matter where they are physically situated, thanks to data centers that are positioned strategically throughout multiple locations. Global Scalability: Cloud resources are easily scalable, allowing them to adjust to changing demands in many locations. This eliminates the need for companies to develop and maintain physical infrastructure in each location, enabling them to respond to spikes in user base or traffic in particular areas. Lower Costs: Companies save money by not having to build up and maintain many data centers, which would have required a large upfront investment and continuous upkeep. 
The infrastructure is handled by cloud providers, who provide a pay-as-you-go model that may be scaled to meet your demands. Enhanced Compliance: Data centers that meet different local data privacy laws are provided by cloud providers. This makes it easier for companies to comply with local rules and regulations when storing and processing data, which is beneficial for worldwide operations. Major cloud providers use their worldwide reach in the following ways: Distributed Data Centers: A global network of data centers has been developed by cloud behemoths such as Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS). This guarantees that users, wherever they may be, may access resources with minimal latency. Content Delivery Networks (CDNs): By distributing content among geographically separated servers, these networks allow users worldwide to load pages more quickly. Regional Availability Zones: Cloud companies frequently provide many availability zones, or geographically separated data centers, within each region. In the event of an outage, this offers redundancy and fault tolerance. For companies of all sizes, having a global presence is revolutionary. It gives companies the ability to take on global competition, attract new clients, and maximize the performance and financial efficiency of their cloud deployments. ## 7. What is the difference between Elasticity and Scalability Cloud computing elasticity and scalability both cope with demand fluctuations, but they react differently and have different ideal conditions. Below is a summary of the main distinctions: ## Scalability Focus: Scheduled resource modifications in response to projected increases or decreases in workload. Action: Usually manual or based on pre-established limits. You can ask the cloud provider to release unused resources (scaling down) or request more resources (scaling up). 
Timeline: Depending on the complexity and the procedure used by the cloud provider, scaling may take some time to implement. Perfect for: Meeting demands for processing power, data storage, or a steady increase in the user base. Example: A business expects the holidays to bring in a spike in internet sales. To manage the extra traffic, they might proactively scale up their server capacity in advance. ## Elasticity Focus: Real-time variations in demand are used to automatically adapt resources. Action: Scaling automatically in accordance with preset criteria and triggers. The cloud system itself adjusts resource scaling up or down based on changes in workload. Timeframe: Elasticity occurs in minutes or even seconds, far faster than deliberate scaling. Perfect for: Managing sporadic variations in resource needs, workload surges, and erratic traffic spikes. Example: A trending topic causes a rapid spike in visitors to a social networking app. Without the need for human intervention, elasticity automatically ramps up resources to maintain performance. Here's a comparison: Consider scalability like Lego construction. You have the freedom to design your structure and change the bricks (materials) as needed. Elasticity is like a stretch band: it automatically adapts to varying tension (demand) levels without the band needing to be physically altered. To sum up, cloud computing offers both scalability and elasticity as beneficial advantages. Elasticity makes sure you can deal with unforeseen spikes in demand, while scalability helps you prepare for future expansion. To maximize both performance and cost-efficiency in your cloud environment, you should ideally combine the two.
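The trigger-driven elasticity described above can be sketched as a toy autoscaling rule in shell (the function name, thresholds, and bounds here are illustrative assumptions of mine, not any provider's actual policy):

```shell
# Toy elasticity rule: scale out when average CPU% is high, scale in when
# it is low, always staying within fixed instance bounds (1..10).
elastic_scale() {
  instances=$1; cpu=$2
  if [ "$cpu" -gt 80 ] && [ "$instances" -lt 10 ]; then
    echo $((instances + 1))      # traffic spike: add an instance automatically
  elif [ "$cpu" -lt 30 ] && [ "$instances" -gt 1 ]; then
    echo $((instances - 1))      # quiet period: remove an instance automatically
  else
    echo "$instances"            # demand is within the comfort zone
  fi
}

elastic_scale 3 90   # prints 4 (scale out)
elastic_scale 3 10   # prints 2 (scale in)
```

A real cloud platform evaluates rules like this continuously against monitored metrics; the point is only that elasticity is automatic and trigger-driven, while scalability is a planned capacity change you make ahead of projected demand.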
kennyig
1,890,601
Newsletter "What's up devs ?" is born !
Hi devs, You may know that I publish a weekly French dev newsletter called "Quoi de neuf les devs...
0
2024-06-16T21:35:38
https://dev.to/happytodev/newsletter-whats-up-devs-is-born--pl9
webdev, newsletter, developers
Hi devs, You may know that I publish a weekly French dev newsletter called "[Quoi de neuf les devs ?](https://happytodev.substack.com)". **But today, I'm delighted to announce that its English version is born.** It's called **"[What's up devs?](https://go.itanea.fr/wud1)"** and it's in the same spirit as its French sister: to give you the latest development news every week. I'll try to keep the same rhythm as the French version, so I'll publish it every Friday. Occasionally, the newsletter may be published on Saturday, depending on how tired I am ;-) So I leave you with this newsletter, which I'm very proud to present to you. I hope you enjoy reading it as much as I enjoyed writing it. Don't hesitate to give your opinion on the topics and content. I'll be reading your comments very carefully to improve this newsletter. In this first issue, we'll be talking about ✅ Go, ✅ PostgreSQL, ✅ CSS, ✅ HTTP/3, ✅ Laravel, ✅ Python, ✅ SQLite and much more... Take care and see you next Friday!!
happytodev
1,890,600
Grub-customizer:Install to MBR - NVME Disk
I have multiple OS installed. Each has its own bootloader/GRUB. I had done customization in...
0
2024-06-16T21:34:07
https://dev.to/ajeebkp23/grub-customizerinstall-to-mbr-nvme-disk-21hg
I have multiple OSes installed, each with its own bootloader/GRUB. I had done customization in Grub Customizer, and I want to make my current OS's bootloader the default. Find the NVMe disk name with `df -h` I see a few entries like `/dev/nvme0n1p1`. The trailing `p1` stands for partition 1. Removing it gives `/dev/nvme0n1`. This value, `/dev/nvme0n1`, was used successfully with `File -> Install to MBR ...`.
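The step of dropping the partition suffix can also be done with shell parameter expansion; a small sketch using the example device name from above:

```shell
PART="/dev/nvme0n1p1"   # a partition as reported by df -h
DISK="${PART%p*}"       # strip the shortest trailing "p<N>" partition suffix
echo "$DISK"            # prints /dev/nvme0n1
```

Note this pattern is specific to NVMe naming; SATA disks have no `p` before the partition number (e.g. `/dev/sda1` → `/dev/sda`), so the expansion would differ there.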
ajeebkp23
1,890,581
Squares+Circles background in HTML/CSS/JS
Check out this Pen I made!
0
2024-06-16T21:12:30
https://dev.to/tidycoder/squarescircles-background-in-htmlcssjs-4g98
codepen
Check out this Pen I made! {% codepen https://codepen.io/TidyCoder/pen/oNRpqVZ %}
tidycoder
1,890,580
Vision Week (test)
Vision Week: A Journey into the Virtual Zoo I'm thrilled to share my latest project with...
0
2024-06-16T21:09:09
https://dev.to/kvnbbg/vision-week-test-52e8
codepen, webdev, javascript, beginners
## Vision Week: A Journey into the Virtual Zoo I'm thrilled to share my latest project with you all – Vision Week, an immersive virtual zoo experience that combines interactive maps, VR tours, and a user-friendly interface to bring the wonders of wildlife directly to your screen. ### Project Overview Vision Week is designed to offer a seamless and engaging experience for users of all ages. The application features various sections, each meticulously crafted to ensure an intuitive and enjoyable journey through the virtual zoo. ### Key Features 1. **Home Screen** - Welcome message and introduction to Vision Week. - Latest posts and animal conservation statistics. 2. **Virtual Tour Screen** - VR viewer for an immersive experience of the Arctic exhibit. - Detailed information about the habitat and animals like polar bears, seals, and more. 3. **Interactive Map Screen** - A 3D map to navigate through the zoo. - Locate different sections and learn about various exhibits. 4. **User Profile Screen** - Manage profile information and subscription details. - View personal stats and achievements. ### Interactive Elements To enhance user engagement, I’ve incorporated various interactive elements such as animations, pop-ups, and geometric shapes. The application is fully responsive, ensuring a smooth experience across all devices. ### Check it Out! I'm excited to invite you to explore Vision Week on CodePen. Here’s the link to the project: [Check out this Pen I made!](https://codepen.io/Kvnbbg-the-animator/pen/VwOypXB) Your feedback and suggestions are highly valued, so please feel free to share your thoughts. Together, we can make Vision Week an even better experience! Happy exploring! 🌍🦊
kvnbbg
1,890,579
Hover the hello word
Let's hover the "hello" word!
0
2024-06-16T21:08:47
https://dev.to/tidycoder/hover-the-hello-word-54a1
codepen
Let's hover the "hello" word! {% codepen https://codepen.io/TidyCoder/pen/gOJoedv %}
tidycoder
1,890,578
Let's dance now!
Let's dance now!
0
2024-06-16T21:04:49
https://dev.to/tidycoder/lets-dance-now-3g9o
codepen
Let's dance now! {% codepen https://codepen.io/TidyCoder/pen/eYayMbY %}
tidycoder
1,890,575
Short notes on the following Concepts
What is Virtualization? Scalability Agility High Availability Fault Tolerant Global Reach What is...
0
2024-06-16T21:00:02
https://dev.to/kennyig/short-notes-on-the-following-concepts-4b7e
cloud
1. What is Virtualization? 2. Scalability 3. Agility 4. High Availability 5. Fault Tolerance 6. Global Reach 7. What is the difference between Elasticity and Scalability ## 1. **What is virtualization?** Virtualization is the practice of creating or spinning up virtualized systems (operating systems, servers, storage devices, or networks) rather than tying these digital resources to specific physical hardware. Virtualization technology enables multiple operating systems and applications to run simultaneously and independently on a single physical machine. This allows multiple virtual machines (VMs) to share the same CPU, memory, storage, and other resources while avoiding resource conflicts. Virtualization allows the creation of virtual environments that pool resources and reduce IT infrastructure costs while increasing the accessibility, manageability, and efficiency of system resources, optimizing service delivery and enhancing performance. Hence, virtualization allows organizations to deploy and use multiple applications and system snapshots, and to optimize resource utilization, data storage, and accessibility with less complexity, altogether improving scalability and making data processing faster and more efficient. Some of the potential benefits of virtualization include better resource management, cost savings on hardware, increased flexibility, virtual machine communication and isolation, easy migration of apps, load balancing, and less downtime thanks to saved server instances. ## 2. What is scalability in cloud computing? In cloud computing, scalability is the capacity of a cloud infrastructure to modify its resources in response to fluctuating demands. This implies that you will not need to wait for complicated hardware upgrades or lengthy processes to expand or decrease resources like processing power, storage, or memory on the fly. 
Below is a summary of the main elements of cloud scalability: Dynamic Adjustment: You can scale your computing power to match your workload by adjusting cloud resources up or down as needed. Cost-effectiveness: You do not have to spend money on extra hardware that sits around when demand is low because you only pay for the resources you utilize. Flexibility: Cloud scalability makes it simple to adapt to unplanned growth, seasonal demands, or traffic spikes, ensuring that your applications can manage the strain. Cloud scaling primarily comes in two flavors: Increasing or decreasing a single server instance's resources, like RAM or CPU, is known as vertical scaling (scaling up/down). Horizontal scaling, often known as scaling out, is the process of dividing the workload over several computers by adding or deleting complete server instances. ## 3. What is Agility in cloud computing? In cloud computing, agility is the capacity of an enterprise to create, test, and implement applications in a cloud environment rapidly. It is basically the flexibility and nimbleness that come from using cloud resources. Consider this: installing and managing IT infrastructure can be a laborious and slow procedure in a traditional on-premises environment. These obstacles are eliminated by cloud computing by providing: On-Demand Resources: Cloud service providers provide an enormous pool of instantaneously provisionable and de-provisionable computer resources, such as memory, processing power, and storage. This removes the requirement for protracted cycles in hardware acquisition. Automated Processes: A lot of cloud platforms come with capabilities for scaling, configuration management, and deployment automation. This allows IT personnel to concentrate on more advanced duties. Elasticity: The ability to quickly scale up or down cloud resources in response to changing demand. 
This frees companies from worrying about infrastructure constraints so they may test new apps or handle spikes in demand. The following are some advantages of cloud agility: Faster Time to Market: Companies get a competitive edge by being able to provide new concepts and services to consumers more quickly. Enhanced Innovation: A culture of experimentation and creativity is fostered by the cloud's ease of development and testing. Lower Costs: Companies do not have to pay for hardware that is not being used because they only pay for the resources they use. ## 4. Explain High Availability In cloud computing, high availability (HA) describes a system's capacity to continue functioning and being available for prolonged periods of time, even in the event of hardware or software problems. It is essential for applications that are mission-critical and cannot withstand downtime, since any disruption might result in major financial loss or disruptions to services. An examination of high availability's operation is provided below: Redundancy: Multiple servers, storage devices, and network connections are examples of redundant resources that are used by HA systems. There is less downtime when a component fails since another one takes over smoothly. Automated Failover: HA designs are built with automated failover in mind. This implies that, in the case of a failure, the system will identify the problem and, with the least amount of disturbance to continuing activities, will immediately transition to a backup resource. Minimal Downtime: Achieving nearly constant uptime, sometimes expressed in "nines," is the aim of HA. For example, a system with four nines, or 99.99% uptime, would only have about 53 minutes of downtime a year. To ensure high availability, cloud providers provide a range of tools and services, such as: Virtual machines (VMs): You can set up and launch VMs on several hosts using cloud platforms. The program can be swiftly moved to another VM in case one fails. 
- **Load Balancing:** By distributing incoming traffic among several servers, load balancing keeps any one server from getting overworked and possibly failing.
- **Disaster Recovery (DR):** This extends beyond hardware malfunctions and entails replicating data and applications between geographically dispersed sites. It guarantees that your applications can be promptly restored in the event of a widespread outage or natural disaster.

High availability is a critical consideration for businesses that rely on continuous operation and minimal downtime. By implementing HA strategies in the cloud, you can ensure your applications are always up and running, fostering a better customer experience and business continuity.

## 5. What is Fault Tolerance?

In cloud computing, fault tolerance is all about designing a system that can continue to function normally even when certain components break or malfunction. It is like having a backup plan for your infrastructure: it guarantees that your apps keep working even in the event of technical difficulties.

Here is how fault tolerance functions in a cloud environment:

- **Continuous Operation:** The fundamental idea is to keep services running even in the event of software or hardware malfunctions. This is particularly crucial for vital applications, since downtime can be expensive.
- **Redundancy is Crucial:** Redundancy in cloud fault tolerance refers to the presence of backups for critical components. This can entail redundant network connections, additional servers with mirrored data, or extra storage devices.
- **Automatic Failover:** Ideally, the system should automatically transition to a backup resource in the event of a fault. This reduces downtime and guarantees a seamless experience for users.

Various methods are employed to accomplish fault tolerance in the cloud:

- **Replication:** Replication is the process of making duplicates of your apps and data on different servers. Everything can continue to function even if one server dies, because another can take over.
- **Load Balancing:** By distributing workloads among several servers, load balancing keeps any one server from getting overworked and possibly failing.
- **Self-Healing Mechanisms:** Some cloud platforms have automated mechanisms that can identify and resolve issues without human assistance. This strengthens the robustness of the system even further.

Fault tolerance has many advantages in cloud computing:

- **Reduced Downtime:** Guards against service interruptions and failures, guaranteeing that users can always access your apps.
- **Increased Reliability:** Enhances the overall dependability of your cloud infrastructure by reducing the likelihood of failures.
- **Improved Business Continuity:** Reduces possible losses by helping companies continue to operate even in the face of unanticipated events.

By building fault tolerance techniques into your cloud environment, you can create a more durable and resilient system and guarantee that your users can access and rely on your applications.

## 6. Global Reach

"Global reach" in cloud computing refers to the ability of cloud providers to make their infrastructure and services available over a wide geographic area. Businesses can benefit from this in a number of ways:

- **Market Expansion:** Cloud computing enables companies to effortlessly enter new international markets. Thanks to data centers positioned strategically across multiple locations, users can access data and apps with minimal latency, no matter where they are physically situated.
- **Global Scalability:** Cloud resources are easily scalable, allowing them to adjust to changing demands in many locations. This lets companies respond to spikes in user base or traffic in particular areas without developing and maintaining physical infrastructure in each one.
- **Lower Costs:** Companies save money by not having to build and maintain many data centers, which would require a large upfront investment and continuous upkeep.
The cloud providers handle the infrastructure and offer a pay-as-you-go model that can be scaled to meet your demands.

- **Enhanced Compliance:** Cloud providers offer data centers that meet various local data-privacy laws. This makes it easier for companies with worldwide operations to comply with local rules when storing and processing data.

Major cloud providers deliver their worldwide reach in the following ways:

- **Distributed Data Centers:** Cloud giants such as Google Cloud Platform (GCP), Microsoft Azure, and Amazon Web Services (AWS) have built global networks of data centers. This guarantees that users, wherever they may be, can access resources with minimal latency.
- **Content Delivery Networks (CDNs):** By distributing content among geographically separated servers, these networks let users worldwide load pages more quickly.
- **Regional Availability Zones:** Within each region, cloud providers frequently offer multiple availability zones, i.e., geographically separated data centers. This offers redundancy and fault tolerance in the event of an outage.

For companies of all sizes, having a global presence is revolutionary. It gives them the ability to take on global competition, attract new clients, and maximize the performance and cost-efficiency of their cloud deployments.

## 7. What is the difference between Elasticity and Scalability?

Cloud computing elasticity and scalability both cope with demand fluctuations, but they react differently and suit different conditions. Below is a summary of the main distinctions.

## Scalability

- **Focus:** Scheduled resource modifications in response to projected increases or decreases in workload.
- **Action:** Usually manual or based on pre-established limits. You can ask the cloud provider to release unused resources (scaling down) or request more resources (scaling up).
- **Timeline:** Depending on the complexity and the procedure used by the cloud provider, scaling may take some time to implement.
- **Perfect for:** Meeting demands for processing power, data storage, or a steady increase in the user base.
- **Example:** A business expects the holidays to bring a spike in online sales. To manage the extra traffic, it might proactively scale up its server capacity in advance.

## Elasticity

- **Focus:** Resources are adapted automatically based on real-time variations in demand.
- **Action:** Scaling happens automatically according to preset criteria and triggers. The cloud system itself scales resources up or down as the workload changes.
- **Timeframe:** Elasticity occurs in minutes or even seconds, far faster than deliberate scaling.
- **Perfect for:** Managing sporadic variations in resource needs, workload surges, and erratic traffic spikes.
- **Example:** A trending topic causes a rapid spike in visitors to a social networking app. Elasticity automatically ramps up resources to maintain performance, without human intervention.

Here's a comparison: think of scalability like building with Lego. You have the freedom to design your structure and add or swap bricks (resources) as needed. Elasticity is like a stretch band: without being physically altered, it automatically adapts to varying levels of tension (demand).

To sum up, cloud computing offers both scalability and elasticity as benefits. Elasticity makes sure you can deal with unforeseen spikes in demand, while scalability helps you prepare for future expansion. To maximize both performance and cost-efficiency in your cloud environment, you should ideally combine the two.
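The trigger-based elasticity described above can be sketched in a few lines of Python. This is a toy illustration, not any provider's real API: it derives a replica count from a utilization target, which is conceptually how an autoscaler such as a Kubernetes HPA decides to scale out or in.

```python
import math

def desired_replicas(current_replicas: int, current_util: float, target_util: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Toy elasticity rule: scale replicas so average utilization approaches the target."""
    if current_util <= 0:
        return min_replicas
    raw = current_replicas * (current_util / target_util)
    return max(min_replicas, min(max_replicas, math.ceil(raw)))

# Traffic spike: 3 replicas running at 90% utilization, targeting 50% -> scale out.
print(desired_replicas(3, 0.90, 0.50))  # 6
# Quiet period: 6 replicas at 10% utilization -> scale back in.
print(desired_replicas(6, 0.10, 0.50))  # 2
```

A real elastic system runs this kind of rule continuously against live metrics, which is exactly the "minutes or even seconds" timeframe described above.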
kennyig
1,890,574
AssaultCube | Writing your first external cheat in RustLang
Writing a-salt-cube Prerequisites A decent knowledge of RustLang ...
0
2024-06-16T20:52:15
https://dev.to/zofia/assaultcube-writing-your-first-external-cheat-in-rustlang-3m7
hacking, gamedev, reversing
# Writing a-salt-cube

## Prerequisites

- A decent knowledge of RustLang

## Outline

### Writing cheat requirements

1. Finding game modules
2. Finding pointers and offsets (we'll use Cheat Engine)

### Coding

1. Creating a command line interface to modify cheat properties during execution
2. Opening handles to the target process and module
3. Creating threads to handle cheat logic in parallel
4. Dereferencing pointer chains and writing cheat values

## Finding our pointers

Open AssaultCube and enter singleplayer mode. First, I'm going to open up Cheat Engine and attach it to the AssaultCube process.

From here, we can enter our initial health value (100) into the **value** field and start our first scan. From my first scan, I got 996 results. This is not ideal, as we need to find a small, consistent pool of addresses that hold our health value. So, let's run another scan after taking some damage from a bot friend.

One result! Wow, that was quick. For larger games, you will encounter more addresses when finding pointers this way, and it may be useful then to look into tools designed for reversing game data. Let's click on our address and head down to the red arrow to save it in our address list.

After we have found our address(es), we need to run a pointer scan to see what points to our health address. Right-click on the address in the address table, press `Pointer scan for this address`, then simply press OK in the scan options prompt.

After our scan has completed, we should get a fairly large number of entries. Not to worry, though; this doesn't really matter, as we're going to sort the offsets. Click on each offset category to sort toward the pointers with the fewest and smallest offsets. We want to look for a module we can read from, followed by a small series of offsets.

For AssaultCube, I found the health pointer to be at `"ac_client.exe"+0017E0A8 [0xEC]`.

I guess I never explained what a pointer, an address, or an offset really is.
A pointer is made up of a base address and offsets. The base address stores the address of a location in memory, and the offsets are a series of further pointers, relative to our base pointer.

For example, if I have pointer `"game.exe"+0x100 [0x0]`, we would first read from the module "game.exe" to find its address. Then, we read at address `0x100` to get a pointer to, let's say, our local player. We read the pointer `0x0` (0) bytes away to get our final address, pointing to, I don't know, the player id, which we can then use to modify the value of the player id.

Addresses are our way of accessing values held in memory. When we invoke our system API to read or write process memory, we need to give it an address: the location where the data is stored.

```
# 0 - 60
0x0..0x40 => Example Executable Info,
# 100
0x64 => Player
# 100
| 0x64 => Player Id
# 104 - 114
| 0x68..0x72 => Player Name
```

Offsets are just the distance from one address to another. Take the following example:

```rust
// "game.exe" + 0x10
#[repr(C)]
struct Player {
    id: u32,        // offset: 0x0
    name: [u8; 10], // offset: 0x4 (a u32 is four bytes in size, so our next field is four bytes away)
}
```

## Creating our project

Run `cargo init <NAME>` to create your Cargo project. We will be using one of my Windows-only libraries to assist us in developing the cheat. I know, right? If you're like me, I hate when tutorials do that too. Feel free to look at the library's source to see how we interact with processes and memory, and write your own instead. It's a fun task, really, so don't get discouraged.
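To make the pointer-chain idea concrete before touching a real process, here is a small, self-contained Rust sketch (no memwar, no process APIs) that walks a chain inside a plain byte buffer, the same way we will later walk `"ac_client.exe"+0017E0A8 [0xEC]` inside the game's memory. The buffer layout is made up purely for illustration.

```rust
// Toy "process memory": a flat byte buffer we index with fake addresses.
fn read_u32(mem: &[u8], addr: usize) -> u32 {
    u32::from_le_bytes(mem[addr..addr + 4].try_into().unwrap())
}

/// Walk base -> deref -> +offset... like `"module"+BASE [off]`.
/// The last offset is added without a final dereference, yielding the value's address.
fn deref_chain(mem: &[u8], base: usize, offsets: &[usize]) -> usize {
    let mut addr = read_u32(mem, base) as usize;
    for &off in &offsets[..offsets.len() - 1] {
        addr = read_u32(mem, addr + off) as usize;
    }
    addr + offsets[offsets.len() - 1]
}

fn main() {
    let mut mem = vec![0u8; 0x200];
    // Fake layout: base pointer at 0x10 points to a "player" struct at 0x100.
    mem[0x10..0x14].copy_from_slice(&0x100u32.to_le_bytes());
    // Health (stored as a 4-byte integer) lives at player + 0xEC.
    mem[0x100 + 0xEC..0x100 + 0xF0].copy_from_slice(&100u32.to_le_bytes());

    let p_health = deref_chain(&mem, 0x10, &[0xEC]);
    assert_eq!(p_health, 0x100 + 0xEC);
    println!("health = {}", read_u32(&mem, p_health)); // health = 100
}
```

The real cheat does the same walk, except each `read_u32` becomes a `ReadProcessMemory`-style call into the target process.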
Let's add the `memwar` library to the Cargo.toml, along with [anyhow](https://crates.io/crates/anyhow) to help with error handling, and my crate `cnsl` for reading user prompts from our command line:

```toml
[dependencies]
memwar = { git = "https://github.com/imajindevon/memwar" }
anyhow = "1.0.86"
cnsl = "0.1.3"
```

## Writing the main function

We outlined our program logic previously, but now we need to outline the processes:

```text
main
cli thread - accepts user input to change values
health cheat thread - modifies ingame health value
```

In our main function, I'm going to write some code to find our process and open a handle to it, returning an error if it fails. However, we need to account for the fact that most of memwar's functions are unsafe, so let's run all of our code in a separate, unsafe function.

```rust
// main.rs
use memwar::process;

use anyhow::{anyhow, Result};

unsafe fn run() -> Result<()> {
    let pid = process::get_process_by_name("ac_client.exe")
        .map_err(|e| anyhow!("Failed to get window! OS error: {e}"))?
        .ok_or_else(|| anyhow!("Failed to find ac_client.exe!"))?;

    let h_process = process::open_process_handle(pid)
        .map_err(|e| anyhow!("Failed to open a handle to AssaultCube.exe! OS error: {e}"))?;

    Ok(())
}

fn main() -> Result<()> {
    unsafe { run() }
}
```

Now run your program with `cargo run`, and see if you can catch an error. If so, don't worry. Just remember Google is your best friend as a developer, so make sure to use it to your full advantage. We all make simple mistakes!

I got the following error when I first ran the program:

```
Error: Failed to find AssaultCube.exe!
```

After looking through the task manager, I was able to see the problem: the AssaultCube process is named `ac_client.exe`. After fixing our code, we can see the program returns without an error.

## Writing our `tasks.rs` module

NOTE: As of memwar v0.1.1, a Task structure is available for you to develop your cheat thread around.

Create a module named `tasks.rs`.
Our tasks module will contain a `Tasks` struct that will hold all of our tasks.

NOTE: AssaultCube is a 32-bit process. The following logic contains code that will NOT work if you build for a 64-bit target. To fix this, append this option to your Cargo build command to compile for a 32-bit target: `--target i686-pc-windows-msvc`

Instead of writing the health value once, we will continuously update the value with our modified value in a loop. This is also a decent exercise to get you familiar with threads.

```rust
// tasks.rs
unsafe fn new_health_task(alloc: SendAlloc) -> Task<i32, u32> {
    todo!()
}

pub struct Tasks {
    health_task: Task<i32, u32>,
}

impl Tasks {
    pub fn health_task(&self) -> &Task<i32, u32> {
        &self.health_task
    }

    pub unsafe fn from_alloc(alloc: SendAlloc) -> Self {
        Self {
            health_task: new_health_task(alloc),
        }
    }
}
```

## Defining constants in the `pointers.rs` module

Now that we have a half-finished implementation of our tasks module, we need to store a few constants. These constants will store the game module's address of our base pointer, and the offsets of the chain of later pointers.

Remember this? `"ac_client.exe"+0017E0A8 [0xEC]`

In this format, `0017E0A8` is the address of our base pointer (relative to ac_client.exe), and after adding it to the base address of the module "ac_client.exe", we need to dereference one more pointer: we add offset `0xEC` to get the address of our final health pointer. It is safe to assume that address `0017E0A8` gave us our local player address, which we can take advantage of later on.

```rust
// pointers.rs
pub const LOCAL_PLAYER: usize = 0x0017E0A8;

/// Value type: i32
pub const OFFS_LOCAL_PLAYER_HEALTH: [usize; 1] = [0xEC];
```

## Finishing `new_health_task`

For this approach, I used a `Sender<u32>` and `Receiver<u32>` to send and receive the health value across threads. I also used a sender and receiver pair to send and receive errors that occurred in the thread.
This will be useful for debugging broken pointers and operations.

```rust
unsafe fn new_health_task(alloc: SendAlloc) -> Task<i32, u32> {
    let (health_sender, health_receiver) = mpsc::channel();
    let (error_sender, error_receiver) = mpsc::channel();
    todo!()
}
```

We use an `AtomicBool` to modify and read whether the cheat is enabled.

```rust
unsafe fn new_health_task(alloc: SendAlloc) -> Task<i32, u32> {
    // ...
    let is_enabled = Arc::<AtomicBool>::default();
    let is_enabled_sent = is_enabled.clone();
    todo!()
}
```

We need to check if our cheat is enabled, or if a new modified health value has been sent from the CLI.

```rust
unsafe fn new_health_task(alloc: SendAlloc) -> Task<i32, u32> {
    // ...
    thread::spawn(move || {
        let mut health = None;

        loop {
            // Try and read an updated modified health value from the CLI thread.
            match health_receiver.recv_timeout(Duration::from_millis(100)) {
                // The CLI thread has sent an updated health value.
                Ok(v) => health = Some(v),

                // The CLI thread has not sent an updated health value, so we will continue to
                // use the value previously stored.
                Err(RecvTimeoutError::Timeout) => (),

                // The CLI thread has disconnected, so we exit the thread.
                Err(RecvTimeoutError::Disconnected) => return,
            };

            // If the cheat is not enabled, continue the loop.
            if !is_enabled_sent.load(Ordering::Relaxed) {
                continue;
            }

            // TODO
        }
    });
    todo!()
}
```

Finally, if the cheat is enabled, we can write our modified health value.

```rust
unsafe fn new_health_task(alloc: SendAlloc) -> Task<i32, u32> {
    // ...
    thread::spawn(move || {
        let mut health = None;

        loop {
            // ...
            if let Some(health) = health {
                let alloc = Allocation::from(alloc);

                let p_health = match alloc.deref_chain_with_base(
                    pointers::LOCAL_PLAYER as _,
                    pointers::OFFS_LOCAL_PLAYER_HEALTH,
                ) {
                    Ok(v) => v,
                    Err(e) => {
                        let _ = error_sender.send(e);
                        continue;
                    }
                };

                if let Err(err) = alloc.write_i32(p_health, health) {
                    let _ = error_sender.send(err);
                }
            }
        }
    });

    Task::new(health_sender, is_enabled, error_receiver)
}
```

## Finishing our main function

Now that we have the main cheat logic out of the way, we will revisit our main function and set up a command line interface for our cheat user, along with getting the base address of `ac_client.exe`.

```rust
// main.rs
use anyhow::bail;

unsafe fn run() -> Result<()> {
    // ...
    let p_base = module::get_mod_base(pid, "ac_client.exe")
        .map_err(|e| anyhow!("Failed to create snapshot of process! OS error: {e}"))?;

    if p_base.is_null() {
        bail!("Failed to find ac_client.exe module!")
    }

    let alloc = SendAlloc::new(CVoidPtr(h_process), CVoidPtr(p_base));
    let tasks = Tasks::from_alloc(alloc);

    cli(tasks) // We will write this function next
}
```

Our CLI function should look something like this, but adjust it to your taste:

```rust
// main.rs
unsafe fn cli(tasks: Tasks) -> Result<()> {
    println!("Type help to get a list of commands");

    loop {
        let input = readln!("$ ");
        let trim = input.trim();

        if trim == "help" {
            println!("help\ntoggle_health");
            println!("health");
            println!("  \\ value: i32")
        }

        if trim == "toggle_health" {
            tasks.health_task().toggle_enabled();
        }

        if trim == "health" {
            println!("New health value:");

            let health = loop {
                let health_value = readln!("$ ");

                match health_value.parse() {
                    Ok(v) => break v,
                    Err(e) => eprintln!("{e}"),
                }
            };

            tasks.health_task().send_data(health)?;

            match tasks.health_task().read_error() {
                Ok(err) => {
                    eprintln!("Thread raised error {err}")
                }
                Err(TryRecvError::Empty) => (),
                Err(TryRecvError::Disconnected) => bail!("Thread disconnected! Aborting."),
            }
        }
    }
}
```

Wow. You did it! You wrote your first external cheat.
Or, we hope. Let's test it and see if we get any errors or unexpected results.

Loading up our program gives us no errors:

```
Type help to get a list of commands
$
```

If we enter `health` and our desired health value:

```
$ health
New health value:
$ 1000
```

And let us not forget to toggle our cheat (🤦🏿‍♂️):

```
$ toggle_health
```

We can see our updated health value ingame!
zofia
1,890,573
Mi enana bella
A post by fabricio Anzora
0
2024-06-16T20:50:23
https://dev.to/fabricio_anzora_4ca13eecd/mi-enana-bella-49h7
codepen
{% codepen https://codepen.io/fabricio-Anzora/pen/qBGpoyN %}
fabricio_anzora_4ca13eecd
1,890,572
Reading Transaction Events from a Smart Contract Using ethers.js
In the burgeoning field of blockchain and decentralized finance (DeFi), reading transaction events...
0
2024-06-16T20:44:49
https://dev.to/spiritmoney/reading-transaction-events-from-a-smart-contract-using-ethersjs-4goo
programming, beginners, tutorial, blockchain
In the burgeoning field of blockchain and decentralized finance (DeFi), reading transaction events from smart contracts is a fundamental skill for developers. Events provide critical insights into contract interactions, such as token transfers, trades, and liquidity additions. This article will guide you through the process of reading these events using ethers.js, a popular JavaScript library for interacting with the Ethereum blockchain.

### Prerequisites

Before you start, ensure you have:

- Node.js and npm installed on your machine.
- Basic understanding of JavaScript.
- Access to an Ethereum node provider like Infura or Alchemy.

### Step 1: Install ethers.js

First, install ethers.js in your project using npm:

```bash
npm install ethers
```

### Step 2: Connect to an Ethereum Provider

Initialize ethers.js and connect to an Ethereum provider. Infura is used in this example, but you can use any Ethereum node provider.

```jsx
const { ethers } = require("ethers");

// Connect to the Ethereum network
const provider = new ethers.providers.JsonRpcProvider('url');
```

Replace `'url'` with your actual RPC URL.

### Step 3: Define the Contract ABI and Address

To interact with a smart contract, you need its ABI (Application Binary Interface) and address. The ABI is a JSON array that describes the contract's functions and events.

```jsx
const contractAddress = "0xYourContractAddress";
const contractABI = [
  // The ABI of the contract
];
```

Replace `"0xYourContractAddress"` with the address of the contract you want to interact with, and ensure the ABI corresponds to that contract.

### Step 4: Create a Contract Instance

Using ethers.js, create an instance of the contract with the ABI and address.

```jsx
const contract = new ethers.Contract(contractAddress, contractABI, provider);
```

### Step 5: Query Past Events

To read past events, use the `queryFilter` method. This method allows you to filter and retrieve specific events within a specified block range.
```jsx
async function getPastEvents() {
  // Define the event filter (e.g., for a specific event)
  const eventFilter = contract.filters.YourEventName();

  // Define the block range
  const fromBlock = 0;      // Start block number
  const toBlock = 'latest'; // End block number

  // Query past events
  const events = await contract.queryFilter(eventFilter, fromBlock, toBlock);

  // Process the events
  events.forEach(event => {
    console.log(event.args); // Access event arguments
  });
}

// Call the function to get past events
getPastEvents();
```

In this example, replace `YourEventName` with the actual event name you want to filter.

### Example: Tracking Liquidity Additions

Let's consider a specific use case: tracking liquidity additions to a Uniswap-like pool. In Uniswap V2, the `Mint` event is emitted when liquidity is added. Here's how to set it up:

1. **Define the ABI for the `Mint` Event**:

```jsx
const contractABI = [
  "event Mint(address indexed sender, uint256 amount0, uint256 amount1)"
];
```

2. **Create the Event Filter and Query Events**:

```jsx
async function getLiquidityEvents() {
  const contract = new ethers.Contract(contractAddress, contractABI, provider);
  const eventFilter = contract.filters.Mint();

  const fromBlock = 0;
  const toBlock = 'latest';

  const events = await contract.queryFilter(eventFilter, fromBlock, toBlock);

  let totalAmount0 = ethers.BigNumber.from(0);
  let totalAmount1 = ethers.BigNumber.from(0);

  events.forEach(event => {
    totalAmount0 = totalAmount0.add(event.args.amount0);
    totalAmount1 = totalAmount1.add(event.args.amount1);
  });

  console.log(`Total token0 added as liquidity: ${ethers.utils.formatUnits(totalAmount0, 18)}`);
  console.log(`Total token1 added as liquidity: ${ethers.utils.formatUnits(totalAmount1, 18)}`);
}

getLiquidityEvents();
```

In this example, we are summing up `amount0` and `amount1` from all `Mint` events to get the total volume of tokens added as liquidity.
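A quick aside on the `formatUnits` call used above: it simply moves the decimal point of a raw on-chain integer amount. As a mental model (this is a simplified sketch, not ethers' actual implementation, which also handles negative values and rounding), it can be written with native `BigInt`:

```javascript
// Simplified stand-in for ethers.utils.formatUnits: raw integer -> decimal string.
function formatUnitsSketch(raw, decimals) {
  const base = 10n ** BigInt(decimals);
  const whole = raw / base;
  const frac = (raw % base).toString().padStart(decimals, "0").replace(/0+$/, "");
  return frac.length > 0 ? `${whole}.${frac}` : `${whole}.0`;
}

// 1.5 tokens with 18 decimals are stored on-chain as 1500000000000000000.
console.log(formatUnitsSketch(1500000000000000000n, 18)); // 1.5
console.log(formatUnitsSketch(1000000000000000000n, 18)); // 1.0
```

This is why the summed `amount0`/`amount1` values must stay as big integers until the very last display step: doing the arithmetic in floating point would lose precision on 18-decimal token amounts.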
### Conclusion

Reading transaction events from a smart contract using ethers.js is a powerful way to gain insights into blockchain activities. Whether you are tracking trades, liquidity additions, or token transfers, understanding how to query and process these events is essential for developing robust DeFi applications. By following the steps outlined in this article, you can efficiently interact with Ethereum smart contracts and harness the full potential of blockchain data.
spiritmoney
1,890,570
Understanding the Need for Apache Kafka
Welcome back everyone to another exciting blog post! Today, we are going to discuss why Apache Kafka...
0
2024-06-16T20:41:35
https://dev.to/aadarsh-nagrath/understanding-the-need-for-apache-kafka-i44
kafka, webdev, devops, backend
Welcome back everyone to another exciting blog post! Today, we are going to discuss why Apache Kafka is essential, why it is so fast, and whether we actually need Kafka or not.

Recently, I came across an interesting question: "**Isn't it possible for databases to improve their throughput themselves by upgrading their technology and performance, so there would be no need for Kafka or any other kind of middleware service?**"

This is a great question. Essentially, it suggests that we need Kafka because it has higher throughput, while databases have lower throughput. So, instead of using Kafka, why not just upgrade our databases to handle higher throughput?

To address this question, let's first understand the fundamental purpose of a database. If I ask you, "**What is a database?**", you might say, "**A database is used to store data.**" This is correct but incomplete. A database also provides mechanisms to read and query the data in various ways, such as by ID, indexing, aggregating, and applying multiple conditions.

## Why Do We Need Services Like Kafka?

![.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cluhp7aennzu3fm2937g.png)

Kafka is a distributed streaming platform that acts as an intermediary between data producers and consumers. It is designed to handle high throughput and provide low-latency access to data streams. The key feature that sets Kafka apart from traditional databases is its ability to process real-time data efficiently.

### Durability and Storage Mechanism

First, let's talk about the durability of databases. Databases store data on durable storage mediums like hard disks or SSDs. This ensures that even if the server goes down and restarts, the data remains intact. Databases are designed to be durable, ensuring that data is not lost even in the event of a failure.

Kafka, on the other hand, primarily stores data in memory (RAM), which is much faster but not durable in the same way as databases.
RAM is volatile, meaning that data stored in RAM is lost if the system is restarted. However, this volatility also makes RAM extremely fast.

### Data Processing and Querying

![.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6px03f50m2bptfurdbai.png)

Databases are not just about storing data; they also provide advanced mechanisms to read and process data efficiently. For example, you can set up primary keys, foreign keys, and indexes, and perform complex queries and aggregations on your data. Databases optimize the storage and retrieval of structured data, allowing for efficient querying and data manipulation.

Kafka, in contrast, is designed for high-throughput, real-time data ingestion. It allows applications to produce and consume data at high speeds, but it does not offer the advanced querying capabilities of a traditional database. Kafka is optimized for streaming large volumes of data quickly and reliably, which makes it ideal for use cases involving real-time data processing.

### Structured vs. Unstructured Data

Databases are excellent for storing structured data, where the schema is defined and data is organized in tables with rows and columns. This structured format allows for efficient querying and data manipulation.

Kafka excels at handling unstructured or semi-structured data, where the schema may not be predefined and the data can come in various formats. For example, a real-time application like a delivery tracking system generates data continuously in an unstructured format. Storing this data directly in a database would be inefficient and would quickly overwhelm the database.

### Real-Time Data Processing

![.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/is7gtwmqhmuwmymudmpy.png)

Kafka is designed to handle real-time data streams, allowing for the ingestion and processing of data as it is generated.
This is crucial for applications that need to process data in real time, such as monitoring systems, IoT applications, and real-time analytics.

In a real-time application, data is continuously generated and needs to be processed and stored quickly. Kafka acts as a buffer, ingesting the data quickly and allowing downstream consumers to process and store the data at their own pace. This decoupling of data ingestion and processing ensures that the system remains responsive and scalable.

### Why Kafka Is Fast

The speed of Kafka can be attributed to several factors:

- **In-Memory Storage:** Kafka stores data in memory (RAM) before writing it to disk. RAM is significantly faster than disk storage, allowing Kafka to process data at lightning speed.
- **Sequential Disk Writes:** When Kafka writes data to disk, it does so sequentially. This reduces the overhead of disk seek time, making the write process more efficient.
- **Batch Processing:** Kafka processes messages in batches, reducing the overhead of individual message processing. This improves throughput and reduces latency.

### Example Use Case: Delivery Tracking

Let's take the example of a delivery tracking system, where a delivery driver generates location data continuously. This data needs to be processed in real time to provide updates to users.

Kafka can ingest this continuous stream of location data and allow consumers to process and store it. A consumer application can aggregate the location data to compute the total distance traveled, the time taken for delivery, and other relevant metrics. This aggregated data can then be stored in a database for further analysis and reporting. Kafka ensures that the raw, unstructured data is ingested quickly, while the consumer application processes and stores the structured data in the database.

### Kafka's Role in a Modern Data Architecture

In a modern data architecture, Kafka serves as a high-throughput, real-time data pipeline that bridges the gap between data producers and consumers.
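The buffering and batch-consumption pattern at the heart of this can be illustrated with a toy in-process version using only Python's standard library. To be clear, this is a conceptual sketch of the pattern, not Kafka itself: the bounded queue plays the broker's role, absorbing a burst of delivery-tracking events from a fast producer while a consumer drains them in batches.

```python
import queue
import threading

broker = queue.Queue(maxsize=1000)  # stands in for a Kafka topic/partition
processed = []

def producer():
    # A burst of "location events" arrives faster than the consumer stores them.
    for i in range(100):
        broker.put({"driver_id": 7, "seq": i})
    broker.put(None)  # sentinel: no more events

def consumer():
    # Drain in batches, like a Kafka consumer polling a batch of records.
    done = False
    while not done:
        batch = []
        while len(batch) < 10:
            item = broker.get()
            if item is None:
                done = True
                break
            batch.append(item)
        processed.extend(batch)  # e.g., aggregate and write to a database here

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()

print(len(processed))  # 100
```

The producer never waits on the database; it only waits if the buffer itself is full. In Kafka, the broker plays this buffering role durably and at a vastly larger scale, across many producers and consumer groups.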
It allows for the decoupling of data ingestion and processing, ensuring that the system can scale and handle large volumes of data efficiently. By using Kafka, we can ingest unstructured data quickly and reliably, process it to derive meaningful insights, and store the structured data in a database for long-term storage and querying. This architecture ensures that we can handle real-time data streams effectively while leveraging the strengths of both Kafka and traditional databases.

I am sure you would also be interested to know: [How PayPal scaled Kafka to 1.3 trillion daily messages](https://blog.bytebytego.com/p/how-paypal-scaled-kafka-to-13-trillion?utm_source=post-email-title&publication_id=817132&post_id=145441664&utm_campaign=email-post-title&isFreemail=true&r=237fik&triedRedirect=true&utm_medium=email)

## Conclusion

[My Twitter](https://twitter.com/aadarsh_nagrath)

In summary, Kafka is essential for handling high-throughput, real-time data ingestion and processing. It complements traditional databases by providing a scalable and efficient mechanism for ingesting unstructured data and decoupling data ingestion from processing. Databases, on the other hand, are designed for durable storage and efficient querying of structured data. Together, Kafka and databases form a powerful combination that allows us to build scalable, real-time data architectures.

I hope this post has clarified why we need services like Kafka and how they fit into a modern data architecture. If you have any more questions or doubts, please feel free to ask in the comments section. Thank you for reading, and I'll see you in the next post!
aadarsh-nagrath
1,802,514
How to build a custom React table component with Typescript (Part 1)
Introduction The purpose of this article is to show you in a simple way, how to create a...
0
2024-06-16T20:29:44
https://dev.to/igbominadeveloper/how-to-build-a-custom-react-table-component-with-typescript-part-1-1ln
webdev, react, nextjs, javascript
## Introduction

The purpose of this article is to show you, in a simple way, how to create a fully-typed, reusable Table component. For this, we will be using the [Material UI (MUI) React Table component](https://mui.com/material-ui/react-table) to move fast without writing any styling, but the approach applies to any table component library.

A few things to help you understand how the MUI Table component works:

1. Table Head - renders a `<thead></thead>` that holds the headings for each column of the table
2. Table Body - renders a `<tbody></tbody>` that holds the rows that render the data for the table
3. Pagination (optional) - library-provided pagination component - we won't cover it because it's not our focus in this article

Let's look at the basic components that make up each part of a table, as mentioned above:

#### Table Head

This is a component that renders the `<th></th>` element for each column and wraps it with the right HTML tags to construct a table header. Each column of the table has a heading (`<th></th>`), so let's define the shape of each column so we can build it out:

```ts
interface Column {
  id: string;
  label: string;
}
```

Let's model the Column type and create the columns array that we will use to create the Table Header:

```ts
const columns: Column[] = [
  { id: "name", label: "Name" },
  { id: "email", label: "Email address" },
  { id: "phoneNumber", label: "Phone Number" },
  { id: "age", label: "Age" },
];
```

Now, let's create the table header:

```tsx
<TableHead>
  <TableRow>
    {columns.map((column) => (
      <TableCell key={column.id}>{column.label}</TableCell>
    ))}
  </TableRow>
</TableHead>
```

And this will take the labels of each column provided to render this:

```html
<thead>
  <tr>
    <th>Name</th>
    <th>Email address</th>
    <th>Phone Number</th>
    <th>Age</th>
  </tr>
</thead>
```

#### Table Body

The body of our table is really not different from the HTML table body.
We will import the `TableBody` component from MUI - this renders the `<tbody></tbody>` element. Then we just provide the rows of data it needs to construct the `<td></td>` cells for each row of the table.

To achieve this, we will have an interface that mirrors the structure of the data we want to render. In our case, we will model a table of `Person` data:

```ts
interface Person {
  name: string;
  email: string;
  age: number;
  phoneNumber: string;
}
```

And let's create some sample data to use with it:

```ts
const rows: Person[] = [
  {
    name: "John Doe",
    email: "john.doe@example.com",
    age: 25,
    phoneNumber: "+1234567890",
  },
  {
    name: "Jane Smith",
    email: "jane.smith@example.com",
    age: 30,
    phoneNumber: "+1987654321",
  },
  {
    name: "Alice Johnson",
    email: "alice.johnson@example.com",
    age: 35,
    phoneNumber: "+1122334455",
  },
];
```

Now, let's create the table body:

```tsx
<TableBody>
  {rows.map((row) => (
    <TableRow key={row.name}>
      {columns.map((column) => (
        <TableCell key={column.id}>{row[column.id]}</TableCell>
      ))}
    </TableRow>
  ))}
</TableBody>
```

And this will take each object in the array we provided to render this:

```html
<tbody>
  <tr>
    <td>John Doe</td>
    <td>john.doe@example.com</td>
    <td>25</td>
    <td>+1234567890</td>
  </tr>
  <!-- ...more rows, one per object in the array -->
</tbody>
```

And now, we can combine it all to render the whole table:

```tsx
<TableContainer>
  <Table>
    <TableHead>
      <TableRow>
        {columns.map((column) => (
          <TableCell key={column.id}>{column.label}</TableCell>
        ))}
      </TableRow>
    </TableHead>
    <TableBody>
      {rows.map((row) => (
        <TableRow key={row.name}>
          {columns.map((column) => (
            <TableCell key={column.id}>{row[column.id]}</TableCell>
          ))}
        </TableRow>
      ))}
    </TableBody>
  </Table>
</TableContainer>
```

You can play around with the code here

{% embed https://stackblitz.com/edit/react-hegdt2?embed=1&file=People.tsx %}

or click [here](https://react-hegdt2.stackblitz.io) to view how it looks in the browser.

P.S. _You can click through the links on the page to see all the tables.
If you check the contents of each component, you will notice similarities between them. In fact, the major difference is the structure of the data and the columns of the table. This shows an opportunity to refactor our code to reduce duplication by creating a reusable table component_

But before we proceed, if you look at this part of the `TableBody` inside `Person.tsx` [here](https://stackblitz.com/edit/react-hegdt2?embed=1&file=Person.tsx), you will notice there is a TypeScript warning.

![TS error - wrong array index](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fk9m8oerd2mc1m99veac.png)

This is because, to access an object using index notation in TypeScript, you have to use an index that matches an existing property of the object. In our case, we defined `column.id` as a string, which is why we get this error: `No index signature with a parameter of type 'string' was found on type 'Data'.` Since the `id` property of each column matches a property of the objects we provided in the array of rows, we can simply change the type of `Column.id` to match:

```ts
interface Column {
  id: keyof Person;
  label: string;
}
```

And now, with this simple change, for each column object (e.g. `{ id: 'name', label: 'Name' }`) we can use the `id` to access each `Person` object in the array of `rows`. Here is what our final code looks like:

{% embed https://stackblitz.com/edit/react-hegdt2-xnzjad?embed=1&file=People.tsx %}

And with the wave of our TS wand, we can see that the error has vanished into thin air 😀

![TS Error fixed - object index](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/swdweyn51e7dtmalkr0x.png)

And with that fix, we are ready to create a component we can reuse to render the `People` and `Countries` pages, sharing as much code as we can.
### Create a Table component

Before we discuss how I think we should approach this, I would like to point out the problem first, and then we can look at a quick solution:

If you take a look at our `Countries.tsx` and `People.tsx` files, you will notice that the visible difference is just the table header labels and the structure of the objects in the `rows` array. This means we can abstract away the repetitive lines of code and reuse them from inside a component.

As discussed earlier, since our table is made up of two parts (`Header` and `Body`), we will start by breaking this component into these two parts. To make our `CustomTable` component configurable from the outside, we have to make provision for this using `Props`. If `Props` are new to you, you can visit the [React documentation](https://react.dev/learn/passing-props-to-a-component) to read up on them and then come back to this article.

Since this is a TypeScript project, here is what I like to do first: define the interface of our `Props`.

The interface for a `Person` column is this:

```ts
interface Column {
  id: keyof Person;
  label: string;
}
```

And the one for a `Country` is this:

```ts
interface Column {
  id: keyof Country;
  label: string;
}
```

A quick look at the above code reveals that the only difference is the type of the `id` inside each column. We need a way to create a `Column` that is generic enough to handle any type, and that's where [TypeScript generics](https://www.typescriptlang.org/docs/handbook/2/generics.html) come in handy: they let us create dynamic types that are reusable without being tied to a particular type. Here is what I mean:

```ts
interface Column<T> {
  id: keyof T;
  label: string;
}
```

We just created a `Column` interface that expects its user to provide a type, which it then uses to add type safety wherever it is used. And for our rows, since it's just an array of objects, it's pretty straightforward too.
It would just be this:

```ts
type Row<T> = T[];
```

And now, we can combine the two to form the shape of the props to pass to the `CustomTable`:

```ts
interface Props<T> {
  columns: Column<T>[];
  rows: T[];
}
```

Now that we have that out of the way, let's use it to create a reusable `CustomTable` component:

- Let's start by creating a file: `table.tsx`. _P.S. You can give yours any name of your choice._
- Next, we will add the data types we declared above at the top of the file:

```ts
export interface Column<T> {
  id: keyof T;
  label: string;
}

interface Props<T> {
  columns: Column<T>[];
  rows: T[];
}
```

_NOTE: we added the `export` statement to the `Column` interface because we need to use it inside the `Countries.tsx` and `People.tsx` files to provide type inference for the rows and columns arrays._

_Also, there is no point creating a dedicated type for the `rows` property since it's just an array of the provided type, i.e. `T[]`, which is why I used it directly._

- Next, we want to create the function that will be exported as the component. We will also use the props provided to the component to access the rows and columns:

```tsx
export default function CustomTable<T>(props: Props<T>) {
  const { columns, rows } = props;

  return (
    <Paper>
      <TableContainer>
        <Table>
          <TableHead>
            <TableRow>
              {columns.map((column, index) => (
                <TableCell key={index}>{column.label}</TableCell>
              ))}
            </TableRow>
          </TableHead>
          <TableBody>
            {rows.map((row, index) => (
              <TableRow key={index}>
                {columns.map((column, index) => (
                  <TableCell key={index}>{row[column.id]}</TableCell>
                ))}
              </TableRow>
            ))}
          </TableBody>
        </Table>
      </TableContainer>
    </Paper>
  );
}
```

_NOTE: I used the index as the key of each item I mapped over here because I am sure the items are not going to change between renders.
If you want to know why using array indexes as keys can be a bad idea, you can read about it [here](https://react.dev/learn/rendering-lists#keeping-list-items-in-order-with-key)_

Now, we can use the component we created to rebuild our two tables, `Countries.tsx` and `People.tsx`.

### People.tsx

```tsx
import CustomTable, { Column } from "./custom-table";

interface Person {
  name: string;
  email: string;
  age: number;
  phoneNumber: string;
}

const columns: Column<Person>[] = [
  { id: "name", label: "Name" },
  { id: "email", label: "Email address" },
  { id: "phoneNumber", label: "Phone Number" },
  { id: "age", label: "Age" },
];

const rows: Person[] = [
  {
    name: "John Doe",
    email: "john.doe@example.com",
    age: 25,
    phoneNumber: "+1234567890",
  },
  {
    name: "Jane Smith",
    email: "jane.smith@example.com",
    age: 30,
    phoneNumber: "+1987654321",
  },
  {
    name: "Alice Johnson",
    email: "alice.johnson@example.com",
    age: 35,
    phoneNumber: "+1122334455",
  },
];

export default function People() {
  return <CustomTable columns={columns} rows={rows} />;
}
```

### Countries.tsx

```tsx
import CustomTable, { Column } from "./custom-table";

interface Country {
  name: string;
  code: string;
  population: number;
  size: number;
  density: number;
}

const columns: Column<Country>[] = [
  { id: "name", label: "Name" },
  { id: "code", label: "ISO\u00a0Code" },
  { id: "population", label: "Population" },
  { id: "size", label: "Size\u00a0(km\u00b2)" },
  { id: "density", label: "Density" },
];

const rows: Country[] = [
  {
    name: "India",
    code: "IN",
    population: 1324171354,
    size: 3287263,
    density: 1324171354 / 3287263, // ideally, should be computed dynamically by dividing population by size
  },
  {
    name: "China",
    code: "CN",
    population: 1403500365,
    size: 9596961,
    density: 1403500365 / 9596961,
  },
  {
    name: "Italy",
    code: "IT",
    population: 60483973,
    size: 301340,
    density: 60483973 / 301340,
  },
];

export default function Countries() {
  return <CustomTable columns={columns} rows={rows} />;
}
```

Here is the final code if you
want to check it out:

{% embed https://stackblitz.com/edit/react-hegdt2-c52far?embed=1&file=People.tsx %}

Now we can render a million different tables; all we have to do is provide the rows and the column configuration.

### Extending the table functionality

What we have created so far works well for almost any table whose job is simply to render some data, which is what most tables do. But imagine if you had to render your table rows in a fashion similar to this table:

![Table with a custom column](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w69txchdo6dnf4tlb7hf.png)

To support this behavior, we would have to extend the possibilities of our `CustomTable` component, but I won't cover that right now.

## Conclusion

In this article, we've taken a detailed look at how to create a fully-typed, reusable Table component, leveraging the styling the Material UI (MUI) library provides. We started by understanding the basic structure of a table, focusing on the Table Head and Table Body components. We explored how to define and use TypeScript interfaces to model our data, ensuring type safety and reducing errors.

We walked through the creation of a simple table, then moved on to build a more complex, reusable `CustomTable` component. This component uses TypeScript generics to handle different data types, making it flexible and versatile for various use cases. By creating a `CustomTable`, we streamlined our code, eliminating redundancy and making it easier to manage and extend.

We also discussed how to handle TypeScript warnings and errors, ensuring our code remains clean and maintainable. Finally, we demonstrated how to implement this reusable component in different contexts, such as the `People` and `Countries` tables. This approach not only saves time but also enhances code readability and maintainability.
The reusable component can be extended further to support more advanced features and customizations, which we will cover in part two of this series. I hope you found this guide helpful and that you learned something new about creating reusable components in React with TypeScript. Please feel free to leave your comments and questions below. Thanks for reading!
igbominadeveloper
1,890,542
Engenharia de Software: produto vs plataforma
Postado originalmente no Dev na Gringa Substack. Quer receber futuros artigos no seu e-mail? Assine...
0
2024-06-16T20:20:09
https://dev.to/lucasheriques/engenharia-de-software-produto-vs-plataforma-18p3
braziliandevs, softwareengineering, product, platform
Originally posted on the [Dev na Gringa Substack](https://devnagringa.substack.com/p/produto-vs-plataforma?utm_source=devto). Want to receive future articles by e-mail? [Subscribe for free here](https://devnagringa.substack.com/subscribe?utm_source=devto).

---

In this article, we'll discuss the differences between product engineers and platform engineers. Product engineers work on software used by the end consumer. Platform engineers, in turn, support product engineers in delivering software quickly and at scale. Not familiar with these terms? Stay with me until the end and let's learn together.

Software engineering is a young discipline. [Our history began about 64 years ago, in 1960](https://en.wikipedia.org/wiki/History_of_software_engineering). We have evolved a lot over the past decades, but we are still a young science. That's why schools and universities concentrate on how to create software: how to manage its lifecycle, which computing fundamentals are needed to build its abstractions, and how to deploy the resulting system. Scalability, maintainability, reliability. And that makes sense. The conception, planning, implementation, and release of software are not simple activities.

![The software development lifecycle](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vavg0haiochm5csbefdl.png)

However, **software is made for users**, and its ultimate value depends on whether it achieves the impact planned for them. This matters even more for companies whose profit center is software.

## Why I don't agree with the separation between backend and frontend

Consider the following situation. You are a frontend-focused software engineer. You noticed that one of the APIs you consume to build a UI has started responding more slowly: the average went from 100ms to 600ms. What should your first concern be?

You might think: I need to call someone from the backend to fix this. But that would only prolong the incident. Your main concern must be your customer. You should resolve the performance degradation as quickly as possible: every second it continues, you lose a little of your users' trust, and that is hard to win back.

So you need to try to solve it yourself, or ask for help from someone with more experience in that area. But don't stop there. Pair with that person. Learn what you didn't know. Don't let your comfort zone keep you away from it.

In this case, the fix might involve adding a new index to the database, something you may never have done before. It's important that you learn it, so that in a similar situation in the future you can act quickly.

The point is: you can't let unfamiliarity keep you away from the business's most important problems. And that's why I don't think the separation between backend and frontend makes sense. We live in an era with ever more resources to learn what we don't know: university, courses, books, and more recently, artificial intelligence. The cost of getting familiar with something unknown drops every day.

What matters most is working **on whatever will generate the greatest impact for the business**. Programming languages and stacks are just tools. **What matters is delivering value to your end user.**

## So does *fullstack* make sense?

Is a fullstack engineer someone who writes code that runs on the client and on servers? Or do they also maintain databases? Or perhaps write software development kits (SDKs) for libraries? Notice how hard it is to know exactly where the lines end. [**There is simply too much stack**](https://twitter.com/mipsytipsy/status/1659608859961593857). It's impossible for one person to manage it all.

[![There is too much stack for anyone to be truly fullstack](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xchd5e8d75m55rr53bed.png)](https://twitter.com/mipsytipsy/status/1659608859961593857)

But here's the truth: **you don't need to know everything.** Software engineering is a highly collaborative activity. You can count on other people on your team to help you as needed. It's also important, though, to have broad knowledge that isn't limited to one specific tool, so that you can find your bearings amid uncertainty. A *T-shaped* professional.

![A T-shaped professional: one specialization, with broad knowledge of adjacent areas.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fb8mi5rvi0a5zx7v22lw.png)

It's perfectly fine to specialize in a subject that interests you most, and that can even help your professional growth. But it's equally important to have a general understanding of the areas adjacent to yours.

## A more coherent separation

Instead of thinking in terms of *backend* vs. *frontend*, I want to introduce a concept I read about recently: *product engineers* vs. *platform engineers*. If you concentrate your efforts on the end user, you are closer to the **product** side. If you work on maintaining the infrastructure for your company's applications, that puts you on the **platform** side. Both areas are necessary to deliver high-quality software, but each one's customers are different, and so are their concerns and the qualities they optimize for.

![Platform teams support product engineers, who build applications for users.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/md4ferar6hgnqdf7cbjg.png)

## Product Engineer

A product engineer's main concern is delivering a great product. There is a stronger focus on the question: "why are we building this?". This means these engineers also work more closely with other departments in the company, with people who can help bridge code and product: product managers, designers, user support, sales, and marketing, to name a few. Product engineers also care about the full software development cycle, from initial conception all the way to user support.

![Scope of a product engineer vs. a traditional software engineer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/natbkxilioqxlt521v3v.png)

**The priority for product engineers is delivering value to users.** The focus stops being on shipping perfect, scalable, modularized code, because they understand that the code that turns into technical debt is the code that generated value. And maybe, someday, it will be refactored.

## Platform Engineer

Yes, good software is software that generates value and revenue. 💰 But we still need to make sure that software runs, and that it is secure, tested, and scalable. Several pillars have to be in place for that to happen.

Platform teams are made up of engineers focused on understanding how to run a company's entire infrastructure, and on providing a common interface for all of its product teams. In the end, they are product engineers too, but their end users are the company's own software engineers. This makes it possible to centralize concerns such as security, logging, and compliance at the platform level across the whole company.

It is an organization that combines Developer Operations (DevOps) and Site Reliability Engineering (SRE). The main scope here is providing a unified interface for the company's apps, with a focus on scalability, maintainability, and resilience. This organization also aims to reduce the cognitive load on product engineers, so that they can dedicate their attention to what brings financial return to the company: the applications its users rely on.

## Conclusion and recommendations

Software engineering, most of the time, is not the end product. It is the means by which we deliver value to the real world, through the impact our software has on people's daily lives.

Does that mean we should all center ourselves on the product? Not necessarily. The platform is also essential for giving our product developers a good experience. In large companies with millions of users, the platform becomes even more important, because system complexity grows stratospherically. In startups, product engineers are more essential, because the challenges lie mostly in the search for *product-market fit*: finding an idea that is scalable, can be solved with software, and, above all, generates revenue.

As a system grows, the challenges of distributed systems become harder. People who are passionate about those problems will do well on the platform side. By definition, there will be more product openings than platform ones, because there are more end users than development teams. But there is certainly a shortage of both: of engineers who don't fixate on technologies, but deliver impact and productivity to their end users, whether those are internal engineers at their company or external users.

So, if you like being closer to your users, look into *product engineering*. (Yes, in English, to avoid confusion with "Engenharia de Produto," which in Brazil is a different undergraduate degree.) If you prefer being closer to servers, infrastructure, and other application engineers, focus on platform. Learn the concepts needed to make an application available to the world.

Further reading:

1. [The Product-Minded Engineer](https://blog.pragmaticengineer.com/the-product-minded-engineer/)
2. [What is a product engineer?](https://posthog.com/blog/what-is-a-product-engineer)
3. [The Future of Ops is Platform Engineering](https://www.honeycomb.io/blog/future-ops-platform-engineering)
4. [YouTube: What Is a Platform Team and What Problems Do They Solve?](https://www.youtube.com/watch?v=j5M16qooAvo)
lucasheriques
1,890,540
LaPrompt DALL-E 3 Examples: A Guide for Web Design and Content Creation
In the evolving landscape of web design and content creation, AI technology has become a...
0
2024-06-16T20:15:33
https://dev.to/nikolasgx/laprompt-dall-e-3-examples-a-guide-for-web-design-and-content-creation-3b48
design, contentwriting, webdev
In the evolving landscape of web design and content creation, AI technology has become a game-changer, especially with the [LaPrompt Gallery](https://laprompt.com/gallery/text-to-image).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ucyfb4rhf1vzoz86yf2e.png)

In this wild, wacky world of web design and content creation, AI is the new sheriff in town, and [DALL-E 3](https://laprompt.com/gallery/text-to-image/dalle-3) is its trusty steed. This AI model is like the Picasso of the digital age, turning your words into jaw-dropping images faster than you can say "Wait, how'd it do that?"

Now, imagine you've got a gallery of [DALL-E 3's](https://laprompt.com/gallery/text-to-image/dalle-3) greatest hits, curated by none other than LaPrompt AI. It's like having a front-row seat to the coolest art show in town, where the artist is a robot with a sense of humor. And let me tell you, these images are so fresh, they make Van Gogh look like a stick figure.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44jeffqc43zd8lfhk3e2.png)

But it ain't just about the pretty pictures, my friends. For all you prompt engineers out there, seeing the text prompts and the resulting masterpieces side by side is like getting a backstage pass to the AI's creative process. You'll be like, "Oh snap, I see what you did there!" and then you'll use that newfound knowledge to craft prompts that are so on point, they make Shakespeare look like a toddler scribbling with crayons.

## How to Use LaPrompt’s DALL-E 3 Examples for Web Design

1. **Inspiration for Visuals:** Browse the gallery to find prompts that generate images aligning with your design vision. Use these images as inspiration or directly incorporate them into your web design.
2. **Custom Graphics:** Create unique graphics for your website by tweaking the prompts to generate customized images that fit your brand’s aesthetics.
3. **Backgrounds and Textures:** Use the generated images as backgrounds or textures to add depth and creativity to your web pages.
4. **Hero Images and Banners:** Find striking images to use as hero images or banners that capture the visitor’s attention immediately.
5. **Icons and Illustrations:** Generate specific icons and illustrations that match the theme of your website, ensuring a cohesive design.

## How to Use LaPrompt’s DALL-E 3 Examples for Content Creation

1. **Visual Content:** Enhance your blog posts, articles, and social media content with relevant images generated from the prompts.
2. **Storytelling:** Use images to support your storytelling, making your content more engaging and visually appealing.
3. **Infographics:** Create informative and visually appealing infographics using AI-generated images to illustrate your points.
4. **Marketing Materials:** Design eye-catching marketing materials, such as brochures and flyers, using unique images from the gallery.
5. **Presentations:** Make your presentations stand out with custom images that align with your content and themes.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gtfa2tlygdrfdj3cdzui.png)

And the best part? You can use these AI-generated masterpieces to level up your web design game. Need some inspo for your next project? Just take a stroll through the gallery and let the images wash over you like a warm, digital hug. Want to create something so unique, it'll make your grandma's knitting look like child's play? Tweak those prompts like a boss and watch the magic happen.

So, what are you waiting for? [Sign up for LaPrompt AI](https://laprompt.com/gallery/text-to-image), grab a cup of coffee (or something stronger, I won't judge), and dive headfirst into the world of [AI-generated art](https://laprompt.com/gallery/text-to-image).
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ea9dc4z3avfy1yur4wg.png) Your creative journey just got a whole lot more interesting, and your grandma's knitting will never be the same again.
nikolasgx
1,890,538
Designed HTML templates
Hey here a collection of designed HTML templates: Click here
0
2024-06-16T20:09:26
https://dev.to/tidycoder/designed-html-templates-2h4d
Hey, here's a collection of designed HTML templates: <a href="https://tidy-coder.github.io/Designed-HTML-Templates/" target="_blank">Click here</a>
tidycoder
1,890,537
6 Captivating Web Development Tutorials to Boost Your Skills 🚀
The article is about a collection of 6 captivating web development tutorials from LabEx that cover a wide range of topics, from building a 2048 game using jQuery to creating a visually stunning login page and integrating external content like YouTube videos. These hands-on projects are designed to help both beginners and experienced developers enhance their skills and create engaging web experiences. The article provides a brief overview of each tutorial, along with links to the corresponding LabEx labs, making it easy for readers to dive into the tutorials that pique their interest. Whether you're looking to expand your web development toolbox or simply explore new creative possibilities, this article is a must-read for anyone passionate about the art of web design and development.
27,723
2024-06-16T20:09:14
https://dev.to/labex/6-captivating-web-development-tutorials-to-boost-your-skills-m4h
html, coding, programming, tutorial
Embark on an exciting journey of web development mastery with this carefully curated collection of 6 captivating tutorials from LabEx. Whether you're a beginner or a seasoned pro, these hands-on projects will equip you with the skills to create engaging and visually stunning web experiences. 🌟

## 1. 2048 Web Game Using jQuery (Lab)

Dive into the world of interactive web games and learn how to recreate the beloved 2048 game using HTML, CSS, JavaScript, and the powerful jQuery library. This project will guide you through the process of building a responsive and visually appealing web version of the 2048 game. 🎮

[Get started](https://labex.io/labs/298919)

## 2. Give Your Page a Makeover (Lab)

Elevate your web design skills by creating a visually stunning login page. In this project, you'll learn how to design and style a login interface that provides a delightful user experience. Unleash your creativity and transform a simple login page into a work of art. 🎨

[Explore the tutorial](https://labex.io/labs/300086)

## 3. HTML External Content/Plugin

Discover the power of the HTML `<embed>` tag and learn how to seamlessly integrate external content, such as YouTube videos, into your web pages. This lab will teach you how to enhance your web pages with engaging multimedia elements. 📽️

[Start learning](https://labex.io/labs/70752)

## 4. Build an Image Cropping Tool Using HTML5 (Lab)

Dive into the world of interactive web applications and create your own image cropping tool. This project will guide you through the process of building a user-friendly tool that allows users to upload, display, and crop images with ease. 🖼️

[Begin the journey](https://labex.io/labs/298929)

## 5. HTML Strong Importance

Explore the significance of the `<strong>` tag in HTML and learn how it can be used to convey the importance of text to browsers and search engines. Understand the semantic meaning behind this tag and how it can enhance the accessibility and SEO of your web pages. 💪

[Dive in](https://labex.io/labs/70843)

## 6. HTML Additional Details

Discover the versatility of the HTML `<details>` tag and learn how to create interactive content that can be expanded or collapsed by users. This lab will teach you how to use the `<details>` tag to provide additional information and enhance the user experience of your web pages. 🔍

[Explore the details](https://labex.io/labs/70738)

Embark on your web development journey and unlock your full potential with these captivating tutorials from LabEx. Happy coding! 🎉

---

## Want to learn more?

- 🌳 Learn the latest [HTML Skill Trees](https://labex.io/skilltrees/html)
- 📖 Read More [HTML Tutorials](https://labex.io/tutorials/category/html)
- 🚀 Practice thousands of programming labs on [LabEx](https://labex.io)

Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx)! 😄
labby
1,890,535
The Golden Key of Cloud Computing
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-16T20:06:11
https://dev.to/delta02/the-golden-key-of-cloud-computing-1mkh
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

Cloud computing stores and accesses data and programs over the internet instead of your computer's hard drive. It evolved from virtual networks to offering: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
delta02
1,890,534
Constructor Function ?🧐
A constructor function in JavaScript is a special function used to create and initialize objects....
0
2024-06-16T20:04:59
https://dev.to/__khojiakbar__/constructor-function--1c5n
javascript, constructor, function
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgti92gq6n3jdo1ew6lb.png)

> A constructor function in JavaScript is a special function used to create and initialize objects. It's like a blueprint for creating multiple similar objects. When you use the `new` keyword with a constructor function, it creates a new object based on that blueprint.

# Example

- Define the constructor function

```
function Pet(name, type) {
  this.name = name;
  this.type = type;
}
```

- Create new pet objects

```
let pet1 = new Pet("Fluffy", "Cat");
let pet2 = new Pet("Buddy", "Dog");

console.log(pet1.name); // Output: Fluffy
console.log(pet1.type); // Output: Cat
console.log(pet2.name); // Output: Buddy
console.log(pet2.type); // Output: Dog
```

- Adding methods to constructor functions

```
function Pet(name, type) {
  this.name = name;
  this.type = type;
  this.describe = function() {
    return `${this.name} is a ${this.type}.`;
  };
}

let pet1 = new Pet("Fluffy", "Cat");
let pet2 = new Pet("Buddy", "Dog");

console.log(pet1.describe()); // Output: Fluffy is a Cat.
console.log(pet2.describe()); // Output: Buddy is a Dog.
```

# Extra sample

```
// Define the constructor function
function Phone(name, color, memory, price) {
  this.name = name;
  this.color = color;
  this.price = price;
  this.memory = memory;
  this.tellAbout = function() {
    return `${this.name} has ${this.memory}GB, and its color is ${this.color}.`;
  };
}

// Create an object
const iPhone_13 = new Phone('iPhone 13', 'blue', 256, 7000000);

console.log(iPhone_13.tellAbout()); // iPhone 13 has 256GB, and its color is blue.
```
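One caveat worth adding: a method defined inside the constructor, like `describe` above, is recreated for every object. The more common pattern is to put shared methods on the constructor's `prototype`, so every instance reuses a single function. A small sketch using the same `Pet` example:

```javascript
// Constructor only stores per-instance data
function Pet(name, type) {
  this.name = name;
  this.type = type;
}

// Defined once on the prototype and shared by every instance
Pet.prototype.describe = function () {
  return `${this.name} is a ${this.type}.`;
};

let pet1 = new Pet("Fluffy", "Cat");
let pet2 = new Pet("Buddy", "Dog");

console.log(pet1.describe()); // Output: Fluffy is a Cat.
console.log(pet2.describe()); // Output: Buddy is a Dog.
console.log(pet1.describe === pet2.describe); // Output: true (one shared function)
```

This is also essentially what the `class` syntax does under the hood: methods declared in a `class` body land on the prototype.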
__khojiakbar__
1,890,532
30 Tips how to use OSINT for bug hunting
*30 Tips how to use OSINT for bug hunting: * Use Google Dorks to find vulnerabilities in web...
0
2024-06-16T20:02:53
https://dev.to/nikhilpatel/30-tips-how-to-use-osint-for-bug-hunting-253l
**30 tips on how to use OSINT for bug hunting:**

1. Use Google Dorks to find vulnerabilities in web applications.
2. Use Shodan to find vulnerable IoT devices.
3. Use Whois to find information about domain names.
4. Use Maltego to visualize relationships between entities.
5. Use the Wayback Machine to find old versions of websites.
6. Use social media to gather information about targets.
7. Use LinkedIn to gather information about employees.
8. Use GitHub to find sensitive information in code repositories.
9. Use Google Alerts to monitor for mentions of your target.
10. Use DNSDumpster to map out a target's infrastructure.
11. Use Recon-ng to automate OSINT tasks.
12. Use theHarvester to gather email addresses and other information.
13. Use SpiderFoot to automate OSINT tasks and gather intelligence.
14. Use FOCA (Fingerprinting Organizations with Collected Archives) to gather metadata from documents.
15. Use VirusTotal to scan files for malware.
16. Use Censys to find vulnerable systems on the internet.
17. Use FOCA Pro to extract metadata from documents and analyze it.
18. Use FOCA Online to extract metadata from documents and analyze it in the cloud.
19. Use FOCA Free Edition for basic metadata extraction from documents.
20. Use Metagoofil to extract metadata from documents and analyze it.
21. Use Datasploit for automated OSINT tasks and data mining.
22. Use the Google Hacking Database (GHDB) for advanced Google searches.
23. Use a Google Custom Search Engine (CSE) for targeted searches on specific websites or domains.
24. Use Google Advanced Search for advanced searches on Google.
25. Use Google Trends to monitor trends related to your target or industry.
26. Use Google Analytics to gather information about website traffic and user behavior.
27. Use the Google AdWords Keyword Planner for keyword research related to your target or industry.
28. Use Google PageSpeed Insights to analyze website performance and identify vulnerabilities.
29. Use Google Search Console (formerly Webmaster Tools) for website analytics and vulnerability identification.
30. Use Google My Business for local SEO optimization.
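To make tip 1 concrete: a Google dork is simply a search query built from advanced operators. A few common illustrative patterns (shown against a placeholder domain, not a real target — only run these against scope you are authorized to test) look like:

```
site:example.com inurl:admin
site:example.com intitle:"index of"
site:example.com ext:log | ext:sql
site:example.com filetype:pdf "confidential"
```

The `site:` operator restricts results to one domain, while `inurl:`, `intitle:`, `ext:`/`filetype:` match against the URL, page title, and file extension respectively.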
nikhilpatel
1,890,531
Create an XRP Ledger Account with Web3Auth (Nextjs): A Step-by-Step Guide
Introduction XRP ledger is a decentralized blockchain that focus on cross-border payment,...
0
2024-06-16T20:00:24
https://dev.to/amity808/create-an-xrp-ledger-account-with-web3auth-nextjs-a-step-by-step-guide-a45
blockchain, web3, authjs, javascript
## Introduction

**XRP Ledger** is a decentralized blockchain that focuses on cross-border payments. It is used to record financial transactions, and it is fast and low-cost. The XRP Ledger was created by the Ripple co-founder and CEO Chris Larsen, and it uses XRP as its native cryptocurrency.

In this tutorial, we are integrating the XRP Ledger with Web3Auth, which will generate a new account for us. This account can be used to perform transactions, for example sending tokens to any address on the XRP Ledger. To get an XRPL account, users need to sign in with Google, Discord, or another login method supported by Web3Auth.

### Getting started with the project

We are going to use Next.js, Tailwind CSS, and the Web3Auth libraries.

**Open your terminal**

Navigate to the directory where you want your code to live.

**Create a new Next.js project**

```
npx create-next-app@latest my-project --typescript --eslint
```

You should see something like this in your terminal:

![XRP ledger web3auth](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/htop7gligejjszbitv2b.png)

Once that succeeds, navigate into my-project:

```
cd my-project
```

Then install Tailwind CSS:

```
npm install -D tailwindcss postcss autoprefixer
```

![XRP ledger web3auth](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kpv1v49o8pb93mirksgc.png)

### After it is successful, run this:

```
npx tailwindcss init -p
```

![XRP ledger web3auth](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iuuvadeuw5m5x5k3b2di.png)

You also need to install some dependencies which we are going to use in the project:

@web3auth/xrpl-provider - The Web3Auth XRPL Provider can be used to interact with a wallet or the connected XRPL chain using RPC calls.
reference: [web3auth/xrpl-provider](https://www.npmjs.com/package/@web3auth/xrpl-provider)

@web3auth/openlogin-adapter - This adapter is a wrapper around the openlogin library from Web3Auth and enables the main social login features of Web3Auth.

reference: [web3auth/openlogin-adapter](https://www.npmjs.com/package/@web3auth/openlogin-adapter)

@web3auth/modal - Provides the main class for using the default Web3Auth modal.

@web3auth/base - Contains the basic TypeScript types and interfaces for Web3Auth. This comes in handy by providing a standard way of importing the values you need to work with the SDKs.

reference: [web3auth/base](https://www.npmjs.com/package/@web3auth/base)

@nice-xrpl/react-xrpl - A set of hooks and components for building applications with xrpl.js in React, such as getting a wallet balance or sending XRP to another wallet address.

reference: [nice-xrpl/react-xrpl](https://www.npmjs.com/package/@nice-xrpl/react-xrpl)

You can use Yarn (or any other package manager you prefer, such as npm):

```
yarn add @web3auth/xrpl-provider @web3auth/openlogin-adapter @web3auth/modal @web3auth/base @nice-xrpl/react-xrpl
```

After the installation, open the project in VS Code and start the development process.

In your project directory, create a utils folder and a components folder, where we define our provider and the other functions you are going to call, such as getting the account generated by the Web3Auth login with the XRPL provider client.

> If you are using src in your project structure, create the utils folder there; otherwise create it in your root directory, not inside your app or pages folder.
Create a new file named xrpLRPC.ts inside the utils folder:

```
// Import the IProvider type from @web3auth/base
import { IProvider } from "@web3auth/base";
// Import helpers used to build requests for the XRP Ledger
import { convertStringToHex, Payment, xrpToDrops } from "xrpl";

// A class that wraps all our XRPL RPC requests
export default class XrplRPC {
  // The Web3Auth provider used to talk to the ledger
  private provider: IProvider;

  // Initialize the provider with the supplied argument
  constructor(provider: IProvider) {
    this.provider = provider;
  }

  // Method to get accounts associated with the provider
  getAccounts = async (): Promise<any> => {
    try {
      // Request the accounts using the provider
      const accounts = await this.provider.request<never, string[]>({
        method: "xrpl_getAccounts", // Specify the method to get accounts
      });
      console.log(accounts, "accounts"); // Log the accounts for debugging purposes
      if (accounts) {
        // Request account info for the first account in the list
        const accInfo = await this.provider.request({
          method: "account_info", // Specify the method to get account info
          params: [
            {
              account: accounts[0], // Use the first account
              strict: false, // Non-strict mode allows for more lenient account info retrieval
              ledger_index: "current", // Use the current ledger index
              queue: true, // Include queued transactions
            },
          ],
        });
        return accInfo; // Return the account info
      } else {
        return "No account found, please report issues";
      }
    } catch (error) {
      console.error("Error", error);
      return error;
    }
  };

  // Method to get the balance of the first account
  getBalance = async (): Promise<any> => {
    try {
      const accounts = await this.provider.request<string[], never>({
        method: "xrpl_getAccounts",
      });
      if (accounts) {
        const accInfo = (await this.provider.request({
          method: "account_info",
          params: [
            {
              account: accounts[0],
              strict: true, // Strict mode ensures accurate account info
              ledger_index: "current",
              queue: true,
            },
          ],
        })) as Record<string, Record<string, string>>;
        return accInfo.account_data?.Balance; // Return the account balance
      } else {
        return "No accounts found, please report this issue.";
      }
    } catch (error) {
      console.error("Error", error);
      return error;
    }
  };

  // Method to get the address of the first account
  getAccountAddress = async (): Promise<any> => {
    try {
      const accounts = await this.provider.request<string[], never>({
        method: "xrpl_getAccounts",
      });
      if (accounts) {
        const accInfo = (await this.provider.request({
          method: "account_info",
          params: [
            {
              account: accounts[0],
              strict: true,
              ledger_index: "current",
              queue: true,
            },
          ],
        })) as Record<string, Record<string, string>>;
        return accInfo?.account; // Return the account address
      } else {
        return "No accounts found, please report this issue.";
      }
    } catch (error) {
      console.error("Error", error);
      return error;
    }
  };

  // Method to sign a message
  signMessage = async (): Promise<any> => {
    try {
      const msg = "Hello world this is tutorial on XRPL by Amityclev"; // Define the message to sign
      const hexMsg = convertStringToHex(msg); // Convert the message to a hexadecimal string
      const txSign = await this.provider.request<{ signature: string }, never>({
        method: "xrpl_signMessage", // Specify the method to sign a message
        params: {
          signature: hexMsg, // Provide the hexadecimal message to be signed
        },
      });
      return txSign; // Return the signed message
    } catch (error) {
      console.log("error", error);
      return error;
    }
  };

  // Method to sign and send a transaction
  signAndSendTransaction = async (): Promise<any> => {
    try {
      const accounts = await this.provider.request<never, string[]>({
        method: "xrpl_getAccounts",
      });
      if (accounts && accounts.length > 0) {
        // Create the payment transaction object
        const tx: Payment = {
          TransactionType: "Payment", // Specify the transaction type
          Account: accounts[0] as string, // Use the first account as the sender
          Amount: xrpToDrops(50), // Amount to send, converting XRP to drops
          Destination: "rM9uB4xzDadhBTNG17KHmn3DLdenZmJwTy", // Destination address
        };
        // Request to submit the transaction
        const txSign = await this.provider.request({
          method: "xrpl_submitTransaction",
          params: {
            transaction: tx, // Provide the transaction object
          },
        });
        return txSign; // Return the transaction result
      } else {
        return "failed to fetch accounts";
      }
    } catch (error) {
      console.log("error", error);
      return error;
    }
  };
}
```

Create a components folder if you have not done so already.
Navigate to your component folder create a file called Web3Authentication.tsx inside compoenent let utilize the various functions we already define in in xrpLRPC.ts ``` // 'use client' ensures that the following code is executed on the client side 'use client' // Import necessary modules and components from various packages import { XrplPrivateKeyProvider } from "@web3auth/xrpl-provider"; // Provides a private key provider for XRPL import { Web3Auth } from "@web3auth/modal"; // Main Web3Auth module for authentication import { getXrplChainConfig } from "@web3auth/base"; // Utility function to get XRPL chain configuration import { CHAIN_NAMESPACES, UX_MODE, WEB3AUTH_NETWORK, IProvider } from "@web3auth/base"; // Various constants and interfaces for Web3Auth import RPC from "../utils/xrpLRPC"; // Custom RPC class for XRPL interactions import { OpenloginAdapter } from "@web3auth/openlogin-adapter"; // Adapter for Openlogin import React, { useEffect, useState } from 'react'; // React hooks for managing state and side effects // Client ID for Web3Auth which you can get from the web3 auth dashboard https://dashboard.web3auth.io/ const clientId = "BKVW17ohm6Mt-A6O_A633ECD5fOYUEkyOwmU5sdoEhtQFj0PiS28wxLO5DkmbqaAEsCgsd_BzqxhYjabDihcjLc"; // Main component for Web3 authentication const Web3Authentication = () => { // Define state variables const [web3auth, setWeb3auth] = useState<Web3Auth | null>(null); // State for Web3Auth instance const [provider, setProvider] = useState<IProvider | null>(null); // State for provider instance const [loggedIn, setLoggedIn] = useState(false); // State to track if the user is logged in // Configuration for the XRPL chain const chainConfig = { chainNamespace: CHAIN_NAMESPACES.XRPL, chainId: "0x2", rpcTarget: "https://s.altnet.rippletest.net:51234/", // RPC endpoint for XRPL testnet wsTarget: "wss://s.altnet.rippletest.net:51233/", // WebSocket endpoint for XRPL testnet ticker: "XRP", tickerName: "XRPL", displayName: "xrpl testnet", 
blockExplorerUrl: "https://devnet.xrpl.org/", // Block explorer URL for XRPL testnet }; // useEffect hook to initialize Web3Auth when the component mounts useEffect(() => { const init = async () => { try { // Initialize XRPL private key provider const xrplProvider = new XrplPrivateKeyProvider({ config: { chainConfig: getXrplChainConfig(0x2), // Get XRPL chain configuration }, }); console.log(xrplProvider.config, "xrplProvider.config"); // Log provider configuration for debugging // Initialize Web3Auth with the specified configurations const web3auth = new Web3Auth({ clientId, uiConfig: { // UI customization options appName: "W3A", theme: { primary: "red", }, mode: "dark", logoLight: "https://web3auth.io/images/web3authlog.png", logoDark: "https://web3auth.io/images/web3authlogodark.png", defaultLanguage: "en", loginGridCol: 3, primaryButton: "externalLogin", uxMode: UX_MODE.REDIRECT, }, web3AuthNetwork: WEB3AUTH_NETWORK.SAPPHIRE_DEVNET, // Web3Auth network configuration privateKeyProvider: xrplProvider, // XRPL private key provider }); // Initialize and configure Openlogin adapter const openloginAdapter = new OpenloginAdapter({ loginSettings: { mfaLevel: "optional", }, adapterSettings: { uxMode: "redirect", whiteLabel: { logoLight: "https://web3auth.io/images/web3authlog.png", logoDark: "https://web3auth.io/images/web3authlogodark.png", defaultLanguage: "en", }, mfaSettings: { deviceShareFactor: { enable: true, priority: 1, mandatory: true, }, backUpShareFactor: { enable: true, priority: 2, mandatory: true, }, socialBackupFactor: { enable: true, priority: 3, mandatory: true, }, passwordFactor: { enable: true, priority: 4, mandatory: true, }, }, }, }); web3auth.configureAdapter(openloginAdapter); // Configure Web3Auth with the Openlogin adapter setWeb3auth(web3auth); // Set the Web3Auth instance in state await web3auth.initModal(); // Initialize the Web3Auth modal if (web3auth.connected) { // Check if already connected setProvider(web3auth?.provider); // Set the 
provider instance in state setLoggedIn(true); // Set logged in state to true } } catch (error) { console.error(error); // Log any errors that occur during initialization } }; init(); // Call the init function }, []); // Function to handle user login const login = async () => { if (!web3auth) { // If Web3Auth is not initialized, display a message in the UI console uiConsole("web3auth not initialized yet"); return; } // Connect to Web3Auth and set the provider const webauthProvider = await web3auth.connect(); setProvider(webauthProvider); setLoggedIn(true); // Update the state to indicate the user is logged in }; // Function to authenticate the user const authenticateUser = async () => { if (!web3auth) { // If Web3Auth is not initialized, display a message in the UI console uiConsole("web3auth not initialized yet"); return; } // Authenticate the user and get an ID token const idToken = await web3auth.authenticateUser(); // Display the ID token in the UI console uiConsole(idToken); }; // Function to get user information const getUserInfo = async () => { if (!web3auth) { // If Web3Auth is not initialized, display a message in the UI console uiConsole("web3auth not initialized yet"); return; } // Retrieve user information from Web3Auth const user = await web3auth.getUserInfo(); // Display the user information in the UI console uiConsole(user); }; // Function to handle user logout const logout = async () => { if (!web3auth) { // If Web3Auth is not initialized, display a message in the UI console uiConsole("web3auth not initialized yet"); return; } // Log out the user from Web3Auth await web3auth.logout(); setProvider(null); // Reset the provider state setLoggedIn(false); // Update the state to indicate the user is logged out }; // Function to get the accounts associated with the provider const getAccounts = async () => { if (!provider) { // If the provider is not initialized, display a message in the UI console uiConsole("provider not initialized yet"); return; } // 
Create a new RPC instance with the provider const rpc = new RPC(provider); // Get the user accounts from the RPC instance const userAccount = await rpc.getAccounts(); // Display the account information in the UI console uiConsole("Account info: ", userAccount); }; // Function to get the balance of the user account const getBalance = async () => { if (!provider) { // If the provider is not initialized, display a message in the UI console uiConsole("provider not initialized yet"); return; } // Create a new RPC instance with the provider const rpc = new RPC(provider); // Get the balance from the RPC instance const balance = await rpc.getBalance(); // Display the balance in the UI console uiConsole("Balance", balance); }; // Function to send a transaction const sendTransaction = async () => { if (!provider) { // If the provider is not initialized, display a message in the UI console uiConsole("provider not initialized yet"); return; } // Create a new RPC instance with the provider const rpc = new RPC(provider); // Sign and send the transaction const result = await rpc.signAndSendTransaction(); // Display the result of the transaction in the UI console uiConsole(result); }; // Function to sign a message const signMessage = async () => { if (!provider) { // If the provider is not initialized, display a message in the UI console uiConsole("provider not initialized yet"); return; } // Create a new RPC instance with the provider const rpc = new RPC(provider); // Sign the message const result = await rpc.signMessage(); // Display the signed message in the UI console uiConsole(result); }; // Function to get the account address const getAccountAddress = async () => { if (!provider) { // If the provider is not initialized, display a message in the UI console uiConsole("provider not initialized yet"); return; } // Create a new RPC instance with the provider const rpc = new RPC(provider); // Get the account address const result = await rpc.getAccountAddress(); // Display the 
account address in the UI console uiConsole(result); }; // Function to get the wallet seed // const getWalletSeed = async () => { // if (!provider) { // // If the provider is not initialized, display a message in the UI console // uiConsole("provider not initialized yet"); // return; // } // // Create a new RPC instance with the provider // const rpc = new RPC(provider); // // Get the wallet seed // const result = await rpc.getWalletSeed(); // // Log the result in the console for debugging // console.log(result, "result: "); // // Display the wallet seed in the UI console // uiConsole(result); // }; // Function to display messages in the UI console function uiConsole(...args) { // Find the HTML element to display the messages const el = document.querySelector("#console>p"); if (el) { // Display the messages in the HTML element as a formatted JSON string el.innerHTML = JSON.stringify(args || {}, null, 2); } } // Component to display the view when the user is logged in const loggedInView = ( <> <div className="flex-container"> <div> {/* Button to get user information */} <button onClick={getUserInfo} className="card"> Get User Info </button> </div> <div> {/* Button to authenticate user and get ID token */} <button onClick={authenticateUser} className="card"> Get ID Token </button> </div> <div> {/* Button to get the account address */} <button onClick={getAccountAddress} className="card"> Address </button> </div> {/* <div> Button to get the wallet seed <button onClick={getWalletSeed} className="card"> Seed </button> </div> */} <div> {/* Button to get user accounts */} <button onClick={getAccounts} className="card"> Get Accounts </button> </div> <div> {/* Button to get the balance of the account */} <button onClick={getBalance} className="card"> Get Balance </button> </div> <div> {/* Button to sign a message */} <button onClick={signMessage} className="card"> Sign Message </button> </div> <div> {/* Button to send a transaction */} <button onClick={sendTransaction} 
className="card"> Send Transaction </button> </div> <div> {/* Button to log out the user */} <button onClick={logout} className="card"> Log Out </button> </div> </div> {/* Console to display messages */} <div id="console" style={{ whiteSpace: "pre-line" }}> <p style={{ whiteSpace: "pre-line" }}></p> </div> </> ); // Component to display the view when the user is not logged in const unloggedInView = ( <button onClick={login} className="card"> Login </button> ); // Main component to handle the Web3Auth authentication process return ( <div> <div className="container"> <h1 className="title"> <a target="_blank" href="https://web3auth.io/docs/sdk/pnp/web/modal" rel="noreferrer"> Web3Auth{" "} </a> & ReactJS XRPL Example </h1> {/* Display the loggedInView if logged in, otherwise display unloggedInView */} <div className="grid">{loggedIn ? loggedInView : unloggedInView}</div> <footer className="footer"> {/* Link to the source code */} <a href="https://github.com/Web3Auth/web3auth-pnp-examples/tree/main/web-modal-sdk/blockchain-connection-examples/xrpl-modal-example" target="_blank" rel="noopener noreferrer" > Source code </a> {/* Button to deploy with Vercel */} <a href="https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2FWeb3Auth%2Fweb3auth-pnp-examples%2Ftree%2Fmain%2Fweb-modal-sdk%2Fblockchain-connection-examples%2Fxrpl-modal-example&project-name=w3a-xrpl-modal&repository-name=w3a-xrpl-modal"> <img src="https://vercel.com/button" alt="Deploy with Vercel" /> </a> </footer> </div> </div> ); } export default Web3Authentication; ```

Open your terminal, make sure you are in the project folder, and run:

```
yarn run dev
```

When you log in with Web3Auth in the project, it generates a new XRPL account for you. Note that the account is not activated until you send 15 XRP to the address. You can check the demo video on how to send XRP to the generated account.
amity808
1,890,530
The Rise of AI Chatbots: Transforming Digital Communication
What is an AI Chatbot? AI chatbots are computer programs designed to simulate...
27,673
2024-06-16T19:59:21
https://dev.to/rapidinnovation/the-rise-of-ai-chatbots-transforming-digital-communication-1e4k
## What is an AI Chatbot?

AI chatbots are computer programs designed to simulate human-like conversations through machine learning and natural language processing. These chatbots can understand and interpret human language, allowing them to engage with users in real time, answer questions, provide recommendations, and perform tasks. They have significantly transformed the way businesses interact with their customers, providing instant, personalized, and round-the-clock support.

## Top Examples of AI Chatbots

Among the top AI chatbots are ChatGPT, Bing Chat, YouChat, and Jasper. Each of these chatbots has unique capabilities.

## How Do AI Chatbots Work?

AI chatbots function through several key mechanisms.

## How to Choose the Right AI Service Provider

Choosing the right AI service provider is crucial for a successful AI journey.

## Difference Between AI Chatbots, AI Writers, and Virtual Assistants

While all three are AI-powered, they serve different purposes.

## Conclusion: AI Chatbots Are Here To Stay

AI chatbots have revolutionized the way we interact with technology, making our lives easier and more efficient. Understanding their capabilities and how to choose the right AI service provider is crucial as they become an integral part of our daily lives and businesses.

Build your AI chatbots today! Book a personalized consultation with us. Drive innovation with intelligent AI and secure blockchain technology! 🌟 Check out how we can help your business grow!

[Blockchain App Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)

[AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)

## URLs

* <http://www.rapidinnovation.io/post/the-rise-of-ai-chatbots-examples-services-and-providers>

## Hashtags

#AIChatbots #ArtificialIntelligence #CustomerService #MachineLearning #NaturalLanguageProcessing
rapidinnovation
1,890,529
How to create a Github account and install Git on your PC
How to create a github account and install Git on your pc
0
2024-06-16T19:59:15
https://dev.to/ebaye_eligbe_8941d8a7a498/how-to-create-a-github-account-and-install-git-on-your-pc-1p8h
[How to create a github account and install Git on your pc](https://medium.com/@ebayeeligbe/how-to-create-a-github-account-and-install-git-on-your-pc-a8b0150b659b)
ebaye_eligbe_8941d8a7a498
1,890,527
The Importance of Personal Branding for Web Developers: My Journey and Lessons Learned
Hey Dev Community! I'm excited to share some insights and lessons learned from my journey in web...
0
2024-06-16T19:47:26
https://dev.to/ridoy_hasan/the-importance-of-personal-branding-for-web-developers-my-journey-and-lessons-learned-4b3g
webdev, programming, career, learning
Hey Dev Community!

I'm excited to share some insights and lessons learned from my journey in web development, especially regarding personal branding. Despite initial hesitations, I've discovered that building a personal brand is essential for standing out in our field. Here's why, and how you can get started.

**My Biggest Failures and Lessons Learned:**

**1/** Lost over $28k on 2 eCommerce stores.
**Lesson:** Failure isn't the end. It's an opportunity to learn and grow.

**2/** Lost over $20k trading stocks.
**Lesson:** Persistence pays off. I turned it around by learning from my mistakes.

**3/** Struggled to monetize my following.
**Lesson:** Building a following takes time, and it's important to engage and provide value consistently.

Every failure was a stepping stone. They showed I had the guts to try something bold. Failures don't invalidate you; they are data points and signs that you tried.

**Why Personal Branding Matters:**

**Attract Your Ideal Clients:** Showcasing your skills and projects helps attract the right clients who value your expertise.

**Build Your Network:** Networking is crucial. Your brand can help you connect with other professionals and potential collaborators.

**Become a Thought Leader:** Sharing your knowledge and insights can position you as a thought leader in your field.

**Overcoming Hesitations:**

I had many concerns about personal branding: my life is too busy; I don't see the point; I don't know how to start; I think people will laugh at me. But here's the truth: it doesn't have to be difficult.

**Getting Started:**

_Surround Yourself with Supportive People:_ Engage with communities like Dev.to where you can find support and inspiration.

_Start Small:_ Even a single post or project share is better than not starting.

_Be Visible:_ Showcasing your work can bring new opportunities.

_Ignore the Naysayers:_ Focus on your growth, not what others think.

_Recognize It's a Journey:_ Personal branding is a long-term commitment.

_Learn and Enjoy:_ It's a continuous learning process. Enjoy it!

P.S. Are you building your personal brand? Share your experiences and tips in the comments! Let's inspire each other to grow and succeed in our web development journeys.

**_Connect With Me_**

🔗 My LinkedIn: https://www.linkedin.com/in/ridoy-hasan7

🔗 My portfolio: https://ridoyweb.com

🔁 Repost to remind others. 🔁

🔔 Follow for more insights and updates. 🔔

Looking forward to hearing your thoughts and experiences!
ridoy_hasan
1,890,526
Dockerise a Golang Application
How can you call yourself a Golang developer without knowing how to Dockerize a Go application? Yes,...
0
2024-06-16T19:43:40
https://dev.to/abhishekcs3459/dockerise-a-golang-application-25lc
docker, devops, go
**How can you call yourself a Golang developer without knowing how to Dockerize a Go application?** Yes, I know that feels very shameful, especially when working among your fellow teammates who are skilled developers. *Mention not, let's give them a good show!* **Prerequisites:** 1. **Go setup**: You can download & install it from [Go Official website](https://go.dev/doc/install). 2. **Docker**: Download and install from [Docker's Official Website](https://www.docker.com). ![meme image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3c7lvscpobuydkkz5a4w.png) ## Step 1: Create a Golang App or use my sample GoLang App by cloning it ```bash git clone https://github.com/AbhishekCS3459/Docker_Blog_Series cd Docker_Blog_Series/Dockerise_Go go mod tidy ``` ### Step 2 (optional): Run the Go application using the below command: ```bash go run main.go ``` **Expected Output:** ![vscode output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtolh47m5qn1qr41wdkf.png) ### Simple Dockerfile for a Go application: ![code image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f5gzlj2jh5yhx9aw16bh.png) ### Step 3: Build the Docker image using the following command: ```bash docker build -t YOUR_IMAGE_NAME . ``` **Note:** The `-t` flag tags the image with a name, and the trailing `.` sets the build context to the current directory, where your Go application and Dockerfile live. ### Step 4: Check whether your image has been built or not by running the below command: ```bash docker images ``` ### Step 5: Run the container using the following command: ```bash docker run -it -p 8080:8080 YOUR_IMAGE_NAME ``` **Note:** Here, `-it` is a flag to run the container in interactive mode, and `-p` maps the container port to an external host port. ## Deploy the Image to Docker Hub Docker Hub is a container registry built for developers and open-source contributors to find, use, and share their container images. 
With Hub, developers can host public repos that can be used for free, or private repos for teams and enterprises. ![Dockerhub Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jwh5qcrloy837aeg971.png) ### Step 1: Log in to Docker Hub using the command: ```bash docker login ``` You will be prompted to enter your Docker Hub username and password. ### Step 2: Tag your Docker image with your Docker Hub repository name. Replace `YOUR_DOCKERHUB_USERNAME` and `YOUR_IMAGE_NAME` with your Docker Hub username and the name of your image: ```bash docker tag YOUR_IMAGE_NAME YOUR_DOCKERHUB_USERNAME/YOUR_IMAGE_NAME ``` ### Step 3: Push Your Image to Docker Hub Push the tagged image to Docker Hub: ```bash docker push YOUR_DOCKERHUB_USERNAME/YOUR_IMAGE_NAME ``` You can now see your image on Docker Hub and share it with others! ![docker file image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3jil4r4mn2dbk60f57se.png) ### If you want to ask, ping me below. Connect with me on Linkedin: [linkedin/abhishekverman](https://www.linkedin.com/in/abhishekverman/). Further Reading: **[Dockerize a Nodejs Application](https://dev.to/comments/1192621)**
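Since the Dockerfile in the article appears only as a screenshot, here is a rough text equivalent as a minimal multi-stage sketch. It assumes a single `main.go` at the repository root that listens on port 8080, and the Go version and binary name (`server`) are placeholders to adjust for your project:

```dockerfile
# Build stage: compile a static Go binary (Go version is an assumption)
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY . .
RUN go mod download && CGO_ENABLED=0 go build -o server .

# Run stage: small image that contains only the compiled binary
FROM alpine:latest
WORKDIR /app
COPY --from=builder /app/server .
EXPOSE 8080
CMD ["./server"]
```

The multi-stage build keeps the final image small, since the Go toolchain stays in the builder stage.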
abhishekcs3459
1,890,525
Sekabet: Latest Login Address Information
Official Websites and Social Media: Sekabet's official website and social media accounts share the most up-to-date...
0
2024-06-16T19:42:42
https://dev.to/abbybrown/sekabet-en-guncel-giris-adresi-bilgileri-56bd
Official Websites and Social Media: Sekabet's official website and social media accounts continuously share the most up-to-date login address. For example, one of the currently active addresses is given as [sekabet.live](https://sekabet.live). Reliable Sources and Betting Forums: Betting forums and review sites frequently update Sekabet's current login addresses. You can also reach accurate information through these sources. Another current address is listed as sekabet1196.com. Login Guides: Using the guides to Sekabet's current login address, you can quickly reach correct and up-to-date information. For example, you can find the current login details at sekabete.com. Sekabet's new login addresses are updated regularly so that users can access the site without problems. This way, you can enjoy a betting experience that is both secure and uninterrupted. In addition, Sekabet's customer support team can also help with this. If you experience any access problems, clearing your browser cache or using a different browser can also help resolve them. By following the latest login information, you can enjoy the various betting and casino games Sekabet offers.
abbybrown
1,890,523
Browser MMORPG #indiegamedev #babylonjs Ep20 - Client Side Prediction & Reconciliation
Hey Recently I was implementing entities interpolation and next on the plate was client side...
0
2024-06-16T19:32:13
https://dev.to/maiu/browser-mmorpg-indiegamedev-babylonjs-ep20-client-side-prediction-reconciliation-3fc7
babylonjs, indiegamedev, mmorpg, gamedev
Hey Recently I was implementing entities interpolation and next on the plate was client side prediction, which is smoothing player movements and increase responsiveness (predicting player movement before response from the server is received). On the video I'm presenting effect of three algorithms: - player movement prediction - player movement reconciliation with the server response - player position extrapolation (between physics ticks 20/s) I'm simulating latency by setting server network tick 4/s which effectively making server to send all messages each 250ms. It's not like in real world but i have some insights how it'd work. In general I think it work ok. It's minimizing lags effects quite significantly. I thinking about adding to the entity interpolation similar mechanism which i have in player position reconciliation, which is not updating to position received from the server instantly but makes smooth transition across few simulation ticks (imagine elastic band). Perhaps adding it for all the entities will create too much overhead but adding it for enemy players would be nice. I'd increase experience significantly not only for players experiencing lags but others which interact with them. Hope You like it! {% youtube NYIvQk0-FcQ %}
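For anyone curious how prediction and reconciliation fit together, here is a minimal TypeScript sketch (hypothetical names, not the actual game code): the client applies each input immediately, keeps the unacknowledged inputs, and on every server update rewinds to the authoritative state and re-applies the pending inputs.

```typescript
// One movement input, tagged with a sequence number so the server can
// acknowledge how far it has processed.
interface Input {
  seq: number;
  dx: number;
}

interface State {
  x: number;
}

// The same pure simulation step runs on the client and on the server.
function applyInput(state: State, input: Input): State {
  return { x: state.x + input.dx };
}

class PredictedPlayer {
  state: State = { x: 0 };
  private pendingInputs: Input[] = [];

  // Prediction: apply the input locally right away, and remember it
  // until the server acknowledges it.
  predict(input: Input): void {
    this.state = applyInput(this.state, input);
    this.pendingInputs.push(input);
  }

  // Reconciliation: rewind to the authoritative server state, drop the
  // inputs the server has already processed, and re-apply the rest.
  reconcile(serverState: State, lastProcessedSeq: number): void {
    this.pendingInputs = this.pendingInputs.filter(i => i.seq > lastProcessedSeq);
    let state = serverState;
    for (const input of this.pendingInputs) {
      state = applyInput(state, input);
    }
    this.state = state;
  }
}
```

With this shape, a server update confirming input 2 at `x = 2` while inputs 1–3 were predicted leaves the player at `x = 3`, because input 3 is re-applied on top of the acknowledged state. The "elastic band" smoothing described above would replace the direct `this.state = state` assignment with a transition spread over a few simulation ticks.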
maiu
1,890,521
Optimizing React Performance: Avoiding Unnecessary Rendering with Hooks
Introduction React's declarative nature and component-based architecture make it a powerful tool for...
0
2024-06-16T19:24:15
https://dev.to/ayas_tech_2b0560ee159e661/optimizing-react-performance-avoiding-unnecessary-rendering-with-hooks-446i
**Introduction** React's declarative nature and component-based architecture make it a powerful tool for building dynamic user interfaces. However, with great power comes great responsibility, especially when it comes to performance. One of the common pitfalls in React applications is unnecessary rendering, which can lead to performance bottlenecks. In this blog, we'll explore how to avoid unnecessary rendering when using React hooks. **Understanding Rendering in React** In React, rendering is the process of calling a component's render method to generate the Virtual DOM. This happens whenever a component's state or props change. While React's diffing algorithm minimizes updates to the real DOM, unnecessary re-renders can still impact performance, particularly in complex applications. **Common Causes of Unnecessary Rendering** **1. State Changes:** Frequent state updates can cause components to re-render more often than necessary. **2. Props Changes:** Passing new props to child components can trigger re-renders. **3. Context Changes:** Updates to context values can cause all consuming components to re-render. **4. Parent Re-renders:** When a parent component re-renders, all its child components re-render as well. **Strategies to Avoid Unnecessary Rendering** **1. useState Optimization** When using the useState hook, ensure that state updates are minimal and necessary. Avoid updating state with the same value repeatedly. ``` const [count, setCount] = useState(0); // Avoid this const increment = () => setCount(count); // Use this const increment = () => setCount(prevCount => prevCount + 1); ``` **2. useMemo and useCallback** useMemo and useCallback are hooks that help memoize expensive computations and callback functions, preventing them from being re-created on every render. useMemo: Memoize a computed value. ``` const expensiveValue = useMemo(() => computeExpensiveValue(a, b), [a, b]); ``` useCallback: Memoize a function. 
``` const handleClick = useCallback(() => { doSomething(a, b); }, [a, b]); ``` **3. React.memo** React.memo is a higher-order component that memoizes the rendered output of a functional component. It only re-renders if the props change. ``` const MyComponent = React.memo(({ value }) => { return <div>{value}</div>; }); ``` **4. Avoid Inline Functions and Objects** Passing inline functions or objects as props can cause child components to re-render because a new reference is created on each render. Inline Functions: ``` // Avoid this <ChildComponent onClick={() => handleClick(value)} /> // Use this const memoizedClick = useCallback(() => handleClick(value), [value]); <ChildComponent onClick={memoizedClick} /> ``` Inline Objects: ``` // Avoid this <ChildComponent style={{ color: 'red' }} /> // Use this const style = useMemo(() => ({ color: 'red' }), []); <ChildComponent style={style} /> ``` **5. useRef for Persistent Values** Use useRef to keep a mutable object that does not cause re-renders when updated. ``` const countRef = useRef(0); const increment = () => { countRef.current += 1; console.log(countRef.current); }; ``` **6. Optimize Context Usage** Context updates can cause all consuming components to re-render. To mitigate this, consider splitting context or using the useContextSelector pattern. ``` const ThemeContext = React.createContext(); // Split context values const ThemeProvider = ({ children }) => { const [theme, setTheme] = useState('light'); const value = useMemo(() => ({ theme, setTheme }), [theme]); return <ThemeContext.Provider value={value}>{children}</ThemeContext.Provider>; }; ``` **Conclusion** By understanding and applying these strategies, you can significantly improve the performance of your React applications. Optimizing rendering with hooks involves careful consideration of state management, memoization, and context usage. By minimizing unnecessary renders, you ensure a smoother and more responsive user experience.
ayas_tech_2b0560ee159e661
1,890,519
C++ Best Practices : Naming Conventions
There are various popular naming conventions which are common among various languages, the relevance...
0
2024-06-16T19:17:23
https://dev.to/parthkamal/c-best-practices-naming-conventions-235m
cpp, convention, language
There are several popular naming conventions shared across languages. Their relevance comes from the legacy they carry and from other factors like readability, purpose, and the capabilities of the language. For example, we mostly follow camelCase in Java; in C++, we use lowercase with underscores most of the time. ## Why is it important? There are two main advantages: 1. Consistency in the code base - especially when you are contributing to open source. 2. Readability - with a common coding style that everyone follows, it becomes easy to understand the code. ## C++ naming conventions 1. Variables - lowercase with underscores 2. Functions - lowercase with underscores 3. Classes - Pascal case 4. Constants - uppercase with underscores In gist: following these best practices and guidelines makes the job easier for everyone.
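As a quick illustration, here is a small sketch showing all four rules at once (the names are made up for the example):

```cpp
#include <cassert>
#include <string>

// Constants: uppercase with underscores
constexpr int MAX_RETRY_COUNT = 3;

// Classes: Pascal case
class RequestSender {
public:
    // Functions: lowercase with underscores
    bool send_request(const std::string& payload) const {
        // Variables: lowercase with underscores
        int attempts_left = MAX_RETRY_COUNT;
        return attempts_left > 0 && !payload.empty();
    }
};
```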
parthkamal
1,890,517
Unlock New Features: RazorSharp CLI & Dashboard Updates!
Hello, fellow developers! I'm excited to share some fantastic news—RazorSharp CLI 1.0.0-beta09 and...
0
2024-06-16T19:16:49
https://dev.to/jwtiller_c47bdfa134adf302/unlock-new-features-razorsharp-cli-dashboard-updates-i50
dotnet, programming
Hello, fellow developers! I'm excited to share some fantastic news—RazorSharp CLI 1.0.0-beta09 and Dashboard 0.0.5.1 are now available! As we move from beta to a more refined product, there will still be a freemium version available for everyone to enjoy. ### What's New? #### 🔒 Username and Password Protection You can now secure access to your logs with a username and password. The first time you access the dashboard, a configuration wizard will guide you through the setup process. This added layer of security ensures that your log data remains protected. #### 📡 gRPC Log Transfer We’ve introduced gRPC log transfer capabilities. This means you can now send logs directly to the dashboard using gRPC, making the logging process smoother and faster. This enhancement is designed to provide real-time insights and streamline your workflow. ### Getting Started To help you get up and running quickly, we’ve put together a quick start guide. Here’s a brief overview of the steps: #### 1. Install the Dashboard **Database** ```bash docker run -d -p 27017:27017 --name mongodb mongo ``` *For production environments, additional configuration is needed to ensure data persistence and security.* **Dashboard** ```bash docker pull razorsharpdotnet/dashboard:latest docker run -d -p 8080:8080 -p 8081:8081 -e RAZORSHARP_MONGODB_CONNECTION_STRING=mongodb://host.docker.internal:27017 razorsharpdotnet/dashboard:latest ``` #### 2. Install the Application ```bash dotnet tool install Razorsharp.APM.CLI --global --version 1.0.0-beta* razorsharp --inject HelloWorld.dll --dashboard http://localhost:8081 ``` #### 3. Test Perform an API call or a user action to allow RazorSharp APM to generate log entries. Access the dashboard at [http://localhost:8080](http://localhost:8080) to view the topology map and detailed log information. ### Disclaimer RazorSharp APM CLI is currently in beta. It is recommended for use only in test environments at this stage. 
By using RazorSharp APM CLI, you acknowledge and agree that the tool is provided "as is" without warranty of any kind, express or implied. We do not accept liability for any direct, indirect, incidental, special, exemplary, or consequential damages arising from the use of the software. Please read the license in the NuGet package for further details. ### Ready to Dive In? Check out the [quick start guide](https://razorsharp.dev/Documentation/quick-start) to see how easy it is to get started. I'm excited to continue this journey with you, enhancing .NET application performance and security monitoring. Thanks for being part of the community! --- *Feel free to share your feedback and suggestions in the comments. Let’s make RazorSharp even better together!* If you found this article helpful, please give it a like and share it with your network. Follow me for more updates and insights on .NET application performance and security monitoring. #RazorSharp #dotnet #APM #Security #DevOps #NewRelease #BetaToEnterprise
jwtiller_c47bdfa134adf302
1,890,516
Backblaze B2: How to store a file
Backblaze B2 is a cloud storage solution offering scalable and affordable data storage for...
0
2024-06-16T19:15:28
https://dev.to/artiumws/backblaze-b2-how-to-store-a-file-3obj
webdev, programming, tutorial, cloud
Backblaze B2 is a cloud storage solution offering scalable and affordable data storage for individuals, developers, and businesses. Backblaze's B2 service aims for simplicity and cost-effectiveness. ![Backblaze's price comparison](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eetb5uwr96osf9jgpuxg.png) However, users may find the documentation cumbersome to understand, which can be a hurdle for those new to the platform. ## Intro This article introduces how to use the Backblaze cloud storage solution to store private user files that can only be accessed by their owner. This article will cover: - How to set up a bucket for your application - How to authorise your application to access the bucket - Upload a file to your bucket - Download a file for a user with the right access ## Bucket When setting up the bucket, make sure the following options are checked: - Private This will ensure your files won’t be accessible without the correct permission - Encryption This will ensure your files are properly encrypted. Make sure there is no sensitive information in the name or metadata of the files. Those won’t be encrypted! - Object lock (optional) This will ensure your files won’t be altered for the selected timeframe. In my case that wasn’t needed, so I left it unchecked ![Backblaze create bucket](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0bpnlpfpr1anp27x4xl2.png) ## Application keys To access your bucket you will need to set up application keys. You can use the master application key, but I wouldn't advise it as it has full rights on all buckets. You should consider creating a dedicated application key for your bucket; this will prevent any interaction with other buckets. ![Backblaze application key](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n19tso7zs534b4cbt83v.png) Once created you should have access to your application key id and application key. Save them for later use as they will be hidden as soon as you leave the page! 
If that happens, no worries: just delete and recreate your application key. ## Authorise your application To set up authorisation in your application you will need to call the following endpoint. **Example** `https://api.backblazeb2.com/b2api/v3/b2_authorize_account` **Authorisation header** To access this endpoint you will need the application key id and application key set up earlier. Concatenate them into a string, encode it to Base64, and prefix it with `Basic`. Here is a pseudo code example: `Basic Base64({application_key_id}:{application_key})` **Returned data** This endpoint will return data related to your application key. Here is the list of required data for the next steps: - Api url - Bucket id - Bucket name - Authorisation token (1) from application key - Download url **More info** https://www.backblaze.com/apidocs/b2-authorize-account ## Upload a file Uploading a file is done in 2 steps: 1. Get the upload url 2. Upload the file ### 1) Get the upload url This endpoint will return the data needed to upload your files. You will need the Api url, Bucket id and Authorisation token from the previous endpoint. **Example** `{api_url}/b2api/v3/b2_get_upload_url?bucketId={bucket_id}` **Authorisation header** Pass the Authorisation token **from application key** to the `Authorization` header: `Authorization: {authorisation_token_1}` **Returned data** This endpoint will return data required to upload a file: - Upload url - Authorisation token (2) to upload a file **More info** https://www.backblaze.com/apidocs/b2-get-upload-url ### 2) Upload a file This endpoint will upload a file to your bucket. You will need the Upload url and Authorisation token from the previous endpoint. **Example** `{upload_url}` Use the url previously returned. **Authorisation header** Pass the Authorisation token **to upload a file** directly to the `Authorization` header: `Authorization: {authorisation_token_2}` **Headers** You will need to pass headers related to your file metadata. 
- `X-Bz-File-Name` The name of your file in the bucket. NB: In case you want to separate your files per user you can use this syntax: `{user_id}/{file_name}` This will store the file related to your user in a dedicated folder to prevent other users accessing it. - `Content-Type` The MIME type of your file. eg: `image/png` - `Content-Length` The size of your file in bytes. - `X-Bz-Content-Sha1` The SHA-1 of your file, used to ensure the integrity of the file. This needs to be generated in your code. In a development environment you can pass `do_not_verify`; this will skip the integrity validation. **More info** https://www.backblaze.com/apidocs/b2-upload-file ## Download a file Downloading a file is done in 2 steps: 1. Get the download authorisation 2. Get the file ### 1) Get the download authorisation This endpoint will generate a download authorisation for one or more files. You will need the Api url, Authorisation token and Bucket id from the first authorisation endpoint. **Example** `{api_url}/b2api/v3/b2_get_download_authorization?bucketId={bucket_id}` **Authorisation header** Pass the Authorisation token **from application key** to the `Authorization` header: `Authorization: {authorisation_token_1}` **Query params** This endpoint requires some more query params to work properly: - `fileNamePrefix` The prefix (folder) previously used to store our files. In our case that was the `user_id`. This will generate an authorisation only for the chosen folder. - `validDurationInSeconds` The number of seconds before your authorisation expires. 
After passing those query params your endpoint will look something like: `{api_url}/b2api/v3/b2_get_download_authorization?bucketId={bucket_id}&fileNamePrefix={user_id}&validDurationInSeconds={duration_in_sec}` **Returned data** This endpoint will return data required to download a file: - Authorisation token (3) to download a file - File name prefix (in our case the user id) **More info** https://www.backblaze.com/apidocs/b2-get-download-authorization ### 2) Get a file by name This endpoint will download a file from your bucket based on its name. You will need the Download url and Bucket name from the first authorisation endpoint, plus the Authorisation token and File name prefix (in our case the user id) from the previous endpoint. **Example** `{download_url}/file/{bucket_name}/{user_id}/{file_name}?Authorization={authorisation_token_3}` This link can be used to fetch a file for a specific user with an expiration limit. **More info** https://www.backblaze.com/apidocs/b2-download-file-by-name ## Schema For a better overview of the implementation, here is an explanatory schema: ![backblaze upload download schema](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hwxv2k3birh2u4vsf4tx.png)
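To make the first authorisation step concrete, here is a minimal TypeScript sketch (Node.js) of building and using the `Basic` header. The function names and key values are made up for illustration; only the endpoint URL comes from the article:

```typescript
// Build the "Basic" Authorization header value for b2_authorize_account.
// The application key id and application key come from the Backblaze console.
function basicAuthHeader(applicationKeyId: string, applicationKey: string): string {
  const raw = `${applicationKeyId}:${applicationKey}`;
  return "Basic " + Buffer.from(raw, "utf-8").toString("base64");
}

// Example: call the authorisation endpoint with the built header
// (requires Node 18+ for the global fetch API).
async function authorizeAccount(keyId: string, key: string): Promise<unknown> {
  const response = await fetch(
    "https://api.backblazeb2.com/b2api/v3/b2_authorize_account",
    { headers: { Authorization: basicAuthHeader(keyId, key) } }
  );
  return response.json();
}
```

The response of `authorizeAccount` would then carry the api url, bucket info, and authorisation token (1) used by the later steps.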
artiumws
1,890,509
SteamVR Overlay with Unity: Follow Device
Follow the HMD Overlay following the HMD Remove the position code First,...
27,740
2024-06-16T19:05:18
https://dev.to/kurohuku/part-6-follow-device-33i
unity3d, steamvr, openvr, vr
## Follow the HMD ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ql6lml9c94kxid987g5b.gif) *Overlay following the HMD* ### Remove the position code First, remove the absolute position code we added in the previous part. ```diff private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); SetOverlaySize(overlayHandle, 0.5f); - var position = new Vector3(0, 2, 3); - var rotation = Quaternion.Euler(0, 0, 45); - SetOverlayTransformAbsolute(overlayHandle, position, rotation); ShowOverlay(overlayHandle); ... ``` ### Device Index In SteamVR, connected devices are identified by a device index that is automatically allocated by the system (read the wiki for details). For the HMD, it is defined as [OpenVR.k_unTrackedDeviceIndex_Hmd](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.OpenVR.html#Valve_VR_OpenVR_k_unTrackedDeviceIndex_Hmd) and is always 0. ### Prepare position and rotation Let’s display the overlay 2 m ahead of the HMD (along the Z-axis). As in the previous part, prepare `position` and `rotation` variables. ```diff private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); + var position = new Vector3(0, 0, 2); + var rotation = Quaternion.Euler(0, 0, 0); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); SetOverlaySize(overlayHandle, 0.5f); ShowOverlay(overlayHandle); } ``` ### Set relative position based on the HMD Use [SetOverlayTransformTrackedDeviceRelative()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_SetOverlayTransformTrackedDeviceRelative_System_UInt64_System_UInt32_Valve_VR_HmdMatrix34_t__) to set the relative position based on the HMD. 
(read the [wiki](https://github.com/ValveSoftware/openvr/wiki/IVROverlay::SetOverlayTransformTrackedDeviceRelative) for details) ```diff private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var position = new Vector3(0, 0, 2); var rotation = Quaternion.Euler(0, 0, 0); var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation); var matrix = rigidTransform.ToHmdMatrix34(); + var error = OpenVR.Overlay.SetOverlayTransformTrackedDeviceRelative(overlayHandle, OpenVR.k_unTrackedDeviceIndex_Hmd, ref matrix); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to set overlay position: " + error); + } var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); SetOverlaySize(overlayHandle, 0.5f); ShowOverlay(overlayHandle); } ``` Pass the HMD device index (`OpenVR.k_unTrackedDeviceIndex_Hmd`) and the transformation matrix. Run the program, and check the overlay is shown 2 m ahead of the HMD. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ql6lml9c94kxid987g5b.gif) ### Organize code Move the relative position code into `SetOverlayTransformRelative()`. 
```diff private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var position = new Vector3(0, 0, 2); var rotation = Quaternion.Euler(0, 0, 0); - var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation); - var matrix = rigidTransform.ToHmdMatrix34(); - var error = OpenVR.Overlay.SetOverlayTransformTrackedDeviceRelative(overlayHandle, OpenVR.k_unTrackedDeviceIndex_Hmd, ref matrix); - if (error != EVROverlayError.None) - { - throw new Exception("Failed to set overlay position: " + error); - } + SetOverlayTransformRelative(overlayHandle, OpenVR.k_unTrackedDeviceIndex_Hmd, position, rotation); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); SetOverlaySize(overlayHandle, 0.5f); ShowOverlay(overlayHandle); } ... + // Pass deviceIndex as argument. + private void SetOverlayTransformRelative(ulong handle, uint deviceIndex, Vector3 position, Quaternion rotation) + { + var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation); + var matrix = rigidTransform.ToHmdMatrix34(); + var error = OpenVR.Overlay.SetOverlayTransformTrackedDeviceRelative(handle, deviceIndex, ref matrix); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to set overlay position: " + error); + } + } ``` ## Follow the controller ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/08cvlc1tipb7w8ibg931.gif) *Overlay following the controller* Use controller device index instead of HMD to make the overlay follow a controller. ### Get the controller device index Get the left controller’s device index with [GetTrackedDeviceIndexForControllerRole()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVRSystem.html#Valve_VR_CVRSystem_GetTrackedDeviceIndexForControllerRole_Valve_VR_ETrackedControllerRole_) of `OpenVR.System`. 
```diff private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var position = new Vector3(0, 0, 2); var rotation = Quaternion.Euler(0, 0, 0); + var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand); SetOverlayTransformRelative(overlayHandle, OpenVR.k_unTrackedDeviceIndex_Hmd, position, rotation); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); SetOverlaySize(overlayHandle, 0.5f); ShowOverlay(overlayHandle); } ``` The argument is: Left controller: `ETrackedControllerRole.LeftHand` Right controller: `ETrackedControllerRole.RightHand` If it fails to get the device index, for example because the controller is disconnected, `GetTrackedDeviceIndexForControllerRole()` returns `k_unTrackedDeviceIndexInvalid`. ### Follow the controller Now that we have the left controller index, let's make the overlay follow the controller. Pass the controller index to `SetOverlayTransformRelative()`, which we previously created for the HMD. ```diff private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var position = new Vector3(0, 0, 2); var rotation = Quaternion.Euler(0, 0, 0); var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand); - SetOverlayTransformRelative(overlayHandle, OpenVR.k_unTrackedDeviceIndex_Hmd, position, rotation); + if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid) + { + SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation); + } var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); SetOverlaySize(overlayHandle, 0.5f); ShowOverlay(overlayHandle); } ``` Make sure the left controller shows as connected in the SteamVR status window, then run the program. The overlay should follow the controller instead of the HMD. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/08cvlc1tipb7w8ibg931.gif) ## Adjust overlay position To make a watch application, we will adjust the overlay position on the left wrist. Make position parameters editable at runtime on the Unity editor. ### Add member variables Add size, position, and rotation variables as class members. Use Range() attribute to show sliders on the inspector. ```diff public class WatchOverlay : MonoBehaviour { private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid; + [Range(0, 0.5f)] public float size = 0.5f; + [Range(-0.2f, 0.2f)] public float x; + [Range(-0.2f, 0.2f)] public float y; + [Range(-0.2f, 0.2f)] public float z; + [Range(0, 360)] public int rotationX; + [Range(0, 360)] public int rotationY; + [Range(0, 360)] public int rotationZ; ... ``` Sliders are shown on the inspector. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0rwcmeuudxeu6qfsohll.png) ### Replace variables in the code Replace the existing size and position variables in the code with the added member variables. 
```diff private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); - var position = new Vector3(0, 0, 2); - var rotation = Quaternion.Euler(0, 0, 0); + var position = new Vector3(x, y, z); + var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ); var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand); if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid) { SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation); } var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); - SetOverlaySize(overlayHandle, 0.5f); + SetOverlaySize(overlayHandle, size); ShowOverlay(overlayHandle); } ``` ### Update size and position in Update() Make the size and position editable at runtime by adding code to `Update()`. Note that this code will be deleted later. It is temporary code for determining the new position and rotation. 
```diff
private void Start()
{
    InitOpenVR();
    overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

    var position = new Vector3(x, y, z);
    var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
    var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
    if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
    {
        SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
    }

    var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
    SetOverlayFromFile(overlayHandle, filePath);
    SetOverlaySize(overlayHandle, size);
    ShowOverlay(overlayHandle);
}

+ private void Update()
+ {
+     SetOverlaySize(overlayHandle, size);
+
+     var position = new Vector3(x, y, z);
+     var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
+     var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
+     if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
+     {
+         SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
+     }
+ }
...
```

Run the program and confirm that the Inspector sliders change the overlay size and position at runtime.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/79hvoidyb6dcwpmkg0ih.png)

### Adjust overlay position

Move the sliders to adjust the overlay position onto the left wrist. I recommend changing the sliders from the desktop window that is opened in the SteamVR dashboard.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lqjbfksjjm0hy3tjqwtx.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rltzoo88vrrp6rkbvf4f.gif)
*Control the Unity editor from the SteamVR dashboard.*

Another way is to watch the HMD view on the desktop while adjusting the parameters.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tv8nbl7qw0hn8nqqa4bb.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/htyzqso4fvv3pulq9w8m.gif)
*It’s helpful if you don’t want to put on the HMD.*

Here are sample parameters.

```
size = 0.08
x = -0.044
y = 0.015
z = -0.131
rotationX = 154
rotationY = 262
rotationZ = 0
```

When adjusting is done, right-click the WatchOverlay component name and select Copy Component.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j3janr5p3zvi3729050q.png)

Stop the program, right-click the WatchOverlay component again, and paste the copied values with Paste Component Value.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/28akbb7hcyfwqkil2q47.png)

Run the program and check that the overlay is on the left wrist.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4hkf0bknwwnbx2vfngq.png)

### Remove temporary code

Remove the code from `Update()`.

```diff
private void Update()
{
-   SetOverlaySize(overlayHandle, size);
-
-   var position = new Vector3(x, y, z);
-   var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
-   var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
-   if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
-   {
-       SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
-   }
}
```

## When the controller is not connected

Currently, the controller must be connected at launch because we get the controller device index in `Start()`. Move the device-index lookup from `Start()` to `Update()` to handle controllers that connect or disconnect while the application is running.
```diff
private void Start()
{
    InitOpenVR();
    overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

-   var position = new Vector3(x, y, z);
-   var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
-   var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
-   if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
-   {
-       SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
-   }

    var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
    SetOverlayFromFile(overlayHandle, filePath);
    SetOverlaySize(overlayHandle, size);
    ShowOverlay(overlayHandle);
}

+ private void Update()
+ {
+     var position = new Vector3(x, y, z);
+     var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
+     var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
+     if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
+     {
+         SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
+     }
+ }
```

Now it works even if the controller is connected after launch.
## Final code

```cs
using UnityEngine;
using Valve.VR;
using System;

public class WatchOverlay : MonoBehaviour
{
    private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid;

    [Range(0, 0.5f)] public float size;
    [Range(-0.2f, 0.2f)] public float x;
    [Range(-0.2f, 0.2f)] public float y;
    [Range(-0.2f, 0.2f)] public float z;
    [Range(0, 360)] public int rotationX;
    [Range(0, 360)] public int rotationY;
    [Range(0, 360)] public int rotationZ;

    private void Start()
    {
        InitOpenVR();
        overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

        var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
        SetOverlayFromFile(overlayHandle, filePath);

        SetOverlaySize(overlayHandle, size);
        ShowOverlay(overlayHandle);
    }

    private void Update()
    {
        var position = new Vector3(x, y, z);
        var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
        var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
        if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
        {
            SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
        }
    }

    private void OnApplicationQuit()
    {
        DestroyOverlay(overlayHandle);
    }

    private void OnDestroy()
    {
        ShutdownOpenVR();
    }

    private void InitOpenVR()
    {
        if (OpenVR.System != null) return;

        var error = EVRInitError.None;
        OpenVR.Init(ref error, EVRApplicationType.VRApplication_Overlay);
        if (error != EVRInitError.None)
        {
            throw new Exception("Failed to initialize OpenVR: " + error);
        }
    }

    private void ShutdownOpenVR()
    {
        if (OpenVR.System != null)
        {
            OpenVR.Shutdown();
        }
    }

    private ulong CreateOverlay(string key, string name)
    {
        var handle = OpenVR.k_ulOverlayHandleInvalid;
        var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to create overlay: " + error);
        }

        return handle;
    }

    private void DestroyOverlay(ulong handle)
    {
        if (handle != OpenVR.k_ulOverlayHandleInvalid)
        {
            var error = OpenVR.Overlay.DestroyOverlay(handle);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to dispose overlay: " + error);
            }
        }
    }

    private void SetOverlayFromFile(ulong handle, string path)
    {
        var error = OpenVR.Overlay.SetOverlayFromFile(handle, path);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to draw image file: " + error);
        }
    }

    private void ShowOverlay(ulong handle)
    {
        var error = OpenVR.Overlay.ShowOverlay(handle);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to show overlay: " + error);
        }
    }

    private void SetOverlaySize(ulong handle, float size)
    {
        var error = OpenVR.Overlay.SetOverlayWidthInMeters(handle, size);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to set overlay size: " + error);
        }
    }

    private void SetOverlayTransformAbsolute(ulong handle, Vector3 position, Quaternion rotation)
    {
        var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation);
        var matrix = rigidTransform.ToHmdMatrix34();
        var error = OpenVR.Overlay.SetOverlayTransformAbsolute(handle, ETrackingUniverseOrigin.TrackingUniverseStanding, ref matrix);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to set overlay position: " + error);
        }
    }

    private void SetOverlayTransformRelative(ulong handle, uint deviceIndex, Vector3 position, Quaternion rotation)
    {
        var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation);
        var matrix = rigidTransform.ToHmdMatrix34();
        var error = OpenVR.Overlay.SetOverlayTransformTrackedDeviceRelative(handle, deviceIndex, ref matrix);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to set overlay position: " + error);
        }
    }
}
```

Here, we attached the overlay to the left wrist. In the next part, let’s draw a Unity camera’s output to the overlay instead of a static image file.
kurohuku
1,890,514
Machine Learning: The Crystal Ball of Data
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-16T19:09:23
https://dev.to/jblazquezmartin/machine-learning-the-crystal-ball-of-data-53ca
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

Machine learning is the crystal ball of data. It sifts through vast amounts of information, learning and predicting future trends, much like nature evolves patterns over time but in record time, allowing us to find relationships in data that are beyond our comprehension.

## Additional Context

Machine learning is like a superpower for uncovering hidden patterns and relationships in data, almost like magic. It can detect subtle trends and connections that human minds often overlook, enabling breakthroughs in fields like healthcare, finance, and environmental science. By continuously learning and adapting, machine learning models provide insights that drive innovation and decision-making, transforming how we understand and interact with the world.

Just as the Fibonacci sequence, the golden ratio, and Benford's law are human discoveries that reveal the magic implicit in nature, what new natural mechanisms can we discover with the powerful tool of machine learning?
jblazquezmartin
1,890,300
Identifiers 101: Understanding and Implementing UUIDs and ULIDs
At first glance, UUIDs (Universally Unique Identifiers) and ULIDs (Universally Unique...
0
2024-06-16T19:08:16
https://dev.to/siddhantkcode/identifiers-101-understanding-and-implementing-uuids-and-ulids-2kc6
database, programming, security, computerscience
UUIDs (Universally Unique Identifiers) and ULIDs (Universally Unique Lexicographically Sortable Identifiers) are widely used identifiers in databases and distributed systems. Each has characteristics that make it suitable for different scenarios. In this article, we’ll delve into the features of UUIDs and ULIDs and discuss when to use each. If you are currently using an auto-increment primary key without much consideration, this article might give you some valuable insights.

### Comparison Table

| Feature | Auto Increment | UUID v4 | UUID v7 | ULID |
|---------------------------|-----------------|-----------------------------|-----------------------------|----------------------------|
| **Data Type (MySQL)** | INT, BIGINT | CHAR(36) | CHAR(36) | CHAR(26) |
| **Sort** | ❌ | ❌ | ✅ | ✅ |
| **Size** | 4 bytes (for INT) | 16 bytes | 16 bytes | 16 bytes |
| **Example** | 1, 2, 3, ... | d61f91c3-d3bf-4b34-9894-e21bfa277ca4 | 019020e0-cd2a-730a-a8ea-11ec3ddc847f | 01J0GCBEEDPE3VDR0NBJ8TM8NQ |

### If You Don't Want to Use Auto Increment Type

Auto Increment is a mechanism that automatically generates a unique identifier in the database, typically a numeric column that increments with each new record. However, there are significant security and privacy concerns:

1. **Predictability**: Since Auto Increment IDs are sequential, it is easy to predict the next ID. This increases the risk that an attacker could infer the internal structure of the system and attempt unauthorized access.
2. **Risk of Information Leakage**: Sequential IDs can reveal patterns in the company’s activities. For example, a competitor might analyze the sequential IDs to infer the frequency of product releases or user registrations.

Examples:

- A competitor figured out how often a company releases new products by analyzing the sequential IDs. This allowed them to predict release timings and adjust their strategy accordingly.
- The sequential IDs used to manage payments could reveal the number of user registrations and paid subscriptions if exposed.

### UUID (Universally Unique Identifier)

A UUID is a 128-bit identifier used widely in distributed systems, with multiple versions available, each having a different generation method.

#### UUID v4

UUID v4 is commonly used due to its simplicity and high uniqueness. Apart from a few fixed bits, it is a random 128-bit value, making collisions extremely unlikely.

**Generation Method**:

- **Set Version Bits**: Set the 4 version-field bits to `0100`.
- **Set Variant Bits**: Set the 2 variant-field bits to `10`.
- The remaining 122 bits are random.

Here’s a code snippet to generate a UUID v4 in Go:

```go
package main

import (
	"fmt"

	"github.com/google/uuid"
)

func main() {
	uuidV4 := uuid.New()
	fmt.Println(uuidV4)
}
```

**Example Output**:

```
d61f91c3-d3bf-4b34-9894-e21bfa277ca4
```

#### UUID v7

UUID v7 is a recent proposal designed to be sortable by incorporating a timestamp into the identifier.

**Generation Method**:

- **Get Timestamp**: Obtain the current Unix timestamp in milliseconds and store it in the first 48 bits.
- **Generate Random Bits**: Fill the remaining 80 bits with random values.
- **Set Version Bits**: Set the version field to `0111`.
Here’s how to generate a UUID v7 in Go:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"time"
)

type UUID [16]byte

func NewUUIDv7() UUID {
	var uuid UUID
	timestamp := uint64(time.Now().UnixNano() / int64(time.Millisecond))
	uuid[0] = byte(timestamp >> 40)
	uuid[1] = byte(timestamp >> 32)
	uuid[2] = byte(timestamp >> 24)
	uuid[3] = byte(timestamp >> 16)
	uuid[4] = byte(timestamp >> 8)
	uuid[5] = byte(timestamp)

	randomBytes := make([]byte, 10)
	if _, err := rand.Read(randomBytes); err != nil {
		panic(err)
	}
	copy(uuid[6:], randomBytes)

	// Set version (7) and variant bits (2 MSB as 10)
	uuid[6] = (uuid[6] & 0x0f) | (7 << 4)
	uuid[8] = (uuid[8] & 0x3f) | 0x80

	return uuid
}

func main() {
	uuidV7 := NewUUIDv7()
	fmt.Printf("%x\n", uuidV7)
}
```

**Example Output**:

```
019020e0cd2a730aa8ea11ec3ddc847f
```

**Extracting Timestamps from UUID v7**:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"time"
)

type UUID [16]byte

func NewUUIDv7() UUID {
	var uuid UUID
	timestamp := uint64(time.Now().UnixNano() / int64(time.Millisecond))
	uuid[0] = byte(timestamp >> 40)
	uuid[1] = byte(timestamp >> 32)
	uuid[2] = byte(timestamp >> 24)
	uuid[3] = byte(timestamp >> 16)
	uuid[4] = byte(timestamp >> 8)
	uuid[5] = byte(timestamp)

	randomBytes := make([]byte, 10)
	if _, err := rand.Read(randomBytes); err != nil {
		panic(err)
	}
	copy(uuid[6:], randomBytes)

	// Set version (7) and variant bits (2 MSB as 10)
	uuid[6] = (uuid[6] & 0x0f) | (7 << 4)
	uuid[8] = (uuid[8] & 0x3f) | 0x80

	return uuid
}

func ExtractTimestampFromUUIDv7(uuid UUID) time.Time {
	timestamp := uint64(uuid[0])<<40 | uint64(uuid[1])<<32 | uint64(uuid[2])<<24 |
		uint64(uuid[3])<<16 | uint64(uuid[4])<<8 | uint64(uuid[5])
	return time.Unix(0, int64(timestamp)*int64(time.Millisecond))
}

func (uuid UUID) String() string {
	return fmt.Sprintf("%08x-%04x-%04x-%04x-%012x", uuid[0:4], uuid[4:6], uuid[6:8], uuid[8:10], uuid[10:16])
}

func main() {
	uuid := NewUUIDv7()
	fmt.Println(uuid.String())

	timestamp := ExtractTimestampFromUUIDv7(uuid)
	fmt.Println(timestamp)
}
```

**Example Output**:

```
019020e0-cd2a-730a-a8ea-11ec3ddc847f
2024-06-16 11:48:41.898 +0000 UTC
```

### ULID (Universally Unique Lexicographically Sortable Identifier)

ULID is designed to be a sortable and human-readable alternative to UUIDs, with a focus on chronological order.

**Generation Method**:

- **Get Timestamp**: Obtain the current timestamp in milliseconds and store it in the first 48 bits.
- **Generate Random Values**: Fill the remaining 80 bits with random values.
- **Encoding**: Encode the 128 bits using Crockford’s Base32, yielding a 26-character string.

Here’s how to generate a ULID in Go:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"

	"github.com/oklog/ulid/v2"
)

func main() {
	entropy := ulid.Monotonic(rand.New(rand.NewSource(time.Now().UnixNano())), 0)
	ulidInstance := ulid.MustNew(ulid.Timestamp(time.Now()), entropy)
	fmt.Println(ulidInstance)

	// Extracting and printing the embedded timestamp
	timestamp := time.Unix(0, int64(ulidInstance.Time())*int64(time.Millisecond))
	fmt.Println(timestamp)
}
```

**Example Output**:

```
01HZYC2028WMB3NJ16WCV9Z9E0
2024-06-09 11:27:38.056 +0000 UTC
```

### Performance Considerations and Recommendations

While UUID v4 is purely random and does not support sorting, UUID v7 and ULID provide sortable identifiers based on timestamps. However, using UUIDs and ULIDs has performance implications compared to auto-incrementing numeric types.

#### If You Do Not Want to Use UUID or ULID

Even if UUID or ULID is adopted to avoid the Auto Increment problems described above, these identifiers introduce issues of their own. To summarize:

1. **UUID v4**:
   - Completely random values are not sortable, which degrades insert and index performance.
2. **UUID v7 / ULID**:
   - Worse performance than auto-increment numeric IDs.
   - The embedded timestamp leaks the generation time.

To illustrate with a concrete example, let's take the case of a large-scale e-commerce site that handles millions of products.
**Background**:

- The database stores product details, user purchase history, reviews, and more. More data is added every day, and query performance is critical.

**Challenges**:

- **Performance**: Database performance is critical due to the large amount of data being added. Product searches and user purchase-history lookups are especially frequent.
- **Privacy**: Leaked purchase-history or review timestamps can reveal patterns of user behavior.

**UUID v4 Issues**:

- Rows are inserted in random key order, leading to index fragmentation and poor query performance.

**UUID v7/ULID Issues**:

- Insertion order is preserved, but a string-typed ID is larger than a numeric one, increasing the size of the index.
- Because the ID includes a timestamp, the time at which the data was generated can be deduced, which is risky from a user-privacy perspective.

**Performance Concerns**:

- **UUID v4**: Random writes can degrade performance due to reduced cache hit rates.
- **UUID v7/ULID**: Slightly better performance than UUID v4 but still less efficient than auto-increment numbers, and the embedded timestamps can leak generation times.

**Recommendation**: For large-scale applications, consider using auto-increment numeric types for primary keys to ensure optimal performance. For public-facing identifiers, generate a separate random string (such as a UUID) to enhance security and privacy.

### Conclusion

Choosing the right identifier depends on your specific use case. While UUIDs and ULIDs offer unique advantages, they also come with performance and privacy trade-offs. By understanding these trade-offs, you can make informed decisions that balance security, performance, and usability.

For further reading and implementation details, refer to the official documentation and libraries for UUIDs and ULIDs. Implementing these identifiers thoughtfully can significantly enhance the robustness and security of your systems.
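The recommendation above — an internal auto-increment primary key combined with a separate random public identifier — can be sketched as follows. This is a minimal illustration, not a full data-access layer: the `Product` struct, the `nextID` counter standing in for the database's auto-increment, and the hex-encoded 128-bit token (equivalent in spirit to a UUID v4) are all hypothetical choices for the example.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// Product keeps two identifiers: a sequential internal key for index
// locality, and a random 128-bit token that is safe to expose publicly.
type Product struct {
	ID       int64  // internal auto-increment primary key (never exposed)
	PublicID string // random public identifier (exposed to clients)
	Name     string
}

var nextID int64 // stands in for the database's auto-increment counter

func newPublicID() string {
	b := make([]byte, 16) // 128 random bits, like a UUID v4 payload
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return hex.EncodeToString(b)
}

func NewProduct(name string) Product {
	nextID++
	return Product{ID: nextID, PublicID: newPublicID(), Name: name}
}

func main() {
	p := NewProduct("keyboard")
	fmt.Printf("/products/%s\n", p.PublicID) // clients never see p.ID
}
```

Joins and index lookups use the compact, sequential `ID`, while URLs and API responses carry only the unpredictable `PublicID`.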
siddhantkcode
1,890,513
The Halting Problem
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-16T19:06:52
https://dev.to/magodyboy/the-halting-problem-41c3
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

The halting problem asks whether a machine can exist that determines, for any program, whether it will halt or run forever. No such machine can exist: if we build a program that negates the machine's answer and run it on itself, we always reach a contradiction.

## Additional Context

Proved impossible by Alan Turing. Explaining this concept takes a few composed functions:

1. **Extended Explanation**:
   - Assume a machine "H" that can determine whether any program "P" halts on a given input.
   - Use "H" to create a program "D" that takes the code of a program "P" as input.
   - "D(P)" calls "H(P, P)", asking whether "P" halts when run on its own code.
   - If "H(P, P)" says "yes" ("P" halts on "P"), "D(P)" enters an infinite loop.
   - If "H(P, P)" says "no" ("P" does not halt on "P"), "D(P)" halts.
2. **Paradox:**
   - Pass "D" to itself as input, "D(D)":
     - If "H(D, D)" says that "D(D)" halts, "D(D)" enters an infinite loop (contradiction).
     - If "H(D, D)" says that "D(D)" does not halt, "D(D)" halts (another contradiction).
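The construction above can be sketched in code. This is an illustrative sketch, not Turing's formal proof: Go functions stand in for encoded programs, and `claimsAllLoop` is a hypothetical (deliberately wrong) decider used to show how `D` refutes any candidate `H`.

```go
package main

import "fmt"

// Program is a function that can be run on another program (self-application).
type Program func(Program)

// makeD builds the diagonal program D from a claimed halting decider H,
// where H(P) is supposed to answer "does P(P) halt?".
func makeD(H func(Program) bool) Program {
	var D Program
	D = func(P Program) {
		if H(P) {
			select {} // H said "P(P) halts", so loop forever to contradict it
		}
		// H said "P(P) does not halt", so halt immediately to contradict it
	}
	return D
}

func main() {
	// A candidate "decider" that claims every program loops forever.
	claimsAllLoop := func(P Program) bool { return false }

	D := makeD(claimsAllLoop)
	D(D) // returns immediately, refuting the decider's claim about D(D)
	fmt.Println("D(D) halted, contradicting the decider")
}
```

Whatever candidate `H` we plug in, `D(D)` does the opposite of `H`'s prediction, so no correct `H` can exist.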
magodyboy
1,890,512
SteamVR Overlay with Unity: Dashboard Overlay
Let’s create a setting screen on the SteamVR dashboard. It switches which hand to display the watch...
27,740
2024-06-16T19:06:40
https://dev.to/kurohuku/part-8-dashboard-overlay-1809
unity3d, steamvr, openvr, vr
Let’s create a settings screen on the SteamVR dashboard. It will switch which hand the watch overlay is displayed on.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/namc1joyp1kyuqg0ulv4.gif)

## Create dashboard overlay

A dashboard overlay is an overlay displayed on the SteamVR dashboard. We will use it as the settings screen.

### Create new script

Create `DashboardOverlay.cs` inside the Scripts folder and copy the following code.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hnuterx5qpltp3h0if05.png)

```cs
using UnityEngine;
using Valve.VR;
using System;

public class DashboardOverlay : MonoBehaviour
{
    private void Start()
    {
    }
}
```

### Put the script into the scene

In the hierarchy, **right click > Create Empty** to create a new game object named `DashboardOverlay`. Drag `DashboardOverlay.cs` onto the object.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/977ayctboi1x1gnt32lv.png)

### Prepare overlay handles

A dashboard overlay consists of a main overlay and a thumbnail overlay, and each has its own handle. The thumbnail overlay is the small overlay at the bottom of the dashboard used to switch between overlays.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/17vxwrmbmws71c2uh3d4.png)
*The red rectangle is the thumbnail overlay. The large "Right Hand" button is on the main overlay.*

Create two variables for the two overlay handles in `DashboardOverlay.cs`.
```diff
using UnityEngine;
using Valve.VR;
using System;

public class DashboardOverlay : MonoBehaviour
{
+   private ulong dashboardHandle = OpenVR.k_ulOverlayHandleInvalid;
+   private ulong thumbnailHandle = OpenVR.k_ulOverlayHandleInvalid;
}
```

### Create dashboard overlay

Create the dashboard overlay with [CreateDashboardOverlay()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_CreateDashboardOverlay_System_String_System_String_System_UInt64__System_UInt64__) (read the [wiki](https://github.com/ValveSoftware/openvr/wiki/IVROverlay::CreateDashboardOverlay) for details). Here, we set `key` to `"WatchDashboardKey"` and `name` to `"Watch Setting"`. `CreateDashboardOverlay()` creates both overlays and writes their handles into the variables passed as the 3rd and 4th arguments.

## Create utility class

We want to run the program, but we must initialize OpenVR before using the API, and that initialization code lives in `WatchOverlay.cs`. If `DashboardOverlay.cs` runs earlier than that initialization, it will result in an error. There are several ways to handle this; here, we will create a utility class holding common code such as OpenVR initialization, which other classes can call.

### Create new script

Create `OpenVRUtil.cs` inside the `Scripts` folder. Copy the following code.

```cs
using UnityEngine;
using Valve.VR;
using System;

namespace OpenVRUtil
{
    public static class System
    {
    }
}
```

### Move OpenVR initialization

Move `InitOpenVR()` from `WatchOverlay.cs` to `OpenVRUtil.cs`. Add `static` to allow access from other classes.
`WatchOverlay.cs`

```diff
private void OnDestroy()
{
    DestroyOverlay(overlayHandle);
    ShutdownOpenVR();
}

- private void InitOpenVR()
- {
-     if (OpenVR.System != null) return;
-
-     var error = EVRInitError.None;
-     OpenVR.Init(ref error, EVRApplicationType.VRApplication_Overlay);
-     if (error != EVRInitError.None)
-     {
-         throw new Exception("Failed to initialize OpenVR: " + error);
-     }
- }

private void ShutdownOpenVR()
{
    if (OpenVR.System != null)
    {
        OpenVR.Shutdown();
    }
}
...
```

`OpenVRUtil.cs`

```diff
using UnityEngine;
using Valve.VR;
using System;

namespace OpenVRUtil
{
    public static class System
    {
+       // Add as public static method
+       public static void InitOpenVR()
+       {
+           if (OpenVR.System != null) return;
+
+           var error = EVRInitError.None;
+           OpenVR.Init(ref error, EVRApplicationType.VRApplication_Overlay);
+           if (error != EVRInitError.None)
+           {
+               throw new Exception("Failed to initialize OpenVR: " + error);
+           }
+       }
    }
}
```

### Move OpenVR cleanup

Similarly, move `ShutdownOpenVR()` as a `static` method.

`WatchOverlay.cs`

```diff
// ...
private void OnDestroy()
{
    DestroyOverlay(overlayHandle);
    ShutdownOpenVR();
}

- private void ShutdownOpenVR()
- {
-     if (OpenVR.System != null)
-     {
-         OpenVR.Shutdown();
-     }
- }

private ulong CreateOverlay(string key, string name)
{
    var handle = OpenVR.k_ulOverlayHandleInvalid;
    var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle);
    if (error != EVROverlayError.None)
    {
        throw new Exception("Failed to create overlay: " + error);
    }

    return handle;
}
// ...
```

`OpenVRUtil.cs`

```diff
using UnityEngine;
using Valve.VR;
using System;

namespace OpenVRUtil
{
    public static class System
    {
        public static void InitOpenVR()
        {
            if (OpenVR.System != null) return;

            var initError = EVRInitError.None;
            OpenVR.Init(ref initError, EVRApplicationType.VRApplication_Overlay);
            if (initError != EVRInitError.None)
            {
                throw new Exception("Failed to initialize OpenVR: " + initError);
            }
        }

+       public static void ShutdownOpenVR()
+       {
+           if (OpenVR.System != null)
+           {
+               OpenVR.Shutdown();
+           }
+       }
    }
}
```

### Move overlay methods

Move the overlay methods from `WatchOverlay.cs` to `OpenVRUtil.cs`, because other classes will use them later. Move all overlay methods from `CreateOverlay()` to `SetOverlayRenderTexture()` in `WatchOverlay.cs`:

- CreateOverlay()
- DestroyOverlay()
- SetOverlayFromFile()
- ShowOverlay()
- SetOverlaySize()
- SetOverlayTransformAbsolute()
- SetOverlayTransformRelative()
- FlipOverlayVertical()
- SetOverlayRenderTexture()

`WatchOverlay.cs`

```diff
private void OnDestroy()
{
    DestroyOverlay(overlayHandle);
    OpenVRUtil.System.ShutdownOpenVR();
}

- private ulong CreateOverlay(string key, string name)
- {
-     var handle = OpenVR.k_ulOverlayHandleInvalid;
-     var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle);
-     if (error != EVROverlayError.None)
-     {
-         throw new Exception("Failed to create overlay: " + error);
-     }
-
-     return handle;
- }
-
- ...
-
- private void SetOverlayRenderTexture(RenderTexture renderTexture)
- {
-     var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
-     var texture = new Texture_t
-     {
-         eColorSpace = EColorSpace.Auto,
-         eType = ETextureType.DirectX,
-         handle = nativeTexturePtr
-     };
-     var error = OpenVR.Overlay.SetOverlayTexture(overlayHandle, ref texture);
-     if (error != EVROverlayError.None)
-     {
-         throw new Exception("Failed to draw texture: " + error);
-     }
- }
```

Create a new `static class Overlay` in `OpenVRUtil.cs` and add all the methods as `public static` methods.
`OpenVRUtil.cs`

```diff
namespace OpenVRUtil
{
    public static class System
    {
        public static void InitOpenVR()
        {
            if (OpenVR.System != null) return;

            var initError = EVRInitError.None;
            OpenVR.Init(ref initError, EVRApplicationType.VRApplication_Overlay);
            if (initError != EVRInitError.None)
            {
                throw new Exception("Failed to initialize OpenVR: " + initError);
            }
        }

        public static void ShutdownOpenVR()
        {
            if (OpenVR.System != null)
            {
                OpenVR.Shutdown();
            }
        }
    }

+   public static class Overlay
+   {
+       public static ulong CreateOverlay(string key, string name)
+       {
+           var handle = OpenVR.k_ulOverlayHandleInvalid;
+           var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle);
+           if (error != EVROverlayError.None)
+           {
+               throw new Exception("Failed to create overlay: " + error);
+           }
+
+           return handle;
+       }
+
+       ...
+
+       public static void SetOverlayRenderTexture(ulong handle, RenderTexture renderTexture)
+       {
+           var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
+           var texture = new Texture_t
+           {
+               eColorSpace = EColorSpace.Auto,
+               eType = ETextureType.DirectX,
+               handle = nativeTexturePtr
+           };
+           var error = OpenVR.Overlay.SetOverlayTexture(handle, ref texture);
+           if (error != EVROverlayError.None)
+           {
+               throw new Exception("Failed to draw texture: " + error);
+           }
+       }
+   }
}
```

### Update method calls in existing code

Change the code in `WatchOverlay.cs` to call the overlay methods on `OpenVRUtil` instead of on `WatchOverlay` itself.
`WatchOverlay.cs`

```diff
using System;
using UnityEngine;
using Valve.VR;
+ using OpenVRUtil;

public class WatchOverlay : MonoBehaviour
{
    public Camera camera;
    public RenderTexture renderTexture;
    private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid;

    [Range(0, 0.5f)] public float size;
    [Range(-0.5f, 0.5f)] public float x;
    [Range(-0.5f, 0.5f)] public float y;
    [Range(-0.5f, 0.5f)] public float z;
    [Range(0, 360)] public int rotationX;
    [Range(0, 360)] public int rotationY;
    [Range(0, 360)] public int rotationZ;

    private void Start()
    {
-       InitOpenVR();
+       OpenVRUtil.System.InitOpenVR();

-       overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");
+       overlayHandle = Overlay.CreateOverlay("WatchOverlayKey", "WatchOverlay");

-       FlipOverlayVertical(overlayHandle);
-       SetOverlaySize(overlayHandle, size);
-       ShowOverlay(overlayHandle);
+       Overlay.FlipOverlayVertical(overlayHandle);
+       Overlay.SetOverlaySize(overlayHandle, size);
+       Overlay.ShowOverlay(overlayHandle);
    }

    private void Update()
    {
        var leftControllerIndex = OpenVR.System.GetTrackedDeviceIndexForControllerRole(ETrackedControllerRole.LeftHand);
        if (leftControllerIndex != OpenVR.k_unTrackedDeviceIndexInvalid)
        {
            var position = new Vector3(x, y, z);
            var rotation = Quaternion.Euler(rotationX, rotationY, rotationZ);
-           SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
+           Overlay.SetOverlayTransformRelative(overlayHandle, leftControllerIndex, position, rotation);
        }

-       SetOverlayRenderTexture(overlayHandle, renderTexture);
+       Overlay.SetOverlayRenderTexture(overlayHandle, renderTexture);
    }

    private void OnApplicationQuit()
    {
-       DestroyOverlay(overlayHandle);
+       Overlay.DestroyOverlay(overlayHandle);
    }

    private void OnDestroy()
    {
-       ShutdownOpenVR();
+       OpenVRUtil.System.ShutdownOpenVR();
    }
}
```

## Add OpenVR initialize and cleanup

We made the utility class to share the common code. Let’s go back to `DashboardOverlay.cs` to create the dashboard overlay.
Add the OpenVR initialization and cleanup code. We already call the initialize and cleanup functions in `WatchOverlay.cs`, but that is not a problem: if OpenVR is already initialized or shut down, the methods do nothing.

`DashboardOverlay.cs`

```diff
using UnityEngine;
using Valve.VR;
using System;
+ using OpenVRUtil;

public class DashboardOverlay : MonoBehaviour
{
    private ulong dashboardHandle = OpenVR.k_ulOverlayHandleInvalid;
    private ulong thumbnailHandle = OpenVR.k_ulOverlayHandleInvalid;

    private void Start()
    {
+       OpenVRUtil.System.InitOpenVR();

        var error = OpenVR.Overlay.CreateDashboardOverlay("WatchDashboardKey", "Watch Setting", ref dashboardHandle, ref thumbnailHandle);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to create dashboard overlay: " + error);
        }
    }

+   private void OnDestroy()
+   {
+       OpenVRUtil.System.ShutdownOpenVR();
+   }
}
```

## Destroy dashboard overlay

Destroy the dashboard overlay when the application ends.

```diff
public class DashboardOverlay : MonoBehaviour
{
    private ulong dashboardHandle = OpenVR.k_ulOverlayHandleInvalid;
    private ulong thumbnailHandle = OpenVR.k_ulOverlayHandleInvalid;

    private void Start()
    {
        OpenVRUtil.System.InitOpenVR();

        var error = OpenVR.Overlay.CreateDashboardOverlay("WatchDashboardKey", "Watch Setting", ref dashboardHandle, ref thumbnailHandle);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to create dashboard overlay: " + error);
        }
    }

+   private void OnApplicationQuit()
+   {
+       Overlay.DestroyOverlay(dashboardHandle);
+   }

    private void OnDestroy()
    {
        OpenVRUtil.System.ShutdownOpenVR();
    }
}
```

As a note, we can destroy only the main overlay. If we pass the thumbnail overlay handle to `DestroyOverlay()`, a `ThumbnailCantBeDestroyed` error will occur.

## Show thumbnail

Let’s show an image on the thumbnail. We created the `SetOverlayFromFile()` function and an image in Part 2, so we will reuse them.
```diff
private void Start()
{
    OpenVRUtil.System.InitOpenVR();

    var error = OpenVR.Overlay.CreateDashboardOverlay("WatchDashboardKey", "Watch Setting", ref dashboardHandle, ref thumbnailHandle);
    if (error != EVROverlayError.None)
    {
        throw new Exception("Failed to create dashboard overlay: " + error);
    }

+   var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
+   Overlay.SetOverlayFromFile(thumbnailHandle, filePath);
}
```

Run the program, and check that the thumbnail is shown at the bottom of the dashboard.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d3np8qh5nugd6q6c5ujo.png)

We haven't drawn anything to the main overlay yet, so nothing is shown when the thumbnail is clicked. The `name` passed to `CreateDashboardOverlay()` is shown when the laser pointer hovers over the thumbnail; in this case, it is `"Watch Setting"`.

## Create setting screen

Let's create the setting screen. First, divide the game objects into two groups: the watch group and the dashboard group.

### Watch group

**Right click in the hierarchy > Create Empty** to create an empty game object named `Watch`. Move the existing `WatchOverlay`, `Camera`, and `Canvas` objects under the `Watch` object.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdbsdtx4yalu4u4br0z6.png)

### Dashboard group

Similarly, create an empty game object named `Dashboard`, then create the objects below under it.

- Camera
- UI > Canvas

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbz2qki1cbdyoxnypzmc.png)

### Camera setting

Select the `Camera` under `Dashboard` and set `Clear Flags` to `Solid Color` in the inspector. Click the `Background` color and change it to an **opaque gray (RGBA = 64, 64, 64, 255)**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ubfchjb56u2vbqc45ws.png)

Also, remove the **AudioListener** component.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ar4wj8e7bocy0pnqhz2e.png)

### Create render texture

Create a render texture for the dashboard overlay. In the project window, **right click the Assets/RenderTextures folder > Render Texture** to create a new render texture asset. Change the asset name to `DashboardRenderTexture`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ul9eq9vpvgaqahp4p4j0.png)

Set the **Size** to **1024 x 768** in the inspector.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rnk14m3z91uxe7bix5ou.png)

Click the `Camera` under `Dashboard` in the hierarchy, then drag the `DashboardRenderTexture` asset to the `Target Texture` property.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0qrhx77hnwhojkci8b93.png)

Now the dashboard camera's output is rendered into the render texture asset.

### Canvas setting

Select the `Canvas` object under `Dashboard`. Set `Render Mode` to `Screen Space - Camera` in the `Canvas` inspector. Drag the `Camera` object under `Dashboard` to `Render Camera`. Set `Plane Distance` to `10`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h1ty920x7i6ulc9jof48.png)

### Move group

Open the `Dashboard` object's inspector. **Set Position X to 20** to keep the two groups from overlapping.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/byahp0tu3sstuex4vdwn.png)

### Create button

Right click the `Canvas` object under `Dashboard`, then select **UI > Button - TextMeshPro** to create a new button named `LeftHandButton`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9b5eiiq0a0jclw0xtcxx.png)

Click `Text (TMP)` under `LeftHandButton` and change the text to **"Left Hand"**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ea71sip8bsumx3cl39is.png)

Select `LeftHandButton` and set the **Width to 700 and Height to 200**.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7dauydwrw65w5v31k7ze.png)

Select the `Text (TMP)` and set the **Font Size to 100**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/albd1tk5knyq9jkv4j4b.png)

Duplicate `LeftHandButton` with **right click > Duplicate**. Rename the duplicated object to `RightHandButton` and, likewise, change its button text to **"Right Hand"**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7tszxur5miv0m9m4pkoc.png)

Set **LeftHandButton Pos Y to 150** and **RightHandButton Pos Y to -150**.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pn1f4r8tq4ru5puda32e.png)

That completes the appearance of the setting screen.

## Draw to dashboard

Display the setting screen as a dashboard overlay.

### Add camera variable

Add a `camera` member variable to `DashboardOverlay.cs`.

```diff
public class DashboardOverlay : MonoBehaviour
{
+   public Camera camera;
    private ulong dashboardHandle = OpenVR.k_ulOverlayHandleInvalid;
    private ulong thumbnailHandle = OpenVR.k_ulOverlayHandleInvalid;

    // ...
```

Open the `DashboardOverlay` object's inspector and drag the `Camera` object under `Dashboard` to the `Camera` variable.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3q7cica97prze31e9z94.png)

### Add render texture

Add a `renderTexture` variable to `DashboardOverlay.cs`.

```diff
public class DashboardOverlay : MonoBehaviour
{
    public Camera camera;
+   public RenderTexture renderTexture;
    private ulong dashboardHandle = OpenVR.k_ulOverlayHandleInvalid;
    private ulong thumbnailHandle = OpenVR.k_ulOverlayHandleInvalid;

    // ...
```

Select `DashboardOverlay` in the hierarchy and drag the `Assets/RenderTextures/DashboardRenderTexture` asset to the `Render Texture` variable.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i1r69omlr1qrsfkyo0gn.png)

### Flip vertical and set size

Flip the texture vertically with `FlipOverlayVertical()` beforehand, and set the overlay width to **2.5 m** with `SetOverlaySize()`.

### Draw render texture to dashboard overlay

Create `Update()` and draw the render texture to the dashboard overlay.

`DashboardOverlay.cs`

```diff
public class DashboardOverlay : MonoBehaviour
{
    public Camera camera;
    public RenderTexture renderTexture;
    private ulong dashboardHandle = OpenVR.k_ulOverlayHandleInvalid;
    private ulong thumbnailHandle = OpenVR.k_ulOverlayHandleInvalid;

    private void Start()
    {
        OpenVRUtil.System.InitOpenVR();

        var error = OpenVR.Overlay.CreateDashboardOverlay("WatchDashboardKey", "Watch Setting", ref dashboardHandle, ref thumbnailHandle);
        if (error != EVROverlayError.None)
        {
            throw new Exception("Failed to create dashboard overlay: " + error);
        }

        var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
        Overlay.SetOverlayFromFile(thumbnailHandle, filePath);

+       Overlay.SetOverlaySize(dashboardHandle, 2.5f);
+       Overlay.FlipOverlayVertical(dashboardHandle);
    }

+   private void Update()
+   {
+       Overlay.SetOverlayRenderTexture(dashboardHandle, renderTexture);
+   }

    // ...
```

Run the program, open the dashboard, and click the thumbnail. The setting screen should be shown.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qq6z0zzonb54lf8oxfxy.png)

## Organize code

Move the dashboard overlay creation code into `CreateDashboardOverlay()`. It has to return two values, `dashboardHandle` and `thumbnailHandle`, so we use a tuple to combine them. Add the method to the utility's `Overlay` class.

`OpenVRUtil.cs`

```diff
...
public static ulong CreateOverlay(string key, string name) { var handle = OpenVR.k_ulOverlayHandleInvalid; var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle); if (error != EVROverlayError.None) { throw new Exception("Failed to create overlay: " + error); } return handle; } + public static (ulong, ulong) CreateDashboardOverlay(string key, string name) + { + ulong dashboardHandle = 0; + ulong thumbnailHandle = 0; + var error = OpenVR.Overlay.CreateDashboardOverlay(key, name, ref dashboardHandle, ref thumbnailHandle); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to create dashboard overlay: " + error); + } + + return (dashboardHandle, thumbnailHandle); + } ... ``` Accordingly, change the function call. `DashboardOverlay.cs` ```diff private void Start() { OpenVRUtil.System.InitOpenVR(); - var error = OpenVR.Overlay.CreateDashboardOverlay("WatchDashboardKey", "Watch Setting", ref dashboardHandle, ref thumbnailHandle); - if (error != EVROverlayError.None) - { - throw new Exception("Failed to create dashboard overlay: " + error); - } + (dashboardHandle, thumbnailHandle) = Overlay.CreateDashboardOverlay("WatchDashboardKey", "Watch Setting"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; Overlay.SetOverlayFromFile(thumbnailHandle, filePath); Overlay.SetOverlaySize(dashboardHandle, 2.5f); Overlay.FlipOverlayVertical(dashboardHandle); } ``` ## Final code `DashboardOverlay.cs` ```cs using UnityEngine; using Valve.VR; using OpenVRUtil; public class DashboardOverlay : MonoBehaviour { public Camera camera; public RenderTexture renderTexture; private ulong dashboardHandle = OpenVR.k_ulOverlayHandleInvalid; private ulong thumbnailHandle = OpenVR.k_ulOverlayHandleInvalid; private void Start() { OpenVRUtil.System.InitOpenVR(); (dashboardHandle, thumbnailHandle) = Overlay.CreateDashboardOverlay("WatchDashboardKey", "Watch Setting"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; 
Overlay.SetOverlayFromFile(thumbnailHandle, filePath); Overlay.FlipOverlayVertical(dashboardHandle); Overlay.SetOverlaySize(dashboardHandle, 2.5f); } private void Update() { Overlay.SetOverlayRenderTexture(dashboardHandle, renderTexture); } private void OnApplicationQuit() { Overlay.DestroyOverlay(dashboardHandle); } private void OnDestroy() { OpenVRUtil.System.ShutdownOpenVR(); } } ``` `OpenVRUtil.cs` ```cs using UnityEngine; using Valve.VR; using System; namespace OpenVRUtil { public static class System { public static void InitOpenVR() { if (OpenVR.System != null) return; var error = EVRInitError.None; OpenVR.Init(ref error, EVRApplicationType.VRApplication_Overlay); if (error != EVRInitError.None) { throw new Exception("Failed to initialize OpenVR: " + error); } } public static void ShutdownOpenVR() { if (OpenVR.System != null) { OpenVR.Shutdown(); } } } public static class Overlay { public static ulong CreateOverlay(string key, string name) { var handle = OpenVR.k_ulOverlayHandleInvalid; var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle); if (error != EVROverlayError.None) { throw new Exception("Failed to create overlay: " + error); } return handle; } public static (ulong, ulong) CreateDashboardOverlay(string key, string name) { ulong dashboardHandle = 0; ulong thumbnailHandle = 0; var error = OpenVR.Overlay.CreateDashboardOverlay(key, name, ref dashboardHandle, ref thumbnailHandle); if (error != EVROverlayError.None) { throw new Exception("Failed to create dashboard overlay: " + error); } return (dashboardHandle, thumbnailHandle); } public static void DestroyOverlay(ulong handle) { if (handle != OpenVR.k_ulOverlayHandleInvalid) { var error = OpenVR.Overlay.DestroyOverlay(handle); if (error != EVROverlayError.None) { throw new Exception("Failed to dispose overlay: " + error); } } } public static void SetOverlayFromFile(ulong handle, string path) { var error = OpenVR.Overlay.SetOverlayFromFile(handle, path); if (error != EVROverlayError.None) { 
                throw new Exception("Failed to draw image file: " + error);
            }
        }

        public static void ShowOverlay(ulong handle)
        {
            var error = OpenVR.Overlay.ShowOverlay(handle);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to show overlay: " + error);
            }
        }

        public static void SetOverlaySize(ulong handle, float size)
        {
            var error = OpenVR.Overlay.SetOverlayWidthInMeters(handle, size);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to set overlay size: " + error);
            }
        }

        public static void SetOverlayTransformAbsolute(ulong handle, Vector3 position, Quaternion rotation)
        {
            var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation);
            var matrix = rigidTransform.ToHmdMatrix34();
            var error = OpenVR.Overlay.SetOverlayTransformAbsolute(handle, ETrackingUniverseOrigin.TrackingUniverseStanding, ref matrix);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to set overlay position: " + error);
            }
        }

        public static void SetOverlayTransformRelative(ulong handle, uint deviceIndex, Vector3 position, Quaternion rotation)
        {
            var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation);
            var matrix = rigidTransform.ToHmdMatrix34();
            var error = OpenVR.Overlay.SetOverlayTransformTrackedDeviceRelative(handle, deviceIndex, ref matrix);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to set overlay position: " + error);
            }
        }

        public static void FlipOverlayVertical(ulong handle)
        {
            var bounds = new VRTextureBounds_t
            {
                uMin = 0,
                uMax = 1,
                vMin = 1,
                vMax = 0
            };

            var error = OpenVR.Overlay.SetOverlayTextureBounds(handle, ref bounds);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to flip texture: " + error);
            }
        }

        public static void SetOverlayRenderTexture(ulong handle, RenderTexture renderTexture)
        {
            if (!renderTexture.IsCreated()) return;

            var nativeTexturePtr = renderTexture.GetNativeTexturePtr();
            var texture = new Texture_t
            {
                eColorSpace = EColorSpace.Auto,
                eType = ETextureType.DirectX,
                handle = nativeTexturePtr
            };
            var error = OpenVR.Overlay.SetOverlayTexture(handle, ref texture);
            if (error != EVROverlayError.None)
            {
                throw new Exception("Failed to draw texture: " + error);
            }
        }
    }
}
```

Now we have created the setting screen as a dashboard overlay. We will wire up the button events in the next part.
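The `FlipOverlayVertical()` method in the final code above flips the texture by swapping the V bounds (`vMin = 1`, `vMax = 0`). Conceptually, sampling the texture through a reversed V range reverses the row order; here is a small sketch of that idea (in Python for illustration only, not the actual OpenVR sampling code):

```python
# Model a texture as a list of rows and sample it through V bounds.
# With vMin=0, vMax=1 the rows come out in order; vMin=1, vMax=0 flips them.
def apply_v_bounds(rows, v_min, v_max):
    height = len(rows)
    out = []
    for i in range(height):
        v = v_min + (v_max - v_min) * (i + 0.5) / height  # V at the row center
        out.append(rows[min(int(v * height), height - 1)])
    return out

rows = ["top", "middle", "bottom"]
```

This is why the overlay texture, which Unity renders "upside down" relative to what OpenVR expects, appears correct after setting the reversed bounds once.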
kurohuku
1,890,511
Some Standard Sample Text Style Configuration in Figma.
Sample Text Style Configuration in Figma
0
2024-06-16T19:06:16
https://dev.to/iam_divs/some-standard-sample-text-style-configuration-in-figma-33bl
webdev, javascript, ui, figma
Sample Text Style Configuration in Figma ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t3hckzym6p7i3ypmvk5h.png)
iam_divs
1,890,508
SteamVR Overlay with Unity: Change Size and Position
Change overlay size Pass width SetOverlayWidthInMeters() to set an overlay size (read the...
27,740
2024-06-16T19:04:04
https://dev.to/kurohuku/part-5-change-size-and-position-47ga
unity3d, steamvr, openvr, vr
## Change overlay size

Pass a width to [SetOverlayWidthInMeters()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_SetOverlayWidthInMeters_System_UInt64_System_Single_) to set the overlay size (read the wiki for details). The width unit is meters; the height is calculated automatically from the image aspect ratio. The default width is 1 m.

Change the overlay width to 0.5 m.

```diff
private void Start()
{
    InitOpenVR();
    overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

    var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
    SetOverlayFromFile(overlayHandle, filePath);

+   var error = OpenVR.Overlay.SetOverlayWidthInMeters(overlayHandle, 0.5f);
+   if (error != EVROverlayError.None)
+   {
+       throw new Exception("Failed to set overlay size: " + error);
+   }

    ShowOverlay(overlayHandle);
}
```

Run the program; the overlay should now be shown at half its previous size.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a99wgyw6zay79ph10lhz.png)

## Set overlay absolute position

Let's display the overlay at an absolute position in the VR space with [SetOverlayTransformAbsolute()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_SetOverlayTransformAbsolute_System_UInt64_Valve_VR_ETrackingUniverseOrigin_Valve_VR_HmdMatrix34_t__) (read the [wiki](https://github.com/ValveSoftware/openvr/wiki/IVROverlay::SetOverlayTransformAbsolute) for details).

### Prepare position and rotation

We will place the overlay at an absolute position 2 m up along the Y-axis and 3 m forward along the Z-axis, and rotate it 45 degrees around the Z-axis.

First, prepare the position and rotation with `Vector3` and `Quaternion`.
```diff
private void Start()
{
    InitOpenVR();
    overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

    var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
    SetOverlayFromFile(overlayHandle, filePath);

    var error = OpenVR.Overlay.SetOverlayWidthInMeters(overlayHandle, 0.5f);
    if (error != EVROverlayError.None)
    {
        throw new Exception("Failed to set overlay size: " + error);
    }

+   var position = new Vector3(0, 2, 3);
+   var rotation = Quaternion.Euler(0, 0, 45);

    ShowOverlay(overlayHandle);
}
```

### Create transformation matrix

The overlay position is calculated by applying a transformation matrix relative to an origin. The origin is a base position, such as the center of the play area, defined by [ETrackingUniverseOrigin](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.ETrackingUniverseOrigin.html). The transformation matrix is represented by the [HmdMatrix34_t](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.HmdMatrix34_t.html) type.

```diff
private void Start()
{
    InitOpenVR();
    overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

    var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
    SetOverlayFromFile(overlayHandle, filePath);

    var error = OpenVR.Overlay.SetOverlayWidthInMeters(overlayHandle, 0.5f);
    if (error != EVROverlayError.None)
    {
        throw new Exception("Failed to set overlay size: " + error);
    }

    var position = new Vector3(0, 2, 3);
    var rotation = Quaternion.Euler(0, 0, 45);
+   var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation);
+   var matrix = rigidTransform.ToHmdMatrix34();

    ShowOverlay(overlayHandle);
}
```

### Change overlay position

Pass the transformation matrix to `SetOverlayTransformAbsolute()` to change the overlay position.
```diff
private void Start()
{
    InitOpenVR();
    overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

    var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
    SetOverlayFromFile(overlayHandle, filePath);

    var error = OpenVR.Overlay.SetOverlayWidthInMeters(overlayHandle, 0.5f);
    if (error != EVROverlayError.None)
    {
        throw new Exception("Failed to set overlay size: " + error);
    }

    var position = new Vector3(0, 2, 3);
    var rotation = Quaternion.Euler(0, 0, 45);
    var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation);
    var matrix = rigidTransform.ToHmdMatrix34();

+   error = OpenVR.Overlay.SetOverlayTransformAbsolute(overlayHandle, ETrackingUniverseOrigin.TrackingUniverseStanding, ref matrix);
+   if (error != EVROverlayError.None)
+   {
+       throw new Exception("Failed to set overlay position: " + error);
+   }

    ShowOverlay(overlayHandle);
...
```

### Arguments of SetOverlayTransformAbsolute()

`SetOverlayTransformAbsolute()` takes three arguments. The first is the overlay handle. The second, `ETrackingUniverseOrigin.TrackingUniverseStanding`, means that the center of the play area floor is the origin of the transform; alternatively, `ETrackingUniverseOrigin.TrackingUniverseSeated` uses the position that the user last reset as the seated position. The third, `ref matrix`, is a reference to the transformation matrix.

## Check the program

Run the program. The overlay position should have changed.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y54t7zmt2k63603k8w8x.png)

*Moved 2 m up and 3 m forward from the play area origin, and rotated 45 degrees around the Z-axis*

---

### Optional: Left-handed and right-handed systems

Unity uses a left-handed coordinate system, while OpenVR uses a right-handed one. In OpenVR, +Y is up, +X is right, and -Z is forward. For now, we don't need to pay attention to the difference, because `SteamVR_Utils.RigidTransform` internally converts between the two systems.
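As an illustration of that conversion, the commonly used mapping between Unity's left-handed coordinates and OpenVR's right-handed coordinates mirrors the Z-axis: positions negate `z`, and quaternions negate their `x` and `y` components. A sketch in Python (my summary of the conventional mapping, not SteamVR's exact implementation; quaternions are written as `(x, y, z, w)`):

```python
# Convert a Unity (left-handed) pose to an OpenVR (right-handed) pose
# by mirroring across the XY plane: flip the Z position, and flip the
# x/y components of the rotation quaternion (the axis flips in x/y and
# the rotation direction reverses under the mirror).
def unity_pose_to_openvr(position, quaternion):
    px, py, pz = position
    qx, qy, qz, qw = quaternion
    return (px, py, -pz), (-qx, -qy, qz, qw)

# The (0, 2, 3) position used in this part lands at (0, 2, -3) in OpenVR.
pose = unity_pose_to_openvr((0.0, 2.0, 3.0), (0.0, 0.0, 0.0, 1.0))
```

Applying the same function twice returns the original pose, which is why a single conversion in each direction is all `RigidTransform` needs.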
However, when you make the transformation matrix manually, be careful that the Z-axis and the rotation directions are reversed. --- ## Organize code ### Set size Move the size setting code into `SetOverlaySize()`. ```diff public class WatchOverlay : MonoBehaviour { ... private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); - var error = OpenVR.Overlay.SetOverlayWidthInMeters(overlayHandle, 0.5f); - if (error != EVROverlayError.None) - { - throw new Exception("Failed to set overlay size: " + error); - } + SetOverlaySize(overlayHandle, 0.5f); var position = new Vector3(0, 2, 3); var rotation = Quaternion.Euler(0, 0, 45); var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation); var matrix = rigidTransform.ToHmdMatrix34(); error = OpenVR.Overlay.SetOverlayTransformAbsolute(overlayHandle, ETrackingUniverseOrigin.TrackingUniverseStanding, ref matrix); if (error != EVROverlayError.None) { throw new Exception("Failed to set overlay position: " + error); } ShowOverlay(overlayHandle); } ... + private void SetOverlaySize(ulong handle, float size) + { + var error = OpenVR.Overlay.SetOverlayWidthInMeters(handle, size); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to set overlay size: " + error); + } + } } ``` ### Set position Move the position setting into `SetOverlayTransformAbsolute()`. ```diff public class WatchOverlay : MonoBehaviour { ... 
private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); SetOverlaySize(overlayHandle, 0.5f); var position = new Vector3(0, 2, 3); var rotation = Quaternion.Euler(0, 0, 45); - var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation); - var matrix = rigidTransform.ToHmdMatrix34(); - error = OpenVR.Overlay.SetOverlayTransformAbsolute(overlayHandle, ETrackingUniverseOrigin.TrackingUniverseStanding, ref matrix); - if (error != EVROverlayError.None) - { - throw new Exception("Failed to set overlay position: " + error); - } + SetOverlayTransformAbsolute(overlayHandle, position, rotation); ShowOverlay(overlayHandle); } ... + private void SetOverlayTransformAbsolute(ulong handle, Vector3 position, Quaternion rotation) + { + var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation); + var matrix = rigidTransform.ToHmdMatrix34(); + var error = OpenVR.Overlay.SetOverlayTransformAbsolute(handle, ETrackingUniverseOrigin.TrackingUniverseStanding, ref matrix); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to set overlay position: " + error); + } + } } ``` ## Final code ```cs using UnityEngine; using Valve.VR; using System; public class WatchOverlay : MonoBehaviour { private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid; private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); SetOverlaySize(overlayHandle, 0.5f); var position = new Vector3(0, 2, 3); var rotation = Quaternion.Euler(0, 0, 45); SetOverlayTransformAbsolute(overlayHandle, position, rotation); ShowOverlay(overlayHandle); } private void OnApplicationQuit() { DestroyOverlay(overlayHandle); } private void OnDestroy() { ShutdownOpenVR(); } private 
void InitOpenVR() { if (OpenVR.System != null) return; var error = EVRInitError.None; OpenVR.Init(ref error, EVRApplicationType.VRApplication_Overlay); if (error != EVRInitError.None) { throw new Exception("Failed to initialize OpenVR: " + error); } } private void ShutdownOpenVR() { if (OpenVR.System != null) { OpenVR.Shutdown(); } } private ulong CreateOverlay(string key, string name) { var handle = OpenVR.k_ulOverlayHandleInvalid; var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle); if (error != EVROverlayError.None) { throw new Exception("Failed to create overlay: " + error); } return handle; } private void DestroyOverlay(ulong handle) { if (handle != OpenVR.k_ulOverlayHandleInvalid) { var error = OpenVR.Overlay.DestroyOverlay(handle); if (error != EVROverlayError.None) { throw new Exception("Failed to dispose overlay: " + error); } } } private void SetOverlayFromFile(ulong handle, string path) { var error = OpenVR.Overlay.SetOverlayFromFile(handle, path); if (error != EVROverlayError.None) { throw new Exception("Failed to draw image file: " + error); } } private void ShowOverlay(ulong handle) { var error = OpenVR.Overlay.ShowOverlay(handle); if (error != EVROverlayError.None) { throw new Exception("Failed to show overlay: " + error); } } private void SetOverlaySize(ulong handle, float size) { var error = OpenVR.Overlay.SetOverlayWidthInMeters(handle, size); if (error != EVROverlayError.None) { throw new Exception("Failed to set overlay size: " + error); } } private void SetOverlayTransformAbsolute(ulong handle, Vector3 position, Quaternion rotation) { var rigidTransform = new SteamVR_Utils.RigidTransform(position, rotation); var matrix = rigidTransform.ToHmdMatrix34(); var error = OpenVR.Overlay.SetOverlayTransformAbsolute(handle, ETrackingUniverseOrigin.TrackingUniverseStanding, ref matrix); if (error != EVROverlayError.None) { throw new Exception("Failed to set overlay position: " + error); } } } ``` Here, we have set the overlay size and 
position. However, to create a watch application, we need to attach the overlay to a controller rather than place it at an absolute position. In the next part, we will make the overlay follow tracked devices such as the HMD or the controllers.
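The `HmdMatrix34_t` built by `RigidTransform.ToHmdMatrix34()` packs a 3x3 rotation and a translation column into 3 rows x 4 columns. As a supplementary sketch, here is how such a `[R | t]` matrix can be assembled from a position and a unit quaternion using the standard quaternion-to-matrix formula (shown in Python for illustration; it ignores the handedness conversion that `RigidTransform` also performs):

```python
# Build a 3x4 [R | t] matrix from position (tx, ty, tz)
# and a unit quaternion (x, y, z, w).
def to_matrix34(position, quaternion):
    tx, ty, tz = position
    x, y, z, w = quaternion
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y),     tx],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x),     ty],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y), tz],
    ]

# Identity rotation: the left 3x3 is the identity, the last column is the position.
m = to_matrix34((0, 2, 3), (0, 0, 0, 1))
```

With the identity quaternion the rotation block collapses to the identity matrix and only the translation column carries information, which matches the intuition that the matrix is "rotation plus offset".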
kurohuku
1,890,507
SteamVR Overlay with Unity: Draw Image
Prepare an image file Prepare an image file that satisfies the following...
27,740
2024-06-16T19:02:18
https://dev.to/kurohuku/part-4-draw-image-2h3c
unity3d, steamvr, openvr, vr
## Prepare an image file

Prepare an image file that satisfies the following conditions.

- The size is no larger than 1920 x 1080 px.
- The file format is PNG, JPG, or TGA (24- or 32-bit color).

From here on, I'll use my SNS icon, **"sns-icon.jpg"**, as the example.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qwiao2a12n6he8mv7bp6.png)

Put the image file in the **Assets/StreamingAssets** folder.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o7trgs08csd0zwkpqo2w.png)

## Draw image

Call [SetOverlayFromFile()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_SetOverlayFromFile_System_UInt64_System_String_) to draw an image file (read the [wiki](https://github.com/ValveSoftware/openvr/wiki/IVROverlay::SetOverlayFromFile) for details).

```diff
void Start()
{
    InitOpenVR();
    overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay");

+   var filePath = Application.streamingAssetsPath + "/sns-icon.jpg";
+   var error = OpenVR.Overlay.SetOverlayFromFile(overlayHandle, filePath);
+   if (error != EVROverlayError.None)
+   {
+       throw new Exception("Failed to draw image file: " + error);
+   }
}
```

Pass the overlay handle and the image file path to `SetOverlayFromFile()`. We use the [StreamingAssets](https://docs.unity3d.com/Manual/StreamingAssets.html) folder so that we can get the file path at runtime with [Application.streamingAssetsPath](https://docs.unity3d.com/ScriptReference/Application-streamingAssetsPath.html). Error handling remains the same as before.

Run the program, then launch the Overlay Viewer. Click `WatchOverlayKey` and check that the image is drawn in the preview area on the right.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3wka7ct1j61zm2w8pq8.png)

## Switch overlay visibility

The overlay is hidden by default: it is shown in the Overlay Viewer, but not in the VR space.
Switch the overlay visibility with [ShowOverlay()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_ShowOverlay_System_UInt64_) and [HideOverlay()](https://valvesoftware.github.io/steamvr_unity_plugin/api/Valve.VR.CVROverlay.html#Valve_VR_CVROverlay_HideOverlay_System_UInt64_). (read the wiki for details) Let’s show the overlay with `ShowOverlay()` at the start of the application. ```diff private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; var error = OpenVR.Overlay.SetOverlayFromFile(overlayHandle, filePath); if (error != EVROverlayError.None) { throw new Exception("Failed to draw image file: " + error); } + error = OpenVR.Overlay.ShowOverlay(overlayHandle); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to show overlay: " + error); + } } ``` Run the program, put on your HMD, and look down at the floor. The overlay should appear at the center of the floor. Since overlays can’t be seen from the back, if you can’t view the overlay, move to the opposite side. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nlrei1o9akkc3ezaa1ka.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s7tqdz19w5mksgd90pam.png) *It shows during a VR game (the screenshot is [Legendary Tales](https://store.steampowered.com/app/1465070/Legendary_Tales/))* ## Organize code ### Draw image file Move the drawing code into `SetOverlayFromFile()`. 
```diff void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; - var error = OpenVR.Overlay.SetOverlayFromFile(overlayHandle, filePath); - if (error != EVROverlayError.None) - { - throw new Exception("Failed to draw image file: " + error); - } + SetOverlayFromFile(overlayHandle, filePath); error = OpenVR.Overlay.ShowOverlay(overlayHandle); if (error != EVROverlayError.None) { throw new Exception("Failed to show overlay: " + error); } } ... + // Variable names changed + // overlayHandle -> handle + // filePath -> path + private void SetOverlayFromFile(ulong handle, string path) + { + var error = OpenVR.Overlay.SetOverlayFromFile(handle, path); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to draw image file: " + error); + } + } ``` ### Show overlay Move the switching overlay visibility into `ShowOverlay()`. ```diff void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); - error = OpenVR.Overlay.ShowOverlay(overlayHandle); - if (error != EVROverlayError.None) - { - throw new Exception("Failed to show overlay: " + error); - } + ShowOverlay(overlayHandle); } ... 
+ private void ShowOverlay(ulong handle) + { + var error = OpenVR.Overlay.ShowOverlay(handle); + if (error != EVROverlayError.None) + { + throw new Exception("Failed to show overlay: " + error); + } + } } ``` ## Final code ```cs using UnityEngine; using Valve.VR; using System; public class WatchOverlay : MonoBehaviour { private ulong overlayHandle = OpenVR.k_ulOverlayHandleInvalid; private void Start() { InitOpenVR(); overlayHandle = CreateOverlay("WatchOverlayKey", "WatchOverlay"); var filePath = Application.streamingAssetsPath + "/sns-icon.jpg"; SetOverlayFromFile(overlayHandle, filePath); ShowOverlay(overlayHandle); } private void OnApplicationQuit() { DestroyOverlay(overlayHandle); } private void OnDestroy() { ShutdownOpenVR(); } private void InitOpenVR() { if (OpenVR.System != null) return; var error = EVRInitError.None; OpenVR.Init(ref error, EVRApplicationType.VRApplication_Overlay); if (error != EVRInitError.None) { throw new Exception("Failed to initialize OpenVR: " + error); } } private void ShutdownOpenVR() { if (OpenVR.System != null) { OpenVR.Shutdown(); } } private ulong CreateOverlay(string key, string name) { var handle = OpenVR.k_ulOverlayHandleInvalid; var error = OpenVR.Overlay.CreateOverlay(key, name, ref handle); if (error != EVROverlayError.None) { throw new Exception("Failed to create overlay: " + error); } return handle; } private void DestroyOverlay(ulong handle) { if (handle != OpenVR.k_ulOverlayHandleInvalid) { var error = OpenVR.Overlay.DestroyOverlay(handle); if (error != EVROverlayError.None) { throw new Exception("Failed to dispose overlay: " + error); } } } private void SetOverlayFromFile(ulong handle, string path) { var error = OpenVR.Overlay.SetOverlayFromFile(handle, path); if (error != EVROverlayError.None) { throw new Exception("Failed to draw image file: " + error); } } private void ShowOverlay(ulong handle) { var error = OpenVR.Overlay.ShowOverlay(handle); if (error != EVROverlayError.None) { throw new Exception("Failed to 
show overlay: " + error); } } } ``` Here, we have displayed the image on the overlay. Next part, we will change the overlay size and position.
kurohuku
1,890,505
GSoC Week 3
One of my mentors had just returned from his presentation at DjangoCon Europe, and the other was...
27,442
2024-06-16T18:59:54
https://dev.to/chiemezuo/gsoc-week-3-12h3
gsoc, googlesummerofcode, wagtail, opensource
One of my mentors had just returned from his presentation at DjangoCon Europe, and the other was preparing for his presentation on Wagtail Space in the Netherlands. We couldn't have our weekly virtual check-in meeting, but we synced on Slack. Thankfully, when I wrote my timeline in my GSoC proposal, I accounted for this and set this as a week with minimal mentors' involvement. ## Weekly Check-in As mentioned earlier, there wasn't a virtual meeting, but we caught up via Slack. We discussed my progress on the previous task, and I highlighted one of the approaches I experimented with for the part of the GSoC project that's for setting a mandatory `image_description` field to the `AbstractImage` model. We also discussed the necessary preps for presenting the issue to the Accessibility team. Aside from that, there wasn't so much work to do for the week, and I got some time to do some internal reflection on my progress so far. ## RFC Feedback I got some more feedback on the RFC from the more senior members of the Wagtail community. Some of the feedback was the promise to look into the RFC in more depth. Some were about possible pull requests that would help smartly reduce the amount of code I would need to have in place to get my alt text project goal working. There was also a mention of possibly resurfacing another PR that might directly or indirectly help me with my project progress. My mentors and I agreed that we'd talk more on the feedback in the following week when they were back and rested from their commitments. ## Accessibility Team Meeting In the meeting, the team discussed the progress made by one of the team members on the Wagtail Accessibility checker's alt text validation. We discussed different hypothetical scenarios on when certain things such as complete URLs would probably be acceptable to include in the alt text, and on removing the file extension suffixes and possible prefixes. 
We all gave ideas and suggestions on stuff that could be done to make it better. Afterward, I reported my findings on the existing problem with making the `image_description` field mandatory in the `AbstractImage` model. I presented the three options we had so far: 1. Making them mandatory and essentially forcing users to unlearn the existing pattern they might have been used to. 2. Making the `image_description` field mandatory only when an uploaded image is to be edited, and not during the upload process. 3. Making the field completely optional, thus defeating our purpose. In the meeting, the Accessibility team decided that something between options 2 and 3 would suffice. We agreed on setting the field to be optional, but showing a noticeable (and annoying) red warning message during uploads and edits that the field would be necessary. This way, users would tilt towards filling the form field (because nobody likes big red warning signs), and in the event that the editor somehow missed the warning sign (possibly out of habit), everything would still behave in the initially intended way. It was a fair compromise, and it was one that allowed for iterative fixes much later on. The meeting was on a Friday afternoon, just before the weekend, but I worked on it during the weekend so I could have a demo to show my lead mentor. Saptak was present for the accessibility meeting while Storm was away because of Wagtail Space. I have probably mentioned this a number of times in past articles, but there's something about the presence of your mentors in any meeting that gives a big boost. I particularly liked the meeting because it was the longest I'd spoken, and I felt so at ease. I went from just watching and introducing myself at Accessibility team meetings to actively contributing to the discourse and giving my two cents on topics. Growth is something that we might never really appreciate as human beings unless we take a moment to look back at how we started. 
## Challenges Being a week with minimal mentor involvement, it was not one without challenges. For the Accessibility team's feedback, there was a hitch: Getting it to work the "right" way either involved modifying templates and styles, or writing some 'strange' logic. Strange, because it would be a weird pattern. Essentially, I would have to raise a validation error on the `image_description` field when the form is initialized, but I would have to remove that error during validation, just before the form is to be saved. This way, the warning would always show, but would never actually get in the way of the form being saved. I tried different approaches to get it to work, hoping that I would strike some gold and get something that didn't feel so strange. It was a lot of hours of not getting any luck with approaches until I decided to go ahead with the method that felt strange. The good news, however, is that it's something that can be improved upon in the future, as well as when the PR is ready for review by the core team. Building upon the "strange" approach, I put a conditional statement to disable the form when it was populated with data. This last part was about the easiest thing I did this week. ## What I learned With a bit of extra time on my hands, I inspected the Wagtail-generated database with the SQLite browser to see how Wagtail handled data at its most basic. I was quite impressed to see that it looked just like any other relational table, but was structured so cleanly. Pages had their own tables with different entries under them, and every page had a page pointer ID that was represented on another table. I still plan to keep exploring more of the database, but I got a big eye-opener this week. At the end of the day, every piece of software follows fundamental principles. I also got a wee bit better at manipulating Django forms in my quest to set a constant warning in the image upload/edit forms. 
Finally, because the PR for the `image_description` field was months old (the time of my application for GSoC), I learned how to sync its current state without burying my commits deep inside. I learned to rebase and push its commit head forward. Truly fascinating stuff. I had another great week, and I'm excited about the amazing things I have the privilege of working on! Cheers. 🥂
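The rebase step mentioned above can be sketched in a throwaway repository (branch names, commit messages, and file names here are illustrative, not the actual Wagtail ones):

```shell
# Reproduce the "old branch vs. moved main" situation in a scratch repo,
# then rebase the feature branch on top of the updated main.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main repo && cd repo
git config user.email demo@example.com
git config user.name demo

echo base > file.txt
git add file.txt && git commit -qm "base"

git checkout -qb my-feature          # the months-old PR branch
echo feature >> file.txt
git commit -qam "feature work"

git checkout -q main                 # meanwhile, main has moved forward
echo other > other.txt
git add other.txt && git commit -qm "upstream change"

git checkout -q my-feature
git rebase -q main                   # replay our commit on top of main
git log --oneline                    # the feature commit now sits above both
```

On a real fork you would fetch the upstream remote first (`git fetch upstream`) and finish with `git push --force-with-lease` so the PR updates without clobbering anyone else's work.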
chiemezuo
1,890,503
Getting Started with Flutter: Understanding the Basics and State Management
What is Flutter? Flutter is an open-source framework developed by Google that allows developers to...
0
2024-06-16T18:57:00
https://dev.to/ayas_tech_2b0560ee159e661/getting-started-with-flutter-understanding-the-basics-and-state-management-g3e
What is Flutter? Flutter is an open-source framework developed by Google that allows developers to create beautiful, high-performance applications for multiple platforms from a single codebase. Flutter's core is built with the Dart programming language, and it provides a rich set of pre-designed widgets that can be customized to create a unique and engaging user experience. **Setting Up Flutter** Before diving into Flutter development, you need to set up your development environment. Follow these steps: **1. Install Flutter:** Download the Flutter SDK from the official website. **2. Set Up an Editor:** Flutter works well with various editors, including Visual Studio Code and Android Studio. Install your preferred editor and add the Flutter and Dart plugins. **3. Create a New Project:** Use the Flutter command-line tools to create a new project ``` flutter create my_first_flutter_app ``` **4. Run Your App:** Navigate to your project directory and run the app: ``` cd my_first_flutter_app flutter run ``` **Understanding Flutter Widgets** In Flutter, everything is a widget. Widgets are the building blocks of a Flutter application, and they describe what their view should look like given their current configuration and state. Widgets can be classified into two categories: 1. Stateless Widgets 2. Stateful Widgets **Stateless Widgets** Stateless widgets are immutable, meaning their properties cannot change once they are created. They are useful for static UI elements that do not change over time. Here’s a simple example of a stateless widget: ``` import 'package:flutter/material.dart'; void main() => runApp(MyApp()); class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( appBar: AppBar( title: Text('Stateless Widget Example'), ), body: Center( child: Text('Hello, Flutter!'), ), ), ); } } ``` In this example, MyApp is a stateless widget that renders a basic "Hello, Flutter!" message. 
**Stateful Widgets** Stateful widgets, on the other hand, are dynamic. They can change their appearance in response to user interactions or other events. Stateful widgets maintain a mutable state that can be updated using the setState() method. Here’s an example of a stateful widget: ``` import 'package:flutter/material.dart'; void main() => runApp(MyApp()); class MyApp extends StatefulWidget { @override _MyAppState createState() => _MyAppState(); } class _MyAppState extends State<MyApp> { int _counter = 0; void _incrementCounter() { setState(() { _counter++; }); } @override Widget build(BuildContext context) { return MaterialApp( home: Scaffold( appBar: AppBar( title: Text('Stateful Widget Example'), ), body: Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, children: <Widget>[ Text( 'You have pushed the button this many times:', ), Text( '$_counter', style: Theme.of(context).textTheme.headlineMedium, ), ], ), ), floatingActionButton: FloatingActionButton( onPressed: _incrementCounter, tooltip: 'Increment', child: Icon(Icons.add), ), ), ); } } ``` In this example, MyApp is a stateful widget that increments a counter each time the floating action button is pressed. The _incrementCounter method calls setState(), which triggers a rebuild of the widget with the updated counter value. **State Management** Managing state is a crucial aspect of building interactive applications. In Flutter, state management refers to the approach used to handle the state of widgets and ensure the UI reflects the current state. There are several ways to manage state in Flutter, including: 1. setState(): Used for simple state management within a single widget. 2. InheritedWidget: Allows sharing state across multiple widgets. 3. Provider: A third-party library that offers a robust and scalable approach to state management. 4. Bloc (Business Logic Component): An architectural pattern that separates business logic from UI, making code more reusable and testable.
**Conclusion** Flutter’s combination of expressive UI components, fast development cycles, and robust state management options makes it a powerful framework for building cross-platform applications. By understanding the basics of stateless and stateful widgets, you can start creating interactive and dynamic apps with Flutter.
ayas_tech_2b0560ee159e661
1,890,504
How to configure Pulumi using YAML Config Files
How to configure Pulumi using YAML Config Files When working with Pulumi the other day, I noticed...
0
2024-06-16T18:56:15
https://dev.to/nicolaimagnussen/how-to-configure-pulumi-using-yaml-config-files-2nfm
How to configure Pulumi using YAML Config Files When working with Pulumi the other day, I noticed that some crucial steps were lacking in the documentation, for example when you have to use the YAML file to configure your environment. I even dropped a question in the official Pulumi engineers' Slack channel, which at first led to more confusion. After some time I found out how they had done it, and one answer in particular helped me; it came from one of the engineers working at Pulumi. I had to search through the internet to find what I was looking for, stumbled upon some blog posts, and decided to write my own. Fraser Waters was the person who helped me reach my goal, and a fun fact is that he is one of the authors of the Project Level Configuration page, which he referred me to in the Slack discussion. Like many others there, he has been working directly on the Pulumi team. When working with Pulumi, you can orchestrate your infrastructure with code, as shown in the last post. However, when you want to configure your environment, you don't want to use the command line tool to insert every variable. For example, you can set a variable called AzureLocation to NorwayEast like this: `pulumi config set AzureLocation NorwayEast` This is fine when you just want to set a single value, but when you have a lot of configuration you would like to swap out, it would be cumbersome to only use the terminal. That is where the Pulumi YAML files come into play. This can be as advanced as you like, and it supports nesting. For example, you can set up your YAML file like this:

```
Core-Api-Ioc:az-env:
  default:
    tenant:id: xcx2
    subscription:id: 7dxx60dfbax3x
    resource-group:name: rg-youresourcegroup-dev
    location: norwayeast
```

Then you can get access to the YAML file in the code this way: var azEnv = config.RequireObject<JsonElement>("az-env"); TenantId = azEnv.GetProperty("tenant:id").GetString() ??
string.Empty; SubscriptionId = azEnv.GetProperty("subscription:id").GetString() ?? string.Empty; Location = azEnv.GetProperty("location").GetString() ?? string.Empty; ResourceGroupName = azEnv.GetProperty("resource-group:name").GetString() ?? string.Empty; What I have done is to create a class called `AzureEnv`. You could do this directly in the CoreDevStack.cs file if you'd like, but I found that doing it this way helped to clean up my code.

```
using System.Text.Json;
using Config = Pulumi.Config;

namespace Core.Api.Iac.IocConfiguration.Azure;

public class AzureEnv
{
    public AzureEnv(Config config)
    {
        var azEnv = config.RequireObject<JsonElement>("az-env");
        TenantId = azEnv.GetProperty("tenant:id").GetString() ?? string.Empty;
        SubscriptionId = azEnv.GetProperty("subscription:id").GetString() ?? string.Empty;
        Location = azEnv.GetProperty("location").GetString() ?? string.Empty;
        ResourceGroupName = azEnv.GetProperty("resource-group:name").GetString() ?? string.Empty;
    }

    internal string SubscriptionId { get; }
    internal string ResourceGroupName { get; }
    internal string TenantId { get; }
    internal string Location { get; }
}
```

Here is an example of nesting in the config file:

```
Core-Api-Ioc:dbsmydatabase:
  default:
    servername: dbsncoredev
    version: 12.0
    location: westeurope
    security-properties:
      admin-login: yourinformation
      administrator-type: ActiveDirectory
      ad-login: sa-group-dba-dev
      principal-type: Group
      sid: e7343xx53796
      tenant-id: 8c34534534xx0a002
```

Then we can get access to it by doing this. Again I have configured a class:

```
using System.Text.Json;
using Core.Api.Iac.IocConfiguration.Azure.Data.Sql.Common;
using Core.Api.Iac.IocConfiguration.Azure.Data.Sql.Databases;
using Config = Pulumi.Config;

namespace Core.Api.Iac.IocConfiguration.Azure.Data.Sql.Server;

public class AzNovoSidDevSql : CommonDb
{
    public AzNovoSidDevSql(Config config)
    {
        var azDbsNCoreDev = config.RequireObject<JsonElement>("dbsmydatabase");
        ServerName = azDbsNCoreDev.GetProperty("servername").GetString() ?? string.Empty;
        Version = azDbsNCoreDev.GetProperty("version").GetDouble();
        Location = azDbsNCoreDev.GetProperty("location").GetString() ?? string.Empty;

        var securityProperties = azDbsNCoreDev.GetProperty("security-properties");
        SecurityProperties.AdminLogin = securityProperties.GetProperty("admin-login").GetString() ?? string.Empty;
        SecurityProperties.AdministratorType = securityProperties.GetProperty("administrator-type").GetString() ?? string.Empty;
        SecurityProperties.AzLogin = securityProperties.GetProperty("ad-login").GetString() ?? string.Empty;
        SecurityProperties.PrincipalType = securityProperties.GetProperty("principal-type").GetString() ?? string.Empty;
        SecurityProperties.Sid = securityProperties.GetProperty("sid").GetString() ?? string.Empty;
        SecurityProperties.TenantId = securityProperties.GetProperty("tenant-id").GetString() ?? string.Empty;
    }

    public SecurityProperties SecurityProperties { get; } = new();
}
```

And in the CoreDevStack.cs file I call it in the constructor:

```
var azDbsMyDatabase = new AzNovoSidDevSql(config);
```

Hopefully this helped you a bit as well. If you have any questions, please let me know; I try to answer as much as I can.
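One detail worth spelling out: in an actual stack settings file, structured values like these sit under the top-level `config:` key. Here is a sketch of how the `az-env` example above would look in a file such as `Pulumi.dev.yaml` (the file name and the quoting of colon-containing keys are my assumptions about a typical setup, not taken from the post above):

```yaml
# Sketch of a Pulumi stack settings file (e.g. Pulumi.dev.yaml).
# Values are the same placeholders used earlier in this post.
config:
  Core-Api-Ioc:az-env:
    default:
      "tenant:id": xcx2
      "subscription:id": 7dxx60dfbax3x
      "resource-group:name": rg-youresourcegroup-dev
      location: norwayeast
```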
nicolaimagnussen
1,542,163
Effectiveness of Crypto Trading Bot Transactions
Introduction: Crypto bots are software programs that evaluate market data and conduct transactions...
0
2023-07-19T12:51:10
https://dev.to/nora_3/effectiveness-of-crypto-trading-bot-transactions-3d0
cryptocurrency, blockchain
**Introduction:** Crypto bots are software programs that evaluate market data and conduct transactions on the user's behalf using pre-defined algorithms. They run 24 hours a day, seven days a week, removing the need for manual monitoring and allowing you to profit from market opportunities even while you rest. Bots constantly monitor the market for precise patterns, indications, or indicators that correspond to their pre-programmed tactics. **Crypto bots: The appealing solution** Crypto bots are a set of scripts designed to help you automate your crypto trading. Using Artificial Intelligence, the bots are taught to complete repetitive activities faster than people. They collect trading and market data using specified parameters and execute trades on your behalf using algorithmic strategies. The effectiveness of crypto bots is determined by price, order volume, and time fluctuations. Users can fine-tune them to get the most out of a coordinated trading strategy with the algorithm. To summarize, these bots are computer programs that automatically buy and sell different cryptocurrencies at the optimal time to maximize profit. **Trading strategies** Every seasoned trader has a strategy for their deals. Accordingly, you must select a crypto trading bot that matches your style of purchasing and selling currencies, with appropriate risk management and portfolio diversification. **Momentum trading:** A bot programmed with this method evaluates the ebb and flow of the trading arena through its momentum. It suits investors who ride the rising momentum wave with their assets and then quickly sell them as the momentum reverses. When using this approach, investors understand that the timing of buy-in and sell-off must be perfect.
The crux of this idea is that an asset's cost will surge above its average before soon losing momentum and falling. **Arbitrage trading:** This is an excellent method for people wishing to invest in relatively low-risk trading. In this case, the bots do not rely on the performance of the cryptocurrency on the market, but rather profit from the price difference between different exchanges before the gap closes. Bots that use the arbitrage trading method come in handy when you need to conduct simultaneous deals at lightning speed. **Mean Reversion trading:** This trading strategy is built on the buy-low, sell-high notion, and an automated algorithm can help by computing the mean and acting as a trader on your behalf. This saves time and money, and reduces risk. Crypto bots also use other machine-learning-based methodologies, such as Naive Bayes classifiers and natural language processing. You can evaluate the effectiveness of the crypto bot you choose by looking at the ones that match your workflow. **User experience:** This is something you should pay close attention to when evaluating the efficiency of any crypto bot. These bots are intended to make the lives of investors easier, so the technology should be usable by both advanced and novice users. An easy UI and simple user settings are telltale markers of the finest cryptocurrency trading bots. Ideal software explains its trading actions at every step and includes simple-to-use operations. **Wrapping up:** A cryptocurrency trading bot is a profitable investment if it is simple to use and responds to changing market conditions. Bots are not a viable alternative until you customize and program them to fit your trading approach. However, they can be a much better alternative to hectic crypto trading, repetitive tasks, and the monotony of having to keep up with the statistics at all times.
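As a toy illustration of the mean-reversion idea described above, a bot's signal logic can be boiled down to comparing the latest price against a recent average. The window and threshold values below are made up for illustration, not trading advice:

```javascript
// Toy mean-reversion signal: compare the latest price to a simple
// moving average and emit "buy", "sell", or "hold".
function meanReversionSignal(prices, windowSize = 5, threshold = 0.02) {
  if (prices.length < windowSize) return "hold"; // not enough data yet
  const recent = prices.slice(-windowSize);
  const mean = recent.reduce((a, b) => a + b, 0) / windowSize;
  const last = prices[prices.length - 1];
  const deviation = (last - mean) / mean;
  if (deviation <= -threshold) return "buy";  // well below the mean
  if (deviation >= threshold) return "sell";  // well above the mean
  return "hold";
}

console.log(meanReversionSignal([100, 101, 99, 100, 90])); // "buy"
```

A real bot would layer order execution, exchange fees, and risk controls on top of a signal like this.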
Fire Bee Techno Services is the premier **[Crypto Trading Bot Development Company](https://www.firebeetechnoservices.com/crypto-trading-bot-development)**. We offer a highly secure and tailored solution for your company. We offer complete customer satisfaction as well as cost-effective services.
nora_3
1,884,364
Introduction to Queue Data Structure In JavaScript
Ever wonder how messages in a chat stay in the right order, even on different phones? It's like a...
0
2024-06-16T18:41:00
https://dev.to/rolalove/introduction-to-queue-data-structure-in-javascript-4li8
datastructures, javascript, beginners, computerscience
Ever wonder how messages in a chat stay in the right order, even on different phones? It's like a waiting line! New messages join at the back, and you see them one by one from the front. Queues make this happen. We'll learn more about them soon! >You are probably wondering why you need to learn the queue data structure; the simple answer is **versatility**. Queues offer a simple yet powerful way to structure data for various applications. Their flexibility allows them to handle tasks, network traffic, or even user interactions efficiently. ## What is a Queue in Data Structures? A Queue is defined as a linear data structure that is open at both ends, with operations performed according to the **First In First Out (FIFO) principle**. Imagine it like a line of people waiting to make a deposit at the bank. The first person who gets in line (the first element added) is the first to be attended to (the first element to be removed). The end at which insertion takes place is called the **REAR/TAIL** (the line the customer enters to wait) while the end at which deletion takes place is called the **FRONT/HEAD** (the path the customer passes to leave). ## Visual Representation Of Queue Data Structure ![Visual Representation Of Queue Data Structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/93m34kw984t3ikkyfqr5.jpg) > There are two approaches to considering the structure of a queue, and both depend on the programmer's preference. As a programmer, if you consider the left end as the **Front** then your **Rear** will be at the right end. If you consider the left end as the **Rear** then your **Front** will be at the right end. ## Types of Queue ![Types of Queue](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0y99xw9spzmsy4ggbk3s.jpeg) **Simple Queue:** In a simple queue, insertion occurs at the rear and removal occurs at the front. It strictly follows the FIFO (First in First out) principle.
**Circular Queue:** A circular queue is an extended version of a linear queue as it follows the First In First Out principle with the exception that it connects the last node of a queue to its first by forming a circular link. This is useful when memory is limited and you only need to access the most recent elements. **Priority Queue:** A priority queue is a special type in the data structure where each element is associated with a priority. In this queue, elements with higher priority are dequeued before the elements with lower priority. If two elements carry the same priority, they are served as per their order in the queue. **Dequeue (Double-Ended Queue):** This queue allows insertion and deletion from both ends (front and rear). It acts like a queue with two access points, offering more flexibility than a linear queue. ##Applications of Queue 1. **Task Scheduling:** - Operating Systems: The operating system in your computer relies heavily on queues to manage tasks for the CPU. Processes are added to a queue, and the CPU picks up the first one in line for execution. This ensures fair allocation of processing power among multiple programs running on your computer. - Printer Spooling: Ever wondered how multiple print jobs don't clash when sent to a printer? Queues come to the rescue! Print jobs are added to a queue, and the printer processes them one by one, ensuring order and preventing collisions. 2. **Buffering:** - Real-time Systems: Queues act as buffers between slow and fast devices. For instance, they can be used to temporarily store keystrokes typed on a keyboard until the CPU is ready to process them. 3. **Other Applications:** - Message Passing: Queues are a core component in many messaging systems. Messages are placed in a queue, ensuring they are delivered in the order they were sent, even if the recipient is unavailable. - Multimedia Players: Playlists in music players can be implemented using queues. 
Songs are added to the queue, and they are played in the order they were added. ## Basic Queue Operations - **enqueue()**: Process of adding or storing an element at the end of the queue. - **dequeue()**: Process of removing or accessing an element from the front of the queue. - **peek()**: Used to get the element at the front of the queue without removing it. - **isEmpty()**: Checks if the queue is empty. - **size()**: Finds the number of elements in the queue. ## Implementation of Queues Queues can be implemented using two techniques: 1. Implementation of the queue data structure using arrays 2. Implementation of the queue data structure using a linked list >For this article, our focus is on the implementation of a queue using the array method. Let’s explore each basic operation. >**Prerequisites:** - [VS Code: download and install VS Code from the official website](https://code.visualstudio.com/download) - [Quokka.js Extension](https://quokkajs.com/docs/index.html) Open VS Code and go to the Extensions tab (usually on the left sidebar). Search for "Quokka.js" and install the extension by Wallaby.js. #### Running JavaScript Code with Quokka How to create a new Quokka file: - Open the Command Palette (Ctrl+Shift+P on Windows/Linux, Cmd+Shift+P on macOS). - Type "Quokka" and select the option "Quokka: New JavaScript File". This will create a new file with the .js extension that's already set up for Quokka. - Start writing your JavaScript code in this file. As you type, Quokka will automatically evaluate your code and display the output in a dedicated panel at the bottom of the editor. - No need to save the file or run it manually. Quokka provides real-time feedback. >Now, let's play around with the implementation of a queue! ### isEmpty Operation in Queue Data Structure This operation returns a boolean value that indicates whether the queue is empty or not. ``` <script> isEmpty(){ // return true if the queue is empty.
return this.items.length == 0; } </script> ``` #### Implementation Example of isEmpty Operation ``` class Queue { constructor() { this.items = []; } isEmpty() { // return true if the queue is empty. return this.items.length == 0; } } const queue = new Queue(); console.log(queue.isEmpty()); //true ``` ### Size Operation in queue data structure Finds the number of elements in the queue. #### Implementation Example of size Operation ``` class Queue { constructor(size = 10) { this.items = Array(size).fill(null); } //Finds the number of elements in the queue size() { return this.items.length; } isEmpty() { return this.items.length === 0; } } const queue = new Queue(); queue; //Queue{ items: Array(10) [null,null,null,null,null,null,null,null,null,null,]} console.log(queue.size()); //10 (the null placeholders still count toward the length) console.log(queue.isEmpty()); //false ``` ### Enqueue Operation in queue data structure The enqueue() operation adds or stores an element at the end of the queue. Therefore we can say the enqueue operation **inserts** data into the queue. The following steps should be taken to enqueue (insert) data into the queue: - **Step 1:** Check if the queue is full. - **Step 2:** If the queue is full, return overflow error and exit. - **Step 3:** If the queue is not full, increment the rear pointer to point to the next space. - **Step 4:** Add the data element to the queue location, where the rear is pointing. - **Step 5:** Return success. ![Enqueue Operation in queue data structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3rxi4ph6cxcb34rxhvjp.jpeg) ``` <script> // enqueue function enqueue(element){ // adding element to the queue this.items.push(element); } //This function adds an element at the rear of a queue. //We have used push() method of array to add an element at the end of the queue.
</script> ``` #### Implementation Example of Enqueue Operation ``` class Queue { constructor() { this.items = []; this.rear = 2; this.front = 3; } //add element to the end of the queue enqueue(element) { this.items.push(element); } } const queue = new Queue(); queue.enqueue(7); queue.enqueue(-1); console.log(queue); //Queue { items:[7, -1], rear:2, front:3} ``` ### Dequeue Operation in queue data structure The dequeue() operation removes or accesses an element from the **front** of the queue. The following steps should be taken to dequeue (access) data in the queue: - **Step 1:** Check if the queue is empty. - **Step 2:** If the queue is empty, return the underflow error and exit. - **Step 3:** If the queue is not empty, access the data where the front is pointing. - **Step 4:** Increment the front pointer to point to the next available data element. - **Step 5:** Return success. ![Dequeue Operation in queue data structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ttvdueechcq1sca3xai.jpeg) ``` // dequeue function dequeue() { // removing element from the queue // returns underflow when called // on empty queue if(this.isEmpty()) return "Underflow"; return this.items.shift(); } //This function removes an element from the front of a queue. We have used the shift method of an array to remove an element from the queue. ``` #### Implementation Example of Dequeue Operation ``` class Queue { constructor() { this.items = [7,-1]; this.rear = 0; this.front = 0; } //remove element from the front of the queue dequeue() { if (this.items.length == 0) { return "Underflow error"; } else return this.items.shift(); } } const queue = new Queue(); queue.dequeue(); //dequeue the first element (returns 7) console.log(queue); //Queue {items:[-1], rear:0, front:0} ``` ### Peek Operation in queue data structure This function returns the front element of the queue without removing it. We simply return the 0th element of the array to get the front of the queue.
```
// peek function
peek() {
  // returns the front element of the queue without removing it
  if (this.isEmpty()) return "No elements in Queue";
  return this.items[0];
}
// isEmpty() checks the length property of the array: if the length is 0, the queue is empty.
```

#### Implementation Example of Peek Operation
```
class Queue {
  constructor() {
    this.items = [7, -1];
  }

  // get element at the front of the queue without removing it
  peek() {
    if (this.items.length === 0) {
      return "No elements in the queue";
    }
    return this.items[0];
  }
}

const queue = new Queue();
// peek at the first element
console.log(queue.peek()); // 7
```

> To summarize, queues are a foundational concept in computer science, operating under the First In First Out (FIFO) principle. They provide a structured approach to managing data, ensuring orderly processing in applications such as task scheduling, network communication, and real-time systems. Whether implementing a simple queue for basic operations or exploring advanced variations like circular or priority queues, mastering these concepts equips developers with powerful tools for efficient data handling. Understanding and leveraging queues effectively helps developers optimize resource allocation and build robust, responsive applications.
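The step-by-step enqueue and dequeue algorithms above describe a fixed-capacity queue with front and rear pointers, while the implementation examples rely on push() and shift() instead. To show how the two views fit together, here is a minimal sketch of a fixed-size circular queue; the class name, the capacity parameter, and the "Success"/"Overflow"/"Underflow" return values are my own choices for illustration, not taken from the examples above.

```javascript
class CircularQueue {
  constructor(capacity = 5) {
    this.items = Array(capacity).fill(null);
    this.capacity = capacity;
    this.front = 0; // index of the next element to dequeue
    this.count = 0; // number of stored elements
  }

  isEmpty() { return this.count === 0; }
  isFull() { return this.count === this.capacity; }

  // Steps 1-2: check for overflow; Steps 3-5: write at rear and advance
  enqueue(element) {
    if (this.isFull()) return "Overflow";
    const rear = (this.front + this.count) % this.capacity;
    this.items[rear] = element;
    this.count++;
    return "Success";
  }

  // Steps 1-2: check for underflow; Steps 3-5: read at front and advance
  dequeue() {
    if (this.isEmpty()) return "Underflow";
    const element = this.items[this.front];
    this.items[this.front] = null;
    this.front = (this.front + 1) % this.capacity;
    this.count--;
    return element;
  }
}

const q = new CircularQueue(3);
q.enqueue(7);
q.enqueue(-1);
console.log(q.dequeue()); // 7
console.log(q.dequeue()); // -1
console.log(q.dequeue()); // Underflow
```

Because the indices wrap around with the modulo operator, slots freed by dequeue() are reused without shifting the whole array, which is what the rear/front pointer model in the algorithm steps is describing.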
rolalove
1,890,386
Failure Is Not An Option For AI (And It Shouldn't Be)
My intent is to write a series of practical posts related to AI and programming. Topics like running...
0
2024-06-16T18:38:07
https://dev.to/peter_truchly_4fce0874fd5/failure-is-not-an-option-for-ai-and-it-shouldnt-be-1hm2
ai, community, chatgpt
My intent is to write a series of practical posts related to AI and programming: topics like running local LLMs with agent frameworks, automated source code manipulation, execution under Docker and more, all with the use of .NET and C#, Elasticsearch and AWS. But I realized I would first like to express my view on the more fundamental, theoretical and even philosophical aspects of LLMs/AI. This post therefore does not require any programming skills.

Working with AI is like playing with fire. That is not an attempt at dramatization; I am naming a parallel I see. Fire itself was very useful to humans. The invention of the combustion engine allowed us to be far more productive and to travel in new ways. We did not expect the fire to control itself; rather, we built a technology around it: a tightly controlled cycle where just the right amount of fuel ignites at just the right time. When we moved to the jet engine, where combustion became a continuous process, the improvement was at least 10x in both the speed and the size of the airplanes we could build. Since then, we have been able to travel from one place to almost anywhere else on the planet within a day. Everything changed once more with the rocket engine and nuclear fission. With great power and possibilities come great danger and responsibility. So where are we with AI?

Let's start right away with the mighty 4o and a simple yet treacherous task:

```
me: How many words are in your next response?
ChatGPT: My next response is 151 words long.
me: If we count that now, how many words is it really?
ChatGPT: The provided response actually contains 7 words.
me: How should the sentence look like if we want that sentence to be true?
ChatGPT: To make the sentence, "My next response is 151 words long," exactly 151 words long, it would look like this:
My next response is 151 words long. word word word word word
... (lot more, but I truncated this part)
This sentence has a total of 151 words.
me: That is not very practical, what about changing the number 151 to 7 ?
ChatGPT: To make the sentence accurate by changing the number to 7, it would look like this:
"My next response is 7 words long."
me: Now let's make an agreement that I will ask you 'How many words is in your next response?' and you will reply with the sentence we just created 'My next response is 7 words long.' and nothing else.
ChatGPT: (Memory updated) Got it! I will respond accordingly.
me: How many words are in your next response?
ChatGPT: My next response is 7 words long.
```

It is debatable whether the current response counts as a "next" response and whether '7' should count as a word, but what is the main point here? We knew upfront, based on the technology, that it is impossible for an LLM to know the length of a response before that response is generated. It is not impossible for the LLM to guess it right, but the probability of a correct answer is low. The problem is that we are asking the LLM to do something it technically cannot do, at least not with 100% accuracy.

But let's pause here and ask ourselves: when it comes to programming or math, how much could humans actually do with 100% accuracy without using paper, a calculator, a computer, a compiler, an IDE or any other "tool"? It is truly astonishing what an LLM can generate when asked something like: *"Create a shell script which will install docker on ubuntu and set up remote access secured by newly created self signed certificate."* This is not how humans would approach such a task, however; at least it was not before there was ChatGPT. We humans are constantly trying and failing until we get something right (in the best case) or we simply stop. Failure is a luxury we have. Before there was a world with LLMs, there was a world where nobody expected to get anything right on the first attempt.
That is why we have all these editors with spell and syntax checkers, compilers producing all sorts of compilation errors, runtimes throwing runtime errors, loggers producing log files, and so on. All of these give us an opportunity to make things right after we failed on the first try. All of these produce feedback, additional information, new input data!

Am I simply referring to prompt chaining, mixture of experts, agent frameworks and tools? No, not only. There is much more that we could and should do to improve both our results and the AI/LLM itself. I see three areas of improvement:

- Our expectations - Where do we really want to go?
- Technical aspects of implementation - Which kind of engine are we building?
- Training data - Is our fuel good enough?

First, we must adjust our expectations and recognize what is already great today, even with small models. When it comes to code generation, LLMs actually exceed humans in many respects: the sheer speed at which an LLM can produce a piece of code, and the amount of documentation and the number of platforms, programming languages and libraries it can draw on, are simply astounding. On the other hand, it is not reasonable to expect any LLM to output a complete, error-free project in one response based on a single prompt. Nor should we simply hope that by growing ever-larger models trained on ever-larger heaps of generic training data we will solve all the issues and limitations of current LLMs. Model size does matter beyond "bigger == better", as larger models clearly exhibit "emergent" abilities [1] not present in smaller ones.

The second step is our task again. We must contain and constrain the AI. We must confront it with reality or with a simulation environment. We must provide it with tools similar to ours: editors which check syntax and offer autocompletion, compilers and runtime environments where the code can actually be tested.
Formal languages, with all the theory and tooling around them. An interesting task could be revisiting all known programming paradigms and methodologies, since some of them could be a better fit for AI, such as functional programming, the incremental build model and test-driven development. This way the AI would not be allowed to present us code with calls to hallucinated functions, code which does not compile, or code which does not fulfill its intended purpose.

The third part is the training data and the way we obtain and use it. Not even the whole internet, with all the "garbage" included, as mentioned in [3], is enough to train the models of the future. It is expected that we will approach this limit and reach full utilization of the human-generated data stock around the year 2028 [4]. Garbage in, garbage out (GIGO) is a commonly used phrase, but in my opinion even a bad example is still an example. Let's imagine that each piece of training input were first scrutinized by an AI itself: each piece of code compiled, tested, and even fixed and debugged if necessary. Only then, with all that enhanced context, would it be used to train the next model iteration. It has already been observed that this approach can work; specifically, smaller models are able to "learn" this way from larger ones, as described in [2]. Here we can see the difference between reading a book and using the knowledge stored in that book. We learn far more by experience and practice than by reading the tales of others. It won't be different with human-like AI or AGI.

![diagram/picture: simulation, experiment, feedback + best practices with simple feedback loop](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v0f17vdmlego280kn2qr.png)

Imagine a person, a programmer, who has studied a lot, has read all the books, documentation and internet blog posts, but has not yet tried to compile or run a single program.
Now it is our turn: let's help him bootstrap his first dev environment!

> :warning: The content of this article was NOT generated by AI, except for the actual LLM responses within the chat example.

**References:**

>[1] Wei, J., Tay, Y., Bommasani, R., et al., (08/2022), Emergent Abilities of Large Language Models, *Transactions on Machine Learning Research*

>[2] Mukherjee, S., Mitra, A., et al., (06/2023), Orca: Progressive Learning from Complex Explanation Traces of GPT-4, *Microsoft Research*

>[3] Aschenbrenner, L., (06/2024), Situational Awareness: The Decade Ahead

>[4] Villalobos, P., Ho, A., et al., (06/2024), Will we run out of data? Limits of LLM scaling based on human-generated data, *Machine Learning, ICML*
peter_truchly_4fce0874fd5
1,890,497
Creative Photography Showcase Slider | Swiper Slider
This CodePen pin showcases a creative photography slider using Swiper Slider, designed for a striking...
0
2024-06-16T18:24:47
https://dev.to/creative_salahu/creative-photography-showcase-slider-swiper-slider-g6l
codepen
This CodePen showcases a creative photography slider built with Swiper, designed for a striking presentation of images. The slider features six slides, each with a parallax effect and a fade transition. The design includes layered images, stylish typography, and responsive layouts. The Swiper configuration supports looping, autoplay, and customizable pagination, enhancing user interaction. Social media links are incorporated into the sidebar for easy navigation. The CSS ensures a smooth, visually appealing experience across devices, from desktops to mobile screens. This slider is ideal for photographers and creatives seeking a modern, dynamic way to display their portfolio.

{% codepen https://codepen.io/CreativeSalahu/pen/rNgpJXo %}
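The behavior described above (fade transition, parallax, looping, autoplay, and customizable pagination) maps onto a handful of standard Swiper options. Below is a rough sketch of such a configuration; the selector names and the autoplay delay are illustrative assumptions, not values taken from the actual pen.

```javascript
// Sketch of a Swiper configuration matching the described slider behavior.
// The ".swiper-pagination" selector and the 4000 ms delay are assumed values.
const swiperConfig = {
  effect: "fade",    // fade transition between the six slides
  parallax: true,    // enable parallax on elements with data-swiper-parallax
  loop: true,        // wrap around after the last slide
  autoplay: {
    delay: 4000,     // advance automatically every 4 seconds
    disableOnInteraction: false,
  },
  pagination: {
    el: ".swiper-pagination",
    clickable: true, // customizable, clickable pagination bullets
  },
};

// In the browser, the slider would be created with:
// new Swiper(".swiper", swiperConfig);
```

Keeping the options in a named object like this makes it easy to tweak the showcase (for example, switching `effect` or the autoplay delay) without hunting through the initialization call.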
creative_salahu