Dataset columns: id (int64, 5–1.93M), title (string, 0–128 chars), description (string, 0–25.5k chars), collection_id (int64, 0–28.1k), published_timestamp (timestamp[s]), canonical_url (string, 14–581 chars), tag_list (string, 0–120 chars), body_markdown (string, 0–716k chars), user_username (string, 2–30 chars)
1,892,439
GO — Project structure
I started programming in golang for real this year (2022), and the first thing I did was look for...
0
2024-06-18T20:39:08
https://dev.to/espigah/go-estrutura-de-projetos-1j0k
go, tips, architecture
---
title: GO — Project structure
published: true
description:
tags: GoLang, Tips, softwarearchitecture
cover_image: https://media.licdn.com/dms/image/D4D12AQEM99NZeVMV7A/article-cover_image-shrink_720_1280/0/1703243547209?e=2147483647&v=beta&t=5mcy9qw0q3Ttz15cdvblS3D6ymUnBhXtEnE8yxiTTR0
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-18 13:16 +0000
---

I started programming in golang for real this year (2022), and the first thing I did was look for references on the best way to evolve the structure of my project. This post will be just one among the many that cover the same subject, and perhaps precisely because of that I decided to write it.

First of all, golang is already quite peculiar in how it handles folders/packages and, on top of that, it has a strongly opinionated spirit, with many official docs describing the "goway" of doing things (full of hands-off rules). However, when it comes to how you organize your files and folders, there is no real guidance, so everyone ends up applying their own interpretation of the world to this part.

I will split this post into 3 references and then show how the mix of those references turned out in the project.

## First reference

> A complex system that works is invariably found to have evolved from a simple system that worked. -- Gall's Law

For small applications the project structure should be simple.

<figure>
<img src="https://miro.medium.com/v2/resize:fit:346/1*_zTyOnU7yCe9wl4a1AZlAg.png" alt="Image of a simple project with everything at the root">
<figcaption>https://innovation.enova.com/gophercon-2018-how-do-you-structure-your-go-apps/</figcaption>
</figure>

## Second reference

The "community" put together a survey of historical and emerging project layout patterns common in the Go ecosystem. There is a lot of nice stuff in that survey, but what caught my attention were the `/cmd` and `/internal` folders.

### /cmd

Main applications for this project.
The directory name for each application should match the name of the executable you want to have (e.g., /cmd/myapp).

### /internal

Private application and library code. This is the code you don't want others importing into their applications or libraries. Note that this layout pattern is enforced by the Go compiler itself.

## Third reference

Architectures that better separate the "details" from what actually delivers value.

<figure>
<img src="https://miro.medium.com/v2/resize:fit:300/0*6endj9h0yHQcrwpY.png" alt="Image showing the infrastructure wrapping the domain">
<figcaption></figcaption>
</figure>

## Result

For a simple application I try to keep it simple; however, when the scope gets a bit bigger, I try to draw a light distinction between what is "core"/domain and what is detail/infrastructure.

<figure>
<img src="https://miro.medium.com/v2/resize:fit:201/1*cAqVTwIsa--ZW2IOKvi3WQ.png" alt="Image showing one folder for infra and another for the domain">
<figcaption></figcaption>
</figure>

Note that in _cmd_ I don't have a tuttipet folder, as the reference project suggests. At first I tried to follow the suggested pattern, but since this API already shipped with a command-line interface and a Terraform provider, I decided to leave it this way.

<figure>
<img src="https://miro.medium.com/v2/resize:fit:521/1*EZWz2j9gY4NRSLY7ChQ9Jw.png" alt="Image showing the files inside the domain folder">
<figcaption></figcaption>
</figure>

Taking a quick zoom into the core: I try to be simplistic here and avoid creating folders. I keep only 1 point of contact with the outside world (main.go); whatever is generalized gets its own file, and whatever isn't stays within its context. Simple.
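Putting the references above together, the overall split can be sketched as a directory tree. This is a rough illustration only; apart from `cmd`, `internal`, and the tuttipet/infra split mentioned in the post, the names here are made up:

```
myapp/
├── cmd/
│   ├── api/main.go        # HTTP API entry point
│   ├── cli/main.go        # command-line interface
│   └── terraform/main.go  # terraform provider entry point
├── internal/
│   ├── tuttipet/          # "core"/domain: usecases and entities
│   └── infra/             # details: storage, transport, config
└── go.mod
```

Because everything below `internal/` is compiler-enforced as private, other modules cannot import the domain or infra packages directly.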
<figure>
<img src="https://miro.medium.com/v2/resize:fit:446/1*l0SOZMv7hsEYnkTbYTQU6g.png" alt="Image showing tuttipet">
<figcaption></figcaption>
</figure>

With tuttipet.New (short, concise and evocative) the "dirty" layer can interact with the usecases (I find the word usecase easier to grasp than interactor).

<figure>
<img src="https://miro.medium.com/v2/resize:fit:500/format:webp/1*rzfpebxLUEokne63Pvs5Ag.png" alt="Image showing the folders inside infra">
<figcaption></figcaption>
</figure>

Taking a quick zoom into the details: these are simply the tools through which the domain achieves its success.

## Conclusion

I'm still a rookie on the paths golang offers, still feeling out what can be done with it; nevertheless, even though I don't love the Go way of doing some things, it has proven to be quite simple and robust.

In short: I try to keep it simple when I can, and if it gets too complex... back to the drawing board.

## Other references

https://dev.to/booscaaa/implementando-clean-architecture-com-golang-4n0a
https://github.com/golang-standards/project-layout
https://blog.boot.dev/golang/golang-project-structure/
https://github.com/bnkamalesh/goapp
https://www.wolfe.id.au/2020/03/10/how-do-i-structure-my-go-project/
https://blog.logrocket.com/flat-structure-vs-layered-architecture-structuring-your-go-app/
https://developer20.com/how-to-structure-go-code/
https://dev.to/jinxankit/go-project-structure-and-guidelines-4ccm
https://github.com/bxcodec/go-clean-arch
https://golangexample.com/example-go-clean-architecture-folder-pattern/
https://www.calhoun.io/flat-application-structure/
https://go.dev/doc/effective_go#names
https://go.dev/blog/package-names

Original post: https://medium.com/@espigah/go-layout-do-projeto-18aacce8089d
espigah
1,892,866
🎉 Celebrating 90 Hours of Coding! 🚀
I'm excited to announce that I've reached a significant milestone: 90 hours of coding on @__codetime...
0
2024-06-18T20:31:27
https://dev.to/zobaidulkazi/celebrating-90-hours-of-coding-3mln
webdev, life, codingmilestone, programming
I'm excited to announce that I've reached a significant milestone: 90 hours of coding on @[__codetime](https://wakatime.com/@zobaidulkazi) and @[WakaTime](https://wakatime.com/@zobaidulkazi)! 🌟 Each hour has been a step forward in my journey to master JavaScript, TypeScript, and Node.js. Here's a glimpse into my tech journey: - Check out my portfolio: [zobkazi.github.io](https://zobkazi.github.io) Thank you for your support and inspiration! Let's continue pushing boundaries and building amazing things together. 🙌 @nodejs_ @codetimedev @wakatime ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1b8zzjt2q8pnw6e4jhu2.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ryim3l4oicy1i1wnn8pe.png)
zobaidulkazi
1,892,865
jjnjnjnj
hii
0
2024-06-18T20:29:54
https://dev.to/abdul_almas/jjnjnjnj-18ng
hii
abdul_almas
1,892,864
Understanding White Box Testing: An In-Depth Exploration
Introduction In the software development lifecycle, ensuring the quality and reliability of the...
0
2024-06-18T20:29:53
https://dev.to/keploy/understanding-white-box-testing-an-in-depth-exploration-3ml
box, opensource, aitools, testing
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pf4m5thqnrd27ekpc0x0.png)

**Introduction**

In the software development lifecycle, ensuring the quality and reliability of the product is paramount. Among the various testing methodologies, **[White Box Testing](https://keploy.io/docs/concepts/reference/glossary/white-box-testing/)** stands out due to its rigorous approach towards code validation and optimization. Also known as Clear Box Testing, Glass Box Testing, Open Box Testing, or Structural Testing, White Box Testing delves deep into the internal structures or workings of an application, unlike its counterpart, Black Box Testing, which focuses solely on the external functionalities.

**What is White Box Testing?**

White Box Testing is a testing technique that involves the examination of the program's internal structures, design, and coding. The tester, in this case, needs to have an in-depth knowledge of the internal workings of the system. This form of testing ensures that all internal operations are executed according to the specified requirements and that all internal components have been adequately exercised.

**Key Aspects of White Box Testing**

1. **Code Coverage**: White Box Testing aims to achieve maximum code coverage. It ensures that all possible paths through the code are tested, which includes branches, loops, and statements.
2. **Unit Testing**: This involves testing individual units or components of the software. The primary goal is to validate that each unit of the software performs as designed.
3. **Control Flow Testing**: This technique uses the program's control flow to design test cases. It ensures that all possible paths and decision points in the program are tested.
4. **Data Flow Testing**: Focuses on the points at which variables receive values and the points at which these values are used. It identifies potential issues such as variable mismanagement and incorrect data handling.
5. **Branch Testing**: Aims to ensure that each decision (true/false) within a program's control structures is executed at least once.

**Advantages of White Box Testing**

1. **Thoroughness**: By examining the internal workings of the application, testers can identify and fix more bugs, leading to more robust software.
2. **Optimization**: Allows for the optimization of code by identifying redundant or inefficient paths.
3. **Security**: Enhances security by identifying hidden errors and potential vulnerabilities within the code.
4. **Quality**: Improves the overall quality of the software as it ensures that all parts of the code are functioning as intended.

**Challenges in White Box Testing**

1. **Complexity**: Requires a deep understanding of the internal structure of the code, which can be complex and time-consuming.
2. **Scalability**: Can be difficult to scale for large applications due to the detailed level of analysis required.
3. **Maintenance**: As the software evolves, maintaining comprehensive White Box test cases can be challenging.
4. **Cost**: Generally more expensive than Black Box Testing due to the detailed knowledge and time required.

**White Box Testing Techniques**

1. **Statement Coverage**: Ensures that every statement in the code is executed at least once.
2. **Decision Coverage**: Ensures that every decision point (such as if statements) is executed in all possible outcomes (true/false).
3. **Condition Coverage**: Ensures that all the boolean expressions are tested both for true and false.
4. **Multiple Condition Coverage**: Combines multiple conditions in decision making and ensures all possible combinations are tested.
5. **Path Coverage**: Ensures that all possible paths through a given part of the code are executed.
6. **Loop Coverage**: Ensures that all loops are tested with zero, one, and multiple iterations.
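As a minimal illustration of the difference between statement and branch coverage, consider this toy function (written in JavaScript for brevity; the function and inputs are made up for this sketch):

```
// A function with one decision point, i.e. two branches to cover.
function classify(n) {
  if (n < 0) {
    return "negative";
  }
  return "non-negative";
}

// Statement coverage requires every line to run at least once.
// Branch coverage additionally requires both outcomes of the `if`:
console.log(classify(-1)); // exercises the true branch
console.log(classify(3));  // exercises the false branch
```

A single call such as `classify(3)` would leave the true branch untested even though most statements ran, which is exactly the gap branch coverage is designed to catch.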
**White Box Testing Tools**

Several tools assist in performing White Box Testing by automating various testing aspects, such as code coverage and static code analysis. Popular tools include:

1. **JUnit**: A widely used framework for unit testing in Java.
2. **CppUnit**: A unit testing framework for C++.
3. **NUnit**: A unit testing framework for .NET languages.
4. **JMockit**: A toolkit for testing Java code with mock objects.
5. **Emma**: A tool for measuring code coverage in Java.

**Best Practices in White Box Testing**

1. **Early Integration**: Integrate White Box Testing early in the development cycle to identify issues sooner.
2. **Regular Updates**: Regularly update test cases to reflect changes in the codebase.
3. **Collaborative Approach**: Collaborate with developers to understand the intricacies of the code.
4. **Comprehensive Documentation**: Maintain thorough documentation of test cases and results to facilitate maintenance and scalability.
5. **Automated Testing**: Leverage automated testing tools to increase efficiency and accuracy.

**Conclusion**

White Box Testing is an indispensable part of the software testing process. Its focus on the internal workings of an application ensures a thorough evaluation of code functionality, security, and performance. Despite its challenges, the benefits it brings in terms of improved software quality, optimization, and reliability make it a critical practice for any serious software development project. By employing White Box Testing techniques and best practices, development teams can deliver more robust, secure, and efficient software products.
keploy
1,892,863
The Future of NFT Gaming with Sandbox Clone Scripts
Introduction The realm of digital assets and blockchain technology has...
27,673
2024-06-18T20:29:48
https://dev.to/rapidinnovation/the-future-of-nft-gaming-with-sandbox-clone-scripts-5h76
## Introduction The realm of digital assets and blockchain technology has expanded significantly over the past few years, introducing innovative ways to leverage technology in various sectors, including gaming. One of the most notable advancements is the integration of Non-Fungible Tokens (NFTs) into gaming platforms, which has revolutionized how players interact with games, offering them a unique blend of entertainment and investment opportunities. ## Overview of NFT Gaming NFT gaming combines traditional gaming mechanics with the unique aspects of NFTs, allowing players to own, buy, sell, and trade in-game assets as digital tokens on the blockchain. Unlike standard digital assets in traditional games, NFTs have distinct, verifiable properties that make them unique and hence, potentially more valuable. ## Importance of Sandbox Clone Script in the NFT Space The Sandbox Clone Script is a vital tool in the NFT space, particularly for developers and entrepreneurs looking to launch their own virtual worlds and gaming platforms. This script is essentially a ready-made solution that mimics the core functionalities of the popular NFT game, The Sandbox, which allows users to create, own, and monetize their gaming experiences using NFTs. ## What is Sandbox Clone Script? The Sandbox Clone Script is a comprehensive, ready-made software solution designed to replicate the core functionalities of The Sandbox, a popular virtual world and gaming ecosystem built on blockchain technology. This script enables entrepreneurs and businesses to launch their own decentralized virtual environment where users can create, own, and monetize their gaming experiences using cryptocurrency and NFTs. ## How Does Sandbox Clone Script Work? 
A Sandbox clone script is essentially a pre-built software solution that replicates the core functionalities of The Sandbox, a popular virtual world and gaming ecosystem where users can create, own, and monetize their gaming experiences using blockchain technology. The script incorporates blockchain technology, which ensures that all transactions within the platform are secure and transparent. ## Types of Sandbox Clone Scripts Sandbox clone scripts are essentially pre-built software solutions that mimic the functionality and features of popular sandbox-style games, where players can create, modify, and interact with a digital environment. These scripts are particularly popular among developers who wish to create similar games without starting from scratch, offering a foundation upon which they can build and innovate. ## Benefits of Using Sandbox Clone Script Using a Sandbox clone script offers a range of benefits for businesses and developers looking to enter the virtual real estate or gaming market. A Sandbox clone script is essentially a pre-built software solution that replicates the core functionalities of the popular virtual world and decentralized gaming platform, The Sandbox. ## Challenges in Implementing Sandbox Clone Script Implementing a Sandbox clone script comes with a variety of challenges that can impact the success of the project. These challenges range from technical issues to regulatory compliance and user acceptance. Each of these areas requires careful planning and strategic decision-making to ensure the platform is robust, secure, and capable of meeting the needs of its users. ## Future of NFT Gaming with Sandbox Clone Scripts The future of NFT gaming, particularly through platforms like Sandbox clone scripts, looks promising with several trends and innovations on the horizon. 
Sandbox clone scripts are essentially customizable frameworks that mimic the core functionalities of The Sandbox game, allowing developers to create and deploy their own decentralized gaming platforms without starting from scratch. ## Real-World Examples of Sandbox Clone Script Implementations Implementing sandbox environments and clone scripts in real-world scenarios has demonstrated significant benefits across various sectors. These technologies not only facilitate innovation and efficiency but also ensure greater security and compliance with regulatory standards. ## Comparisons & Contrasts Comparing a Sandbox clone script to the original Sandbox game involves looking at various aspects such as features, user experience, customization, and community support. The original Sandbox game, known for its vast open-world and creative freedom, offers a polished experience with continuous updates and a strong community. ## Why Choose Rapid Innovation for Implementation and Development Choosing the right partner for implementing and developing blockchain and AI technologies can significantly impact the success of a project. Rapid Innovation stands out as a preferred choice for several reasons, including their expertise in AI and blockchain, proven track record, and commitment to customization and support. ## Conclusion Understanding the importance of a proven track record and the value of customization and support is crucial for businesses aiming to build and maintain a strong customer base. A proven track record establishes credibility and reliability, reassuring customers of the quality and stability of the product or service. ## Summary of Benefits and Challenges The integration of advanced technologies and methodologies in various sectors brings a host of benefits and challenges that are pivotal to understand for maximizing effectiveness and preparing for potential setbacks. 
One of the primary benefits of adopting new technologies is the significant enhancement in efficiency and productivity.

## Final Thoughts on the Future of NFT Gaming

The future of NFT gaming holds immense potential as it continues to blend the boundaries between digital ownership and gaming experiences. As we look ahead, several key factors suggest that NFTs will play a significant role in the evolution of the gaming industry.

📣📣 Drive innovation with intelligent AI and secure blockchain technology! Check out how we can help your business grow!

[Blockchain App Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)

## URLs

* <http://www.rapidinnovation.io/post/sandbox-clone-script-ready-to-deploy-nft-gaming-solution>

## Hashtags

#NFTGaming #SandboxCloneScript #BlockchainGaming #PlayToEarn #DigitalAssets
rapidinnovation
1,892,862
AWS Cert Manager integration with Prometheus with Domain Name
Problem When using CloudWatch metrics for ACM (AWS Certificate Manager), there is a limitation in...
0
2024-06-18T20:29:34
https://dev.to/amaze_singh41/aws-cert-manager-integration-with-prometheus-with-domain-name-4a2a
aws, monitoring, prometheus, sre
**Problem**

When using CloudWatch metrics for ACM (AWS Certificate Manager), there is a limitation in that only the ARN (Amazon Resource Name) of ACM certificates is displayed. This makes it difficult to identify which domain is expiring, as ARNs are not human-friendly and are hard to interpret at a glance. Having the domain name displayed instead would be more user-friendly and would make it easier to manage and monitor certificate expirations.

**Solution**

One effective solution to this problem is to integrate Prometheus for monitoring ACM certificates. Prometheus allows for more customizable and readable metrics. Here is how you can set up Prometheus to monitor ACM certificates by domain name:

1. Install Prometheus: First, install Prometheus on your monitoring server or use a managed service like Amazon Managed Service for Prometheus.
2. Set Up Exporter: Use a custom exporter or an existing one that can fetch ACM certificate details, including domain names. The exporter will query AWS ACM and transform the ARN-based metrics into domain-based metrics.

**Prerequisites**

Before we dive into the integration process, ensure you have the following:

- An AWS account with access to AWS Certificate Manager.
- A running Prometheus instance.
- A basic understanding of AWS IAM roles and permissions.

**Step 1: Setting Up AWS IAM Permissions**

To allow Prometheus to access ACM, you'll need to set up appropriate IAM permissions.

1. Create an IAM Policy: Navigate to the IAM console in AWS. Click on "Policies" and then "Create policy". Add the following JSON to allow read-only access to ACM:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "acm:ListCertificates",
        "acm:DescribeCertificate"
      ],
      "Resource": "*"
    }
  ]
}
```

2. Create an IAM Role: Go to the IAM console and click on "Roles" > "Create role". Select the "EC2" service, assuming Prometheus is running on an EC2 instance.
Attach the policy you created in the previous step and assign the role to the instance running Prometheus.

**Step 2: Create a Python script to fetch ACM details from AWS**

```
import boto3
import http.server
import socketserver
import logging

PORT = 9102

# Set up logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

class ACMExporter(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/metrics':
            self.send_response(200)
            self.send_header('Content-type', 'text/plain')
            self.end_headers()
            try:
                metrics = self.generate_metrics()
                self.wfile.write(metrics.encode())
            except Exception as e:
                logging.error(f"Error generating metrics: {e}")
                self.send_response(500)
                self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def generate_metrics(self):
        acm_client = boto3.client('acm', region_name='us-west-1')  # Change the region if necessary
        try:
            response = acm_client.list_certificates()
        except Exception as e:
            logging.error(f"Error listing certificates: {e}")
            raise
        certificates = response.get('CertificateSummaryList', [])
        metrics = []
        for cert in certificates:
            try:
                expiration_time = int(cert['NotAfter'].timestamp())
                cert_arn = cert['CertificateArn']
                domain_name = self.get_certificate_domain(acm_client, cert_arn)
                metrics.append(f'acm_cert_expiration_timestamp{{domain_name="{domain_name}"}} {expiration_time}')
            except Exception as e:
                logging.error(f"Error processing certificate {cert['CertificateArn']}: {e}")
        return '\n'.join(metrics)

    def get_certificate_domain(self, acm_client, certificate_arn):
        try:
            response = acm_client.describe_certificate(CertificateArn=certificate_arn)
            certificate_detail = response['Certificate']
            domain_name = certificate_detail['DomainName']
            return domain_name
        except Exception as e:
            logging.error(f"Error describing certificate {certificate_arn}: {e}")
            raise

def run(server_class=socketserver.TCPServer, handler_class=ACMExporter):
    server_address = ('', PORT)
    httpd = server_class(server_address, handler_class)
    logging.info(f'Starting ACM exporter on port {PORT}...')
    httpd.serve_forever()

if __name__ == "__main__":
    run()
```

Now, run the script using the `nohup` command. Before running it, install all the required libraries using `pip3`.

`nohup python3 cert-manager-metric.py &`

Once the script is running on port 9102, use the `curl` command to check whether it is serving metrics (note that the handler only responds on the `/metrics` path):

`curl http://localhost:9102/metrics`

**Step 3: Configure Prometheus to read metrics**

Edit the Prometheus configuration file and add the following job under `scrape_configs`:

```
scrape_configs:
  - job_name: 'acm-exporter'
    static_configs:
      - targets: ['localhost:9102']
```

Once you are done with the above configuration, restart the Prometheus service:

`systemctl restart prometheus`

And you are done. Now go to Prometheus and check the metrics.
amaze_singh41
1,870,615
Async / Await in JavaScript
Here's a question for you. Is JavaScript synchronous, as in it finishes one task at a time, or is it...
0
2024-06-18T20:19:01
https://dev.to/beaucoburn/async-await-in-javascript-24h4
webdev, javascript, beginners, programming
Here's a question for you. Is JavaScript synchronous, as in it finishes one task at a time, or is it asynchronous, where it can work on many tasks at one time? Now, JavaScript 101 would tell us that it is synchronous, because it basically works its way down the code and works on each task one at a time. If that is the case, how do async functions work? If JavaScript is synchronous, how can you have an asynchronous function? Isn't that contradictory? I'm going to take a deeper look, and like many of my other articles I will use the MDN documents as my source, with links at the bottom.

So, what happens if you are loading an application or a site and, let's say, you need to pull information from another source? Obviously, it is going to take longer to pull the information from the other source than it is to finish loading all of your application and your own content. I use this example because making an API call with an async function is probably one of the most common uses of async functions.

Async functions are made to return a promise that is either resolved or rejected. They were made in order to make promises easier to write. Many times, an async function is accompanied by an await, but an await is not required when writing async functions. Without an await expression, the async function body will simply run synchronously; the await expression is what allows the async function to behave asynchronously, because this is exactly the point where a promise is handed back.

Here is an example of syntax without the await expression:

```
async function name(){
  return value;
}
```

Here is an example of syntax with an await expression:

```
async function name(){
  const x = await promiseFunction(val);
  console.log(x); // Value from the promise
}
```

So, to explain these two blocks: in the first block, the code is just called synchronously, and then the value is returned. In the second block, when the code is called, it also starts synchronously, until it reaches the await expression.
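To make that second snippet something you can actually run, here is a fleshed-out variant; `promiseFunction` is a stand-in implementation invented for this example:

```
// promiseFunction is a stand-in: it resolves asynchronously with twice its input.
function promiseFunction(val) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(val * 2), 100);
  });
}

async function name() {
  const x = await promiseFunction(21); // pauses here until the promise settles
  console.log(x); // 42 — the value from the promise
  return x;
}

// Calling name() returns a promise immediately; the logged value arrives later.
name().then((result) => console.log("resolved with", result));
```

Notice that the final `console.log` inside `then` runs only after the timer fires, even though `name()` itself returned right away.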
Every time it goes through the await expression it checks whether the promise is resolved, rejected, or still pending. If it has been resolved, then the value of the promise is returned and the function continues. If the promise is rejected, then the rejection is surfaced and the function is finished. However, if the promise is still pending, the program continues to execute the next functions in the call stack; whenever the promise is eventually resolved or rejected, the remainder of the async function is added back to the stack. In this way, even though JavaScript is technically synchronous, it can act as though it is asynchronous.

I really wanted to cover this topic because, to be honest, this was a topic that was a little hard for me to get my head around in the beginning. I wanted to spend time on this topic to really understand it in a deeper way and hopefully be able to explain it to more people.

Photo by <a href="https://unsplash.com/@flowforfrank?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Ferenc Almasi</a> on <a href="https://unsplash.com/photos/text-tvHtIGbbjMo?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>

Async function - MDN. (n.d.). Retrieved June 18, 2024, from https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function
beaucoburn
1,892,835
Why experienced developers struggle to land jobs
Introduction Despite their extensive knowledge and skills, experienced developers often...
0
2024-06-18T20:18:17
https://dev.to/digitalpollution/why-experienced-developers-struggle-to-land-jobs-33mg
productivity, developers, jobs, linkedin
## Introduction Despite their extensive knowledge and skills, experienced developers often face significant challenges in the job market. It seems counterintuitive that seasoned professionals struggle to find employment, but various factors contribute to this issue. The tech industry, known for its rapid pace and continuous evolution, places high demands on developers to stay updated with the latest trends. At the same time, biases and shifting industry preferences can create additional hurdles. In this article, we'll delve into these challenges, backed by current statistics and expert opinions, to understand why experienced developers find it difficult to secure jobs. ## Shifting industry demands ### Rapid technological changes The tech industry is characterized by rapid innovation. New technologies and frameworks emerge regularly, and staying current can be daunting. According to the 2023 Stack Overflow Developer Survey, over 50% of developers learn a new technology each year to remain relevant. For developers who have specialized in legacy systems, the constant need to update skills can be a significant hurdle. The speed at which technology evolves means that what was cutting-edge five years ago may now be obsolete. For instance, developers who have spent their careers mastering languages like COBOL or frameworks like jQuery may find these skills less in demand as newer technologies like Python, React, and machine learning libraries take precedence. The pressure to continuously learn and adapt can be overwhelming, particularly for those juggling work responsibilities, family life, or other commitments. To keep up, experienced developers often need to: - **Continuously learn new programming languages:** Languages such as Python, Go, and Rust are increasingly in demand, requiring developers to regularly expand their skill set. 
- **Stay updated with the latest software development methodologies:** Agile, DevOps, and continuous integration/continuous deployment (CI/CD) practices are now standard in many organizations. - **Engage in professional development courses and certifications:** Platforms like Coursera, Udacity, and LinkedIn Learning offer courses that help developers stay current with industry trends. This constant learning curve can be both time-consuming and costly. However, it is essential for maintaining competitiveness in the job market. ### Preference for new skill sets Employers often prioritize candidates with expertise in cutting-edge technologies. A report by Burning Glass Technologies reveals that job postings increasingly demand knowledge in areas like AI, machine learning, and blockchain. Developers who haven't had opportunities to work with these technologies might find themselves at a disadvantage. The focus on these new technologies stems from their potential to drive innovation and efficiency within companies. AI and machine learning, for example, are transforming industries by automating tasks, improving decision-making processes, and providing insights through data analysis. Blockchain technology is revolutionizing the way transactions are conducted and recorded, offering increased security and transparency. **Technology Demand Trends** ![Technology Demand Trends by UpWork](https://www.siliconrepublic.com/wp-content/uploads/2024/03/In-Demand_Skills_Infographic.png) **Key emerging skills in demand:** - **AI and machine learning:** Companies seek developers who can implement and maintain intelligent systems. These skills are crucial for roles in data science, predictive analytics, and automated decision-making. - **Blockchain:** Expertise in this technology is essential for industries exploring secure transaction methods, such as finance, supply chain, and healthcare. 
- **Cloud computing:** Proficiency in platforms like AWS, Azure, and Google Cloud is highly valued as more companies migrate to cloud infrastructures.

As the industry evolves, experienced developers need to pivot their skills to align with market demands. This often means investing time and resources into learning new technologies, which can be a significant commitment but is necessary for career longevity.

## Perception and bias

### Ageism in tech

Ageism remains a pervasive issue in the tech industry. A survey by Dice found that 68% of tech workers over 40 have experienced ageism. This bias can severely limit job opportunities for experienced developers, who are often perceived as less adaptable or innovative than their younger counterparts.

> "Ageism is a reality in tech, but it's crucial to recognize the value that experience brings." — Marc Benioff, CEO of Salesforce

**Ageism can manifest in several ways:**

- **Job postings:** Language like "digital native" or "young, dynamic team" subtly discourages older applicants from applying. These terms implicitly suggest a preference for younger candidates, making seasoned professionals feel unwelcome or unsuitable for the role.
- **Interview bias:** Older candidates might be unfairly assessed on their ability to integrate into younger, less experienced teams. Interviewers might harbor unconscious biases, questioning an older applicant's ability to learn new technologies quickly or fit into a company's youthful culture, leading to biased hiring decisions.
- **Promotion stagnation:** Seasoned employees might encounter limited opportunities for career advancement. They are often overlooked in favor of younger colleagues who are perceived as having more potential for growth, despite the older employees' proven track records and extensive experience.
To combat ageism, experienced developers can:

- **Showcase continuous learning:** Highlight recent courses, certifications, and workshops on their resumes to demonstrate a commitment to staying current. This shows employers that they are proactive about their professional development and are adept at acquiring new skills, countering the stereotype that older workers are resistant to change.
- **Emphasize adaptability:** Clearly communicate their willingness and ability to embrace new technologies and methodologies. Including specific examples of how they have successfully adapted to technological changes in previous roles can help illustrate their flexibility and resilience.
- **Engage in diverse networking:** Actively participate in professional groups and events that value age diversity to build supportive connections and gain visibility. Joining organizations such as AARP's Employer Pledge Program or the Age-Friendly Institute can provide networking opportunities and resources tailored to older professionals. Additionally, mentoring younger colleagues can demonstrate their value in fostering a multigenerational workforce and underscore their leadership and collaborative skills.

### Overqualification concerns

Employers may hesitate to hire highly experienced developers due to concerns about overqualification. They worry that such candidates will have high salary expectations or may not be satisfied with the role. This can lead to experienced developers being overlooked for positions they are more than capable of handling.

LinkedIn's 2023 Global Talent Trends report indicates that 45% of hiring managers are concerned about overqualification when considering experienced candidates. This concern often stems from:

- **Salary expectations:** Senior developers typically command higher salaries, which can strain hiring budgets. Employers fear they may not be able to meet the compensation demands of highly experienced professionals, leading to financial imbalances within the team.
- **Role satisfaction:** Employers fear overqualified candidates might quickly become dissatisfied and leave. There is a concern that these candidates will feel underutilized or bored in roles that don't fully leverage their skills and experience, resulting in high turnover rates.
- **Team dynamics:** Integrating a highly experienced developer into a junior team can be challenging. Employers worry about potential conflicts, as overqualified candidates might unintentionally overshadow less experienced team members, disrupt established workflows, or resist new methodologies.

**Strategies for addressing overqualification:**

- **Clearly articulate enthusiasm for the specific role and company:** During interviews, express genuine interest in the company's mission and how the specific role aligns with your professional values and goals. Highlighting a passion for the work itself can reassure employers that you are motivated by more than just the paycheck.
- **Discuss long-term career goals and how they align with the position:** Share your vision for your future within the company, emphasizing how the role fits into your career trajectory. This can help employers see that you are committed to growing with the company and not just using the position as a temporary stopgap.
- **Demonstrate a willingness to mentor junior team members:** Emphasize your interest in supporting and developing less experienced colleagues. Highlight past experiences where you successfully mentored or led teams, showcasing your ability to foster a collaborative and supportive work environment. This not only mitigates concerns about team dynamics but also adds value to your candidacy by positioning you as a potential leader within the organization.
## Cultural fit and soft skills

### Company culture and dynamics

Modern companies, particularly startups, often prioritize cultural fit. They seek dynamic, adaptable team members who can seamlessly integrate into fast-paced environments. Experienced developers, accustomed to more structured settings, might find it challenging to align with these expectations. Startups typically emphasize agility, innovation, and a flat hierarchy, which can be a stark contrast to the more hierarchical and process-driven cultures found in larger, more established companies.

**Tips for fitting into modern company culture:**

- **Show adaptability:** Demonstrate openness to new ideas and methods. Share examples from your past where you successfully adapted to changes or embraced new technologies and practices. This shows that you are flexible and can thrive in dynamic environments.
- **Engage in team-building activities:** Participate actively in team events and initiatives. This can include social gatherings, collaborative projects, and informal meet-ups. Being present and engaged helps build rapport with colleagues and shows that you are invested in the team's cohesion and success.
- **Exhibit enthusiasm:** Show genuine interest in the company’s mission and values. Research the company thoroughly before interviews or starting a new role. Understand their goals, values, and culture, and articulate how they align with your own professional values. Enthusiasm and alignment with the company's vision can make you a more attractive candidate.

Fitting into a new culture isn't just about adapting; it's also about contributing. Experienced developers can leverage their backgrounds to offer fresh perspectives, fostering innovation within their teams. By drawing on their extensive experience, they can introduce best practices, mentor younger colleagues, and provide valuable insights that drive the team forward.
Additionally, their ability to bridge the gap between traditional methodologies and modern approaches can be instrumental in enhancing team dynamics and overall productivity.

### Soft skills vs. technical skills

While technical skills are critical, soft skills such as communication, teamwork, and adaptability are increasingly valued. LinkedIn’s 2023 Workplace Learning Report highlights that 92% of talent professionals believe soft skills are just as important, if not more so, than technical skills. Experienced developers need to showcase these abilities during the hiring process.

![Soft Skills by LinkedIn Learning](https://media.licdn.com/dms/image/D4D08AQE69NZBUH3ckg/croft-frontend-shrinkToFit1024/0/1707324200435?e=2147483647&v=beta&t=pR9PGO0zYUoW43FiFliw-WxfU7qppB4WE74CMQoxfbg)

**Essential soft skills for developers:**

- **Communication:** Clearly convey ideas and feedback. Effective communication involves not only speaking and writing clearly but also listening actively. This ensures that everyone on the team understands project goals, expectations, and timelines.
- **Teamwork:** Collaborate effectively with diverse team members. Teamwork requires understanding different perspectives, contributing to group tasks, and supporting colleagues. It's about building a cohesive environment where each member's strengths are utilized.
- **Adaptability:** Adjust to new challenges and environments with ease. Adaptability means being open to change, whether it's a shift in project direction, adopting new technologies, or adjusting to new team dynamics.

To highlight soft skills:

- **Include examples of successful team projects in your resume:** Detail your role and contributions to team projects. Highlight specific instances where your communication, collaboration, and problem-solving skills led to successful outcomes. For example, "Led a cross-functional team to develop a new feature that increased user engagement by 20%, ensuring clear communication and collaboration across departments."
- **Discuss scenarios in interviews where you effectively used soft skills:** Prepare anecdotes that demonstrate your soft skills in action. For instance, describe a time when you resolved a conflict within your team or adapted quickly to a major project change. Be specific about your actions and the positive results. For example, "When our project scope changed unexpectedly, I facilitated a team meeting to reassess our goals and priorities, ensuring everyone was on board and motivated, which led to a successful project completion ahead of the new deadline."
- **Seek feedback from colleagues and supervisors to identify strengths and areas for improvement:** Regular feedback helps you understand how others perceive your soft skills. Use this feedback to enhance your strengths and work on any weaknesses. You can say, "I regularly seek feedback from my peers and supervisors, which has helped me refine my communication and leadership skills, making me a more effective team player."

**Encouragement:**

- Embrace opportunities to practice and develop your soft skills. Attend workshops, engage in team-building activities, and participate in professional development courses focused on soft skills.
- Don't be afraid to ask for constructive criticism. Understanding how others view your interactions can provide valuable insights and help you grow.
- Remember, showcasing your soft skills can set you apart in the job market. Employers value team members who can not only perform technically but also contribute positively to the team dynamic. By effectively demonstrating your soft skills, you enhance your employability and position yourself as a well-rounded professional.
## Salary expectations

### Compensation discrepancies

There’s often a gap between what experienced developers expect to earn and what companies are willing to pay. According to Glassdoor, senior developers’ salaries can be significantly higher than those of their less experienced peers. This can lead to mismatches in job offers and expectations.

> "Companies must balance the value of experience with their budget constraints." — Jeff Weiner, Executive Chairman of LinkedIn

**Factors influencing compensation:**

- **Market rates:** Research current salary trends in your field and location.
- **Company budget:** Understand the financial constraints of potential employers.
- **Value proposition:** Clearly communicate the unique value you bring to the company.

### Cost-cutting measures

To save costs, companies may prefer to hire less experienced, lower-cost employees. A 2023 study by the National Bureau of Economic Research found that firms increasingly seek to minimize labor costs, impacting hiring decisions for senior roles.

**Cost-saving strategies:**

- **Hire entry-level employees:** Invest in training and development for less experienced staff.
- **Utilize freelancers:** Employ contract workers for specific projects to reduce long-term costs.
- **Offer flexible work arrangements:** Provide options like remote work or part-time positions to attract top talent without the full cost of a traditional hire.

For experienced developers, being flexible with salary expectations and demonstrating how their expertise can drive cost savings in the long run can make a compelling case to potential employers.

## Keeping skills updated

### Continuous learning

For experienced developers, continuous learning is crucial. Engaging in professional development, taking online courses, and obtaining certifications can help them stay relevant. Platforms like Coursera and Udacity offer courses tailored to the latest industry trends.
LinkedIn Learning’s 2023 report shows a 58% increase in professionals enrolling in courses related to new technologies. This highlights the growing importance of staying updated in an ever-evolving field.

**Why continuous learning matters:**

- **Stay current:** In a fast-paced industry, keeping up with new trends and technologies is essential. Regularly engaging with new material ensures your skills don’t become obsolete.
- **Enhance employability:** Employers value candidates who demonstrate a commitment to growth and learning. Showing that you are proactive about your development makes you a more attractive hire.
- **Expand skillset:** Learning new technologies and methodologies can open doors to different roles and specializations. Gaining expertise in areas like AI, cybersecurity, and cloud computing can significantly broaden your career prospects.

**Practical steps for continuous learning:**

- **Set learning goals:** Define what you want to achieve, whether it’s mastering a new programming language or understanding the basics of AI.
- **Schedule regular study time:** Dedicate specific times each week to learning. Consistency is key to making steady progress.
- **Engage with diverse resources:** Utilize online courses, webinars, books, and podcasts. This variety can keep you engaged and provide different perspectives on the same topic.
- **Join a learning community:** Engage with peers who are also interested in learning. This can provide support, accountability, and additional insights.

### Networking and community involvement

Networking remains a powerful tool for career advancement. Engaging with professional communities can lead to new opportunities and valuable connections. LinkedIn’s data indicates that 85% of all jobs are filled through networking.

> "Networking is not just about connecting people. It’s about connecting people with people, people with ideas, and people with opportunities."
> — Michele Jennae, author of "The Connectworker"

**Why networking is crucial:**

- **Access to opportunities:** Many job openings are not publicly advertised. Networking can give you access to the hidden job market.
- **Professional growth:** Interacting with peers and industry leaders can provide insights, advice, and mentorship that help you grow in your career.
- **Building relationships:** Strong professional relationships can lead to collaborations, partnerships, and support throughout your career.

**Effective networking strategies:**

- **Attend industry events and meetups:** Participate in conferences, seminars, and local meetups to meet peers and industry experts.
- **Join professional associations and online communities:** Become active in groups relevant to your field. Contributing to discussions and sharing your expertise can increase your visibility.
- **Engage meaningfully:** Focus on building genuine relationships rather than just exchanging business cards. Follow up with contacts, offer help, and stay in touch.
- **Leverage social media:** Use platforms like LinkedIn to connect with professionals, join relevant groups, and engage with content.

**Encouragement:**

- Networking can feel daunting, but start small. Even a few meaningful connections can make a significant difference.
- Remember, networking is a two-way street. Be open to helping others, not just seeking help for yourself.
- Stay patient and persistent. Building a strong network takes time, but the benefits are long-lasting.

## Conclusion

Experienced developers encounter a distinct set of challenges in the job market. Rapid technological advancements require constant learning and adaptation, while biases like ageism can limit opportunities. Cultural fit and high salary expectations further complicate the job search for seasoned professionals.

To navigate these challenges, experienced developers should focus on continuous learning to keep their skills relevant.
Embracing new technologies and methodologies can help them stay competitive. Additionally, showcasing strong soft skills—such as communication, teamwork, and adaptability—can make a significant difference in interviews and on the job.

Networking plays a crucial role, too. Building relationships through industry events, professional groups, and online communities can lead to new job opportunities and valuable support systems. It's not just about who you know, but also about who knows what you can do.

Employers, on the other hand, need to recognize the immense value that experienced developers bring to the table. Their expertise and insights can drive innovation and mentorship within teams. Creating inclusive environments that value diversity in age and experience can help organizations thrive.

In summary, the key to overcoming these hurdles lies in a balanced approach: continuous skill enhancement, effective networking, and demonstrating the ability to fit into and contribute to modern company cultures. With these strategies, experienced developers can better navigate the job market and secure roles that truly leverage their extensive knowledge and skills.

---

## Stay connected

If you enjoyed this article, feel free to connect with me on various platforms:

- [Dev.to](https://dev.to/leandro_nnz)
- [Hackernoon](https://hackernoon.com/u/leandronnz)
- [Hashnode](https://leandronnz.hashnode.dev)
- [Twitter](https://twitter.com/digpollution)
- [Instagram](https://instagram.com/_digitalpollution)
- [Personal Portfolio v1](https://digitalpollution.com.ar)

Your feedback and questions are always welcome. If you like, you can support my work here:

[![Buy me a coffee](https://cdn.cafecito.app/imgs/buttons/button_5.svg)](https://cafecito.app/digitalpollution)
*Author: leandro_nnz*

---

# How to Detect the Starting Node of a Cycle in a Linked List

*Published 2024-06-18 at [dev.to](https://dev.to/kernelrb/how-to-detect-the-starting-node-of-a-cycle-in-a-linked-list-1dal) · tags: leetcode, python, algorithms, datastructures*
## Introduction

In this blog post, we'll explore a popular problem from LeetCode: **Linked List Cycle II**. This problem is all about detecting the start of a cycle in a linked list. We will go through the problem description, understand two approaches to solve it, and then look at their implementations in Python.

---

## Problem Statement

Given the head of a linked list, return the node where the cycle begins. If there is no cycle, return `null`.

A cycle in a linked list occurs when a node’s `next` pointer points back to a previous node, creating a loop. The problem does not give us the position directly, so we need to determine if a cycle exists and find its starting point.

---

## Approach 1: Using a Set

The first approach to solve this problem is by using a set to keep track of visited nodes. As we traverse the linked list, we add each node to the set. If we encounter a node that is already in the set, then that node is the start of the cycle.

### Implementation

```python
# Definition for singly-linked list.
class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None


class Solution:
    def detectCycle(self, head: ListNode) -> ListNode:
        visited = set()
        current = head
        while current:
            if current in visited:
                return current
            visited.add(current)
            current = current.next
        return None
```

### Explanation

#### Approach 1: Using a Set

**Initialization:**

- Create an empty set called `visited`.
- Initialize a variable `current` to the head of the list.

**Cycle Detection:**

- Traverse the list, adding each node to the `visited` set.
- If a node is already in the set, it means we have encountered the start of the cycle.
- Return the node where the cycle begins.
- If the traversal ends (i.e., `current` becomes `None`), return `None` as there is no cycle.

#### Approach 2: Floyd’s Tortoise and Hare Algorithm

The second approach is using Floyd’s Cycle-Finding Algorithm, also known as the Tortoise and Hare algorithm.
This method involves two main steps:

**Detection of Cycle:**

- Use two pointers, a slow pointer (`slow`) and a fast pointer (`fast`).
- Move `slow` by one step and `fast` by two steps.
- If there is a cycle, `slow` and `fast` will meet at some point inside the cycle.

**Finding the Start of the Cycle:**

- Once a cycle is detected, reset one pointer to the head of the linked list.
- Move both pointers one step at a time.
- The point at which they meet is the start of the cycle.

### Implementation

```python
# Definition for singly-linked list.
class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None


class Solution:
    def detectCycle(self, head: ListNode) -> ListNode:
        if not head:
            return None
        slow, fast = head, head
        while True:
            if not fast or not fast.next:
                return None
            slow = slow.next
            fast = fast.next.next
            if slow == fast:
                break
        fast = head
        while fast != slow:
            fast = fast.next
            slow = slow.next
        return fast
```

### Explanation

**Initialization:**

- Check if the head is `None`. If it is, there’s no cycle, and we return `None`.

**Cycle Detection:**

- Initialize two pointers, `slow` and `fast`, both pointing to the head of the list.
- Traverse the list with `slow` moving one step at a time and `fast` moving two steps.
- If `fast` or `fast.next` becomes `None`, there’s no cycle, and we return `None`.
- If `slow` equals `fast`, a cycle is detected.

**Finding the Start Node:**

- Reset `fast` to the head of the list.
- Move both `slow` and `fast` one step at a time until they meet.
- The node where they meet is the start of the cycle.

### Conclusion

We have discussed two efficient methods to detect the start of a cycle in a linked list: using a set and Floyd’s Tortoise and Hare algorithm. Both methods have their own advantages and are useful in different scenarios.

- **Using a Set:** Simpler to understand and implement but uses O(n) extra space.
- **Floyd’s Algorithm:** More efficient in terms of space complexity (O(1)) but slightly more complex to implement.
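To sanity-check the logic outside of LeetCode's harness, here is a small standalone driver. `detect_cycle` is a plain-function version of the Floyd's-algorithm `detectCycle` above; the set-based solution can be swapped in identically.

```python
# Standalone driver for the cycle-detection logic discussed above.
class ListNode:
    def __init__(self, x):
        self.val = x
        self.next = None


def detect_cycle(head):
    # Phase 1: find a meeting point inside the cycle (if any).
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow == fast:
            break
    else:
        return None  # fast ran off the end: no cycle
    # Phase 2: step from the head and the meeting point in lockstep;
    # they meet exactly at the cycle's entry node.
    fast = head
    while fast != slow:
        fast = fast.next
        slow = slow.next
    return fast


# Build 3 -> 2 -> 0 -> -4, with -4 pointing back to the node holding 2.
nodes = [ListNode(v) for v in (3, 2, 0, -4)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
nodes[-1].next = nodes[1]

assert detect_cycle(nodes[0]) is nodes[1]  # cycle starts at the value-2 node
assert detect_cycle(ListNode(7)) is None   # single node, no cycle
```

The assertions use identity (`is`), not value equality, since the problem asks for the node itself rather than its value.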
I hope this explanation helps you understand how to tackle the Linked List Cycle II problem. Feel free to ask questions or share your thoughts in the comments!
*Author: kernelrb*

---

# Indexing All of Wikipedia on a Laptop

*Published 2024-06-18 at [datastax.com](https://www.datastax.com/blog/indexing-all-of-wikipedia-on-a-laptop) · tags: vectordatabase, data, jvector*
In November, [Cohere released a dataset containing all of Wikipedia](https://huggingface.co/datasets/Cohere/wikipedia-2023-11-embed-multilingual-v3), chunked and embedded to vectors with their [multilingual-v3 model](https://cohere.com/blog/introducing-embed-v3). Computing this many embeddings yourself would cost in the neighborhood of $5000, so the public release of this dataset makes creating a [semantic, vector-based index](https://www.datastax.com/guides/what-is-vector-search?utm_source=dev-to&utm_medium=byline&utm_campaign=jvector&utm_term=all-plays&utm_content=loading-wikipedia) of Wikipedia practical for an individual for the first time.

Here’s what we’re building:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8hr4f1vx52r9gy3ohh2b.png)

You can try [searching the completed index on a public demo instance here](https://jvectordemo.com:8443/).

## Why this is hard

Sure, the dataset is big (180GB for the English corpus), but that’s not the obstacle per se. We’ve been able to build full-text indexes on larger datasets for a long time. The obstacle is that until now, off-the-shelf vector databases could not index a dataset larger than memory, because both the full-resolution vectors and the index (edge list) needed to be kept in memory during index construction.

Larger datasets could be split into [segments](https://stackoverflow.com/questions/2703432/what-are-segments-in-lucene), but this means that at query time they need to search each segment separately, then combine the results, turning an O(log N) search per segment into O(N) overall. (In their latest release, [Lucene attempts to mitigate this by processing segments in parallel with multiple threads](https://www.elastic.co/search-labs/blog/elasticsearch-lucene-vector-database-gains), but obviously (1) this only gives you a constant factor of improvement before you run out of CPU cores and (2) this does not improve throughput.)
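The memory ceiling quoted in the next paragraph (about 5M 1536-dimension vectors in a 32GB budget) is easy to sanity-check with a few lines of arithmetic. The 32-neighbor graph degree and int32 edge ids below are my illustrative assumptions, not JVector's exact layout:

```python
DIMS = 1536                    # ada002 / openai-v3-small dimensionality
VECTOR_BYTES = DIMS * 4        # float32 components
EDGE_BYTES = 32 * 4            # assumed: 32 int32 neighbor ids per node

per_vector = VECTOR_BYTES + EDGE_BYTES  # ~6.3 KB held in memory per vector
budget = 32 * 1024 ** 3                 # 32 GB construction RAM budget
print(budget // per_vector)             # ~5.5M before allocator/JVM overhead
```

With object and allocator overhead on top, the raw ~5.5M figure lands comfortably at the "about 5M" the article cites.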
Specifically, if you’re indexing 1536-dimension vectors (the size of ada002 or openai-v3-small), then you can fit about 5M vectors and their associated edge lists in a 32GB index construction RAM budget.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9af3xq1y4c49ykfsxgv5.png)

[JVector](https://github.com/jbellis/jvector/), the library that powers [DataStax Astra DB vector search](https://www.datastax.com/products/datastax-astra?utm_source=dev-to&utm_medium=byline&utm_campaign=jvector&utm_term=all-plays&utm_content=loading-wikipedia), now supports indexing larger-than-memory datasets by performing construction-related searches with compressed vectors. This means that the edge lists need to fit in memory, but the uncompressed vectors do not, which gives us enough headroom to index Wikipedia-en on a laptop.

## Requirements

- Linux or MacOS. It will not work on Windows because ChronicleMap, which we are going to use for the non-vector data, is limited to a 4GB size there. (If you are interested enough, you could shard the Map by vector id to keep each shard under 4GB and still have O(1) lookup times.)
- About 180GB of free space for the dataset, and 90GB for the completed index.
- Enough RAM to run a JVM with 36GB of heap space during construction (~28GB for the index, 8GB for GC headroom).
- Disable swap before building the index. Linux will aggressively try to cache the index being constructed to the point of swapping out parts of the JVM heap, which is obviously counterproductive. In my test, building with swap enabled was almost twice as slow as with it off.

## Building and searching the index

- Check out the project:

  ```shell
  $ git clone https://github.com/jbellis/coherepedia-jvector
  $ cd coherepedia-jvector
  ```

- Edit _config.properties_ to set the locations for the dataset and the index.
- Run _pip install datasets_.
  (Setting up a [venv](https://docs.python.org/3/library/venv.html) or conda environment first is recommended but not strictly necessary.)
- Run _python download.py_. This downloads the 180 GB dataset to the location you configured. For me that took about half an hour.
- Run _./mvnw compile exec:exec@buildindex_. This took about 5 and a half hours on my machine (with an i9-12900 CPU).
- Run _./mvnw compile exec:exec@serve_ and open a browser to [http://localhost:4567](http://localhost:4567). Search away!

## How it works

We’re using JVector for the vector index and [Chronicle Map](https://github.com/OpenHFT/Chronicle-Map) for the article data. There are [several things](https://github.com/OpenHFT/Chronicle-Map/issues/533) I don’t love about Chronicle Map, but nothing else touches it for simple disk-based key/value performance.

The [full source of the index construction class is here](https://github.com/jbellis/coherepedia-jvector/blob/master/src/main/java/io/github/jbellis/BuildIndex.java). I’ll explain it next in pieces.

## Compression parameters

JVector is based on the [DiskANN](https://www.microsoft.com/en-us/research/publication/diskann-fast-accurate-billion-point-nearest-neighbor-search-on-a-single-node/) vector index design, which performs an initial search using vectors compressed lossily with [product quantization](https://towardsdatascience.com/similarity-search-product-quantization-b2a1a6397701) (PQ) in memory, then reranks the results using high-resolution vectors from disk. However, while DiskANN stores full, uncompressed vectors to perform reranking, JVector is able to improve on that using [Locally-Adaptive Quantization](https://arxiv.org/abs/2402.02044) (LVQ) compression.

To set this up, we’ll first load some vectors into a RandomAccessVectorValues (RAVV) instance. RAVV is a JVector interface for a vector container; it could be List or Map based, in-memory or on-disk. In this case we’ll use a simple List-backed RAVV.
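Product quantization itself needs surprisingly little machinery. The sketch below is illustrative only: JVector trains real k-means codebooks, while here the "codebooks" are just random sample subvectors. It still shows the encode path and the 64x size reduction (1536 float32 dims, 6144 bytes, down to 96 one-byte codes):

```python
import random

# Toy product quantization: split each vector into M subvectors and
# replace each subvector with the index of its nearest centroid.
DIMS, M = 1536, 96               # 96 subspaces -> 96 one-byte codes
SUB = DIMS // M                  # 16 dims per subspace
K = 256                          # 256 centroids, so each code fits in a byte

random.seed(0)
data = [[random.gauss(0, 1) for _ in range(DIMS)] for _ in range(300)]

# "Train": take K sample subvectors per subspace as stand-in centroids
# (a real implementation runs k-means here).
codebooks = [[data[i % len(data)][m * SUB:(m + 1) * SUB] for i in range(K)]
             for m in range(M)]


def encode(v):
    codes = []
    for m in range(M):
        sub = v[m * SUB:(m + 1) * SUB]
        best = min(range(K), key=lambda k: sum(
            (a - b) ** 2 for a, b in zip(sub, codebooks[m][k])))
        codes.append(best)
    return bytes(codes)


code = encode(data[0])
print(len(code), "bytes vs", DIMS * 4, "uncompressed")  # 96 vs 6144 -> 64x
```

Approximate distances are then computed against the codebooks instead of the raw vectors, which is what lets the first search pass stay entirely in memory.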
We’ll compute the parameters for both compressions (kmeans clustering for PQ, global mean for LVQ) from a single shard of the dataset. At about 110k rows, this is enough data to have a statistically valid sample.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a3q1zfstj8xuze9ul1ei.png)

Next, we compute the PQ compression codebook; we’re compressing the vectors by a factor of 64, because the Cohere v3 embeddings can be PQ-compressed that much without losing accuracy, after reranking. [Binary Quantization only gives us 32x compression and is less accurate](https://thenewstack.io/why-vector-size-matters/).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jkjshvia8p6u4odppkyh.png)

Finally, we need to set up LVQ. LVQ gives us 4x compression while losing no measurable accuracy over the full uncompressed vectors, resulting in both a smaller footprint on disk and faster searches. (I thank the vector search team at Intel Research for pointing this out to us.)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2j25v035gdoxgjgbpqny.png)

## GraphIndexBuilder

Next, we need to instantiate and configure our GraphIndexBuilder.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c8okv653jepfo9k709fo.png)

This instantiates a JVector GraphIndexBuilder and connects it to an OnDiskGraphIndexWriter, and tells it to use the PQ-compressed vectors list (which starts empty and will grow as we add vectors to the index) during construction (in the BuildScoreProvider).

## Chronicle Map and RowData

We’ll store article contents in RowData records. This content is what has been encoded as the corresponding vector in the dataset, and is what we want to return to the user in our search results.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qcewwextn64qluehx9js.png)

To turn the vector index’s search results (a list of integer vector ids) into RowData, we store the RowData in a Map keyed by the vector id. This will be a lot of data, so we use ChronicleMap to store this on disk with a minimal in-memory footprint.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nl2h0d3jgjc82ai4n2l8.png)

We need to tell ChronicleMap how large it’s going to be, both in entry count and entry size. Undersizing these will cause it to crash (my primary complaint about ChronicleMap), so we deliberately use a high estimate.

We do not need to explicitly tell ChronicleMap how to read and write RowData objects, instead we just have RowData implement Serializable. While ChronicleMap supports custom de/serialize code, it’s perfectly happy to use simple out-of-the-box serialization and since profiling shows that’s not a bottleneck for us we’ll just leave it at that.

## Ingesting the data

We use Java’s parallel Streams to process the shards in parallel. For each row in each shard, we:

1. Add it to _pqVectorsList_
2. Call _writer.writeInline_ to add the LVQ-compressed vector to disk
3. Call _builder.addGraphNode_ – order is important because both (1) and (2) are used when we call addGraphNode
4. Call _contentMap.put_ with the article chunk data.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zz1cz3yqbh0pzzmjmyk.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pf2zkg8hvxbjemv3pc45.png)

You can look at the [full source](https://github.com/jbellis/coherepedia-jvector/blob/master/src/main/java/io/github/jbellis/BuildIndex.java) if you’re curious about forEachRow, it’s just standard “pull data out of Arrow” stuff.
When the build completes, you should see files like this:

```
$ ls -lh ~/coherepedia
-rw-rw-r-- 1 jonathan jonathan  48G May 20 15:53 coherepedia.ann
-rw-rw-r-- 1 jonathan jonathan  36G May 20 18:05 coherepedia.map
-rw-rw-r-- 1 jonathan jonathan 2.5G May 20 15:53 coherepedia.pqv
-rw-rw-r-- 1 jonathan jonathan 4.1K May 17 23:04 coherepedia.lvq
-rw-rw-r-- 1 jonathan jonathan 1.1M May 17 23:04 coherepedia.pq
```

These are, respectively:

- ANN: the vector index, containing the edge lists and LVQ-compressed vectors for reranking.
- MAP: the map containing article data indexed by vector id.
- PQV: PQ-compressed vectors, which are read into memory and used for the approximate search pass.
- LVQ: the LVQ global mean, used during construction.
- PQ: the PQ codebooks, used during construction.

## Loading the index (after construction)

The code for serving queries is found in the [WebSearch](https://github.com/jbellis/coherepedia-jvector/blob/master/src/main/java/io/github/jbellis/WebSearch.java) class. We’re using Spark ([the web framework](https://sparkjava.com/), not the big data engine) to serve a simple search form:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/splrjevpdbjgfnv5cbog.png)

Construction needed a relatively large heap to keep the edge lists in memory. With that complete, we only need enough to keep the PQ-compressed vectors in memory; exec@serve is configured to use a 4GB heap. WebSearch ([the class behind exec@serve](https://github.com/jbellis/coherepedia-jvector/blob/master/src/main/java/io/github/jbellis/WebSearch.java)) first has a static initializer to load the PQ vectors and open the ChronicleMap.
We also create a reusable GraphSearcher instance: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ybmay0tbggp4tyhwohv.png) ## Performing a search Executing a search and turning it into RowData for the user looks like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sne99d5fttajefgneca2.png) There are four “paragraphs” of code here, containing: - The call to getVectorEmbedding. This calls Cohere’s API to turn the search query (a String) into a vector embedding. - Creating approximate and reranking score functions. Approximate scoring is done through our product quantization, and reranking is done with the LVQ vectors in the index. Since the LVQ vectors are encapsulated in the index itself, we never need to explicitly deal with LVQ decoding; the index does it for us. - The call to _searcher.search_ that actually does the query - Retrieving the RowData associated with the top vector neighbors using contentMap. That’s it! We’ve indexed all of Wikipedia with high performance, parallel code in about 150 loc, and created a simple search server in another 100. On my machine, searches (which each run in a single thread) take about 50ms. We would expect it to take over twice as long if this were split across multiple segments. We would also expect it to lose significant accuracy if searches were performed only with compressed vectors without reranking. ## Conclusion Indexing the entirety of English Wikipedia on a laptop has become a practical reality thanks to recent advances in the JVector library that will be part of the imminent 3.0 release. ([Star the repo](https://github.com/jbellis/jvector) and stand by!) This article demonstrates how to do exactly that using JVector in conjunction with Chronicle Map, while also showcasing the use of [LVQ](https://arxiv.org/abs/2402.02044) to reduce index size while preserving [accurate reranking](https://thenewstack.io/why-vector-size-matters/). 
To take advantage of the power of JVector alongside powerful indexing for non-vector data, rolled into a document api with support for realtime inserts, updates, and deletes, check out [Astra DB](https://www.datastax.com/products/datastax-astra?utm_source=dev-to&utm_medium=byline&utm_campaign=jvector&utm_term=all-plays&utm_content=loading-wikipedia). Enjoy hacking with JVector and Astra DB!
jbellis
1,892,832
Asymmetrical Encryption
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-18T20:07:08
https://dev.to/rachitbpat/asymmetrical-encryption-57fd
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

Think of asymmetric encryption as a combination lock. Anybody can lock it, but only you have the combination to unlock it. Similarly, anybody can send you an encrypted message using your public key, but only you can decrypt it with your private key.
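The lock analogy can be made concrete with a toy RSA round trip. This is purely illustrative — real systems use vetted libraries and much larger keys:

```python
# Toy RSA with tiny primes — illustration only, never use for real security.
p, q = 61, 53
n = p * q               # public modulus (part of the "lock")
phi = (p - 1) * (q - 1)
e = 17                  # public exponent: anyone can use (e, n) to lock
d = pow(e, -1, phi)     # private exponent: the "combination" only you hold

msg = 42
cipher = pow(msg, e, n)    # anyone can encrypt with the public key
plain = pow(cipher, d, n)  # only the private key recovers the message
```

Running this, `plain` comes back equal to `msg` while `cipher` is a different number that reveals nothing useful without `d`.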
rachitbpat
1,892,829
Twilio Challenge: SmartyCall - AI Powered Competitive Trivia
This is a submission for the Twilio Challenge What I Built SmartyCall is an interactive...
0
2024-06-18T19:59:42
https://www.bengreenberg.dev/blog/blog_twilio-challenge:-smartycall---ai-powered-competitive-trivia_1718668800000
devchallenge, twiliochallenge, ai, twilio
---
title: Twilio Challenge: SmartyCall - AI Powered Competitive Trivia
published: true
tags: devchallenge, twiliochallenge, ai, twilio
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pj2wbro9ulfbbc775obs.png
canonical_url: https://www.bengreenberg.dev/blog/blog_twilio-challenge:-smartycall---ai-powered-competitive-trivia_1718668800000
---

*This is a submission for the [Twilio Challenge](https://dev.to/challenges/twilio)*

## What I Built

SmartyCall is an interactive trivia game that utilizes Twilio's Voice and SMS APIs, OpenAI's language model, and Couchbase for data storage. This application allows users to register via SMS, receive trivia questions through voice calls, and respond either via voice or text. The game maintains a leaderboard to track scores and rankings, providing a competitive edge to the trivia challenges.

### System Architecture

**Key Components:**

* Twilio Voice API: Handles the voice interactions, asking trivia questions, accepting verbal responses, and giving feedback on whether the answer was correct.
* Twilio SMS API: Manages user registrations and sends updates or notifications about scores and game status.
* [OpenAI API](https://openai.com): Generates trivia questions and processes user responses to determine correctness.
* [Couchbase Capella](https://cloud.couchbase.com): Stores user data and game scores; questions are also cached here for uniqueness checks.

**Data Flow:**

* Users register via SMS and are stored in the Couchbase database.
* Users initiate the game via a voice call, where they receive questions generated by OpenAI.
* All questions are cached in Couchbase, and the cache is used to check for uniqueness.
* Answers are processed and validated through OpenAI, and scores are updated in Couchbase.

**Backend Setup:**

* The Node.js application serves as the backend, handling API requests and responses.
* Express.js is used for routing, managing different endpoints for SMS and voice interactions.
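The voice leg of the flow above boils down to answering Twilio's webhook with TwiML. A minimal sketch of what such a response might look like — the `/answer` action path and question wording are illustrative assumptions, not taken from the project:

```javascript
// Sketch: build the TwiML a Twilio Voice webhook returns to ask a trivia
// question and gather the caller's spoken or keyed-in answer.
// The "/answer" action path is a hypothetical endpoint name.
function triviaTwiml(question) {
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<Response>',
    '  <Gather input="speech dtmf" action="/answer" method="POST">',
    `    <Say>${question}</Say>`,
    '  </Gather>',
    '</Response>',
  ].join('\n');
}
```

In the real app a string like this would be returned from the Express route Twilio calls when the user dials in; Twilio's Node helper library can also generate the same XML via its VoiceResponse builder.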
## Demo {% youtube PvueyYpYCp0 %} ## Twilio and AI In "SmartyCall," Twilio's APIs play a crucial role. The Voice API is used to deliver trivia questions to players and collect their responses through natural voice inputs. The SMS API manages user registrations and sends text updates about scores and game progress. The integration with OpenAI’s GPT-4o allows the game to generate trivia questions dynamically and evaluate responses, adding a layer of intelligence and interaction that enhances user engagement. ## Source Code You can find the source code for the game on GitHub. Feel free to clone it and modify it according to whatever you would like to build! {% github https://github.com/hummusonrails/trivia-game %} ## Additional Prize Categories SmartyCall qualifies for the following additional prize categories in the Twilio Hackathon: * Twilio Times Two: This category is met as the project uses both Twilio's Voice and SMS APIs. * Entertaining Endeavors: The application provides an entertaining way for users to engage with trivia, making learning fun and interactive.
bengreenberg
1,892,828
Generate QR Codes Easily with Our Modern QR Code Generator API
Hey everyone! 👋 Are you looking for a simple yet powerful tool to generate QR codes quickly? Look no...
0
2024-06-18T19:59:14
https://dev.to/pr0biex/generate-qr-codes-easily-with-our-modern-qr-code-generator-api-1o3m
api, saas, software, softwaredevelopment
Hey everyone! 👋 Are you looking for a simple yet powerful tool to generate QR codes quickly? Look no further than our QR Code Generator!

🌐 **Webpage**: [QR Code Generator](https://api-chief.github.io/QRCodes/)

🔗 **API**: [QR Code Generator API](https://rapidapi.com/rizzards-of-oz-rizzards-of-oz-default/api/qr-code90)

**Why Choose Our QR Code Generator?**

Our QR Code Generator is designed to make generating QR codes effortless. Whether you need QR codes for URLs, text, or anything else, our API offers the flexibility and reliability you need. Here's what you can expect:

**Ease of Use**: Simply enter your URL or text, click a button, and instantly get a high-quality QR code.

**Modern Design**: Our webpage is sleek, modern, and user-friendly, ensuring a seamless experience.

**API Integration**: Easily integrate our API into your applications with clear documentation and straightforward endpoints.

**Get Started Today**

Visit our QR Code Generator webpage to start creating QR codes instantly. For developers, integrate our API from RapidAPI into your projects and enhance user experiences with QR code functionality.

Ready to simplify QR code generation? Check out our QR Code Generator now!
pr0biex
1,892,827
Enhancing Your Cloud Security Services for Optimal Protection
Introduction: As businesses increasingly rely on cloud computing, ensuring the security of cloud...
0
2024-06-18T19:56:38
https://dev.to/unicloud/enhancing-your-cloud-security-services-for-optimal-protection-383
cloud, security
**Introduction:** As businesses increasingly rely on cloud computing, ensuring the security of cloud environments has become paramount. Cloud security services play a vital role in protecting sensitive data and maintaining business continuity. This blog delves into strategies to enhance your cloud security services and safeguard your digital assets. **Understanding Cloud Security Services** [Cloud security services](https://unicloud.co/cloud-security-services.html) are designed to protect cloud-based applications, data, and infrastructure from cyber threats. These services include a range of security measures such as firewalls, intrusion detection systems, encryption, and identity management, all aimed at securing cloud environments. **Why Cloud Security Services Matter** Implementing robust cloud security services is critical for several reasons: **- Data Integrity:** Ensures that data remains accurate and unaltered during storage and transmission. **- Cyber Threat Protection:** Protects against various cyber threats including malware, ransomware, and phishing attacks. **- Regulatory Compliance:** Helps businesses adhere to industry regulations and standards, avoiding legal penalties. **- Customer Trust:** Builds customer confidence by demonstrating a commitment to data security. **Core Elements of Cloud Security Services** Effective cloud security services encompass several core elements: **1. Firewall Protection:** Shields cloud environments from unauthorized access and malicious traffic. **2. Intrusion Detection and Prevention:** Monitors cloud networks for suspicious activity and blocks potential threats. **3. Data Encryption:** Secures data by converting it into a code to prevent unauthorized access. **4. Identity Management:** Manages user identities and access controls to ensure that only authorized users can access cloud resources. 
**Enhancing Cloud Security Services** To enhance your cloud security services, consider the following strategies: **- Implement Multi-Factor Authentication (MFA):** Adds an extra layer of security by requiring multiple forms of verification for access. **- Regular Security Assessments:** Conduct regular security assessments and vulnerability scans to identify and address potential weaknesses. **- Leverage Cloud Security Frameworks:** Utilize security frameworks such as NIST and CIS Controls to guide your cloud security strategy. **- Integrate Security into DevOps:** Adopt a DevSecOps approach to integrate security practices into the development and operations lifecycle. **Leading Cloud Security Tools** Several leading tools can help businesses bolster their cloud security: **- AWS Shield:** Provides advanced DDoS protection for AWS applications. **- Microsoft Azure Sentinel:** A cloud-native SIEM that uses AI to analyze security data and detect threats. **- Google Cloud Armor:** Protects applications against DDoS attacks and other web-based threats. **Overcoming Cloud Security Challenges** Enhancing [cloud security services](https://unicloud.co/cloud-security-services.html) involves overcoming several challenges: **- Scalability:** Ensuring security measures can scale with the growth of cloud resources. **- Visibility:** Maintaining visibility into cloud environments to detect and respond to threats promptly. **- Compliance:** Navigating the complex landscape of regulatory compliance requirements. **- Skill Gaps:** Addressing the shortage of skilled cybersecurity professionals. **Emerging Trends in Cloud Security Services** Staying ahead of emerging trends is essential for effective cloud security: **- Zero Trust Architecture:** Adopting a zero trust approach that assumes no user or device is trusted by default. **- Cloud-Native Security:** Developing security solutions specifically designed for cloud environments. 
**- AI and Machine Learning:** Using AI and machine learning to enhance threat detection and response capabilities. **- Serverless Security:** Implementing security measures for serverless architectures to protect against potential vulnerabilities. **Conclusion** Enhancing your [cloud security services](https://unicloud.co/cloud-security-services.html) is crucial for protecting your business from cyber threats and ensuring the integrity of your data. By implementing best practices, leveraging advanced security tools, and staying ahead of emerging trends, businesses can achieve optimal cloud security. Investing in robust cloud security services is not just about protecting assets; it's about ensuring long-term business success and resilience. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/484a2p55xv6e4r3ot6x2.jpeg)
unicloud
1,902,981
Welcome
Welcome to my blog Hi! My name is Alessandro and I am 42. I am working independently in...
0
2024-06-27T18:03:10
https://blog.lamparelli.eu/welcome
learningjourney, fullstack
---
title: Welcome
published: true
date: 2024-06-18 19:46:29 UTC
tags: LearningJourney,fullstack
canonical_url: https://blog.lamparelli.eu/welcome
---

### Welcome to my blog

Hi! My name is Alessandro and I am 42. I work independently in the IT domain. Multi-potential in action, continually seeking to grow and expand my knowledge. I recently started a full-stack developer learning path to reinvent myself. I am excited to share the story of my journey with you, and I hope it will help you learn new tips & tricks!

Greetings 🤟.
alamparelli
1,892,824
Lock / Mutex to a software engineer (Difficulty 3)
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-18T19:44:32
https://dev.to/sauravshah31/lock-mutex-to-a-software-engineer-5hm8
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

A mutex blocks access to a critical section until the current thread is done, preventing race conditions but potentially causing performance hits. The GIL in CPython restricts access to shared resources to one thread at a time, impacting multi-threading.

## Additional Context

I am planning to post 5 submissions explaining "Lock/Mutex" at 5 levels of difficulty. This is Difficulty 3. A Computer Science graduate or a software engineer will likely have heard of and used mutexes. Explaining mutexes, along with some interesting facts, is fun.

For more about explaining the term at 5 levels of difficulty, refer to the post below. It's interesting!

{% embed https://dev.to/sauravshah31/computer-science-challenge-lets-make-it-interesting-lai %}

[Previous explanation for Difficulty 2](https://dev.to/sauravshah31/lock-mutex-to-a-cs-undergraduate-59me)

[Next explanation for Difficulty 4](https://dev.to/sauravshah31/lock-mutex-to-a-post-graduate-cs-student-difficulty-4-m52)

**Cheers🎉**
~ [sauravshah31](https://x.com/sauravshah31)
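Postscript — a minimal Python illustration of the explainer above: four threads hammer a shared counter, and the `threading.Lock` mutex makes each increment atomic so no updates are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:  # only one thread may enter the critical section at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # with the lock held around each increment: exactly 400000
```

Remove the `with lock:` line and the read-modify-write race can drop updates, which is precisely what the mutex prevents.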
sauravshah31
1,892,823
Zowin – A Trusted Prize-Exchange Card Game Paradise
https://zowin.bid/ Zowin stands out as an ideal destination for anyone passionate about online betting in...
0
2024-06-18T19:39:58
https://dev.to/zowinbid01/zowin-thien-duong-game-bai-doi-thuong-uy-tin-4i0c
https://zowin.bid/ Zowin stands out as an ideal destination for anyone passionate about online betting, in Asia in general and in Vietnam in particular✔️✔️✔️ hotro.zowin@gmail.com 0368.910.938 533 Đ. Điện Biên Phủ, Phường 25, Bình Thạnh, Thành phố Hồ Chí Minh, Việt Nam #zowin #conggamezowin #gamebaizowin #zowinbid https://zowinbid01.zohosites.com https://hashnode.com/@zowinbid01 https://hackmd.io/@zowinbid01
zowinbid01
1,892,815
40 Days Of Kubernetes (2/40)
Day 2/40 How To Dockerize a Project Video Link @piyushsachdeva Git...
0
2024-06-18T19:09:14
https://dev.to/sina14/40-days-of-kubernetes-240-2abn
docker, kubernetes, 40daysofkubernetes
## Day 2/40 # How To Dockerize a Project [Video Link](https://www.youtube.com/watch?v=nfRsPiRGx74) @piyushsachdeva [Git Repository](https://github.com/piyushsachdeva/CKA-2024/) [My Git Repo](https://github.com/sina14/40daysofkubernetes) If you need a playground to manage on your own, please visit: ``` https://labs.play-with-docker.com/ https://labs.play-with-k8s.com/ ``` --- - We started with cloning a git repository from docker. It's a simple node application. ```console root@192.168.0.8 ~/day02 $ git clone https://github.com/docker/getting-started-app.git Cloning into 'getting-started-app'... remote: Enumerating objects: 79, done. remote: Counting objects: 100% (28/28), done. remote: Compressing objects: 100% (14/14), done. remote: Total 79 (delta 17), reused 17 (delta 13), pack-reused 51 Receiving objects: 100% (79/79), 1.76 MiB | 12.86 MiB/s, done. Resolving deltas: 100% (18/18), done. ``` - Login with a credential on docker hub ```console root@192.168.0.8 ~ $ docker login Log in with your Docker ID or email address to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com/ to create one. You can log in with your password or a Personal Access Token (PAT). Using a limited-scope PAT grants better security and is required for organizations using SSO. Learn more at https://docs.docker.com/go/access-tokens/ Username: sinatavakkol Password: WARNING! Your password will be stored unencrypted in /root/.docker/config.json. Configure a credential helper to remove this warning. See https://docs.docker.com/engine/reference/commandline/login/#credentials-store Login Succeeded ``` - We wrote a Dockerfile ``` FROM node:18-alpine WORKDIR /app COPY . . RUN yarn install --production CMD ["node", "src/index.js"] EXPOSE 3000 ``` - Build the image ```console root@192.168.0.8 ~/day02/getting-started-app $ sudo docker build -t day02-todo . 
[+] Building 23.2s (10/10) FINISHED docker:default => [internal] load .dockerignore 0.0s => => transferring context: 64B 0.0s => [internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 155B 0.0s => [internal] load metadata for docker.io/library/node:18-alpine 0.4s => [auth] library/node:pull token for registry-1.docker.io 0.0s => [1/4] FROM docker.io/library/node:18-alpine@sha256:6937be95129321422103452e2883021cc4a96b63c32d7947187fcb25df84fc3f 4.3s => => resolve docker.io/library/node:18-alpine@sha256:6937be95129321422103452e2883021cc4a96b63c32d7947187fcb25df84fc3f 0.0s => => sha256:6937be95129321422103452e2883021cc4a96b63c32d7947187fcb25df84fc3f 1.43kB / 1.43kB 0.0s => => sha256:05412f5b9ed819c373a2535804e473a155fc91bfb7adf469ec2312e056a9e87f 1.16kB / 1.16kB 0.0s => => sha256:e7d39d4d8569a6203be5b7a118d4d92526b267087023a49ee0868f7c50190191 7.23kB / 7.23kB 0.0s => => sha256:d25f557d7f31bf7acfac935859b5153da41d13c41f2b468d16f729a5b883634f 3.62MB / 3.62MB 0.1s => => sha256:f6124930634921d33d69a1a8b5848cb40d0b269e79b4c37c236cb5e4d61a2710 39.83MB / 39.83MB 1.0s => => sha256:22a81a0f8d1c30ce5a5da3579a84ab4c22fd2f14cb33863c1a752da6f056dc18 1.38MB / 1.38MB 0.1s => => extracting sha256:d25f557d7f31bf7acfac935859b5153da41d13c41f2b468d16f729a5b883634f 0.2s => => sha256:bd06542006fda4279cb2edd761a84311c1fdbb90554e9feaaf078a3674845742 447B / 447B 0.2s => => extracting sha256:f6124930634921d33d69a1a8b5848cb40d0b269e79b4c37c236cb5e4d61a2710 2.9s => => extracting sha256:22a81a0f8d1c30ce5a5da3579a84ab4c22fd2f14cb33863c1a752da6f056dc18 0.1s => => extracting sha256:bd06542006fda4279cb2edd761a84311c1fdbb90554e9feaaf078a3674845742 0.0s => [internal] load build context 0.1s => => transferring context: 6.47MB 0.1s => [2/4] WORKDIR /app 0.0s => [3/4] COPY . . 
0.1s => [4/4] RUN yarn install --production 15.8s => exporting to image 2.4s => => exporting layers 2.4s => => writing image sha256:41ea09f4ff8628eb00c044f4f5402238c5eb2815b281e998a7a74ca9ca3d1abf 0.0s => => naming to docker.io/library/day02-todo ``` - Tag the image ```console docker tag day02-todo:latest sinatavakkol/40daysofkube:02.0 ``` - Push the image to docker hub ```console docker push sinatavakkol/40daysofkube:02.0 ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kew7mt1nq10dotz1a8t7.png) - Run and create the container ```console docker run -d -p 3000:3000 --name fina sinatavakkol/40daysofkube:02.0 ```
sina14
1,892,822
Embarking on My UI/UX Design Journey: Day 1 - Introduction and Course Outline
Day 1: Introduction to My UI/UX Design Journey 👋 Hello, Dev Community! I’m Prince Chouhan, a B.Tech...
0
2024-06-18T19:39:18
https://dev.to/prince_chouhan/embarking-on-my-uiux-design-journey-day-1-introduction-and-course-outline-20e6
ui, ux, uidesign, uxdesign
Day 1: Introduction to My UI/UX Design Journey 👋 Hello, Dev Community! I’m **Prince Chouhan**, a B.Tech CSE student with a passion for UI/UX design. Today marks the beginning of my journey to learn UI/UX design from scratch, and I'm excited to share my daily learnings with you all. Follow along as I explore the intricacies of creating engaging and user-friendly designs. Course Plan and Table of Contents: 1. Introduction - Understanding UI (User Interface): The visual elements through which users interact with a product. - Defining UX (User Experience): The overall experience and satisfaction a user has when using a product. - Exploring CX (Customer Experience): How the user's experience extends beyond the product to their interactions with the company. 2. UI Design Principles 3. Figma Academy 4. UI Elements 5. Color Theory 6. Practical Web Design 7. Design Challenges 8. Prototyping and Animations 9. UI/UX Using AI --- I am thrilled to start this learning journey and share my progress with all of you. Stay tuned for daily updates and insights into UI/UX design. Let's learn and grow together! Feel free to follow my journey and share your thoughts or advice in the comments. #UIUXDesign #LearningJourney #DesignThinking #PrinceChouhan #DevCommunity ---
prince_chouhan
1,892,821
How To Map PostgreSQL `point` Data Type To Java PGpoint Data Type?
This article describes using PostgreSQL point data in Spring Boot &amp; Spring JPA/Hibernate...
0
2024-06-18T19:32:55
https://dev.to/georgech2/how-to-map-postgresql-point-data-type-to-java-pgpoint-data-type-4d8h
java, postgres, springboot, database
This article describes using PostgreSQL **point** data in Spring Boot & Spring JPA/Hibernate projects. * Which Java Data Type should be used for the `point` Data Type mapped? * Why can’t you use PGpoint directly? * How do you use data types that are not supported in JPA? ## Technology * Java 11 * Spring Boot 2.x * Spring JPA 2.x * PostgreSQL * Maven ## **'point'** Mapped TO **'PGpoint'** The official PostgreSQL library provides definitions of some special data that we can use directly in our projects. For example`PGpoint.class`: ```java public class PGpoint extends PGobject implements PGBinaryObject, Serializable, Cloneable { public double x; public double y; public boolean isNull; } ``` Then for columns of type point in Table, you can use `PGpoint` in Java Model Class: ```java @Data @Entity @Table(name = "cities") public class City implements Serializable { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; private String name; private PGpoint location; } ``` However, when querying the city data, an exception occurs: ``` Caused by: org.hibernate.type.SerializationException: could not deserialize at org.hibernate.internal.util.SerializationHelper.doDeserialize(SerializationHelper.java:243) ~[hibernate-core-5.6.15.Final.jar:5.6.15.Final] at org.hibernate.internal.util.SerializationHelper.deserialize(SerializationHelper.java:287) ~[hibernate-core-5.6.15.Final.jar:5.6.15.Final] at org.hibernate.type.descriptor.java.SerializableTypeDescriptor.fromBytes(SerializableTypeDescriptor.java:138) ~[hibernate-core-5.6.15.Final.jar:5.6.15.Final] at org.hibernate.type.descriptor.java.SerializableTypeDescriptor.wrap(SerializableTypeDescriptor.java:113) ~[hibernate-core-5.6.15.Final.jar:5.6.15.Final] at org.hibernate.type.descriptor.java.SerializableTypeDescriptor.wrap(SerializableTypeDescriptor.java:29) ~[hibernate-core-5.6.15.Final.jar:5.6.15.Final] at org.hibernate.type.descriptor.sql.VarbinaryTypeDescriptor$2.doExtract(VarbinaryTypeDescriptor.java:60) 
~[hibernate-core-5.6.15.Final.jar:5.6.15.Final]
	at org.hibernate.type.descriptor.sql.BasicExtractor.extract(BasicExtractor.java:47) ~[hibernate-core-5.6.15.Final.jar:5.6.15.Final]
	at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:257) ~[hibernate-core-5.6.15.Final.jar:5.6.15.Final]
	at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:253) ~[hibernate-core-5.6.15.Final.jar:5.6.15.Final]
	at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:243) ~[hibernate-core-5.6.15.Final.jar:5.6.15.Final]
	at org.hibernate.type.AbstractStandardBasicType.hydrate(AbstractStandardBasicType.java:329) ~[hibernate-core-5.6.15.Final.jar:5.6.15.Final]
	at ...
```

### Reason for Exception

Judging from the stack trace above, Hibernate fails to deserialize the query result into a Java object. It falls back to the default *AbstractStandardBasicType* because there is no BasicType implementation for `PGpoint`, so it cannot serialize or deserialize the column.

### How to fix this Exception

Hibernate provides the `UserType` interface for user-defined types. This interface should be implemented by user-defined “types”. A “type” class is _not_ the actual property type — it is a class that knows how to serialize instances of another class to and from JDBC.

**PGpointType**

```java
package com.example.demo;

import org.hibernate.HibernateException;
import org.hibernate.engine.spi.SharedSessionContractImplementor;
import org.hibernate.usertype.UserType;
import org.postgresql.geometric.PGpoint;
import org.springframework.util.ObjectUtils;

import java.io.Serializable;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

public class PGpointType implements UserType {

    /**
     * Return the SQL type codes for the columns mapped by this type.
* @return int[] */ @Override public int[] sqlTypes() { return new int[] { Types.VARCHAR }; } /** * The class returned by nullSafeGet(). * @return Class */ @Override public Class returnedClass() { return PGpoint.class; } @Override public boolean equals(Object o, Object o1) throws HibernateException { return ObjectUtils.nullSafeEquals(o, o1); } @Override public int hashCode(Object o) throws HibernateException { return ObjectUtils.nullSafeHashCode(o); } /** * Retrieve an instance of the mapped class from a JDBC resultset. */ @Override public Object nullSafeGet(ResultSet resultSet, String[] names, SharedSessionContractImplementor sharedSessionContractImplementor, Object o) throws HibernateException, SQLException { if (names.length == 1) { if (resultSet.wasNull() || resultSet.getObject(names[0]) == null) { return null; } else { return new PGpoint(resultSet.getObject(names[0]).toString()); } } return null; } /** * Write an instance of the mapped class to a prepared statement. */ @Override public void nullSafeSet(PreparedStatement preparedStatement, Object o, int i, SharedSessionContractImplementor sharedSessionContractImplementor) throws HibernateException, SQLException { if (o == null) { preparedStatement.setNull(i, Types.OTHER); } else { preparedStatement.setObject(i, o.toString(), Types.OTHER); } } @Override public Object deepCopy(Object o) throws HibernateException { return o; } @Override public boolean isMutable() { return false; } @Override public Serializable disassemble(Object o) throws HibernateException { return (Serializable) o; } @Override public Object assemble(Serializable serializable, Object o) throws HibernateException { return serializable; } @Override public Object replace(Object o, Object o1, Object o2) throws HibernateException { return o; } } ``` ## Conclusion The above is how to use the PostgreSQL `point` Data Type in a Spring Boot project, if there are other custom data types, you can also follow this way to implement.
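One wiring step the walkthrough above leaves implicit: the entity field still has to reference the custom type. With Hibernate 5 annotations that looks like the following mapping fragment (a sketch — it only compiles inside a project with Hibernate, Lombok, and the PostgreSQL driver on the classpath):

```java
import org.hibernate.annotations.Type;
import org.hibernate.annotations.TypeDef;
import org.postgresql.geometric.PGpoint;

import javax.persistence.*;
import java.io.Serializable;
import lombok.Data;

@Data
@Entity
@Table(name = "cities")
@TypeDef(name = "pgpoint", typeClass = PGpointType.class)
public class City implements Serializable {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    @Type(type = "pgpoint")   // routes this column through PGpointType
    private PGpoint location;
}
```

With `@TypeDef` registering the type under a name and `@Type` applying it to the field, Hibernate no longer falls back to the default serialization path that caused the exception above.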
georgech2
1,892,819
I'm a lazy developer. Here's how Amazon Q is enabling me
I'm always on the hunt for ways to improve my productivity as a developer. I've spent the better part...
0
2024-06-18T19:27:06
https://community.aws/content/2gQKqKLQqKmlvFsnp4zuCENKW8i/i-m-a-lazy-developer-here-s-how-amazon-q-is-enabling-me
productivity, ai, aws
I'm always on the hunt for ways to improve my productivity as a developer. I've spent the better part of the last two weeks diving head first into using [Amazon Q Developer](https://aws.amazon.com/developer/generative-ai/amazon-q), experimenting with how it might work in my development flow, learning the ins and outs, and chatting with coworkers about how they are using it. Today, I want to share what I've learned so far and how I'm tweaking my productivity and [staying in the flow](https://github.blog/2024-01-22-how-to-get-in-the-flow-while-coding-and-why-its-important/).

## Reduce my context switching

When I'm writing code, I want to get in the flow and stay in the flow. [Context switching](https://leaddev.com/process/managing-chaos-context-switching) -- flipping between apps, back and forth between browser tabs, a Google search here, a StackOverflow search there, the ding of an email, the bleep of a Slack notification (oops, I'm late for a meeting) -- all of this [comes with a cost](https://news.ycombinator.com/item?id=35459333).

I'm finding that Amazon Q Developer keeps me in my IDE. Instead of hopping over to a browser to do a Google search or to read through documentation for my programming language, the framework I'm using, or the AWS docs, I can use this tool to do this digging for me.

I can ask how to get started with a new project. Here, I started with the prompt `What are the steps to creating a python Flask app?` Amazon Q Developer outlines the steps I need to take to get started, providing me with both code snippets and explanations.

![Amazon Q chat with prompt "What are the steps to creating a python Flask app?" and a response.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lbf53hg3u0cl6ya4xt86.png)

I can never remember the steps to create a Python virtual environment, so I ask it questions like `How do I initialize a python virtual environment?`

![Amazon Q chat with prompt "How do I initialize a python virtual environment?"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3xg2ja20wi1e1tqahjgm.png)

And if I want to go deeper, I'm even presented with external references backing up the responses.

![The sources used to prepare the response.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4f86hpcauj05mokwkhp.png)

All of this allows me to stay in my IDE, reduce my context switching, and stay in the flow. Check out how my colleague [stays in the flow with Amazon Q](https://community.aws/content/2fo810EHkduTy0eRQRfxCmEFX10/using-amazon-q-developer-in-my-development-flow).

## Support for my lazy developer mentality

According to the creator of the Perl programming language, Larry Wall, laziness is one of the virtues of a good programmer. For a programmer to be effective and efficient, they must also be lazy. But not lazy in the traditional lounge-on-your-couch-and-watch-TV-all-day sense. Instead, a lazy developer is interested in saving time, automating tasks -- especially the boring, time-consuming, or brittle ones -- and documenting work for others.

That's me. I'm a lazy developer.

I don't remember all the properties for an HTML text area. So I ask Amazon Q to do the research for me and propose examples for boilerplate code like HTML elements and unit tests.
![Amazon Q chat with prompt "What is the syntax for an HTML textarea?"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a1bb9vdvbam37cnq7czg.png)

I ask it to explain a block of code by selecting it and then using the "Send to Amazon Q" menu option:

![The Send to Amazon Q -> Explain menu.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/93hbjyhh9cnl8vaacsx5.png)

And the response I get back:

![Prompt "Explain the following part of my code:" and the selected code.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cytl5o1klp60x6kazvrx.png)

And then I wanted more detail on the `json.loads` line:

![Prompt "Explain the following part of my code:" and the selected code.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rup3ffgkhff5ygyxyxoy.png)

In a previous life, I often worked as a team of one, inheriting someone else's codebase. I would come into a new codebase with no other teammates to help me figure out what was going on. This is where Amazon Q can help you get up to speed quickly, summarizing blocks of code, functions, or even whole files.

## Help me debug

One challenging task for developers is debugging hairy bugs. It's hard because you end up having to shove a bunch of info into your short-term memory in order to trace through layers and layers of code, some of which you may not even be familiar with. Then... BAM! Slack notification, time for a meeting! You drop everything for an hour and come back to debugging and have to start that trace over again. Or you're chasing through multiple API docs trying to figure out whether you've correctly understood how to make a call and which parameters to send and how to handle the response.

That's exactly what just happened to me as I was trying to make a call to Amazon Bedrock's `invoke_model` API. I messed it up!
So I asked Amazon Q for an assist:

![Asking Q chat to help me debug the error.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uuxwvhx1x9p0p45degg6.png)

I had also messed up the format of the response body, so I used Amazon Q to help me debug through that as well:

![Prompt "What is the format of the prompt I need to send to Bedrock with invoke_model"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cfw7ne7mxrj4h6b2bsst.png)

It's important to recognize that Amazon Q is backed by LLMs, which means it will experience hallucinations just like any other model. I did have some extra debugging to do because of this: the example code proposed using `ModelId` and `Body` when calling `invoke_model`, when they should have been the lower-case `modelId` and `body`.

Want to see more complex debugging scenarios with Amazon Q? Check out my colleagues' experiences solving a [data serialization problem](https://community.aws/content/2fbWIDtx027BQjqmNNSA2R7sTqI/kafka-go-and-protocol-buffers-how-amazon-q-solved-a-nasty-data-deserialization-problem) and [finding and fixing a concurrency bug](https://community.aws/content/2fbDOb7FOMzJJgTdhmCam7DVW1s/ai-found-a-concurrency-bug-in-this-code-then-fixed-it).

## Help me write tests

I am an avid tester, so when I'm writing code, I'm often pairing it up with unit tests at a minimum. I have not had much experience using mocks in Python tests, so I asked Q `How would you write a test for the send_prompt_to_bedrock function using mocks?` and got this response:

![Prompt "How would you write a test for send_prompt_to_bedrock using mocks?"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ellabgt38follcvw3gf.png)

It gets me a lot closer than if I had started with a Google search, dug through some documentation, and written the boilerplate code myself.
This response is an example customized to what I am doing, so I don't have to translate what a blog post author was trying to do in their situation to what I'm trying to do in mine. I didn't spend much time getting this test to pass; most of my time went to making sure the request/response objects were prepared correctly.

Read more about how my colleagues are using Amazon Q for [writing tests earlier](https://community.aws/content/2gBZtC94gPzaCQRnt4P0rIYWuBx/shift-left-workload-leveraging-ai-for-test-creation) and for [test driven development](https://community.aws/content/2freQx3PAGvuHlULJ2kJ57WP34E/test-driven-development-with-amazon-q-developer).

## Wrapping up

Hopefully the only thing you took away from this article wasn't that I'm a lazy developer! Here, I covered four different ways I'm tweaking my productivity as a developer so I can stay in the flow:

1. I'm reducing my context switching by staying in my IDE
2. I'm getting support for my lazy developer mentality by generating boilerplate code, summarizing what code is doing, and getting help with refactoring
3. I'm delegating my debugging work
4. I'm getting help with writing my unit tests

Are you ready to get started with Amazon Q Developer in your IDE? Check out how to set it up for [VS Code](https://community.aws/content/2fVw1hN4VeTF3qtVSZHfQiQUS16/getting-started-with-amazon-q-developer-in-visual-studio-code) or [JetBrains IDEs](https://community.aws/content/2fXj10wxhGCExqPvnsJNTycaUcL/adding-amazon-q-developer-to-jetbrains-ides) or even the [command line](https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/command-line-getting-started-installing.html).

Amazon Q Developer just left Preview on April 30th and is continuously being improved based on your feedback. If something isn't working as you expect, you find a bug, or you have suggestions to improve how it works in your development flow, let me know in the comments!
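As a footnote to the testing section above: the article only shows Q's answer as a screenshot, but the kind of mocked test it sketches for `send_prompt_to_bedrock` looks roughly like the following. Note that the function body and parameter names here are my own illustrative stand-ins, since the real signature isn't shown:

```python
from unittest.mock import MagicMock

def send_prompt_to_bedrock(client, prompt: str) -> str:
    """Illustrative stand-in: delegate the call to a Bedrock runtime client."""
    response = client.invoke_model(body=prompt)
    return response["body"]

def test_send_prompt_to_bedrock():
    # Replace the real client with a mock so no AWS call is ever made.
    client = MagicMock()
    client.invoke_model.return_value = {"body": "mocked completion"}

    result = send_prompt_to_bedrock(client, "hello")

    assert result == "mocked completion"
    client.invoke_model.assert_called_once_with(body="hello")
```

The pattern — inject the client, stub `invoke_model`, assert on both the return value and the call — is the scaffolding the generated test gets you started with.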
jennapederson
1,892,685
Setting Up an Express API with TypeScript and Pre-commit Hooks Using Husky
TL;DR Setting up an Express API with TypeScript can greatly enhance your development...
0
2024-06-18T19:27:00
https://dev.to/etorralbab/setting-up-an-express-api-with-typescript-and-pre-commit-hooks-using-husky-87m
typescript, express, boilerplate, lint
## TL;DR

Setting up an Express API with TypeScript can greatly enhance your development experience by providing strong typing and modern JavaScript features. This post covers the initial setup of an Express server with TypeScript, integration of linting with ESLint and Prettier, and ensuring code quality with Husky pre-commit hooks. For the complete code and setup details, please visit the [GitHub repository](https://github.com/etorralba/express-ts-api-boilerplate).

Today, I am configuring an Express server with TypeScript and setting up pre-commit hooks using Husky to ensure code quality. This setup will include linting and testing the code before it's committed to version control, which is crucial for maintaining code standards and reducing bugs.

## Create an Express Project

I'm establishing a mono repo for both the API and the client. Here's the initial structure for the API:

```
├── LICENSE
├── README.md
├── api
│   ├── eslint.config.mjs
│   ├── .eslintrc.json
│   ├── .prettierrc
│   ├── node_modules
│   ├── package-lock.json
│   ├── package.json
│   ├── src
│   └── tsconfig.json
├── node_modules
│   └── husky
├── package-lock.json
└── package.json
```

### Configuring Project Dependencies

1. Navigate to the API directory and initialize the Node.js project:

```bash
cd api/
npm init -y
```

2. Install necessary libraries, including Express and TypeScript:

```bash
npm install express
npm install -D @types/express @types/node ts-node typescript nodemon
```

3. Create the following configuration files:

- **tsconfig.json**: This file sets up TypeScript for our project.
```json
{
  "compilerOptions": {
    "target": "es2018",
    "module": "commonjs",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true,
    "moduleResolution": "node",
    "resolveJsonModule": true,
    "noImplicitReturns": true,
    "noUnusedLocals": true,
    "noUnusedParameters": true,
    "removeComments": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules", "**/*.spec.ts"]
}
```

- **src/server.ts**: This is the main server file using Express.

```typescript
import express, { Request, Response } from 'express';

const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req: Request, res: Response) => {
  res.send('Hello World from Express and TypeScript!');
});

app.listen(PORT, () => {
  console.log(`Server running on http://localhost:${PORT}`);
});
```

4. Update the `npm` scripts for easier development and build processes:

```json
"scripts": {
  "build": "tsc",
  "start": "node dist/server.js",
  "dev": "nodemon --watch 'src/**/*.ts' --exec 'ts-node' src/server.ts"
},
```

## Adding Linting to the Project

Linting helps maintain a standardized style across the project. Let's set up ESLint and Prettier:

1. Install ESLint:

```bash
npm install -D eslint@8.57.0
```

2. Initialize ESLint and configure it:

```bash
npx eslint --init
```

Choose the appropriate options for a Node.js environment with ES modules.

3. Update `package.json` with a lint command:

```json
"scripts": {
  "lint": "eslint src --fix"
},
```

4. Install Prettier and its ESLint integration:

```bash
npm install -D prettier eslint-config-prettier eslint-plugin-prettier
```

5. Configure Prettier and update ESLint settings:

```json
// .prettierrc
{
  "semi": true,
  "singleQuote": true
}
```

```json
// .eslintrc.json
{
  "extends": ["plugin:prettier/recommended"],
  "plugins": ["prettier"],
  "rules": {
    "prettier/prettier": "error"
  }
}
```

6. Add a format script to `package.json`:

```json
"scripts": {
  "format": "prettier --write \"src/**/*.{js,jsx,ts,tsx,json,css,scss,md}\""
},
```

## Adding Pre-commit Hooks

Pre-commit hooks help enforce code standards by running lint and format checks before committing:

1. Install Husky in the root folder:

```bash
npm install -D husky
```

2. Initialize Husky to create the `.husky/` directory:

```bash
npx husky
```

3. Add a `pre-commit` file inside the `.husky` directory with the following command to lint and format the code:

```
cd api && npm run lint && npm run format
```

## Further Improvements

- **Continuous Integration**: Integrate with a CI/CD pipeline to run tests and deploy automatically.
- **Testing**: Set up unit and integration tests using Jest or Mocha to ensure code quality and functionality.
- **Dockerization**: Containerize the application with Docker for easier deployment and scalability.

This setup provides a robust foundation for developing an Express API with TypeScript, emphasizing code quality and developer productivity.
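Written out as a complete file, the pre-commit hook from the Husky step above might look like this — a sketch assuming Husky v9+, where the hook is a plain shell script; adjust the `api` path if your package lives elsewhere:

```shell
#!/bin/sh
# .husky/pre-commit — run lint and format on the API package before each commit.
# A failing lint run aborts the commit, which is the point of the hook.
cd api && npm run lint && npm run format
```

Because Husky wires `.husky/` in as git's `core.hooksPath`, committing anywhere in the repo now runs this script first.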
etorralbab
1,892,820
Stunning Minimal Desktop Setups for Developers
Hey there, coding wizards! Ever feel like your workspace could use a bit of magic? You know, that...
0
2024-06-18T19:26:41
https://dev.to/3a5abi/stunning-minimal-desktop-setups-for-developers-3p9k
developers, webdev, productivity
Hey there, coding wizards! Ever feel like your workspace could use a bit of magic? You know, that perfect blend of sleek, stylish, and super functional? Well, you’re in luck! Today, we’re diving into some absolutely jaw-dropping minimal desktop setups that will make your fellow developers green with envy. Ready to transform your workspace into a productivity powerhouse? Let’s go! Read the full article here! -> [Stunning Minimal Desktop Setups for Developers - DevToys.io](https://devtoys.io/2024/06/17/stunning-minimal-desktop-setups-for-developers/) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iwlmrm256gh2ghbq43or.png)
3a5abi
1,892,813
How To Recover Your Stolen BTC
I was once a victim of a heart-wrenching cryptocurrency scam that left me devastated. I had invested...
0
2024-06-18T18:54:50
https://dev.to/clausel_borglum_720c9feed/how-to-recover-your-stolen-btc-4700
I was once a victim of a heart-wrenching cryptocurrency scam that left me devastated. I had invested a significant sum of $195,000 worth of Ethereum in an online investment platform, hoping to reap substantial profits. Little did I know that I was about to face a nightmare. As the weeks went by, my excitement turned into despair when I realized that the platform I had trusted was nothing more than an elaborate scheme to rob unsuspecting investors like me my hard-earned Ethereum, leaving me feeling helpless and betrayed. After extensive research, I came across Century Hackers Recovery Team, a crypto recovery specialist who helps victims like me regain their stolen assets. After weeks of tireless efforts, (century@cyberservices.com ) had successfully recovered a substantial portion of my lost Ethereum. To anyone who finds themselves in a similar unfortunate situation, I urge you not to lose hope. Reach out to Century Team for your Crypto recovery. via century@cyberservices.com  website: https://centurycyberhacker.pro Or WhatsApp : +31622673038
clausel_borglum_720c9feed
1,892,811
JPG to PNG: Understanding the Transition
What Are the Differences Between JPG and PNG Images? JPG (or JPEG) and PNG are two of the...
0
2024-06-18T18:53:24
https://dev.to/msmith99994/jpg-to-png-understanding-the-transition-5cb0
## What Are the Differences Between JPG and PNG Images?

JPG (or JPEG) and PNG are two of the most common image formats used today, each with its own set of characteristics that make it suitable for different applications. Understanding the differences between these formats can help you decide when to use each one and how to convert between them.

### JPG

- Compression: JPG uses lossy compression, which reduces file size by discarding some image data. This can lead to a loss of quality, especially with higher compression levels.
- Color Depth: Supports 24-bit color, displaying millions of colors, making it ideal for photographs.
- File Size: Generally smaller due to lossy compression, which is beneficial for web use.
- Transparency: Does not support transparency, meaning all pixels have color information.

### PNG

- Compression: PNG uses lossless compression, meaning no data is lost and image quality is maintained. This results in larger file sizes compared to JPG.
- Color Depth: Supports 24-bit color, like JPG, but also includes an 8-bit alpha channel for transparency.
- File Size: Larger than JPG due to lossless compression, which can be a disadvantage for web use.
- Transparency: Supports transparency and alpha channels, allowing for varying levels of opacity.

## Where Are They Used?

### JPG

- Digital Photography: JPG is the standard format for digital cameras and smartphones due to its balance of quality and file size.
- Web Design: Widely used for photographs and complex images on websites because of its quick loading times.
- Social Media: Preferred for sharing images on social platforms due to its universal support and small file size.
- Email and Document Sharing: Frequently used in emails and documents for easy viewing and sharing.

### PNG

- Web Graphics: Commonly used for logos, icons, and images requiring transparency.
- Digital Art: Preferred for images with sharp edges, text, and transparent backgrounds.
- Screenshots: Often used for screenshots to capture exact screen details without quality loss.
- Print Media: Used in scenarios where high quality and lossless compression are required.

## Advantages and Disadvantages

### JPG

**Advantages:**

- Small File Size: Effective lossy compression reduces file sizes significantly.
- Wide Compatibility: Supported by almost all devices, browsers, and software.
- High Color Depth: Capable of displaying millions of colors, ideal for photographs.
- Adjustable Quality: Compression levels can be adjusted to balance quality and file size.

**Disadvantages:**

- Lossy Compression: Quality degrades with higher compression levels and repeated edits.
- No Transparency: Does not support transparent backgrounds.
- Limited Editing Capability: Cumulative compression losses make it less ideal for extensive editing.

### PNG

**Advantages:**

- Lossless Compression: Maintains original image quality without any loss.
- Transparency: Supports transparent backgrounds and varying levels of opacity.
- High Color Depth: Suitable for images requiring detailed color representation.
- Ideal for Editing: No quality loss through multiple edits and saves.

**Disadvantages:**

- Larger File Sizes: Larger than JPG files due to lossless compression, which can be a drawback for web use.
- Not Ideal for Photographs: Typically results in larger files for photographic images compared to JPG.
- Browser Compatibility: While widely supported, PNG can be less efficient for large images on older systems.

## How to Convert JPG to PNG

Converting [JPG to PNG](https://cloudinary.com/tools/jpg-to-png) is straightforward and can be done using various tools and methods:

### Conversion Methods

1. Using Online Tools: Websites like Convertio and Online-Convert allow you to upload JPG files and download the converted PNG files.
2. Using Image Editing Software: Software like Adobe Photoshop and GIMP support both JPG and PNG formats. Open your JPG file and save it as PNG.
3. Command Line Tools: Command-line tools like ImageMagick can be used for conversion.
4. Programming Libraries: Programming libraries such as Python's Pillow can be used to automate the conversion process in applications.

## Conclusion

JPG and PNG are both essential image formats in the digital world, each with unique strengths and weaknesses. JPG is favored for its smaller file sizes and wide compatibility, making it ideal for photographs and web images. PNG, on the other hand, is preferred for its lossless compression and support for transparency, making it suitable for web graphics, digital art, and scenarios requiring high-quality images.

Understanding the differences between JPG and PNG, and knowing how to convert between them, allows you to choose the best format for your specific needs. Whether you need the efficient, compact storage of JPG or the high-quality, transparent capabilities of PNG, mastering these formats ensures you can handle any digital image requirement with ease.
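The Pillow route mentioned among the conversion methods can be sketched in a few lines (assuming Pillow is installed; the filenames are illustrative):

```python
from PIL import Image

def jpg_to_png(src: str, dst: str) -> None:
    """Convert a JPG file to PNG; the pixel data is stored losslessly from here on."""
    with Image.open(src) as im:
        # JPG has no alpha channel; converting to RGBA adds one so the PNG
        # can later carry transparency if needed.
        im.convert("RGBA").save(dst, format="PNG")

# Example: jpg_to_png("photo.jpg", "photo.png")
```

`save` can also infer the format from the `.png` extension, so the explicit `format="PNG"` is belt-and-braces; drop the `convert("RGBA")` call if you prefer a smaller RGB-only PNG.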
msmith99994
1,892,810
The Growing Demand for Doula Services in Ontario and Naturopathic Clinics in Toronto
https://www.sereneclinic.ca/ As holistic health and personalized care continue to gain traction, the...
0
2024-06-18T18:52:19
https://dev.to/serene_healthclinic_9116/the-growing-demand-for-doula-services-in-ontario-and-naturopathic-clinics-in-toronto-49g2
https://www.sereneclinic.ca/

As holistic health and personalized care continue to gain traction, the demand for doula services in Ontario and naturopathic clinics in Toronto has seen a significant rise. These services cater to individuals seeking comprehensive care that addresses both physical and emotional well-being. This article explores the burgeoning interest in these services, their benefits, and how they are shaping healthcare in Ontario and Toronto.

The Role of a Doula in Ontario

A doula in Ontario is a trained professional who provides continuous physical, emotional, and informational support to a mother before, during, and shortly after childbirth. Unlike medical professionals, doulas do not perform clinical tasks. Instead, they focus on ensuring a positive birth experience.

Benefits of Hiring a Doula in Ontario

Emotional Support: Doulas offer a calming presence and emotional reassurance, which is crucial during labor and delivery.

Personalized Care: Each birth is unique, and doulas provide tailored support based on the mother's needs and preferences.

Advocacy: Doulas help mothers communicate their birth plans to medical staff, ensuring their wishes are respected.

Reduced Interventions: Studies have shown that the presence of a doula can lead to fewer medical interventions, such as cesarean sections and epidurals.

Postpartum Support: Doulas also assist new mothers in the postpartum period, helping with breastfeeding and adjusting to life with a newborn.

The growing awareness of these benefits has led to an increased demand for doula services in Ontario, with many expecting parents seeking the added support to navigate their birthing journey.

Naturopathic Clinics in Toronto: A Holistic Approach to Health

Naturopathic clinics in Toronto are becoming increasingly popular as people seek alternatives to conventional medicine. These clinics offer a wide range of services, including nutritional counseling, herbal medicine, acupuncture, and lifestyle advice.
The focus is on treating the root cause of health issues rather than just addressing symptoms.

Key Services Offered by Naturopathic Clinics in Toronto

Nutritional Counseling: Personalized diet plans and nutritional advice to support overall health and manage specific conditions.

Herbal Medicine: The use of natural remedies and supplements to promote healing and well-being.

Acupuncture: An ancient Chinese practice that involves inserting thin needles into specific points on the body to alleviate pain and treat various health conditions.

Lifestyle and Stress Management: Guidance on managing stress, improving sleep, and making lifestyle changes that enhance health.

Detoxification Programs: Personalized detox programs to cleanse the body of toxins and improve metabolic function.

Why Choose Naturopathic Clinics in Toronto?

Individualized Treatment Plans: Naturopathic doctors (NDs) take the time to understand each patient's unique health concerns and develop personalized treatment plans.

Holistic Approach: Naturopathic clinics in Toronto emphasize the connection between mind, body, and spirit, aiming for overall well-being.

Preventative Care: NDs focus on preventing illness by promoting healthy lifestyle choices and proactive health measures.

The integration of these holistic practices within the healthcare system provides an alternative for those seeking natural and non-invasive treatment options. This approach has led to the proliferation of naturopathic clinics in Toronto, catering to a growing demographic that values natural health solutions.

Synergy Between Doula Services and Naturopathic Clinics

The synergy between doula services in Ontario and naturopathic clinics in Toronto is evident as both prioritize holistic, patient-centered care. Expecting parents often turn to naturopathic clinics for prenatal and postnatal care, complementing the support provided by doulas. This integrated approach ensures comprehensive care throughout pregnancy and beyond.
Complementary Benefits

Prenatal Care: Naturopathic clinics provide nutritional and lifestyle advice to support a healthy pregnancy, while doulas offer emotional and informational support.

Labor and Delivery: Doulas assist during labor, and naturopathic treatments such as acupuncture can be used to manage pain and promote relaxation.

Postpartum Care: Both services offer postpartum support, helping new mothers with recovery, breastfeeding, and emotional well-being.

Finding the Right Doula in Ontario and Naturopathic Clinic in Toronto

With the growing popularity of these services, it's essential to choose the right provider to meet your needs. Here are some tips:

Selecting a Doula in Ontario

Experience and Training: Look for a doula with proper certification and experience in various birthing scenarios.

Compatibility: Choose someone who aligns with your birth philosophy and makes you feel comfortable.

References: Ask for references and read reviews from previous clients.

Availability: Ensure the doula is available around your due date and can provide continuous support.

Choosing a Naturopathic Clinic in Toronto

Qualified Practitioners: Verify that the clinic's naturopathic doctors are licensed and have the necessary credentials.

Range of Services: Select a clinic that offers the specific treatments and services you need.

Patient Reviews: Check reviews and testimonials to gauge patient satisfaction.

Consultation: Schedule a consultation to discuss your health concerns and treatment options.

Conclusion

The increasing demand for doula services in Ontario and naturopathic clinics in Toronto reflects a shift towards holistic and personalized healthcare. By focusing on the individual's overall well-being, these services provide valuable support during significant life transitions, such as pregnancy and childbirth, and promote long-term health through natural and preventative measures.
Whether you are an expectant parent seeking a doula or someone looking for naturopathic care, these options offer a comprehensive approach to health and wellness, ensuring that you receive the care and support you need.

Doula vs Midwife: Understanding the Differences and Choosing the Right Birth Professional

When planning for childbirth, the terms "doula vs midwife" often come up. While both play crucial roles in supporting expectant mothers, their responsibilities, training, and approaches differ significantly. Understanding the distinctions between a doula and a midwife can help you make an informed decision about which professional is best suited for your birth experience. In this comprehensive guide, we'll delve into the key differences, benefits, and considerations to help you decide between a doula vs midwife for your childbirth journey.

What is a Doula?

A doula is a trained professional who provides continuous physical, emotional, and informational support to a mother before, during, and shortly after childbirth. The primary focus of a doula is on the mother's comfort and well-being rather than the medical aspects of childbirth. Doulas are not medical professionals, and they do not perform clinical tasks such as delivering the baby or providing medical care.

Roles and Responsibilities of a Doula

Emotional Support: Doulas offer reassurance, encouragement, and companionship. They help reduce anxiety and stress, providing a calm presence during labor.

Physical Comfort: Techniques such as massage, breathing exercises, and positioning can help manage pain and increase comfort.

Information and Advocacy: Doulas provide evidence-based information to help mothers make informed decisions. They also advocate for the mother's wishes and preferences in the birthing environment.

Postpartum Support: Assistance with breastfeeding, newborn care, and emotional adjustment after birth.

What is a Midwife?
A midwife is a healthcare professional specialized in supporting women during pregnancy, labor, and postpartum. Midwives are trained to provide comprehensive prenatal care, deliver babies, and handle common complications. They can practice in various settings, including hospitals, birthing centers, and home births.

Roles and Responsibilities of a Midwife

Prenatal Care: Monitoring the health of the mother and baby, conducting routine check-ups, and providing medical advice.

Labor and Delivery: Managing the birth process, including delivering the baby, monitoring fetal health, and handling complications.

Postpartum Care: Providing medical care and support to the mother and baby after birth, including check-ups and assistance with breastfeeding.

Healthcare Provider: Performing medical procedures such as administering medications, suturing tears, and conducting emergency interventions if necessary.

Doula vs Midwife: Key Differences

Understanding the key differences between a doula vs midwife is essential for making an informed decision:

Training and Certification: Doulas undergo training programs focused on non-medical support, while midwives have extensive medical education and training, often holding certifications such as Certified Nurse Midwife (CNM) or Certified Professional Midwife (CPM).

Scope of Practice: Doulas provide emotional and physical support but do not perform medical tasks. Midwives provide medical care, deliver babies, and manage complications.

Role During Labor: A doula offers continuous, non-medical support, while a midwife handles medical aspects and the actual delivery.

Postpartum Services: Both provide postpartum support, but a midwife offers medical care, whereas a doula focuses on emotional and practical support.

Benefits of Having a Doula

Continuous Support: Studies show that continuous support from a doula can reduce the length of labor, decrease the need for interventions, and improve overall birth satisfaction.
Personalized Care: Doulas provide tailored support based on the mother's preferences and needs, enhancing the birthing experience.

Enhanced Communication: Doulas facilitate communication between the mother and medical staff, ensuring the mother's wishes are respected.

Benefits of Having a Midwife

Medical Expertise: Midwives provide comprehensive medical care and are trained to handle complications, ensuring the safety of both mother and baby.

Holistic Approach: Midwives often adopt a holistic approach to childbirth, emphasizing natural birth and minimal intervention when possible.

Continuity of Care: Midwives offer consistent care throughout pregnancy, labor, and postpartum, fostering a strong, trusting relationship with the mother.

Choosing Between a Doula vs Midwife

The choice between a doula vs midwife depends on your personal preferences, health needs, and desired birth experience. Here are some factors to consider:

Health and Pregnancy Risk Level: If you have a high-risk pregnancy, a midwife's medical expertise is essential. For low-risk pregnancies, the additional emotional and physical support of a doula can be highly beneficial.

Birth Setting: Consider where you plan to give birth. Midwives can practice in hospitals, birthing centers, or at home, while doulas can support you in any setting, complementing the care provided by midwives or doctors.

Type of Support Desired: If you seek continuous emotional and physical support without medical intervention, a doula is a great choice. If you need comprehensive medical care and delivery management, a midwife is necessary.

Combining the Support of a Doula vs Midwife

For many expectant mothers, the best approach is to benefit from the strengths of both a doula and midwife. A midwife can handle the medical aspects of childbirth, ensuring safety and health, while a doula provides continuous, personalized support to enhance comfort and emotional well-being.
This combination can lead to a more satisfying and empowering birth experience.

Conclusion

Choosing between a doula vs midwife is a significant decision in your childbirth journey. By understanding the distinct roles, benefits, and scopes of practice, you can select the right professional to support your needs and preferences. Whether you choose a doula, a midwife, or both, the goal is to ensure a safe, positive, and empowering birth experience. Remember, the best choice is one that aligns with your individual needs, health circumstances, and desired birth plan.
serene_healthclinic_9116
1,892,809
Understanding Static vs Self in PHP
In PHP, static and self are keywords used to refer to properties and methods within classes, but they...
0
2024-06-18T18:48:30
https://dev.to/edriso/understanding-static-vs-self-in-php-3bm2
php, oop
In PHP, `static` and `self` are keywords used to refer to properties and methods within classes, but they behave differently in terms of inheritance and late static bindings.

#### Static

- **Definition:** When used inside a class, `static` refers to the class that the method or property is called on, not the class where it is defined.
- **Usage:** Typically used for static properties and methods that are shared across all instances of a class and do not change between instances.
- **Inheritance:** Static references using `static` are resolved at run time, which means they refer to the class of the object instance on which the method or property is called, adapting to any subclass that may inherit them dynamically.

#### Example:

```php
<?php
class Furniture {
    protected static $category = 'General';

    public static function describe() {
        return 'This is a ' . static::$category . ' piece of furniture.';
    }
}

class Chair extends Furniture {
    protected static $category = 'Chair';
}

echo Furniture::describe(); // Outputs: This is a General piece of furniture.
echo Chair::describe();     // Outputs: This is a Chair piece of furniture.
?>
```

#### Self

- **Definition:** `self` refers explicitly to the class in which the keyword is used. It is resolved using the class where it is defined, not where it is called.
- **Usage:** Useful for accessing static properties and methods within the same class, especially when you want to ensure that the reference does not change with inheritance.
- **Inheritance:** Static references using `self` are resolved early at compile time and do not change in subclasses.

#### Example:

```php
<?php
class Furniture {
    protected static $category = 'General';

    public static function describe() {
        return 'This is a ' . self::$category . ' piece of furniture.';
    }
}

class Chair extends Furniture {
    protected static $category = 'Chair';
}

echo Furniture::describe(); // Outputs: This is a General piece of furniture.
echo Chair::describe();     // Outputs: This is a General piece of furniture.
?>
```

### Conclusion

- **Static** is used to refer to properties and methods in a way that allows for late static bindings, meaning references can change based on the runtime class.
- **Self** is used for references that should remain specific to the class where they are defined, ensuring they do not change with inheritance.

Understanding when to use `static` vs `self` depends on whether you want the reference to adapt to subclasses (`static`) or remain fixed at the class where it is defined (`self`). Each serves a distinct purpose in PHP's object-oriented programming paradigm.
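A place where this distinction constantly shows up in real code is factory methods: `new static()` builds an instance of the class the method was called on, while `new self()` always builds the defining class. A minimal sketch (the class names here are made up for illustration):

```php
<?php
class Model {
    public static function create() {
        return new static(); // late static binding: instantiates the called class
    }

    public static function createSelf() {
        return new self();   // always instantiates Model
    }
}

class User extends Model {}

echo get_class(User::create());     // Outputs: User
echo get_class(User::createSelf()); // Outputs: Model
?>
```

This is why `new static()` is the usual choice for base-class factories that should keep working in subclasses.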
edriso
1,892,808
2024-06-17: CoT prompting
Seems like we still have a fire lit under our butts. v0 was apparently not satisfactory enough, so my...
0
2024-06-18T18:48:12
https://dev.to/armantark/2024-06-17-cot-prompting-3jjj
devjournal
Seems like we still have a fire lit under our butts. v0 was apparently not satisfactory enough, so my primary goal last week was to make a more fully-fledged version of CoT (as I mentioned last week) so that the AI can actually think about its responses before querying the user for further details.

It was a bit of a technical lift to get it done. The way our project is set up is a little weird, in that we're using Django for routing, so for some reason all our LLM logic is localized to the serializer and apps.py files, even though it really should all be refactored into its own files. I had to do a bunch of theorycrafting and planning to come up with a design that I thought would work for the CoT. Obviously sending that whole massive chunk of thinking is not good for UX, so it needs to be done under the hood. So I thought it should do a CoT and THEN have a second step to refine that into a singular question or statement.

So I set out to do that. I ended up having to keep two separate memories, though, and we apparently have like 5 or 6 variables dedicated just to handling the memory/conversation/chat history, so it got real confusing real fast. Fortunately, I managed to detangle all that, and I also had one of the other backend engineers show me how to add a new column to the database for this CoT message so it can pick up from where it left off upon reload or switching conversations. All this took like 12 hours on Tuesday.

So it was working, but it needed some further engineering to make the prompt actually do a good job of improving the UX. So I met with the product team several times to just jam it out and add and tweak the prompts until it was up to the standard we wanted. The only major issue is that it's SLOW, but I guess that's the point. They preferred it take 5 seconds to generate a response over it being suboptimal.

Anyway, so all that was good and proper, and then came the menace of trying to merge it back into dev so that everyone could try it. The rest of the dev team had been working on implementing text streaming for the chat messages, and the way they did it is through SSE instead of simply doing it over a websocket, so the code was absolutely grotesque. On top of that, instead of relying on version control, the one guy primarily working on it decided to make an entirely separate file/routing system just for the streaming, and I'd been working out of the "vanilla" ones this whole time.

I immediately told them to start "zipping" it all together, and that was a whole mess on Friday because they didn't understand what I meant for some reason, so that one dev just slapped the "new" methods into the original file instead of actually reworking the functionality. Eventually the lead dev stepped in and did it himself, and I was able to merge in my code. It didn't work properly, but by the time I had to sign off on Friday (it was a busy weekend for me), it was about 90% there, so I left it for the other guys to work on. They ended up fixing it (mostly), but then a whole mess happened with threading that I'll cover next blog post, because it's technically been this week, not last week.

The reason why we were in such a rush to get this out, as I mentioned, was to get investors hooked. Hopefully it was all worth it.

Until next time, cheers.
armantark
1,892,695
Active Session History (ASH) in YugabyteDB
The latest versions of YugabyteDB have a new instrumentation feature to troubleshoot performance:...
0
2024-06-18T18:47:49
https://dev.to/yugabyte/active-session-history-ash-in-yugabytedb-39ic
yugabytedb, distributed, sql, database
The latest versions of YugabyteDB have a new instrumentation feature to troubleshoot performance: Active Session History, which gathers information about the threads running in the Tablet Servers and samples it. You need to set the following: ```sh --allowed_preview_flags_csv=ysql_yb_ash_enable_infra,ysql_yb_enable_ash --ysql_yb_ash_enable_infra=true --ysql_yb_enable_ash=true ``` I'll probably refine my queries on it in the future, but for the moment, I've created a function that can gather the samples for the last minute from all tablet servers. I run `select * from gv$ash(seconds=>60);` and get the following: ``` psql (16.2, server 11.2-YB-2.21.1.0-b0) Type "help" for help. yugabyte=# select * from gv$ash(seconds=>60); samples | #req | #rpc | #ysql | component | event_type | event_class | wait_event | info | host | zone | region | cloud | secs ---------+------+------+-------+-----------+------------+-------------+------------------------------------+------------------------------------------------------+------------+-------+--------+-------+------ 56 | 56 | 0 | 1 | YSQL | Cpu | YSQLQuery | QueryProcessing | insert into xxx(value) select generate_series($1,$2) | 10.0.0.181 | zone1 | fra | cloud | 60 26 | 1 | 26 | 1 | TServer | Network | Client | YBClient_WaitingOnDocDB | | 10.0.0.181 | zone1 | fra | cloud | 59 17 | 1 | 17 | 1 | TServer | Network | Consensus | Raft_WaitingForReplication | tablet id: 6b4edd419b554eb | 10.0.0.181 | zone1 | fra | cloud | 43 14 | 1 | 14 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: 9c885e587b73499 | 10.0.0.182 | zone2 | fra | cloud | 52 11 | 1 | 9 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: e9e5e29943864ba | 10.0.0.183 | zone3 | fra | cloud | 58 9 | 1 | 9 | 1 | TServer | Cpu | Common | OnCpu_Active | | 10.0.0.181 | zone1 | fra | cloud | 54 9 | 1 | 9 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: 9c885e587b73499 | 10.0.0.181 | zone1 | fra | cloud | 52 9 | 1 | 9 | 1 | TServer | Cpu | Common | 
OnCpu_Active | tablet id: aab25ce7209043f | 10.0.0.183 | zone3 | fra | cloud | 59 7 | 1 | 7 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: 6b4edd419b554eb | 10.0.0.182 | zone2 | fra | cloud | 21 7 | 1 | 7 | 1 | TServer | Cpu | Common | OnCpu_Passive | | 10.0.0.181 | zone1 | fra | cloud | 53 7 | 1 | 7 | 1 | TServer | Cpu | Common | OnCpu_Active | | 10.0.0.183 | zone3 | fra | cloud | 56 6 | 2 | 4 | 2 | TServer | Network | Consensus | Raft_WaitingForReplication | tablet id: e9e5e29943864ba | 10.0.0.182 | zone2 | fra | cloud | 1 6 | 1 | 6 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: ae8cdaebee6e4ad | 10.0.0.182 | zone2 | fra | cloud | 51 6 | 2 | 6 | 1 | TServer | Network | Consensus | Raft_WaitingForReplication | tablet id: ae8cdaebee6e4ad | 10.0.0.183 | zone3 | fra | cloud | 4 5 | 5 | 5 | 1 | TServer | Cpu | Common | OnCpu_Passive | tablet id: ae8cdaebee6e4ad | 10.0.0.183 | zone3 | fra | cloud | 37 5 | 1 | 4 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: e9e5e29943864ba | 10.0.0.181 | zone1 | fra | cloud | 24 5 | 5 | 5 | 1 | TServer | Network | TabletWait | ConflictResolution_ResolveConficts | tablet id: aab25ce7209043f | 10.0.0.182 | zone2 | fra | cloud | 18 5 | 5 | 5 | 2 | TServer | Cpu | Common | OnCpu_Passive | tablet id: aab25ce7209043f | 10.0.0.182 | zone2 | fra | cloud | 49 5 | 4 | 5 | 2 | TServer | Network | Consensus | Raft_WaitingForReplication | tablet id: aab25ce7209043f | 10.0.0.182 | zone2 | fra | cloud | 5 5 | 1 | 5 | 1 | TServer | Cpu | Consensus | Raft_ApplyingEdits | tablet id: ae8cdaebee6e4ad | 10.0.0.183 | zone3 | fra | cloud | 26 5 | 1 | 5 | 1 | TServer | Cpu | Consensus | Raft_ApplyingEdits | tablet id: 9c885e587b73499 | 10.0.0.183 | zone3 | fra | cloud | 26 5 | 1 | 5 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: ae8cdaebee6e4ad | 10.0.0.181 | zone1 | fra | cloud | 45 5 | 1 | 5 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: aab25ce7209043f | 10.0.0.181 | zone1 | fra | cloud | 40 5 | 1 | 5 | 1 | 
TServer | Cpu | Common | OnCpu_Active | tablet id: 6b4edd419b554eb | 10.0.0.183 | zone3 | fra | cloud | 15 4 | 4 | 4 | 1 | TServer | Network | TabletWait | ConflictResolution_ResolveConficts | tablet id: ae8cdaebee6e4ad | 10.0.0.183 | zone3 | fra | cloud | 40 4 | 1 | 4 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: df10b3d89f3349f | 10.0.0.183 | zone3 | fra | cloud | 29 3 | 1 | 3 | 1 | TServer | Cpu | Common | OnCpu_Passive | tablet id: 6b4edd419b554eb | 10.0.0.183 | zone3 | fra | cloud | 38 3 | 1 | 3 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: 8cefb385bcdc44b | 10.0.0.183 | zone3 | fra | cloud | 44 3 | 2 | 3 | 2 | TServer | Cpu | Consensus | Raft_ApplyingEdits | tablet id: aab25ce7209043f | 10.0.0.182 | zone2 | fra | cloud | 2 3 | 3 | 3 | 1 | TServer | Network | Consensus | Raft_WaitingForReplication | tablet id: 9c885e587b73499 | 10.0.0.183 | zone3 | fra | cloud | 7 3 | 1 | 3 | 1 | TServer | Network | Consensus | Raft_WaitingForReplication | tablet id: b63700bea0ef4b0 | 10.0.0.181 | zone1 | fra | cloud | 7 3 | 3 | 3 | 1 | TServer | Network | TabletWait | ConflictResolution_ResolveConficts | tablet id: 9c885e587b73499 | 10.0.0.183 | zone3 | fra | cloud | 13 3 | 3 | 0 | 1 | YSQL | Network | TServerWait | StorageFlush | insert into xxx(value) select generate_series($1,$2) | 10.0.0.181 | zone1 | fra | cloud | 23 2 | 1 | 2 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: 6b4edd419b554eb | 10.0.0.181 | zone1 | fra | cloud | 25 2 | 1 | 2 | 1 | TServer | Cpu | Consensus | Raft_ApplyingEdits | tablet id: e9e5e29943864ba | 10.0.0.182 | zone2 | fra | cloud | 28 2 | 1 | 2 | 1 | TServer | Network | Consensus | Raft_WaitingForReplication | tablet id: df10b3d89f3349f | 10.0.0.181 | zone1 | fra | cloud | 6 2 | 1 | 2 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: df10b3d89f3349f | 10.0.0.182 | zone2 | fra | cloud | 7 2 | 2 | 2 | 1 | TServer | DiskIO | RocksDB | RocksDB_NewIterator | tablet id: aab25ce7209043f | 10.0.0.182 | zone2 | fra | 
cloud | 51 2 | 2 | 2 | 1 | TServer | DiskIO | RocksDB | RocksDB_NewIterator | tablet id: ae8cdaebee6e4ad | 10.0.0.183 | zone3 | fra | cloud | 9 2 | 1 | 1 | 1 | TServer | DiskIO | TabletWait | SaveRaftGroupMetadataToDisk | | 10.0.0.181 | zone1 | fra | cloud | 1 2 | 1 | 1 | 1 | TServer | DiskIO | TabletWait | SaveRaftGroupMetadataToDisk | | 10.0.0.182 | zone2 | fra | cloud | 1 2 | 1 | 2 | 1 | TServer | Cpu | Common | OnCpu_Passive | tablet id: ae8cdaebee6e4ad | 10.0.0.182 | zone2 | fra | cloud | 5 2 | 1 | 1 | 1 | TServer | DiskIO | TabletWait | SaveRaftGroupMetadataToDisk | | 10.0.0.183 | zone3 | fra | cloud | 1 2 | 1 | 2 | 1 | TServer | Cpu | Common | OnCpu_Passive | tablet id: 8cefb385bcdc44b | 10.0.0.181 | zone1 | fra | cloud | 0 2 | 1 | 2 | 1 | TServer | Cpu | Common | OnCpu_Active | | 10.0.0.182 | zone2 | fra | cloud | 1 2 | 1 | 2 | 1 | TServer | Network | Consensus | Raft_WaitingForReplication | tablet id: 8cefb385bcdc44b | 10.0.0.181 | zone1 | fra | cloud | 6 2 | 1 | 2 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: b63700bea0ef4b0 | 10.0.0.182 | zone2 | fra | cloud | 2 2 | 1 | 2 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: eb7069a1649144e | 10.0.0.183 | zone3 | fra | cloud | 4 1 | 1 | 1 | 1 | TServer | Cpu | Common | OnCpu_Passive | tablet id: 9c885e587b73499 | 10.0.0.181 | zone1 | fra | cloud | 0 1 | 1 | 1 | 1 | TServer | Cpu | Common | OnCpu_Passive | tablet id: 6885238820ff401 | 10.0.0.182 | zone2 | fra | cloud | 0 1 | 1 | 1 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: ae8cdaebee6e4ad | 10.0.0.183 | zone3 | fra | cloud | 0 1 | 1 | 1 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: eb7069a1649144e | 10.0.0.182 | zone2 | fra | cloud | 0 1 | 1 | 0 | 1 | YSQL | Extension | YSQLQuery | Extension | | 10.0.0.181 | zone1 | fra | cloud | 0 1 | 1 | 1 | 1 | TServer | Network | Consensus | Raft_WaitingForReplication | tablet id: eb7069a1649144e | 10.0.0.181 | zone1 | fra | cloud | 0 1 | 1 | 0 | 1 | YSQL | Network | TServerWait 
| CatalogRead | select * from gv$ash(seconds=>$1) | 10.0.0.181 | zone1 | fra | cloud | 0 1 | 1 | 1 | 1 | TServer | Cpu | Common | OnCpu_Active | tablet id: b63700bea0ef4b0 | 10.0.0.183 | zone3 | fra | cloud | 0 1 | 1 | 1 | 1 | TServer | Cpu | Common | OnCpu_Passive | tablet id: ae8cdaebee6e4ad | 10.0.0.181 | zone1 | fra | cloud | 0 1 | 1 | 1 | 1 | TServer | Cpu | Common | OnCpu_Passive | tablet id: 9c885e587b73499 | 10.0.0.183 | zone3 | fra | cloud | 0 1 | 1 | 1 | 1 | TServer | Cpu | Common | OnCpu_Passive | tablet id: e9e5e29943864ba | 10.0.0.182 | zone2 | fra | cloud | 0 1 | 1 | 1 | 1 | TServer | Cpu | Common | OnCpu_Passive | | 10.0.0.183 | zone3 | fra | cloud | 0
(60 rows)
```

The YSQL layer is running an insert statement, and the tablet identifiers can be used to find the table or index in http://yb-master:7000/dump-entities

ASH samples are visible from `yb_active_session_history` on each tablet server. I use the Foreign Data Wrapper to consolidate from all servers. Here is how I created the `gv$ash()` function and all dependent objects (FDW server, foreign tables, and views):

```sql
create extension if not exists postgres_fdw;

select format('
 create server if not exists "gv$%1$s"
 foreign data wrapper postgres_fdw
 options (host %2$L, port %3$L, dbname %4$L)
', host, host, port, current_database())
from yb_servers();
\gexec

select format('
 drop user mapping if exists for admin server "gv$%1$s"
', host) from yb_servers();
\gexec

select format('
 create user mapping if not exists for current_user server "gv$%1$s"
 --options ( user %2$L, password %3$L )
', host, 'yugabyte', 'SECRET') from yb_servers();
\gexec

select format('
 drop schema if exists "gv$%1$s" cascade
', host) from yb_servers();
\gexec

select format('
 create schema if not exists "gv$%1$s"
', host) from yb_servers();
\gexec

select format('
 import foreign schema "pg_catalog"
 limit to ("yb_active_session_history","pg_stat_statements")
 from server "gv$%1$s" into "gv$%1$s"
', host) from yb_servers();
\gexec

with views as (
 select distinct foreign_table_name
 from information_schema.foreign_tables t, yb_servers() s
 where foreign_table_schema = format('gv$%1$s', s.host)
)
select format('drop view if exists "gv$%1$s"', foreign_table_name)
from views
union all
select format('create or replace view public."gv$%2$s" as %1$s',
 string_agg(
  format('
   select %2$L as gv$host, %3$L as gv$zone, %4$L as gv$region, %5$L as gv$cloud, *
   from "gv$%2$s".%1$I
  ', foreign_table_name, host, zone, region, cloud)
 , ' union all '), foreign_table_name
)
from views, yb_servers()
group by views.foreign_table_name
;
\gexec

drop function if exists gv$ash;

create or replace function public.gv$ash(seconds interval default '60 seconds')
RETURNS TABLE (
 samples real, "#req" bigint, "#rpc" bigint, "#ysql" bigint,
 component text, event_type text, event_class text, wait_event text,
 info text, host text, zone text, region text, cloud text, secs int
) as $$
 select
  sum(sample_weight) as samples
  , count(distinct root_request_id) as "#req"
  , count(distinct rpc_request_id) as "#rpc"
  , count(distinct ysql_session_id) as "#ysql"
  , wait_event_component as component, wait_event_type as event_type
  , wait_event_class as event_class, wait_event
  , coalesce ( 'tablet_id: '||wait_event_aux, substr(query,1,60) ) as info
  , h.gv$host, h.gv$zone, h.gv$region, h.gv$cloud
  , extract(epoch from max(sample_time)-min(sample_time))::int as secs
 from gv$yb_active_session_history h
 left outer join gv$pg_stat_statements s
  on s.gv$host=h.gv$host and s.queryid=h.query_id
 where sample_time>now()-seconds
 group by wait_event_component, wait_event_type, wait_event_class, wait_event
  , wait_event_aux, substr(query,1,60)
  , h.gv$host, h.gv$zone, h.gv$region, h.gv$cloud
 order by 1 desc
;
$$ language sql;

select * from gv$ash();
```

The user and password are hardcoded here (`yugabyte`), but you can create your own user mapping to each server.
The list of wait events is documented:

{% embed https://docs.yugabyte.com/preview/explore/observability/active-session-history/#wait-events %}

My function shows the overall picture, and you can query `yb_active_session_history` to get the details of all samples. There's a request ID that helps to match activity from different layers.
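If you just want a quick look at a single node, without the FDW consolidation, you can run a reduced version of the same aggregation directly against the local view. This is a sketch reusing the column names from the `gv$ash()` function above:

```sql
-- Top wait events on the local tablet server over the last minute,
-- joined to pg_stat_statements to recover the YSQL query text.
select wait_event_component, wait_event_type, wait_event,
       substr(s.query, 1, 60) as query,
       sum(h.sample_weight)   as samples
  from yb_active_session_history h
  left outer join pg_stat_statements s on s.queryid = h.query_id
 where h.sample_time > now() - interval '60 seconds'
 group by wait_event_component, wait_event_type, wait_event, substr(s.query, 1, 60)
 order by samples desc;
```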
franckpachot
1,892,806
Debugging in JS
Debugging in JavaScript is like being a detective in your code. Imagine your code is a mystery...
0
2024-06-18T18:40:46
https://dev.to/__khojiakbar__/msdnzxfbcdms-35o5
javascript, debug
> Debugging in JavaScript is like being a detective in your code. Imagine your code is a mystery novel, and sometimes the plot gets tangled. The debugger is your magnifying glass, helping you zoom in on the tricky parts and figure out what's going wrong.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m0s4jcvktgrrftifrq4c.jpeg)

## Why Do We Need a Debugger?

1. **Find Bugs:** Bugs are like tiny gremlins messing with your code. The debugger helps you catch them in action.
2. **Understand Flow:** It lets you see how your code runs step-by-step, which can be super enlightening.
3. **Inspect Variables:** You can check the value of variables at different points in your code.

## How Do We Use It?

#### **Basic Example:** Catching the Cookie Thief

Imagine you're running a bakery and have a code snippet to keep track of your cookies. But for some reason, cookies keep disappearing!

```
function bakeCookies() {
  let cookies = 10;
  console.log("Cookies before thief: " + cookies);

  debugger; // Pause here to catch the thief in the act

  // Mysterious cookie thief!
  cookies -= 3;

  console.log("Cookies after thief: " + cookies);
}

bakeCookies();
```

To catch the cookie thief, you can use the debugger.

1. Add the **debugger** keyword: When you run this in your browser's developer tools (usually by pressing F12), the code will pause at the **debugger** line. Now, you can inspect variables and step through the code.
2. Using Developer Tools:
   - Open Developer Tools (usually F12 or right-click on the page and select "Inspect").
   - Go to the "Sources" tab.
   - You'll see the code paused at the debugger line.
   - Use "Step over" (F10) to go to the next line.
   - Use "Step into" (F11) to dive into functions.
   - Check the "Scope" section to see the values of your variables.

### Another Fun Example: The Uncooperative Robot

Imagine you have a robot that should count to 5, but it's not cooperating.
```
function robotCount() {
  for (let i = 1; i <= 5; i++) {
    debugger; // Pause and check what i is
    console.log("Robot says: " + i);
  }
}

robotCount();
```

When you run this, the debugger will pause on each iteration of the loop. You can watch **i** increment and see if it ever misbehaves.

### Tips for Using the Debugger

1. **Breakpoints:** You can set breakpoints by clicking on the line numbers in the "Sources" tab. This is a great way to pause execution without modifying your code with `debugger`.
2. **Watch Expressions:** You can add variables to the "Watch" panel to keep an eye on their values.
3. **Call Stack:** Check the "Call Stack" panel to see how your code reached the current point.

### Why It's Fun

Using the debugger is like being Sherlock Holmes in your own code. You get to investigate, uncover mysteries, and catch bugs red-handed. Plus, it's incredibly satisfying to see your code work perfectly once you've sorted out the issues.
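Once you're comfortable with unconditional pauses, the next trick is pausing only when something is actually wrong. In this sketch (the function name and numbers are made up for illustration), the `debugger` statement only fires when the cookie count goes negative, so normal batches run at full speed:

```
function bakeBatches(batches) {
  let cookies = 0;
  for (const batch of batches) {
    cookies += batch; // negative batches are the "thief" at work
    if (cookies < 0) debugger; // only pause when the jar is impossibly empty
  }
  return cookies;
}

console.log(bakeBatches([10, -3, 5])); // 12
```

You can get the same effect without touching the code by right-clicking a breakpoint in the "Sources" tab and giving it a condition like `cookies < 0`.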
__khojiakbar__
1,892,246
Is Turkiye Ready for Digital Nomads and Indie Makers?
I recently saw a tweet by John Rush comparing Turkey and Portugal as potential destinations for indie...
0
2024-06-18T18:40:36
https://dev.to/enszrlu/is-turkiye-ready-for-digital-nomads-and-indie-makers-3gp4
digitalnomad, turkiye, indiemakers, discuss
I recently saw a tweet by John Rush comparing Turkey and Portugal as potential destinations for indie makers. His insights inspired me to write this article.

![John Rush's tweet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7if2jgvqnzhwqhlirqt4.png)

Turkiye is often overlooked as a prime destination for digital nomads and indie makers due to outdated perceptions. Despite its strategic location, cultural richness, and affordability, various issues hinder its potential to become a top choice for remote workers and creative entrepreneurs. In this article, we will explore the current branding challenges, economic struggles, poor infrastructure, and inadequate English education. Additionally, we will highlight the advantages of Turkey, such as its locational benefits, excellent weather, cosmopolitan structure, vibrant culture, and young population. Finally, we will discuss possible solutions to these challenges and the outcomes they could bring.

## Current Challenges

### Poor Branding

Turkey suffers from a negative global image, often perceived as a third-world country plagued by political instability and safety concerns. This outdated perception deters potential digital nomads and indie makers, overshadowing the nation's true potential as a creative and entrepreneurial hub, a perception reinforced by political instability and by how the Turkish government advertises Turkey in the global arena.

### Economic Struggles

The Turkish economy has faced significant challenges in recent years, including high inflation and currency devaluation. These economic issues make it difficult for remote workers and indie makers to plan long-term stays and maintain a stable standard of living. It is simply challenging to set long-term plans in such a volatile market.

### Poor Infrastructure

Despite some improvements, Turkey's infrastructure still lags behind many other countries. Issues with transportation, utilities, and telecommunications can pose significant hurdles for digital nomads and indie makers looking to establish operations or work remotely. Especially in rural areas, a fast internet connection is still a challenge.

### Inadequate English Education

English proficiency is relatively low in Turkey, which can be a significant barrier for digital nomads and indie makers. Effective communication is crucial for networking, business operations, and daily interactions, and the lack of English speakers can hinder both personal and professional integration.

## Advantages of Turkey

After all of the above cons, we should also remember the positives about this country.

### Strategic Location

Turkey's location at the crossroads of Europe and Asia offers significant logistical advantages for digital nomads and indie makers looking to access multiple markets. Its proximity to both the European Union and the Middle East makes it an attractive hub for international collaboration and travel.

### Excellent Weather

Turkey boasts a diverse climate, with coastal regions enjoying Mediterranean weather and inland areas experiencing a more temperate climate. This favorable weather is a significant draw for digital nomads and indie makers seeking a comfortable living environment.

### Cosmopolitan Structure

Major cities like Istanbul, Ankara, and Izmir offer a cosmopolitan lifestyle with a mix of modern amenities and rich cultural heritage. These cities provide a blend of Eastern and Western influences, making them attractive to digital nomads and indie makers.

### Rich Culture and Food

Turkey's cultural richness and diverse culinary offerings are significant draws for digital nomads and indie makers. The country's history, traditions, and cuisine offer a unique and enriching experience for those living or traveling there.

### Young Population

Turkey has a young and dynamic population, providing a large talent pool and potential for innovation. This demographic advantage can be a significant asset for indie makers looking to recruit energetic and skilled collaborators.

### Vibrant Nightlife and Hobbies

The young population contributes to a vibrant nightlife and various recreational activities. From bustling nightclubs to serene coastal towns, Turkey offers a wide range of options for entertainment and hobbies, making it an appealing destination for digital nomads.

## What can be done?

### Rebranding Efforts

To change outdated perceptions, Turkey needs a comprehensive rebranding strategy. This could involve international marketing campaigns highlighting the country's strengths, such as its cultural heritage, modern amenities, and strategic location. Emphasizing Turkey's potential as a hub for digital nomads and indie makers can attract more interest from the global remote work community.

### Economic Reforms

Addressing economic challenges requires significant reforms to stabilize the currency, control inflation, and create a more business-friendly environment. Improving economic stability will make Turkey more attractive to digital nomads and indie makers seeking a stable and affordable place to live and work.

### Infrastructure Development

Investing in infrastructure improvements, particularly in transportation and telecommunications, will enhance Turkey's appeal to remote workers and indie makers. Modern and reliable infrastructure is crucial for efficient operations and connectivity, essential for digital nomads.

### Enhancing English Education

Improving English education across the country can help bridge the communication gap for digital nomads and indie makers. This could involve policy changes in education, but I think this will develop naturally once more internationals move to Turkey.

## Conclusion

Turkey has immense potential as a destination for digital nomads and indie makers. However, to unlock this potential, the country must address its current branding issues, economic struggles, infrastructure deficits, and inadequate English education. By implementing targeted solutions, Turkey can rebrand itself and attract more international interest, transforming itself into a prime option for remote workers and creative entrepreneurs. Maybe in 4 years...
enszrlu
1,892,745
A journey to Flutter liveness (pt1)
Here we are again. This time I decided to write the posts as I go with the project, so it may or may...
27,768
2024-06-18T18:38:48
https://dev.to/jodamco/a-journey-to-flutter-liveness-pt1-4164
machinelearning, flutter, android
Here we are again. This time I decided to write the posts as I go with the project, so it may or may not have an end; for sure it'll not have an order!

## Google Machine Learning Kit

I was trying to decide on some Flutter side project to exercise some organization and concepts from the framework, and since AI is at its hype I did some research and found out about the [Google Machine Learning Kit](https://developers.google.com/ml-kit), which is a set of machine learning tools for different tasks such as face detection, text recognition, and document digitalization, among other features (you should really check the link above). They're kinda plug and play: one can just install the plugin dependency and use the capabilities, and it doesn't depend on API integrations or third-party accounts, so I decided to move on using it.

For the project itself, I decided to go with liveness - oh boy, if I had done some more research before maybe I would've selected something else - because I got curious about how current tools differentiate between photographs and real people. I have to be honest and say that I didn't do deep research on the matter, and I'll follow the path of reproducing the results I found in [this great article](https://towardsdatascience.com/implementing-liveness-detection-with-google-ml-kit-5e8c9f6dba45). In it the author comes to the conclusion that using the GMLKit for liveness is feasible, and my **first goal is to reproduce the Euler Angles graphs**, but using a Flutter app. I'm not sure what may be a final casual use for liveness, but I'm sure I'll learn through the process, so let's start!

## Flutter app

The startup of a project is always a good thing. You know, follow the docs for the init, run a `flutter create my_app` or do it using VS Code through the command bar. I'll be using FVM to manage the Flutter version, and you can [check out the full code here](https://github.com/jodamco/gmlkit_liveness).
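To give an idea of where this is heading, here's a rough sketch of the detection side, based on my reading of the `google_mlkit_face_detection` API (the conversion from the camera frame to an `InputImage` is omitted, and the wiring is my assumption, not the project's final code):

```
final _faceDetector = FaceDetector(
  options: FaceDetectorOptions(
    performanceMode: FaceDetectorMode.accurate,
  ),
);

Future<void> _onImage(InputImage inputImage) async {
  final faces = await _faceDetector.processImage(inputImage);
  for (final face in faces) {
    // X: nod (up/down), Y: shake (left/right), Z: tilt.
    // These are the values the liveness article plots over time.
    print('x=${face.headEulerAngleX} '
        'y=${face.headEulerAngleY} '
        'z=${face.headEulerAngleZ}');
  }
}
```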
### Camera Layer First things first, I needed the camera preview set to get the image data (and to see something at least). For that, I added `camera` and `permission_handler` as dependencies to get access to the camera widgets. I also tried to split my camera component in a way that it would become agnostic regarding the machine learning layer, so I could reuse it in different contexts. Here's a small part of the camera widget ``` class CustomCameraPreview extends StatefulWidget { final Function(ImageData inputImage)? onImage; final CustomPaint? customPaint; final VoidCallback? onCameraFeedReady; const CustomCameraPreview({ super.key, this.onImage, this.onCameraFeedReady, this.customPaint, }); @override State<CustomCameraPreview> createState() => _CustomCameraPreviewState(); } class _CustomCameraPreviewState extends State<CustomCameraPreview> { //... more code Future<void> _startLiveFeed() async { if (selectedCamera == null) { setState(() { hasError = true; }); return; } _controller = CameraController( selectedCamera!, ResolutionPreset.high, enableAudio: false, imageFormatGroup: Platform.isAndroid ? ImageFormatGroup.nv21 : ImageFormatGroup.bgra8888, ); await _controller?.initialize(); _controller?.startImageStream(_onImage); if (widget.onCameraFeedReady != null) { widget.onCameraFeedReady!(); } } //... more code Widget display() { if (isLoading) { return PreviewPlaceholder.loadingPreview(); } else if (hasError) { return PreviewPlaceholder.previewError( onRetry: _initialize, ); } else if (!hasPermissions) { return PreviewPlaceholder.noPermission( onAskForPermissions: _initialize, ); } else { return Stack( fit: StackFit.expand, children: <Widget>[ Center( child: CameraPreview( _controller!, child: widget.customPaint, ), ), ], ); } } @override Widget build(BuildContext context) { return Scaffold( body: display(), ); } } ``` I think the most important part is the startup of the camera live feed. 
When creating the camera controller you must set the image type using the `imageFormatGroup` property, since this is required for the ML Kit plugin to work. The values in the code above are the ones recommended for each platform, and you can check the details in the [docs of the face detection plugin](https://pub.dev/packages/google_mlkit_face_detection). This widget was inspired by the [example widget](https://github.com/flutter-ml/google_ml_kit_flutter/blob/master/packages/example/lib/vision_detector_views/camera_view.dart#L89) from the official docs. One great thing I was able to test out was the usage of [factories](https://en.wikipedia.org/wiki/Factory_(object-oriented_programming)) on widgets when I wrote the placeholder for the camera. There were other options (widget extensions and enums were suggested to me), but in the end I was satisfied with the factory and decided to let it be, since it simplified the way the parent calls the placeholder. ``` enum PreviewType { permission, loading, error } class PreviewPlaceholder extends StatelessWidget { final PreviewType type; final VoidCallback? 
onAction; const PreviewPlaceholder._({ required this.type, this.onAction, }); factory PreviewPlaceholder.noPermission({ required VoidCallback onAskForPermissions, }) => PreviewPlaceholder._( type: PreviewType.permission, onAction: onAskForPermissions, ); factory PreviewPlaceholder.loadingPreview() => const PreviewPlaceholder._( type: PreviewType.loading, ); factory PreviewPlaceholder.previewError({required VoidCallback onRetry}) => PreviewPlaceholder._( type: PreviewType.error, onAction: onRetry, ); @override Widget build(BuildContext context) { return Column( mainAxisAlignment: MainAxisAlignment.center, children: [ if (type == PreviewType.permission) ElevatedButton( onPressed: onAction, child: const Text("Ask for camera permissions"), ), if (type == PreviewType.error) ...[ const Text("Couldn't load camera preview"), ElevatedButton( onPressed: onAction, child: const Text("Ask for camera permissions"), ), ], if (type == PreviewType.loading) ...const [ Text("Loading preview"), Center( child: LinearProgressIndicator(), ) ], ], ); } } ``` With the camera layer done, let's dive into face detection. ### Face detection For the face detection, so far, I just needed to add two more dependencies: `google_mlkit_commons` and `google_mlkit_face_detection`. The docs of the GMLKit recommend using the specific plugin dependency for release builds instead of the Flutter GMLKit dependency. If you ~~copy~~ write your [first ever approach](https://github.com/flutter-ml/google_ml_kit_flutter/blob/master/packages/example/lib/vision_detector_views/face_detector_view.dart) to face detection, it can be very straightforward to get the data and see results, except for one problem: **if you're using Android and the android-camerax plugin, you will not be able to use the camera image with face detection**. 
This is because, even though you set `ImageFormatGroup.nv21` as the output format, the [current version of the flutter android-camerax](https://pub.dev/packages/camera_android_camerax/versions/0.6.5+5) plugin will only provide images in the `yuv_420_888` format (you can find more info [here](https://github.com/flutter/flutter/issues/145961)). The good part is that someone [provided a solution](https://blog.minhazav.dev/how-to-use-renderscript-to-convert-YUV_420_888-yuv-image-to-bitmap/#tonv21image-image-java-approach) (the community always rocks 🚀). I set the detection widget as my main "layer" for detection, since it does the heavy job of running the face detection from the GMLKit plugin. It ended up being a very small widget with a core function for face detection ``` Future<void> _processImage(ImageData imageData) async { if (_isBusy) return; _isBusy = true; RootIsolateToken rootIsolateToken = RootIsolateToken.instance!; final analyticData = await Isolate.run<Map<String, dynamic>>(() async { BackgroundIsolateBinaryMessenger.ensureInitialized(rootIsolateToken); final inputImage = imageData.inputImageFromCameraImage(imageData); if (inputImage == null) return {"faces": null, "image": inputImage}; final FaceDetector faceDetector = FaceDetector( options: FaceDetectorOptions( enableContours: true, enableLandmarks: true, ), ); final faces = await faceDetector.processImage(inputImage); await faceDetector.close(); return {"faces": faces, "image": inputImage}; }); _isBusy = false; } ``` A few comments on this function: 1. It is VERY SIMPLE to get the data from the GMLKit, and it can be done without the Isolate. 2. Although the Isolate is not strictly needed, you might want to use it, since [Flutter code should build in 16ms](https://docs.flutter.dev/perf/best-practices#build-and-display-frames-in-16ms). I was eager to try out Isolates and never had a really good reason before; without it, the processing of the image would drop the framerate and the app would look terrible. 
By applying the Isolate I can move all the processing and conversion off the main event loop and guarantee that the frames will be built on time. 3. I decided to instantiate the face detector inside the Isolate, since I had trouble passing it from the main isolate to the new one. I also run this specific conversion, `imageData.inputImageFromCameraImage(imageData)`, inside the Isolate, since it is also time-consuming. This is what allows me to parse the `yuv_420_888` format into the one needed by the GMLKit plugin. For this job, I decided that the best approach was to use a class that receives all the data from the camera and smoothly provides the `InputImage` object to the GMLKit. You can check out the [class here](https://github.com/jodamco/gmlkit_liveness/blob/main/lib/data/models/image_data.dart) and the extension for the [conversion here](https://github.com/jodamco/gmlkit_liveness/blob/main/lib/data/models/camera_image.dart). ### Results So far I still don't have the Euler angles in a graph as I wanted, but I was at least able to get the data from the kit and paint the bounding box of my face. I also ran some tests on the execution time of the face detection and saw that the **average time to run the detection** on a high-quality image is **about 600ms with a debug build** and **about 380ms with a release build**. Since I'm using the Isolate the app's framerate is running ok, but I would like to improve this performance later. My next step will be to get the Euler angles and plot a graph with them so I can try to reproduce the comparison between photos and real people. See you there!
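As a closing aside, the `yuv_420_888` to NV21 repacking mentioned above can be illustrated with a short sketch. Python is used here purely for clarity (the real conversion in the repo is a Dart extension), and the sketch assumes tightly packed planes; real camera buffers usually also need row/pixel stride handling. NV21 stores the full-resolution Y plane followed by the quarter-resolution chroma samples interleaved V-first:

```python
# Illustrative sketch (not the Dart extension from the repo): repack
# three-plane YUV_420_888 data into the single-buffer NV21 layout.
# NV21 = full-resolution Y plane, then interleaved V/U (V first).
# Planes are assumed tightly packed; real buffers need stride handling.

def yuv420_to_nv21(y: bytes, u: bytes, v: bytes) -> bytes:
    if len(u) != len(v):
        raise ValueError("U and V planes must have the same size")
    vu = bytearray()
    for v_byte, u_byte in zip(v, u):
        vu.append(v_byte)  # V sample comes first in NV21
        vu.append(u_byte)
    return bytes(y) + bytes(vu)

# Tiny 4x2 frame: 8 Y samples, 2 U and 2 V samples (4:2:0 subsampling)
y_plane = bytes(range(8))
u_plane = bytes([100, 101])
v_plane = bytes([200, 201])
nv21 = yuv420_to_nv21(y_plane, u_plane, v_plane)
print(list(nv21[8:]))  # chroma section: [200, 100, 201, 101]
```

The same interleaving is what the linked community solution does on the Android side before handing the bytes to ML Kit.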
jodamco
1,892,734
Exploring the Natural Path in Toronto: A Comprehensive Guide
https://www.sereneclinic.ca/ Toronto, a bustling metropolis known for its vibrant culture and modern...
0
2024-06-18T18:36:46
https://dev.to/serene_healthclinic_9116/exploring-the-natural-path-in-toronto-a-comprehensive-guide-25pe
https://www.sereneclinic.ca/ Toronto, a bustling metropolis known for its vibrant culture and modern amenities, also offers a plethora of natural escapes that allow residents and visitors to reconnect with nature. This article delves into the 'natural path Toronto' has to offer, guiding you through the city's most serene and scenic spots. The Allure of the Natural Path in Toronto Toronto’s green spaces are a testament to the city's commitment to preserving nature amidst urban development. The natural path Toronto offers includes a variety of parks, trails, and waterfronts that cater to nature enthusiasts, hikers, and those seeking a peaceful retreat from city life. High Park: The Jewel of the Natural Path in Toronto High Park is one of Toronto's largest and most popular green spaces. Spanning 400 acres, it offers a diverse range of activities for nature lovers. From its extensive network of trails to its picturesque ponds and gardens, High Park is a cornerstone of the natural path Toronto is celebrated for. During the spring, the cherry blossoms draw thousands of visitors, creating a magical experience that epitomizes the beauty of the natural path Toronto provides. The Toronto Islands: A Natural Path Toronto Gem A short ferry ride from downtown Toronto, the Toronto Islands offer a tranquil escape from the city's hustle and bustle. The islands are car-free, making them perfect for cycling and walking. The natural path Toronto features here includes sandy beaches, lush parks, and scenic picnic spots. With stunning views of the city skyline, the Toronto Islands are a must-visit for anyone exploring the natural path Toronto boasts. Rouge National Urban Park: A Wilderness Experience Rouge National Urban Park is a unique addition to the natural path Toronto offers. As Canada's first national urban park, it provides a rare opportunity to experience wilderness within a major city. The park features diverse ecosystems, including wetlands, forests, and meadows. 
It is home to an array of wildlife, making it a significant spot on the natural path Toronto has mapped out for nature enthusiasts. Hiking, bird watching, and kayaking are popular activities here. Don Valley Trails: A Natural Corridor The Don Valley offers some of the most accessible and scenic trails in the city. The network of trails along the Don River is a key component of the natural path Toronto promotes for outdoor recreation. These trails cater to both casual walkers and avid cyclists, providing a serene escape with the convenience of proximity to the urban core. The Evergreen Brick Works, located in the Don Valley, is a vibrant community hub that enhances the natural path Toronto experience with its environmental programs and farmers' market. Humber River: A Historic Natural Path Toronto Landmark The Humber River is not only a beautiful natural feature but also a historic landmark. The river's trails are part of the natural path Toronto offers, providing picturesque routes that follow the river's meandering course. These trails are ideal for hiking, cycling, and bird watching. The Humber Arboretum, located along the river, adds to the appeal with its gardens and educational programs focused on conservation and biodiversity. Exploring Toronto’s Waterfront Toronto’s waterfront is another highlight of the natural path Toronto features. The extensive boardwalks and trails along Lake Ontario offer stunning views and recreational opportunities. The Martin Goodman Trail, part of the larger Waterfront Trail, is a popular route for cyclists and joggers. The natural path Toronto’s waterfront creates is perfect for those looking to enjoy the beauty of the lake while engaging in outdoor activities. Conclusion The natural path Toronto offers is a diverse and accessible array of green spaces that allow people to enjoy the outdoors without leaving the city. 
From expansive parks like High Park and Rouge National Urban Park to the scenic Toronto Islands and waterfront, the natural path Toronto provides something for everyone. Whether you are a local or a visitor, exploring the natural path Toronto has mapped out is a rewarding way to experience the city’s commitment to preserving and celebrating nature. By taking advantage of the natural path Toronto features, you can find solace and adventure, making your time in the city both refreshing and memorable. So, lace up your hiking boots or hop on a bike, and discover the natural wonders that make Toronto a unique urban oasis.
serene_healthclinic_9116
1,892,716
Operation Result Design Pattern in C#
The Operation Result design pattern is an incredible tool to have in the foundations of your project....
0
2024-06-18T18:34:09
https://dev.to/felipesntr/design-pattern-operation-result-em-c-2g8o
csharp, designpatterns, result
The Operation Result design pattern is an incredible tool to have in the foundations of your project. It helps you handle the results of your system's operations much more assertively. ## What is the Operation Result design pattern To begin with, the Operation Result design pattern is used to encapsulate the result of operations, providing a consistent structure for returning both the success and the failure of an operation. It is also an alternative to throwing exceptions: instead of handling exceptions, you work with a result. ## Some goals when using Operation Result Have an indicator of the operation's success or failure; have access to the reason for the failure if the operation failed, or to the resulting value if it succeeded. ## Creating the Result class The Result class itself encapsulates the result of some operation. It can include a return value when the operation succeeds, or the error details for operations that failed. In C# we can write the result class (`Resultado`) as follows, so we can use it with different value types ``` using System.Collections.Immutable; public class Resultado<Tipo> { public Resultado() { Erros = []; } public Resultado(params string[] erros) { Erros = erros.ToImmutableList(); } public Tipo? Valor { get; init; } public bool TemErros() => Erros.Count > 0; public bool Sucesso => !TemErros(); public IReadOnlyCollection<string> Erros { get; init; } } ``` With this small implementation you already have, when the operation fails, the list of errors stored in an ImmutableList<string> and exposed as an IReadOnlyCollection<string>. You can also tell whether the operation succeeded and, in the success case, work with the resulting value. 
Reading the class a bit more, we can see that creating the object with the empty constructor represents a successful operation, while passing items to the constructor means the operation failed. ## Adding severity to the message! Instead of returning only error messages when errors occur, you can return messages and give each one a severity level: a message may be just information, which would be the success case, or, worse, it may be an error, a message with higher severity. So we can write a new class to represent these messages and an enum to represent the levels ``` public class ResultadoMensagem { public ResultadoMensagem(string menssagem, MensagemGravidade gravidade) { Mensagem = menssagem ?? throw new ArgumentNullException(nameof(menssagem)); Gravidade = gravidade; } public string Mensagem { get; } public MensagemGravidade Gravidade { get; } } public enum MensagemGravidade { Informacao = 0, Aviso = 1, Erro = 2, } ``` Finally, let's adapt the result class to the new standard using messages with severity! ``` using System.Linq; public class Resultado<Tipo> { public Resultado() { Mensagens = []; } public Resultado(params ResultadoMensagem[] mensagens) { Mensagens = [.. mensagens]; } public Tipo? Valor { get; init; } public bool TemErros() => ProcurarErros().Any(); // only Erro-severity messages count as failure public bool Sucesso => !TemErros(); public IReadOnlyCollection<ResultadoMensagem> Mensagens { get; init; } public IEnumerable<ResultadoMensagem> ProcurarErros() => Mensagens.Where(mensagem => mensagem.Gravidade == MensagemGravidade.Erro); } ``` I hope I've helped in some way; feel free to leave a comment if you have any questions about this design pattern!
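For readers outside the C# world, the same idea can be sketched language-agnostically. This is a minimal illustrative sketch in Python (hypothetical names, not the article's `Resultado` class) showing how a caller consumes a result instead of catching exceptions:

```python
# Minimal, language-agnostic sketch of the Operation Result pattern
# (hypothetical names; the article's real implementation is the C#
# Resultado<Tipo> class above).
from dataclasses import dataclass, field
from typing import Generic, List, Optional, TypeVar

T = TypeVar("T")

@dataclass
class Result(Generic[T]):
    value: Optional[T] = None
    errors: List[str] = field(default_factory=list)

    @property
    def success(self) -> bool:
        return not self.errors

def divide(a: float, b: float) -> Result[float]:
    # Instead of raising, failures become part of the return value.
    if b == 0:
        return Result(errors=["division by zero"])
    return Result(value=a / b)

ok = divide(10, 4)
bad = divide(1, 0)
print(ok.success, ok.value)     # True 2.5
print(bad.success, bad.errors)  # False ['division by zero']
```

The caller branches on `success` rather than wrapping the call in a try/except block, which is exactly the trade the pattern makes.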
felipesntr
1,892,697
Byte: Beads on a Bracelet
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-18T18:33:47
https://dev.to/avishek_chowdhury/byte-beads-on-a-bracelet-f5b
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* --- ## Explainer **Byte** is the fundamental unit of information in a computer, similar to a <u>single bead on a bracelet</u>. Eight of these 'beads' (called bits) come together to represent a letter, number, images, videos, and even games to create all the fun. And we get, **1 Byte = 8 Bits.** --- ## Additional Context * Explained Byte as a tiny building block using a relatable analogy (bead on bracelet). * Shortened things up to fit within the 256 character challenge (gotta keep it fun!). It is 254 characters to be exact (without the markdown). * Explained bytes as tiny building blocks that make all the cool computer stuff work. *the cover image has been taken from [here](https://www.computerhope.com/jargon/b/bit-byte.png)*
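As a tiny illustration of the explainer above, here is a quick Python snippet showing the eight "beads" that make up one byte:

```python
# One byte = 8 bits: the letter 'A' is stored as the byte value 65,
# whose eight "beads" are the bits 01000001.
letter = "A"
byte_value = ord(letter)          # 65
bits = format(byte_value, "08b")  # eight bits, zero-padded
print(byte_value, bits)  # 65 01000001
assert len(bits) == 8    # exactly eight beads on this bracelet
```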
avishek_chowdhury
1,892,715
JPG to WebP: Enhancing Image Efficiency
What Are the Differences Between JPG and WebP Images? JPG (or JPEG) and WebP are two...
0
2024-06-18T18:32:11
https://dev.to/msmith99994/jpg-to-webp-enhancing-image-efficiency-205p
## What Are the Differences Between JPG and WebP Images? JPG (or JPEG) and WebP are two popular image formats used for different purposes. While both serve the fundamental role of image storage and display, they have distinct characteristics that make them suitable for specific applications. ### JPG **- Compression:** JPG uses lossy compression, which reduces file size by discarding some image data. This process can result in a loss of image quality, especially at higher compression levels. **- Color Range:** JPG supports 24-bit color, displaying millions of colors, making it ideal for photographs. **- File Size:** The lossy compression technique helps significantly reduce the file size, which is beneficial for web usage. **- Transparency:** JPG does not support transparency. ### WebP **- Compression:** WebP supports both lossy and lossless compression. Lossy WebP compression can reduce file sizes more effectively than JPG without sacrificing quality, while lossless WebP retains all image data. **- Color Range:** Like JPG, WebP supports 24-bit color and 8-bit transparency. **- File Size:** WebP typically results in smaller file sizes compared to JPG for both lossy and lossless images. **- Transparency and Animation:** WebP supports transparency and animation, making it more versatile than JPG. ## Where Are They Used? ### JPG **- Digital Photography:** JPG is the standard format for digital cameras and smartphones due to its balance of quality and file size. **- Web Design:** Widely used for photographs and complex images on websites because of its quick loading times. **- Social Media:** Preferred for sharing images on social platforms due to its universal support and small file size. **- Email and Document Sharing:** Frequently used in emails and documents for easy viewing and sharing. ### WebP **- Web Development:** Increasingly adopted in web design for faster page load times without compromising image quality. 
**- Mobile Applications:** Used in mobile apps to enhance performance by reducing the data load. **- Digital Advertising:** Employed in digital ads to deliver high-quality visuals with minimal loading times. **- E-commerce:** Used to display product images efficiently, enhancing user experience and page speed. ## Advantages and Disadvantages ### JPG **Advantages:** - Small File Size: Effective lossy compression reduces file sizes significantly. - Wide Compatibility: Supported by almost all devices, browsers, and software. - High Color Depth: Capable of displaying millions of colors, ideal for photographs. - Adjustable Quality: Compression levels can be adjusted to balance quality and file size. **Disadvantages:** - Lossy Compression: Quality degrades with higher compression levels and repeated edits. - No Transparency: Does not support transparent backgrounds. - Limited Editing Capability: Cumulative compression losses make it less ideal for extensive editing. ### WebP **Advantages:** - Smaller File Sizes: More efficient compression compared to JPG, resulting in smaller files. - High Quality: Maintains high image quality, even with significant compression. - Transparency and Animation: Supports both, making it versatile for various uses. - Wide Browser Support: Increasingly supported by modern web browsers. **Disadvantages:** - Compatibility Issues: Not supported by some older browsers and software. - Conversion Overhead: Requires effort to convert existing images to WebP. - Complexity: Managing multiple formats for compatibility can complicate workflows. ## How to Convert JPG to WebP Converting [JPG to WebP](https://cloudinary.com/tools/jpg-to-webp) can enhance web performance and reduce file sizes. Here are several methods to convert JPG images to WebP: ## Conversion Methods 1. Using Online Tools: Websites like Convertio and Online-Convert allow you to upload JPG files and download the converted WebP files. 2. 
Using Image Editing Software: Software like Adobe Photoshop and GIMP support the WebP format. Open your JPG file and save it as WebP. 3. Command Line Tools: Command-line tools like cwebp from the WebP library can be used for conversion. 4. Programming Libraries: Programming libraries such as Python's Pillow or JavaScript's sharp can be used to automate the conversion process in applications. ## Conclusion JPG and WebP are essential image formats in the digital landscape, each with unique strengths and weaknesses. JPG remains a staple for digital photography and web images due to its compatibility and manageable file sizes. However, WebP offers significant advantages in terms of compression efficiency and versatility, making it an excellent choice for modern web applications. Understanding how to convert between these formats allows you to leverage their benefits effectively, enhancing both user experience and web performance.
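One quick way to sanity-check a batch conversion is to inspect file signatures directly: JPEG files begin with the bytes `FF D8 FF`, while WebP files are RIFF containers whose header carries `RIFF` at offset 0 and `WEBP` at offset 8. A small illustrative Python sketch (not tied to any particular converter):

```python
# Detect JPG vs WebP by file signature ("magic bytes"):
# JPEG starts with FF D8 FF; WebP is a RIFF container with
# b"RIFF" at bytes 0-3 and b"WEBP" at bytes 8-11.
def image_format(header: bytes) -> str:
    if header.startswith(b"\xff\xd8\xff"):
        return "jpg"
    if header[:4] == b"RIFF" and header[8:12] == b"WEBP":
        return "webp"
    return "unknown"

# Handy after a conversion run to confirm every output really is WebP:
print(image_format(b"\xff\xd8\xff\xe0" + b"\x00" * 8))  # jpg
print(image_format(b"RIFF\x10\x00\x00\x00WEBPVP8 "))    # webp
```

In practice you would read the first 12 bytes of each converted file and assert the result is `"webp"`.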
msmith99994
1,892,714
Mastering Scalable React Apps. Your Ultimate Guide to High-Performance Development!
Are you ready to take your React development skills to the next level? In this comprehensive guide,...
0
2024-06-18T18:31:41
https://dev.to/michael_osas/mastering-scalable-react-apps-your-ultimate-guide-to-high-performance-development-59fo
enterprise, react, scaling, applications
Are you ready to take your React development skills to the next level? In this comprehensive guide, we'll delve into the art of building scalable React applications that not only perform flawlessly but also remain maintainable and robust in the face of evolving requirements. ## Part 1: Lay the Foundation In the first part of our journey, we laid down the groundwork for building scalable React apps. We explored the fundamentals of scalability, including project structure, state management best practices, and modular architecture. From organizing your codebase to efficiently managing state with Redux Toolkit and React Query, we covered it all. ## Part 2: Advanced Strategies Unveiled In the second part, we dived into the advanced techniques that separate good React apps from exceptional ones. We discussed performance optimization tricks like rendering optimization, code splitting, and lazy loading to ensure lightning-fast loading times. Additionally, we explored seamless API integration using REST and GraphQL, along with data fetching libraries like Axios and React Query. Finally, we demystified CI/CD pipelines, containerization with Docker, and deployment strategies on cloud platforms like AWS, Azure, and Google Cloud. Ready to level up your React game? Dive deeper into these concepts and unlock the full potential of your applications! [Click here to read the full blog post and become a React master ](https://buildclaw.com/building-scalable-react-apps-in-2024-how-to-build-a-scalable-enterprise-application/) Join us on this exciting journey to mastering scalable React apps. Don't miss out on the chance to elevate your development skills and create top-notch applications that stand out in today's competitive landscape. Happy coding! 🚀
michael_osas
1,892,713
Best Practices and Productivity Hacks for Efficient Developers
As developers, we're constantly seeking ways to improve our efficiency and make the most out of our...
0
2024-06-18T18:28:04
https://dev.to/msubhro/best-practices-and-productivity-hacks-for-efficient-developers-712
productivity, developers, leadership
As developers, we're constantly seeking ways to improve our efficiency and make the most out of our working hours. With the rapid pace of technological advancement and the increasing complexity of projects, staying productive can be a challenge. Here are some of the best productivity hacks tailored specifically for developers. ## Master Your Development Environment **Choose the Right IDE** Your Integrated Development Environment (IDE) is your primary tool, and mastering it can significantly boost your productivity. Whether it's Visual Studio Code, IntelliJ IDEA, or PyCharm, invest time in learning its shortcuts, plugins, and features. A well-configured IDE can save you hours of coding and debugging time. **Customize Your Workspace** Personalize your IDE with themes, fonts, and layouts that make you comfortable. A pleasant and efficient workspace reduces strain and helps maintain focus over long coding sessions. ## Automate Repetitive Tasks **Use Scripts** Automate repetitive tasks with scripts. Shell scripts, Python scripts, or automation tools like Ansible can handle routine operations such as deployment, database migrations, and environment setups. **Employ Task Runners** Tools like Gulp, Grunt, and npm scripts can automate tasks like minification, compilation, and testing. This reduces manual work and ensures consistency across your project. ## Practice Good Code Hygiene **Follow a Coding Standard** Adopt and stick to a coding standard for your projects. Consistent code is easier to read, understand, and maintain. Use linters and formatters to enforce these standards automatically. **Write Clear and Concise Comments** Comments should explain the "why" behind your code, not the "what." Good comments can save you and your teammates from spending extra time deciphering complex code. ## Version Control Like a Pro **Commit Often** Frequent commits with clear, descriptive messages help track progress and make it easier to roll back changes if something goes wrong. 
**Branch Wisely** Use branches for different features or bug fixes. This isolates changes and reduces the risk of conflicts, making your workflow more organized and manageable. ## Utilize Debugging Tools Effective debugging is crucial for productivity. Learn to use the debugging tools available in your IDE. Set breakpoints, inspect variables, and step through code to quickly identify and fix issues. ## Optimize Your Workflow **Use Shortcuts** Keyboard shortcuts can significantly speed up your workflow. Spend some time learning the shortcuts for your IDE and other frequently used tools. **Batch Similar Tasks** Group similar tasks together to maintain focus. For instance, handle all your emails at once rather than interrupting your coding session multiple times. ## Leverage the Power of Version Control Systems (VCS) **Git** Git is an essential tool for developers. Mastering its commands and understanding how to effectively use branches, merges, and rebases can drastically improve your workflow and collaboration with other developers. **Continuous Integration/Continuous Deployment (CI/CD)** Implementing CI/CD pipelines automates the testing and deployment processes, ensuring that your code is always in a deployable state. This reduces manual errors and speeds up the release cycle. ## Prioritize Learning and Improvement **Continuous Learning** Technology evolves rapidly, and continuous learning is essential. Dedicate time each week to learn new languages, frameworks, or tools. Platforms like Coursera, Udemy, and Pluralsight offer excellent courses. **Reflect and Improve** Regularly review your work and identify areas for improvement. Conduct post-mortems after completing projects to understand what went well and what could be improved. ## Take Care of Your Health **Regular Breaks** The Pomodoro Technique, which involves working for 25 minutes and then taking a 5-minute break, can help maintain high levels of productivity without burnout. 
Regular breaks prevent fatigue and keep your mind fresh. **Ergonomic Workspace** Invest in a comfortable chair, a good desk, and consider using a standing desk. Proper ergonomics reduce the risk of physical strain and long-term injuries. ## Collaborate and Communicate **Use Collaboration Tools** Tools like Slack, Microsoft Teams, and Jira streamline communication and project management. Effective use of these tools can keep your team on the same page and reduce misunderstandings. **Code Reviews** Participate in code reviews. They are not only great for catching bugs but also for learning new techniques and improving code quality through peer feedback. ## Conclusion Boosting productivity as a developer involves a combination of mastering your tools, automating repetitive tasks, maintaining good coding practices, and taking care of your well-being. By incorporating these hacks into your daily routine, you can enhance your efficiency, reduce stress, and produce high-quality code more consistently. Remember, productivity is not about working harder, but about working smarter.
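The Pomodoro cadence mentioned above is simple enough to script. Here is an illustrative Python sketch (a hypothetical helper, not a real tool) that lays out the work/break segments for a session:

```python
# Illustrative sketch of the Pomodoro cadence described above:
# 25-minute work blocks separated by 5-minute breaks.
def pomodoro_schedule(total_minutes: int, work: int = 25, rest: int = 5):
    schedule, elapsed = [], 0
    while elapsed + work <= total_minutes:
        schedule.append(("work", work))
        elapsed += work
        if elapsed + rest <= total_minutes:
            schedule.append(("break", rest))
            elapsed += rest
    return schedule

# A one-hour session fits two pomodoros with breaks in between:
print(pomodoro_schedule(60))
# [('work', 25), ('break', 5), ('work', 25), ('break', 5)]
```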
msubhro
1,892,712
I want to improve design and functionality of Image Color Picker?
Hello all, I want to improve design and functionality of "Image Color Picker". Could you please help...
0
2024-06-18T18:24:31
https://dev.to/toolconverter/i-want-to-improve-design-and-functionality-of-image-color-picker-32a9
image, tooling
Hello all, I want to improve the design and functionality of "Image Color Picker". Could you please help me? Link: https://toolconverter.com/image-color-picker/ Thanks.
toolconverter
1,892,709
The Comprehensive Guide to Naturopath Doulas: Blending Natural Medicine with Birth Support
In recent years, the role of a doula has evolved significantly, encompassing a wide range of...
0
2024-06-18T18:16:52
https://dev.to/serene_healthclinic_9116/the-comprehensive-guide-to-naturopath-doulas-blending-natural-medicine-with-birth-support-2g90
In recent years, the role of a doula has evolved significantly, encompassing a wide range of specialties and holistic approaches. One such specialized role is that of a naturopath doula, a professional who combines the principles of naturopathy with the supportive and nurturing care traditionally provided by a doula. This comprehensive guide explores the multifaceted benefits of engaging a naturopath doula, the unique services they offer, and how they can transform the birthing experience. What is a Naturopath Doula? A naturopath doula is a trained professional who merges the holistic principles of naturopathy with the supportive role of a doula. Naturopathy focuses on natural remedies and the body's intrinsic ability to heal itself, emphasizing a holistic approach to health care. A naturopath doula integrates this philosophy into their doula practice, providing not only emotional and physical support during childbirth but also employing natural therapies and remedies to enhance the birthing process. The Role of a Naturopath Doula Prenatal Support Holistic Health Planning: A naturopath doula helps create a comprehensive health plan for the mother, incorporating diet, exercise, and natural supplements to ensure optimal health during pregnancy. Stress Management: Techniques such as meditation, yoga, and aromatherapy are employed to manage stress and promote relaxation. Natural Remedies: They recommend and provide natural remedies for common pregnancy ailments such as nausea, fatigue, and insomnia. Birth Support Emotional and Physical Assistance: During labor, a naturopath doula offers continuous emotional support and physical comfort measures such as massage, acupressure, and guided breathing techniques. Natural Pain Relief: They utilize methods like hydrotherapy, essential oils, and herbal medicine to alleviate pain and discomfort during childbirth. 
### Postpartum Care

- **Recovery Support**: Post-birth, a naturopath doula assists with recovery through natural remedies, nutritional guidance, and gentle exercises to promote healing.
- **Lactation Assistance**: They provide support and natural solutions to breastfeeding challenges, ensuring both mother and baby have a successful breastfeeding journey.
- **Emotional Well-being**: Addressing the emotional needs of the new mother, a naturopath doula offers counseling and support to navigate the transition into motherhood.

## Benefits of Choosing a Naturopath Doula

- **Holistic Health Approach**: By integrating natural medicine, a naturopath doula ensures that the mother’s health is addressed comprehensively, considering physical, emotional, and mental well-being.
- **Personalized Care**: Each mother’s needs are unique, and a naturopath doula tailors their services to meet these specific requirements, offering personalized care plans.
- **Natural Pain Management**: Utilizing non-invasive and natural pain relief methods helps minimize the need for medical interventions and pharmaceuticals, promoting a more natural birthing experience.
- **Enhanced Emotional Support**: The combination of naturopathic principles and doula support ensures a nurturing environment, reducing anxiety and fear associated with childbirth.
- **Improved Birth Outcomes**: Studies have shown that continuous support during labor can lead to shorter labor durations, reduced need for pain relief, and lower rates of cesarean sections, all of which are supported by the holistic approach of a naturopath doula.

## How to Choose a Naturopath Doula

When selecting a naturopath doula, consider the following:

- **Credentials and Training**: Ensure the doula is certified and has received training in both doula services and naturopathy.
- **Experience and References**: Look for a doula with substantial experience and positive references from previous clients.
- **Philosophical Alignment**: It’s important that the doula’s approach to birth and health aligns with your own beliefs and preferences.
- **Communication Style**: Choose a doula with whom you feel comfortable communicating, as this is essential for effective support.

## Conclusion

Engaging a naturopath doula offers a unique and enriching approach to childbirth, combining the nurturing support of a doula with the healing principles of naturopathy. This integrative care model not only enhances the birthing experience but also promotes overall well-being for both mother and child. By choosing a naturopath doula, you are investing in a holistic, personalized, and natural birth journey, ensuring that your transition into motherhood is as smooth and supported as possible.
serene_healthclinic_9116
1,892,708
حبوب سايتوتك في الامارات | 971547952044 | سايتوتك الأصلي
إذا كنت تبحث عن حبوب الإجهاض في الإمارات، فإن هناك العديد من الخيارات المتاحة لك. يمكنك العثور على...
0
2024-06-18T18:16:50
https://dev.to/cytgcc/hbwb-sytwtk-fy-lmrt-971547952044-sytwtk-lsly-465e
إذا كنت تبحث عن **حبوب الإجهاض في الإمارات**، فإن هناك العديد من الخيارات المتاحة لك. يمكنك العثور على **حبوب سايتوتك للبيع في الإمارات** من خلال عدة صيدليات معتمدة، وهي متوفرة بشكل خاص في المدن الرئيسية مثل دبي وأبوظبي والعين. في دبي، تتوفر **حبوب سايتوتك للبيع في دبي** بخدمة **الدفع عند الاستلام**، مما يجعل الحصول على المنتج أكثر سهولة وأمانًا. في أبوظبي، يمكنك العثور على **حبوب سايتوتك للبيع في أبوظبي** مع ضمان الجودة والأصالة. تعتبر **حبوب الإجهاض بالامارات** من الأدوية الهامة والمتوفرة في صيدليات معتمدة، حيث يمكنك البحث عن **حبوب تنزيل الحمل للبيع في الإمارات** والحصول على المنتج الذي يلبي احتياجاتك. توفر **صيدلية تبيع حبوب الإجهاض في الإمارات** جميع المعلومات التي تحتاجها لضمان الحصول على المنتج الصحيح بطريقة آمنة وموثوقة. إذا كنت في العين، يمكنك الحصول على **حبوب سايتوتك للبيع في العين** بسهولة من خلال صيدليات موثوقة تضمن جودة المنتجات الطبية. **حبوب سايتوتك في الإمارات** تُعتبر من الخيارات الأكثر شيوعاً وفعالية، وهي متوفرة في كافة أنحاء الإمارات. السعر هو دائماً عامل مهم، ويمكنك الاطلاع على **cytotec سعر في الإمارات** لمقارنة الأسعار والحصول على أفضل العروض. سواء كنت في دبي، أبوظبي، العين أو أي مدينة أخرى، فإن **حبوب الإجهاض الإمارات** متوفرة لضمان حصولك على الحل الأمثل. من المهم أيضاً التأكد من أن المنتج الذي تحصل عليه هو الأصلي. لذلك، يُنصح بالتوجه إلى **صيدلية تبيع حبوب الإجهاض في الإمارات** للحصول على **دواء سايتوتك في الإمارات** بجودة مضمونة. تأكد دائماً من الشراء من مصادر موثوقة لضمان الأمان والفعالية. باختصار، سواء كنت تبحث عن **حبوب سايتوتك للبيع في الإمارات**، أو **حبوب الإجهاض في الإمارات**، أو تبحث عن **دواء الإجهاض في الإمارات**، فإن هناك العديد من الخيارات الموثوقة المتاحة لك. تأكد من الحصول على المنتج من صيدلية معتمدة لضمان حصولك على حبوب آمنة وفعالة بأفضل الأسعار في السوق. 
صيدلية تبيع حبوب الاجهاض في الامارات حبوب الاجهاض الامارات حبوب سايتوتك للبيع في الامارات حبوب سايتوتك الدفع عند الاستلام الامارات حبوب سايتوتك للبيع في العين حبوب الاجهاض في الامارات حبوب الاجهاض بالامارات سايتوتك للبيع في الامارات حبوب اجهاض في الامارات حبوب اجهاض الامارات سايتوتك في الامارات حبوب سايتوتك في الامارات cytotec سعر في الامارات حبوب سايتوتك للبيع في ابوظبي حبوب تنزيل الحمل للبيع في الامارات حبوب سايتوتك الامارات صيدلية تبيع حبوب الإجهاض في الامارات cytotec سعر في الإمارات حبوب سايتوتك للبيع في دبي حبوب الاجهاض للبيع في الامارات سايتوتيك في الامارات حبوب سايتوتيك في الامارات دواء سايتوتك في الامارات دواء الاجهاض في الامارات سعر حبوب سايتوتك في الامارات بيع حبوب سايتوتك في الامارات حبوب اجهاض في ابوظبي
cytgcc
1,892,707
Get a Web3 Grant with These Steps
The cryptocurrency landscape has witnessed remarkable growth in recent years, attracting a diverse...
0
2024-06-18T18:16:09
https://blog.learnhub.africa/2024/06/18/get-a-web3-grant-with-these-steps/
web3, cryptocurrency, blockchain, beginners
The cryptocurrency landscape has witnessed remarkable growth in recent years, attracting a diverse user base and investors. However, a pressing need has emerged as the industry evolves: developing user-friendly applications that can bridge the gap between cutting-edge blockchain technology and mainstream adoption.

To address this challenge, a wave of grant programs has swept across the Web3 ecosystem, providing crucial support to ambitious projects to build more accessible and intuitive solutions. These initiatives recognize the importance of fostering innovation across various domains, including decentralized finance (DeFi), gaming (GameFi), and infrastructure development.

![Top 10 Web3 Grants You Should Know About](https://blog.learnhub.africa/wp-content/uploads/2024/06/Top-10-Web3-Grants-You-Should-Know-About-.png)

You are missing out if you do not know about these [Top 10 Web3 Grants](https://blog.learnhub.africa/2024/06/18/top-10-web3-grants-you-should-know-about/).

Grant programs have become vital launchpads, offering financial resources, invaluable mentorship, marketing opportunities, and connections with industry leaders. By empowering visionary developers and entrepreneurs, these grants pave the way for a future where blockchain-based applications are seamlessly integrated into our daily lives.

## Understanding Web3 Grants

Web3 grants play a pivotal role in the ecosystem by providing financial support and resources to projects that contribute to the growth and adoption of blockchain technology. These grants come in various forms, including project-based, research-focused, and community-driven initiatives.

The benefits of securing a Web3 grant extend far beyond financial support. Grantees gain validation for their ideas, access to networking opportunities, and the chance to collaborate with industry leaders.
Furthermore, these grants have the potential to drive technological advancements and foster innovation, ultimately contributing to the mainstream adoption of Web3 technologies.

## Identifying Grant Opportunities

Securing a Web3 grant begins with thorough research to identify suitable opportunities. Dedicated websites, Web3 communities, accelerators, and industry events can be valuable sources for locating grant programs. Aligning your project goals with the grant requirements is crucial to increasing your chances of success.

Creating a comprehensive list of potential grant sources and tracking application deadlines is essential. Additionally, networking and building relationships within the Web3 community can provide valuable insights and connections that may open doors to grant opportunities.

## Crafting a Compelling Grant Proposal

A well-written and persuasive grant proposal is critical to the application process. Successful proposals typically include an executive summary, a detailed project description, technical details, a timeline, and a budget. It is paramount to effectively communicate the problem being addressed and the proposed solution.

Demonstrating a deep understanding of the Web3 space and the project's potential impact can set your proposal apart. Additionally, highlighting the team's expertise, experience, and commitment is crucial.

![How Web3 Decentralization Can Dismantle Big Tech Monopolies in 2024](https://blog.learnhub.africa/wp-content/uploads/2024/02/How-Web3-Decentralization-Can-Dismantle-Big-Tech-Monopolies-in-2024-1-1024x535.png)

[How Web3 Decentralization Can Dismantle Big Tech Monopolies in 2024](https://blog.learnhub.africa/2024/02/14/how-web3-decentralization-can-dismantle-big-tech-monopolies-in-2024/)

The proposal should outline clear and measurable objectives, milestones, and deliverables. Strategies for showcasing the project's unique value proposition and innovative approach can further strengthen the application.
## Building a Strong Team and Partnership

Assembling a diverse and talented team is key to securing a Web3 grant. Complementary skills and expertise, such as technical proficiency, domain knowledge, and passion for the project, can contribute to a stronger overall proposal.

Forming strategic partnerships with industry leaders, academic institutions, or community organizations can also enhance your chances of success. Successful collaborations within the Web3 space can serve as examples of the potential impact of such partnerships.

Maintaining a strong network and fostering relationships within the Web3 community can open doors to future opportunities, collaborations, and potential funding sources.

## Post-Grant Considerations

Securing a Web3 grant is only the first step in the journey. Effective project management and timely execution are crucial for maintaining grant providers' trust and demonstrating your project's viability. Transparent communication and regular progress updates are essential for keeping stakeholders informed and engaged.

Additionally, leveraging a grant's success can attract further funding or investment opportunities, facilitating the growth and expansion of your project. Giving back to the Web3 community through open-source contributions, knowledge sharing, or mentorship can foster a culture of collaboration and drive collective progress within the ecosystem.

## Conclusion

The Web3 space presents an unprecedented opportunity for innovation and disruption, and securing grants can be the catalyst that propels visionary projects to success. By understanding the importance of grants, identifying suitable opportunities, crafting compelling proposals, building strong teams and partnerships, and adopting a strategic approach to post-grant considerations, ambitious developers and entrepreneurs can unlock the resources they need to bring their groundbreaking ideas to life.
Embrace the opportunities presented by the Web3 ecosystem, and embark on your journey to secure the funding that will drive your project forward. The future of user-friendly blockchain applications awaits, and with the right approach, you can be at the forefront of this transformative revolution.

If you like my work and want to help me continue dropping content like this, buy me a [cup of coffee](https://www.buymeacoffee.com/scofields1s).

If you find this post exciting, find more exciting posts on [Learnhub Blog](https://blog.learnhub.africa/); we write everything tech from [Cloud computing](https://blog.learnhub.africa/category/cloud-computing/) to [Frontend Dev](https://blog.learnhub.africa/category/frontend/), [Cybersecurity](https://blog.learnhub.africa/category/security/), [AI](https://blog.learnhub.africa/category/data-science/), and [Blockchain](https://blog.learnhub.africa/category/blockchain/).
scofieldidehen
1,892,706
Tips for Effective Communication in Remote Developer Jobs
Remote work introduces unique communication challenges, but effective strategies can turn these...
0
2024-06-18T18:15:11
https://dev.to/msubhro/tips-for-effective-communication-in-remote-developer-jobs-3mm
productivity, remote, developer, leadership
Remote work introduces unique communication challenges, but effective strategies can turn these challenges into opportunities for growth and productivity. Here are seven essential tips to enhance communication within remote teams:

## Proactive Problem-Solving

Team members are encouraged to tackle problems independently before seeking assistance. When encountering an issue, taking a moment to consider possible solutions often leads to resolving the problem without outside help. This proactive approach fosters innovation and efficiency, boosting both individual and team productivity.

## Consult with Proposed Solutions

When consulting a manager or colleague, presenting potential solutions alongside the issue saves time and streamlines the decision-making process. Offering options makes it easier to select the best course of action and often reveals the optimal solution without extensive deliberation. This method enhances efficiency and demonstrates proactive thinking.

## Emphasize Asynchronous Communication

Asynchronous communication, where team members respond at their convenience, reduces interruptions and accommodates different time zones. This method promotes thoughtful responses and flexibility, essential for maintaining productivity in a remote environment. Regular intervals for checking and responding to messages, such as every two hours, balance responsiveness and focus.

## Prioritize Clarity

Clear, concise, and focused communication is crucial. Reviewing messages to eliminate unnecessary details ensures the main points are easy to understand, minimizing misunderstandings and keeping everyone aligned. This practice enhances overall productivity by ensuring effective information exchange.

## Encourage Ownership

Empowering team members to take initiative and make decisions without waiting for approval fosters a sense of responsibility and creativity. This approach leads to higher motivation and productivity.
Providing clarity on objectives, often referred to as "Commander’s Intent," helps team members understand the "what" and "why" behind tasks and projects.

## Continuously Evaluate and Improve

Regular assessment of communication strategies helps identify what works and what doesn’t. Experimenting with new ideas and adjusting as necessary ensures that practices evolve to meet the team’s needs. For instance, brief daily synchronous meetings can help maintain alignment and provide a platform for quick updates and progress reports.

## Utilize Synchronous Meetings When Necessary

While asynchronous communication is often ideal, real-time interaction is sometimes essential. Short, focused meetings should be held to address issues requiring immediate attention or detailed discussion. These meetings should have a clear objective and involve only the necessary participants to maintain efficiency and productivity.

## Conclusion

These principles form the foundation of effective communication in a remote work environment. By fostering proactive problem-solving, encouraging ownership, and balancing asynchronous and synchronous communication, teams can transform from good to exceptional. Open and clear communication is key to navigating the complexities of remote work and achieving greater success.

By adopting these strategies, organizations can enhance their remote teams' productivity and cohesion, blending creativity with precision in their work processes. Embracing these communication practices can unlock the full potential of remote work environments.
msubhro
1,892,705
Some UX Design Principles Everyone Should Know 🥸
Hitting the ground running with a new app idea is tough. There are a million things to do and no time...
0
2024-06-18T18:13:14
https://houseofgiants.com/blog/some-ux-design-principals-startups-should-know
webdev, ux, design
Hitting the ground running with a new app idea is tough. There are a million things to do and no time to do them. You’ve gone through the “justification” phase, explaining to everyone and their mom why the world needs your application. You’ve documented every single forecast and business plan. Now it’s time to start thinking about actually executing your vision. It all begins with User Experience.

Great UX design isn't an afterthought or a “nice to have”; it's an absolute necessity for any application aiming to stand out and thrive. That said, let's dive into some UX design principles that everyone should know.

## User-Centered Design 🗣️

First things first: always put your users at the center of your design process. This means understanding their needs, behaviors, and pain points. It's not about what you think looks good; it's about making sure your users can accomplish the goal you’ve set out for them with the least amount of resistance.

- **Conduct User Research**: Surveys, interviews, and usability tests are your best friends here.
- **Create Personas**: Develop detailed profiles of your target users to guide your design decisions.
- **User Journey Mapping**: Map out the steps users take to complete tasks within your application, to identify opportunities for improvement.

## Simplicity and Clarity 🧘

Less is more. Seriously. Don't overload your users with information or options. Keep your design clean, straightforward, and intuitive. As users flock to your application, you’ll be able to take their feedback and make well-educated, data-driven decisions about what to improve.

- **Clear Navigation**: Make sure users can easily find what they need without getting lost. Don’t hide primary actions behind layers of interaction.
- **Minimalist Design**: Remove any unnecessary elements that don't add value. Truly ask yourself why something exists, and how it aids in users accomplishing their goals.
- **Readable Content**: Use clear, concise language and break up text with headings, bullet points, or some _very_ cool emojis if you’re hip 😎.

## Consistency

Like all things, consistency is key, especially when it comes to creating a seamless user experience. This means maintaining uniformity in your design elements across your website or app.

- **Design System**: I love design systems. There’s nothing better than an organized style guide that includes typography, color schemes, button styles, and all the little components that make your application unique.
- **Consistent Interactions**: Make sure similar actions produce similar results throughout your site. Don’t have twelve different variations of that modal. People notice, and it makes you look silly.
- **Branding**: Keep your branding elements consistent to build trust and foster that brand recognition.

## Accessibility

Accessibility needs to be top of mind at the outset of any project. Baking accessibility best practices into the UX and design phase of your application will ensure you’re providing an equitable experience for each and every one of your users, regardless of how they interact with the web. Make sure you’re _at a minimum_ making these considerations:

- **Alt Text for Images**: This is too easy not to be doing. Provide descriptive text for images to assist screen readers.
- **Keyboard Navigation**: Not everyone browses the web with a mouse or a $130 Apple trackpad. Make considerations for those who navigate with a keyboard.
- **Contrast Ratios**: I’m old and this one bothers me more and more. Do a quick check to make sure your text has _at least_ an AA contrast ratio. There [are](https://webaim.org/resources/contrastchecker/) [several](https://coolors.co/contrast-checker/112a46-acc8e5) [tools](https://colourcontrast.cc/) [for](https://accessibleweb.com/color-contrast-checker/) [this](https://contrastchecker.com/). Pick one.
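If you're curious what those contrast checkers compute under the hood, here is a minimal sketch of the WCAG 2.x contrast-ratio formula (the formula comes from the WCAG spec; the example colors below are my own illustration, not from the article):

```python
# WCAG contrast ratio: relative luminance of each color, then
# (lighter + 0.05) / (darker + 0.05). AA for normal text needs >= 4.5:1.

def _linearize(channel):
    """Convert an 8-bit sRGB channel (0-255) to linear light."""
    c = channel / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0

# Mid-gray #777777 on white comes out around 4.48:1 and just misses AA.
print(contrast_ratio((119, 119, 119), (255, 255, 255)) >= 4.5)  # False
```

The online tools linked above apply this same formula, so a quick script like this is handy when you need to check a whole palette at once.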
## Feedback and Responsiveness

In our wild world of JavaScript applications, ensuring that a user knows that _something_ is happening has become more important than in the past. Our applications connect to more third-party services these days, and our users must know that an action they’ve taken is processing. We use these strategies to help with that:

- **Loading Indicators / Skeletons 💀**: Skeleton loaders not only sound cool, they also provide anticipatory design elements that give users a sense of what will be on the page, before the data has fully loaded.
- **Error Messages**: Provide clear, helpful error messages when something goes wrong. There’s nothing worse than a user seeing `Uncaught ReferenceError: Invalid left-hand side in assignment` or some other nonsense. Give them plain English feedback instead!
- **Performance Optimization**: This is becoming more and more tricky. The more services we integrate and the more complex a database becomes, the more creative you have to get to ensure an application is performant.

## Emotional Design

This is such a cool concept to us. Emotional design focuses on creating experiences that evoke a particular feeling from your users. As cliche as it may sound, this creates a unique connection and keeps your users feeling positive about the service you’re providing them. This can be done in a few ways:

- **Storytelling**: Visual storytelling on the web is an art form. We love expressing our creativity through creating a full-fledged digital experience.
- **Micro-Interactions**: Small, thoughtful animations and interactions tend to put you in a position to entertain and engage your users and drive them to keep coming back.
- **Human Touch**: Incorporate human elements, like friendly language, relatable imagery, or exceptionally witty content, like the content you’re reading right this second.

## Iterative Design

Everything is iterative. After you build an MVP, it’s back to the drawing board.
Starting the process over and continuously iterating is the only way to keep pace with the multi-billion dollar applications that exist in our world. Stay engaged with your user base, listen to their feedback, and they’ll remain loyal to you.

- **Usability Testing**: Regularly test your designs with real users. This is and always will be something that the tech giants miss out on. Don’t miss the mark here.
- **A/B Testing**: Get super granular with this. A/B test simple language, or small components, and use that information to inform the larger design.
- **Analytics**: Use data to inform all of your decisions. There needs to be a why, and data will uncover it.

## Focus on Business Goals

While user satisfaction is paramount, your application should align with your business objectives. All that work you did at the inception of your idea shouldn’t go to waste. The forecasts and the documentation are invaluable, but balance user needs with your goals.

No bullet points here, those business goals are yours to define. We’ll be here to help you refine and execute them.

Keep testing, collaborating, learning, and evolving.
magnificode
1,892,703
How to containerize your web app- a beginner-friendly tutorial for Dockerfile
Welcome to part 2 of the series Docker for Dummies in this blog we are going to create an image of a...
27,767
2024-06-18T18:07:18
https://dev.to/swikritit/how-to-containerize-your-web-app-a-beginner-friendly-tutorial-for-dockerfile-282e
docker, devops, containerapps, webdev
Welcome to part 2 of the series `Docker for Dummies`. In this blog, we are going to create an image of a small web app and learn what each step does. Without any further ado, let's get started.

For this blog, I'm going to use a simple Vue quiz app that'll let you guess the name of the book based on its first line. If you want to follow along with the setup, you can find the GitHub link to the web app [here](https://github.com/SwikritiT/Guessthebook-blog), or you can use your own app, or create a simple `hello-world` app in Node or a framework/language of your choosing.

## Let's create the image

> Note: this tutorial follows Ubuntu commands but they should be similar for other OSs as well

Let's clone the app and go inside of it:

```bash
git clone https://github.com/SwikritiT/Guessthebook-blog
cd Guessthebook-blog
```

We are going to start by creating a file called `Dockerfile` in the root of our repository.

```bash
touch Dockerfile
```

### Dockerfile

A Dockerfile is a script that contains all the steps necessary to build an image. A Dockerfile starts with something called a `base image`. A `base image` is a pre-configured environment that our image builds upon. The base image can be an OS like Linux or Alpine, or some application stack. Choosing an appropriate base image is crucial for the overall size and productivity of the image of our application.

We will talk more about how to select the appropriate base image in a later part of this series, so for now let's keep in mind that a lighter base image will create a lighter app image (this comes with its limitations, which we will talk about in more detail later in this series). So, for this app, we are going to use the `node:alpine` image as a base; you can select your needed version of a base image from an image registry like [Docker Hub](https://hub.docker.com/_/node).
> Note: If you're not building the image of a Node application, you need to select the image appropriate for your stack

```dockerfile
# Start with a base image
FROM node:alpine
```

> `FROM`: Specifies the base image to use. Every Dockerfile starts with this instruction.

The next step is to set the `WORKDIR`, i.e. the working directory for our container, where commands will be executed and files will be stored by default. Normally, for web apps, the workdir is set to `/usr/src/app`, but you can customize it as per your need.

```dockerfile
# Set the working directory
WORKDIR /usr/src/app
```

Now let's copy the necessary config files to our container, in our case package.json and the lockfile. For this, we will use the `COPY` command.

```dockerfile
# Copy package.json and package-lock.json to the working directory
COPY package*.json ./
```

> `COPY or ADD`: Copies files from your local filesystem into the container.

The next step is to install dependencies, just like we do on our host machine, and for this the `RUN` command can be used:

```dockerfile
# Install the application dependencies
RUN npm install
```

> `RUN`: Executes commands in the container. These commands are typically used to install software packages.

Now we copy the rest of the files to our container's working directory:

```dockerfile
# Copy the current directory contents into the container at /usr/src/app
COPY . .
```

The above command will copy every file and folder that is in your app to the container's working directory. This includes the build output, files created by your IDE, or other miscellaneous files/folders that might not be necessary for building the image. So, it is a good practice to either copy only the necessary files or create a `.dockerignore` file with the list of files and folders that you don't want to be copied inside the container. The syntax of the `.dockerignore` file is similar to that of `.gitignore`.
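As a rough illustration, a `.dockerignore` for a Node/Vue project like this one might look like the following (the exact entries depend on your repo layout, so treat these as examples rather than a definitive list):

```
# Hypothetical .dockerignore — adjust to what your project actually contains
node_modules
dist
.git
*.md
.vscode
```

Excluding `node_modules` is the big win here: it keeps the build context small and lets `npm install` inside the container produce a clean dependency tree.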
Learn more [here](https://docs.docker.com/build/building/context/#dockerignore-files).

The next step is to build our application:

```dockerfile
# Build the application
RUN npm run build
```

The next step is to `EXPOSE` the ports on which a containerized application listens for network connections:

```dockerfile
# Make port 3000 available to the world outside the container
EXPOSE 3000
```

> `EXPOSE` is a way to document which ports the application running inside the container will use. It does not map the port to the host machine’s ports. It simply indicates which ports are intended to be accessible.

We've come to the final stage, where we will run the application. There are two commands that we can use to do this, `CMD` and `ENTRYPOINT`; for this part we will be using `CMD`, and we'll talk about `ENTRYPOINT` in later parts. Since we will be running the application in a production env, we will use the `preview` command for this. Later we'll also create a dockerized dev env.

```dockerfile
# Define the command to run the app
CMD ["npm","run","preview"]
```

> `CMD`: Provides the command that will be run when a container is started from the image. Only one CMD instruction can be present in a Dockerfile.

Now let's look at the whole `Dockerfile`:

```Dockerfile
# Start with a base image
FROM node:alpine

# Set the working directory
WORKDIR /usr/src/app

# Copy package.json and package-lock.json to the working directory
COPY package*.json ./

# Install the application dependencies
RUN npm install

# Copy the current directory contents into the container at /usr/src/app
COPY . .

# Build the application
RUN npm run build

# Make port 3000 available to the world outside the container
EXPOSE 3000

# Define the command to run the app
CMD ["npm","run","preview"]
```

In Docker, each line in your Dockerfile creates a new layer in the final image, like adding ingredients to a sandwich.
These layers stack on top of each other, with each layer representing a change or addition, such as copying files or installing software. Docker saves these layers, and if you rebuild your image and some layers haven’t changed, Docker reuses them, speeding up the build process and reducing redundancy. So, in the above Dockerfile, we have 8 layers.

## Build the image

Now that we've written the Dockerfile, it's time to build the image and get it running. We can run the following command in the terminal from the root of our repository to build a Docker image:

```bash
$ docker build . -t guessthebook:v1
[+] Building 1.1s (11/11) FINISHED                              docker:default
 => [internal] load build definition from Dockerfile                      0.0s
 => => transferring dockerfile: 191B                                      0.0s
 => [internal] load metadata for docker.io/library/node:20-alpine         0.9s
 => [internal] load .dockerignore                                         0.0s
 => => transferring context: 129B                                         0.0s
 => [1/6] FROM docker.io/library/node:20-alpine@sha256:66c7d989b6dabba6b4 0.0s
 => [internal] load build context                                         0.0s
 => => transferring context: 1.41kB                                       0.0s
 => CACHED [2/6] WORKDIR /usr/src/app                                     0.0s
 => CACHED [3/6] COPY package*.json ./                                    0.0s
 => CACHED [4/6] RUN npm install                                          0.0s
 => CACHED [5/6] COPY . .                                                 0.0s
 => CACHED [6/6] RUN npm run build                                        0.0s
 => exporting to image                                                    0.0s
 => => exporting layers                                                   0.0s
 => => writing image sha256:08f32d7f583b7e65a844accafff8fc19930849204c5ae 0.0s
 => => naming to docker.io/library/guessthebook:v1                        0.0s
```

You will get output like this, with information about each layer being built. The command `docker build . -t guessthebook:v1` reads the Dockerfile in the current directory, builds a Docker image according to its instructions, and tags this image as `guessthebook` with version `v1`.
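The layer caching described above is also why the Dockerfile copies `package*.json` before the rest of the source. The ordering below is the article's own; the annotations are mine, as a sketch of how cache invalidation flows top to bottom:

```dockerfile
COPY package*.json ./   # this layer changes only when dependencies change...
RUN npm install         # ...so the slow install step is usually a cache hit
COPY . .                # everyday source edits invalidate layers from here down
RUN npm run build       # ...meaning only the build and later steps re-run
```

If `COPY . .` came before `npm install`, any source-file edit would force a full reinstall of dependencies on every build.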
Now if you run the following command in your terminal, you should see the relevant info about the image that we just created:

```bash
docker images
REPOSITORY     TAG   IMAGE ID       CREATED         SIZE
guessthebook   v1    08f32d7f583b   5 minutes ago   275MB
```

## Running a Docker Container

Once you have built the image, you can run it with the `docker run` command:

```bash
docker run -p 3000:3000 guessthebook:v1
```

This command runs a container from the `guessthebook:v1` image, mapping port `3000` of the container to port `3000` on the host machine. We use `-p` for port mapping. We can improve the above command with the option `-d` to run it in detached mode:

```bash
docker run -d -p 3000:3000 guessthebook:v1
```

After running the Docker container, you can visit `http://localhost:3000` to ensure everything works fine. If everything is set up and working correctly, you should be greeted with this screen:

![app homepage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iggehash0bf3euqpyxt3.png)

You can now take a rest and play the quiz to see how many you get right.

## Stop container

You can stop the container with the following command:

```bash
docker stop <container_name or id> # can run `docker ps` to get the name and id
```

## Publish the image to the docker hub

If we want to take this a step further, we can publish the image to Docker Hub. For that, create an account on [Docker Hub](https://hub.docker.com/signup) if you haven't already. Next, you can create a new repository.

![docker hub create repo page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l9l3ctp3zg0qxnl46ckw.png)

1. Login to Docker Hub through the Docker CLI:

```bash
docker login
```

2. Tag your local image to match the repo image:

```bash
# docker tag local-image:tagname new-repo:tagname
docker tag guessthebook:v1 <your-docker-username>/guessthebook
```

3.
Push the image ```bash # docker push new-repo:tagname docker push <your-docker-username>/guessthebook ``` Now, you can go and check if your docker hub has the image that you just pushed. To test the image you can now run the image by pulling directly from `Docker Hub` ```bash # let's remove the locally tagged image first docker rmi <your-docker-username>/guessthebook # run the container docker run -p 3000:3000 <your-docker-username>/guessthebook ``` That's it for this blog. Hope you enjoyed this one and learned something new as well! See you in the next part of this series. If you have any queries or suggestions please comment them below!
swikritit
1,892,699
Properties in C# | Uzbek
Hello everyone. Today I will tell you about some concepts we need to learn in the C# programming language...
0
2024-06-18T18:01:48
https://dev.to/ozodbek_soft/properties-in-c-uzbek-3cof
uzbek, csharp, dotnet, property
Hello everyone. Today I will tell you about some concepts we need to learn in the C# programming language.

**Plan:**
- What is a property?
- Property types
- Using properties
- Practice and concepts
- Quizzes

**What is a property?**

A property in C# is a special member used to store and retrieve data inside objects, classes, and structs. Properties look like fields, and they provide access to a field through a `getter` and a `setter`.

**Property types:**

1 - **_Auto-implemented property_** - _Written in a simplified form, where the `get` and `set` accessors are generated automatically._
2 - **_Read-only property_** - _A property that can only be read; only the `get` accessor works, and we cannot add a `set`._
3 - **_Write-only property_** - _A property that can only be written to; it provides only a `set` accessor._
4 - **_Calculated property_** - _A property whose `get` and `set` accessors perform custom calculations._

**Let's start practicing 🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀**

**Auto-Implemented Property**

```csharp
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
```

**Read-Only Property**

```csharp
public class Person2
{
    private string name;

    public Person2(string name)
    {
        this.name = name;
    }

    public string Name
    {
        get { return name; }
    }
}
```

**Write-Only Property**

```csharp
public class Person3
{
    private string password;

    public string Password
    {
        set { password = value; }
    }
}
```

To be continued....
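The fourth type listed above, calculated properties, has no example in this part yet; here is a minimal sketch (the `Rectangle` class is illustrative, not from the original article):

```csharp
public class Rectangle
{
    public double Width { get; set; }
    public double Height { get; set; }

    // Calculated property: the value is computed in the getter
    // from other members instead of being stored in a field.
    public double Area
    {
        get { return Width * Height; }
    }
}
```

Here `Area` has no backing field of its own; reading it recomputes `Width * Height` every time.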
ozodbek_soft
1,892,698
Enhancing Food Manufacturing with Python: Optimizing Raw Materials
Efficient raw material management is essential in food manufacturing to reduce costs and ensure...
0
2024-06-18T17:57:48
https://dev.to/twinkle123/enhancing-food-manufacturing-with-python-optimizing-raw-materials-38c6
python, programming, foodmanufacturing
Efficient raw material management is essential in food manufacturing to reduce costs and ensure product quality. Python, with its robust libraries and tools, offers powerful solutions for this optimization.

#### Advantages of Python in Food Manufacturing

Python's strengths in data analysis and machine learning make it ideal for optimizing raw material use. Key libraries include:

- **Pandas and NumPy** for data analysis.
- **TensorFlow and Scikit-Learn** for machine learning.
- **OpenCV** for image processing.

These tools enable sophisticated management and optimization processes, enhancing efficiency and sustainability.

#### Key Applications

1. **Data Analysis and Predictive Modeling**: [Python's data libraries](https://www.clariontech.com/blog/best-python-modules-for-automation) help predict demand, optimize ordering, and manage inventory efficiently.
2. **Automated Quality Control**: Using OpenCV, Python can automate the inspection of raw materials to detect defects and ensure high quality.
3. **Supply Chain Optimization**: Python can evaluate supplier performance, predict lead times, and optimize procurement processes.
4. **Real-Time Process Automation**: Python scripts can analyze sensor data in real time to adjust production parameters dynamically, reducing waste and maintaining consistency.
5. **Waste Reduction**: [Python tools](https://www.clariontech.com/blog/top-python-frameworks) identify waste patterns and suggest improvements, promoting sustainability.

#### Practical Implementation: A Case Study

A food manufacturer facing high [raw material costs and quality](https://www.clariontech.com/blog/why-use-python-for-ai-ml) issues can benefit from Python by:

- **Data Collection and Analysis**: Using sensors and IoT devices, data on raw material usage is gathered and analyzed to identify inefficiencies.
- **Predictive Maintenance**: [Machine learning models](https://www.clariontech.com/blog/why-use-python-for-ai-ml) forecast equipment failures to reduce downtime.
- **Automated Quality Control**: Python analyzes images of raw materials to detect defects, ensuring only the best materials are used.
- **Optimizing Supply Chain**: Algorithms improve supplier selection and delivery schedules, reducing costs.

These implementations can lead to reduced costs, improved quality, and higher efficiency.

#### Conclusion

Python's versatility and powerful analytical capabilities make it essential for optimizing raw material usage in food manufacturing. By leveraging data analysis, machine learning, and automation, manufacturers can significantly improve their operations. As the industry evolves, Python-based solutions will continue to drive innovation and competitiveness.
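As a small, concrete illustration of the demand-prediction idea described above, here is a minimal sketch using only the standard library; the weekly usage numbers and the window size are illustrative, not real plant data:

```python
# Minimal sketch: forecasting next-period raw-material demand with a
# simple moving average over the most recent observations.
def moving_average_forecast(usage, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    if len(usage) < window:
        raise ValueError("not enough observations for the chosen window")
    recent = usage[-window:]
    return sum(recent) / window

# Hypothetical weekly flour usage in kg
weekly_usage = [120, 135, 128, 140, 132]
forecast = moving_average_forecast(weekly_usage)
print(f"Forecast for next week: {forecast:.1f} kg")  # → 133.3
```

In practice this is the kind of baseline a Pandas-based pipeline would start from before moving to richer models.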
twinkle123
1,892,692
Reducing pipeline time by 3x
In this article, I aim to share our strategy for reducing the time our...
0
2024-06-18T17:56:29
https://dev.to/tino-tech/reduzindo-o-tempo-da-pipeline-em-x3-vezes-1pd8
frontend, cicd, webpack, vite
In this article, I want to share the strategy we used to reduce the time our CI/CD pipeline took to run. The strategy can be applied regardless of framework or library. This article uses a front-end project whose pipeline took 15 minutes to complete. We will consider a pipeline with: NPM authentication, dependency installation, file and type linting, build, deploy, and tests (E2E, unit, and integration).

## What will you find in this article?

- Why reducing pipeline time matters
- A strategy for reducing execution time
- References to the documentation of the tools we are using

## Before we start

- The front-end project we will use initially runs on React and Webpack
- This strategy is not a silver bullet; we only applied it after 3 years of our front-end monolith. Avoid premature optimization in your project
- It can be applied to projects in different contexts, but you need to evaluate whether it makes sense to change the build step

## Why should I care about pipeline time?

- **Impact on value delivery** - A slow pipeline significantly affects value delivery, making it inefficient to ship features or fixes to users.
- **Scalability** - The more the project grows, the longer the pipeline takes to complete. At some point, waiting for the pipeline will cost more than developing the solution itself.
- **Collaboration** - In a team with many engineers, a slow pipeline creates a major delivery bottleneck: one engineer merges a pull request and the others have to re-run the pipeline because of the change. With a slow pipeline, this quickly compounds and shipping new solutions becomes nearly impossible.
- **Cost optimization** - Most tools charge by pipeline execution time: the longer your pipeline takes, the higher the final cost.

## Finding the bottlenecks in our pipeline

With the image below, we can analyze how long each step takes in our pipeline.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s4elxeph8dcstffe52in.png)

## Parallelizing our pipeline

In this project we use GCP Cloud Build, which does not let us have parallel steps within a single trigger. So we created two new triggers, one for the unit tests and another for the integration tests; this way they run in parallel, which already yields a considerable reduction in our pipeline.

### Reducing the time from 15 minutes to 7 minutes

Note that we cannot parallelize the E2E tests, because they depend on the deploy step, which in turn depends on the build step.

## Improving the results

One way to improve our result is to speed up the build. This project used React with Webpack, and the build took 3 minutes. To solve this, we evaluated some tools on the market and chose Vite. You can find more information about [Vite here](https://vitejs.dev/guide/why.html).

Applying Vite to our project, we obtained this result:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1st2bbs8gazxtrdefnv6.png)

## References

https://cloud.google.com/build?hl=en
https://vitejs.dev/guide/why.html
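As an appendix to the parallelization idea above: each extra trigger points at its own small config file. A hypothetical `cloudbuild.unit-tests.yaml` (the step images and npm scripts are assumptions, not taken from the actual project) could look like this:

```yaml
# Hypothetical Cloud Build config for the unit-test trigger.
# Running it as a separate trigger lets it execute in parallel with
# the integration-test trigger and the main build/deploy trigger.
steps:
  - name: 'node:20'
    entrypoint: npm
    args: ['ci']
  - name: 'node:20'
    entrypoint: npm
    args: ['run', 'test:unit']
```

A sibling `cloudbuild.integration-tests.yaml` would follow the same shape with its own test script.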
oigabrielteodoro
1,891,461
Superglue vs. Hotwire for modern frontend development
Written by Frank Joseph✏️ The web development landscape is constantly evolving and as user...
0
2024-06-18T17:54:26
https://blog.logrocket.com/superglue-vs-hotwire-modern-frontend-development
javascript, webdev
**Written by [Frank Joseph](https://blog.logrocket.com/author/frankjoseph/)✏️** The web development landscape is constantly evolving and as user requirements also change, developers continue to explore ways to build frontend applications with a balance of performance, flexibility, and maintenance. JavaScript-heavy frameworks help developers build frontend applications, but they often introduce scalability and maintenance challenges and can be complex to manage. Modern frontend development approaches such as Superglue and Hotwire aim to simplify creating dynamic and interactive web applications using HTML over the wire, sending HTML directly to the client’s browser instead of JSON. In this article, we will explore how Hotwire and Superglue are reimagining the web development landscape. We will compare them based on factors like developer experience/ease of use, performance, compatibility with existing stacks, use cases, scalability, community, and ecosystem. ## Superglue [Superglue](https://thoughtbot.github.io/superglue/#/) is a framework-agnostic pattern that enables web developers to build end-to-end frontend web applications by sending HTML over the wire and using less JavaScript. The “glue it” approach of modern frontend development prioritizes interoperability, simplicity, and flexibility, and advocates for a move away from the complexities of a monolithic approach to a more streamlined approach. Superglue is more of a philosophy and approach rather than a set of tools. With Superglue, developers use a combination of tools to build their frontend instead of relying on a single tool. 
Think of it this way: rather than use React, Vue, or Angular for frontend development, developers can use templating engines like EJS or Handlebars to dynamically manipulate data, [Redux or other state management](https://blog.logrocket.com/understanding-redux-tutorial-examples/) tools to manage state, and [Axios or similar libraries](https://blog.logrocket.com/axios-vs-fetch-best-http-requests/) to fetch data.

### Why is Superglue necessary?

Superglue, like other modern frameworks, seeks to introduce a modern pattern to frontend web development while trying to solve the following problems:

* Latency: Large frameworks usually ship additional code which, in most cases, is unnecessary and can inflate application size and contribute to latency
* Using a particular framework can limit your creative power, as you are forced to follow a certain pattern. There is also the cost of switching or integrating with other tools
* Picking up, learning, and mastering a complex framework is usually difficult and time-consuming for beginners

The Superglue approach offers an alternative that prioritizes simplicity, performance, and flexibility by using the building blocks of the web and focusing on interoperability.

### Superglue features

The following are features associated with the Superglue approach:

* **Over-the-wire updates**: Performance is optimized by sending only the necessary HTML over the wire.
This feature reduces client-side processing, thereby reducing latency and network overhead * **Framework-agnostic**: Being framework-agnostic, the Superglue approach allows developers to integrate easily with various backend technologies and frameworks * **Micro-frameworks**: Libraries like [the Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API), lit-html, Marko, and server-sent events (SSE) offer an alternative means to build lightweight web components without heavy JavaScript ### Ease of use Superglue requires an understanding of fundamental web development technologies like HTML, CSS, and JavaScript with which developers can pick up other tools. With Superglue, developers have the freedom to choose their preferred tools. However, using and being able to configure different tools effectively requires experience and in-depth knowledge, which contributes to the steepness of the learning curve. ### Performance Superglue inherently improves performance by avoiding JavaScript-heavy frameworks. Performance optimization depends not only on the developer’s experience but also on the tools they choose to use. For example, developers can use the `imageOptim` API to optimize images and other visual assets. Additionally, Superglue’s server-side rendering capabilities further improve performance by ensuring that website content appears quickly without needing to download and run extensive client-side code. ### Compatibility with the existing stack Superglue’s strength lies in its interoperability; it integrates well with various backend technologies, especially Python, JavaScript, Django, and Node.js. Developers using the Superglue pattern enjoy the flexibility of choosing a backend stack depending on product requirements. For example, regardless of whether you are using Node.js or Django as your backend stack, Superglue gives you the freedom to choose the frontend tool of your choice. 
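The over-the-wire update described above can be sketched in a few lines of framework-free JavaScript. This is an illustrative sketch only: in a real browser, `fetchFragment` would call `fetch()` and `target` would be a DOM node found with `document.querySelector`; here the fragment source is stubbed so the logic is self-contained.

```javascript
// Minimal sketch of an "HTML over the wire" update, no framework involved.

async function fetchFragment(url) {
  // Stand-in for: const res = await fetch(url); return res.text();
  return '<li>New comment</li>';
}

function applyFragment(target, html) {
  // Replace the element's contents with the server-rendered fragment.
  target.innerHTML = html;
  return target;
}

async function refresh(target, url) {
  const html = await fetchFragment(url);
  return applyFragment(target, html);
}

// Usage with a stub element (in the browser: document.querySelector('#comments'))
const el = { innerHTML: '' };
refresh(el, '/comments/latest').then(() => console.log(el.innerHTML));
```

The server stays responsible for rendering; the client only swaps fragments in, which is the core of the pattern.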
### Use cases

Because Superglue is not tied to a specific backend stack, it makes future stack migration possible. This flexibility is especially beneficial for projects where control and customization of every aspect of the frontend are essential.

Imagine you are building a simple form to receive user data like their name, email, and password. Using JavaScript-heavy frontend frameworks like React, Vue, or Angular for this use case might not make sense, as it could make the application heavy. But with the Superglue pattern, you can easily write an HTML form and use the `JustValidate` library to validate input:

```html
<form id="user-form">
  <input type="text" id="name" name="name" placeholder="Enter your Name">
  <input type="email" id="email" name="email" placeholder="Enter your email Address">
  <input type="password" id="password" name="password" placeholder="Enter your password">
  <button type="submit">Register</button>
</form>
```

In the JavaScript file, write the following code:

```javascript
const validator = new JustValidate('#user-form');

validator
  .addField('#name', [
    { rule: 'required' },
    { rule: 'minLength', value: 3 },
  ])
  .addField('#email', [
    { rule: 'required', errorMessage: 'Email is required' },
    { rule: 'email', errorMessage: 'Email is invalid' },
  ])
  .addField('#password', [
    { rule: 'required', errorMessage: 'Password is required' },
    { rule: 'password' },
  ])
  .onSuccess((event) => {
    event.currentTarget.submit();
  });
```

### Scalability

Building scalable applications requires well-organized and maintainable code. But managing complex applications that follow the Superglue pattern can be challenging, especially when features are built with different technology stacks. Building a scalable application following the Superglue philosophy starts with choosing the right tools. This is why developers are advised to choose tools they have experience working with. On the flip side, Superglue encourages building applications with modules, which are easier to optimize and debug.
Templating engines like EJS and libraries like `JustValidate` are lightweight and improve performance and loading time when compared to JavaScript-heavy frontend frameworks like React or Angular. ### Community/Ecosystem Superglue is more of a philosophy than a tool, so the community might not be as established when compared to a language or a framework like Rails. Available libraries and resources that align with Superglue's philosophy are scattered across different tools and projects. Developers following the Superglue concept need to stay up-to-date with new innovations and updates in frontend development modules and libraries. ## Hotwire [Hotwire](https://hotwire.io/about) is an alternative approach to building modern web applications without a heavy JavaScript framework by sending HTML over the wire. It focuses on server-side rendering. Hotwire was initially designed for the Rails ecosystem but now it can be integrated with frameworks such as Laravel, Django, Wagtail, and many others. It provides a collection of features and tools (e.g., Stimulus and Turbo) that improve frontend development without using heavy JavaScript frameworks. ### Hotwire features #### [Turbo](https://turbo.hotwired.dev/) This uses a combination of several techniques to create fast, modern, and progressive web applications without writing heavy JavaScript. Turbo offers a simpler alternative to JavaScript-heavy frontend frameworks, which put all the logic in the frontend and confine the server side of your app to being little more than a JSON API. Using Turbo means you can write all your application logic on the server side and you also let the server serve HTML to the browser. When a webpage refreshes, CSS and JavaScript have to be reinitialized and reapplied to the page. Imagine how slow the process can be with a fair amount of CSS and JavaScript. At its core, Turbo gets around this reinitialization problem by maintaining a persistent process similar to SPAs (single-page applications). 
It intercepts links and loads new pages via Ajax. The server still returns fully-formed HTML documents.

#### [Stimulus](https://stimulus.hotwired.dev/)

This is a modest JavaScript framework. Unlike other JavaScript-heavy frontend frameworks, at the core of its design, Stimulus enhances server-rendered HTML by connecting JavaScript objects (controllers) to elements in HTML pages using annotation.

Stimulus continuously monitors the page, waiting for HTML `data-controller` attributes. For each attribute, Stimulus looks at the attribute's value to find a corresponding controller class, creates a new instance of that class, and connects it to the element. Think of the Stimulus `data-controller` attribute as a bridge connecting HTML to JavaScript, just as the class attribute is the bridge connecting HTML to CSS. Stimulus's use of data attributes helps separate content from behavior in the same way that CSS separates content from presentation.

Aside from controllers, the three other major Stimulus concepts are:

* **Actions**: Connect controller methods to DOM events using `data-action` attributes
* **Targets**: Locate elements of significance within a controller
* **Values**: Read, write, and observe data attributes on the controller's element

### Why is Hotwire necessary?

* SEO: Server-side rendered HTML content is easy to crawl and index by search engines compared to JavaScript-heavy frameworks
* Modular web development: Hotwire favors a modular approach to web development. For example, Turbo handles webpage updates while Stimulus manages interactivity
* Hotwire reduces complexity when compared to JavaScript-heavy frameworks
* Hotwire improves performance

The following is a sample code snippet to illustrate how to use Turbo and Stimulus to handle updates and interactivity, respectively:

```html
<div id="comments-section" data-turbo-frame>
  <button data-controller="comments">Add Comment</button>
</div>
```

This is sample HTML code.
The `data-turbo-frame` attribute informs Turbo that the element with an ID of `comments-section` is a potential update target. It also includes a button with a `data-controller` attribute set to `comments`. This instructs Stimulus to associate a controller with this button.

In a `comment_controller.js` file, add the following code:

```javascript
import { Controller } from "@hotwired/stimulus"

export default class extends Controller {
  static targets = ["comments"]

  connect() {
    console.log("New comment controller connected!")
  }

  submitComment(event) {
    event.preventDefault();
    fetch('/api/comments', {
      method: 'POST',
      body: new FormData(this.commentsTarget)
    })
      .then(response => response.json())
      .then(data => {
        this.commentsTarget.insertAdjacentHTML('beforebegin', data.html);
      });
  }
}
```

This code defines the comment controller class. It uses the `static targets` property to specify that this controller looks for elements marked as `comments` targets. The `connect` method logs a message to the console when the controller is connected to the button element.

The `submitComment` method gets triggered when the button is clicked — this is handled behind the scenes by Stimulus. The `preventDefault` method prevents the default form submission behavior, and the remaining part of the code simulates sending data to the server using the Fetch API.

Refer to these guides for more information on how to use [Stimulus](https://stimulus.hotwired.dev/handbook/hello-stimulus) and [Turbo](https://turbo.hotwired.dev/handbook/frames).

### Ease of use

Developers who are familiar with the Rails ecosystem find it easier to get started with Hotwire.

### Performance

Turbo manages page updates and reduces data transfer, thereby improving performance. Additionally, it includes built-in performance optimization techniques that reduce the need for manual optimization.
### Compatibility with the existing stack Hotwire is specifically designed for Rails applications, so integrating Hotwire with technologies outside the Rails ecosystem can be complex. ### Use cases Applications, especially those built with Rails, need modern, performant, and easy-to-integrate frontend stacks. Hotwire is ideal for building applications with real-time communication functionalities and is fit for projects seeking the SEO advantages of server-side rendering. ### Scalability Hotwire implements separation of concerns (e.g., Turbo manages page updates and Stimulus manages interaction), which helps applications built with Hotwire scale well. Building complex and data-intensive applications using Hotwire may require additional tools and a scalable design architecture. ### Community/Ecosystem Hotwire greatly benefits from the well-established Rails ecosystem and the active participation of its community members. The [Hotwire documentation](https://hotwire.io/about) was a result of a community-driven effort. ## Conclusion In this article, we explored how Hotwire and Superglue are reimagining modern frontend development. We compared their features and evaluated their performance, scalability, ease of use, community support, and compatibility with existing stacks. Hotwire, with its focus on server-side rendering and minimal JavaScript, offers a streamlined approach for Rails applications and more. On the other hand, Superglue provides a flexible, framework-agnostic approach that emphasizes interoperability. Both approaches aim to address the complexities of JavaScript-heavy frameworks, offering developers efficient alternatives for building dynamic web applications. --- ##[LogRocket](https://lp.logrocket.com/blg/javascript-signup): Debug JavaScript errors more easily by understanding the context Debugging code is always a tedious task. But the more you understand your errors, the easier it is to fix them. 
[LogRocket](https://lp.logrocket.com/blg/javascript-signup) allows you to understand these errors in new and unique ways. Our frontend monitoring solution tracks user engagement with your JavaScript frontends to give you the ability to see exactly what the user did that led to an error. [![LogRocket Signup](https://blog.logrocket.com/wp-content/uploads/2020/06/reproduce-javascript-errors.gif)](https://lp.logrocket.com/blg/javascript-signup) LogRocket records console logs, page load times, stack traces, slow network requests/responses with headers + bodies, browser metadata, and custom logs. Understanding the impact of your JavaScript code will never be easier! [Try it for free](https://lp.logrocket.com/blg/javascript-signup).
leemeganj
1,892,694
826. Most Profit Assigning Work
826. Most Profit Assigning Work Medium You have n jobs and m workers. You are given three arrays:...
27,523
2024-06-18T17:48:28
https://dev.to/mdarifulhaque/826-most-profit-assigning-work-2b8o
php, leetcode, algorithms, programming
826\. Most Profit Assigning Work

**Difficulty:** Medium

You have `n` jobs and `m` workers. You are given three arrays: `difficulty`, `profit`, and `worker` where:

- `difficulty[i]` and `profit[i]` are the difficulty and the profit of the <code>i<sup>th</sup></code> job, and
- `worker[j]` is the ability of the <code>j<sup>th</sup></code> worker (i.e., the <code>j<sup>th</sup></code> worker can only complete a job with difficulty at most `worker[j]`).

Every worker can be assigned **at most one job**, but one job can be **completed multiple times**.

- For example, if three workers attempt the same job that pays `$1`, then the total profit will be `$3`. If a worker cannot complete any job, their profit is `$0`.

Return _the maximum profit we can achieve after assigning the workers to the jobs_.

**Example 1:**

- **Input:** `difficulty = [2,4,6,8,10]`, `profit = [10,20,30,40,50]`, `worker = [4,5,6,7]`
- **Output:** `100`
- **Explanation:** Workers are assigned jobs of difficulty `[4,4,6,6]` and they get a profit of `[20,20,30,30]` respectively.
**Example 2:**

- **Input:** `difficulty = [85,47,57]`, `profit = [24,66,99]`, `worker = [40,25,25]`
- **Output:** `0`

**Constraints:**

- <code>n == difficulty.length</code>
- <code>n == profit.length</code>
- <code>m == worker.length</code>
- <code>1 <= n, m <= 10<sup>4</sup></code>
- <code>1 <= difficulty[i], profit[i], worker[i] <= 10<sup>5</sup></code>

**Solution:**

```php
class Solution {

    /**
     * @param Integer[] $difficulty
     * @param Integer[] $profit
     * @param Integer[] $worker
     * @return Integer
     */
    function maxProfitAssignment($difficulty, $profit, $worker) {
        $ans = 0;
        $jobs = array();
        for ($i = 0; $i < count($difficulty); ++$i) {
            // Pair each job's difficulty with its profit
            $jobs[] = array($difficulty[$i], $profit[$i]);
        }
        sort($jobs);    // sort jobs by difficulty
        sort($worker);  // sort workers by ability

        $i = 0;
        $maxProfit = 0;
        foreach ($worker as $w) {
            // Advance through every job this worker can do,
            // keeping track of the best profit seen so far
            for (; $i < count($jobs) && $w >= $jobs[$i][0]; ++$i) {
                $maxProfit = max($maxProfit, $jobs[$i][1]);
            }
            $ans += $maxProfit;
        }
        return $ans;
    }
}
```

**Contact Links**

- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
mdarifulhaque
1,890,862
1.WHAT IS SELENIUM 2.USE OF SELENIUM FOR AUTOMATION
## 1.)What is selenium? Selenium is an open-source,automated testing tool used to...
0
2024-06-18T17:35:54
https://dev.to/sunmathi/1what-is-selenium-2use-of-selenium-for-automation-jhf
## 1.) What is Selenium?

Selenium is an open-source, automated testing tool used to test web applications across various browsers. Selenium can only test web applications, so desktop and mobile applications unfortunately cannot be tested with it.

Selenium was the first tool that allowed users to control a browser with the help of any language. It allowed professionals to automate various processes, but it had a set of drawbacks, since it was not possible to perform automation testing on certain things with JavaScript. Besides, as web applications got more complex, the restrictions of the tool only started to increase.

Soon, Simon Stewart from Google got tired of the limitations of Selenium. He required a testing tool that was capable of communicating with the browser directly, and hence he came up with WebDriver. A few years later, Selenium merged with WebDriver. This allowed professionals to do automation testing using a single tool, which was much more efficient.

Jason Huggins, an engineer at ThoughtWorks, Chicago, found manual testing repetitive and boring. He developed a JavaScript program to automate the testing of a web application, called JavaScriptTestRunner. Initially the new invention was developed by the employees at ThoughtWorks. However, in 2004 it was renamed Selenium and was made open source. Since its inception, Selenium has been a powerful automation testing tool for testing various web applications across different platforms.

Selenium is easy to use since it is primarily developed in JavaScript. Selenium test scripts can be written in languages like Java, Python, Perl, PHP and Ruby. Selenium is "platform independent", meaning it can be deployed on Windows, Linux and Macintosh. Selenium test scripts can be coded in any of the supported programming languages and can be run directly in most modern web browsers.
## 2.) Use of Selenium for automation

With the growing need for efficient software products, every software development group needs to carry out a series of tests before launching the final product into the market. Test engineers strive to catch the faults or bugs before the software product is released, yet delivered software always has defects. Even with the best manual testing processes, there is always a possibility that the final software product is left with a defect or is unable to meet the end-user requirements. Automation testing is the best way to increase the effectiveness, efficiency and coverage of software testing, and Selenium helps with exactly that.

Manual testing can be time-consuming and prone to human errors. Selenium automation allows tests to be executed quickly and accurately, reducing the likelihood of human mistakes and ensuring consistent test results. Selenium allows developers and testers to automate the testing of web applications across different browsers and platforms.

Most programmers and developers who build web applications and wish to test them every now and then use Selenium. One of the biggest advantages of Selenium, which has made it popular, is its flexibility. Any individual who creates web programs can use Selenium to test the code and applications. Further, professionals can debug and perform visual regression tests as per the requirements of the website or code.

In most organizations it is the job of quality analyst engineers to test the web application using Selenium. They are required to write scripts that help maximize accuracy and test coverage, make changes in the project, and maintain the test infrastructure. QA engineers are responsible for developing test suites that can identify bugs, using which they can inform stakeholders about the benchmarks set for the project. The primary goal of QA engineers is to ensure efficiency and test coverage and increase productivity.
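To make the automation idea concrete, here is a minimal, illustrative Selenium script in Python. It is a sketch only: it assumes the `selenium` package is installed with a matching Chrome driver on the PATH, and the URL and element IDs (`username`, `password`, `submit`) are hypothetical, not from a real site.

```python
# Minimal illustrative Selenium script (requires `pip install selenium`
# and a local Chrome driver; the URL and locators are hypothetical).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    # Fill in the login form and submit it
    driver.find_element(By.ID, "username").send_keys("qa-user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # A simple check that we landed where we expected
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```

A QA engineer would typically wrap steps like these in a test framework (e.g. pytest or JUnit) so they can run in a suite and report results.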
_Advantages of Selenium:_

**Language support:** Selenium allows us to create test scripts in different languages such as Ruby, Java, PHP, Python, JavaScript, and C#, among others. We can write our scripts in any of these programming languages, and Selenium will run them through its language bindings. So when you choose Selenium as your automation testing tool, you won't have to worry about language and framework support, as Selenium takes care of that for you.

**Multi-browser support:** Selenium enables us to test websites on different browsers such as Google Chrome, Mozilla Firefox, Microsoft Edge, Safari, and Internet Explorer. The Selenium community keeps working on improving cross-browser support so that one Selenium script works for all browsers: just one script is required for all of them.

**Open-source availability:** The availability of open-source code is one of the benefits of Selenium. Selenium is a publicly available automation framework that is free to use because it is an open-source product. Work can be saved and reused. The Selenium community is always willing to assist developers and software engineers in automating web browser capabilities and functionality. As an open-source technology, Selenium also allows you to customize the code for easier management and to improve the functionality of preset methods and classes.

**Support across various operating systems:** Different people use various operating systems, and your automation tool must support all of them. Selenium is a highly portable tool that works across different operating systems like Windows, Linux, macOS, Unix, etc.

**Scalability:** Automated testing with Selenium can easily scale to cover a wide range of test cases, scenarios, and user interactions. This scalability ensures maximum test coverage of the application's functionality. 
**Reusable test scripts:** Selenium allows testers to create reusable test scripts that can be used across different test cases and projects. This reusability saves time and effort in test script creation and maintenance.

**Parallel testing:** Selenium supports parallel test execution, allowing multiple tests to run concurrently. This helps reduce the overall testing time, making the development process faster and more efficient.

**Documentation and reporting:** Selenium provides detailed test execution logs and reports, making it easier to track test results and identify areas that require attention.

**User experience testing:** Selenium can simulate user interactions and behavior, allowing testers to assess the user experience and ensure that the application is intuitive and user-friendly.

**Continuous Integration and Continuous Deployment (CI/CD):** Selenium can be integrated into CI/CD pipelines to automate the testing of each code change. This integration helps identify and address issues earlier in the development cycle, allowing for faster and more reliable releases.

**Less hardware usage:** Selenium requires less hardware than other automation tools such as QTP, UFT, and SilkTest.

_Disadvantages of Selenium:_

While Selenium is a powerful automation testing tool, some disadvantages must be covered.

**Limited support for desktop applications:** Selenium has limited support for desktop apps and is mainly designed for web application testing. If your testing requirements involve desktop application testing, you may need to use additional tools or frameworks.

**Lack of built-in reporting:** Selenium does not provide built-in reporting capabilities. Testers must rely on third-party reporting tools or custom code to generate comprehensive test reports, which can be time-consuming and require additional effort. 
**Steep learning curve for beginners:** Selenium requires programming skills to create and maintain test scripts. For individuals with limited programming knowledge, there may be a significant learning curve to overcome, leading to longer ramp-up times.

**Maintenance effort for test scripts:** Test scripts developed using Selenium may require frequent updates and maintenance. As web applications evolve and change, test scripts must be updated to accommodate these changes, which can be time-consuming and resource-intensive. Note also that SMS-based OTP authentication and CAPTCHAs cannot be automated.

**Limited support for mobile testing:** While Selenium can test mobile web applications, it has limited support for native mobile applications. To test native mobile apps, additional tools such as Appium are required.

**Dependency on browser updates:** Selenium relies on browser-specific drivers to interact with web browsers. When browsers release updates, the corresponding Selenium drivers may need to be updated. This dependency on browser updates can sometimes cause compatibility issues and requires additional effort to keep the automation tests up to date.

**No support for image-based testing, SMS, and OTP:** Selenium does not provide support for image-based testing, where tests are based on comparing screenshots or visual elements. Image-based testing is useful for verifying the visual aspects of an application, and the absence of this feature in Selenium can be a limitation.

**Limited support for non-web technologies:** Selenium primarily focuses on web technologies and may not have extensive support for testing non-web technologies such as desktop and mobile applications.

**Limited control over network activities:** Selenium does not have direct control over network activities, such as simulating different network conditions. If your testing requires network-related scenarios, you may need additional libraries to simulate these conditions. 
**Dependency on browser automation:** Selenium automation relies on each browser's automation capabilities, which can sometimes lead to inconsistencies across different browsers. Browser-specific behaviors and limitations may impact the reliability and consistency of automation tests.

## 2. Use of Selenium for Automation

Selenium is used for automation to streamline the testing of web applications, ensuring they function correctly across various browsers and platforms. Now that we are familiar with Selenium, here are some key uses of Selenium for automation.

**Web application testing:**

Functional testing: Verify that each function of the application operates according to its specifications and that the application performs its intended functions correctly. Benefit: Automating functional tests ensures comprehensive coverage of application features, improving reliability and functionality.

Regression testing: Ensure that new changes don't break existing functionality, verifying that new code changes do not adversely affect the existing behavior of the application. Benefit: Automated regression tests can be run frequently and quickly, ensuring that the application remains stable after updates.

**Cross-browser testing:** Purpose: Test web applications across different browsers such as Chrome, Firefox, Safari, and Edge. Benefit: Ensures consistent behavior and appearance of the application in various browser environments.

**End-to-end testing:** Purpose: Run tests covering complex workflows of an application from start to finish. Benefit: Ensures that all integrated components of the application work together as expected, providing confidence in the overall user experience.

**Automated test execution:**

Continuous Integration/Continuous Deployment: Integrate Selenium with a CI/CD pipeline to ensure that tests are run automatically with each build and deployment. 
Benefit: Facilitates early detection of defects, reduces manual effort, and speeds up the delivery of high-quality software.

Scheduled testing: Run automated tests at regular intervals to catch issues early.

**Data-driven testing:** Purpose: Run tests with multiple sets of data inputs to validate the application's behavior under various conditions. Benefit: Enhances test coverage and identifies potential edge cases that could cause failures.

**Smoke testing:** Purpose: Perform a quick check to ensure the most critical functionalities of the application are working after a build or update. Benefit: Automated smoke tests provide rapid feedback on the health of the application, allowing early detection of major issues.

**Performance and load testing:** Integration with tools: Use Selenium scripts with tools like JMeter to simulate user load and measure application performance. Purpose: Identify performance bottlenecks and ensure the application can handle expected user loads.

**Behavior-driven development:** Integration with BDD tools: Use Selenium with BDD frameworks like Cucumber to write tests in plain language. Benefit: Enhances collaboration between technical and non-technical stakeholders.

**Mobile web testing:** Purpose: Verify the application's compatibility with different operating systems, screen resolutions, and devices. Benefit: Ensures a consistent user experience across various environments.

**Headless browser testing:** Purpose: Run tests in headless mode (without a GUI) using headless browsers like headless Chrome. Benefit: Faster test execution and integration with environments where a GUI is not available.

**Compatibility testing:** Purpose: Verify the application's compatibility with different operating systems, screen resolutions, and devices. Benefit: Ensures a consistent user experience across various environments.

**UI testing:** Purpose: Automate testing of the user interface to ensure all UI elements function as expected. 
Benefit: Detects issues with visual elements and user interactions.

**Integration testing:** Purpose: Test the interaction between different components of the application. Benefit: Ensures that integrated parts of the application work together as intended.

**Error detection and logging:** Purpose: Automatically capture and log errors encountered during test execution. Benefit: Simplifies debugging and error resolution.

**Custom testing frameworks:** Purpose: Develop custom frameworks tailored to specific testing needs using Selenium's WebDriver API. Benefit: Flexibility to create robust and maintainable test suites.

**Integrating testing with other tools:**

Jenkins: for CI/CD pipeline integration. TestNG/JUnit: for test case management and reporting. Maven/Gradle: for project build automation. Allure/Extent Reports: for generating detailed test reports. GitHub/GitLab: for version control and collaboration.

Selenium is a versatile tool for automating various aspects of web application testing. It enhances test efficiency, coverage, and reliability, making it an essential component of modern software development and quality assurance processes.

**Benefits:** Efficiency: Automates repetitive testing tasks, saving time and effort. Consistency: Ensures consistent test execution and results. Coverage: Provides extensive test coverage across different browsers and platforms. Scalability: Supports parallel test execution, allowing for scalable testing solutions.

**Selenium in agile environments:** Selenium is an all-in-one tool that can help you streamline your agile testing process. By following the tips below, you can ensure that your automated tests are practical and efficient and that they play a valuable role in your agile development cycle.

Automated tests should be run frequently as part of the continuous integration process. Tests should be written so that they can be run quickly and easily. 
Tests should be designed to test a specific functionality or behavior and should not be complex. New features and changes should be accompanied by automated tests to ensure that the application's functionality remains intact. Automated tests should supplement manual testing rather than replace it altogether.

In agile development, Selenium testing is typically used in the following ways: as part of the regression testing process, to ensure that existing features continue to work as expected after new code has been added; to verify that new features are working as expected before they are released to production; and to help identify and troubleshoot bugs in web applications at both the business and development levels before releasing the application.

**Open source:** Selenium is open source. This means that no license or fee is required; it is totally free to download and use. This is not the case for many other automation tools out there.

**Mimicking user actions:** As stated earlier, Selenium WebDriver is able to mimic user input in real scenarios. You can automate events like keypresses, mouse clicks, drag and drop, click and hold, selecting, and much more.

**Easy implementation:** Selenium WebDriver is known for being a user-friendly automation tool. Selenium being open source means that users can develop extensions for their own needs.

**A tool for every scenario:** As mentioned earlier, Selenium is a suite of tools, and you will most likely find something that fits your scenario and your way of working.

**Language support:** One big benefit is multilingual support. Selenium supports all major languages like Java, JavaScript, Python, Ruby, C#, Perl, .NET, and PHP, giving the developer a lot of freedom and flexibility. 
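The data-driven testing use case described earlier can be sketched in plain Python. Since Selenium itself needs a live browser and driver, this sketch uses a hypothetical `login_is_valid` function as a stand-in for the system under test; with real Selenium, that function's body would be replaced by WebDriver calls that fill the form and read the result from the page.

```python
# Stand-in for the system under test; with Selenium this function would
# drive a real login form via WebDriver instead of checking values directly.
def login_is_valid(username: str, password: str) -> bool:
    return bool(username) and len(password) >= 8

# One row per data set: (username, password, expected outcome).
# The same test logic is reused for every row.
TEST_DATA = [
    ("alice", "s3cretpass", True),
    ("bob", "short", False),
    ("", "longenough", False),
]

def run_data_driven_tests(data):
    """Run the same check against every data row, collecting pass/fail."""
    return [login_is_valid(u, p) == expected for u, p, expected in data]

print(run_data_driven_tests(TEST_DATA))  # [True, True, True]
```

Adding a new scenario is then just adding a row to `TEST_DATA`, which is exactly how data-driven testing enhances coverage without duplicating test code.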
**Conclusion:** Selenium is a top choice for automation testing due to its open-source nature, multi-browser and platform support, language support, and easy integration with other tools. Best practices for using Selenium include using the Page Object Model (POM) design pattern, explicit waits, dynamic locators, test data management tools, and version control systems. Selenium can be used for web application testing, regression testing, cross-browser testing, and load testing.
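The Page Object Model mentioned in the conclusion can be sketched as follows. To keep the example runnable without a browser, `FakeDriver` is a hypothetical stand-in; with real Selenium it would be e.g. a `webdriver.Chrome()` instance, and the locator strings would be `By` tuples.

```python
# Page Object Model sketch. FakeDriver is a stand-in so the example runs
# without a browser; with Selenium it would be a real WebDriver instance.
class FakeDriver:
    def __init__(self):
        self.filled = {}

    def fill(self, locator, text):
        # Records what was typed into which element
        self.filled[locator] = text

class LoginPage:
    """Page object: the login page's locators and actions in one place."""
    USERNAME_FIELD = "id=username"  # illustrative locator strings
    PASSWORD_FIELD = "id=password"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        # Tests call this method instead of touching locators directly,
        # so a UI change only requires updating this one class.
        self.driver.fill(self.USERNAME_FIELD, username)
        self.driver.fill(self.PASSWORD_FIELD, password)

driver = FakeDriver()
LoginPage(driver).log_in("alice", "s3cret")
```

The design point is maintainability: when a locator changes, only `LoginPage` is edited, not every test script that logs in.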
sunmathi
1,892,691
**Lost on the Island of the Internet of Things: IoT and the Series "Lost"** 🏝️
Hello Chiquis! 👋🏻 ‍ Ready to venture onto a mysterious island where technology and...
0
2024-06-18T17:30:49
https://dev.to/orlidev/-perdidos-en-la-isla-del-internet-de-las-cosas-iot-y-la-serie-lost-41n3
webdev, beginners, tutorial, learning
Hello Chiquis! 👋🏻 ‍ Ready to venture onto a mysterious island where technology and survival intertwine? ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xc1n1o7i17esvfb6864j.jpg) Imagine a remote place full of interconnected devices and hidden secrets, where a heterogeneous group of people is forced to join forces to get by. Sound familiar? Yes, I'm talking about the famous series "Lost" 🌊 and the fascinating world of the Internet of Things (IoT). The Internet of Things (IoT) and Its Parallel with "Lost" 🥥 The Internet of Things (IoT) is like the TV series Lost: a world full of possibilities, connections, and mysteries to solve. Just like the characters in the series, each IoT device has its own identity and function, but they are all interconnected in a complex and sometimes enigmatic network. IoT refers to the network of physical objects ("things") that are embedded with sensors, software, and other technologies in order to connect and exchange data with other devices and systems over the Internet. Like the characters of "Lost", 🛰️ IoT devices live on a digital island, connected to each other through an invisible network. Each device, like an individual character, has its own capabilities and goals, but together they form a complex, interdependent ecosystem. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3gtopcvp6ipayzm9v0iz.jpg) Characteristics ✈️ + Connectivity: Like the characters of Lost connected by the island, IoT devices are globally interconnected over the Internet. + Intelligence: Just as the characters have different skills and knowledge, IoT devices process information and make intelligent decisions. + Heterogeneity: Each device is unique, with different functions, similar to the diversity of personalities in Lost. 
+ Dynamism: The IoT network is always active and changing, like the ever-evolving plot of the series. Example: here is a simple snippet for an IoT device that could represent John Locke, who is always looking for signals and patterns on the island (the two helper functions are placeholder stubs so the sketch runs; a real sensor would read hardware): Python 🚀 

```python
import random

def get_environment_data():
    # Placeholder stub: a real device would query its sensors
    return {"temperature": random.uniform(20, 30)}

def has_significant_change(data):
    # Placeholder rule: treat high temperatures as a significant change
    return data["temperature"] > 28

# Environmental sensor that detects changes in its surroundings
class LockeSensor:
    def __init__(self):
        self.environment_data = {}

    def detect_change(self):
        # Collects data from the environment and keeps it if it changed
        data = get_environment_data()
        if has_significant_change(data):
            self.environment_data = data
            print("I detected a significant change in the environment.")
```

The characters and their IoT counterparts 📲 Each episode of Lost reveals more about the characters and their connections, just as each interaction with IoT gives us more data and understanding of how to improve our daily lives. - Jack Shephard, the leader, is the group's doctor, a natural authority figure who seeks to unite the others and find a way home. In the IoT world, Jack could represent a central IoT platform that connects and manages the various devices. He is the IoT control center, leading and coordinating the network. - Kate Austen, the fugitive, has a mysterious past and often acts on instinct. In the IoT world, Kate could symbolize wearable devices, which collect personal data and can sometimes be used in unexpected ways. She is the mobile sensors, collecting data in different situations. - Sawyer Ford, the con man, is sarcastic and cynical but also has a vulnerable side. In the IoT world, Sawyer could represent security devices, which protect data and privacy but can sometimes be too intrusive. He represents resource-management devices, optimizing usage and distribution. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/49uezev3rg7vyybo4qd2.jpg) - John Locke, the mystic, is obsessed with the island's mysteries and believes there is a higher purpose at play. In the IoT world, Locke could symbolize research devices, which collect data to better understand the world around us. Also, as the man of faith, he would be like environmental sensors, always in tune with the surroundings and adapting to changes. - Ben Linus, the manipulator, could represent the big tech companies that collect and use IoT data for their own ends. He could also represent machine-learning algorithms, manipulating data to influence outcomes and decisions. - Hurley is like cloud storage, keeping important records and data. Reliable and able to hold large amounts of information (or secrets). - Sayid, with his technical skills, would be like a diagnostics-and-repair system for the IoT network, always looking to fix problems and improve communication. - Desmond would be the prediction-and-analytics device, able to 'see' the future and help prevent problems before they happen. - Charlie could be an asset-tracking device, always trying to find his way and help others do the same. - Sun would be like an IoT agricultural monitoring device, quietly watching over the growth and health of plants. - Jin could represent IoT communication devices, working to overcome barriers and connect different systems. - Claire would be like an IoT health-monitoring system, caring for and alerting on critical needs. - Eko could be an IoT security system, protecting the network with his imposing presence and moral judgment. 
Just as in Lost, where every discovery leads to more questions, IoT leads us toward a future where each connection brings us closer to understanding the vast potential of this technology. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7qf1xaw71ghyc0ybd2ud.jpg) The island itself is like the IoT network, 🏖️ full of mysteries and data to discover. The 'Others' could represent cybersecurity threats, always lurking and trying to infiltrate the network. Each one contributes something vital to the IoT ecosystem, just as each character contributes to the narrative of Lost. The island's mysteries and threats 📳 Like the island in "Lost", the IoT world is full of mysteries and threats. We don't always know how the devices work, who is collecting our data, or how it is being used. Moreover, there is the possibility that IoT devices are hacked or used for malicious purposes. The search for redemption and community 💻 Despite the dangers, the island of "Lost" also offers the possibility of redemption and community. The characters learn to trust one another, to work together, and to find a new sense of purpose. In the IoT world, we can also find opportunities to create a better future. We can use IoT devices to solve problems, improve our lives, and build stronger communities. Additional aspects to explore 🌐 - The "Smoke Monster" as a metaphor for cyber threats: it is a mysterious and dangerous force that stalks the island of "Lost". In the IoT world, the Smoke Monster could represent cyber threats such as malware and hacking. - The Dharma Initiative as a symbol of government surveillance: it is a secret organization that studies the island of "Lost". In the IoT world, the Dharma Initiative could symbolize government surveillance and data collection. 
- The island as a metaphor for planet Earth: it is a microcosm of planet Earth, with its own ecosystems and limited resources. In the IoT world, the island could represent planet Earth, and IoT devices could represent how we are interconnecting and depleting its resources. Summary: 🗺️ Each IoT device, like each character, has its own unique role to play in the grand scheme of things, working together to create an interconnected, functional network. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pgvt1fv4ajxyjz43hyv0.jpg) Conclusion: The analogy between "Lost" and IoT 🧳 is a reminder that technology is a powerful tool that can be used for good or for ill. It is up to us to decide how we will use this technology to create a future that benefits everyone. Like the characters of "Lost", we must work together to understand and navigate the IoT world. We need to be aware of the risks and opportunities this technology presents, and we must use it responsibly and ethically. Together, we can create a future where IoT is used for the good of all. 🚀 Did you like it? Share your thoughts. For the full article, visit: https://lnkd.in/ewtCN2Mn https://lnkd.in/eAjM_Smy 👩‍💻 https://lnkd.in/eKvu-BHe  https://dev.to/orlidev Don't miss it! References:  Images created with: Copilot (microsoft.com) ##PorUnMillonDeAmigos #LinkedIn #Hiring #DesarrolloDeSoftware #Programacion #Networking #Tecnologia #Empleo #IoT ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9h0qew9qqttyl4cmcik2.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zuzfsp503pow8unp9bme.jpg)
orlidev
1,892,690
Sleep in minutes. Not hours.
I made SleepFast, the AI-powered sleep companion that's revolutionizing the way you sleep. Say...
0
2024-06-18T17:29:50
https://dev.to/sleep_fast/sleep-in-minutes-not-hours-4gag
webdev, ai, career, news
I made [SleepFast](https://sleepfast.io), the AI-powered sleep companion that's revolutionizing the way you sleep. Say goodbye to sleepless nights and hello to deep, rejuvenating sleep. No more relying on harmful [sleeping pills](https://sleepfast.io) – SleepFast offers a natural, safe, and highly effective alternative. Wake up refreshed and ready to tackle your day. [Sleep in minutes](https://sleepfast.io), dream in hours.
sleep_fast
1,892,683
Understanding Concurrency and Parallelism: What's the Difference?
In this article, we will discuss the concepts of concurrency and parallelism, the differences between...
0
2024-06-18T17:23:13
https://dev.to/ryanmabrouk/understanding-concurrency-and-parallelism-whats-the-difference-3d45
computerscience, webdev, beginners, programming
In this article, we will discuss the concepts of concurrency and parallelism, the differences between them, and the challenges we might face while implementing them. Let's dive in 🚶‍♀️. ## What is Concurrency? To understand concurrency, let's start with a simple example. Imagine we have an app that receives user requests, and we are operating on a single thread. This means we handle one request at a time, respond to it, and then move on to the next one. ‼️ But what happens when the number of users increases over time? In this case, the system will become very slow, similar to a restaurant where many customers arrive, but only one person is working to serve all of them. So, you might think of hiring more people in the restaurant to serve the customers faster. Similarly, to handle multiple requests in the app simultaneously, we need to use multiple threads to speed up the process and improve performance. ✅ This is exactly what **Concurrency** means – working on multiple tasks at the same time 🤝. ## What is the Difference Between Concurrency and Parallelism? Both concurrency and parallelism aim to improve performance, but they work differently. If you have a set of threads, each performing a specific task: - In **Concurrency**: The system runs them together through context switching, meaning it runs one thread for a while, then pauses it, and runs another thread, and so on until all tasks are completed. - In **Parallelism**: The threads run simultaneously without switching between them; they operate in parallel. To achieve parallelism, you need a multi-core processor to run the threads in parallel. In contrast, concurrency can be achieved on a single core by **context switching** between threads. ## Should You Always Use Multiple Threads to Improve Performance? ◀️ There are many challenges and potential issues to be aware of and avoid (which we will discuss later). If your task is simple and doesn't require multiple threads, then avoid adding unnecessary complexity. 
However, if you really need to improve performance, then using multiple threads is fine. Here are some common problems you might encounter and should be careful to prevent in your application:

- Race Condition
- Deadlock
- Starvation

We will discuss these problems in detail in future articles.
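As a minimal sketch of the first problem above: several Python threads increment a shared counter, and guarding the read-modify-write with a `threading.Lock` makes the result deterministic. Without the lock, the three-step update (read, add, store) can interleave between threads and updates can be lost, which is exactly a race condition.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(times):
    global counter
    for _ in range(times):
        # Without this lock, the read-modify-write of `counter` could
        # interleave between threads and increments would be lost.
        with lock:
            counter += 1

# Four threads, each adding 10,000 to the shared counter
threads = [threading.Thread(target=safe_increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; unpredictable without it
```

This is also a concrete example of concurrency on a single core: the threads take turns via context switching rather than truly running in parallel.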
ryanmabrouk
1,892,688
Developer Updates - May 2024
Supabase underwent Consolidation Month™ to focus on initiatives that improve the stability,...
0
2024-06-18T17:21:59
https://dev.to/supabase/developer-updates-may-2024-34ki
webdev, programming, ai, opensource
Supabase underwent **Consolidation Month™** to focus on initiatives that improve the stability, scalability, and security of our products. We also have exciting product announcements that we can’t wait to share. {% cta https://supabase.com %} ⚡️ Learn more about Supabase {% endcta %} ## Consolidation Month™ We kicked off Consolidation Month (no it’s not actually trademarked) during the month of May. During this time, every product team within Supabase addressed outstanding performance and stability issues of existing features. Here’s a small subset of initiatives and product announcements as part of Consolidation Month: ## Auth Launches @supabase/ssr for Better SSR Framework Support ![ssr logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9thxuif5xj0jw9px23jf.jpg) The newly released @supabase/ssr package improves cookie management, developer experience, and handling of edge cases in various SSR and CSR contexts. We’ve added extensive testing to prevent issues that users experienced with the @supabase/auth-helpers package. [Announcement](https://supabase.link/auth-ssr-email) ## pgvector v0.7.0 Release Features Significant Performance Improvements ![pgvector](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrbusfd20ntfnh4sdbhv.jpg) pgvector v0.7.0 introduced float16 vectors that further improve HNSW build times by 30% while reducing shared memory and disk space by 50% when both index and underlying table use 16-bit float. The latest version also adds sparse and bit vectors as well as L1, Hamming, and Jaccard distance functions. [Announcement](https://supabase.link/pgvector-v070-github) ## Edge Functions Improve Memory Handling ![no errors](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sefzeq81603rtdo0wghv.jpg) The Edge Functions team has significantly reduced the error rate for functions encountering memory issues by implementing better safeguards. This has greatly minimized errors with the 502 status code. 
Additionally, status codes and limits are now documented separately.

[Status Codes](https://supabase.link/functions-codes-github) | [Limits](https://supabase.link/functions-limits-github)

## Dashboard Supports Bigger Workloads as Projects Grow

![dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x031b5f7tnr3h2kelqrd.jpg)

The Supabase Dashboard is now better equipped to handle your projects, regardless of their size. We have implemented sensible defaults for the amount of data rendered and returned in the Table and SQL Editors to prevent browser performance issues while maintaining a snappy user experience.

[Announcement](https://supabase.link/dashboard-workloads-email)

## Realtime Standardizes Error Codes

![error codes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/chzc7lx2yun7wz3pxxhb.jpg)

Realtime now emits standardized error codes, providing descriptions of their meanings and suggested actions. This enhancement improves your error-handling code and helps to narrow down whether the issue lies with the database, the Realtime service, or a client error.

[Realtime Error Codes](https://supabase.link/realtime-codes-github)

## RLS AI Assistant v2

![AI assistant](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bvht7mprhdre4731eip4.jpg)

We’ve improved the prompt and output of our RLS AI Assistant by including best practices found in our RLS docs and upgrading to OpenAI’s newest GPT-4o. We’ve also introduced numerous test scenarios to make sure you’re getting the right security and performance recommendations by comparing parsed SQL with the help of pg_query.

[Pull Request](https://supabase.link/rls-ai-v2-email)

## Quick product announcements

- [Functions] JSR modules are supported in Edge Functions & Edge Runtime [[Announcement](https://supabase.link/functions-jsr-email)]
- [Functions] Debug Edge Functions with Chrome DevTools [[Docs](https://supabase.link/functions-devtools-email)]
- [Functions] Use the HonoJS web framework with Edge Functions [[Docs](https://supabase.link/functions-hono-email)]
- [Analytics] Log Drains is in Private Alpha [[Announcement](https://supabase.link/log-drains-email)]
- [Realtime] Realtime Authorization Early Access [[Announcement](https://supabase.link/realtime-authz-email)]
- [Docs] SQL to PostgREST API Translator [[Docs](https://supabase.link/sql-to-rest-email)]
- [Client libs] Supabase JavaScript SDK Sentry Integration now supports Sentry SDK v8 [[Commit](https://supabase.link/sentry-v8-email)]

## Made with Supabase ⚡️

- GroupUp - organize social gatherings to hang out with friends [[Website](https://supabase.link/groupup-email)]
- HabitKit - track habits, view daily progress, and stay motivated as you work towards your goals [[Website](https://supabase.link/habitkit-email)]
- Meteron AI - LLM and generative AI metering, load-balancing, and storage [[Website](https://supabase.link/meteron-email)]
- EQMonitor - An app that displays and notifies earthquake information in Japan [[Website](https://supabase.link/eqmonitor-email)]
- GitAuto - AI software engineer that writes, reads, and creates pull requests [[Website](https://supabase.link/gitauto-email)]
- GenPPT - Free AI PowerPoint presentation generator to help you create beautiful slides in minutes [[Website](https://supabase.link/genppt-email)]

## Community highlights

![decorative](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/99d8e7irws5l4sfhyf88.png)

- Make your queries 43,240x faster [[Video](https://supabase.link/faster-queries-email)]
- Exploring Support Tooling at Supabase: A Dive into SLA Buddy [[Article](https://supabase.link/sla-buddy-email)]
- FlutterFlow SuperApp Complex Template: Developing Feed with Supabase [[Video](https://supabase.link/flutterflow-superapp-email)]
- How We Use Supabase in Betashares Direct [[Video](https://supabase.link/betashares-direct-email)]
- AI Assistant to Chat with Supabase Database [[Video](https://supabase.link/ai-chat-db-email)]
- How to use wrappers in Supabase [[Video](https://supabase.link/wrappers-comm-email)]
- Build Realtime Apps with Next.js and Supabase [[Video](https://supabase.link/realtime-next-supa-email)]
- SvelteKit & Supabase Project Build [[Video](https://supabase.link/sveltekit-supa-email)]
- Next.js 14 x Supabase — Build a Team component using shadcn [[Article](https://supabase.link/nextjs-supa-shadcn)]
- Create a Real Time Chat App with Supabase and Angular [[Article](https://supabase.link/realtime-angular-email)]

## ⚠️ Baking Hot Meme Zone ⚠️

If you made it this far, you deserve a treat! Enjoy this devilishly funny meme:

![cat scratcher vs box simulating user UI vs user needs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wcuhmzhemllndalvaolj.png)
yuricodesbot
1,892,687
Learning about 3D model
I'm trying to build my portfolio and I would like the user to be able to click inside the 3D model...
0
2024-06-18T17:20:25
https://dev.to/lmacanda/learning-about-3d-model-f3m
learning, threejs, react, help
I'm trying to build my portfolio and I would like the user to be able to click inside the 3D model on the hero section to move to another section of the profile. Any advice on how I can achieve this? Here's the link to my GitHub repo: [https://github.com/lmacanda/portfolio](https://github.com/lmacanda/portfolio) I'd really appreciate it if someone could share some ideas or good resources where I can find the next steps.
lmacanda
1,892,686
EC-Council to Decrease AI Chasm with Free Cyber AI Toolkit for Members
EC-Council's Pro-Bono Cyber AI Toolkit Sets New Standards for Cybersecurity Training in the US for...
0
2024-06-18T17:19:56
https://dev.to/sharon_dew_3dc728669a0cc3/ec-council-to-decrease-ai-chasm-with-free-cyber-ai-toolkit-for-members-117o
EC-Council's Pro-Bono Cyber AI Toolkit Sets New Standards for Cybersecurity Training in the US for its Certified Members

Tampa, Fla., June 13, 2024: EC-Council, creator of the iconic Certified Ethical Hacker (CEH)® credential, is introducing a first-of-its-kind Cyber AI Toolkit, free for all of its certified members. Designed to empower its membership base of certified cybersecurity professionals, the Cyber AI Toolkit equips members with cutting-edge AI-enabled cybersecurity courses at no cost, helping them be better prepared for today’s evolving cybersecurity landscape with the advent of AI. This highlights EC-Council's commitment to driving standards and advancing global cybersecurity readiness.

As governmental organizations like the FBI and others sound the alarm on the expected increase in cybercriminals utilizing AI in their attacks, the Cyber AI Toolkit, which features 14 hours of online learning, 74 premium videos, and 90 assessment questions, provides EC-Council members with practical insights and hands-on experience in tackling AI-driven cyber threats. This innovative program provides real-world scenarios and lessons curated to advance an organization's cybersecurity readiness while enhancing the skills and rapid response of cybersecurity professionals.

Jay Bavisi, Group President, EC-Council, highlighted the importance of equipping cybersecurity professionals with AI knowledge: "As threat actors increasingly weaponize AI to develop more advanced attack techniques, it is imperative that we provide our community of members with the necessary tools and knowledge to counter these threats. By offering this toolkit for free, we are bridging the AI Chasm by enhancing global cybersecurity standards and advancing continuous skill development.”

The Cyber AI Toolkit responds directly to findings of the latest EC-Council C|EH Threat Report 2024, based on transformative insights gathered from more than 1,000 industry professionals.
The report revealed that 83% of cybersecurity professionals have observed significant shifts in cyber-attack methodologies attributed to AI, and that 80% of organizations have embraced multi-factor authentication as a cornerstone of their defense against escalating cloud threats. Equally crucial is the report's emphasis on continuous training, recognized by 82% of respondents as pivotal in enhancing incident response readiness, while over 70% of participants identify zero-day exploits and social engineering as primary threat vectors. This stark reality, revealed by the EC-Council Threat Report 2024 and often referred to as the "AI Chasm," highlights the disparity between advancing AI-driven cybersecurity solutions and the evolving tactics employed by threat actors.

With the Cyber AI Toolkit, EC-Council is committed to shaping the future of the cybersecurity industry. From the inception of the CEH program to the introduction of AI-enabled courses and now the Cyber AI Toolkit, EC-Council remains dedicated to democratizing cybersecurity education and equipping professionals worldwide with the skills needed to safeguard digital landscapes effectively.

For more information on the Cyber AI Toolkit and enrollment details, certified members are encouraged to visit the ASPEN portal.

About EC-Council: Founded in 2001, EC-Council is a trusted authority in cybersecurity education and certification. Best known for its Certified Ethical Hacker program, EC-Council also offers training, certificates, and degrees on a wide spectrum of subjects, from Computer Forensic Investigation and Security Analysis to Threat Intelligence and Information Security. EC-Council is an ISO/IEC 17024 Accredited Organization recognized under the U.S. Defense Department Directive 8140/8570 and many other authoritative cybersecurity bodies worldwide. With over 350,000 certified professionals globally, EC-Council remains a gold standard in the industry.
With a steadfast commitment to diversity, equity, and inclusion, EC-Council maintains a global presence with offices in the US, the UK, India, Malaysia, Singapore, and Indonesia.

For press inquiries, please contact: press@eccouncil.org

For more information, please visit https://www.eccouncil.org/
sharon_dew_3dc728669a0cc3
1,892,682
How to review as a Pro
There is some dispute about whether it is worth having a code review as a step inside the development...
0
2024-06-18T17:05:18
https://dev.to/nadia/how-to-review-as-a-pro-59a0
codereview, productivity
There is some dispute about whether it is worth having code review as a step in the development pipeline: does adopting this practice do more harm than good? I personally believe that good code review can add a lot to our work, both from the team's perspective and for career and self-development. This explanation will show how to get the most benefit from this process.

## What not to review

It’s important to know not only about good practices, but also about things to avoid. Sometimes, it could even be better not to review code at all and to replace it with some other practice, like architectural review, pair or XP programming, than to have a step in the process that will damage team relationships.

### Code style

Many people strive to find as many “spelling” mistakes as they can during code review, arguing over the number of spaces in an indent, line breaks, and other language syntactic sugar. That's not the right approach. A consistent code style is essential, but it doesn’t need to be enforced manually. What you need (among other things) is to automate this routine and boring process. As Rob Pike said in his closing [talk](https://commandcenter.blogspot.com/2024/01/what-we-got-right-what-we-got-wrong.html) at the GopherConAU conference: _every language worth using has a standard formatter_. There are lots of linters for different programming languages:

- Java (built-in tools in IntelliJ or Eclipse)
- Python (ruff, black, pylint, flake8)
- Go (gofmt - the default code formatter)

Not only can you check how well your code follows certain rules, but you can also add auto-formatting options. This way, most inconsistencies are fixed automatically, saving time. Linter start-up can be automated, for example, with the [pre-commit tool](https://pre-commit.com/) for Python, and this logic can be integrated into a GitHub PR or new-commits workflow. So before a new Pull Request is reviewed, it's the author who resolves any code style issues.
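As a quick illustration of wiring a formatter into pre-commit, here is a minimal `.pre-commit-config.yaml` for Python. The pinned `rev` is just an example; pin whatever release you actually use:

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4  # example pin; choose your own release
    hooks:
      - id: ruff         # lint
      - id: ruff-format  # auto-format
```

With this in place, `pre-commit run --all-files` (or the git hook installed via `pre-commit install`) formats and lints the code before it ever reaches a reviewer.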
### Attitude

This paragraph is not about what, but about how, or rather how not, to do a review. Some people treat code reviews as a challenge to uncover as many mistakes and faults as possible. Even with the best intentions, this approach can make the process more personal than it should be. When work is taken too much to heart, there’s a risk of negative and strong emotions that can harm our attitude towards colleagues and lead to rushed or wrong decisions. A PR review is not a place to compete; you do not need to win a battle of corrections. It’s a place to clear up questionable aspects of the implementation, better understand what’s happening, learn something new, or help the code's author discover new things.

## What to review

What useful things can be done? After minimising any harm, it’s time to focus on productive approaches to code review. Everything I write below will pay off in the future. These practices will make it easier to understand and maintain the codebase by yourself later on if everyone who's touched it gets hit by a bus.

### Logic

One of the main goals of a review is to understand what is going on in the code and whether it meets the goal of the initial task. If you can't understand what the hell is going on, it could mean a couple of things:

- The code is too complex logically or has too many layers of abstraction. In the future, this will make it harder to add more logic or understand the code's original purpose, increasing the time needed to fix related problems.
- If this code horrifies you now, imagine how you'll feel when the responsibility to change this block suddenly falls on you because the original author is no longer available.
- When we don't know the goal or reason for the code changes, how can we understand if the implementation is correct? So, before the review, we need to understand the author's exact intent.
Yes, this will increase the time spent on someone else’s task, taking time away from your own, more interesting work. But software development is about collective effort and communication. With the development of AI coding co-pilots, we need to read and review code much more frequently.

### Language usage

A linter can’t solve all our problems with incorrect or non-idiomatic usage of language constructs. There are some supporting tools for this purpose, like [refurb](https://github.com/dosisod/refurb) for Python. However, these tools usually only help with simple concepts. It's important to follow good practices, such as not storing passwords and credentials in your code, and to mentor your colleagues to do so. In the age of fast-developing AI tools, there will certainly be better instruments to help us with these tasks. For now, we still need to check suggested options for correctness and can’t blindly trust any information provided by LLM tools. We certainly can ask ChatGPT about some things, but sometimes you just need to know what to ask about before composing a question. So knowledge sharing is still in demand.

## How to review

One common argument against code review is that it slows down the development process. But bug fixing and releasing hot fixes to production also slow down feature development. It's up to us to create a process that minimises harm. Although it depends on the company and team structure, the process can be flexible.

- Agree on a timeframe for code reviews and establish conventions for skipping review in certain cases, with agreement from all team members.
- Set fixed time slots for reviewing others' code instead of writing your own. This can help manage long review queues and add structure to the work routine. You will also have some time milestones to pause the review process and take a break.
- Break down your review into parts: start with the higher-level logic of the code, and then move to individual functions.
Reviewing by commits rather than by files can also be more effective. But if you are strongly against the code review practice, who am I to tell you what to do? Code review can be replaced by architectural review, pair or XP programming.

Personally, I had a few problems with my review comments. They could be perceived as quite harsh and emotional, though I didn't mean them that way. Often, my comments were written in an imperative tone, lacking polite and suggestive language. One technique that helped change this situation was the [conventional comments](https://conventionalcomments.org/) approach. A comment is marked by a keyword like `question`, `issue`, `remark`, `suggestion`, or any other that seems appropriate. This way, the code author can understand the severity of your notes and decide what definitely needs to be changed and what can stay as it is. For me, this tag also worked like magic, changing the narrative. When I write down a `question`, I encourage myself to expand on the initial idea further in text. The more explicitly and neutrally you write your comments, the better response you will get from the code's creator.

Also, if you see a convenient or effective code implementation from a less experienced colleague, don’t be afraid to point it out. Positive feedback is rare in such a short feedback loop. Yes, people are different, but most appreciate recognition for their work.

### How to help reviewers

It’s well known that many people don't like to review other people's code. In my personal opinion, this apprehension comes from the thought: 'Oh no, I have to sort through someone else's messy code'. We can take a few simple steps to make code reviews less frustrating.

- Write a description for the PR, so it's easy to understand the main goal and what’s happening inside.
- Add links to the task issue in the tracker, if you have one, so the reviewer can understand the initial task more deeply.
- Try to divide the pull request into logical commits, so each commit can be checked separately.
- Avoid huge PRs. Sometimes, the task can be split into smaller PRs that are easier to review. Other times, it's helpful to have regular architecture or code review sessions to understand the current task and discuss the next steps of implementation.

The first two points also help leave a digital paper trail. The last ones add more structure to the work. You can even separate commits after the code is written. Thus, we can look from the upper abstraction layer and check that we haven't forgotten any important part of our initial plan. Sometimes, it helps to take a pause after creating a pull request and check it quickly from an outsider's perspective. This break can reveal obvious mistakes that can be fixed quickly.

## Ending

I approach the review process with the aim of gaining knowledge — about the product, new features, or programming language. It’s also an opportunity to help colleagues avoid making some tricky mistakes. Questions that arise during reviews can lead to a better understanding of our codebase or highlight areas that need rewriting and simplification. A thorough review pays off when time passes and you need to refactor that part of the code. Lately, I've been refactoring some quite ugly solutions that I could have insisted on reworking half a year ago during the code review stage (the last line of defence before merging to master). But hindsight is 20/20, and my foresight wasn't strong enough, so I'm paying the price now.

PS: I want to thank the reviewers of this article. Without their help, the text would be worse, and I wouldn't have gained new knowledge of the English language :heart:
nadia
1,892,679
One Byte Explainer: Algorithm
Explainer An algorithm is a detailed step-by-step instruction to get to a goal. It forms...
0
2024-06-18T16:56:35
https://dev.to/pachicodes/one-byte-explainer-algorithm-27do
devchallenge, cschallenge, computerscience, beginners
## Explainer

An algorithm is a detailed, step-by-step set of instructions for reaching a goal. It forms the basis of all computer programming but is also present in our daily lives, like when you make a sandwich by always following the same steps.
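To make the idea concrete, here is a small illustrative sketch (not part of the original one-byte explainer) of the same "fixed steps toward a goal" pattern in code:

```javascript
// A classic tiny algorithm: find the largest number by following the same
// fixed steps every time, just like assembling a sandwich.
function findMax(numbers) {
  let max = numbers[0];        // Step 1: start with the first item
  for (const n of numbers) {   // Step 2: look at every item in turn
    if (n > max) max = n;      // Step 3: remember the biggest seen so far
  }
  return max;                  // Step 4: the goal is reached
}

console.log(findMax([3, 7, 2, 9, 4])); // 9
```

No matter the input, the steps never change; only the data flowing through them does.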
pachicodes
1,892,656
Optimized Toggle Visibility
Case 1 &lt;i onClick={() =&gt; setVisible(!visible)} className="fi fi-rr-eye...
0
2024-06-18T16:53:47
https://dev.to/mahmudurbd/optimized-toggle-visibility-14k8
webdev, programming, javascript, react
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ath5smqewaxmtwer0nzf.png)

#### Case 1

```
<i
  onClick={() => setVisible(!visible)}
  className="fi fi-rr-eye absolute top-10 right-3"
></i>
```

#### Case 2

```
<i
  onClick={() => setVisible((currentVal) => !currentVal)}
  className="fi fi-rr-eye absolute top-10 right-3"
></i>
```

Both of the provided `onClick` handlers in your `<i>` elements are functionally similar and will work correctly. However, they have slight differences in terms of readability and potential optimizations.

#### For Case 1:

- This directly toggles the `visible` state using the current value of `visible`.
- If `visible` is managed in a parent component, this can lead to stale closures because the `visible` value might not be the most up-to-date value at the time of execution.

#### For Case 2:

- This uses the functional form of the state setter, which ensures that the state update is based on the most recent state value, even if the component re-renders before the state is updated.

### Recommendation:

The second handler is generally more robust and is the recommended approach because it leverages the functional update form. This method ensures that you always get the latest state value, avoiding potential issues with stale state closures. Here's the recommended version:

```
<i
  onClick={() => setVisible((currentVal) => !currentVal)}
  className="fi fi-rr-eye absolute top-10 right-3"
></i>
```

### Explanation:

- **Stale Closures**: In the first handler, if the component re-renders between the time the `onClick` is set up and the time it is executed, the `visible` value might be outdated, leading to unexpected behavior. The second handler avoids this by deriving the next value from the current state.
- **Readability**: The second handler is more explicit about its intent to toggle the state based on the current state, making the code easier to understand and maintain.
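To see why the functional form matters, here is a dependency-free sketch of how a batch of queued state updates gets processed. `simulateBatch` is an invented helper that mimics React's updater-queue behavior for illustration; it is not a React API:

```javascript
// Minimal sketch (plain JS, no React) of processing a batch of queued
// state updates the way React does.
function simulateBatch(initial, updates) {
  let state = initial;
  for (const u of updates) {
    // Functional updaters receive the latest value; plain values replace it.
    state = typeof u === "function" ? u(state) : u;
  }
  return state;
}

const visible = false; // the value captured by the onClick closure at render time

// Case 1: two rapid clicks enqueue the same stale computation (!visible is true both times).
const direct = simulateBatch(visible, [!visible, !visible]);

// Case 2: each functional updater sees the previous updater's result.
const functional = simulateBatch(visible, [(v) => !v, (v) => !v]);

console.log(direct);     // true  (the second toggle was lost)
console.log(functional); // false (both toggles applied)
```

With the direct form, two toggles in one batch collapse into one; the functional form applies both.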
mahmudurbd
1,892,678
Life on the Moors: A Journey Through Emily Brontë’s Wuthering Heights
Stepping into Emily Brontë's World Imagine being whisked away to the wild and windswept...
0
2024-06-18T16:52:44
https://dev.to/markhfd/life-on-the-moors-a-journey-through-emily-brontes-wuthering-heights-54p5
novel, romance, books
## Stepping into Emily Brontë's World

Imagine being whisked away to the wild and windswept moors of Yorkshire. That's where Emily Brontë, a reclusive yet brilliant writer, found inspiration for her only novel, Wuthering Heights. Emily, born in 1818, lived a quiet life in the parsonage with her famous siblings. Though she penned just one novel, its intensity and passion have left an indelible mark on literature.

![Life on the Moors: A Journey Through Emily Brontë’s Wuthering Heights](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpduwitzt156st5giwgi.jpg)

## The Tale Begins

Our story kicks off with Mr. Lockwood, a curious tenant of Thrushcross Grange, who ventures up to Wuthering Heights. What he finds is a world brimming with mystery and brooding characters. Through the narration of Nelly Dean, the housekeeper, we are transported back in time to uncover the dark and passionate saga of the Earnshaw and Linton families.

## A Love Like No Other

When Mr. Earnshaw brings home an orphan named Heathcliff, it sets off a series of events that forever alter the lives of everyone at Wuthering Heights. Heathcliff forms a deep, unbreakable bond with Earnshaw's daughter, Catherine. Their love is fierce and all-consuming, but it's not without its challenges. Catherine's brother, Hindley, harbors a deep-seated hatred for Heathcliff, which only grows over time. As they grow older, Catherine finds herself torn between her wild love for Heathcliff and the security offered by Edgar Linton, a refined gentleman from Thrushcross Grange. Her choice to marry Edgar sets Heathcliff on a path of revenge that engulfs everyone in its wake.

## The Next Generation

The legacy of Heathcliff and Catherine's tumultuous love affair spills over into the next generation. Young Cathy, Catherine's daughter, and Hareton Earnshaw, Hindley's son, are caught in the crossfire. Despite their troubled beginnings, they find a way to break free from the cycle of vengeance and hatred, offering a glimmer of hope that love can indeed triumph over darkness.

![Life on the Moors: A Journey Through Emily Brontë’s Wuthering Heights](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hh60c5kuvpz3syem5l8.jpg)

## The Highlights

### Characters that Stay with You

One of the most compelling aspects of Wuthering Heights is its unforgettable characters. Heathcliff, with his brooding intensity, and Catherine, with her wild and willful spirit, create a love story that is as haunting as it is passionate. Their complex personalities and their tumultuous relationship make the novel a riveting read.

## A Setting Like No Other

The Yorkshire moors are more than just a backdrop in this novel; they are characters in their own right. Emily Brontë's vivid descriptions bring the moors to life, making them an integral part of the story. The moody, atmospheric setting perfectly mirrors the emotional turbulence of the characters.

## A Unique Storytelling Approach

The novel's narrative structure is both unique and effective. By employing multiple perspectives, primarily through Nelly Dean's narration, Brontë adds layers of depth to the story. This approach allows readers to see events from different angles, enriching their understanding of the characters and their motivations.

## The Challenges

### Dark and Disturbing Themes

Wuthering Heights is not a light read. It delves deeply into themes of revenge, cruelty, and obsession. The relentless negativity and suffering experienced by the characters can be overwhelming for some readers, making it a challenging read.

## Complex and Troubling Relationships

The relationships in the novel are intricate and often toxic. The intense emotions and destructive actions of the characters can be difficult to navigate, and some readers may struggle to sympathize with their choices. This complexity, while adding depth, can also make the story feel heavy and bleak.

## A Lack of Traditional Heroes

The novel lacks clear, traditional heroes, which might be a drawback for readers who prefer more straightforward, morally clear narratives. Most characters in Wuthering Heights possess significant flaws, making it hard to root for any one of them unequivocally.

![Quotes to Remember](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ghfhjt28px2qd4fs366a.jpg)

## Quotes to Remember

“Whatever our souls are made of, his and mine are the same.”

“I cannot live without my soul.”

“I wish I were a girl again, half-savage and hardy, and free.”

## Final Thoughts

Wuthering Heights by Emily Brontë is a powerful and haunting novel that explores the darker sides of human nature. Its intense characters, vivid setting, and unique narrative structure make it a standout work of literature. Despite its dark themes and complex relationships, the novel remains a timeless classic, offering a deep and compelling look at love, revenge, and the human spirit. For those who enjoy gothic romance and emotionally charged stories, Wuthering Heights is a must-read. It’s a testament to [Emily Brontë](https://mxstories.blogspot.com/2024/06/a-romantic-novel-review-of-wuthering.html)’s storytelling prowess and her ability to create worlds that linger in the mind long after the final page is turned. For a more in-depth look, check out this comprehensive review of Wuthering Heights. If you enjoyed this review and want to explore more literary treasures, visit MX Stories. You'll find a wealth of reviews, recommendations, and literary insights to fuel your reading adventures.
markhfd
1,892,677
Filament PHP Blade UI Components Visually Explained
Visual references for each Filament PHP Blade UI component available for your view.
0
2024-06-18T16:52:23
https://dev.to/andreiabohner/filament-php-blade-ui-components-visually-explained-3941
filament, php, blade, components
---
title: Filament PHP Blade UI Components Visually Explained
published: true
description: Visual references for each Filament PHP Blade UI component available for your view.
tags: filament, PHP, blade, components
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4kr8edwv2i829o6kzmvt.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-18 14:10 +0000
---

In addition to the awesome full-stack components that [Filament PHP](https://filamentphp.com/docs/) provides, some [UI components](https://filamentphp.com/docs/3.x/support/blade-components/overview) are also available to be used independently in your Blade view files. I've been working on creating references to easily visualize these Blade UI components. You can check them all out on my blog: [https://andreia.github.io](https://andreia.github.io/blog/2024-06-15/filament-php-blade-ui-components-visually-explained/)

![Filament PHP Blade UI Components Overview](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5h20vq2e1pt90hs66be2.jpg)

I hope they come in handy for your projects! :)
andreiabohner
1,892,676
40 Days Of Kubernetes (1/40)
Day 1/40 Docker Tutorial For Beginners - Docker Fundamentals Video Link It's...
0
2024-06-18T16:51:04
https://dev.to/sina14/40-days-of-kubernetes-2c10
kubernetes, docker, 40daysofkubernetes
## Day 1/40

# Docker Tutorial For Beginners - Docker Fundamentals

[Video Link](https://www.youtube.com/watch?v=ul96dslvVwY)

It's all about easy Build, Ship and Run.

@piyushsachdeva
[Git Repository](https://github.com/piyushsachdeva/CKA-2024/)
[My Git Repo](https://github.com/sina14/40daysofkubernetes)

### What we learn:

1. What is Docker
2. Understanding Containers vs. Virtual Machines
3. Containers vs. Virtual Machines with the help of a building and house analogy
4. Challenges with non-containerized applications
5. How Docker solves the challenges
6. A simple Docker workflow
7. Docker architecture

### Tasks

**Docker Workflow**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/olpxr9vrrs5oauj6oc8m.png)

**Docker Architecture**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tav2hymoraoncpb5mepa.png)
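To tie the "Build, Ship and Run" idea to something concrete, here is a minimal, hypothetical Dockerfile (not from the video; the app name and file names are placeholders):

```dockerfile
# Build: describe the image, i.e. the base OS, app files, and start command
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "index.js"]
```

The three stages then map to three commands: `docker build -t myapp .` (build the image), `docker push <registry>/myapp` (ship it to a registry), and `docker run myapp` (run it as a container).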
sina14
1,892,674
BEST CAT Coaching In Mansarovar Jaipur
CAT Coaching in Jaipur - T.I.M.E. Institute, Mansarovar T.I.M.E. Institute in Mansarovar, Jaipur, is...
0
2024-06-18T16:50:35
https://dev.to/rachit_fe61bda498a0a35af0/best-cat-coaching-in-mansarovar-jaipur-490k
CAT Coaching in Jaipur - T.I.M.E. Institute, Mansarovar

T.I.M.E. Institute in Mansarovar, Jaipur, is your top choice for CAT coaching for academic excellence and career success! With expert faculty, comprehensive study material, and regular mock tests, we offer the best coaching for CAT, CMAT, MAT, Bank PO, SSC CHSL, IPM, CLAT, GRE, GMAT, and much more! Our proven track record of top CAT scorers speaks for itself. We also cover Government Job Exams (SSC CGL Tier 1 & 2), International Graduate Admissions (GRE, GMAT), and Campus Recruitment Training (CRT) for top-notch corporate placements. Join us at T.I.M.E. Jaipur (the best CAT coaching institute in Jaipur) for a brighter future! Join us for state-of-the-art infrastructure and dedicated support to achieve your management career goals.
rachit_fe61bda498a0a35af0
1,892,673
Build your First AI Agent with Julep: A Step-by-Step Guide
Creating an AI app from scratch can be a very challenging task. Whether you want to build a simple...
0
2024-06-18T16:47:52
https://dev.to/julep/building-your-first-ai-application-with-julep-a-step-by-step-guide-4n71
webdev, javascript, ai, react
Creating an AI app from scratch can be a very challenging task. Whether you want to build a simple chatbot or an advanced intelligent virtual assistant, it can take weeks to develop the desired app successfully. But that’s where Julep comes to the rescue. [Julep](https://git.new/julep) is a platform that helps you build stateful, functional LLM-powered applications. With Julep, you can build a fully functional AI app with just a few lines of code.

Platforms like OpenAI's GPT-3, Microsoft's Azure Bot Service, and Google's Dialogflow can build AI applications. However, Julep stands out due to advantages like statefulness to track conversation history and context, easy integration with multiple LLMs, and a user-friendly interface for managing users, agents, and sessions.

In this blog, we will create Movio, an AI-powered Movie Companion App that provides recommendations and information about any movie the user asks for. We will walk through each step and understand how you can use Julep in your projects. Let’s get started!

## Prerequisites

Make sure you have Node.js installed on your device. Download and install Node.js from the [official website](https://nodejs.org/en/download/package-manager).

If you like the posts on the Julep blog so far, please consider [giving Julep a star on GitHub](https://git.new/julep); it helps us reach and help more developers.

{% embed https://git.new/julep %}

## Creating React App

To create a React app, run this command in the terminal:

```sh
npm create vite@latest
```

You can check out the [Vite Docs](https://vitejs.dev/guide/) for details on creating a React app. Create the basic structure in the `App.jsx` file.
Add an `<input>` tag allowing the users to enter their query:

```html
<div className="container">
  <h1>Hi, I'm Movio</h1>
  <h4>Your Ultimate Movie Companion</h4>
  <div id="conversation">
    {conversation.map((item, index) => (
      <p key={index} className={item.role}>
        {item.message}
      </p>
    ))}
  </div>
  <input
    type="text"
    id="queryInput"
    placeholder="Ask me anything about Movies..."
    value={query}
    onChange={(e) => setQuery(e.target.value)}
  />
  <button onClick={sendQuery}>Submit</button>
</div>
```

To handle and capture the user’s query, we define some state:

```js
const [query, setQuery] = useState("");
const [conversation, setConversation] = useState([]);
```

We have used the `useState()` hook to define the `query` and `conversation` variables along with their state-updating functions.

```js
const sendQuery = async () => {
  if (!query) return;

  // Append user message to conversation
  setConversation((prev) => [...prev, { role: "user", message: query }]);
  setQuery("");

  try {
    const response = await axios.post("http://localhost:3000/chat", {
      query,
    });
    const agentResponse = response.data.response;

    // Append agent response to conversation
    setConversation((prev) => [
      ...prev,
      { role: "agent", message: agentResponse },
    ]);
  } catch (error) {
    console.error("Error fetching response:", error);
  }
};
```

The function `sendQuery()` first checks if the query is empty. If it isn't, it updates the `conversation` state by appending a new object with `role` set to `"user"` and `message` set to the `query`. Then, it resets the `query` to an empty string using `setQuery("")`. Inside the try block, `axios` posts the user's query to the endpoint. The server's response is stored in `agentResponse`, which is extracted from the `response` data. Next, `setConversation` is used again to add the agent's response to the `conversation` state, with `role` as `"agent"` and `message` as `agentResponse`. Finally, any errors during the axios request are caught in the catch block and logged to the console.
Here’s the full `App.jsx` code: ```js import React, { useState } from "react"; import axios from "axios"; function App() { const [query, setQuery] = useState(""); const [conversation, setConversation] = useState([]); const sendQuery = async () => { if (!query) return; // Append user message to conversation setConversation((prev) => [...prev, { role: "user", message: query }]); setQuery(""); try { const response = await axios.post("http://localhost:3000/chat", { query, }); const agentResponse = response.data.response; // Append agent response to conversation setConversation((prev) => [ ...prev, { role: "agent", message: agentResponse }, ]); } catch (error) { console.error("Error fetching response:", error); } }; return ( <div className="container"> <h1>Hi, I'm Movio</h1> <h4>Your Ultimate Movie Companion</h4> <div id="conversation"> {conversation.map((item, index) => ( <p key={index} className={item.role}> {item.message} </p> ))} </div> <input type="text" id="queryInput" placeholder="Ask me anything about Movies..." value={query} onChange={(e) => setQuery(e.target.value)} /> <button onClick={sendQuery}>Submit</button> </div> ); } export default App; ``` Let's start integrating Julep in our project. ## Installing Libraries For the Movie Companion app, we will install some necessary libraries. These libraries are: - [express](https://expressjs.com/) - To create and manage your web server - [julep SDK](https://docs.julep.ai/api-reference/js-sdk-docs) - To interact with a specific service or API provided by Julep - [body-parser](https://www.npmjs.com/package/body-parser) - To parse incoming request bodies, making it easier to handle data sent by clients - [cors](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#:~:text=Cross%2DOrigin%20Resource%20Sharing%20(CORS)%20is%20an%20HTTP%2D,browser%20should%20permit%20loading%20resources.) 
  To enable cross-origin requests, allowing your server to handle requests from different domains
- [dotenv](https://www.dotenv.org/) - To retrieve the value stored in the `.env` file
- [axios](https://axios-http.com/docs/intro) - A promise-based HTTP client for Node.js and the browser

Run this command to install the libraries:

```sh
npm install express @julep/sdk cors body-parser dotenv axios
```

## Integrating Julep

To integrate Julep, we need an API key. Go to [platform.julep.ai](http://platform.julep.ai) and sign in with your Google account credentials. Copy the _YOUR API TOKEN_ present in the top-right corner.

![Julep API Token](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ao5kts1slunn5wpb5itk.png)

This API Token will serve as your API key. Create a `.env` file in your project directory and add the following line:

```sh
JULEP_API_KEY=api_key
```

Replace `api_key` with the copied API Token.

Create a file in the `src` directory and name it `server.js`. All the Julep code will go into this file.

First, we will import the required libraries:

```js
import express from "express";
import julep from "@julep/sdk";
import bodyParser from "body-parser";
import cors from "cors";
import { fileURLToPath } from "url"; // Import the fileURLToPath function
import path from "path";
import dotenv from "dotenv";
```

Create a new `client` using the Julep SDK's `Client` class. This client interacts with the Julep API and initializes the managers for agents, users, sessions, documents, memories, and tools. Note the `dotenv.config()` call, which loads the `.env` file so that `process.env.JULEP_API_KEY` is actually populated:

```js
dotenv.config();

const apiKey = process.env.JULEP_API_KEY;
const client = new julep.Client({ apiKey });
```

Now, we will create an Express app instance to serve as the backend server. Use `bodyParser.json()` to configure the app to parse incoming JSON requests automatically, and `cors()` to enable Cross-Origin Resource Sharing (CORS), allowing requests from multiple origins. 
```js
const app = express();
app.use(bodyParser.json());
app.use(cors());
```

Set up an asynchronous route handler for POST requests to the Express app's `/chat` endpoint:

```js
app.post("/chat", async (req, res) => {
```

Inside the try block, create a `query` variable that stores the query entered by the user:

```js
try {
  const query = req.body.query;
```

Now, let's create users, agents, and sessions to perform the interaction with the Julep API.

### Creating User

A [User](https://docs.julep.ai/concepts/users) object represents an entity, either a real person or a system, that interacts with the application. Every AI application developed using Julep supports multiple users, each capable of interacting with the Agent. Each of these users is distinct, meaning they have their own unique identities and assigned roles.

A user is an optional entity, and an application can function properly without defining one. However, it's advisable to create a user profile for each individual or system interacting with the Agent for better organization and tracking. Specifically, adding some basic details about the user can help the application better understand their behavior. This enables the application to provide personalized results tailored to the user's preferences and needs.

When it comes to creating a user, Julep offers a `users.create()` method which can be called on the `client` to create a user. 
Creating a user takes four attributes:

- Name - Name of the user
- About - Small description of the user
- Documents - Essential documents formatted as text tailored to the user's needs (Optional)
- Metadata - Additional data beyond the ID that pertains to the user within the application (Optional)

Here's an example:

```js
const user = await client.users.create({
  name: "Sam",
  about: "Machine Learning Developer and AI Enthusiast",
  docs: [{ title: "AI Efficiency Report", content: "...", metadata: { page: 1 } }], // Optional
  metadata: { db_uuid: "1234" }, // Optional
});
```

Now, let's create a user for our Movie Companion app:

```js
const user = await client.users.create({
  name: "Ayush",
  about: "A developer",
});
```

Here, we have created a user named `Ayush` with `A developer` as the description.

### Creating Agents

An [Agent](https://docs.julep.ai/concepts/agents) is the intelligent interface between the user and the application, handling all the interactions and enhancing the user experience. Agents are programmed to process the queries the user asks and provide tailored results or suggestions.

An Agent contains all the configuration and settings of the LLM models you want to use in your AI application. This enables applications to carry out specific tasks and cater to the individual preferences of users. Agents can be as simple as a chatbot or as complex as highly sophisticated AI-driven assistants capable of understanding natural language and performing intricate tasks.

Just like users, Julep includes an `agents.create()` method to create an agent. 
Creating an agent takes a collection of attributes:

- Name - Name of the agent
- About - Small description of the agent (Optional)
- Instructions - List of instructions for the agent to follow (Optional)
- Tools - List of functions for the agent to execute tasks (Optional)
- Model Name - LLM model that the agent will use (Optional)
- Settings - Configuration for the LLM model (Optional)
- Documents - Important documents in text format to be used by the agent to improve the persona (Optional)
- Metadata - Additional information apart from the ID to identify the user or agent (Optional)

Here's an example:

```js
const agent = client.agents.create({
  name: "Cody",
  about:
    "Cody is an AI powered code reviewer. It can review code, provide feedback, suggest improvements, and answer questions about code.",
  instructions: [
    "On every new issue, review the issue made in the code. Summarize the issue made in the code and add a comment",
    "Scrutinize the changes very deeply for potential bugs, errors, security vulnerabilities. Assume the worst case scenario and explain your reasoning for the same.",
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "github_comment",
        description:
          "Posts a comment made on a GitHub Pull Request after every new commit. The tool will return a boolean value to indicate if the comment was successfully posted or not.",
        parameters: {
          type: "object",
          properties: {
            comment: {
              type: "string",
              description:
                "The comment to be posted on the issue. It should be a summary of the changes made in the PR and the feedback on the same.",
            },
            pr_number: {
              type: "number",
              description: "The issue number on which the comment is to be posted.",
            },
          },
          required: ["comment", "pr_number"],
        },
      },
    },
  ],
  model: "gpt-4",
  default_settings: {
    temperature: 0.7,
    top_p: 1,
    min_p: 0.01,
    presence_penalty: 0,
    frequency_penalty: 0,
    length_penalty: 1.0,
  },
  docs: [{ title: "API Reference", content: "...", metadata: { page: 1 } }],
  metadata: { db_uuid: "1234" },
});
```

Now, let's create the agent for our Movio app:

```js
const agent = await client.agents.create({
  name: "Movie suggesting assistant",
  model: "gpt-4-turbo",
});
```

As you can see, we have used the `gpt-4-turbo` LLM model for this agent, but Julep supports multiple LLM models that you can use to create AI applications. Check out the [documentation](https://docs.julep.ai/guides/llms) to learn more.

### Creating Sessions

A [Session](https://docs.julep.ai/concepts/sessions) is the entity in which users interact with the agent; it is the period of interaction between the user and the agent. It serves as a framework for the entire interaction, including the back-and-forth messaging and any other relevant details. Sessions store a record of all the messages exchanged between the user and the agent. This record helps the AI understand the ongoing conversation better and provide more personalized answers.

To create a session, we can use the `sessions.create()` method. Let's take a look at the attributes it takes:

- Agent ID - ID of the created agent
- User ID - ID of the created user (Optional)
- Situation - A prompt to describe the background of the interaction
- Metadata - Additional information apart from the IDs to identify the session (Optional)

The Situation attribute plays a vital role in the session, as it provides context for the interaction or conversation. 
The situation helps the agent better understand and process the user's query and give more tailored replies. Here's an example:

```js
// Assuming 'client' is an object with a 'sessions' property containing a 'create' method
let session = client.sessions.create({
  agent_id: agent.id,
  user_id: user.id,
  situation: `
    You are James, a Software Developer, public speaker & renowned educator. You are an educator who is qualified to train students, developers & entrepreneurs.
    About you: ...
    Important guidelines: ...
  `,
  metadata: { db_uuid: "1234" },
});
```

Let's create a session for our Movio app:

```js
const session = await client.sessions.create({
  agentId: agent.id,
  userId: user.id,
  situation:
    "You are Movio. You tell the people about movies they ask for, and recommend movies to the users",
});
```

Here, `agentId` and `userId` are the IDs of the agent and user we created earlier, and `situation` is the short context provided for the interaction.

### Getting Response Message

After creating the user, agent, and session, we need to handle the interaction. We will use the `sessions.chat()` method to handle the chat interaction and get the response message. This method takes two arguments: the `session.id` and an object containing a `messages` array.

```js
const chatParams = {
  messages: [
    {
      role: "user",
      name: "Ayush",
      content: query,
    },
  ],
};

const chatResponse = await client.sessions.chat(session.id, chatParams);
const responseMessage = chatResponse.response[0][0].content;

res.json({ response: responseMessage });
```

Here, the `chatParams` object contains the `messages` array, which includes an object with three properties:

- role: The role of the message sender, `"user"`.
- name: The user's name, `"Ayush"`.
- content: The user's query, stored in the variable `query`.

Then, the `sessions.chat()` method is called on `client` with `session.id` and `chatParams` as arguments. The resulting object is stored in `chatResponse`. 
The value of the `content` property is extracted from the `chatResponse` and stored in `responseMessage`.

### Handling Error

To handle errors, we use a `catch` block to capture the error and return it, then close the route handler:

```js
} catch (error) {
    res.status(500).json({ error: error.message });
  }
});
```

### Start the server

To start the server on localhost, we call the `listen()` method on `app`, specifying the port number:

```js
app.listen(3000, () => {
  console.log("Server is running on port 3000");
});
```

This will host the server on `localhost:3000` and print the defined string in the console window.

Congratulations! Your AI app is successfully created.

## Run the App

Now that the project is complete, let's run it and try it out. First, we will run the `server.js` file to start the Julep-backed API, and then the React app for the user interface.

Run this command to start the server:

```sh
node src/server.js
```

To run the React app, run this command:

```sh
npm run dev
```

This will run your project on localhost. Here is a demo of the project:

![result](https://raw.githubusercontent.com/ayush2390/julep-assets/main/movio-demo.gif)

Your app is running successfully.

Project link - [https://github.com/ayush2390/julep-movio](https://github.com/ayush2390/julep-movio)

Try out Movio - [https://stackblitz.com/github/ayush2390/movio?file=README.md](https://stackblitz.com/github/ayush2390/movio?file=README.md)

Excited to see what more Julep offers? The journey starts with a single click. Visit the repository and give it a star: [https://github.com/julep-ai/julep](https://github.com/julep-ai/julep)

Check out the tutorial for a deeper understanding of Julep.

{% youtube LhQMBAehL_Q %}

Have any questions or feedback? Join the Julep Discord Community

{% embed https://discord.gg/QVBnztXC %}
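One last hardening tip: the nested lookup `chatResponse.response[0][0].content` used in the server will throw if the response shape ever differs. A small defensive sketch (the `extractMessage` helper and fallback text are my own, not part of the Julep SDK):

```js
// Hypothetical helper: safely pull the first message content out of a chat
// response shaped like { response: [[{ content: "..." }]] }, falling back to
// a default string when any level of the structure is missing.
function extractMessage(chatResponse, fallback = "Sorry, no response received.") {
  return chatResponse?.response?.[0]?.[0]?.content ?? fallback;
}

// Mocked response shaped like the one the tutorial indexes into:
const mock = { response: [[{ content: "Inception is a 2010 sci-fi film." }]] };

console.log(extractMessage(mock)); // "Inception is a 2010 sci-fi film."
console.log(extractMessage({}));   // "Sorry, no response received."
```

Optional chaining (`?.`) short-circuits to `undefined` instead of throwing, so the route handler can return a friendly message rather than a 500 error when the shape is unexpected.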
ayush2390
1,892,671
How can developers use gaming trends to publish top Android platform games on Nostra?
To publish the best mobile games, especially platform games on Android, game developers need to stay...
0
2024-06-18T16:42:06
https://dev.to/claywinston/how-can-developers-use-gaming-trends-to-publish-top-android-platform-games-on-nostra-585f
gamedev, developers, development, mobilegames
To publish the [**best mobile games**](https://medium.com/@adreeshelk/publishing-on-a-robust-gaming-platform-key-considerations-for-developers-1c8888f80d91?utm_source=referral&utm_medium=Medium&utm_campaign=Nostra), especially platform games on Android,[ **game developers**](https://nostra.gg/articles/Lock-Screen-Games-Are-a-Game-Changer-for-Gaming-Developers.html?utm_source=referral&utm_medium=article&utm_campaign=Nostra) need to stay ahead of gaming trends and partner with a reliable game host like Nostra. As a game publisher, [**Nostra**](https://medium.com/@adreeshelk/creating-vivid-ongoing-interaction-encounters-with-nostra-games-d12e7e8593ba?utm_source=referral&utm_medium=Medium&utm_campaign=Nostra) offers extensive tools and resources that help developers capitalize on current trends. Social interaction and community building are critical trends; integrating features like leaderboards and real-time chat can enhance player engagement. Hyper-casual games are also on the rise, characterized by simple mechanics and addictive gameplay, which are essential for platform games on Android. Monetization strategies are evolving, and Nostra provides expert guidance on in-app purchases, rewarded ads, and more. With Nostra's support, developers can incorporate emerging technologies like augmented reality and cloud gaming, ensuring their platform games remain innovative and competitive. By leveraging these gaming trends and partnering with Nostra, developers can create and publish top-tier mobile games that captivate and retain players.
claywinston
1,892,670
Not Ignored, But Spared For Better Things!
Source
0
2024-06-18T16:41:22
https://dev.to/td_inc/not-ignored-but-spared-for-better-things-3omo
ai, humanintelligence, meme, technology
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ftup27f432khh85okdgn.jpg) [Source](https://i.imgflip.com/85vgo5.jpg)
td_inc
1,892,664
Day 22 of my progress as a vue dev
About today Today I developed my first landing page and I tried to make it as simple and and engaging...
0
2024-06-18T16:31:27
https://dev.to/zain725342/day-22-of-my-progress-as-a-vue-dev-4ol7
webdev, vue, tailwindcss, typescript
**About today**

Today I developed my first landing page and I tried to make it as simple and engaging as possible. I experimented with colors, element positioning, and the responsiveness of the page, and I must say I enjoyed working on it, as I could see things coming together, which is not the case with backend work.

**What's next?**

I will be working on more landing pages with different approaches and styles to polish my skills, get a grip on the craft, and understand the science behind it from a marketing perspective.

**Improvements required**

I have to focus more on developing unique, up-to-date designs, and on understanding which colors and element placements work better for encouraging a user to perform the intended task.

Wish me luck!
zain725342
1,892,669
Blister Packaging Market: A Deep Dive into Materials and Manufacturing Processes
Browse 158 market data Tables and 63 Figures spread through 191 Pages and in-depth TOC on “Blister...
0
2024-06-18T16:40:48
https://dev.to/aryanbo91040102/blister-packaging-market-a-deep-dive-into-materials-and-manufacturing-processes-ima
news
Browse 158 market data Tables and 63 Figures spread through 191 Pages and in-depth TOC on “Blister Packaging Market by Material (Paper & Paperboard, Plastic, Aluminum), Type (Carded, Clamshell), Technology (Thermoforming, Cold Forming), End-use Sector (Healthcare, Consumer Goods Industrial Goods, Food), and Region – Global Forecast to 2025”  Blister packaging has come a long way since its inception. Originally designed for the pharmaceutical industry to provide tamper-evident and unit-dose packaging, blister packs have now expanded their footprint into diverse sectors. Today, blister packaging is not just about securely encasing pills; it’s about delivering efficiency and sustainability across the board. Market Research: The Compass of Progress Market research serves as the guiding compass for the blister packaging industry. It offers insights into market trends, consumer preferences, and emerging technologies, driving innovation and efficiency. Manufacturers and stakeholders rely on research to navigate the evolving landscape and seize growth opportunities. End-Use Applications: A Multifaceted Tapestry Blister packaging’s versatility shines through its myriad end-use applications: Pharmaceuticals: Blister packaging remains a stalwart in the pharmaceutical industry, ensuring product integrity and safety. Market research in this segment focuses on compliance with regulations, child-resistant packaging, and patient-friendly designs. Consumer Electronics: The electronics industry relies on blister packaging to protect delicate components from damage during transit and display products attractively. Research here delves into anti-static materials and custom-fit designs. Food and Beverage: Blister packs have become popular in the food industry for items like candies and chewing gum. Research emphasizes food-grade materials and sustainable options. Cosmetics: The cosmetic industry leverages blister packaging to showcase product colors and maintain hygiene. 
Market research explores eco-friendly packaging materials and user-friendly designs. Medical Devices: Blister packaging ensures the sterility and safety of medical devices. Research delves into smart packaging solutions and specialized materials. **Get Sample Copy of this Report: [https://www.marketsandmarkets.com/requestsampleNew.asp?id=24775059](https://www.marketsandmarkets.com/requestsampleNew.asp?id=24775059)** Efficiency and Sustainability: Twin Pillars of Growth Efficiency is the watchword in blister packaging. Its ability to protect products, provide user-friendly features, and reduce waste makes it a compelling choice for businesses. Market research identifies areas for improving production efficiency, cost-effectiveness, and supply chain management. Sustainability is the driving force behind packaging innovations. Blister packaging is no exception, with a growing emphasis on reducing environmental impact. Research in this realm explores biodegradable materials, recyclability, and eco-conscious designs. As consumers become more eco-conscious, businesses are adopting sustainable practices to meet their demands. Key Trends Shaping Blister Packaging Several trends are shaping the blister packaging landscape: Eco-Friendly Materials: The shift towards bioplastics and recycled materials is reducing the environmental footprint of blister packaging. Smart Packaging: Integration of IoT and NFC technology to provide real-time product information and enhance user experience. Customization: Tailoring blister packaging to specific product dimensions, improving product visibility and aesthetics. Child-Resistant Features: Enhanced safety measures to protect children from accessing medications and hazardous products. Anti-Counterfeiting Measures: Incorporating features like holograms and QR codes to combat counterfeit products. 
**[Inquire Before Buying](https://www.marketsandmarkets.com/Enquiry_Before_BuyingNew.asp?id=24775059)**

“APAC is the fastest-growing market for the blister packaging market.”

APAC is expected to register the highest CAGR during the forecast period. Factors such as rising disposable income, a growing middle-class population, high consumption of visibility products, and the growth of end-use sectors such as healthcare, food, and consumer & industrial goods will drive the blister packaging market over the forecast period.

**Blister Packaging Market Key Players**

Amcor Plc (Switzerland), DOW (US), WestRock Company (US), Sonoco Products Company (US), Constantia Flexibles (Austria), Klockner Pentaplast Group (Germany), E.I. du Pont de Nemours and Company (US), Honeywell International Inc. (US), Tekni-Plex (US), and Display Pack (US) are the key players operating in the blister packaging market.

**Conclusion**

Blister packaging market research is not just about numbers and data; it's about shaping the future of packaging. Its insights drive innovation, enhance efficiency, and pave the way for sustainable practices. The growth of blister packaging across diverse end-use industries underscores its adaptability and relevance in today's dynamic market. As we journey forward, the key to success lies in harnessing the power of research to create packaging solutions that are not just protective, but also efficient and sustainable, meeting the evolving demands of both businesses and consumers alike.

**TABLE OF CONTENTS**

- 1 INTRODUCTION (Page No. - 23)
  - 1.1 OBJECTIVES OF THE STUDY
  - 1.2 MARKET DEFINITION
  - 1.3 STUDY SCOPE
    - 1.3.1 MARKET SEGMENTATION
    - 1.3.2 REGIONAL SCOPE
    - 1.3.3 YEARS CONSIDERED FOR THE STUDY
  - 1.4 CURRENCY CONSIDERED
  - 1.5 UNITS CONSIDERED
  - 1.6 LIMITATIONS
  - 1.7 STAKEHOLDERS

Get 10% Customization on this Report: https://www.marketsandmarkets.com/requestCustomizationNew.asp?id=24775059

- 2 RESEARCH METHODOLOGY (Page No. - 27)
  - 2.1 RESEARCH DATA
    - 2.1.1 SECONDARY DATA
      - 2.1.1.1 Key data from secondary sources
    - 2.1.2 PRIMARY DATA
      - 2.1.2.1 Key data from primary sources
      - 2.1.2.2 Key industry insights
  - 2.2 BASE NUMBER CALCULATION
    - 2.2.1 SUPPLY-SIDE APPROACH – 1
    - 2.2.2 SUPPLY-SIDE APPROACH – 2
    - 2.2.3 SUPPLY-SIDE APPROACH – 3
    - 2.2.4 DEMAND-SIDE APPROACH – 1
  - 2.3 FACTOR ANALYSIS
    - 2.3.1 INTRODUCTION
    - 2.3.2 DEMAND-SIDE ANALYSIS
    - 2.3.3 SUPPLY-SIDE ANALYSIS
  - 2.4 MARKET SIZE ESTIMATION
  - 2.5 DATA TRIANGULATION
  - 2.6 MARKET SHARE ESTIMATION
  - 2.7 RESEARCH ASSUMPTIONS & LIMITATIONS
    - 2.7.1 ASSUMPTIONS MADE FOR THIS STUDY
- 3 EXECUTIVE SUMMARY (Page No. - 37)
- 4 PREMIUM INSIGHTS (Page No. - 41)
  - 4.1 DEVELOPING ECONOMIES TO WITNESS HIGH DEMAND FOR BLISTER PACKAGING
  - 4.2 BLISTER PACKAGING MARKET, BY MATERIAL
  - 4.3 BLISTER PACKAGING MARKET, BY TYPE
  - 4.4 BLISTER PACKAGING MARKET, BY TECHNOLOGY
  - 4.5 NORTH AMERICA: BLISTER PACKAGING MARKET
  - 4.6 BLISTER PACKAGING MARKET: BY KEY COUNTRIES
- 5 MARKET OVERVIEW (Page No. - 44)
  - 5.1 INTRODUCTION
  - 5.2 EVOLUTION OF THE BLISTER PACKAGING MARKET
  - 5.3 MARKET DYNAMICS
    - 5.3.1 DRIVERS

Continued...
aryanbo91040102
1,892,668
Lead Generation: CallTrack AI’s Customer Conversion Edge
The AI Advantage in Lead Generation CallTrack.AI employs advanced algorithms to score...
0
2024-06-18T16:36:59
https://dev.to/calltrackai/lead-generation-calltrack-ais-customer-conversion-edge-5fnh
calltrackai, callcenters, machinelearning, ai
## The AI Advantage in Lead Generation CallTrack.AI employs advanced algorithms to score leads based on their interaction with your business. By analyzing call patterns, duration, and frequency, AI assigns a value to each prospect, prioritizing those with the highest potential for conversion. With AI, segmentation is no longer a manual, time-consuming task. CallTrack.AI automatically categorizes leads into segments based on their behavior, preferences, and likelihood to purchase, allowing for targeted marketing campaigns. The platform’s predictive analytics forecast the future actions of leads, enabling businesses to tailor their nurturing strategies. This proactive approach ensures that prospects receive relevant information at the right time, nudging them closer to a purchase. ## Enhancing Customer Interactions with AI CallTrack.AI’s AI-driven insights enable personalized communication at scale. Each interaction is tailored to the lead’s interests and past behavior, creating a sense of individual attention that can significantly boost engagement. The immediacy of response is crucial in lead conversion. CallTrack.AI’s click-to-call functionality ensures that prospects can instantly connect with a representative, reducing the chances of losing interest and increasing engagement. Understanding the emotional undertone of customer calls is vital. CallTrack.AI’s sentiment analysis feature interprets the mood of the conversation, allowing businesses to adjust their approach and improve the customer experience. ## Streamlining Operations with CallTrack.AI CallTrack.AI consolidates all customer interactions, including calls, texts, and web form submissions, into a single platform. This centralization streamlines communication and ensures that no lead is overlooked. The platform facilitates seamless collaboration among team members. With shared access to lead information and communication history, teams can work together effectively to convert prospects into customers. 
CallTrack.AI provides actionable insights through its comprehensive dashboard. Businesses can make informed decisions about their marketing strategies, focusing on what works and improving what doesn’t.

## The Integration of CallTrack.AI with Multi-Channel Marketing

In today’s digital ecosystem, prospects interact with brands across multiple channels. CallTrack.AI seamlessly integrates with an array of marketing platforms, ensuring that every touchpoint is an opportunity for engagement. Whether it’s through social media, email, or direct calls, CallTrack.AI provides a consistent and personalized experience to every prospect.

Understanding which channels drive the most valuable leads is crucial for optimizing marketing strategies. CallTrack.AI offers comprehensive tracking and analytics across all channels, giving marketers the insights they need to invest in the most effective tactics for lead generation and conversion.

## Leveraging Data for Competitive Advantage

The more you know about your leads, the better you can serve them. CallTrack.AI utilizes the wealth of data at its disposal to create detailed lead profiles. These profiles enable marketers to craft highly targeted campaigns that resonate with the interests and behaviors of their prospects.

Staying ahead of the curve requires a keen understanding of market trends and lead preferences. [CallTrack.AI analyzes](https://calltrack.ai/call-analytics/) vast datasets to identify emerging patterns, helping businesses to adapt their strategies in real-time and maintain a competitive edge.

## CallTrack.AI’s Role in Sales Enablement

Sales teams thrive on information. CallTrack.AI equips them with real-time data and insights, enabling them to engage with leads more effectively. With a deeper understanding of each prospect’s needs and interests, sales representatives can tailor their pitches to maximize the chances of conversion. 
Efficiency is key in sales. CallTrack.AI streamlines sales workflows by automating routine tasks and prioritizing leads based on their likelihood to convert. This not only saves time but also ensures that sales efforts are focused where they can have the greatest impact. ## Future-Proofing Your Lead Generation Strategy The technological landscape is ever-changing, and staying relevant means being adaptable. CallTrack.AI positions businesses to quickly embrace new AI and machine learning advancements, ensuring that their lead generation strategies remain effective and innovative. Sustainable lead generation is about more than just short-term gains; it’s about building a strategy that lasts. CallTrack.AI promotes a sustainable approach by continuously learning from interactions and adapting to both the market and the individual needs of prospects.
calltrackai
1,892,667
Most Trading IT Development Services in the USA
The American financial landscape is a dynamic beast, constantly evolving with new technologies and...
0
2024-06-18T16:36:03
https://dev.to/codeperksolutions/most-trading-it-development-services-in-the-usa-2eoa
itservices, webdev, programming
The American financial landscape is a dynamic beast, constantly evolving with new technologies and trends. In this ever-changing environment, traders need a competitive edge. Here's where robust trading IT development services come in. These services empower you to build custom solutions that streamline workflows, enhance decision-making, and ultimately, boost your trading success. This blog delves into the world of trading IT development services in the USA. We'll explore the key areas – web development, [mobile app development services](https://www.codeperksolutions.com/mobile-app-development-services/), and web scraping – highlighting their potential to elevate your trading game. We'll also provide insights into choosing the right development partner for your specific needs. ## The Powerhouse Trio: Web Development, Mobile Apps, and Web Scraping for Traders ### 1. Web Development: Building a Bespoke Trading Platform Imagine a central hub for all your trading needs. A web platform tailored to your strategies, seamlessly [integrating market](https://dev.to/) data, analysis tools, and order execution. This is the magic of custom [web development services](https://www.codeperksolutions.com/web-development-services/) for traders. Here are some of the advantages a bespoke web platform offers: - Customization: Craft a platform that aligns perfectly with your trading workflow. Organize data feeds, charts, and indicators exactly how you prefer. No more navigating through clunky generic interfaces. - Advanced Functionality: Integrate powerful features like algorithmic trading, automated order execution, and real-time risk management tools. Gain the edge you need to capitalize on fleeting market opportunities. - Enhanced Security: Prioritize the safety of your sensitive financial data—partner with a development team that prioritizes robust security protocols and industry-standard encryption practices. - Scalability: As your trading needs evolve, your platform can too. 
A scalable web application can grow alongside your ambitions, accommodating increased data volumes and new functionalities. ### 2. Mobile App Development: Trading on the Go The modern trader demands flexibility. Mobile app development allows you to access and manage your trades from anywhere, anytime. Here's how a trading mobile app can empower you: - Real-time Market Monitoring: Stay on top of market movements with live quotes, breaking news alerts, and customizable watchlists. React to market shifts swiftly, even while you're on the move. - Order Management at Your Fingertips: Buy, sell, and manage positions seamlessly from your smartphone or tablet. This eliminates the need to be chained to a desktop computer, giving you true trading freedom. - Advanced Features: Integrate charting tools, technical indicators, and sentiment analysis into your mobile app. Conduct on-the-go analysis and make informed trading decisions from any location. ### 3. Web Scraping: Extracting Valuable Market Data The financial markets generate a vast amount of data constantly. [Web scraping services](https://www.codeperksolutions.com/data-scraping/) can harvest this valuable information and transform it into actionable insights. Here's how web scraping can benefit your trading strategy: - Market Data Aggregation: Gather real-time and historical data from various financial websites, news sources, and social media platforms. This comprehensive data pool can fuel your analysis and identify hidden opportunities. - Sentiment Analysis: Scrape vast amounts of social media data to gauge market sentiment. Analyze public opinion and identify potential shifts in investor confidence, providing a valuable edge in a fast-paced environment. - Price Tracking: Automate the process of tracking asset prices across different exchanges and identify arbitrage opportunities. This can help you capitalize on price discrepancies and maximize profits. ## Emerging Technologies Shaping Trading IT ### 4.
Artificial Intelligence (AI) and Machine Learning (ML): AI and ML algorithms are revolutionizing trading by automating tasks, identifying patterns, and generating predictive insights. Development teams are integrating these tools into trading platforms and mobile apps, empowering traders to make more data-driven decisions. ### 5. Blockchain Technology: Blockchain offers unparalleled security and transparency for financial transactions. Trading platforms built on blockchain technology can facilitate secure and efficient trade execution, particularly in the realm of digital assets like cryptocurrencies. ### 6. The Rise of Low-Code/No-Code Development: This technology allows users with limited coding experience to build basic trading applications. While not a replacement for custom development, low-code/no-code tools can democratize access to trading technology for individual investors. ## Choosing the Right Trading IT Development Partner in the USA The success of your trading IT project hinges on selecting the right development partner. Here are some key factors to consider: - Experience in Fintech: Look for a development team with a proven track record in the financial technology space. Their familiarity with regulatory requirements and security best practices will be crucial. - Understanding of Trading Strategies: Choose a team that not only understands the technical aspects of development but also possesses a grasp of different trading strategies. This ensures they can create solutions that align with your specific trading goals. - Communication and Transparency: Open communication throughout the development process is essential. Ensure you choose a partner that fosters transparent collaboration and actively addresses your concerns. - Scalability and Future-proofing: Your trading needs will evolve over time. Choose a partner who prioritizes the development of scalable solutions that can accommodate future growth and integration with new technologies. 
## Additional Tips: When searching for a development partner, be sure to check online reviews and case studies to understand their past work. Request quotes from multiple development firms to compare pricing and services offered. Clearly define your project requirements and goals before approaching potential partners. ## The Final Word: Unlocking Your Trading Potential By leveraging the power of trading IT development services, you can gain a significant advantage in the ever-competitive financial markets. A custom web platform, a mobile trading app, and strategic web scraping capabilities can transform your trading experience. Remember, the key lies in selecting the right development partner who shares your vision and possesses the expertise to bring it to life. With the right tools and team by your side, you'll be well on your way to unlocking your full trading potential.
codeperksolutions
1,892,666
Munster - Webhooks processing engine for Rails
By the time of writing this article, I had already written webhook processing logic at least 10 times...
0
2024-06-18T16:34:39
https://skatkov.com/posts/2024-06-18-munster-webhooks-processing-engine-for-rails
rails, webhooks, ruby, opensource
By the time of writing this article, I had already written webhook processing logic at least 10 times for different companies and clients. Together with [Julik](https://blog.julik.nl/) we have implemented one recently at our [current place of employment](https://cheddar.me). And guess what? Once a second service had to be built, it needed to accept webhooks too. Our combined experience in the ingestion of webhooks had already produced a reasonably generic solution. Should we just copy some files over to a microservice and duplicate that code? Nah, let's save the world from wasting those countless hours of re-implementing webhooks over and over again! This darn well could be a gem!

## Enter "Munster"

Munster is a webhooks processing engine for Rails. And you heard that right: it's only accepting and processing, not sending. Services that send webhooks would love it if you accepted those as fast as possible and responded with a 200 status code. Pretty much always, actually - because if your service refuses to accept the webhooks they send you, they will likely stop retrying. Important business events could then get lost, you would need to examine the retry policies of every webhook sender, etc. And if you didn't manage to receive a webhook correctly, it is usually a hassle to ask the sending service to re-deliver the missed webhooks to you in bulk. Some (like Stripe) provide facilities to replay their event feed in a "pull" manner, but most do not.

How to make that happen? The answer is pretty simple: do not process the webhook inline. Verify it is coming from the sender (most good webhooks use some form of signature that you can verify using a shared secret or a public key), and spool the webhook for processing using your background jobs.

Processing webhooks asynchronously has many other advantages:

- Load gets spread evenly across your servers.
- A background process is not subject to any request timeouts.
- If processing fails, we will have the webhook safely stored in our database for later re-processing or analysis.

Munster is an engine which will provide you the following facilities:

* A small abstraction for building webhook receiving _handlers._ A handler is a small object with just a handful of methods that you need to implement - normally those would be `valid?` and `process`
* A Rails engine that can be mounted into your application and handles webhooks from multiple senders together - every webhook sender gets its own _handler_ definition. So you would have a handler for Stripe, a handler for Revolut, a handler for Github - and any other services you might want to receive webhooks from
* A background job class which calls `process` asynchronously, from your job queue.

## Getting started

As with any other gem, you'd first want to add `gem 'munster'` to your `Gemfile` and run `bundle install`. This adds a generator task to your Rails project, so run `rails g munster:install`. It will create the required migration and an initializer file at `app/initializers/munster.rb`. Then run `rails db:migrate` so that the table used for received webhooks gets created. The initializer expects you to define at least one entry in its `active_handlers` hash. I will quickly run you through the process and give you an example.

### Mounting

Munster is an engine and a Rack app, so it can be mounted in your Rails routes file. Inside `config/routes.rb` you can mount the webhook engine on a subdomain (like `webhooks.yourdomain.com/:service_id`):

```ruby
scope as: "webhooks", constraints: {host: "webhooks.yourdomain.com"} do
  mount Munster::Engine, at: "/", as: ""
end
```

Or on the main domain (`yourdomain.com/webhooks/:service_id`): `mount Munster::Engine => "/webhooks"`

Having a separate subdomain for receiving webhooks can be useful if you have very specific security requirements, or if you would like to have a separate load balancer fronting your webhook receiving endpoint.
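Once mounted, the endpoint can be smoke-tested from the command line. The sketch below only computes a signature and prints it; the body, timestamp, signing key, and URL are made-up placeholders, and the `v0:<timestamp>:<body>` HMAC scheme mirrors the Customer.io verification used in the handler example later in this post:

```shell
# Placeholder values - substitute your real payload and signing key.
BODY='{"event_id":"abc123","metric":"subscribed","data":{"customer_id":42}}'
TS=1718700000

# Sign the payload the same way the receiving handler will verify it
SIG=$(printf 'v0:%s:%s' "$TS" "$BODY" \
  | openssl dgst -sha256 -hmac 'customer_io_webhook_signing_key' \
  | awk '{print $NF}')
echo "$SIG"

# A delivery would then look like this (commented out so the sketch runs offline):
# curl -X POST "https://your-app.example.com/webhooks/customer-io" \
#   -H "x-cio-timestamp: $TS" -H "x-cio-signature: $SIG" \
#   -H "Content-Type: application/json" \
#   --data "$BODY"
```

With a valid signature the endpoint should answer quickly with a 200 and leave the actual processing to the background job.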
### Defining a Handler

The next step is to define a webhook handler. For the sake of an example, let's create a handler for Customer.io at `app/webhooks/customer_io_handler.rb`. The handler will take care of handling two specific metrics from Customer.io - `subscribed` and `unsubscribed`. We want to store a local value per user called "subscribed" – once a user unsubscribes using Customer.io, we want to record this information in our app database. Same for when a user subscribes.

```ruby
class Webhooks::CustomerIoHandler < Munster::BaseHandler
  def process(webhook)
    return if webhook.status.eql?("processing")
    webhook.processing!

    # The webhook body gets stored as bytes, so senders may deliver
    # you binary data on the endpoint - it does not have to be JSON
    json = JSON.parse(webhook.body, symbolize_names: true)
    case json[:metric]
    when "subscribed"
      ActiveRecord::Base.transaction do
        user = User.find(json.dig(:data, :customer_id))
        user.update!(subscribed: true)
        webhook.processed!
      end
    when "unsubscribed"
      ActiveRecord::Base.transaction do
        user = User.find(json.dig(:data, :customer_id))
        user.update!(subscribed: false)
        webhook.processed!
      end
    else
      webhook.skipped!
    end
  rescue => error
    webhook.error!
    raise error
  end

  def extract_event_id_from_request(action_dispatch_request)
    JSON.parse(action_dispatch_request.body.read).fetch("event_id")
  end

  # Verify that the request is actually coming from customer.io
  # @see https://customer.io/docs/api/webhooks/#section/Securely-Verifying-Requests
  def valid?(action_dispatch_request)
    xcio_signature = action_dispatch_request.headers["HTTP_X_CIO_SIGNATURE"]
    xcio_timestamp = action_dispatch_request.headers["HTTP_X_CIO_TIMESTAMP"]
    request_body = action_dispatch_request.body.read
    string_to_sign = "v0:#{xcio_timestamp}:#{request_body}"
    hmac = OpenSSL::HMAC.hexdigest("SHA256", 'customer_io_webhook_signing_key', string_to_sign)
    Rack::Utils.secure_compare(hmac, xcio_signature)
  end
end
```

We're overriding three methods from [BaseHandler](https://github.com/cheddar-me/munster/blob/main/lib/munster/base_handler.rb):

- The `valid?` method verifies that the webhook indeed comes from Customer.io. This method runs inline, before we persist the webhook.
- The `process` method defines how we want to process the data.
- The `extract_event_id_from_request` method defines how to extract an ID from a webhook; by default it generates a random UUID.

`extract_event_id_from_request` is a very important feature. A lot of webhook senders send you unique webhooks, but those webhooks may arrive out of order, or arrive twice. Networks being unreliable, the retries being too aggressive at the sender side, you name it. Munster will use this event ID to deduplicate your webhooks - if you receive the same data more than once, just one webhook will be persisted and processed. This will also protect your infrastructure if, for some reason, the webhook sender happens to DoS you with repeated deliveries.

There are more methods that can be redefined and tweaked, but these three are the most commonly used.

And lastly we need to mount our handler. We use the "service ID", which will also be the URL path component for your webhook.
For our handler, we will use "customer-io" (the URL to paste into the Customer.io configuration will thus be `https://your-app.example.com/webhooks/customer-io`):

```ruby
require_relative '../../app/webhooks/customer_io_handler.rb'

Munster.configure do |config|
  config.active_handlers = {
    "customer-io" => Webhooks::CustomerIoHandler
  }
end
```

## Final note

We went through the basic webhook endpoint setup with Munster, but it offers a bit more than that. This little framework is especially beneficial in cases where you have more than one webhook endpoint. Once you set it up in your project, it's just a matter of defining a single handler to accept a new type of webhooks. The core of this is already battle-tested at [cheddar.me](https://www.cheddar.me), but we're still working on fleshing out the details and would love your feedback! You can find Munster at [https://github.com/cheddar-me/munster](https://github.com/cheddar-me/munster)
skatkov
1,892,665
Day 8 of Machine Learning ||Linear Regression Part 2
Hey reader👋Hope you are doing well😊 In the last post we have read about Linear regression and some of...
0
2024-06-18T16:32:36
https://dev.to/ngneha09/day-8-of-machine-learning-linear-regression-part-2-28i8
datascience, machinelearning, tutorial, beginners
Hey reader👋 Hope you are doing well😊 In the last post we read about linear regression and some of its basics. In this post we are going to discuss how we can minimize our cost function using the gradient descent algorithm. So let's get started🔥

## Gradient Descent

Gradient descent is the algorithm used to find the value of Θ that minimizes the cost function.

Cost function -:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7v197aymvtue5zndyl9m.png)

According to this algorithm, we start with some initial value of Θ, say Θ = 0 (the zero vector, i.e. all parameters set to 0), and then keep changing Θ to reduce the cost function.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l25svqe3eclzgae97v5w.png)

where j = 0, 1, 2, ..., n and α is the learning rate (in practice, a small value such as α = 0.01). This indicates that we are taking small steps, i.e. making small changes to the value of Θ.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v3zzoy7n6t5l6dv8gt4h.png)

If α is too large, the steps taken are too large; if α is too small, more iterations are needed and the algorithm becomes slow.

To understand it better, imagine you are on a mountain and you want to get to the lowest point in the valley. Gradient descent is like taking steps downhill in the direction that decreases the altitude. Each step is based on the slope of the mountain at your current point.

Now let's find the value of Θ -:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c07ti4ozfd9p5avfwv5s.png)

For m training samples -:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7cr6cv3z8vqb4j9tqu2x.png)

This is how we can compute the values of the parameters that minimize the cost function. So here you can see that we start from Θ = 0, calculate the predicted output for all training samples, and then adjust the value of Θ in order to minimize the cost function.
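The update loop described above can be sketched in a few lines of NumPy. This is a toy example: the data, learning rate, and iteration count are all made up for illustration.

```python
import numpy as np

# Toy data following y = 2x + 1; the first column of X is the bias term x0 = 1
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

theta = np.zeros(2)  # start with Θ = 0 vector
alpha = 0.01         # learning rate
m = len(y)

for _ in range(10000):
    predictions = X @ theta                      # h(x) for every training sample
    gradient = (X.T @ (predictions - y)) / m     # sum over all m samples
    theta -= alpha * gradient                    # simultaneous update of all Θ_j

print(theta.round(2))  # prints [1. 2.] - the intercept and slope of the toy data
```

Note that each iteration computes predictions for all m training samples before updating Θ, which is exactly why this batch form becomes expensive on large datasets.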
This algorithm is also known as **Batch Gradient Descent**. Its main disadvantage is that it becomes slow for large datasets, because in order to make a single update we need to compute a sum over all training examples. An alternative to this algorithm is **Stochastic Gradient Descent**.

## Stochastic Gradient Descent

In this algorithm, instead of using the whole dataset, we use only one training point at a time to update the model's parameters. [Note -> Stochastic means random]

Stochastic Gradient Descent picks one data point to compute the gradient and update the model.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qisln573qbrpjjo6p1st.png)

So here the algorithm picks a random data point and computes the gradient for it, then picks another point and does the same. The main disadvantage of this algorithm is that it can be noisier and less stable, because using only one data point at a time leads to fluctuating gradients.

You can see an implementation of gradient descent here -: [https://www.kaggle.com/code/nehagupta09/linear-regression-implementation](https://www.kaggle.com/code/nehagupta09/linear-regression-implementation)

I hope you have understood this. If you have any doubts, please comment and I'll try to answer your queries. Don't forget to follow me for more. Thank you 💙
ngneha09
1,892,662
Top HTML Secrets: Unveiling the Hidden Gems of Web Development by Michael Savage
HTML, or HyperText Markup Language, is the cornerstone of web development. While many are familiar...
0
2024-06-18T16:29:32
https://dev.to/savagenewcanaan/top-html-secrets-unveiling-the-hidden-gems-of-web-development-4ael
html, webdev, google
<p style="text-align: justify;">HTML, or HyperText Markup Language, is the cornerstone of web development. While many are familiar with its basic tags and functions, HTML harbors several lesser-known secrets that can significantly enhance web design and user experience. These hidden gems can transform a simple webpage into an interactive, accessible, and highly functional digital space. Let's delve into the top HTML secrets that every web developer should know.</p> <h4 style="text-align: justify;">1. Semantic HTML: Beyond Structure</h4> <p style="text-align: justify;">One of HTML's most powerful features is its semantic elements, which provide meaning to web content. Tags like header, footer, article, and section do more than just structure a page&mdash;they convey the purpose of the content within them. This semantic approach improves SEO by helping search engines understand the context of your content. It also enhances accessibility for screen readers, ensuring a better experience for users with disabilities.</p> <p style="text-align: justify;"><strong>Secret Tip</strong>: Using the main tag to denote the main content of your webpage helps screen readers and search engines bypass repetitive content like navigation links and headers, focusing directly on what matters most.</p> <h4 style="text-align: justify;">2. The Picture Element: Responsive Images</h4> <p style="text-align: justify;">In the era of diverse devices and screen sizes, ensuring that images look great on every platform is crucial. The picture element allows developers to define multiple sources for an image, specifying which one to display based on the device's screen size, resolution, or other factors.</p> <p style="text-align: justify;"><strong>Secret Tip</strong>: Combining the picture element with the source tag allows you to provide different images for various screen sizes. 
This not only improves load times but also enhances user experience by delivering appropriately sized images.</p> <h4 style="text-align: justify;">3. Data Attributes: Custom Data Storage</h4> <p style="text-align: justify;">HTML5 introduced data attributes, allowing developers to embed custom data directly within HTML elements. These attributes can store extra information without cluttering the DOM with non-standard attributes.</p> <p style="text-align: justify;"><strong>Secret Tip</strong>: Using data-* attributes to pass data to JavaScript without using hidden form fields or other workarounds makes your code cleaner and more maintainable.</p> <h4 style="text-align: justify;">4. The Template Element: Reusable HTML Snippets</h4> <p style="text-align: justify;">The template element is a powerful tool for defining reusable HTML snippets that are not rendered when the page loads. These templates can be cloned and inserted into the DOM dynamically using JavaScript.</p> <p style="text-align: justify;"><strong>Secret Tip</strong>: Utilizing the template element for creating repeatable UI components like modals, cards, or form sections keeps your HTML clean and your code DRY (Don't Repeat Yourself).</p> <h4 style="text-align: justify;">5. Accessible Forms: Enhancing User Interaction</h4> <p style="text-align: justify;">Creating accessible forms is crucial for usability. HTML offers several features to make forms more accessible, such as label tags, aria-* attributes, and input types like email, tel, and date.</p> <p style="text-align: justify;"><strong>Secret Tip</strong>: Always associating label tags with their corresponding form elements using the for attribute improves accessibility for screen readers and users with mobility impairments.</p> <h4 style="text-align: justify;">6. 
HTML Entities: Special Characters and Symbols</h4> <p style="text-align: justify;">HTML entities allow you to display special characters and symbols that might otherwise be interpreted as code by the browser. This is particularly useful for displaying characters like less than, greater than, and ampersands, as well as non-breaking spaces.</p> <p style="text-align: justify;"><strong>Secret Tip</strong>: Using HTML entities to enhance the readability and functionality of your content can improve the user experience. For example, use &lt; and &gt; to display angle brackets, &amp; to display an ampersand, and &nbsp; to add extra spaces.</p> <p style="text-align: justify;">HTML is more than just a markup language&mdash;it's a versatile toolkit for creating rich, interactive, and accessible web experiences. By leveraging these hidden secrets, developers can craft websites that are not only functional but also optimized for performance, accessibility, and user engagement. Whether you're a novice web designer or a seasoned developer, these HTML tricks can elevate your web development skills to new heights. Embrace these secrets, and unlock the full potential of HTML in your projects.</p> <p style="text-align: justify;"><a href="https://brojure.com/savage-new-canaan/">Michael Savage</a> is a tech-savvy enthusiast from New Canaan, Connecticut, renowned for his insightful tech blog and deep passion for all things technology. With a background in both hardware and software, Savage has cultivated a comprehensive understanding of the tech landscape, making him a respected voice in the tech community. His blog covers a broad spectrum of topics, from the latest in consumer electronics to detailed analyses of emerging technologies and trends.</p> <p style="text-align: justify;">Savage's ability to distill complex technical information into engaging and easily understandable content has earned him a dedicated following. 
He frequently reviews new gadgets, offers practical tech tips, and shares his thoughts on the future of technology. His commitment to staying at the forefront of tech innovation is evident in his active participation in tech conferences and his contributions to various<a href="https://dev.to/savagenewcanaan/introduction-to-html-the-backbone-of-the-web-48m"> online tech forums</a>.</p> <p style="text-align: justify;">Outside of his blogging activities, Savage enjoys experimenting with new software, building custom PCs, and exploring advancements in artificial intelligence and cybersecurity. Michael Savage's unique blend of expertise and passion for technology continues to inspire and educate his readers, solidifying his reputation as a leading tech enthusiast from New Canaan.</p>
savagenewcanaan
1,892,658
How to join tables data already exist in table
getting below error [Nest] 15320 - 18/06/2024, 9:48:15 pm ERROR [ExceptionHandler] Cannot add or...
0
2024-06-18T16:19:57
https://dev.to/akash_chawan/how-to-join-tables-data-already-exist-in-table-2j80
Getting the below error:

```
[Nest] 15320  - 18/06/2024, 9:48:15 pm   ERROR [ExceptionHandler] Cannot add or update a child row: a foreign key constraint fails (`shipping`.`#sql-220c_403`, CONSTRAINT `FK_ce723b9c7f1fbc4a9a1cf4d8865` FOREIGN KEY (`present_rank`) REFERENCES `rank` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION)
QueryFailedError: Cannot add or update a child row: a foreign key constraint fails (`shipping`.`#sql-220c_403`, CONSTRAINT `FK_ce723b9c7f1fbc4a9a1cf4d8865` FOREIGN KEY (`present_rank`) REFERENCES `rank` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION)
```

The entity definition:

```typescript
@Column()
present_rank: string;

@OneToOne(() => Rank, rank => rank.id)
@JoinColumn({ name: 'present_rank' })
rank: Rank;
```
akash_chawan
1,892,657
Need Help with AWS Lambda Function
I am trying to write a python script to fetch CloudWatch graphs on AWS Direct Connect (DX). I setup...
0
2024-06-18T16:19:37
https://dev.to/brandon_johnson_23ff31206/need-help-with-aws-lambda-function-4fff
I am trying to write a Python script to fetch CloudWatch graphs for AWS Direct Connect (DX). I set up an SNS topic as well as Terraform code coupled with a Python script. I receive the email, but there is no data. I have been working on this for weeks, but no progress. CloudWatch and the SNS topics were implemented with Terraform code; the script to retrieve the DX report was done via Python. Can anyone help? The code is a bit long.
brandon_johnson_23ff31206
1,891,835
Data Science and the Cloud
There are many good reasons to move data science projects to the cloud as there are many advantages...
0
2024-06-18T16:15:47
https://dev.to/michellebuchiokonicha/data-science-in-the-cloud-1hg5
datascience, cloudcomputing, ai, softwaredevelopment
There are many good reasons to move data science projects to the cloud, as it has many advantages, like highly parallel processing on many machines giving us enormous computing power. Also, data science is nothing without data: in a distributed cloud system, our data can be stored and processed reliably, even for enormous datasets that are also changing constantly.

Drawbacks exist too: data science projects in the cloud tend to be more complex, especially when collaborating in large teams and using many services and technologies. We can, however, choose from a whole range of services supporting our aims in the cloud, ranging from completely managed services to services giving us full control and flexibility over the environment.

## Provider-Independent services and tools

**MLflow**

A framework you will find implemented in most cloud data services, independent of the cloud provider. It consists of 4 components supporting and streamlining our ML projects and helping us conduct them systematically and collaboratively. It lets us systematically track experiment runs and package our work easily using Anaconda and Docker.

**Tracking:** can be used during model training to log and track model parameters and KPIs like model performance metrics, experiment artifacts, code version numbers, etc.

**Projects:** can be used to package ML model training and trigger it remotely with varying parameters.

**Models:** once we are satisfied with the performance of a trained model, we can register it with MLflow.

**Model registry:** can be used as a centralized repository for trained models, and we can use these repositories to deploy models to the production environment easily.

An applied example: using an MLflow project to share our ML development easily and trigger runs with specific parameters over the network. For example, someone optimized a Python script to train a machine-learning model.
The new model looks very promising, but you have a feeling that tweaking a parameter might make the ML model perform even better. An MLflow project can be used to do this without writing the code again. E.g., this is the GitHub repo you used: it contains the data, a conda environment file, a readme file, the training Python script, and a license. The script imports modules and defines evaluation metrics.

## Typical MLflow workflow

1. **Start the MLflow server.**
2. **Conduct and track runs:** train models and log the parameters and model metrics.
3. **Evaluate results:** use the graphical user interface, or use the SDK to query the best-performing models programmatically.
4. **Register the model:** once satisfied with a model, we register it in the model registry.
5. **Deploy models:** we deploy a model for inference to a productive system, typically equipped with a RESTful API that we can use to obtain predictions from the model.

**Databricks**

Processing massive amounts of data requires parallel computation on several cluster nodes. Spark is a well-known and widely used solution for data processing in data-intensive projects. It is open source, but the downside lies in its maintenance and operation overhead: it might not be that simple to set up a Spark cluster and keep it running on custom machines. If we lack the resources to do this, we can use Databricks.

Databricks is an overarching technology that many cloud service providers (CSPs) can host. Databricks was founded in 2013 by some of the founders of Spark, so it is not surprising that the core services offered by Databricks are managed Spark clusters. The company is active in open source and hosts the Data and AI Summit yearly. At its core, it offers managed Spark clusters that can be hosted on Azure, AWS, GCP, etc. Billing models differ depending on the cloud platform of choice, but in general Databricks Units (DBUs) are used, calculated from the machines, processing, and storage that are used. We can also use the Community Edition at no cost.
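The conduct-track-evaluate-register steps of the MLflow workflow above can be illustrated with a small, dependency-free Python stand-in. To be clear, this is a conceptual sketch with made-up names and numbers, not the real MLflow API:

```python
# Conceptual stand-in for the MLflow workflow steps above (NOT the real MLflow API).
runs = []

def track_run(params, metrics):
    """'Conduct and track runs': log parameters and model metrics for one run."""
    runs.append({"params": params, "metrics": metrics})

track_run({"learning_rate": 0.01}, {"rmse": 0.48})
track_run({"learning_rate": 0.001}, {"rmse": 0.42})

# 'Evaluate results': query the best-performing run programmatically
best = min(runs, key=lambda r: r["metrics"]["rmse"])

# 'Register the model': keep the winner in a centralized registry
registry = {"demo-model": {"version": 1, "params": best["params"]}}

# 'Deploy models' would then serve the registered model behind a RESTful API
print(registry["demo-model"]["params"])  # {'learning_rate': 0.001}
```

With the real MLflow, the tracking calls map to `mlflow.log_param` and `mlflow.log_metric`, and the registry is MLflow's model registry.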
The managed Spark clusters are at the core of Databricks, but there are additional features designed for data-intensive projects:

**Lakehouse:** describes the unified data storage that comes with Databricks, designed for structured and unstructured data, so it can be considered a mixture of a data lake and a data warehouse. It uses files to store the data in a Delta Lake.

**Delta Lake:** the data storage used in Databricks. It is a distributed data store, but thanks to an elaborate metadata mechanism, Delta Lake is ACID-compliant.

**Delta Live Tables (DLT):** they automatically propagate underlying data updates and provide a GUI to design and manage DLT pipelines straightforwardly. We can also set up data quality checks in DLT that are automatically conducted regularly.

**Delta Engine:** a query optimizer that automatically and periodically optimizes data queries depending on the data access pattern.

**Unity Catalog:** with Unity Catalog, Databricks offers an easy-to-use tool for data governance. We can use the graphical user interface of Unity Catalog or SQL queries to perform data governance tasks; for example, we can set role-based access control for a specific dataset.

Data in the Delta Lake is structured by processing maturity into bronze, silver, and gold tables:

**Bronze tables:** contain raw data as it was loaded from external sources.

**Silver tables:** contain processed data, including established joins.

**Gold tables:** contain completely pre-processed data, ready for specific use cases like machine learning.

We can use Python, R, SQL, and Scala to develop the ML and data processing logic in scripts and notebooks, or use remote compute targets.

**Analysis components**

**Databricks SQL:** similar to Spark SQL; lets us use the SQL language to query our data.

**Databricks data science and engineering environment:** comes with a prepared environment for typical data science and data engineering use cases.
**Databricks machine learning environment:** uses MLflow to track versions and deploy machine learning models.

**Programming languages:** Python, R, SQL, Scala, similar to Spark.

**AutoML:** automatically performs feature selection, algorithm choice, hyperparameter tuning, etc.

## Google Data Science and ML services

Some services give us maximum control and flexibility over the environment, leaving us with some responsibilities such as security patches and environment maintenance. E.g., we can use Compute Engine, Google's cloud-based virtual machines, for our data science projects. Using containerization instead of VMs, we can develop our containerized application (e.g. for model training or inference) on a local machine and then deploy that container to the cloud:

**Kubernetes Engine:** the cloud-based container orchestration service on GCP.

**Deep Learning VM:** specialised VMs coming with GPU support and pre-installed libraries that are typically used in data science projects.

All cloud providers offer specialized ML services, giving us more comfort and managing some tedious maintenance tasks for us, but also taking away a little bit of control and flexibility. On the Google platform, the specialized ML service is called Cloud AI, with Vertex AI being the main component. These services include:

- Dataflow
- Composer
- Dataproc
- BigQuery
- Google Cloud AI for ML training etc.
- Google Cloud console
- Vertex AI Workbench (formerly Datalab)
- AutoML
- API endpoints
- Visual reporting

**Ready-to-use services on GCP:** pre-trained machine learning models hosted on GCP that can be used by calling standardized RESTful APIs to obtain model predictions. E.g., we can use the Speech-to-Text API to send a sound file to that service and receive the transcribed text.
- Natural language AI - Teachable machine - Dialogflow - Translations - Speech-to-text - Text-to-speech - Timeseries insights API - Vision AI - Video AI ## Data science and ML services on AWS Same as on GCP, there are services maximizing control and others suitable for more comfort and instant application. Example, Cloud-based VM called EC2. Deploying containers to elastic container service ECS/Elastic kubernetes service EKS And Elastic map reduce EMR **Specialized ML Service** It is called **SageMaker: ** Sagemaker is a service family of an entire collection of sub-services dedicated to supporting typical DS and ML projects. eg, there are graphical user interfaces, auxiliary services for data wrangling, data labeling, prepared scripts and templates, and also auto ML features. - Notebook instances - Data labeling - Data wrangler - Feature Store - Clarify - Pipelines - Studio - Jumpstart - Canvas - Autopilot **Ready to use services on AWS:** Translate, transcribe, computer vision, and other services being pre-trained ML models that are callable for restful APIs for inference. The computer vision API here includes: - Comprehend - Rekognition - Lookout for vision, panorama - Textract - A2I - Personalize - Translate, Transcribe - Polly, Lex - Forecast - Fraud Detector - Lookout for Metrics - Kendra - Auxillary services are DevOps Guru and Code Guru. ## Data science and ML services on Microsoft Azure. **VM:** For maximum control and flexibility, we can use Azure VMs to decide how to create and manage our development environment. **ACI/AKS:** we can use Azure container instance ACI for containerized model training for testing purposes then use Azure Kubernetes service AKS for production settings. **HDInsight:** A managed service for technology. **Databricks:** **Synapse Analytics:** **Specialized ML service.** This is called Azure Machine Learning. 
Similar to AWS sagemaker, Azure ML comprises several sub-services for example the GUI called designer and auxiliary services for data labeling and other features. - Azure machine learning - Studio - Workspace - Notebooks/RStudio - Data labelling - Designer **Ready-to-use services on Azure.** Giving us maximum comfort and usability of pre-trained models callable by restful APIs. These services are called cognitive services and they include: - Computer vision, Face - Azure cognitive service for language - Language understanding models, - Translators, and other services. - QnA Maker, Translator - Speech Service - Anomaly Detector - Content Moderator - Personalizer - Cognitive Services for Big Data Note: This is a 4-fold series on cloud computing, virtualization, containerization, and data processing. Check the remaining 3 articles on my blog. This is the fourth. Here is the link to the third. https://dev.to/michellebuchiokonicha/cloud-computing-platforms-4667 it focuses on cloud platforms. Follow me on Twitter Handle: https://twitter.com/mchelleOkonicha Follow me on LinkedIn Handle: https://www.linkedin.com/in/buchi-michelle-okonicha-0a3b2b194/ Follow me on Instagram: https://www.instagram.com/michelle_okonicha/
michellebuchiokonicha
1,892,649
Understanding the Pyramid in Front-End development
In front-end development, the test pyramid is a strategy used to create comprehensive and efficient...
0
2024-06-18T15:58:48
https://dev.to/godblessed/understanding-the-pyramid-in-front-end-development-5b08
react, frontend, beginners
In front-end development, the **test pyramid** is a strategy used to create comprehensive and efficient test suites. The concept of the test pyramid helps developers understand the right balance and number of tests needed to ensure high-quality applications.

At the base of the pyramid are **unit tests**, which are quick to execute and should form the majority of your test suite. These tests focus on small, isolated pieces of code like functions or components. Moving up, we have **integration tests**, which ensure that different units work together as expected. These tests are fewer than unit tests but are crucial for checking the interactions between units. At the top of the pyramid are **end-to-end (e2e) tests**. These simulate real user scenarios, verifying that the entire application works as intended from start to finish. While they provide the highest level of confidence, they are also slower and more expensive to run, so they should be used sparingly.

The front-end test pyramid ensures that developers write a large number of low-level unit tests, some integration tests, and a few high-level end-to-end tests. This approach leads to a robust and maintainable codebase with faster feedback loops for developers. By adhering to this testing strategy, teams can prevent over-reliance on any single type of test and maintain a balanced, effective approach to quality assurance in front-end development.

**Best Practices and Tools for Implementing the Front-End Test Pyramid**

When implementing the front-end test pyramid, it's important to follow best practices to ensure that your testing strategy is effective and sustainable. Here are some key practices to consider:

1. **Write Testable Code**: Design your code in a way that makes it easy to test. This often means adhering to principles like single responsibility and modularity.
2. **Prioritize Coverage**: Aim for high coverage with unit tests, as they are the foundation of your test suite. Use coverage tools to identify untested parts of your code.
3. **Mock Dependencies**: For integration tests, mock external dependencies to focus on the interaction between units.
4. **Automate e2e Tests**: Automate your end-to-end tests to run them regularly, ensuring that they remain valuable and up-to-date.
5. **Continuous Integration**: Integrate testing into your CI/CD pipeline to catch issues early and often.

Several tools can help you implement the front-end test pyramid effectively:

- **Jest**: A delightful JavaScript testing framework with a focus on simplicity, often used for unit and snapshot testing.
- **Cypress**: A next-generation front-end testing tool built for the modern web, suitable for end-to-end testing.
- **Selenium**: An industry-standard tool for web application testing across various browsers and platforms.

By following these best practices and utilizing appropriate tools, you can build a front-end test suite that is robust, maintainable, and provides confidence in the quality of your application.

**Code Examples for Each Level of the Front-End Test Pyramid**

Here are some simple code examples for each level of the front-end test pyramid:

1. **Unit Test Example with Jest**:

```javascript
// A simple unit test for a function that adds two numbers
function add(a, b) {
  return a + b;
}

test('adds 1 + 2 to equal 3', () => {
  expect(add(1, 2)).toBe(3);
});
```

2. **Integration Test Example with React Testing Library**:

```javascript
// An integration test for a React component with its child components
import { render, fireEvent } from '@testing-library/react';
import ParentComponent from './ParentComponent';

test('renders and interacts with child components', () => {
  const { getByText } = render(<ParentComponent />);
  fireEvent.click(getByText('Child Button'));
  expect(getByText('Child Component')).toBeInTheDocument();
});
```

3. **End-to-End Test Example with Cypress**:

```javascript
// An end-to-end test that checks user login functionality
describe('Login Test', () => {
  it('successfully logs in', () => {
    cy.visit('/login');
    cy.get('input[name=username]').type('user');
    cy.get('input[name=password]').type('password');
    cy.get('button[type=submit]').click();
    cy.contains('Welcome back').should('be.visible');
  });
});
```

**Conclusion: The Significance of the Front-End Test Pyramid**

The front-end test pyramid serves as a guiding principle for developers to create effective, scalable, and maintainable test suites. By focusing on a large number of unit tests, supplemented by integration tests, and a few critical end-to-end tests, teams can ensure comprehensive coverage and quick feedback loops.

Adopting this structured approach to testing allows for early detection of issues, reduces the cost of fixing bugs, and ultimately leads to the delivery of a more reliable and user-friendly application. The front-end test pyramid is not just a testing strategy; it's a commitment to quality that resonates through every line of code.

As front-end technologies continue to evolve, the principles of the test pyramid remain relevant, guiding developers towards best practices in testing and quality assurance. Embrace the pyramid, and build your way to a robust front-end architecture.
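Best practice 3 above ("Mock Dependencies") deserves its own snippet. As a framework-agnostic illustration — `createMock` and `getUserName` are invented names for this sketch, and in a real project you would typically reach for `jest.fn()` instead — a mock can be as simple as a function that records its calls:

```javascript
// A hand-rolled mock: a function that records its calls, so a test can
// assert *how* the unit under test interacted with its dependency.
function createMock(returnValue) {
  const calls = [];
  const fn = (...args) => {
    calls.push(args);
    return returnValue;
  };
  fn.calls = calls;
  return fn;
}

// Unit under test: looks a user up through an injected client,
// so the real HTTP client never has to run inside the test.
function getUserName(client, id) {
  const user = client.fetchUser(id);
  return user ? user.name : 'unknown';
}

const fetchUser = createMock({ name: 'Ada' });

console.log(getUserName({ fetchUser }, 42)); // → Ada
console.log(fetchUser.calls);                // → [ [ 42 ] ]
```

This is essentially the structure behind `jest.fn()`: it returns a callable that records its invocations in `mock.calls`, letting integration tests assert on the interaction while the real dependency stays out of the picture.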
godblessed
1,892,655
Freecodecamp's Back End Development and APIs Project.
This is a Node.js (Express) project for freecodecamp's Back End Development and APIs certificate.
0
2024-06-18T16:12:37
https://dev.to/highsoft85/freecodecamps-back-end-development-and-apis-project-3l03
webdev, javascript, backenddevelopment, node
---
title: Freecodecamp's Back End Development and APIs Project.
published: true
description: This is a Node.js (Express) project for freecodecamp's Back End Development and APIs certificate.
tags: webdev, javascript, backenddevelopment, nodejs
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-18 14:20 +0000
---

Most of [freecodecamp](https://www.freecodecamp.org)'s certificate tests only need a simple CodePen project. But starting with the Back End Development and APIs section, the user has to configure a whole project and build it using Gitpod, an online tool. This post is about [freecodecamp's Back End Development and APIs projects](https://www.freecodecamp.org/learn/back-end-development-and-apis/).

The section consists of 5 microservice projects:

- Timestamp Microservice
- Request Header Parser Microservice
- URL Shortener Microservice
- Exercise Tracker
- File Metadata Microservice

Of course, these projects are not difficult for users who took all the courses that freecodecamp provides. Still, I created a Node.js project for [freecodecamp's Back End Development and APIs projects](https://www.freecodecamp.org/learn/back-end-development-and-apis/) to help developers understand and pass them more easily. To simplify things, I integrated the 5 projects into one Express project using routers. Unfortunately, as these projects share some identical API endpoints, you should activate and run them one by one, separately.

The structure of the project is as follows:

- **models** folder: includes two Mongoose models for the Exercise Tracker microservice project (the fourth project)
- **routes** folder: includes 6 Express router files (five for the individual projects and one for home)
- **public** folder: contains data.json (for the third project, the URL shortener) and style.css (I didn't care about styles, as this is about backend API functions)
- **uploads** folder: used by the last project (File Metadata Microservice)
- **views** folder: includes 5 index.html files (one for each project)
- _app.js_: main Express app file (you will work here mostly)
- _server.js_: main entry-point file for the Node project

For example, to build and run the project for the "URL Shortener Microservice" (the third task), do the following:

- First, in _app.js_, change the home page to _views/index3.html_, and uncomment the _require_ statement for the corresponding Express router (_urlShortener_ in this case) and the _app.use_ statement with that router.

```
app.get('/', function(req, res) {
  res.sendFile(process.cwd() + '/views/index3.html');
});

const home = require('./routes/homeRoute');
// const timestamp = require('./routes/timestampRoute');
// const headerParser = require('./routes/headerParserRoute');
const urlShortener = require('./routes/urlShortenerRoute');
// const exTracker = require('./routes/exTrackerRoute');
// const metadataRouter = require('./routes/metadataRoute');

app.use('/api', home);
// app.use('/api', timestamp);
// app.use('/api', headerParser);
app.use('/api', urlShortener);
// app.use('/api', exTracker);
// app.use('/api', metadataRouter);
```

- Second, build and run the project: `npm run start`

For the other projects, do the same thing as above. Here are running screens for each project.

- Timestamp Microservice project
![Screen for Timestamp Microservice project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88ud0e0cepqt804zuubt.png)
- Request Header Parser Microservice project
![Screen for Request Header Parser Microservice project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6u3s3ko93ntdjzgtkvzw.png)
- URL Shortener Microservice project
![Screen for URL Shortener Microservice project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jpdv6elsjz6i2piu2wmk.png)
- Exercise Tracker project
![Screen for Exercise Tracker project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q19rpxo1f47p9dtw2j8e.png)
- File Metadata Microservice project
![Screen for File Metadata Microservice project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/124ciena6sjmisjcqwd4.png)

You can download the full codebase from [my git repository](https://github.com/highsoft85/freecodecamp-backend-api-dev). I hope it will be helpful to you.
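As an illustration of what the first task checks, the core of the Timestamp Microservice can be written as a framework-free function. This is my own sketch of the expected behavior, not code from the repository, and `timestampResponse` is an invented name:

```javascript
// Core logic of a timestamp endpoint, kept framework-free so it is easy
// to test: given the :date route parameter, return { unix, utc } or an
// { error: "Invalid Date" } object.
function timestampResponse(dateParam) {
  const date =
    dateParam === undefined
      ? new Date()                    // no parameter: use "now"
      : /^\d+$/.test(dateParam)
        ? new Date(Number(dateParam)) // all digits: unix milliseconds
        : new Date(dateParam);        // otherwise: a date string

  if (Number.isNaN(date.getTime())) {
    return { error: 'Invalid Date' };
  }

  return { unix: date.getTime(), utc: date.toUTCString() };
}

console.log(timestampResponse('2015-12-25'));
// → { unix: 1451001600000, utc: 'Fri, 25 Dec 2015 00:00:00 GMT' }
```

In an actual Express route handler, the return value would simply be passed to `res.json(...)`.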
highsoft85
1,892,653
Customize Your Windows 10 Startup Sound with J.A.R.V.I.S. Audio
Hello once again, Dev folks! Today we'll cover a feature that many Windows 7 users...
0
2024-06-18T16:12:11
https://dev.to/carlos-cgs/personalize-a-musica-de-inicializacao-do-seu-windows-10-com-audio-do-jarvis-ilj
Hello once again, Dev folks! Today we'll cover a feature that many Windows 7 users relied on and that, unfortunately, is not available natively in Windows 10.

In Windows 7, it was possible to configure custom sounds for several system events, including logon. In Windows 10, this functionality was simplified, and the option to set a custom logon sound was removed. In this article, I'll explain step by step how, with a small VBS script and a few simple actions, you can restore this functionality and customize the Windows 10 startup sound. I'll walk through the whole process, simply and quickly. Let's go.

**Step by Step**

**1. Choose the Music**

First, choose the song you want to play when Windows starts. The music must be in a compatible format, such as MP3.

**2. Put the Music in a Fixed Location**

To make sure the script works without problems, save the music in a folder that will not be moved or deleted. For example, use the path:

```
C:\Música\Jarvis.mp3
```

**3. Create a VBS Script**

Now let's create a VBS script that plays the music when you log on to Windows. Open Notepad, then copy and paste the following code, replacing the path `"C:\Música\BootJarvis.mp3"` with the path where you saved your audio .mp3:

```
Set objShell = CreateObject("WMPlayer.OCX")
Set objMedia = objShell.newMedia("C:\Música\BootJarvis.mp3")
objShell.currentPlaylist.appendItem(objMedia)
objShell.controls.play

Do While objShell.playState <> 1 ' Wait until the song finishes
    WScript.Sleep 100
Loop
```

Save the file as `iniciar.vbs`.

**4. Put the Script in the Startup Folder**

Finally, to make sure the script runs every time Windows starts, we'll place it inside a system folder called shell:startup.

**5. Open the startup folder:**

- Press `Windows + R`, type `shell:startup`, and press `Enter`. This opens the startup folder.
- Move the `iniciar.vbs` file into that folder: drag the `iniciar.vbs` file you created into the startup folder.

**6. Test the Script**

To make sure everything is working, restart the computer or log off and log back on. If everything went well, the chosen music will play when Windows starts.

**Important Notes**

For this to work correctly, make sure the music path in the script is correct and that Windows Media Player is installed on the system. If the music doesn't play, check whether Windows Media Player can play the MP3 file manually, to confirm that your audio file is fully functional.

And that's it, Dev folks! With these simple steps, you can customize the Windows 10 startup sound and start the day with the music you like best. I used the Iron Man audio of waking up Jarvis, which I've made available on my GitHub at the link below, but feel free to customize it with whatever music you prefer.

I hope you enjoyed the tip and that it helped a bit more. Until next time!

_"Let's Spread Knowledge and Share Everything We Learn"_

Here is the GitHub path where I put the audio mentioned above: [Download - BootJarvis.MP3](https://github.com/Carlos-CGS/InteligenciaArtificial-IA/blob/main/Jarvis_1.0/BootJarvis.MP3)

Follow me on LinkedIn: [LinkedIn - Carlos-CGS](https://www.linkedin.com/in/carlos-cgs/)
carlos-cgs
1,892,652
Creating a Tetris with JavaScript III: the board
Insertrix: a slightly different Tetris.
27,594
2024-06-18T16:11:04
https://dev.to/baltasarq/creando-un-tetris-con-javascript-iii-el-tablero-1fbp
spanish, gamedev, javascript, tutorial
---
title: Creating a Tetris with JavaScript III: the board
published: true
series: JavaScript Tetris
description: Insertrix: a slightly different Tetris.
tags: #spanish #gamedev #javascript #tutorial
cover_image: https://upload.wikimedia.org/wikipedia/commons/4/46/Tetris_logo.png
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-18 14:57 +0000
---

In our **Tetris** we need a board on which to place the pieces, let the user move or rotate them, and so on. Just as we used a matrix to represent the pieces, we will use a matrix to represent the board as a whole. At first, the whole board will be empty, but as pieces come down, they will be added to its lower part. That is, when a piece can no longer descend (either because it has reached the bottom of the board, or because it has come to rest on old pieces or fragments of pieces that are now part of the board), we will *transfer* that piece to the board as one more part of it. While a piece is falling, we will have to paint it *on top of* the board, but it will not become part of it (otherwise we could not detect collisions, since we would not know which points belong to the piece and which to the board).

![Empty Tetris board with a bar falling](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nakhajbs0ltn8emaqoyo.jpg)

So, below we can see a game board with 10 rows and 5 columns, its representation, and how we can create it with JavaScript.

![Empty Tetris board.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ae3k71e9zlukrv1jy2pp.jpg)

```javascript
let board = [
    /* Row 0 */ [ 0, 0, 0, 0, 0 ],
    /* Row 1 */ [ 0, 0, 0, 0, 0 ],
    /* Row 2 */ [ 0, 0, 0, 0, 0 ],
    /* Row 3 */ [ 0, 0, 0, 0, 0 ],
    /* Row 4 */ [ 0, 0, 0, 0, 0 ],
    /* Row 5 */ [ 0, 0, 0, 0, 0 ],
    /* Row 6 */ [ 0, 0, 0, 0, 0 ],
    /* Row 7 */ [ 0, 0, 0, 0, 0 ],
    /* Row 8 */ [ 0, 0, 0, 0, 0 ],
    /* Row 9 */ [ 0, 0, 0, 0, 0 ]
];
```

In the end, what we have is a matrix created from a vector of vectors. As with the pieces, the main vector represents the rows, while each of the vectors it contains holds the columns of that row.

The code above is perfectly valid for creating a 10x5 matrix, but... what if we wanted to create a matrix without knowing the number of rows and columns in advance? Well, we can create each vector as an **Array** object, passing it the required number of elements. In addition, we can use the `fill()` method to initialize every element (from now on, we will call each one a *cell*) to zero.

In the end, if the board is called **board**, we will access each cell with a bracketed number representing the row, and another bracketed number representing the column. The first bracketed access returns the vector for that row, while the second number returns the specific cell. Just remember that in JavaScript all **Array**s start at zero. So, to create our board, we could write a function like the following:

```javascript
function createBoard(rows, cols)
{
    let toret = new Array(rows);

    for(let i = 0; i < rows; ++i) {
        toret[ i ] = new Array( cols ).fill( 0 );
    }

    return toret;
}
```

To access a specific cell, we could use a function like this:

```javascript
function cellFrom(board, row, col)
{
    const ROWS = board.length;
    const COLS = board[ 0 ].length;

    if ( row < 0 || row >= ROWS ) {
        throw new RangeError( "row: 0 - " + ROWS + ": " + row + "??" );
    }

    if ( col < 0 || col >= COLS ) {
        throw new RangeError( "col: 0 - " + COLS + ": " + col + "??" );
    }

    return board[ row ][ col ];
}
```

It is true that in the end we are going to use a class, but seeing these standalone functions first probably helps us understand the concept. So, to access the third column of the first row, we would call:

```javascript
console.log( cellFrom( board, 0, 2 ) );
```

We use exceptions to detect the cases in which the requested row or column values are out of range. The standard **RangeError** exception is the appropriate one; we have to pass it an error message, as with any other exception.

![Tetris board with piece remains and a falling piece.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ms5vnduujzk5tzd16y52.jpg)

The situation in the image above would be represented on the board as follows:

```javascript
let board = [
    /* Row 0 */ [ 0, 0, 0, 0, 0 ],
    /* Row 1 */ [ 0, 0, 0, 0, 0 ],
    /* Row 2 */ [ 0, 0, 0, 0, 0 ],
    /* Row 3 */ [ 0, 0, 0, 0, 0 ],
    /* Row 4 */ [ 0, 0, 0, 0, 0 ],
    /* Row 5 */ [ 0, 0, 0, 0, 0 ],
    /* Row 6 */ [ 0, 0, 0, 1, 1 ],
    /* Row 7 */ [ 1, 1, 0, 1, 1 ],
    /* Row 8 */ [ 0, 1, 1, 0, 1 ],
    /* Row 9 */ [ 1, 1, 0, 1, 0 ]
];
```

Questions? Don't hesitate to leave them for me in the comments!

All right, so we can create a new **Board** class that builds and maintains the board with the cells we need.

```javascript
class Board {
    static PIXEL_SIZE = 24;

    _color = "darkgreen";
    _rows;
    _cols;
    _board;

    constructor(rows, cols, color)
    {
        this._rows = rows;
        this._cols = cols;

        if ( color != null ) {
            this._color = color;
        }

        this._board = new Array( rows );

        for(let i = 0; i < rows; ++i) {
            this._board[ i ] = new Array( cols ).fill( 0 );
        }
    }
}
```

With the class above, it is clear how the board is created: right in the class constructor. We also need *getters* that return, for example, the number of rows (*rows*) and columns (*cols*). Besides all that, we will also need a pair of methods that return, and let us change, a given cell.

```javascript
class Board {
    // more things...

    cell(row, col)
    {
        if ( row < 0 || row >= this.rows) {
            throw new RangeError( "row: 0 - " + this.rows + ": " + row + "??" );
        }

        if ( col < 0 || col >= this.cols) {
            throw new RangeError( "col: 0 - " + this.cols + ": " + col + "??" );
        }

        return this._board[ row ][ col ];
    }

    setCell(row, col, val)
    {
        if ( val == null ) {
            val = 1;
        }

        if ( row < 0 || row >= this.rows) {
            throw new RangeError( "row: 0 - " + this.rows + ": " + row + "??" );
        }

        if ( col < 0 || col >= this.cols) {
            throw new RangeError( "col: 0 - " + this.cols + ": " + col + "??" );
        }

        this._board[ row ][ col ] = val;
    }
}
```

Also, as the game progresses, we will have to remove rows from the board (because they have been filled with '1' cells, that is, they are complete) and insert rows at the top of the board (to make up for the rows we removed).

```javascript
class Board {
    // More things...

    insertEmptyRows(numRows)
    {
        for(let i = 0; i < numRows; ++i) {
            this._board.unshift( new Array( this.cols ).fill( 0 ) );
        }
    }

    removeRows(listRows)
    {
        let newBoard = [];

        for(let numRow = 0; numRow < this._board.length; ++numRow) {
            if ( !listRows.includes( numRow ) ) {
                newBoard.push( this._board[ numRow ] );
            }
        }

        this._board = newBoard;
    }
}
```

Well, we have the *core* of the game built. We still need the view that displays the board and the falling pieces, and the game manager that starts and stops the game, makes the pieces descend, and pays attention to the keys pressed by the user. In the next installment we will look at the **Canvas** class, which will be paired with an HTML **Canvas** element and will take care of painting the board.
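To see `removeRows` in action, the game loop needs a way to find out which rows are complete. A possible helper for that, working on the raw matrix for simplicity — `findFullRows` is my own sketch, not part of the article's `Board` class:

```javascript
// Returns the indices of the rows whose cells are all non-zero,
// i.e. the completed rows that should be removed from the board.
function findFullRows(board)
{
    let toret = [];

    for(let i = 0; i < board.length; ++i) {
        if ( board[ i ].every( (cell) => cell !== 0 ) ) {
            toret.push( i );
        }
    }

    return toret;
}

let board = [
    [ 0, 0, 0, 0, 0 ],
    [ 1, 1, 1, 1, 1 ],
    [ 1, 1, 0, 1, 1 ]
];

console.log( findFullRows( board ) );   // → [ 1 ]
```

The result could then be passed to `removeRows`, followed by `insertEmptyRows` with the length of that result, so the board keeps its original size.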
baltasarq
1,892,415
Why clean code matters
In various consultancy projects, I have noticed lately that the same thing keeps repeating itself:...
0
2024-06-18T16:08:41
https://dev.to/nicolaimagnussen/why-clean-code-matters-2fkn
cleancode, architecture, php, singleresponsibility
In various consultancy projects, I have noticed lately that the same thing keeps repeating itself: clustered code. What do I mean by that? Well, let me put it this way. When you code, you should think about clean code. Yes, I know, one more person out there talking about clean code. Why should you listen to me? Since I was 12 years old, I was interested in computers and how things work. When I became 15 years old, I started watching Pluralsight, a bunch of videos on how to do MVVM, MVC, architecture, etc. I watched tons of videos, but I did not know how to program yet. I followed along, but I remember not understanding a lot of what was going on. In the past years, I've been working as an architect and senior software developer for various companies. My background is in computer engineering and IT apprenticeship. And I try to share with you what I know, as you all know, to help people, but also to get exposure like all the people out there on LinkedIn. Yeah, they don't love writing as much as you think; it's purely a business model. But that doesn't matter, right? So here it goes. Hopefully, one day you'll buy one of my products. ;) Now, let me tell you what I have seen lately in different projects. I think that the reason clean code isn't always applied isn't necessarily because people don't have the knowledge. It's often about strict deadlines and pressure from different projects. If you're a software engineer like me or a project manager, you know there are certain constraints and time pressures needed for a project to be successful. In order to deliver to the client, and even when working in-house, you face deadlines and different stakeholders. Companies often operate on a subscription model where clients expect new features regularly. This creates a lot of challenges. 
Developers and project planners need to keep the project moving forward without falling into the trap of architectural debt because they didn't have enough time to think through the solution properly. Once that problem is there, it's really hard to go back and fix it. From my experience, people don't often go back to refactor their projects—at least not the people I know. Let me know if you're different. There are various things you can do to refactor, and it helps a lot, but the problem is that it's not prioritized. If the code is working and the client is happy, refactoring isn't on the top of the list.

But let's think two or three years ahead. What will happen once the code becomes more and more clustered? You might end up hiring a lot of developers to overhaul the monolithic architecture into a microservices architecture, which costs a lot of money. This is why you should think about clean code—not just when you start a project, not just when you wake up, but all the time. Because eventually, it will come back to bite you if you don't apply it.

**Practical Strategies for Clean Code**

- **Consistent Code Reviews**: Regular code reviews ensure adherence to coding standards and catch potential issues early.
- **Automated Testing**: Implementing automated testing, including unit tests, integration tests, and end-to-end tests, helps identify problems before they make it into production.
- **Refactoring Regularly**: Set aside time in your project schedule specifically for refactoring to prevent technical debt from accumulating and to keep your codebase maintainable.
- **Adopting SOLID Principles**: The SOLID principles (Single Responsibility, Open-Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion) provide a framework for writing clean and maintainable code.
- **Clear Documentation**: Writing clear and concise documentation helps new developers understand the codebase more quickly and reduces the likelihood of introducing errors.
- **Pair Programming**: Pair programming allows two developers to work together on the same code, catching mistakes early and sharing knowledge among team members.

**Long-Term Benefits of Clean Code**

- _Reduced Maintenance Costs_: Clean code is easier to maintain, reducing time and money spent on fixing bugs and implementing new features.
- _Enhanced Readability and Understandability_: A clean codebase is easier to read and understand, crucial for onboarding new developers and for long-term project sustainability.
- _Improved Performance_: Well-structured code leads to better performance by avoiding unnecessary complexity and optimizing resource usage.
- _Greater Scalability_: Clean code allows for easier scaling of applications, simplifying the process of adding new features and adapting to changing requirements.
- _Increased Developer Satisfaction_: Working with clean code reduces frustration and increases job satisfaction for developers, leading to higher productivity and lower turnover rates.

**Example of Messy Code**

```php
<?php

class User extends security\Session
{
    protected $app;

    public function __construct($app)
    {
        $this->app = $app;
    }

    public function addSkill(Application $app, Request $request)
    {
        $userInput['id'] = $request->request->get('id', null);
        $userInput['id'] = preg_replace("/[^0-9,.]/", "", $userInput['id']);

        $app['checkpoint']->minimumRole(1);

        $user = $app['session']->get('user', []);
        $userId = $user['profile']['easyJobAddressId'];

        if ($userInput['id'] === null) {
            return $app->json(['ok' => true]);
        }

        $app['dbs']['appMySql']->insert('skills', [
            'skillId' => $userInput['id'],
            'userId' => $userId,
            'rank' => 0
        ]);

        return $app->json(['ok' => true]);
    }
}
```

**Refactored Code**

The refactored code adheres to clean code principles by breaking down responsibilities, using dependency injection, and following SOLID principles.
**Dependency Injection and Constructor**

```php
public function __construct(
    UserRoleService $userRoleService,
    RequestStack $requestStack,
    UserRepository $userRepository,
    EasyJobServiceInterface $easyJobService,
    SkillsRepository $skillsRepository,
    AppDataService $appDataService
) {
    $this->userRoleService = $userRoleService;
    $this->requestStack = $requestStack;
    $this->userRepository = $userRepository;
    $this->easyJobService = $easyJobService;
    $this->skillsRepository = $skillsRepository;
    $this->appDataService = $appDataService;
}
```

By injecting dependencies, we ensure that each class has a single responsibility and can be easily tested and maintained.

**Single Responsibility for Adding Skills**

```php
#[Route('/profile/experience/add', name: 'profile_experience_add', methods: ['POST'])]
public function addExperience(Request $request): JsonResponse
{
    $this->denyAccessUnlessGranted('ROLE_USER');

    $skillId = $request->request->get('id');
    if (!is_numeric($skillId)) {
        return $this->json(['status' => 'error', 'message' => 'Invalid skill ID']);
    }

    $userId = $this->getUser()->getId();
    $result = $this->appDataService->addSkillToUser($userId, (int) $skillId);

    return $this->json(['status' => 'ok', 'added' => $result]);
}
```

Here, we use a dedicated method to handle the addition of skills. It ensures validation and follows a clean, concise structure.

**Separation of Concerns**

```php
public function index(): Response
{
    $user = $this->getUser();
    $userId = $user->getId();

    $allSkills = [90, 10, 11, 12, 13, 20, 21, 22, 23, 30, 31];
    $skills = array_fill_keys($allSkills, 0);

    $userSkills = $this->appDataService->getSkillsByUserId($userId);
    foreach ($userSkills as $skill) {
        $skillId = $skill->getSkillId();
        if (array_key_exists($skillId, $skills)) {
            $skills[$skillId] = 1;
        }
    }
}
```

Notice how we use appDataService to decouple the system. By separating concerns, we keep each method focused on a single task, making the code easier to read and maintain.
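The pattern is not PHP-specific. As a hypothetical, framework-free JavaScript sketch of the same idea (the `SkillController` class and the injected service are invented for illustration): the handler validates its input and delegates storage to an injected service, so it can be tested with a stub instead of a real database.

```javascript
// The storage service is injected, so the controller keeps a single
// responsibility: validate the input and delegate the write.
class SkillController {
  constructor(skillService) {
    this.skillService = skillService; // injected dependency
  }

  addSkill(userId, rawSkillId) {
    const skillId = Number(rawSkillId);

    if (!Number.isInteger(skillId) || skillId < 0) {
      return { status: 'error', message: 'Invalid skill ID' };
    }

    this.skillService.addSkillToUser(userId, skillId);
    return { status: 'ok' };
  }
}

// A stub service is enough to test the controller in isolation.
const calls = [];
const controller = new SkillController({
  addSkillToUser: (userId, skillId) => calls.push([userId, skillId]),
});

console.log(controller.addSkill(7, '42')); // → { status: 'ok' }
console.log(calls);                        // → [ [ 7, 42 ] ]
```

Because the dependency arrives through the constructor, swapping the stub for a real repository in production requires no change to the controller itself.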
**Conclusion**

In conclusion, always think about clean code. It might not seem urgent now, but neglecting it can lead to significant problems down the line. Prioritizing clean code will save time, money, and headaches in the future. Refactoring regularly and adhering to coding standards are key to maintaining a healthy codebase. Remember, the effort you put into writing clean code today will pay off in the long run, making your projects more scalable, maintainable, and enjoyable to work on.
nicolaimagnussen
1,891,446
Using PocketBase to build a full-stack application
Written by Rahul Padalkar✏️ PocketBase is an open source package that developers can leverage to...
0
2024-06-18T16:06:28
https://blog.logrocket.com/using-pocketbase-build-full-stack-app
go, webdev
**Written by [Rahul Padalkar](https://blog.logrocket.com/author/rahulpadalkar/)✏️**

PocketBase is an open source package that developers can leverage to spin up a backend for their SaaS and mobile applications. It’s written in Go, so it’s more performant under heavy loads compared to Node.js. You can use PocketBase to build full-stack applications quickly — it provides many essential features out of the box, saving a lot of developer time and effort.

In this tutorial-style post, let’s take a look at how we can leverage the power of PocketBase to build a forum-like web application. You can find the [code for our demo project here](https://github.com/rahulnpadalkar/forum-pocketbase) and follow along.

## Overview of PocketBase and our demo app

PocketBase provides many benefits to developers working on full-stack apps. For example, it:

* Supports 15+ OAuth2 providers, including Google, Apple, Facebook, GitHub, and more
* Provides a real-time database with support for subscriptions
* Is extensible. We can intercept methods and run custom business logic in JavaScript or Go
* Provides a Dart and JavaScript SDK, so it can be integrated into Flutter or React Native applications along with web applications
* Provides an admin dashboard out of the box for managing and monitoring the backend
* Supports file storage without writing any extra code

We’ll see these benefits and more in action as we build our demo app. In our forum application, users will be able to:

* Join using their GitHub accounts
* Post their thoughts
* Comment on other posts
* Update or delete their own comments and posts
* Receive notifications when someone comments on their post

Let’s dive into our tutorial.

## Setting up PocketBase

PocketBase is distributed as an executable for all major operating systems. Setting it up is very easy. Head to [the PocketBase docs](https://pocketbase.io/docs/) and download the executable for the OS that you’re on.
It will download a zip file, which you must unzip to access the executable:

![Screenshot Of Pocketbase Docs Showing Executable Download Options For Various Operating Systems](https://blog.logrocket.com/wp-content/uploads/2024/06/img1-PocketBase-docs-download-options-executable.png)

Run the executable with the following command:

```shell
./pocketbase serve
```

This command will start PocketBase. You should see this printed in the terminal window:

![Message Printed On Terminal Window After Pocketbase Has Started Successfully](https://blog.logrocket.com/wp-content/uploads/2024/06/img2-Message-printed-terminal-window-PocketBase-started-successfully.png)

We will explore the Admin UI in the later sections of this post.

## Creating a GitHub application

We’ll be using GitHub OAuth2 to onboard users onto our forum. To [integrate GitHub OAuth2](https://blog.logrocket.com/implement-oauth-2-0-node-js/), we first need to create an OAuth application. Head over to [GitHub’s Developer Settings](https://github.com/settings/developers) (you must be logged in to GitHub) and click on **OAuth Apps** on the sidebar:

![Github Developer Settings Open To Tab For Oauth App Setup](https://blog.logrocket.com/wp-content/uploads/2024/06/img3-GitHub-Developer-Settings-OAuth-app-setup.png)

Then, click the **New OAuth App** button (or the **Register a new application** button, if you’ve never created an OAuth app before) and fill in the form:

![Form To Create New Oauth App](https://blog.logrocket.com/wp-content/uploads/2024/06/img4-Form-create-new-OAuth-App.png)

In the **Authorization callback URL** field, paste in the following:

```plaintext
http://127.0.0.1:8090/api/oauth2-redirect
```

You can provide any URL you’d like in the **Homepage URL** field. You should paste your application's web address if you’re developing a real application that uses GitHub OAuth2. Once you’ve filled out all the required fields, click on the **Register application** button.
Then, open the application and copy the client ID and client secret. We’ll need these values to enable GitHub as an OAuth2 provider in the PocketBase Admin UI. Remember that the client secret is only visible once, so make sure to copy it somewhere safe.

## Configuring PocketBase for GitHub OAuth2

Open the PocketBase Admin UI by visiting http://127.0.0.1:8090/_/. You need to create an admin account when using the Admin UI for the first time. Once you’ve created your admin account, log in with your admin credentials, head over to **Settings**, and click on **Auth providers**. From the list, select **GitHub**, add the **Client ID** and **Client secret**, and hit **Save**:

![Pocketbase Settings Open To Modal For Configuring Admin Account Through Github](https://blog.logrocket.com/wp-content/uploads/2024/06/img5-PocketBase-settings-configure-admin-account-GitHub.png)

PocketBase is now configured for GitHub OAuth2.

## Creating and setting up a React project

Now that we have successfully set up PocketBase and our GitHub application, let’s create a React project. We will use [Vite to bootstrap our React project](https://blog.logrocket.com/build-react-typescript-app-vite/). Run this command (note the extra `--`, which npm requires to forward the `--template` flag to Vite):

```bash
npm create vite@latest forum-pocketbase -- --template react-ts
```

Follow the prompts. Once the app has been created, `cd` into the project directory and run this command:

```bash
cd forum-pocketbase
npm i
```

This will install all packages described in `package.json`.
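One small setup step worth doing now: the PocketBase utility we create later reads the server URL from `import.meta.env.VITE_POCKETBASE_URL`, and Vite only exposes environment variables prefixed with `VITE_`. Add a `.env` file at the project root; the value below assumes the default local PocketBase address:

```shell
# .env (project root): Vite exposes only VITE_-prefixed variables
VITE_POCKETBASE_URL=http://127.0.0.1:8090
```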
Now, let’s install [Chakra UI](https://blog.logrocket.com/chakra-ui-adoption-guide/):

```bash
npm i @chakra-ui/react @emotion/react @emotion/styled framer-motion
```

Followed by installing [`react-router-dom`](https://blog.logrocket.com/react-router-v6-guide/#different-routers-react-router-dom-library) and its dependencies:

```bash
npm install react-router-dom localforage match-sorter sort-by
```

Once installed, open `main.tsx` and add the following code:

```typescript
/* ./src/main.tsx */
import React from "react";
import ReactDOM from "react-dom/client";
import "./index.css";
import { createBrowserRouter, RouterProvider } from "react-router-dom";
import { ChakraProvider, Flex } from "@chakra-ui/react";
import Join from "./routes/Join"; // We will define this later
import Home from "./routes/Home"; // We will define this later
import PrivateRoute from "./routes/PrivateRoute"; // We will define this later

const router = createBrowserRouter([
  {
    path: "/join",
    element: <Join />,
  },
  {
    path: "/",
    element: <PrivateRoute />,
    children: [
      {
        index: true,
        element: <Home />,
      },
    ],
  },
]);

ReactDOM.createRoot(document.getElementById("root")!).render(
  <ChakraProvider>
    <Flex flexDirection="column" paddingX="80" paddingY="10" h="100%">
      <RouterProvider router={router} />
    </Flex>
  </ChakraProvider>
);
```

There are two things to note in the code above. First, we wrap our application in the `ChakraProvider` component. This is a required step in setting up Chakra UI. Then, we nest the `RouterProvider` component from React Router inside the `ChakraProvider` component. The `RouterProvider` takes the router as its input and helps with client-side routing. [React Router provides various routers](https://reactrouter.com/en/main/routers/picking-a-router) — here, we’re using the browser router.

We’ve defined two routes in the router config: a `/` root route and a `/join` route. We will define the corresponding components later.
## Creating tables in PocketBase’s Admin UI

Now that we have set up React and PocketBase, let's create the necessary tables using the Admin UI. We will add two tables — a `Posts` table and a `Comments` table:

![Graphic Representing Pocketbase Data Schema For Demo App With Three Collections: Users, Comments, And Posts](https://blog.logrocket.com/wp-content/uploads/2024/06/img6-Graphic-representing-PocketBase-database-schema.png)

The `Posts` table will store all the posts made by members. The `Comments` table will store all the comments made on a post. The `Users` table is created by PocketBase out of the box.

Navigate to http://127.0.0.1:8090/_/ and log in with your admin credentials. Click on the **Collections** icon in the sidebar and click on the **New collection** button:

![Admin Dashboard For Pocketbase App With Modal Open To Create New Collection](https://blog.logrocket.com/wp-content/uploads/2024/06/img7-Admin-dashboard-modal-create-new-collection.png)

Create two collections: `posts` and `comments`.

The `posts` collection should have these fields:

* `post_text` with its type set as `Plain Text`; make it `non-empty`
* `author_id` with its type set as `Relation`; select `Single` and the `Users` collection from the dropdown. This relation means only one user can be associated with one post
* PocketBase also automatically sets up and populates some other fields, such as `id`, `created`, and `updated`

The `comments` collection should have these fields:

* `comment_text` with its type set as `Plain Text`; make it `non-empty`
* `author` with its type set as `Relation`; select `Single` and the `Users` collection from the dropdown
* `post` with its type set as `Relation`; select `Single` and the `Posts` collection from the respective dropdown

This is how the two collections should look after the configuration described above.
First, the `comments` collection:

![Setup Details For Comments Collection](https://blog.logrocket.com/wp-content/uploads/2024/06/img8-Setup-details-comments-collection.png)

Next, the `posts` collection:

![Setup Details For Posts Collection](https://blog.logrocket.com/wp-content/uploads/2024/06/img9-Setup-details-posts-collection.png)

## Setting up access control in PocketBase

With the collections all set up, let’s tweak the access control rules. These rules define who can access the data stored in a collection, as well as how they can do so. By default, all CRUD operations on a collection are admin-only.

To set up access control, click on the gear icon next to the collection name and click on the **API Rules** tab:

![Steps To Open Settings Via Gear Icon To Tweak Access Control Rules For Pocketbase App](https://blog.logrocket.com/wp-content/uploads/2024/06/img10-Steps-tweak-access-control-rules-PocketBase.png)

These are the access rules for the three collections.

First, the `posts` collection access rules:

![Access Rules For Posts Collection](https://blog.logrocket.com/wp-content/uploads/2024/06/img11-Access-rules-posts-collection.png)

Next, the `comments` collection access rules:

![Access Rules For Comments Collection](https://blog.logrocket.com/wp-content/uploads/2024/06/img12-Access-rules-comments-collection.png)

Finally, the `users` collection access rules:

![Access Rules For Users Collection](https://blog.logrocket.com/wp-content/uploads/2024/06/img13-Access-rules-users-collection.png)

We’ve defined two kinds of rules here.
This rule allows only registered and logged-in users to perform any action:

```plaintext
@request.auth.id != ""
```

While this rule allows only the user who created that record to perform any action on it:

```plaintext
@request.auth.id = author_id.id // defined for the posts collection
@request.auth.id = author.id    // defined for the comments collection
id = @request.auth.id           // defined for the users collection
```

PocketBase allows developers to set complex access control rules, with [more than 15 operators available](https://pocketbase.io/docs/api-rules-and-filters/) for you to define those access control rules according to your needs.

## Accessing PocketBase in our app

PocketBase ships with a JavaScript SDK that we can use for seamless communication with the PocketBase server. To install the SDK, run this command:

```bash
npm i pocketbase
```

Once the SDK is installed, let’s create a utility that we can call in our React components to access PocketBase easily:

```typescript
// ./src/pocketbaseUtils.ts
import PocketBase from "pocketbase";

const pb = new PocketBase(import.meta.env.VITE_POCKETBASE_URL);

export function checkIfLoggedIn(): boolean {
  return pb.authStore.isValid;
}

export async function initiateSignUp() {
  await pb.collection("users").authWithOAuth2({ provider: "github" });
}

export function logout() {
  pb.authStore.clear();
}

export function getPb() {
  return pb;
}
```

This utility is handy for accessing the PocketBase instance from anywhere in our app. It also ensures that we make a single connection to the PocketBase server. We describe four functions in the utility above:

* `checkIfLoggedIn`: Tells the caller if the user is logged in. It looks at the `isValid` property on the `authStore` on the PocketBase instance
* `initiateSignUp`: Initiates sign-in with GitHub. It calls the `authWithOAuth2` method, to which we pass the OAuth2 provider.
The callbacks, acknowledgments, and tokens are all handled by PocketBase
* `logout`: Clears the auth info from the `authStore` and logs the user out
* `getPb`: A getter method that returns the PocketBase instance to the caller

## Adding a login screen to our PocketBase app

In this section, we’ll implement a login screen that will look like the below:

![Login Screen For Pocketbase App](https://blog.logrocket.com/wp-content/uploads/2024/06/img14-Login-screen-PocketBase-app.png)

Here’s the code we’ll use to accomplish this feature:

```typescript
/* ./src/routes/Join.tsx */
import { Button, Flex, Heading } from "@chakra-ui/react";
import { initiateSignUp } from "../pocketbaseUtils";
import { useNavigate } from "react-router-dom";

function Join() {
  const navigate = useNavigate();

  async function join() {
    await initiateSignUp();
    navigate("/");
  }

  return (
    <>
      <Flex
        direction="column"
        alignItems="center"
        height="100%"
        justifyContent="center"
      >
        <Heading>PocketBase Forum Application</Heading>
        <Flex justifyContent="space-evenly" width="20%" marginTop="10">
          <Button onClick={join}>Sign In with Github</Button>
        </Flex>
      </Flex>
    </>
  );
}

export default Join;
```

The `<Join/>` component here allows users to log into our forum application with their GitHub account. The `Join` component is mounted on the `/join` path as configured in the React Router config in the previous steps. One thing to note is the `join` function, which runs when the **Sign In with GitHub** button is clicked and in turn calls the `initiateSignUp` function from `pocketbaseUtils`.

## Adding posts to the homepage

Before we start building our UI components, let’s take a look at how the components are structured:

![Component Structure For Pocketbase App](https://blog.logrocket.com/wp-content/uploads/2024/06/img15-Component-structure-PocketBase-app.png)

We have defined two routes in the React Router config: the root `/` route and the `/join` route.
On the root `/` route, we will load the `Home` component. The `/` route is protected or private — i.e., it should only be accessible to logged-in users. So, we add it as a child of the `<PrivateRoute>` component in the React Router config. We will take a quick look at the `PrivateRoute` component first:

```typescript
// ./src/routes/PrivateRoute.tsx
import { Navigate, Outlet } from "react-router-dom";
import { checkIfLoggedIn } from "../pocketbaseUtils";

const PrivateRoute = () => {
  return checkIfLoggedIn() ? <Outlet /> : <Navigate to="/join" />;
};

export default PrivateRoute;
```

This is a pretty straightforward component. It checks if the user is logged in. If yes, then the child component is rendered; if not, then the user is navigated to the `/join` route.

Now let’s take a look at the `Home` component that we render if the user is logged in:

```typescript
/* ./src/routes/Home.tsx */
import { Flex } from "@chakra-ui/react";
import { useEffect, useState } from "react";
import { getPb } from "../pocketbaseUtils";
import { Post } from "../components/Post";
import { RawPost } from "../types";
import { convertItemsToRawPost } from "../utils";
import { SubmitPost } from "../components/SubmitPost";
import { Navigation } from "../components/navigation";

const Home = () => {
  const [posts, setPosts] = useState<RawPost[]>([]);

  useEffect(() => {
    getPosts();
  }, []);

  async function getPosts() {
    const pb = getPb();
    const { items } = await pb
      .collection("posts")
      .getList(1, 20, { expand: "author_id" });
    const posts: RawPost[] = convertItemsToRawPost(items);
    setPosts(posts);
  }

  return (
    <Flex direction="column">
      <Navigation />
      <SubmitPost onSubmit={getPosts} />
      {posts?.map((p) => (
        <Post post={p} key={p.id} />
      ))}
    </Flex>
  );
};

export default Home;
```

In the code above, we get the PocketBase instance from the utility function. Then, we get a list of posts along with the author details. The `getList` function has pagination built in.
Here, we’re fetching page `1` with `20` records per page. We use the `expand` option from PocketBase to get the related tables data — in this case the `users` table.

We have introduced three new components here:

* `Post`: For displaying posts submitted by members of the forum
* `SubmitPost`: For submitting a `Post`
* `Navigation`: For navigating around the app. We will take a look at this component later

Here’s a demo of how the `Post` and `SubmitPost` components would work together:

![Demo Of How Post And Submitpost Components Work Together, With Red Box And Labeled Arrow Pointing To Each Component](https://blog.logrocket.com/wp-content/uploads/2024/06/img16-Demo-Post-SubmitPost-components-together.png)

Let’s quickly take a look at the `Post` component:

```typescript
/* ./src/components/Post.tsx */
import {
  Flex,
  IconButton,
  Image,
  Text,
  Textarea,
  useToast,
} from "@chakra-ui/react";
import { RawPost } from "../types";
import { GoHeart } from "react-icons/go";
import { format } from "date-fns";
import { BiLike } from "react-icons/bi";
import { RiDeleteBin5Line, RiCheckFill } from "react-icons/ri";
import { getPb } from "../pocketbaseUtils";
import { GrEdit } from "react-icons/gr";
import { GiCancel } from "react-icons/gi";
import { useState } from "react";

const pb = getPb();

export const Post = ({ post }: { post: RawPost }) => {
  const [updateMode, setUpdateMode] = useState<boolean>(false);
  const [updatedPostText, setUpdatedPostText] = useState<string>(
    post.post_text
  );
  const toast = useToast();

  async function deletePost() {
    try {
      await pb.collection("posts").delete(post.id);
      toast({
        title: "Post deleted",
        description: "Post deleted successfully.",
        status: "success",
      });
    } catch (e) {
      toast({
        title: "Post deletion failed",
        description: "Couldn't delete the post. Something went wrong.",
        status: "error",
      });
    }
  }

  async function updatePost() {
    try {
      await pb
        .collection("posts")
        .update(post.id, { post_text: updatedPostText });
      toast({
        title: "Post updated",
        description: "Post updated successfully.",
        status: "success",
      });
      setUpdateMode(false);
    } catch (e) {
      toast({
        title: "Post update failed",
        description: "Couldn't update the post. Something went wrong.",
        status: "error",
      });
    }
  }

  return (
    <Flex flexDirection="column" margin="5">
      <Flex flexDirection="column">
        <Flex alignItems="center">
          <Image
            src={`https://source.boringavatars.com/beam/120/${post.author.username}`}
            height="10"
            marginRight="3"
          />
          <Flex flexDirection="column">
            <Text fontWeight="bold">{post.author.username}</Text>
            <Text fontSize="13">{format(post.created, "PPP p")}</Text>
          </Flex>
        </Flex>
      </Flex>
      <Flex marginY="4">
        {updateMode ? (
          <Flex flexDirection="column" flex={1}>
            <Textarea
              value={updatedPostText}
              onChange={(e) => setUpdatedPostText(e.target.value)}
              rows={2}
            />
            <Flex flexDirection="row" marginTop="2" gap="3">
              <IconButton
                icon={<RiCheckFill />}
                aria-label="submit"
                backgroundColor="green.400"
                color="white"
                size="sm"
                onClick={updatePost}
              />
              <IconButton
                icon={<GiCancel />}
                aria-label="cross"
                backgroundColor="red.400"
                color="white"
                size="sm"
                onClick={() => {
                  setUpdateMode(false);
                }}
              />
            </Flex>
          </Flex>
        ) : (
          <Text>{post.post_text}</Text>
        )}
      </Flex>
      <Flex>
        <Flex>
          <IconButton icon={<GoHeart />} aria-label="love" background="transparent" />
          <IconButton icon={<BiLike />} aria-label="like" background="transparent" />
          {post.author_id === pb.authStore.model!.id && (
            <>
              <IconButton
                icon={<RiDeleteBin5Line />}
                aria-label="delete"
                background="transparent"
                onClick={deletePost}
              />
              <IconButton
                icon={<GrEdit />}
                aria-label="edit"
                background="transparent"
                onClick={() => setUpdateMode(true)}
              />
            </>
          )}
        </Flex>
      </Flex>
    </Flex>
  );
};
```

Although this code block is quite long, if you take a closer look, you’ll see that it’s actually a fairly simple React component.
This `Post` component allows us to display post details and lets the user edit and delete their own posts, or interact with them using the `"love"` and `"like"` icons. It also uses toast notifications to alert the user upon successful or failed post deletions and updates. There are three things to note here.

First, `deletePost` uses the `delete` function from the collection to delete the post whose `id` is passed to the function:

```typescript
await pb.collection("posts").delete(post.id);
```

Second, `updatePost` uses the `update` function on the collection. The first argument is the `id` of the post that needs to be updated and the second argument is the update object. Here, we’re updating the post text:

```typescript
await pb.collection("posts").update(post.id, { post_text: updatedPostText });
```

Finally, this condition allows for conditional rendering of the action buttons:

```typescript
post.author_id === pb.authStore.model!.id
```

This condition allows the owner of the post to either delete or update it. Even if a malicious user somehow bypasses this check, they won’t be able to delete or update the post because of the [access rules we set earlier](#setting-up-access-control-pocketbase).
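The `Home` and `Post` components above import a `RawPost` type from `../types` and a `convertItemsToRawPost` helper from `../utils`, neither of which the article lists. A minimal sketch is below; the field names are inferred from how the components use them, so treat the exact shape as an assumption rather than the project's actual code:

```typescript
// Hypothetical sketch of ../src/types.ts and ../src/utils.ts.
// Field names are inferred from how Home and Post use them.
interface RawAuthor {
  username: string;
  email: string;
}

interface RawPost {
  id: string;
  post_text: string;
  author_id: string; // relation field: the author's record id
  author: RawAuthor; // populated from the `expand` data
  created: string;
}

// PocketBase returns expanded relations under `expand`,
// keyed by the relation field name ("author_id" here).
function convertItemsToRawPost(items: any[]): RawPost[] {
  return items.map((item) => ({
    id: item.id,
    post_text: item.post_text,
    author_id: item.author_id,
    author: item.expand?.author_id ?? { username: "unknown", email: "" },
    created: item.created,
  }));
}
```

In the real project these would be exported from their respective files, and `convertItemsToComments` would follow the same pattern for the `comments` collection.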
Now, let’s briefly take a look at the `SubmitPost` component:

```typescript
/* ./src/components/SubmitPost.tsx */
import { Button, Flex, Textarea } from "@chakra-ui/react";
import { useState } from "react";
import { getPb } from "../pocketbaseUtils";
import { useToast } from "@chakra-ui/react";

export const SubmitPost = ({ onSubmit }: { onSubmit: () => void }) => {
  const [post, setPost] = useState("");
  const toast = useToast();

  const submitPost = async () => {
    const pb = getPb();
    try {
      await pb.collection("posts").create({
        post_text: post,
        author_id: pb.authStore.model!.id,
      });
      toast({
        title: "Post submitted.",
        description: "Post successfully submitted",
        status: "success",
        duration: 7000,
      });
      onSubmit();
      setPost("");
    } catch (e: any) {
      toast({
        title: "Post submission failed",
        description: e["message"],
        status: "error",
      });
    }
  };

  return (
    <Flex flexDirection="column" paddingX="20" paddingY="10">
      <Textarea
        rows={4}
        placeholder="What's on your mind?"
        value={post}
        onChange={(e) => setPost(e.target.value)}
      />
      <Flex flexDirection="row-reverse" marginTop="5">
        <Button backgroundColor="teal.400" color="white" onClick={submitPost}>
          Submit
        </Button>
      </Flex>
    </Flex>
  );
};
```

As before, this is a simple React component that provides a text area in the UI for users to write and submit posts. Note that the `submitPost` handler uses the `create` function on the collection to create a post with the entered text and the currently logged-in user as the author:

```typescript
await pb.collection("posts").create({
  post_text: post,
  author_id: pb.authStore.model!.id,
});
```

## Adding comment functionality

What’s the use of posts if no one can comment on them? As a next step, let’s add a comment feature on posts to our forum application.
Here’s how our comment functionality will look:

![Comments Modal Open To Show Individual Comment Components](https://blog.logrocket.com/wp-content/uploads/2024/06/img17-Comments-modal-individual-Comment-components.png)

We’ll create a `Comments` component to set up the modal, while the individual comments will be `Comment` components. To start, let’s modify the `Home` component:

```typescript
/* ./src/routes/Home.tsx */
import { Flex, useDisclosure } from "@chakra-ui/react";
import { useEffect, useState } from "react";
import { getPb } from "../pocketbaseUtils";
import { Post } from "../components/Post";
import { RawPost } from "../types";
import { convertItemsToRawPost } from "../utils";
import { SubmitPost } from "../components/SubmitPost";
import { Navigation } from "../components/navigation";
import Comments from "../components/Comments";
import NewPosts from "../components/NewPosts";

const Home = () => {
  const [posts, setPosts] = useState<RawPost[]>([]);
  const { isOpen, onOpen, onClose } = useDisclosure();
  const [openCommentsFor, setOpenCommentsFor] = useState("");

  const openCommentsModal = (postId: string) => {
    onOpen();
    setOpenCommentsFor(postId);
  };

  useEffect(() => {
    getPosts();
  }, []);

  async function getPosts() {
    const pb = getPb();
    const { items } = await pb
      .collection("posts")
      .getList(1, 20, { expand: "author_id" });
    const posts: RawPost[] = convertItemsToRawPost(items);
    setPosts(posts);
  }

  return (
    <Flex direction="column">
      <Navigation />
      <SubmitPost onSubmit={getPosts} />
      <NewPosts />
      {posts?.map((p) => (
        <Post post={p} key={p.id} openComments={openCommentsModal} />
      ))}
      {isOpen && (
        <Comments isOpen={isOpen} onClose={onClose} postId={openCommentsFor} />
      )}
    </Flex>
  );
};

export default Home;
```

Here we introduced a new component, `Comments`, that accepts the following props:

* `isOpen`: For toggling the visibility of the comments modal
* `onClose`: A callback to execute when the comments modal is closed
* `postId`: To specify the post for which comments need to be shown

We also add an `openComments` prop to the `Post` component. We will modify the `Post` component next:

```typescript
/* ./src/components/Post.tsx */
import {
  Flex,
  IconButton,
  Image,
  Text,
  Textarea,
  useToast,
} from "@chakra-ui/react";
import { RawPost } from "../types";
import { GoHeart, GoComment } from "react-icons/go";
import { format } from "date-fns";
import { BiLike } from "react-icons/bi";
import { RiDeleteBin5Line, RiCheckFill } from "react-icons/ri";
import { getPb } from "../pocketbaseUtils";
import { GrEdit } from "react-icons/gr";
import { GiCancel } from "react-icons/gi";
import { useState } from "react";

const pb = getPb();

export const Post = ({
  post,
  openComments,
}: {
  post: RawPost;
  openComments: (postId: string) => void;
}) => {
  const [updateMode, setUpdateMode] = useState<boolean>(false);
  const [updatedPostText, setUpdatedPostText] = useState<string>(
    post.post_text
  );
  const toast = useToast();

  async function deletePost() {
    try {
      await pb.collection("posts").delete(post.id);
      toast({
        title: "Post deleted",
        description: "Post deleted successfully.",
        status: "success",
      });
    } catch (e) {
      toast({
        title: "Post deletion failed",
        description: "Couldn't delete the post. Something went wrong.",
        status: "error",
      });
    }
  }

  async function updatePost() {
    try {
      await pb
        .collection("posts")
        .update(post.id, { post_text: updatedPostText });
      toast({
        title: "Post updated",
        description: "Post updated successfully.",
        status: "success",
      });
      setUpdateMode(false);
    } catch (e) {
      toast({
        title: "Post update failed",
        description: "Couldn't update the post. Something went wrong.",
        status: "error",
      });
    }
  }

  return (
    <Flex flexDirection="column" margin="5">
      <Flex flexDirection="column">
        <Flex alignItems="center">
          <Image
            src={`https://source.boringavatars.com/beam/120/${post.author.username}`}
            height="10"
            marginRight="3"
          />
          <Flex flexDirection="column">
            <Text fontWeight="bold">{post.author.username}</Text>
            <Text fontSize="13">{format(post.created, "PPP p")}</Text>
          </Flex>
        </Flex>
      </Flex>
      <Flex marginY="4">
        {updateMode ? (
          <Flex flexDirection="column" flex={1}>
            <Textarea
              value={updatedPostText}
              onChange={(e) => setUpdatedPostText(e.target.value)}
              rows={2}
            />
            <Flex flexDirection="row" marginTop="2" gap="3">
              <IconButton
                icon={<RiCheckFill />}
                aria-label="submit"
                backgroundColor="green.400"
                color="white"
                size="sm"
                onClick={updatePost}
              />
              <IconButton
                icon={<GiCancel />}
                aria-label="cross"
                backgroundColor="red.400"
                color="white"
                size="sm"
                onClick={() => {
                  setUpdateMode(false);
                }}
              />
            </Flex>
          </Flex>
        ) : (
          <Text>{post.post_text}</Text>
        )}
      </Flex>
      <Flex>
        <Flex>
          <IconButton icon={<GoHeart />} aria-label="love" background="transparent" />
          <IconButton icon={<BiLike />} aria-label="like" background="transparent" />
          <IconButton
            icon={<GoComment />}
            aria-label="comment"
            background="transparent"
            onClick={() => {
              openComments(post.id);
            }}
          />
          {post.author_id === pb.authStore.model!.id && (
            <>
              <IconButton
                icon={<RiDeleteBin5Line />}
                aria-label="delete"
                background="transparent"
                onClick={deletePost}
              />
              <IconButton
                icon={<GrEdit />}
                aria-label="edit"
                background="transparent"
                onClick={() => setUpdateMode(true)}
              />
            </>
          )}
        </Flex>
      </Flex>
    </Flex>
  );
};
```

Here’s a closer look at the update we made to our `Post` component:

```typescript
<IconButton
  icon={<GoComment />}
  aria-label="comment"
  background="transparent"
  onClick={() => {
    openComments(post.id);
  }}
/>
```

To summarize, we added a comment icon in the `Post` component. When a user clicks on this icon, the `openComments` method passed as a prop to the `Post` component is executed. This then opens the comment modal.
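The comments modal will fetch only the comments for the clicked post by passing a `filter` string to `getList` (the article uses `` filter: `post="${postId}"` ``). Interpolating a value directly into a filter string breaks if the value contains a double quote, so a small escaping helper is a safer pattern. This helper is a hypothetical addition, not part of the article's code:

```typescript
// Hypothetical helper for building a PocketBase filter string.
// Escapes double quotes so the interpolated value cannot
// terminate the quoted literal early.
function byPostFilter(postId: string): string {
  const escaped = postId.replace(/"/g, '\\"');
  return `post="${escaped}"`;
}

// Usage (sketch):
// pb.collection("comments").getList(1, 10, {
//   filter: byPostFilter(postId),
//   expand: "author",
// });
```

Recent versions of the JavaScript SDK also ship a built-in filter-building helper for the same purpose.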
Now that we have set the trigger for opening the comments, let’s take a look at the `Comments` component:

```typescript
/* ./src/components/Comments.tsx */
import {
  Flex,
  Textarea,
  Button,
  Modal,
  ModalOverlay,
  ModalContent,
  ModalHeader,
  ModalBody,
  ModalCloseButton,
  useToast,
} from "@chakra-ui/react";
import { useEffect, useState } from "react";
import { getPb } from "../pocketbaseUtils";
import { convertItemsToComments } from "../utils";
import { RawComment } from "../types";
import Comment from "./Comment";

const pb = getPb();

export default function Comments({
  isOpen,
  onClose,
  postId,
}: {
  isOpen: boolean;
  onClose: () => void;
  postId: string;
}) {
  const [comment, setComment] = useState<string>("");
  const [comments, setComments] = useState<RawComment[]>([]);
  const toast = useToast();

  const submitComment = async () => {
    try {
      await pb.collection("comments").create({
        comment_text: comment,
        author: pb.authStore.model!.id,
        post: postId,
      });
      loadComments();
      toast({
        title: "Comment Submitted",
        description: "Comment submitted successfully",
        status: "success",
      });
      setComment("");
    } catch (e) {
      toast({
        title: "Comment Submission",
        description: "Comment submission failed",
        status: "error",
      });
    }
  };

  async function loadComments() {
    const result = await pb
      .collection("comments")
      .getList(1, 10, { filter: `post="${postId}"`, expand: "author" });
    const comments = convertItemsToComments(result.items);
    setComments(comments);
  }

  useEffect(() => {
    loadComments();
  }, []);

  return (
    <Modal isOpen={isOpen} onClose={onClose} size="xl">
      <ModalOverlay />
      <ModalContent>
        <ModalHeader>Comments</ModalHeader>
        <ModalCloseButton />
        <ModalBody>
          <Flex flexDirection="column">
            <Flex flexDirection="column">
              <Textarea
                value={comment}
                onChange={(e) => setComment(e.target.value)}
                placeholder="What do you think?"
              />
              <Flex flexDirection="row-reverse">
                <Button
                  backgroundColor="teal.400"
                  color="white"
                  marginTop="3"
                  onClick={submitComment}
                >
                  Comment
                </Button>
              </Flex>
            </Flex>
            {comments.map((c) => (
              <Comment comment={c} key={c.id} loadComments={loadComments} />
            ))}
          </Flex>
        </ModalBody>
      </ModalContent>
    </Modal>
  );
}
```

Four things need to be noted here:

* To load the comments, we’ve defined a `loadComments` function. Just like posts, we use the `getList` function available on the `comments` collection. Similar to our `Posts` component, we use the `expand` option to get information about the author of the comment. Additionally, we pass in a filter that filters comments by `postId`
* We call the `loadComments` function inside [the `useEffect` Hook](https://blog.logrocket.com/useeffect-react-hook-complete-guide/)
* We have defined a `submitComment` function that creates a new comment in the `comments` collection. Upon successful submission of a comment, we call the `loadComments` function again to fetch all the comments made on the post
* We use the `Comment` component to display comments in the modal.
This `Comment` component accepts the comment object, a `key` to identify comments uniquely (required by React), and the `loadComments` function.

Now, let's quickly take a look at the `Comment` component:

```typescript
/* ./src/components/Comment.tsx */
import {
  Flex,
  IconButton,
  Image,
  Text,
  Textarea,
  useToast,
} from "@chakra-ui/react";
import { RawComment } from "../types";
import { format } from "date-fns";
import { useState } from "react";
import { GrEdit } from "react-icons/gr";
import { GiCancel } from "react-icons/gi";
import { RiDeleteBin5Line, RiCheckFill } from "react-icons/ri";
import { getPb } from "../pocketbaseUitl";

const pb = getPb();

export default function Comment({
  comment,
  loadComments,
}: {
  comment: RawComment;
  loadComments: () => void;
}) {
  const toast = useToast();
  const [updateMode, setUpdateMode] = useState<boolean>(false);
  const [updatedCommentText, setUpdatedCommentText] = useState<string>(
    comment.comment_text
  );

  async function deleteComment() {
    try {
      await pb.collection("comments").delete(comment.id);
      toast({
        title: "Comment deleted",
        description: "Comment deleted successfully.",
        status: "success",
      });
      loadComments();
    } catch (e) {
      toast({
        title: "Comment deletion failed",
        description: "Couldn't delete the comment. Something went wrong.",
        status: "error",
      });
    }
  }

  async function updateComment() {
    try {
      await pb
        .collection("comments")
        .update(comment.id, { comment_text: updatedCommentText });
      toast({
        title: "Comment updated",
        description: "Comment updated successfully.",
        status: "success",
      });
      loadComments();
      setUpdateMode(false);
    } catch (e) {
      toast({
        title: "Comment update failed",
        description: "Couldn't update the comment. Something went wrong.",
        status: "error",
      });
    }
  }

  return (
    <Flex flexDirection="column">
      <Flex>
        <Image
          src={`https://source.boringavatars.com/beam/120/${comment.author.username}`}
          height="10"
          marginRight="3"
        />
        <Flex flexDirection="column">
          <Text fontWeight="bold">{comment.author.username}</Text>
          <Text fontSize="12">{format(comment.created, "PPP p")}</Text>
        </Flex>
      </Flex>
      <Flex>
        {updateMode ? (
          <Flex marginY="3" flex="1">
            <Textarea
              value={updatedCommentText}
              onChange={(e) => setUpdatedCommentText(e.target.value)}
              rows={1}
            />
            <Flex flexDirection="row" marginTop="2" gap="3">
              <IconButton
                icon={<RiCheckFill />}
                aria-label="submit"
                backgroundColor="green.400"
                color="white"
                size="sm"
                onClick={updateComment}
              />
              <IconButton
                icon={<GiCancel />}
                aria-label="cross"
                backgroundColor="red.400"
                color="white"
                size="sm"
                onClick={() => {
                  setUpdateMode(false);
                }}
              />
            </Flex>
          </Flex>
        ) : (
          <Flex marginY="3" flex="1">
            <Text>{comment.comment_text}</Text>
          </Flex>
        )}
        {comment.author.email === pb.authStore.model!.email && (
          <Flex flexDirection="row">
            <IconButton
              icon={<RiDeleteBin5Line />}
              aria-label="delete"
              backgroundColor="transparent"
              onClick={deleteComment}
            />
            <IconButton
              icon={<GrEdit />}
              aria-label="edit"
              backgroundColor="transparent"
              onClick={() => setUpdateMode(true)}
            />
          </Flex>
        )}
      </Flex>
    </Flex>
  );
}
```

This component is very similar to the `Post` component in terms of functionality. We use the `delete` and `update` functions on the `comments` collection to perform actions on the record. Also, we allow only the owner of the comment to perform these actions on the comment.

## Adding a notification system

PocketBase offers [out-of-the-box support for subscriptions](https://pocketbase.io/docs/api-realtime). This allows users to listen to changes made to a collection.
Let’s try to build a notification system with this feature, which we’ll add to our `Navigation` component:

![Notification Icon In Pocketbase App's Navigation Component With No Notifications Shown](https://blog.logrocket.com/wp-content/uploads/2024/06/img18-Notification-icon-Navigation-component.png)

Let’s add a subscription to the `Navigation` component so that whenever someone comments on a post made by the logged-in user, the notification counter in the nav bar increases by one:

![Example Notification Icon With Counter Reading One](https://blog.logrocket.com/wp-content/uploads/2024/06/img19-Example-notification-counter-reading-one.png)

Here’s the code for our `Navigation` component, updated to include the notification feature:

```typescript
/* ./src/components/navigation */
import { Flex, Text, Button, IconButton, Image } from "@chakra-ui/react";
import { getPb, logout } from "../pocketbaseUitl";
import { useNavigate } from "react-router-dom";
import { BiBell } from "react-icons/bi";
import { useEffect, useState } from "react";

const pb = getPb();

export const Navigation = () => {
  const navigate = useNavigate();
  const [notificationCount, setNotificationCount] = useState<number>(0);

  const logoutUser = () => {
    logout();
    navigate("/join");
  };

  useEffect(() => {
    pb.collection("comments").subscribe(
      "*",
      (e) => {
        if (e.record.expand?.post.author_id === pb.authStore.model!.id) {
          setNotificationCount(notificationCount + 1);
        }
      },
      { expand: "post" }
    );
    return () => {
      pb.collection("comments").unsubscribe();
    };
  }, []);

  return (
    <Flex direction="row" alignItems="center">
      <Text fontWeight="bold" flex="3" fontSize="22">
        PocketBase Forum Example
      </Text>
      <Flex>
        <Flex alignItems="center" marginX="5">
          <Button backgroundColor="transparent">
            <BiBell size="20" />
            {notificationCount && (
              <Flex
                borderRadius="20"
                background="red.500"
                p="2"
                marginLeft="2"
                height="60%"
                alignItems="center"
              >
                <Text color="white" fontSize="12">
                  {notificationCount}
                </Text>
              </Flex>
            )}
          </Button>
        </Flex>
        <Button onClick={logoutUser} colorScheme="red" color="white">
          Logout
        </Button>
        <Image
          marginLeft="5"
          height="10"
          src={`https://source.boringavatars.com/beam/120/${
            pb.authStore.model!.username
          }`}
        />
      </Flex>
    </Flex>
  );
};
```

The `useEffect` block in the component above is of particular interest to us:

```typescript
useEffect(() => {
  pb.collection("comments").subscribe(
    "*",
    (e) => {
      if (e.record.expand?.post.author_id === pb.authStore.model!.id) {
        setNotificationCount(notificationCount + 1);
      }
    },
    { expand: "post" }
  );
  return () => {
    pb.collection("comments").unsubscribe();
  };
}, []);
```

We use the `subscribe` method on the collection to listen to changes made to the `comments` collection. We want to show a notification when a new comment is added to a post made by the logged-in user, so we subscribe to all changes by passing `*` as the first argument to the `subscribe` function. When a new record gets added to the `comments` collection, the server sends an event to all subscribers with the newly created record as the payload. We check whether the comment was made on a post authored by the logged-in user; if so, we increment the notification counter and show it in the navigation bar. We use the `useEffect` Hook with no dependencies to ensure that the client subscribes only once, and we dispose of the subscription when the component is unmounted.

## Conclusion

If you’re trying to build an MVP or quickly test out a business idea to see if it has any legs, PocketBase can be a huge time and effort saver. Most of the features required to build an MVP — like authentication, file uploads, real-time subscriptions, and access control rules — are baked into the PocketBase framework. Also, since PocketBase is Go-based, it performs better than Node.js under heavy loads. Overall, if you’re looking to move fast and experiment with some business ideas, PocketBase can help you do just that with minimal effort.
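One caveat about the subscription handler shown earlier: because the callback is registered once (inside a `useEffect` with an empty dependency array), `setNotificationCount(notificationCount + 1)` closes over the initial value of `notificationCount`. The plain-JavaScript sketch below reproduces that stale-closure shape with hypothetical names — it is an illustration of the pitfall, not code from the tutorial's repo:

```javascript
// A long-lived callback that captures a snapshot of state, the way a
// once-registered subscription callback captures the first render's state.
let count = 0;

function makeHandler() {
  const captured = count; // captured once, never refreshed
  return () => {
    count = captured + 1; // always writes snapshot + 1
  };
}

const handler = makeHandler();
handler();
handler();
handler();
console.log(count); // 1, not 3 — every call recomputes 0 + 1
```

In React, the usual fix is the functional update form, `setNotificationCount((c) => c + 1)`, which reads the latest state instead of the captured one.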
All the code for the tutorial is [available here](https://github.com/rahulnpadalkar/forum-pocketbase). That’s it! Thank you for reading!
leemeganj
1,892,650
Personal Protective Equipment Market | Trends, Share, Growth Rate, Opportunities and Industry Forecast
The Report "Personal Protective Equipment Market by Type (Hand & Arm Protection, Protective...
0
2024-06-18T16:06:25
https://dev.to/aryanbo91040102/personal-protective-equipment-market-trends-share-growth-rate-opportunities-and-industry-forecast-b79
news
The report "Personal Protective Equipment Market by Type (Hand & Arm Protection, Protective Clothing, Foot & Leg Protection), End-use Industry (Manufacturing, Construction, Oil & Gas, Healthcare, Transportation, Firefighting, Food), Region - Global Forecast to 2028" projects the market to grow from USD 54.0 billion in 2023 to USD 69.4 billion by 2028, at a CAGR of 5.1% from 2023 to 2028.

Personal protective equipment (PPE) refers to specialized clothing or equipment meant to protect individuals from various health and safety threats in the workplace or other places. Personal protective equipment is chosen based on the specific hazards in each environment and should be fitted and maintained appropriately. Employers are normally responsible for providing suitable personal protective equipment (PPE) and training employees on how to make the best use of it. Workers are responsible for wearing the specified PPE as directed to maintain their safety and well-being. The proper use of personal protective equipment (PPE) is a vital component of occupational safety and risk reduction.

Browse in-depth TOC on "Personal Protective Equipment Market"
218 – Tables
47 – Figures
250 – Pages

**Download PDF Brochure: [https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=132681971](https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=132681971)**

Personal Protective Equipment Market Key Players

The key market players identified in the report are Honeywell International Inc. (US), DuPont de Nemours, Inc. (US), 3M Company (US), Ansell Limited (Australia), Kimberly-Clark Corporation (US), Lakeland Industries, Inc. (US), Alpha Pro Tech, Ltd. (Canada), Sioen Industries NV (Belgium), Radians Inc. (US), and MSA Safety Inc. (US).

Honeywell International Inc.

Honeywell International Inc. is a multinational company known for its diverse range of products and services, including a strong presence in the personal protective equipment (PPE) market.
With a history spanning more than a century, the company has gained a reputation for innovation and excellence in the manufacture of PPE goods. Honeywell International Inc.'s focus on individual safety and well-being is evident in its comprehensive range of PPE options, from high-quality safety goggles, respiratory protection, and gloves to innovative hearing protection and headgear.

For instance, in February 2022, Honeywell International Inc. announced a partnership with AstraZeneca to develop next-generation respiratory inhalers that use near-zero global-warming-potential propellants to treat chronic obstructive pulmonary disease (COPD) and asthma. This partnership will help the company cover a broad range of markets. Moreover, in September 2020, the company announced that the NFL's Carolina Panthers and Honeywell International Inc. had collaborated to create a safer stadium experience by offering individual personal protective equipment packs for Panthers staff and fans, as well as deploying air quality monitoring solutions through a custom real-time Healthy Buildings dashboard. The partnership will help the company develop new possibilities in the personal protective equipment market with modern technology.

**Request Sample Pages: [https://www.marketsandmarkets.com/requestsampleNew.asp?id=132681971](https://www.marketsandmarkets.com/requestsampleNew.asp?id=132681971)**

"Hand & Arm Protection was the largest type for the personal protective equipment market in 2022 in terms of value."

The personal protective equipment market is experiencing significant growth driven by several key factors. Hand & arm protection has attracted attention as a prospective segment in the personal protective equipment market for several reasons. Hand and arm protection is necessary in a variety of industries, including manufacturing, construction, healthcare, oil & gas, food, transportation, firefighting, and others.
Due to this variety of uses, protective gloves and sleeves are in high demand. Many industries involve activities that put the hands and arms at risk: contact with chemicals, sharp objects, extreme temperatures, and mechanical injuries are examples of potential dangers. As these industries grow, so does the demand for safety equipment.

"The manufacturing segment is estimated to be the largest end-use industry in the personal protective equipment market in 2022, in terms of value."

The personal protective equipment market has been gradually expanding with increased manufacturing and infrastructure development. Most countries have witnessed industrial and manufacturing growth, resulting in increased demand for personal protective equipment (PPE) to protect the growing workforce. This expansion has been particularly evident in emerging economies. These developments frequently require the use of specialized personal protective equipment. Manufacturing frequently employs many workers, all of whom require appropriate personal protective equipment. The sheer size of the manufacturing workforce contributes to the increased demand for protective equipment in the manufacturing industry.

**Speak to Expert: [https://www.marketsandmarkets.com/speaktoanalystNew.asp?id=132681971](https://www.marketsandmarkets.com/speaktoanalystNew.asp?id=132681971)**

"North America was the largest region for the personal protective equipment market in 2022, in terms of value."

The expansion of the personal protective equipment market in the North America region is primarily due to strict regulations regarding workplace safety. The North American healthcare sector is one of the largest in the world and has a significant need for medical personal protective equipment (PPE), including gloves, masks, gowns, and face shields. On the other hand, Asia Pacific is projected to be the fastest-growing region in this market during the forecast period.
The Asia Pacific region, which includes China, India, Japan, South Korea, and the Southeast Asian countries, has experienced substantial economic expansion in recent decades. This expansion has resulted in greater industrialization, construction, and manufacturing activity, creating significant demand for PPE to safeguard the expanding workforce.

TABLE OF CONTENTS

1 INTRODUCTION (Page No. 35)
- 1.1 Study Objectives
- 1.2 Market Definition
  - 1.2.1 Personal Protective Equipment Market: Inclusions and Exclusions
  - 1.2.2 Personal Protective Equipment Market: Definition and Inclusions, by Type
  - 1.2.3 Personal Protective Equipment Market: Definition and Inclusions, by End-use Industry
- 1.3 Market Scope
  - 1.3.1 Personal Protective Equipment Market Segmentation
  - 1.3.2 Regional Scope
- 1.4 Years Considered
- 1.5 Currency Considered
- 1.6 Units Considered
- 1.7 Stakeholders
- 1.8 Summary of Changes
  - 1.8.1 Impact of Recession

2 RESEARCH METHODOLOGY (Page No. 42)
- 2.1 Research Data (Figure 1: Personal Protective Equipment Market: Research Design)
  - 2.1.1 Secondary Data
  - 2.1.2 Primary Data
    - 2.1.2.1 Primary interviews – demand and supply sides
    - 2.1.2.2 Key industry insights
    - 2.1.2.3 Breakdown of primary interviews
- 2.2 Market Size Estimation
  - 2.2.1 Bottom-up Approach (Figure 2: Approach 1 (supply side) – collective share of key companies; Figure 3: Approach 2 (supply side) – collective revenue of all products (bottom-up); Figure 4: Approach 3 (demand side) – end-use industry (bottom-up))
- 2.3 Data Triangulation (Figure 5: Personal Protective Equipment Market: Data Triangulation)
- 2.4 Growth Rate Assumptions/Growth Forecast
  - 2.4.1 Supply-side Analysis (Figure 6: Market CAGR projections from supply side)
  - 2.4.2 Demand-side Analysis (Figure 7: Market growth projections from demand-side drivers and opportunities)
- 2.5 Factor Analysis
- 2.6 Impact of Recession
- 2.7 Assumptions
- 2.8 Limitations
- 2.9 Risk Assessment (Table 1: Personal Protective Equipment Market: Risk Assessment)

3 EXECUTIVE SUMMARY (Page No. 53)
- Figure 8: Hand & arm protection to account for largest share of personal protective equipment market
- Figure 9: Manufacturing sector to be largest end user of personal protective equipment during forecast period
- Figure 10: North America accounted for largest share of personal protective equipment market in 2022

4 PREMIUM INSIGHTS (Page No. 56)
- 4.1 Attractive Opportunities for Players in Personal Protective Equipment Market (Figure 11: Rising awareness about workplace safety propelling market growth)
- 4.2 Personal Protective Equipment Market, by Region (Figure 12: Asia Pacific to record highest CAGR during forecast period)
- 4.3 North America: Personal Protective Equipment Market, by Type and Country (Figure 13: US dominated North America personal protective equipment market in 2022)
- 4.4 Personal Protective Equipment Market, by End-use Industry and Region (Figure 14: Manufacturing segment led personal protective equipment market in most regions)
- 4.5 Personal Protective Equipment Market, by Key Countries (Figure 15: India to register highest CAGR during forecast period)

5 MARKET OVERVIEW (Page No. 59)
- 5.1 Introduction
- 5.2 Market Dynamics

Continued...
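As a quick sanity check, the headline forecast quoted at the top of this report (USD 54.0 billion in 2023 growing at a 5.1% CAGR through 2028) can be verified with a few lines of arithmetic; the numbers below are only the report's own rounded figures:

```javascript
// Project the 2023 market size forward at the stated CAGR.
const size2023 = 54.0; // USD billion, per the report
const cagr = 0.051;    // 5.1% per year
const years = 5;       // 2023 -> 2028

const size2028 = size2023 * Math.pow(1 + cagr, years);
// size2028 ≈ 69.2, consistent with the reported USD 69.4 billion
// once rounding in the published figures is taken into account
```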
aryanbo91040102
1,892,442
Building JavaScript Array Methods from Scratch in 2024 - Easy tutorial for beginners # 1
Video version link: https://www.youtube.com/watch?v=iZaQmP8lXMo Hi guys, in this post we will be...
0
2024-06-18T16:04:59
https://dev.to/itric/building-javascript-array-methods-from-scratch-in-2024-easy-tutorial-for-beginners-1-1jbg
beginners, tutorial, javascript, learning
Video version link: https://www.youtube.com/watch?v=iZaQmP8lXMo

Hi guys, in this post we will build JavaScript array methods as plain functions, starting from JavaScript basics. It is a great and simple way to refine your JavaScript programming skills and strengthen your fundamentals as a beginner, and you will also get to know various JavaScript array methods along the way. We will start with the easiest and move on to more complex ones; I have tried to keep it as beginner-friendly as possible.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b8lys0lav1rouk46jrke.png)

First, I will introduce each array method with example input and output; then we will examine those methods, identifying which programming tools and tricks to use and in which order to arrange them. Along the way, we will build a pseudocode algorithm for the required function, and later on I will show you a flowchart version of that algorithm. The most important tip I can give you, to get the most out of this post, is to first try to build each function yourself and then come back to this post.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hd2zfxqecw1ia7hrz7mi.png)

1. Array.isArray():

Let's start with a simple one, the Array.isArray() method. Array.isArray() is a static method that determines whether the passed value is an array. So, how will we find out whether the argument passed to our function is of type array? Well, remember: whenever you are stuck on a problem like this, it is always helpful to go back to basics. Let's start with the question "What is the argument passed to the function? Can it give any information about itself?" The answer is yes. Remember that almost everything in the language is an object, including arrays and functions, or gets treated as an object. Objects are a fundamental part of JavaScript. Even primitive data types like strings and numbers can be wrapped in special temporary objects that provide extra functionality.
If you dig into such an object, you can find one useful property called constructor, which reveals the data type of the value and can be utilized for our purpose.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/04fazk4p66al5wk4sb0r.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lybydb0togpjkk1c3b4c.png)

So let's start working on the pseudocode. First we need to initialize a function; let's call it isArray. Inside the function, write an if statement that checks whether argument.constructor is equal to Array, and return the result.

- Initialize a function named isArray
- Check if argument.constructor is equal to Array
- If true return true, and vice versa

But now there is an edge case to take care of: what if the argument passed in is null or undefined? We still have to return false in that case. So, to update our pseudocode, we need to check whether the argument is null or undefined and return false if so.

- Initialize a function named isArray
- Check if the argument is null or undefined
- If true, return false
- Check if argument.constructor is equal to Array
- If true return true, and vice versa

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/doxrit66aslenbrdjhgq.png)

Here is the flowchart representation of the algorithm:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oz71lxk4dm5c6n5rxnue.png)

Now let's code it up with JavaScript:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0p0ipdj8rwomayds92gr.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bodc2avts23h89hbw50g.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/50xwdc4uip2zh2jlge0v.png)
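The implementation above is shown only in screenshots; here is a text version of the same sketch (the exact code in the images may differ slightly):

```javascript
// isArray: report whether the passed value is an array.
function isArray(arg) {
  // null and undefined have no constructor property, so handle them first
  if (arg === null || arg === undefined) {
    return false;
  }
  // every other value reveals its type through its constructor
  return arg.constructor === Array;
}

console.log(isArray([1, 2, 3])); // true
console.log(isArray("hello"));   // false
console.log(isArray(null));      // false
```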
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbujlypfei6me1n6r1fn.png)

2. Array: length

Next is the array's length method. Well, actually it is an array property, but for learning's sake, we will build a function similar to it.

As its name suggests, it gives the number of elements in a particular array, so the function we will be building has to return a value that represents the number of elements. So in the end, the function will return a value, or a variable containing a value, which represents the number of elements. Thinking like that, we will need a variable that keeps track of the number of elements, and at first it should have the value zero. Let's name it count. But how will we count the elements? We need something that goes through each element and increments the count variable by one, so a loop sounds perfect for it. But how can we use a loop if we don't know the length of the array? For this we can use a for...of loop, which loops through the values of an iterable object.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vegiaq9qkh43ifkkdgrd.png)

Now we have all the essential parts to make a proper algorithm. First, initialize a function; let's name it length. Then initialize a variable named count with the value zero. Next, implement a for...of loop in which we increment the count variable by one on each iteration. And finally, return the count variable.

- Initialize a function named length
- Initialize a variable named count with value 0 // keeps track of the number of elements
- Implement a for...of loop in which count is incremented by 1 on each iteration
- Return count

But what if the argument passed to the function is not an array but some other data type? To deal with this edge case, we have to check whether the argument is an array in the first place. Now, I encourage you to think: how can we check whether a given argument is an array? Stop and ponder. Well, for this we can use the isArray function that we built earlier. Now, modifying the pseudocode: check whether the argument is an array; if not, throw an error.
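Since the finished code appears only in the screenshots below, here is the same algorithm written out as text (it reuses the isArray function built in the previous section, repeated here so the snippet runs on its own):

```javascript
// Minimal isArray from the previous section.
function isArray(arg) {
  if (arg === null || arg === undefined) return false;
  return arg.constructor === Array;
}

// length: count the elements of an array with a for...of loop.
function length(arg) {
  if (!isArray(arg)) {
    throw new TypeError("Argument is not an array");
  }
  let count = 0;
  for (const element of arg) {
    count = count + 1; // one increment per element
  }
  return count;
}

console.log(length(["a", "b", "c"])); // 3
```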
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tvon0u4uigr3nq9ub1tk.png)

Here is the flowchart representation of the algorithm:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dl700ho4wjf936yelt9h.png)

Now let's code it up with JavaScript:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qf2d7y3lfmtz67845caf.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jcn6hh0ikmis5dbyy83.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/400ownztqtbwvkdk4pvg.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jsvszp31dh5odzbce8bk.png)

Well, that's all for today. I hope this post was useful for you, and thank you for reading.
itric
1,892,602
Boost Your Career: Join the AWS Developer Community in Iberia (User Groups)
AWS User Groups Join the AWS developer communities in Iberia to keep...
0
2024-06-18T16:00:52
https://dev.to/aws-espanol/impulsa-tu-carrera-unete-a-la-comunidad-de-desarrolladores-de-aws-en-iberia-user-groups-h0m
alianzatechskills2jobs, aws, awsespanol
### AWS User Groups

Join the **AWS developer communities in Iberia** to keep sharpening your cloud computing skills and advance your professional career. You will be able to stay up to date on AWS technologies, connect with leading professionals, and take part in activities that will make your learning experience even more enriching.

![AWS User Groups in Iberia](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ou4lrsvdjoeb6f5odet0.png)

- **AWS Alicante:** https://www.meetup.com/aws-user-group-alicante/
- **AWS Algarve:** https://www.meetup.com/loule-cloud-computing-meetup-group/
- **AWS Andorra La Vella**
- **AWS Asturias:** https://www.meetup.com/aws-user-group-asturias/
- **AWS Barcelona:** https://www.meetup.com/Barcelona-Amazon-Web-Services-Meetup/
- **AWS Bilbao:** https://www.meetup.com/AWS-Bilbao/
- **AWS Castelo Branco:** https://www.meetup.com/aws-user-group-castelo-branco/
- **AWS Las Palmas:** https://www.meetup.com/aws-las-palmas-user-group/
- **AWS Lisboa:** https://www.meetup.com/aws-user-group-lisbon/
- **AWS Madrid:** https://www.meetup.com/Madrid-Amazon-Web-Services-Meetup/
- **AWS Palma de Mallorca:** https://www.meetup.com/Amazon-Web-Services-User-Group-Palma-Spain/
- **AWS Sevilla:** https://www.meetup.com/aws-user-group-sevilla/
- **AWS Valencia:** https://www.meetup.com/AWS-Valencia/
- **AWS Zaragoza:** https://www.meetup.com/awszgz/

And we'll take this opportunity to leave you some **resources** that can be very useful for continuing to learn:

### AWS Skillbuilders

AWS Skillbuilders is a program designed to help developers, architects, and IT professionals improve their AWS skills and grow their careers in the field of cloud computing.
![AWS Skillbuilders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ouiekbliebb7wtw7dufr.png)

Link: https://skillbuilder.aws/

### AWS Cloud Quest

AWS Cloud Quest is an educational platform that lets people learn and sharpen their cloud skills in a fun, interactive, and hands-on way.

![AWS Cloud Quest](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fripzxh2rdpats2zltdt.jpg)

**Link:** https://aws.amazon.com/training/digital/aws-cloud-quest/

### AWS JAM

AWS JAM is a learning and skills-development experience set in a collaborative and competitive environment, designed so that developers and IT professionals can dive deep into AWS services and technologies.

![AWS JAM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wqcexj9wmbrm8jabxpe1.png)

**Link:** https://jam.awsevents.com/

### AWS GameDays

AWS GameDays is a hands-on, simulated learning experience that lets IT professionals and developers build and test their skills in crisis management and problem solving in the cloud.

![AWS Gamedays](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/de9l01ta0tbygcqn59ea.jpg)

**Link:** https://aws.amazon.com/gameday/

### AWS DeepRacer

AWS DeepRacer is an interactive, gamified platform that lets developers, engineers, and AI enthusiasts learn and practice reinforcement learning (RL) skills in a practical and fun way.

![AWS Deepracer](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sku4t990akhxzm2bst3c.png)

Link: https://aws.amazon.com/deepracer/student/

### AWS PartyRock

AWS PartyRock is a space where you can build AI-generated apps in a playground powered by Amazon Bedrock. It is a quick and fun way to learn about generative AI.

![AWS Partyrock](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jghgjijmzk35pm5i7uj9.png)

Link: https://partyrock.aws/
iaasgeek
1,892,741
Rotate Your PC Screen in Seconds on Windows 11
Rotating the screen on your Windows 11 PC can be incredibly useful for a variety of scenarios,...
0
2024-06-24T10:56:58
https://winsides.com/how-to-rotate-screen-on-windows-11-pc/
windows11, desktoprotate, tutorials, tips
---
title: Rotate Your PC Screen in Seconds on Windows 11
published: true
date: 2024-06-18 15:59:50 UTC
tags: Windows11, desktoprotate, tutorials, tips
canonical_url: https://winsides.com/how-to-rotate-screen-on-windows-11-pc/
cover_image: https://winsides.com/wp-content/uploads/2024/06/Rotate-Screen-in-Windows-11.jpg
---

**Rotating the screen on your Windows 11 PC** can be incredibly useful in a variety of scenarios, whether you’re working on a presentation, reading documents in **portrait mode**, or setting up a **multi-display workstation**. In this guide, we’ll walk you through the straightforward method to **rotate your screen in Windows 11**, ensuring you can easily switch between landscape and portrait modes.

### Rotate Screen on Windows 11 PC – Simple Steps:

- Right-click on an empty area of the Windows 11 desktop and click on **Display Settings**. ![Display Settings](https://winsides.com/wp-content/uploads/2024/06/Display-Settings-1.webp "Display Settings") _Display Settings_
- Under **Scale & layout**, you will find the option "**Display Orientation**". ![Display Orientation](https://winsides.com/wp-content/uploads/2024/06/Display-Orientation-1024x324.webp "Display Orientation") _Display Orientation_
- Open the **Display Orientation** drop-down; you will find four options: **Landscape (Default), Portrait, Landscape Flipped, and Portrait Flipped**. Choose your orientation accordingly. ![Choose Orientation](https://winsides.com/wp-content/uploads/2024/06/Choose-Orientation-1024x352.webp "Choose Orientation") _Choose Orientation_
- Once you choose the orientation, you can either keep the changes or revert to the previous orientation. This option lets users switch back instantly if the newly chosen orientation is not comfortable. ![Keep Changes](https://winsides.com/wp-content/uploads/2024/06/Keep-Changes.webp "Keep Changes") _Keep Changes_
- By following the above steps, you can rotate the screen in Windows 11.
## Takeaway:

**Rotating your screen in Windows 11** is a simple yet powerful feature that can enhance your productivity and viewing experience in numerous ways. By following the steps outlined in this guide, you can effortlessly switch your **screen orientation** to meet various needs, making your **Windows 11** experience more versatile and enjoyable. Whether for **professional use or personal convenience**, screen rotation is a handy feature that can significantly improve how you interact with your device. For more tweaks, follow [Winsides](https://dev.to/winsides). **Happy Coding! Peace out!**
vigneshwaran_vijayakumar
1,892,641
constructor function / Errors
TOPIC: constructor functions; errors. Extra: the debugger keyword. constructor...
0
2024-06-18T15:43:28
https://dev.to/bekmuhammaddev/constructor-errorsxatolar-3791
aripovdev, javascript, bekmuhammaddev
TOPIC:

- constructor functions
- Errors

Extra:

- the debugger keyword

**Constructor functions**

A constructor function is a special function in JavaScript that is used to create objects.

A constructor function is created in the following form:

```
function Car(make, model, year) {
  this.make = make;
  this.model = model;
  this.year = year;
}
```

Here a constructor function named Car has been created. A constructor function is used to create new objects and usually starts with a capital letter. This function has the parameters make, model, and year, which represent the properties of the object being created. The this keyword refers to the newly created object.

The this keyword: the value of this depends on how you call the function. In constructor functions, this refers to the newly created object. Inside methods, this refers to the object on which the method was called.

Creating a new object:

```
let myCar = new Car('Toyota', 'Corolla', 2020);
```

Here the Car constructor function is invoked with the new keyword and a new object is created. This call performs the following tasks:

- A new empty object is created.
- The created object is bound to the this context.
- The Car function fills the new object through this: the make, model, and year properties receive the values Toyota, Corolla, and 2020.
- The constructor function automatically returns the newly created object, which is assigned to the myCar variable.

Printing an object property to the console:

```
console.log(myCar.make);
```

Here, after the myCar variable has been created, we access its make property and print the result with console.log. This code prints Toyota to the console, because the make property of the myCar object was set to Toyota.
**JAVASCRIPT ERRORS** Error types in JavaScript: - ReferenceError - SyntaxError - TypeError - URIError - EvalError - InternalError 1. ReferenceError A ReferenceError occurs when you refer to a variable or function that does not exist or has not been defined.

```
console.log(test);
```

console: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cflviytbkypga6gead8i.png) 2. SyntaxError A SyntaxError occurs when there is a syntax mistake in JavaScript code. These errors reflect violations of the language's grammar rules.

```
a =; 5;
```

Here `a =;` is invalid, because the = operator is not used correctly. console: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gk27cz99j2f4vbwgjhqo.png) 3. TypeError A TypeError occurs when an operation is applied to a value of the wrong type.

```
"abc".toFixed(5);
```

Here the toFixed method applies only to numeric values, but it was called on a string. console: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/45je5pw58erf1hf5ajv3.png) 4. URIError A URIError occurs when URI (Uniform Resource Identifier) functions are misused. These errors can appear when decodeURI(), decodeURIComponent(), encodeURI(), and encodeURIComponent() are called with invalid parameters.

```
decodeURIComponent('%');
```

Here the % sequence is malformed, so an error is thrown. console: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pfor9puftfdviwox82qj.png) 5. EvalError An EvalError can result from incorrect use of the eval() function. However, starting with ES5, EvalError is almost never thrown and is rarely encountered.

```
eval("foo bar");
```

Since ES5, an EvalError is usually replaced by other error types.
console: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7j8ep3hbv96d7wb4w8vq.png) 6. InternalError An InternalError occurs when the JavaScript engine hits an internal problem or limit. These errors are rare and most often appear when the recursion depth is exceeded or another internal error happens inside the JavaScript interpreter.

```
function recurse() {
  recurse();
}
recurse();
```

Here the function calls itself without end, so the recursion depth is exceeded. console: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2dxyqyi00q44i6vhctnr.png) In JavaScript, the try-catch block is used to catch and handle errors. Catching and handling errors properly is important for keeping a program stable.

```
try {
  console.log(test); // test is not defined, so a ReferenceError is thrown
} catch (error) {
  console.log(error.message);
} finally {
  console.log("finish");
}
```

Here you can run and check your code: if an error occurs, execution jumps to the catch block; the finally block then runs in either case, and the statement completes. Understanding and handling errors in JavaScript is important for program stability. Each error type has its own characteristics, which helps developers detect and fix mistakes. Handling errors with try-catch blocks is an important tool for preventing programs from misbehaving. **debugging** In JavaScript, the debugger keyword is used to analyze code step by step and simplify the debugging process. It lets you pause JavaScript code running in the browser or in other debugging tools and inspect its state. What the debugger keyword does: it stops execution at the point where it appears and opens the debugger tool.
This is especially handy for understanding complex code and pinpointing errors. At the point where the code is paused you can watch variables and analyze their values, which makes errors faster to find and fix. You can step through every line you wrote and see its result. How the debugger keyword works:

```
function name1() {
  console.log('name1');
}
function name2() {
  console.log('name2');
}
function name3() {
  console.log('name3');
}

debugger;
name1();
debugger;
name2();
debugger;
name3();
```

In this code the debugger keyword is placed above each function call, so, moving from top to bottom, we can inspect each function and its result. Using the debugger: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d4gwb3yis5bresv6jepe.png) console: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k8sh9nytlp73i320ufwu.png) Precautions when using the debugger keyword: Remove it for production: the debugger keyword must be removed in the production environment; otherwise it can inconvenience your users. Mark where you use it: use the debugger keyword only where needed and for short-lived debugging sessions; your code should not contain leftover debugger statements. The debugger keyword is a powerful JavaScript tool for pausing code execution, watching variables, and finding errors. With it you can analyze code step by step and quickly identify and fix complex problems. Knowing how to use the debugger keyword and what it offers is essential for effective debugging.
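Coming back to error handling for a moment: the try-catch block shown earlier treats every error the same way, but you can also branch on the error types covered above. A minimal sketch (the malformed JSON string is just a convenient way to trigger a SyntaxError):

```javascript
try {
  JSON.parse('{ bad json'); // malformed JSON throws a SyntaxError
} catch (error) {
  if (error instanceof SyntaxError) {
    console.log('Syntax problem:', error.message);
  } else if (error instanceof TypeError) {
    console.log('Type problem:', error.message);
  } else {
    throw error; // rethrow anything we do not handle here
  }
} finally {
  console.log('finish');
}
```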
bekmuhammaddev
1,892,648
Code Reviews vs. QA: Why Your React Project Needs Both
In the fast-paced world of software development, project management often seeks to streamline...
0
2024-06-18T15:58:23
https://dev.to/nilanth/code-reviews-vs-qa-why-your-react-project-needs-both-bmc
react, webdev, javascript, beginners
In the fast-paced world of software development, project management often seeks to streamline processes to enhance productivity. One common suggestion is to eliminate code reviews, especially if a dedicated Quality Assurance (QA) team is in place. This approach, however, can lead to significant long-term issues, particularly in complex projects like those involving React. This article delves into the necessity of code reviews for React team members, highlighting potential pitfalls of omitting them and the critical benefits they provide. ## Introduction Software development teams constantly face the challenge of balancing speed and quality. While QA teams play a vital role in ensuring that applications function correctly, they are not a panacea for all potential issues. This is particularly true for React projects, where the intricacies of component-based architecture, state management, and performance optimization require meticulous oversight. Code reviews serve as a critical checkpoint to maintain high standards, promote knowledge sharing, and prevent long-term technical debt. ## The Role of Code Reviews in Software Development Code reviews are a systematic examination of source code by developers other than the author. They are designed to find bugs, enforce coding standards, and ensure consistency across the codebase. In React development, code reviews are especially crucial due to the following reasons: 1. **Ensuring Code Quality:** They help identify potential issues early in the development process, such as inefficient algorithms, improper state management, or security vulnerabilities. 2. **Knowledge Sharing and Mentorship:** Reviews provide an opportunity for team members to learn from each other, share best practices, and improve their coding skills. 3. **Maintaining Consistency:** They enforce coding standards and architectural guidelines, ensuring that the codebase remains maintainable and scalable. 4. 
**Collaborative Improvement:** Reviews encourage collaborative problem-solving and innovation, fostering a culture of continuous improvement. ## Potential Positive Outcomes of Skipping Code Reviews At first glance, eliminating code reviews might seem to offer several benefits: 1. **Faster Development Cycle:** Developers can push code directly, speeding up the development process. 2. **Lower Overhead:** Less time spent on reviews means more time available for actual coding. 3. **Simplified Team Structure:** Developers focus solely on writing code, while QA handles testing, simplifying roles and responsibilities. While these benefits might yield short-term gains, they come with significant long-term risks that can outweigh the initial advantages. ## Critical Negative Outcomes of Omitting Code Reviews A. **Code Quality Issues** * **Lack of Peer Review:** Without reviews, the quality of the codebase may degrade over time as bugs and inconsistencies accumulate. * **Technical Debt:** Unreviewed code can introduce technical debt, making the codebase harder to maintain and scale. B. **Knowledge Silos** * **Missed Learning Opportunities:** Code reviews facilitate knowledge transfer and skill development, which are crucial for team growth. * **Isolation:** Developers working in isolation may implement inconsistent coding styles and architectural patterns. C. **Decreased Team Morale and Collaboration** * **Reduced Collaboration:** Reviews foster a collaborative environment. Without them, the team may become fragmented. * **Morale:** Developers might feel undervalued if their code isn't reviewed, leading to lower job satisfaction 🥺. D. **Project Risk** * **Unnoticed Bugs:** QA can catch many issues, but not all. Reviews can identify logical errors and architectural flaws that automated tests might miss. * **Security Vulnerabilities:** Reviews help spot potential security issues early in the development process. E. 
**Loss of Leadership and Guidance:** * **No Technical Lead:** A React lead ensures that the team follows best practices and maintains code quality. Without a lead, the project may lack direction. * **Lack of Mentorship:** Junior developers benefit from guidance, accelerating their growth and improving code quality. ## Comparative Analysis: Code Review vs. QA While QA and code reviews both aim to improve software quality, they serve different purposes and are complementary rather than interchangeable. A. **Scope of QA** * **Functional Testing:** QA focuses on ensuring that the application works as intended from an end-user perspective. * **Automation:** QA involves automated testing to catch regressions and ensure consistent functionality. B. **Scope of Code Reviews** * **Code Quality:** Reviews ensure that the code adheres to best practices and coding standards. * **Non-functional Concerns:** They address maintainability, scalability, and architectural soundness. C. **Limitations of QA** * **Non-functional Issues:** QA may not catch inefficiencies, poor coding practices, or architectural flaws. * **Early Detection:** QA typically catches issues after the code is written, whereas reviews can prevent issues from being introduced in the first place. ## Case Study: Technical Debt from Unreviewed Code Consider a scenario where a React team member adds a new feature to a to-do list application without code reviews: A. **Initial Implementation** Developer A quickly adds a due date field to each to-do item. State management and date comparison logic are added directly within the component. B. **Issues Introduced** * **Inconsistent State Management:** Local state management leads to scalability issues. * **Poor Structure:** Repetitive and poorly structured code makes maintenance difficult. * **Lack of Error Handling:** No validation or error handling for due date inputs. * **No Testing:** Absence of unit or integration tests. C. 
**Consequences:** * **Technical Debt:** As the application grows, the poorly structured codebase becomes harder to maintain. * **Refactoring Challenges:** Major refactoring is needed to address accumulated issues, disrupting ongoing development. D. **Impact on Team:** * **Morale:** Frustration among developers due to the complex and unmanageable codebase. * **Productivity:** Increased time spent fixing issues rather than developing new features. ## Best Practices for Maintaining Code Quality in React Projects To balance the need for speed and quality, consider the following best practices: A. **Hybrid Approach** * **Partial Reviews:** Implement partial code reviews for critical or complex changes. * **Pair Programming:** Encourage pair programming to maintain some level of peer review. B. **Automated Tools** * **Static Analysis:** Use tools like ESLint and Prettier to enforce coding standards automatically. * **Comprehensive Testing:** Invest in robust automated testing frameworks to catch issues early. C. **Regular Audits and Retrospectives** * **Codebase Audits:** Conduct periodic audits to identify and address technical debt. * **Retrospectives:** Hold regular team retrospectives to discuss and improve processes. D. **Leadership and Mentorship** * **Tech Leads on Demand:** Have senior developers take on lead roles for specific tasks or sprints. * **Mentorship Programs:** Establish mentorship programs to foster knowledge sharing and skill development. E. **Continuous Learning:** * **Training Sessions:** Provide regular training on best practices and new technologies. * **Documentation:** Maintain thorough documentation to help team members understand the project's architecture and standards. ## Conclusion While eliminating code reviews might seem like a way to streamline the development process, the long-term risks and potential negative outcomes far outweigh the short-term benefits. 
Code reviews play a critical role in maintaining code quality, ensuring consistency, and fostering a collaborative team environment. In React development, where the complexity and scalability of applications are paramount, the value of code reviews cannot be overstated. By balancing code reviews with effective QA practices, teams can achieve both rapid development and high-quality outcomes. Incorporating code reviews into your React development process, even with a dedicated QA team, is essential for maintaining a robust and scalable codebase. It ensures that your project remains healthy, maintainable, and adaptable to future growth, ultimately leading to a more successful and sustainable product. --- Thank you for reading. **You can support me [by buying me a coffee](https://buymeacoffee.com/nilanth) ☕**
nilanth
1,892,647
Adding location tracking to mobile apps (for Android)
Hey! 👋 Is anyone struggling with location tracking performance on their mobile app? We've got this...
0
2024-06-18T15:55:37
https://dev.to/roam/adding-location-tracking-to-mobile-apps-for-android-lki
mobile, android, location, sdk
Hey! 👋 Is anyone struggling with location tracking performance on their mobile app? We've got this quick guide article with an introductory breakdown of setting up, installing, and initializing our location SDK. Check it out: [Integrating Location Tracking using Roam.ai's Android SDK](https://www.roam.ai/blog/integrating-location-tracking-using-roam-ais-android-sdk) (And yes, we're GDPR compliant :) )
roam
1,892,645
Palindrome check a string
This one is pretty common. Sounds difficult, but not really bad once you think it through. Write a...
27,729
2024-06-18T15:53:10
https://dev.to/johnscode/palindrome-check-a-string-3g4c
go, programming, interview, interviewquestions
This one is pretty common. Sounds difficult, but not really bad once you think it through. Write a golang function to check if a string is a palindrome. A palindrome is a sequence of characters that is the same even when reversed, for example: - "aba" is a palindrome - "abb" is not - "ab a" is considered a palindrome by most, so we ignore whitespace.

```
func PalindromeCheck(str string) bool {
	trimmedStr := strings.ReplaceAll(str, " ", "")
	chars := []rune(trimmedStr)
	n := len(chars)
	for i := 0; i < n/2; i++ {
		if chars[i] != chars[n-i-1] {
			return false
		}
	}
	return true
}
```

Note that we take the length of the rune slice, not the string: for non-ASCII input the byte length and the rune count differ, so indexing the rune slice with the byte length would be a bug (naming it `n` also avoids shadowing the `len` builtin). This solution is functionally the same as you will find for C or Java when searching online. We are essentially using dual pointers to traverse from the beginning and the end looking for a mismatched character. When a mismatch is found, we can declare the string is not a palindrome. Can we make it better? Is there a better way to trim whitespace rather than using `strings.ReplaceAll`? (_there is but it can get ugly_) What about the efficiency of converting to an `[]rune`, is there a better way? Post your thoughts in the comments. Thanks! _The code for this post and all posts in this series can be found [here](https://github.com/johnscode/gocodingchallenges)_
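One possible answer to the questions above, sketched rather than benchmarked: walk the rune slice from both ends and skip whitespace in place, so no `strings.ReplaceAll` allocation is needed (`PalindromeCheckInPlace` is a hypothetical name, and `unicode.IsSpace` handles tabs and newlines as well as spaces):

```go
package main

import (
	"fmt"
	"unicode"
)

// PalindromeCheckInPlace compares runes from both ends of the string,
// skipping any whitespace, with no intermediate trimmed copy.
func PalindromeCheckInPlace(str string) bool {
	chars := []rune(str)
	i, j := 0, len(chars)-1
	for i < j {
		for i < j && unicode.IsSpace(chars[i]) {
			i++
		}
		for i < j && unicode.IsSpace(chars[j]) {
			j--
		}
		if chars[i] != chars[j] {
			return false
		}
		i++
		j--
	}
	return true
}

func main() {
	fmt.Println(PalindromeCheckInPlace("ab a")) // true
	fmt.Println(PalindromeCheckInPlace("abb"))  // false
}
```

It still pays for the `[]rune` conversion up front; avoiding that too would mean decoding runes manually from both ends, which is where it "can get ugly."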
johnscode
1,892,644
Recursive function
What is a Recursive Function in JavaScript? A recursive function is a function that calls...
0
2024-06-18T15:44:45
https://dev.to/__khojiakbar__/recursive-function-5593
javascript, recursive
# What is a Recursive Function in JavaScript? > A recursive function is a function that calls itself in order to solve a problem. It's like a loop, but instead of repeating a block of code, it calls itself with a smaller piece of the problem. # Why Use Recursive Functions? 1. **Breaking Down Problems:** Recursive functions are useful when a problem can be divided into smaller, similar problems. 2. **Cleaner Code:** For some problems, recursion can make the code simpler and easier to understand compared to using loops. # Key Components 1. **Base Case:** This is the condition that stops the recursion. Without it, the function would call itself forever. 2. **Recursive Case:** This is where the function calls itself with a smaller part of the problem, moving towards the base case. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3m8qifri0zfhaz3si8aj.png) # Funny samples: 1. Running Competition:

```
function runOrder(countDown) {
  if (countDown <= 0) {
    console.log('Goo...!!!');
    return;
  }
  console.log(countDown);
  runOrder(countDown - 1);
}

runOrder(4);
// 4
// 3
// 2
// 1
// Goo...!!!
```

2. Knock Knock

```
function knockKnock(times) {
  if (times <= 0) {
    console.log(`Who's there?`);
    return;
  }
  console.log(times);
  knockKnock(times - 1);
}

knockKnock(3);
// 3
// 2
// 1
// Who's there?
```
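Both samples above recurse for their side effects (logging). A recursive function can also build up a return value; a minimal sketch in the same spirit (the `sumTo` name is just for illustration):

```javascript
// Sum the numbers from 1 to n recursively.
function sumTo(n) {
  if (n <= 0) return 0;    // base case: stops the recursion
  return n + sumTo(n - 1); // recursive case: a smaller problem
}

console.log(sumTo(4)); // 10  (4 + 3 + 2 + 1)
```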
__khojiakbar__
1,892,642
A winter effect in HTML/CSS/JS
Check out this Pen I made!
0
2024-06-18T15:43:55
https://dev.to/tidycoder/an-winter-effect-in-htmlcssjs-477h
codepen
Check out this Pen I made! {% codepen https://codepen.io/TidyCoder/pen/oNRExpV %}
tidycoder
1,892,640
Mastering Kubernetes Multi-Cluster: Strategies for Improved Availability, Isolation, and Scalability.
If you deploy on Kubernetes, you are scaling your application. This usually means scaling pods and...
0
2024-06-18T15:39:54
https://www.getambassador.io/blog/mastering-kubernetes-multi-cluster-availability-scalability
kubernetes, multicluster, deployment, isolation
If you deploy on Kubernetes, you are scaling your application. This usually means scaling pods and nodes within a cluster. This type of scaling allows you to handle increased workloads and provides a level of fault tolerance. However, there are scenarios where scaling within a single cluster won’t be enough. This is where Kubernetes multi-cluster deployments come into play. Multi-cluster implementations allow you to improve availability, isolation, and scalability across your application. Here, we want to examine the benefits of this approach for organizations, how to architect Kubernetes multi-cluster deployments, and the top [deployment strategies](https://www.getambassador.io/blog/top-5-kubernetes-deployment-strategies). ## What is Multi-Cluster? Here’s how a single cluster deployment looks: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ngy68ul5d89steigm8ei.png) ## Multi cluster This is a straightforward deployment. In a single cluster deployment, you can scale your application by: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/je7luqn6loin90qvfvqb.png) **Horizontal Pod Autoscaling (HPA):** Automatically adjusting the number of pods based on observed CPU utilization or custom metrics. **Vertical Pod Autoscaling (VPA):** Automatically adjusting the CPU and memory resources requested by pods based on historical usage. **Cluster Autoscaling:** Automatically adjusting the number of nodes in the cluster based on the pods' resource demands. A multi-cluster deployment uses two or more clusters: ## Multi cluster deployment In a multi-cluster deployment, traffic routing to different clusters can be achieved through a global load balancer or API gateway. These sit in front of the clusters and distribute incoming traffic based on predefined rules or policies. These rules can consider factors such as the geographic location of the user, the workload's requirements, or the current state of the clusters. 
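For reference, the Horizontal Pod Autoscaling mentioned above is driven by a manifest along these lines (a minimal sketch; the Deployment name and thresholds are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```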
An API gateway can be used as a central entry point for external traffic and can route requests to the appropriate cluster based on predefined rules or policies. The API gateway bridges clusters and provides a unified interface for accessing services across clusters. Clusters in a multi-cluster deployment can live in different locations, depending on the organization's requirements and infrastructure setup. Some common placement strategies include: **Regional Clusters:** Clusters are deployed in different geographic regions to provide better performance and availability to users in those regions. This helps reduce latency and improves the user experience. **Cloud Provider Clusters:** Clusters are deployed across multiple cloud providers or a combination of on-premises and cloud environments. This allows organizations to leverage the benefits of different cloud platforms and avoid vendor lock-in. **Edge Clusters:** Clusters are deployed at the edge, closer to the users or data sources. This is particularly useful for scenarios that require low-latency processing or have data locality constraints. Clusters in a multi-cluster deployment also need to communicate with each other to enable seamless operation and data exchange. This is usually implemented in the same way services within a cluster communicate: through a service mesh providing a unified communication layer. The service mesh handles service discovery, routing, and secure cluster communication. ## Why Multi-Cluster? While single-cluster scaling methods are effective, they have limitations. The first is fault tolerance. A single cluster is a single point of failure. The entire application becomes unavailable if the cluster experiences an outage or catastrophic failure. There are often limited disaster recovery options, and recovering from a complete cluster failure is challenging and time-consuming. The second is scalability limits. 
Scaling nodes vertically is limited by the maximum capacity of the underlying infrastructure, while scaling nodes horizontally may be constrained by the data center's or cloud provider's capacity. Finally, you can have availability and isolation issues. If your single cluster is in a data center in us-west-1, European users may experience higher latency and reduced performance. Compliance and data sovereignty problems can also exist when storing and processing data within specific geographic boundaries. Additionally, all applications and environments within the cluster compete for the same set of resources, and resource-intensive applications can impact the performance of other applications running in the same cluster. Organizations can deploy more Kubernetes clusters and treat them as disposable by adopting a multi-cluster approach. Organizations now talk of “treating clusters as cattle, not pets.” This approach results in several benefits. **Improved Operational Readiness:** By standardizing cluster creation, the associated operational runbooks, troubleshooting, and tools are simplified. This eliminates common sources of operational error while reducing the cognitive load for support engineers and [SREs](https://www.getambassador.io/blog/rise-of-cloud-native-engineering-organizations), ultimately leading to improved overall response time to issues. **Increased Availability and Performance:** Multi-cluster enables applications to be deployed in or across multiple availability zones and regions, improving application availability and regional performance for global applications. **Eliminate Vendor Lock-In:** A multi-cluster strategy enables your organization to shift workloads between different Kubernetes vendors to take advantage of new capabilities and pricing offered by different vendors. **Isolation and Multi-Tenancy:** Strong isolation guarantees simplify key operational processes like cluster and application upgrades. 
Moreover, isolation can reduce the blast radius of a cluster outage. Organizations with strong tenancy isolation requirements can route each tenant to their individual cluster. **Compliance:** Cloud applications today must comply with many regulations and policies. A single cluster is unlikely to be able to comply with every regulation. A multi-cluster strategy reduces the scope of compliance for each cluster. ## Multi-Cluster Application Architecture When designing a multi-cluster application, there are two fundamental architectural approaches: **Replicated Architecture** In a replicated architecture, each cluster runs a complete and identical copy of the entire application. This means that the application's services, components, and dependencies are deployed and running independently in each cluster. The key advantages of this approach are: **Scalability:** The application can be easily scaled globally by replicating it into multiple availability zones or data centers. This allows the application to handle increased traffic and efficiently serve users from different geographic locations. **High Availability:** A replicated architecture enables failover and high availability when coupled with a health-aware global load balancer. If one cluster experiences an outage or becomes unresponsive, user traffic can be seamlessly routed to another healthy cluster, ensuring continuous service. **Simplified Deployment:** Since each cluster runs an identical copy of the application, deployment and management processes are simplified. Updates and changes can be rolled out consistently across all clusters. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9el9s3p6pkqq07wohvi8.png) ## Gateway Balancer multi cluster However, a replicated architecture also has some considerations. First, data synchronization can become a challenge. 
If the application relies on shared data across clusters, data synchronization and consistency mechanisms need to be implemented to ensure data integrity. Second, running a full copy of the application in each cluster requires more resources than a split-by-service approach, as each cluster needs sufficient capacity to handle the entire application workload. ## Split-by-Service Architecture In a split-by-service architecture, the services or components of an application are divided and deployed across multiple clusters. Each cluster runs a subset of the application's services, and the clusters work together to form the complete application. The benefits of this approach include: **Strong Isolation:** Splitting services across clusters provides more robust isolation between different application parts. This is particularly useful when dealing with regulatory compliance requirements. For example, services handling sensitive data (e.g., PCI DSS-compliant services) can be isolated in a dedicated cluster. In contrast, the remaining services can be operated in separate clusters with less stringent compliance requirements. **Independent Scalability:** Each service can be scaled independently based on its specific resource requirements and usage patterns, allowing for more granular resource allocation and optimization. **Faster Development Cycles:** With a split-by-service architecture, individual development teams can work on and deploy their specific services into their own clusters without impacting other teams. This enables faster development cycles and reduces the risk of conflicts or dependencies between teams. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ogr178v26o0l2v8u8zee.png) ## Multi cluster However, a split-by-service architecture also introduces some challenges: **Increased Complexity:** Managing and orchestrating services across multiple clusters can be more complex than a replicated architecture. 
Inter-service communication, data consistency, and distributed transactions must be carefully designed and implemented. **Network Latency:** As services are distributed across clusters, network latency between services can increase. This must be considered when designing the application architecture and choosing the appropriate communication protocols and patterns. **Operational Overhead:** Managing and monitoring multiple clusters running different services requires more operational effort and tooling than a replicated architecture. Choosing between a replicated or split-by-service architecture depends on the application's specific needs, such as scalability requirements, compliance obligations, development team structure, and operational constraints. In some cases, a hybrid approach combining elements of both architectures can strike a balance between simplicity and isolation. ## Configuring Multi-Cluster Kubernetes When configuring and managing multi-cluster [Kubernetes](https://www.getambassador.io/use-case/productive-local-dev-environment) deployments, various challenges and approaches must be considered. These approaches can be broadly categorized into two main categories: [Kubernetes-Centric](https://www.getambassador.io/blog/securing-cloud-native-communication) and Network-Centric. Each category focuses on different aspects of multi-cluster configuration and offers distinct solutions. [Kubernetes-centric](https://www.getambassador.io/blog/securing-cloud-native-communication) approaches aim to extend and enhance the core [Kubernetes](https://www.getambassador.io/use-case/productive-local-dev-environment) primitives to support multi-cluster use cases. The goal is to provide a centralized management plane that allows administrators to manage and control multiple [Kubernetes](https://www.getambassador.io/use-case/productive-local-dev-environment) clusters from a single point of control. 
Kubernetes-centric approaches focus on extending the Kubernetes API and control plane to enable centralized management and control of multiple clusters. They provide a higher level of abstraction and automation, simplifying the configuration and management of multi-cluster environments. The Kubernetes Cluster Federation project, managed by the Kubernetes Multicluster Special Interest Group, takes this approach, as does Google’s Anthos project (via environs). Network-centric approaches prioritize creating network connectivity between clusters to enable communication and collaboration between applications running in different clusters. These approaches leverage various networking technologies and service mesh solutions to establish inter-cluster connectivity. Some notable examples of network-centric approaches include: **Istio** Istio is a popular service mesh platform that provides advanced networking capabilities for multi-cluster architectures. [Istio](https://www.getambassador.io/docs/emissary/latest/howtos/istio/) has two different strategies for multi-cluster support: a replicated control plane and a shared control plane. A replicated control plane generally results in greater system availability and resilience. Istio provides powerful primitives for multi-cluster communication at the expense of complexity. In practice, application and deployment workflow changes are needed to fully take advantage of Istio multi-cluster. **Linkerd** [Linkerd](https://www.getambassador.io/docs/emissary/latest/howtos/linkerd2/) service mirroring is a simple but powerful approach requiring no application modification. Moreover, Linkerd supports using [Edge Stack](https://www.getambassador.io/products/edge-stack/api-gateway) to connect traffic between clusters, enabling resilient application-level connectivity over the Internet. 
With service mirroring, traffic can be automatically routed to a mirrored service in another cluster if the primary service becomes unavailable, ensuring continuous service availability.

**Consul**

Consul is a distributed service mesh and service discovery platform. Consul's mesh gateway feature enables secure communication between services across different clusters. [Consul](https://www.getambassador.io/docs/edge-stack/latest/howtos/consul/) Connect uses a VPN-like approach built around Consul Mesh Gateways to connect disparate clusters. This approach requires configuring Consul for data center federation so that different Consul instances can achieve [strong consistency over a WAN](https://learn.hashicorp.com/consul/security-networking/datacenters).

Network-centric approaches focus on establishing network connectivity and enabling seamless communication between clusters. They leverage service mesh technologies and networking solutions to create a unified application network across multiple clusters, allowing services to collaborate and interact transparently. Most organizations adopting multi-cluster are evaluating network-centric approaches. The primary reasons for this trend are the Federation project's lack of maturity and the fact that a GitOps approach to configuration management has become de rigueur for [Kubernetes](https://www.getambassador.io/use-case/productive-local-dev-environment) users. A GitOps approach and some basic automation lend themselves to managing multiple clusters, as each cluster can be created from a standardized configuration. Thus, a centralized management plane does not reduce management overhead in a way proportional to the complexity it introduces.
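The GitOps pattern mentioned above usually reduces to a shared base plus a thin per-cluster overlay, so standing up another cluster is mostly a new directory and a new sync target. A minimal Kustomize sketch, in which the directory layout, workload name, and replica count are all illustrative assumptions:

```yaml
# Sketch: overlays/cluster-east/kustomization.yaml
# Every cluster consumes the same base manifests; only cluster-specific
# values (here, replica sizing) live in the overlay.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base               # manifests shared by all clusters
patches:
  - target:
      kind: Deployment
      name: web-api          # hypothetical workload
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5             # cluster-specific sizing
```

A GitOps operator (e.g. Argo CD or Flux) pointed at each overlay then keeps every cluster converged on its declared configuration.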
## Embrace Multi-Cluster Kubernetes for Your Unique Needs

As [Kubernetes](https://www.getambassador.io/use-case/productive-local-dev-environment) continues to be the de facto standard for container orchestration, organizations are increasingly exploring multi-cluster deployments to enhance availability, isolation, and scalability. While this article provides a comprehensive overview of the benefits, architectures, and configuration approaches for multi-cluster Kubernetes, it is just the beginning of your journey. Every organization has unique requirements, constraints, and goals that shape its multi-cluster strategy. It is crucial to carefully evaluate your needs, such as geographic distribution, compliance obligations, team structures, and performance requirements, to determine the most suitable architecture and approach for your multi-cluster deployment.

Mastering multi-cluster Kubernetes is an ongoing process that requires continuous learning, experimentation, and adaptation. Use this article as a foundation to understand the key concepts, architectures, and approaches, but don't stop there. Dive deeper into the tools, techniques, and best practices that align with your organization's unique needs and goals. By embracing multi-cluster Kubernetes strategically and thoughtfully, you can unlock new levels of availability, isolation, and scalability for your applications.
getambassador2024
1,892,639
What is a vCISO Platform and Where Should You Start?
Demand for InfoSec professionals is through the roof. There's just one problem -- security-conscious...
0
2024-06-18T15:38:52
https://cynomi.com/blog/what-is-a-vciso-platform-and-where-should-you-start/
cybersecurity
Demand for InfoSec professionals is through the roof. There's just one problem -- security-conscious SMBs can't just pick up a great team member off the street. New hires are expensive, to say the least -- especially a full-time Chief Information Security Officer (CISO) to steer the ship. [Almost half](https://www.msspalert.com/native/an-easy-way-for-msps-and-mssps-to-boost-virtual-ciso-offerings) of MSP clients have fallen victim to cyber attacks in the past year, yet [27% of organizations](https://www.computerweekly.com/news/366580712/IT-leaders-hiring-CISOs-aplenty-but-dont-fully-understand-the-role) believe a CISO has just one role -- to be a scapegoat when things go south. Ouch!

This conundrum opens the door to a new breed of professionals, services, and platforms that provide MSP clients with a cost-effective, scalable, and flexible alternative to an in-house CISO -- the vCISO.

What is a vCISO?
----------------

A virtual Chief Information Security Officer (vCISO) is a part-time or on-demand CISO hired to provide strategic leadership and ongoing maintenance to an organization's cybersecurity and information security program.

The job of a vCISO usually entails guiding businesses in developing, implementing, and managing cybersecurity and compliance programs -- all without taking up a seat in their offices (and a hefty sum from the payroll budgets).
Typical responsibilities of a vCISO include:

- [Dynamic risk assessment](https://cynomi.com/blog/8-essential-components-every-dynamic-risk-assessment/) and management services
- Cybersecurity strategy development and maintenance
- Implementation of controls to protect organization assets
- Employee security awareness training
- Compliance and governance enforcement
- Incident response, mitigation, and remediation
- Continuity and data loss prevention planning
- Third-party and [supply chain risk](https://spectralops.io/blog/10-steps-to-take-now-to-reduce-supply-chain-risks/) management
- Communication and reporting to the C-suite and board of directors

![virtual CISO](https://cynomi.com/wp-content/uploads/2024/06/virtual-CISO.png)

[*Source*](https://www.ensl.co.uk/cyber-security/virtual-ciso/)

### What is a vCISO service?

MSPs offer a whole suite of services to their clients, from disaster recovery planning to network monitoring. As part of this roster, many also provide vCISO services -- essentially, SMB clients can hire the expertise of a CISO without the hassle, high costs, and addition to their headcount.

Under the vCISO services umbrella, MSPs might support functions like compliance readiness assessments, security awareness training plans, and task management optimization -- it all depends on the vCISO *platform* your MSP chooses.

### What is a vCISO platform?

A vCISO platform is part of the suite of [MSP software solutions](https://cynomi.com/blog/top-8-msp-software-solutions-2024/). It streamlines the delivery of a complete vCISO service package at scale. A vCISO platform lets service providers automate a great deal of the work entailed in providing vCISO services, including compliance and risk assessments and gap analysis, and enables automated crafting of security policies and strategic remediation plans. Ideally, a vCISO platform enhances a service provider's portfolio and drives revenue growth.
It enables MSPs and MSSPs to deliver a comprehensive range of cybersecurity and compliance services tailored to each client's needs without hiring or training additional InfoSec and IT personnel.

Top 5 Reasons Why You Need a vCISO Platform
-------------------------------------------

Why are service providers adopting vCISO platforms at an increasing rate? First and foremost, they want to meet the growing demand from their clients -- if you don't offer comprehensive vCISO services powered by a robust vCISO platform, your competitors will.

A competitive edge is not the only advantage that vCISO platforms offer to both novice and seasoned MSP/MSSPs and their clientele. Ideally, the vCISO platform of your choice will enable:

### 1\. Cost-effective vCISO service scalability

With a vCISO platform in their arsenal, MSP/MSSPs can deliver comprehensive vCISO services at scale without significantly investing in hiring and training additional IT and InfoSec staff. In addition, by employing automation and AI technologies, a vCISO platform can dramatically decrease the manual work required for vCISO service delivery, thus allowing MSP/MSSPs to customize effective cybersecurity strategies for each client at a fraction of the time and cost.

### 2\. Bridging internal skill gaps

Skilled information security professionals are hard to come by and not cheap to hire and retain. The demand for cybersecurity skills and knowledge can limit your ability to provide comprehensive vCISO services to a large volume of clients and increase your dependence on individual employees, teams, or contractors.

### 3\. Demonstrating value to clients

One of the most critical factors in building customer trust and showcasing the value of your vCISO services is your ability to provide your clients with readable and accurate data through reports and dashboards.

A vCISO platform like Cynomi can streamline this process with white-label branded templates and flexible reporting capabilities.
The reports and dashboards you provide using a vCISO platform can help communicate security gaps effectively in a way that translates into upsell opportunities.

![need for a vCISO](https://cynomi.com/wp-content/uploads/2024/06/Screenshot-2024-06-10-at-8.08.04%E2%80%AFAM.png)

[*Source*](https://valuementor.com/virtual-ciso-services/the-primacy-of-virtual-ciso-services-in-the-present-clock/)

### 4\. Streamlined workflows

You can streamline vCISO work through a structured process using the right platform. For example, Cynomi saves time and sets standards for processes and deliverables by simplifying key vCISO tasks and work processes, including risk and compliance assessment, security policy creation, cyber posture reporting, building remediation plans, and ongoing management optimization.

### 5\. Competitive advantage

It's no secret that your clients need comprehensive on-demand cybersecurity expertise -- and they need it to be cost-effective, up-to-date, and hassle-free. A vCISO platform enables you to keep up with the speed at which the cybersecurity landscape is evolving. Thanks to a vCISO platform's clear-to-read dashboards and comprehensive security features, you can prove to your clients that you can proactively address emerging risks and keep them safe.

7 Key Features to Look for in a vCISO Platform
----------------------------------------------

Not all vCISO platforms are made equal, and there are a few features that you should add to your [vCISO checklist](https://cynomi.com/blog/ultimate-vciso-checklist/) when choosing a provider.

1. Discovery questionnaire automation and self-guided client onboarding enhance your visibility into your customers' cybersecurity posture and slash the time and resources necessary to achieve full coverage.
2. Automatic compliance readiness assessment for frameworks like [SOC 2](https://www.jit.io/blog/soc-2-compliance-checklist), ISO 27001, and NIST 800-171/CMMC according to the client's unique cyber profile.
3. Security policy generation and vulnerability auto-remediation to bridge security and compliance gaps.
4. Task management optimization and active prioritization of tasks according to their urgency and impact on the organization's overall security posture.
5. Cybersecurity posture and compliance reporting with a customizable self-service operations dashboard that enables you to showcase the value of your vCISO services to your client's stakeholders.
6. White-labeling, [multitenancy](https://controlplane.com/community-blog/post/saas-vs-self-hosted), and client-specific customization can promote brand loyalty and enhance the overall experience for your client's stakeholders.
7. Partner-focused vendors do not sell directly to end-clients but remain focused on how to support your needs as an MSP/MSSP.

![product](https://cynomi.com/wp-content/uploads/2024/06/Screenshot-2024-06-10-at-8.13.05%E2%80%AFAM.png)

Scale Your Services With Cynomi's vCISO Platform
--------------------------------------------------

Virtual CISO services are in high demand, and it's up to MSPs and MSSPs to deliver them. However, providing a comprehensive end-to-end vCISO service at scale can be challenging, even for seasoned service providers. Cynomi's vCISO platform is designed for MSPs and MSSPs looking to grow their business and open new recurring revenue streams. It helps you provide enterprise-grade vCISO services to SMEs and SMBs without scaling in-house services. By leveraging AI and automation, Cynomi's platform reduces the dependency on manual expert work by as much as 40%.

Cynomi empowers your teams to make the most professional and impactful decisions for your clients' security posture. With Cynomi, you can standardize and streamline onboarding processes for employees and customers while leveraging a robust and customizable reporting system to demonstrate value to C-suite executives and business leaders.
[Request a demo](https://cynomi.com/request-a-demo) to discover how Cynomi can help you get started with providing vCISO services today.
yayabobi
1,892,637
Real time data pipeline with a single command
After working at companies big and small, I often found myself poring over logs to answer business...
0
2024-06-18T15:34:31
https://dev.to/ericzizhouwang/real-time-data-pipeline-with-a-single-command-19nm
data, webdev, analytics
After working at companies big and small, I often found myself poring over logs to answer business questions for non-technical users. To be honest, a more sophisticated server-side SDK for instrumenting event data would have been ideal, with that data then streamed into a Kafka queue. This would allow me to write an ETL job to transform the data, subsequently storing it in a data warehouse, from which I could integrate with tools like Looker or Tableau so business users can create dashboards themselves! If only there were infinite time and energy for such indulgent engineering projects... It would have been marvelous!

In practice, I wrestled with messy log data until it could be condensed into a nice number suitable for a dashboard. If more data was necessary, I would simply log it in the code, then get back to building features for customers or arguing with strangers on Reddit.

After my cofounder Seb and I joined DoorDash through an acquisition from Bbot—a restaurant technology startup—we had to integrate Bbot data into DoorDash’s data pipeline. This task was intended to answer questions such as the number of completed checkouts per day, average check subtotals, and total daily sales. Once again, we found ourselves in a familiar problem space. That was the last straw; we quit immediately! In reality, it took another month to gather the courage to actually quit our day jobs, but that's only a minor detail.

We built Siege because we wanted a fast and easy way to get real-time data into a format that is easily queryable using plain SQL. We spent a few months building an agent in C that you can just plop into your server(s) to mine data directly from API traffic while using negligible resources. You can then pick and choose the data fields to track with just the click of a button from a catalog of data points. We also built a user-friendly UI that allows you to visualize the data and create real-time dashboards. All in less than 10 minutes.
We have 4 criteria when choosing tools for our own use:

1. We hate reading documentation, so it better be short.
2. We need to be able to derive value from it within the first 10 minutes.
3. We need to be able to play with it for free, no sales calls ever.
4. Dark mode.

We were fully committed to these principles while building our own tool. Join our public beta for free! https://siegeai.com
ericzizhouwang
1,892,636
🚀Notcoin Price Forecast: Is NOT Going To Zero As Bearish Momentum Builds?
📉 Notcoin price forecast shows a significant 14.64% drop, now trading at $0.01534, according to...
0
2024-06-18T15:33:48
https://dev.to/irmakork/notcoin-price-forecast-is-not-going-to-zero-as-bearish-momentum-builds-17c1
📉 Notcoin price forecast shows a significant 14.64% drop, now trading at $0.01534, according to CoinMarketCap. With a market cap of approximately $1.575 billion, trading volumes surged over 5% in the last 24 hours, reaching around $770 million.

📊 Over the past week, Notcoin has seen a decrease of more than 4%, indicating bearish sentiment. Despite this, the token has gained approximately 130% over the past month, showing an overall upward trend. The broader market downturn, with Bitcoin dropping below $66,000, is affecting altcoins like Notcoin.

🔍 Analysts suggest various factors behind the sell-off, including macroeconomic pressures and miner capitulation. Notcoin shows strong downtrend momentum, possibly finding support at $0.015. A prolonged downturn could push the price to $0.012 or $0.01. However, a bullish phase could see it break the $0.019 resistance level, potentially reaching $0.05.

📉 Technical indicators show a concerning downward trend. The MACD is below the signal line, and the RSI is at 27, indicating oversold conditions. The Bollinger Bands suggest narrowing volatility, with the price trend descending toward the lower band.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o9a3ljee84dwmnjbf1op.png)
irmakork
1,892,635
🤯Pepe Coin Whale Dumps 1 Tln Coins To Binance, Price Risks Further Dip?
📉 Pepe coin has sparked investor concerns amid the crypto market’s bearish trend on June 18. The...
0
2024-06-18T15:33:31
https://dev.to/irmakork/pepe-coin-whale-dumps-1-tln-coins-to-binance-price-risks-further-dip-3dh6
📉 Pepe coin has sparked investor concerns amid the crypto market’s bearish trend on June 18. The frog-themed meme coin has shown signs of a correction, with bearish sentiments amplified by a whale’s colossal selloff of over 1 trillion PEPE to Binance.

🐳 Whale Alert data shows 1.15 trillion PEPE, worth $12.34 million, was dumped by an unknown address, causing significant selling pressure and reduced market confidence. Despite this, the whale still holds 6.77 trillion PEPE and 2.19 trillion SHIB, among other tokens.

📊 PEPE’s price dipped 9.21% in the past 24 hours to $0.00001055, with daily lows and highs of $0.000009865 and $0.00001176, respectively. Coinglass data indicates substantial liquidations totaling $6.78 million, contributing to the price correction.

📉 Futures OI for PEPE dropped 14.52% to $109.67 million, while derivatives volume spiked 79.19% to $1.96 billion due to speculative trading. The RSI at 38 signals downside pressure, potentially leading to oversold conditions and a price rebound if the market recovers.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/94wea5q1mwsg6ncrgwvd.png)
irmakork
1,892,634
Top 7 Featured DEV Posts of the Week
Welcome to this week's Top 7, where the DEV editorial team handpicks their favorite posts from the...
0
2024-06-18T15:33:16
https://dev.to/devteam/top-7-featured-dev-posts-of-the-week-1368
top7
Welcome to this week's Top 7, where the DEV editorial team handpicks their favorite posts from the previous week. This week, there happens to be an abundance of career advice from different perspectives! Congrats to all the authors that made it onto the list 👏

{% embed https://dev.to/thekashey/weak-memoization-in-javascript-4po6 %}

Anton walks us through an evolution of concepts (with code snippets every step of the way) to help us better understand memoization in JavaScript.

---

{% embed https://dev.to/wasp/ive-been-writing-typescript-without-understanding-it-5ef4 %}

@vincanger realized they didn't understand the fundamentals of TypeScript, so they took a step back and did some investigating. In this post, we learn about some initial discoveries.

---

{% embed https://dev.to/gauri1504/building-a-bulletproof-cicd-pipeline-a-comprehensive-guide-3jg3 %}

Gauri offers a comprehensive guide on the different considerations for building a CI/CD pipeline, packed with tips and best practices for a secure and efficient workflow.

---

{% embed https://dev.to/tentanganak/7-habits-that-programmers-must-have-1dfj %}

Firman applies the lessons of the book "7 Habits of Highly Effective People" to the day-to-day lives of programmers to enhance productivity and career growth.

---

{% embed https://dev.to/vorniches/ive-worked-in-it-for-over-10-years-here-are-5-things-i-wish-i-knew-when-i-started-43pe %}

Sergei shares valuable lessons learned over a decade in IT, offering insights that are both practical and reflective.

---

{% embed https://dev.to/dipakahirav/understanding-debouncing-in-javascript-5g30 %}

Dipak explains the concept of debouncing in JavaScript, providing practical examples and applications. This post is essential for developers looking to optimize their web applications.
---

{% embed https://dev.to/rampa2510/advice-for-intermediate-developers-4777 %}

Ram reflects on a [post they shared five years ago](https://dev.to/rampa2510/3-tips-for-new-developers-49hj), as an early career developer, and shares new advice they've learned since gaining more experience.

---

_And that's a wrap for this week's Top 7 roundup! 🎬 We hope you enjoyed this eclectic mix of insights, stories, and tips from our talented authors. Keep coding, keep learning, and stay tuned to DEV for more captivating content and [make sure you’re opted in to our Weekly Newsletter](https://dev.to/settings/notifications) 📩 for all the best articles, discussions, and updates._
thepracticaldev