Dataset schema (from the dataset viewer):

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
1,442,295
Chapter 1-Database Cluster, Databases And Tables
1.1 Logical Structure Of Database Cluster. In PostgreSQL, the term database cluster refers...
0
2023-04-20T15:14:17
https://dev.to/muhammadzeeshan03/chapter-1-database-cluster-tables-1hei
postgres, bitnine, database
## 1.1 Logical Structure Of Database Cluster

In PostgreSQL, the term database cluster refers to a collection of databases, not a group of servers. A PostgreSQL server runs on a single host and manages a single database cluster. A database in a cluster is a group of database objects. According to relational database theory, a database object is a data structure used to store or reference data. Examples include (heap) tables, indexes, sequences, views, functions and more. PostgreSQL considers databases themselves to be database objects that are logically separated from each other, while all other objects, such as indexes and tables, belong to their respective databases. These database objects are internally managed by respective 4-byte integers called object identifiers **(OIDs)**. ![Logical structure of a database cluster](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/au8xq0ijzfx7ijpxzeta.png)  *Figure 1. Logical structure of a database cluster*

## 1.2 Physical Structure Of a Database Cluster

A database cluster is, in practice, a single directory known as the base directory, which contains several subdirectories and many files. Executing the **initdb** command initializes a new database cluster and creates a base directory under the specified path. You can set the path of the base directory in the **PGDATA** environment variable, though it is not mandatory. A database is a subdirectory located under the base directory, and every table and index is stored under the subdirectory of its respective database. There are also various subdirectories that contain specific data and configuration files. Although PostgreSQL supports tablespaces, its definition of the term differs from that of other **RDBMS**s. In PostgreSQL, a tablespace refers to a directory that holds data outside of the base directory.

### 1.2.1 Tablespaces

In PostgreSQL, a tablespace is an additional data area outside the base directory. This feature was introduced in version 8.0.
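As a quick, hedged illustration of both the OIDs and the tablespace concept (the table name, tablespace name, and directory path below are hypothetical; the directory must already exist and be writable by the PostgreSQL server):

```sql
-- List the databases in the cluster with their OIDs
SELECT oid, datname FROM pg_database;

-- Show a table's OID and its file path relative to the base directory
-- ("mytable" is a hypothetical table name)
SELECT oid, relfilenode FROM pg_class WHERE relname = 'mytable';
SELECT pg_relation_filepath('mytable');

-- Create a tablespace in a directory outside the base directory
CREATE TABLESPACE fastspace LOCATION '/mnt/ssd/pg_space';
CREATE TABLE measurements (id int) TABLESPACE fastspace;
```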
![A tablespace in database cluster](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rninn9were6ieqfazslw.png)  *Figure 2. A tablespace in database cluster*

## 1.3 Internal Layout Of Heap Table Files

In PostgreSQL, data files (heap tables and indexes) are divided into pages of fixed length, and the default page length is 8192 bytes **(8 KB)**. Pages within a file are numbered sequentially from 0; these numbers are called block numbers. When the existing pages fill up with data, PostgreSQL adds a new empty page to the end of the file. The internal layout of a page depends on the data file type. A heap tuple is the record data itself; tuples are stacked in order. A line pointer, also called an item pointer, is 4 bytes long and holds a pointer to a heap tuple. Header data is allocated at the beginning of the page; it is 24 bytes long and contains general information about the page.

## 1.4 The Method Of Writing And Reading Tuples

When a second tuple is inserted, it is placed immediately after the first tuple, and a second line pointer is pushed onto the first one, pointing to the second tuple. The pointers to the latest line pointer and heap tuple are updated accordingly. For reading, there are two typical methods. A **sequential scan** reads all the tuples in all pages by scanning the line pointers in each page. A **B-tree index scan** uses an index, instead of a sequential scan, to locate the desired tuple; the search is performed based on TID values. *Hopefully this post has given you a solid foundation for further exploring the inner workings of PostgreSQL, and a deeper appreciation for the complexity and power of this database management system.*
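To make the page-layout figures from section 1.3 concrete, here is a rough back-of-the-envelope sketch. It ignores the per-tuple header and alignment padding that PostgreSQL also stores, so it gives an upper bound, not the exact real-world capacity:

```python
# Illustrative figures from the text above
PAGE_SIZE = 8192     # default page size in bytes (8 KB)
PAGE_HEADER = 24     # page header at the beginning of the page
LINE_POINTER = 4     # one line (item) pointer per heap tuple

def max_tuples_per_page(tuple_size: int) -> int:
    """Upper bound on tuples of `tuple_size` bytes that fit in one page.

    Each tuple consumes its own bytes plus one 4-byte line pointer;
    real capacity is lower because of tuple headers and alignment.
    """
    usable = PAGE_SIZE - PAGE_HEADER
    return usable // (tuple_size + LINE_POINTER)

print(max_tuples_per_page(100))  # → 78
```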
muhammadzeeshan03
1,442,442
Scrum Master Start?
Being a developer on a team presents the opportunity to work with different roles within the team....
0
2023-04-20T16:10:20
https://dev.to/devarzia/scrum-master-start-1b5b
scrum, webdev, agile, beginners
Being a developer on a team presents the opportunity to work with different roles within the team. Arguably, one of the most important roles on an Agile development team is the Scrum Master. The Scrum Master ensures that Agile methodologies and practices are actually implemented within the team. As a developer, I would like to become a Scrum Master. However, I do not know where to start. Does anyone have guidance or advice to help me get started on this aspiration? Any and all help is welcome. Thanks in advance!
devarzia
1,442,454
"Mastering Data Flow in React with the Context API: A Complete Guide"
The Context API is a powerful feature in React, a widely-used JavaScript library for building user...
0
2023-04-20T16:27:10
https://dev.to/sdgsnehal/mastering-data-flow-in-react-with-the-context-api-a-complete-guide-4gpo
The Context API is a powerful feature in React, a widely-used JavaScript library for building user interfaces. It offers a way to pass data through the component tree without having to manually pass props at every level. This is particularly useful when you have data that needs to be accessible to many components at different levels of the component tree, such as a theme that you want to apply to your entire application. With the Context API, you can create a "context object" that holds data and make it available to any component that needs it, regardless of its position in the component tree. For instance, you can create a "ThemeContext" that holds the current theme and use it throughout your application. This saves time and makes your code more readable and maintainable. To use the Context API in React, you first create a context object that holds the data you want to pass. Then, you create a Provider component that wraps the part of your component tree that needs access to the context. Finally, you create a Consumer component that can access the context from anywhere within the Provider. This provides a flexible and efficient way to manage data flow in your application. In summary, the Context API is a powerful tool that can help you build more maintainable and scalable applications in React. By using context objects, providers, and consumers, you can simplify the data flow in your application and avoid the need for manual prop drilling. Here's an example of using the Context API in React: ![In this example, the App component creates a ThemeContext object and wraps the Header, Main, and Footer components in a ThemeContext.Provider component. 
The Header component then uses the ThemeContext.Consumer component to access the current theme value and apply it to the header's class name.](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yza5m4xd3xd4xd4c7pd7.png) In this example, the App component creates a ThemeContext object and wraps the Header, Main, and Footer components in a ThemeContext.Provider component. The Header component then uses the ThemeContext.Consumer component to access the current theme value and apply it to the header's class name.
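Since the example itself is only shown as a screenshot, here is a minimal sketch of the pattern it describes (the component bodies and the "dark" theme value are assumptions based on the caption, not the exact code from the image):

```jsx
import React, { createContext } from "react";

// Context object holding the current theme ("light" default is assumed)
const ThemeContext = createContext("light");

function Header() {
  // Consumer reads the nearest Provider's value and applies it as a class name
  return (
    <ThemeContext.Consumer>
      {(theme) => <header className={theme}>Site header</header>}
    </ThemeContext.Consumer>
  );
}

function App() {
  // Provider makes the theme available to Header, Main and Footer
  return (
    <ThemeContext.Provider value="dark">
      <Header />
      <main>Main content</main>
      <footer>Footer</footer>
    </ThemeContext.Provider>
  );
}

export default App;
```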
sdgsnehal
1,442,464
Career Development in Web Development
I've been a web developer for over a decade (feel old yet?). Over my career I've gradually moved from...
0
2023-04-20T16:54:59
https://dev.to/jquinten/career-development-in-web-development-k1n
career, growth, leadership, learning
I've been a web developer for over a decade (feel old yet?). I started my career as an entry-level junior at a web agency, where I learned not to drop tables, learned jQuery (yes), and even endured the woes of supporting IE6! Over the years I've grown in experience and have focused on becoming a well-rounded frontend specialist, having adopted the [Vue.js](https://vuejs.org/) / [Nuxt.js](https://nuxt.com/) tech stack for the past four years and being considered expert enough to be invited to [speak at conferences](https://joranquinten.nl/talks) and [write a book](https://joranquinten.nl/books). Lately though, I've found myself transitioning more and more into a lead role. And I've noticed some key differences in my day-to-day activities and responsibilities. To help you, fellow developer, navigate your own career path, I think it makes sense for me to write about my journey. ## Organic path My journey has not followed a predetermined path or goal. I was happy starting out where I did. The only constant was that once I reached my personal potential within an organisation or project, I started to look at different opportunities. Those new opportunities would be tied to a certain step up: in compensation, benefits, topic and new growth possibilities. I did not set out to become a lead developer at Jumbo! I have been lucky enough to always find something that aligns with where I was at that point in time. It wasn't the most efficient path, to be honest. I think I made the most strides at my current organisation, where I started as a humble frontend developer (with seniority) but was also given the space to tap into my other talents. Being given space is only part of the equation: you also need to use that space! I've actively collaborated on the founding of some key projects at Jumbo, because I expressed interest in doing them and by being visible within the organisation.
## Visibility is key This is a lesson I learned while I was working as a consultant. In order to grow in a consulting organisation, you need to actively work on self-promotion. Just doing a good job at a client is not enough, especially in large organisations. If people don't know you exist, you will not be considered for any opportunity. It's as simple as that. So that means actively contributing to the organisational process. You can do this in any way that suits you, as long as it connects you to peers or (even better) superiors. As part of an organisation, you are a tool or product that helps facilitate the primary business process. By connecting, you become a more valuable asset in an organisation's inventory. Taking that knowledge with me has helped me actively promote myself in every other project or organisation that I've worked with. It's not about pretending, but about highlighting your qualities and aspirations. ## Goals and KPIs I don't like personal development plans. What I do like is a long-term vision with clear short-term goals. I believe you should be able to set and reset goals on, say, a monthly basis. In order to lay down goals, you need a long-term vision for yourself, aligned with organisational goals. For me, a long-term vision has always revolved around creating visibility and engagement around my work and the organisation I'm working with. This usually ties in nicely with the promotional and hiring goals of an employer. If those are aligned, you can create a space for yourself to expand in a certain direction. This is why I have been doing public speaking events and collaborating with educational institutions. I also apply that experience to internal connections and collaborations. All these activities, which are not part of my primary role, I get to do in agreement with my coach/mentor/manager, because I can show that they contribute to my long-term vision. ## Evaluating Growth is usually measured in achievements and compensation.
I tend to plan quarterly meetings as performance reviews to discuss the things I've achieved in the past period. I always keep a record of my activities and their outcomes. Usually, I set the agenda for these meetings and provide the points I want to discuss upfront, as well as the desired outcome. This helps me structure the conversation around the things I want to address. ## Make it your own journey In many organisations there are clear and formal definitions of what you need to achieve for any role. You can use these to map your short-term goals and milestones. I would recommend, though, looking at a long-term vision for yourself: something that extends beyond your current position and the organisation you're working with. If you have that, you can validate whether any of the steps in between contribute to that vision. I think that helps massively in growing yourself.
jquinten
1,442,480
48 Career Advancement: Help the Envious. Yes, Help Them.
Moin Moin, help the envious. Help them out of their lack of self-esteem→ and out of the...
22,304
2023-04-23T04:31:00
https://dev.to/amustafa16421/48-berufliches-vorwartskommen-helfet-den-neidern-ja-helfen-4b82
deutsch, career, motivation, discuss
Moin Moin, help the envious. Help them out of their lack of [self-esteem→](https://dev.to/amustafa16421/comment/266jd) and out of their untrained self-assessment. Even if a company has no feedback meetings, we as colleagues can give each other regular feedback (which means such meetings then exist). [Here too→](https://dev.to/amustafa16421/45-berufliches-vorwartskommen-mehr-raum-fur-dortherlaufer-5d7j) I would argue that we can strongly influence company culture through our own initiative, because a company is carried in part by its employees. So if we treat each other better within our departments, we improve the company culture in the same step. 1) Would you like to help the envious? 2) What do you think would be suitable measures? 3) Do you wish your company supported you in this, or do you already receive that support? Best regards, Mustafa Exchange via e-mail: mustafa.kevin.dwenger@posteo.de Subject: Austausch
amustafa16421
1,442,522
Services provided by AWS
AWS, a portion of Amazon.com, Inc., gives on-demand cloud computing stages and APIs to shoppers,...
0
2023-04-20T18:23:32
https://dev.to/shriom_03/services-provided-by-aws-45l4
blog, aws, amazon, cloudskills
AWS, a subsidiary of Amazon.com, Inc., provides on-demand cloud computing platforms and APIs to consumers, organizations, and governments. AWS offers a wide range of products and services to help businesses of all sizes build and scale. This post covers some of Amazon Web Services' most popular and useful services.  **<u>Elastic Compute Cloud (EC2):</u>** Amazon EC2 is a web service that offers resizable computing capacity in the cloud. It is designed to make web-scale cloud computing simpler for developers. EC2 offers several instance types optimized for different use cases, such as General Purpose, Compute Optimized, and Memory Optimized instances. Users can choose from operating systems including Amazon Linux, Windows, and Ubuntu. **<u>Simple Storage Service (S3):</u>** Amazon S3 is a highly scalable, reliable, and secure object storage service. Any amount of data may be stored in it and retrieved from any location on the internet. S3 has a pay-as-you-go pricing structure, which makes it an affordable choice for storing huge volumes of data. Users can store and access data including videos, images, documents, and other file types. **<u>Relational Database Service (RDS):</u>** Amazon RDS is a web service that makes it easier to set up, operate, and scale relational databases in the cloud. It supports a number of well-known database engines, including Microsoft SQL Server, Amazon Aurora, MySQL, and MariaDB. RDS is a low-maintenance option for running databases because it provides automated backups, point-in-time restores, and automated software patching. **<u>Lambda:</u>** AWS Lambda is a compute service that runs code in response to events and automatically manages the underlying compute resources. It is used to build serverless applications that can be deployed without managing any infrastructure.
Lambda supports Python, Node.js, Java, and other programming languages. **<u>DynamoDB :</u>** Amazon DynamoDB is a fast, fully managed NoSQL database service. It is designed for applications that need consistent single-digit-millisecond response times and high performance at low latency. With features like automatic scaling, built-in security, and a flexible data schema, DynamoDB is a popular option for building modern apps. **<u>CloudFront :</u>** Amazon CloudFront is a content delivery network (CDN) that delivers data, video, applications, and APIs to users around the world with high transfer speeds and low latency. It integrates with other AWS services including S3, EC2, and Lambda to offer a complete end-to-end solution for delivering content to your clients. **<u>SES (Simple Email Service) :</u>** Amazon SES is a cloud-based email sending service that enables companies to send transactional email in large volumes. With capabilities like automated bounce handling, DKIM signing, and personalized email templates, you can send email reliably and affordably. SES also integrates with other AWS services like S3 and Lambda to offer a full email solution. **<u>SNS (Simple Notification Service) :</u>** Amazon SNS is a web service that lets applications, end users, and devices send and receive push notifications. It offers a flexible and economical way to notify your customers via SMS, email alerts, and mobile push notifications. SNS integrates with other AWS services like Lambda and CloudFormation to offer a comprehensive notification solution. **<u>CloudWatch :</u>** Amazon CloudWatch lets you monitor your AWS resources and the applications that run on them. It helps you observe and debug your applications and infrastructure by providing metrics, logs, and alarms. Your EC2 instances, RDS databases, Lambda functions, and other AWS services may all be monitored with CloudWatch.
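As a small sketch of the Lambda model described above: Lambda invokes a handler function with the triggering event and a runtime context object. The handler name, event shape, and response shape here are illustrative assumptions (the `statusCode`/`body` shape matches what API Gateway proxy integrations expect), not requirements of Lambda itself:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler.

    `event` carries the triggering payload; `context` carries runtime
    metadata (request id, remaining time, etc.) and is unused here.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Outside Lambda you can invoke the handler directly for a quick local check
print(handler({"name": "AWS"}, None))
```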
shriom_03
1,442,713
React Router: A Beginners guide to useParams hook.
In this guide, we will be unravelling the complexity in one of React Router's hook called: useParams....
0
2023-04-20T20:53:32
https://dev.to/stanlisberg/react-router-a-beginners-guide-to-useparams-hook-38pj
webdev, react, javascript
In this guide, we will be unravelling the complexity in one of React Router's hooks: `useParams`. We will learn about its implementation and functionality. Ride along with me. **Prerequisites** You should have the following to follow along with this guide: - A basic knowledge of React. - A code editor. - A browser. You might be wondering: why is it necessary to use `useParams`? I will gladly explain. When it comes to routing, some Routes have dynamic parameters attached to their path string, and the `useParams` hook provides a way for us to access those dynamic parameters in the Route. #### What is useParams? `useParams` is a hook that allows you to access dynamic parameters in the URL (Uniform Resource Locator). **Setting up React Router** Using npm: `npm install react-router-dom` Because the `useParams` hook comes from React Router, we need to import `BrowserRouter`, `Routes` and `Route` from react-router-dom. ![Routing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r4at6jl76xbexs9q17p0.jpeg) From the image above, we have set up React Router. We are going to demonstrate the implementation of `useParams` with a mini project. In this project, we will be working with two (2) components, namely `CardComponent` and `CardInfo`. Firstly, we will structure our `App` component by declaring an array of objects that holds our card data and also configure our Routes. **App Component** ``` import { BrowserRouter as Router, Routes, Route } from "react-router-dom"; import CardComponent from "./components/CardComponent"; function App() { const cardData = [ { name: "HTML", title: "Markup Language", description: "HTML is a markup language that describes the structure of web pages" }, { name: "CSS", title: "Styling Language", description: "CSS is a style sheet language which is used to describe the look and formatting of a document written in markup language." 
}, { name: "JAVASCRIPT", title: "Programming Language", description: "JavaScript is a programming language that adds interactivity to web pages" }, { name: "REACT", title: "Javascript Library", description: "React is a JavaScript-based UI development library for building user interfaces based on components." } ]; return ( <div className="App"> <Router> <Routes> <Route path="/" element={<CardComponent data={cardData} />} /> </Routes> </Router> </div> ); } export default App; ``` From the code snippet above, we declared an array of objects that holds some data, passed it as a prop to `CardComponent`, and configured our Routes by specifying a path to `CardComponent`. ` <Route path="/" element={<CardComponent data={cardData} />} />` The `path="/"` is targeted to display `CardComponent's` content on the index (home) page. **CardComponent** ``` import { Link } from "react-router-dom"; function CardComponent({ data }) { return ( <> <section className="wrapper"> <div className="container"> {data.map((item, index) => ( <div className="card" key={index}> <h3>{item.name}</h3> <p>{item.title}</p> <Link to={`/info/${item.name}/`}>View Description</Link> </div> ))} </div> </section> </> ); } export default CardComponent; ``` From the code snippet above, we imported `Link` from `react-router-dom`. **`Link` is an element that lets the user navigate to another page by clicking on it.** Recall that we passed our `cardData` as a prop from the App component to its child component, which in this case is `CardComponent`. Now we are **destructuring** that prop to access the data and mapping its content into a `div` element with a class of `card`. See the output below. ![mapped data](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbklb0k9ih0lxdfhc42c.png) **Implementing useParams (i)** Let's go back to our App component and add a few things. 
![useParams](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ybc2j4jodcbwji2fth6g.png) The image above shows that we imported the `CardInfo` component and rendered it as a child component in the `App` component while passing the `cardData` array as a `prop`. I know what you are thinking: _"Where did the `CardInfo` component appear from?"_ Don't worry, we are on the right track; I purposely did that to explain how we can set up our Route for `useParams`. ```<Route path="/info/:name" element={<CardInfo data={cardData} />} />``` If we take a good look at the code above, the path string is directed to `/info/:name`. What this means is that the colon `:` before `name` is an indicator that tells React Router that this is a dynamic Route and that `name` is now a property of the `useParams` object. This `name` can be anything, but it must match that of the `useParams` object. Also, from our card component: ```<Link to={`/info/${item.name}/`}>View Description</Link>``` The code specifies that the Link will navigate to the `CardInfo` component. __Note:__ _In our `App` component, you must understand that the parameter after the `:`, which is `name`, is now a reference to `item.name` from our Link path; i.e., the Link will navigate to `/info/:name`_. Let's introduce our `CardInfo` component and put all the pieces together. Shall we? **Implementing useParams (ii)** __CardInfo__ ``` import { useParams } from "react-router-dom"; function CardInfo({ data }) { const { name } = useParams(); return ( <> <section className="card-wrapper"> <div className="card-container"> {data .filter((item) => item.name === name) .map((item, index) => ( <div className="card-info" key={index}> <h2>{item.name}</h2> <p>{item.description}</p> </div> ))} </div> </section> </> ); } export default CardInfo; ``` We have imported the `useParams` hook from `react-router-dom` and accessed its property using **destructuring**. 
`const { name } = useParams();` _Recall_ that the path string to navigate to the `CardInfo` component is `/info/:name` in our `App` component, and we have gained access to the `:name` parameter by using destructuring. Now, `name` is a property of the `useParams` object in this case. __Note:__ _`name` as a property of `useParams` must match the parameter name specified in the path string of the route definition, which in this case is `:name`; otherwise you will encounter an error. Basically, you can name the parameter anything you want, but it is ideal to match it with one of the keys in your data objects so there's no confusion. Also, from our `Link`, `${item.name}` supplies the value that `useParams` returns as `name`._ We have filtered our data to return the item whose name matches the one clicked in our `Link`, and mapped over the filtered data to render the `name` and `description` of the desired card. Check the output below. __Functionality__ {% embed https://codesandbox.io/embed/shy-cdn-3n0ke9?fontsize=14&hidenavigation=1&theme=dark" style="width:100%; height:500px; border:0; border-radius: 4px; overflow:hidden;" title="shy-cdn-3n0ke9" allow="accelerometer; ambient-light-sensor; camera; encrypted-media; geolocation; gyroscope; hid; microphone; midi; payment; usb; vr; xr-spatial-tracking" sandbox="allow-forms allow-modals allow-popups allow-presentation allow-same-origin allow-scripts" %} Click the view description button and see the magic. ![output of navigated route](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0xip488y7nq5jiv1hm19.png) If you take a look at that arrow in the image, you will see that we have successfully navigated to `/info/HTML` because we clicked the view description button of the HTML card. We have achieved this with the help of the `useParams` hook. __Conclusion__ In this guide, we have discussed how to use the `useParams` hook in React. 
We have also walked through the implementation and functionality of the hook with a mini project, showing how you can use it to handle dynamic URLs in your web applications. I hope you learned something new from this article. Good luck in your career!
stanlisberg
1,442,748
DlteC do Brasil offers 9 free courses with certificates in Cybersecurity, Linux and more!
With the growing demand for professionals specialized in information technology, it is essential...
0
2023-04-20T22:25:10
https://guiadeti.com.br/dltec-do-brasil-cursos-ciberseguranca-mais/
cursogratuito, cisco, cursosgratuitos, cybersecurity
--- title: DlteC do Brasil offers 9 free courses with certificates in Cybersecurity, Linux and more! published: true date: 2023-04-20 21:54:50 UTC tags: CursoGratuito,cisco,cursosgratuitos,cybersecurity canonical_url: https://guiadeti.com.br/dltec-do-brasil-cursos-ciberseguranca-mais/ --- ![Thumb DlteC do Brasil - Guia de TI](https://guiadeti.com.br/wp-content/uploads/2023/04/DlteC-do-Brasil-1024x676.png "Thumb DlteC do Brasil - Guia de TI") With the growing demand for professionals specialized in information technology, it is essential to stay up to date and sharpen your skills to stand out in the job market. The good news is that the DlteC do Brasil platform offers 9 free courses with certificates, covering areas such as [Cybersecurity](https://guiadeti.com.br/guia-tags/cursos-de-cybersecurity/), [Linux](https://guiadeti.com.br/guia-tags/cursos-de-linux/), [IoT](https://guiadeti.com.br/guia-tags/cursos-de-iot/), Packet Tracer and [Cisco](https://guiadeti.com.br/guia-tags/cursos-de-cisco/). With 100% online classes and flexible hours, this is an unmissable opportunity for anyone looking to build their skills and stand out in their career. ## Contents <nav><ul> <li><a href="#cursos">Courses</a></li> <li><a href="#dlte-c-do-brasil">DlteC do Brasil </a></li> <li><a href="#inscricoes">Enrollment</a></li> <li><a href="#compartilhe">Share!</a></li> </ul></nav> ## Courses - **Cisco IOS Essentials** - Syllabus: - Objective. - Cisco IOS, Routers and Switches. - Demystifying the CLI – Modes, Commands and Navigation. - First Lines of Configuration in Cisco IOS. - Configuring VLANs and Interfaces. - Challenge: Building My First Cisco Lab. - Pursuing the Cisco Routing and Switching Career Path. - **Network Configs on Linux/Windows** - Syllabus: - Objective. - Network Cards. - Configuration on [Windows](https://guiadeti.com.br/guia-tags/cursos-de-windows/). - Configuration on Linux. - Tips and Troubleshooting. - Conclusion. 
- **Cybersecurity Essentials** - Syllabus: - Course Introduction. - The cybersecurity cube. - The art of ensuring integrity. - The concept of five nines. - Protecting a cybersecurity domain. - How to become a security specialist. - **Cybersecurity Fundamentals** - Syllabus: - Introduction to the Course and to the Cisco DLTEC do Brasil Academy. - Cybersecurity – A world of wizards, heroes and criminals. - The cybersecurity cube. - Cybersecurity threats, vulnerabilities and attacks. - The art of protecting secrets. - The art of ensuring integrity. - The concept of five nines. - Protecting a cybersecurity domain. - How to become a security specialist. - Conclusion and Certificate. - **Introduction to Cybersecurity** - Syllabus: - Course Introduction. - The need for cybersecurity. - Attacks, concepts and techniques. - Protecting data and privacy. - Protecting the company. - Your future is in cybersecurity. - **Introduction to IoT** - Syllabus: - Course Introduction. - Everything is connected. - Everything becomes programmable. - Everything generates data. - Everything can be automated. - Everything needs to be protected. - Education and Business Opportunities. - **Introduction to Linux** - Syllabus: - What Linux is and why learn it. - Setting Up the Study Environment. - Basic Commands and File Manipulation. - Directory Structure and Permissions. - Managing Programs and Compressing Files. - How to Boost Your Career with Linux. - The LPI Certification Track. - Conclusion and Certificate. - **Introduction to Packet Tracer** - Syllabus: - Course Introduction. - Downloading, Installing and Using Packet Tracer. - Introduction to Packet Tracer. - User Interface. - Simulation Mode. - Physical View and Packet Tracer File Types. - IoT Components. - Creating and Controlling a Home Network. - Packet Tracer Environment Controls. - Creating and Programming Objects. 
- **Cisco Router and Switch Product Lines** - Syllabus: - Objective. - Understanding and Classifying Catalyst Switches. - Modular and Small Business Switches. - Access Switches. - Management Features of Catalyst Switches. - Distribution, Core and Data Center Switches. - Components of a Router. - Cisco Router Families. - NCS, ASR and Virtualized Devices. ## DlteC do Brasil DlteC do Brasil is an online education platform that offers a variety of free and paid courses in areas such as Information Technology, Business, Marketing and Design. The platform stands out for offering high-quality courses with up-to-date content, taught by professionals with experience in their fields. In addition, DlteC do Brasil's courses are fully online and can be accessed at any time, giving students the flexibility to study according to their own availability and pace of learning. The platform also issues certificates for completed courses, which helps students demonstrate their skills and knowledge to potential employers and clients. DlteC do Brasil aims to democratize access to quality education and prepare professionals for the demands of today's job market. In doing so, the platform has helped many people improve their skills and reach their professional goals. ## Enrollment [Sign up here!](https://www.dltec.com.br/curso/gratis) ## Share! Did you like this overview of the DlteC do Brasil platform's courses? Then be sure to share it with everyone! The post [DlteC do Brasil offers 9 free courses with certificates in Cybersecurity, Linux and more!](https://guiadeti.com.br/dltec-do-brasil-cursos-ciberseguranca-mais/) appeared first on [Guia de TI](https://guiadeti.com.br).
guiadeti
1,442,914
ChatGPT Is at the Peak of the Hype Cycle
ChatGPT is at the peak of the hype cycle.
0
2023-04-21T00:45:25
https://dev.to/dsschnau/chatgpt-is-at-the-peak-of-the-hype-cycle-5ch
--- title: ChatGPT Is at the Peak of the Hype Cycle published: true date: 2023-01-19 16:01:39 UTC tags: canonical_url: --- ChatGPT is at the peak of the hype cycle. ![peak of expectations](https://danschnau.com/img/peak_of_expectations.png)
dsschnau
1,442,920
.NET 8 Preview 3: Simplified Output Path format for Builds
If you're like me, you're used to finding your .NET build artifacts in /bin and /obj directories...
0
2023-04-21T00:44:53
https://dev.to/dsschnau/net-8-preview-3-simplified-output-path-format-for-builds-jl6
--- title: .NET 8 Preview 3: Simplified Output Path format for Builds published: true date: 2023-04-20 19:31:26 UTC tags: canonical_url: --- If you're like me, you're used to finding your .NET build artifacts in `/bin` and `/obj` directories inside your source projects. It's been this way my whole career. In .NET 8 Preview 3, this can be changed to dump the `/bin` and `/obj` artifacts into an `.artifacts` directory instead of the source root. .NET Manager Chet Husk writes about it [in the comments of 'What's new in .NET 8 Preview 3' discussion](https://github.com/dotnet/core/issues/8135#issuecomment-1483357684). I'd like to test this little feature out. First, I needed to install [.NET 8 Preview 3](https://dotnet.microsoft.com/en-us/download/dotnet/8.0). It required a system restart. Then, a new console app in a directory `testartifactsoutput`. ``` ~#@❯ dotnet new console ❮  Welcome to .NET 8.0! --------------------- SDK Version: 8.0.100-preview.3.23178.7 Telemetry --------- The .NET tools collect usage data in order to help us improve your experience..... *SNIP* Read more about .NET CLI Tools telemetry: https://aka.ms/dotnet-cli-telemetry ---------------- Installed an ASP.NET Core HTTPS development certificate. To trust the certificate run 'dotnet dev-certs https --trust' (Windows and macOS only). Learn about HTTPS: https://aka.ms/dotnet-https ---------------- Write your first app: https://aka.ms/dotnet-hello-world Find out what's new: https://aka.ms/dotnet-whats-new Explore documentation: https://aka.ms/dotnet-docs Report issues and find source on GitHub: https://github.com/dotnet/core Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli -------------------------------------------------------------------------------------- The template "Console App" was created successfully. Processing post-creation actions... Restoring C:\danschnauprojects\testartifactsoutput\testartifactsoutput.csproj: Determining projects to restore... 
Restored C:\danschnauprojects\testartifactsoutput\testartifactsoutput.csproj (in 76 ms). Restore succeeded. ``` The default behavior is still to build to the `/bin` directory. ``` ~#@❯ dotnet build ❮  MSBuild version 17.6.0-preview-23174-01+e7de13307 for .NET Determining projects to restore... All projects are up-to-date for restore. C:\Program Files\dotnet\sdk\8.0.100-preview.3.23178.7\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.RuntimeIdentifierInference.targets(287,5): message NETSDK1057: You are using a preview version of .NET. See: https://aka.ms/dotnet-support-policy [C:\danschnauprojects\testartifactsoutput\testartifactsoutput.csproj] testartifactsoutput -> C:\danschnauprojects\testartifactsoutput\bin\Debug\net8.0\testartifactsoutput.dll ``` Then, we create a `Directory.Build.props` file. ``` ~#@❯ dotnet new buildprops ❮  The template "MSBuild Directory.Build.props file" was created successfully. ``` The buildprops file is pretty sparse. ``` <Project> <!-- See https://aka.ms/dotnet/msbuild/customize for more details on customizing your build --> <PropertyGroup> </PropertyGroup> </Project> ``` We add the `UseArtifactsOutput` flag to the file: ``` <Project> <!-- See https://aka.ms/dotnet/msbuild/customize for more details on customizing your build --> <PropertyGroup> <UseArtifactsOutput>true</UseArtifactsOutput> </PropertyGroup> </Project> ``` Now I cleaned up the build by deleting the `/bin` and `/obj` directories. ``` ~#@❯ ls ❮  Directory: C:\danschnauprojects\testartifactsoutput Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 4/20/2023 3:25 PM 212 Directory.Build.props -a---- 4/20/2023 3:20 PM 105 Program.cs -a---- 4/20/2023 3:20 PM 252 testartifactsoutput.csproj ``` And now, a build: ``` ~#@❯ dotnet build ❮  MSBuild version 17.6.0-preview-23174-01+e7de13307 for .NET Determining projects to restore... Restored C:\danschnauprojects\testartifactsoutput\testartifactsoutput.csproj (in 64 ms). 
C:\Program Files\dotnet\sdk\8.0.100-preview.3.23178.7\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.RuntimeIdentifierInference.targets(287,5): message NETSDK1057: You are using a preview version of .NET. See: https://aka.ms/dotnet-support-policy [C:\danschnauprojects\testartifactsoutput\testartifactsoutput.csproj] testartifactsoutput -> C:\danschnauprojects\testartifactsoutput\.artifacts\bin\testartifactsoutput\debug\testartifactsoutput.dll Build succeeded. 0 Warning(s) 0 Error(s) Time Elapsed 00:00:01.20 ``` Ooh, I see the `.artifacts` directory in the output there. The `/bin` and `/obj` directories are now in their own `.artifacts` folder. Sweet. ``` ~#@❯ ls ❮ 1s 452ms   Directory: C:\danschnauprojects\testartifactsoutput Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 4/20/2023 3:26 PM .artifacts -a---- 4/20/2023 3:25 PM 212 Directory.Build.props -a---- 4/20/2023 3:20 PM 105 Program.cs -a---- 4/20/2023 3:20 PM 252 testartifactsoutput.csproj ~#@❯ ls .\.artifacts\ ❮  Directory: C:\danschnauprojects\testartifactsoutput\.artifacts Mode LastWriteTime Length Name ---- ------------- ------ ---- d----- 4/20/2023 3:26 PM bin d----- 4/20/2023 3:26 PM obj ```
dsschnau
1,443,005
Good practices in js that I use every day
Hey guys, my name is Vinicius and I started working as a dev in 2018. This is my first post ever, I...
0
2023-04-21T04:01:18
https://dev.to/viniielopes/good-practices-in-js-that-i-use-every-day-27pj
javascript, programming, beginners, webdev
Hey guys, my name is Vinicius and I started working as a dev in 2018. This is my first post ever; I don't really like to expose myself and I'm trying to change that over time, but since this is a really nice environment, I decided to share some good practices that I use in my day to day. ## 1 - For functions with three or more parameters, I prefer passing an object. When I have to develop a function with three or more parameters, it becomes a bit difficult to pass the arguments in the right order, and when it is not a function I wrote but one I need to use, I have to go back to the function declaration to know what to pass in each field. Example of a function with more than three parameters: ```tsx const createInvoice = ( name: string, lastName: string, totalValue: number, address: string ) => { console.log(name, lastName, totalValue, address); }; createInvoice(`Vinicius`, `Lopes`, 1500, `Pennsylvania Avenue NW`); ``` When we call the function, if we are its creator it is easier to remember the order, but the values are loose and it is still necessary to go to the function declaration to know what to pass in each field. As this function exceeds three parameters, we can refactor it to use an object. Example of how to call the function with the parameters grouped in an object: ```tsx type CreateInvoiceParams = { name: string; lastName: string; totalValue: number; address: string; }; const createInvoice = ({ name, lastName, totalValue, address, }: CreateInvoiceParams) => { console.log(name, lastName, totalValue, address); }; createInvoice({ name: `Vinicius`, lastName: `Lopes`, totalValue: 1500, address: `Pennsylvania Avenue NW`, }); ``` **Yes! We seem to write more! But it does have some advantages…** The code looks more verbose, but we gain some advantages that I will list. - First, we get an encapsulated type for our method's parameters that we can reuse in other places. 
- The order of the values no longer matters in the invocation, and it is much easier to know which parameters are needed without having to look at the function declaration, reducing the cognitive effort in future maintenance. - Readability: when looking at the function invocation, one can immediately identify the parameters being used. ## 2 - Functional programming > imperative programming 😲 **I'm kidding; the actual advice is to use the functional features of the language when possible.** JavaScript is not a 100% functional language, but we can use some functional tricks that help with readability in our daily work. I'll show you an example of a sum of values done in an imperative manner. ```tsx const orders = [ { name: `Vinicius`, value: 50, }, { name: `Ailton`, value: 80, }, { name: `Lucas`, value: 100, }, { name: `Felipe`, value: 50, }, ]; let total = 0; for (let i = 0; i < orders.length; i++) { total += orders[i].value; } ``` This kind of `for` loop is already well known because we learned it in college, but we can do something simpler that will probably improve our code and our knowledge of how to work with arrays in JS. ```tsx const orders = [ { name: `Vinicius`, value: 50, }, { name: `Ailton`, value: 80, }, { name: `Lucas`, value: 100, }, { name: `Felipe`, value: 50, }, ]; const total = orders.reduce((acc, order) => acc + order.value, 0); ``` The version above uses `reduce`, a method available on arrays in JS. At first it seems a little complicated, but a brief explanation makes it clear, so let's walk through it. `const total = orders.reduce((acc, order) => acc + order.value, 0);` As a functional method, `reduce` first receives a function, which in JavaScript we usually call a callback; this callback receives an accumulator (`acc`) and the current item of the array. After the callback, `reduce` receives the initial value of the accumulator, which here is `0`. 
The callback is then run for each item of the array, updating the accumulator with the callback's return value, which here is `acc + order.value`. After going through all the items of the array, `reduce` returns the accumulated result, which is assigned to the `total` variable. In addition to `reduce`, we have `map`, `flatMap`, `filter`, `find`, `sort`, all of which help a lot in everyday life, improve readability, and reduce the cognitive effort of the code. ## 3 - Do not use ELSE!! Okay, this can be a little tricky at first and will need some practice. "Don't use `else`" is a rule of [object calisthenics](https://williamdurand.fr/2013/06/03/object-calisthenics/#2-dont-use-the-else-keyword) and I think it's one of the coolest among the 9 rules described. Without `else` you can reduce the number of condition flows in your code and thus its complexity. The biggest problem I see with `else` is that it covers **EVERYTHING** that is not the `if`. On certain occasions it makes total sense to have an `else`; depending on the business rule you might need one, but most of the time we don't. A cool technique is the ***early return***: always return from a method as quickly as possible. One idea I use is to check business-rule conditions and errors at the beginning and leave the "happy path" at the end. Okay, I've explained enough; let's see some code to make it clearer. ```tsx const checkDeliveryPrice = (value: number) => { if (value > 90) { notChargeDelivery(); } else { chargeDelivery(); } }; ``` In the code above, if the value is greater than 90 we do not charge shipping, and if it is not greater than 90 we do, but we can improve it. 
```tsx const checkDeliveryPrice = (value: number) => { if (value > 90) { notChargeDelivery(); return; } chargeDelivery(); }; ``` OK, it is a little cleaner without the `else`: we reduced the cognitive effort needed to understand this method, and we also reduced the flows that the application can follow. ### 3.1 Early returns are the friends we make along the way. The early return consists of **failing quickly** (what does that mean?): validate the possible problems before the *"happy path"*. Let's go straight to the code. ```tsx const checkDeliveryPrice = (value: number, franchised: boolean) => { if (value > 90) { if (franchised) { notChargeDelivery(); } if (!franchised) { alert(`You are not a franchisee, you will be charged shipping`); chargeDelivery(); } } else { if (value === undefined) { alert(`Invalid value`); return; } chargeDelivery(); } }; ``` Do you realize that it's even ugly to look at code like that? In this code we check whether the customer spent more than `90`: if they are a franchisee they do not pay shipping, but if they are not a franchisee they pay for it. In the `else` we check if the value is invalid and warn the user; if it is valid, we just charge shipping normally. Imagine the cognitive effort that our brain, which we use all day, needs to understand this. We can refactor this code thinking about the early return rule, **fail quickly**, for example: ```tsx const checkDeliveryPrice = (value: number, franchised: boolean) => { if (value === undefined) { alert(`Invalid value`); return; } if (value > 90) { if (franchised) { notChargeDelivery(); } if (!franchised) { alert(`You are not a franchisee, you will be charged shipping`); chargeDelivery(); } } else { chargeDelivery(); } }; ``` We move the invalid-value validation to the beginning of the method and return the error to the user as soon as possible, without entering the rest of the code, but we can improve the validations even more. 
```tsx const checkDeliveryPrice = (value: number, franchised: boolean) => { if (value === undefined) { alert(`Invalid value`); return; } if (!franchised) { alert(`You are not a franchisee, you will be charged shipping`); chargeDelivery(); return; } if (value > 90) { notChargeDelivery(); return; } chargeDelivery(); }; ``` Here we do the validations first and leave the **"happy path"** for last: we check whether the user is a franchisee; if they are, we skip to the next `if` and check whether the value is above `90`, in which case we do not charge shipping; if it is not above `90`, shipping is charged normally. We don't need to nest any `if`, we can handle the conditions more easily, and the effort to understand the method is much smaller. ## 4 - Replacing the switches in our code In JavaScript we sometimes get lost writing a `switch/case`, or even several `if`s, just to check a constant value and return something. In these cases we can use the **object literal** (not to be confused with a template literal). An **object literal** is nothing more than a JavaScript object, but since it maps keys to values, we can build some "checks" with it that make the code much simpler and cleaner. Example with `if` ```tsx const verifyUser = (userType: "admin" | "user" | "guest") => { if (userType === "admin") return "Administrator"; if (userType === "user") return "User"; if (userType === "guest") return "Anonymous"; }; ``` Example with `switch/case` ```tsx const verifyUser = (userType: "admin" | "user" | "guest") => { switch (userType) { case "admin": return "Administrator"; case "user": return "User"; case "guest": return "Anonymous"; default: return "Anonymous"; } }; ``` This is just an example, but you can see that these are static values, without validations, where we only map values to other information. Even so, the code is strange and complicated for the little it does, not to mention the code repetition and, in the case of `switch/case`, how common the mistake of forgetting a `break;` is. 
We can bring in object literals to help us with this and improve the readability of the code, for example: ```tsx const userTypes = { admin: "Administrator", user: "User", guest: "Anonymous", }; const verifyUser = (type: "admin" | "user" | "guest") => { return userTypes[type] || "Anonymous"; }; ``` Wow, it looks like magic, right? But all we do is index the object with the type: the key returns its value, and the default will be "Anonymous" if there is no match. We can see that this improves both the readability and the maintainability of this code; it can scale quickly just by adding new entries to the `userTypes` variable, and we don't need to repeat code. ## It ends here 🤙 I hope I have added some value to our community; I wrote down the first things that came to mind and made some examples to illustrate them. If you have good practices that you always use in everyday life, leave a comment, I want to know them too. Thanks for reading. 🤓
viniielopes
1,443,167
Apache AGE & Python(Pt. 2)
In Part-1 of this blog post, we explored how the Python driver in Apache AGE enables Python...
0
2023-04-21T06:33:31
https://dev.to/huzaiifaaaa/apache-age-pythonpt-2-17jc
apacheage, postgres, database, apache
In [Part-1](https://dev.to/huzaiifaaaa/apache-age-python-1pg7) of this blog post, we explored how the Python driver in Apache AGE enables Python developers to interact with the database. In this part, we'll dive deeper into some of the advanced features of the driver and how they can be used to build complex graph applications. ## Traversals: The Python driver's support for graph traversals is one of Apache AGE's standout features. Using traversals, you can follow the connections between the graph's nodes and collect data along the way. The Python driver offers a straightforward API for carrying out traversals. Here's an example of it: ``` result = client.execute_query(""" MATCH (p:person)-[:knows]->(friend) WHERE p.name = 'Alice' RETURN friend.name, friend.age """) for row in result: print(row) ``` In this example, we execute a query that finds all friends of a person named Alice and returns their names and ages. We then iterate over the result set and print each row. ## Pattern Matching: Support for pattern matching is another powerful feature of Apache AGE. Pattern matching lets you look for particular patterns in the graph, such as a path between two nodes or a subgraph with a particular structure. The Python driver provides a pattern matching API. This can be used as follows: ``` result = client.execute_query(""" MATCH (a:person)-[:knows]->(b:person)-[:knows]->(c:person) WHERE a.name = 'Alice' AND c.name = 'Charlie' RETURN b.name """) for row in result: print(row) ``` In this example, we execute a query that finds all people who are friends of both Alice and Charlie. We then iterate over the result set and print each row. ## Transactions: Apache AGE supports transactions, which let you combine several queries into a single atomic operation. This is helpful when updating the graph in stages and you want to make sure that either all updates complete successfully or none do. 
An example of this is: ``` with client.transaction() as tx: tx.execute_query("CREATE VERTEX person(name TEXT)") tx.execute_query("CREATE EDGE knows FROM 1 TO 2") ``` In this example, we create a transaction that first creates a new vertex with a name property, then creates an edge between that vertex and an existing vertex with ID 2. ## Conclusion: In Parts 1 and 2 of this blog post, we've looked at how the Python driver in Apache AGE enables Python developers to interface with the database and create sophisticated graph applications. The driver offers a straightforward API for carrying out transactions, traversals, pattern matching, and queries. With the help of these features, programmers can create robust graph applications that take advantage of Apache AGE's scalability and performance.
huzaiifaaaa
1,443,198
Hello
A post by Dynato TIV
0
2023-04-21T07:22:47
https://dev.to/dynato_tiv_8266084c460654/hello-1e96
dynato_tiv_8266084c460654
1,443,228
Chart library design
Data visualization is a common task in modern software. Users can better understand large data sets...
0
2023-04-24T05:41:34
https://dev.to/titovmx/chart-library-design-5dal
frontend, programming, designsystem, tutorial
Data visualization is a common task in modern software. Users can better understand large data sets when they are represented visually and can get meaningful insights from them. There are a bunch of libraries that help to solve this task. They render different types of charts based on the data passed into them. Usually, chart libraries use D3.js under the hood. It's a library in charge of drawing particular chart primitives. D3.js can render data using SVG and Canvas, and later I will discuss the difference between these two approaches. In this article, I describe a solution to the problem of rendering different types of charts, and I will also rely on the low-level rendering provided by D3.js. It might be useful not only if you need to build such a library, but also as an exercise in preparing for a front-end engineer interview. ### Requirements First, let's define what exactly we want to build. We want to create a component that will render a chart of a certain type based on any data set. We also want to customize how it looks and to interact with the data. It means the component should do the following: 1. Take data and render charts of different types (e.g. bar, line, pie, dot plots). 2. Provide styling for different groups of data, e.g. different bars could be of different colors. 3. Change the data scale by zooming in and out. The default min/max and step values should be calculated based on the data set or provided by a chart user. 4. Show a popup with the details about the section of the chart on which it is clicked (x and y values). 5. Render title and legend. We also can mention some non-functional requirements: 1. Scalable, i.e. adding new types of charts and interactions should extend the provided API without modifying the implementation. It corresponds to [the open-closed principle (OCP)](https://en.wikipedia.org/wiki/Open%E2%80%93closed_principle). 2. Be performant, i.e. renders and interactions are fast. 3. Be responsive and rendered fine on various devices. 4. Be accessible. 
There could be other features and customizations; however, we will focus on providing these since they look the most crucial, and the final design is considered to be extendable. ### Models Now that we know what we want to build, let's move to the tech details. We will map some of the mentioned entities into models that we will use in our code. I use TypeScript syntax to describe it, but you can think about it as pseudocode. There are some basic models below. ```jsx type ChartModel = { title?: string; xScaleDomain?: ScaleDomain; yScaleDomain?: ScaleDomain; showX?: boolean; showY?: boolean; showPopup?: boolean; } type ChartType = 'bar' | 'line' | 'dot'; type ChartSeriesModel<TData> = { data: TData; type: ChartType; xAxis: string | ((data: TData) => Array<number | string>); yAxis: string | ((data: TData) => Array<number | string>); style?: StyleFn; } type LegendModel = { [key: string]: string; } type ScaleDomain = { min?: number; max?: number; step?: number; } ``` We have two basic models. One is for the entire chart and another is for the series that we want to draw there. All fields in `ChartModel` are optional; their purpose is to configure the features of the chart and customize the UI. The required fields `type`, `data`, `xAxis`, and `yAxis` in `ChartSeriesModel` are necessary to understand what the user wants to render. The most interesting question is what can be passed as data. There could be two approaches: 1. Force the user to prepare data of a certain format that our component will be working on. E.g., we can receive an array of objects, and if we know properties for the x and y axes we can group data by them and render these points (values). 2. Receive any data and accessor functions that will return sets of values for the x and y axes. 
Depending on a specific task it makes sense to choose one of the approaches to follow simpler implementation but I want to make this design as universal as possible so the model includes generic `TData` which could be anything and use `xAxis` and `yAxis` methods to process the data. We will need a couple more models for styling charts based on the data point, so we consider x and y coordinates and return a style object that contains only color for now but can be extended with more (e.g. thickness, label color and position, etc.). ```jsx type Point = { x: number; y: number; } type StyleObj = { color: string; } type StyleFn = (value: Point) => StyleObj; ``` ### Design Now we have our data models ready, let’s look at the component tree we are going to create for our chart library. ![Chart library component tree](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8baz3m1cdb3umusekbkd.png) The root component `Chart` is a wrapper for all other necessary parts: two axes components, legend, and set of series. Based on input it will choose what to render. The required part is `ChartSeries`. This component will choose the D3.js function based on `type`, pass the data and style function into it and render returned element. Basically, we have two layers here. The first one is a component layer whose purpose is to provide interactivity and re-render DOM elements on change of the data or settings. Also, it will listen to DOM events to render popups on click and update zooming on the scroll. The second layer is the drawing layer using D3.js. Functions written with D3.js will return elements for any particular `ChartSeriesModel.type` as well as for axes. D3.js allows drawing an axis by setting min/max values for data, and min/max values in pixels. It matches passed values, calculates equal ticks, and draws them with the axis line. In this design, `ChartSeries` is responsible for ingesting data and providing D3.js functions with scaling values. 
These settings can be manually defined if the chart component user sets their own `min`, `max`, and `step` with `ChartModel.scaleDomain` object. This design allows extending library functionality as much as we need. For instance, we want to add a crosshair that shows lines parallel to axes and coordinates under the cursor. It requires adding a new component within the chart, receiving its configuration, and rendering. New D3.js functions should be provided for adding a new type of chart. *Note: the interaction layer can be implemented with vanilla JS or any framework. Depending on the specific implementation the UI could be composable according to this framework's best practices. For example, React JSX showing bar chart with x axis and legend could look like this:* ```jsx <Chart model={chartModel}> <ChartSeries model={chartSeriesModel} /> <ChartAxis position="bottom" /> <Legend model={legendModel} /> </Chart> ``` ### API The API of components can be described after we defined them and the models earlier. The `Chart` component should take the `ChartModel` as input. From these configuration data, it gets the state with zoom and scale settings. ```jsx function Chart(input: { model: ChartModel; }): HTMLElement type ChartState = { zoom: number; // default value is 1 onZoomChange: () => void; xScale: d3.Scale; yScale: d3.Scale; } ``` The `zoom` setting will be used to tune the scale by user interaction with a chart, e.g. handling a scroll event. `xScale` and `yScale` could be either built from scale domain values defined by the user in `ChartModel` or calculated automatically based on the data provided for `ChartSeries` components. They are D3.js scales that can be of different types for different charts. Their main purpose is to map domain values to pixels on the screen. 
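To make the scale idea concrete, here is a minimal sketch of what a linear scale does conceptually. It is a simplified stand-in for `d3.scaleLinear`, not the actual D3.js implementation, and `makeLinearScale` is a hypothetical helper name:

```typescript
// Hypothetical helper: maps a value from the data domain onto a pixel range,
// which is conceptually what a linear D3.js scale does for the chart.
function makeLinearScale(
  domain: [number, number],
  range: [number, number]
): (value: number) => number {
  const [d0, d1] = domain;
  const [r0, r1] = range;
  // Linear interpolation: position of `value` in the domain, projected onto the range.
  return (value) => r0 + ((value - d0) / (d1 - d0)) * (r1 - r0);
}

// A data domain of 0..100 mapped onto a 500px-wide chart area:
const xScale = makeLinearScale([0, 100], [0, 500]);
```

With such a scale, `xScale(50)` lands in the middle of the 500px range, which is exactly how `ChartSeries` can turn datum values into on-screen coordinates.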
`ChartSeries` should take the following: ```jsx function ChartSeries<TData>(input: { model: ChartSeriesModel<TData>; xScale: d3.Scale; yScale: d3.Scale; }): HTMLElement type ChartSeriesState = { datum: Array<Point>; } ``` The `model` describes the raw data and the functions to access values for the x and y axes. `ChartSeries` should calculate an array of points and pass it into the D3.js function that will draw a particular chart of the passed `model.type`. The `datum` term is used to describe the input for a chart function. We also should pass `xScale` and `yScale` to map values into their positions on the chart. `ChartAxis` is similar for both axes, which are differentiated by the `position` value. ```jsx function ChartAxis(input: { position: 'top' | 'right' | 'bottom' | 'left'; scale: d3.Scale; }): SVGElement | HTMLCanvasElement ``` It also receives a `scale` to properly draw the axis and place ticks with domain values on it, and it should react to scale changes. The `position` just sets where we want to see the axis relative to the chart. ```jsx function Legend(input: { config: LegendModel; render?: () => HTMLElement }): HTMLElement ``` `Legend` could render either a default view that explains the different charts with their coloring (default or based on the `style` function provided for the `ChartSeries`) or a custom view provided with a render property. ### Evaluation Well, let's check that all functional requirements are covered at this point. We provided an opportunity to render different types of charts with a custom look by providing a style callback, and to react to user interactions such as zooming in/out and clicking to show popups. It is also able to render a title and a legend. The component tree structure allows combining different blocks and extending the functionality by adding new ones. New types of charts can be added by implementing new D3.js functions and including them in `ChartFnFactory`. A good performance can be achieved by smart strategies for rendering the data. 
Basically, usage of one of the frameworks will be beneficial since they care about it out of the box and try to minimize updates on the page. To help with that we could memoize the calculated datum, trigger axes updates only when scaling is changed, and avoid re-rendering of static parts while data is the same (e.g. legend). ### Optimizations There are a few more optimizations we have to discuss to achieve better performance, responsiveness, and accessibility. Since users could render popups over the `Chart`, these elements should have absolute positioning. Using the `translate` CSS function to move them horizontally and vertically will be better from a performance perspective. We would need height and width values in a few places to properly render axes and charts. E.g. bar height could be set up with D3.js in the following way: `.attr("height", function(d) { return height - yScale(d.value); });` where `height` is related to the chart. So the height and width should be obtained from the parent element where the chart is placed, and the chart should listen for their changes to be responsive. The next range of optimizations is about accessibility. We actually already have a title and legend that describe our chart. It would probably be a nice idea to make `title` required and additionally have `showTitle: boolean = true`. So even if a user switches it off we could add an `aria-label` for screen readers. Likewise, we want to add an `aria-label` for the legend element to make it clear. We should verify that the colors we use to draw charts by default contrast with the background. Additionally, we could take care of describing axes and values better. Axes should have labels and values for the ticks on them. Also, the data on the chart itself could be labeled to describe it for screen readers and better visual perception. ### SVG vs Canvas As was mentioned in the beginning, the D3.js library gives us a choice of how to render the data. 
Let's consider the difference between SVG and Canvas, and their pros and cons. SVG (scalable vector graphics) is an XML-based format that is similar to HTML and will be a part of the DOM tree with many shapes as children. The browser can access these shapes, bind event listeners, and manipulate and update them as with any other DOM node. Canvas gives an API to imperatively create paintings, which results in a single element being added to the HTML document. Due to this core difference, interaction with SVG is much simpler: e.g. a click event can be bound to a node and handled immediately, while canvas provides us only with coordinates (x, y) on its surface. Mapping coordinates to drawn elements requires additional code. One of the strategies is to put an invisible canvas behind the real one, where all elements are colored differently so that they can be mapped to the datum array. The downside of having many nodes in the DOM is that it requires more effort to render and update them in the browser. Here the usage of Canvas can be beneficial because it can quickly redraw the entire scene. SVG will provide better responsiveness: its nodes can change their dimensions like other DOM nodes depending on the provided styling, while Canvas again requires complete redrawing. Data binding is specific to the D3.js library. Using SVG, users map their data to shapes, so they are bound and can be automatically updated afterward. The data and their rendering are tightly coupled. It is different for Canvas, which gives an opportunity to draw either with a set of imperative commands or by creating a set of dummy elements and binding them to data. While data binding makes it possible to select, enter, and exit dummy elements like it is done with the SVG API, rendering is detached from this cycle. So the library user has to additionally care about how and when to redraw elements of a canvas scene. 
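The invisible-canvas strategy mentioned above relies on encoding each datum index as a unique color. A minimal sketch of that encoding follows; `indexToColor` and `colorToIndex` are hypothetical helpers, assuming indices fit into the 24 bits of an RGB triple:

```typescript
// Hypothetical helpers for hidden-canvas hit testing: each datum index is
// encoded as a unique RGB color and drawn on an off-screen canvas; on click,
// the pixel under the cursor is decoded back into the datum index.
type RGB = [number, number, number];

function indexToColor(index: number): RGB {
  // Split the index into three 8-bit channels (supports up to 2^24 shapes).
  return [(index >> 16) & 0xff, (index >> 8) & 0xff, index & 0xff];
}

function colorToIndex([r, g, b]: RGB): number {
  // Reassemble the channels into the original index.
  return (r << 16) | (g << 8) | b;
}
```

On a click, the component would read the pixel at the cursor position from the hidden canvas (e.g. via `getImageData`) and call `colorToIndex` to find the datum to show in the popup; anti-aliasing must be avoided on the hidden canvas so that the colors stay exact.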
To sum up, SVG makes it simpler to achieve the interactivity and responsiveness requirements and has the benefits of data binding in D3.js. Canvas can be more performant when a chart has a lot of nodes. For this design I would choose SVG, and consider Canvas only for specific cases that require rendering many nodes, e.g. let's stick with a threshold of 100 nodes.

### Conclusion

In this article, we discussed how an extendable chart library could be built, as well as the advantages and disadvantages of visualization with SVG and Canvas. I hope you find it useful, and I would be glad to hear your comments and ideas on how this solution could be improved.
titovmx
1,443,353
The Future of High Skilled Education and Training in Australia: Trends and Predictions
overview of The Future of High Skilled Education and Training in Australia: Trends and...
0
2023-04-21T10:45:42
https://dev.to/universitybure1/the-future-of-high-skilled-education-andtraining-in-australia-trends-and-predictions-4ho8
An overview of [The Future of High Skilled Education and Training in Australia: Trends and Predictions](https://universitybureau.com/blogs/high-skilled-education-and-training-australia):

- Increasing Demand for High Skilled Workers
- Growth in Vocational Education and Training (VET)
- Emergence of Online Learning
- Collaborative Industry-Academia Partnerships
- Increasing Use of Artificial Intelligence (AI)

Overall, the future of high skilled education and training in Australia looks promising. The government's investments in VET and the focus on emerging industries will help to ensure that Australians are equipped with the necessary skills to succeed in the job market.
universitybure1
1,443,433
Why Mobile App Development Services are Essential for Businesses in Today's Digital Landscape
As technology continues to advance, it's becoming increasingly important for businesses to have a...
0
2023-04-21T11:27:18
https://dev.to/ronasit/why-mobile-app-development-services-are-essential-for-businesses-in-todays-digital-landscape-10f8
webdev, mobile, startup
As technology continues to advance, it's becoming increasingly important for businesses to have a mobile app presence. In today's fast-paced world, customers don't want to wait or have to navigate a clunky website on their mobile device. Mobile apps provide a more streamlined and user-friendly experience, making it easier to engage and connect with your customers. If you're a business owner looking to expand your digital reach, investing in mobile app development services is a smart move. When it comes to mobile app development, there are a few key things to consider. Firstly, you need to determine who your target audience is and what their needs are. Are they primarily Android or iOS users? What kind of demographic are you targeting? Once you have a clear understanding of your audience, you can work with a mobile app development company, such as [Ronas IT](https://ronasit.com/), to create a customized app that meets your specific business needs. One of the biggest advantages of having a mobile app is the ability to provide personalized experiences to your customers. With the right data analytics and tracking tools, you can get a better idea of your customers' preferences and behavior, allowing you to tailor your marketing efforts to their needs. For example, you might send push notifications to users who have indicated interest in a particular product or service, or offer discounts to loyal customers who have made multiple purchases through your app. These kinds of personalized experiences create a stronger connection between your company and your customers, leading to increased loyalty and positive word-of-mouth marketing. Another key benefit of having a mobile app is the ability to increase brand visibility. With users spending an average of 3 hours per day on their mobile devices, having a presence on the app store can make it easier for potential customers to discover your brand. 
In addition to being a powerful marketing tool, a mobile app also provides a platform for customer support and engagement. By including features like chat support or a forum for customer feedback, you can create a more interactive and engaging experience for your users. Of course, building a mobile app isn't just about the technical aspects. It's also about creating a seamless user experience that aligns with your brand messaging and values. A [mobile app development](https://ronasit.com/services/mobile-app-development/) company like Ronas IT can work with you to design a user-friendly interface that reflects your brand's personality and voice. From selecting the right color scheme to creating custom graphics and animations, every aspect of your app should be designed with the user in mind. In conclusion, if you're looking to take your business to the next level, investing in mobile app development services is a smart move. By providing a tailored and personalized experience to your customers, you can increase engagement, build loyalty, and ultimately drive more revenue. To get started, consider reaching out to a reputable mobile app development company like Ronas IT. With the right expertise and guidance, you can turn your app idea into a reality and start reaping the benefits of a mobile-first strategy.
ronasit
1,443,797
What’s on your Whiteboard?
I like to use my whiteboard as a space to keep lasting tools, and to quickly diagram. What’s on...
0
2023-04-21T18:46:14
https://dev.to/wra-sol/whats-on-your-whiteboard-147o
discuss, workstations, productivity
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83ryf2fr88yhvyj70pka.jpeg) I like to use my whiteboard as a space to keep lasting tools, and to quickly diagram. What’s on your whiteboard?
wra-sol
1,443,892
DAY 25-PANDAS
import pandas as pd df =...
0
2023-04-21T20:50:05
https://dev.to/sandeepsamanth/day-25-pandas-14gj
```
import pandas as pd

df = pd.read_csv("https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv")
```

**DROP and SET INDEX**

```
df.drop('PassengerId', axis=1, inplace=True)  # permanently drops the PassengerId column
df.drop(3, inplace=True)                      # drops the row with index 3
df.set_index("Name", inplace=True)
df.reset_index()
```

```
d = {'key1': [3, 4, 5, 6, 7],
     'key2': [5, 6, 7, 8, 6],
     'key3': [4, 5, 6, 7, 8]}
pd.DataFrame(d)
df1 = pd.read_csv('taxonomy.csv')
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9vp3loygu2vw401jc371.png)

```
df1.dropna()
df1.dropna(inplace=True)
df1.dropna(axis=1)
df1.fillna("sudh")  # fills missing values with "sudh"
df.reset_index(inplace=True)
g = df.groupby('Survived')
g.sum()
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/demdyw3ylms0znymlxhi.png)

```
g.mean()
g1 = df.groupby('Pclass')
g1.max().transpose()

# concatenate
df5 = df[['Name', 'Survived', 'Pclass']][0:5]
df6 = df[['Name', 'Survived', 'Pclass']][5:10]
pd.concat([df5, df6])
```

**MERGE**

```
data1 = pd.DataFrame({'key1': [1, 2, 4, 5, 6],
                      'key2': [4, 5, 6, 7, 8],
                      'key3': [3, 4, 5, 6, 6]})
data2 = pd.DataFrame({'key1': [1, 2, 45, 6, 67],
                      'key4': [56, 5, 6, 7, 8],
                      'key5': [3, 56, 5, 6, 6]})
pd.merge(data1, data2)
pd.merge(data1, data2, how='left')
pd.merge(data1, data2, how='right')
pd.merge(data1, data2, how='outer', on='key1')
pd.merge(data1, data2, how='cross')
```

**Similarly to merge, we have join**

```
# join works on indexes; the overlapping 'key1' column needs suffixes
data1.join(data2, how='right', lsuffix='_l', rsuffix='_r')
data1.join(data2, how='inner', lsuffix='_l', rsuffix='_r')
data1.join(data2, how='outer', lsuffix='_l', rsuffix='_r')
data1.join(data2, how='cross', lsuffix='_l', rsuffix='_r')

df['Fare_INR'] = df['Fare'].apply(lambda x: x * 80)
df['name_len'] = df['Name'].apply(len)
```
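The merge calls above differ only in the `how` argument. A minimal pure-Python sketch (a hypothetical `join` helper over toy rows, not pandas itself) of what the inner, left, and outer variants keep:

```python
def join(left, right, key, how="inner"):
    """Toy single-key join illustrating pandas merge's `how` argument."""
    right_by_key = {}
    for row in right:
        right_by_key.setdefault(row[key], []).append(row)
    out = []
    for row in left:
        matches = right_by_key.get(row[key], [])
        for match in matches:
            out.append({**row, **match})   # combine matching rows
        if not matches and how in ("left", "outer"):
            out.append(dict(row))          # keep unmatched left rows
    if how in ("right", "outer"):
        left_keys = {row[key] for row in left}
        out.extend(dict(row) for row in right if row[key] not in left_keys)
    return out

left = [{"key1": 1, "a": 10}, {"key1": 2, "a": 20}]
right = [{"key1": 1, "b": 30}, {"key1": 3, "b": 40}]
join(left, right, "key1")             # inner: only key1 == 1 survives
join(left, right, "key1", "outer")    # outer: keys 1, 2 and 3 all survive
```

The same intuition carries over to `pd.merge`: inner keeps only matching keys, left/right additionally keep unmatched rows from one side, and outer keeps everything.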
sandeepsamanth
1,443,971
6 free Digital Forensics courses offered by the Academia de Forense Digital
Welcome to the Academia de Forense Digital, where you will find a wide variety of courses in...
0
2023-04-23T23:09:16
https://guiadeti.com.br/cursos-gratuitos-em-forense-digital/
cursogratuito, cursosgratuitos, cybersecurity, forense
--- title: 6 free Digital Forensics courses offered by the Academia de Forense Digital published: true date: 2023-04-21 20:02:29 UTC tags: CursoGratuito,cursosgratuitos,cybersecurity,forense canonical_url: https://guiadeti.com.br/cursos-gratuitos-em-forense-digital/ ---

![Digital Forensics thumbnail - Guia de TI](https://guiadeti.com.br/wp-content/uploads/2023/04/Academia-de-Forense-Digital-1024x676.png "Digital Forensics thumbnail - Guia de TI")

Welcome to the Academia de Forense Digital, where you will find a wide variety of courses in digital investigation. The 6 free courses are Análise de Malware Starter (Malware Analysis Starter), Auditoria de T.I na Prática (IT Auditing in Practice), DFIR Starter, Fundamentos de [Cibersegurança](https://guiadeti.com.br/guia-tags/cursos-de-cybersecurity/) (Cybersecurity Fundamentals), Perito Forense Digital (Digital Forensics Expert), and Threat Intelligence Starter. If you are interested in entering the Digital Forensics field, now is the time! With these free trainings, you will have the opportunity to experience what the future has to offer. Don't waste any more time and enroll now in one of the courses below.

## Contents

<nav><ul> <li><a href="#cursos">Courses</a></li> <li><a href="#academia-de-forense-digital">Academia de Forense Digital</a></li> <li><a href="#inscricoes">Enrollment</a></li> <li><a href="#compartilhe">Share!</a></li> </ul></nav>

## Courses

- **Análise de Malware Starter** (Malware Analysis Starter)
  - Syllabus: introduction to the references; areas of work; how malware analysis can help me; ethics; introduction to malware analysis concepts; what malware is; how infection happens; types of malware; example of a DDoS attack; other types of malware; setting up a malware analysis lab; compromised environments; automated analysis; static malware analysis; dynamic malware analysis.
  - Prerequisite: basic knowledge of technology.
- **Auditoria de T.I na Prática** (IT Auditing in Practice)
  - Syllabus: introduction to IT auditing; introduction to IT infrastructure; introduction to information security; introduction to the [LGPD](https://guiadeti.com.br/guia-tags/cursos-de-lgpd/); introduction to ABNT IEC/ISO 27001; technology maturity analysis; applying ABNT IEC/ISO 27001; introduction to process mapping; analyzing the company's structure to create policies in compliance with the LGPD; reviewing the new technology maturity scenario after the project rollout; theoretical and practical exam based on a proposed case study.
  - Prerequisite: basic knowledge of technology.
- **DFIR Starter**
  - Syllabus: presentation; today's achievement; who I am; how I got here; why it is so hard to succeed in this field; about the Academia de Forense Digital; why DFIR; introduction to information security; introduction to the LGPD; what security incidents are; what vulnerabilities are; the vulnerability management lifecycle; the history of cybercrime; internal fraud; knowing the attacker; incident response methodologies and the main cyberattack methods; the detection and analysis stage; detection and analysis: prioritization; detection and analysis: stopping the bleeding and collecting evidence.
  - Prerequisite: basic knowledge of technology.
- **Fundamentos de Cibersegurança** (Cybersecurity Fundamentals)
  - Syllabus: the definition of cybersecurity; the essence of a hacker; the types of hackers; the hacker modus operandi; the zero-day attack; what malware is; the victim and the hacker; on cyber extortion; DDoS; social engineering; fooled by phishing; DNS poisoning; the job market; about pentesting; being an ethical hacker; types of pentest; understanding what a pentest is; understanding the DFIR landscape; written consent; gathering information about the target; Nikto; using NMAP; brute-forcing SQL Server passwords; using CrowBar; the pentest report; Q&A; closing.
  - Prerequisite: basic knowledge of technology.
- **Perito Forense Digital** (Digital Forensics Expert)
  - Syllabus: presentation; introduction to digital forensics; areas of work and the job market; main legislation and technical standards; main sources of digital evidence; demonstration of the computer forensics lab; demonstration of the specialized equipment; practical demonstration of a forensic copy of a computer; practical demonstration of a forensic copy of a smartphone; practical case study: forensics on a compromised computer.
  - Prerequisite: basic knowledge of technology.
- **Threat Intelligence Starter**
  - Syllabus: what the intelligence field is; what threat intelligence is; what the Cyber Kill Chain is; the intelligence lifecycle; main certifications; the job market.
  - Prerequisite: basic knowledge of technology.

## Academia de Forense Digital

The Academia de Forense Digital is an institution dedicated to offering courses and training in digital investigation. With a team of highly qualified professionals experienced in the field, the academy aims to give students the knowledge and skills needed to become Digital Forensics specialists.

The Academia de Forense Digital constantly updates its courses and study materials to keep up with the changes and advances in digital investigation, ensuring relevant, high-quality teaching. By offering both free and paid courses, the academy seeks to democratize access to Digital Forensics education and contribute to the development of this important field of knowledge.

## Enrollment

[Enroll here!](https://academiadeforensedigital.com.br/treinamentos-gratuitos/)

## Share!

Did you enjoy this overview of the Academia de Forense Digital courses? Then share it with everyone! The post [6 cursos gratuitos em Forense Digital oferecidos pela Academia de Forense Digital](https://guiadeti.com.br/cursos-gratuitos-em-forense-digital/) appeared first on [Guia de TI](https://guiadeti.com.br).
guiadeti
1,444,126
HI, How are you?
A post by PHY-EDUCATE
0
2023-04-22T02:56:30
https://dev.to/phyeducate/hi-how-are-you-162m
phyeducate
1,444,152
Master Project Management with Workarise's Gantt Chart
A Gantt chart is a tool used in project management to visually represent project schedules. It is a...
0
2023-04-24T14:43:48
https://dev.to/juandev2579/master-project-management-with-workarises-gantt-chart-oeb
A Gantt chart is a tool used in project management to visually represent project schedules. It is a type of bar chart that illustrates a project timeline by showing the start and end dates of each task, as well as the dependencies between them. The chart is named after Henry Gantt, an American engineer and management consultant who developed the technique in the early 1900s. The purpose of a Gantt chart is to help project managers and their teams to plan, schedule, and track their projects effectively. It allows them to see the big picture of the project, as well as the individual tasks and their interdependencies. This helps to ensure that the project is completed on time, within budget, and to the required quality standards. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pq4uoion8bjg9utj6nrz.png) The advantages of using a Gantt chart as a tool in Workarise are numerous. Here are some of the main benefits: **1. Better project planning:** With a Gantt chart, project managers can create a detailed project plan that includes all the necessary tasks, timelines, and resources. This helps to ensure that the project is properly scoped and planned from the outset. **2. Improved communication:** Gantt charts are an excellent communication tool for project teams, as they provide a visual representation of the project plan. This helps to ensure that everyone is on the same page and understands their roles and responsibilities. **3. Increased efficiency:** By using a Gantt chart, project managers can identify potential bottlenecks and resource constraints in advance. This allows them to make adjustments to the project plan before any issues arise, which can save time and money. **4. Better project tracking:** Gantt charts allow project managers to track the progress of each task and the project as a whole. This helps them to identify any issues or delays in real-time and take corrective action to keep the project on track. 
In conclusion, the Gantt chart is an essential tool for project management in Workarise. It helps project teams to plan, schedule, and track their projects effectively, which leads to better communication, increased efficiency, and improved project outcomes. With Workarise, you can have access to this powerful feature and many more, such as a calendar, task management, and project management tools, that will help you to be more productive and have a better work team.
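The core idea a Gantt chart visualizes (each task can start only after its dependencies finish) can be sketched as a tiny scheduler. The task names and durations below are hypothetical illustration data, not Workarise's API:

```python
def schedule(tasks):
    """tasks: {name: (duration, [dependency names])} -> {name: (start, end)}."""
    result = {}
    def resolve(name):
        if name not in result:
            duration, deps = tasks[name]
            # a task starts once its latest dependency has finished
            start = max((resolve(dep)[1] for dep in deps), default=0)
            result[name] = (start, start + duration)
        return result[name]
    for name in tasks:
        resolve(name)
    return result

plan = schedule({
    "design": (3, []),
    "build": (5, ["design"]),
    "test": (2, ["build"]),
})
# design runs days 0-3, build 3-8, test 8-10
```

Each `(start, end)` pair is exactly one bar on the chart, which is why the chart makes bottlenecks (long chains of dependent tasks) easy to spot.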
juandev2579
1,444,175
REST vs. GraphQL: Which API Approach is Right for You?
When it comes to building APIs, two of the most popular approaches are REST and GraphQL. Both REST...
0
2023-04-22T05:41:00
https://firstfinger.in/rest-vs-graphql/
programming, datastructure, api
--- title: REST vs. GraphQL: Which API Approach is Right for You? published: true tags: #Programming, #DataStructure, #API canonical_url: https://firstfinger.in/rest-vs-graphql/ ---

![REST vs. GraphQL: Which API Approach is Right for You?](https://firstfinger.in/content/images/2023/04/Thumbnail.jpg)

When it comes to building [APIs](https://firstfinger.in/api-vs-sdk-difference/), two of the most popular approaches are REST and GraphQL. Both REST and GraphQL are used to enable communication between different applications over the internet through APIs. But what are the key differences between the two, and which approach is right for your [project](https://firstfinger.in/file-transfer-project-javascript-html-css-firebase/)? Let's take a closer look.

![REST vs. GraphQL: Which API Approach is Right for You?](https://firstfinger.in/content/images/2023/04/mermaid-diagram-2023-04-21-170838-1.png)

## What is REST?

REST, or Representational State Transfer, is an architectural style for building APIs that relies on HTTP requests to interact with resources. In a RESTful API, resources are identified by unique URIs (Uniform Resource Identifiers), and clients can make requests to these URIs to retrieve or manipulate the resource.

![REST vs. GraphQL: Which API Approach is Right for You?](https://firstfinger.in/content/images/2023/04/mermaid-diagram-2023-04-21-190826.png)

The REST approach is based on a set of principles that define how the API should be designed. These principles include using standard [HTTP methods](https://firstfinger.in/http-2-vs-http-1-1/) (such as GET, POST, PUT, and DELETE) to perform operations on resources, using a uniform interface to interact with resources, and relying on stateless communication between client and server. REST is a popular approach for building APIs because it is simple, flexible, and widely supported by many [programming](https://firstfinger.in/tag/programming/) languages and frameworks.
However, one downside of REST is that clients must be specific in their requests and sift through all the data that's returned.

### Advantages of REST

- Easy to learn and understand
- Well-established and widely used
- Standardized, predictable interface
- Efficient [caching](https://dev.to/anurag_vishwakarma/why-redis-is-so-fast-it-will-blow-your-mind-2b6-temp-slug-6796485) mechanisms

### Disadvantages of REST

- Limited flexibility in [data structures](https://firstfinger.in/tag/data-structure/)
- Requires multiple requests to retrieve related data
- Can result in data overfetching
- Not suitable for complex queries

## What is GraphQL?

GraphQL, on the other hand, is a query language that allows clients to retrieve data from multiple data sources in a single API call. GraphQL was developed by Facebook and has gained popularity due to its ability to provide precise data retrieval.

![REST vs. GraphQL: Which API Approach is Right for You?](https://firstfinger.in/content/images/2023/04/mermaid-diagram-2023-04-22-104243-1.png)

With GraphQL, clients can specify exactly what data they need and receive only that data in response. This makes it a great [choice for complex applications](https://dev.to/anurag_vishwakarma/what-is-cloud-native-4npc) with multiple data sources, as it allows clients to retrieve only the necessary data in a single request. One key difference between GraphQL and REST is that GraphQL does not rely on URIs to identify resources. Instead, clients send queries to a single endpoint, and the server returns the requested data in JSON format.

### Advantages of GraphQL

- Flexible data structures
- Single request to retrieve all required data
- Efficient handling of complex queries
- Strongly typed schema for data validation

### Disadvantages of GraphQL

- Requires more effort to learn and understand
- Can suffer from performance issues due to complex queries
- Requires additional effort to implement caching

## REST vs. GraphQL

![REST vs.
GraphQL: Which API Approach is Right for You?](https://firstfinger.in/content/images/2023/04/mermaid-diagram-2023-04-21-172511.png)

In [GraphQL](https://graphql.org/?ref=firstfinger.in), there is a crucial element known as a schema, which serves as a blueprint defining all the potential data that clients can request through a service. A query, on the other hand, is a request for data that adheres to the structure laid out in the schema. When a query is submitted, a resolver is invoked to retrieve the requested data, which may entail gathering data from various sources and assembling it to match the query structure. Lastly, we have mutations, which are responsible for altering data on the server. In CRUD terminology, queries are analogous to "reads," while mutations take care of "creates," "updates," and "deletes."

![REST vs. GraphQL: Which API Approach is Right for You?](https://firstfinger.in/content/images/2023/04/mermaid-diagram-2023-04-21-192807.png)

Now, let's turn our attention to [REST](https://aws.amazon.com/what-is/restful-api/?ref=firstfinger.in). In REST, resources are the core building blocks, each with a unique identifier known as a URI. Clients may request a response by using HTTP methods such as GET, PUT, POST, and DELETE to access these resources. The server responds with a representation of the resource in JSON or XML format. REST APIs also allow clients to filter, sort, and paginate data using query parameters.

| Feature | REST APIs | GraphQL |
| --- | --- | --- |
| **Data Fetching** | Data is fetched using specific endpoints. | Data is fetched using a single endpoint, reducing the number of requests needed to retrieve data. |
| **Query Complexity** | Suited for simple queries and CRUD operations. | Suited for more complex queries with nested fields and multiple data sources. |
| **Response Size** | Response size can be larger due to the inclusion of unnecessary data. | Response size is smaller because clients can request only the data they need. |
| **Caching** | Can be easily cached using HTTP caching mechanisms. | Caching requires more effort due to the complexity of the queries. |
| **Schema Definition** | No formal schema definition is required. | A formal schema definition is required, providing more structure to the API. |
| **Versioning** | Requires versioning when making changes to the API. | Changes can be made without requiring versioning, reducing API maintenance overhead. |
| **Security** | Uses standard HTTP security mechanisms like SSL and OAuth. | Provides additional security features like query validation and authorization. |
| **Tooling** | Has a wide variety of tooling options available. | Has fewer tooling options available, but is gaining popularity and tooling support. |
| **Learning Curve** | Easy to learn and implement. | May have a steeper learning curve due to the formal schema definition and more complex queries. |
| **Compatibility** | Works well with existing API management tools. | Can be introduced on top of an existing REST API and can work with existing API management tools. |

While GraphQL and REST share certain similarities, they are better suited for different use cases. Both can construct APIs that let different applications communicate with one another over the internet. Additionally, both employ frameworks and libraries to handle network details, both can operate over HTTP (though GraphQL is protocol-agnostic), and both can handle JSON (JavaScript Object Notation). Despite these similarities, there are significant differences between the two technologies. When a REST API is queried, it returns the full data set for that resource. In contrast, GraphQL is a query language specification and a set of tools that enable clients to interact with a single endpoint.
REST APIs frequently require multiple requests to retrieve related data, while GraphQL can collect all of the data in a single request using a complex query that adheres to the schema. As a result, clients receive just what they requested, without any unnecessary over-fetching.

![REST vs. GraphQL: Which API Approach is Right for You?](https://firstfinger.in/content/images/2023/04/mermaid-diagram-2023-04-21-173010.png)

REST APIs are a familiar concept for most developers, while GraphQL may present a bit of a learning curve for some. REST APIs are particularly well-suited for applications that require simple CRUD operations. For example, an e-commerce website might use a REST API to enable customers to browse products, add items to their cart, and complete orders. In this case, the API would use HTTP methods such as GET, PUT, POST, and DELETE to manipulate data such as products, orders, and customer information.

### Architecture

REST follows a [client-server architecture](https://dev.to/anurag_vishwakarma/what-is-the-difference-between-forward-proxy-vs-reverse-proxy-27a2-temp-slug-7092852) where the client makes requests to the server, and the server responds with resources. REST uses HTTP verbs to perform CRUD operations on resources. GraphQL, on the other hand, follows a client-driven architecture where the client specifies the structure of the data they need, and the server responds with the requested data.

### Performance

RESTful APIs are known for their [high performance](https://firstfinger.in/optimise-laptop-for-maximum-performance/), as they use standard HTTP protocols for communication. RESTful APIs can also be easily cached, making them faster and more efficient. GraphQL, on the other hand, can suffer from performance issues due to the complexity of the queries. As the client can request any data they need, GraphQL queries can become very complex, resulting in slower response times.
### Query Complexity

In RESTful APIs, the client needs to make multiple requests to retrieve related data. This can result in complex client-side logic to manage the relationships between the resources. In GraphQL, the client specifies the required data, and the server returns only that data, including any related data. This simplifies the client-side logic and reduces the number of requests required.

### Data Over-fetching and Under-fetching

In RESTful APIs, the client retrieves an entire resource, even if they only need a part of it. This is known as data over-fetching and can result in unnecessary network usage and slower response times. In GraphQL, the client can request only the required data, which eliminates data over-fetching. However, if the client requests too little data, it can result in data under-fetching, which requires additional requests to retrieve the missing data.

### Caching

RESTful APIs are easily cached, as the standard HTTP protocols provide built-in caching mechanisms. This makes RESTful APIs faster and more efficient. GraphQL does not have built-in caching mechanisms, as the queries can be very complex and dynamic. Caching GraphQL queries requires more effort and may not be as efficient as caching RESTful APIs.

## Final Conclusion

GraphQL is better suited for applications that require more complex data requests. RESTful APIs are useful for applications that need to perform CRUD operations on resources, such as creating, reading, updating, and deleting data. What I mean by this is that GraphQL is particularly useful when dealing with nested fields or multiple data sources. For example, a company that provides a suite of financial planning tools for its clients might require data from multiple sources such as bank transactions, investment portfolios, and credit scores.
**With GraphQL, the company can build a single API endpoint that allows clients to query all of the data in a single request.** Clients can simply specify the data they need, and the server will use a set of resolvers to fetch the necessary data from each source and assemble it into a response that matches the query structure.

**REST and GraphQL can also work together**, as GraphQL does not dictate a specific application architecture. It can be introduced on top of an existing REST API and can work with existing API management tools. Both REST and GraphQL have their unique strengths and quirks, and understanding their similarities and differences will help you choose the right tool for the job.

## FAQs

#### Can you use both REST and GraphQL in the same application?

Yes, it is possible to use both REST and GraphQL in the same application, depending on the specific requirements of the application.

#### Is GraphQL faster than REST?

GraphQL can suffer from performance issues due to the complexity of the queries, while RESTful APIs are known for their high performance. However, GraphQL can be faster in some cases where the client needs to retrieve a large amount of related data in a single request.

#### Can GraphQL replace REST?

GraphQL and REST have different strengths and weaknesses, and the choice between them depends on the specific requirements of the application. It is not necessarily a case of one replacing the other.

#### Is GraphQL more secure than REST?
Both GraphQL and REST can be secured using standard security mechanisms, such as authentication and authorization. The security of the API depends on how it is implemented and configured.

#### What are some popular companies that use GraphQL?

Some popular companies that use GraphQL include Facebook, GitHub, and Shopify.
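The resolver flow the article describes, where the client names the fields it wants and the server assembles exactly those, can be sketched as follows (Python, with toy data sources and hypothetical field names rather than a real GraphQL implementation):

```python
# toy data sources standing in for separate backends
USERS = {1: {"name": "Ada"}}
POSTS = {1: [{"title": "Intro to GraphQL"}]}

# one resolver per field the schema exposes
RESOLVERS = {
    "name": lambda user_id: USERS[user_id]["name"],
    "posts": lambda user_id: POSTS.get(user_id, []),
}

def execute(requested_fields, user_id):
    """Resolve only the fields the client asked for: no over-fetching."""
    return {field: RESOLVERS[field](user_id) for field in requested_fields}

execute(["name"], 1)            # only the 'name' resolver runs
execute(["name", "posts"], 1)   # one request gathers data from both sources
```

Contrast this with a typical REST layout, where `/users/1` and `/users/1/posts` would be two round trips and each would return its full representation whether or not the client needed all of it.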
anurag_vishwakarma
1,444,313
Subtle rainbow gradient buttons
Some subtle rainbow colored buttons that reveals a nice gradient border &amp; overlay ✨ See the...
0
2023-04-22T09:09:03
https://dev.to/lukyvj/subtle-rainbow-gradient-buttons-4lp1
codepen, lukyvj, css, houdin
<p>Some subtle rainbow colored buttons that reveals a nice gradient border &amp; overlay ✨</p> <p>See the thread about it: <a href="https://twitter.com/LukyVJ/status/1646927645425164313" target="_blank">https://twitter.com/LukyVJ/status/1646927645425164313</a></p> {% codepen https://codepen.io/LukyVj/pen/bGmELYR %}
lukyvj
1,444,391
This week's API round-up: Movie Recommendations, Movie Reviews, and Movie Cast and Crew
As usual, we will introduce three new APIs to you this week. We hope that the APIs we have picked for...
0
2024-01-01T10:38:00
https://dev.to/worldindata/this-weeks-api-round-up-movie-recommendations-movie-reviews-and-movie-cast-and-crew-51df
api, moviedata, entertainment, movieapi
As usual, we will introduce three new APIs to you this week. We hope that the APIs we have picked for the weekly roundup will be helpful to you. We will explore the purpose, industry, and client types of these APIs. The complete details of the APIs can be found on [Worldindata's API marketplace](https://www.worldindata.com/). Let us get started now! ## The Movie DB movie recommendations API The Movie DB offers an API that provides movie recommendations to various types of clients. These clients include entertainment and movie platforms, media agents, tabloid press platforms, and more. [The API](https://www.worldindata.com/api/The-Movie-DB-movie-recommendations-api) is a powerful tool that enables these clients to offer their users personalized movie recommendations based on their preferences, ratings, and other relevant factors. This is particularly useful for entertainment and movie platforms that aim to provide a better user experience and increase engagement by offering tailored movie recommendations. The industry that benefits the most from The Movie DB's movie recommendations API is the entertainment and cinema industry. This industry includes movie theaters, streaming services, and other platforms that provide entertainment to their users. The API enables these platforms to offer more personalized recommendations to their users, which helps improve the user experience and increase user engagement. Additionally, the media and tabloid press industries also benefit from the API as they can use the recommendations to create engaging content related to movies and entertainment. The main purpose of The Movie DB's movie recommendations API is to provide a list of recommended movies for a particular movie. This is achieved by analyzing various factors such as user ratings, user preferences, and other relevant information. By providing a list of recommended movies, the API helps clients increase user engagement and provide a better user experience. 
Moreover, the API enables clients to offer more personalized movie recommendations to their users, which can help increase user retention and loyalty. Overall, The Movie DB's movie recommendations API is a valuable tool that benefits various industries and clients, and its importance is only set to grow in the future. > **Specs:** Format: JSON Method: GET Endpoint: /movie/{movie_id}/recommendations Filters: movie_id, api_key, language and page ## Movie Reviews API created by The Movie DB The Movie DB's [movie reviews API](https://www.worldindata.com/api/The-Movie-DB-movie-reviews-api) is a powerful tool that provides user reviews for movies. The main purpose of the API is to enable clients to get user reviews for a particular movie, which can be used to inform users and improve the overall user experience. The API analyzes user reviews from various sources, such as social media, review websites, and other platforms, to provide a comprehensive view of user sentiment towards a movie. The entertainment, cinema, media, and tabloid press industries benefit the most from The Movie DB's movie reviews API. These industries rely heavily on user reviews to inform their users and create engaging content related to movies and entertainment. The API provides a wealth of information that can be used to create compelling reviews, articles, and other types of content that inform and engage users. Moreover, the API can help these industries better understand user sentiment towards movies and use this information to inform their business decisions. Entertainment and movie platforms, media agents, tabloid press platforms, and similar clients use The Movie DB's movie reviews API. These clients rely on the API to provide user reviews for movies, which they can use to inform their users and improve the overall user experience. The API is particularly useful for entertainment and movie platforms that want to provide a more comprehensive view of user sentiment towards movies. 
Additionally, media agents and tabloid press platforms can use the API to create engaging content related to movies and entertainment, which can help increase user engagement and retention. Overall, The Movie DB's movie reviews API is a valuable tool for various industries and clients, and it plays a crucial role in informing and engaging users in the world of movies and entertainment. > **Specs:** Format: JSON Method: GET Endpoint: /movie/{movie_id}/reviews Filters: movie_id, api_key, language and page ## Movie Cast And Crew API by The Movie DB The Movie DB's [movie cast and crew API](https://www.worldindata.com/api/The-Movie-DB-movie-cast-and-crew-api) provides a comprehensive list of actors and crew members involved in a particular movie. This information is particularly useful for the entertainment, cinema, media, and tabloid press industries, which rely heavily on accurate and up-to-date information about movies and the people involved in their creation. The API provides a wealth of information about the cast and crew of movies, including their names, roles, biographical information, and more. Entertainment and movie platforms, media agents, tabloid press platforms, and similar clients use The Movie DB's movie cast and crew API. These clients rely on the API to provide accurate and up-to-date information about the cast and crew of movies, which they can use to inform their users and create engaging content related to movies and entertainment. The API is particularly useful for entertainment and movie platforms that want to provide comprehensive information about movies and the people involved in their creation. Additionally, media agents and tabloid press platforms can use the API to create compelling content related to movies and entertainment, which can help increase user engagement and retention. The main purpose of The Movie DB's movie cast and crew API is to provide a comprehensive list of actors and crew members involved in a particular movie. 
This information is particularly useful for clients who want to provide detailed information about movies and the people involved in their creation. By providing a complete list of actors and crew members, the API enables clients to inform their users about the people behind their favorite movies, and help them better appreciate the art and craft of filmmaking. Overall, The Movie DB's movie cast and crew API is a valuable tool for various industries and clients, and it plays a crucial role in providing accurate and up-to-date information about movies and the people involved in their creation. > **Specs:** Format: JSON Method: GET Endpoint: /movie/{movie_id}/credits Filters: movie_id, api_key and language
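As a quick sketch of how a client might assemble a request URL for the credits endpoint from the specs above — the base URL and API key are placeholders, not values from the article:

```javascript
// Hypothetical base URL; the article only specifies the endpoint path
// /movie/{movie_id}/credits and its filters (movie_id, api_key, language).
const BASE_URL = "https://api.example.com/3";

// Build a GET URL for the cast-and-crew endpoint from the documented filters.
function creditsUrl(movieId, apiKey, language = "en-US") {
  const url = new URL(`${BASE_URL}/movie/${movieId}/credits`);
  url.searchParams.set("api_key", apiKey);
  url.searchParams.set("language", language);
  return url.toString();
}

console.log(creditsUrl(550, "YOUR_KEY"));
// https://api.example.com/3/movie/550/credits?api_key=YOUR_KEY&language=en-US
```

The recommendations and reviews endpoints follow the same pattern, swapping `/credits` for `/recommendations` or `/reviews` and adding the `page` filter where supported.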
worldindata
1,444,664
Kuda bank transfer code
https://www.bankfanz.com/kuda-bank-transfer-code/ https://plaza.rakuten.co.jp/kudabank2023/diary/2023...
0
2023-04-22T17:29:42
https://dev.to/kudabank22/kuda-bank-transfer-code-39o4
[https://www.bankfanz.com/kuda-bank-transfer-code/](https://www.bankfanz.com/kuda-bank-transfer-code/#utm_source=backlinks&utm_medium=search&utm_campaign=darry+ring+us&utm_content=Michelle) https://plaza.rakuten.co.jp/kudabank2023/diary/202304220000/ https://peatix.com/user/16980478/ https://www.provenexpert.com/kuda-bank-transfer-code-20232/ https://www.producthunt.com/@marcusuche35427 https://www.divephotoguide.com/user/kudabanktransfer22/ https://vocal.media/authors/marcus-uchendu https://www.lifeofpix.com/photographers/kudabanktransfer22/ https://seedandspark.com/user/kuda-bank-transfer-code-2 https://rosalind.info/users/Kudabanktransfer22/ http://phillipsservices.net/UserProfile/tabid/43/userId/214961/Default.aspx https://app.roll20.net/users/11892996/kuda-bank-transfer-c https://speakerdeck.com/kudabanktransfer22 https://social.msdn.microsoft.com/Profile/Kuda%20bank%20transfer%20code https://camp-fire.jp/profile/Kudabanktransfer22 https://plazapublica.cdmx.gob.mx/profiles/Kudabank22/activity https://www.metal-archives.com/users/Kudabanktransfer22 https://storium.com/user/kudabanktransfer22 https://trabajo.merca20.com/author/kudabanktransfer22/ https://www.intensedebate.com/people/Kudabank22 https://www.mifare.net/support/forum/users/kudabanktransfer22/ https://pinshape.com/users/2619423-kuda22 http://foxsheets.com/UserProfile/tabid/57/userId/129064/Default.aspx https://www.kompasiana.com/kudabank22 https://leanin.org/circles/kuda-bank-transfer-code-22 https://www.credly.com/users/kuda-bank-transfer-code.a07edd6c/badges https://www.myminifactory.com/users/Kudabanktransfer22 https://www.sqlservercentral.com/forums/user/kudabanktransfer22 https://www.longisland.com/profile/Kudabanktransfer22 https://guides.co/a/marcus-uchendu https://myanimelist.net/profile/Kudabank22
kudabank22
1,444,714
Dash: A Revolutionary Hashing Technique for Efficient Data Storage on Persistent Memory
Dash is a novel hash table design that was introduced in a paper published in the Proceedings of the VLDB Endowment (PVLDB) in 2020...
0
2023-04-22T19:51:32
https://dev.to/rainleander/dash-a-revolutionary-hashing-technique-for-efficient-data-storage-on-persistent-memory-575p
database, memory
Dash is a novel hash table design that was introduced in a [paper](https://www.vldb.org/pvldb/vol13/p1147-lu.pdf) published in the Proceedings of the VLDB Endowment (PVLDB, Vol. 13, 2020). The paper was authored by Baotong Lu, Xiangpeng Hao, Tianzheng Wang, and Eric Lo. Dash was designed to address the scalability and performance issues of existing hash table designs. In particular, the authors aimed to improve the performance of hash tables on modern hardware with large memories, multiple CPUs, and non-volatile memory. The Dash hash table design is based on the concept of "split-ordered lists". In traditional hash table designs, collisions between keys are resolved by storing them in a linked list. However, linked lists suffer from poor cache performance, especially when the number of collisions is high. Dash addresses this issue by splitting each linked list into several smaller lists, each of which is ordered by the hash value of its keys. This allows for better cache utilization, as well as more efficient search and insertion operations. One of the unique features of Dash is its ability to grow and shrink dynamically, without requiring expensive rehashing operations. When the hash table grows, new split-ordered lists are added to accommodate the additional keys. When the hash table shrinks, split-ordered lists are merged to free up memory. This makes Dash well-suited for applications with dynamic workloads that require frequent resizing of the hash table. Dash also supports efficient scan operations, which are essential for tasks such as garbage collection and database backups. The authors achieved this by using a stateless scan operation that allows for traversing the hash table without modifying its state. Finally, Dash was designed to take advantage of persistent memory, which is a type of non-volatile memory that retains its contents even after a power loss. 
By using persistent memory, Dash can achieve faster recovery times and improved fault tolerance. Overall, Dash is a promising new hash table design that offers significant improvements in scalability, performance, and efficiency over existing designs. Its novel approach to splitting ordered lists and dynamic resizing, coupled with support for persistent memory, make it an ideal choice for modern hardware architectures.
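The list-splitting idea described above can be illustrated with a toy JavaScript sketch. This is a deliberately simplified conceptual model — the class name, sub-list count, and hash function are made up, and the real Dash design is considerably more involved:

```javascript
// Toy model: instead of one long chain per bucket, keep several small
// sub-lists, each holding the keys whose hash falls in its sub-range,
// so a lookup only scans one short list instead of the whole chain.
const SUBLISTS_PER_BUCKET = 4;

// Simple string hash, for illustration only.
function hash(key) {
  let h = 0;
  for (const c of key) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h;
}

class ToySplitBucket {
  constructor() {
    this.sublists = Array.from({ length: SUBLISTS_PER_BUCKET }, () => []);
  }
  sublistFor(key) {
    // Route each key to one short sub-list by its hash value.
    return this.sublists[hash(key) % SUBLISTS_PER_BUCKET];
  }
  set(key, value) {
    const list = this.sublistFor(key);
    const entry = list.find((e) => e.key === key);
    if (entry) entry.value = value;
    else list.push({ key, value });
  }
  get(key) {
    // Only one short sub-list is scanned per lookup.
    const entry = this.sublistFor(key).find((e) => e.key === key);
    return entry ? entry.value : undefined;
  }
}

const bucket = new ToySplitBucket();
bucket.set("alpha", 1);
bucket.set("beta", 2);
console.log(bucket.get("alpha")); // 1
```

The point of the sketch is the access pattern: shorter, hash-partitioned lists mean fewer entries touched per probe, which is where the cache-utilization benefit comes from.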
rainleander
1,444,729
Encryption
I'm sure you've heard this term before. Most likely in a spy movie, as the characters try to hack a...
0
2023-04-24T15:04:56
https://dev.to/benjaminklein99/encryption-5h62
I'm sure you've heard this term before. Most likely in a spy movie, as the characters try to hack a security system, or maybe in a history lesson about the famous German Enigma code. Many people know what encryption is, but few understand how encryption functions and its use in our day-to-day life. In this post, I will discuss the types of encryption, how they work, and their use in programming. ## What is encryption? For those who haven't heard this term before, encryption is the process of converting information or data into a secret code in an attempt to prevent unauthorized access. Remember when you were little, and adults would spell words out to each other to prevent you from understanding their conversation? Encryption can be thought of just like that, but instead of spelling words out, encryption uses a key. ## How does encryption work? Before understanding how encryption works, its keys need to be understood first. A key's purpose is to translate your data into code or your code into data. Keys use an algorithm that predictably transforms data. Say you want to create an encryption scheme that ciphers messages. Here's a simple example of how this can be achieved. JavaScript strings have a method (charCodeAt) that takes an index and returns a number representing the character at that index. Using this method, a message can be encrypted by iterating over an input string to find the next charCode of each character. To decrypt a message, the same method can be used to find the preceding charCode of each character. Let's look at an implementation below. 
``` const encryptKey = (str) => { // message represents the encrypted string that we will later output let message = ''; // iterate over the input string for (let i = 0; i < str.length; i++) { // nextLetter is the next character's charCode let nextLetter = String.fromCharCode((str[i]).charCodeAt(0) + 1); // add the next letter to the output message message += nextLetter; } // after the loop is finished, return the encrypted message return message; } const decryptKey = (str) => { // message represents the decrypted string that we will later output let message = ''; // iterate over the encrypted string for (let i = 0; i < str.length; i++) { // previousLetter is the preceding character's charCode let previousLetter = String.fromCharCode((str[i]).charCodeAt(0) - 1); // add the previous letter to the output message message += previousLetter; } // after the loop is finished, return the decrypted message return message; } let message = 'this message is classified'; let encryptedMessage = encryptKey(message); let decryptedMessage = decryptKey(encryptedMessage); console.log(message); // logs ==> this message is classified console.log(encryptedMessage); // logs ==> uijt!nfttbhf!jt!dmbttjgjfe console.log(decryptedMessage); // logs ==> this message is classified ``` ## Types of encryption Notice that the example above uses one function to encrypt a message and a separate function to decrypt it. When genuinely separate keys are used to encrypt and decrypt data, the encryption is referred to as asymmetric. Some schemes use the same key to encrypt and decrypt data; these are referred to as symmetric. (Strictly speaking, our shift cipher is symmetric: the decryption key is trivially derived from the encryption key.) Both types of encryption carry unique costs and benefits. Symmetric encryption is generally easier to crack, as the key needs to be passed to the recipient (usually over the internet), which runs the risk of interception and makes keeping the key secure substantially more difficult. 
Furthermore, if the key is deciphered, both the user and the business can fall victim to attacks using malicious data. Although less secure, symmetric encryption is easier to implement and performs more quickly. When using an asymmetric system, the decrypting key can be kept private, while the encrypting key can be made public for anyone to use, without the worry of your code being deciphered. Asymmetric encryption also allows recipients to authenticate incoming data, making certain it isn't from an unauthorized sender. ## How is encryption used? Encryption is used throughout the internet, from banking to retail to social media. It's understandable to be concerned when entering sensitive information into a website. No one wants their credit card number or banking information to be easily accessed. Yet, we often need to send them to online vendors to make purchases or pay bills. These are more evident cases that necessitate encryption, but there are also many less apparent cases. Having your favorite social media account hijacked isn't pleasant either. Whether the account is used for business or pleasure, disappointment is always assured. A cryptographer is a programmer who specializes in encrypting data. Cryptographers are some of the highest-paid developers, making an annual salary of $154,545 on average. Cybersecurity is a lucrative and high-demand field. Novice and expert developers alike should certainly consider this field if they can hack it. ## Work Cited - https://www.ziprecruiter.com/Salaries/Cryptography-Salary - https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/encryption/what-types-of-encryption-are-there/#:~:text=There%20are%20two%20types%20of,used%20for%20encryption%20and%20decryption. 
- https://www.techrepublic.com/article/asymmetric-vs-symmetric-encryption/#:~:text=Symmetric%20encryption%20can%20take%20128,RSA%202048%2Dbit%20or%20more.&text=Symmetric%20encryption%20is%20considered%20less,keys%20in%20encryption%20and%20decryption.
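As a footnote to the symmetric/asymmetric discussion above, here is a minimal sketch of a symmetric scheme, where one shared key both encrypts and decrypts. It uses XOR with a repeating key — a toy for illustration, not a production cipher:

```javascript
// Toy symmetric cipher: XOR each character code with a repeating key.
// Applying the same function with the same key twice restores the input,
// which is exactly the "one shared key" property of symmetric encryption.
// Not secure in practice -- for illustration only.
const xorCipher = (str, key) => {
  let out = '';
  for (let i = 0; i < str.length; i++) {
    out += String.fromCharCode(
      str.charCodeAt(i) ^ key.charCodeAt(i % key.length)
    );
  }
  return out;
};

const secret = xorCipher('this message is classified', 'k3y');
const restored = xorCipher(secret, 'k3y');
console.log(restored); // this message is classified
```

Contrast this with the shift-cipher example earlier in the post, which needed a separate decrypt function: here encryption and decryption are literally the same operation with the same key.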
benjaminklein99
1,444,762
How to install Stable Diffusion WebUI and ControlNet 1.1 on Ubuntu
In this tutorial, I will explain how to configure Ubuntu 22.04 to take advantage of an Nvidia GPU and...
0
2023-04-22T22:42:26
https://dev.to/felipelujan/set-up-stable-difussion-webui-and-controlnet-on-ubuntu-2204-hbm
stabledifussion, controlnet, setup, txt2img
In this tutorial, I will explain how to configure Ubuntu 22.04 to take advantage of an Nvidia GPU and run Stable Diffusion via Stable-diffusion-webui. This is a more permanent alternative, as Google might restrict the use of Colab for Stable Diffusion in the future. **Stable-diffusion-webui Prerequisites:** - Python 3.10.6 (Comes with Ubuntu 22.04). - Git (Comes with Ubuntu 22.04). - Python venv - Nvidia drivers Since I don't have a Linux machine at home, I will use a virtual machine on **Google Compute Engine**. If you're not using a VM and just want to know how to install Stable Diffusion, feel free to skip this section. # Creating the virtual machine. Sign in to Google Cloud and type Compute Engine in the search bar located at the top of your screen. Make sure that billing is enabled for your Cloud project. Enable the Compute Engine API if prompted. On the Compute Engine page, click Create an instance. In the Machine Configuration section, click on the GPUs tab, select NVIDIA T4, and set 1 as the number of GPUs. Under Machine type, select n1-standard-4. Click Change in the Boot Disk section to begin configuring your boot disk and select Ubuntu 22.04 LTS from the version list. ![Creating a VM for Stable Diffusion](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qn7njqt741fhuf8vsrmj.png) Configure any additional settings as needed, such as allowing HTTP traffic in the Firewall section. Click Create to create the VM. Once the VM is ready, click the SSH button to enter the command line terminal. 
![How to SSH into Ubuntu VM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ml0q41zzvnp2q47ebdmw.png) # Preparing Ubuntu 22.04 From this point on, make sure to run all the commands in the same directory: ## Update and upgrade system packages ``` sudo apt update && sudo apt upgrade -y ``` You might encounter a couple of purple screens like the following: ![How to update Ubuntu Kernel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n1uvmef2953pnobjo8v1.png) Hit enter without changing the defaults. ## Install Nvidia drivers. Run the following commands ``` sudo apt -y install nvidia-driver-525-server && sudo apt -y install nvidia-utils-525-server && sudo apt -y install nvidia-cuda-toolkit ``` ## Reboot ``` sudo reboot ``` ## Install Python-venv Once the machine is back up, install **Python3-venv** with the following command: ``` sudo apt install python3.10-venv -y ``` # Prerequisites check. To verify that the prerequisites are correctly installed and available on the local machine run: ``` python3 -V ``` Expected output: `Python 3.10.6` ``` nvidia-smi ``` Expected output (similar): ![nvidia-smi correctly installed](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kizezhr2hq3z7jopc2dy.png) Running the following command should not output any errors ``` python3 -c 'import venv' ``` # Installing and running Automatic1111's Stable-Difussion-Webui. Run the following command (source). It might take a few minutes while it downloads and prepares the application components: ``` bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh) --listen --enable-insecure-extension-access ``` Note: The `--listen` and `--enable-insecure-extension-access` command line arguments allow you to access and install Stable Diffusion extensions remotely; if you are using a physical Ubuntu machine, feel free to remove them. 
Installation is complete when you see: **Running on local URL: http://0.0.0.0:7860** If you're following along in Google Cloud, grab the virtual machine's public IP from Google Compute Engine, paste it on your browser's address bar, followed by **:7860**, and press enter. On a physical Ubuntu machine, enter localhost:7860 in your web browser. ## Welcome to Automatic1111's Stable Diffusion Webui. Happy prompting! # Enable ControlNet With ControlNet, you have much more power regarding image-to-image generation. It enables things such as pose transfer and style transfer. ## Installing ControlNet. On Automatic1111's webui, go to the **Extensions** tab, hit **Available**, and then **Load From**. Find **sd-webui-controlnet** in the list that populates and click Install. Wait until processing finishes. Go back to the **Installed** tab right next to Available, and click **Apply and restart UI**. Wait a couple of minutes before refreshing the website. You should see the ControlNet section when the UI reloads. ![ControlNet extension installed on Stable Diffusion](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xdjasvpjfxhd9hvd0qbc.png) To complete the installation of ControlNet you need to download the models that go along with it. Start a new SSH session (or open a new terminal on a physical computer), and paste the following commands on the new command line terminal to download the ControlNet models from HuggingFace. ``` sudo apt-get install git-lfs git lfs install git clone https://huggingface.co/lllyasviel/ControlNet-v1-1 mv ControlNet-v1-1/* stable-diffusion-webui/models/ControlNet/ ``` If the models in the ControlNet section show up in the models dropdown, then you're set! 
![ControlNet models installed](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ap3jfcfefxbvyp9jtd4q.png) With that, we've finished the installation of stable-diffusion-webui and ControlNet version 1.1 Some good resources for finding the Stable Diffusion models and tutorials are: - Stable Diffusion Subreddit - Hugging Face Please let me know if you have any issues replicating this procedure. Cheers!
felipelujan
1,445,027
Passwords, passwords, passwords!!
This article was originally posted on Patreon and has been brought over here to get all of the...
0
2023-04-25T14:00:00
https://dev.to/grim/passwords-passwords-passwords-4ke0
passwords, pidgin, pidgin3, repost
> This article was originally posted on [Patreon](https://www.patreon.com/posts/passwords-76065457) and has been brought over here to get all of the Pidgin Development/History posts into one single place. We have quite the history when it comes to password storage... For many years we suggested either not storing passwords or protecting access to the file containing those passwords. You can find complete reasoning to that suggestion [here](https://developer.pidgin.im/wiki/PlainTextPasswords). Fast forward a few years and the general consensus changed, and storing a password behind a password was looked on as favorable, especially if you're using a strong password to protect the others. *There's of course more to this, but we're going to gloss over that in this post.* As such, we had a Google Summer of Code project in 2008 to add **Master Password Support**. This initial work added support for an Internal Keyring, GNOME Keyring, and KWallet. It was merged to the pidgin3 branch in January of 2009 and there it sat, stuck waiting for Pidgin 3 which, as you all know, still isn't released in 2022. But this didn't mean the API wasn't being maintained, it just couldn't be used by users. In August of 2012, support for [libsecret](https://gnome.pages.gitlab.gnome.org/libsecret/) was added and eventually moved to its asynchronous API in the Fall of 2016. In April of 2013, support was added for wincred which is a built-in password manager for Windows that's tied to your local login. In May of 2018 GNOME Keyring was dropped as it had been supporting libsecret for quite some time. Fast forward a bit more to October 2020. We knew for a while that the keyring API had some issues. It wasn't GObject based, which means you wouldn't be able to add support for another provider in anything but C/C++ which is something we're very actively trying to avoid. Then there was this weird migration API that never really made sense to me. 
Basically, when you switched keyrings in Pidgin, it would take all of the passwords you have stored, copy them to the new keyring, and then delete them from the original one. This was determined to be unexpected and unwanted behavior for end users. It was at this point we started designing a new API. This API would be composed of two parts, the CredentialManager and CredentialProviders. The CredentialManager does kind of what you'd expect after [last week's post](https://www.patreon.com/posts/from-sub-systems-75584736). It keeps track of all of the CredentialProviders as well as which one is currently active, but it adds a new layer on top as well. Rather than have everyone get the active provider from the CredentialManager and then use the CredentialProvider API on that provider, the CredentialManager provides a proxy to all of the CredentialProvider API and will do that work for you. While it's not a huge difference code wise as you can see below, it's still easier to deal with. ```c /* This example manually gets the active provider from the * manager and reads a password from it. */ PurpleCredentialManager *manager = NULL; PurpleCredentialProvider *active = NULL; manager = purple_credential_manager_get_default(); active = purple_credential_manager_get_active(manager); purple_credential_provider_read_password_async(active, account, callback, NULL); ``` ```c /* This example uses the proxy API built into the manager to * save some code. */ PurpleCredentialManager *manager = NULL; manager = purple_credential_manager_get_default(); purple_credential_manager_read_password_async(manager, account, callback, NULL); ``` As you can see, the differences are minor, but the proxy form is much easier to deal with. You might also notice that we're using an asynchronous API here. The Credential API only supports asynchronous methods. If there's a need in the future we may add synchronous methods. 
One of the other things we're pushing with all this new API and everything is that we're building things in a way that we can actually unit test them. So the Credential API has nearly 100% test coverage which is not something we've done very well in the past. With the API designed, written, and tested, we started porting the existing keyrings to the new API. We started with libsecret as its API was very similar to what we ended up designing. Next we did KWallet, which was an experience as I've never written anything with QT before. After that we went ahead and ported wincred as well. All that left was the internal keyring, which could store passwords in `accounts.xml` encrypted or in plain text. The internal keyring sat unported for a long time. This was mostly due to the fact that we didn't exactly want to build out the dynamic UI for it as it was based on PurpleRequestFields. Eventually we came to the realization that, while we may think we know what we're doing with cryptography, that doesn't mean that we're actually capable of securing our users' passwords. So we took a page from our previous stance on password management and decided we're just not going to provide our own CredentialProvider, and that users can choose to either not store passwords or use an established credential provider. We'll still provide the glue code to talk to the big providers, but we're not going to try and implement our own. Eventually we would go on and implement Keychain Access on macOS as well, which means that Pidgin 3 can and will support the native password managers on all of the major platforms. But of course, we designed this new API to be pluggable, and all of these providers are implemented as plugins. That means that anyone can write additional CredentialProviders for whatever service they would like! Of course this means it's up to the user to choose which to use, and below you can see a screenshot of the current preferences dialog in Pidgin 3. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/enlcb9dxgvcxzyrsza4m.png) And that's how we're handling password storage in Pidgin 3. When an account tries to connect, it'll ask the CredentialProvider for the password and if it doesn't exist, you'll be prompted for it. If you check the "remember password" checkbox on that dialog, the password will then be stored in the CredentialProvider and you'll never need to type it again and it will actually be stored securely! I hope you're enjoying these posts! Remember they go live for patrons at 9AM CST on Mondays and go public at 12AM CST on Thursdays! If there's something specific you'd like to see me cover here, please comment below!
grim
1,445,036
ageing in mice
Creation of a mouse ageing model that develops severe adipsia with age (failures in cellular osmotic pressure and Na+ and Cl- ion regulation)...
0
2023-04-23T06:00:11
https://dev.to/erwansimon15/ageing-in-mice-4fpl
codepen
<p>Creation of a mouse ageing model that develops severe adipsia with age (failures in cellular osmotic pressure and in the regulation of Na+ and Cl- ions)</p> https://codepen.io/collection/VYxOvy
erwansimon15
1,445,165
Testing on Kotlin Multiplatform and a Strategy to Speed Up Development Time (2023 Update)
This article was originally posted on my blog; for a better viewing experience, try it out there. The...
0
2023-04-23T09:09:39
https://akjaw.com/testing-on-kotlin-multiplatform-and-strategy-to-speed-up-development/
testing, multiplatform, kotlin
--- title: Testing on Kotlin Multiplatform and a Strategy to Speed Up Development Time (2023 Update) published: true date: 2023-04-15 06:00:00 UTC tags: Testing,Multiplatform,Kotlin canonical_url: https://akjaw.com/testing-on-kotlin-multiplatform-and-strategy-to-speed-up-development/ --- ![Testing on Kotlin Multiplatform and a Strategy to Speed Up Development Time (2023 Update)](https://akjaw.com/content/images/2023/04/kotlin-multiplatform-testing-updated-4.png) This article was originally posted on [my blog](https://akjaw.com/testing-on-kotlin-multiplatform-and-strategy-to-speed-up-development/); for a better viewing experience, try it out there. The main focus of Kotlin Multiplatform (KMP) is to avoid duplicating domain logic on different platforms. You write it once and reuse it on different targets. If the shared code is broken, then all its platforms will work incorrectly, and in my opinion, the best way to ensure that something works correctly is to write [tests covering all edge cases](https://akjaw.com/5-beginner-testing-mistakes-i-noticed-while-working-with-less-experienced-developers/). **Because the KMP code base is the heart of multiple platforms, it's important that it contains as few bugs as possible.** In this article, I'll share my experience with writing tests for Kotlin Multiplatform, along with my strategy for speeding up development using tests. At [FootballCo](https://www.footballco.com/?ref=akjaw.com) we use this strategy for our app, and we see that it helps our development cycle. Even though the article focuses on Kotlin Multiplatform, a lot of the principles can also be applied to plain Kotlin applications or any other type of application, for that matter. Before starting, let me ask this question: ### Why bother with testing at all? Some engineers see tests as a waste of time; let me give some examples to change their mind. - **Tests provide a fast feedback loop**, after a couple of seconds you know if something works, or it doesn't. 
Verification through the UI requires building the whole app, navigating to the correct screen and performing the action, which, as you can imagine, takes a lot more time. - **It's easier to catch edge cases looking at the code** , a lot of the time the UI might not reflect all possible cases that might happen. - Sometimes it's hard to set-up the app in the correct state for testing (e.g. network timeout, receiving a socket). It might be possible, but **setting up the correct state in tests is much faster and easier**. - **A well written test suite is a safety net before the app is released**. With a good CI set-up, regressions / bugs don't even reach the main branch because they are caught on PRs. - **Tests are built-in documentation** , which needs to reflect the actual implementation of the app. If it isn't updated, then the tests will fail. ## The Kotlin Multiplatform testing ecosystem ### Testing Framework Compared to the JVM, the Kotlin Multiplatform ecosystem is still relatively young. JUnit can only be used on JVM platforms; other platforms depend on the Kotlin standard library testing framework. An alternative way of testing Kotlin Multiplatform code would be to use a different testing framework like [Kotest](https://kotest.io/docs/framework/framework.html?ref=akjaw.com). I don't have much experience using it, however I found it to be less reliable than writing tests using the standard testing framework (kotlin.test). For example: I was unable to run a single test case (a function) through the IDE. 
The standard testing library does lack some cool JUnit 5 features like parameterized tests or nesting, however it is possible to add them with some additional boilerplate: [Kotlin Multiplatform Parameterized Tests and Grouping Using The Standard Kotlin Testing Framework](https://akjaw.com/kotlin-multiplatform-parameterized-tests-and-grouping/) ### Assertions Kotest also has a great [assertion library](https://kotest.io/docs/assertions/assertions.html?ref=akjaw.com) in addition to the testing framework, which works flawlessly and can be used alongside the Kotlin standard library testing framework. Another library is [Atrium](https://github.com/robstoll/atrium?ref=akjaw.com), however it [doesn't support Kotlin / Native](https://github.com/robstoll/atrium/issues/450?ref=akjaw.com). ### Mocking For a long time, Kotlin Multiplatform did not have a mocking framework, things seem to have changed because [Mockk](https://github.com/mockk/mockk?ref=akjaw.com) now supports Kotlin Multiplatform. However, there still might be [issues](https://github.com/mockk/mockk/issues/58?ref=akjaw.com) on the Kotlin / Native side. An alternative to using a mocking framework is writing the mock or any other test double by hand, which will be explained in more detail in the next section. Mocking is prevalent in tests that treat a single class as a unit, so let's touch on what can be defined as a unit before diving into the Kotlin Multiplatform testing strategy. **The mock keyword is overloaded and misused in a lot of places. I won't get into the reasons why, but if you're interested in learning more about it, take a look at this article** [Mocks Aren’t Stubs](https://martinfowler.com/articles/mocksArentStubs.html) ## Definition of a Unit ### One unit, one class On Android, a unit is usually considered one class where all of its dependencies are **mocked** , probably using a framework like [Mockito](https://site.mockito.org/?ref=akjaw.com) or [Mockk](https://mockk.io/?ref=akjaw.com). 
These frameworks are really easy to use, however they can be easily abused, [which leads to brittle tests](https://akjaw.com/members/testing-mistakes-part-2/) that are coupled to the implementation details of the system under test. The upside is that these types of unit tests are the easiest to write and read (given that the amount of mock logic is not too high). Another benefit is that all the internal dependencies' APIs (class names, function signatures, etc.) are more refined because they are used inside the tests (e.g. for setting up or verification) through **mocks**. The downside is that these mocks often make refactoring harder, since changing implementation details (for example, extracting a class) will most likely break the test (because the extracted class needs to be mocked), even though the behavior of the feature did not change. These types of tests work in isolation, verifying only that one unit (in this case, a class) works correctly. In order to verify that a group of units behave correctly together, there is a need for additional integration tests. ### One unit, multiple classes An alternative way of thinking about a unit could be a cohesive group of classes for a given feature. These tests try to use real dependencies instead of **mocks** , however _awkward, complex_ or _boundary dependencies_ (e.g. network, persistence, etc.) are still replaced with a test double (usually written by hand instead of **mocked** by a framework). The most frequent test doubles are Fakes, which resemble the real implementation but in a simpler form to allow testing (e.g. replacing a real database with an in-memory one). There are also Stubs, which are set up before the [action (AAA)](https://akjaw.com/5-beginner-testing-mistakes-i-noticed-while-working-with-less-experienced-developers/), allowing the system under test to use predefined values (e.g. instead of returning system time, a predefined time value is returned). 
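The Fake / Stub distinction above can be sketched in a language-agnostic way (shown here in Python for brevity; the class names `StubClock` and `InMemoryUserRepository` are hypothetical examples, not code from the article's project):

```python
# A Stub: set up before the test's action, it returns a predefined value
# (e.g. a canned time instead of the real system time).
class StubClock:
    def __init__(self, fixed_time):
        self.fixed_time = fixed_time

    def now(self):
        return self.fixed_time  # always the canned value

# A Fake: a working but simplified implementation of the real dependency
# (e.g. an in-memory store replacing a real database).
class InMemoryUserRepository:
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def find(self, user_id):
        return self._users.get(user_id)

repo = InMemoryUserRepository()
repo.save(1, "alice")
print(repo.find(1))           # → alice
print(StubClock(1234).now())  # → 1234
```

The Fake actually behaves like the real thing in a simpler form, while the Stub only hands back whatever value the test configured.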
**Having a [modularized project](https://akjaw.com/modularizing-a-kotlin-multiplatform-mobile-project/) allows for creating a "core-test" module which contains all public Test Doubles. Thanks to this, they can be re-used across all other modules without having to duplicate implementations** ## My strategy for testing Kotlin Multiplatform Because mocking on Kotlin Multiplatform is far from perfect, I went the path of writing test doubles by hand. The problem with this is that if we wanted to write every Kotlin Multiplatform unit test like on Android (unit == class), we would be forced to create interfaces for every class along with a test double for each of them, which would add unnecessary complexity just for testing purposes. This is why I decided for the most part to treat a unit as a feature / behavior (a group of classes). This way, there are fewer test doubles involved, and the system is tested in a more "production"-like setting. Depending on the complexity, the tests might become integration tests rather than unit tests, but in the grand scheme of things it's not that important as long as the system is properly tested. ### The system under test Most of the time, the system under test would be the public domain class that the Kotlin Multiplatform module exposes, or maybe some other complex class which delegates to other classes. If we had a feature that allowed the user to input a keyword and get a search result based on that keyword, the Contract / API could have the following signature: ``` kotlin fun performSearch(input: String): List<String> ``` This could be a function of an interface, a use case or anything else; the point is that this class has some complex logic. Tests for this feature could look like this: ``` kotlin class SuccesfulSearchTest class NetworkErrorSearchTest class InvalidKeywordSearchTest ``` Each test class exercises a different path that the system could take. In this case, one for a happy path, and two for unhappy paths. 
They could only focus on the domain layer where the network API is faked, or they could also include the data layer where the real network layer is used but mocked somehow (e.g. [Ktor MockEngine](https://akjaw.com/using-ktor-client-mock-engine-for-integration-and-ui-tests/), [SQLDelight In-Memory database](https://akjaw.com/kotlin-multiplatform-testing-sqldelight-integration-ios-android/), [GraphQL Mock Interceptor](https://akjaw.com/using-apollo-kotlin-data-builders-for-testing/)). The keyword validation might contain a lot of edge cases which may be hard to test through the **InvalidKeywordSearchTest** , which could only focus on the domain aspects of what happens on invalid keywords. All the edge cases could be tested in a separate class: ``` kotlin class KeywordValidatorTest { fun `"ke" is invalid`() fun `"key" is valid`() fun `" ke " is invalid`() fun `" key " is valid`() } ``` The example above is pretty simple, however, testing the KMP "public Contract / API" is a good start. **For complex logic that does not involve orchestrating other classes (e.g. conversions, calculations, validation), try to extract a separate class which can be tested in isolation. This way you have one granular test which covers all the complex edge cases while keeping the "integration" tests simpler by only caring about one or two cases for the complex logic (making sure it is called)** ### Test set-up with Object mothers Because this strategy might involve creating multiple test classes, the system under test and its dependencies need to be created multiple times. Repeating the same set-up boilerplate is tedious and hard to maintain because a single change has to be repeated in multiple places. 
To keep things DRY, [object mothers](https://akjaw.com/members/testing-mistakes-part-4/) can be created, removing boilerplate and making the test set-up simpler: ``` kotlin class SuccesfulSearchTest { private lateinit var api: Api private lateinit var sut: SearchEngine @BeforeTest fun setUp() { api = FakeApi() sut = createSearchEngine(api) } // ... } fun createSearchEngine( api: Api, keywordValidator: KeywordValidator = KeywordValidator() ) = SearchEngine(api, keywordValidator) ``` The top level function, createSearchEngine, can be used in all the _SearchTests_ for creating the system under test. An added bonus of such object mothers is that irrelevant implementation details like the _KeywordValidator_ are hidden inside the test class. **Such object mothers can also be used for creating complex data structures like REST schemas or GraphQL queries** ### Test set-up with Koin Another way to achieve the set-up would be to use dependency injection; luckily, [Koin](https://insert-koin.io/?ref=akjaw.com) allows for easy [test integrations](https://insert-koin.io/docs/reference/koin-test/testing/?ref=akjaw.com), which more or less comes down to this: ``` kotlin class SuccesfulSearchTest : KoinTest { private val sut: SearchEngine by inject() @BeforeTest fun setUp() { startKoin { modules(systemUnderTestModule, testDoubleModule) } } @AfterTest fun teardown() { stopKoin() } // ... } ``` The test needs a Koin module which will provide all the needed dependencies. If the Kotlin Multiplatform code base is [modularized](https://akjaw.com/modularizing-a-kotlin-multiplatform-mobile-project/), the **systemUnderTestModule** could be the public Koin module that is attached to the dependency graph (e.g. 
[module](https://github.com/AKJAW/KMM-Modularization/blob/main/kmm/todos/todos-count-dependency/src/commonMain/kotlin/co/touchlab/kmm/todos/count/dependency/composition/todoCountDependencyModule.kt?ref=akjaw.com), [dependency graph](https://github.com/AKJAW/KMM-Modularization/blob/main/kmm/shared/src/commonMain/kotlin/co/touchlab/kampkit/Koin.kt?ref=akjaw.com)). An example test suite which uses Koin for test set-up can be found in my **[ktor-mock-tests](https://github.com/AKJAW/ktor-mock-tests/tree/main/app/src/test/java/com/akjaw/ktor/mock/tests?ref=akjaw.com)** repository. ### Contract tests for Test Doubles When creating test doubles, there might be a point when they start becoming complex, just because the production code they are replacing is also complex. Writing tests for test helpers might seem unnecessary, however, how would you otherwise prove that a test double behaves like the production counterpart? Contract tests serve that exact purpose, they verify that multiple implementations behave in the same way (that their contract is preserved). For example, the system under test uses a database to persist its data, using a real database for every test will make the tests run a lot longer. To help with this, a fake database could be written to make the tests faster. This would result in the real database being used only in one test class and the fake one in all other cases. 
Let's say that the real database has the following rules (contract): - adding a new item updates a "reactive stream" - new items cannot overwrite an existing item if their id is the same The contract base test could look like this: ``` kotlin abstract class DatabaseContractTest { abstract var sut: Dao @Test fun `New items are correctly added`() { val item = Item(1, "name") sut.addItem(item) sut.items shouldContain item } @Test fun `Items with the same id are not overwritten`() { val existingItem = Item(1, "name") sut.addItem(existingItem) val newItem = Item(1, "new item") sut.addItem(newItem) assertSoftly { sut.items shouldNotContain newItem sut.items shouldContain existingItem } } } ``` The base class contains the tests which will be run on the implementations (real and fake database): ``` class SqlDelightDatabaseContractTest : DatabaseContractTest() { override var sut: Dao = createRealDatabase() } class FakeDatabaseContractTest : DatabaseContractTest() { override var sut: Dao = createFakeDatabase() } ``` This is just a trivial example to show a glimpse of what can be done with contract tests. If you'd like to learn more about this, feel free to check out these resources: [ContractTest](https://martinfowler.com/bliki/ContractTest.html), [Outside-In TDD - Search Functionality 6 (The Contract Tests)](https://www.youtube.com/watch?v=S3qItwfhtCw) Big shout out to [Jov Mit](https://jovmit.io/) for creating so much Android-related testing content which inspired this testing strategy (If you're interested in Test Driven Development, be sure to check out his insightful [screencast series on YouTube](https://www.youtube.com/channel/UC71omjio31Esx7LytaZ2ytA)). ## Benefits of the Strategy ### Development speed The strategy I'm proposing verifies the KMM feature / module correctness at a larger scale instead of focusing on verifying individual classes. 
This more closely resembles how the code behaves in production, which gives us more confidence that the feature will work correctly in the application. This in turn means that there is less need to actually open up the application every time. Building applications using Kotlin Multiplatform usually takes longer than their fully native counterparts. The Android app can be built relatively fast thanks to incremental compilation on the JVM, however for iOS the story is different. Kotlin / Native compilation in itself is pretty fast; the issue arises when creating the Objective-C binary, where the Gradle tasks _linkDebugFrameworkIos_ and _linkReleaseFrameworkIos_ are called. Luckily, tests avoid that because they only compile Kotlin / Native without creating the Objective-C binary. Ignoring the build speed issues, let's say that the build didn't take longer. Building the whole application means building all of its parts. But when we work on a feature, we typically only want to focus on and verify a small portion of the entire application. **Tests allow just that, verifying only a portion of the app without needing to build everything**. When we're finished working on a feature, we can plug the code into the application and verify that it correctly integrates with other parts of the application. ### Test function / test case names Because these tests focus more on the end result of the feature rather than on implementation details of a single class, the test function names reflect the behavior of the feature. A lot of the time, this behavior also represents the business requirements of the system. ### Refactoring With this testing strategy, refactoring is easier because the tests don't dive into the implementation details of the system under test (like mocks tend to do). They only focus on the end result; as long as the behavior remains the same\*, the tests don't care how it was achieved. \* And it should be the same, since that's what refactoring is all about. 
### ~~Kotlin Multiplatform threading~~ The [new memory model](https://kotlinlang.org/docs/whatsnew16.html?ref=akjaw.com#preview-of-the-new-memory-manager) is the default now, so there are no limitations like in the old memory model. To read more about the old one, you can visit the [previous version of this article](https://akjaw.com/old/old-kotlin-multiplatform-parameterized-tests-and-grouping/#kotlin-multiplatform-threading) which covers that. ### Test double reusability The last thing I want to touch on is test double reusability. To keep the code DRY, the test doubles could also be moved to a common testing module, which makes them easy to reuse. For example, the data layer test doubles (e.g. network or persistence) can often be reused for UI tests. An example of this can be found in my [Ktor Mock Engine article](https://akjaw.com/using-ktor-client-mock-engine-for-integration-and-ui-tests/), where the integration tests and UI tests use the same engine for returning predefined data (not strictly a test double, but you get the idea). The [repository](https://github.com/AKJAW/ktor-mock-tests?ref=akjaw.com) in the article is Android-only, but it can easily be applied to Kotlin Multiplatform since [Ktor](https://ktor.io/docs/http-client-multiplatform.html?ref=akjaw.com) has great support for it. ## Downsides of the Strategy ### Test speed\* The first thing I want to address is the test speed because no one wants to wait too long for the tests to complete. Tests which treat a unit as a class are super fast, but only when they use normal test doubles. Mocking frameworks with all their magic take up a lot of time and make the tests much slower compared to a test double written by hand. The test strategy I'm proposing does not use any mocking framework, only test doubles written by hand. However, the tests might use a lot more production code in a single test case, which does take more time. 
From my experience working with these types of tests on Kotlin Multiplatform, I didn't see anything worrying about the test speed (besides Kotlin / Native taking longer). Additionally, if the KMM code base is modularized then only the tests from a given module are executed, which is a much smaller portion of the code base. \* Test speed is a subjective topic, where everyone has a different opinion on it. [Martin Fowler](https://www.martinfowler.com/?ref=akjaw.com) has an interesting article which touches on this topic: [bliki: UnitTest](https://martinfowler.com/bliki/UnitTest.html) ### Test readability As I said before, these tests tend to be more on the integration side than the unit side (depending on how you define it). This means that more dependencies are involved, and more set-up is required. To combat this, I recommend splitting the tests into [multiple files,](https://akjaw.com/kotlin-multiplatform-parameterized-tests-and-grouping/) each focusing on a distinctive part of the behavior. Along with [object mothers,](https://akjaw.com/members/testing-mistakes-part-4/) the set-up boilerplate can be reduced and implementation details hidden. To understand these tests, more internal knowledge about the system under test is required. This is a double-edged sword: the tests are not as easy to understand, but once you understand them you'll most likely know how the system under test works along with its collaborators. ### Hard to define what should be tested Tests where the unit is a class are easy to write because they always focus on a single class. When a unit is a group of classes, it is hard to define what the group should be: how _deep_ should the test go? Unfortunately, there is no rule that works in every case. Every system is different, and has different business requirements. If you start noticing that the test class is becoming too big and too complex, this might be a sign that the test goes too deep. 
**There might be a feature which just keeps growing, and the test becomes really hard to understand. In such cases, it might be good to refactor the SUT and extract some cohesive group of logic to a separate class which can be tested more granularly. Then the original test uses a Test Double for the extracted class, making it easier to understand. This does require refactoring test code, but makes the Test Suite easier to understand** ## CI / CD Tests are useless when they are not executed, and it's not a good idea to rely on human memory for that. The best way is to integrate tests into your Continuous Integration, so they are executed more frequently. For example, tests could be run: - On every PR, making sure nothing broken is merged to the main branch. - Before starting the Release process. - Once a day - All of the above combined. **For Kotlin Multiplatform, it is important to execute tests for all targets. Sometimes everything works on the JVM, but Kotlin / Native fails (e.g. Regex, NSDate). So the best way of making sure every platform behaves in the same way is to run tests for all targets** ## Summary In my opinion, Kotlin Multiplatform should be the most heavily tested part of the whole application. It is used by multiple platforms, so it should be as bulletproof as possible. Writing tests during development can cut down on compilation time and give confidence that any future regressions (even 5 minutes later) will be caught by the test suite. I hope this article was informative for you, let me know what you think in the comments below!
akjaw
1,445,603
23April
A post by maysam22
0
2023-04-23T19:26:43
https://dev.to/maysam22/23april-58fo
maysam22
1,445,215
Use CSS to Set a Video as the Full Background of Your Website
If you want the video to play as the full background of your website, you can use CSS to position the...
16,475
2023-04-23T11:23:01
https://dev.to/sh20raj/how-to-use-css-to-set-a-video-as-the-full-background-of-your-website-d6i
html, javascript, css, video
If you want the video to play as the full background of your website, you can use CSS to position the video element behind the other content on your page. Here's an example CSS code that you can use: ```html <style> html, body { height: 100%; margin: 0; padding: 0; } #my-video { position: absolute; top: 0; left: 0; width: 100%; height: 100%; object-fit: cover; z-index: -1; } </style> ``` In this code, the HTML and body elements are set to have a height of 100% so that they fill the entire viewport. The video element is then positioned absolutely behind the other content on the page, with a width and height of 100% to make it fill the entire viewport. The "object-fit" property is set to "cover" so that the video maintains its aspect ratio while filling the entire viewport. Finally, the "z-index" property is set to -1 to ensure that the video is positioned behind the other content on the page. You can add this CSS code to your existing stylesheet or add it in a separate `<style>` tag in the head section of your HTML document. This also works with YouTube embed URLs. You can also use this trick with div elements while using any HTML5 Video Player Library. Examples :- - [https://videoplyr.sh20raj.repl.co/](https://videoplyr.sh20raj.repl.co/) - [https://driveplyr.sh20raj.repl.co/](https://driveplyr.sh20raj.repl.co/)
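For completeness, the CSS above assumes a video element with the id `my-video` somewhere in the body. A minimal markup fragment might look like this (the source URL `background.mp4` is a placeholder for your own video file):

```html
<body>
  <!-- The autoplaying, looping, muted background video targeted by #my-video -->
  <video id="my-video" autoplay loop muted playsinline>
    <source src="background.mp4" type="video/mp4">
  </video>
  <div class="content">
    <h1>Your page content goes on top of the video</h1>
  </div>
</body>
```

Note that most browsers only allow autoplay when the video is muted, which is why the `muted` attribute is included.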
sh20raj
1,445,275
WHAT is Apache Age?
Apache Age is an open-source, distributed graph database that is built on top of PostgreSQL. It...
0
2023-04-23T12:43:25
https://dev.to/mahinash26/what-is-apache-age-523c
blog, apacheage, postgres, database
Apache Age is an open-source, distributed graph database that is built on top of PostgreSQL. It provides a native graph data store with support for graph queries, traversals, and analytics. Apache Age is designed to be easy to use, scalable, and performant. The main advantage of Apache Age is its ability to seamlessly integrate with existing PostgreSQL-based applications. This means that organizations can easily extend their existing database infrastructure to support graph data, without having to introduce new technologies or learn new programming paradigms. Apache Age uses a combination of table partitioning and indexing to achieve scalability and performance. The database is partitioned based on the graph topology, with each partition containing a subset of the graph data. This allows Apache Age to efficiently distribute queries and traversals across the cluster, while also minimizing the amount of data that needs to be scanned. In addition to its scalability and performance benefits, Apache Age also provides a rich set of graph analytics features. These include support for centrality measures, community detection, and graph embeddings. These features enable organizations to gain deep insights into their graph data, and to use these insights to drive business decisions. One of the key use cases for Apache Age is in social network analysis. Social networks are inherently graph-based, with nodes representing individuals and edges representing relationships between them. In conclusion, Apache Age is a powerful and flexible graph database that offers scalability, performance, and a rich set of analytics features. Its seamless integration with PostgreSQL makes it easy to adopt and use, and its support for graph queries and traversals makes it ideal for a wide range of applications. If you are looking to work with graph data, Apache Age is definitely worth considering.
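To give a feel for the analytics mentioned above, here is a minimal illustration of one centrality measure (degree centrality) computed over a tiny made-up social graph in plain Python. In Apache Age itself this kind of analysis would be expressed as a graph query over the stored data; the sketch below only shows the underlying idea:

```python
# A tiny social graph: nodes are people, edges are "knows" relationships.
edges = [
    ("alice", "bob"),
    ("alice", "carol"),
    ("bob", "carol"),
    ("carol", "dave"),
]

# Degree centrality: the fraction of other nodes each node is connected to.
nodes = sorted({n for edge in edges for n in edge})
degree = {n: 0 for n in nodes}
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

centrality = {n: round(degree[n] / (len(nodes) - 1), 2) for n in nodes}
print(centrality)
# → {'alice': 0.67, 'bob': 0.67, 'carol': 1.0, 'dave': 0.33}
```

Here carol has the highest centrality because she is connected to everyone else, which is the kind of insight (e.g. finding influential members of a social network) that graph analytics surfaces.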
mahinash26
1,445,406
How do I get the 1001st message?
As you can see, how do I get LinkedIn's 1001st and more search results. Who can help me?
0
2023-04-23T15:39:21
https://dev.to/__10d2242450977720b1c3/how-do-i-get-the-1001st-message-4ep8
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/imsbx61nkyttq9ns94k0.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gelewiy5iqblbzi5yxzb.png) As you can see, how do I get LinkedIn's 1001st and subsequent search results? Who can help me?
__10d2242450977720b1c3
1,445,476
Distributing Task Queuing using Django & Celery
Have you ever had a backend system that needed to perform time-consuming tasks, such as sending...
0
2023-04-24T21:08:44
https://dev.to/mazenr/distributing-task-queuing-using-django-celery-ol7
Have you ever had a backend system that needed to perform time-consuming tasks, such as sending emails, processing data, or generating reports? These tasks can take a long time to complete and tie up server resources, which can lead to slow response times and a poor user experience. This is where task queuing comes in. Task queuing is the practice of offloading time-consuming tasks to a separate system, where they can be executed asynchronously in the background. This frees up server resources and allows your web application or backend system to continue responding quickly to user requests. Celery is a popular Python library for task queuing that makes it easy to set up and manage a distributed task queue. It provides a simple yet powerful way to manage the execution of asynchronous tasks and can integrate with a wide variety of other Python libraries and frameworks. In this article, we'll discuss what Celery is and write a simple task queue system using Celery & Django. Let's jump into it! <p> </p> ## What is Task Queuing Task queuing is a concept that has been around for many years, and it has become an important technique for managing large-scale distributed systems. In a distributed system, there are many tasks that need to be performed, and these tasks can be time-consuming and require a lot of resources. By using task queuing, tasks can be distributed across multiple workers or servers, allowing them to be executed concurrently and efficiently. ![task queue](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v89ywc9v4x82d6ev6136.png) One of the benefits of using a task queuing system is that it can improve the performance and scalability of your system. For example, if you have a web application that needs to generate a report for each user, you can use a task queue to distribute the report generation tasks across multiple workers. This can significantly reduce the time it takes to generate reports and make your application more responsive. 
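The distribution idea described above can be sketched with Python's standard library alone: a shared queue feeding a small pool of worker threads (a toy model to show the concept, not how Celery is implemented):

```python
import queue
import threading

task_queue = queue.Queue()
results = []

def worker():
    # Each worker pulls tasks off the shared queue until it sees the sentinel.
    while True:
        task = task_queue.get()
        if task is None:
            break
        func, args = task
        results.append(func(*args))
        task_queue.task_done()

# Start a small pool of workers that consume tasks concurrently.
workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

# Enqueue some "report generation" tasks, one per user.
for user_id in range(5):
    task_queue.put((lambda uid: f"report-{uid}", (user_id,)))

# Signal shutdown and wait for the pool to drain.
for _ in workers:
    task_queue.put(None)
for w in workers:
    w.join()

print(sorted(results))
# → ['report-0', 'report-1', 'report-2', 'report-3', 'report-4']
```

The producer (the web request handler, in the report example) only enqueues work and returns immediately, while the pool processes the tasks in the background.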
Task queuing also provides a way to handle errors and retries. If a task fails, it can be retried automatically, ensuring that it eventually completes successfully. This can be particularly useful when working with unreliable resources or when performing complex calculations that may fail due to errors or resource constraints. <p> </p> ## What is Celery & How it Works Celery is a popular Python-based task queuing system that has been around since 2009. It is designed to be easy to use, flexible, and scalable, making it a popular choice for both small and large-scale applications. Celery works by using a combination of a message broker and a worker pool. The message broker is responsible for storing the tasks and messages that are sent between the workers, while the worker pool is responsible for executing the tasks. When you define a task in Celery, you create a Python function and decorate it with the @celery.task decorator. When this function is called, Celery adds the task to the message broker, which can then be picked up by a worker and executed. Celery supports a variety of message brokers, including RabbitMQ, Redis, and Amazon SQS, allowing you to choose the one that best suits your needs. Celery also provides a variety of features for monitoring and managing your tasks. For example, you can configure the maximum number of retries for a task, set a time limit for how long a task can run, and monitor the progress of your tasks using Celery's built-in monitoring tools or third-party tools like Flower. ![celery](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dnebqkuahl1dccm0sab3.png) By using Celery, you can greatly improve the performance and scalability of your Python applications and ensure that time-consuming tasks are executed efficiently and reliably in the background. <p> </p> ## Hands-on Task Queuing using Django & Celery In this section, we'll create a simple task queue using Django, Celery & RabbitMQ. 
Note that we need RabbitMQ as a broker to send tasks from Django to Celery. If you don't have RabbitMQ installed you can install it from [here](https://www.rabbitmq.com/download.html) and make sure that the RabbitMQ service is running. First, create an empty folder, create a virtual env named venv and activate it: `python -m venv venv` and then `venv\Scripts\activate` Note that you might need another command to activate the venv if you are using Mac. Next, install Django & Celery: `pip install django celery` After that, create a new Django project: `django-admin startproject main .` In the main directory of the main project create a file named celery.py and add the following code: **main/celery.py** ```python from __future__ import absolute_import, unicode_literals import os from celery import Celery os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings') app = Celery('main') app.config_from_object('django.conf:settings', namespace='CELERY') app.autodiscover_tasks() ``` The previous code sets up a Celery instance with configurations defined in Django settings file and enables Celery to discover and execute tasks defined in different Django apps: 1. Imports tools that allow importing of unicode literals in the code (`__future__` imports must be the first statements in a file). 2. Imports the os module which provides a way of using operating system dependent functionality like reading or writing to the file system. 3. Imports the Celery module. 4. `os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'main.settings')` This sets the `DJANGO_SETTINGS_MODULE` environment variable to `main.settings`, which is the location of the Django settings file. 5. Creates an instance of the Celery object. 6. Loads the Celery configuration from the Django settings file, specified by the argument. This is done to make sure that Celery settings are separate from other Django settings. 7. Discovers and imports all of the tasks defined in the tasks.py files within each installed Django app. 
It enables the Celery app to find and execute tasks defined in different Django apps.

Now let's create a new Django app inside the main project:

`python manage.py startapp app1`

Before we forget, let's register this app in the main Django project:

**main/settings.py**

```python
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'app1',
]
```

Inside the app directory, let's create a Python file tasks.py, which contains the tasks:

**app1/tasks.py**

```python
from __future__ import absolute_import, unicode_literals

from celery import shared_task


@shared_task
def add(x, y):
    return x + y
```

1. Imports `absolute_import` and `unicode_literals` from `__future__`.
2. Imports the shared_task decorator from the Celery module, which is used to create a task that can be executed asynchronously using Celery.
3. Creates a simple task function which adds 2 values. We use the `@shared_task` decorator to wrap this function and turn it into an async Celery task.

Finally, let's run the Celery worker:

Windows: `celery -A main worker -l info --pool=solo`

Mac/Linux: `celery -A main worker --loglevel=info`

You will see something similar to this:

![celery](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i7axmkkauyk10btblutd.png)

Now let's trigger the task from the Django shell. Open a new terminal, start the shell with `python manage.py shell`, and add the following code:

```python
from app1.tasks import add
add.delay(4, 4)
add.apply_async((3, 3), countdown=20)
```

1. We import the add function, which will send tasks to Celery via RabbitMQ.
2. We call `delay` to run the task with 2 values asynchronously.
3. We use `apply_async` with `countdown=20`, which waits 20 seconds before delivering the message to Celery.
If we take a look at the RabbitMQ interface, we'll see that it has received the 2 messages:

![rabbitmq](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m5wv2dadu8zgd13v5us1.png)

Here is our result:

![celery](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xq2lar890zaz787x8f8z.png)

You can find the code used [here](https://github.com/mazen-r/articles/tree/main/django-celery).

## Conclusion

In summary, Celery is a powerful and flexible task queuing system for Python that can help you manage and distribute tasks across multiple workers or servers. By using Celery, you can improve the performance and scalability of your applications, handle errors and retries, and monitor and manage your tasks with ease.
mazenr
1,445,499
ASQai - WE WANT YOU
AI Developer: An AI developer should have expertise in machine learning, deep learning,...
0
2023-04-23T17:58:37
https://dev.to/namenotavilable/asqai-we-want-you-4539
## AI Developer:
An AI developer should have expertise in machine learning, deep learning, and natural language processing (NLP). They will be responsible for developing the initial AI model used in the solution.

## Data Scientist:
A data scientist should have expertise in data analysis, statistics, and data visualization. They will be responsible for analyzing the data collected by the solution and identifying patterns and insights that can be used to improve the AI models.

## UX/UI Designer:
A UX/UI designer should have experience in designing user interfaces and user experiences for software applications. They will be responsible for designing the user interface for the solution, ensuring that it is intuitive and easy to use.

## Full-Stack Developer:
A full-stack developer should have experience in both front-end and back-end development. They will be responsible for developing the web interface for the solution, connecting it to the blockchain and AI components.

## DevOps Engineer:
A DevOps engineer should have experience in deploying and managing software applications in a cloud environment. They will be responsible for setting up the infrastructure for the solution, including servers and databases.

## Project Manager:
A project manager should have experience in managing software development projects, including planning, budgeting, and resource allocation. They will be responsible for coordinating the efforts of team members and ensuring that the project is completed on time and within budget.
namenotavilable
1,445,560
Master the Search Engines with 70+ ChatGPT SEO Prompts
With its human-like text generation capabilities, ChatGPT is an AI-powered language model created by...
0
2023-04-23T18:23:10
https://dev.to/killianellie1/master-the-search-engines-with-70-chatgpt-seo-prompts-3i9o
With its human-like text generation capabilities, ChatGPT is an AI-powered language model created by OpenAI that proves to be a valuable resource for optimizing website content and enhancing search engine rankings. In this article, we’ll take a closer look at ChatGPT prompts for SEO and their advantages for websites seeking to boost their online presence, investigating their inner workings and functionality. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3zw9hb29bqa2g2oiqlk9.png) ##What is a ChatGPT prompt? To generate text based on a given prompt, ChatGPT utilizes natural language inputs known as prompts. If provided with a prompt such as “Explain SEO as an SEO expert,” ChatGPT’s AI-powered model generates a human-like response that resembles the language and style of an SEO expert. ##Revolutionize Your Keyword Research with ChatGPT Prompts Keyword research can be a challenging task in the world of SEO. That’s why we’ve curated [ChatGPT prompts](https://datafit.ai/) to help you find the right keywords! Our goal is to identify high-quality keywords related to e-commerce that will drive relevant traffic to our website, increase search engine visibility, and align with our content marketing strategy. To achieve this, we’ll gather data on search volume, competition, and related keywords, keeping our target audience in mind. 1. As an SEO lead, can you suggest some low-difficulty, high-volume keywords for [topic of interest]? 2. As a content marketer, could you provide me with some long-tail, high-volume, and low-difficulty keywords for [topic of interest]? 3. I need a table that lists the top competitors for ‘Topic’ along with their URLs. A keyword strategist would be perfect for curating this data. 4. Imagine you’re an SEO expert with in-depth knowledge of keywords. Please create a list of five SEO keywords related to the following section of our blog post: [blog post section]. 5. 
In the role of an SEO manager, your task is to investigate the top 10 SEO keyword strategies for [topic] and categorize the search intent (commercial, transactional, or informational) of each keyword in a table format:… 6. As a content strategist, your job is to gather a collection of X frequently asked questions about [topic], which are relevant to the new [product/service/feature]. 7. If you were a keyword researcher, you would compile a list of listicle content keywords for [topic]. 8. In your capacity as an online marketing manager, you are expected to list broad topics that are relevant to [topic], and expand each topic with a list of phrases that you believe your customers use. ##Content Outline Prompts for ChatGPT Once you’ve identified the ideal keyword, the next step is creating exceptional content around it. But where do you begin? Check out the ChatGPT [content](https://datafit.ai/tag/content) outline prompts below for some inspiration, or simply use them as they are to kickstart your content creation process. 1. Generate a blog post outline for the keyword [X], targeting an [X] audience with a conversational tone and a length of 1500–2000 words. As an experienced copywriter, ensure the outline is comprehensive and SEO-optimized. 2. Create an SEO-optimized blog post outline comparing and contrasting different products/services related to keyword [X]. Target consumers with a neutral tone and aim for a length of 1000–1500 words, as a content marketer. 3. Showcase the unique features and benefits of [X] with a persuasive tone in a blog post outline targeting [product] enthusiasts. As a freelance writer, make sure the outline is comprehensive and has a desired length of 1500–2000 words. 4. Create a step-by-step guide for using [X] with a friendly and helpful tone in a detailed blog post outline. As a technical writer experienced in SEO, ensure the length is 800–1000 words and the outline targets beginners. 5. 
Provide tips and tricks for [X] in a comprehensive blog post outline targeting DIY enthusiasts with a conversational tone. As a content marketing specialist, aim for a desired length of 1200–1500 words. 6. Present the key ideas for a blog post about [subject] in a table format. 7. As an experienced content writer, outline the essential components of a detailed guide on [subject] suitable for a blog post. 8. Create seven subheadings with a catchy title (max 60 characters) for the blog article titled [title]. 9. Write a comprehensive two-level heading outline for the blog article titled [title] from the perspective of a content marketing specialist. 10. Analyze the given outline [outline] as a social media content writer and add or remove parts to make the blog post more engaging and informative. ##ChatGPT Writing Prompts for Creating Compelling Copy With the keyword and outline in hand, it’s time to create the content using our fantastic collection of ChatGPT copywriting prompts. 1. Compose a video advertisement script as a copywriter that emphasizes the advantages of reducing waste and promoting sustainability. Use an informative and persuasive tone to appeal to eco-conscious customers. 2. Draft an Instagram advertisement script as a web content specialist that exudes confidence and appeals to fashion-conscious customers. The ad should showcase the product’s ability to keep individuals updated with the latest fashion trends and help them look their best. 3. Create a friendly YouTube video advertisement script as a web content manager targeting pet owners. The advertisement should highlight the product’s benefits in providing comfort and better care for furry friends. 4. Write a persuasive and professional email template as a blogger for a lead-generation campaign aimed at business-oriented customers. The email should emphasize how the product [X] improves productivity and efficiency in the workplace. 5. 
As a digital content creator, draft a sales letter that uses an approachable tone to target busy customers. The letter should emphasize the time management benefits of the product [X]. 6. Compose an inspiring short story that highlights the features of product [X] for wellness-focused customers. As an experienced copy editor in SEO, emphasize how the product promotes a healthy balance and enhances overall happiness. 7. Write a friendly blog article headline that targets family-oriented customers, showcasing the benefits of product [X] in enabling them to spend more quality time with their loved ones. 8. Draft a confident social media post for a sale event as a content marketer targeting tech-savvy customers. Emphasize how the product [X] can help customers stay ahead of the latest technological trends. 9. Write an inspiring call-to-action for a landing page as an experienced content writer targeting creative customers. Highlight how the product [X] can bring their imaginative ideas to life. 10. Craft a trustworthy script for a virtual event invitation as a social media content creator targeting security-conscious customers. Emphasize how the product [X] can provide a sense of safety and security in their daily lives. 11. As a copywriter, craft a Facebook ad script with a lively and dynamic tone that speaks to young adults. Highlight the ways in which the product simplifies and enhances their daily routine, freeing them up to focus on what matters most. 12. As a content creator, script a TV commercial that resonates with families who have young children. Showcase the benefits of the product in creating a secure and nurturing environment for kids to learn and play. 13. As a social media manager, develop an influencer collaboration script targeting fitness enthusiasts. Emphasize how the product enhances their workout experience, improves their health, and helps them achieve their fitness goals. 14. As a marketer, write a radio ad script aimed at busy professionals. 
Highlight how the product saves time and offers convenience, allowing them to focus on what’s important in their fast-paced lives.
15. As a content strategist, create a Snapchat ad script tailored to college students. Emphasize how the product provides practical and affordable solutions for their everyday needs, helping them manage their finances and live stress-free lives on campus.

##ChatGPT Suggestions for Enhancing Content Quality

Whether your content is brand new or has been around for a while, sometimes a few tweaks are needed to achieve the perfect form. Utilizing ChatGPT prompts can make it easy to refine your content and bring it to the next level.

1. Enhance [text] by ensuring it is both informative and relevant to your intended [target audience].
2. Revise [text] by incorporating headings and subheadings that feature the [keyword], facilitating easier readability for the audience.
3. Reword this [text] using active voice and concise sentences, making the content easily digestible for the [target audience] in the [keyword] context.
4. Boost this [text] by including a clear call-to-action (CTA) that motivates readers to take specific actions, such as subscribing to a newsletter or purchasing a product.
5. Rephrase this [text] by incorporating pertinent statistics and quotes that support your arguments, lending credibility to the content.
6. Restate this [text] through the use of anecdotes and storytelling methods, enhancing the content’s memorability and audience engagement.
7. Improve this [text] by incorporating humor where appropriate, creating an enjoyable reading experience.
8. Enhance this [text] by thoroughly proofreading for any typos, grammatical errors, or other inaccuracies.
9. Rewrite this [text] with bullet points, numbered lists, and bold/italicized text to create easily scannable content.
10. Rewrite this [text] by concluding with a call-to-action (CTA) that invites reader engagement, such as asking questions or requesting comments.
11. Rewrite the above text using [keyword 1, keyword 2, keyword 3] as primary SEO keywords.
12. Make the above text more engaging and fun to read by using a playful tone of voice and incorporating [keywords].
13. As a copywriter in the SaaS industry, summarize the following content in [X] words, emphasizing the most important information and incorporating the keyword [X] where relevant.
14. As a content specialist, rephrase this blog section [blog section] to align with the tone, language, and style of this blog section [blog section], while retaining its core message.
15. As a former comedian now working in content marketing, enhance this YouTube video description by incorporating wordplay, puns, and other relatable humor to make it more engaging and enjoyable.

##ChatGPT Prompts to Enhance Technical SEO Performance

Simply producing SEO-friendly content might not suffice without addressing the technical aspects. To ensure that you are on the correct path, utilize ChatGPT prompts for technical [SEO](https://datafit.ai/cat/seo) if you require additional assistance.

1. Utilize ChatGPT prompts to produce the Schema markup for the FAQs page by creating questions and answers for the following: …
2. Create hreflang tags for pages that target [country] in [language], [country] in [language], and [country] in [language] …
3. Write .htaccess rewrite rules that will 301-redirect [source location] to [destination location] …
4. Generate robots.txt rules that block crawling of [blocked location] but allow [crawled location] within the domain …
5. Develop a valid XML sitemap with the following URLs: [URLs] …
6. Create a no-follow and canonical for [URL] …
7. Assume the role of an SEO specialist, evaluate [website URL], and offer recommendations for technical SEO improvements, along with a table outlining how to implement those improvements.
##Prompts from ChatGPT to Assist with Crafting Effective Titles Are you in search of a captivating title for your content? Take a look at the ChatGPT prompts provided below to craft a title that will entice your audience to click on your content. 1. Generate [X] unique title tags, limited to 60 characters each, for the following text. Each tag must be descriptive and include the term “keyword”:… 2. Allow me to enlist keywords, and you can serve as a fancy title generator. I’ll begin with API, test, and automation. Please provide me with attention-grabbing blog post titles. 3. Craft captivating blog post titles for the following list of SEO keywords to increase their click-through rates: 4. Enhance the appeal of the blog article’s title, [title], by providing more engaging options. 5. Produce three distinct blog post titles with higher click-through rates for the given topic. ##General SEO: Utilizing ChatGPT Prompts We’ve included some additional ChatGPT prompts for SEO that may come in handy in the field. 1. Please assume the role of an SEO expert. My initial requirement is assistance in creating an SEO strategy for my company. 2. Assume the role of an SEO expert. My first request is for guidance on how to develop a comprehensive and effective SEO guide. 3. As a social media influencer, you will create content for various platforms, including Instagram, Twitter, and YouTube. Your mission will be to engage with your followers to promote products or services and increase brand awareness. For my first suggestion, I need help devising an exciting campaign on Instagram to launch a new line of athleisure clothing. 4. As a social media manager, your responsibilities include devising and executing campaigns across all relevant platforms, responding to inquiries and comments, overseeing discussions through community management tools, gauging success via analytics, creating captivating content, and maintaining a consistent schedule. 
My initial request is for assistance in managing an organization’s Twitter presence to raise brand awareness. 5. Paraphrase the subsequent email as an SEO specialist in a playful yet professional manner, while paying close attention to grammar rules:… 6. Provide a table of X popular blogs related to [niche] that cover [topic], along with their corresponding URLs. 7. Your role is to write a script for a webinar with an informative tone that resonates with tech-savvy individuals. The content should emphasize the importance of keeping up with the latest technology and achieving seamless connectivity. 8. Act as a copywriter and develop a hashtag campaign for a product that informs customers looking for value. Focus on how the product maximizes their investment and provides the most value for their money. 9. As a content marketer, compose a captivating meta description for a blog post featuring the keyword [X], ensuring that the meta description stays within [X] characters. 10. As an SEO specialist, I require your assistance in devising a plan to enhance the search engine ranking of [URL] for the following keywords: “[keyword 1],” “[keyword 2],” and “[keyword 3].”
killianellie1
1,445,613
Creating an Interactive Scroll Page Progress Bar with CSS to Enhance User Engagement
Scroll page progress bars are a useful feature to help users track their progress as they scroll...
0
2023-04-23T20:02:47
https://dev.to/techiebundle/creating-an-interactive-scroll-page-progress-bar-with-css-to-enhance-user-engagement-272a
webdev, frontend, css
Scroll page progress bars are a useful feature to help users track their progress as they scroll through long pages. By adding an interactive scroll page progress bar to your website or application, you can enhance user engagement and improve the overall user experience. In this article, we will explore how to create a custom scroll page progress bar using CSS. We will discuss the basic HTML structure required for the progress bar, and then dive into the CSS code that will bring it to life. We will also cover some best practices and tips for designing an effective scroll page progress bar that fits seamlessly with your website or application. ## Step 1: Add HTML Markup The first step is to add HTML markup for your interactive scroll page progress bar. Here's an example: ``` <!-- tb is acronym for TechieBundle --> <div class="tb-container-bar"> <div class="tb-progress-bar"></div> </div> <div class="tb-wrapper"> <header> <div class="tb-container"> <a href="https://techiebundle.com"> <img src="//techiebundle.com/wp-content/uploads/2022/12/techiebundle-white-1-1.png" alt="Web Development Agency | TechieBundle" height="60" target="_blank"> </a> <div class="nav navigation-wrapper"> <ul class="nav-list"> <li class="nav-link">Home</li> <li class="nav-link">About</li> <li class="nav-link">Portfolio</li> <li class="nav-link">Blog</li> <li class="nav-link">Contact</li> </ul> </div> </div> </header> <div class="tb-container tb-margin-top"> <h1>Scroll Page Progress Bar with CSS (Scroll Down)</h1> <p>Lorem ipsum dolor, sit amet consectetur adipisicing elit. Illo commodi nesciunt maiores saepe? Repellat at quia dignissimos nobis, vitae cumque delectus dolorem totam cum. Voluptas quos delectus fuga dicta accusamus. </p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Ex earum doloribus quasi atque? 
Voluptatem rem maiores, reiciendis doloremque fuga earum quia, quis laboriosam minima natus, in sunt debitis repellat consequuntur?</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Animi ipsa sed tempora in aperiam autem quidem rem alias ratione! Rem debitis reiciendis aliquid mollitia deleniti. Hic reprehenderit aliquid nisi officiis!</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Rem, dolor repellat! Iure ratione perspiciatis officia illo provident dignissimos earum sunt, natus adipisci dolorum saepe corrupti rerum aliquam ex vel. Voluptas!</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Exercitationem aut ipsa, optio voluptatem doloremque in labore facere, dignissimos, laudantium voluptas rem molestiae iure hic alias sunt. Sequi rem error corrupti!</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Sequi deleniti consequatur, similique odio hic, saepe iste repellendus dolorum odit cum quae, ipsum dolore reprehenderit exercitationem temporibus illo quisquam modi dolor? </p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quisquam repellendus, magnam voluptatem minima deleniti sint id. Voluptates est, laudantium deserunt, minus ex assumenda culpa cumque porro tempore doloribus quisquam consequatur.</p> <p>Lorem ipsum, dolor sit amet consectetur adipisicing elit. Odit quis molestiae illo hic facere ratione velit sequi eum, inventore nesciunt dicta aliquid totam necessitatibus culpa iste autem expedita saepe vero.</p> <p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Animi ab officiis odit culpa vel! Temporibus, labore magnam alias sunt sint culpa, porro inventore pariatur modi explicabo omnis perspiciatis esse ipsa.</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Sequi deleniti consequatur, similique odio hic, saepe iste repellendus dolorum odit cum quae, ipsum dolore reprehenderit exercitationem temporibus illo quisquam modi dolor? 
</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quisquam repellendus, magnam voluptatem minima deleniti sint id. Voluptates est, laudantium deserunt, minus ex assumenda culpa cumque porro tempore doloribus quisquam consequatur.</p> <p>Lorem ipsum, dolor sit amet consectetur adipisicing elit. Odit quis molestiae illo hic facere ratione velit sequi eum, inventore nesciunt dicta aliquid totam necessitatibus culpa iste autem expedita saepe vero.</p> <p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Animi ab officiis odit culpa vel! Temporibus, labore magnam alias sunt sint culpa, porro inventore pariatur modi explicabo omnis perspiciatis esse ipsa.</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Sequi deleniti consequatur, similique odio hic, saepe iste repellendus dolorum odit cum quae, ipsum dolore reprehenderit exercitationem temporibus illo quisquam modi dolor? </p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quisquam repellendus, magnam voluptatem minima deleniti sint id. Voluptates est, laudantium deserunt, minus ex assumenda culpa cumque porro tempore doloribus quisquam consequatur.</p> <p>Lorem ipsum, dolor sit amet consectetur adipisicing elit. Odit quis molestiae illo hic facere ratione velit sequi eum, inventore nesciunt dicta aliquid totam necessitatibus culpa iste autem expedita saepe vero.</p> <p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Animi ab officiis odit culpa vel! Temporibus, labore magnam alias sunt sint culpa, porro inventore pariatur modi explicabo omnis perspiciatis esse ipsa.</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Sequi deleniti consequatur, similique odio hic, saepe iste repellendus dolorum odit cum quae, ipsum dolore reprehenderit exercitationem temporibus illo quisquam modi dolor? </p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quisquam repellendus, magnam voluptatem minima deleniti sint id. 
Voluptates est, laudantium deserunt, minus ex assumenda culpa cumque porro tempore doloribus quisquam consequatur.</p>
    <p>Lorem ipsum, dolor sit amet consectetur adipisicing elit. Odit quis molestiae illo hic facere ratione velit sequi eum, inventore nesciunt dicta aliquid totam necessitatibus culpa iste autem expedita saepe vero.</p>
    <p>Lorem ipsum dolor sit amet consectetur, adipisicing elit. Animi ab officiis odit culpa vel! Temporibus, labore magnam alias sunt sint culpa, porro inventore pariatur modi explicabo omnis perspiciatis esse ipsa.</p>
    <br>
  </div>
  <br>
  <br>
</div>
<footer>
  <div class="copyright">
    © 2023 <a href="https://techiebundle.com" target="_blank">TechieBundle</a>.
  </div>
</footer>
```

## Step 2: Add CSS Styles

Next, you'll need to add CSS styles to create the appearance of your interactive scroll page progress bar.

```
@import url("https://fonts.googleapis.com/css?family=Asap|Playfair+Display+SC:900&display=swap");
@import url("https://fonts.googleapis.com/css2?family=Nunito:wght@200;400;500;600;700;900&display=swap");

html,
body {
  position: relative;
  height: auto;
  background: #fcfcfc;
  line-height: 180%;
  font-family: "Nunito", sans-serif;
}

p {
  color: #474747;
  font-size: 14px;
}

h1 {
  text-align: center;
}

.tb-margin-top {
  padding-top: 8em;
}

.tb-wrapper {
  position: relative;
  overflow: hidden;
  z-index: 3;
}

.tb-container {
  max-width: 920px;
  margin: 0 auto;
  padding-left: 20px;
  padding-right: 20px;
}

.tb-container > a > img {
  border-radius: 50%;
}

header > div {
  display: flex;
  justify-content: space-between;
  align-items: center;
  height: 90px;
}

header {
  position: fixed;
  top: 0;
  width: 100%;
  background: #fcfcfc;
  box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
}

.tb-container-bar:before {
  position: fixed;
  content: "";
  display: block;
  z-index: 2;
  border-left: 100vw solid white;
  border-right: 100vw solid white;
  border-bottom: calc(100vh - 92px) solid white;
  bottom: 0;
}

.tb-progress-bar {
  position: absolute;
  top: 92px;
  width: 100%;
  height: 100%;
  background-image: url(https://upload.wikimedia.org/wikipedia/commons/e/e0/Black_right_angled_triangle_2.png);
  background-repeat: no-repeat;
  background-attachment: scroll, fixed;
  background-size: 100% calc(100% - (100vh - 91px));
  z-index: 1;
}

footer {
  position: absolute;
  height: auto;
  width: 100%;
  background-color: #fcfcfc;
  text-align: center;
  line-height: 90px;
  z-index: 2;
  letter-spacing: 4px;
  bottom: -120px;
  box-shadow: 0 0 10px rgba(0, 0, 0, 0.1);
}

footer > .copyright {
  letter-spacing: 1px;
}

.navigation-wrapper > ul {
  display: flex;
  align-items: center;
  justify-content: center;
  gap: 2em;
}

.navigation-wrapper > ul > li {
  color: #474747;
  font-size: 1.25em;
  list-style: none;
  cursor: pointer;
  position: relative;
  font-weight: 500;
}

.navigation-wrapper > ul > li:hover::before {
  content: "";
  position: absolute;
  bottom: 0;
  height: 0;
  width: 0;
  z-index: 1;
  border-bottom: 2px solid #474747;
  animation: border-move 0.5s linear;
  animation-fill-mode: forwards;
}

@keyframes border-move {
  100% {
    width: 100%;
  }
}
```

For a live demo, go through this link: [Engage Your Users: How to Create an Eye-catching Scroll Page Progress Bar with CSS](https://techiebundle.com/engage-your-users-how-to-create-an-eye-catching-scroll-page-progress-bar-with-css/)
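As a closing note, the same progress value can also be computed with a few lines of JavaScript if you would rather drive a simple width-based bar directly. This is a hedged sketch, not part of the tutorial above: the `#js-progress-bar` element and the event wiring are hypothetical additions.

```javascript
// Returns how far the page has been scrolled, as a percentage in the range 0-100.
function scrollProgress(scrollTop, scrollHeight, clientHeight) {
  const scrollable = scrollHeight - clientHeight;
  if (scrollable <= 0) return 0; // page shorter than the viewport: nothing to scroll
  return Math.min(100, Math.max(0, (scrollTop / scrollable) * 100));
}

// Browser wiring (skipped when run outside the DOM, e.g. under Node):
if (typeof document !== "undefined") {
  window.addEventListener("scroll", () => {
    const el = document.documentElement;
    const bar = document.querySelector("#js-progress-bar"); // hypothetical element
    if (bar) {
      bar.style.width =
        scrollProgress(el.scrollTop, el.scrollHeight, el.clientHeight) + "%";
    }
  });
}
```

Compared with the pure-CSS trick, the JavaScript version trades a small script for much simpler styling: the bar is just a fixed-position element whose width you set on every scroll event.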
techiebundle
1,459,332
Will OpenAI be blocked in the EU?
Italy has blocked the use of ChatGPT, an artificial intelligence tool, for not respecting data...
0
2023-05-06T09:36:30
https://dev.to/makiai/will-openai-be-blocked-in-the-eu-1l4d
openai, ai, productivity, chatgpt
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mftd1zg2zlrgg8ggde6u.png)

Italy has blocked the use of ChatGPT, an artificial intelligence tool, for not respecting data protection legislation and minors' access to content. Although this measure only affects Italian companies, there are fears that it could limit their access to technologies that will drive their growth and competitiveness in the global market in the future.

Some experts believe that governments should incentivize the use of innovative technologies in production processes instead of hindering innovation, and work together to create regulatory frameworks that allow the advancement of artificial intelligence in a responsible and safe manner.

But that's not all. Other EU countries have also announced that they believe OpenAI is in breach of EU regulations in terms of data protection. Is this true? I don't know, and I don't really care; everyone has to be aware of the data they share. Isn't it a bit absurd that they protect us from something we already know and choose to use? Are they protecting us from ourselves?

For example, Germany and Spain have announced that they believe that OpenAI does not comply with European regulations, just as the Italian state said. Will Germany and Spain block access to OpenAI?

[At least there could be a workaround to bypass the block](https://makiai.com/en/how-to-access-openai-from-a-blocked-country/), but still, let's hope it doesn't happen, as not everyone will be able to bypass the restriction legally, especially enterprises.

What could be the economic cost of not having access to the world's most advanced commercially available AI tool? Best not to think about it.
makiai
1,459,893
Readability
Is creating readable code an important feature of writing quality software? Is it something that you...
0
2023-05-07T00:43:08
https://qualitysoftwarematters.com/readability
---
title: Readability
published: true
date: 2015-06-25 05:00:00 UTC
tags:
canonical_url: https://qualitysoftwarematters.com/readability
---

Is creating readable code an important feature of writing quality software? Is it something that you would mention when reviewing code for one of your team members?

Typically, when we review code we focus on making sure that:

1. It does what it is supposed to do
2. It handles potential failures
3. It avoids duplication
4. It performs well if it is computationally intensive

But when it comes to the readability of someone's code, we often don't want to mention it. Why?

For some, it might be related to fear of being accused of being petty or nit-picky. Others might not want to stifle a fellow developer's personal expression or creativity. And others might be more focused on getting things done and feel that focusing on readability is a waste of valuable developer time.

# Is Readability Important?

Today, I would like to suggest that readability is important. And, I will start by providing a couple of code samples to demonstrate my point.

Take a look at the following SQL statement and think about how long it takes you to figure out what it does.

```
SELECT * FROM dbo.MyTable WHERE ((PATINDEX('%[0-9]%', @Filter) = 0) AND (@Filter LIKE '%,%'))
```

Now, take a look at the same code that has been made more readable.

```
SELECT *
FROM dbo.MyTable
WHERE
(
    (@filterContainsNumbers = 0)
    AND (@filterContainsComma = 1)
)
```

By using the simple concepts of **using reader-friendly layout** and **using intention revealing variable names**, someone can quickly scan your code and understand it within a matter of seconds. The first example would probably take a few minutes to restructure the layout and look up the definition of the `PATINDEX` function.

Take a look at another code example written in C#.
```
public string RenderRemainingMoop(Liability source)
{
    var remainingMoop = source.OopRemaining + source.OopApplied;

    return remainingMoop == 9999999999
        ? String.Empty
        : source.OopRemaining.ToString("C");
}
```

Now take a look at the more readable version.

```
public string RenderRemainingMaxOutOfPocket(Liability liability)
{
    const long LiabilityMaxNumber = 9999999999;

    var remainingMaxOutOfPocket = liability.OutOfPocketRemaining + liability.OutOfPocketApplied;

    return remainingMaxOutOfPocket == LiabilityMaxNumber
        ? String.Empty
        : liability.OutOfPocketRemaining.ToString("C");
}
```

In addition to layout, using variables to **provide context for magic numbers and strings** and **being descriptive** instead of using abbreviations can better reveal what code is doing.

# The Golden Rule

In both of the readable code samples, the developer focused not only on getting the job done, but also on making sure that future readers of their code would easily comprehend it. And why is this important? Because it makes the code more **_maintainable_** and **_flexible_**.

As mentioned in a [previous post](http://www.qualitysoftwarematters.com/2015/06/what-is-software-quality.html) on software quality, **_maintainability is the effort required to locate and fix an error in an operational program_**. If your code is more readable, developers will be able to quickly understand it, which translates into quicker fixes.

In that [same article](http://www.qualitysoftwarematters.com/2015/06/what-is-software-quality.html), it was also mentioned that **_flexibility is the effort required to modify an operational program_**. When developers need to add functionality to existing code, making it more readable will also allow them to quickly get up to speed and know where to place their functionality.

![the-golden-rule](https://cdn.hashnode.com/res/hashnode/image/upload/v1683415780840/b08bd8cd-fe23-452c-a4e5-845b66da5615.jpeg)

In both situations, you will be making someone else's job easier.
As the golden rule states,

> Do unto others what you would have them do unto you.

You never know when that someone else might be you!

# Practicing the Golden Rule

So, what can we do to make our code more readable for others?

## Using Intention Revealing Names

Everything in software has a name. And every time you name something, you have the opportunity to reveal its intention.

### Be Descriptive

When naming things, be as descriptive as possible, and don't be afraid to use long names. Modern languages don't have many constraints when it comes to naming things, and the more descriptive, the better for the reader. There is no excuse for using abbreviations.

### Provide Context for Magic Numbers and Strings

When using magic numbers or strings, make sure to assign them to a constant with a meaningful name. It helps to dispel the magic.

### Be Ubiquitous

In Eric Evans's ground-breaking book, [Domain-Driven Design: Tackling Complexity in the Heart of Software](http://www.amazon.com/gp/product/0321125215/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=0321125215&linkCode=as2&tag=meinershagenf-20&linkId=6CZABSYJYOAQGC7U), he introduces the concept of [ubiquitous language](http://martinfowler.com/bliki/UbiquitousLanguage.html), a common set of terms that can be used by both developers and business people alike. This helps to reduce the cognitive friction and potential confusion of having to translate between business concepts and code.
When you are naming something, prefer to use your domain's ubiquitous language rather than technical jargon. It will help reduce time and misunderstandings in the future.

### Avoid Data Types in Names

In the past, using Hungarian notation to prepend an abbreviated data type to a variable name (**examples**: iCounter, bIsValid, txtFirstName, etc.) was in vogue, but it obscures the name by mixing the semantic meaning of a variable and its underlying data type. Modern development environments can easily reveal the type of a variable by hovering over the variable during debugging.

## Using Reader-Friendly Layout

As shown earlier, laying out your code can make a big difference in how quickly a fellow developer (or even you, if you haven't seen the code for a while) can comprehend your code.

### Show Logical Structure

Lay out your code to emphasize the logical structure by using indentations and new lines. The examples below demonstrate this transformation.

```
--BEFORE
IF (@filter LIKE '%to%') BEGIN SELECT * FROM dbo.MyTable END ELSE BEGIN SELECT * FROM dbo.MyOtherTable END

--AFTER
IF (@filter LIKE '%to%')
BEGIN
    SELECT * FROM dbo.MyTable
END
ELSE
BEGIN
    SELECT * FROM dbo.MyOtherTable
END
```

### Avoid Run-on Sentences

Just as in good writing, you want to avoid the use of run-on sentences in your code.
Instead of putting everything on one line as in the following example that uses a fluent syntax:

```
//BAD
var customers = eventsRepository.GetAll().Where(e => e.Date > DateTime.Now).Select(e => new Customer(e.FirstName, e.LastName));
```

you can do the following:

```
//BETTER
var customers = eventsRepository
    .GetAll()
    .Where(e => e.Date > DateTime.Now)
    .Select(e => new Customer(e.FirstName, e.LastName));
```

Or, instead of nesting multiple statements into one line for efficiency:

```
//BAD
var order = GetOrderFor(GetCustomerFor(HttpContext.Current.User.Identity.Name), Convert.ToInt32(Session["orderId"]));
```

you might want to do the following:

```
//BETTER
var username = HttpContext.Current.User.Identity.Name;
var customer = GetCustomerFor(username);
var orderId = Convert.ToInt32(Session["orderId"]);
var order = GetOrderFor(customer, orderId);
```

While there may not always be universally accepted practices when it comes to layout, each team should decide on rules for code formatting and stick to them. Static code analysis tools can help to enforce those rules, and modern editors include utilities for automatically formatting code according to your rules as it is being written.

## Handling Conditionals

Another area where code can become unreadable is when using boolean logic in `if` and `while` statements.

### Extracting Conditional Logic

For example, while the following code sample is fairly readable, it can be improved.

```
if (rolls[frameIndex] + rolls[frameIndex + 1] == 10) // spare
{
    score += 10 + rolls[frameIndex + 2];
}
else
{
    score += rolls[frameIndex] + rolls[frameIndex + 1];
}
```

Simply extract the logic contained in the boolean expression to another method, allowing the reader to understand the conditional logic from a conceptual point of view - in this case bowling a spare.
```
if (IsSpare(frameIndex))
{
    score += 10 + spareBonus(frameIndex);
}
else
{
    score += sumOfBallsInFrame(frameIndex);
}
```

Most modern development environments have built-in refactoring utilities that make this kind of operation trivial.

### Negative Conditionals

Negative conditional logic can also obscure the meaning of an expression by creating cognitive dissonance that the reader has to resolve. This wasted time can be avoided by re-stating the condition in positive terms, for example by creating a method that returns the opposite of the negated expression. For example,

```
if (!String.IsNullOrEmpty(value))
    doSomething();
```

could very easily have been written as the following.

```
if (ContainsText(value))
    doSomething();
```

For more information on making your code readable, check out Bob Martin's book, [Clean Code: A Handbook of Agile Software Craftsmanship](http://www.amazon.com/gp/product/0132350882/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=0132350882&linkCode=as2&tag=meinershagenf-20&linkId=V5RP3A3INHPV7BIR).
toddmeinershagen
1,459,918
The C4Model Documentation Method and Its Application with PUML
TL;DR: The C4Model is a software architecture documentation technique that makes it possible to represent...
22,996
2023-05-10T23:59:59
https://dev.to/pedropietro/o-metodo-de-documentacao-c4model-e-sua-aplicacao-com-puml-3jdd
documentation, uml, braziliandevs
**TL;DR**: The C4Model is a software architecture documentation technique that makes it possible to represent complex systems in a hierarchical, easy-to-understand way. In this article, we will explore how to use PlantUML to create C4Model diagrams and present practical examples.

## Introduction

The C4Model is an approach to software architecture documentation that focuses on communication between stakeholders, providing a hierarchical and simplified view of the system. Composed of four levels of abstraction - Context, Container, Component, and Code - the C4Model makes the architecture easier to understand and promotes collaboration among team members. In this article, we will discuss how to use PlantUML (PUML) to create C4Model diagrams and provide practical examples.

## 1. The C4Model and PlantUML

The C4Model is based on four levels of abstraction that represent different perspectives of the system:

1. Context: shows the system's relationship with users and other systems.
2. Container: describes the services, applications, and databases that make up the system.
3. Component: details the individual components and their interactions within a container.
4. Code: represents the actual implementation of the component in terms of classes, interfaces, and other programming constructs.

PlantUML is an open-source tool that allows you to create UML diagrams from plain text. With its intuitive syntax, PlantUML is ideal for creating C4Model diagrams.

## 2. Examples of C4Model diagrams with PlantUML

Below are examples of C4Model diagrams written with PlantUML syntax.
### 2.1 Context Diagram

```plantuml
@startuml
!define C4_Context https://raw.githubusercontent.com/RicardoNiepel/C4-PlantUML/master/C4_Context.puml
!includeurl C4_Context.puml

System(systemAlias, "Sistema Exemplo", "Sistema para ilustrar o C4Model")
Person(userAlias, "Usuário", "Usuário do sistema")
System_Ext(system2Alias, "Sistema Externo", "Outro sistema relacionado")

Rel(userAlias, systemAlias, "Interage com")
Rel(systemAlias, system2Alias, "Se comunica com")
@enduml
```

### 2.2 Container Diagram

```plantuml
@startuml
!define C4_Container https://raw.githubusercontent.com/RicardoNiepel/C4-PlantUML/master/C4_Container.puml
!includeurl C4_Container.puml

System_Boundary(systemAlias, "Sistema Exemplo") {
    Container(container1Alias, "Aplicativo Web", "JavaScript, Angular", "Aplicativo web para usuários")
    Container(container2Alias, "API", "Java, Spring Boot", "API RESTful")
    ContainerDb(container3Alias, "Banco de Dados", "MySQL", "Armazena dados do sistema")
}

Person(userAlias, "Usuário", "Usuário do sistema")

Rel(userAlias, container1Alias, "Acessa")
Rel(container1Alias, container2Alias, "Consome")
Rel(container2Alias, container3Alias, "Lê e grava dados")
@enduml
```

### 2.3 Component Diagram

```plantuml
@startuml
!define C4_Component https://raw.githubusercontent.com/RicardoNiepel/C4-PlantUML/master/C4_Component.puml
!includeurl C4_Component.puml

Container_Boundary(container2Alias, "API - Java, Spring Boot") {
    Component(component1Alias, "Controller", "Spring MVC Rest Controller", "Gerencia solicitações de API")
    Component(component2Alias, "Service", "Spring Service", "Gerencia a lógica de negócio")
    Component(component3Alias, "Repository", "Spring Data JPA Repository", "Gerencia o acesso aos dados")
}

Rel(component1Alias, component2Alias, "Chama")
Rel(component2Alias, component3Alias, "Utiliza")
@enduml
```

## 3.
Conclusion

The C4Model is a software architecture documentation approach that makes it possible to represent complex systems in a hierarchical, easy-to-understand way. By using PlantUML, it is possible to create C4Model diagrams efficiently, simplifying communication between team members and facilitating collaboration.

## References:

1. Simon Brown. "C4 model - Context, Containers, Components, and Code". Available at: [https://c4model.com/](https://c4model.com/).
2. PlantUML. "PlantUML - Open-source tool that uses simple textual descriptions to draw UML diagrams". Available at: [https://plantuml.com/](https://plantuml.com/).
3. Ricardo Niepel. "C4-PlantUML - PlantUML sprites, macros, and other includes for C4 diagrams". Available at: [https://github.com/RicardoNiepel/C4-PlantUML](https://github.com/RicardoNiepel/C4-PlantUML).

_Note: The code examples in this article are based on Ricardo Niepel's C4-PlantUML repository._
pedropietro
1,460,122
Add Multiple Filter using one function in jQuery, Ajax
Most beginners when they need a filter in a website create multiple functions for each filter, for...
0
2023-05-07T07:44:47
https://dev.to/shubhamraturi37/add-multiple-filter-using-one-function-in-jquery-ajax-1d56
jquery, ajax, javascript
Most beginners, when they need filters on a website, create a separate function for each filter - for example, a price filter, date-range filter, tag filter, and category filter. **We can create multiple filters using a single function** - _below we have a code example._

Creating predefined parameters for all filters:

```
function filter(price, page = 1, perPage = 2) {
  let data = { price, page, perPage }; // the data to be posted

  $.ajax({
    url: "ajax_url", // your ajax URL
    cache: false,
    data: data, // your post data
    method: "POST", // HTTP method
    success: function (data) {
      // render the filtered response into HTML or store it in a variable
    }
  });
}
```

In this scenario I have set up default values, and I can call this function from whichever button I want. EX:

```
$("#price_filter").click(function () {
  let price = $("#price").val();
  filter(price);
});
```

This is just an example of arguments: **An argument is just like a variable. Arguments are specified after the function name, inside the parentheses. You can add as many arguments as you want; just separate them with commas. The above example has a function with three arguments: price, page, and perPage.**
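Building on the pattern above, several filter controls can share the one function by collecting all of their values into a single payload. This is only an illustrative sketch: the element IDs, the endpoint URL, and the helper names are assumptions, not taken from the article.

```javascript
// Pure helper: merge the current filter values with paging defaults.
function buildFilterPayload(filters, page = 1, perPage = 10) {
  return { ...filters, page, perPage };
}

// One function serves every filter control on the page.
function applyFilters(page, perPage) {
  const payload = buildFilterPayload(
    {
      price: $("#price").val(),
      dateRange: $("#date_range").val(),
      category: $("#category").val(),
    },
    page,
    perPage
  );

  $.ajax({
    url: "/products/filter", // hypothetical endpoint
    method: "POST",
    data: payload,
    success: function (response) {
      $("#results").html(response); // render the filtered list
    },
  });
}

// In the browser, any control can trigger the same function:
if (typeof $ !== "undefined") {
  $("#price, #date_range, #category").on("change", () => applyFilters(1, 10));
}
```

Because every control funnels into `buildFilterPayload`, adding a new filter is just one more key in the object rather than a whole new function.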
shubhamraturi37
1,460,506
Realistic Switch button in CSS
Realistic Switch button in CSS HTML Code &lt;input name="switch" id="switch"...
22,810
2023-05-08T07:15:00
https://dev.to/jon_snow789/realistic-switch-button-in-css-2a2h
css, design, animation, webdev
Realistic Switch button in CSS --- ![Realistic Switch button in CSS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/84scxv7i26xusxrx7ulb.gif) --- ### HTML Code ```html <input name="switch" id="switch" type="checkbox"> <label class="switch" for="switch"></label> ``` ### CSS Code ```css #switch { visibility: hidden; clip: rect(0 0 0 0); position: absolute; left: 9999px; } .switch { display: block; width: 130px; height: 60px; margin: 70px auto; position: relative; background: #ced8da; background: linear-gradient(left, #ced8da 0%,#d8e0e3 29%,#ccd4d7 34%,#d4dcdf 62%,#fff9f4 68%,#e1e9ec 74%,#b7bfc2 100%); filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#ced8da', endColorstr='#b7bfc2',GradientType=1 ); transition: all 0.2s ease-out; cursor: pointer; border-radius: 0.35em; box-shadow: 0 0 1px 2px rgba(0,0,0,0.7), inset 0 2px 0 rgba(255,255,255,0.6), inset 0 -1px 0 1px rgba(0,0,0,0.3), 0 8px 10px rgba(0,0,0,0.15); } /* Visit https://democoding.in/ for more free css animation */ .switch:before { display: block; position: absolute; left: -35px; right: -35px; top: -25px; bottom: -25px; z-index: -2; content: ""; border-radius: 0.4em; background: #d5dde0; background: linear-gradient(#d7dfe2, #bcc7cd); box-shadow: inset 0 2px 0 rgba(255,255,255,0.6), inset 0 -1px 1px 1px rgba(0,0,0,0.3), 0 0 8px 2px rgba(0,0,0,0.2), 0 2px 4px 2px rgba(0,0,0,0.1); pointer-events: none; transition: all 0.2s ease-out; } .switch:after { content: ""; position: absolute; right: -25px; top: 50%; width: 16px; height: 16px; border-radius: 50%; background: #788b91; margin-top: -8px; z-index: -1; box-shadow: inset 0 -1px 8px rgba(0,0,0,0.7), inset 0 -2px 2px rgba(0,0,0,0.2), 0 1px 0 white, 0 -1px 0 rgba(0,0,0,0.5), -47px 32px 15px 13px rgba(0,0,0,0.25); } #switch:checked ~ .switch { background: #b7bfc2; background: linear-gradient(to right, #b7bfc2 0%,#e1e9ec 26%,#fff9f4 32%,#d4dcdf 38%,#ccd4d7 66%,#d8e0e3 71%,#ced8da 100%); filter: progid:DXImageTransform.Microsoft.gradient( 
startColorstr='#b7bfc2', endColorstr='#ced8da',GradientType=1 ); } #switch:checked ~ .switch:after { background: #b1ffff; box-shadow: inset 0 -1px 8px rgba(0,0,0,0.7), inset 0 -2px 2px rgba(0,0,0,0.2), 0 1px 0 white, 0 -1px 0 rgba(0,0,0,0.5), -110px 32px 15px 13px rgba(0,0,0,0.25); } ``` --- Thanks for Reading ❤️! Check my website [Demo coding](https://democoding.in/) for updates about my latest CSS Animation, CSS Tools, and some cool web dev tips. Let's be friends! Don't forget to subscribe to our channel: [Demo code](https://www.youtube.com/@democode)
jon_snow789
1,460,695
Local Storage vs Session Storage in JavaScript
Let's dive into two fundamental web storage mechanisms in JavaScript: Local Storage and Session...
0
2023-05-07T22:43:48
https://dev.to/valentinaperic/local-storage-vs-session-storage-in-javascript-1p8h
webdev, javascript, frontend, localstorage
Let's dive into two fundamental web storage mechanisms in JavaScript: Local Storage and Session Storage. These two mechanisms are important for creating personalized user experiences through data persistence. They both store data in the user's browser in a way that cannot be read by a server. We are going to dive into both, explain their use cases, and cover the differences between them.

## Prerequisites 📝

1. A working knowledge of JavaScript
2. A working knowledge of web browsers

## Let's Get Started ✨

**What is Local Storage?**

Local storage stores key-value pairs in the user's browser. This data persists even after the user closes their browser or shuts down their device, and remains until the user manually clears their browser cache or the app clears it. Local storage has a capacity of about 5MB, which is larger than what cookies can store.

Local storage is great for storing global data that is accessed often in your app, like a username, email, or name. It is also great for features like "remember me", which can streamline a user's experience by skipping steps they have already completed before.

**What is Session Storage?**

Similar to local storage, session storage also stores key-value pairs in the user's browser. In fact, both are objects that are part of the Web Storage API; the difference is in how their data is handled. Instead of the data persisting across multiple sessions, it is only available for the duration of the user's specific session. This means that when the user closes their browser or tab, the data gets cleared. This is great for multi-step processes like booking flights and hotels, shopping carts, and user authentication.

**Local Storage vs Session Storage**

Let's compare the two. You know that they differ in how their data is persisted. What other differences do they have? Firstly, the scope is different.
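Before going further, it helps to see that both objects expose the same `setItem`/`getItem` interface, so the same code can target either one. A minimal sketch (the `"user"` key and the helper functions are illustrative, not a prescribed API):

```javascript
// Web Storage values are always strings, so objects are usually JSON-encoded.
// These helpers accept any Storage-like object - in the browser that means
// either localStorage or sessionStorage.
function saveUser(storage, user) {
  storage.setItem("user", JSON.stringify(user));
}

function loadUser(storage) {
  const raw = storage.getItem("user"); // getItem returns null for missing keys
  return raw === null ? null : JSON.parse(raw);
}

// In a browser:
//   saveUser(localStorage, { name: "Ada" });   // survives closing the tab
//   saveUser(sessionStorage, { name: "Ada" }); // cleared when the tab closes
//   localStorage.removeItem("user");           // delete a single key
//   localStorage.clear();                      // delete everything
```

Because both objects implement the same Storage interface, moving a feature from persistent to per-session storage is just a matter of passing the other object.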
Local storage has global scope across all tabs and windows, while session storage is limited to a single tab or window. Local storage also has a larger capacity, which makes sense given that it can persist data across multiple tabs and windows. This can affect performance when not used wisely: storing large amounts of data improperly can slow down your app, although in most cases the difference is small. The most important thing is to choose the mechanism that best fits the specific use case of your app. Should the data persist across multiple sessions? Use local storage. If not, consider session storage.

## Next Steps ✨

How can you personalize the user experience of your app using local or session storage? Go take a deep dive into this! You can even look at how apps do this by checking the Application tab in dev tools, shown below 👇

![screen shot of dev tools showing local and session storage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y0o2hzx3ut6elw73ajcw.png)

Have you used local or session storage before? What use cases have you used them in? What use cases are you considering? Let me know in the comments!
valentinaperic
1,460,883
whoer.net IP Checker
whoer.net IP Checker Overview of Whoer.net Whoer.net is an online platform that...
0
2023-05-08T05:37:49
https://dev.to/lalicatbrowser/whoernet-ip-checker-2b8n
whoerip, whoernet, lalicat, antidetectbrowser
# [whoer.net IP](https://www.lalicat.com/whoer-net-proxy-ip-checker) Checker

## Overview of Whoer.net

[Whoer.net](https://www.lalicat.com/whoer-net-detection) is an online platform that provides users with various tools and resources aimed at safeguarding their online privacy and security. The website offers a wide range of services, such as a VPN checker, the Whoer IP checker, the Whoer proxy checker, a DNS leak test, and many others to help users maintain their security and privacy while browsing online.

## VPN check

The VPN checker tool provided by Whoer.net is widely recognized as one of the most popular and reliable tools available on the website. It is a powerful, user-friendly tool that lets users test their VPN connection and make sure it is working correctly, so they can identify and resolve any security and privacy issues affecting it.

How does the tool accomplish this? First, it checks for DNS leaks, which occur when a VPN connection fails to route DNS queries through the VPN tunnel, exposing the user's browsing history and other sensitive information. The tool also checks for WebRTC leaks, which occur when a user's real IP address is exposed through WebRTC, a technology used by some web browsers to support real-time communication. It checks for IPv6 leaks as well, since some VPNs may leak IPv6 traffic that can reveal the user's real IP address and other private data. In addition, you can obtain a detailed report on the performance of the VPN connection, including its encryption, its security protocols, and how fast and stable the service you are using is.

## IP Checker

This feature of Whoer.net is powerful and versatile, since it enables users to determine the geographical location of a given IP address, which can be used for a variety of purposes.
One of the primary uses of the Whoer IP check tool is to verify that the website or server a user is connecting to is legitimate and secure rather than potentially malicious. This is especially important when carrying out online transactions or sharing sensitive information over the internet, as it can help prevent identity theft and other kinds of online fraud.

The Whoer IP checker also plays an important role in online advertising and marketing, since advertisers and marketers may use IP address information to deliver targeted ads to users based on their geographical location. The checker lets users see exactly what information is being collected about them and where it is being used.

## Whoer.Net Detection did not reach 100%

Its check results are generally accurate, but will sometimes not be 100% consistent on the time zone and system time:

### 1. Language is different

Solution: select the browser language that corresponds to the country and region of the proxy IP you set. You can also simply enable [Set language based on IP] and let the system set the browser language automatically according to the IP you configured. This link lists the abbreviation codes for languages around the world: https://www.lalicat.com/language

![whoer.net ip checker](https://help.lalicat.com/lalicat/wp-content/uploads/2022/03/7.png)

### 2. DNS mismatch and system time mismatch

Both cases occur because the IP database on the detection website is not updated in real time, so not all IPs are current. If the data center of the proxy IP you use has been adjusted while the detection website's IP database is still old, the website will show wrong information; such detection websites should only be used as a reference. Simply enable [Set time zone based on IP] in the fingerprint browser's basic settings.

There are two workarounds: it is recommended to replace the proxy IP and try again.
If everything is in the same country and region, you can also ignore this problem. Or enter this site for verification: https://ip-api.com

As long as this website shows the correct region and time zone for the following four parameters, you do not have to worry about the system time, language, or time zone mismatches reported by whoer.net, as long as you check [Set time zone based on IP].

![whoer.net ip checker](https://help.lalicat.com/lalicat/wp-content/uploads/2022/03/timezone.png)

![whoer.net ip checker](https://help.lalicat.com/lalicat/wp-content/uploads/2022/03/ip-api-test-1024x719.png)

Why are the DNS and IP countries different? Does this have an impact on the account? It has no bearing on the account and no impact on it. The country the DNS resolves in depends on the website's own systems and has nothing to do with the IP. For example, your IP may be located in the United States and you visit a US website, yet the DNS resolution is carried out in Canada or a European country. Why? Because in that case the target site's servers are located in Canada or a European country, and your request depends on those servers' systems. If you make a request to a US website and some servers in the US have no capacity to accept your request, the request will be sent to servers in other countries. This is perfectly normal, with nothing strange about it. So DNS does not affect your work and has nothing to do with the quality of the IP.

### 3. Other issues

As long as the following three items are displayed in green, you can safely use this browser configuration. Red indicates a problem with the proxy IP you are using: the proxy IP is blacklisted in this detection site's database. The testing websites on the market cannot tell whether an IP is clean; they are only a reference.
Whether your business is affected, and whether it can pass, mainly depends on how the business website operates - on whether the proxy IP you use is on that business website's blacklist.

![whoer.net ip checker](https://help.lalicat.com/lalicat/wp-content/uploads/2022/03/6.png)

Solutions for this situation: change the IP, or set WebRTC to disabled mode, though we recommend the altered mode.

![whoer.net ip checker](https://help.lalicat.com/lalicat/wp-content/uploads/2022/03/webRTC1.png)

![whoer.net ip checker](https://help.lalicat.com/lalicat/wp-content/uploads/2023/03/image-1024x512.png)

## Proxy Checker

Apart from the VPN checker and the IP address check, Whoer.net also allows you to check the functionality and security of a proxy server. This is a valuable resource for anyone who wants to confirm that their proxy connection is performing well and stay clear of possible security threats, helping to guarantee a private and secure connection.

First of all, just as with the VPN checker, any potential safety risks in the proxy connection can be recognized with this tool. Another use of the tool is troubleshooting proxy connection issues: if you face slow or inconsistent performance from a proxy, you can use Whoer to examine the connection and identify any potential problems. This helps users gain better insight into the root cause and take measures to handle it, enhancing the overall performance and reliability of the proxy connection.

Overall, the Whoer proxy checker is a powerful and versatile tool that can be used to ensure the security and privacy of a proxy connection. Whether it is identifying potential security vulnerabilities or troubleshooting issues with a proxy connection, this tool is an essential resource for anyone aiming to maintain their online privacy and security.
I suggest employing the **Lalicat [antidetect browser](https://www.lalicat.com)** to stay free from possible cyber risks when conducting online activities, since it can change various parameters such as your IP, location, language, canvas, and device to make you appear as a completely different individual operating a different device online.

## Conclusion

In brief, Whoer.net is valuable and beneficial for users who want to safeguard their privacy and security when surfing the internet. With convenient, easy-to-use tools and resources, it provides critical information about VPNs, proxies, and other cyber security issues, whether you are an experienced tech expert or a beginner.

The article is from https://www.lalicat.com/whoer-net-proxy-ip-checker and https://www.lalicat.com/whoer-net-detection

Download Lalicat antidetect browser: https://www.lalicat.com/download

Lalicat browser new users free trial: https://www.lalicat.com/contact-us
lalicatbrowser
1,461,139
Analysis And Generation Model ML
Analysis_And_Generation_Model_ML BEFORE READING THIS REPOSITORY IT IS RECOMMENDED TO START...
0
2023-05-08T10:37:44
https://dev.to/insidbyte/analysisandgenerationmodelml-15l
python, machinelearning, programming, opensource
# Analysis_And_Generation_Model_ML

__BEFORE READING THIS REPOSITORY IT IS RECOMMENDED TO START FROM:__ https://github.com/insidbyte/Analysis_and_processing

***I have in fact decided to generate a custom vocabulary to train the model, and it is worth looking at that repository's code first.***

---

# SEE THIS REPOSITORY AT: [https://github.com/insidbyte/Analysis_And_Generation_Model_ML](https://github.com/insidbyte/Analysis_And_Generation_Model_ML)

---

___OPTIONS:___

1)-GENERATE MODEL
2)-TEST WITH HYPERPARAMETER TUNING
3)-PLOT WITH TFIDF VECTORIZER AND SVD TRUNCATED REDUCTION

### Menu

__Starting the ModelsGenerator.py file from the terminal, the following will appear:__

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aj6wwy60xud3raf0iapv.png)

# ___OPTION 1:___

### Model generation:

***I decided to use TF-IDF and a support vector machine because they are highly suitable for text processing, and a support vector machine with a linear kernel is well suited to classification based on two classes, as in our case: positive and negative.***

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hdebr7viz6shxl7c26jr.png)

# Kaggle IMDb dataset example:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7mng52u3jt45hubyj6j.png)

### ***I created a client in Angular to send requests to a Python server***

# CLIENT:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pio900ekqiafdeq0swlx.png)

# SERVER:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/epsndnypy8cwo2i2zs0v.png)

# RESPONSE FROM THE SERVER:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2t652130c0ykq10akvhs.png)

# ANOTHER EXAMPLE:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k7lfdcjizo2f6fftyw1e.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0csg4713mmmvoi4wa6ln.png)

# ___OPTION 2:___

### Test hyperparameters with GridSearchCV and the TF-IDF
vectorizer: ***A good way to automate the test phase and save time searching for the best parameters to*** ***generate the most accurate model possible is to use GrisearchCV made available by scikit-learn*** ***The code in ModelsGenerator.py must be customized based on the dataset to be analyzed*** ### ***WARNING !!*** ### ***If we don't study the scikit-learn documentation we could start infinite analyzes*** ### ***so it is always advisable to know what we are doing*** ### Link scikit-learn: https://scikit-learn.org/ ***Input:*** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/32fx2hqqbhhh3kjux3ht.png) ***Output:*** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j5bjmxa681tr17fxctt7.png) # ___OPTION 3:___ ***Input:*** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xd4wbnxqf93prap424wv.png) ***This option is experimental, the reduction is not applied to model training because it*** ***generates too few components and RAM memory (8GB) of my PC is not enough to generate*** ***more components even if the results are interesting!*** ***Output:*** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jqnh9lgb9jonha1gze9q.png) # CONCLUSION: ***We got satisfactory results and generated a fairly accurate*** ***model this repository will be updated over time*** ***For info or collaborations contact me at: u.calice@studenti.poliba.it***
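To make the TF-IDF intuition above concrete, here is a minimal, dependency-free sketch of the classic weighting scheme. This is an illustration only: the repository itself relies on scikit-learn's `TfidfVectorizer` (whose formula adds smoothing and normalization on top of this), and the two-document corpus below is invented for the example.

```python
import math
from collections import Counter

def tfidf(corpus):
    """Toy tf-idf over a list of tokenized documents.

    tf(t, d) = count of t in d / tokens in d
    idf(t)   = log(N / df(t))   # no smoothing -- deliberately simpler
                                # than scikit-learn's TfidfVectorizer
    """
    n_docs = len(corpus)
    df = Counter()
    for doc in corpus:
        df.update(set(doc))  # document frequency per term
    weights = []
    for doc in corpus:
        counts = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * math.log(n_docs / df[term])
            for term, count in counts.items()
        })
    return weights

docs = [
    "the movie was great".split(),
    "the movie was terrible".split(),
]
w = tfidf(docs)
# "the" occurs in every document, so idf = log(2/2) = 0 and its weight vanishes,
# while "great" and "terrible" each occur in only one document and keep a
# positive weight -- the property that helps the positive/negative classifier.
```

Terms that appear in every document get zero weight, which is why uninformative, stop-word-like tokens stop dominating the feature space.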
insidbyte
1,461,200
Designing a User-Friendly Course Catalog With Pink Design and Nuxt
Creating a course catalogue that is both visually appealing and easy to use can be a challenge....
0
2023-05-08T11:42:17
https://dev.to/hackmamba/designing-a-user-friendly-course-catalog-with-pink-design-and-nuxt-5bdg
javascript, nuxt, beginners, webdev
Creating a course catalogue that is both visually appealing and easy to use can be a challenge. However, with the combination of [Pink Design](https://pink.appwrite.io/?utm_source=hackmamba&utm_medium=hackmamba-blog) and Nuxt.js, we can design an excellent and intuitive catalogue for users. In this article, we'll learn how to build a course catalogue that allows users to search, filter through the available courses, and view course descriptions & prerequisites. ## Sandbox We completed this project in a [Code Sandbox](https://codesandbox.io/p/sandbox/jolly-greider-jkxoif?file=%2Fpages%2Findex.vue&selection=%5B%7B%22endColumn%22%3A43%2C%22endLineNumber%22%3A8%2C%22startColumn%22%3A43%2C%22startLineNumber%22%3A8%7D%5D); fork and run it to get started quickly. <CodeSandbox id="jolly-greider-jkxoif" title="Designing a User-Friendly Course Catalog With Pink Design and Nuxt" /> ## GitHub repository {% embed https://github.com/Olanetsoft/Course-Catalog-With-Pink-Design-Nuxt.js %} ## Getting Started with Nuxt.js [Nuxt.js](https://nuxtjs.org/) is the bedrock of our Vue.js project, providing structure and flexibility while allowing us to scale confidently. It is extensible, offering a robust module ecosystem and hooks engine. Integrating our REST or GraphQL endpoints, favourite CMS, CSS frameworks, and other third-party applications is seamless. ## Project Setup and Installation To create a new project, we will use the command below to scaffold a new project: ```bash npx create-nuxt-app <project-name> ``` A series of prompts will appear as a result of the command. Here are the defaults we recommend: ![Designing a User-Friendly Course Catalog With Pink Design and Nuxt.js](https://paper-attachments.dropboxusercontent.com/s_7AC26CC02ABBD4ECF87A28FA94F444D13C603C1D71496E75B9FDC7D0E5DF2094_1679178738732_Screenshot+2023-03-18+at+23.32.00.png) The command above creates a new Nuxt.js project. 
Next, we navigate to the project directory and start the development server using the command below. ```bash cd <project name> && yarn dev ``` Nuxt.js will start a hot-reloading development environment accessible by default at http://localhost:3000. ![Nuxt.js default page](https://paper-attachments.dropboxusercontent.com/s_7AC26CC02ABBD4ECF87A28FA94F444D13C603C1D71496E75B9FDC7D0E5DF2094_1679178889890_Screenshot+2023-03-18+at+23.34.36.png) ## Appwrite Pink Design Setup To set up an [Appwrite](https://appwrite.io/?utm_source=hackmamba&utm_medium=hackmamba-blog) pink design in this project, we will configure our project using the [Appwrite pink design CDN](https://pink.appwrite.io/getting-started#cdn). Navigate to the `nuxt.config.js` in the root directory and add the following configuration. ```js export default { // Global page headers: https://go.nuxtjs.dev/config-head head: { //... link: [ { rel: "icon", type: "image/x-icon", href: "/favicon.ico" }, { rel: "stylesheet", href: "https://unpkg.com/@appwrite.io/pink", }, { rel: "stylesheet", href: "https://unpkg.com/@appwrite.io/pink-icons", }, ], }, //... }; ``` ## Creating a Sample Course Catalogue JSON In this step, we will create a sample course catalogue which we will utilize in our application as a database for our courses. In the project's root, create a new JSON file called `courses.json` and add the following course sample. 
```json
[
  {
    "id": 1,
    "title": "Introduction to Web Development",
    "description": "Learn the basics of web development with HTML, CSS, and JavaScript.",
    "level": "Beginner",
    "prerequisites": [],
    "category": 1
  },
  {
    "id": 2,
    "title": "Advanced Web Development",
    "description": "Build complex web applications with React, Node.js, and MongoDB.",
    "level": "Intermediate",
    "prerequisites": ["Introduction to Web Development"],
    "category": 1
  },
  {
    "id": 3,
    "title": "Introduction to Mobile Development",
    "description": "Learn the basics of mobile development with React Native and Expo.",
    "level": "Beginner",
    "prerequisites": ["Introduction to Web Development"],
    "category": 2
  },
  {
    "id": 4,
    "title": "Data Analysis with Python",
    "description": "Learn how to analyze and visualize data using Python and Pandas.",
    "level": "Intermediate",
    "prerequisites": ["Introduction to Web Development"],
    "category": 3
  },
  {
    "id": 5,
    "title": "User Interface Design Fundamentals",
    "description": "Learn the basics of user interface design with Sketch.",
    "level": "Beginner",
    "prerequisites": [],
    "category": 4
  },
  {
    "id": 6,
    "title": "Cloud Computing with AWS",
    "description": "Learn how to deploy and scale web applications on Amazon Web Services.",
    "level": "Intermediate",
    "prerequisites": ["Introduction to Web Development"],
    "category": 5
  }
]
```

In the sample data above, each record has an identifier (`id`), a `title`, a `description`, a `level`, a list of `prerequisites`, and a `category`.

## Designing the layout for the Course Catalogue

Let's design an aesthetically appealing layout for the course catalogue in this section by updating the `pages/index.vue` with the following code snippet.
```js <template> <div> <div class="container"> <div class="sidebar"> <!-- Sidebar content goes here --> </div> <div class="content"> <!-- Search and Course list goes here --> </div> </div> <div class="course-modal" v-if="selectedCourse"> <div class="modal-content"> <!-- Modal goes here --> </div> </div> </div> </template> ``` Let's add CSS by creating a new file called `styles.css` in the root directory and updating it with the following CSS snippet. {% embed https://gist.github.com/Olanetsoft/0080a3694d6e1c22bc74e4688cf83334 %} Import the style by updating the `pages/index.vue` with the following code snippet. ```js <template> <!-- --> </template> <style> @import '@/styles.css'; </style> ``` ## Developing the Functionality to Retrieve All Courses In the previous step, we successfully designed the application layout; now, we will implement the functionality to retrieve all courses from the JSON file we created earlier. Inside the `pages/index.vue` file, update it with the following code snippet. 
```js <template> <div> <div class="container"> <div class="sidebar"> </div> <div class="content"> <h1 class="course-catalogue-title"> Course Catalog With Pink Design and Nuxt.js </h1> <!-- Search goes here --> <div class="courses"> <div class="course-card" v-for="course in filteredCourses" :key="course.id" > <div class="card" @click="openCourseModal(course)"> <div class="card-content"> <h3 class="title">{{ course.title }}</h3> <div class="description">{{ course.description }}</div> <div class="details"> <div class="level">{{ course.level }}</div> </div> </div> </div> </div> </div> </div> </div> </div> <div class="course-modal" v-if="selectedCourse"> <div class="modal-content"> <!-- Modal goes here --> </div> </div> </div> </template> <!-- --> <!-- Importing data from a JSON file named courses.json --> <script> import coursesData from '../courses.json' export default { // Defining the data property of the component data() { // Returning an object with a key of 'courses' and its value set to the imported coursesData return { courses: coursesData, } }, } </script> ``` In the code snippet above, - We import data from a JSON file named `courses.json`. - The `data()` function returns an object with the key of `courses` and its value set to the imported `coursesData` variable. We should have something similar to what is shown below. ![Developing the functionality to retrieve all courses](https://paper-attachments.dropboxusercontent.com/s_7AC26CC02ABBD4ECF87A28FA94F444D13C603C1D71496E75B9FDC7D0E5DF2094_1680098408918_Screenshot+2023-03-29+at+14.59.42.png) ## Adding a Search and Filter by Category feature to the Course Catalog In the sources, we have different categories. Let's implement the course filtering search functionality to filter based on the course category by updating the `pages/index.vue` file with the following code snippet. 
{% embed https://gist.github.com/Olanetsoft/a54e5e5ef91ab259683c04b594496927 %} In the code snippet above, we: - Utilized the imported JSON file with course data - Defined a Vue.js component that uses the imported course data as its initial state - Created the component's data properties, which include the search query (`searchQuery`), `categories`, and the currently selected category - Defined a `computed` property that filters courses based on the search query and currently selected category - Returned the filtered courses in the component's computed property We are almost there! We can retrieve all courses and search and filter courses, but we can't view individual courses yet; let's implement that in the following step. ## Implementing the Course View Functionality To implement course view functionality, we will add a `modal` to view the course content whenever a user clicks on an individual course. Let's update the `pages/index.vue` file with the following code snippet. ```js <template> <div> <div class="container"> <!-- //... --> </div> <div class="course-modal" v-if="selectedCourse"> <div class="modal-content"> <span class="close-modal" @click="closeCourseModal">&times;</span> <h2 class="modal-title">{{ selectedCourse.title }}</h2> <div class="modal-description">{{ selectedCourse.description }}</div> <div class="details"> <div class="modal-level">{{ selectedCourse.level }}</div> </div> <div class="modal-prerequisites"> <h3>Prerequisites</h3> <ul v-if="selectedCourse.prerequisites.length"> <li v-for="prerequisite in selectedCourse.prerequisites" :key="prerequisite" > {{ prerequisite }} </li> </ul> <p v-else>None</p> </div> </div> </div> </div> </template> <script> // Import courses data from JSON file import coursesData from "../courses.json"; export default { data() { return { //... selectedCourse: null, }; }, methods: { //... openCourseModal(course) { this.selectedCourse = course; }, closeCourseModal() { this.selectedCourse = null; }, }, computed: { //... 
}, }; </script> //... ``` In the code snippet above, - The course list container dynamically generates a course element for each course in the filtered course list using `v-for` - Each course element has a click event, `@click="openCourseModal(course)"`, that triggers the `openCourseModal` method - The `selectedCourse` variable is set to the `course` object to display the modal using the `openCourseModal` method - The `selectedCourse` becomes `null` when the close modal icon is clicked, which triggers the `closeCourseModal` method Testing the application, we should have something similar to what is shown below. ![Testing the application](https://paper-attachments.dropboxusercontent.com/s_7AC26CC02ABBD4ECF87A28FA94F444D13C603C1D71496E75B9FDC7D0E5DF2094_1680099876413_Course-Catalog-With-Pink-Design-and-Nuxt.gif) ## Conclusion This post demonstrated how to design a user-friendly course catalogue with Pink Design and Nuxt.js. Additionally, you learned how to search, filter through the available courses, and view course descriptions and prerequisites. ## Resources - [Official Nuxt Documentation](https://nuxtjs.org/) - [The Intuitive Web Framework](https://nuxt.com/) - [Appwrite's Pink design system](https://github.com/appwrite/pink/?utm_source=hackmamba&utm_medium=hackmamba-blog) - [Appwrite Documentation](https://appwrite.io/docs/getting-started-for-web?utm_source=hackmamba&utm_medium=hackmamba-blog) - [Cover Image - Nick Morrison](https://unsplash.com/photos/FHnnjk1Yj7Y)
olanetsoft
1,461,375
Comparing Compression Methods on Linux: gzip, bzip2, xz, and zstd
Introduction A recent test was conducted to compare the performance of four popular Linux...
0
2023-05-08T15:36:01
https://dev.to/cookiebinary/comparing-compression-methods-on-linux-gzip-bzip2-xz-and-zstd-3idd
## Introduction A recent test was conducted to compare the performance of four popular Linux compression methods: gzip, bzip2, xz, and zstd. The test involved compressing a 4 GB SQL dump file using the standard compression strength, and then with their strongest compression levels. This article presents the results of these tests, including both compression times and compression ratios, and provides a brief description of each compression method. ## Compression Method Descriptions 1. **gzip**: GNU zip is a widely-used compression tool based on the DEFLATE algorithm. It is designed to be fast and suitable for a broad range of use cases. 2. **bzip2**: bzip2 is a block-sorting file compressor that utilizes block-sorting and Huffman coding techniques to achieve higher compression ratios than gzip, albeit at the cost of slower compression times. 3. **xz**: xz is a compression tool that uses the LZMA (Lempel-Ziv-Markov chain-Algorithm) for high compression ratios. It is designed to offer better compression ratios than gzip and bzip2, while still maintaining reasonable compression and decompression speeds. 4. **zstd**: Zstandard, developed by Facebook, is a modern compression algorithm offering high compression levels and fast compression and decompression speeds. It is becoming increasingly popular due to its balance of speed and compression effectiveness. ## Results ### Compression Ratios The graph shows the resulting compressed file size in MB. **Smaller values are better.** The original file size was 4 GB. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ioyu1qu56prnmejx8npp.png) ### Compression Times The graph shows the compression time in seconds. **Smaller values are better.** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5j5opykxo41lwwfry13o.png) ## Compression Commands Used in the Test In this article, we tested four popular compression methods on an SQL dump file: gzip, bzip2, xz, and zstd. 
We used the following specific commands for each compression method at their standard compression level and their strongest compression level: 1. **gzip:** - Standard Compression: `gzip -k -c database.sql > database.sql.gz` - Strongest Compression: `gzip -9 -k -c database.sql > database.v2.sql.gz` 2. **bzip2:** - Standard Compression: `bzip2 -k -c database.sql > database.sql.bz2` - Strongest Compression: `bzip2 -9 -k -c database.sql > database.v2.sql.bz2` 3. **xz:** - Standard Compression: `xz -k -c database.sql > database.sql.xz` - Strongest Compression: `xz -9e -c database.sql > database.v2.sql.xz` 4. **zstd:** - Standard Compression: `zstd -k -c database.sql > database.sql.zst` - Strongest Compression: `zstd -19 -k -c database.sql > database.v2.sql.zst` ## Conclusion Based on the test results for compressing SQL dump files, zstd offers the best balance between compression time and compression ratio among the four methods. Although xz provides the smallest output file size, it takes a considerably longer time to compress, particularly at its strongest setting. In contrast, zstd performs significantly faster, while still achieving a competitive compression ratio. Therefore, zstd is recommended for users seeking a balance of speed and compression effectiveness when compressing SQL dump files.
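As a footnote, three of the four codecs tested above are exposed directly by Python's standard library (`gzip`, `bz2`, and `lzma` for xz; zstd requires the third-party `zstandard` package), so a rough version of this benchmark can be scripted in a few lines. The repetitive sample data below is a stand-in invented for illustration; its timings and ratios will not match the 4 GB SQL dump used in the test:

```python
import bz2
import gzip
import lzma
import time

# Mildly repetitive sample data standing in for an SQL dump (~590 KB).
data = b"INSERT INTO users VALUES (1, 'alice', 'alice@example.com');\n" * 10_000

codecs = {
    "gzip -6":  lambda d: gzip.compress(d, compresslevel=6),  # gzip's CLI default level
    "gzip -9":  lambda d: gzip.compress(d, compresslevel=9),
    "bzip2 -9": lambda d: bz2.compress(d, compresslevel=9),
    "xz -6":    lambda d: lzma.compress(d, preset=6),         # xz's CLI default level
    "xz -9":    lambda d: lzma.compress(d, preset=9),
}

for name, compress in codecs.items():
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(out)
    print(f"{name:10s} {len(out):>8d} bytes  ratio {ratio:6.1f}x  {elapsed:7.3f}s")
```

Because the sample line repeats exactly, every codec compresses it far better than it would real dump data; only the relative comparison between codecs is meaningful here.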
cookiebinary
1,461,515
How to Create a Custom WPF Material Theme
Learn how to create a custom WPF material theme in your desktop application. See more from ComponentOne today.
0
2023-05-08T18:21:35
https://www.grapecity.com/blogs/how-to-create-a-custom-wpf-material-theme
webdev, devops, dotnet, tutorial
--- canonical_url: https://www.grapecity.com/blogs/how-to-create-a-custom-wpf-material-theme description: Learn how to create a custom WPF material theme in your desktop application. See more from ComponentOne today. --- In the latest ComponentOne 2023 v1 release, we’ve made theme customization easier for our WPF controls. You usually picked a theme as-is from our collection, created a new look by just setting brush properties, or had no theme. But now you have a fourth option which starts with our modern, material theme, and you can make it your own with different brush combinations. We designed our material theme following best practices defined by [Material.io](https://m3.material.io/). By customizing our WPF material theme, you can choose a light or dark background and add a pop of accent color to match your application or company branding. Our WPF material theme even supports standard .NET controls. ![WPF Themes Material Custom](https://files.grapecity.com/gc-website-media/nfeac5ue/01-wpf_themes_materialcustom.png?width=500&height=299.3351465699607) This blog shows how to modify our .NET 6 Material theme to easily use a different color scheme across an entire WPF application. ## How to Customize the WPF Material Theme In this example, we will customize an existing theme. It helps first to understand how to use an existing theme which you can learn more about here: [How to Add WPF Themes to Style Your Desktop Applications](https://www.grapecity.com/blogs/how-to-add-wpf-themes-to-style-your-desktop-applications). That topic detailed the three ways that you can set a WPF theme. For this example, we will use the 2nd approach by code to set the ComponentOne material theme (C1ThemeMaterial). 
It looks something like this:

```
C1.WPF.Themes.C1ThemeMaterial myTheme = new C1.WPF.Themes.C1ThemeMaterial();
myTheme.Apply(this); // applies theme to entire window
```

Note: You will need to update to the 2023 v1 .NET 6 version of ComponentOne WPF controls (6.0.20231.*), and you will need to add the C1.WPF.Themes and C1.WPF.Themes.Material (or MaterialDark) packages.

![Custom WPF Themes Material](https://files.grapecity.com/gc-website-media/2p0dsfdj/02-c1wpfthemesmaterialnuget.png?width=500&height=276.0702524698134)

Next, in your WPF application, add a Resource Dictionary XAML file. You can name it something like CustomTheme.xaml.

![WPF Resource Dictionary](https://files.grapecity.com/gc-website-media/ebmh1bws/03-wpfresourcedictionary.png?width=500&height=272.19840783833433)

In this file, we will override key brushes by names such as PrimaryColor, BackgroundColor, OnSurfaceColor, and others specific to menus, hyperlinks, and alternate colors. This is how we can easily customize the existing theme. Set the **build action of the dictionary to Embedded Resource** and copy in the XAML below as a starting point.
```
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

    <!-- Color & Brush used to build Base brushes -->
    <Color x:Key="PrimaryColor">#a31688</Color>
    <Color x:Key="BackgroundColor">#1688a3</Color>
    <Color x:Key="OnSurfaceColor">White</Color>
    <SolidColorBrush x:Key="MenuHighlightBrush" Color="{StaticResource PrimaryColor}" Opacity="0.04"/>
    <SolidColorBrush x:Key="SubtotalBackground" Color="#0d5060" />
    <Color x:Key="SelectedColor">#026a81</Color>

    <!-- C1HyperLinkButton Foreground-->
    <SolidColorBrush x:Key="HyperLinkForeground" Color="#dafdf7" />
    <SolidColorBrush x:Key="IndicatorBrush" Color="#93edf1" />
    <SolidColorBrush x:Key="SliderThumb.Pressed.Background" Color="#026a81"/>
    <SolidColorBrush x:Key="SliderThumb.Pressed.Border" Color="#93edf1"/>
    <SolidColorBrush x:Key="FocusBrush" Color="{StaticResource SelectedColor}"/>
    <SolidColorBrush x:Key="AccentBrush" Color="{StaticResource PrimaryColor}"/>
</ResourceDictionary>
```

The brush names are based on the [material.io color system](https://m3.material.io/styles/color/the-color-system/key-colors-tones). If you need to customize more, you can find more brushes in the source XAML of our Material theme (installed to C:\Program Files (x86)\ComponentOne\WPF Edition\Resources\v6\C1WPFThemesMaterial).

Lastly, modify the code from above with a new overload that passes this CustomTheme.xaml file. You must pass in the assembly type and file name with a full namespace. In my sample, called “MyProject”, it would look like this:

```
myTheme = new C1ThemeMaterial(typeof(MyProject.App).Assembly, "MyProject.CustomTheme.xaml");
```

And that’s it! You can download the [full sample here on GitHub](https://github.com/GrapeCity/ComponentOne-WPF-Samples/tree/master/NET_6/Themes/ThemeExplorer).
![WPF Custom Theme](https://files.grapecity.com/gc-website-media/0kgjvutj/04-wpf_custom_theme_teal_purple.png?width=500&height=260.0919775166071) ## Material Theme Builder If you are not a designer - do not worry! You can take advantage of the [material.io Theme Builder](https://m3.material.io/theme-builder#/custom) to help. Build a custom color scheme to map dynamic color, use as fallback colors, or implement a branded theme. The color system automatically handles critical adjustments that provide accessible color contrast. Use the colors output here as your primary and secondary on surface brushes. ![Material Theme Builder](https://files.grapecity.com/gc-website-media/43zkve4x/05-material_theme_builder.png?width=500&height=331.6326530612245)
chelseadevereaux
1,461,614
Let's Get Started with Flask, Shall We?
No, not a Vacuum or Thermos flask, okay? I mean Flask, a very popular Python micro-web framework. It...
0
2023-05-08T19:16:08
https://kambale.dev/lets-get-started-with-flask
---
title: Let's Get Started with Flask, Shall We?
published: true
date: 2021-08-12 21:05:06 UTC
tags:
canonical_url: https://kambale.dev/lets-get-started-with-flask
---

_No, not a Vacuum or Thermos flask, okay?_

I mean Flask, a very popular Python micro-web framework. It is called "micro" because it requires no particular tools or libraries to develop web applications. It does not make you validate forms or use a database abstraction layer, or any other components for which pre-existing third-party libraries provide the basic functions. This particular _flask_ has gat it all, to begin with.

_Kati_, here's how we gonna roll it:

1. **Installation**, the very start of bugs, though no one will tell you.
2. Run the only successful program, **"Hello World!"**, with Flask.
3. Check through the **Directories**.
4. Might as well want to check how **Files** are structured.
5. We shall do a little bit of **Configuration**.
6. And pray it works out because then, we do **Initialization**.
7. Sometimes you have to jump in the air if it works out, but we shall **Run, Flask, Run**!
8. If you doubt what you see, how about **Views**?
9. Not impressed yet? Okay, hold on for the **Templates**.
10. Voilà, we gat it, shall we do a **Conclusion**?

_Kati tusimbule..._

# Installation

We need the following installed to get going, otherwise don't _kusimbula_:

- [Python](https://python.org) (in this case, Python 3). If you already have Python installed on your system, you should see the following output when you run `$ python` on the command line:

```
$ python
Python 3.8.10 (default, Jun 2 2021, 10:49:15)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

- [virtualenv](https://virtualenv.pypa.io/en/stable/) and [virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/)

Installing virtualenv creates a Python environment that we need to keep all the dependencies used by different Python projects together.
And installing virtualenvwrapper gives us a set of extensions that provide simpler commands for using virtualenv. To do this, we need our little friend `pip`:

```
$ pip install virtualenv
$ pip install virtualenvwrapper
$ export WORKON_HOME=~/Envs
$ source /usr/local/bin/virtualenvwrapper.sh
```

Now, to create and activate a virtualenv, run the following commands:

```
$ mkvirtualenv flask-env
$ workon flask-env
```

**Note:** `flask-env` is a custom name; name your environment something you can easily remember.

Oh, yeah, now we have a virtual environment called `flask-env`, which is activated and running. In this env, all dependencies we install will be kept.

**Note:** Remember to always activate this env to work on this or other projects.

Great, now let's create a directory for our app where all our files will go:

```
$ mkdir flask-project
$ cd flask-project
```

- [Flask](http://flask.pocoo.org/)

Feeling good? Let's now install, yes, Flask. We'll need our little friend `pip` again:

```
$ pip install Flask
```

Kyo! Let's see what is contained in the flask (definitely _harimu amaate_); there are some dependencies that come along:

```
$ pip freeze
click==6.6
Flask==2.0.1
itsdangerous==0.24
Jinja2==2.11.2
MarkupSafe==0.23
virtualenv==20.0.17
Werkzeug==0.11.11
wadllib==1.3.3
wrapt==1.12.1
zipp==1.0.0
```

_Wazireeba_? So, Flask uses Click (Command Line Interface Creation Kit) for its command-line interface to add custom shell commands for your app. `itsdangerous` provides security when sending data using cryptographic signing. `Jinja2` (not Jinja where they brew Nile Special from) is a powerful template engine for Python, while `MarkupSafe` is an HTML string handling library. `Werkzeug` is a utility library for WSGI (Web Server Gateway Interface), a protocol that ensures web apps and web servers can communicate effectively. You can save the output above in a file.
This is good practice because anyone who wants to work on or run your project will need to know the dependencies to install. The following command will save the dependencies in a `requirements.txt` file:

```
pip freeze > requirements.txt
```

# "Hello World!" with Flask

Any beginner must run a "Hello World!" program. To some, this becomes their only successful program in that language, ever! But you don't want to end up like that, do you? So here's how to do our _ting_ in Flask:

Create the following file, `hello_world.py`, in your favourite text editor: VS Code, Atom, Sublime Text 3, and if you ask PHP developers nicely, they will tell you even Microsoft Word, winks:

```
# hello_world.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'
```

To begin, we import the `Flask` class and create an instance of it (the object, `app`). We use the `__name__` argument to indicate the app's module or package so that Flask knows where to find other files such as templates. Then we have a simple function that will display the string `Hello World!`. The preceding decorator simply tells Flask which path to display the result of the function on. In this case, we have specified the route `/`, which is the home URL.

Let's see this in action, shall we? In your terminal, run the following:

```
$ export FLASK_APP=hello_world.py
$ flask run
 * Serving Flask app "hello_world.py"
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
```

- The first command tells the system which app to run.
- The next one starts the server.
- Now enter the specified URL (http://127.0.0.1:5000/) in your browser; don't worry, even **Internet Explorer** is faster here.
![flasssskkk.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628797627560/J9BaFalXp.png)

_Twashuba twanywa_, it did work!

# Flask Directories

With only one functional file (`hello_world.py`), our project is far from a fully-fledged real-world web application, which comes bundled up with a lot of files. _Therafwa_, it is important to maintain a good directory structure to organize the different components of the application separately. The following are some of the common directories in a Flask project:

1. `/app`: This is a directory within `flask-project`. We'll put all our code in here, and leave other files, such as the `requirements.txt` file, outside.
2. `/app/templates`: This is where all HTML files will go.
3. `/app/static`: This is where static files such as CSS and JavaScript files as well as images usually go. However, we won't be needing this folder for this tutorial since we won't be using any static files.

```
$ mkdir app app/templates
```

With that command, your project directory should now look like this:

```
flask-project
    app
        templates
    hello_world.py
    requirements.txt
```

Well, you can see where the `hello_world.py` is now. Kinda out of place now, isn't it? _Rindaaho_, we'll fix it soon.

# Flask File Structure

In the "Hello World!" example, we only had one file (remember it?). Well, for us to build a huge website, we'll need more files that serve various functions. And this brings about such a file structure that is common with most Flask apps:

- `run.py`: This is the application's entry point. We'll run this file to start the Flask server and launch our application.
- `config.py`: This file contains the configuration variables for your app, such as database details.
- `app/__init__.py`: This file initializes a Python module. Without it, Python will not recognize the app directory as a module.
- `app/views.py`: This file contains all the routes for our application.
This will tell Flask what to display on which path.
- `app/models.py`: This is where the models are defined. A model is a representation of a database table in code. However, because we will not be using a database in this tutorial, we won't be needing this file.

Now go ahead and create these files, and also delete `hello_world.py` since we won't be needing it anymore:

```
$ touch run.py config.py
$ cd app
$ touch __init__.py views.py
$ rm hello_world.py
```

And there you have your directory structure, homeboy:

```
flask-project
    app
        __init__.py
        templates
        views.py
    config.py
    requirements.txt
    run.py
```

Heads up! Time to code now...

# Configuration

The `config.py` file should contain one variable per line, as you see below:

```
# config.py

# Enable Flask's debugging features. Should be False in production
DEBUG = True
```

**Note:** This config file is very simplified and would not be appropriate for a more complex application. For bigger applications, you may choose to have different `config.py` files for testing, development, and production, and put them in a config directory, making use of classes and inheritance. You may have some variables that should not be publicly shared, such as passwords and secret keys. These can be put in an `instance/config.py` file, which should not be pushed to version control.

# Initialization

Now, we have to initialize our app with all our configurations. This is done in the `app/__init__.py` file. Note that if we set `instance_relative_config` to `True`, we can use `app.config.from_object('config')` to load the `config.py` file.

```
# app/__init__.py
from flask import Flask

# Initialize the app
app = Flask(__name__, instance_relative_config=True)

# Load the views
from app import views

# Load the config file
app.config.from_object('config')
```

# Flask, Run!

All we have to do now is configure the `run.py` file so we can start the Flask server.
```
# run.py

from app import app

if __name__ == '__main__':
    app.run()
```

To use the command `flask run` as we did before, we would need to set the `FLASK_APP` environment variable to `run.py`:

```
$ export FLASK_APP=run.py
$ flask run
```

First error? Nah, it is just a 404 page because we haven't written any views for our app. That'll be fixed as we go on.

# Views in Flask

With our "Hello World!" example, you by now have an understanding of how views work. We use the `@app.route` decorator to specify the path where we would like the view to be displayed. Let's now see what else we can do with views.

```
# views.py

from flask import render_template

from app import app

@app.route('/')
def index():
    return render_template("index.html")

@app.route('/about')
def about():
    return render_template("about.html")
```

Now, Flask comes with a method, `render_template`, which we use to specify which HTML file should be loaded in a particular view. Of course, the `index.html` and `about.html` files do not exist yet, so Flask will give us a `Template Not Found` or `Internal Server Error` when we navigate to these paths.

# Templates

Flask allows us to use a variety of template languages, but `Jinja2` is the most popular one. Jinja2 provides syntax that allows us to add some functionality to our HTML files, like `if-else` blocks and `for` loops, and also use variables inside our templates. Jinja2 also lets us implement template inheritance, which means we can have a base template that other templates inherit from.
Let's begin by creating the following three HTML files:

```
$ cd app/templates
$ touch base.html index.html about.html
```

We'll start with the `base.html` file, using a slightly modified version of this example Bootstrap template:

```
<!-- base.html -->

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>{% block title %}{% endblock %}</title>

    <!-- Bootstrap core CSS -->
    <link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet">

    <!-- Custom styles for this template -->
    <link href="https://getbootstrap.com/examples/jumbotron-narrow/jumbotron-narrow.css" rel="stylesheet">
  </head>

  <body>
    <div class="container">
      <div class="header clearfix">
        <nav>
          <ul class="nav nav-pills pull-right">
            <li role="presentation"><a href="/">Home</a></li>
            <li role="presentation"><a href="/about">About</a></li>
            <li role="presentation"><a href="http://flask.pocoo.org" target="_blank">About Flask</a></li>
          </ul>
        </nav>
      </div>

      {% block body %}
      {% endblock %}

      <footer class="footer">
        <p> 2021 Wesley Kambale</p>
      </footer>
    </div> <!-- /container -->
  </body>
</html>
```

Did you notice the `{% block %}` and `{% endblock %}` tags? We'll also use them in the templates that inherit from the base template:

```
<!-- index.html-->

{% extends "base.html" %}
{% block title %}Home{% endblock %}
{% block body %}
<div class="jumbotron">
  <h1>Flask Is Awesome</h1>
  <p class="lead">And I'm glad to be learning so much about it!</p>
</div>
{% endblock %}
```

```
<!-- about.html-->

{% extends "base.html" %}
{% block title %}About{% endblock %}
{% block body %}
<div class="jumbotron">
  <h1>The About Page</h1>
  <p class="lead">You can learn more about my website here.</p>
</div>
{% endblock %}
```

We use the `{% extends %}` tag to inherit from the base template. We insert the dynamic content inside the `{% block %}` tags. Everything else is loaded right from the base template, so we don't have to re-write things that are common to all pages, such as the navigation bar and the footer.
Refresh that browser, _iwe mwanawe_!

![hommme.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628801453625/R8Ieu5hn0.png)

![aboooout.png](https://cdn.hashnode.com/res/hashnode/image/upload/v1628801478745/6Dw52DicK.png)

_Kati awo nebwentema!_, but first:

# Conclusion

You made it! Congratulations on building your first Flask web application! Explore more. You now have a great foundation to start building more complex apps. Check out the [official documentation](http://flask.pocoo.org/docs/0.11/) for more information and examples.

Had fun? Follow [@WesleyKambale](https://twitter.com/WesleyKambale) on Twitter.

Inspiration and credits for the HTML files go out to [Mbithe Nzomo](https://twitter.com/mbithenzomo).

Any discussions? Let's have a conversation in the comments below.
kambale
1,486,339
Hyrum's law in modern frontend
The best professional books for you are the books that are revealing situational problems that you...
0
2023-05-30T21:38:02
https://dev.to/mihneasim/hyrums-law-in-modern-frontend-1ma5
frontend, softwareengineering, javascript, programming
The best professional books for you are the books that are revealing situational problems that you will encounter in your engineering journeys. I had the fortune to read about Hyrum's law in [Software Engineering at Google: Lessons Learned From Programming Over Time](https://www.google.ro/books/edition/Software_Engineering_at_Google/V3TTDwAAQBAJ) (Great one!) and see that law happening magically just before my eyes, in a team near me. The law says that: > With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviours of your system will be depended on by somebody One colleague (desperately) needed to access a couple of properties of a web component that was developed and maintained by a different team, but consumed in his own micro frontend. For the simplicity of the example, let's consider the default web component generated by OpenWC's npm generator: ```javascript class AppMain extends LitElement { static properties = { header: { type: String }, } ... render() { return html` <main> <h1>${this.header}</h1> .... `; } } ``` Consider you'd be consuming this component in your template ```javascript <app-main></app-main> ``` And you, as a consumer, imagine yourself not knowing the implementation of the component, but needing to access the data rendered through the means of the `header` property. You'd probably select the element in DevTools, then jump to Console, type `$0.` and see what properties pop up. That's the exact process one colleague followed and ended up using the *hidden* *dunder* property (double underscore property) - the equivalent `$0.__header` in our example. ## Lit's quirks So what's up with lit shadowing any `.property` in a `.__property`? Why is there a property called `__header` with, surprisingly-not-so-surprisingly, the same value as `header` property? 
Well, we've run into an implementation detail that at no point was intended to be used by component consumers, or any developer building web components on top of lit. But since I can't leave you wondering, lit's actual component properties are setters and getters, whilst the true value is stored in this hidden dunder property. By the use of this wrapping, at any point, lit can detect when the setter is writing something different than what it's already stored so that the rendering cycle, as well as update hooks are triggered only then. Simple and effective! > Fun, hackish experiment: If you do set a value for \_\_header, it won't trigger the render process, since you're bypassing lit's setter, so the template will not update. If you then do set the exact same value through the *header* property, again the render cycle will be skipped, because it would be compared against the value in \_\_*header*. ![Animation showing hidden lit element properties called accessors or descriptors](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/toc282e29aliiasokdgg.gif) ### Advanced Deep Dive If you reaaaally want to see the bottom of what I just explained, then head to lit-element's [source code](https://github.com/lit/lit-element/blob/v2.4.0/src/lib/updating-element.ts#L329). What we call properties, are actually called accessors or descriptors in lit, and one can implement and configure a custom accessor, should someone need a different one, for a given property, on a given component class. As far as I can see, the property itself, created on the prototype, is an *accessor*, while the assigned value is called a *property* *descriptor*. ```typescript static createProperty( name: PropertyKey, options: PropertyDeclaration = defaultPropertyDeclaration) { // Note, since this can be called by the `@property` decorator which // is called before `finalize`, we ensure storage exists for property // metadata. 
this._ensureClassProperties(); this._classProperties!.set(name, options); // Do not generate an accessor if the prototype already has one, since // it would be lost otherwise and that would never be the user's intention; // Instead, we expect users to call `requestUpdate` themselves from // user-defined accessors. Note that if the super has an accessor we will // still overwrite it if (options.noAccessor || this.prototype.hasOwnProperty(name)) { return; } const key = typeof name === 'symbol' ? Symbol() : `__${name}`; const descriptor = this.getPropertyDescriptor(name, key, options); if (descriptor !== undefined) { Object.defineProperty(this.prototype, name, descriptor); } } ``` ## But what about Hyrum's law? Lit dunder properties were never meant to be used. However, since they're observable, and usable, voilà. Strongly typed, mature, compiled, object oriented programming languages like C++ or Java employ the use of *Access Modifiers*, offering developers the possibility to precisely define which members of a class can be *used* (depending on its type, you can read it, set it, execute it etc.). All you need is the following cheatsheet: ![Access modifiers in Java](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6kprehws6l3agtxo7vpp.png) Typescript uses access modifiers as presented before, but the concept is called **Member Visibility**, and the list is shorter, only three - private, protected, public - where *public* is the default and no sane person specifies it explicitly in code. There's no need for "default", as Typescript does not have such a strong notion as *packages* in C#/Java. Moreover, as stated in their docs, > Like other aspects of TypeScript’s type system, private and protected are only enforced during type checking. Javascript itself didn't have any encapsulation, until recently. It's called [Private class fields](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes/Private_class_fields) and .. 
you guessed it, it allows you to explicitly prevent your consumers from directly accessing specified members of your class. > Class fields are [public](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes/Public_class_fields) by default, but **private class members** can be created by using a hash `#` prefix. The privacy encapsulation of these class features is enforced by JavaScript itself. ```javascript class ClassWithPrivate { #privateField; #privateFieldWithInitializer = 42; #privateMethod() { // … } static #privateStaticField; static #privateStaticFieldWithInitializer = 42; static #privateStaticMethod() { // … } } ``` The browser support is good, but not fabulous. The syntax is understood by Chrome &gt;= 74 (April 2019), Safari &gt;= 14.1 (April 2021), so if you want to go lower with supported clients, consider transpiling your production code using babel &gt;= 7.14 which enables the required plugins by default, or include them yourself in the transpiling configuration. I've seen great enthusiasm throughout my team for adopting this level of encapsulation, despite adding one more concern (class member access) to the JavaScript developer; a developer previously spoiled with simplicity, but simplicity that may backfire on long-lived software. As they say in Google, *"Software engineering is programming integrated over time".*
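To make the accessor mechanism discussed above concrete, here is a minimal hand-rolled sketch. This is not Lit's actual code: `SketchElement` and its `updated` flag are invented for illustration, standing in for Lit's property system and render trigger. It shows a property backed by a dunder field, including the render-skipping quirk from the earlier "fun, hackish experiment":

```javascript
// Sketch: a property backed by a "__"-prefixed field via a
// getter/setter pair, mimicking the pattern described above.
class SketchElement {
  static createProperty(name) {
    const key = `__${name}`;
    Object.defineProperty(this.prototype, name, {
      get() { return this[key]; },
      set(value) {
        // Only react when the value actually changes,
        // like Lit skipping the render cycle.
        if (this[key] !== value) {
          this[key] = value;
          this.updated = true; // stand-in for triggering a render
        }
      },
    });
  }
}
SketchElement.createProperty('header');

const el = new SketchElement();
el.header = 'Hello';      // goes through the setter, "render" fires
console.log(el.__header); // 'Hello' - the observable implementation detail
el.__header = 'Bypassed'; // writes the backing field, no "render"
console.log(el.header);   // 'Bypassed'
```

A private `#header` field would close exactly this gap: the backing store would be unreachable from outside the class, so no consumer could come to depend on it.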
mihneasim
1,486,522
#Anewera #we move #Whenyoustoplearning #youstopliving
One can actually learn any skill insofar he or she is whatever that is conceived can be achieved.
0
2023-05-31T03:21:11
https://dev.to/vinlight16/anewera-we-movewhenyoustoplearning-youstopliving-25pn
One can actually learn any skill insofar he or she is whatever that is conceived can be achieved.
vinlight16
1,487,086
How BEM (Block-Element-Modifier) CSS actually Work!
BEM, or Block-Element-Modifier, is a popular naming convention for writing CSS code. It was developed...
0
2023-05-31T12:51:52
https://dev.to/aramoh3ni/how-bem-block-element-modifier-css-actually-work-ogl
css, bem, frontend
BEM, or Block-Element-Modifier, is a popular naming convention for writing CSS code. It was developed by the team at Yandex, a Russian search engine, as a way to create scalable and maintainable CSS code for their large and complex web projects. BEM has since become widely adopted by developers around the world and is recognized as one of the best practices for writing CSS. The basic idea behind BEM is to create modular, reusable code blocks that can be combined to build larger web components. Each block represents a distinct part of the user interface, such as a header, footer, navigation menu, or form. Blocks can then be broken down into smaller elements, which are the individual components that make up the block. For example, a navigation menu block might consist of elements like the menu items, the logo, and the search bar. BEM also allows for the use of modifiers, which are used to change the appearance or behavior of a block or element. Modifiers can be used to change the color, size, or layout of a block, or to add or remove certain features or functionality. The naming convention used in BEM is what makes it so powerful. Each block, element, and modifier is given a unique, descriptive name that reflects its purpose and function. This makes it easy for developers to understand and organize their code, and for designers to make changes to the design without breaking the CSS. 
Here's an example of how BEM might be used to style a simple navigation menu:

#### HTML:

```html
<nav class="nav">
  <ul class="nav__list">
    <li class="nav__item"><a href="#" class="nav__link">Home</a></li>
    <li class="nav__item"><a href="#" class="nav__link nav__link--active">About</a></li>
    <li class="nav__item"><a href="#" class="nav__link">Contact</a></li>
  </ul>
</nav>
```

#### CSS:

```css
.nav {
  /* Styles for the navigation block */
}

.nav__list {
  /* Styles for the navigation list element */
}

.nav__item {
  /* Styles for the navigation item element */
}

.nav__link {
  /* Styles for the navigation link element */
}

.nav__link--active {
  /* Modifier for the active navigation link */
}
```

In this example, the "nav" block represents the overall navigation menu. The "nav__list", "nav__item", and "nav__link" elements represent the different parts of the navigation menu. The "nav__link--active" modifier is used to style the active link in the navigation menu.

One of the key benefits of BEM is that it allows for easy modification of the CSS code. If you need to change the style of the active navigation link, for example, you can simply update the "nav__link--active" modifier without affecting any of the other parts of the navigation menu. This makes it easier to maintain and update the CSS code over time, and reduces the risk of introducing errors or breaking the design.

Another benefit of BEM is that it promotes modularity and reusability of code. By breaking down the user interface into smaller blocks and elements, developers can create more flexible and adaptable components that can be used across different parts of the website. This can help to speed up development time and reduce code redundancy, which is especially important for large and complex web projects.

Overall, BEM is a powerful and flexible naming convention for writing CSS code.
By using descriptive and consistent names for blocks, elements, and modifiers, developers can create more maintainable and scalable code that is easier to understand and modify over time. If you're looking for a best practice for writing CSS, BEM is definitely worth considering. #### Learn More: [getbem.com/introduction](https://getbem.com/introduction/)
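The naming convention is regular enough that class names are often generated programmatically. As a hedged sketch (the `bem` helper below is invented for illustration, not part of any official BEM tooling), a tiny function can build `block__element--modifier` strings:

```javascript
// Builds BEM class strings following block__element--modifier.
// Pass a falsy element (e.g. null) to attach modifiers to the block.
function bem(block, element, ...modifiers) {
  const base = element ? `${block}__${element}` : block;
  const mods = modifiers.map(m => `${base}--${m}`);
  return [base, ...mods].join(' ');
}

console.log(bem('nav'));                   // "nav"
console.log(bem('nav', 'link'));           // "nav__link"
console.log(bem('nav', 'link', 'active')); // "nav__link nav__link--active"
console.log(bem('nav', null, 'dark'));     // "nav nav--dark"
```

Helpers like this (or published packages such as `classnames`) keep templates free of hand-concatenated class strings.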
aramoh3ni
1,487,159
Integration Digest: May 2023
🔥 The new Integration Digest for May 2023 is now published! 📚 In this edition, you'll find a wealth...
23,208
2023-05-31T14:38:40
https://wearecommunity.io/communities/integration/articles/3284
api, kafka, opensource, cloud
🔥 The new Integration Digest for May 2023 is now published! 📚 In this edition, you'll find a wealth of articles, ranging from best practices for developer portals to a review of Kubernetes-native API management tools. Don't miss out on both parts of "A Playbook for Enterprise API Adoption". 📢 We're featuring news from integration solution vendors such as Apache Kafka & Confluent, Kong, and Oracle. Learn about the exciting new features of Confluent Platform 7.4, Kora (the Cloud Native Engine for Apache Kafka), and insights into using Kong in Kubernetes. 📦 Stay updated with the latest product and tool releases including Apache Camel 3.20.5, Envoy Gateway 0.4.0, Kong API Gateway 3.3, NATS Server 2.9.17, RabbitMQ 3.11.16, and Tyk Gateway 5.0.2. 🔗 Access the digest here: https://wearecommunity.io/communities/integration/articles/3284 We hope you find the information both useful and interesting!
stn1slv
1,487,491
I'm Switching to Medium: Here's Why
Greetings to my now over 400 followers! Thanks for your support over the last couple of months - I...
0
2023-05-31T19:18:50
https://dev.to/verisimilitudex/follow-me-on-medium-3473
medium, verisimilitudex, piyush, writing
Greetings to my now over 400 followers! Thanks for your support over the last couple of months - I really appreciate it! I'm actually planning on mirroring my posts to Medium for only one reason: a public follower count. If you like my content and would like to continue reading them, then please give me a follow there: https://medium.com/@VerisimilitudeX
verisimilitudex
1,487,760
Text-based tools - the ultimate format for everything
Having lived in the world of technology for two to three decades now, I’ve come to a fundamental...
0
2023-07-06T12:29:36
https://timwise.co.uk/2023/06/01/text-based-tools-the-ultimate-format-for-everything/
---
title: Text-based tools - the ultimate format for everything
published: true
date: 2023-06-01 00:00:00 UTC
tags:
canonical_url: https://timwise.co.uk/2023/06/01/text-based-tools-the-ultimate-format-for-everything/
---

Having lived in the world of technology for two to three decades now, I've come to a fundamental truth: text formats are **the ultimate** format.

> "text formats are **the ultimate** format"
>
> ~ Me, just now

It's funny really because for everything we've invented, of every level of complexity, usability, shininess etc, when it comes down to it, text is still king, just like it was in 1980 when I was still learning to talk.

## Properties of text formats

Things that make text inevitably superior to all other more complicated formats:

- Simple - **nothing** to go wrong.
- Use any text editor you like - vim, [vscode+vim](https://marketplace.visualstudio.com/items?itemName=yzhang.markdown-all-in-one), [intellij+vim](https://plugins.jetbrains.com/plugin/164-ideavim) are my gotos, but there are soooo many.
- Sync, backup and restore are trivial - try as they might, nothing beats a folder-tree of text files.
- They are ultimately portable - no change in technology (windows to linux, desktop to cloud, laptop to mobile) requires you to change anything, text is text, just copy them across and carry on, the ultimate defense against the ever-present pernicious vendor lock-in.
- Conflict resolution is always possible - edited two out-of-sync copies? No problem, there's a plethora of tools ([kdiff3](https://kdiff3.sourceforge.net/) is my favourite), or you can just do it manually if you wish.
- Version control supported - text files are trivially versionable in tools like git, everything understands it and can show diffs etc.
- Simple conventions like markdown, yaml, toml, and even slightly more complicated things like json don't fundamentally break any of the above.
- With some lightweight processing and structure (notably markdown), the same basic format can be automatically converted to a plethora of rich and beautiful forms, and with so many tools understanding formats like markdown you are spoilt for choice.
- Supports emoji - this one is more modern, but its usefulness is not to be underestimated, and thanks to utf-8 and unicode the plain-old-text-file can have rich emotions and symbols too.
- You can use all sorts of interesting tools to process text files, many from the linux cli stack such as `sed`, `grep` (or `ag`), plus full-on shell scripting to automate repetitive tasks [such as making a new blog post](https://github.com/timabell/timwise.co.uk/blob/eff17d609f862a14275c4fa0bd8319d13d59574e/new).

## Amazing things you can do with text files

The below are all things I personally swear by and use daily. I wish more things were like this.

Markdown is by far my favourite text format, and it's incredibly versatile. I'm on a crusade to basically convert everything to plain text / markdown files, having been repeatedly burnt by fancy binary formats (`.doc` anyone?).

GraphViz ("dot" format) is also a notably powerful text-based system.

### Blogging

As per this blog, see ["Setting up a static website/blog with jekyll"](/2019/06/24/setting-up-a-jekyll-blog/) from 2019. No regrets there. Writing this in vim in a terminal.

### Slide decks

reveal.js can parse markdown files with a sprinkling of html & css allowed inline (very handy) and turn them into stunning modern presentations with slick animations and multi-step reveals, amazing.

I was trying to create some slides in google-slides thinking that would be the quick way, ran into some bizarre formatting limitation and went hunting for alternatives. I haven't looked back, at least for things I don't need real-time collaboration on.
You can see what I managed to do with [reveal.js for the Rust Workshop](https://rustworkshop.github.io/slide-decks/) - here's one of the [source slide markdown files](https://github.com/rustworkshop/slide-decks/blob/7eb002bfc1431025b47de97fd20e163456b5d7e5/decks/rust-workshop-master/slides.md?plain=1)

### Note taking

Markdown, VSCode with some markdown plugins, maybe even a [markdown-wiki](https://marketplace.visualstudio.com/items?itemName=kortina.vscode-markdown-notes) tool. [Markor](https://f-droid.org/packages/net.gsantner.markor/) on android. [Syncthing](https://syncthing.net/) to keep them in sync across devices. Works for me, and any conflicts due to editing files out of sync are easier to deal with than [tomboy](https://wiki.gnome.org/Apps/Tomboy)'s nasty XML format (yes I know XML is text but it's still naaaasty).

### Coding

This entry is only half tongue-in-cheek. I think it's worth pointing out that programmers have, after flirting with _many_ other approaches, settled on plain old ASCII as being the one-true-format for explaining to a computer (and other programmers) what the computer is supposed to be doing.

Pay attention to what programmers have learnt, there is much depth here on managing vast amounts of precise information in text form. Especially if you are not a programmer or not used to text tools there is much to learn from this world.

You might think programmers are odd creatures that thrive on unnecessary complexity; nothing could be further from the truth, they (we) are _obsessive_ about solving problems once and for all and being ruthlessly efficient in all things. The fact that programmer practices are seen as odd by the general public is more a sign of just how far programmers have optimised their lives away from the unthinking defaults of the masses than it is of any peculiarity of whim or culture.
### Graphs & flowcharts

The GraphViz dot format is amazing, it takes a bit of getting used to, but once you've got it then you can rearrange your flow chart with vim in a few keypresses and have the whole thing rearranged in milliseconds. Amazing.

There's even some neat web based real-time renderers:

- [https://dreampuf.github.io/GraphvizOnline/](https://dreampuf.github.io/GraphvizOnline/)
- [https://sketchviz.com/](https://sketchviz.com/)

## The yucky bits

The almost-rans:

- Email's mbox format is kinda text, but due to the way it's set up is _horrible_ for sync
- vcf for contacts, what happened there then?!
- ical for calendars, what a disaster, so close but yet never works, shame
- XML - nice try, turned out to be horrible in hindsight, but not before we'd written almost all software to use it (`.docx` anyone?)

The text world is a bit short on collaborative real-time editing - google-docs is still king on that one, though it would be perfectly possible for equivalent tools to be created for the above text formats and tools. Watch this space.

Crappy half-arsed implementations of markdown spoil things too - looking at you Jira/Confluence/Slack (not really a problem of text, more a case of being almost there and then having crappy WYSIWYG implementations wreck it).
timabell
1,488,097
JS Asynchronous-Technical Paper
Callbacks a callback function is a function that is passed as an argument to another...
0
2023-06-01T09:17:51
https://dev.to/codebyank/js-asynchronous-technical-paper-25lo
webdev, javascript, programming, tutorial
### Callbacks

A callback function is a function that is passed as an argument to another function and is intended to be called later, often after an asynchronous operation or a certain event occurs. Callbacks are a fundamental concept in JavaScript for handling asynchronous code execution.

Example:

```javascript
function fetchData(callback) {
  setTimeout(() => {
    const data = 'Sample data';
    callback(data);
  }, 2000);
}

function processData(data) {
  console.log('Processing data:', data);
}

console.log('Start');
fetchData(processData);
console.log('End');
```

### Promises

Promises are a feature introduced in ECMAScript 2015 (ES6) that provide a more structured and readable way to handle asynchronous operations in JavaScript. Promises represent the eventual completion (or failure) of an asynchronous operation and allow you to chain multiple asynchronous operations together.

Example:

```javascript
function fetchData() {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      const data = 'Sample data';
      resolve(data); // Resolving the promise with the fetched data
      // If there's an error, you can reject the promise instead:
      // reject(new Error('Failed to fetch data'));
    }, 2000);
  });
}

function processData(data) {
  console.log('Processing data:', data);
}

console.log('Start');
fetchData()
  .then(processData)
  .catch(error => {
    console.error('Error:', error.message);
  })
  .finally(() => {
    console.log('Cleanup');
  });
console.log('End');
```

Here's how the execution flows:

1. "Start" is logged to the console.
2. The `fetchData` function is called, and a new Promise is created.
3. The asynchronous operation starts with a timer set for 2000 milliseconds.
4. "End" is logged to the console.
5. After 2000 milliseconds, the timer expires, and the promise is resolved with the fetched data.
6. The `then` method is called on the promise, and the `processData` function is passed as the callback.
7. The `processData` function is called with the fetched data.
8. "Processing data: Sample data" is logged to the console.
9. The `finally` method is called, and "Cleanup" is logged to the console.

### Code Control

The event loop plays a crucial role in managing the control flow of asynchronous code execution. It ensures that tasks, including callbacks and events, are executed in a specific order while keeping the JavaScript runtime single-threaded. The event loop follows a cycle that consists of the following phases:

1. **Handle Synchronous Code**: The event loop starts by executing any synchronous code present in the call stack. This includes executing functions, operations, and statements that are not asynchronous.
2. **Handle Asynchronous Operations**: After the synchronous code is executed, the event loop checks if there are any asynchronous operations that have completed.
   - If there are completed asynchronous operations, their associated callbacks or promises are added to the callback queue or microtask queue (depending on the type of task).
   - The callbacks or promises in the queues are not executed immediately; they are waiting for the call stack to be empty.
3. **Event Loop Iteration**: If the call stack is empty, the event loop starts iterating to handle the callbacks and promises waiting in the queues.
   - The event loop checks the microtask queue first. Microtasks generally include promise callbacks, mutation observers, or other high-priority tasks. It processes all the microtasks present in the queue until it is empty.
   - After processing all the microtasks, the event loop moves to the callback queue and starts executing the callbacks. These could be timer callbacks, event handlers, or other lower-priority tasks. It takes one callback from the queue and executes it.
   - The event loop repeats this process, continuously checking and executing tasks from the microtask queue and the callback queue in an alternating manner.

### References

[W3Schools](https://www.w3schools.com/js/js_asynchronous.asp)
[GFG](https://www.geeksforgeeks.org/async-await-function-in-javascript/)
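The queue ordering described in the event loop phases can be observed directly. As a small sketch (the `order` array is just a way to record the sequence), synchronous code finishes first, then the microtask queue drains, then the callback queue runs:

```javascript
// Records the order in which the event loop runs things:
// synchronous code first, then microtasks (promise callbacks),
// then callback-queue tasks (timer callbacks).
const order = [];

order.push('sync');

setTimeout(() => {
  order.push('timer callback'); // callback queue: runs last
  console.log(order.join(' -> '));
}, 0);

Promise.resolve().then(() => {
  order.push('microtask'); // drained before any timer callback
});

order.push('sync end');
```

Running this prints `sync -> sync end -> microtask -> timer callback`: both queued tasks wait for the synchronous code to finish, and the promise callback jumps ahead of the timer even though the timer was scheduled first.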
codebyank
1,488,823
Data Structures in C# Part 2: Lists <T>
Hey There 👋🏻 In part 1 of this series, I talked about arrays, which are cool and provide...
23,236
2023-06-02T13:20:47
https://dev.to/rasheedmozaffar/data-structures-in-c-part-2-lists-e76
csharp, programming, webdev, dotnet
## Hey There 👋🏻

In part 1 of this series, I talked about **arrays**, which are cool and provide amazing functionality, and to extend on the data structures that you the developer could use, we'll be learning about **generic lists** in C#, an essential data structure that any C# developer should have at their disposal.

## So, Let's begin 🚀

**What is a list in C#?**

Lists in C# are a type of the generic collections that can be used to store objects of the same data type, which can then be accessed through their indexes. Well, how are they any different from the arrays that we discussed in part 1, you may ask? Good question. The answer is pretty simple! Lists are dynamic, meaning that they can grow and shrink, whereas arrays have a fixed size which is assumed to be known by the developer at the moment of creating the array.

**How do we create a list?**

Creating a new list is pretty straightforward, let's have a look at a couple of ways:

```csharp
List<string> values = new();
List<string> values = new List<string>();
var values = new List<string>();
```

In the previous code, I showed the 3 ways in which we could declare a new list with the name **values** that will store **string** values.

> You can use whichever syntax you like, it's a subjective preference and has nothing to do with correctness or the like.

In the previous code, we declared a new list of strings, but we haven't provided it with any values. The following code block shows how values are initially added to a list immediately after declaration:

```csharp
List<string> values = new()
{
    "Value1",
    "Value2",
    "Value3"
};
```

In this snippet, I created the **values** list and I added 3 elements to it; elements are **comma separated**.

> NOTE: We could've done the declaration in one step, and the initialization in a separate step, but it's more convenient this way since I knew what values would be seeded in advance.
**How to access elements in a list?** We can access elements the same way we did with **arrays**, through the index of an element, again, **lists are zero-indexed**, just like arrays. The next code snippet prints the final value in the list: ```csharp Console.WriteLine(values[2]); //OUTPUT: Value3 ``` To update an element, we can easily just use its index and provide it a new value, just like the following: ```csharp values[0] = "New Value At 0"; Console.WriteLine(values[0]); //OUTPUT: New Value At 0 ``` **That simple!** ## Let's see the dynamic features of **Lists** As I said previously, one key point that makes lists differ from arrays is their dynamic nature. Let's look at some methods and properties of the generic list so we can see its dynamism in action. **List.Add();**: This method adds a new element to the list, the element has to of course be of the same type as the type you choose when you first create the list. ```csharp List<int> numbers = new(); for(int i = 1; i <= 10; i++) { numbers.Add(i); } ``` The previous code creates a new list of **integers** and uses a for loop to add the numbers from 1 to 10 to our numbers list. **List.Remove();**: This method is contrary to the first one, it removes an item from the list, let's see how we can remove an element that we added to our **numbers** list: ```csharp numbers.Remove(5); ``` After this removal, I can use a foreach loop to print out all the numbers in the numbers list, just as follows: ```csharp foreach(int number in numbers) { Console.Write($"{number}\t"); } //OUTPUT: 1 2 3 4 6 7 8 9 10 ``` And as you can see, the **5** is missing in the sequence. **List.Count**: This property is equivalent to **Array.Length**, it returns the number of elements in a list, it's particularly useful if you want to use a **for** loop. **List.Capacity;**: This property **gets** or **sets** the overall number of elements that the list can hold without having to resize itself. 
If you create a list with one element and print out its capacity, you will see **4** printed out; that means the list can now accommodate **4 elements** before it has to resize and expand itself. Note that the capacity is not brought down automatically when elements are removed from the list; you can call **TrimExcess()** if you want to release the unused space.

> 💡**Pro Tip:** If the number of elements that will go inside the list is known beforehand, then set its capacity to that number so it improves performance. Say you want a list with only 2 items; by setting the capacity to 2, the list won't make it 4 automatically, thus gaining some bits and pieces in performance here and there. But don't worry that much about it, computers are fast enough these days to handle those small margins.

## How to **search** for an element in a list?

Assuming we're looking for the index of a particular value in our list, how do we get it? It could be done manually by looping over the elements of the list and then comparing each individual value to the target value whose index we want. However, there's a much easier way, by using the `FindIndex` method. Let's see that in code:

```csharp
List<int> numbers = new() { 5, 10, 15, 20, 25 };
int indexOfTen = numbers.FindIndex(number => number == 10);
Console.WriteLine($"The index of the number 10 is: {indexOfTen}");
//OUTPUT: The index of the number 10 is: 1
```

In this snippet, I created a new numbers list and initialized it with 5 values, then I declared a new variable with the name `indexOfTen` and assigned it the value returned from the `FindIndex()` method. For its argument, I passed a **Lambda Expression**, which acts like a condition: the method returns the index of the first value for which the condition evaluates to true. This thing is called a **predicate**, which is a delegate, but that's not what we're here for.
> 💡**Pro Tip:** Lists and most collections in C# couple extremely well with **LINQ**, **L**anguage **IN**tegrated **Q**uery, which is a very powerful feature of C#. If you're interested in learning the basics of it, I have a full post where I provided tons of practical examples; you can find the post's link at the end of this post.

**List.Find();**: This method returns the first element that matches the condition you provide; otherwise, it returns the default value of the data type that the list is storing, **0** for numeric value types, **null** for reference types.

This method is used heavily while doing **data access**, and for that reason, I want to show you a practical example that involves searching for a particular object inside a list.

For this example, I'll create a `class` and call it Superhero, with only 2 properties, an **ID** and a **Full Name**; you can see its code below:

```csharp
public class Superhero
{
    public int Id { get; set; }
    public string FullName { get; set; }
}
```

That's our class sorted, let's create a list of superheroes:

```csharp
List<Superhero> heroes = new()
{
    new Superhero { Id = 1, FullName = "Tony Stark" },
    new Superhero { Id = 2, FullName = "Peter Parker" },
    new Superhero { Id = 3, FullName = "Steve Rogers" }
};
```

I declared a list called `heroes` that stores objects of type `Superhero`, then I initialized it with 3 objects, each with its own unique ID.
Now, I want to find out if there's a hero with the ID 2, and if so, print out their name. Let's see that in code:

```csharp
Superhero heroWithIdTwo = heroes.Find(hero => hero.Id == 2);

if (heroWithIdTwo != null)
{
    Console.WriteLine($"The hero with ID 2 is called: {heroWithIdTwo.FullName}");
    //OUTPUT: The hero with ID 2 is called: Peter Parker
}
```

All I'm doing here is looking through my list of heroes to check if there's a hero with 2 as their ID, which I know in advance there is. In a real scenario, you would want to throw an exception and catch it so that you could return some error message to the user saying that an object with the given ID doesn't exist, but that's not related to what we're learning in this series.

## 👾 And There you have it 👾

You've learned how to use the generic list in C#, which is absolutely **Brilliant**. If I could leave you with a final piece of advice, it would be to play around with this data structure, as it's extremely powerful and used everywhere. And even though it seemed like we covered a lot in this post, we've barely scratched the surface; still, you're ready to take what we've learned and experiment on your own. Before you go, check out the LINQ post so that you get some hands-on experience with a great companion for lists and most other data structures in C#.

## Bye for now 👋🏻

Learn LINQ here: 👉 https://dev.to/rasheedmozaffar/basics-of-linq-in-c-with-examples-2dgi
rasheedmozaffar
1,489,049
All you need to know about Sorting
Sorting means rearranging a sequence, such as a list of numbers, so that the elements are put in a...
0
2023-06-02T01:18:43
https://dev.to/x64x2/all-you-need-to-know-about-sorting-dcc
beginners, programming, codenewbie, learning
Sorting means rearranging a sequence, such as a list of numbers, so that the elements are put in a specific order (e.g. ascending or descending). In computer science sorting is quite a wide topic: there are dozens of sorting algorithms, each with pros and cons, and various of their attributes are studied, e.g. the algorithm's time complexity, its stability etc.

Sorting algorithms are a favorite subject of programming classes as they provide a good exercise for programming and analysis of algorithms and can be nicely put on tests :)

Some famous sorting algorithms include bubble sort (a simple KISS algorithm), quick and merge sort (some of the fastest algorithms) and stupid sort (just tries different permutations until it hits the jackpot).

In practice however we oftentimes end up using some of the simplest sorting algorithms (such as bubble sort) anyway, unless we're programming a database or otherwise dealing with enormous amounts of data. If we need to sort just a few hundred items and/or the sorting doesn't occur very often, a simple algorithm does the job well, sometimes even faster due to the potential initial overhead of a very complex algorithm. So always consider the KISS approach first.

Attributes of sorting algorithms we're generally interested in are the following:

- **time and space complexity**: Time and space complexity hint at how fast the algorithm will run and how much memory it will need; specifically we're interested in the **best, worst and average case** depending on the length of the input sequence. Indeed we ideally want the fastest algorithm, but it has to be known that a better time complexity doesn't have to imply a faster run time in practice, especially with shorter sequences. An algorithm that's extremely fast in the best case scenario may be extremely slow in non-ideal cases.
With memory, we are often interested in whether the algorithm works **in place**; such an algorithm only needs a constant amount of memory in addition to the memory that the sorted sequence takes, i.e. the sequence is sorted in the memory where it resides.
- **implementation complexity**: A simple algorithm is better if it's good enough. It may lead to e.g. smaller code size, which may be a factor e.g. in embedded systems.
- **stability**: A stable sorting algorithm preserves the order of the elements that are considered equal. With pure numbers this of course doesn't matter, but if we're sorting more complex data structures (e.g. sorting records about people by their names), this attribute may become important.
- **comparative vs non-comparative**: A comparative sort only requires a single operation that compares any two elements and says which one has a higher value -- such an algorithm is general and can be used for sorting any data, but the time complexity of its average case can't be better than *O(n * log(n))*. Non-comparison sorts can be faster as they may take advantage of other possible integer operations.
- **recursion and parallelism**: Some algorithms are recursive in nature, some are not. Some algorithms can be parallelised, e.g. with a GPU, which will greatly increase their speed.
- **other**: There may be other specifics, e.g. some algorithms are slow when sorting an already sorted sequence (which is addressed by *adaptive* sorting), so we may have to also consider the nature of the data we'll be sorting. Other times we may be interested e.g. in what machine instructions the algorithm will compile to etc.

In practice not only the algorithm but also its implementation matters. For example if we have a sequence of very large data structures to sort, we may want to avoid physically rearranging these structures in memory, as this could be slow.
In such a case we may want to use **indirect sorting**: we create an additional list whose elements are indices into the main sequence, and we only sort this list of indices.
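The idea above can be sketched in a few lines of TypeScript. This is a minimal, hypothetical example (the `Person` type, field names and data are made up for illustration): instead of rearranging the heavy records themselves, we sort a small list of indices that point into them.

```typescript
// Hypothetical record type: imagine each object carrying many heavy fields.
interface Person {
  name: string;
  age: number;
}

// Indirect sorting: build an index list and sort that instead of the records.
function sortIndicesByAge(people: Person[]): number[] {
  const indices = people.map((_, i) => i); // [0, 1, 2, ...]
  // Only the small index list is rearranged; the records stay where they are.
  indices.sort((a, b) => people[a].age - people[b].age);
  return indices;
}

const people: Person[] = [
  { name: "Carol", age: 35 },
  { name: "Alice", age: 28 },
  { name: "Bob", age: 42 },
];

const order = sortIndicesByAge(people); // [1, 0, 2]

// Traverse the records in sorted order via the index list.
const namesByAge = order.map(i => people[i].name);
console.log(namesByAge); // ["Alice", "Carol", "Bob"]
```

Note that the original `people` array is left untouched; any consumer that needs the sorted view simply walks the `order` list.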
x64x2
1,489,131
Convert Base64 string into PDF using Angular
Convert Base64 string into PDF
0
2023-06-02T03:49:10
https://dev.to/ktrajasekar/convert-base64-string-into-pdf-using-angular-6b4
---
title: Convert Base64 string into PDF using Angular
published: true
description: Convert Base64 string into PDF
tags:
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2023-06-02 03:46 +0000
---

- When you receive a Base64-encoded string from an API, the browser will not understand the format on its own, so you need to convert the Base64 string into a PDF using the method below:

```javascript
const linkSource = 'data:application/pdf;base64,' + 'JVBERi0xLjMNCiXi48/TDQoNCjEgMCBvYmoNCjw8DQovVHlwZSAvQ2F0YWxvZw0KL091dGxpbmVzIDIgMCBSDQovUGFnZXMgMyAwIFINCj4+DQplbmRvYmoNCg0KMiAwIG9iag0KPDwNCi9UeXBlIC9';
const downloadLink = document.createElement("a");
const fileName = "sample.pdf";

downloadLink.href = linkSource;
downloadLink.download = fileName;
downloadLink.click();
```

Credits: https://stackoverflow.com/a/51998228
ktrajasekar
1,489,344
The DynamoDB-Toolbox v1 beta is here 🙌 All you need to know!
The DynamoDB-Toolbox v1 beta is here 🙌 All you need to know!
0
2023-06-09T17:29:54
https://dev.to/slsbytheodo/the-dynamodb-toolbox-v1-beta-is-here-all-you-need-to-know-22op
typescript, dynamodb, aws, serverless
---
published: true
title: The DynamoDB-Toolbox v1 beta is here 🙌 All you need to know!
cover_image: https://raw.githubusercontent.com/ThomasAribart/dev-to-articles/master/blog-posts/dynamodb-toolbox-v1-beta/dynamodb-toolbox-v1-beta.png
description: The DynamoDB-Toolbox v1 beta is here 🙌 All you need to know!
tags: Typescript, DynamoDB, AWS, Serverless
series:
canonical_url:
---

At [Theodo](https://dev.to/slsbytheodo), we are big fans of Jeremy Daly’s [DynamoDB-Toolbox](https://github.com/jeremydaly/dynamodb-toolbox). We started using it as early as 2019 and grew fond of it... but were also well aware of its flaws 😅

One of them was that it had originally been coded in JavaScript. Although Jeremy rewrote the source code in TypeScript in 2020, it didn't handle type inference, a feature that I eventually came to implement myself in the [v0.4](https://github.com/jeremydaly/dynamodb-toolbox/releases/tag/v0.4.0).

However, there were still some features that we felt were lacking: from declaring **`enums` on primitives** to supporting **recursive schemas and types** (lists and maps sub-attributes) and **polymorphism**.

I was also wary of the object-oriented approach: I don’t have anything against classes, but they are not tree-shakable, meaning that **they should be kept relatively light in a serverless context**. That’s what AWS went for with the [v3 of their SDK](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-dynamodb/#usage), and for good reasons: Keep bundles tight!

That just wasn't the case for DynamoDB-Toolbox: I remember working on an `.update` method that was more than 1000 lines long... But why bundle it when you don't even need it?
So last year, I decided to throw myself into a complete overhaul of the code, with three main objectives: - Support the v3 of the AWS SDK (although [support has been added in the v0.8](https://github.com/jeremydaly/dynamodb-toolbox#using-aws-sdk-v2)) - Get the API and type inference on par with those of more "modern" tools like [zod](https://github.com/colinhacks/zod) and [electrodb](https://electrodb.fun/) - Use a more functional and tree-shakable approach Today, I am happy to announce the **v1 beta of dynamodb-toolbox is out** 🙌 It includes reworked `Table` and `Entity` classes, as well as complete support for `PutItem`, `GetItem` and `DeleteItem` commands (including conditions and projections), with `UpdateItem`, `Query` and `Scan` commands soon to follow. This article details how the new API works and the main breaking changes from previous versions - which, by the way, only concern the API: No data migration needed 🥳 Let's dive in! ## Table of content - [Installation](#installation) - [Tables](#tables) - [Entities](#entities) - [Timestamps](#timestamps) - [Matching the Table schema](#matching-the-table-schema) - [`SavedItem` and `FormattedItem`](#saveditem-and-formatteditem) - [Designing Entity schemas](#designing-entity-schemas) - [Schema definition](#schema-definition) - [Attributes types](#attributes-types) - [`any`](#any) - [`<primitive>`](#primitives) - [`set`](#set) - [`list`](#list) - [`map`](#map) - [`record`](#record) - [`anyOf`](#anyof) - [Looking forward](#looking-forward) - [Computed defaults](#computed-defaults) - [Commands](#commands) - [`PutItemCommand`](#putitemcommand) - [`GetItemCommand`](#getitemcommand) - [`DeleteItemCommand`](#deleteitemcommand) - [Utility helpers and types](#utility-helpers-and-types) - [`formatSavedItem`](#formatsaveditem) - [`Condition` and `parseCondition`](#condition-and-parsecondition) - [`Projection` and `parseProjection`](#projection-and-parseprojection) - [`KeyInput` and `PrimaryKey`](#keyinput-and-primarykey) - 
[Errors](#errors)
- [Conclusion](#conclusion)

## Installation

```bash
## npm
npm i dynamodb-toolbox@1.0.0-beta.0

## yarn
yarn add dynamodb-toolbox@1.0.0-beta.0

## ...and so on
```

<aside style="font-size: medium;">
☝️ *Stay up to date with the patches by following the project [GitHub releases](https://github.com/jeremydaly/dynamodb-toolbox/releases)*
</aside>

The `v1` is built on top of the `v3` of the AWS SDK. It has `@aws-sdk/client-dynamodb` and `@aws-sdk/lib-dynamodb` as peer dependencies, so you’ll have to install them as well:

```bash
## npm
npm i @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb

## yarn
yarn add @aws-sdk/client-dynamodb @aws-sdk/lib-dynamodb

## ...and so on
```

## Tables

Tables are defined pretty much the same way as in previous versions, but the `key` attributes now have a `type` along with their `name`:

```tsx
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient } from '@aws-sdk/lib-dynamodb';
// Will be renamed Table in the official release 😉
import { TableV2 } from 'dynamodb-toolbox';

const dynamoDBClient = new DynamoDBClient({});
const documentClient = DynamoDBDocumentClient.from(dynamoDBClient);

const myTable = new TableV2({
  name: 'MySuperTable',
  partitionKey: {
    name: 'PK',
    type: 'string', // 'string' | 'number' | 'binary'
  },
  sortKey: {
    name: 'SK',
    type: 'string',
  },
  documentClient,
});
```

<aside style="font-size: medium;">
☝️ *The v1 does not support indexes yet as queries are not yet available.*
</aside>

<!-- The table name can be provided with a getter. This can be useful in environments in which it is not actually available (e.g. tests or deployments):

```tsx
const myTable = new TableV2({
  ...
  // 👇 Only executed at command execution
  name: () => process.env.TABLE_NAME,
});
```
-->

As in previous versions, the `v1` classes tag your data with an entity identifier through an internal `entity` string attribute, saved as `"_et"` by default.
This can be renamed at the `Table` level through the `entityAttributeSavedAs` argument: ```tsx const myTable = new TableV2({ ... // 👇 defaults to "_et" entityAttributeSavedAs: '__entity__', }); ``` ## Entities For Entities, the main change is that the `attributes` argument becomes `schema`: ```tsx // Will be renamed Entity in the official release 😉 import { EntityV2, schema } from 'dynamodb-toolbox'; const myEntity = new EntityV2({ name: 'MyEntity', table: myTable, // Attributes definition schema: schema({ ... }), }); ``` ### Timestamps The internal timestamp attributes are also there and behave similarly as in the [previous versions](https://www.dynamodbtoolbox.com/docs/entity#specifying-entity-definitions). You can set the `timestamps` to `false` to disable them (default value is `true`), or fine-tune the `created` and `modified` attributes names: ```tsx const myEntity = new EntityV2({ ... // 👇 de-activate timestamps altogether timestamps: false, }); const myEntity = new EntityV2({ ... timestamps: { // 👇 de-activate only `created` attribute created: false, modified: true, }, }); const myEntity = new EntityV2({ ... timestamps: { created: { // 👇 defaults to "created" name: 'creationDate', // 👇 defaults to "_ct" savedAs: '__createdAt__', }, modified: { // 👇 defaults to "modified" name: 'lastModificationDate', // 👇 defaults to "_md" savedAs: '__lastMod__', }, }, }); ``` ### Matching the Table schema An important change from previous versions is that the `EntityV2` key attributes are validated against the `TableV2` schema, both through types and at runtime. There are two ways to match the table schema: - The simplest one is to have an entity schema that **already matches the table schema** (see ["Designing Entity schemas"](#designing-entity-schemas)). 
The Entity is then considered valid and no other argument is required: ```tsx import { string } from 'dynamodb-toolbox'; const pokemonEntity = new EntityV2({ name: 'Pokemon', table: myTable, // <= { PK: string, SK: string } primary key schema: schema({ // Provide a schema that matches the primary key PK: string().key(), // 🙌 using "savedAs" will also work pokemonId: string().key().savedAs('SK'), ... }), }); ``` - If the entity key attributes don't match the table schema, the `Entity` class will require you to add a `computeKey` property which must derive the primary key from them: ```tsx const pokemonEntity = new EntityV2({ ... table: myTable, // <= { PK: string, SK: string } primary key schema: schema({ pokemonClass: string().key(), pokemonId: string().key(), ... }), // 🙌 `computeKey` is correctly typed computeKey: ({ pokemonClass, pokemonId }) => ({ PK: pokemonClass, SK: pokemonId, }), }); ``` ### SavedItem and FormattedItem If you feel lost, you can always use the `SavedItem` and `FormattedItem` utility types to infer the type of your entity items: ```tsx import type { FormattedItem, SavedItem } from 'dynamodb-toolbox'; const pokemonEntity = new EntityV2({ name: 'Pokemon', timestamps: true, table: myTable, schema: schema({ pokemonClass: string().key().savedAs('PK'), pokemonId: string().key().savedAs('SK'), level: number().default(1), customName: string().optional(), internalField: string().hidden(), }), }); // What Pokemons will look like in DynamoDB type SavedPokemon = SavedItem<typeof pokemonEntity>; // 🙌 Equivalent to: // { // _et: "Pokemon", // _ct: string, // _md: string, // PK: string, // SK: string, // level: number, // customName?: string | undefined, // internalField: string | undefined, // } // What fetched Pokemons will look like in your code type FormattedPokemon = FormattedItem<typeof pokemonEntity>; // 🙌 Equivalent to: // { // created: string, // modified: string, // pokemonClass: string, // pokemonId: string, // level: number, // customName?: 
string | undefined, // } ``` ## Designing Entity schemas Now let’s dive into the part that received the most significant overhaul: **Schema definition**. ### Schema definition Similarly to [zod](https://github.com/colinhacks/zod) or [yup](https://github.com/jquense/yup), attributes are now defined through function builders. For TS users, this removes the need for the `as const` statement previously needed for type inference (so don't forget to remove it when you migrate 🙈). You can either import the attribute builders through their dedicated imports, or through the `attribute` or `attr` shorthands. For instance, those declarations will output the same attribute schema: ```tsx import { string, attribute, attr } from 'dynamodb-toolbox'; // 👇 More tree-shakable const pokemonName = string(); // 👇 Not tree-shakable, but single import const pokemonName = attribute.string(); const pokemonName = attr.string(); ``` Prior to being wrapped in a `schema` declaration, attributes are called **warm:** They are **not validated** (at run-time) and can be used to build other schemas. By inspecting their types, you will see that they are prefixed with `$`. Once **frozen**, validation is applied and building methods are stripped: ![Warm vs frozen schemas](https://raw.githubusercontent.com/ThomasAribart/dev-to-articles/master/blog-posts/dynamodb-toolbox-v1-beta/warm-vs-frozen-schemas.gif) The main takeaway is that **warm schemas can be composed** while **frozen schemas cannot**: ```tsx import { schema } from 'dynamodb-toolbox'; const pokemonName = string(); const pokemonSchema = schema({ // 👍 No problem pokemonName, ... }); const pokedexSchema = schema({ // ❌ Not possible pokemon: pokemonSchema, ... }); ``` You can create/update warm attributes by using dedicated methods or by providing option objects. 
The former provides a **slick devX** with autocomplete and shorthands, while the latter theoretically requires **less compute time and memory usage**, although it should be very minor (validation being only applied on freeze): ```tsx // Using methods const pokemonName = string().required('always'); // Using options const pokemonName = string({ required: 'always' }); ``` All attributes share the following options: - `required` _(string?="atLeastOnce")_ Tag a root attribute or Map sub-attribute as **required**. Possible values are: - `"atLeastOnce"` Required in `PutItem` commands - `"never"`: Optional in all commands - `"always"`: Required in `PutItem`, `GetItem` and `DeleteItem` commands ```tsx // Equivalent const pokemonName = string().required(); const pokemonName = string({ required: 'atLeastOnce' }); // `.optional()` is a shorthand for `.required(”never”)` const pokemonName = string().optional(); const pokemonName = string({ required: 'never' }); ``` A very important breaking change from previous versions is that **root attributes and Map sub-attributes are now required by default**. This was made so **composition and validation work better together**. <aside style="font-size: medium;"> 💡 *Outside of root attributes and Map sub-attributes, such as in a list of strings, it doesn’t make sense for sub-schemas to be optional. So, should I force users to write `list(string().required())` every time OR make string validation and type inference aware of their context (ignore `required` in lists but not in maps)? 
It felt more elegant to enforce `string()` as required by default and prevent schemas such as `list(string().optional())`.* </aside> - `hidden` _(boolean?=true)_ Skip attribute when formatting the returned item of a command: ```tsx const pokemonName = string().hidden(); const pokemonName = string({ hidden: true }); ``` - `key` _(boolean?=true)_ Tag attribute as needed to compute the primary key: ```tsx // Note: The method will also modify the `required` property to "always" // (it is often the case in practice, you can still use `.optional()` if needed) const pokemonName = string().key(); const pokemonName = string({ key: true }); ``` - `savedAs` _(string)_ Previously known as `map`. Rename a root or Map sub-attribute before sending commands: ```tsx const pokemonName = string().savedAs('_n'); const pokemonName = string({ savedAs: '_n' }); ``` - `default`: _(ComputedDefault)_ See [Computed defaults](#computed-defaults) ### Attributes types Here’s the exhaustive list of available attribute types: #### Any Define an attribute of any value. No validation will be applied at runtime, and its type will be resolved as `unknown`: ```tsx import { any } from 'dynamodb-toolbox'; const pokemonSchema = schema({ ... metadata: any(), }); type FormattedPokemon = FormattedItem<typeof pokemonEntity>; // => { // ... // metadata: unknown // } ``` You can provide default values through the `default` option or method: ```tsx const metadata = any().default({ any: 'value' }); const metadata = any({ default: () => 'Getters also work!', }); ``` #### Primitives Defines a `string`, `number`, `boolean` or `binary` attribute: ```tsx import { string, number, boolean, binary } from 'dynamodb-toolbox'; const pokemonSchema = schema({ ... pokemonType: string(), level: number(), isLegendary: boolean(), binEncoded: binary(), }); type FormattedPokemon = FormattedItem<typeof pokemonEntity>; // => { // ... 
//   pokemonType: string
//   level: number
//   isLegendary: boolean
//   binEncoded: Buffer
// }
```

You can provide default values through the `default` option or method:

```tsx
// 🙌 Correctly typed!
const level = number().default(42);
const date = string().default(() => new Date().toISOString());

const level = number({ default: 42 });
const date = string({
  default: () => new Date().toISOString(),
});
```

Primitive types have an additional `enum` option. For instance, you could provide a finite list of pokemon types:

```tsx
const pokemonTypeAttribute = string().enum('fire', 'grass', 'water');

// Shorthand for `.enum("POKEMON").default("POKEMON")`
const pokemonPartitionKey = string().const('POKEMON');
```

<aside style="font-size: medium;">
💡 *For type inference reasons, the `enum` option is only available as a method, not as an object option*
</aside>

#### Set

Defines a set of strings, numbers or binaries. Unlike in previous versions, sets are kept as `Set` classes. Let me know if you would prefer using arrays (or being able to choose from both):

```tsx
import { set } from 'dynamodb-toolbox';

const pokemonSchema = schema({
  ...
  skills: set(string()),
});

type FormattedPokemon = FormattedItem<typeof pokemonEntity>;
// => {
//   ...
//   skills: Set<string>
// }
```

Options can be provided as a 2nd argument:

```tsx
const setAttr = set(string()).hidden();
const setAttr = set(string(), { hidden: true });
```

#### List

Defines a list of sub-schemas of any type:

```tsx
import { list } from 'dynamodb-toolbox';

const pokemonSchema = schema({
  ...
  skills: list(string()),
});

type FormattedPokemon = FormattedItem<typeof pokemonEntity>;
// => {
//   ...
//   skills: string[]
// }
```

As in sets, options can be provided as a 2nd argument.

#### Map

Defines a finite list of key-value pairs. Keys must follow a string schema, while values can be sub-schemas of any type:

```tsx
import { map } from 'dynamodb-toolbox';

const pokemonSchema = schema({
  ...
  nestedMagic: map({
    will: map({
      work: string().const('!'),
    }),
  }),
});

type FormattedPokemon = FormattedItem<typeof pokemonEntity>;
// => {
//   ...
//   nestedMagic: {
//     will: {
//       work: "!"
//     }
//   }
// }
```

As in sets and lists, options can be provided as a 2nd argument.

#### Record

A new attribute type that translates to `Partial<Record<KeyType, ValueType>>` in TypeScript. Records differ from maps as they can accept an infinite range of keys:

```tsx
import { record } from 'dynamodb-toolbox';

const pokemonType = string().enum(...);

const pokemonSchema = schema({
  ...
  weaknessesByPokemonType: record(pokemonType, number()),
});

type FormattedPokemon = FormattedItem<typeof pokemonEntity>;
// => {
//   ...
//   weaknessesByPokemonType: {
//     [key in PokemonType]?: number
//   }
// }
```

Options can be provided as a 3rd argument:

```tsx
const recordAttr = record(string(), number()).hidden();
const recordAttr = record(string(), number(), { hidden: true });
```

#### AnyOf

A new **meta-**attribute type that represents a union of types, i.e. a range of possible types:

```tsx
import { anyOf } from 'dynamodb-toolbox';

const pokemonSchema = schema({
  ...
  pokemonType: anyOf([
    string().const('fire'),
    string().const('grass'),
    string().const('water'),
  ]),
});
```

In this particular case, an `enum` would have done the trick. However, `anyOf` becomes particularly powerful when used in conjunction with a `map` and the `enum` or `const` directives of a primitive attribute, to implement **polymorphism**:

```tsx
const pokemonSchema = schema({
  ...
  captureState: anyOf([
    map({
      status: string().const('caught'),
      // 👇 captureState.trainerId exists if status is "caught"...
      trainerId: string(),
    }),
    // ...but not otherwise! 🙌
    map({ status: string().const('wild') }),
  ]),
});

type CaptureState = FormattedItem<typeof pokemonEntity>['captureState'];
// 🙌 Equivalent to:
// | { status: "wild" }
// | { status: "caught", trainerId: string }
```

As in sets, lists and maps, options can be provided as a 2nd argument.
#### Looking forward That’s all for now! I’m planning on including new `tuple` and `allOf` attributes at some point. If there are other types you’d like to see, feel free to leave a comment on this article and/or [open a discussion on the official repo](https://github.com/jeremydaly/dynamodb-toolbox) with the `v1` label 👍 ## Computed defaults In previous versions, `default` was used to compute attribute from other attributes values. This feature was very handy for "technical" attributes such as composite indexes. However, it was just impossible to type correctly in TypeScript: ```tsx const pokemonSchema = schema({ ... level: number(), levelPlusOne: number().default( // ❌ No way to retrieve the caller context input => input.level + 1, ), }); ``` It means the `input` was typed as any and it fell to the developper to type it correctly, which just didn’t cut it for me. The solution I committed to was to split computed defaults declaration into 2 steps: - First, **declare that an attribute default should be derived from other attributes**: ```tsx import { ComputedDefault } from 'dynamodb-toolbox'; const pokemonSchema = schema({ ... level: number(), levelPlusOne: number().default(ComputedDefault), }); ``` <aside style="font-size: medium;"> 💡 *`ComputedDefault` is a JavaScript [Symbol](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Symbol) (TLDR: A sort of unique and custom `null`), so it cannot possibly conflict with an actual desired default value.* </aside> - Then, declare a way to compute this attribute **at the entity level**, through the `computeDefaults` property: ```tsx const pokemonEntity = new EntityV2({ ... schema: pokemonSchema, computeDefaults: { // 🙌 Correctly typed! levelPlusOne: ({ level }) => level + 1, }, }); ``` In the tricky case of nested attributes, `computeDefaults` becomes an object with an `_attributes` or `_elements` property to emphasize that the computing is **local**: ```tsx const pokemonSchema = schema({ ... 
defaultLevel: number(), // 👇 Defaulted Map attribute levelHistory: map({ currentLevel: number(), // 👇 Defaulted sub-attribute nextLevel: number().default(ComputedDefault), }).default(ComputedDefault), }); const pokemonEntity = new EntityV2({ ... schema: pokemonSchema, computeDefaults: { levelHistory: { // Defaulted value of Map attribute _map: item => ({ currentLevel: item.defaultLevel, nextLevel: item.defaultLevel, }), _attributes: { // Defaulted value of sub-attribute nextLevel: (levelHistory, item) => levelHistory.currentLevel + 1, }, }, }, }); ``` Note that there is (and has always been) an ambiguity as to when `default` values are actually used, that I hope to solve soon by splitting it into `getDefault`, `putDefault`, `updateDefault` and so on (`default` being the one to rule them all). For the moment, **`defaults` are only used in `putItem` commands.** ## Commands Now that we know how to design entities, let’s take a look at how we can leverage them to craft commands 👍 <aside> 💡 *The beta only supports the `PutItem`, `GetItem`, and `DeleteItem` commands. If you need to run `UpdateItem`, `Query` or `Scan` commands, our advice is to run native SDK commands and format their output with the [`formatSavedItem` util](#formatsaveditem).* </aside> As mentioned in the intro, I searched for a syntax that favored tree-shaking. Here's an example of it, with the `PutItem` command: ```tsx // v0.x Not tree-shakable const response = await pokemonEntity.putItem(pokemonItem, options); // v1 Tree-shakable 🙌 import { PutItemCommand } from 'dynamodb-toolbox'; const command = new PutItemCommand( pokemonEntity, // 🙌 Correctly typed! 
pokemonItem, // 👇 Optional putItemOptions, ); // Get command params const params = command.params(); // Send command const response = await command.send(); ``` `pokemonItem` can be provided later or edited, which can be useful if the command is built in several steps (at execution, an error will be thrown if no item has been provided): ```tsx import { PutItemCommand } from 'dynamodb-toolbox'; const incompleteCommand = new PutItemCommand(pokemonEntity); // (will return a new command and not mutate the original one) const completeCommand = incompleteCommand.item(pokemonItem); // (can be chained by design) const response = await incompleteCommand .item(pokemonItem) .options(options) .send(); ``` You can also use the `.build` method of the entity to craft a command directly hydrated with your entity: ```tsx // 🙌 We get a syntax closer to v0.x... but tree-shakable! const response = await pokemonEntity .build(PutItemCommand) .item(pokemonItem) .options(options) .send(); ``` <aside style="font-size: medium;"> 💡 *As much as I appreciate this syntax, it makes mocking hard in unit tests. I'm already working on a `mockEntity` helper, inspired by the awesome [`aws-sdk-client-mock`](https://github.com/m-radzikowski/aws-sdk-client-mock). This will probably make another article soon.* </aside> ### PutItemCommand The `capacity`, `metrics` and `returnValues` options behave exactly the same as in previous versions. The `condition` option benefits from improved typing, and clearer logical combinations: ```tsx import { PutItemCommand } from 'dynamodb-toolbox'; const { Attributes } = await pokemonEntity .build(PutItemCommand) .item(pokemonItem) .options({ capacity: 'TOTAL', metrics: 'SIZE', // 👇 Will type the response `Attributes` returnValues: 'ALL_OLD', condition: { or: [ { attr: 'pokemonId', exists: false }, // 🙌 "lte" is correctly typed { attr: 'level', lte: 99 }, // 🙌 You can nest logical combinations { and: [{ not: { ... } }, ...] 
}, ], }, }) .send(); ``` <aside style="font-size: medium;"> ❗️*The `"UPDATED_OLD"` and `"UPDATED_NEW"` return values options are not fully supported yet, so I do not recommend using them for now* </aside> ### GetItemCommand The `attributes` option behaves the same as in previous versions, but benefits from improved typing as well: ```tsx import { GetItemCommand } from 'dynamodb-toolbox'; const { Item } = await pokemonEntity .build(GetItemCommand) .key(pokemonKey) .options({ capacity: 'TOTAL', consistent: true, // 👇 Will type the response `Item` attributes: ['pokemonId', 'pokemonType', 'level'], }) .send(); ``` ### DeleteItemCommand The `DeleteItem` command is pretty much a mix between the two previous ones, options-wise: ```tsx import { DeleteItemCommand } from 'dynamodb-toolbox'; const { Attributes } = await pokemonEntity .build(DeleteItemCommand) .key(pokemonKey) .options({ capacity: 'TOTAL', metrics: 'SIZE', // 👇 Will type the response `Attributes` returnValues: 'ALL_OLD', condition: { or: [ { attr: 'level', lte: 99 }, ... ], }, }) .send(); ``` ## Utility helpers and types In addition to the `SavedItem` and `FormattedItem` types, the `v1` exposes a bunch of useful helpers and utility types: ### formatSavedItem `formatSavedItem` transforms a saved item returned by the DynamoDB client to its formatted counterpart: ```tsx import { formatSavedItem } from 'dynamodb-toolbox'; // 🙌 Typed as FormattedItem<typeof pokemonEntity> const formattedPokemon = formatSavedItem( pokemonEntity, savedPokemon, // As in GetItem commands, attributes will filter the formatted item { attributes: [...] }, ); ``` Note that **it is a parsing operation**, i.e. it does not require the item to be typed as `SavedItem<typeof myEntity>`, but will throw an error if the saved item is invalid: ```tsx const formattedPokemon = formatSavedItem(pokemonEntity, { ... level: 'not a number', }); // ❌ Will raise error: // => "Invalid attribute in saved item: level. 
Should be a number" ``` ### Condition and parseCondition The `Condition` type and `parseCondition` util are useful to type conditions and build condition expressions: ```tsx import { Condition, parseCondition } from 'dynamodb-toolbox'; const condition: Condition<typeof pokemonEntity> = { attr: 'level', lte: 42, }; const parsedCondition = parseCondition(pokemonEntity, condition); // => { // ConditionExpression: "#1 <= :1", // ExpressionAttributeNames: { "#1": "level" }, // ExpressionAttributeValues: { ":1": 42 }, // } ``` ### Projection and parseProjection The `AnyAttributePath` type and `parseProjection` util are useful to type attribute paths and build projection expressions: ```tsx import { AnyAttributePath, parseProjection } from 'dynamodb-toolbox'; const attributes: AnyAttributePath<typeof pokemonEntity>[] = [ 'pokemonType', 'levelHistory.currentLevel', ]; const parsedProjection = parseProjection(pokemonEntity, attributes); // => { // ProjectionExpression: '#1, #2.#3', // ExpressionAttributeNames: { // '#1': 'pokemonType', // '#2': 'levelHistory', // '#3': 'currentLevel', // }, // } ``` ### KeyInput and PrimaryKey Both types are useful to type item primary keys: ```tsx import type { KeyInput, PrimaryKey } from 'dynamodb-toolbox'; type PokemonKeyInput = KeyInput<typeof pokemonEntity>; // => { pokemonClass: string, pokemonId: string } type MyTablePrimaryKey = PrimaryKey<typeof myTable>; // => { PK: string, SK: string } ``` ## Errors Finally, let’s take a quick look at error management. When DynamoDB-Toolbox encounters an unexpected input, it will throw an instance of `DynamoDBToolboxError`, which itself extends the native `Error` class with a `code` property: ```tsx await pokemonEntity .build(PutItemCommand) .item({ ..., level: 'not a number' }) .send(); // ❌ [parsing.invalidAttributeInput] Attribute level should be a number ``` Some `DynamoDBToolboxErrors` also expose a `path` property (mostly in validations) and/or a `payload` property for additional context. 
If you need to handle them, TypeScript is your best friend, as the `code` property will correctly discriminate the `DynamoDBToolboxError` type: ```tsx import { DynamoDBToolboxError } from 'dynamodb-toolbox'; const handleError = (error: Error) => { if (!(error instanceof DynamoDBToolboxError)) throw error; switch (error.code) { case 'parsing.invalidAttributeInput': { const path = error.path; // => "level" const payload = error.payload; // => { received: "not a number", expected: "number" } break; } ... case 'entity.invalidItemSchema': { const path = error.path; // ❌ error does not have path property const payload = error.payload; // ❌ same goes with payload ... } } }; ``` ## Conclusion And that’s it for now! I hope you’re as excited as I am about this new release 🙌 If you have features in mind that I've missed, or would like to see some of the ones I mentioned prioritized, please leave a comment on this article and/or [create an issue or open a discussion on the official repo](https://github.com/jeremydaly/dynamodb-toolbox) with the `v1` label 👍 See you soon!
thomasaribart
1,493,148
10 inspirational trends of web data harvesting with Python in 2023
Modern trusted proxy websites perform different functions related to gathering web data including...
0
2023-06-06T06:47:33
https://dexodata.com/en/blog/10-inspirational-trends-of-web-data-harvesting-with-python-in-2023
ai, python, dataharvesting, dexodata
![Python](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7or5u6ayy3fbb30cb0v.png) Modern trusted proxy websites perform different functions related to gathering web data, including buying residential and mobile proxies for cybersecurity reasons. During enterprise-level processing, information takes a readable form suitable for further manual and AI-based analysis; this is called Business Intelligence. Dexodata provides rotating proxies in Saudi Arabia, Greece, etc. for a free trial and serves them as a core ingredient of data parsing. Another integral component is the software shell formed by the source code. Python has already become the most popular computer language for data processing, and it is going to strengthen its position in 2023. Let’s take a look at the most interesting trends in its implementation. ## What is Python? Python is an interpreted language which, unlike compiled analogues, is well suited to computing-intensive data gathering with [best datacenter proxies](https://dexodata.com/en/datacenter-proxies). Its latest version, 3.11.0, is up to 60 percent faster than the preceding iteration, according to its developers. This version also offers a number of distinct features, such as: 1. Clear and extensive error messages. 2. A built-in TOML (Tom’s Obvious Minimal Language) file parser. 3. Native WebAssembly support. 4. Updated syntax for handling two or more exceptions simultaneously. 5. Literal string type acceptance. 6. Variadic generics with several types stored at once for delayed assignment to objects, etc. ## Where is Python applied? Python holds 4th place in the overall list of the most popular languages, and 3rd when it comes to beginners' choice, according to the [Stackoverflow](https://stackoverflow.com/) 2022 survey. Later we will name the reasons why Python is prominent among data extraction experts, just as Dexodata is a trusted proxy website for access from Japan, Sweden and other locations. 
The most promising spheres for Python in 2023 are supposed to be: * Cloud storage * Game development * Academic learning * Big data processing * Web page and mobile app creation * Machine learning, neural networks and AI * Programming language integration * Network administration * Web scraping. ## Why use Python for data extraction? Automation is the main purpose, simply put. The characteristics listed below have made it popular as a scraping tool associated with [AI-based data acquiring](https://dexodata.com/en/blog/how-does-ai-enhance-web-data-gathering) models. Python successfully automates the process of data gathering and distribution through paid and free-trial rotating proxies because of: 1\. **Concise and simple code** Python resembles common English, and data collection from `<div>` tags can be performed in twenty code lines or so. Ask for the best datacenter proxies to change IPs during the work. 2\. **Appropriate speed** The latest version has an average speed improvement of x1.22, according to its developers. Although the code still runs slower than compiled C++, it is fast enough to gather information. 3\. **Easy-to-learn syntax** without “{}”, semicolons, etc. Thanks to the language's significant indentation, users can easily distinguish code blocks and scopes. 4\. **A number of ready-to-go solutions** for collecting and processing data. BeautifulSoup and Selenium parsing is applicable to almost anything, while a large number of other modules (pandas, Matplotlib) simplify the analysis. 5\. **High compatibility** The parser itself sends and receives requests from other packages with minimal delay. 6\. **Dynamic character** Python allows working with variables when needed, without defining all the datatypes. 7\. **Anonymous functions (lambda)** They make scripts capable of handling two or more variables simultaneously. 8\. 
**Friendly community** According to Stackoverflow, Python has been the 3rd most popular language among learners for three years in a row, so there are a lot of guides, use cases and articles to solve any problem that arises. 9\. **Multitasking** One coding approach is used for main and secondary parsing missions at the same time. E.g. [buy residential proxies](https://dexodata.com/en/residential-proxies) and mobile ones in Vietnam, Ireland, Korea or other places, then set them up via API, change IPs during data mining, etc. Python also saves files, creates and populates databases, operates on expressions and strings, etc. 10\. **Versatility** You are free to choose both dynamic and static sites as targets with the appropriate libraries. ## What are the main Python tools? Python is convenient to use with just the libraries you need, according to your specific purposes. In a similar manner, clients prefer to buy [residential and mobile proxies or datacenter IPs](https://dexodata.com/en/pricing) from Dexodata depending only on their needs. As the purpose is to obtain some info from the Web, we’ll name only the relevant modules with their characteristics: 1. Requests 2. BeautifulSoup 3. Selenium 4. Scrapy ![Python Applications](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sgh331wkiave1s5qf62m.png) These are the most frequently leveraged libraries and modules. The [Requests](https://pypi.org/project/requests/) module is responsible for sending HTTP requests and interacting with proxies and headers via its API, etc. This HTTPS-compatible module has a wider range of features than the built-in urllib. [BeautifulSoup](https://pypi.org/project/beautifulsoup4), the main HTML data gathering library, is capable of pulling data out of HTML and XML files. bs4 forms a parse tree from the page source code and can harvest data in a readable form of your choice. 
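As a quick, hedged illustration of the extraction flow BeautifulSoup performs, here is a minimal sketch (assuming `beautifulsoup4` is installed; the HTML snippet, tag names and class names below are invented for the example):

```python
# Minimal BeautifulSoup sketch: pull text out of <div> tags.
# The markup is a made-up example, not a real target site.
from bs4 import BeautifulSoup

html = """
<html><body>
  <div class="price">19.99</div>
  <div class="price">24.50</div>
  <p>Not a div, ignored by the selector.</p>
</body></html>
"""

# bs4 builds a parse tree from the page source code
soup = BeautifulSoup(html, "html.parser")

# Harvest the text of every <div> with class "price"
prices = [div.get_text(strip=True) for div in soup.find_all("div", class_="price")]
print(prices)  # ['19.99', '24.50']
```

In a real pipeline, the `html` string would come from an HTTP request routed through your proxy endpoints, e.g. via the `proxies` parameter of `requests.get`.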
BeautifulSoup has limited capabilities for accessing info in dynamic HTML, but that is what Selenium is great for. [Selenium](https://www.selenium.dev/), currently at version 4.8.0, is a module for parsing dynamic HTML and AJAX, known for outstanding automation possibilities via WebDriver within the chosen browser. Its built-in tool, Selenium Grid, is IPv6-compatible via HTTPS, an IP type our trusted proxy website also provides. [Scrapy](https://scrapy.org/) is a web crawler carrying a wide range of customizable features. It is capable of collecting information from several pages at once, with AutoThrottle adjustment to maximize processing speed. The latest version, 2.7.1, has its own JSON decoder and can add hints about object types to simplify database reading. ## What future awaits Python in 2023? Python became the second most popular primary coding language in the GitHub community. Every fifth developer has expressed interest in working with Python-based projects. That is what the official [Octoverse by GitHub](https://octoverse.github.com/2022/top-programming-languages) statistics say. And its role in software development is expected to grow. Two curious predictions about Python's role in 2023 can be made on the basis of the Finances Online review. 1. The first applies to the growing Analytics-as-a-Service (AaaS) market. Financial enterprises seem to be abandoning Big Data acquisition and management in favor of third-party analytics. AaaS experts configure data acquisition solutions according to the needs of a particular customer, and Python suits such work well due to its simple syntax and flexible modules. We advise buying residential and mobile proxies or the best datacenter proxies in advance to proceed with parsing with minimum failures. 2. The second optimistic trend for Python in 2023 applies to the rising market share of machine learning algorithms. 
Python has held first place as the most popular programming language for machine learning for the third year in a row, according to the Octoverse report. And the trend is growing. ## What is the role of AI-based solutions in Python web development? Machine learning starts with terabytes of web data that need to be obtained and processed. Among the projects operating this way are: * [Large language models](https://dexodata.com/en/blog/how-to-use-chatgpt-for-web-data-extraction-in-2023) (LLMs), such as GPT-3 (ChatGPT, etc.) * Generative AI neural networks, such as Midjourney and Dall-E * Enterprise artificial neural networks (ANNs) that predict customer behavior, automate marketing, etc. The Dexodata team sees no obstacles to Python becoming the main software development tool in all the market spheres mentioned above. ANNs are also learning to perform data collection on their own using deep learning mechanisms. For now, AI mostly manages logistics and warehouse goods accounting, but the tendency is to entrust [AI-driven robots](https://dexodata.com/en/blog/look-into-the-future-of-web-scraping-after-2023) with harvesting scientific and financial data for both descriptive and predictive business intelligence. Dexodata is an enterprise-level infrastructure serving IP addresses in Greece, Vietnam, Ireland, Sweden, Korea, Saudi Arabia, Japan and 100+ countries. We have proxies for parsing needs and offer [rotating proxies for a free trial](https://dexodata.com/en/mobile-proxies).
dexodatamarketing
1,489,398
Let's create a Color Picker from scratch with HTML5 Canvas, Javascript and CSS3
How to create a Color Picker from scratch with HTML5 Canvas, Javascript and CSS3
0
2023-06-02T11:25:00
https://www.ma-no.org/en/web-design/let-s-create-a-color-picker-from-scratch-with-html5-canvas-javascript-and-css3
html, css, javascript, development
--- title: Let's create a Color Picker from scratch with HTML5 Canvas, Javascript and CSS3 published: true description: How to create a Color Picker from scratch with HTML5 Canvas, Javascript and CSS3 date: 2023-06-02 00:25:00 UTC tags: html, css, javascript, development canonical_url: https://www.ma-no.org/en/web-design/let-s-create-a-color-picker-from-scratch-with-html5-canvas-javascript-and-css3 cover_image: https://www.ma-no.org/cache/galleries/contents-731/960-400/selector-canvas-de-html5-t1-imppng.webp # Use a ratio of 100:42 for best results. # published_at: 2023-06-02 13:00 +0000 --- ![Let's create a Color Picker from scratch with HTML5 Canvas, Javascript and CSS3](https://www.ma-no.org/cache/galleries/contents-731/960-400/selector-canvas-de-html5-t1-imppng.webp) <p>HTML5 Canvas is a technology that allows developers to generate real-time graphics and animations using JavaScript. It provides a blank canvas on which graphical elements, such as lines, shapes, images and text, can be drawn and manipulated with great flexibility and control. </p> <p>Here are some key concepts about HTML5 Canvas: </p> <p>1. <strong>Canvas element</strong>: The <code>&lt;canvas&gt; </code> element is the base of the canvas on which the graphics are drawn. It is defined by HTML tags and can be sized using the `width` and `height` attributes. All graphic elements are drawn within this canvas. </p> <p>2. <strong>Context: </strong>The context (`<em>context</em>`) is the object that provides methods and properties for drawing on the canvas. There are two types of context: <strong>2D</strong> and <strong>WebGL</strong>. For 2D graphics, the 2D context (`<em>context2d</em>`) is used, which is more common. To access the 2D context, you use the <code> getContext(&#39;2d&#39;) </code> method on the <code>&lt;canvas&gt; </code> element. </p> <p>3. 
<strong>Coordinates and coordinate system</strong>: The canvas uses a coordinate system in which `(<em>0, 0</em>)` represents the upper left corner of the canvas and positive coordinates increase downwards and to the right. This means that the highest values of `x` are to the right and the highest values of `y` are to the bottom. </p> <p>4. <strong>Drawing methods</strong>: The 2D context provides a wide range of methods for drawing different graphical elements on the canvas, such as lines, rectangles, circles, curves, images and text. Some of the most common methods include <code>fillRect() </code>, <code>strokeRect() </code>, <code>arc() </code>, <code>drawImage() </code> and <code>fillText() </code>. </p> <p>5. <strong>Styles and attributes</strong>: The 2D context also allows you to set styles and attributes for graphic elements. You can set stroke and fill colours, line thickness, typography and other attributes that affect the visual appearance of graphics. </p> <p>6. <strong>Animations</strong>: One of the advantages of HTML5 Canvas is its ability to create fluid and dynamic animations. This can be achieved by using techniques such as periodic updating of the canvas, the use of the <code>requestAnimationFrame() </code> method and the manipulation of the graphic elements in each frame. </p> <p>HTML5 Canvas offers a wide range of creative possibilities and is used in many areas, such as online games, data visualisations, interactive applications and generative graphics. It is a powerful tool for web development and gives developers complete control over the graphical representation in the browser. </p> <p>In this tutorial we are going to explain how to use the Canvas element to create a simple colour picker. 
</p> <p>We start with the basic HTML code of the page: </p> &nbsp; <pre> &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;utf-8&quot; /&gt; &lt;title&gt;Colorpicker demo&lt;/title&gt; &lt;/head&gt; &lt;body&gt; </pre> &nbsp; <p>We go on to define some CSS styles for the elements on the page. Styles are set for the body and a heading (h2), and a Google Fonts font called &quot;Open Sans&quot; is imported. </p> &nbsp; <pre> &lt;style&gt; @import url(https://fonts.googleapis.com/css?family=Open+Sans); body { margin: 0; padding: 0; background-color: #e6e6e6; } h2 { background-color: #dbdbdb; margin: 0; margin-bottom: 15px; padding: 10px; font-family: &#39;Open Sans&#39;; } /* Additional CSS for page elements */ &lt;/style&gt; </pre> &nbsp; <p>We continue with our heading indicating the purpose of the colour picker. </p> &nbsp; <pre> &lt;h2&gt;Canvas Color Picker&lt;/h2&gt; </pre> &nbsp; <p>Then we create the two elements that will display the selected colour in <strong>RGBA</strong> and <strong>HEX</strong> format: the identifiers <em>txtRgba</em> and <em>txtHex</em> will be used to update the values later from the JavaScript code. </p> <pre> &lt;div&gt;&lt;p&gt;Color in RGBA is: &lt;span id=&quot;txtRgba&quot;&gt;&lt;/span&gt;&lt;/p&gt;&lt;/div&gt; &lt;div&gt;&lt;p&gt;Color in HEX is: &lt;span id=&quot;txtHex&quot;&gt;&lt;/span&gt;&lt;/p&gt;&lt;/div&gt; </pre> &nbsp; <p>After that we add the colour swatch and the checkbox that toggles the picker: </p> <pre> &lt;label for=&quot;color-input&quot; id=&quot;color-label&quot; style=&quot;background-color: red&quot;&gt;&lt;/label&gt; &lt;input type=&quot;checkbox&quot; id=&quot;color-input&quot; checked&gt; </pre> &nbsp; <p>Here we create a <code>&lt;label&gt; </code> tag with the <em>color-label</em> identifier. This label is used as a visual sample of the selected colour. There is also a checkbox-type <code>&lt;input&gt; </code> with the <strong>color-input</strong> identifier, which is used to control the visibility of the colour picker. </p> <p>Next we create a <code>&lt;div&gt; </code> container with the <em>color-picker</em> identifier, which contains two <code>&lt;canvas&gt; </code> elements. 
The first <code>&lt;canvas&gt; </code> with the <em>color-block</em> identifier is used as the main canvas where the colour is selected. The second <code>&lt;canvas&gt; </code> with the <em>color-strip</em> identifier is used to display a colour strip to select the hue component of the colour. </p> &nbsp; <pre> &lt;div id=&quot;color-picker&quot;&gt; &lt;canvas id=&quot;color-block&quot; height=&quot;150&quot; width=&quot;150&quot;&gt;&lt;/canvas&gt; &lt;canvas id=&quot;color-strip&quot; height=&quot;150&quot; width=&quot;30&quot;&gt;&lt;/canvas&gt; &lt;/div&gt; </pre> &nbsp; <p>Now the fun begins... <br /> Let&#39;s see how our JavaScript works: </p> &nbsp; <pre> &lt;script type=&quot;text/javascript&quot;&gt; // This is where the JavaScript code block begins var colorBlock = document.getElementById(&#39;color-block&#39;); var ctx1 = colorBlock.getContext(&#39;2d&#39;); var width1 = colorBlock.width; var height1 = colorBlock.height; </pre> &nbsp; <p>These lines of code get the main canvas element with the identifier &quot;<em>color-block</em>&quot; from the HTML document. Then, the 2d context of the canvas is fetched using <code>getContext(&#39;2d&#39;) </code>. We also store the dimensions (width and height) of the canvas in the variables <em>width1</em> and <em>height1</em>. We continue: </p> &nbsp; <pre> var colorStrip = document.getElementById(&#39;color-strip&#39;); var ctx2 = colorStrip.getContext(&#39;2d&#39;); var width2 = colorStrip.width; var height2 = colorStrip.height; </pre> &nbsp; <p>As you can see, the code is similar to the previous one, but in this case we obtain the canvas element of the colour strip with the identifier &quot;<em>color-strip</em>&quot;. The 2d context of the canvas is obtained and the dimensions are stored in the variables <em>width2</em> and <em>height2</em>. 
</p> <p>Now we get the HTML document elements with the identifiers &quot;<em>color-label</em>&quot;, &quot;<em>txtRgba</em>&quot; and &quot;<em>txtHex</em>&quot; and store them in corresponding variables. These elements are used to display and update the selected colour values. </p> &nbsp; <pre> var colorLabel = document.getElementById(&#39;color-label&#39;); var txtRgba = document.getElementById(&#39;txtRgba&#39;); var txtHex = document.getElementById(&#39;txtHex&#39;); </pre> &nbsp; <p>Let&#39;s add the variables needed to track the position of the mouse on the canvas and to control whether it is dragging or not: x and y store the mouse coordinates, drag indicates whether the mouse is dragging and rgbaColor stores the initial value of the colour in RGBA format (red, green, blue and alpha). </p> &nbsp; <pre> var x = 0; var y = 0; var drag = false; var rgbaColor = &#39;rgba(255,0,0,1)&#39;; </pre> &nbsp; <p>And now we define the colour gradients on the canvases. In the <em>colorBlock</em> canvas, we draw a rectangle that covers the whole canvas and then call the <code>fillGradient() </code> function. </p> &nbsp; <pre> ctx1.rect(0, 0, width1, height1); fillGradient(); ctx2.rect(0, 0, width2, height2); var grd1 = ctx2.createLinearGradient(0, 0, 0, height1); grd1.addColorStop(0, &#39;rgba(255, 0, 0, 1)&#39;); grd1.addColorStop(0.17, &#39;rgba(255, 255, 0, 1)&#39;); grd1.addColorStop(0.34, &#39;rgba(0, 255, 0, 1)&#39;); grd1.addColorStop(0.51, &#39;rgba(0, 255, 255, 1)&#39;); grd1.addColorStop(0.68, &#39;rgba(0, 0, 255, 1)&#39;); grd1.addColorStop(0.85, &#39;rgba(255, 0, 255, 1)&#39;); grd1.addColorStop(1, &#39;rgba(255, 0, 0, 1)&#39;); ctx2.fillStyle = grd1; ctx2.fill(); </pre> &nbsp; <p>We add the function that is executed when you click on the colour strip canvas (<em>colorStrip</em>): when you click, we obtain the coordinates (<em>offsetX and offsetY</em>) of the point where you clicked. 
Then, the pixel colour corresponding to those coordinates is obtained using <code>getImageData() </code>. The result is stored in imageData, which is an object containing information about the RGBA components of the pixel. An rgbaColor string is constructed using these values and the <code>fillGradient() </code> function is called to update the colour in the main canvas. </p> &nbsp; <pre> function click(e) { x = e.offsetX; y = e.offsetY; var imageData = ctx2.getImageData(x, y, 1, 1).data; rgbaColor = &#39;rgba(&#39; + imageData[0] + &#39;,&#39; + imageData[1] + &#39;,&#39; + imageData[2] + &#39;,1)&#39;; fillGradient(); } </pre> &nbsp; <p>We create the function <code>fillGradient() </code> which draws the gradients on the main canvas (<em>colorBlock</em>) to represent the selected colour. First, the fill colour is set on ctx1 with the value of rgbaColor and a rectangle is drawn covering the whole canvas. Then, two linear gradients, grdWhite and grdBlack, are created using the canvas context of the colour strip (<em>ctx2</em>). These gradients are used to create a gradient effect on the main canvas, providing areas of black and white to adjust the brightness and contrast of the selected colour. </p> &nbsp; <pre> function fillGradient() { ctx1.fillStyle = rgbaColor; ctx1.fillRect(0, 0, width1, height1); var grdWhite = ctx2.createLinearGradient(0, 0, width1, 0); grdWhite.addColorStop(0, &#39;rgba(255,255,255,1)&#39;); grdWhite.addColorStop(1, &#39;rgba(255,255,255,0)&#39;); ctx1.fillStyle = grdWhite; ctx1.fillRect(0, 0, width1, height1); var grdBlack = ctx2.createLinearGradient(0, 0, 0, height1); grdBlack.addColorStop(0, &#39;rgba(0,0,0,0)&#39;); grdBlack.addColorStop(1, &#39;rgba(0,0,0,1)&#39;); ctx1.fillStyle = grdBlack; ctx1.fillRect(0, 0, width1, height1); } </pre> &nbsp; <p>The following functions are used to control the user&#39;s interaction with the main canvas (<strong>colorBlock</strong>). 
When the user presses the mouse button inside the canvas (<strong>mousedown</strong>), drag is set to true to indicate dragging. The <code>changeColor() </code> function is called to update the selected colour. </p> <p>During mousemove, if drag is true, <code>changeColor() </code>is called to update the selected colour while dragging the mouse. </p> <p>When the mouse button is released inside the canvas (<strong>mouseup</strong>), drag is set to false to indicate that dragging is finished. </p> &nbsp; <pre> function mousedown(e) { drag = true; changeColor(e); } function mousemove(e) { if (drag) { changeColor(e); } } function mouseup(e) { drag = false; } </pre> &nbsp; <p>Let&#39;s go ahead with the code for the <code>changeColor() </code> function used to update the selected colour when the user interacts with the main canvas. First, we get the coordinates of the point where the interaction occurred (<strong>offsetX and offsetY</strong>). Then, the corresponding pixel colour is obtained using <code>getImageData() </code>and the<em> rgbaColor</em> variable is updated. </p> <p>After that, the background colour of the <em>colorLabel</em> element is updated with the selected colour, the colour value is displayed in RGBA format in the txtRgba element and the colour is converted from RGBA to hexadecimal format using the <code>rgbaToHex() </code> function. The result is displayed in the txtHex element and is also printed to the console. </p> &nbsp; <pre> function changeColor(e) { x = e.offsetX; y = e.offsetY; var imageData = ctx1.getImageData(x, y, 1, 1).data; rgbaColor = &#39;rgba(&#39; + imageData[0] + &#39;,&#39; + imageData[1] + &#39;,&#39; + imageData[2] + &#39;,1)&#39;; colorLabel.style.backgroundColor = rgbaColor; txtRgba.innerHTML = rgbaColor; var hexColor = rgbaToHex(rgbaColor); console.log(hexColor); txtHex.innerHTML = hexColor; } </pre> &nbsp; <p>These next lines of code assign the event handlers to the canvas elements and the colour strip element. 
When the colour strip is clicked, the <code>click() </code> function is executed. When the mouse is pressed, released or moved within the main canvas, the corresponding functions ( <code>mousedown(), mouseup(), mousemove() </code>) are executed to control the interaction and update the selected colour. </p> <pre> colorStrip.addEventListener(&quot;click&quot;, click, false); colorBlock.addEventListener(&quot;mousedown&quot;, mousedown, false); colorBlock.addEventListener(&quot;mouseup&quot;, mouseup, false); colorBlock.addEventListener(&quot;mousemove&quot;, mousemove, false); </pre> &nbsp; <p>The <code>rgbaToHex() </code>function converts a colour in <strong>RGBA</strong> format to hexadecimal format. First, the R, G, B and A component values of the RGBA colour are extracted using regular expressions. Then, the R, G and B values are converted to hexadecimal format using <code>toString(16) </code> and <code>padStart(2, &#39;0&#39;) </code>to make sure they have two digits. Finally, the hexadecimal values are combined and the colour is returned in hexadecimal format. 
</p> &nbsp; <pre> function rgbaToHex(rgbaColor) { var values = rgbaColor.match(/\d+/g); var r = parseInt(values[0]); var g = parseInt(values[1]); var b = parseInt(values[2]); var a = parseFloat(values[3]); var hexR = r.toString(16).padStart(2, &#39;0&#39;); var hexG = g.toString(16).padStart(2, &#39;0&#39;); var hexB = b.toString(16).padStart(2, &#39;0&#39;); var hexColor = &#39;#&#39; + hexR + hexG + hexB; return hexColor; } </pre> &nbsp; <p>Here is all the code: </p> &nbsp; <pre> &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;utf-8&quot; /&gt; &lt;title&gt;Colorpicker demo&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;style&gt; @import url(https://fonts.googleapis.com/css?family=Open+Sans); body { margin: 0; padding: 0; background-color: #e6e6e6; } h2 { background-color: #dbdbdb; margin: 0; margin-bottom: 15px; padding: 10px; font-family: &#39;Open Sans&#39;; } #color-input { display: none; } #color-label { margin-left: 15px; position: absolute; height: 30px; width: 50px; } #color-input:checked ~ #color-picker { opacity: 1; } #color-picker { position: absolute; left: 70px; background-color: white; height: 150px; width: 185px; border: solid 1px #ccc; opacity: 0; padding: 5px; } canvas:hover { cursor: crosshair; } &lt;/style&gt; &lt;h2&gt;Canvas Color Picker&lt;/h2&gt; &lt;div&gt;&lt;p&gt;Color in RGBA is: &lt;span id=&quot;txtRgba&quot;&gt;&lt;/span&gt;&lt;/p&gt;&lt;/div&gt; &lt;div&gt;&lt;p&gt;Color in HEX is: &lt;span id=&quot;txtHex&quot;&gt;&lt;/span&gt;&lt;/p&gt;&lt;/div&gt; &lt;label for=&quot;color-input&quot; id=&quot;color-label&quot; style=&quot;background-color: red&quot;&gt;&lt;/label&gt; &lt;input type=&quot;checkbox&quot; id=&quot;color-input&quot; checked&gt; &lt;div id=&quot;color-picker&quot;&gt; &lt;canvas id=&quot;color-block&quot; height=&quot;150&quot; width=&quot;150&quot;&gt;&lt;/canvas&gt; &lt;canvas id=&quot;color-strip&quot; height=&quot;150&quot; width=&quot;30&quot;&gt;&lt;/canvas&gt; 
&lt;/div&gt; &lt;script type=&quot;text/javascript&quot;&gt; var colorBlock = document.getElementById(&#39;color-block&#39;); var ctx1 = colorBlock.getContext(&#39;2d&#39;); var width1 = colorBlock.width; var height1 = colorBlock.height; var colorStrip = document.getElementById(&#39;color-strip&#39;); var ctx2 = colorStrip.getContext(&#39;2d&#39;); var width2 = colorStrip.width; var height2 = colorStrip.height; var colorLabel = document.getElementById(&#39;color-label&#39;); var txtRgba = document.getElementById(&#39;txtRgba&#39;); var txtHex = document.getElementById(&#39;txtHex&#39;); var x = 0; var y = 0; var drag = false; var rgbaColor = &#39;rgba(255,0,0,1)&#39;; ctx1.rect(0, 0, width1, height1); fillGradient(); ctx2.rect(0, 0, width2, height2); var grd1 = ctx2.createLinearGradient(0, 0, 0, height1); grd1.addColorStop(0, &#39;rgba(255, 0, 0, 1)&#39;); grd1.addColorStop(0.17, &#39;rgba(255, 255, 0, 1)&#39;); grd1.addColorStop(0.34, &#39;rgba(0, 255, 0, 1)&#39;); grd1.addColorStop(0.51, &#39;rgba(0, 255, 255, 1)&#39;); grd1.addColorStop(0.68, &#39;rgba(0, 0, 255, 1)&#39;); grd1.addColorStop(0.85, &#39;rgba(255, 0, 255, 1)&#39;); grd1.addColorStop(1, &#39;rgba(255, 0, 0, 1)&#39;); ctx2.fillStyle = grd1; ctx2.fill(); function click(e) { x = e.offsetX; y = e.offsetY; var imageData = ctx2.getImageData(x, y, 1, 1).data; rgbaColor = &#39;rgba(&#39; + imageData[0] + &#39;,&#39; + imageData[1] + &#39;,&#39; + imageData[2] + &#39;,1)&#39;; fillGradient(); } function fillGradient() { ctx1.fillStyle = rgbaColor; ctx1.fillRect(0, 0, width1, height1); var grdWhite = ctx2.createLinearGradient(0, 0, width1, 0); grdWhite.addColorStop(0, &#39;rgba(255,255,255,1)&#39;); grdWhite.addColorStop(1, &#39;rgba(255,255,255,0)&#39;); ctx1.fillStyle = grdWhite; ctx1.fillRect(0, 0, width1, height1); var grdBlack = ctx2.createLinearGradient(0, 0, 0, height1); grdBlack.addColorStop(0, &#39;rgba(0,0,0,0)&#39;); grdBlack.addColorStop(1, &#39;rgba(0,0,0,1)&#39;); ctx1.fillStyle = grdBlack; 
ctx1.fillRect(0, 0, width1, height1); } function mousedown(e) { drag = true; changeColor(e); } function mousemove(e) { if (drag) { changeColor(e); } } function mouseup(e) { drag = false; } function changeColor(e) { x = e.offsetX; y = e.offsetY; var imageData = ctx1.getImageData(x, y, 1, 1).data; rgbaColor = &#39;rgba(&#39; + imageData[0] + &#39;,&#39; + imageData[1] + &#39;,&#39; + imageData[2] + &#39;,1)&#39;; colorLabel.style.backgroundColor = rgbaColor; txtRgba.innerHTML = rgbaColor; var hexColor = rgbaToHex(rgbaColor); console.log(hexColor); txtHex.innerHTML = hexColor; } colorStrip.addEventListener(&quot;click&quot;, click, false); colorBlock.addEventListener(&quot;mousedown&quot;, mousedown, false); colorBlock.addEventListener(&quot;mouseup&quot;, mouseup, false); colorBlock.addEventListener(&quot;mousemove&quot;, mousemove, false); function rgbaToHex(rgbaColor) { var values = rgbaColor.match(/\d+/g); var r = parseInt(values[0]); var g = parseInt(values[1]); var b = parseInt(values[2]); var a = parseFloat(values[3]); var hexR = r.toString(16).padStart(2, &#39;0&#39;); var hexG = g.toString(16).padStart(2, &#39;0&#39;); var hexB = b.toString(16).padStart(2, &#39;0&#39;); var hexColor = &#39;#&#39; + hexR + hexG + hexB; return hexColor; } &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; </pre> &nbsp; <p>I hope that this tutorial has demonstrated the great potential that exists in developing applications using Canvas. There are much more advanced applications, and even games are being developed using this technology. It is therefore a field worth exploring, as it offers the possibility to create amazing and surprising things. </p>
salvietta150x40
1,489,503
What is a vector database?
Vector databases are all the rage these days. The reason is simple: they've rapidly become a popular...
0
2023-06-02T11:52:27
https://blog.apify.com/what-is-a-vector-database/
ai, chatgpt, machinelearning, webscraping
--- title: What is a vector database? published: true date: 2023-04-25 11:21:45 UTC tags: ai, chatgpt, machinelearning, webscraping canonical_url: https://blog.apify.com/what-is-a-vector-database/ --- Vector databases are all the rage these days. The reason is simple: they've rapidly become a popular way to add long-term memory to LLMs such as GPT-4, LaMDA, and LLaMA. Learn how vector databases can store ML embeddings to integrate with tools like ChatGPT. ## **What are vectors?** In the context of AI and machine learning, particularly large language models, vector databases are really hot right now! People are [investing in vector databases](https://venturebeat.com/data-infrastructure/the-vector-database-is-a-new-kind-of-database-for-the-ai-era/?ref=blog.apify.com) like crazy. But what are they? Before I answer that, I better explain vectors. Thankfully, this part is quite simple. A vector is an array of numbers like this: **[0, 1, 2, 3, 4]** Doesn't seem very impressive, does it? But what's really cool about these numbers is that they can represent more complex objects such as words, sentences, images, and audio files in an **embedding**. What is an [embedding](https://developers.google.com/machine-learning/crash-course/embeddings/video-lecture?ref=blog.apify.com), you ask? In the context of large language models, embeddings represent text as a dense vector of numbers to capture the meaning of words. They map together words with similar semantic meaning, or similar features in just about any other data type. These embeddings can then be used for search engines, recommendation systems, and generative AIs such as **ChatGPT**. {% embed https://blog.apify.com/chatgpt-web-scraping/ %} The question is, where do you store these embeddings, and how do you query them quickly? **Vector databases** are the answer. 
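Before going further, here is a minimal, illustrative sketch in Python of what a vector store does at its core: keep embeddings, and rank them by similarity to a query vector. The toy embeddings and the brute-force scan are assumptions for illustration only, not any particular vector database's API.

```python
import math

# Toy in-memory "vector store": document ids mapped to made-up 3-dimensional embeddings.
records = {
    "doc_cat": [0.9, 0.1, 0.0],
    "doc_dog": [0.8, 0.2, 0.1],
    "doc_car": [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 means same direction, 0.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def query(vector, top_k=2):
    """Brute-force nearest-neighbour search over every stored embedding."""
    ranked = sorted(records, key=lambda doc_id: cosine_similarity(vector, records[doc_id]), reverse=True)
    return ranked[:top_k]

# A query vector close to the "animal" embeddings ranks those documents first.
print(query([0.85, 0.15, 0.05]))
```

A production system differs mainly in scale: millions of high-dimensional embeddings and an index, rather than a linear scan, to keep queries fast.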
These databases contain arrays of numbers clustered together based on similarity, which can be queried with [ultra-low latency](https://www.cisco.com/c/en/us/solutions/data-center/data-center-networking/what-is-low-latency.html?ref=blog.apify.com). In other words, vector databases index vectors for easy search and retrieval by comparing values and finding those that are most similar to one another. That makes vector databases ideal for AI-driven applications. > [**Building functional AI models for web scraping**](https://blog.apify.com/building-functional-ai-models-for-web-scraping/) ## **Why are vector databases important for LLMs?** The main reason vector databases are in vogue is that they can extend large language models with long-term memory. You begin with a general-purpose model, like **GPT-4**, **LLaMA**, or **LaMDA**, but then you provide your own data in a vector database. When a user gives a prompt, you can query relevant documents from your database to update the context, which will customize the final response. What's more, vector databases integrate with tools like **LangChain** that combine multiple LLMs together. > [**➡️ What is LangChain?**](https://blog.apify.com/what-is-langchain/) ## **What are some examples of vector databases?** Here are a few of the top vector databases around, but things are moving so fast in AI, who knows how quickly this list might change? ### **Pinecone** [**Pinecone**](https://www.pinecone.io/?ref=blog.apify.com) is a very popular but **closed-source** vector database for machine learning applications. Once you have vector embeddings, you can manage and search through them in Pinecone to power semantic search, recommenders, and other applications that rely on relevant information retrieval. ### **Chroma** [**Chroma**](https://www.trychroma.com/?ref=blog.apify.com) is an AI-native, **open-source** embedding database based on [ClickHouse](https://github.com/ClickHouse/ClickHouse?ref=blog.apify.com) under the hood. 
It's a vector store designed from the ground up to make it easy to build AI applications with embeddings. ### **Weaviate and Milvus** I've put [**Weaviate**](https://weaviate.io/?ref=blog.apify.com) and [**Milvus**](https://milvus.io/?ref=blog.apify.com) together because both are **open-source** options written in [Go](https://go.dev/?ref=blog.apify.com). Both allow you to store data objects and vector embeddings generated by machine learning models and scale them. > [➡️ **What is data ingestion for large language models?**](https://blog.apify.com/what-is-data-ingestion-for-large-language-models/) ## **How to feed your vector database** That's all well and good, but you can't do much with a vector database if you don't have data in the first place, right? So now it's time to present a great tool for feeding your vector databases: [**Website Content Crawler**](https://apify.com/apify/website-content-crawler?ref=blog.apify.com). **Website Content Crawler** (let's just call it WCC for brevity) was specifically designed to extract web data for feeding, fine-tuning, or training large language models. It automatically removes headers, footers, menus, ads, and other noise from web pages in order to return only the text content that can be directly fed to the models. WCC has a simple input configuration. That means it can be easily integrated into customer-facing products. Customers can enter just the URL of the website they want to be indexed by LLMs. The results can be retrieved via API in formats such as JSON or CSV, which can be fed directly into your vector database or language model. You can find out more in the README, which contains examples of [how Website Content Crawler works](https://apify.com/apify/website-content-crawler?ref=blog.apify.com#how-does-it-work). You can also [integrate Website Content Crawler with LangChain](https://docs.apify.com/platform/integrations/langchain?ref=blog.apify.com). 
{% embed https://apify.com/data-for-generative-ai?ref=blog.apify.com %} WCC isn't the only tool suitable for LLMs. If you want to see what other GPT and AI-enhanced tools you could use to feed your vector databases, take your pick from [Apify Store](https://apify.com/store/categories/ai?ref=blog.apify.com). > [**➡️ How to use GPT Scraper to let ChatGPT access the internet**](https://blog.apify.com/gpt-scraper-chatgpt-access-internet/)
theovasilis1
1,491,735
How to build a simple template engine with Python and regex
Prologue As I mentioned previously I want to create a static content creation system. The first step is a template engine. Rather than building a fully featured template engine, I am planning only what is needed in this major iteration. I have al...
0
2023-06-04T23:08:11
https://dev.to/birnadine/how-to-built-a-simple-template-engine-with-python-and-regex-122b
## Prologue As I mentioned [previously](https://fyi.birnadine.guru/so-i-joined-the-time-complexity-cult) I want to create a static content creation system. The first step is a template engine. Rather than building a fully featured template engine, I am planning only what is needed in this major iteration. **I have also saved some bonus features for later major iterations 🎊.** With that being said, this major iteration (namely v1.0.0) will have 2 basic features: 1. Including external templates into another, OR Inheritance, I guess 🤔 2. Looping over a dataset to produce multiple pages Before anything, we should decide on the syntax. The generic form I have decided on looks like... ```plaintext { macro_name macro_parameter } ``` Without further ado, let's go 🏃‍♀️ ### 1\. Including external templates into another For this, the syntax would look like this to embed another page called `index.html` into `base.html` * *base.html* ```html <html> <head>...</head> <body> <!-- some generic content --> { include content.main } </body> </html> ``` * *index.html* ```xml <h1>Welcome to SPC</h1> ``` So what I want to do is to read through *base.html* and replace the line when a `{ }` macro is encountered. We could do this in many different ways, but an easy one is **the regex** way. > ***regex*** stands for Regular Expression Using regex with Python is much simpler than other languages make it seem. If you want me to do a quick swing-by of regex with *Python*, please let me know in the comments. So to *substitute* the template we would do something like ```python import re # import the standard regex library pattern = r'{\s?\w+\s(\w+.\w+)\s?}' # regex pattern to search for specimen = """ <html> <head>...</head> <body> <!-- some generic content --> { include content.main } </body> </html> """ replace = "<h1>Welcome to SPC</h1>" parsed_str = re.sub(pattern, replace, specimen) # using .sub() from library ``` Now if we write `parsed_str` to a file, it will be the page we intended. 
Now, let's encapsulate it into a function for modularity and to be DRY. Thus, the function would be ```python def eval_include(specimen, replacement): global pattern return re.sub(pattern, replacement, specimen) ``` > If you are disgusted by the `global` keyword, just so you know, I am coming from assembly language and Cheat-Engine 😜, I am pretty comfortable with it. Now, an end user might use the library like ```python from os.path import realpath from mathew.macros import eval_include base = "" with open(realpath("templates/base.html"), "r") as b: base = b.read() index_template = "" with open(realpath("templates/index.html"), "r") as i: index_template = i.read() with open(realpath("out/index.html"), "w") as i: i.write( eval_include(base, index_template) # do the templating magic 🧙‍♂️ ) ``` The parsed page can be found in the `out/` dir. File discovery and all other stuff will be automated later. For now, let's just focus on one thing. ### 2\. Looping over a dataset to produce multiple pages Let's say, we have a list of article titles to display on the homepage of the blog page. E.g. 1. *pubslist.html* ```xml <section> <h2>Patrician Publications</h2> { include pubslistitem.html } </section> ``` 2. *pubslistitem.html* ```xml <article> <h4>{ eval pubs.title}</h4> <span>{eval pubs.cat }</span> <p>{ eval pubs.sum }</p> </article> ``` 3. and the *dataset* ```json {"pubs": [ {"title": "Some 404 content", "cat": "kavik", "sum": "Summary 501"}, {"title": "Some 403 content", "cat": "eric", "sum": "Summary 502"}, {"title": "Some 402 content", "cat": "beric", "sum": "Summary 503"}, {"title": "Some 401 content", "cat": "manuk", "sum": "Summary 504"} ]} ``` The dataset can be mapped to Python's *dict* without any logic. The difference from embedding another template is that evaluating a variable creates many pages: we replace the data in the template for each record, then embed the end-string into the destination template. Let's do it, shall we? 
For evaluating the variable, we could use the **Groups** feature in the regex. That's what the `()` around the `\w+.\w+` in the pattern is for. We can easily access the matched string slice by the `.group()` method on the `match` object returned by `re` lib-functions. ```python str_1 = "Hello 123" pattern = r'\w+\s(\d+)' digits = re.finditer(pattern, str_1) # returns an iterator of `match` objects for digit in digits: print(digit.group(1)) # 123 ``` > Notice we are calling for *1*, not *0*. Not that the lib is 1-indexed; it is 0-indexed, but the 0th group is the entire matched str, "Hello 123" Remember the `.sub()` method, its second parameter accepts either `str` or a `callable`. This callable gets a `match` object as an argument for each match the pattern finds. So we can produce dynamic replacements based on each `match` like... ```python # retrieving the key from the template string key = m.group(1) # == "pubs.title" key = key.split(".") # == ["pubs", "title"] key = key[1] # == "title" # evaluating the variable with the i-th record from the dataset re.sub( pattern, # the pattern lambda m: dataset["pubs"][i][key], # the replacement callable template # the template string ) ``` > If `lambda` is mysterious for you, it is a way to define an ***anonymous*** or ***inline*** function in python Defining the functions for the lib API: ```python # map each record def __eval_map(string, data): global pattern return re.sub( pattern, lambda m: data[m.group(1).split(".")[1]], string ) # parse the batch of dataset def parse_template(template, data): return [ __eval_map(template, datum) for datum in data ] ``` > `parse_template` returns aggregated results using ***list comprehension*** syntax, if you are unfamiliar with the syntax let me know in the comment So accessing the key to evaluate is just as breezy as... 
```python from os.path import realpath from mathew.macros import parse_template, eval_include base = "" with open(realpath("templates/base.html"), "r") as b: base = b.read() specimen = """ <article> <h4>{ eval pubs.title}</h4> <span>{eval pubs.cat }</span> <p>{ eval pubs.sum }</p> </article> """ dataset = { "pubs": [ {"title": "Some 404 content", "cat": "kavik", "sum": "Summary 501"}, {"title": "Some 403 content", "cat": "eric", "sum": "Summary 502"}, {"title": "Some 402 content", "cat": "beric", "sum": "Summary 503"}, {"title": "Some 401 content", "cat": "manuk", "sum": "Summary 504"}, ], } # parse each `<article>` tag for each list item parsed_str = parse_template(specimen, dataset["pubs"]) # join the `<article>` tag-group pubs_list_items = "".join(parsed_str) pubs_list_template = "" with open(realpath("templates/pubslist.html"), "r") as p: pubs_list_template = p.read() # parse the `pubs_list` itself parsed_list = eval_include(pubs_list_template, pubs_list_items) # write the final file with base with open(realpath("out/pubs.html"), "w") as i: i.write( eval_include(base, parsed_list) ) ``` The final `pubs.html` will be in the `out/` directory. ## Done? Not quite so. Did you notice that we still have to read the template strings manually, populate the data in a specific format, and run the parsing manually? These are for later. For now, we have a simple working template engine that does the job I intended it for. I am happy with it. Another thing keen eyes might have noticed: the `macro_name` in the template does nothing; in fact, if you swap `include` with `eval` or anything else, as long as the latter part is valid, the script does its job. This is bad design, but the worst part is that our `eval_include` allows only one template. Gotta fix that! ## Epilogue I guess I don't have anything further, so this is BE signing off. Cover by **Suzy Hazelwood**
birnadine
1,491,745
End to end testing concept
End-to-end tests are the most comprehensive type of test; they verify the system from end to end, covering every...
0
2023-06-04T23:23:16
https://dev.to/mossi4476/end-to-end-testing-concept-9oa
testing
An end-to-end test is the most comprehensive type of test: it verifies the system from end to end, covering every component involved. Its main characteristics are: It checks the system's entire workflow from start to finish, from the user interface down to the database. It uses real browsers or applications to perform the steps in the flow. It requires launching the full system environment, such as the application, APIs, and database. It can detect problems in the integration between different components. For example: The user accesses the website through a browser and interacts with the user interface. The website calls an API, and the API queries the database. We check whether the returned result matches expectations. In this way, an end-to-end test covers the entire interaction of the end user with the system, ensuring completeness and correct integration between components. However, it also takes more time to run than other types of tests. End-to-end tests are usually performed last, to verify that the whole system works as expected.
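The browser → API → database chain described above can be sketched in miniature. Everything here is a fake built purely for illustration; a real end-to-end test would drive an actual browser (for example with a tool such as Selenium or Cypress) against a fully running system.

```python
# Minimal illustration of the end-to-end chain described above:
# UI layer -> API layer -> database, then assert on the final result.
# All three layers here are fakes standing in for real components.

database = {"users": [{"id": 1, "name": "Alice"}]}

def api_get_user(user_id):
    # API layer: queries the database.
    for user in database["users"]:
        if user["id"] == user_id:
            return user
    return None

def ui_render_profile(user_id):
    # UI layer: calls the API and renders a page for the user.
    user = api_get_user(user_id)
    return "<h1>{}</h1>".format(user["name"]) if user else "<h1>Not found</h1>"

def test_end_to_end_profile_page():
    # Drive the flow from the "user's" side and check the final output.
    assert ui_render_profile(1) == "<h1>Alice</h1>"
    assert ui_render_profile(2) == "<h1>Not found</h1>"

test_end_to_end_profile_page()
```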
mossi4476
1,492,894
Different Types of JavaSript Functions
1. Function Declarations: function add(a, b) { return a + b; }...
0
2023-06-05T21:38:08
https://dev.to/gaurbprajapati/different-types-of-javasript-functions-4pn7
javascript, webdev, programming, tutorial
**1. Function Declarations:** ```javascript function add(a, b) { return a + b; } ``` Function declarations define a named function with the `function` keyword, followed by the function name and a set of parentheses for parameters. They are hoisted to the top of their scope, meaning they can be called before they are defined in the code. **2. Function Expressions:** ```javascript const multiply = function(a, b) { return a * b; }; ``` Function expressions involve assigning a function to a variable. They are created using the `function` keyword, followed by optional function name (anonymous if omitted), and a set of parentheses for parameters. Function expressions are not hoisted and can only be called after they are defined. **3. Arrow Functions:** ```javascript const square = (x) => { return x * x; }; ``` Arrow functions provide a concise syntax for writing functions. They use the arrow (`=>`) operator and automatically bind `this` to the surrounding context. They are commonly used for anonymous functions and have an implicit return if the body is a single expression. **4. Immediately Invoked Function Expressions (IIFE):** ```javascript (function() { // Code executed immediately })(); ``` IIFEs are self-invoking functions that are executed immediately after they are defined. They are enclosed within parentheses to ensure the function is treated as an expression. IIFEs are commonly used to create a new scope, encapsulate code, and avoid polluting the global namespace. **5. Methods:** ```javascript const obj = { name: 'John', greet: function() { console.log(`Hello, ${this.name}!`); } }; obj.greet(); // Output: Hello, John! ``` Methods are functions that are defined as properties of objects. They can be called using the object reference followed by the dot notation. Methods have access to the object's properties and can use the `this` keyword to refer to the object. **6. 
Callback Functions:** ```javascript function doSomething(callback) { // Perform some tasks // Invoke the callback function callback(); } function callbackFunction() { console.log("Callback function executed!"); } // Pass callbackFunction as a callback to doSomething doSomething(callbackFunction); ``` Callback functions are functions passed as arguments to other functions. They are invoked by the receiving function at a specific time or when a certain event occurs. Callback functions are commonly used for handling asynchronous operations, event handling, and functional programming patterns. **7. Anonymous Function with Function Expression:** ```javascript const greet = function(name) { console.log(`Hello, ${name}!`); }; greet('John'); // Output: Hello, John! ``` In this example, the `greet` variable is assigned an anonymous function using a function expression. The function takes a `name` parameter and logs a greeting to the console. It can be called using the `greet` variable. **8. Anonymous Function as a Callback:** ```javascript function calculate(a, b, operation) { const result = operation(a, b); console.log(`Result: ${result}`); } calculate(5, 3, function(x, y) { return x * y; }); // Output: Result: 15 ``` Here, the `calculate` function takes three arguments: `a`, `b`, and `operation`, where `operation` is a callback function. The callback function is defined anonymously as a parameter to the `calculate` function and multiplies `a` and `b`. It is invoked within the `calculate` function to perform a specific operation. These are some of the common types of functions in JavaScript. Understanding their differences and use cases can help you write clean, modular, and maintainable code.
gaurbprajapati
1,493,269
Hard Reset iPhone SE
Quickly press the volume up button Quickly press the volume down button Press and hold the side...
0
2023-06-06T07:46:08
https://dev.to/x2i/hard-reset-iphone-se-182p
iphone, force, reboot, hardreset
- Quickly press and release the volume up button - Quickly press and release the volume down button - Press and hold the side (power) button until the Apple logo appears, then release. This forces the phone to reboot
x2i
1,493,367
MBaaS: Mobile Backend as a Service
MBaaS stands for Mobile Backend as a Service. It is a cloud computing service model that provides...
23,271
2023-06-06T10:48:26
https://dev.to/sardarmudassaralikhan/mbaas-mobile-backendas-a-service-36lm
cloud, cloudcomputing, azure, aws
MBaaS stands for Mobile Backend as a Service. It is a cloud computing service model that provides developers with a backend infrastructure to support mobile application development. Traditionally, developing the backend infrastructure for mobile applications required setting up servers, managing databases, and implementing various backend functionalities like user management, data storage, push notifications, and integrations with third-party services. This process can be complex and time-consuming. MBaaS platforms simplify this process by offering pre-built backend services and infrastructure, which developers can leverage to quickly develop and deploy mobile applications. These platforms typically provide APIs and SDKs (Software Development Kits) that allow developers to integrate their mobile apps with the backend services seamlessly. Here are some common features and benefits of using MBaaS: 1. Backend Services: MBaaS provides ready-to-use backend services, such as user authentication, data storage, file management, push notifications, geolocation, social media integration, and more. 2. Scalability: MBaaS platforms are designed to handle scalability, allowing applications to grow without the need for significant backend infrastructure adjustments. 3. Cross-Platform Support: MBaaS platforms often support multiple mobile operating systems, enabling developers to build apps for iOS, Android, and other platforms using a unified backend. 4. Faster Development: By abstracting the backend complexities, MBaaS reduces development time and effort, allowing developers to focus more on the frontend and core app functionality. 5. Cost Savings: MBaaS eliminates the need for investing in and managing dedicated backend infrastructure, reducing operational costs for developers and organizations. 6. Simplified Maintenance: MBaaS providers handle backend infrastructure maintenance, including updates, security patches, and server management, freeing developers from these tasks. 
Popular MBaaS platforms include Firebase (owned by Google), AWS Amplify (Amazon Web Services), Microsoft Azure Mobile Apps, and Backendless, among others. These platforms offer a wide range of features and services, making it easier for developers to build robust and scalable mobile applications without having to develop and maintain the entire backend infrastructure from scratch.
sardarmudassaralikhan
1,493,408
A Smoother User Experience with Image Caching in .NET MAUI
Image caching is a technique that stores images in cache memory to improve an app’s performance. If...
0
2023-06-06T13:17:42
https://www.syncfusion.com/blogs/post/image-caching-in-dotnet-maui.aspx
dotnetmaui, development, mobile
--- title: A Smoother User Experience with Image Caching in .NET MAUI published: true date: 2023-06-06 11:15:00 UTC tags: dotnetmaui, development, mobile canonical_url: https://www.syncfusion.com/blogs/post/image-caching-in-dotnet-maui.aspx cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b33l7w1538e43w32v06e.png --- Image caching is a technique that stores images in cache memory to improve an app’s performance. If we search for an image, the app first looks for the image in the cache. If the image is found in the cache, the application will not try to load the image from the source. The image cache is significant because it helps reduce the app’s loading time and data usage. By caching images, the app can retrieve them more efficiently, which leads to a better user experience. In this blog, we’ll see how to implement the image caching feature in Syncfusion’s [.NET MAUI Avatar View](https://www.syncfusion.com/maui-controls/maui-avatarview ".NET MAUI Avatar View control") control. ## Image caching in .NET MAUI In .NET MAUI, **ImageSource** is a default feature that caches downloaded images for a day. The **UriImageSource** class provides properties to customize the image caching behavior. The **Uri** property specifies the image’s URI, and the **CacheValidity** property sets the image’s local storage duration. The default value for **CacheValidity** is one day. **CachingEnabled** is another property that toggles image caching on or off, with the default value being **true**. Refer to the following code example. ```xml <Image> <Image.Source> <UriImageSource Uri="https://www.syncfusion.com/blogs/wp-content/uploads/2022/06/Introducing-.NET-MAUI-Avatar-View-Control-thegem-blog-justified.png" CacheValidity="10" /> </Image.Source> </Image> ``` ## Implement image caching in .NET MAUI Avatar View The .NET MAUI Avatar View is a graphical representation of a user’s image. 
It allows you to customize the view by adding images, background color, icons, text, and more. You can also display useful information such as initials and status. Developers can easily create an Avatar View by customizing the prebuilt vector images to meet their specific requirements. This control can be utilized in various apps such as social media, messaging, and email clients, where user profiles play a vital role. Let's see the steps to add the Syncfusion Avatar View control to your .NET MAUI app and implement the image caching feature in it. ### Step 1: Create a .NET MAUI app First, [create a .NET MAUI application](https://learn.microsoft.com/en-us/dotnet/maui/get-started/first-app?tabs=vswin&pivots=devices-android "Build your first .NET MAUI app documentation"). ### Step 2: Add .NET MAUI Avatar View reference The Syncfusion .NET MAUI controls are available on [NuGet.org](https://www.nuget.org/ "NuGet Gallery"). To add the .NET MAUI Avatar View to your project, open the **NuGet Package Manager** in [Visual Studio](https://visualstudio.microsoft.com/ "Visual Studio"), search for [Syncfusion.Maui.Core](https://www.nuget.org/packages/Syncfusion.Maui.Core/ "Syncfusion.Maui.Core NuGet Package"), and then install it. ### Step 3: Register the handler **Syncfusion.Maui.Core** NuGet is a dependent package for all Syncfusion .NET MAUI controls. In the **MauiProgram.cs** file, register the handler for Syncfusion core. ```csharp public static class MauiProgram { public static MauiApp CreateMauiApp() { var builder = MauiApp.CreateBuilder(); builder .UseMauiApp<App>() .ConfigureFonts(fonts => { fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular"); fonts.AddFont("OpenSans-Semibold.ttf", "OpenSansSemibold"); }); builder.ConfigureSyncfusionCore(); return builder.Build(); } } ``` ### Step 4: Add the namespace Add the **Syncfusion.Maui.Core** namespace on your XAML page. 
```xml xmlns:syncfusion="clr-namespace:Syncfusion.Maui.Core;assembly=Syncfusion.Maui.Core" ``` ### Step 5: Initialize the .NET MAUI Avatar View control Then, initialize the .NET MAUI Avatar View control using the following code. ```xml <syncfusion:SfAvatarView /> ``` ### Step 6: Load custom image in .NET MAUI Avatar View You can add a custom image in the Avatar View using the **ImageSource** property and by setting **Custom** as the value in the **ContentType** property. Refer to the following code example. ```xml <syncfusion:SfAvatarView ImageSource="avatarviewimage.png" ContentType="Custom" VerticalOptions="Center" HorizontalOptions="Center" HeightRequest="200" WidthRequest="400" /> ``` ### Step 7: Implement image caching in .NET MAUI Avatar View Finally, implement the image caching feature in the .NET MAUI Avatar View using the **ImageSource** property for optimal image handling. ```xml <syncfusion:SfAvatarView ContentType="Custom" VerticalOptions="Center" HorizontalOptions="Center" HeightRequest="200" WidthRequest="400"> <syncfusion:SfAvatarView.ImageSource> <UriImageSource CachingEnabled="True" Uri="https://www.syncfusion.com/blogs/wp-content/uploads/2022/06/Introducing-.NET-MAUI-Avatar-View-Control-thegem-blog-justified.png" CacheValidity="10" /> </syncfusion:SfAvatarView.ImageSource> </syncfusion:SfAvatarView> ``` Now, you can quickly search for images and retrieve them without reloading them from the remote source in the .NET MAUI Avatar View control. ## Reference For more details, refer to the project on [GitHub](https://github.com/SyncfusionExamples/maui-avatarview-samples/tree/master/AvatarViewImageCache "Image caching in .NET MAUI using Avatar View GitHub demo"). ## Conclusion Thanks for reading! In this blog, we’ve seen how to implement the image caching feature in the [.NET MAUI Avatar View](https://www.syncfusion.com/maui-controls/maui-avatarview ".NET MAUI Avatar View control") control for optimal image handling. 
To try out the Avatar View control, download our Essential Studio for [.NET MAUI](https://www.syncfusion.com/maui-controls ".NET MAUI controls"). If you are not a Syncfusion customer, you can use our 30-day [free trial](https://www.syncfusion.com/downloads "Free evaluation of the Essential Studio products") to see how our components can benefit your projects. We encourage you to check out our .NET MAUI controls’ [demos on GitHub](https://github.com/syncfusion/maui-demos "Syncfusion .NET MAUI controls' GitHub demos") and share your feedback or questions in the comments below. You can also contact us through our [support forum](https://www.syncfusion.com/forums "Syncfusion Support Forum"), [support portal](https://support.syncfusion.com/ "Syncfusion Support Portal"), or [feedback portal](https://www.syncfusion.com/feedback/home "Syncfusion Feedback Portal"). Our team is always ready to assist you! ## Related blogs - [Time Regions in .NET MAUI Scheduler—An Overview](https://www.syncfusion.com/blogs/post/time-regions-dotnet-maui-scheduler.aspx "Blog: Time Regions in .NET MAUI Scheduler—An Overview") - [Easily Replicate a Sign-Up UI in .NET MAUI](https://www.syncfusion.com/blogs/post/replicate-sign-up-ui-dotnet-maui.aspx "Blog: Easily Replicate a Sign-Up UI in .NET MAUI") - [Create a Place Explorer App Using .NET MAUI and ChatGPT](https://www.syncfusion.com/blogs/post/place-explorer-app-using-dotnet-maui-chatgpt.aspx "Blog: Create a Place Explorer App Using .NET MAUI and ChatGPT") - [Chart of the Week: Creating a .NET MAUI Inversed Column Chart to Visualize Meta Reality Labs’s Yearly Operating Loss](https://www.syncfusion.com/blogs/post/dotnet-maui-inversed-column-chart-visualize-yearly-operating-loss-data.aspx "Blog: Chart of the Week: Creating a .NET MAUI Inversed Column Chart to Visualize Meta Reality Labs’s Yearly Operating Loss")
jollenmoyani
1,493,647
Navigating the Data Universe: Unleashing the Power of Cypher Query Language
In the vast landscape of data exploration, Cypher stands tall as a specialized query language that...
0
2023-06-06T15:30:24
https://dev.to/abmanan/navigating-the-data-universe-unleashing-the-power-of-cypher-query-language-ine
In the vast landscape of data exploration, Cypher stands tall as a specialized query language that opens doors to the realm of graph databases. With its expressive syntax and powerful traversal capabilities, Cypher empowers developers and data analysts to navigate the intricacies of interconnected data. In this blog post, we'll embark on a journey through the Cypher query language, discovering its unique features and exploring how it can be harnessed to uncover valuable insights and patterns in graph databases. ## Syntax: Simple, Yet Expressive One of the distinctive strengths of Cypher lies in its simplicity and intuitiveness. Its syntax is designed to resemble natural language patterns, making it easy to understand and write queries. Cypher queries revolve around the concept of patterns, allowing users to specify relationships, nodes, and properties in a concise and human-readable manner. This elegant syntax enables users to focus on the data relationships rather than complex SQL-like join operations, making it a powerful tool for both beginners and experienced data professionals. ## Traversing the Graph: Unveiling Relationships Cypher's true power emerges when it comes to graph traversal and pattern matching. By leveraging Cypher's graph pattern syntax, users can effortlessly navigate through the interconnected nodes and relationships in a graph database. Whether you're interested in finding specific paths between entities, identifying common neighbors, or uncovering complex patterns, Cypher provides the tools to express these queries in a natural and elegant way. Traversing the graph using Cypher allows for a holistic understanding of the connections and relationships that underpin your data, unlocking a deeper level of insights. ## Aggregations and Transformations: Unveiling Hidden Patterns Beyond traversing the graph, Cypher offers a wide array of powerful functions and operators for aggregating and transforming data. 
Whether it's counting occurrences, calculating averages, or applying filtering conditions, Cypher provides the means to perform complex computations and derive meaningful insights. Additionally, Cypher supports advanced graph algorithms, enabling users to leverage community detection, path finding, and centrality measures to extract valuable information from the graph. These capabilities allow for the discovery of hidden patterns, identification of influential nodes, and the ability to detect anomalies or clusters within the data. ## Unlocking New Possibilities: Use Cases for Cypher Cypher's expressive nature and graph-centric features make it a perfect fit for a variety of use cases. From social network analysis and recommendation engines to fraud detection and knowledge graph exploration, Cypher empowers data practitioners to extract actionable insights from highly connected data. By leveraging Cypher's capabilities, organizations can enhance personalization, identify fraud patterns, optimize network structures, and derive intelligence from their interconnected datasets. ## Conclusion: As we navigate the ever-expanding universe of data, Cypher serves as a guiding star, illuminating the intricacies of graph databases and facilitating the discovery of hidden relationships and patterns. With its expressive syntax, graph traversal capabilities, and advanced functions, Cypher provides a powerful toolkit for unlocking valuable insights and exploring the depths of interconnected data. By embracing Cypher's elegance and harnessing its potential, we can navigate the data universe with confidence, uncovering knowledge and making informed decisions that propel our data-driven endeavors to new heights. Disclaimer: While this blog post was created with the assistance of AI, it's important to clarify the collaborative nature of its development. The AI served as a valuable tool by offering suggestions and aiding in generating the text. 
However, the overall ideas, concepts, and structure of the blog were conceived and crafted by me, as a human writer.

Check out Apache AGE, an extension for PostgreSQL that lets you build graph databases using SQL and the Cypher language on top of a relational database.

- [https://age.apache.org/](https://age.apache.org/)
- [https://github.com/apache/age](https://github.com/apache/age)
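As a concrete taste of the pattern syntax discussed in this post, here is a small illustrative query. The `Person` label, the `KNOWS` relationship, and the name `'Alice'` are hypothetical, invented for this sketch rather than taken from any particular dataset:

```cypher
// Find people Alice doesn't know yet, ranked by how many mutual friends they share.
MATCH (me:Person {name: 'Alice'})-[:KNOWS]->(friend)-[:KNOWS]->(foaf)
WHERE me <> foaf AND NOT (me)-[:KNOWS]->(foaf)
RETURN foaf.name AS suggestion, count(friend) AS mutualFriends
ORDER BY mutualFriends DESC
```

The ASCII-art arrows declare nodes and relationships directly, with no join tables in sight, which is exactly the natural-language quality of the syntax that makes Cypher approachable.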
abmanan
1,493,656
Building a Real-time API with Next.js, Nest.js, and Docker: A Comprehensive Guide
In this blog post, I will guide you through the process of creating a real-time API using the...
0
2023-06-07T14:09:58
https://dev.to/deepak22448/building-a-real-time-api-with-nextjs-nestjs-and-docker-a-comprehensive-guide-3l6l
webdev, api, typescript, docker
In this blog post, I will guide you through the process of creating a real-time API using the powerful combination of Next.js, Nest.js, and Docker. We will start by building a simple UI and demonstrate how to listen for server changes in real-time from the frontend. Additionally, I will show you how to leverage Docker to containerize your application. As an added bonus, you'll learn how to utilize custom React hooks to enhance the functionality and efficiency of your application. Join me on this step-by-step journey as we explore the world of real-time APIs, containerization, and React hooks.

**Check out the [source code](https://github.com/Deepak22448/realtime-api)**

###Step 1: Installing Packages and Containerizing the Backend.
###Step 2: Writing Backend APIs for the Frontend.
###Step 3: Installing Packages and Containerizing the Frontend.
###Step 4: Listening to Real-time Updates on the Frontend.

Let's kickstart this exciting journey by diving into the code.

##Step 1: Containerize the Backend.

- Install the NestJS CLI if it is not already installed:
```
npm i -g @nestjs/cli
```
- Create a new project using the NestJS CLI:
```
nest new project-name
```
- Create a Dockerfile in the root of the folder and paste the following code. Do not worry, I will explain each line step by step.
```
FROM node:16

WORKDIR /app

COPY . .

RUN yarn install

RUN yarn build

EXPOSE 3001

CMD [ "yarn", "start:dev" ]
```
- `FROM node:16` in a Dockerfile specifies the base image to use for the container as Node.js version 16.
- `WORKDIR /app` in a Dockerfile sets the working directory inside the container to /app.
- `COPY . .` in a Dockerfile copies all the files and directories from the current directory into the container.
- `RUN yarn install` in a Dockerfile executes the yarn install command to install project dependencies inside the container.
- `RUN yarn build` in a Dockerfile executes the command `yarn build` during the image building process, typically used for building the project inside the container.
- `EXPOSE 3001` in a Dockerfile specifies that the container will listen on port 3001, allowing external access to that port.
- `CMD [ "yarn", "start:dev" ]` in a Dockerfile sets the default command to run when the container starts, executing `yarn start:dev`.
- In the root of the folder, create a docker-compose.yml file and paste the following code. I will explain it briefly because it is really simple.
```
version: '3.9'

services:
  nestapp:
    container_name: nestapp
    image: your-username/nestjs
    volumes:
      - type: bind
        source: .
        target: /app
    build: .
    ports:
      - 3001:3001
    environment:
      MONGODB_ADMINUSERNAME: root
      MONGODB_ADMINPASSWORD: example
      MONGODB_URL: mongodb://root:example@mongo:27017/
    depends_on:
      - mongo

  mongo:
    image: mongo
    volumes:
      - type: volume
        source: mongodata
        target: /data/db
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example

volumes:
  mongodata:
```
* To build the NestJS image, we use the Dockerfile, pass in some environment variables, and use a "bind" volume so that code changes are reflected inside the container.
* Additionally, we are running a MongoDB container that stores its data on a named volume and receives a few environment variables.

To start the containers from the images we defined above, run `docker compose up -d`.

###Step 2: Writing Backend APIs for the Frontend.

When calling the backend's API from the frontend on port 3001, we will run into a CORS problem. Therefore, update your `main.ts` to resolve this:
```
async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.enableCors();
  await app.listen(3001);
}
```
Next, generate the order resource:
```
nest g resource order --no-spec
```
* By running the above command, the Nest CLI will create the necessary files, such as the module, service, and controller.
As we will be using MongoDB as our database, install the packages below:
```
npm i @nestjs/mongoose mongoose
```
* To let NestJS know that MongoDB is being used in our project, update the imports array in `app.module.ts`:
```
imports: [MongooseModule.forRoot(process.env.MONGODB_URL), OrderModule]
```
To create a MongoDB schema, add an `order.schema.ts` file to the `src/order/schema` directory. Because the schema is so straightforward, let me explain briefly how it works.
```
import { Prop, Schema, SchemaFactory } from '@nestjs/mongoose';
import { HydratedDocument } from 'mongoose';

@Schema({ timestamps: true })
export class Order {
  @Prop({ required: true })
  customer: string;

  @Prop({ required: true })
  price: number;

  @Prop({ required: true })
  address: string;
}

export const OrderSchema = SchemaFactory.createForClass(Order);

// types
export type OrderDocument = HydratedDocument<Order>;
```
- `@Schema({ timestamps: true })` enables automatic creation of createdAt and updatedAt fields in the schema.
- In the schema, `@Prop({ required: true })` marks the field as required.

The Order module does not yet know about the schema, so import it; your imports array should now look as follows:
```
imports: [
  MongooseModule.forFeature([{ schema: OrderSchema, name: Order.name }]),
],
```
Our backend will have two APIs:

- `http://localhost:3001/order` (GET request) to get all orders from MongoDB.
- `http://localhost:3001/order` (POST request) to create a new order.

Let me explain the code in order.service.ts, which is really simple, since you then only need to call these methods from your order.controller.ts.
```
import { Injectable } from '@nestjs/common';
import { InjectModel } from '@nestjs/mongoose';
import { Order } from './schema/order.schema';
import { Model } from 'mongoose';
import { OrderGateway } from './order.gateway';

@Injectable()
export class OrderService {
  constructor(
    @InjectModel(Order.name) private orderModel: Model<Order>,
    private orderGateway: OrderGateway,
  ) {}

  async getAll(): Promise<Order[]> {
    return this.orderModel.find({});
  }

  async create(orderData: Order): Promise<Order> {
    const newOrder = await this.orderModel.create(orderData);
    await newOrder.save();
    return newOrder;
  }
}
```
1. The `getAll()` function simply returns all of the orders that are saved in MongoDB.
2. The `create()` function takes the data needed to create an order, saves it in MongoDB, and then returns the created order.

Now all you have to do is call those functions from your order.controller.ts file as described below.
```
@Controller('order')
export class OrderController {
  constructor(private readonly orderService: OrderService) {}

  @Get()
  async findAll(): Promise<Order[]> {
    return this.orderService.getAll();
  }

  @Post()
  async create(@Body() orderData: Order): Promise<Order> {
    return this.orderService.create(orderData);
  }
}
```
As we will be using Socket.IO to listen for real-time updates, let's install the required packages:
```
yarn add -D @nestjs/websockets @nestjs/platform-socket.io socket.io
```
We need to create a "gateway" in order to use Socket.IO in NestJS. A gateway is nothing more than a straightforward class that makes handling Socket.IO incredibly easy. Create an `order.gateway.ts` file in your order folder and paste the code below.
```
import {
  OnGatewayConnection,
  OnGatewayDisconnect,
  WebSocketGateway,
  WebSocketServer,
} from '@nestjs/websockets';
import { Server, Socket } from 'socket.io';

@WebSocketGateway({ cors: true })
export class OrderGateway implements OnGatewayConnection, OnGatewayDisconnect {
  @WebSocketServer()
  server: Server;

  handleConnection(client: Socket, ...args: any[]) {
    console.log(`Client connected:${client.id}`);
  }

  handleDisconnect(client: Socket) {
    console.log(`Client disconnected:${client.id}`);
  }

  notify<T>(event: string, data: T): void {
    this.server.emit(event, data);
  }
}
```
- `handleConnection` is called when any new socket is connected.
- `handleDisconnect` is called when any socket is disconnected.
- The `notify` function emits an event with a specified name and data to the other sockets via the server.

Our application does not yet recognize the gateway we have created, so we must import the `order.gateway.ts` file into the "providers" array of `order.module.ts` as shown below:
```
providers: [OrderService, OrderGateway],
```
Use the gateway's notify function in the `create` method of the `OrderService` to alert other sockets when an order is made. Don't forget to inject the gateway into the constructor as well. The updated `create` function should look like this:
```
async create(orderData: Order): Promise<Order> {
  const newOrder = await this.orderModel.create(orderData);
  await newOrder.save();
  this.orderGateway.notify('order-added', newOrder);
  return newOrder;
}
```
The backend portion is now fully finished.

##Step 3: Installing Packages and Containerizing the Frontend.

First, let's start by creating a Next.js application:
```
npx create-next-app@latest
```
and go ahead with the default options.

Now, create a Dockerfile in the root of your Next.js application and paste the following; as this is quite similar to the Dockerfile for the backend, I won't go into much detail here.
```
FROM node:16

WORKDIR /app

COPY . .

RUN npm install

RUN npm run build

EXPOSE 3000

CMD [ "npm", "run", "dev" ]
```
- We'll be running our frontend on `port: 3000`.

Let's create a docker-compose.yml with the code below:
```
version: "3.9"

services:
  nextapp:
    container_name: nextapp
    image: your-username/nextjs
    volumes:
      - type: bind
        source: .
        target: /app
    build: .
    ports:
      - 3000:3000
```
- We use the "bind" volume so that when we update the code, the container reflects our changes.

To start your frontend container, run `docker compose up -d`.

###Step 4: Listening to Real-time Updates on the Frontend.

Install socket.io-client to subscribe to server updates.
```
npm install socket.io-client
```
On the home page we will display the list of all orders.
- In order to fetch orders, we will create a custom hook, `useOrder`.

##### You may wonder: why use a custom hook?
I'm using a custom hook because I don't want to make my JSX big, which I believe is difficult to maintain.

Overview of the `useOrder` hook:
- It essentially maintains the state by retrieving orders and listening to the server's real-time updates.
```
import { socket } from "@/utils/socket";
import { useEffect, useState } from "react";

interface Order {
  _id: string;
  customer: string;
  address: string;
  price: number;
  createdAt: string;
  updatedAt: string;
}

const GET_ORDERS_URL = "http://localhost:3001/order";

const useOrder = () => {
  const [orders, setOrders] = useState<Order[]>([]);

  // responsible for fetching the initial data through the API.
  useEffect(() => {
    const fetchOrders = async () => {
      const response = await fetch(GET_ORDERS_URL);
      const data: Order[] = await response.json();
      setOrders(data);
    };
    fetchOrders();
  }, []);

  // subscribes to real-time updates when an order is added on the server.
  useEffect(() => {
    socket.on("order-added", (newData: Order) => {
      setOrders((prevData) => [...prevData, newData]);
    });
    // clean up the listener when the component unmounts.
    return () => {
      socket.off("order-added");
    };
  }, []);

  return {
    orders,
  };
};

export default useOrder;
```
Create an `OrdersList` component that will utilize the `useOrder` hook to render the list.
`OrdersList`:
```
"use client";
import useOrder from "@/hooks/useOrder";
import React from "react";

const OrdersList = () => {
  const { orders } = useOrder();
  return (
    <div className="max-w-lg mx-auto">
      {orders.map(({ _id, customer, price, address }) => (
        <div key={_id} className="p-2 rounded border-black border my-2">
          <p>Customer: {customer}</p>
          <p>Price: {price}</p>
          <p>Address: {address}</p>
        </div>
      ))}
    </div>
  );
};

export default OrdersList;
```
Render `OrdersList` in the home route (`/`):
```
const Home = () => {
  return (
    <>
      <h1 className="font-bold text-2xl text-center mt-3">Orders</h1>
      <OrdersList />
    </>
  );
};
```
I am using Tailwind CSS; you can skip it.

Time to create another custom hook, `useCreateOrder`, which is responsible for returning the functions that help create an order from a form.

`useCreateOrder`:
```
import { ChangeEvent, FormEvent, useState } from "react";

const Initial_data = {
  customer: "",
  address: "",
  price: 0,
};

const useCreateOrder = () => {
  const [data, setData] = useState(Initial_data);

  const onChange = ({ target }: ChangeEvent<HTMLInputElement>) => {
    const { name, value } = target;
    setData((prevData) => ({ ...prevData, [name]: value }));
  };

  const handleSubmit = async (event: FormEvent) => {
    event.preventDefault();
    if (!data.address || !data.customer || !data.price) return;
    try {
      await fetch("http://localhost:3001/order", {
        method: "POST",
        headers: {
          "content-type": "application/json",
        },
        body: JSON.stringify(data),
      });
      setData(Initial_data);
    } catch (error) {
      console.error(error);
    }
  };

  return {
    onChange,
    handleSubmit,
    data,
  };
};
```
- As you can see, the hook manages the state and returns the methods used to create orders, which keeps the JSX thin.

Create a new folder called `app/create`; the `page.tsx` file in the `create` folder will display a form that creates an order.
`create/page.tsx`:
```
const Create = () => {
  const { handleSubmit, onChange, data } = useCreateOrder();
  return (
    <form
      onSubmit={handleSubmit}
      className="bg-teal-900 rounded absolute top-1/2 left-1/2 -translate-x-1/2 -translate-y-1/2 flex flex-col p-3 space-y-3 w-1/2"
    >
      <input
        type="text"
        name="customer"
        onChange={onChange}
        value={data.customer}
        className="bg-slate-300 p-2 outline-none"
        autoComplete="off"
        placeholder="customer"
      />
      <input
        type="text"
        name="address"
        onChange={onChange}
        value={data.address}
        autoComplete="off"
        className="bg-slate-300 p-2 outline-none"
        placeholder="address"
      />
      <input
        type="number"
        name="price"
        onChange={onChange}
        value={data.price}
        autoComplete="off"
        className="bg-slate-300 p-2 outline-none"
        placeholder="price"
      />
      <button type="submit" className="py-2 rounded bg-slate-100">
        submit
      </button>
    </form>
  );
};
```
Once you have completed the form and clicked "Submit," an order will be created. Congratulations, you have created a real-time API that can listen for changes on your server.

##Conclusion:
In conclusion, we have successfully built a real-time API using Next.js, Nest.js, and Docker. By containerizing our backend and frontend and leveraging custom React hooks, we kept the application easy to run, maintain, and extend.
deepak22448
1,493,992
How to act in a job interview
I recently made the decision to check out a more technology related job position at the company I...
0
2023-06-06T19:15:27
https://dev.to/pcabreram1234/how-to-act-in-a-job-interview-3f4c
webdev, interview, beginners, programming
<p align="justify">I recently made the decision to check out a more technology related job position at the company I work for.</p> <p align="justify">When I attended said test, the first thing I noticed was that one of the members of said department was programming in JavaScript 😁 which I already started to like.</p> <p align = "justify"> As I began the interview with the head of the department with all professionalism, she asked me if she had any idea of the nature of the position for which she was applying. I immediately told her that I vaguely understood that it was a position for a development team, to which she replied: <strong> No, here we develop a few solutions for the areas of the company that we provide service to.</strong></p> <p align="justify"> It is a position whose main function is to provide support in the face of any inconvenience or situation that the systems present. In short, the position has more to do with part of DevOps but mostly it tries to be technical support, ensuring a continuous delivery of high quality in regards to solutions to possible problems that the systems present, that is. incidentally they are georeferential systems (with satellite positioning).</p> <br/> <p align="justify"> After having clarified that point, the young woman proceeded to do the behavioral interview on site, which she always recommends to be totally honest. 
I remember right now that one of the requirements for the job was to have solid knowledge of Java, unfortunately I told him that I had knowledge but in JavaScript, but if he gave me the opportunity I could learn the language and give me time for that process and so on with a technical test to check my progress.</p> <p align="justify"> she Finally she gave me a small technical test (exam) with the following points: </p> <ol> <li>Carry out an SQL query to find out the number of records from a location whose contracts have been made as of a specific date, (it should be noted that all these data were in separate tables joined by a foreign key).</li > <li>The second point was to put in my own words an analysis and solution to the scenario in a SQL statement was taking more than 30 seconds to complete.</li> <li>The last point consisted of making the necessary script in the language of my choice and that said script would do the following: <ul> <li>Read from table A records with a specific condition.</li> <li>Enter these records resulting in table B.</li> <li>In table B update the records with a specific condition.</li> <li>Finally, display the square root of the highest value in table B on the screen.</li> </ul> </li> </ol> <p align="justify">After a few days, the manager of the department I was applying for wrote to me and told me, <strong>I just reviewed your technical test and as far as I'm concerned, if you are fully committed to learning Java you are more than welcome in my team</strong>.</p> <p align="justify">You can already imagine my answer.👍🏽💪🏽</p> <p align="justify">With this experience I encourage you to apply for any job in spite of not meeting all the requirements. The experience gained in these interviews, especially with the feedback they give you, is invaluable. So don't be afraid and apply, you never know what might happen.</p>
pcabreram1234
1,494,403
10 AI Website Builders You Didn't Know About
Are you tired of the same old website builders? Want to try something new? Well, I've got some news...
0
2023-06-07T09:14:23
https://dev.to/hr21don/10-ai-website-builders-you-didnt-know-about-1582
ai, productivity, programming, webdev
Are you tired of the same old website builders? Want to try something new? Well, I've got some news for you! I've compiled a list of 10 AI website builders that you probably haven't heard of. Yes, you heard me right, AI website builders! These builders use artificial intelligence to help you create a website that is both beautiful and functional. ## 1. [Durable](https://durable.co/?reason=website-private&referrer=frequentflyers) ![Durable](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ub34a7vkgdliqqo1vb3.png) The first AI website builder on our list is Durable. Durable is a powerful website builder that uses AI to create stunning websites in seconds. It offers a wide range of templates and customization options, making it easy for users to create a unique website that stands out from the crowd. ## 2. [Mixo](https://app.mixo.io/ai-website-builder) ![Mixo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/71cdopdus1g4cg4fa8xn.png) Mixo is another AI website builder that is gaining popularity among website owners. It uses natural language processing to understand your content and create a website that is tailored to your needs. ## 3. [10Web ](https://10web.io/) ![10web](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pqtc4avh8b1oi3ntplal.png) Next up is 10Web, an AI website builder that offers a range of features, including website hosting, SEO optimization, and e-commerce integration. With 10Web, you can create a website that is both functional and visually appealing. ## 4. [Tilda](https://tilda.cc/) ![Tilda](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iswdi91m3g2mzt2d06ia.png) Tilda is an intuitive AI website builder focused on simplicity and creative freedom. It offers a range of pre-designed blocks and templates that users can customize to create unique websites. ## 5. 
[Sitejet](https://www.sitejet.io/en) ![SiteJet](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vflwerh1fgsa312x2joz.png) Sitejet is an AI-powered website builder that focuses on providing a collaborative platform for web designers and developers. With its intuitive drag-and-drop interface, you can create visually appealing websites without any coding knowledge. ## 6. [Ucraft](https://next.ucraft.com/) ![Ucraft](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cuuroqux8cb9m3gni52q.png) Ucraft is a user-friendly AI website builder that caters to both individuals and businesses. It offers a simple yet powerful interface that allows you to customize your website using drag-and-drop functionality. ## 7. [Strikingly](https://www.strikingly.com) ![Strikingly](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7lilevl8tjaanqr6j5e4.png) Strikingly is an AI-powered website builder that specializes in creating one-page websites. It provides a streamlined and intuitive editing experience, making it ideal for personal portfolios, event pages, or simple business websites. ## 8. [SnapPages](https://snappages.com/) ![SnapPages](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ly8qhsitmvijwofyd2hd.png) SnapPages is an AI website builder that emphasizes simplicity and elegance. It offers a range of modern templates and a user-friendly interface to help you create beautiful websites in minutes. ## 9. [Duda](https://www.duda.co/) ![Duda](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7sq14kkflf3q52x39gyl.png) Duda is an AI-powered website builder that focuses on creating responsive and mobile-friendly websites. It offers a wealth of customizable templates, along with powerful features like personalization, team collaboration, and client management. ## 10. 
[Pixpa](https://www.pixpa.com/) ![Pixpa](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipuwcj87fg1x5t3c3cdu.png) Pixpa is an AI website builder designed specifically for creatives, photographers, and artists. It provides a comprehensive platform to showcase your portfolio and sell your work online. ## Conclusion So there you have it, folks! 10 AI website builders that you probably didn't know about. Give them a try and let me know what you think in the comments below. 🤩💭
hr21don
1,496,879
A Comprehensive Guide to Distributed Tracing in Microservices
Introduction Distributed tracing is a technique used to monitor and profile applications...
0
2023-06-07T20:05:02
https://dev.to/subhafx/a-comprehensive-guide-to-distributed-tracing-in-microservices-3h8j
webdev, javascript, microservices, systemdesign
## Introduction Distributed tracing is a technique used to monitor and profile applications in complex, distributed systems. It involves tracking and recording the flow of requests as they traverse across multiple microservices or components. By capturing timing and context information at each step, distributed tracing enables developers and operators to understand the behavior and performance of their systems. In microservices architectures, where applications are composed of multiple loosely coupled services, distributed tracing plays a crucial role in several ways: 1. **Performance Monitoring** 2. **Troubleshooting and Root Cause Analysis** 3. **Service Dependencies and Impact Analysis** 4. **Load Distribution and Resource Optimization** 5. **Performance Baseline and Continuous Improvement** Now, let's delve into the fundamental concepts of distributed tracing and explore how it empowers us to gain deep insights into the behavior and performance of interconnected microservices. ## Basic Concepts of Distributed Tracing Each service in the system is instrumented to generate trace data. This usually involves adding code to create and propagate trace context across service boundaries. Commonly used frameworks and libraries provide built-in support for distributed tracing. Let's break down the hierarchy of distributed tracing to understand it better: 1. **Trace**: A trace represents the end-to-end path of a single request as it flows through various services. It is identified by a unique trace ID. Think of it as a tree structure that captures the entire journey of a request. 2. **Span**: A span represents an individual operation or action within a service. It represents a specific unit of work and contains information about the start and end times, duration, and any associated metadata. Spans are organized in a hierarchical manner within a trace, forming a parent-child relationship. 3. 
**Parent Span and Child Span**: Within a trace, spans are organized in a parent-child relationship. A parent span initiates a request and triggers subsequent operations, while child spans represent operations that are triggered by the parent span. This hierarchy helps visualize the flow and dependencies between different operations. 4. **Trace Context**: Trace context refers to the information that is propagated between services to maintain the trace's continuity. It includes the trace ID, which uniquely identifies the trace, and other contextual information like the parent span ID. Trace context ensures that each service can link its spans to the correct trace and maintain the overall trace structure. To illustrate this hierarchy, consider a scenario where a user makes a request to a microservices-based application. The request flows through multiple services to fulfill the user's request. Here's how the hierarchy could look: - Trace (Trace ID: 123456) - Span 1 (Service A): Represents an operation in Service A - Span 2 (Service B): Represents an operation triggered by Service A - Span 3 (Service C): Represents an operation triggered by Service B - Span 4 (Service D): Represents another operation triggered by Service A - Span 5 (Service E): Represents an operation triggered by Service D ![Distributed Tracing Example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yih0olq20dsrz62jpyue.JPG) In this example, Trace 123456 captures the entire journey of the user request. Service A initiates the request and triggers Span 1. Span 1 then triggers operations in Service B (Span 2) and Service D (Span 4), each creating their own child spans. Service B triggers an operation in Service C (Span 3), and Service D triggers an operation in Service E (Span 5). The hierarchy allows you to understand the relationship between different spans and how the request flows through the various services. I hope this clarifies the hierarchy of distributed tracing for you. 
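To make the parent-child bookkeeping concrete, here is a small, dependency-free sketch (plain JavaScript, not a real tracing SDK — the `makeSpan` and `render` helpers are invented for illustration) that models Trace 123456 from the example above:

```javascript
// Toy span model: each span records its trace ID, its own ID, and its
// parent's span ID -- the links a tracing backend uses to rebuild the tree.
function makeSpan(traceId, spanId, name, parent = null) {
  const span = { traceId, spanId, name, parentSpanId: parent ? parent.spanId : null, children: [] };
  if (parent) parent.children.push(span);
  return span;
}

const TRACE_ID = '123456';
const spanA = makeSpan(TRACE_ID, '1', 'Service A');        // root span: initiates the request
const spanB = makeSpan(TRACE_ID, '2', 'Service B', spanA); // triggered by Service A
const spanC = makeSpan(TRACE_ID, '3', 'Service C', spanB); // triggered by Service B
const spanD = makeSpan(TRACE_ID, '4', 'Service D', spanA); // triggered by Service A
const spanE = makeSpan(TRACE_ID, '5', 'Service E', spanD); // triggered by Service D

// Walk the tree to render the hierarchy, mirroring the diagram above.
function render(span, depth = 0) {
  return ['  '.repeat(depth) + span.name].concat(
    span.children.flatMap((child) => render(child, depth + 1))
  );
}
console.log(render(spanA).join('\n')); // prints the five spans, indented by depth
```

Note that every span carries the same trace ID and differs only in its span/parent-span IDs; that pair of identifiers is precisely the trace context that must be propagated across service boundaries.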
## Implementing Distributed Tracing When comparing tracing frameworks or libraries for implementing distributed tracing in microservices, several popular options are available. Here is a brief list of some widely used frameworks: 1. **_OpenTelemetry_** is a vendor-neutral observability framework that provides support for distributed tracing, metrics, and logs. 2. **_Jaeger_** is an open-source end-to-end distributed tracing system inspired by Google's Dapper and OpenZipkin. 3. **_Zipkin_** is an open-source distributed tracing system initially developed by Twitter. 4. **_AWS X-Ray_** is a distributed tracing service provided by Amazon Web Services. 5. **_AppDynamics_** offers powerful analytics and correlation features, making it suitable for complex microservices architectures. When comparing these frameworks, consider factors such as language support, integration capabilities, scalability, ease of use, community support, and compatibility with your infrastructure and tooling stack. It is recommended to evaluate these frameworks based on your specific requirements, architectural considerations, and the level of observability and analysis you aim to achieve in your microservices environment. 
Here's a small coding example in Node.js using the OpenTelemetry library for instrumenting an Express web application with distributed tracing: ```javascript const express = require('express'); const { NodeTracerProvider } = require('@opentelemetry/node'); const { BatchSpanProcessor } = require('@opentelemetry/tracing'); const { JaegerExporter } = require('@opentelemetry/exporter-jaeger'); const { registerInstrumentations } = require('@opentelemetry/instrumentation'); const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http'); // Configure the tracer provider with Jaeger exporter const tracerProvider = new NodeTracerProvider(); const jaegerExporter = new JaegerExporter({ serviceName: 'my-service', host: 'localhost', port: 6831 }); tracerProvider.addSpanProcessor(new BatchSpanProcessor(jaegerExporter)); tracerProvider.register(); // Initialize Express application const app = express(); registerInstrumentations({ tracerProvider, instrumentations: [new HttpInstrumentation()], }); // Define a route app.get('/', (req, res) => { // Create a span to represent the request processing const span = tracerProvider.getTracer('express-example').startSpan('hello'); // Perform some operations within the span span.setAttribute('custom.attribute', 'example'); span.addEvent('Processing started', { status: 'info' }); // Business logic goes here... span.end(); res.send('Hello, World!'); }); // Run the Express application app.listen(3000, () => { console.log('Server is running on port 3000'); }); ``` In this example, we start by importing the necessary modules from the OpenTelemetry library. We configure the tracer provider with a Jaeger exporter to send the trace data to a Jaeger backend for storage and visualization. Next, we initialize an Express application and register the necessary instrumentations using the `registerInstrumentations` function provided by OpenTelemetry. 
This automatically instruments the application to capture trace data for incoming requests, including HTTP instrumentation for tracing HTTP requests. We define a route handler for the root path ("/") and wrap the request processing logic inside a span. We set attributes and add events to the span to provide additional context and information. The business logic of the application can be implemented within this span. Finally, we start the Express application and listen on port 3000. The instrumented route will automatically generate and propagate trace context, allowing for distributed tracing across services. **Automatic vs Manual Instrumentation** Automatic instrumentation in distributed tracing offers the advantage of ease and convenience. It automatically captures essential trace data without requiring developers to modify their code explicitly. This approach is beneficial for quickly getting started with distributed tracing and for applications with a large codebase. However, it may lack fine-grained control and may not capture all relevant information. On the other hand, manual instrumentation provides greater flexibility and control. Developers can explicitly define spans, add custom metadata, and instrument specific areas of interest. This approach offers more detailed insights and allows for fine-tuning of tracing behavior. However, manual instrumentation requires additional effort and can be time-consuming, especially for complex applications. Choosing the suitable approach depends on the specific requirements and trade-offs. Automatic instrumentation is suitable for rapid adoption and simple applications, while manual instrumentation is preferred for more complex scenarios where granular control and detailed insights are crucial. A hybrid approach, combining both methods, can also be employed to strike a balance between convenience and customization. 
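To sketch what manual instrumentation looks like in practice, the toy tracer below (not a real SDK — the recorder, span shape, and `handleCheckout` handler are all illustrative) shows the developer explicitly choosing where a span starts and ends and what metadata it carries:

```javascript
// Toy span recorder — a real tracer would export finished spans to a backend.
const finishedSpans = [];

// Manually wrap a unit of work in a span, attaching custom attributes and
// guaranteeing the span is ended even if the work throws.
function withSpan(name, attributes, work) {
  const span = { name, attributes: { ...attributes }, start: Date.now() };
  try {
    return work(span);
  } finally {
    span.end = Date.now();
    finishedSpans.push(span);
  }
}

// Instrument only the interesting part of a (hypothetical) request handler:
function handleCheckout(cartSize) {
  return withSpan('checkout.computeTotal', { 'cart.size': cartSize }, (span) => {
    const total = cartSize * 9.99;          // business logic under measurement
    span.attributes['cart.total'] = total;  // custom metadata added mid-span
    return total;
  });
}

handleCheckout(3);
console.log(finishedSpans[0].name); // 'checkout.computeTotal'
```

This is the fine-grained control automatic instrumentation cannot give you — the span boundaries and attributes express domain knowledge (cart size, computed total) that no generic HTTP hook could infer.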
## Trace Visualization and Analysis Trace visualization tools and dashboards aid in effectively analyzing and interpreting trace data in distributed tracing. They offer the following benefits: 1. **End-to-End Trace Visualization:** Provides a graphical representation of request flows, enabling developers to understand the complete journey of a request. 2. **Time-based Analysis:** Offers a timeline view to identify bottlenecks, latency issues, and long-running operations affecting performance. 3. **Dependency Mapping:** Presents a visual representation of service dependencies, helping identify critical services and their impact on others. 4. **Trace Filtering and Search:** Allows developers to focus on specific requests or spans based on criteria like operation name or error status. 5. **Metrics Integration:** Correlates trace data with performance metrics for deeper insights into resource utilization and error rates. 6. **Error and Exception Analysis:** Highlights errors and exceptions within traces, aiding in root cause analysis. 7. **Alerting and Anomaly Detection:** Enables setting up custom alerts for performance issues or deviations from expected behavior. By leveraging these tools, developers gain a comprehensive understanding of their system's behavior, optimize performance, troubleshoot issues, and make informed decisions for their microservices architecture. Challenges associated with identifying performance bottlenecks and latency issues using distributed traces: - **Large trace volumes:** Use sampling techniques to reduce data volume while capturing representative traces. - **Distributed nature of traces:** Employ distributed context propagation to track requests across services. - **Asynchronous communication:** Instrument message brokers and event systems to capture asynchronous message correlations. - **Data aggregation and analysis:** Utilize centralized log management and trace aggregation systems for consolidation and analysis. 
- **Performance impact of instrumentation:** Choose lightweight instrumentation libraries and optimize instrumentation implementation. - **Diverse technology stack:** Ensure compatibility and availability of tracing libraries across different technology stacks. Techniques to overcome these challenges: - **Proactive monitoring and alerting:** Implement real-time systems to identify and address performance issues promptly. - **Performance profiling and optimization:** Utilize profiling tools to optimize performance within services. - **Distributed tracing standards and best practices:** Adhere to trace context propagation and tracing conventions. - **Collaborative troubleshooting:** Foster collaboration among teams and utilize cross-functional dashboards and shared trace data. ## Use Cases and Real-World Examples of Distributed Tracing in Microservices Architectures Distributed tracing has proven to be instrumental in helping organizations improve performance, scalability, and fault tolerance in their distributed systems. Here are specific examples highlighting measurable results: 1. **Performance Improvement:** - A social media platform utilized distributed tracing to identify and optimize performance bottlenecks. By analyzing trace data, they discovered that a specific microservice responsible for generating user feeds was causing high latency. After optimizing the service and implementing caching strategies based on trace insights, they achieved a 30% reduction in response times, resulting in improved user experience and increased user engagement. 2. **Scalability Enhancement:** - An e-commerce company leveraged distributed tracing to scale their system during peak traffic periods. By analyzing trace data, they identified services experiencing high loads and optimized resource allocation. With this information, they scaled the infrastructure dynamically and implemented load balancing techniques based on trace insights. 
As a result, they achieved a 50% increase in concurrent user capacity and maintained optimal system performance even during high traffic events. 3. **Fault Tolerance and Resilience:** - A banking institution implemented distributed tracing to improve fault tolerance in their payment processing system. By analyzing trace data, they identified critical failure points and implemented circuit breaker patterns and retry mechanisms based on trace insights. This resulted in a significant reduction in failed transactions, with a 70% decrease in payment processing errors, leading to improved customer satisfaction and reliability. 4. **Failure Root Cause Analysis:** - A cloud-based SaaS provider used distributed tracing to troubleshoot performance issues. When customers reported slow response times, they examined trace data to identify the root cause. By analyzing spans associated with the slow requests, they discovered a dependency on an external service causing delays. With this insight, they reconfigured their service interactions and implemented fallback strategies, resulting in a 40% reduction in average response times and improved service reliability. These examples demonstrate how distributed tracing enables organizations to identify performance bottlenecks, optimize resource allocation, enhance scalability, improve fault tolerance, and troubleshoot issues effectively. The measurable results include reduced response times, increased capacity, decreased errors, and enhanced customer satisfaction, highlighting the tangible benefits of distributed tracing in optimizing distributed systems. ## Conclusion Distributed tracing in microservices provides key concepts and benefits. Understanding the basics and implementing it effectively helps optimize performance, troubleshoot errors, and allocate resources efficiently. Trace visualization and analysis tools aid in analyzing trace data. 
Real-world examples showcase its practical applications, such as improved order processing, enhanced payment reliability, and proactive performance management. Overall, distributed tracing is crucial for understanding system behavior, identifying issues, and making informed decisions for a robust microservices architecture.

Some popular SaaS companies providing easy integration of this feature are [Datadog](https://www.datadoghq.com/), [New Relic](https://newrelic.com/), and [Dynatrace](https://www.dynatrace.com/).

## References

- [What Is Distributed Tracing? An Introduction - By Splunk](https://www.splunk.com/en_us/data-insider/what-is-distributed-tracing.html)
- [What is distributed tracing and why does it matter? - By Dynatrace](https://www.dynatrace.com/news/blog/what-is-distributed-tracing/)
- [What is Distributed Tracing? How it Works & Use Cases - By Datadog](https://www.datadoghq.com/knowledge-center/distributed-tracing/)
subhafx
1,497,224
Setting up continuous integration with CircleCI and GitLab
Learn how to set up continuous integration pipelines with GitLab and CircleCI.
0
2023-06-07T23:12:45
https://circleci.com/blog/setting-up-continuous-integration-with-gitlab/
circleci, gitlab, cicd, tutorial
---
title: Setting up continuous integration with CircleCI and GitLab
published: true
description: Learn how to set up continuous integration pipelines with GitLab and CircleCI.
tags: circleci,gitlab,cicd,tutorial
cover_image: https://ctf-cci-com.imgix.net/3FPR2tmaqQeWPvKBQWQjTb/17ead46fd065f2f4c0fdf9447f2e3325/Tutorial-Intermediate-A.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2023-06-07 22:35 +0000
canonical_url: https://circleci.com/blog/setting-up-continuous-integration-with-gitlab/
---

CircleCI supports [GitLab](https://circleci.com/blog/announcing-gitlab-support/) as a version control system (VCS). In this tutorial you will learn how to set up your first CircleCI CI/CD pipeline for a project hosted on GitLab. As GitLab is available both as a SaaS tool and as a self-managed on-premise installation, I will cover the steps to connect it with CircleCI for both.

## Prerequisites and application basics

To follow along with this tutorial, you will need:

- Basic knowledge of Git commands
- Git installed and accessible on your machine
- A [GitLab account](https://gitlab.com/users/sign_up) — either self-managed or SaaS
- A [CircleCI account](https://circleci.com/signup/)

### Starter app

Our starter application is a minimal Python Flask app with a single ‘hello world’ web page, which specifies required dependencies and includes a test. You can access a [publicly hosted version of the starter app](https://gitlab.com/cci-devrel-demos/python-app-base) on GitLab.com. You can get the application by downloading it straight from the GitLab web interface or cloning it via Git. For the purposes of this tutorial, you don’t need to write the application from scratch, but if you are interested in complete development instruction, you can read through the beginning of [this blog post](https://circleci.com/blog/setting-up-continuous-integration-with-github/).
## GitLab SaaS vs self-managed As mentioned above, this tutorial will teach you how to configure and run your CircleCI pipeline for a project hosted either on a self-managed or hosted version of GitLab. You can learn more about the differences between the hosted and self-managed versions in the [GitLab docs](https://docs.gitlab.com/ee/install/migrate/compare_sm_to_saas.html). Setting up GitLab SaaS requires you only to sign up at gitlab.com. You will also need to make sure you can push to GitLab repositories, either by setting up your [SSH key](https://docs.gitlab.com/ee/user/ssh.html) or [personal access token](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html#personal-access-tokens) for using GitLab with either SSH or HTTPS, respectively. If you use GitLab self-managed, you will connect to it via your installation URL. This is specific to you, so for the purposes of this tutorial, use the URL `yourgitlabinstance.com` as a placeholder. Create a new project with the starter app source code. You can use the new project wizard, either at `gitlab.com/projects/new` or `yourgitlabinstance.com/projects/new`, for SaaS or self-managed respectively. ## Creating a CircleCI config file Before you set up CircleCI, you can tell it what to eventually start building. Create a new directory `.circleci` in the top level of the project. Now, create a new file `config.yml` in that directory (`.circleci/config.yml`) — this is your main configuration file for CircleCI. Paste the following in the file: ```yaml version: 2.1 jobs: test: docker: - image: cimg/python:3.10.11 steps: - checkout - restore_cache: key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }} - run: name: Install dependencies command: | python3 -m venv venv . venv/bin/activate pip install -r requirements.txt - save_cache: key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }} paths: - "venv" - run: name: Running tests command: | . 
venv/bin/activate python3 tests.py - store_artifacts: path: test-reports/ destination: python_app workflows: run-tests: jobs: - test ``` In the CircleCI config file, you define everything you need CircleCI to do. In this case, you have added a single job (`test`) and a workflow (`run-tests`) containing that job. The `test` job runs in a [Docker container](https://circleci.com/blog/docker-image-vs-container/) with Python installed, checks out the code at that specific commit, downloads the dependencies required for the app (Flask, in this case), sets up a Python venv, and runs the tests in `tests.py`. It also caches the dependencies to make subsequent pipeline runs faster and stores the test artifacts for easy access from CircleCI. To read more about CircleCI configuration, consult the [configuration reference](https://circleci.com/docs/configuration-reference/). Save, make a new commit, and push to your GitLab repository. Now it’s time to configure your CircleCI integration with GitLab. ## Configuring CircleCI with GitLab SaaS (GitLab.com) If you don’t have a CircleCI account yet, create one now. Head to https://circleci.com/signup/ and follow the instructions to sign up using your preferred method. If you choose GitLab, you will also be prompted to select your GitLab instance. Press **Connect** next to GitLab.com. ![Connect to your code](https://ctf-cci-com.imgix.net/34Js8TAdWkl0nANRkvODxw/cdef5ce2194a0331bed75a31ed986889/2023-05-08-ci-gitlab-18.48.16.png) This should now prompt you to authorize CircleCI with your GitLab account, giving CircleCI access to your repositories. ![Authorize CircleCI](https://ctf-cci-com.imgix.net/2VSZ1kg5MI1I1R9S9tYpVJ/3683095bd43b7f67dcbd4822452c8b4d/2023-05-08-ci-gitlab-19.09.38.png) Authorizing CircleCI will take you to the project creation wizard. You should see your repository on the list, and it should detect the CircleCI config you committed. 
![Create new project](https://ctf-cci-com.imgix.net/55pwGMRJZ5arUonK1Ru7fG/914d779cd5a35492652aac7c8e57b806/2023-05-08-ci-gitlab-19.16.46.png)

Click **Create Project** to start building. This has created the project in CircleCI, and what’s left is to commit and push a change to trigger your first CircleCI pipeline.

![Make a change](https://ctf-cci-com.imgix.net/4pjBH6grcr53Qs9Z3RpsvH/e2c5618dce322633f4739831674859fe/2023-05-09-ci-gitlab-01.27.40.png)

Make any change, such as an update to your README file, then commit and push. This will create your first pipeline and start building and running your test.

![Build status](https://ctf-cci-com.imgix.net/7FKLZa07roF52X4xpiLd3t/225b6190675a9adf76e6ceac52e647ea/2023-05-09-ci-gitlab-01.29.14.png)

Seconds later it should be marked as successful. The status of your CircleCI pipeline will be automatically updated in your GitLab UI as well. Navigate to Build/Pipelines (on the left-hand side in GitLab) and the pipeline should show up there as well. You can see an example based on this tutorial in this [public project](https://gitlab.com/cci-devrel-demos/cci-python-app/-/pipelines).

![Build/Pipelines](https://ctf-cci-com.imgix.net/5hodZ8zxiaNFW7TM7pNLTa/403fe6d9b33bcd1c25840c1f42534f59/2023-05-09-ci-gitlab-01.32.03.png)

Clicking it will show you further details of the pipeline. Clicking the **CircleCI: Workflow run tests** button under the external stage will take you back full circle into your workflow inside CircleCI.

![Pipeline details](https://ctf-cci-com.imgix.net/WGuoPZr3wfwmwPLDhcL7o/bf8ae751b53e7a6e9728d6b385243e77/2023-05-09-ci-gitlab-01.32.41.png)

## Connecting CircleCI to a self-managed GitLab instance

Self-managed installations, unlike the hosted SaaS versions, run on infrastructure you control. They will of course also have a different URL to access them. In this tutorial, use `yourgitlabinstance.com` as a placeholder for your actual instance URL.
This section closely follows the [GitLab integration instructions in the CircleCI docs](https://circleci.com/docs/gitlab-integration). Assuming you have followed the steps to get the project into GitLab and committed your first CircleCI config file, you can proceed with connecting to GitLab.

If you don’t have a CircleCI account yet, create one now. Head to https://circleci.com/signup/ and follow the instructions to sign up with your preferred method. If you choose to sign up with GitLab, you will be prompted to select your GitLab instance. Press **Connect** next to GitLab self-managed.

![Connect self-managed](https://ctf-cci-com.imgix.net/34Js8TAdWkl0nANRkvODxw/cdef5ce2194a0331bed75a31ed986889/2023-05-08-ci-gitlab-18.48.16.png)

### GitLab self-managed project setup

The wizard guides you through the setup process step by step. First enter the URL of your GitLab instance and click **Verify**. It will suffix it with `/api/v4` to complete your instance’s API endpoint.

Create your [personal GitLab access token](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html) with an API scope, paste it in the Personal Access Token field and click **Verify**.

Next, pass in the `known_hosts` value and click **Verify**. You can get this by running `ssh-keyscan yourgitlabinstance.com` in the terminal and pasting the entire output in the `known_hosts` field. This allows CircleCI to [verify the authenticity](https://circleci.com/docs/gitlab-integration/#establish-the-authenticity-of-an-ssh-host) of your GitLab instance.

Finally, select your project in the dropdown. This will likely be `cci-python-app`. You can also give it a different name. This will set your GitLab self-managed project up for building on CircleCI.

![New self-managed project](https://ctf-cci-com.imgix.net/3uHYOJan5yJSACrQlvHYCH/29d1adf431472cfa84bc2d9cf6a6a7be/2023-05-08-ci-gitlab-20.11.21.png)

Trigger a pipeline by committing a change and pushing it to your GitLab repo.
This will start your workflow and show it in the next few seconds.

![Project workflows](https://ctf-cci-com.imgix.net/vKnYtN1PPmy4QUmF5XH3m/b07d56d8985838bd4e56941c8e1c9402/2023-05-09-ci-gitlab-01.37.59.png)

You can also see your pipeline in the GitLab UI by navigating to CI/CD > Pipelines on the left-hand side. Note that this might look different, depending on what version of GitLab you are using.

![GitLab pipelines view](https://ctf-cci-com.imgix.net/3yPlK18ZvGO75M2hc8dUsC/381d03ef962ea74f903ae24ad6512845/2023-05-09-ci-gitlab-01.33.49.png)

Clicking on its status will take you to the pipeline details, from where you can navigate back to the CircleCI UI for a 360 degree view of your project and your CI/CD pipeline’s workflows and jobs.

![Workflow and jobs](https://ctf-cci-com.imgix.net/68JW7CkbSe8OZsdCnYjI19/a9bff076e54e4f84dff3054f38295f0e/2023-05-09-ci-gitlab-01.34.42.png)

Congratulations! You have successfully configured CircleCI to start building your project on a self-managed GitLab instance.

## Conclusion

In this tutorial you have learned how to begin building projects hosted on GitLab with CircleCI. We have covered both on-premise self-managed GitLab as well as the hosted SaaS version on GitLab.com. Wishing you successful building!
zmarkan
1,497,225
New Language vs. Complex Requirements - What's Your Pick?
If given a choice, would you rather work on a coding project that demands you to learn a completely...
22,092
2023-06-16T07:00:00
https://dev.to/codenewbieteam/new-language-vs-complex-requirements-whats-your-pick-4d7g
discuss, beginners, codenewbie
If given a choice, would you rather work on a coding project that requires you to learn a completely new programming language, or one that allows you to stick with your favorite language but comes with complex requirements? Join the conversation and let's unravel the trade-offs of these intriguing coding paths!

Follow the [CodeNewbie Org](https://dev.to/codenewbieteam) and [#codenewbie](https://dev.to/t/codenewbie) for more discussions and online camaraderie!

{% embed https://dev.to/codenewbieteam %}
ben
1,497,943
Appwrite OSS Fund Sponsors Strawberry
Hi readers 👋, welcoming you back to the "Appwrite OSS Fund" series, where we celebrate open-source...
18,938
2023-06-09T16:29:18
https://dev.to/appwrite/appwrite-oss-fund-sponsors-strawberry-558i
Hi readers 👋, welcoming you back to the "Appwrite OSS Fund" series, where we celebrate open-source maintainers. 🎉 ## 🤔 What Is OSS Fund? On the 4th of May, the Appwrite team launched the [OSS Fund](https://appwrite.io/oss-fund), an initiative to support open-source project maintainers. Being an open-source company, we wanted to give back to the community and help as many people as we can. The OSS Fund is an initiative that is very close to our heart. Hear what our Founder and CEO has to say - The Appwrite Story: {% embed https://appwrite.io/oss-fund-announcement %} > ### 📢 Announcing The Seventeenth Project > After careful considerations from the committee we are thrilled to announce the seventeenth project: {% twitter 1666054582449864707 %} ## 🤔 What Is Strawberry? [Strawberry](https://github.com/strawberry-graphql/strawberry/) is a modern GraphQL library for Python, that leverages Python’s type hint for a concise and intuitive API design. The library makes it easy to build GraphQL API and provides a built-in debug server for testing and debugging, support for Django, FastAPI and other frameworks. ## 🤝 Meet The Maintainer [Patrick Arminio](https://twitter.com/patrick91) is the creator and main maintainer of Strawberry GraphQL. Originally from Italy, he is now based in London, UK, but frequently travels back to Italy to organize PyCon Italia (and to visit his family), one of the largest PyCons in Europe, with his friends. Currently, he works as a Developer Advocate at Apollo GraphQL, where he helps people get started with GraphQL. He also likes running, hiking and traveling to new places. ## 💡 How Did The Idea Of Strawberry Come Up? Patrick wanted to create a library that used modern python features and that has a great community around it, as he was frustrated with his experience using and contributing to other similar libraries. So the goals of Strawberry have always been to have a library with a great developer experience and a great community around. 
## 🚘 The Journey So Far

Patrick created Strawberry at the end of 2019, after being inspired by a few talks at DjangoCon US. A few months after that he started doing some talks and lightning talks about the library, mostly showing the idea to others and gathering their reactions. Once he realized people liked it, he kept working on the library more and more, slowly adding new features and making it better, while also fostering a great community around it.

In October 2021, FastAPI wrote in their docs that Strawberry was now the recommended library for GraphQL, and that helped them gain more users and contributors.

{% twitter 1444731443259793409 %}

In February 2023 Strawberry had its first core dev sprint, where they worked on onboarding new contributors, finalized some PRs, and introduced support for field extensions. A couple of months after that Strawberry was also selected as one of the 20 projects for the GitHub Accelerator program!

Now the goal is to release the new website and version 1.0.

## 🗒️ Ending Notes

As Patrick continues to build Strawberry for the open source community, we want to thank maintainers like him for contributing back to the community.
haimantika
1,498,496
Day 758 : You Gotta Start Somewhere
liner notes: Professional : Good day today. Got up early to sit in on a meeting that was scheduled...
0
2023-06-08T23:00:02
https://dev.to/dwane/day-758-you-gotta-start-somewhere-594a
hiphop, code, coding, lifelongdev
_liner notes_:

- Professional : Good day today. Got up early to sit in on a meeting that was scheduled to get more folks in other areas that can't attend at other times. After that, I went back to sleep for a couple of more hours. Responded to some community questions. Followed up with folks about questions I had. Updated some documentation and did a pull request. Resubmitted my travel expenses. Had a couple of other meetings. Good to see everyone. Also started looking into a couple of demos that I'll be updating. Day went by quickly!

- Personal : Last night I went through some tracks for the radio show. Also, I think I got all the screens laid out for my next side project. I think I may be in a new trial program to try some new technology that I could use in another side project I've been thinking about.

![A field of tall grass in Campagna, Italy with hills in the distance](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qta72bu9zejzxoyatz05.jpg)

Going to go through tracks for the radio show. Then I want to set up a starter site for my next side project, work on the routing of pages and the dashboard. Maybe even set it up to be deployable. You gotta start somewhere. Might as well start at the beginning.

Have a great night!

peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com

{% youtube j6R6JybfChs %}
dwane
1,499,405
Need text area with number of lines limit, characters per line and total character count in React js functional
Need text area with number of lines limit, characters per line and total character count
0
2023-06-09T17:17:50
https://dev.to/bharathibillakanti/text-area-in-react-project-1lm6
---
title: Need text area with number of lines limit, characters per line and total character count in React js functional
published: true
description: Need text area with number of lines limit, characters per line and total character count
tags:
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2023-06-09 17:13 +0000
---
bharathibillakanti
1,499,641
The Essential Role of JavaScript in Modern Frontend Development
JavaScript is an essential technology that has a pivotal role in contemporary frontend development....
0
2023-06-10T01:56:56
https://dev.to/uzafar90/the-essential-role-of-javascript-in-modern-frontend-development-3m0
frontend, javascript, modernjs, tutorial
JavaScript is an essential technology that has a pivotal role in contemporary frontend development. It grants developers the ability to craft engaging and interactive user experiences, validate and manipulate user input, introduce asynchronous functionality, construct intricate web applications utilizing frameworks and libraries, and enhance performance. Within this blog post, we shall delve into the different facets of JavaScript's significance in frontend development, showcase practical examples of its applications, and incorporate code snippets to demonstrate its practical implementation.

## 1. Enhancing User Experience with Dynamic Content

JavaScript offers a significant advantage by enabling the creation of dynamic content and the improvement of user experiences. Developers can enhance web applications with interactivity and responsiveness using JavaScript. Examples of commonly used user interface elements, such as sliders, modals, and dropdown menus, rely on JavaScript for their functionality. Let's explore an illustration of a responsive image slider implemented with JavaScript:

```html
<!-- HTML -->
<div class="slider">
  <img src="image1.jpg" alt="Image 1" />
  <img src="image2.jpg" alt="Image 2" />
  <img src="image3.jpg" alt="Image 3" />
</div>
```

```javascript
// JavaScript
const slider = document.querySelector('.slider');
let currentImageIndex = 0;

function showNextImage() {
  // Hide the current image before advancing, so only one image is visible
  slider.children[currentImageIndex].style.display = 'none';
  currentImageIndex = (currentImageIndex + 1) % slider.children.length;
  slider.children[currentImageIndex].style.display = 'block';
}

setInterval(showNextImage, 3000);
```

In this example, we create an image slider that automatically switches between images every three seconds. JavaScript enables us to manipulate the DOM and control the display of elements, allowing for dynamic and engaging user interfaces.

## 2. Manipulating and Validating User Input

JavaScript plays a vital role in manipulating and validating user input in real time.
By harnessing JavaScript's abilities, developers can offer instant feedback to users, guaranteeing data accuracy and improving the overall user experience. Let's examine an example of form validation utilizing JavaScript: ```html <!-- HTML --> <form id="myForm"> <input type="text" id="emailInput" placeholder="Enter your email" /> <button type="submit">Submit</button> </form> ``` ```javascript // JavaScript const form = document.getElementById('myForm'); const emailInput = document.getElementById('emailInput'); form.addEventListener('submit', (event) => { event.preventDefault(); if (!isValidEmail(emailInput.value)) { alert('Please enter a valid email address.'); } else { // Submit the form } }); function isValidEmail(email) { const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/; return emailRegex.test(email); } ``` In this example, we validate an email input field using JavaScript. When the form is submitted, the JavaScript function `isValidEmail` checks whether the entered email follows a valid format using a regular expression. If the input is invalid, an alert message is displayed. Otherwise, the form can be submitted. ## 3. Implementing Asynchronous Behavior with AJAX With JavaScript, developers can incorporate asynchronous behavior through AJAX, short for Asynchronous JavaScript and XML. AJAX facilitates the seamless retrieval and updating of data without the need for a complete page refresh. Let's delve into an example that demonstrates live search functionality using AJAX: ```html <!-- HTML --> <input type="text" id="searchInput" placeholder="Search..." 
/> <ul id="searchResults"></ul> ``` ```javascript // JavaScript const searchInput = document.getElementById('searchInput'); const searchResults = document.getElementById('searchResults'); searchInput.addEventListener('input', () => { const searchTerm = searchInput.value; // Make an AJAX request to fetch search results // and update the searchResults element with the results // using DOM manipulation }); ``` In this example, as the user types in the search input, an AJAX request is made to retrieve search results. The search results are then dynamically updated in real time without reloading the entire page, providing a seamless and responsive search experience. ## 4. Creating Modern Web Applications with Frameworks and Libraries Frontend development is made more convenient and productive by JavaScript frameworks and libraries. They offer reusable components, state management, and various tools to efficiently construct complex web applications. React, Angular, and Vue.js are among the popular frameworks. Now, let's examine a code snippet that illustrates the implementation of a basic counter component using React: ```javascript // JavaScript (with React) import React, { useState } from 'react'; function Counter() { const [count, setCount] = useState(0); const increment = () => { setCount(count + 1); }; const decrement = () => { setCount(count - 1); }; return ( <div> <button onClick={decrement}>-</button> <span>{count}</span> <button onClick={increment}>+</button> </div> ); } ``` In this example, we use React, a popular JavaScript library for building user interfaces, to create a counter component. React's state management allows us to track and update the count, and the component renders the count and buttons to increment and decrement it. JavaScript frameworks and libraries provide powerful tools and abstractions that significantly accelerate frontend development. ## 5. 
Optimizing Performance with JavaScript JavaScript provides opportunities to optimize the performance of web applications by utilizing techniques like code minification, lazy loading, and caching. These techniques can enhance loading speed and overall performance. Let's explore an example that demonstrates the implementation of lazy loading images using JavaScript: ```html <!-- HTML --> <img src="placeholder.jpg" data-src="image.jpg" alt="Lazy-loaded image"> ``` ```javascript // JavaScript const images = document.querySelectorAll('img[data-src]'); function lazyLoadImage(image) { image.setAttribute('src', image.getAttribute('data-src')); image.onload = () => { image.removeAttribute('data-src'); }; } const imageObserver = new IntersectionObserver((entries, observer) => { entries.forEach((entry) => { if (entry.isIntersecting) { lazyLoadImage(entry.target); observer.unobserve(entry.target); } }); }); images.forEach((image) => { imageObserver.observe(image); }); ``` In this example, the images have a `data-src` attribute that stores the actual image source. Using the Intersection Observer API, the images are lazy-loaded only when they become visible within the viewport. This approach helps improve the initial loading time of a page. ## Conclusion: JavaScript is an essential component of modern frontend development. Its ability to enhance user experiences, manipulate and validate user input, implement asynchronous behavior, leverage frameworks and libraries, and optimize performance makes it a vital tool for creating robust and interactive web applications. By harnessing the power of JavaScript, developers can unlock a world of possibilities in frontend development.
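As an aside, the wrap-around logic in the image slider from section 1 can be isolated into a pure function, which makes it trivial to unit-test. This is a minimal sketch; `nextIndex` is a name introduced here for illustration, not part of the original example:

```javascript
// Pure helper: advance an index and wrap back to 0 past the end.
// Equivalent to the increment-and-reset in showNextImage, but side-effect free.
function nextIndex(current, length) {
  return (current + 1) % length;
}

console.log(nextIndex(0, 3)); // 1
console.log(nextIndex(2, 3)); // 0 (wraps around)
```

Extracting small pieces of logic like this keeps DOM manipulation code thin and makes the interesting part testable without a browser.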
uzafar90
1,500,201
6 Must-Try Coding Problem Websites 💻
Do you know your language? No, really. A lot of us (myself included) can learn the basics of a...
0
2023-06-10T15:53:13
https://dev.to/jd2r/6-must-try-coding-problem-websites-53c0
algorithms, programming, coding
Do you know your language? No, really. A lot of us (myself included) can learn the basics of a programming language and think we've mastered every possible dimension. And while all you really need are the basics to get started, it's helpful to test your newfound knowledge as you begin to dive deeper into the specific technicalities of your particular choice.

To test, we'll turn to some of the simplest free resources out there - coding practice problems. Solving these problems pays off in three ways: gaining experience with your particular language, strengthening your problem-solving and critical thinking skills (take that, AI!), and in some cases learning about and becoming comfortable with DSA (tech interviews, anyone?). And yes, LeetCode isn't the only option. Ready to get started? Let's explore this list of my favorite coding practice sites.

## Codewars

I really love the [Codewars](https://www.codewars.com/dashboard) platform. Why? They're really fun problems. While other platforms might have you finding primes under 1000, Codewars features unique problems like Vigenère cipher cracking and fraction/mixed number converters.

The system revolves around "honor", a measurement of your contribution to the platform and your competency at solving kata (practice problems). It features many different language options and opportunities to contribute your own solutions once you solve enough kata, in addition to their remixing/refactoring challenges called kumite. Get stuck on a problem and need help? There's an active [Discord](https://discord.gg/mSwJWRvkHA) community that's ready to point you in the right direction.

There *is* a paid plan, [Codewars Red](https://www.codewars.com/subscription), which goes for about $48 USD a year - but the subscription is not required to have a blast on the platform with all the free offerings.
Codewars also connects developers searching for a job with opportunities through their platform [Qualified Jobs](https://jobs.qualified.io/), so if you're looking for hire you might want to make an account and check around.

## CodinGame

As of the time I'm writing this post, this is probably my top pick to spend time on. [CodinGame](https://www.codingame.com/home) was introduced to me about a month ago by one of my friends, who was enthralled by the code golf aspect of it. I'm actually writing another post right now about golfing and Ruby - make sure to drop a follow so you don't miss it.

The platform revolves around games. It's learning - through gaming. And who doesn't like games? Whether you're clashing it out with other coders around the world to solve a coding problem or personally optimizing a particular solution by yourself, it's super fun to do. Plus, it gives me the satisfaction of seeing that I'm currently in the top 2% of all users (that's worth a [friend request](https://www.codingame.com/profile/f0cf37fc3256bf8c7a914253d3da54725619225), right?) with their live ranking system. This is a great touch that's missing from a lot of other platforms, but if you're not one for competition it might not be a great choice.

CodinGame also provides certifications to prove your skills, measuring things like speed, efficiency, and proficiency in a certain language.

## LeetCode

Here's the classic. Pretty much everyone knows about [LeetCode's](https://leetcode.com/problemset/all/) platform nowadays, and for good reason. It's an expansive library of over 2500 questions grouped into topic-specific lists. Mainly used for interview preparation, it's also a good platform to just spend time working through problems for the sheer knowledge about problem-solving and DSA you'll gain.

But fear not, you don't have to just do problems from 1 onward. LeetCode offers fun contests and incentives frequently as a reason to solve challenges.
I recently participated in the 30 Days of LC challenge, from which I earned 100 LeetCoins (LC currency which can be exchanged for rewards) and a whole lot of knowledge.

The platform does offer a paid plan which, though pricey ($159 a year), will probably pay off if you devote time to studying for interviews. Who knows - job-specific preparation (available with premium) might just pay off in the form of getting a job at a big tech company someday.

Is it the most exciting platform? No, not in my opinion. Will it equip you with problem-solving skills that will come in clutch in real-world situations like tech interviews? Most certainly, and often that's more important than having fun initially. You've got to have the job to enjoy the job.

## GeeksForGeeks

If you've read my posts before, you've probably seen that I recommend [GeeksForGeeks](https://www.geeksforgeeks.org/) as one of my favorite platforms - and it's because it literally does everything. Tutorials? Got it. Code examples? Got those too. Interview prep and coding problems? Check and check.

In their [Practice](https://practice.geeksforgeeks.org/) portal, you can find many problems that you might find useful, especially if you're looking to prep for DSA. These problems are also grouped into helpful lists that categorize by data structure, company, or suggested experience level.

There's no paid plan in general, but GFG courses are available for purchase at any time. If you get on their email list and wait a bit, you might just get a better deal during one of the sales they occasionally push. It's not the most complex or in-depth study, and it's not the most exciting platform to learn on, but the resources that *are* available will make it well worth your time.
## Project Euler

While all the other platforms have built-in editors and code checkers that measure the efficiency and speed of the code you're running, the next two challenge websites have a different take on problems - off-site solutions. What does that mean? A prompt is given, and it's up to you to find the answer. The only input required from you is the answer, not the code. You can write the code however you want to, as long as the answer is correct.

[Project Euler](https://projecteuler.net/archives) is one of these platforms. The problems start pretty easy, but get very hard as you work your way up the list. I like the freedom to come up with the answer on your own, but a feature I find slightly annoying is that every time you get an answer wrong, you have to wait a progressively longer time before a new submission is accepted.

A unique feature of Project Euler's platform is that its problems are on lockdown: above problem 100, no solutions may be shared publicly, on pain of account shutdown. While you can google the answer to any LeetCode question (granted, you won't learn anything by cheating), Project Euler answers are much harder to come by - and perhaps that's why the popularity of this platform is much lower than others'.

Project Euler offers just under 900 questions, but this doesn't cut down on the depth of the platform, as working through all of them will take you quite a while. Everything is free, so no paid plan is offered or required.

Would I recommend it for every use case? No. If you're looking to study for a specific interview or preparation track, then use LeetCode or GFG's platform. If you're looking to go on a programming adventure and use your skills at any level, then this and the next option might be good to look into.

## Advent of Code

I really enjoy [Advent of Code](https://adventofcode.com/). The platform is laid out really well, and I love the style of the whole thing.
The challenges are all Christmas-themed, which is super unique and contributes a lot to the feel of the website. Canonically, you're supposed to do each problem on the day it's released - December 1st through 25th. In doing so, you'll qualify for a placement on the leaderboard (but good luck beating the cult that surrounds this every December).

In contrast to Project Euler, solutions for each problem are publicly available everywhere from GitHub to private blogs. If you get stuck on a problem, these solutions are always around to push you in the right direction. If you really feel like blazing your own trail and challenging yourself, pick a rare language like Befunge-93.

Coming in too late to solve the problems on the exact days they're issued? Don't worry, each problem is still available for you to solve today.

The use case is the same as that of Project Euler. If you're specifically studying for something, this isn't a great choice. However, I think it's a great way to test your knowledge and problem-solving skills - because of the way it's formatted, each problem strengthens problem-solving skills rather than specific language skills.

## Conclusion

Coding challenges are some of the best ways to boost your problem-solving skills along with other good qualities like language familiarity, coding speed, and optimization. If you're not doing coding challenges every day, it's definitely a habit you should develop. Pick one of these platforms and give it a go - I promise it's worth your time.

I hope you enjoyed this article! If you want to see more from me, don't forget to drop a like and a follow. Thanks, and see you in the next post! ✏
jd2r
1,500,363
Generate Breadcrumb and Navigation in SvelteKit
Building Dynamic Navigation As we know that in SvelteKit, +page.svelte files point to the...
0
2023-06-10T19:43:19
https://blog.aakashgoplani.in/generate-breadcrumb-and-navigation-in-sveltekit
sveltekit, breadcrumb, navigationbar
### Building Dynamic Navigation

As we know, in SvelteKit, `+page.svelte` files point to the actual web pages. So, to generate navigation, we need a way to get the list of all `+page.svelte` files. Currently, SvelteKit (v1.20.2) does not have any built-in mechanism that gives us a list of all pages.

The solution is to import all the `+page.svelte` files from `src/routes` using [Vite's Glob imports](https://vitejs.dev/guide/features.html#glob-import) and generate a list out of it. Here is how we can do that.

In ***src/routes/+layout.svelte***, we fetch all the `+page.svelte` files using Vite. The output is an object of key-value pairs, where each key is the path to a `+page.svelte` file and each value is a function returning a Promise that dynamically imports the module.

![layout_output](https://cdn.hashnode.com/res/hashnode/image/upload/v1686392222038/dc25c6af-5c77-450f-90c5-0636d80aa339.png align="center")

We will pass this object to our Navigation component, which will perform a series of operations to generate the navigation.
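Illustratively, the glob result can be mocked in plain JavaScript. The paths and loader functions below are stand-ins for what Vite generates, not real modules:

```javascript
// A mock of import.meta.glob('./**/+page.svelte') output:
// keys are file paths, values are functions returning a module Promise.
const modules = {
  './+page.svelte': () => Promise.resolve({}),
  './profile/+page.svelte': () => Promise.resolve({}),
  './blog/[slug]/+page.svelte': () => Promise.resolve({}),
};

// The Navigation component iterates over these keys to build its menu.
console.log(Object.keys(modules));
```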
```ts
<script lang="ts">
  import Navigation from '$lib/components/Navigation.svelte';

  const modules = import.meta.glob('./**/+page.svelte');
</script>

<Navigation {modules} />
```

Let's focus on the ***src/lib/components/Navigation.svelte*** file:

```ts
<script lang="ts">
  import { page } from '$app/stores';
  import { onMount } from 'svelte';

  export let modules: Record<string, () => Promise<unknown>>;
  let menu: Array<{ link: string; title: string }> = [];

  onMount(() => {
    for (let path in modules) {
      let pathSanitized = path.replace('.svelte', '').replace('./', '/');

      // for group layouts
      if (pathSanitized.includes('/(')) {
        pathSanitized = pathSanitized.substring(pathSanitized.indexOf(')/') + 1);
      }

      // for dynamic paths -> needs more triaging
      if (pathSanitized.includes('[')) {
        pathSanitized = pathSanitized.replaceAll('[', '').replaceAll(']', '');
      }

      pathSanitized = pathSanitized.replace('/+page', '');

      menu = [
        ...menu,
        {
          title: pathSanitized ? pathSanitized.substring(pathSanitized.lastIndexOf('/') + 1) : 'home',
          link: pathSanitized ? pathSanitized : '/'
        }
      ];
    }
  });
</script>

<!-- the "nav-list" class is needed here so the styles below apply -->
<div class="nav-list">
  <ul>
    {#each menu as item}
      <li>
        <a href={item.link} class:active={$page.url.pathname === item.link}>{item.title}</a>
      </li>
    {/each}
  </ul>
</div>

<style lang="scss">
  .nav-list {
    margin: 0 1rem;
    padding: 1rem;

    ul > li {
      display: inline-block;
    }

    ul,
    li {
      margin: 0;
      padding: 0;
    }

    a {
      padding: 1rem;
      color: red;
      text-decoration: none;

      &.active {
        color: blue;
      }
    }
  }
</style>
```

* This component iterates over the object and generates an array of link/title pairs.

```ts
[
  {
    "title": "profile",
    "link": "/profile"
  },
  ...
]
```

* In the process, it ignores *group layouts* and *dynamic paths*. This step is quite opinionated; you could override the logic according to your requirements. Do let me know how you plan to deal with group and dynamic paths in the comments section!

With this configuration, our Navigation Bar is ready.
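The path-sanitising steps in the component above (strip the extension, unwrap group layouts, drop brackets from dynamic segments) can be exercised in isolation as a plain function. A sketch — `sanitizePath` is a name introduced here for illustration:

```javascript
// Mirrors the sanitising logic from Navigation.svelte as a pure function.
function sanitizePath(path) {
  let p = path.replace('.svelte', '').replace('./', '/');
  // Group layouts: /(app)/about/+page collapses to /about/+page
  if (p.includes('/(')) {
    p = p.substring(p.indexOf(')/') + 1);
  }
  // Dynamic segments: [slug] -> slug (needs more triaging, as noted above)
  if (p.includes('[')) {
    p = p.replaceAll('[', '').replaceAll(']', '');
  }
  return p.replace('/+page', '');
}

console.log(sanitizePath('./profile/+page.svelte'));       // "/profile"
console.log(sanitizePath('./(app)/about/+page.svelte'));   // "/about"
console.log(sanitizePath('./blog/[slug]/+page.svelte'));   // "/blog/slug"
```

Pulling the logic out like this makes it easy to add test cases as you refine the handling of group and dynamic routes.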
You can see the demo at [Stackblitz](https://stackblitz.com/edit/stackblitz-starters-aev7ds). In the next section, we will work our way through breadcrumbs!

### Building Breadcrumbs

SvelteKit uses a filesystem-based router. Files named `+layout.svelte` determine the layout for the current page and the pages below it in the file hierarchy. We can use SvelteKit's `page` store to determine the current path and pass that location to a breadcrumb component, which we add to the layout.

***src/routes/+layout.svelte***

```html
<Breadcrumb path={$page.url.pathname} />
```

***src/lib/components/Breadcrumb.svelte***

```ts
<script lang="ts">
  export let path: string;

  let crumbs: Array<{ label: string, href: string }> = [];

  $: {
    // Remove zero-length tokens.
    const tokens = path.split('/').filter((t) => t !== '');

    // Create { label, href } pairs for each token.
    let tokenPath = '';
    crumbs = tokens.map((t) => {
      tokenPath += '/' + t;
      t = t.charAt(0).toUpperCase() + t.slice(1);
      return {
        label: t,
        href: tokenPath
      };
    });

    // Add a way to get home too.
    crumbs.unshift({ label: 'Home', href: '/' });
  }
</script>

<div class="breadcrumb">
  {#each crumbs as c, i}
    {#if i == crumbs.length - 1}
      <span class="label">
        {c.label}
      </span>
    {:else}
      <a href={c.href}>{c.label}</a> &gt;&nbsp;
    {/if}
  {/each}
</div>

<style lang="scss">
  .breadcrumb {
    margin: 0 1.5rem;
    padding: 1rem 2rem;

    a {
      display: inline-block;
      color: red;
      padding: 0 0.5rem;
    }

    .label {
      padding-left: 0.5rem;
      color: blue;
    }
  }
</style>
```

### Wrapping Up

* You can see the demo at [Stackblitz](https://stackblitz.com/edit/stackblitz-starters-aev7ds).
* I would like to give some credit here: the idea behind generating navigation using Vite is from a YouTube video by [Web Jeda](https://youtu.be/Y_NE2R3HuOU), and the idea for generating breadcrumbs using SvelteKit's store is from [Dean Fogarty's blog](https://df.id.au/technical/svelte/breadcrumbs/).
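As a final aside, the crumb-building logic in the breadcrumb component is likewise easy to lift out and test as plain JavaScript. A sketch — `buildCrumbs` is an illustrative name, not part of the component:

```javascript
// Mirrors the reactive block in Breadcrumb.svelte as a pure function.
function buildCrumbs(path) {
  // Remove zero-length tokens.
  const tokens = path.split('/').filter((t) => t !== '');

  // Create { label, href } pairs for each token, capitalising labels.
  let tokenPath = '';
  const crumbs = tokens.map((t) => {
    tokenPath += '/' + t;
    const label = t.charAt(0).toUpperCase() + t.slice(1);
    return { label, href: tokenPath };
  });

  // Add a way to get home too.
  crumbs.unshift({ label: 'Home', href: '/' });
  return crumbs;
}

console.log(buildCrumbs('/blog/post'));
// [ { label: 'Home', href: '/' },
//   { label: 'Blog', href: '/blog' },
//   { label: 'Post', href: '/blog/post' } ]
```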
aakashgoplani
1,500,776
Azure Container Apps, Easy Auth and .NET authentication
Liquid syntax error: Unknown tag 'endraw'
0
2023-06-11T08:17:20
https://johnnyreilly.com/azure-container-apps-easy-auth-and-dotnet-authentication
azurecontainerapps, easyauth, aspnet, authentication
---
title: Azure Container Apps, Easy Auth and .NET authentication
published: true
tags: azurecontainerapps,easyauth,aspnet,authentication
canonical_url: https://johnnyreilly.com/azure-container-apps-easy-auth-and-dotnet-authentication
---

Easy Auth is a great way to authenticate your users. However, when used in the context of Azure Container Apps, .NET applications do not, by default, recognise that Easy Auth is in place. You might be authenticated, but .NET will still act as if you aren't. `builder.Services.AddAuthentication()` and `app.UseAuthentication()` don't change that. This post explains the issue and solves it through the implementation of an `AuthenticationHandler`.

![title image reading "Azure Container Apps, Easy Auth and .NET authentication" with the Azure Container App logos](https://raw.githubusercontent.com/johnnyreilly/blog.johnnyreilly.com/main/blog-website/blog/2023-06-11-azure-container-apps-easy-auth-and-dotnet-authentication/title-image.png)

<!--truncate-->

If you're looking for information about Easy Auth and roles with .NET and Azure App Service, you can find it here:

- [Azure App Service, Easy Auth and Roles with .NET](https://johnnyreilly.com/2021/01/14/azure-easy-auth-and-roles-with-dotnet-and-core)
- [Azure App Service, Easy Auth and Roles with .NET and Microsoft.Identity.Web](https://johnnyreilly.com/2021/01/17/azure-easy-auth-and-roles-with-net-and-microsoft-identity-web)

## `User.Identity.IsAuthenticated == false`

When I'm building an application, I want to focus on the problem I'm solving. I don't want to think about how to implement my own authentication system. Rather, I lean on an auth provider for that, and if I'm working in the Azure ecosystem, that often means Easy Auth, usually with Azure AD.

I recently started building a .NET application using Easy Auth and deploying to Azure Container Apps. One thing that surprised me when I tested it out was that, whilst I was being authenticated, my app didn't seem to be aware of it.
When I inspected the `User.Identity.IsAuthenticated` property in my application, it was `false`. The reason why lies [in the documentation](https://learn.microsoft.com/en-us/azure/container-apps/authentication#access-user-claims-in-application-code):

> For all language frameworks, Container Apps makes the claims in the incoming token available to your application code. The claims are injected into the request headers, which are present whether from an authenticated end user or a client application. External requests aren't allowed to set these headers, so they're present only if set by Container Apps. Some example headers include:
>
> `X-MS-CLIENT-PRINCIPAL-NAME`
>
> `X-MS-CLIENT-PRINCIPAL-ID`
>
> _Code that is written in any language or framework can get the information that it needs from these headers._

The emphasis above is mine. What it's saying here is this: **you need to implement this yourself**.

## Examining the headers

Sure enough, when I inspected the headers in my application, I could see these:

![screenshot of easy auth headers including `X-MS-CLIENT-PRINCIPAL-NAME`, `X-MS-CLIENT-PRINCIPAL-ID`, `X-MS-CLIENT-PRINCIPAL-IDP` and `X-MS-CLIENT-PRINCIPAL`](https://raw.githubusercontent.com/johnnyreilly/blog.johnnyreilly.com/main/blog-website/blog/2023-06-11-azure-container-apps-easy-auth-and-dotnet-authentication/screenshot-easy-auth-headers.webp)

The `X-MS-CLIENT-PRINCIPAL` header is particularly interesting. From its appearance, you might assume it's a [JWT](https://jwt.io/). It's not. It's actually a base64-encoded JSON string that represents the signed-in user and their claims.
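The same base64-then-JSON decoding works outside the browser too. A minimal Node sketch, using a made-up principal payload rather than a real header value:

```javascript
// Build a sample X-MS-CLIENT-PRINCIPAL value (normally set by Easy Auth);
// the payload here is fabricated for illustration.
const samplePrincipal = {
  auth_typ: 'aad',
  claims: [{ typ: 'name', val: 'Jane Doe' }],
  name_typ: 'name',
  role_typ: 'roles',
};
const headerValue = Buffer.from(JSON.stringify(samplePrincipal)).toString('base64');

// Decoding is just base64 -> JSON, mirroring JSON.parse(atob(...)) in devtools.
const decoded = JSON.parse(Buffer.from(headerValue, 'base64').toString('utf8'));
console.log(decoded.claims[0].val); // "Jane Doe"
```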
It's actually super easy to decode in your browser devtools:

```js
JSON.parse(atob(xMsClientPrincipal));
```

If you decode it, you'll see something like this:

![a screenshot of the decoded object](https://raw.githubusercontent.com/johnnyreilly/blog.johnnyreilly.com/main/blog-website/blog/2023-06-11-azure-container-apps-easy-auth-and-dotnet-authentication/screenshot-decoded-x-ms-client-principal-header.png)

Given that this information is present, let's tell .NET about it.

## Implementing `AddAzureContainerAppsEasyAuth()`

We're going to implement an `AuthenticationHandler` that takes the information from the `X-MS-CLIENT-PRINCIPAL` header and uses it to create a `ClaimsPrincipal`:

```cs title="AzureContainerAppsEasyAuth.cs"
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using System.Security.Claims;
using System.Text.Encodings.Web;
using System.Text.Json;
using System.Text.Json.Serialization;

/// <summary>
/// Support for EasyAuth authentication in Azure Container Apps
/// </summary>
namespace Azure.ContainerApps.EasyAuth;

public static class EasyAuthAuthenticationBuilderExtensions
{
    public const string EASYAUTHSCHEMENAME = "EasyAuth";

    public static AuthenticationBuilder AddAzureContainerAppsEasyAuth(
        this AuthenticationBuilder builder,
        Action<EasyAuthAuthenticationOptions>? configure = null)
    {
        if (configure == null) configure = o => { };
        return builder.AddScheme<EasyAuthAuthenticationOptions, EasyAuthAuthenticationHandler>(
            EASYAUTHSCHEMENAME,
            EASYAUTHSCHEMENAME,
            configure);
    }
}

public class EasyAuthAuthenticationOptions : AuthenticationSchemeOptions
{
    public EasyAuthAuthenticationOptions()
    {
        Events = new object();
    }
}

public class EasyAuthAuthenticationHandler : AuthenticationHandler<EasyAuthAuthenticationOptions>
{
    public EasyAuthAuthenticationHandler(
        IOptionsMonitor<EasyAuthAuthenticationOptions> options,
        ILoggerFactory logger,
        UrlEncoder encoder,
        ISystemClock clock) : base(options, logger, encoder, clock)
    {
    }

    protected override async Task<AuthenticateResult> HandleAuthenticateAsync()
    {
        try
        {
            var easyAuthProvider = Context.Request.Headers["X-MS-CLIENT-PRINCIPAL-IDP"].FirstOrDefault() ?? "aad";
            var msClientPrincipalEncoded = Context.Request.Headers["X-MS-CLIENT-PRINCIPAL"].FirstOrDefault();
            if (string.IsNullOrWhiteSpace(msClientPrincipalEncoded))
                return AuthenticateResult.NoResult();

            var decodedBytes = Convert.FromBase64String(msClientPrincipalEncoded);
            using var memoryStream = new MemoryStream(decodedBytes);
            var clientPrincipal = await JsonSerializer.DeserializeAsync<MsClientPrincipal>(memoryStream);
            if (clientPrincipal == null || !clientPrincipal.Claims.Any())
                return AuthenticateResult.NoResult();

            var claims = clientPrincipal.Claims.Select(claim => new Claim(claim.Type, claim.Value));

            // remap "roles" claims from easy auth to the more standard ClaimTypes.Role /
            // "http://schemas.microsoft.com/ws/2008/06/identity/claims/role"
            var easyAuthRoleClaims = claims.Where(claim => claim.Type == "roles");
            var claimsAndRoles = claims.Concat(easyAuthRoleClaims.Select(role => new Claim(ClaimTypes.Role, role.Value)));

            var principal = new ClaimsPrincipal();
            principal.AddIdentity(new ClaimsIdentity(claimsAndRoles, clientPrincipal.AuthenticationType, clientPrincipal.NameType, ClaimTypes.Role));

            var ticket = new AuthenticationTicket(principal, easyAuthProvider);
            var success = AuthenticateResult.Success(ticket);
            Context.User = principal;
            return success;
        }
        catch (Exception ex)
        {
            return AuthenticateResult.Fail(ex);
        }
    }
}

public class MsClientPrincipal
{
    [JsonPropertyName("auth_typ")]
    public string? AuthenticationType { get; set; }

    [JsonPropertyName("claims")]
    public IEnumerable<UserClaim> Claims { get; set; } = Array.Empty<UserClaim>();

    [JsonPropertyName("name_typ")]
    public string? NameType { get; set; }

    [JsonPropertyName("role_typ")]
    public string? RoleType { get; set; }
}

public class UserClaim
{
    [JsonPropertyName("typ")]
    public string Type { get; set; } = string.Empty;

    [JsonPropertyName("val")]
    public string Value { get; set; } = string.Empty;
}
```

There are a few things to note from the above:

- `EasyAuthAuthenticationHandler` is an `AuthenticationHandler` that takes the information from the `X-MS-CLIENT-PRINCIPAL` header and uses it to create a `ClaimsPrincipal`.
- The `MsClientPrincipal` class is a representation of the decoded `X-MS-CLIENT-PRINCIPAL` header.
- `AddAzureContainerAppsEasyAuth` is an extension method for the `AuthenticationBuilder` object - this allows users to make use of the handler in their application.
- With Easy Auth, role claims arrive in the custom `"roles"` claim. This is somewhat non-standard, so we remap `"roles"` claims to be `ClaimTypes.Role` / `"http://schemas.microsoft.com/ws/2008/06/identity/claims/role"` claims as well. This should ensure that anything built with the expectation of that type of claim behaves in the way you'd expect.

## Using `AddAzureContainerAppsEasyAuth()`

Now we've written our handler, we can use it in our application. We do this by calling `AddAzureContainerAppsEasyAuth()` in our `Program.cs`:

```cs title="Program.cs"
//...
builder.Services
    .AddAuthentication(EasyAuthAuthenticationBuilderExtensions.EASYAUTHSCHEMENAME)
    .AddAzureContainerAppsEasyAuth(); // <-- here

builder.Services.AddAuthorization();
//...
var app = builder.Build();
//...
app.UseAuthentication();
app.UseAuthorization();
//...
```

Now when we run our application, we'll see that `User.Identity.IsAuthenticated` is `true` when we're authenticated in Azure Container Apps!

## Easy Auth differs across Azure App Service, Azure Container Apps, Azure Static Web Apps and Azure Functions

One thing that became very clear to me as I worked on this is that Easy Auth is implemented differently in Azure App Service, Azure Container Apps, Azure Static Web Apps and Azure Functions. Whilst the authentication appears to be the same, the headers differ across the services. So the code above will work in Azure Container Apps; for other Azure services I can't vouch for it.

The code in this post is very similar to that in [`MaximeRouiller.Azure.AppService.EasyAuth`](https://github.com/MaximRouiller/MaximeRouiller.Azure.AppService.EasyAuth). But it's not quite the same, as that library depends upon a `WEBSITE_AUTH_ENABLED` environment variable, which isn't present in Azure Container Apps. Likewise, the `Microsoft.Identity.Web` package [supports Easy Auth, but for Azure App Service](https://github.com/AzureAD/microsoft-identity-web/wiki/1.2.0#integration-with-azure-app-services-authentication-of-web-apps-running-with-microsoftidentityweb). If you [take a look at the code](https://github.com/AzureAD/microsoft-identity-web/blob/c7f146e93180deece63f879453e98a872cdda658/src/Microsoft.Identity.Web/AppServicesAuth/AppServicesAuthenticationInformation.cs#L38), you'll see it is powered by environment variables and headers which aren't present in Azure Container Apps.

So whilst it would be tremendous if this was built into .NET, or available in a NuGet package somewhere, I'm not aware of one at the time of writing - so I made this. Perhaps this should become a NuGet package? Let me know if you think so! I've also raised a [feature request in the `Microsoft.Identity.Web` repo to support Azure Container Apps](https://github.com/AzureAD/microsoft-identity-web/issues/2274).
If you'd like to see this, please upvote it!
johnnyreilly
1,500,823
Enhancing Your CSS Skills: Dive into :not, :is, and :where Pseudo-Classes.
Introduction: In the world of CSS, readability and maintainability are crucial factors...
0
2023-06-11T15:52:06
https://dev.to/deepak22448/enhancing-your-css-skills-dive-into-not-is-and-where-pseudo-classes-2pm
css, beginners, webdev, frontend
## Introduction:

In the world of CSS, readability and maintainability are crucial factors when it comes to writing clean and efficient code. Fortunately, CSS provides powerful pseudo-classes such as `:is`, `:where`, and `:not`, which can significantly improve the readability of your stylesheets. These pseudo-classes allow you to write concise and targeted selectors, making your code easier to understand and navigate. In this blog post, we will explore how leveraging the `:is`, `:where`, and `:not` pseudo-classes can lead to more maintainable and efficient stylesheets. Let's dive in and unlock the potential of these powerful selectors!

Assuming you have a fundamental knowledge of what CSS pseudo-classes are, let's begin.

## :not() :

```
:not(<complex-selector-list>) {
  /* styles */
}
```

The parameter for the `:not()` pseudo-class must be a list of one or more selectors separated by commas. Consider the below `index.html`:

```
<div class="card">
  <h1>Heading</h1>
  <p>Caption</p>
  <p>Paragraph</p>
  <button>Learn More</button>
</div>
```

![html card](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5250t8ihf3nnf9s7i5q0.png)

Suppose you wanted to apply some common styles to all the elements except the button in the above code. What would you do? The straightforward solution is to style everything and then override the button's styles, right? This is where `:not()` comes into play: it applies the styles to everything except the elements passed as its parameter.

```
.card > :not(button) {
  color: teal;
  margin-top: 3px;
  letter-spacing: 0.8px;
}
```

The styles above will be applied to all the direct children of `.card` except the button.

![html card using is() class](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mu36q4iescfzp0xis1kk.png)

### An amazing feature of this pseudo-class is that it allows for chaining: `:not(x) y:not(z)`.
## :is() and :where() :

Basically, `:is()` and `:where()` are similar and help in writing cleaner, more readable code.

```
:is(<complex-selector-list>) {
  /* styles */
}
```

The `:is()` CSS pseudo-class function takes a selector list as its argument and selects any element that can be selected by one of the selectors in that list. This is useful for writing large selectors in a more compact form.

```
<ol>
  <li>Saturn</li>
  <li>
    <ul>
      <li>Mimas</li>
      <li>Enceladus</li>
      <li>
        <ol>
          <li>Voyager</li>
          <li>Cassini</li>
        </ol>
      </li>
      <li>Tethys</li>
    </ul>
  </li>
  <li>Uranus</li>
  <li>
    <ol>
      <li>Titania</li>
      <li>Oberon</li>
    </ol>
  </li>
</ol>
```

It might sound confusing, but once you see the CSS, everything will make sense. Take the HTML code above, for example. Imagine that you wanted to style the `ol` element that is a child of an `ol` or `ul` element, which is in turn itself a child of an `ol` or `ul` element.

![without is class of css](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t250th9an5zkdj76kqeg.png)

I was talking about the area highlighted in red.

```
:is(ol, ul) :is(ol, ul) ol {
  list-style-type: lower-greek;
  color: chocolate;
}
```

Wow! Do you think the CSS is now easier to read?

![with is class of css](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8thc3pynlnmo7q6z1iw0.png)

### The sole distinction between `:is()` and `:where()` is specificity: `:where()` always contributes zero specificity, while `:is()` takes the specificity of its most specific argument.

### List of most commonly used pseudo-classes:

- `:hover` - Matches when an element is being hovered over by the mouse.
- `:active` - Matches when an element is being activated (clicked or pressed).
- `:focus` - Matches when an element has received focus (such as through keyboard navigation).
- `:visited` - Matches visited links.
- `:first-child` - Matches the first child element of its parent.
- `:last-child` - Matches the last child element of its parent.
- `:nth-child()` - Matches elements based on their position among siblings (e.g., `:nth-child(2)` selects the second child).

and the list goes on...

## Conclusion:

In conclusion, by harnessing the power of CSS pseudo-classes like `:is()`, `:where()`, and `:not()`, we can significantly improve the readability and maintainability of our code. These pseudo-classes provide us with flexible and targeted selectors that allow us to write concise and expressive stylesheets.
deepak22448
1,501,350
100 Days Coding Challenge - Day 23: FreeCodeCamp JavaScript Algorithms and Data Structures
Today I finished the first exercise of the certification, a palindrome checker. I did it with a queue...
0
2023-06-12T00:01:41
https://dev.to/alexmgp7/100-days-coding-challenge-day-23-freecodecamp-javascript-algorithms-and-data-structures-1mb8
webdev, javascript, programming, 100daysofcode
Today I finished the first exercise of the certification, a palindrome checker. I built it with a queue function that I found on the internet; it was easy.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fdhifyx8f76vk76nsw4n.png)
alexmgp7
1,503,418
The Prospects of the Healthcare Industry in the Metaverse
A potentially game-changing technology for a number of industries is the metaverse. The healthcare...
0
2023-06-13T16:30:34
https://dev.to/donnajohnson88/the-prospects-of-the-healthcare-industry-in-the-metaverse-29j6
blockchainhealthcare, blockchain, webdev, metaverse
A potentially game-changing technology for a number of industries is the metaverse. The healthcare sector is one such industry. From virtual checkups to data security, the metaverse, when developed with [Blockchain Healthcare Application Development](https://blockchain.oodles.io/blockchain-healthcare-services/), can offer multiple advantages in this sector. Read on to understand more about the unlimited potential of the metaverse in the healthcare world.

## The Potential of the Metaverse in Healthcare

The metaverse can immensely benefit the healthcare industry. It can completely transform the doctor-patient relationship. With the metaverse, healthcare practitioners can give complex and individualised treatment to their patients. A few therapies in healthcare have already entered this virtual reality, including cognitive therapy, physical therapy, rehabilitation therapy, and more.

Also, Read: [How the Automotive Industry is Getting into the Metaverse](https://blockchain.oodles.io/blog/how-the-automotive-industry-is-getting-into-the-metaverse/)

## Advantages of the Metaverse in Healthcare

The following are the advantages of the metaverse in healthcare:

- Telepresence
- Personal Data Security
- Early Diagnosis

Also, Read: [Increasing Importance of Blockchain for Healthcare Development](https://blockchain.oodles.io/blog/blockchain-for-healthcare-app-development/)

## How can the Metaverse Transform the Healthcare Sector?

The existing healthcare system may be altered in various ways by the metaverse. A change in healthcare teaching is one instance. The inclusion of metaverse technology makes virtual worlds more accessible through better computing power and virtual reality headsets. Additionally, it provides more device connectivity.
Also, Read: [Metaverse Impact on Financial Services](https://blockchain.oodles.io/blog/metaverse-impact-on-financial-services/)

## Conclusion

The metaverse is a merger of augmented reality (AR), virtual reality (VR), blockchain, and artificial intelligence (AI). As a result, these technologies have the potential to spur innovation and technical advancement in healthcare. Consequently, it can easily change the entire landscape of the healthcare system in the future.

[Hire Blockchain Development Company](https://blockchain.oodles.io/) to enable the implementation of decentralized healthcare systems, smart contracts, and digital identity verification, ensuring trust and efficiency within the metaverse’s healthcare ecosystem. To dive deep into the metaverse world, feel free to connect with our [Oodles blockchain experts](https://blockchain.oodles.io/about-us/).

Read the complete blog here: [The Prospects of the Healthcare Industry in the Metaverse](https://blockchain.oodles.io/blog/the-prospects-of-the-healthcare-industry-in-the-metaverse/)

https://medium.com/@pranjali.tiwari_70767/the-prospects-of-the-healthcare-industry-in-the-metaverse-61a87a9bcce9

#HireBlockchainDevelopmentCompany #Blockchainsolutionsforhealthcare #BlockchainHealthcareApplicationDevelopment #Blockchainforhealthcare #Healthcareblockchainsolution
donnajohnson88
1,504,291
Multifunctional PostgreSQL GUI Tool
Boost your productivity with a cross-platform GUI client for PostgreSQL: dbForge...
0
2023-06-14T12:11:48
https://dev.to/devartteam/multifunktionales-postgresql-gui-tool-59jg
postgres, postgressql, dbforge, dbforgestudio
Boost your productivity with a cross-platform GUI client for PostgreSQL: dbForge Studio for PostgreSQL - https://www.devart.com/de/dbforge/postgresql/studio/

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rwlli0oovmbagsubc4l4.png)
devartteam
1,504,398
This Week In React #148: Remix Routing, Hydration, React.FC, Vite + RSC, Astro, Valhalla, Reanimated, Expo-Apple-Targets...
Hi everyone! Well, Dan Abramov is on holiday, which is probably why there hasn't been much happening...
18,494
2023-06-14T14:32:19
https://thisweekinreact.com/newsletter/148
react, reactnative
--- series: This Week In React canonical_url: https://thisweekinreact.com/newsletter/148 --- Hi everyone! Well, Dan Abramov is [on holiday](https://twitter.com/dan_abramov/status/1666459095979753477), which is probably why there hasn't been much happening this week in our beloved ecosystem 😅. We do, however, have a few interesting articles and a potential official package for integrating React Server Components with Vite. --- 💡 Subscribe to the [official newsletter](https://thisweekinreact.com?utm_source=dev_crosspost) to receive an email every week! [![banner](https://thisweekinreact.com/img/TWIR_POST.png)](https://thisweekinreact.com?utm_source=dev_crosspost) --- ## 💸 Sponsor [![Tina.io is a headless CMS for Markdown-powered sites](https://thisweekinreact.com/emails/issues/140/tinacms.jpg)](https://tina.io/?utm_source=newsletter&utm_term=this-week-in-react) **[Tina.io is a headless CMS for Markdown-powered sites](https://tina.io/?utm_source=newsletter&utm_term=this-week-in-react)** - Editing UI for your Markdown files - UI for MDX components - Supports static (SSG) and server-side rendering (SSR) - Option for visual editing (live-preview) - Build with reusable blocks Test a starter site - [Docusaurus](https://github.com/tinacms/tinasaurus) (Github) - [Next.js + Tailwind with visual editing](https://github.com/tinacms/tina-cloud-starter) (Github) Or run  `npx create-tina-app@latest` then visit `localhost:3000/admin` Watch the [4-min demo video](https://www.youtube.com/watch?v=zRkeKSZjlyw) --- ## ⚛️ React [![Colocate your routes into feature folders with Remix Custom Routes](https://thisweekinreact.com/emails/issues/148/remix-routing.jpg)](https://www.jacobparis.com/content/remix-custom-routes) **[Colocate your routes into feature folders with Remix Custom Routes](https://www.jacobparis.com/content/remix-custom-routes)** Explains how Remix v2's flat file routing will improve featured-based code colocation. This can be activated from Remix v1 with a feature flag. 
With Remix, you can create your own routing conventions: Jacob proposes to improve colocation even further with [remix-custom-routes](https://github.com/jacobparis-insiders/remix-custom-routes) and a naming convention using the `.route.tsx` suffix. --- [![Hydration is a tree, Resumability is a map](https://thisweekinreact.com/emails/issues/148/hydration.jpg)](https://www.builder.io/blog/hydration-tree-resumability-map) **[Hydration is a tree, Resumability is a map](https://www.builder.io/blog/hydration-tree-resumability-map)** Gives an interesting mental model for understanding the difference between hydration (React) and resumability (Qwik). With resumability, all components are considered static, and event handlers are the entry point for interactivity. There is no need to traverse a tree (`O(n)`): with resumability, scalability is good, like the lookup in a hashmap (`O(1)`). --- - 👀 [react-server-dom-vite](https://twitter.com/nkSaraf98/status/1667934636297650179): WIP React core PR being reviewed to add an official Vite integration package for React Server Components, supporting the lazy and ESM nature of the bundler. - 👤 [Sophie Alpert looking for a job](https://twitter.com/sebastienlorber/status/1668562588781748224): unique opportunity to hire one of the top React contributors of all time. - 📜 [You Can Stop Hating React.FC](https://www.totaltypescript.com/you-can-stop-hating-react-fc): for Matt Pocock, `React.FC` is no longer an anti-pattern starting with TypeScript 5.1 and React 18, but he recommends sticking to annotating props. - 📜 [Better Images in Astro](https://astro.build/blog/images/): Astro v2.6 brings a new optimised image component (experimental). It uses webp, avoids layout shifts, supports Markdown relative paths and external image services. - 📜 [Zedux: Is this the one?](https://omnistac.github.io/zedux/blog/zedux-is-this-the-one): introduction to Zedux, a new React state manager based on atoms and composition. 
There are a lot of React state managers out there, and I thought this one stood out and was worth taking a look at: impressive list of features, polished documentation, and takes the best of existing solutions. - 📜 [React: how to debug the source code](https://andreigatej.dev/blog/react-debugging-the-source-code/): a few setup tips if you want to debug Jest tests of the React codebase. - 📜 [Netlify Connect](https://twitter.com/ascorbic/status/1668586011117465600): integration of the Gatsby Valhalla GraphQL data layer into Netlify, following the acquisition of the React framework. - 📜 [Add Drizzle ORM to a Remix app](https://www.jacobparis.com/content/remix-drizzle) - 📜 [A Visual Guide to the new App Router in Next.js 13](https://www.builder.io/blog/next-13-app-router) - 🔗 [Thinking in React Query](https://tkdodo.eu/blog/thinking-in-react-query): slides from React-Summit talk. React-Query is an Async State Manager, not just a data fetching solution. - 📦 [Shadcn UI CLI](https://twitter.com/shadcn/status/1666861850091458560): new configurable CLI for the Radix/Tailwind Shadcn UI component collection. - 📦 [Hiber 3D](https://hiberworld.com/developer): new framework for creating interactive 3D worlds, based on React. - 📦 [NakedJSX](https://nakedjsx.org/): simple CLI tool to generate static sites in JSX. - 📦 [rad-event-listener](https://github.com/JLarky/rad-event-listener): alternative API for adding listeners. Returns a cleanup function: useful for reducing the React boilerplate. - 📦 [Tremor v3 - React library to build dashboards](https://twitter.com/tremorlabs/status/1666706183409860610) - 📦 [React-Redux 8.1](https://github.com/reduxjs/react-redux/releases/tag/v8.1.0) - 🎙️ [This Month in React – May 2023](https://podcasters.spotify.com/pod/show/reactiflux/episodes/This-Month-in-React--May-2023-e25flvh): our monthly podcast with Reactiflux. 
- 🎥 [Does Lock-In Even Matter Anymore?](https://www.youtube.com/watch?v=rtgjFEJaFI8): interesting reflection on the potential lock-in (or not) of Vercel when using certain Next.js features. - 🎥 [Is Next.js App Router Slow? Performance Deep Dive](https://www.youtube.com/watch?v=HbUDiNlU6Yw) - 🎥 [From Pages to the App Directory in Next.js 13 (Nested Layouts)](https://www.youtube.com/watch?v=QkntFWb_V8k) - 🎥 [High-school student makes React a million times faster](https://www.youtube.com/watch?v=VkezQMb1DHw) --- ## 💸 Sponsor [![React Bricks is a CMS with visual editing for Next.js, Remix and Gatsby.](https://thisweekinreact.com/emails/issues/148/reactbricks.png)](https://reactbricks.com/?utm_source=thisweekinreact) **[React Bricks is a CMS with visual editing for Next.js, Remix and Gatsby.](https://reactbricks.com/?utm_source=thisweekinreact)** **It's flexible for Developers**: with React components you can create your own design system. Add true inline Visual editing to your JSX and add sidebar controls to edit props like the background color. You can choose Next.js, Remix or Gatsby and any CSS framework! **Content editors** can easily edit content inline without breaking the design system. It's as easy as using Word or Pages, allowing them to create landing pages in minutes without relying on developer resources. **It's enterprise-ready** with Collaboration, Time-machine, Single Sign-on, GDPR-compliant datacenters, Global CDN for optimized images, E-commerce integration, Fine-grained permissions, Scheduled publishing and more. **Get started here: [https://reactbricks.com](https://reactbricks.com/?utm_source=thisweekinreact)** --- ## 📱 React-Native - 📦 [React-Native 0.72.0-RC.6](https://github.com/facebook/react-native/releases/tag/v0.72.0-rc.6): new Golden RC fixing bugs and bringing XCode 15 support. - 📦 [React-Native 0.71.10](https://github.com/facebook/react-native/releases/tag/v0.71.10): XCode 15 support, also available for v0.70 and v0.69. 
- 📦 [Reanimated 3.3](https://twitter.com/swmansion/status/1668640779923849216): supports React-Native v0.72 and React Native Web v0.19. - 📦 [Expo-Apple-Targets](https://github.com/EvanBacon/expo-apple-targets): Config Plugin to create Apple targets (Share Extension, Widgets, Watch App...). It only handles target generation and linking, and is not a way to [develop those targets with React-Native](https://twitter.com/Baconbrix/status/1668001651922223104). - 📜 [Running Maestro UI Tests in an Expo Development Build](https://blog.mobile.dev/running-maestro-ui-tests-in-an-expo-development-builds-1ca443ab2a30) - 🎙️ [React-Native-Radio 268 - Embarking on Expo SDK 48](https://reactnativeradio.com/episodes/rnr-268-embarking-on-expo-sdk-48) - 🎥 [Building a MacOS App with React Native: Is it Possible?](https://www.youtube.com/watch?v=xkYy_6kje9E) - 🎥 [What’s the best cross-platform technology in 2023](https://www.youtube.com/watch?v=lYfgGgJgHB0): nice retrospective of cross-platform from 2010 to 2023, and the main players in 2023. - 🎥 [React Native Shared Element Transitions with Reanimated 3](https://www.youtube.com/watch?v=tsleLxbvxe0) --- ## 🧑‍💻 Jobs 🧑‍💼 [**Passionfroot - Senior Full-stack Engineer (Remix) - €160k+, Berlin/remote**](https://passionfroot.recruitee.com/o/senior-fullstack-engineer) Passionfroot's mission is to empower the independent businesses of tomorrow via YouTube, Podcasts, Social Media, and Newsletters. Join us in building a tool that will empower creators globally to build scalable, sustainable businesses. 🧑‍💼 [**Callstack - Senior React Native Developer - Fully Remote, PLN 21-32k net on B2B, monthly**](https://www.callstack.com/senior-react-native-developer) Do you want to work on the world's most used apps? Would you like to co-create the React Native technology? Join the Callstack team of React & React Native leaders. Check our website for more details. We are looking forward to seeing your application - show us what you've got! 
🧑‍💼 [**G2i - 100% Remote React Native Jobs**](https://twitter.com/gabe_g2i/status/1563204813881425926?s=20&t=ArRLC77BpRwXXCdx8fnUqw) We have several roles open for developers focused on React Native! Pay is ~160k plus 10% bonus. You must have production experience with RN and be based in the US. DM [@gabe_g2i](https://twitter.com/gabe_g2i) to learn more and don't forget to mention This Week in React. 💡 [How to publish an offer?](https://thisweekinreact.com/sponsor) --- ## 🔀 Other - [Polywasm - A polyfill for WebAssembly](https://github.com/evanw/polywasm) - [Wasmati - a TypeScript library to write Wasm at the instruction level](https://www.zksecurity.xyz/blog/posts/wasmati/) - [Ezno compiler/typechecker open-sourced](https://github.com/kaleidawave/ezno/discussions/21) - [Why We Should Stop Using JavaScript According to Douglas Crockford](https://www.youtube.com/watch?v=lc5Np9OqDHU) - [Web Apps on macOS Sonoma 14 Beta](https://blog.tomayac.com/2023/06/07/web-apps-on-macos-sonoma-14-beta/) - [Modern CSS in Real Life](https://chriscoyier.net/2023/06/06/modern-css-in-real-life/) - [Modern CSS For Dynamic Component-Based Architecture](https://moderncss.dev/modern-css-for-dynamic-component-based-architecture/) - [Lightning CSS 1.21](https://twitter.com/devongovett/status/1666476856990908416) - [ts-patterns 5.0](https://github.com/gvergnaud/ts-pattern/releases/tag/v5.0.0) - [What's New in DevTools (Chrome 115)](https://developer.chrome.com/blog/new-in-devtools-115/) --- ## 🤭 Fun [![alt](https://thisweekinreact.com/emails/issues/148/meme.jpg)](https://twitter.com/adamwathan/status/1666939669311901696) See ya! 👋
sebastienlorber
1,504,648
How @Autowired works in Spring - A detailed guide
In the last post we saw how @Autowired can be used in multiple places to inject dependent...
0
2023-06-19T14:53:09
https://dev.to/themayurkumbhar/how-autowired-works-in-details-14a5
springboot, spring, java
In the [last post](https://dev.to/themayurkumbhar/wiring-in-spring-51lc) we saw how `@Autowired` can be used in multiple places to inject dependent beans. The Spring framework handles the dependencies between components and, based on field, setter, or constructor injection, injects the appropriate objects.

## Issue?

What happens when the Spring context has multiple beans of the same type? Which object does the Spring framework inject for the `@Autowired` annotation?

If we have a single bean of the required type in the Spring context, there is no ambiguity, and the framework will inject the dependent bean wherever required.

## How does the Spring framework handle such scenarios?

When the Spring framework runs into a situation where multiple beans of the same type could satisfy a single `@Autowired` dependency, it follows three steps to resolve the ambiguity.

> By default, the IoC container will always look for the `class type` of the bean to inject.

### Step 1: look for a bean whose name matches the parameter/field name.

The Spring IoC container looks for a bean whose name equals the name of the field, setter parameter, or constructor parameter annotated with `@Autowired`.

```java
// field injection
@Autowired
private SomeBean bean; // here spring looks for bean with name "bean" in context

// setter injection
@Autowired
public void setSomeBean(SomeBean bean1){
    // here spring looks for bean with name "bean1" in context to inject.
    this.bean = bean1;
}

// constructor injection
@Autowired
public Example(SomeBean bean2){
    // here spring looks for bean with name "bean2" in context to inject.
    this.bean = bean2;
}
```

> If no bean with a matching name is found, follow step 2.

### Step 2: the Spring IoC container looks for a bean marked as the `@Primary` bean.

If no bean with a matching name is found, the Spring IoC container checks whether any of the candidate beans is marked as `@Primary`; if so, it injects that bean.
```java
@Primary // this is the primary bean
@Bean
public SomeBean myBean(){
    return new SomeBean("Hello");
}

@Bean // this is a normal bean
public SomeBean myOtherBean(){
    return new SomeBean("Welcome");
}

// when @Autowired is invoked, as we have multiple beans of the same class type,
// the @Primary bean will be injected for the wiring below.
@Autowired
public SomeBean bean; // no bean name matches myBean or myOtherBean, so the @Primary bean is injected here.
```

> If no `@Primary` bean is found in the context, follow step 3.

### Step 3: the Spring IoC container looks for a `@Qualifier` annotation and finds the bean with the matching name.

Finally, the IoC container checks whether `@Qualifier` is used; if it is, it looks for the bean whose name is mentioned in `@Qualifier` and injects it.

```java
@Bean
public SomeBean myBean(){
    return new SomeBean("Hello");
}

@Bean
public SomeBean myOtherBean(){
    return new SomeBean("Welcome");
}

// when @Autowired is invoked, as we have multiple beans of the same class type and
// no @Primary bean is found, spring checks the name given in @Qualifier and injects that bean.
@Autowired
// here the qualifier used is "myBean", so the IoC container will inject the bean named myBean
public Example(@Qualifier("myBean") SomeBean bean){
    this.bean = bean;
}
```

> Using `@Qualifier` is beneficial because a parameter name can change at a developer's discretion to improve readability, but that will not affect autowiring, since the qualifier name is used instead.

If all the above steps are followed and no matching bean is found, Spring throws a `NoUniqueBeanDefinitionException`, because multiple beans are present but the choice is ambiguous.

---

<meta name="keywords" content="Autowiring, Dependency Injection, Spring Framework, Bean Wiring, Inversion of Control, Component Scanning, Constructor Injection, Setter Injection, Autowired Annotation, Bean Configuration, Spring Bean Lifecycle, Qualifiers and Primary Beans, Autowire by Type, Autowire by Name, Autowire by Qualifier, Autowire by Annotation, Autowire vs. Explicit Wiring, Benefits of Autowiring, Common Autowiring Pitfalls">
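The three-step resolution order above can be simulated in plain Java. This is only an illustrative sketch of the decision logic, not Spring's actual implementation, and all names in it (`AutowireResolutionSketch`, `resolve`) are made up for the example:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AutowireResolutionSketch {

    /**
     * Picks a winning bean name from the candidates of one type:
     * step 1: a bean named like the injection point,
     * step 2: the bean marked primary,
     * step 3: the bean named by the qualifier.
     * Throws if the choice stays ambiguous, mirroring NoUniqueBeanDefinitionException.
     */
    static String resolve(Map<String, Boolean> candidates, // bean name -> isPrimary
                          String injectionPointName,
                          String qualifier) {
        if (candidates.size() == 1) {
            return candidates.keySet().iterator().next();    // single bean: no ambiguity
        }
        if (candidates.containsKey(injectionPointName)) {
            return injectionPointName;                       // step 1: name match
        }
        for (Map.Entry<String, Boolean> e : candidates.entrySet()) {
            if (e.getValue()) return e.getKey();             // step 2: @Primary
        }
        if (qualifier != null && candidates.containsKey(qualifier)) {
            return qualifier;                                // step 3: @Qualifier
        }
        throw new IllegalStateException("no unique bean definition");
    }

    public static void main(String[] args) {
        Map<String, Boolean> beans = new LinkedHashMap<>();
        beans.put("myBean", false);
        beans.put("myOtherBean", false);

        System.out.println(resolve(beans, "myBean", null));        // step 1 wins: myBean
        System.out.println(resolve(beans, "bean", "myOtherBean")); // step 3 wins: myOtherBean
    }
}
```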
themayurkumbhar
1,504,768
Situations in which a NullPointerException can happen in Kotlin
One of the motivations for using Kotlin is that it is a safe language, largely because of its null safety...
0
2023-06-16T11:23:00
https://dev.to/alexfelipe/situacoes-que-nullpointerexception-podem-acontecer-no-kotlin-2g3h
kotlin, programming
One of the motivations for using Kotlin is that it is a safe language, largely thanks to its safety around null references, or [Null Safety](https://kotlinlang.org/docs/null-safety.html). In short, it is the ability to prevent apps written in Kotlin from throwing the famous [`NullPointerException`](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin/-null-pointer-exception/), or NPE. To achieve this, the language supports variables/parameters that may or may not be null.

Nullable types are identified by the `?` suffix, for example `String?`, `Int?`, etc., and this means that to call any member of a nullable, whether a property or a method, we need to guarantee that it is not null.

## Null Safety in Kotlin

Kotlin offers a number of features for dealing with nullables, whether through `if` conditions, safe calls, the Elvis operator, and other possibilities that you can [check out on the null safety documentation page](https://kotlinlang.org/docs/null-safety.html#nullable-types-and-non-null-types).

Although there are several ways to write code that protects us from NPEs, we don't always have that guarantee!

> _"What do you mean, Alex? Explain that better..."_

Well, the moment our application needs to perform some integration, the risk of an NPE increases!

## Reflecting on the use of nullables

The first common case is using code written in Java from Kotlin, since Java does not have strict support like Kotlin's to indicate whether variables, parameters, or return values can be null. In other words, if we use Java code that can return `null`, we run the risk of getting an NPE!
Let's continue with a demonstration, creating a model to represent a user:

```java
public class User {

    private final int id;
    private final String userName;

    User(Integer id, String userName) {
        this.id = id;
        this.userName = userName;
    }

    public Integer getId() {
        return id;
    }

    public String getUserName() {
        return userName;
    }
}
```

Then, let's implement a user manager that lets us look up previously created users by id:

```java
import java.util.stream.Stream;

public class UserManager {

    public User findUserById(int userId) {
        var userFound = Stream.of(
                        new User(1, "alex"),
                        new User(2, "felipe"))
                .filter(user -> user.getId() == userId)
                .findFirst();
        return userFound.orElse(null);
    }
}
```

Notice that the code returns `null` when the user is not found. Testing this code from Kotlin with an id that does not exist:

```kotlin
println(
    UserManager()
        .findUserById(3)
        .userName
)
```

![Stack trace message showing that a NullPointerException was thrown because getUserName() cannot be called on a null return value](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1i7041e7a38ovzds0upa.png)

There it is, an NPE! And it doesn't stop there: any integration code that involves a conversion, fetching data, etc., can result in an NPE if we don't use nullables.

So, if you use Java code from Kotlin, or integrate with a database or an API that performs lookups, conversions, etc., consider using nullables even if the communication interface doesn't flag the problem!

## Avoiding NPEs with nullables

Let's see what the code looks like using nullables:

```kotlin
val user: User? = UserManager().findUserById(3)
println(user?.userName)
```

With just this simple change, you reduce your application's fragility to any change that may occur. A common case in integration code is the parsing of objects, for example a [DTO](https://en.wikipedia.org/wiki/Data_transfer_object).
The implementation of this kind of object would be:

```kotlin
data class UserDTO(
    val id: Int?,
    val userName: String?
)
```

And yes, every new property should be considered a nullable! Finally, you can offer conversion methods, such as a `toUser()`:

```kotlin
data class UserDTO(
    val id: Int?,
    val userName: String?
) {
    fun toUser(): User = User(
        id ?: 0,
        userName ?: ""
    )
}
```

You can customize the rule for filling in default values however you prefer; you can even throw exceptions when required values are missing, such as the id:

```kotlin
data class UserDTO(
    val id: Int?,
    val userName: String?
) {
    fun toUser(): User = User(
        id ?: throw IllegalStateException("user must have an id"),
        userName ?: ""
    )
}
```

What do you think of these points about null references in Kotlin? Do you have other tips or practices that you apply day to day? Feel free to share them in the comments.
alexfelipe
1,505,166
Offshore Java Developers vs. Onshore: Making the Right Choice
In today's competitive digital landscape, businesses are constantly seeking the best talent and...
0
2023-06-15T05:53:31
https://dev.to/hiredevelopersdev/offshore-java-developers-vs-onshore-making-the-right-choice-50ln
offshorejavadevelopers, offshorejavadevelopment
In today's competitive digital landscape, businesses are constantly seeking the best talent and cost-effective solutions to meet their software development needs. When it comes to Java development, companies often face the decision of whether to hire offshore Java developers or opt for onshore resources. This article aims to explore the pros and cons of each option, helping you make the right choice for your business.

## Offshore Java Developers

Offshore Java developers refer to software developers located in different countries or regions, usually with lower labor costs compared to onshore resources. Companies often choose offshore developers to reduce expenses while still maintaining high-quality development work.

**Benefits of Offshore Java Developers**

- **Cost Savings:** One of the primary advantages of **[hiring offshore Java developers](https://www.hiredevelopers.dev/offshore-java-developers/)** is the significant cost savings. Offshore developers often offer competitive rates, allowing businesses to allocate their budget more efficiently.
- **Access to Global Talent Pool:** Offshore development opens doors to a vast global talent pool. You can hire skilled Java developers from countries renowned for their expertise in software development, such as India, Ukraine, or Poland.
- **Round-the-Clock Development:** Time zone differences can work to your advantage when collaborating with offshore developers. While your in-house team rests, offshore developers can continue working, ensuring faster project delivery.

**Challenges of Offshore Java Developers**

- **Communication and Language Barrier:** Working with offshore developers may pose challenges in communication due to language barriers and cultural differences. However, proper communication channels and project management tools can alleviate this issue.
- **Quality Control:** Ensuring consistent quality across different time zones and locations can be a concern when working with offshore developers. Implementing robust quality control measures and regular reporting can help mitigate this challenge.

## Onshore Java Developers

Onshore Java developers are locally-based professionals who work within the same country or region as your business. While the cost may be higher compared to offshore resources, onshore development offers its own set of advantages.

**Benefits of Onshore Java Developers**

- **Proximity and Cultural Alignment:** Hiring onshore Java developers allows for closer collaboration, as they are located within the same geographic region. This proximity facilitates face-to-face meetings and better cultural alignment, resulting in smoother project execution.
- **Higher Quality Assurance:** Onshore developers often adhere to stringent quality standards and follow industry best practices. This commitment to quality ensures that your software is developed to the highest standards.
- **Easier Collaboration and Communication:** Onshore developers are more likely to share your language and time zone, making communication and collaboration more seamless. This enables faster feedback cycles and quicker resolution of issues.

**Challenges of Onshore Java Developers**

- **Higher Costs:** The primary drawback of onshore Java development is the higher costs associated with hiring local talent. Salaries, overheads, and other expenses may increase the overall project budget.
- **Limited Talent Pool:** Depending on your geographic location, the availability of skilled Java developers may be limited. This constraint can make it challenging to find resources with the specific expertise you require.

## Offshore Java Developers vs. Onshore: Making the Right Choice

When deciding between offshore and onshore Java developers, there are several factors to consider.
Here are some key points to help you make the right choice for your business: - **Budget Constraints:** If cost optimization is a critical factor, offshore Java developers can provide significant cost savings while delivering quality work. - **Project Complexity:** For complex projects that require extensive collaboration and continuous communication, onshore development may be a better choice due to the proximity and cultural alignment it offers. - **Time Sensitivity:** If time-to-market is a priority and your project requires round-the-clock development, offshore developers can provide a faster turnaround time due to the time zone differences. - **Data Security and Intellectual Property:** If your project involves sensitive data or intellectual property, onshore development can offer better data security measures and legal protections. - **Scalability and Flexibility:** Offshore development can provide greater scalability and flexibility by tapping into a diverse talent pool, allowing you to quickly ramp up or downsize your team as needed. - **Technical Expertise:** Assess the complexity and technical requirements of your project. Offshore Java developers are often well-versed in various technologies and frameworks, making them suitable for projects that demand specialized skills. On the other hand, onshore developers can provide in-depth knowledge of local markets and specific industry requirements, which can be advantageous for projects with unique regional considerations. - **Project Management:** Effective project management is essential for the success of any software development endeavor. Offshore developers might have experience working with remote teams and distributed project management methodologies, ensuring smooth coordination despite geographical distances. However, onshore developers can offer face-to-face interactions, fostering a collaborative environment and enabling real-time decision-making. 
- **Cultural Compatibility:** Consider the cultural compatibility between your company and the development team. Offshore Java developers often have exposure to various cultures and work styles, which can lead to a diverse and inclusive development environment. Onshore developers, being part of the same local culture, can quickly adapt to your company's work culture, potentially streamlining communication and collaboration.
- **Time and Resource Allocation:** Assess the availability of time and resources within your organization. Offshore development allows you to leverage external resources while allocating your internal teams to focus on core business functions. Conversely, onshore development might require dedicated resources from your organization, potentially impacting other crucial projects or initiatives.
- **Legal and Regulatory Compliance:** Depending on your industry and project requirements, compliance with specific legal and regulatory frameworks might be crucial. Onshore development can provide a better understanding of local regulations and ensure compliance with data protection laws and intellectual property rights. **[Offshore java development](https://www.hiredevelopers.dev/offshore-java-developers/)** partners can also adhere to international standards and frameworks, but ensure you have a clear understanding of their legal and compliance protocols.
- **Scalability and Long-Term Partnerships:** Consider the scalability and long-term potential of your development needs. Offshore development offers scalability by providing access to a vast pool of skilled developers, allowing you to quickly scale up or downsize your team as per project requirements. Establishing long-term partnerships with offshore development companies can also yield cost benefits and foster collaboration on multiple projects over time. Onshore development, on the other hand, can facilitate closer relationships and promote a sense of ownership and loyalty towards your company.
## FAQs

**Q: Are offshore Java developers less skilled than onshore developers?**

No, the skill level of offshore Java developers can be just as high as that of onshore developers. It's essential to assess the experience and expertise of developers regardless of their location.

**Q: How can I ensure effective communication with offshore developers?**

To ensure effective communication with offshore developers, establish clear communication channels, utilize project management tools, and schedule regular video or voice meetings to address any concerns promptly.

**Q: What are some best practices for managing offshore development projects?**

Some best practices for managing offshore development projects include setting clear project goals, establishing a robust communication framework, providing detailed project documentation, and maintaining regular progress tracking.

**Q: Can I hire a hybrid team of offshore and onshore developers?**

Yes, many companies adopt a hybrid approach by combining the advantages of both offshore and onshore development. This allows for cost optimization while maintaining effective communication and collaboration.

**Q: What factors should I consider when choosing an offshore Java development partner?**

When selecting an **[offshore Java development company](https://www.hiredevelopers.dev/offshore-java-developers/)**, consider factors such as their experience, track record, client testimonials, communication capabilities, and ability to align with your project requirements.

**Q: How do I ensure the quality of work from offshore developers?**

To ensure the quality of work from offshore developers, implement robust quality control measures, conduct regular code reviews, establish clear performance metrics, and maintain open lines of communication.

## Conclusion

Choosing between offshore and onshore Java developers is a crucial decision that depends on various factors unique to your business and project requirements.
Offshore developers offer cost savings, a global talent pool, and round-the-clock development, while onshore developers provide proximity, cultural alignment, and higher quality assurance. Consider your budget, project complexity, time sensitivity, and data security needs when making this decision. Remember to assess the skills and expertise of developers regardless of their location, and consider a hybrid approach if it aligns with your goals. By making an informed choice, you can find the right Java development team to drive your business's success.
hiredevelopersdev
1,505,348
Web Application Testing Tutorial: A Comprehensive Guide With Examples And Best Practices
Web application testing is an approach to ensure the correct functioning and performance of the web...
0
2023-06-15T08:30:37
https://dev.to/nazneenahmd/web-application-testing-tutorial-a-comprehensive-guide-with-examples-and-best-practices-h1
webapplicationtest, webdev, tutorial, softwaretesting
Web application testing is an approach to ensure the correct functioning and performance of a web application by following a structured process. Through web application testing, bugs and errors are detected and removed before the web application goes live. The primary purpose of web application testing is to fix any issues and vulnerabilities in the web application before it is released to the market. With this testing, you can ensure the web application meets all the end-user requirements and provides a high-quality experience. However, it is important to conduct web application testing with accuracy.

## Understanding Web Applications

Web applications are application programs with interconnected modules that are loaded on the client side and delivered over the Internet through a browser interface. Developers build web applications for different uses, and their users range from organizations to individuals. Commonly used web applications include webmail, online calculators, and eCommerce shops.

Which web technologies are used in building web applications? They are mainly HTML (HyperText Markup Language), CSS (Cascading Style Sheets), and JavaScript. HTML uses tags to define different elements and how they interact. On loading a web page, the HTML is parsed into a Document Object Model (DOM) that represents its structure. CSS is a style description language mainly used to style and format the visual elements of web applications. JavaScript is a high-level scripting language in which all dynamic behavior of web applications is scripted and executed.

In addition to the above technologies, JavaScript frameworks like Angular, React, and Vue are used because they provide ready-to-use tools and libraries, simplifying the process of building complex user interfaces. CSS preprocessors like LESS and SASS allow writing and organizing CSS code efficiently.
They provide features like variables, mixins, and nesting, allowing developers to maintain reusable styles across web applications. HTML templating likewise simplifies generating dynamic HTML content.

In addition, web applications have a backend or server-side layer, which comprises APIs built on top of databases. It abstracts the relevant information into contracts that the front end can access through HTTP methods using appropriate requests and credentials.

By combining these various technologies, web applications can deliver a seamless user experience with rich functionality and interactivity. The front-end technologies handle the visual presentation and user interaction, while the back-end technologies manage the data and business logic. The collaboration of these technologies enables the creation of powerful and dynamic web applications.

Now let us learn about the different types of web applications. This will help you get an idea of how to test them.

## Types of Web Applications

Different types of web applications vary in their function. Here are some of those:

- **Static web apps:** These web applications are stored on a server, and when you visit them, they look exactly as they are. You can think of them as simple websites like portfolios or landing pages.
- **Dynamic web apps:** These web applications are more interactive and fetch real-time data based on your request. They have databases on the server side that provide updated information to you. Dynamic web apps can be further divided into several types.
- **Single-page apps:** These web applications don't load new pages when you navigate. Instead, they rewrite the current page on the fly. An example of this is the Gmail app, where you can read, reply, and organize your emails without having to load separate pages.
- **Multi-page apps:** These web applications work by loading new content from the server whenever you perform a new action. Websites like Amazon and CNN are good examples of multi-page apps.
Each time you click on a link or perform an action, a new page loads with updated information.

- **Portal web apps:** These web applications provide you with a range of categories on their home page. They often have features like a shopping cart or a user profile. Student portals or patient portals offering various services or information are portal web apps.
- **Progressive web apps:** These web applications are designed to give you a native app-like experience across different devices. They utilize features available in web browsers and mobile devices to provide a seamless experience. Spotify and Pinterest are examples of progressive web apps.
- **eCommerce web apps:** As the name suggests, these web applications are online stores that you use daily for purchasing goods. This type of web application helps you search for products, add them to a cart, make transactions, and complete purchases. Amazon and Flipkart are popular examples of eCommerce web apps.
- **Animated web apps:** These web applications go beyond the usual features of web apps and use animations to present content engagingly. Websites like Apple or Squadeasy incorporate various animated effects to make their interfaces more visually appealing and interactive.
- **Rich Internet apps:** These web applications are used by organizations to overcome restrictions imposed by web browsers. They mainly rely on plugins like Adobe Flash or Microsoft Silverlight and provide features similar to desktop applications.
- **JavaScript (JS) web apps:** These web applications are built using JavaScript frameworks like Angular, React.js, and Vue.js. They provide enhanced user interaction and are optimized for search engines. Many client portals, such as LinkedIn and Uber, are implemented as JavaScript web apps.

As we are now aware of web applications and their types, it is crucial to know why testing web applications is important.

## What is Web Application Testing?
Web application testing is the process of evaluating and assessing all aspects of a web application's functionality, such as detecting bugs related to usability, compatibility, security, and performance. This testing practice ensures the quality of the web application and that it works as per the end-user requirements. It systematically checks and verifies the web application's components and features to ensure a positive user experience.

This systematic approach performs various tests to detect any issues, bugs, and vulnerabilities that might affect the web application's performance and security. Some of those tests are functional testing, performance testing, security testing, etc. QA teams and testers mainly conduct it by simulating real-world scenarios and user interactions to verify the web application's behavior and ensure its reliability.

The main goal of web application testing is to uncover and rectify any issues and weaknesses in the web application and lower the incidence of data breaches or system failures. With web application testing, developers can check that the developed web application meets the required standards and delivers a seamless user experience. Now let us learn the significance of web app testing to get a better idea.

## Why is Web Application Testing Important?

In this digital world, the use of web applications has risen tremendously as we enter the year 2023. As per a Statista report, there were 5.18 billion Internet users as of January 2023, which accounts for 64.4% of the global population. With the rise in the use of the Internet, access to web applications has surged, and they have become an essential part of our daily lives.

We use them for online shopping, social media, banking, entertainment, and more. However, any bug or error in a web application can interfere with its usability and function, making it low-quality.
But have you ever wondered how these applications are tested to ensure they work flawlessly and provide a great user experience? That's where web application testing comes in. It ensures that your web applications work correctly when rendered across multiple combinations of browsers, devices, and operating systems.

## Benefits of Web Application Testing

In this section, let's discuss the benefits you can expect while performing web application testing.

**Improves app efficiency**

Executing web application testing is an integral approach to ensure the efficiency and quality of the web application. You can check how well the web application handles a large number of users, verify that it functions reliably, and confirm that it offers smooth navigation.

**Enhances user experience**

GUI (Graphical User Interface) testing focuses on the visual aspects of the web application. It ensures that the user interface is designed with the end users in mind, meeting their expectations and preferences. GUI testing detects and addresses common UI defects such as font inconsistencies, color issues, and navigation problems. Enhancing the user experience makes the app more appealing, leading to a higher conversion rate of users into customers.

**Improves app scalability**

Load testing is a type of testing that verifies the performance of a web application under various user loads. By simulating a large number of users accessing the app concurrently, load testing helps identify performance bottlenecks. It ensures that the app can handle high traffic volumes without slowing down or crashing. This improves the application's scalability, enabling it to handle peak usage times efficiently.

**Prevents data breaches**

Security testing is crucial for web applications to protect sensitive user data and maintain customer trust. It involves identifying and mitigating security vulnerabilities and threats that could lead to data breaches.
By conducting security testing, web applications can be safeguarded against attacks and unauthorized access, ensuring the privacy and integrity of user information.

**Ensures cross-platform compatibility**

Compatibility testing is performed to ensure that web applications work seamlessly across different operating systems and web browsers. It verifies that the app's functionality, layout, and performance remain consistent across various platforms. By ensuring cross-platform compatibility, web apps can reach a wider audience and provide a consistent user experience regardless of the device or browser used.

**Increases user conversions**

Usability testing focuses on optimizing the user experience of a web application. It involves testing the app's features, navigation, and overall usability to ensure that users can easily interact with the app and consume its content. By identifying and addressing usability issues, web apps can provide a smooth and intuitive user experience, leading to increased user engagement and higher conversion rates.

## Web vs. Desktop vs. Mobile Application Testing

At first glance, web application testing, desktop application testing, and mobile application testing might sound similar. But when you delve into the concepts, you will notice major differences. Here are some common differences between the three:

| Factors | Web Application Testing | Desktop Application Testing | Mobile Application Testing |
| --- | --- | --- | --- |
| Aim | Verifies the working of the web application across different browsers. | Verifies the working of the desktop application across different computers and systems. | Verifies the quality of the mobile app's performance on various devices and OS. |
| Focus | Necessitates familiarity with OS and databases. | Needs a crucial understanding of user interaction with the application. | Requires an understanding of real mobile devices and their support for the application. |
| User interface | Web-based interface. | Native desktop interface. | Native mobile interface. |
| Connectivity | Requires an Internet connection. | Works offline and online. | Requires an Internet connection. |
| Device access | Accessed through web browsers. | Accessed directly on the desktop. | Accessed on the mobile device. |
| Hardware | Minimal hardware requirements. | Depends on desktop hardware specifications. | Depends on mobile device hardware. |
| Installation | Not required. | Installation required on the desktop. | Installation needed on mobile devices. |
| Performance | Depends on network and server response. | Less dependent on network and server. | Can be affected by device performance. |

However, the differences mentioned above are general; there might be more specific differences based on the software application and technologies involved.

## Web Application Testing Scenario

Web application testing is performed under different types of conditions and scenarios. When you start with a web application, test scenarios should be kept in mind so that all the features and aspects of the web application are tested. These scenarios are essential for ensuring the web application's quality, functionality, and usability. Some of those scenarios are as follows:

- **User Navigation Flow:** Test the user's ability to navigate smoothly between different website pages. Verify that links, buttons, menus, and navigation elements work as expected.
- **Checkbox and Radio Button Selection:** Ensure users can correctly select and deselect checkboxes and radio buttons. Validate that the selected options are properly recorded and displayed.
- **Dropdown List Selection:** Test whether users can choose the desired values from dropdown lists. Verify that the chosen values are accurately captured and used within the application.
- **Command Button Functionality:** Validate the functionality of various command buttons, such as Save, Next, Upload, Reload, etc. Ensure that these buttons perform the expected actions and update the application accordingly.
- **Search Functionality:** Test the search functionality across different web pages. Verify that the search feature returns accurate and relevant results based on user queries.
- **Broken Link Verification:** Check for any broken links within the web application. Ensure all links are valid and lead to the intended pages or resources.
- **Tab Order Functionality:** Test the tab order across pages, determining the focus sequence when users navigate using the keyboard. Verify that the tab order is logical and consistent, allowing for easy navigation and accessibility.
- **Default Value Display:** Ensure the web application correctly displays default values on each web page. Verify that these values are correctly pre-filled or selected for the user's convenience.
- **Email Functionality:** If the web application involves email communication, for example in a situation like a password reset, you should test the email functionality. Verify that the emails are sent successfully, with accurate content and the appropriate recipients.

## Phases of Web Application Testing

The web application testing life cycle is a structured approach to testing a web application's reliability and quality. It follows a series of phases that help identify defects, ensure functionality, and assess web application performance. These are the different phases involved in web application testing:

- **Requirement gathering:** In this phase, QA teams collect all requirements related to the web application's features. They initially conduct reviews and analyze the needs and specifications of the web application. This helps them to identify all the key features, functionality, and performance requirements.
- **Test planning:** Here, the QA team prepares and updates the test plan documents by defining the test scope, objectives, and entry and exit criteria of web application testing.
- **Test case preparation:** In this phase, test cases and test scenarios are created based on the end-user requirements and test objectives.
Additionally, inputs, expected outputs, test data for each test case, and testing techniques and methodologies are identified. Test environment set-up is also done, which includes hardware, software, and network configuration to simulate real-world scenarios.

- **Test execution:** After preparing for the test, testers run the test cases as per the plan. This involves performing functional testing, usability testing, regression testing, etc. They also report any defects found and document deviations from the intended results.
- **Bug reporting:** When a test case fails during the execution of the testing process, the testers detect the bug and raise and report it using defect tracking tools like HP ALM QC and Jira. In other words, the testing team monitors and tracks the defects, assigns priorities, and collaborates with the development team to resolve them.
- **Defect retesting:** The testing team prepares and shares test reports, which include details about the test coverage, test execution results, defect metrics, and overall quality assessment. In this phase, when the developer fixes an identified bug, the tester retests and re-executes the failed test cases to verify the fix.
- **Test closure:** In the final stage, the testing team evaluates the overall testing process, identifies lessons learned, and prepares a test closure report. The test cycle is closed when all the defects are fixed and the web application functions as expected.

The phases of web application testing mentioned above help ensure that web applications are thoroughly tested before being deployed to production. These phases are executed through two different approaches: manual and automation testing. Read the section below to learn more.

## Web Application Testing Techniques

Web application testing includes different testing techniques and different forms of testing. Including all tests at the different phases of web application testing is crucial.
Here are the testing techniques which should be followed while performing web application testing.

**Functional testing**

Functional testing ensures that all web application functionalities are verified and specification requirements are met. Such tests are performed using test cases that confirm the functionality of each web application component. Following are the checklist items to be considered while performing functional testing:

- Correct working of links in the desired manner.
- Correct working of buttons in the desired manner.
- Validation of fields, like mandatory checks, character limit checks, accepted character checks, and error messages.
- Correct storage of data in the database on submission of a form.
- Checking that form fields are populated with default values.
- Checking the integration between different modules of the system.

**Types of Functional Testing**

Functional testing is carried out through different levels of tests, which are discussed below:

- **Unit testing:** Functional testing begins at the unit testing level, where the developers test each module or unit of the web application. Generally, the developers check the expected working of each unit of code, which helps in the early detection of bugs in the web application development process.
- **Integration testing:** This type of functional testing involves testing how different units of code work together. In other words, at this level of testing, different components are integrated and tested together by the developers and testers. It can be executed using black box and white box testing techniques.
- **System testing:** The next level of testing is system testing, where the testing team tests the whole web application. It mainly helps validate all end-user requirements before the application is released. Here, all the included components of the web application and their interactions are tested.
- **Regression testing:** This test checks that any changes or updates made to the code do not break the web application's existing functionality. It is mainly executed after every code change and ensures the web application keeps functioning correctly.
- **Acceptance testing:** This is the final testing level, where the final test is conducted to ensure that the web application meets the end-user requirements. End users mainly execute it to check that all the required web application features are implemented correctly.

**Non-Functional Testing**

Non-functional testing in web applications focuses on evaluating aspects other than functionality. It examines performance, usability, security, reliability, and scalability. This type of testing assesses how well the web application performs under different conditions, such as high user loads or varying network speeds. It also verifies if the application meets industry standards and compliance requirements. Non-functional testing aims to ensure the web application delivers a seamless user experience, performs optimally, and meets the expected non-functional requirements to satisfy user expectations and business needs.

**UI Testing**

In UI testing, critical components of web applications are tested, including the web server interface, database server interface, and application server interface. This helps to verify the interconnections between all components of the web application, ensuring seamless communication and data flow between these servers.

**Usability Testing**

Usability testing aims at assessing the web application's user interface. It checks if the interface aligns with industry standards regarding effectiveness and user-friendliness. Following global conventions and web standards is crucial while developing a web application. Usability testing is particularly important for applications that aim to automate manual processes.
During usability testing, testers pay attention to specific critical factors such as correct navigation, a site map for easy browsing, and avoiding overcrowded content that can confuse end users. The goal is to create an intuitive, user-friendly interface that enhances the overall user experience. Here are different types of usability testing used to test web applications:

- **Exploratory testing:** This testing type involves exploring web applications with no particular goal in mind. It is undertaken to explore and comprehend the functionality and user interface of the web application.
- **Comparative testing:** This involves comparing the usability of the web application to that of other web applications. It helps identify the areas where the web application under development falls short compared to its competitors in the market.
- **A/B testing:** This test compares two versions of the same web application to find which gives a better user experience. Such tests will help you detect particular elements of the web application which require optimization and improvement.
- **Remote usability testing:** Such a test involves verifying the usability of the web application with users in different geographical locations. Such tests give valuable insight into the user-friendliness of the web application.
- **Hallway testing:** This test checks the web application's usability by including people not from the development team. Its primary purpose is to test the web application from the different perspectives of the included people. This not only ensures the quality of the web application but also helps find usability issues which might otherwise be missed by the team.

**Performance Testing**

Performance testing allows you to evaluate how well a web application performs in different scenarios against various criteria like response time and interoperability.
It involves different types of tests, such as stress testing and load testing, to assess the application's functionality under different testing scenarios. Several types of performance testing can be used for web applications, including:

- **Stress testing:** Pushes the web application to its limits to see how it performs under extreme conditions. This test helps identify potential performance issues that may occur during peak usage.
- **Spike testing:** Helpful for web applications that experience sudden spikes in traffic, such as ticketing systems or e-commerce websites. It evaluates how well the web application handles these sudden bursts of activity.
- **Endurance testing:** Assesses the application's ability to manage a constant load over an extended period. This type of testing is relevant for web applications that are expected to be used continuously.
- **Volume testing:** Particularly valuable for web applications that deal with large amounts of data, such as data analytics or database management systems. It tests the application's ability to handle and process a significant volume of data efficiently.
- **Scalability testing:** Focuses on assessing the web application's ability to handle increasing numbers of users or amounts of data as the application grows over time.

Following are the checklist items to be considered while performing performance testing:

- Check the performance of the web application when it is being used by multiple users simultaneously.
- Check the performance of the web application when single functions are being used by multiple users simultaneously.
- Verify the response time of the web application at different Internet speeds.
- Check the performance of the web application when switching the Internet connection between two or more networks.
- Check whether the web application can save data in the event of a system crash.
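The first two checklist items above come down to firing concurrent requests and timing them. A minimal, self-contained Python sketch of that idea, with a stub standing in for the real HTTP call (the names `handle_request` and `run_load_test` are illustrative, not from any load-testing framework):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Stand-in for a real HTTP request to the application under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate server-side work
    return time.perf_counter() - start

def run_load_test(num_users=20):
    """Fire num_users concurrent 'requests' and collect response times."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        durations = list(pool.map(handle_request, range(num_users)))
    return durations

durations = run_load_test()
print(f"max response time: {max(durations):.3f}s")
```

A real load test would replace the stub with actual HTTP calls (or use a dedicated tool such as JMeter or Locust), but the shape — many workers, per-request timing, aggregate statistics — is the same.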
**Compatibility Testing**

Compatibility testing is the process of testing web applications to ensure that they work and function seamlessly across different web browsers, operating systems, and hardware platforms. Such a test is mainly performed to check and verify whether the web application delivers the expected user experience on diverse types of devices and environments. Different types of compatibility testing include the following:

- **Browser compatibility testing:** This testing method checks the function and compatibility of the web application across different web browsers like Chrome, Firefox, Internet Explorer, and Safari. It covers testing of the layout, design, and function of the web application in each specific browser.
- **Device compatibility testing:** With the increasing use of mobile devices, it has become crucial to test how web applications work on them to give users a seamless experience. Device compatibility testing allows checking the web application on the different screen sizes, resolutions, and operating systems of different devices.
- **Operating system compatibility testing:** This testing verifies if the web application operates smoothly on various operating systems such as Windows, Mac, and Linux. It aims to maintain consistent functionality, performance, and appearance across different operating systems.
- **Network compatibility testing:** This testing evaluates how the web application performs under different network conditions. It checks if the application remains usable and responsive in low-bandwidth or high-latency scenarios, ensuring a good user experience regardless of network limitations.
- **Database compatibility testing:** This testing ensures the web application works seamlessly with different database systems. It examines the application's functionality and performance when interacting with databases like MySQL, Oracle, or PostgreSQL.
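Browser compatibility suites typically tag each result with the browser it ran on. A deliberately simplified Python sketch of such tagging, using naive substring matching on User-Agent strings (real detection is far more involved; this is illustrative only):

```python
def classify_browser(user_agent):
    """Very rough User-Agent classification for labelling test results."""
    ua = user_agent.lower()
    if "firefox" in ua:
        return "Firefox"
    if "edg" in ua:      # Edge UAs also contain "Chrome", so check Edge first
        return "Edge"
    if "chrome" in ua:   # Chrome UAs also contain "Safari", so check before Safari
        return "Chrome"
    if "safari" in ua:
        return "Safari"
    return "Unknown"

print(classify_browser("Mozilla/5.0 AppleWebKit/537.36 Chrome/120.0 Safari/537.36"))  # Chrome
```

The ordering of the checks matters because modern User-Agent strings deliberately include the tokens of other browsers for historical compatibility reasons.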
Following are the checklist items to be considered while performing compatibility testing. Check the web application on different operating systems and browsers like Chrome, Mozilla Firefox, and others in different scenarios:

- Ensure correct font size, family, and spacing.
- Ensure correct placement of fields and text on the screens.
- Check for any error messages, tooltips, and placeholders.

**Security Testing**

This test type finds any security flaws in the web application and ensures it is safe and secure against online threats. The main goal of a security test is to identify any security risks and vulnerabilities and fix them before the application is released to the market.

- **Penetration testing:** Also known as pen testing, this testing simulates an actual attack on a web application. The goal is to uncover vulnerabilities and assess the effectiveness of the security measures. It can be done manually or with the help of automated tools, and it helps you understand how well the application can withstand attacks.
- **Security scanning:** With security scanning, web applications are tested specifically for security-related problems, such as misconfigured security settings or insecure network configurations. With this, you can discover potential weaknesses in the application's security and take steps to strengthen them.
- **Security auditing:** This testing involves comprehensively reviewing a web application's security controls and processes. The aim is to identify any potential vulnerabilities and suggest improvements. It ensures that the security measures are adequate and effective in safeguarding the application.
- **Ethical hacking:** Ethical hacking is a unique approach where professional hackers, who follow ethical guidelines, attempt to breach a web application's security. The purpose is to find any vulnerabilities other testing types may have missed. By performing ethical hacking, we can uncover hidden weaknesses and enhance the application's overall security.
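Some security checks lend themselves to automation. As one illustration, a Python sketch of a password-policy check, assuming a hypothetical policy (minimum length, mixed case, at least one digit) rather than any particular standard:

```python
import re

def meets_password_policy(password, min_length=8):
    """Check a password against an assumed policy: length, mixed case, a digit."""
    return (
        len(password) >= min_length
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
    )

print(meets_password_policy("Str0ngPass"))  # True
print(meets_password_policy("weak"))        # False
```

A security test suite would call such a validator with known-weak inputs and assert that the application rejects them; the actual policy should come from your organization's security requirements.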
Following are the checklists to be considered while performing security testing:

- Check that only authorized users can access the web application's restricted functions.
- Check whether it uses secure protocols like HTTPS.
- Ensure confidential data like passwords and payment information of users is stored in an encrypted format.
- Verify the use of strong password policies.
- Ensure that deactivated users cannot access the web application.
- Verify that cookies do not store passwords.
- Ensure the session ends when the cache is cleared.
- Ensure users are logged out of the web application at the end of their sessions.

## Approaches to Web Application Testing

Web application testing, being a subset of software testing, enables developers to verify whether there are any bugs and errors in the application. Primarily, it is executed with two different approaches:

### Manual Testing

Manual testing of web applications is needed when in-depth testing is required. It involves executing test cases manually without relying on automated testing tools. Testers carefully examine every aspect of the application to identify any flaw affecting its usability. When manually testing a web application, testers simulate real-world usage scenarios: they click buttons, fill out forms, navigate through different pages, and perform various actions to ensure everything functions smoothly. With this, organizations can validate their web application and assess important factors like accuracy, completeness, user-friendliness, efficiency, and more. It is often the initial step in creating user-friendly and intuitive interfaces.

### Automation Testing

Web application testing using an automated approach involves testing with automation testing frameworks, with minimal human effort required. Technically, automation testing of web applications refers to using automated tools and scripts to execute test cases and validate the web application's functionality, performance, and usability.
These scripts simulate user actions like clicking buttons, filling out forms, and navigating through different pages. To perform automation testing of web applications, testers utilize specialized testing frameworks and tools such as Selenium, Cypress, or Playwright. These tools provide features like recording and playback, script creation, element identification, and reporting capabilities.

However, it's important to note that not all tests can or should be automated. Automation testing is most effective for repetitive tasks, large-scale projects, and scenarios where a high level of accuracy is required. Certain aspects of testing, such as usability evaluation or exploratory testing, still benefit from manual intervention and human judgement.

## Factors to Consider in Web Application Testing

When you begin with web application testing, certain factors should be considered to ensure successful test completion. Here are six key factors to look for in web app testing:

- **Evaluate HTML page interactions, TCP/IP communications, and JavaScript:** Assess how the various HTML pages interact with each other and with the server. Check TCP/IP communications to ensure proper data transfer and communication between the web application and the server. Evaluate the functionality and correctness of the JavaScript code used within the web application.
- **Validate applications for CGI scripts, database interfaces, dynamic page generators, etc.:** Verify that any CGI (Common Gateway Interface) scripts used in the web application function correctly and securely. Test the database interfaces to ensure proper data storage, retrieval, and manipulation. Check dynamic page generators to ensure they generate and display content accurately.
- **Test web applications across browsers, operating systems, localization, and globalization:** It is important to test the web application on different web browsers (e.g., Chrome, Firefox, Safari) and operating systems (e.g., Windows, macOS, Linux) to ensure compatibility and consistent behavior. Additionally, assess the web application's localization and globalization capabilities by testing different languages, character sets, and regional settings.
- **Test web URLs for proper functioning:** Verify that all web URLs within the application work correctly, leading to the intended pages or resources. Check for any broken links, redirects, or errors in URL handling.
- **Check for typos, grammar mistakes, and incorrect punctuation:** Review the web application's content for typos, spelling errors, grammar mistakes, or incorrect punctuation. Ensure clarity, accuracy, and consistency of the text and overall language used in the web application.
- **Map old pages to new pages to avoid content loss during transition:** If the web application undergoes any updates, redesigns, or restructuring, ensure that old pages are correctly mapped or redirected to new ones. This helps prevent users from encountering broken links or losing access to valuable content during the transition process.

## Why is End-to-End Web Application Testing a Priority?

When you have considered all the crucial factors in testing web applications, next you have to ensure that they are tested end to end. This provides complete information on the quality of the web application and helps identify and fix all the bugs and errors. Here are some other reasons why end-to-end web application testing should not be ignored:

- **Ensuring functional integrity:** End-to-end testing validates a web application's entire flow and functionality, including all the interconnected components and systems. Testing the complete user journey helps ensure the application functions as expected.
- **Identifying integration issues:** Web applications often rely on integrations with various systems, databases, APIs, and third-party services.
End-to-end testing helps uncover any issues or failures in these integrations, ensuring smooth communication and data exchange between different components.
- **Validating user experience:** It allows organizations to assess the user experience and ensure its alignment with the preferred standards. This allows the detection of any usability issues and navigation challenges, providing a positive and satisfying end-user experience.
- **Detecting performance bottlenecks:** You can easily identify performance issues caused by interactions between different components or external systems as you simulate real test scenarios. This helps evaluate the web application's performance, scalability, and responsiveness.
- **Enhancing reliability and stability:** End-to-end testing contributes to the overall reliability and stability of the web application. Thoroughly testing the application from end to end helps identify and fix bugs, errors, or vulnerabilities, reducing the risk of application failures or security breaches.

## Web Application Testing Tools

Web application testing using the automation approach saves a lot of time in the testing process and ensures that errors and bugs are fixed at an early stage. The following automation testing tools can be leveraged to automate the web application testing process:

- **Selenium:** Selenium is an open-source test automation tool widely used for web application testing. It provides automation capabilities across multiple operating systems, such as Windows, Mac, and Linux, and popular web browsers, including Chrome, Firefox, and Edge. Selenium allows testers to write test scripts in various programming languages like Java, Python, and C#, making it flexible and adaptable for different testing needs.
- **Cypress:** Cypress is an open-source testing tool specifically designed for testing web applications built on JavaScript frameworks. It allows QA engineers to write tests using JavaScript, providing real-time execution and a live view of the test cases as they run. Cypress offers a rich set of features for easier debugging, time travel, and stubbing network requests, making it popular among developers and testers for its simplicity and efficiency.
- **Playwright:** Playwright is an open-source tool for browser automation and testing of web applications. It supports multiple web browsers, including Chrome and Firefox, and offers APIs for automating web interactions and performing tests in various programming languages such as JavaScript and Python. It focuses on providing reliable cross-browser testing and supports both headless and UI testing scenarios.
- **Puppeteer:** Puppeteer is another popular open-source tool for browser automation, developed by Google. It allows developers and testers to control and interact with web pages programmatically. Puppeteer provides a high-level API for automating tasks like generating screenshots and PDFs and crawling pages. It supports Chrome and other Chromium-based browsers and is commonly used for web scraping, testing, and generating performance reports.

## How to Perform Web Application Testing?

Web application testing requires some preparation before digging into the actual testing process. This helps you gather all aspects of testing web applications in one place and establish a systematic, pipelined approach. Initiating web application testing needs some prior preparation, which ensures that the testing process is aligned with the project objectives, the test environment is accurately set up, and a clear test strategy is in place to guide the testing efforts. Here are the steps to follow when preparing for web application testing:

- **Identify test objectives:** When starting with a web application test, the first thing you have to be clear about is its test objective and goal.
For example, it is important to identify the exact areas, key functionalities, and environments that must be tested. This will allow you to create effective test cases and ensure comprehensive coverage.
- **Establish test environment:** Set up a test environment that closely resembles the production environment in which the web application will be deployed. This involves configuring the hardware, software, networks, databases, and other components required to replicate the deployment environment. A well-prepared test environment helps conduct realistic tests and identify potential issues early on.
- **Define test strategy:** Develop a test strategy that outlines the approach, methodologies, and techniques to be employed during the testing process. This includes determining the types of tests to be performed (e.g., functional, performance, security), selecting appropriate testing tools and frameworks, establishing test timelines and milestones, and defining roles and responsibilities within the testing team. A well-defined test strategy ensures a systematic and structured approach to web application testing.

After you have prepared for the test and know what and how to test, you can move on to the actual testing process. Web app testing can be performed on a local computer or in the cloud, each with its advantages and disadvantages. Testing on a local machine provides greater control over the testing environment: teams can customize the infrastructure and tools to meet their specific requirements, resulting in faster testing cycles without network latency. However, more resources will be required to scale up to larger scenarios. In contrast, cloud-based testing offers virtually unlimited resources and scalability without hardware limitations. Additionally, this method is cost-effective, since teams pay only for the resources they need.
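A test strategy like the one described in the steps above can also be captured as plain data, so scripts and CI steps can consume it. This is a purely hypothetical sketch — every field name here is illustrative, not part of any real tool:

```javascript
// A hypothetical test strategy expressed as data (all names are illustrative).
const testStrategy = {
  testTypes: ['functional', 'performance', 'security'],
  environments: [
    { browser: 'Chrome', os: 'Windows' },
    { browser: 'Safari', os: 'macOS' },
  ],
  milestones: { smoke: '2023-07-01', regression: '2023-07-15' },
};

// A sanity check a CI step might run: every planned test type must be one the
// team actually knows how to execute.
const supportedTypes = new Set(['functional', 'performance', 'security', 'usability']);
const unknownTypes = testStrategy.testTypes.filter((t) => !supportedTypes.has(t));

console.log(unknownTypes.length === 0 ? 'strategy OK' : `unknown test types: ${unknownTypes}`);
```

Keeping the plan as data makes it easy to review in version control and to validate automatically before a test run starts.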
## Web Application Testing on the Cloud

Web application testing in the cloud means that web applications are deployed and tested on cloud-based servers and resources. But why test on the cloud when there are already so many automation testing frameworks and tools on the market that allow web app testing? A cloud-based platform offers several benefits that ease your web application testing process: you can scale the testing environment up and down according to your testing needs, and you can access the web application from anywhere with an Internet connection, facilitating remote collaboration and enabling teams to work seamlessly across different locations.

The growing focus on web applications has raised the standard of testing tools and platforms. You can leverage the full capability of web application testing by performing tests on a cloud-based platform like LambdaTest. LambdaTest is a cloud-based digital experience testing platform that offers both manual and automated web application testing across 3000+ real browsers, devices, and operating systems. This allows cross browser compatibility testing of web applications, ensuring they work flawlessly for all your users, regardless of their preferred browser or operating system.

## Challenges in Web Application Testing

In testing web applications, there are certain challenges that most developers and testers encounter and find difficult to address. These may prevent web app testing from being completed, and the quality of the application may suffer. Hence, it is of utmost importance to be aware of the challenges of web application testing:

- **Uncontrolled web app environments:** Web applications run on various platforms, screen resolutions, browsers, and devices, making it difficult to achieve comprehensive test coverage across all environments. Testers must carefully evaluate and prioritize the most relevant combinations for testing based on user demographics and usage patterns.
- **Frequent UI changes:** Web applications undergo regular updates, introducing new features, third-party integrations, or changes to the user interface. Keeping up with these changes can be challenging for testers, as it requires maintaining and updating test scripts to align with the evolving UI, ensuring proper test coverage, and avoiding script failures.
- **Handling image comparisons:** Web automation often involves comparing images for visual validation. Managing image comparisons can be complex, as variations in pixel details, such as shape, size, and color, must be carefully handled to ensure accurate and reliable comparisons for testing purposes.
- **Usability issues:** Usability problems can significantly impact the success of a web application. When multiple features are squeezed into limited screen real estate, usability can suffer. Testers must employ proper usability testing tools and techniques to create comprehensive test plans focusing on seamless navigation, intuitive user interfaces, and meeting user expectations.

## Best Practices of Web Application Testing

The challenges mentioned above can be mitigated by following these best practices:

- **Test on different browsers and devices:** You should test your web application on a variety of browsers (such as Chrome, Firefox, Safari, and Edge) and devices (desktop, mobile, tablets). This ensures your application functions correctly and looks consistent across different platforms, providing a seamless user experience.
- **Test for scalability and load handling:** Always perform scalability and load testing to assess your web application's performance under heavy user loads. Simulating high user traffic helps identify performance bottlenecks, such as slow page load times or crashes, and allows you to optimize your application's performance accordingly.
- **Perform security audits:** You should conduct security audits to find any potential vulnerabilities, such as SQL injection, cross-site scripting (XSS), or authentication flaws. It is important to implement proper security measures, such as secure coding practices and encryption, to protect your application and user data.
- **Implement continuous testing practices:** Embrace continuous testing methodologies, such as continuous integration and continuous delivery (CI/CD), to ensure that your web application is continuously tested throughout the development lifecycle.
- **Validate user input and data handling:** Thoroughly test user input fields, form submissions, and data handling processes to ensure data integrity and prevent common issues like data loss, incorrect calculations, or validation errors. Validate input against expected formats, perform boundary value analysis, and handle error conditions gracefully.
- **Test for accessibility:** Pay attention to web accessibility standards and guidelines, such as the Web Content Accessibility Guidelines (WCAG), to ensure that your web application is accessible to users with impairments. Test for screen reader compatibility, keyboard navigation, color contrast, and other accessibility features to provide an inclusive experience for all users.
- **Monitor and analyze application performance:** Continuously monitor your web application's performance using tools like Application Performance Monitoring (APM) or web analytics. Monitor key metrics such as response time, server load, and error rates to identify and proactively address performance bottlenecks.

## Conclusion

Web app testing plays a crucial role in ensuring web applications' quality, functionality, and security. It helps to identify and rectify issues early in the development lifecycle, reducing the risk of costly bugs or vulnerabilities in production. It ensures that the application functions as intended, provides a seamless user experience across platforms, and handles varying user loads effectively. Organizations can deliver robust and reliable web applications by following best practices such as testing on different browsers and devices, performing scalability and load testing, conducting regular security audits, and implementing continuous testing practices.

As technology evolves, web application testing must adapt to emerging trends and challenges, such as cloud infrastructure, mobile responsiveness, and the increasing complexity of web applications. By staying updated with the latest testing methodologies, tools, and best practices, organizations can continuously improve their web application testing processes and deliver high-quality applications that meet user expectations.
nazneenahmd
1,505,402
Streamlining Busy Schedules: The Role of Student Athlete Shuttles in NYC
Introduction Balancing academics and athletics can be an arduous task for student...
0
2023-06-15T09:52:33
https://dev.to/limousinecarservices_5/streamlining-busy-schedules-the-role-of-student-athlete-shuttles-in-nyc-4c77
## **Introduction**

Balancing academics and athletics can be an arduous task for student athletes. With demanding practice schedules, training sessions, and competitions, managing time efficiently becomes crucial. This is where student athlete shuttles come to the rescue, providing a reliable and convenient transportation solution in the bustling city of New York. In this article, we explore how [shuttle services in NYC](https://vipgts.com/luxury-shuttle-service-nyc/) contribute to managing the busy schedules of student athletes, ensuring they reach their destinations promptly while alleviating logistical burdens.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ac3e9atxz18uniwc3i0t.png)

### **Convenience and Time Management:**

Convenience and time management are key benefits offered by student athlete shuttles in NYC. These services eliminate the hassles of navigating congested city streets and searching for parking. Student athletes can rely on spacious shuttle buses or [limo services](https://vipgts.com/services/special-occasion/), such as luxurious Cadillac Escalade stretch limos, to transport them seamlessly between campuses, training facilities, and sporting venues. By outsourcing transportation, athletes gain precious time that can be allocated to academics, training, or rest. With dependable shuttle services, they can streamline their busy schedules, ensuring punctuality and reducing the logistical burdens that come with managing their academic and athletic commitments in a bustling city like New York.

### **Enhanced Productivity:**

Student athlete shuttles in NYC not only provide transportation but also contribute to enhanced productivity. Instead of wasting valuable time behind the wheel or on public transportation, athletes can utilize their commuting time effectively. Whether it's reviewing game strategies, completing assignments, or simply relaxing, these shuttles offer a conducive environment for focused work.
By eliminating the stress of driving and providing a comfortable space, student athletes can optimize their schedules and make the most of their time. With enhanced productivity, they can excel academically and athletically, achieving a well-rounded balance that sets them up for success.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/efl08hlsvrajobgshzc7.png)

## **Flexibility for NYC Sporting Events:**

Student athlete shuttles in NYC offer unparalleled flexibility when it comes to attending sporting events. Whether it's a thrilling basketball game at Madison Square Garden or a baseball match at Yankee Stadium, these shuttles ensure athletes and fans arrive on time and depart seamlessly. With the convenience of shuttle services, student athletes can fully immerse themselves in the electrifying atmosphere of NYC sporting events. No need to navigate through traffic or search for parking—shuttle services provide a hassle-free transportation option, allowing athletes to focus on their game and fans to enjoy the event to the fullest.

## **Reliable and Safe Transportation:**

Reliable and safe transportation is paramount for student athletes in NYC. Student athlete shuttles provide a trustworthy solution, operated by experienced drivers familiar with the city's bustling streets. Equipped with modern safety features, these shuttles prioritize the well-being of athletes. Whether it's commuting between campuses, training facilities, or attending NYC sporting events, athletes can rely on these shuttles for secure and timely transportation. With their transportation needs taken care of, student athletes can focus on their athletic pursuits without worrying about the logistics, ensuring peace of mind and optimal performance.

## **Booking and Contact Information:**

To experience the convenience and efficiency of student athlete shuttles in NYC, you can book your shuttle service through Shuttle Service NYC.
They offer a range of transportation options, including spacious shuttle buses and luxurious Cadillac Escalade stretch limos. For booking and further inquiries:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fzbuamz4kybmd3roc3r9.png)

## **Conclusion**

In conclusion, student athlete shuttles in NYC provide a valuable solution for managing busy schedules. With reliable and safe transportation, athletes can focus on their studies, training, and NYC sporting events. These shuttles offer convenience, time management, enhanced productivity, and flexibility, ensuring athletes reach their destinations on time and without hassle. By utilizing student athlete shuttles, the logistical burdens of navigating traffic and finding parking are alleviated, allowing athletes to fully immerse themselves in their academic and athletic pursuits. To experience the benefits of efficient transportation, book or call your student athlete shuttle service through Shuttle Service NYC.
limousinecarservices_5
1,505,703
Accessible anchor links with Markdown-it and Eleventy
I like to be able to link directly to a section in a long content. I wish every site provided anchor...
0
2023-06-15T14:23:32
https://nicolas-hoizey.com/articles/2021/02/25/accessible-anchor-links-with-markdown-it-and-eleventy/
---
title: Accessible anchor links with Markdown-it and Eleventy
published: true
date: 2021-02-25 21:08:34 UTC
tags:
canonical_url: https://nicolas-hoizey.com/articles/2021/02/25/accessible-anchor-links-with-markdown-it-and-eleventy/
---

I like to be able to link directly to a section in a long content. I wish every site provided anchor links associated with headings, even if [Text Fragments](https://web.dev/text-fragments/) might become a cross-browser thing sometime in the future. Here's how I made the anchor links of my [Eleventy](https://11ty.dev/)-based site accessible.

I've been using the [markdown-it-anchor](https://github.com/valeriangalliat/markdown-it-anchor) plugin in my Eleventy configuration for a while, but even if I applied some settings different from the defaults (which heading levels to consider, how to generate a slug, which visual symbol to use, etc.), I never tried to change the rendering function, as I thought the default one was enough.

## Were my Anchor Links Accessible?

But a few weeks ago, I read this great detailed post where [Amber Wilson](https://amberwilson.co.uk/) explains how she figured out how to make such anchor links really accessible: [Are your Anchor Links Accessible?](https://amberwilson.co.uk/blog/are-your-anchor-links-accessible/)

My anchor links were not accessible at all… 😱

## Enhancing `markdown-it-anchor`'s rendering

Amber also uses Eleventy and shared [a new plugin to automate such accessible anchor links](https://amberwilson.co.uk/blog/are-your-anchor-links-accessible/#automating-accessible-anchor-links), but I wanted to keep the features I'm already using in `markdown-it-anchor` and enhance it with better accessibility.

Fortunately, `markdown-it-anchor` provides [a large set of options](https://github.com/valeriangalliat/markdown-it-anchor#usage), including a way to provide our own rendering function with the `renderPermalink` option.
After a while diving into the `markdown-it` and `markdown-it-anchor` documentation and code, I've been able to create a rendering function that generates accessible anchor links, which you should be able to use in any Eleventy project! 🎉

The code is primarily based on [`markdown-it-anchor`'s default `renderPermalink` function](https://github.com/valeriangalliat/markdown-it-anchor/blob/85afd1f054032d6a3c83102329c413b56cad99a9/index.js#L13-L34). Here is [my version](https://github.com/nhoizey/nicolas-hoizey.com/blob/4c9e42b306a387e9533a1036a6286b7f24091ed4/.eleventy.js#L111-L176):

```js
renderPermalink: (slug, opts, state, idx) => {
  // based on fifth version in
  // https://amberwilson.co.uk/blog/are-your-anchor-links-accessible/
  const linkContent = state.tokens[idx + 1].children[0].content;

  // Create the opening <div> for the wrapper
  const headingWrapperTokenOpen = Object.assign(
    new state.Token('div_open', 'div', 1),
    {
      attrs: [['class', 'heading-wrapper']],
    }
  );

  // Create the closing </div> for the wrapper
  const headingWrapperTokenClose = Object.assign(
    new state.Token('div_close', 'div', -1),
    {
      attrs: [['class', 'heading-wrapper']],
    }
  );

  // Create the tokens for the full accessible anchor link
  // <a class="deeplink" href="#your-own-platform-is-the-nearest-you-can-get-help-to-setup">
  //   <span aria-hidden="true">
  //     ${opts.permalinkSymbol}
  //   </span>
  //   <span class="visually-hidden">
  //     Section titled Your "own" platform is the nearest you can (get help to) setup
  //   </span>
  // </a>
  const anchorTokens = [
    Object.assign(new state.Token('link_open', 'a', 1), {
      attrs: [
        ...(opts.permalinkClass ? [['class', opts.permalinkClass]] : []),
        ['href', opts.permalinkHref(slug, state)],
        ...Object.entries(opts.permalinkAttrs(slug, state)),
      ],
    }),
    Object.assign(new state.Token('span_open', 'span', 1), {
      attrs: [['aria-hidden', 'true']],
    }),
    Object.assign(new state.Token('html_block', '', 0), {
      content: opts.permalinkSymbol,
    }),
    Object.assign(new state.Token('span_close', 'span', -1), {}),
    Object.assign(new state.Token('span_open', 'span', 1), {
      attrs: [['class', 'visually-hidden']],
    }),
    Object.assign(new state.Token('html_block', '', 0), {
      content: `Section titled ${linkContent}`,
    }),
    Object.assign(new state.Token('span_close', 'span', -1), {}),
    new state.Token('link_close', 'a', -1),
  ];

  // idx is the index of the heading's first token
  // insert the wrapper opening before the heading
  state.tokens.splice(idx, 0, headingWrapperTokenOpen);
  // insert the anchor link tokens after the wrapper opening and the 3 tokens of the heading
  state.tokens.splice(idx + 3 + 1, 0, ...anchorTokens);
  // insert the wrapper closing after all these
  state.tokens.splice(
    idx + 3 + 1 + anchorTokens.length,
    0,
    headingWrapperTokenClose
  );
},
```

I hope there are enough comments in the code to understand how it works. The main `markdown-it` behavior I had to understand is that [it uses an array of tokens to represent HTML nodes](https://github.com/markdown-it/markdown-it/blob/master/docs/architecture.md#token-stream), instead of a more traditional [Abstract Syntax Tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree).

## Adapting the CSS to the new HTML structure

If you're already using `markdown-it-anchor`, the anchor link is inside the heading:

```html
<h3 id="were-my-anchor-links-accessible">
  Were my Anchor Links Accessible?
  <a class="deeplink" href="#were-my-anchor-links-accessible">#</a>
</h3>
```

With my new code, following Amber's advice, it is now:

```html
<div class="heading-wrapper">
  <h2 id="were-my-anchor-links-accessible">Were my Anchor Links Accessible?</h2>
  <a class="deeplink" href="#were-my-anchor-links-accessible">
    <span aria-hidden="true">#</span>
    <span class="visually-hidden">Section titled Were my Anchor Links Accessible?</span>
  </a>
</div>
```

My actual code is a little more complex, as I use an SVG for the anchor symbol, but the HTML structure is the same, so you can take some inspiration from my CSS code, which is heavily inspired by Stephanie Eckles' [Smol Article Anchors](https://smolcss.dev/#smol-article-anchors):

```scss
// Anchor links
// Based on https://smolcss.dev/#smol-article-anchors
.heading-wrapper {
  display: grid;
  // anchor link on the far right for long wrapping headings
  grid-template-columns: minmax(auto, max-content) min-content;
  align-items: stretch;
  gap: 0.5rem;
}

.deeplink {
  display: grid;
  justify-content: center;
  align-content: center;

  &:link,
  &:visited {
    padding: 0 0.25rem;
    border-radius: 0.3em;
    color: var(--color-meta);
    text-decoration: none;

    svg {
      fill: none;
      stroke: currentColor;
      stroke-width: 2px;
      stroke-linecap: round;
      stroke-linejoin: round;
    }
  }

  .heading-wrapper:hover &,
  &:hover,
  &:focus {
    color: var(--color-link-hover);
    background-color: var(--color-link-hover-bg);
  }
}

@media (min-width: 65rem) {
  .heading-wrapper {
    // Anchor link in the left margin on larger viewports
    grid-template-columns: min-content auto;
    margin-left: -2rem; // 1rem width + .25rem * 2 paddings + 0.5rem gap
  }

  .deeplink {
    grid-row-start: 1;
  }
}
```

On viewports `< 65rem`, the anchor link is inside the content container, at the right of the heading. If a long heading wraps on multiple lines, the anchor link is located on the far right, but if the heading is short, the anchor link follows it directly.
I'm not sure setting `grid-template-columns: minmax(auto, max-content) min-content;` is the best way to do it, feel free to suggest an enhancement.

On viewports `>= 65rem`, there is space around the content, so I move the anchor link into the margin on the left.

## Enhancing `markdown-it-anchor` for everyone

I asked [Valérian Galliat](https://www.codejam.info/val.html), maintainer of `markdown-it-anchor`, if he would be open to merging a pull request providing this enhancement: [https://github.com/valeriangalliat/markdown-it-anchor/issues/82](https://github.com/valeriangalliat/markdown-it-anchor/issues/82)

But I think this would break (at least visually) all current uses of the plugin, so I believe it would require a new option to activate it. We'll discuss this before I provide the PR.
nhoizey
1,506,014
Exploring Nodejs: Chapter 2 — URLS AND THE INTERNET
Chapter 2 is here! In this chapter, we will talk about how the internet works, what happens when we...
23,403
2023-06-15T18:19:03
https://dev.to/itsmohamedyahia/exploring-nodejs-chapter-2-urls-and-the-internet-1e14
webdev, beginners, tutorial, node
Chapter 2 is here! In this chapter, we will talk about how the internet works, what happens when we go to a URL, the different parts of the URL, and how DNS servers help us traverse the web. Hope you enjoy it.

---

### WHAT HAPPENS UNDER THE HOOD WHEN I ENTER A URL IN THE BROWSER ADDRESS BAR??

Well, you need to know what a URL is.

**OKAY, WHAT IS A URL??**

URL stands for 'uniform resource locator'.

**WHY??**

'Uniform' since all URLs have a uniform and standard structure, 'resource locator' since a URL is used to locate a file or a 'resource' on the internet, or even locally on your computer. It consists of some parts as shown in the image:

![url constituents](https://upload.wikimedia.org/wikipedia/commons/thumb/b/ba/URL_structure.jpg/800px-URL_structure.jpg?20200419144644)

<small>A picture showing URL structure. Illustration made by [Noémie2602](https://commons.wikimedia.org/w/index.php?title=User:No%C3%A9mie2602&action=edit&redlink=1) and licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en).</small>

Let's focus on the domain for a second. We refer to `wikipedia` as the domain, or `en.wikipedia.org` can be called the domain name too.

### HEY, WHY IS IT CALLED A DOMAIN??

Just wait for a second, be patient. Before we get to that domain, let's step back for a moment: I get back a page when I enter the URL,

**BUT WHERE IS THIS PAGE STORED??**

This page is stored on a computer like the one you are reading from right now, but if that computer's sole purpose is to serve files, then we would call it a server. Makes sense. So I enter the domain part (which is the unique part of a URL if we are writing `https://www.google.com/`), and then my browser magically knows that it should get a file from another computer across the internet.

**IS THAT IT?**

Well, it's not magic once you understand what happens. We have to think first about how our browser could get to that other computer. They have to be connected, right??
And they are both connected to the "internet".

### **WHAT IS THE INTERNET EXACTLY, IS THAT SOME CLOUD IN THE SKY?**

The internet is much simpler than what most people have in mind: it is just wires. Yes, your computer is connected to that other computer (the server) through a wire, and even if you are connecting over wifi, most of the pathway is still wires. In the case of you connecting over wifi, that wifi will be connected to your router, then a modem (or a 3-in-1 device), then your ISP's infrastructure (which is basically routers); your ISP is connected to other main ISPs, and so on, and there are wires that cross seas and oceans to connect the whole world together! (If you are interested, you can see actual footage of the physical infrastructure of the 'internet' through [this youtube video](https://www.youtube.com/watch?v=TNQsmPf24go&ab_channel=Vox))

When a computer is connected to a router, it is given an ip address, and your router also has an ip address. Basically, every device connected to the internet has an ip address to identify it among the rest of the devices, just like a home address, so that data travelling across the 'web' or 'internet' or wires knows where to go.

Now, to get a file from another computer, you have to request it, just like humans in real life: if you want something, you request it. The server then returns a response, which might include the website files. When you enter an ip address in the address bar, the browser sends a request to the server to fetch the files; the file or files sent back are determined by a set of code which tells the server: hey, if you get a certain request, send a certain response. That set of code is the 'backend' and can be written in many languages, one of which is Javascript, run by Nodejs, the runtime environment.

### **BUT WE ARE NOT WRITING IP ADDRESSES, ARE WE??**

Well, we could, but for convenience, we write a URL whose unique part is the domain name.
**DOES A SERVER HAVE A DOMAIN NAME SO THAT WE COULD IDENTIFY IT??**

No, servers have ip addresses.

**BUT HOW DO WE GET THE FILES BACK FROM THE SERVER IF WE AREN'T WRITING THE SERVER'S IP ADDRESS??**

The browser sends a request to another server, a DNS (domain name system) server, which holds a database of domain names and their corresponding ip addresses. Given the domain name you typed, the DNS server makes a 'domain lookup' (a query: search and get) for the associated ip address, and then your browser sends a request to that ip address.

### **WHAT ABOUT THE HTTPS PART OF THE URL??**

HTTPS stands for Hypertext Transfer Protocol Secure.

**WHAT??**

'Hypertext' refers to text documents which contain hyperlinks, or links, which makes those documents 'hyper' since they are not totally inanimate: they can refer to other documents. And those documents, you guessed it, are HTML files or other forms of text files. 'Transfer Protocol' is intuitive, I think: we can understand that it is a protocol, or standard (something that is set, agreed upon and followed), for transferring hypertext.

**WHAT ABOUT SECURE??**

'Secure' means that the transfer process is secured.

**YOU HAVE TO BE MORE DETAILED THAN THAT??**

Okay, chill, let's give an example. When you enter your account information (username and password) or card info on a website and then click login or pay, you are essentially sending a request to the server with that data. Now, someone might be watching that connection and be able to read the request data you sent, which would include your secret information. To avoid this, we can encrypt the connection (with the TLS protocol, or SSL previously) so that anyone spying will see just encrypted data which they can't decrypt.

### **YOU NEVER TOLD ME ABOUT THE OTHER PARTS OF THE URL.**

One by one, my friend. Okay, let's talk about them.
'path': a website has many HTML files; there is the main page and there are other pages. The part after the whole domain name (so everything after `google.com`) gets sent to the server as part of the request under a key-value pair: the key is called 'path' and the value is anything after the whole domain, which in the case of `https://www.google.com/` is the slash `/`. Now the JS code on the server analyzes that path value and, based on the value, returns a specific page: if it finds the path to be just `/`, it will return the main page; if it is `/article-501`, it will return another HTML page which hopefully should be a page about article 501, if the developer coded it logically.

'anchor': if you know CSS, then you would know that `#` is written in front of an element's id name in the CSS file to form a CSS selector for that id element; same in JS when using the querySelector method. When a link is written without an anchor, the browser opens the page at the top (the top part of the page will be the top of the HTML file), but if there is an anchor, the browser will open the page at the section or element which has an id with the name after the `#` (that section or element will initially be at the top of the page), and you can scroll up or down from there.

### YOU THOUGHT SO.

I didn't forget: you wanted to know why a domain is called like that. A domain, as the name suggests, is a separate area from its surroundings; similarly, a domain in a URL suggests that any pages under that domain belong to a separate entity within the larger network of the internet. `google` is a domain name, so all pages with the domain google are related and are separate from the other pages of the internet.

---

This chapter concludes here, and I hope you enjoyed it. If you have any questions, please feel free to ask them below. Additionally, I welcome any corrections or valuable information you may have to contribute.
<em>The content of this series draws inspiration from Maximilian Schwarzmüller's Node.js course on Udemy, serving as an index for the topics covered. However, it should be noted that the content reflects my personal interpretation and additional research conducted by me.</em>
itsmohamedyahia
1,506,204
Healthcare’s Digital Accessibility Problem
The push for digital accessibility aims to ensure equal access and inclusion for all individuals. So...
0
2023-06-15T21:58:03
https://devinterrupted.com/podcast/healthcares-digital-accessibility-problem-w-ros-plum-ertz/
devops, programming, podcast, interview
The push for digital accessibility aims to ensure equal access and inclusion for all individuals. So why are healthcare companies failing to keep up? In this week’s episode of Dev Interrupted, we sit down with Plum Ertz, Director of Engineering at [Ro](https://ro.co/), to dissect healthcare’s digital accessibility problem. Following explosive growth in telemedicine due to consumer behavior changes brought on by the pandemic, the healthcare industry has struggled to provide healthcare services on the internet that are accessible regardless of the technology someone uses or the disability they may have. Plum sheds light on the work being done at Ro to address these challenges, emphasizing the simple steps that teams can take to enhance their digital accessibility. Plum also shares stories from her time at Buzzfeed, where she accidentally took down the entire site after forgetting a semicolon and once mistakenly called a midterm election 5 hours too early! >“Everything we do and everything that we're working on is built around: How is this solving a patient problem?
Like if anyone said anything about the blockchain, I would straight up walk out of the building.” {% spotify spotify:episode:3QaO88GkXKxskjm3PSm6Pk %} ## Episode Highlights: * (1:35) Defining accessibility * (6:20) Why accessibility matters * (9:50) An explosion in the number of telemedicine users * (12:19) How Ro defines success * (16:05) Screwup stories * (18:15) Stories from Plum's time at Buzzfeed * (22:50) Getting started with accessibility at your company [Read the full episode transcript ](https://devinterrupted.com/podcast/healthcares-digital-accessibility-problem-w-ros-plum-ertz/) --- ***While you’re here, check out this video from our [YouTube channel](https://www.youtube.com/c/DevInterrupted?sub_confirmation=1), and be sure to like and [subscribe](https://www.youtube.com/c/DevInterrupted?sub_confirmation=1) when you do!*** {% youtube 83XcO7i-__I %} --- ## A 3-part Summer Workshop Series for Engineering Executives Engineering executives, [register now](https://linearb.io/events/2023-summer-series/?utm_source=Dev.to&utm_medium=referral&utm_campaign=202306%20-%20Event%20-%20Summer%20Series%20-%20Podcast%20Distribution) for LinearB's 3-part workshop series designed to improve your team's business outcomes. Learn the three essential steps used by elite software engineering organizations to decrease cycle time by 47% on average and deliver better results: Benchmark, Automate, and Improve. Don't miss this opportunity to take your team to the next level - [save your seat today](https://linearb.io/events/2023-summer-series/?utm_source=Dev.to&utm_medium=referral&utm_campaign=202306%20-%20Event%20-%20Summer%20Series%20-%20Podcast%20Distribution). [![Register Today](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nd6b3egikvnssdi9x50g.png)](https://linearb.io/events/2023-summer-series/?utm_source=Dev.to&utm_medium=referral&utm_campaign=202306%20-%20Event%20-%20Summer%20Series%20-%20Podcast%20Distribution)
conorbronsdon
1,506,687
Configuring connection for Oracle NetSuite REST connector
Configuring connection for Oracle NetSuite REST connector Step 1: Enabling an...
0
2023-06-16T10:40:02
https://tech.forums.softwareag.com/t/configuring-connection-for-oracle-netsuite-rest-connector/278219/1
webmethods, oracle, rest, integration
---
title: Configuring connection for Oracle NetSuite REST connector
published: true
date: 2023-05-04 05:05:27 UTC
tags: webMethods, oracle, rest, integration
canonical_url: https://tech.forums.softwareag.com/t/configuring-connection-for-oracle-netsuite-rest-connector/278219/1
---

## Configuring connection for Oracle NetSuite REST connector

### Step 1: Enabling an Application in NetSuite to Use OAuth 2.0

**1.1 Log in to your NetSuite account.**

Go to **Setup** > **Integration** > **Manage Integrations** > **New**

[![1](https://global.discourse-cdn.com/techcommunity/optimized/3X/d/7/d7ce7d83ed3625086848bcf49b2124852b688b5f_2_690x368.png)](https://global.discourse-cdn.com/techcommunity/original/3X/d/7/d7ce7d83ed3625086848bcf49b2124852b688b5f.png "1")

**1.2 Configure the app as shown below.**

- Provide a **Name**.
- Enable **Token Based Authentication**.
- Enable **Authorization code grant** and provide a **redirect URI**.
- In Scope, enable **REST Web services**.

Then **Save**.

[![2](https://global.discourse-cdn.com/techcommunity/optimized/3X/1/5/15699e2724f742b06c1972de009ca620130a1ae1_2_690x401.png)](https://global.discourse-cdn.com/techcommunity/original/3X/1/5/15699e2724f742b06c1972de009ca620130a1ae1.png "2")

Make a note of the **Client ID** and **Client Secret**.

[![3](https://global.discourse-cdn.com/techcommunity/optimized/3X/1/e/1e7f3f6e849f66807dd2e6981c1fcb66726054ae_2_633x500.png)](https://global.discourse-cdn.com/techcommunity/original/3X/1/e/1e7f3f6e849f66807dd2e6981c1fcb66726054ae.png "3")

### Step 2: POST Request to the Token Endpoint

**2.1 Generate a Code Challenge using a Code Verifier**

To apply SHA-256 to the code\_verifier, you can use the [Online PKCE Generator Tool](https://tonyxu-io.github.io/pkce-generator/)

**Note** Apply SHA-256 on the code\_verifier parameter. The length of the code\_verifier parameter must be between 43 and 128 characters.
Valid characters for the code\_verifier parameter are alphabet characters, numbers, and these non-alphanumeric ASCII characters: hyphen, period, underscore, and tilde (- . \_ ~). Please make a note of the **code challenge** and **code verifier**.

**2.2 Generate the Authorization code.**

Using the browser, run the GET request below.

_https://accountID.app.netsuite.com/app/login/oauth2/authorize.nl? **scope** = rest\_webservices& **redirect\_uri** =<Enter\_your\_redirect\_URL\_used\_in step1.2>& **response\_type** =code& **client\_id** =<ClientID\_from\_step1.2>& **state** =ykv2XLx1BpT5Q0F3MRPHb94j& **code\_challenge** =<code\_Challenge\_generated\_at\_Step2.1>& **code\_challenge\_method** =S256_

Note: The length of the **state** parameter must be between 24 and 1024 characters. Valid characters are all printable ASCII characters.

Once you send the request and log in, you will see the screen below; click Continue.

[![4](https://global.discourse-cdn.com/techcommunity/optimized/3X/6/a/6a5ac45f6c4316d89ac78ce7e5dd06c0b9b39c67_2_690x212.png)](https://global.discourse-cdn.com/techcommunity/original/3X/6/a/6a5ac45f6c4316d89ac78ce7e5dd06c0b9b39c67.png "4")

You will be redirected to a new page. In the redirected URL you will find the authorization code.

[![5](https://global.discourse-cdn.com/techcommunity/optimized/3X/3/e/3e01d952096655e3e5f559f92285037c30070660_2_690x18.png)](https://global.discourse-cdn.com/techcommunity/original/3X/3/e/3e01d952096655e3e5f559f92285037c30070660.png "5")

Make a note of the **Authorization Code**.

**2.3 Generate the Access and Refresh Tokens.**

Using Postman, run a POST request as shown below.
**Method** : POST
**Endpoint** : https://accountID.suitetalk.api.netsuite.com/services/rest/auth/oauth2/v1/token
**Authorization** : Basic
**UserName** : Client\_ID from step 1.2
**Password** : Client\_Secret from step 1.2

![6](https://global.discourse-cdn.com/techcommunity/original/3X/9/b/9b49366200af9600811a7e963a61eb1ce87039c9.png)

In the Body section, choose x-www-form-urlencoded and provide the details as shown below.

**code** : Authorization code from step 2.2
**redirect\_uri** : Redirect URL from step 1.2
**grant\_type** : authorization\_code
**code\_verifier** : Code Verifier from step 2.1

[![7](https://global.discourse-cdn.com/techcommunity/optimized/3X/5/0/5081f1eca9e9dedb16a1db5947933325d92cc3f9_2_690x309.png)](https://global.discourse-cdn.com/techcommunity/original/3X/5/0/5081f1eca9e9dedb16a1db5947933325d92cc3f9.png "7")

Once you click Send, you will receive the Access Token and Refresh Token in the response. Make a note of the **Access Token** and **Refresh Token**.

### Step 3: Create a connection for the Oracle NetSuite REST connector

**Provide the below details in the connection.**

**Client ID** - Client\_ID from step 1.2
**Client Secret** - Client\_Secret from step 1.2
**Access Token** - Access Token from step 2.3
**Refresh Token** - Refresh Token from step 2.3
**Refresh Url** - [https://accountID.suitetalk.api.netsuite.com/services/rest/auth/oauth2/v1/token](https://accountID.suitetalk.api.netsuite.com/services/rest/auth/oauth2/v1/token)
**Grant Type** - refresh\_token
**Instance Url** - [https://accountID.suitetalk.api.netsuite.com](https://accountID.suitetalk.api.netsuite.com)
**Server Url** - [https://accountID.suitetalk.api.netsuite.com](https://accountID.suitetalk.api.netsuite.com)

> Note - Update accountID with your NetSuite accountID

**Other Recommended configurations**

Minimum Pool Size - 10
Maximum Pool Size - 100
Pool Increment Size - 10
Block Timeout - 10000
Expire Timeout - 10000
Response TimeOut - 900000
Keep Alive Interval - 900000

[Read full
topic](https://tech.forums.softwareag.com/t/configuring-connection-for-oracle-netsuite-rest-connector/278219/1)
techcomm_sag
1,508,181
Quality industrial rice cooking cabinets at good prices
https://tunaucom.com.vn/ Specializing in industrial rice cooking cabinets and rice steaming cabinets ✔️Qual...
0
2023-06-18T02:02:11
https://dev.to/tunaucomnewsun/tu-nau-com-cong-nghiep-chat-luong-gia-tot-1p2h
https://tunaucom.com.vn/ Specializing in industrial rice cooking cabinets and rice steaming cabinets ✔️Quality ✔️Genuine ✔️Long-term warranty ✔️Best price on the market ✔️✔️✔️ 0963.997. 355 No. 28, Lane 168 Nguyễn Xiển, Thanh Xuân, Hà Nội #tucomcongnghiep #tunaucomcongnghiep #tuhapcomcongnghiep #noinaucomcongnghiep #noihapcomcongnghiep https://plaza.rakuten.co.jp/tunaucomnewsun https://penzu.com/p/1d86d8df https://peatix.com/user/17822546 https://matters.town/@tunaucomnewsun
tunaucomnewsun