1,430,157
Beginner's Guide to Data Analytics: Diving into Our Data Management Platform
In this post, I'm going to walk you through the parts and components of our own Data Management...
0
2023-04-08T12:39:39
https://dev.to/apachedoris/beginners-guide-to-data-analytics-diving-into-our-data-management-platform-2baj
database, datascience, data, analytics
In this post, I'm going to walk you through the parts and components of our own Data Management Platform (DMP), and how we improved analytic efficiency through architectural optimization. Let's start from the basics. As the raw material of our DMP, the data sources include: - Business logs from all sales ends; - Sales data from third-party platforms; - Basic data from within the company. These constitute our data assets, from which we derive a number of tags describing the customers' age, address, preferred products, the devices they use, etc. Using these tags as filters, we pick out a group of customers that match certain characteristics (the "Grouping" process). Then we view the behavior patterns of the target group. ![1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zg4k3ugjaiz86l2zwchg.png) **What is the DMP used for?** From the data users' perspective, they mainly use the DMP for two purposes: tag query and grouping. - **Tag Query**: Sometimes they find a certain customer (or a group of customers) and check what tags they are attached to. - **Grouping**: After grouping, they might want to check if a certain customer is in a specified group, in order to ground their marketing decisions. They might also pull the grouping result set from the DMP into their own business system for further development. Most of the time, they start analyzing the shopping behavior of the target group directly. **How does the DMP work?** Firstly, we, as data platform engineers, define the tags and the rules for grouping. Next, we define a domain-specific language (DSL) that describes what we want to do, so we can submit computation tasks to Apache Spark. Then, the computing results are stored in Apache Hive and Apache Doris. Lastly, data users can perform whatever queries they need in Hive and Doris. 
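To make the DSL-to-Spark step concrete, here is a minimal sketch of how a grouping rule might be translated into SQL before submission. The rule format, the tag names, and the `user_tags` table are illustrative assumptions, not the platform's actual implementation:

```python
# Hypothetical sketch of the DSL-to-SQL translation step described above.
# The rule format, the tag names, and the "user_tags" table are illustrative
# assumptions, not the platform's actual implementation.

def dsl_to_sql(rules, table="user_tags"):
    """Turn (tag, operator, value) rules, combined with AND, into a SQL query."""
    clauses = []
    for tag, op, value in rules:
        # Quote string values; leave numbers as-is.
        literal = f"'{value}'" if isinstance(value, str) else str(value)
        clauses.append(f"{tag} {op} {literal}")
    return f"SELECT user_id FROM {table} WHERE " + " AND ".join(clauses)

print(dsl_to_sql([("age", ">=", 18), ("city", "=", "Shanghai")]))
# SELECT user_id FROM user_tags WHERE age >= 18 AND city = 'Shanghai'
```

A real translation layer would also have to handle nested and/or/xor combinations and value escaping, but the shape of the problem is the same.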
![2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3umnlvwrai77gwb9t9u5.png) To provide an abstraction of the DMP, it can be seen as a four-layered architecture: - **Metadata Management**: All meta information about the tags is stored in the source data tables; - **Computation & Storage**: This layer is supported by Spark, Hive, Doris, and Redis; - **Scheduling**: This is the command center of the DMP. It arranges the tasks throughout the whole platform, such as aggregating data into basic tags, converting data of basic tags into SQL semantics for queries based on the DSL rules, and passing computing results from Spark to Hive and Doris; - **Service**: This is where data users conduct grouping, profile analysis, and check the tags. ![3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cxmch9zqwids3neqgri2.png) ## Tags Tags are the most important elements of our DMP, so I'm going to devote a whole chapter to them. Every tag in our DMP goes through a five-phase lifecycle: 1. **Demand**: Data users put forward what kind of tags they need. 2. **Production**: Data developers sort out the data and produce the tags. 3. **Grouping**: Tags are put into use for grouping. 4. **Marketing**: Operators launch precision marketing campaigns targeting the selected groups. 5. **Evaluation**: All relevant parties evaluate the utilization rate and effectiveness of tags, and then plan for subsequent improvements. Every day, the grouping results are updated and outdated data is cleaned up. Both processes are automatic. We have also partially automated the tag production process. Our next priority is to fully automate it. ### How Tags Are Produced 
To derive tags from data, we first need to turn our raw data into a more organized and structured form: ![4](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v4oz09ct8ne0735mkdz0.png) - **ODS (Operational Data Store)**: This layer contains the user login history, event tracking logs, transaction data, and the binlogs from various databases. - **DWD (Data Warehouse Details)**: Data processed by the ODS layer is sent here to form user login tables, user event tables, and order information sheets. - **DM (Data Mart)**: The DM layer stores the aggregated data from DWD. Data in DM is paired by "and", "or", and "xor" logic to produce tags that are more comprehensible for data users. ### Types of Tags Tags can be categorized differently based on different metrics: ![5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4dy90m339ruaytg31qgs.png) ## Where Do We Store Our Data? We made a list of what an ideal data warehouse would look like for us: - Support high-performance queries so as to handle massive consumer-end traffic; - Support SQL for data analysis; - Support data updates; - Be able to store huge amounts of data; - Support extension functions to deal with user-defined data structures; - Be closely integrated into the big data ecosystem. It was hard to find a single tool that met all these needs, so we tried a mixture of multiple tools. We stored part of our offline and real-time data in HBase for basic tag queries, most of the offline data in Hive, and double-wrote the rest of our real-time data into Kudu and ES for real-time grouping and data queries. The grouping results were produced by Impala and then cached in Redis. ![6](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7w1zurlb5zb6ren9u1rp.png) As you can imagine, such a complicated storage setup could lead to tricky maintenance. Also, double-writing added to the risk of data inconsistency, since one of the writes might fail. 
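The "and"/"or"/"xor" tag pairing described in the DM layer can be sketched with plain set operations. This is a toy illustration in Python with made-up tag names and user IDs; in the real platform this kind of logic runs in the warehouse, typically on bitmap columns, rather than in application code:

```python
# Toy sketch of combining tags with "and"/"or"/"xor" logic to form groups.
# Tag names and user IDs are invented for illustration only.

tag_users = {
    "age_18_25":       {1, 2, 3, 4},
    "city_shanghai":   {2, 4, 6},
    "bought_last_30d": {3, 4, 6, 8},
}

and_group = tag_users["age_18_25"] & tag_users["city_shanghai"]        # both tags
or_group  = tag_users["age_18_25"] | tag_users["bought_last_30d"]      # either tag
xor_group = tag_users["city_shanghai"] ^ tag_users["bought_last_30d"]  # exactly one

print(sorted(and_group))  # [2, 4]
print(sorted(or_group))   # [1, 2, 3, 4, 6, 8]
print(sorted(xor_group))  # [2, 3, 8]
```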
So, in Storage Architecture 2.0, we introduced [Apache Doris](https://github.com/apache/doris) and Apache Spark. The whole data pipeline became a Y-shaped diagram. ![7](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7pvfgm2xzgn9siz0m45b.png) We stored offline data in Hive, and the basic tags and real-time data in Doris. Then, based on Spark, we conducted federated queries across Hive and Doris. The query results were stored in Redis. With this new architecture, we compromised a little bit of performance for much lower maintenance costs. **P.S.** For your reference, here is a summary of the applicable scenarios for the various engines that we've used or investigated. ![8](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e0p78ghm9cn1ryu4woyt.png) ## What Are Our High-Performance Queries? Some of our data queries demand high performance, such as grouping checks and customer group analysis. ### Grouping Check A grouping check determines whether certain users are categorized into one or several given groups. It is a two-step check: 1. **Check in static group packets**: perform pre-computations and store the results in Redis; use a Lua script for bulk checks and thus increase performance. 2. **Check in real-time behavior groups**: extract data from contexts, APIs, and Apache Doris for rule-based judgment. Meanwhile, we have tried to increase performance with asynchronous checks, quick short-circuit calculation, query statement optimization, and control of the number of joined tables. ### Customer Group Analysis Customer group analysis figures out the behavioral paths of consumers. It entails join queries across the group packets and multiple tables. Apache Doris does not support path analysis functions so far, but its computing model is friendly to user-defined function development, so we built a UDF for this purpose and it works well. 
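The two-step grouping check can be sketched roughly as follows. A plain dict stands in for the Redis-cached static group packets (the platform actually uses Redis plus a Lua script for bulk checks), and a simple predicate stands in for the rule-based real-time judgment; the group name and rule are made up for illustration:

```python
# Rough sketch of the two-step grouping check. A dict stands in for Redis,
# and the "orders_in_session" rule is a hypothetical real-time behavior rule.

static_groups = {"high_value": {101, 102, 105}}  # precomputed, "cached in Redis"

def realtime_rule(user_event):
    # e.g. qualify the user if they placed an order in the current session
    return user_event.get("orders_in_session", 0) > 0

def in_group(user_id, group, user_event):
    # Step 1: check the static group packet.
    if user_id in static_groups.get(group, set()):
        return True
    # Step 2: fall back to rule-based real-time judgment.
    return realtime_rule(user_event)

print(in_group(102, "high_value", {}))                        # True (static)
print(in_group(999, "high_value", {"orders_in_session": 2}))  # True (real-time)
print(in_group(999, "high_value", {}))                        # False
```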
### Our Gains After Architectural Simplification The newly introduced data warehouse, Apache Doris, can be applied in multiple scenarios, including point queries, batch queries, behavioral path analysis, and grouping. In point queries and small-scale join queries, it delivers over 10,000 QPS with a 99th-percentile response time of less than 50 ms. Apart from its strong scalability and easy maintenance, we also benefit from the much simpler tag models brought by the integration of real-time and offline data. ## Conclusion In this post, I zoomed in on various parts of our DMP and explained how data tagging, storage, and queries work. We believe that a good tag system and fast query speeds are the recipe for efficient data analytics, so our follow-up efforts will go into these aspects. I'm writing this piece to share our practice with the data engineering community and, hopefully, collect some valuable suggestions, so if you've got any ideas, meet me in the comment section.
apachedoris
1,430,161
Top 6 Benefits of SAFe® for Teams Certification
Before we set foot on the day's topic, we must review the term Scaled Agile Framework (SAFe) and see...
0
2023-04-08T13:02:28
https://dev.to/sagarjivani/top-6-benefits-of-safer-for-teams-certification-3542
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yjkzcho39kd2spjbuh8y.JPG) Before we set foot on the day's topic, we must review the term Scaled Agile Framework (SAFe) and see what it's all about. SAFe is a method used by Agile practitioners to scale agility up across all layers of the organization. As demand for a product grows and companies expand, scaling up becomes inevitable. Scaling up the Agile team and applying Agile across the organization helps drive change toward a positive, forward-thinking organization or enterprise. According to the 14th State of Agile Report (2022), the most used scaling method is the Scaled Agile Framework (SAFe), at 35%, a 5% increase within a year. The following pie chart shows the share of different scaling methods by usage. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ufnubdt1l4z0c7sj8zy.JPG) Back to the topic, let's check out the benefits of getting the SAFe for Teams Certification. **1. Constant Delivery** Continuous supply of products to customers is one of the significant factors that have led to the popularity of scaling Agility. Holding a SAFe Certification shows that you can implement and integrate DevOps, a combination of culture, tools, philosophies, and practices put together with the aim of improving an organization's ability to deliver products and services at high velocity. In addition, you learn how to work with DevOps teams to control work backlogs by ensuring there are regular and continuous releases. **2. Internationally Recognised** The SAFe certificate is a globally recognized certification that proves that you have undergone the training and are qualified to be part of a SAFe Framework. This kind of recognition means that you will have opportunities at your fingertips. 
It will also give you an edge at your workplace, where you will be exposed to more incredible opportunities now that you are certified as having what it takes to take on more complex tasks. **3. Improving Mindset** Behavior and belief drive a business. To change the business, you must change any beliefs and behaviors attached to its delayed success. The SAFe training will teach you to scale up as a team member. There are different skills, values, and principles that you have to live by as a SAFe Framework practitioner. The SAFe Certification will go into detail about these factors, and at the end, you will be able to understand and use them to make a change at an individual and enterprise level. **4. Deeper Pockets** Considering that scaling up enterprises has become a widespread and critical practice, SAFe Agilists are paid well to ensure the process is done correctly. Market statistics show that SAFe Agilists are paid at least 25% more than teammates who lack the certification. Let's look at the salaries of SAFe Agilists in several countries.

| Country | Currency | Salary Per Annum |
| --- | --- | --- |
| United States of America | USD | 80,000 – 135,000 |
| United Kingdom | GBP | 33,000 – 75,000 |
| India | INR | 12,000,000 – 20,000,00 |
| Canada | CAD | 66,000 – 145,000 |
| Australia | AUD | 100,000 – 157,000 |

**5. Learn Lean-Agile Leadership** Agile and Lean are among the most popular and widely used principles in the SAFe framework. They involve using the standards, beliefs, behaviors, and processes that are the basis of scaling up any organization. The certification covers the pillars, principles, and parameters essential for scaling up with Agile. You will also see that SAFe includes the individuals primarily involved in the production process, who work tirelessly to meet deadlines; this is part of the reason the SAFe method is so popular. **6. Building Your CV** A SAFe certification on your profile will be like the icing on the cake. 
It helps you stand out from the crowd. You need all the leverage you can get when applying for a job; what will make you stand out? SAFe Certification gives you a deep understanding of the process and terminology of working in a Scaled Agile enterprise, which is a unique trait. It gives an employer a reason to hire you, since you already have the SAFe Certification, over someone without these credentials. SAFe Certification is a highly valued credential. It takes just a few days to get certified, and you will be ready to challenge your competitors and stand apart from them. ValueX2's [SAFe for Teams Certification training](https://www.valuex2.com/safe-for-teams-certification-training) has a reputation as one of the best, with a 100% exam pass rate; if you are serious about success, it would be wise to start there.
sagarjivani
1,430,200
Effective Data Governance
Data governance is a systematic approach to managing and protecting the data of an...
0
2023-04-08T13:47:34
https://dev.to/lidiagoncalves/governanca-de-dados-eficaz-10c2
ai, machinelearning, datascience, cloud
Data governance is a systematic approach to managing and protecting an organization's data. It is a set of processes, policies, standards, and procedures that ensure data is managed consistently, securely, and efficiently across the entire company. Data governance is essential to ensure that the company's critical information is managed appropriately and that data can be used reliably to support decision-making and business operations. Data governance should be seen as a strategic investment, as it can help organizations identify business opportunities, improve operational efficiency, and reduce risk. However, implementing effective data governance can be a challenge, as it requires cultural, process, and technological changes. To implement effective data governance, it is important to follow some key steps: 1. Define the data governance structure: the first step is to establish a clear structure for data governance. This may include creating a data governance committee, defining roles and responsibilities, and establishing policies and standards. 2. Follow essential practices: the first of these is to establish a data governance committee composed of members from different areas of the organization, such as IT, legal, compliance, and business. This committee will be responsible for defining data governance policies and guidelines, as well as monitoring their implementation and compliance. Another important practice is to establish a data catalog containing information about the data the organization collects, stores, and uses, including its origin, quality, conversion, and usage restrictions. This catalog should be updated regularly and shared with all areas of the organization. 
In addition, it is important to establish data quality standards that define criteria for evaluating the accuracy, completeness, consistency, and integrity of data. These standards must be followed by all areas of the organization that work with data. Another essential practice is to establish data privacy and security policies that define how data must be handled and protected. These policies must also be followed by all areas of the organization that work with data. Finally, it is important to continuously monitor and evaluate the implementation of data governance, to ensure that policies and guidelines are being followed and to identify opportunities for improvement. This evaluation can be carried out through internal and external audits, as well as through performance indicators defined by the data governance committee. 3. Demonstrate the value: it is important to demonstrate the value of data governance through success stories and practical examples, showing how it can generate positive results for the organization. 4. Train and engage the teams: training the teams involved in data governance can help ensure its adoption and acceptance, as well as contribute to improving data quality and data management processes.
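The data quality standards mentioned above (accuracy, completeness, consistency, integrity) can be sketched with a couple of simple checks. This is a minimal illustration in Python; the field names and the 0-120 age rule are hypothetical, not from the article:

```python
# Illustrative data quality checks: completeness of a field, and a simple
# consistency rule. Records, fields, and the age rule are made up.

records = [
    {"id": 1, "email": "ana@example.com", "age": 34},
    {"id": 2, "email": "", "age": 28},
    {"id": 3, "email": "bob@example.com", "age": -5},
]

def completeness(records, field):
    """Share of records where the field is present and non-empty."""
    filled = sum(1 for r in records if r.get(field))
    return filled / len(records)

def consistency_violations(records):
    """IDs of records breaking a simple business rule: age must be 0-120."""
    return [r["id"] for r in records if not 0 <= r["age"] <= 120]

print(round(completeness(records, "email"), 2))  # 0.67
print(consistency_violations(records))           # [3]
```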
lidiagoncalves
1,430,235
Introducing terminal tool to manage embedded database
Hello everyone, a few months ago I wrote a terminal tool in Rust to help view data stored in embedded...
0
2023-04-08T15:00:33
https://dev.to/chungquantin/introducing-terminal-tool-to-manage-embedded-database-4c3f
rust, database, opensource, terminal
Hello everyone, a few months ago I wrote a terminal tool in Rust to help view data stored in embedded databases like RocksDB or Sled. As someone who enjoys coding low-level systems, I found it very difficult when there were almost no tools available to view the byte data of embedded databases. That's why EDMA was created to solve that problem. The project is still in its early stages, but it already encompasses some of the main features. I hope it will be useful for those who are into low-level or embedded programming. ![EDMA banner](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ny2iycel1j9opttmo2g.png) EDMA is a command-line tool that allows users to interact with embedded databases at a byte level. It provides a simple yet powerful interface to view, modify, and delete data in databases like RocksDB or Sled. The tool is built using Rust, a systems programming language known for its safety, speed, and concurrency. One of the key features of EDMA is its ability to work with a wide range of data types. Users can view and modify data in the database in different formats, including binary, hexadecimal, and ASCII. They can also specify the type of data stored in the database and EDMA will interpret it accordingly. Another useful feature of EDMA is its support for filtering and sorting data. Users can search for specific data based on a range of criteria, such as key or value length, and sort the data by key or value. This makes it easy to find and analyze specific data in the database. EDMA is designed to be easy to use and customizable. Users can configure the tool to work with their specific database and data format requirements. They can also extend the functionality of the tool by creating custom plugins. Overall, EDMA is a valuable tool for developers working with embedded databases. It simplifies the process of working with low-level data and provides a convenient interface for analyzing and modifying data in embedded databases. 
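To give a feel for the byte-level view EDMA provides, here is a rough sketch of rendering raw key/value bytes as hex plus ASCII. This is only an illustration of the idea in Python, not EDMA's actual (Rust) implementation:

```python
# Sketch of a hex + ASCII view of raw database bytes, similar in spirit to
# what a byte-level database viewer renders. Illustrative only.

def hexdump(data: bytes, width: int = 8) -> str:
    lines = []
    for i in range(0, len(data), width):
        chunk = data[i:i + width]
        hex_part = " ".join(f"{b:02x}" for b in chunk)
        # Printable ASCII as-is; everything else shown as "."
        ascii_part = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{i:04x}  {hex_part:<{width * 3}} {ascii_part}")
    return "\n".join(lines)

print(hexdump(b"user:42\x00\x01"))
```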
Link to Github repository: https://github.com/nomadiz/edma
chungquantin
1,430,259
The Sky’s the Limit: Debating the Benefits of AWS Spending Restrictions
Yesterday, I posted a tweet with an imaginary conversation that is sadly based on many real...
0
2023-04-09T20:11:50
https://theburningmonk.com/2023/04/the-skys-the-limit-debating-the-benefits-of-aws-spending-restrictions/
aws
--- title: The Sky’s the Limit: Debating the Benefits of AWS Spending Restrictions published: true date: 2023-04-08 11:13:25 UTC tags: AWS canonical_url: https://theburningmonk.com/2023/04/the-skys-the-limit-debating-the-benefits-of-aws-spending-restrictions/ --- Yesterday, I posted [a tweet](https://twitter.com/theburningmonk/status/1643901447073824771) with an imaginary conversation that is sadly based on many real conversations I have had. The tweet received some interesting replies, and the point about spending limits came up multiple times. So as a thought experiment, let’s think about the pros and cons of a spending limit and if it makes sense. ![](https://theburningmonk.com/wp-content/uploads/2023/04/img_64314c335a099.png) ### What is a spending limit? But first, let’s define what a spending limit is and how it differs from the existing [AWS Budget feature](https://docs.aws.amazon.com/cost-management/latest/userguide/budgets-managing-costs.html). AWS Budget lets you monitor and manage your AWS costs by setting a custom cost and usage budget. It provides notifications when your defined thresholds are approached or exceeded. It helps you maintain better control over your spending and optimize resource utilization. The thresholds can be set against both actual accumulated cost as well as projected cost (based on the rate you’re currently spending). And you can set multiple thresholds in your budget. For example, at 20%, 50% and 100% of your budget. It should be said that **every AWS customer should use AWS Budget**. It’s not perfect by any stretch, especially since billing data are several hours behind the curve. But it can still help you detect and catch spending abnormalities before they become a bigger surprise, as illustrated by [this thread](https://twitter.com/donkersgood/status/1635244161778737152). What it doesn’t do, however, is stop you from using your AWS resources when you have reached your budget. That is what a spending limit is. 
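To make the multi-threshold idea concrete, here is a toy sketch of the 20%/50%/100% alerting logic. The numbers are made up, and in practice this is configured in AWS Budgets rather than implemented by hand:

```python
# Toy sketch of multi-threshold budget alerting (e.g. alerts at 20%, 50%,
# and 100% of a monthly budget). Numbers are illustrative only.

def crossed_thresholds(spend, budget, thresholds=(0.2, 0.5, 1.0)):
    """Return the alert thresholds (as percentages) that spend has crossed."""
    return [int(t * 100) for t in thresholds if spend >= t * budget]

print(crossed_thresholds(spend=620.0, budget=1000.0))   # [20, 50]
print(crossed_thresholds(spend=1050.0, budget=1000.0))  # [20, 50, 100]
print(crossed_thresholds(spend=100.0, budget=1000.0))   # []
```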
That is, you will no longer be able to use anything in your AWS account until the next payment period or until you raise the spending limit. Many services have spending limits, such as mobile phone plans, prepaid debit cards, and even some subscription-based services such as Midjourney. So let's have a look at some pros and cons of having a spending limit, and I will share my thoughts at the end. ### The cons of a spending limit 1. **Service disruption**: Once the spending limit is reached, your system becomes unavailable. This will negatively impact your business operations and user experience. 2. **Hard to find the right limit**: Businesses would likely struggle to accurately determine the appropriate spending limit for their needs. Most businesses already struggle to understand their cloud spending. This would exacerbate the problem by adding potential service disruption to the mix. 3. **More management overhead**: Businesses would need to closely monitor their AWS usage to ensure they do not hit the spending limit, which could require additional time and resources. 4. **Limited flexibility**: A hard spending limit doesn't cater for growing businesses or businesses with fluctuating needs. They may need more resources during certain periods, or may experience unexpected surges in traffic because of external events. 5. **Under-utilization of resources**: In an attempt to avoid reaching the spending limit, businesses might underutilize resources and fail to fully leverage the potential of AWS services. 6. **Potential loss of revenue for AWS**: AWS might lose revenue from customers who would have otherwise continued to use their services beyond the limit. ### The pros of a spending limit 1. **Tighter budget control**: AWS customers would have better control over their budget, as they can define an exact spending limit and avoid unexpected costs. 2. 
**More predictable cost**: With a hard spending limit in place, AWS customers can accurately predict their monthly AWS expenses and better allocate resources for their businesses. 3. **Prevent abuse**: A fine-grained spending limit can help prevent unauthorized usage or abuse of services, for example, by disabling spending in unused regions and services. 4. **Fewer billing surprises**: Customers would experience fewer surprises in their monthly bills, as their spending would be capped at the limit they set. 5. **Encourages efficient resource usage**: A spending limit might encourage AWS customers to optimize their AWS usage and make more efficient use of resources. ### My thoughts I'm against the idea of a global (account- or organization-wide) spending limit. The suggestion of a spending limit often comes up in the immediate aftermath of a costly mistake. However, as a business, the impact of a potential service disruption makes it a non-starter. After suffering revenue loss from a costly mistake, the last thing I'd want is a service outage to rub salt into the wound! Service disruption means user churn, damaged reputation, loss of market opportunities and wasted productivity. And the potential loss of revenue for AWS also means we're unlikely to ever see this kind of global spending limit. On the other hand, what works for businesses might not work best for hobbyists, for whom even small, unforeseen cost surprises can have an outsized impact. But more importantly, a small-scale, service-wide spending limit might be very beneficial. For example, allowing AWS customers to specify a spending limit on specific services, such as API Gateway and Lambda, can help ease many businesses' concerns about "potentially unbounded costs". ![](https://theburningmonk.com/wp-content/uploads/2023/04/img_64314c94f3599.png) For those of you who have worked with serverless technologies, you would know that these fears are usually unfounded. 
Especially if you have cost control mechanisms in place already, such as AWS Budget. Again, **everybody should be using AWS Budget**, regardless of whether you're a hobbyist or a multi-billion dollar company. If businesses are able to put a spending limit on a service they wish to experiment with, it will encourage more experimentation. This creates a virtuous feedback cycle that would benefit both AWS and its customers in the long run. ![](https://theburningmonk.com/wp-content/uploads/2023/04/img_64314ccfcf5e0.png) Even AWS measures the success of its services by adoption, not revenue. And we know that successful experiments with serverless are the launchpad for wider adoption of AWS services. I've lost count of the number of times clients have told me how they went all-in on serverless after an initial successful experiment. There are many such examples from [my podcast](https://realworldserverless.com/) alone. And if a service-wide spending limit can help alleviate some of these fears and encourage more customers to try things out, then it's a win-win situation for everyone. Additionally, the spending limit must be opt-in and could be time-boxed so it is not accidentally left in place after the experimentation phase. The post [The Sky’s the Limit: Debating the Benefits of AWS Spending Restrictions](https://theburningmonk.com/2023/04/the-skys-the-limit-debating-the-benefits-of-aws-spending-restrictions/) appeared first on [theburningmonk.com](https://theburningmonk.com).
theburningmonk
1,430,271
25 Programming Memes Refresh Your Mind
1. 2. 3. 4. 5. ...
0
2023-04-09T17:17:00
https://dev.to/jon_snow789/21-programming-memes-refresh-your-mind-1fa0
jokes, webdev, programming, javascript
### 1. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5e8otrwlfo71smasf6pz.png) --- ### 2. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ctjr73a6hre282i9pv06.png) --- ### 3. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1l2xvk9l8t6r4jqio9lm.png) --- ### 4. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/apfjrkam05norf9gh8rj.png) --- ### 5. {% instagram Co6neDVr4qp %} --- ### 6. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ly473jmxy955evavhtgo.png) --- ### 7. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1hnwwtxmx710tsl5pc98.png) --- ### 8. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2uo9719po2mqzk0ygthi.png) --- ### 9. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otavqk6subp8qecerr9c.png) --- ### 10. {% instagram CqObFa6PR0l %} --- ### 11. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tc5xzwalzn3xeaa0f5gz.png) --- ### 12. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fo1nv0qn4kc1guysv8ys.png) --- ### 13. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h6uizqsbt1lfeao5gat1.png) --- ### 14. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ngbfg3etyrv3my0xmphr.png) --- ### 15. {% instagram Cpj5rrNBY7k %} --- ### 16. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1p0r2t17e1cuyr9yhg9q.png) --- ### 17. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2oufegl3j4najt4l9e3d.png) --- ### 18. 
![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m0kolh74z5hi3cokqz4x.png) --- ### 19. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bf6m9edoi6fhafu0s3eg.png) --- ### 20. {% instagram CpPRfgJrhz4 %} --- ### 21. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zof3vtru1s306egubgc5.png) --- ### 22. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bivaw0mcvv0f5nlotc4l.png) --- ### 23. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hskjv0nzwxx1d0mj1ze3.png) --- ### 24. ![programming memes for coders](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/deprhhzblyo2ncq1wl87.png) --- ### 25. {% instagram Clgh_6QJPy- %} --- --- ## For more information 1. Subscribe to my YouTube channel [https://www.youtube.com/@democode](https://www.youtube.com/@democode) 2. Check out my Fiverr profile if you need any freelance work [https://www.fiverr.com/amit_sharma77](https://www.fiverr.com/amit_sharma77) 3. Follow me on Instagram [https://www.instagram.com/fromgoodthings/](https://www.instagram.com/fromgoodthings/) 4. Check out my Facebook page [Programming memes by Coder](https://www.facebook.com/programmingmemesbycoders) 5. Linktree [https://linktr.ee/jonSnow77](https://linktr.ee/jonSnow77) --- --- {% link https://dev.to/jon_snow789/20-github-repositories-every-developer-should-bookmarkhigh-value-resources-4jm6 %} --- ---
jon_snow789
1,430,332
How to Continue on the Path When You Want to Give Up
I've been studying Computer Science concepts now for the past couple of years, and I wish I could say...
0
2023-04-08T18:49:53
https://dev.to/beaucoburn/how-to-continue-on-the-path-when-you-want-to-give-up-5gbi
webdev, beginners, programming, discuss
I've been studying Computer Science concepts for the past couple of years, and I wish I could say that the path has been easy and lined with roses. Actually, the exact opposite is true. The path to building my skills has been a very difficult one, and many times I still feel like a failure. I look at my own work and wonder why it's not as good as other people's. I look at challenges from HackerRank or Frontend Mentor and many times I just feel lost. In my current job, I work as an English teacher, and I'm very fortunate to have made a lot of good friends who are programmers. I'm drawn to the idea that I can work with technology and be creative. These are two things I get very passionate about, and even though I still feel like I suck at programming, this pushes me forward. Many times, I have a love-hate relationship with this profession. There are many times when I have a bug, or I spend so much time trying to figure out a problem, that I want to run my head through a wall. I guess maybe it's a good thing that the walls where I live are made of brick, so that kind of stops me there, but if you've been down this path I'm positive you know how this feels. Then all of a sudden, you find the solution to the problem, and it's the best feeling in the world. Many people I talk to relate this to feeling like a wizard or a god, because now I've found the solution to this problem and made this thing work that has taunted me for so long. Spending a lot of time with these ups and downs takes a toll on the body and the mind. After a while, many people feel like they want to just give up. Many times I've wondered if this was the right industry for me and whether I should just stop. There are already people who are much smarter than I am, not to mention the AI that is coming to take all of our jobs anyway. But there is something I have been learning along the way: none of that matters. 
Also, there is a way to get around our problems when they become too difficult or when we look at our goals and think that they are insurmountable. The first thing that I believe helps is to know that when we are feeling this way, we are not alone. Actually, feeling this way is very normal. We all go through this time, and the important thing to realize is that what actually makes us successful is not failure or success exactly; it's not giving up. Things will be hard, but that's ok. Don't give up. When I realize that I need to keep pushing and that there isn't any new problem under the sun, but the problem still seems way too big, I need to change my thinking about the problem. Usually, it's my current thinking about the problem that is really contributing to it. As I change my thinking, I can look at the problem in a different way and start to break it down into smaller pieces. I don't really need to worry about conquering the whole mountain right now; I only need to worry about taking the next step. Taking the next step is a lot less intimidating than climbing the whole mountain. We have a saying, "The best way to eat an elephant is one piece at a time." We go little by little, and then before we know it, we have gone much further than we ever thought we could before. If you feel down and aren't sure you can continue, don't worry. Find someone else. Tell them what you are going through. Think about these different things, and remember that tomorrow is a new day.
beaucoburn
1,430,364
How to Create a Digital Clock Using JavaScript
Digital Clock Using JavaScript In this article, we’ll show you “How to create a digital...
0
2023-04-08T19:45:03
https://rutikkpatel.medium.com/how-to-create-a-digital-clock-using-javascript-5bcc1d11b5cf
javascript, webdev, frontend, tutorial
## Digital Clock Using JavaScript In this article, I will guide you through the process of building a simple digital clock that displays the current time on your webpage. We will start by creating a basic **HTML** document and adding some **CSS** to style the clock. Then, we will use **JavaScript** to get the current time and update the clock every second. Throughout the article, I will use key concepts and functions of JavaScript, such as the **Date** object and the **setInterval()** method. By the end of this blog, you will have a fully functional **digital clock** that you can use on your own website. Whether you are a **beginner** or an **experienced** developer, this article is perfect for anyone who wants to learn more about JavaScript and create an interactive element for their website. So, grab your favorite code editor, and let’s get started! ![Created By [Author](https://rutikkpatel.medium.com/) ( [Rutik Patel](https://rutikkpatel.medium.com/) )](https://cdn-images-1.medium.com/max/2560/1*Y24rG35gzAAGsCf0NesQWA.png) &nbsp; ## Points to be discussed * Preview * YouTube Tutorial * HTML Code * CSS Code * JavaScript Code * References &nbsp; ## Preview : A live demo of the website can be viewed by [clicking here](https://rutikkpatel.github.io/JavaScript_Programs/JS_Digital_Clock/index.html). 
![Preview of Digital Clock](https://cdn-images-1.medium.com/max/3722/1*8noiE1FMQYR85kfo_MaRvQ.png) &nbsp; ## YouTube Tutorial : {% youtube HUerJsojcJg %} &nbsp; ## HTML CODE : index.html ```html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>JavaScript Digital Clock</title> <link rel="stylesheet" href="style.css"> <!-- Bootstrap CDN --> <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0-alpha1/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-GLhlTQ8iRABdZLl6O3oVMWSktQOp6b7In1Zl3/Jr59b6EGGoI1aFkw7cmDA6j6gD" crossorigin="anonymous"> <!-- My JavaScript --> <script src="script.js"></script> </head> <body> <div class="container my-5 py-4 bg-warning"> <div class="jumbotron"> <h1 class="display-4">Current Time : <span id="time"></span></h1> <hr size=5px;> <h1 class="display-5">Current Date : <span id="date"></span></h1> </div> </div> <div class="text-center"> <h1>DIGITAL CLOCK USING JavaScript</h1> </div> <footer> <p id="footer" class= "text-center"> © 2023 Designed By : <a href="https://rutikkpatel.github.io/Portfolio1/" target="_block">Rutik Patel</a> </footer> <!-- Bootstrap Bundle Script CDN --> <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0-alpha1/dist/js/bootstrap.bundle.min.js" integrity="sha384-w76AqPfDkMBDXo30jS1Sgez6pr3x5MlQ1ZAGC+nuZB+EYdgRZgiwxhTBTkF7CXvN" crossorigin="anonymous"></script> </body> </html> ``` &nbsp; ## CSS CODE : style.css ```css * { margin: 0; padding: 0; } body { background: rgba(162, 155, 254, 0.5) !important; } .container { border-radius: 15px; } #footer a { text-decoration: none; line-height: 34px; color: #000000; } #footer a:hover { color: red; font-weight: bold; text-decoration: underline; } ``` &nbsp; ## JavaScript CODE : script.js ```js let a; let time; let date; const options = { weekday: 'long', year: 'numeric', month: 'long', day: 'numeric' } setInterval(() 
=> { a = new Date(); // Concatenate zero-padded hours, minutes and seconds time = String(a.getHours()).padStart(2, "0") + ":" + String(a.getMinutes()).padStart(2, "0") + ":" + String(a.getSeconds()).padStart(2, "0"); document.getElementById('time').innerText = time; // For Current Date // 14-02-2023 Date Format // date = a.toLocaleDateString(); // Display month day in string format - Wednesday, 20 December 2000 date = a.toLocaleDateString(undefined, options); document.getElementById('date').innerText = date; }, 1000) // update once per second ``` &nbsp; ## References : **GitHub Repository**: [https://github.com/rutikkpatel/JavaScript_Programs/tree/main/JS_Digital_Clock](https://github.com/rutikkpatel/JavaScript_Programs/tree/main/JS_Digital_Clock)
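As a closing aside on `script.js`: the built-in locale formatters can produce the same clock strings without manual concatenation or padding. This is a sketch, not the tutorial's code; `formatClock` and the `en-GB` locale choice are assumptions of this example.

```javascript
// Sketch: formatting the clock with toLocaleTimeString/toLocaleDateString
// instead of concatenating getHours()/getMinutes()/getSeconds() by hand.
// formatClock is a hypothetical helper, not part of the tutorial.
const options = { weekday: "long", year: "numeric", month: "long", day: "numeric" };

function formatClock(d) {
  return {
    // hour12: false yields a zero-padded 24-hour HH:MM:SS string
    time: d.toLocaleTimeString("en-GB", { hour12: false }),
    date: d.toLocaleDateString("en-GB", options),
  };
}

// A fixed timestamp (instead of new Date()) keeps the demo deterministic.
const { time, date } = formatClock(new Date(2023, 3, 8, 9, 5, 3));
console.log(time); // e.g. "09:05:03"
console.log(date); // e.g. "Saturday 8 April 2023"
```

Inside the tutorial's `setInterval` callback you would simply call `formatClock(new Date())` each second.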
rutikkpatel
1,431,601
Build a Custom Toast Notification Component with ReactJs & Context API
Toast notifications are a popular way to provide users with quick feedback and alerts on actions they...
0
2023-04-10T14:36:10
https://blog.frontendpro.dev/build-a-custom-toast-notification-component-with-reactjs-context-api
--- title: Build a Custom Toast Notification Component with ReactJs & Context API published: true date: 2023-04-10 10:25:17 UTC tags: canonical_url: https://blog.frontendpro.dev/build-a-custom-toast-notification-component-with-reactjs-context-api --- Toast notifications are a popular way to provide users with quick feedback and alerts on actions they take on the web application. While there are many pre-built libraries available for adding toast notifications to your React project, building your own custom component can provide greater flexibility and control over the user experience. In this blog post, I will guide you through the process of building a custom toast notification component using ReactJs and the Context API. You'll learn how to use Context API and the useReducer hook to manage the state of your toast notification component. We'll also show you how to customize the position of your notification and add a progress bar to display the remaining time. Additionally, we'll implement a pause on hover and a dismiss button functionality. By the end of this tutorial, you'll have a fully functional custom toast notification component that you can customize according to your project's design and functionality requirements. So, let's start building! ## Demo Check out this video demo to see the final Custom Toast Notification Component in action: {% embed https://youtu.be/S4UQt50UzGw %} ## Cloning the starter code Before we begin building our custom toast notification component, let's clone the starter code from GitHub for our React project. To do this, open up your terminal and navigate to the directory where you want to clone your project. 
Then, run the following command: ``` git clone https://github.com/rishipurwar1/react-custom-toast-notification.git ``` Once the cloning process is complete, navigate into the project directory by running: ``` cd react-custom-toast-notification ``` You will find the CSS code for this project in the `src/App.css` file, which is imported into the `App.js` file. Now, we'll install all the dependencies of our project. Enter the following command in your terminal: ``` npm install ``` This command will install all the required packages listed in the package.json file. Next, we'll start the development server by running: ``` npm start ``` You should see something like this on your browser: ![Custom Toast Notification](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nfvjmfwjqwni7n1ofdng.png) Now that we have our React project set up, we can move on to creating our custom toast notification component. ## Creating the Toast Notification Component We will start by creating a new file in the `src/components` folder called `Toast.js`. Inside the `Toast.js` file, we will define our functional component `Toast` which will return the toast notification markup. ```javascript const Toast = () => { return ( <div> {/* toast notification markup */} </div> ) } export default Toast; ``` Next, we will define the markup for our toast notification. In this example, we will create a toast notification with a message, an icon and a dismiss button. 
```javascript import { IconAlertCircleFilled, IconCircleCheckFilled, IconCircleXFilled, IconInfoCircleFilled, IconX, } from "@tabler/icons-react"; const toastTypes = { success: { icon: <IconCircleCheckFilled />, iconClass: "success-icon", progressBarClass: "success", }, warning: { icon: <IconAlertCircleFilled />, iconClass: "warning-icon", progressBarClass: "warning", }, info: { icon: <IconInfoCircleFilled />, iconClass: "info-icon", progressBarClass: "info", }, error: { icon: <IconCircleXFilled />, iconClass: "error-icon", progressBarClass: "error", }, }; const Toast = ({ message, type, id }) => { const { icon, iconClass, progressBarClass } = toastTypes[type]; return ( <div className="toast"> <span className={iconClass}>{icon}</span> <p className="toast-message">{message}</p> <button className="dismiss-btn"> <IconX size={18} color="#aeb0d7" /> </button> </div> ) } export default Toast; ``` In the above code, we have imported some icons from `@tabler/icons-react` library. Then we have defined an object called `toastTypes` to make our toast notification component more flexible. This object contains data for different types of notifications, such as `success`, `warning`, `info`, and `error`. Each type has its specific **icon** , **iconClass** , and **progressBarClass** associated with it. While `progressBarClass` is currently unused, it will be used later to give a background color to the progress bar. You can find the CSS code for all these classes in the `App.css` file. The `Toast` component takes three props - `message`, `type`, and `id`. The `message` prop is used to display the text message of the notification. The `type` prop is used to determine the type of notification and the corresponding icon and icon class name. Although the `id` prop is not currently used in the above code, we will use it later to remove the notification. Finally, we have defined a dismiss button in our Toast component, which will allow the user to remove the notification. 
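Since the component destructures `toastTypes[type]` unconditionally, every entry must expose the same three keys. The standalone sketch below (with plain strings standing in for the `@tabler/icons-react` components, which cannot render outside React) checks that invariant:

```javascript
// Stand-in for the article's toastTypes map: plain strings replace the
// @tabler/icons-react components so this sketch runs outside React.
const toastTypes = {
  success: { icon: "check", iconClass: "success-icon", progressBarClass: "success" },
  warning: { icon: "alert", iconClass: "warning-icon", progressBarClass: "warning" },
  info: { icon: "info", iconClass: "info-icon", progressBarClass: "info" },
  error: { icon: "x", iconClass: "error-icon", progressBarClass: "error" },
};

// The component destructures icon, iconClass and progressBarClass
// unconditionally, so every entry must define all three keys.
const requiredKeys = ["icon", "iconClass", "progressBarClass"];
const allValid = Object.values(toastTypes).every((config) =>
  requiredKeys.every((key) => key in config)
);
console.log(allValid); // true

// An unknown type makes toastTypes[type] undefined, so the destructuring
// in the component would throw; type is effectively an enum-like prop.
console.log(toastTypes["banner"] === undefined); // true
```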
Now that we have created the `Toast` component, let's create a container component called `ToastsContainer` that will hold all the `Toast` components. Let's create a new file `ToastsContainer.js` in the `src/components` directory and add the following code: ```javascript import Toast from './Toast'; const ToastsContainer = ({ toasts }) => { return ( <div className="toasts-container"> {toasts.map((toast) => ( <Toast key={toast.id} {...toast} /> ))} </div> ); }; export default ToastsContainer; ``` The `ToastsContainer` component accepts an array of toast objects as the `toasts` prop. It then maps over this array using the [map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) method and renders a `Toast` component for each object. We are using the spread syntax `{...toast}` to pass all the properties of the `toast` object, such as `message`, `type`, and `id`, as individual props to the `Toast` component. We'll render the `ToastsContainer` component inside the `ToastContextProvider` component, which we have yet to create. ## Setting up the Toast Notification Context Now that we have our `Toast` and `ToastsContainer` components set up, it's time to move on to the next step, which is creating a context for our toast notifications. First, let's create a new file called `ToastContext.js` in the `src/contexts` folder. Inside this file, we'll create a new context using the `createContext` function provided by React: ```javascript // ToastContext.js import { createContext } from "react"; export const ToastContext = createContext(); ``` We've created a new `ToastContext` using the `createContext` function, and we've exported it so that we can use it in other parts of our application. 
Now, let's create a `ToastContextProvider` component that will wrap our entire application and provide the `ToastContext` to all of its children: ```javascript // ToastContext.js export const ToastContextProvider = ({ children }) => { return ( <ToastContext.Provider value={{}}> {children} </ToastContext.Provider> ); }; ``` ## Creating the Toast Notification Reducer Function Next, let's create a new file called `toastReducer.js` in the `src/reducers` folder. In this file, we'll create a `toastReducer` function to manage the state of the toasts: ```javascript // toastReducer.js export const toastReducer = (state, action) => { switch (action.type) { case "ADD_TOAST": return { ...state, toasts: [...state.toasts, action.payload], }; case "DELETE_TOAST": const updatedToasts = state.toasts.filter( (toast) => toast.id !== action.payload ); return { ...state, toasts: updatedToasts, }; default: throw new Error(`Unhandled action type: ${action.type}`); } }; ``` Our `toastReducer` function takes in a `state` and an `action` and returns a new state based on the **action type**. We have two types of actions: `ADD_TOAST`, which adds a new toast to the `toasts` array in our state, and `DELETE_TOAST`, which removes a toast from the `toasts` array based on its ID. 
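Because the reducer is a pure function, its behavior can be checked outside React. Here is a self-contained walkthrough (restating `toastReducer` so the snippet runs on its own) that adds one toast and then deletes it by id:

```javascript
// Restating the reducer so this sketch is self-contained.
const toastReducer = (state, action) => {
  switch (action.type) {
    case "ADD_TOAST":
      return { ...state, toasts: [...state.toasts, action.payload] };
    case "DELETE_TOAST":
      return {
        ...state,
        toasts: state.toasts.filter((toast) => toast.id !== action.payload),
      };
    default:
      throw new Error(`Unhandled action type: ${action.type}`);
  }
};

// Walk a state through an add and a delete, as dispatch() will later do.
let state = { toasts: [] };
state = toastReducer(state, {
  type: "ADD_TOAST",
  payload: { id: 1, type: "success", message: "Saved!" },
});
console.log(state.toasts.length); // 1

state = toastReducer(state, { type: "DELETE_TOAST", payload: 1 });
console.log(state.toasts.length); // 0
```

Note that both branches return a new state object rather than mutating the old one, which is what lets React detect the change and re-render.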
Now let's go back to the `ToastContext.js` file and import the `toastReducer` function and [`useReducer`](https://react.dev/reference/react/useReducer) hook: ```javascript // ToastContext.js import { createContext, useReducer} from "react"; import { toastReducer } from "../reducers/toastReducer"; ``` Inside the `ToastContext.Provider` component, we'll use the `useReducer` hook that takes in the `toastReducer` function and `intialState`: ```javascript // ToastContext.js const initialState = { toasts: [], }; export const ToastContextProvider = ({ children }) => { const [state, dispatch] = useReducer(toastReducer, initialState); return ( <ToastContext.Provider value={{}}> {children} </ToastContext.Provider> ); }; ``` Now, we need to create some functions inside the `ToastContextProvider` component to **add** and **remove** toasts from the **state**. Firstly, we'll create an `addToast` function that takes in `message` and `type` as arguments and dispatches an `ADD_TOAST` action to add a new toast to the state: ```javascript // ToastContext.js const addToast = (type, message) => { const id = Math.floor(Math.random() * 10000000); dispatch({ type: "ADD_TOAST", payload: { id, message, type } }); }; ``` In addition to the `addToast` function, we'll create individual functions for each type of toast notification - `success`, `warning`, `info`, and `error`. 
These functions will call the `addToast` function with the corresponding type: ```javascript // ToastContext.js const success = (message) => { addToast("success", message); }; const warning = (message) => { addToast("warning", message); }; const info = (message) => { addToast("info", message); }; const error = (message) => { addToast("error", message); }; ``` To remove toast notifications, we'll create a `remove` function that takes in a toast `id` as an argument and dispatches a `DELETE_TOAST` action to remove the toast from the state: ```javascript // ToastContext.js const remove = (id) => { dispatch({ type: "DELETE_TOAST", payload: id }); }; ``` Then, create a `value` object that holds all the functions we have created and pass it to the `ToastContext.Provider` component: ```javascript // ToastContext.js export const ToastContextProvider = ({ children }) => { // rest of the code const value = { success, warning, info, error, remove }; return ( <ToastContext.Provider value={value}> {children} </ToastContext.Provider> ); }; ``` Next, we need to render the `ToastsContainer` component inside the `ToastContextProvider` component like this: ```javascript // import ToastsContainer import ToastsContainer from "../components/ToastsContainer"; export const ToastContextProvider = ({ children }) => { const [state, dispatch] = useReducer(toastReducer, initialState); // rest of the code return ( <ToastContext.Provider value={value}> <ToastsContainer toasts={state.toasts} /> {children} </ToastContext.Provider> ); }; ``` Finally, wrap our `App` component in the `ToastContextProvider` component in order to make the context available to all of our child components: ```javascript // src/index.js import { ToastContextProvider } from "./contexts/ToastContext"; root.render( <React.StrictMode> <ToastContextProvider> <App /> </ToastContextProvider> </React.StrictMode> ); ``` ## **Creating the** `useToast` Hook Next, let's create our custom hook, `useToast.js`, in the `src/hooks` 
folder, which will allow us to access the toast-related functions from the `ToastContext` directly without having to manually call `useContext` and import the `ToastContext` in every component. ```javascript // useToast.js import { useContext } from 'react'; import { ToastContext } from "../contexts/ToastContext"; export const useToast = () => useContext(ToastContext); ``` The `useToast` hook is a simple function that utilizes the `useContext` hook from React to access the `ToastContext`. This hook provides a simple and intuitive API for showing different types of toasts in our application since it returns the context's value, which includes all the functions for adding and removing toasts. ## Using the `useToast` Hook Now that we have created our custom hook `useToast`, we can use it to show toasts in our components. This hook provides a `value` of the context containing all the toast functions that we defined earlier: `success`, `warning`, `info`, `error`, and `remove`. To use the hook, we simply need to import it into our `App` component and call it to get access to the `value` object. After that, we can assign it to a variable named `toast`: ```javascript // App.js import { useToast } from "./hooks/useToast"; const App = () => { const toast = useToast(); return ( // JSX ); }; ``` Next, we need to add an `onClick` event to each of the buttons defined in the `App` component so that when a button is clicked, it should display the corresponding toast notification. For example, to show a success toast, we would call `toast.success("MESSAGE")`, where `MESSAGE` is the text we want to display in the toast. 
Here's an example of how we can use the `useToast` hook in an `App` component: ```javascript // App.js const App = () => { const toast = useToast(); return ( <div className="app"> <div className="btn-container"> <button className="success-btn" onClick={() => toast.success("Success toast notification")} > Success </button> <button className="info-btn" onClick={() => toast.info("Info toast notification")} > Info </button> <button className="warning-btn" onClick={() => toast.warning("Warning toast notification")} > Warning </button> <button className="error-btn" onClick={() => toast.error("Error toast notification")} > Error </button> </div> </div> ); }; ``` Now, you should be able to create new toast notifications by clicking on those buttons. ![Creating a new toast notification](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/me4onj2nkal7umsa6ndg.gif) Using the `useToast` hook can make adding and removing toasts in our app easier, and it's also a great way to keep your code clean and organized. > *Check out my open-source project, [FrontendPro](https://www.frontendpro.dev/) to take your frontend development skills to the next level for FREE!* ## Adding dismiss button functionality To add this functionality, we'll first import the `useToast` hook in our `Toast` component and call the `useToast` hook to get access to the `value` object: ```javascript // Toast.js import { useToast } from "../hooks/useToast"; const Toast = ({ message, type, id }) => { const { icon, iconClass, progressBarClass } = toastTypes[type]; const toast = useToast(); // call useToast return ( <div className="toast"> <span className={iconClass}>{icon}</span> <p className="toast-message">{message}</p> <button className="dismiss-btn"> <IconX size={18} color="#aeb0d7" /> </button> </div> ); }; ``` Next, we'll define a `handleDismiss` function, which will call the `toast.remove()` function with a toast `id` to remove the toast. 
We will then attach an `onClick` event to the dismiss button to call the `handleDismiss` function: ```javascript const Toast = ({ message, type, id }) => { // code const handleDismiss = () => { toast.remove(id); }; return ( <div className="toast"> <span className={iconClass}>{icon}</span> <p className="toast-message">{message}</p> {/* Add onClick */} <button className="dismiss-btn" onClick={handleDismiss}> <IconX size={18} color="#aeb0d7" /> </button> </div> ); }; ``` With this change, you can now manually remove a toast by clicking the dismiss button. ![Deleting toast notifications using dismiss button](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/djpfuuxtrri90tmeucyk.gif) ## Adding a Progress Bar and Auto-Dismiss Timer In this section, we will add a progress bar that will indicate how much time is remaining before the toast disappears and an auto-dismiss timer to automatically remove the toast after a certain amount of time. To implement the auto-dismiss timer functionality, we will use the `useEffect` hook to attach a timer to each toast using `setTimeout` when it's mounted. This timer will call the `handleDismiss` function after a certain amount of time has passed, which will remove the toast from the screen. We can achieve this by adding the following code to our Toast component: ```javascript // Toast.js import { useEffect, useRef } from "react"; // import useEffect & useRef const Toast = ({ message, type, id }) => { // rest of the code const timerID = useRef(null); // create a Reference const handleDismiss = () => { toast.remove(id); }; useEffect(() => { timerID.current = setTimeout(() => { handleDismiss(); }, 4000); return () => { clearTimeout(timerID.current); }; }, []); return ( // JSX ); }; ``` In the above code, we create a new timer using `setTimeout` that will call the `handleDismiss` function after 4000 milliseconds (or 4 seconds). 
We then return a `cleanup` function from the `useEffect` hook that will clear the timer using the `clearTimeout` function when the `Toast` component is unmounted. With these changes, our `Toast` component will now automatically remove itself after 4 seconds. Now let's add the progress bar to our Toast component. First, update the JSX of our `Toast` component to include the progress bar like this: ```javascript const Toast = ({ message, type, id }) => { // code return ( <div className="toast"> <span className={iconClass}>{icon}</span> <p className="toast-message">{message}</p> <button className="dismiss-btn" onClick={handleDismiss}> <IconX size={18} color="#aeb0d7" /> </button> {/* Toast Progress Bar */} <div className="toast-progress"> <div className={`toast-progress-bar ${progressBarClass}`}></div> </div> </div> ); }; ``` Next, we need to style and animate the progress bar using CSS: ```css .toast-progress { position: absolute; bottom: 0; left: 0; width: 100%; height: 4px; background-color: rgba(0, 0, 0, 0.1); } .toast-progress-bar { height: 100%; animation: progress-bar 4s linear forwards; } .toast-progress-bar.success { background-color: var(--success); } .toast-progress-bar.info { background-color: var(--info); } .toast-progress-bar.warning { background-color: var(--warning); } .toast-progress-bar.error { background-color: var(--error); } @keyframes progress-bar { 0% { width: 100%; } 100% { width: 0%; } } ``` In the above code, we define the progress bar style in the `.toast-progress` and `.toast-progress-bar` CSS classes. Additionally, we define four more CSS classes: `.toast-progress-bar.success`, `.toast-progress-bar.info`, `.toast-progress-bar.warning`, and `.toast-progress-bar.error`. These classes define the background color of the progress bar based on the dynamic `progressBarClass` value in the `Toast` component. We also use the `@keyframes` rule to define the `progress-bar` animation. 
This animation shrinks the width of the progress bar from 100% to 0% over 4 seconds. After applying these changes, our Toast component now displays an animated progress bar. ![Toast notification progress bar](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nw837b15rf185lomodn4.gif) ## Adding Pause on Hover functionality After adding the progress bar to our `Toast` component, we can now further enhance its functionality by adding a pause-on-hover feature. With this feature, users can pause the auto-dismiss timer and the progress bar animation by simply hovering their mouse over the Toast. To add the pause-on-hover functionality, we can use the `onMouseEnter` and `onMouseLeave` events in React. When the user hovers over the Toast, we can clear the auto-dismiss timer using the `clearTimeout` function to pause the timer. Then, when they move their mouse away, we can start a new timer with the remaining time. First, let's create a new reference called `progressRef` and attach it to the progress bar element so that we can pause and resume its animation. ```javascript // Toast.js const progressRef = useRef(null); {/* Toast Progress Bar */} <div className="toast-progress"> <div ref={progressRef} className={`toast-progress-bar ${progressBarClass}`} ></div> </div> ``` Next, we will create a `handleMouseEnter` function to handle the `onMouseEnter` event. When the mouse enters the toast, we will clear the timer using `clearTimeout` and set the progress bar animation to `paused` to pause the animation. ```javascript // Toast.js const handleMouseEnter = () => { clearTimeout(timerID.current); progressRef.current.style.animationPlayState = "paused"; }; ``` Similarly, we will create a `handleMouseLeave` function to handle the `onMouseLeave` event. When the mouse leaves the toast, we will set the progress bar animation back to `running` to resume the animation. 
```javascript const handleMouseLeave = () => { const remainingTime = (progressRef.current.offsetWidth / progressRef.current.parentElement.offsetWidth) * 4000; progressRef.current.style.animationPlayState = "running"; timerID.current = setTimeout(() => { handleDismiss(); }, remainingTime); }; ``` In the above code, we first calculate the remaining time by dividing the current width of the progress bar by the total width of the progress bar container and multiplying it by the total duration (which is 4 seconds in our case). Next, we set the animation play state back to `running` to resume the progress bar animation. Then, we create a new timer using `setTimeout` and pass in the `handleDismiss` function as the callback, which will automatically dismiss the `Toast` after the remaining time has passed. This ensures that the `Toast` will still auto-dismiss even if the user pauses the animation for a certain period of time. Now we need to add these event listeners to the wrapper `div` of the `Toast` component. ```javascript const Toast = ({ message, type, id }) => { // rest of the code return ( <div className="toast" onMouseEnter={handleMouseEnter} onMouseLeave={handleMouseLeave} > <span className={iconClass}>{icon}</span> <p className="toast-message">{message}</p> <button className="dismiss-btn" onClick={handleDismiss}> <IconX size={18} color="#aeb0d7" /> </button> {/* Toast Progress Bar */} <div className="toast-progress"> <div ref={progressRef} className={`toast-progress-bar ${progressBarClass}`} ></div> </div> </div> ); }; ``` With these changes, users can now hover over the `Toast` to pause the animation and resume it when they move their mouse away. ## Customizing Toast Notification Component position To customize the position of our Toast notification component, we can pass a different `position` class as a prop to the `ToastsContainer` component and add the corresponding CSS for that class. 
By default, our `ToastsContainer` component is positioned at the top right of the screen using the `.toasts-container` class. Let's first create a few more position classes in our CSS file and remove the default `top` and `right` property from the `.toasts-container`: ```css .toasts-container { display: flex; flex-direction: column-reverse; row-gap: 12px; position: fixed; z-index: 9999; } .top-right { top: 16px; right: 16px; } .top-left { top: 16px; left: 16px; } .top-center { top: 16px; left: 50%; transform: translateX(-50%); } .bottom-left { bottom: 16px; left: 16px; } .bottom-center { bottom: 16px; left: 50%; transform: translateX(-50%); } .bottom-right { bottom: 16px; right: 16px; } ``` Next, let's update our `ToastsContainer` component to accept a `position` prop and add that to the wrapper div: ```javascript const ToastsContainer = ({ toasts, position = "top-right" }) => { return ( <div className={`toasts-container ${position}`}> {toasts.map((toast) => ( <Toast key={toast.id} {...toast} /> ))} </div> ); }; ``` Now, when we use the `ToastsContainer` component, we can pass a different `position` prop to customize its position on the screen: ```javascript // ToastContext.js <ToastContext.Provider value={value}> <ToastsContainer toasts={state.toasts} position="bottom-right" /> {children} </ToastContext.Provider> ``` With these changes, we can customize the position of our Toast notifications by simply passing a `position` class as a prop. ## Adding animation to the Toast component Currently, our Toast notifications appear and disappear suddenly without any animation. In this section, we will add a slide-in and slide-out animation to the `Toast` component using CSS. To add a slide-in effect, we can use the `@keyframes` rule to define an animation that gradually changes the `opacity` of the Toast from `0` to `1` and translates it from `100%` to `0%` along the x-axis. We can then apply this animation to the `.toast` class using the `animation` property in CSS. 
```css /* App.css */ .toast { /* rest of the properties */ animation: slide-in 0.4s ease-in-out forwards; } @keyframes slide-in { 0% { opacity: 0; transform: translateX(100%); } 100% { opacity: 1; transform: translateX(0%); } } ``` To add a slide-out effect, we can use a similar approach. We can define another animation using the `@keyframes` rule that gradually changes the opacity of the Toast from 1 to 0 and translates it from `0%` to `100%`. ```css /* App.css */ .toast-dismissed { animation: slide-out 0.4s ease-in-out forwards; } @keyframes slide-out { 0% { opacity: 1; transform: translateX(0%); } 100% { opacity: 0; transform: translateX(100%); } } ``` To apply the `.toast-dismissed` class to the `Toast` component when it is dismissed, we can create a new state variable called `dismissed` and set it to `true` when the Toast is removed. Then, we can conditionally add the `.toast-dismissed` class to the `Toast` component based on the value of `dismissed`. ```javascript // import useState hook import { useEffect, useRef, useState } from "react"; const Toast = ({ message, type, id }) => { // rest of the code const [dismissed, setDismissed] = useState(false); const handleDismiss = () => { setDismissed(true); setTimeout(() => { toast.remove(id); }, 400); }; return ( <div className={`toast ${dismissed ? "toast-dismissed" : ""}`} onMouseEnter={handleMouseEnter} onMouseLeave={handleMouseLeave} > {/* rest of the code */} </div> ); }; ``` In the above code, we have also updated the `handleDismiss` function slightly. Now, when the dismiss button is clicked or the auto-dismiss timer completes, the `dismissed` state variable is set to `true`, and the `.toast-dismissed` class is added to the `Toast` component. This triggers the `slide-out` keyframes animation. Finally, after a short delay of 400ms, we remove the Toast component using the `toast.remove()` function. With these changes, we have added animations to our Toast component. 
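One small design note: the 400 ms passed to `setTimeout` must stay in sync with the `0.4s` duration of the `slide-out` animation. Deriving both from a single constant avoids the two values drifting apart; this is a sketch with our own names (`EXIT_ANIMATION_MS`), not the article's code:

```javascript
// Sketch: deriving both durations from one constant so the setTimeout
// delay and the CSS animation duration can never drift apart.
// EXIT_ANIMATION_MS is a hypothetical name, not from the article.
const EXIT_ANIMATION_MS = 400;

// Value for setTimeout(() => toast.remove(id), EXIT_ANIMATION_MS)
console.log(EXIT_ANIMATION_MS); // 400

// Matching CSS duration, e.g. injected via an inline style:
// style={{ animationDuration: cssDuration }}
const cssDuration = `${EXIT_ANIMATION_MS / 1000}s`;
console.log(cssDuration); // 0.4s
```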
When the Toast component appears, it slides in from the right of the screen, and when it is dismissed, it slides out in the same direction. ## Conclusion In this blog, we covered various aspects of building the Toast Notification component, including creating a context, creating a Toast component, and adding functionality such as an auto-dismiss timer, progress bar, pause on hover, and animation. We also learned how to customize the position of the Toast component using CSS. With this custom Toast notification component, you can easily add beautiful and informative notifications to your application. The possibilities for customization are endless, and you can tailor the component to your specific needs. We hope this blog has been helpful and informative, and we encourage you to try building your own custom Toast notification component using ReactJS and Context API. If you have any feedback or suggestions, please feel free to leave them in the comments section below. Also, don't forget to follow me on [Twitter](https://twitter.com/thefierycoder) for more exciting content. Thank you for reading! > *Check out my open-source project, [FrontendPro](https://www.frontendpro.dev/) to take your frontend development skills to the next level for FREE!*
thefierycoder
1,432,773
Join Celebrations! Appwrite 1.3 Ships Relationships
Our latest release Appwrite 1.3 includes the most requested feature, Database Relationships 🎉 While...
0
2023-04-12T09:33:48
https://dev.to/appwrite/join-celebrations-appwrite-13-ships-relationships-57fc
opensource, programming, news, showdev
Our latest release Appwrite 1.3 includes the most requested feature, **Database Relationships** 🎉 While relationships deserve all the spotlight, we are bringing many more exciting features. Databases were the focus of this release and got some new and shiny **query operators**, such as `Query.select()` or `Query.isNull()`. Databases now give you more freedom, with some of the **index and page limits** eliminated. Oh, and I can’t stress this enough, the new Console UI with **configurable layout** options and a **multi-level menu** is just beautiful 😍

The Auth service got some thrilling new **password policy** configurations, and the Teams API now lets you set **team preferences**. The Functions service got improvements to the **variables UI** in the Appwrite Console, and that’s not even everything! Grab your favorite snack and enjoy our announcement. 🍿

## 🤔 New to Appwrite?

[Appwrite](https://appwrite.io/) is an open-source back-end-as-a-service that abstracts all the complexity of building a modern application by providing you with a set of REST, GraphQL, and Realtime APIs for your core back-end needs. Appwrite does the heavy lifting for developers and handles user authentication and authorization, databases, file storage, cloud functions, webhooks, and much more!

## 🔗 Relationships in Database Service

[Appwrite Databases](https://appwrite.io/docs/databases) lets you store your data and share it with all of your app users. Relationships allow you to create connections between your data to make it predictable, consistent, and easy to work with. With Relationships Beta added to Appwrite, you can now set up the four most common types of relationships directly in your Appwrite Console and start querying your data together.

With one-to-one and many-to-one relations, you can now store similar data in separate collections to have separate permission rules. This lets you build secure apps but still keep reading data exceptionally simple.
One-to-many relations bring a powerful replacement for all of your array attributes. With a related collection, you can now store data of any complexity and query it as you like. You can also get all the data in one request without any need to do mapping on the client side. Many-to-many relationships are now much simpler; you no longer need to manage your own junction table, as we do it for you.

As complex as relationships sound, with Appwrite there is no need to write lengthy queries. After setting up a relationship, simply read your documents and get exactly what you expect:

```ts
const databases = new Databases(client);

const response = await databases.listDocuments('production', 'calendars');
console.log(response);
```

```json
{
  "total": 1,
  "documents": [
    {
      "title": "Meetings",
      "events": [
        {
          "name": "Lunch Break",
          "date": "2023-04-03T16:30:00.000+00:00"
        },
        {
          "name": "Onboarding",
          "date": "2023-04-03T18:00:00.000+00:00"
        }
      ]
    }
  ]
}
```

It’s that easy! 🤩 If the response payload ever grows too big, with the new `Query.select()` operator you can ask for only the attributes you are interested in. More on that in a later section 🤫

## 🤸 Databases Getting More Flexible

Maximum page size and strict indexes make the end product faster but can create a lot of road bumps during development. These limits even created a blocker for some use cases. **Not anymore!**

The maximum page size for getting your documents was capped at 100. In Appwrite 1.3, this limit was eliminated. We noticed it felt limiting to many projects, and implementing infinite cursor pagination wasn’t everyone’s cup of tea. While it is not recommended to query millions of documents in a single request, we made sure to provide you with the tools and let you, the developer, make the decision.

Strict indexes for queries are no longer necessary in Appwrite 1.3! If you ever used Appwrite Databases, chances are you have seen the error `⚠️Index Not Found`.
Such strict indexes can get in the way during development and can be a blocker to some use cases with complex filtering possibilities. With Appwrite 1.3, you are no longer going to get errors telling you to set up an index. Let’s get a little crazy and see what we can do now: ```ts const databases = new Databases(client); const response = await databases.listDocuments('production', 'events', [ Query.limit(999999), Query.orderDesc('$id'), Query.orderAsc('name'), Query.orderDesc('date'), Query.greaterThan('duration', 60 * 60), Query.search('description', 'urgent') ]); console.log(response); ``` As complex as this looks, Appwrite will accept this query and give you the results. As your app grows and this query gets slower, make sure to add indexes!😅 ## 🔍 New Database Queries Since we released the last database refactor in Appwrite 1.0, we noticed a huge increase in developers using Appwrite Databases. Databases became much quicker, more powerful, and more stable. With all the new feedback from our community, we noticed queries need some more attention, which is exactly what we did 😎 We are proud to announce the most requested query `Query.isNull()` alongside `Query.isNotNull()`. Null query lets you check if the value of an attribute in the document was provided, or was left empty during the creation of the document. With relationships added to databases, we decided to also introduce `Query.select()` to allow developers to specify what attributes they are interested in, similarly to how it can be done with GraphQL. Specifying attributes with a select query not only decreases the size of the response, but also makes underlying queries faster and more efficient. We also added `Query.startsWith()` and `Query.endsWith()` to bring some more searching possibilities to your string attributes. While this doesn’t sound like much, it is a great addition for many applications as workarounds for this feature were overcomplicated. 
Last but not least, we added `Query.between()` to improve the performance of your numeric and datetime queries when you are looking for a specific range. This was already possible with `Query.greaterThan()` and `Query.lessThan()`, but by using `Query.between()` we have seen improvements in performance. We are constantly working on adding new query capabilities and plan to keep introducing new ones with each release.

Let’s bring all the queries together and see them in action!

```ts
const databases = new Databases(client);

const response = await databases.listDocuments('production', 'profiles', [
    Query.isNull('employerId'),
    Query.startsWith('phoneNumber', '+61'),
    Query.between('age', 15, 18),
    Query.select(['twitterUrl', 'linkedInUrl'])
]);
console.log(response);
```

We just filtered profiles to only see unemployed folks with an Australian phone number, at a good age for a student part-time job. We are only fetching their socials, so we can easily get in touch with them.

## ✨ Improved Passwords Security

To keep the users of your application safe, Appwrite already enforced a minimum password length of 8 characters. Today we have improved password security with two new rules. 🔥

With Appwrite 1.3 you can now enable password history and configure its length. With password history enabled, your users won’t be allowed to reuse the exact same password they used previously when changing their password. The length of the password history lets you set how far into the past Appwrite should remember, up to a maximum of 20 passwords.

You can now also enable a password dictionary rule. Appwrite knows the most common passwords, and with this rule enabled, it will not allow your users to set any of them. It prevents your users from having passwords like `password`, `12345678`, or `qwertyui`. Appwrite currently knows the 10,000 most commonly used passwords, thanks to the same list used by other industry-leading auth providers.
You can check out the [dictionary list](https://github.com/danielmiessler/SecLists/blob/master/Passwords/Common-Credentials/10k-most-common.txt) on GitHub.

![Password Policies Console UI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4tdc5h66hqw3179td9wv.png)

## 🧑‍🤝‍🧑 Teams Get Preferences

The Teams API got an upgrade and can now store preferences on teams. Similarly to user preferences, this feature is intended to help you store app configuration and apply it at the proper scope. You can, for example, use team preferences for business apps where the team owner (the IT department in a company) does all the configuration and layout setup. All team members (all employees) then read this setup from preferences and see the app in exactly the same way.

With this addition to the preferences family, you can now store preferences at all the necessary levels. Let’s consider a scenario where we store whether a dark theme is enabled. By writing to the local device, you are creating a preference at the session level: when you open the app on a new device, you will no longer have the dark theme. By using user preferences and storing it at the user level, the dark theme is enabled as soon as you sign into your account. Finally, with team-level preferences, you can share the theme with multiple users such as students, employees, friends, or the community.

![Team Preferences Console UI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ue8dblkilimztsho9jz.png)

## 🔑 Importing Function Variables

Variables in Appwrite Functions let you set secrets that you can securely access in your code. With Appwrite 1.3 we added a highly requested `Import .env file` button 🤯 With this new feature you can drag and drop the `.env` file from your function folder, and the Console will take care of adding all the variables defined there. Additionally, all existing variables get updated with values from your file.
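The Console handles all of this for you, but to picture what such an import involves, it essentially boils down to parsing `KEY=VALUE` lines. The sketch below is a hypothetical illustration of that parsing (the `parseEnv` name and its behavior are assumptions here, not Appwrite’s actual implementation):

```javascript
// Hypothetical sketch of what importing a .env file boils down to:
// split into lines, skip comments and blanks, and collect KEY=VALUE pairs.
// This is NOT Appwrite's implementation -- just an illustration.
const parseEnv = (contents) => {
  const variables = {};
  for (const line of contents.split("\n")) {
    const trimmed = line.trim();
    if (trimmed === "" || trimmed.startsWith("#")) continue; // skip blanks and comments
    const separator = trimmed.indexOf("=");
    if (separator === -1) continue; // ignore malformed lines
    const key = trimmed.slice(0, separator).trim();
    // values may themselves contain "=", so only split on the first one
    const value = trimmed.slice(separator + 1).trim();
    variables[key] = value;
  }
  return variables;
};

const env = parseEnv("# secrets\nAPI_KEY=abc123\nDB_URL=mysql://host/db\n");
console.log(env); // { API_KEY: 'abc123', DB_URL: 'mysql://host/db' }
```

Real `.env` dialects also handle quoting and escaping, but the key/value idea above is the core of what ends up as your function’s variables.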
This is exciting for those of us who have many variables and don’t want to create them one by one.

![Function Variables Import GIF](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lgd3d04algqcqyczg67d.gif)

## ⏩ What’s Next?

With Appwrite 1.3 out in public, we are starting to actively work on features for Appwrite 1.4. Some of you may have already noticed GitHub activity around Functions, which we plan to make the focus of the next release. As always, we are looking forward to hearing feedback from you and prioritizing the features **YOU** are anticipating.

## 📚 Learn More

You can use the following resources to learn more and get help:

* [🚀 Appwrite GitHub](https://github.com/appwrite/appwrite/)
* [📜 Appwrite Docs](https://appwrite.io/docs)
* [💬 Discord Community](https://appwrite.io/discord)
meldiron
1,433,491
Effortlessly migrate your on-premise machines & application to any Cloud platform
Effortlessly migrate your on-premise machines &amp; application to any Cloud...
0
2023-04-12T08:57:55
https://dev.to/tikoosuraj/effortlessly-migrate-your-on-premise-machines-application-to-any-cloud-platform-4c9g
## Effortlessly migrate your on-premise machines & applications to any Cloud platform

![Image taken from AWS](https://cdn-images-1.medium.com/max/4200/0*uyhfvc_ZJxSo839d.png)

This blog is about how we can effortlessly migrate our on-premises machines and applications to the AWS cloud platform. To do this, we will be using the **CloudEndure** service, which assists us with end-to-end migration.

**CloudEndure** is a SaaS offering that provides an automated lift-and-shift (rehost) solution, simplifying, expediting, and reducing the cost of migrating applications to AWS.

In this article, we will be migrating our virtual machines from one AWS region to another. The source can vary; in your case, it could be your on-premises environment. For our demonstration, we are doing region-to-region migration.

To get started with CloudEndure, sign up for a **CloudEndure** account. CloudEndure is a separate portal, not available directly in the AWS console. Below is the URL for the portal.
> [https://console.cloudendure.com/#/register/register](https://console.cloudendure.com/#/register/register)

![](https://cdn-images-1.medium.com/max/3810/1*br0RHaXHiOvZYkwh-kUwrg.png)

Once registration is done, you can log in to the portal directly. In the portal, we have created a project called **DemoMigration**. For our use case, we have an Amazon Linux machine running in EU (**Ireland**), and we will migrate it to another region, AWS EU (**Stockholm**).

![Image taken from [Surajtikoo](undefined)](https://cdn-images-1.medium.com/max/3832/1*kuEzr2s_pMRPpcpqSQmJ5A.png)

CloudEndure consists of three parts: the source, replication, and target. Once you have set up the project, configure the settings below.

![](https://cdn-images-1.medium.com/max/2194/1*lsIdXH5l750LEpZpKfL_Hg.png)

The **REPLICATION SETTINGS** tab enables us to define the Source and Target environments, and the default Replication Servers in the Staging Area of the Target infrastructure.
![Image taken by [Surajtikoo](undefined)](https://cdn-images-1.medium.com/max/3390/1*j8h-h6UJNhI_CnbhswA_BA.png)

Once all the required configuration is set up, install the CloudEndure agent on the source machine to initiate replication. We have an Amazon virtual machine with the Docker service installed in the AWS environment (the source machine). See more on installing the CloudEndure agent [here](https://docs.cloudendure.com/Content/Installing_the_CloudEndure_Agents/Installing_the_Agents/Installing_the_Agents.htm).

The replication instance will be launched on the target side, and the replicated data will be stored in EBS snapshots.

![](https://cdn-images-1.medium.com/max/3008/1*-0hzxyatEsLiR6YbGCsNlQ.png)

Before proceeding, make sure you have defined the blueprint for the target machine; the target machine will be created based on it. Once all the required configuration is in place, it is time to start the replication process to migrate the server.

![Image taken by [Surajtikoo](undefined)](https://cdn-images-1.medium.com/max/3836/1*C6Fd3LUszpzcVl5Nd1gKhw.png)

CloudEndure gives the option to launch in test mode and cutover mode.

**Test Mode:** Each Source machine for which a Target machine is launched will be marked as having a test Target machine launched on this date.

**Cutover Mode:** Each Source machine for which a Target machine is launched will be marked as having a cutover Target machine launched on this date.

Once the migration is completed, we can see the machine being created in the target region with the same configuration. This approach is very useful when an organization needs to migrate many of its servers and applications to the cloud.
Below you can find some helpful links and documentation on best practices, to help you become more familiar with the CloudEndure Migration service:
[https://docs.cloudendure.com/Content/Configuring_and_Running_Migration/Migration_Best_Practices/Migration_Best_Practices.htm](https://docs.cloudendure.com/Content/Configuring_and_Running_Migration/Migration_Best_Practices/Migration_Best_Practices.htm)
tikoosuraj
1,436,005
React Toastify : The complete guide.
This article was originally published on the React Toastify : The complete guide. In this guide we...
0
2023-04-14T19:19:35
https://deadsimplechat.com/blog/react-toastify-the-complete-guide/
react, tutorial, javascript, webdev
This article was originally published on [React Toastify: The complete guide](https://deadsimplechat.com/blog/react-toastify-the-complete-guide/).

In this guide, we will start with the basics of creating a toast notification and step by step move on to creating complex notifications, exploring the full capabilities of the React Toastify library.

Here is a rundown of what we are going to learn in this article:

1. What are Toast Notifications?
2. What is React Toastify?
3. Installing React Toastify
4. Creating a basic toast notification
5. Types of Toast Messages
6. Setting the Toast Message Position
7. Custom Styling with HTML and CSS of Toast Messages
8. Using transition and animations
9. Promise based Toast Messages
10. Handling autoClose
11. Render String, number and Component
12. Setting custom icons, using built-in icons and disable icons
13. Pause toast when window loses focus
14. Delay toast notification
15. Implementing a controlled progress bar
16. Updating a toast when an event happens
17. Custom close button or Remove the close button
18. Changing to different transitions
19. Defining custom enter and exit animation
20. Drag to remove

If you are looking for a [JavaScript chat API and SDK](https://deadsimplechat.com/javascript-chat-api-sdk), DeadSimpleChat is the solution for you.

## What are Toast Notifications?

Toast (or Toastify) notifications are pop-up messages that appear with some information, generally at an edge of the screen. This could be information like success, loading, error, or caution, and a toast may or may not include a progress bar. There are a variety of toast notifications available. Refer to the image below for an example:

![Toast Notification Example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bxaut8wscc5vvbynshf3.png)

## What is React Toastify?

React Toastify is one of the most popular libraries out there for creating toast notifications in React.
With React Toastify you can easily create toast notifications and alerts in your React application.

## Installing React Toastify

To install React Toastify, you first need a React application. You can add it to an existing React application, or if you are learning, you can create a new React application with `create-react-app`. Then, in your React application, install React Toastify with npm:

```bash
npm install --save react-toastify
```

with yarn

```bash
yarn add react-toastify
```

## Creating a basic toast notification

Creating a basic toast notification is easy. In your App.js file, import React Toastify and the react-toastify CSS like:

```javascript
import { ToastContainer, toast } from 'react-toastify';
import 'react-toastify/dist/ReactToastify.css';
```

then in your App() function just create a notification

```javascript
function App(){
  const notify = () => toast("This is a toast notification !");
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <ToastContainer />
    </div>
  )
}
```

Now we have learned how to create basic toast notifications. Let us now learn about some of the properties of toast notifications, their styling characteristics, and the types of toast notifications available.

## Types of toast notifications

There are 5 pre-defined types of toast notifications available in React Toastify:

1. Default
2. info
3. success
4. warning
5.
error

This is how these five types look and how you can implement each type below. Just call the toast emitter with the type you want to implement.

#### Default

![default notification](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ldqcjtbc0tgj0zf6svcd.png)

```javascript
const notify = () => toast("This is a toast notification !");
```

This is the default, so there is no type to call here. The complete function looks like this:

```javascript
function Toastify(){
  const notify = () => toast("This is a toast notification !");
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <ToastContainer />
    </div>
  )
}
```

#### Info

![info notification react toastify](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tekjwwgkrwtncfjehqpj.png)

```javascript
const notify = () => toast.info("This is a toast notification !");
```

The complete function looks like this:

```javascript
function Toastify(){
  const notify = () => toast.info("This is a toast notification !");
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <ToastContainer />
    </div>
  )
}
```

#### Success

![success](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dzk26lkbf416c7aohpj5.png)

```javascript
const notify = () => toast.success("This is a toast notification !");
```

The complete function looks like this:

```javascript
function Toastify(){
  const notify = () => toast.success("This is a toast notification !");
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <ToastContainer />
    </div>
  )
}
```

#### Warning

![Warning](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rww4d8dbak524dxobquo.png)

```javascript
const notify = () => toast.warning("This is a toast notification !");
```

```javascript
function Toastify(){
  const notify = () => toast.warning("This is a toast notification !");
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <ToastContainer />
    </div>
  )
}
```

#### Error

![Image
description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xcjndgdtwc69z3mojjrc.png)

```javascript
const notify = () => toast.error("This is a toast notification !");
```

```javascript
function Toastify(){
  const notify = () => toast.error("This is a toast notification !");
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <ToastContainer />
    </div>
  )
}
```

## Setting the Toast Notification position

You can set the toast notification position from a variety of preset positions available in the React Toastify library. The available positions are:

1. top-left
2. top-right
3. top-center
4. bottom-left
5. bottom-right
6. bottom-center

Here is how the notification looks in each position, along with the code to implement it. To add the position setting, edit the `ToastContainer` and set the `position` property like so:

```javascript
<ToastContainer position="top-right" />
```

In our example you can set it like

```javascript
function Toastify(){
  const notify = () => toast.error("This is a toast notification !");
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <ToastContainer position="top-right" />
    </div>
  )
}
```

you can set the other positions like so:

```javascript
<ToastContainer position="top-left" />
<ToastContainer position="top-right" />
<ToastContainer position="top-center" />
<ToastContainer position="bottom-left" />
<ToastContainer position="bottom-right" />
<ToastContainer position="bottom-center" />
```

The different positions look like this:

#### top-left

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aadzosgi6rre6hdc4m8z.png)

#### top-right

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r8h0s4x9hptck9orpn2o.png)

#### top-center

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/61jngl29jvorzs8x8bg5.png)

#### bottom-left

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hrexw2y691tdsuudhtnc.png)
#### bottom-right

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lwbdavzt94rdstbkum7k.png)

#### bottom-center

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vzwt211z67nmeyskx4da.png)

## Custom Styling the notification with HTML and CSS

You can custom-style the toast notification with HTML and CSS. There are a bunch of CSS variables exposed by the React Toastify library; overriding them covers the customization most people will need. Here are the variables that you can override:

```css
:root {
  --toastify-color-light: #fff;
  --toastify-color-dark: #121212;
  --toastify-color-info: #3498db;
  --toastify-color-success: #07bc0c;
  --toastify-color-warning: #f1c40f;
  --toastify-color-error: #e74c3c;
  --toastify-color-transparent: rgba(255, 255, 255, 0.7);
  --toastify-icon-color-info: var(--toastify-color-info);
  --toastify-icon-color-success: var(--toastify-color-success);
  --toastify-icon-color-warning: var(--toastify-color-warning);
  --toastify-icon-color-error: var(--toastify-color-error);
  --toastify-toast-width: 320px;
  --toastify-toast-background: #fff;
  --toastify-toast-min-height: 64px;
  --toastify-toast-max-height: 800px;
  --toastify-font-family: sans-serif;
  --toastify-z-index: 9999;
  --toastify-text-color-light: #757575;
  --toastify-text-color-dark: #fff;
  /* Used only for colored theme */
  --toastify-text-color-info: #fff;
  --toastify-text-color-success: #fff;
  --toastify-text-color-warning: #fff;
  --toastify-text-color-error: #fff;
  --toastify-spinner-color: #616161;
  --toastify-spinner-color-empty-area: #e0e0e0;
  /* Used when no type is provided, e.g. toast("hello") */
  --toastify-color-progress-light: linear-gradient(
    to right,
    #4cd964,
    #5ac8fa,
    #007aff,
    #34aadc,
    #5856d6,
    #ff2d55
  );
  /* Used when no type is provided */
  --toastify-color-progress-dark: #bb86fc;
  --toastify-color-progress-info: var(--toastify-color-info);
  --toastify-color-progress-success:
var(--toastify-color-success);
  --toastify-color-progress-warning: var(--toastify-color-warning);
  --toastify-color-progress-error: var(--toastify-color-error);
}
```

If changing variables is not enough for you, you can override the existing classes. Here are the classes that you can easily override:

```css
/** Used to define container behavior: width, position: fixed etc... **/
.Toastify__toast-container {
}

/** Used to define the position of the ToastContainer **/
.Toastify__toast-container--top-left {
}
.Toastify__toast-container--top-center {
}
.Toastify__toast-container--top-right {
}
.Toastify__toast-container--bottom-left {
}
.Toastify__toast-container--bottom-center {
}
.Toastify__toast-container--bottom-right {
}

/** Classes for the displayed toast **/
.Toastify__toast {
}
.Toastify__toast--rtl {
}
.Toastify__toast-body {
}

/** Used to position the icon **/
.Toastify__toast-icon {
}

/** handle the notification color and the text color based on the theme **/
.Toastify__toast-theme--dark {
}
.Toastify__toast-theme--light {
}
.Toastify__toast-theme--colored.Toastify__toast--default {
}
.Toastify__toast-theme--colored.Toastify__toast--info {
}
.Toastify__toast-theme--colored.Toastify__toast--success {
}
.Toastify__toast-theme--colored.Toastify__toast--warning {
}
.Toastify__toast-theme--colored.Toastify__toast--error {
}

.Toastify__progress-bar {
}
.Toastify__progress-bar--rtl {
}
.Toastify__progress-bar-theme--light {
}
.Toastify__progress-bar-theme--dark {
}
.Toastify__progress-bar--info {
}
.Toastify__progress-bar--success {
}
.Toastify__progress-bar--warning {
}
.Toastify__progress-bar--error {
}

/** colored notifications share the same progress bar color **/
.Toastify__progress-bar-theme--colored.Toastify__progress-bar--info,
.Toastify__progress-bar-theme--colored.Toastify__progress-bar--success,
.Toastify__progress-bar-theme--colored.Toastify__progress-bar--warning,
.Toastify__progress-bar-theme--colored.Toastify__progress-bar--error {
}

/** Classes for
the close button. Better use your own closeButton **/
.Toastify__close-button {
}
.Toastify__close-button--default {
}
.Toastify__close-button > svg {
}
.Toastify__close-button:hover,
.Toastify__close-button:focus {
}
```

You can also build your own style using the SCSS files. Just edit the scss directory and build your own stylesheet. If you just want to change some colors and similar details, you can easily do that by changing some variables; the variables are defined in the `_variables.scss` file.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b0egu0wggm3p28bybmly.png)

## Passing CSS classes to components

The `ToastContainer` accepts the following props for styling:

1. className
2. toastClassName
3. bodyClassName
4. progressClassName
5. style

```javascript
toast("Custom style", {
  className: "black-background",
  bodyClassName: "grow-font-size",
  progressClassName: "fancy-progress-bar",
});
```

This is how you can customize the toast notification.

## Using transitions and animation

When it comes to animation, there are countless ways you can animate the toast notification, and of course you can create your own custom animations as well. The four available transitions are:

1. bounce
2. slide
3. zoom
4. flip

To use one of the default transitions, import the transition from react-toastify and then add it to the `ToastContainer`, as shown in the next snippet.
You can also implement the transition per toast, so different toasts can have different transitions. Instead of the `ToastContainer`, add the transition setting in the toast function.

```javascript
import { Bounce, Slide, Zoom, ToastContainer, toast } from 'react-toastify';

const notify = () => toast.success("This is a toast notification !");

<ToastContainer transition={Zoom} />
```

```javascript
function Toastify(){
  const notify = () => toast.success("This is a toast notification !", {
    transition: Slide
  });
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <ToastContainer />
    </div>
  )
}
```

For a custom transition, create the custom transition and import it in your App.js file, then use it in the `ToastContainer` or the toast function the same way you use a default transition:

```javascript
import { ToastContainer, toast } from 'react-toastify';
// import your custom transition here as well

<ToastContainer transition={yourCustomTransition} />
```

## Promise based Toast Messages

React Toastify exposes a `toast.promise` function. You can pass it a promise or a function that returns a promise. When that promise resolves or fails, the notification is updated accordingly, and while the promise is pending, a spinner is displayed.

```javascript
function Toastify(){
  const resolveAfter2Seconds = () => new Promise(resolve => setTimeout(resolve, 2000));
  const notify = () => toast.promise(resolveAfter2Seconds(), {
    pending: "waiting for the promise to resolve",
    success: "promise resolved successfully",
    error: "promise failed to resolve"
  });
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <ToastContainer />
    </div>
  )
}
```

This is how you can create a promise-based toast message. Displaying a simple message is useful in most cases. For added interactivity, you can display a message according to the promise response.
You can show one message if the promise resolves successfully, another message if the promise fails to resolve, and even a message while the promise is pending. Here is how you can implement this:

```javascript
function Toastify(){
  const notify = () => {
    const myPromise = new Promise(resolve =>
      setTimeout(() => resolve("this is soo cool"), 2000)
    );
    toast.promise(myPromise, {
      pending: {
        render(){
          return "Promise Pending"
        },
        icon: false,
      },
      success: {
        render({data}){
          return `Great, ${data}`
        },
        // other options
        icon: "👍",
      },
      error: {
        render({data}){
          // When the promise rejects, data will contain the error
          return <MyErrorComponent message={data.message} />
        }
      }
    });
  };
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <ToastContainer />
    </div>
  )
}
```

## Handling Auto Close

The `autoClose` property of the toast accepts a duration in milliseconds or `false`.

```javascript
import { ToastContainer, toast } from 'react-toastify';

// Close the toast after 2 seconds
<ToastContainer autoClose={2000} />
```

You can also close each individual toast at a different point in time:

```javascript
function Toastify(){
  const notify = () => toast("closeAfter10Seconds", { autoClose: 10000 });
  const notify2 = () => toast("closeAfter2Seconds", { autoClose: 2000 });
  return (
    <div>
      <button onClick={notify}>Notify !</button>
      <button onClick={notify2}>Notify2 !</button>
      <ToastContainer autoClose={2000} />
    </div>
  )
}
```

![individually close](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ggivzbk0q97my6gc0ptq.png)

You can also prevent the toast notification from auto-closing by passing `false` instead of a duration:

```javascript
toast("hi", { autoClose: false });
```

## Render String, number and Component

You can render any React node, including a string, a number, or even a component. This is how you can do it:

```javascript
import React from "react";
import { ToastContainer, toast } from 'react-toastify';

const Data = ({closeToast, toastProps}) => {
  return <div>
    Hi how are you?
{toastProps.position} <button>Retry</button> <button onClick={closeToast}>close</button> </div> } function Toastify(){ const notify = () => toast( <Data /> ); return ( <div> <button onClick={notify} >Notify !</button> <ToastContainer /> </div> ) } ``` ## Setting custom icons, using built-in icons and disable There are built in icons available with the toastify library that are useful for most cases, you can include customize icons for your application and lastly you can delete the icons from the toast notification itself as well. The toast types all display their associated icons, so you can specify a toast type and display the corresponding icon #### Custom Icon to display a custom icon simply supply the toast type with an icon like so ```javascript //you can use a string toast.success("custom toast icons". { icon: "👍" }); //or a component toast.success("custom toast icons". { icon: CustomIcon }); // or a function toast.success("custom toast icons". { icon: ({theme,type}) => <img src="url"/> }); ``` ## Pause taost when window loses focus The default behavior of the toast is to pause whenever the window loses focus. But you can set it to not pause by setting the `pauseOnFocusLoss` property to false like ```javascript //All all toasts <ToastContainer pauseOnFocusLoss={false} /> // for individual toasts toast('cool', { pauseOnFocusLoss: false }) ``` ## Delay toast notification To delay notification display you can use `setTimeout` or use the delay prop that uses `setTimeout` under the hood ```javascript toast('cool') toast('how are you?', {delay: 1000}); ``` ## Limit the number of toast displayed Notification can be handy, but there can be multiple number of notification displayed. 
You can limit the number of notifications displayed using the `limit` prop:

```javascript
import React from 'react';
import { toast, ToastContainer } from 'react-toastify';
import 'react-toastify/dist/ReactToastify.css';

// Display a maximum of 3 notifications at the same time
function App(){
  const notify = () => {
    toast("lorem ipsum");
  }

  return (
    <div>
      <button onClick={notify}>Click on me a lot!</button>
      <ToastContainer limit={3} />
    </div>
  )
}
```

When the number of toasts displayed hits the limit, the remaining toasts are added to a queue and displayed whenever a slot becomes available.

You can clear the queue as well. To clear it so that no more queued toasts are displayed, use the `clearWaitingQueue()` method:

```javascript
toast.clearWaitingQueue();
```

## Implementing a controlled progress bar

If you are programming a file upload or anything else where you need to indicate the progress of an operation, a controlled progress bar can come in handy.

Let's see an example where we upload a file to the server and the progress bar fills as the file is being uploaded:

```javascript
import React from 'react';
import axios from 'axios';
import { toast } from 'react-toastify';

function DemoOfProgressBar(){
  // Keeping a reference to the toast id
  const toastId = React.useRef(null);

  function uploadFiles(){
    axios.request({
      method: "post",
      url: "/cool",
      data: informationToBeUploaded,
      onUploadProgress: p => {
        const progress = p.loaded / p.total;
        // check whether the toast is already being displayed
        if (toastId.current === null) {
          toastId.current = toast('Uploading', { progress });
        } else {
          toast.update(toastId.current, { progress });
        }
      }
    }).then(data => {
      // When the upload is done the progress bar closes and the transition ends
      toast.done(toastId.current);
    })
  }

  return (
    <div>
      <button onClick={uploadFiles}>Upload files to server</button>
    </div>
  )
}
```

## Updating a toast when an event happens

You can update a toast when an event happens. For example, if you are
uploading a file to the server, you can update the toast when the upload is completed.

Things you can update:

1. The type of toast, its colors or theme
2. The content of the toast
3. A transition applied when the change happens

Let's see these in action with examples.

#### 1. Change the type of toast, its colors or theme

```javascript
import React from 'react';
import { toast } from 'react-toastify';

function ChangeToast() {
  const toastId = React.useRef(null);

  const notify = () => toastId.current = toast("this is soo cool", { autoClose: false });

  const updateToast = () => toast.update(toastId.current, {
    type: toast.TYPE.INFO,
    autoClose: 2000
  });

  return (
    <div>
      <button onClick={notify}>Notify</button>
      <button onClick={updateToast}>Update</button>
    </div>
  );
}
```

#### 2. Change the content of the toast

Changing the content of the toast is easy as well. Just pass any valid element or even a React component. The examples below show how to render a string as well as a component:

```javascript
// With a string
toast.update(toastId, {
  render: "New content",
  type: toast.TYPE.INFO,
  autoClose: 5000
});

// Or with a component
toast.update(toastId, {
  render: MyComponent,
  type: toast.TYPE.INFO,
  autoClose: 5000
});

toast.update(toastId, {
  render: () => <div>New content</div>,
  type: toast.TYPE.INFO,
  autoClose: 5000
});
```

#### 3. Apply a transition when the change happens

If you want to apply a transition when a change happens, then you can use the `className` or the `transition` property to achieve this:

```javascript
toast.update(toastId, {
  render: "stuff",
  type: toast.TYPE.INFO,
  transition: Zoom
})
```

## Custom close button or Remove the close button

You can pass a custom close button to the toast container to override the default button. Here is how you can do this.
```javascript
import React from 'react';
import { toast, ToastContainer } from 'react-toastify';

const CloseButton = ({ closeToast }) => (
  <i
    className="fastly-buttons"
    onClick={closeToast}
  >
    delete
  </i>
);

function App() {
  const notify = () => {
    toast("this is a toast notification");
  };

  return (
    <div>
      <button onClick={notify}>Notify</button>
      <ToastContainer closeButton={CloseButton} />
    </div>
  );
}
```

Here is how you can define it per toast:

```javascript
toast("this is a toast notification", {
  closeButton: CloseButton
})
```

You can also remove the close button globally and show it per toast, or show it globally and hide it per toast:

```javascript
// show it per toast
toast("this is a toast notification", {
  closeButton: true
})

// hide it per toast
toast("this is a toast notification", {
  closeButton: false
})
```

## Adding Undo action to toast messages

You can add an undo button to toast messages. For example, if someone performed an action they want to undo, they can easily do so by clicking the undo button.

## Drag to remove

You can drag a toast to remove it. You can define how far the toast must be dragged (as a percentage of the screen) before it is removed, and you can also disable drag-to-remove.

#### Set the drag percentage at which the toast is removed

Pass the `draggablePercent` prop with the percentage of the screen that needs to be dragged to remove the toast, as follows:

```javascript
<ToastContainer draggablePercent={50} />
```

```javascript
toast('Cool', {
  draggablePercent: 90
})
```

To disable the ability to drag to dismiss, pass `false` to the `draggable` property:

```javascript
<ToastContainer draggable={false} />
```

You can disable it per toast notification as follows:

```javascript
toast('Cool', {
  draggable: false
})
```

This is how you can enable drag-to-remove in React Toastify notifications.
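The undo flow described earlier can be modelled without any UI: the deletion stays "pending" while the toast is visible, Undo restores the item, and the delete only becomes final once the toast closes without an undo. Below is a framework-free sketch of that bookkeeping — `createUndoableDelete` and the `Set` of items are made up for this example; in a real app `undo` would be wired to a button rendered inside the toast, and `commit` to the toast's close callback.

```javascript
// Sketch of the undo pattern: soft-delete now, commit (or restore) later.
// The names here are illustrative, not part of react-toastify's API.
function createUndoableDelete(items) {
  let pendingId = null;

  return {
    // Soft-delete: remove the item but remember it so it can be restored.
    deleteItem(id) {
      pendingId = id;
      items.delete(id);
    },
    // Called from the toast's Undo button: put the item back.
    undo() {
      if (pendingId !== null) {
        items.add(pendingId);
        pendingId = null;
      }
    },
    // Called when the toast closes without an undo: the delete is final.
    commit() {
      pendingId = null;
    },
  };
}
```

In the toast itself you would render a component whose Undo button calls `undo()` and then `closeToast`, while the toast's `onClose` option calls `commit()`.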
alakkadshaw
1,436,568
ModuleNotFoundError when running PySpark code that do Sentiment analysis
ModuleNotFoundError when running PySpark...
0
2023-04-15T05:40:07
https://dev.to/alyayma04892036/modulenotfounderror-when-running-pyspark-code-that-do-sentiment-analysis-4c4
{% stackoverflow 76012139 %}
alyayma04892036
1,437,492
Wordpress Calendar
I am new to wordpress. I am using the The Event Calendar plugin which works perfectly fine. Now I...
0
2023-04-16T10:37:45
https://dev.to/yuridevat/wordpress-calendar-1a9e
wordpress, help
I am new to WordPress. I am using the **The Event Calendar** plugin, which works perfectly fine. Now I want to show the list of upcoming events on the homepage as well. This would work with the Pro version, which is not an option for me. Is there another way to do so? I tried embedding it using JS, but I cannot get any custom JS code to run - neither in functions.php nor via the WP Code Editor plugin. I am happy for any suggestions (also other plugins which let you do so without paying). Thanks.
yuridevat
1,437,591
Introduction to Blockchain Wallet
A blockchain wallet is a must-have tool for any cryptocurrency investor or Web3 enthusiast interested...
0
2023-04-17T09:30:00
https://frankiefab.hashnode.dev/introduction-to-blockchain-wallet
web3, cryptowallet, blockchain, metamask
A blockchain wallet is a must-have tool for any cryptocurrency investor or Web3 enthusiast interested in interacting with decentralized applications, purchasing non-fungible tokens (NFTs), or holding cryptocurrencies.

## What is a blockchain wallet?

A blockchain (or crypto) wallet is a digital wallet that allows users to store, manage, and trade their cryptocurrencies and other blockchain-based digital assets. It securely enables users to store, send, or receive those assets through a combination of public and private cryptographic keys. It usually consists of a pseudonymous 32-character public key address that can easily be restored with the associated 12-24 word mnemonic or seed phrase, and it also requires a password for the user's protection.

It consists of:

**Public key:** The user's wallet address, shared with the public or other crypto users to receive cryptocurrencies or NFTs or to process transactions such as transfers.

**Private key:** This acts like a unique password that grants complete access to a user's funds. It proves ownership of the digital assets associated with a wallet address and therefore should be kept private.

## Uses of a blockchain wallet

A blockchain or crypto wallet allows you to:

- Interact with decentralized applications and blockchain networks.
- Manage not only your funds, but also NFTs, POAPs, DAO governance tokens, and other collectibles.
- Act as a personal on-chain identity or passport.

## Classification of Blockchain Wallets

Blockchain wallets are classified into:

### 1. Non-custodial blockchain wallets

Non-custodial blockchain wallets give users their private keys and complete control of their funds. They are less convenient to use and lack built-in trading functionality, although they support decentralized exchanges for swapping cryptocurrencies.
**Examples of Non-custodial blockchain wallets**

**Hardware Wallets:** Common examples of hardware wallets are Trezor and Ledger, which are portable just like flash drives. They are notably the safest blockchain wallets for cold storage of cryptocurrencies and other digital assets. They are less vulnerable to hacks since they are offline and not connected to any decentralized application or protocol through the internet.

**Web-Based Wallets:** These come as browser extensions and offer efficient use of Web3 protocols such as NFT marketplaces. The majority of blockchain wallets nowadays exist as web browser extensions. Examples of web-based wallets include MetaMask, WalletConnect, Phantom, Argent X, Tally Ho, Petra, and so on.

**Mobile Wallets:** These are applications downloadable from app stores. They are vulnerable to the risks of malware infections and virus attacks. Some examples of wallets that fall into this category are Valora, Zerion, Pera Algo Wallet, Rainbow, MetaMask, Blockchain.com, and Trust Wallet.

**Multisignature wallets:** Commonly called multi-sig wallets, these are blockchain wallets that require two or more users to sign with their private keys to confirm a transaction. For example, Gnosis Safe is a suitable multi-sig wallet for Decentralized Autonomous Organizations (DAOs) and enterprises managing funds.

### 2. Custodial blockchain wallets

These blockchain or crypto wallets are provided and managed by centralized exchanges that act as custodians, responsible for securing and holding their users' funds. The wallet provider controls the security keys that grant access to the stored cryptocurrency and digital assets. To own a custodial wallet, the user is required to complete Know Your Customer (KYC) identity verification to have full access to all options and features provided on the platform. Custodial wallets have built-in trading functionality for easy buying and selling of cryptocurrencies.
They also offer the freedom to create multiple individual wallets in one account. The cons of this wallet type are that it is vulnerable to hacks, rug pulls, or scams, since the user's private keys are held by a third party and the wallet holdings are kept in hot storage.

**Examples of Custodial blockchain wallets**

- Binance
- Coinbase
- Kraken
- Kucoin
- Gate.io
- and other wallets provided by centralized exchanges (CEXs).

## How a Blockchain Wallet Works

A user is provided with a unique address, similar to a bank account, once they sign up. To make a purchase or exchange cryptocurrency, a user can either send a request to another user to transfer a token in exchange for fiat, or fund their wallet from a bank if it's a custodial wallet. For a non-custodial wallet, there is the option to swap one cryptocurrency for another using decentralized exchanges. The cost of processing a transaction is covered by gas fees.

![Mobile blockchain wallet apps](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i62ldv8k56qzxe50jph0.jpg)

## How to keep a blockchain wallet safe

Wallet security is an important consideration for users, as a compromised account may result in losing control of their assets. Here are some things to note to keep your cryptocurrency wallet secure:

- Your seed phrase or private key should be kept confidential, written down on physical paper, and never shared with anyone.
- Large crypto holdings should be stored in a cold wallet such as Ledger.
- Enable two-factor authentication (2FA) for additional wallet security.
- Confirm links to protocols or exchanges before interacting with them to avoid scams from phishing websites.

Thank you very much for taking the time to read this; I hope you found it helpful. Please share and check out other articles on my [blog](https://frankiefab.hashnode.dev).
## Further Reading * [Blockchain wallet](https://www.investopedia.com/terms/b/blockchain-wallet.asp) * [What is a cryptocurrency wallet](https://whiteboardcrypto.com/what-is-a-cryptocurrency-wallet/#) * [Curious beginner guide to crypto wallets](https://creatoreconomy.so/p/curious-beginner-guide-to-crypto-wallets)
frankiefab100
1,438,117
Avoiding Common Angular Development Issues: Tips and Best Practices for Node and NPM
Outline: I. Introduction Brief overview of Angular development Importance of avoiding...
0
2023-04-17T02:28:38
https://dev.to/whizfactor/avoiding-common-angular-development-issues-tips-and-best-practices-for-node-and-npm-3bce
angular, node, npm
## **Outline:** **I. Introduction** - Brief overview of Angular development - Importance of avoiding common issues - Overview of Node and NPM **II. Common Angular Development Issues** - Unhandled Exceptions - Poor Performance - Memory Leaks - Incompatible Versions - Dependency Issues - Debugging **III. Best Practices for Node and NPM** - Keeping Node and NPM Updated - Using a Package Manager - Using Semantic Versioning - Using a Dependency Management Tool - Managing Your Dependencies **IV. Tips for Avoiding Common Angular Development Issues** - Consistent Coding Practices - Writing Testable Code - Using Best Practices for Component Design - Properly Handling Asynchronous Operations - Managing State **V. Conclusion** - Recap of Common Angular Development Issues - Importance of Best Practices for Node and NPM - Tips for Avoiding Common Angular Development Issues - Final Thoughts --- **Let's Go!** Angular development has become increasingly popular in recent years, as it offers developers a robust framework for building complex web applications. However, as with any development project, there are common issues that can arise and cause headaches for developers. From unhandled exceptions to dependency issues, these issues can slow down development and even cause applications to fail. In this article, we’ll explore some common issues developers may encounter in Angular, and provide tips and best practices for avoiding them, specifically when working with Node and NPM. --- ## **Common Angular Development Issues** **Unhandled Exceptions:** One common issue that can cause major problems in Angular development is unhandled exceptions. These are errors that occur during runtime and are not properly handled by the application. When an unhandled exception occurs, the application will typically crash or behave unpredictably, making it difficult to diagnose the root cause of the issue. **Poor Performance:** Another issue that can plague Angular development is poor performance. 
This can manifest in a number of ways, including slow load times, unresponsive user interfaces, and sluggish application behavior. Poor performance can be caused by a number of factors, including poorly optimized code, inefficient data structures, and insufficient resources. **Memory Leaks:** Memory leaks are another common issue in Angular development. A memory leak occurs when a program uses memory but fails to release it when it is no longer needed. Over time, memory leaks can cause the application to slow down, crash, or become unstable. **Incompatible Versions:** One of the biggest challenges in Angular development is managing compatibility between different versions of libraries and dependencies. Incompatible versions can cause a range of issues, from broken functionality to security vulnerabilities. **Dependency Issues:** Another common issue in Angular development is dependency issues. These can arise when dependencies are not properly managed, or when there are conflicts between different dependencies. Dependency issues can cause a range of problems, from broken functionality to security vulnerabilities. **Debugging:** Debugging is an essential part of any development process, but it can be particularly challenging in Angular development. The complex nature of Angular applications can make it difficult to identify and diagnose issues, which can slow down development and cause frustration for developers. --- ## **Best Practices for Node and NPM** **Keeping Node and NPM Updated:** One of the most important best practices for Node and NPM is keeping them updated. Node and NPM are constantly evolving, and new versions often contain important bug fixes, security updates, and performance improvements. By keeping Node and NPM up to date, developers can ensure that they are using the most stable and secure versions of these tools. **Using a Package Manager:** Using a package manager is another important best practice for Node and NPM. 
A package manager is a tool that automates the process of installing, updating, and removing packages and dependencies. This can save developers time and effort, and help to ensure that all dependencies are properly managed and up to date. Two popular package managers for Node are npm and yarn, and both are widely used by the Angular community. **Using Semantic Versioning:** Semantic versioning is a best practice for managing dependencies in Node and NPM. Semantic versioning is a standard for versioning software that ensures compatibility between different versions of libraries and dependencies. By following semantic versioning, developers can avoid compatibility issues and ensure that their applications are using the most stable and secure versions of dependencies. **Using a Dependency Management Tool:** A dependency management tool can help developers to manage their dependencies more efficiently. These tools can automate the process of installing and updating dependencies, and can help to ensure that all dependencies are properly managed and up to date. Some popular dependency management tools for Node and NPM include Bower, Webpack, and Rollup. **Managing Your Dependencies:** Another important best practice for Node and NPM is managing your dependencies. This involves keeping your dependencies up to date, removing unused dependencies, and being mindful of the dependencies you add to your project. By managing your dependencies carefully, you can avoid compatibility issues, reduce the risk of security vulnerabilities, and keep your application running smoothly. --- ## **Tips for Avoiding Common Angular Development Issues** **Consistent Coding Practices:** Consistent coding practices can help to avoid a range of common Angular development issues. By using consistent coding practices, developers can reduce the risk of errors and ensure that their code is easy to maintain and understand. 
Some best practices for consistent coding practices include using a consistent naming convention, following a style guide, and using code reviews to ensure that all code meets the same standards. **Writing Testable Code:** Writing testable code is another important tip for avoiding common Angular development issues. By writing code that is easy to test, developers can ensure that their code is robust and reliable. Some best practices for writing testable code include using dependency injection, separating concerns, and using unit tests and integration tests to verify code functionality. **Using Best Practices for Component Design:** Using best practices for component design can help to ensure that Angular applications are easy to develop, maintain, and scale. Some best practices for component design include using a modular architecture, separating concerns, and using a consistent naming convention. **Properly Handling Asynchronous Operations:** Asynchronous operations can be a major source of bugs and errors in Angular development. By properly handling asynchronous operations, developers can ensure that their code is robust and reliable. Some best practices for handling asynchronous operations include using promises or observables, using async/await syntax, and avoiding callback hell. **Managing State:** Managing state can be a challenging task in Angular development, but it is critical to building reliable and scalable applications. Some best practices for managing state include using a centralized state management tool, using immutable data structures, and using reactive programming techniques. --- ## **Conclusion** In conclusion, Angular development offers a powerful framework for building complex web applications, but it can also be challenging due to the many common issues that can arise. 
By following best practices for Node and NPM, and by implementing tips for avoiding common Angular development issues, developers can ensure that their applications are robust, reliable, and scalable. By consistently updating dependencies, writing testable code, using best practices for component design, properly handling asynchronous operations, and managing state effectively, developers can build applications that meet the needs of their users and the demands of the modern web.
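The async-handling advice above (prefer `async`/`await` over nested callbacks) can be illustrated with a small framework-free sketch: the same two-step fetch written once in nested style and once with `async`/`await`. The `loadUser` and `loadOrders` names are hypothetical stubs invented for this example, not part of Angular or any library.

```javascript
// Two hypothetical async steps, stubbed as resolved promises for the demo.
const loadUser = (id) => Promise.resolve({ id, name: "Ada" });
const loadOrders = (user) => Promise.resolve([`order-for-${user.id}`]);

// Nested style: each extra step adds a level of indentation, and error
// handling must be repeated at every level -- the "callback hell" the
// article warns about.
function loadReportNested(id, done) {
  loadUser(id).then((user) => {
    loadOrders(user).then((orders) => {
      done({ user, orders });
    });
  });
}

// async/await keeps the steps flat, and a single try/catch around the
// body covers errors from both steps.
async function loadReport(id) {
  const user = await loadUser(id);
  const orders = await loadOrders(user);
  return { user, orders };
}
```

In an Angular codebase the same flattening applies whether the steps are raw promises or observables converted with `firstValueFrom`.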
whizfactor
1,438,212
openGauss Creating an Index for an MOT Table
Standard PostgreSQL create and drop index statements are supported. For example – "" create index ...
0
2023-04-17T03:08:33
https://dev.to/tongxi99658318/opengauss-creating-an-index-for-an-mot-table-4dka
opengauss
Standard PostgreSQL create and drop index statements are supported. For example:

```sql
create index text_index1 on test(x);
```

The following is a complete example of creating an index for the ORDER table in a TPC-C workload:

```sql
create FOREIGN table bmsql_oorder (
  o_w_id       integer   not null,
  o_d_id       integer   not null,
  o_id         integer   not null,
  o_c_id       integer   not null,
  o_carrier_id integer,
  o_ol_cnt     integer,
  o_all_local  integer,
  o_entry_d    timestamp,
  primary key (o_w_id, o_d_id, o_id)
);

create index bmsql_oorder_index1 on bmsql_oorder(o_w_id, o_d_id, o_c_id, o_id);
```

NOTE: There is no need to specify the FOREIGN keyword before the MOT table name, because it is only required for create and drop table commands. For MOT index limitations, see the Index subsection under the _SQL Coverage and Limitations_ section.
tongxi99658318
1,438,228
AI Based Tools
• Job Search with AI-Powered Resume Writing and Editing Rezi • Turn an idea into a stunning...
0
2023-04-17T04:02:44
https://dev.to/osandalelum/ai-based-tools-441m
ai, machinelearning, deeplearning, skills
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nycr74z1h3bjbjptqt5j.png)

• Job Search with AI-Powered Resume Writing and Editing [Rezi](https://www.rezi.ai/)

• Turn an idea into a stunning app/website. Without code. Without limits [buzzy](https://www.buzzy.buzz/)

• Generate Quizzes [Quiz](https://www.testportal.net/)

• Create a private AI brain for your business with our GPT for numbers platform. [iGenius | AI Brain for Business Data](https://www.igenius.ai/)

• Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scale seamlessly into billions of data objects. [Welcome | Weaviate - vector database](https://weaviate.io/)
osandalelum
1,462,878
Extend Python VENV: Organize Dependencies Your Way
Introduction Virtual environments are great way to organise development process by...
0
2023-05-11T12:49:42
https://dev.to/dmikhr/extend-python-venv-organize-dependencies-your-way-3h0g
tutorial, python, cicd
### Introduction

Virtual environments are a great way to organise the development process by isolating project-specific packages. Python ships with a built-in tool, `venv`, for creating virtual environments. In this tutorial we are going to explore how to extend its functionality by implementing a feature that stores information about production and development packages in separate requirements files. This tutorial will show you how to implement such a feature, explaining the logic behind the solution and the purpose of each function in the script. Prerequisites for this tutorial are basic knowledge of Python and experience with virtual environments. The examples require Python 3.5+.

### Tools

There are different ways to split dependencies in Python. [Poetry](https://python-poetry.org/docs/master/managing-dependencies/) provides versatile options to configure a Python app, including staging: production, development and testing. It also provides a lot of other useful options for Python developers. Despite its powerful functionality, there may be reasons to use simpler alternatives like `venv`. If you are working on a complex project, planning to publish it on a Python package repository like PyPI, or it is historically based on Poetry, then staying with Poetry makes sense. Other alternatives worth mentioning are virtualenv and pipenv. In data science and scientific computing, `conda` also has substantial popularity. While these tools provide options to customize your virtual environment, that flexibility is not always needed.

### Simpler alternatives

If you are working on a pet project or a simple service, or just don't want to complicate your setup with extra third-party tools, then using `venv` can be an ideal solution. It's a built-in Python library with a limited set of commands, which makes it easy to learn. In the simplest scenario there is no need to learn `venv` at all.
Just navigate to the project directory and create a virtual environment:

```bash
python -m venv .venv
```

This calls the Python module `venv` and creates a virtual environment in the `.venv` directory under the current directory. Other common names for virtual environment folders are `venv`, `env` and `.env`. This folder contains executables for Python and will store the installed packages. Let's now install some packages. But before we can do that, the virtual environment must be activated:

```bash
source .venv/bin/activate
```

When you no longer need the virtual environment active, deactivate it by simply typing `deactivate`. In our hypothetical scenario, let's assume we are building a web app with Flask. Before that, let's check that we are really in the virtual environment by calling `which python`. It prints the path to the currently used Python executable. If the path leads to the current directory and ends with `.venv/bin/python`, the environment is activated and we are good to go.

Installing Flask:

```bash
pip install Flask
```

Packages, especially complex ones like Flask, don't exist in isolation; they depend on other Python packages that are installed alongside them. To see that this is the case, call `pip freeze`. It shows a list of the installed packages in our virtual environment with their corresponding versions. A good practice is to maintain a text file `requirements.txt` with the relevant dependencies, so other developers can set up their environment by installing all required packages with `pip install -r requirements.txt`.

Creating `requirements.txt`:

```
pip freeze > requirements.txt
```

Apart from packages that are required for running an app, there is usually a need for tools used in development and testing. For this example we choose the testing package `pytest`; for maintaining code style in accordance with PEP 8, let's install `flake8` (for manual checks) and `autopep8`, which will format our code properly.
```bash
pip install pytest flake8 autopep8
```

Type `pip freeze` to see that there are now more packages. And finally save them to the file: `pip freeze > requirements.txt`

### Keeping dependencies separate

Here we come to the point where `venv`'s simplicity gives us a bit of inconvenience. The currently installed packages can be divided into two groups: the packages related to Flask are necessary for running the app, but it's redundant to install the testing packages into a production environment. The solution is to maintain two separate files: one for app packages (`requirements.txt`) and another for development packages (`requirements-dev.txt`). Testing packages are also commonly stored in the `-dev` file, as for example in [Flasgger](https://github.com/flasgger/flasgger#how-to-run-tests). It's possible to split dependencies manually, removing them from the original `requirements.txt` and saving them into `requirements-dev.txt`. However, whenever new packages are installed, this process has to be repeated. At this point we have two options: either move to more advanced third-party virtual environment management tools, or automate the process.

### Automation

Basically this process consists of two steps: save the current package list into a file (`pip freeze > requirements.txt`), and then have a script sort the packages, keeping app-related packages in `requirements.txt` while moving development packages to its development counterpart. This sequence can be executed manually from the terminal; however, on UNIX-based operating systems, including macOS, there is a more convenient solution: the [make](https://www.gnu.org/software/make/manual/make.html) utility. Basic usage is `make command-name`. When called, this utility searches for a file named `Makefile` and executes the sequence of commands under the given command name section.
Let's create our Makefile and command:

```bash
freeze:
	pip freeze > requirements.txt
	python -m split_dependencies
```

Calling `make freeze` will save the dependencies into `requirements.txt` and call the Python script to split them between separate files. To run it correctly, let's first create the skeleton of the future script. Create a file `split_dependencies.py` and put the following code into it:

```python
if __name__ == "__main__":
    pass
```

Now run `make freeze`. If everything has been done right, the terminal will print the list of executed commands:

```
pip freeze > requirements.txt
python -m split_dependencies
```

At this point the following file structure is present in the working directory:

```
.
├── .venv
├── Makefile
├── requirements.txt
└── split_dependencies.py
```

### Script architecture

It's time to start developing the script. But before coding, let's think about what the script should do. It's going to consist of multiple functions and a main executable function `run()` that is responsible for executing the others. It's good practice to follow the single responsibility principle when writing functions, so every function will have only one task. For example, the list of packages should be loaded from a file (`load_requirements`), while cleaning the data of newline characters is done by another function. To obtain the cleaned list of packages, these functions are called in sequence, one passing data to the other, forming a data pipeline.
A list of functions, with their interfaces, that will be used in the script, grouped by responsibility:

**Load**

* `load_requirements(fname="requirements.txt")`
* `clean_list(items: list)` - remove the newline character from each string loaded from `requirements.txt`

**Logic**

* `is_dev_requirement(item: str)` - check whether the current package belongs to the development category
* `is_prod_requirement(item: str)` - check whether the current package belongs to the production (app-running) category
* `extract(criteria: Callable, data: list)` - filter the list of packages by a criterion (`is_dev_requirement`, `is_prod_requirement`)

**Save**

* `prepare_data(data: list)` - prepare data for saving (join the list of filtered packages into a string)
* `save_requirements(fname: str, data: str)` - save requirements to a file (`requirements.txt` and `requirements-dev.txt`)

Since packages related to development and testing are usually far fewer than app-related packages, it makes sense to store information about them for filtering purposes. Here is an example of a list of packages that might be used for development:

```python
DEV_REQUIREMENTS = ["autopep8", "black", "flake8", "pytest-asyncio", "pytest", "Faker"]
```

Throughout the code we are also going to use Python [typing](https://docs.python.org/3/library/typing.html) to make the code more readable in terms of what datatypes are used; your IDE can use this information to provide type hints. Further, type checkers like [mypy](https://github.com/python/mypy) can be used to enable static type checking (Python is by design a dynamically typed language). Since Python 3.5, type annotations are a built-in feature; some additional functionality is provided by the `typing` module (shipped with the Python distribution).

### Coding

Using the script architecture developed above, let's code the script. It's going to be managed by the `run()` function, where the script logic is implemented.
**Step 1:** load the list of packages from `requirements.txt` and clean it of newline characters.

**Step 2:** form two lists of packages, one for production and one for development.

**Step 3:** prepare the data and save it into two text files.

Here is the final version of the script. Feel free to experiment with it:

```python
from typing import Callable

DEV_REQUIREMENTS = ["autopep8", "black", "flake8", "pytest-asyncio", "pytest", "Faker"]


def run():
    dependencies = clean_list(load_requirements())
    dev_dependencies = extract(is_dev_requirement, dependencies)
    prod_dependencies = extract(is_prod_requirement, dependencies)
    save_requirements("requirements-dev.txt", prepare_data(dev_dependencies))
    save_requirements("requirements.txt", prepare_data(prod_dependencies))


def extract(criteria: Callable, data: list) -> list:
    return list(filter(lambda item: criteria(item), data))


def is_dev_requirement(item: str) -> bool:
    package_name, _ = item.split("==")
    return package_name in DEV_REQUIREMENTS


def is_prod_requirement(item: str) -> bool:
    return not is_dev_requirement(item)


def load_requirements(fname="requirements.txt") -> list:
    with open(fname, "r") as f:
        dependencies = f.readlines()
    return dependencies


def save_requirements(fname: str, data: str) -> None:
    with open(fname, "w") as f:
        f.write(data)


def prepare_data(data: list) -> str:
    return "\n".join(data) + "\n"


def clean_list(items: list) -> list:
    return list(map(lambda item: item.strip(), items))


if __name__ == "__main__":
    run()
```

### Commentary

* `load_requirements` - loads packages from the file into a list using the `readlines` method
* `clean_list` - removes the newline character from each string via `map`, which applies `item.strip()` to each element of the `items` list
* `extract` - filters the initial list using the built-in `filter` function
* `is_dev_requirement` - takes a string with package information (example: `Flask==2.2.3`), splits it into the package name and version, and checks whether the name is in the list of development packages
* `is_prod_requirement` - checks whether a package is production related; calls `is_dev_requirement` and returns the negated result
* `prepare_data` - joins the list of packages with newline characters and adds one at the end, following the convention on UNIX-based operating systems of ending text files with a newline
* `save_requirements` - writes data to a file; called twice, since packages are saved into two separate files. Writing is performed with the `w` flag, since this is text data
* `run` - implements the script logic and passes data between the functions

Now, if someone wants to work on the project, dependencies should be installed from both requirements files to set it up:

```bash
pip install -r requirements.txt
pip install -r requirements-dev.txt
```

### Conclusions

In this tutorial we looked at a way of extending `venv` functionality with a simple Python script that can be shipped with a project. There are plenty of ways this script could be extended: for example, splitting packages into three parts instead of two by introducing a testing stage, or sorting the package lists to make them more convenient to navigate. The latter can be especially handy in large projects with lots of dependencies.

You can find the files and code used in this tutorial here: [https://github.com/dmikhr/split-dependencies-demo](https://github.com/dmikhr/split-dependencies-demo)
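As a sketch of the sorting extension mentioned in the conclusions, `prepare_data` from the tutorial script could sort the package list before joining (case-insensitive sorting is assumed here, since pip package names mix cases like `Flask` and `flake8`):

```python
def prepare_data(data: list) -> str:
    # Sort packages case-insensitively so the saved list is easy to scan,
    # then join with newlines and add a trailing newline (UNIX convention).
    return "\n".join(sorted(data, key=str.lower)) + "\n"


# Example: mixed-case package names end up in alphabetical order.
print(prepare_data(["flake8==6.0.0", "Flask==2.2.3", "autopep8==2.0.2"]), end="")
```

Since `prepare_data` already sits at the end of the pipeline, this is the only function that needs to change to get sorted output in both files.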
dmikhr
1,438,254
Password Cracking: What is a Rainbow Table Attack and how do I prevent it?
What is a Rainbow Table? A rainbow table attack is a password cracking method that uses a...
0
2023-04-17T05:56:53
https://dev.to/jsquared/password-cracking-what-is-a-rainbow-table-attack-and-how-do-i-prevent-it-2676
cybersecurity, security, penetrationtesting, hacking
## What is a Rainbow Table?

A rainbow table attack is a password cracking method that uses a special table (a “rainbow table”) to crack the password hashes in a database. Applications don’t store passwords in plaintext; instead they store password hashes. After the user enters their password to log in, it is hashed, and the result is compared with the stored hashes on the server to look for a match. If they match, the user is authenticated and able to log in to the application.

The rainbow table itself is a precomputed table that maps plaintext passwords to their hash values for a given hash function. If hackers gain access to a list of password hashes, they can crack all the passwords very quickly with a rainbow table.

## How Does This Attack Work?

Hackers must first gain access to leaked hashes in order to carry out rainbow table attacks. The password database itself might be poorly secured, or they may have gained access to Active Directory (a database and set of services that connect users with the network resources they need to get their work done). Others gain access by phishing those who might have access to the password database. Additionally, there are already millions and millions of leaked password hashes on the dark web that are available to hackers.

The reason hackers like the rainbow table method is that it's an easy way to recover passwords and gain unauthorized access to systems, compared with the dictionary attack method (which consumes more memory) or a brute force attack (which consumes more computing power). All the attacker needs to do is look up the password’s hash in the rainbow table. Rainbow tables are deliberately designed to consume less computing power at the cost of using more space.
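As a toy illustration of the precomputed-table idea described above (real rainbow tables additionally compress storage with hash chains and reduction functions, which this sketch omits), cracking unsalted MD5 hashes with a lookup table might look like this:

```python
import hashlib

# Precompute hashes for a small candidate list once (the "table").
# Attackers do this ahead of time, trading disk space for cracking speed.
candidates = ["password", "letmein", "123456", "qwerty"]
table = {hashlib.md5(p.encode()).hexdigest(): p for p in candidates}

# "Cracking" a leaked hash is then a single dictionary lookup,
# instead of re-hashing every candidate for every leaked hash.
leaked_hash = hashlib.md5(b"letmein").hexdigest()
print(table.get(leaked_hash))  # → letmein
```

The expensive hashing work happens once when the table is built; every leaked hash after that costs only a lookup, which is exactly the space-for-time trade-off the attack relies on.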
As a result, it usually produces results quicker than a dictionary or brute force attack, often taking minutes to crack where other methods may take much longer. But this does have some downsides. Rainbow tables take a considerable amount of time to compile from the ground up, because all the hashes and the computing work that goes with them must be calculated and stored beforehand (although precompiled tables can also be downloaded online). But once that is done, you have a rainbow table that you can reuse whenever you need to crack a password.

## Real World Scenarios

Let's move on to how this method of password cracking shows up in the real world. Here are two examples of how it could be used:

- An attacker spots a web application with outdated password hashing techniques and poor overall security. The attacker steals the password hashes and, using a rainbow table, is able to decrypt the passwords of every user of the application.
- A hacker finds a vulnerability in a company’s Active Directory and is able to gain access to the password hashes. Once they have the list of hashes, they execute a rainbow table attack to recover the plaintext passwords.

## How to Prevent Rainbow Table Attacks

1. **Salting:** Hashed passwords should never be stored without salting. Salting protects passwords stored in databases by adding a string of 32 or more characters before hashing. Because each user's salt is different, a single precomputed table no longer works, making the passwords far more difficult to crack.
2. **Multifactor Authentication:** Multi-factor (MFA) or two-factor authentication (2FA) involves multiple steps, making it difficult for anyone to access your account with just a password. This makes it impossible for an attacker to use a rainbow table attack effectively.
3. **Avoid Outdated Hashing Algorithms:** Hackers look for applications and servers using the obsolete password hashing algorithms MD5 and SHA1. If your application uses either algorithm, your risk of rainbow table attacks increases substantially.
4. **Monitoring Servers:** Most modern server security software monitors for attempts to access sensitive information and can automatically act to mitigate intruders before they can find the password database.

## Conclusion and Final Thoughts

Some security experts argue that rainbow tables have been rendered obsolete by modern password cracking methodologies. Most attackers now use more advanced Graphics Processing Unit (GPU) based password cracking methods: a moderately-sized GPU farm can recreate the work behind a rainbow table within seconds, so precomputing those hashes into a table would not make much sense. Moreover, most passwords are salted anyway, meaning we would need a rainbow table for each salt value, and for larger salts this is entirely impractical.

Bitcoin and other cryptocurrency miners have been tapping GPU technology to calculate hashes for bitcoin farming, and existing tools can leverage GPU technology to crack password hashes. For example, a Linux-based GPU cluster was used to crack 90 percent of the 6.5 million leaked LinkedIn password hashes in 2012.

Nonetheless, while rainbow tables may not be the biggest threat to organizations today, they are certainly still a threat and should be considered and accounted for as part of an overall security strategy.

**_If you liked this article, please consider liking and following for more blogs on cybersecurity and hacking!!_**

-Jsquared
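The salting defense from the prevention list can be sketched as follows. This is a toy example using MD5 and a hypothetical `hash_password` helper for brevity; a real system would use a dedicated password-hashing function such as bcrypt or Argon2 instead:

```python
import hashlib
import secrets


def hash_password(password, salt=None):
    # Generate a random per-user salt (32 hex characters), then hash
    # salt + password together. Store both the salt and the digest.
    salt = salt or secrets.token_hex(16)
    digest = hashlib.md5((salt + password).encode()).hexdigest()
    return salt, digest


# The same password produces different hashes for different users,
# so one precomputed rainbow table can no longer cover everyone.
salt_a, hash_a = hash_password("letmein")
salt_b, hash_b = hash_password("letmein")
print(hash_a != hash_b)  # → True
```

To verify a login, the server re-hashes the submitted password with the stored salt and compares digests; the attacker would need a separate rainbow table per salt value, which is exactly what makes the attack impractical.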
jsquared
1,438,289
Dietoxone - Fat Loss Reviews, Pros, Cons, Price, Scam And Legit?
Dietoxone ends meet. This has quite a few advantageous features. The road to using it begins with my...
0
2023-04-17T06:37:52
https://dev.to/dietoxone151889/dietoxone-fat-loss-reviews-pros-cons-price-scam-and-legit-31l4
webdev, javascript, beginners, programming
[Dietoxone](https://www.mid-day.com/brand-media/article/alert-2023-dietoxone-gummies-uk-and-ireland--dietoxone-keto-bhb-official-price-23281012) ends meet. This has quite a few advantageous features. The road to using it begins with my marvelous thoughts as this relates to my joke. Actually, we don't have to do something with some modus operandi ourselves. [https://www.mid-day.com/brand-media/article/alert-2023-dietoxone-gummies-uk-and-ireland--dietoxone-keto-bhb-official-price-23281012](https://www.mid-day.com/brand-media/article/alert-2023-dietoxone-gummies-uk-and-ireland--dietoxone-keto-bhb-official-price-23281012) https://infogram.com/dietoxone-1h7z2l89nj1lx6o https://www.sympla.com.br/produtor/dietoxoneinfo
dietoxone151889
1,438,398
Building Docker Images Smaller, Rootless and Non-Shell for Kubernetes
After building a Docker image faster, I wanted to build it for the K8s cluster. Running the container...
0
2023-04-17T08:51:17
https://rnemet.dev/posts/docker/image_k8s/
docker, kubernetes, tutorials
---
title: Building Docker Images Smaller, Rootless and Non-Shell for Kubernetes
published: true
date: 2023-04-14 13:43:46 UTC
tags: docker, kubernetes, tutorials
canonical_url: https://rnemet.dev/posts/docker/image_k8s/
---

After building [a Docker image faster](https://rnemet.dev/posts/docker/building_image_fast/), I wanted to build it for the K8s cluster. Running a container on the local machine isn't the same as running it on a cluster. I'm packaging a Go application in my example, but the same principles apply to any other language.

## Starting Dockerfile

I'm starting with the following Dockerfile (Dockerfile_1):

```dockerfile
ARG GO_VERSION=1.20.3

FROM golang:${GO_VERSION}-buster as builder

WORKDIR /app

COPY go.mod go.sum /app/
RUN go mod download -x

COPY . /app/
RUN go build -o app

FROM debian:buster

WORKDIR /app

COPY --from=builder /app/app /app/

ENTRYPOINT [ "/app/app" ]
```

And I build it with the following command:

```bash
docker buildx build -t rnemet/echo:0.0.1 . -f Dockerfile_1 --cache-to type=registry,ref=rnemet/echo:test --cache-from type=registry,ref=rnemet/echo:test --cache-from type=registry,ref=rnemet/echo:main --load
```

I'm using the `--cache-to` and `--cache-from` flags to cache the build process and `--load` to load the image into the local Docker daemon. I'm also using BuildKit to build the image.

Let's see what I got:

```bash
❯ docker images rnemet/echo
REPOSITORY    TAG     IMAGE ID       CREATED         SIZE
rnemet/echo   0.0.1   6fc43a3d85eb   4 minutes ago   133MB
```

If I run the container as a Pod in a K8s cluster or as a Docker container on my local machine:

```bash
❯ docker exec -it echo bash
root@30fa7aa78401:/app# whoami
root
```

I managed:

* to open a shell in the container
* to find out that I'm running as root
* to run the `whoami` command, which is not a command I need when I run the container

The `whoami` binary is not the problem; the fact that it is present is. I don't need it, and it's not part of my application. There must be others as well.
I need to clean up the image.

Then I learned that my app runs as root. This isn't good: my app needs to run as a non-root user. Why? There are many reasons. For example, with root privileges the app can access the whole system. Do you remember? The container runtime isolates the process from the host system using [namespaces](https://www.linux.com/news/understanding-and-securing-linux-namespaces/), but the process can still access the host system. As the cherry on top, I can open a shell in the container and run commands as root. I do not even want to discuss why this is not good.

## Smaller, Rootless, and Non-Shell Image

The first thing I have to do is change the target image. I'm using `debian:buster` as the target image, and that needs to change. I decided to use `scratch` as the target image. It's a minimal image: it's not a Linux distribution and it has no shell. It's a blank slate.

So my next Dockerfile (Dockerfile_2):

```dockerfile
ARG GO_VERSION=1.20.3

FROM golang:${GO_VERSION}-buster as builder

WORKDIR /app

COPY go.mod go.sum /app/
RUN go mod download -x

COPY . /app/
RUN go build -o app

FROM scratch

WORKDIR /app

COPY --from=builder /app/app /app/

ENTRYPOINT [ "/app/app" ]
```

And build it:

```bash
docker buildx build -t rnemet/echo:0.0.2 . -f Dockerfile_2 --cache-to type=registry,ref=rnemet/echo:test --cache-from type=registry,ref=rnemet/echo:test --cache-from type=registry,ref=rnemet/echo:main --load
```

And the result:

```bash
❯ docker images rnemet/echo
REPOSITORY    TAG     IMAGE ID       CREATED          SIZE
rnemet/echo   0.0.2   188f310d88e2   42 seconds ago   19.1MB
rnemet/echo   0.0.1   6fc43a3d85eb   59 minutes ago   133MB
```

Wow, the image is smaller. But it fails when I try to run it as a Pod in a K8s cluster or as a Docker container on my local machine. Argh...

The problem is how I build the application. I'm building it on a particular Linux image with Go installed.
That means all the libs and tools that Go needs to build and run the application are part of the builder image. So I must build the app with all its dependencies packed together. With Go, that's easy (Dockerfile_3):

```dockerfile
...
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o app
...
```

I'm using the `CGO_ENABLED=0` flag to disable Cgo, and the `-ldflags="-s -w"` flags to strip the debug information from the binary. And the results are:

```bash
❯ docker images rnemet/echo
REPOSITORY    TAG     IMAGE ID       CREATED             SIZE
rnemet/echo   0.0.3   81e4098f0e76   15 seconds ago      13MB
rnemet/echo   0.0.2   188f310d88e2   25 minutes ago      19.1MB
rnemet/echo   0.0.1   6fc43a3d85eb   About an hour ago   133MB
```

The image is smaller, and it works. But I'm still running as root, and I need to change that too. When working with the `scratch` image, I need to create the user in the `builder` image and then copy the `/etc/passwd` file into the `scratch` image (Dockerfile_3):

```dockerfile
ARG GO_VERSION=1.20.3

FROM golang:${GO_VERSION}-buster as builder

WORKDIR /app

COPY go.mod go.sum /app/
RUN go mod download -x

COPY . /app/
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o app

RUN useradd -u 10001 appuser

FROM scratch

WORKDIR /app

COPY --from=builder /etc/passwd /etc/passwd
USER appuser

COPY --chown=appuser:appuser --from=builder /app/app /app/

ENTRYPOINT [ "/app/app" ]
```

I'm running as a non-root user with no shell, and the image is smaller. I should be happy, but no. Why? Let's go a few steps back.

### Why Is the Image Smaller?

In this case, the image is smaller because I'm using the `scratch` image as the target, which brings the image down to around 19MB, and on top of that I'm stripping the debug information from the binary. That's why the image is so small. Nice, right?

### Cgo Or No Cgo?

I'm using the `CGO_ENABLED=0` flag to disable Cgo. Why? Because I'm using the `scratch` image as the target image. If I used `debian:buster` as the target image, I wouldn't need to disable Cgo. Why?
Because the `debian:buster` image has all the libs and tools that Go needs to build and run the application, so I don't need to pack them in with the application; the app can use the libs that are part of the image.

The `cgo` tool allows Go programs to call C code. It enables your Go app to call the OS's native libraries dynamically, which leads to smaller and faster builds. But, as you can see, it is not always possible to use it. In this case I have to disable it and create a static binary. The static binary will be bigger, but it will contain all the libs the app needs to run, making it more portable. It is common to disable `cgo` when creating multi-architecture images. Unless your app depends on C libraries, you can generally disable `cgo`; otherwise you have to leave `cgo` enabled. It's always some trade-off.

### Do I Always Need To Scratch When It Itches?

While the `scratch` image is minimal, it uses the `root` user, and I cannot simply use the `USER` instruction to change it. I had to do a workaround (see Dockerfile_3): add the user in the `builder` image and then copy the `/etc/passwd` file into the `scratch` image. Not a big deal, but it's a workaround, and I'm not fond of workarounds. I wonder if it is the only workaround I will need when using the `scratch` image.

Small detour. You need a base image that you can trust and use easily. When choosing a base image, you should trust its creator, and you don't want to have to make changes or workarounds. In most cases you'll see Alpine Linux, because it is small and people trust it. But it has its own [problems](https://martinheinz.dev/blog/92).

My personal choice is [Distroless images](https://github.com/GoogleContainerTools/distroless). They are small, secure, and easy to use. They are not as small as the `scratch` image, but still small. They are based on Debian, and they provide a `nonroot` user, so you don't need any workarounds. Let's check it out.
I'm using `distroless/static-debian11:nonroot` as the target image (Dockerfile_4):

```dockerfile
ARG GO_VERSION=1.20.3

FROM golang:${GO_VERSION}-buster as builder

WORKDIR /app

COPY go.mod go.sum /app/
RUN go mod download -x

COPY . /app/
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o app

FROM gcr.io/distroless/static-debian11:nonroot

WORKDIR /app

COPY --from=builder /app/app /app/

ENTRYPOINT [ "/app/app" ]
```

And the results are:

```bash
❯ docker images rnemet/echo
REPOSITORY    TAG     IMAGE ID       CREATED              SIZE
rnemet/echo   0.0.4   f04110959f3c   About a minute ago   15.5MB
rnemet/echo   0.0.3   0280e6cfb546   5 hours ago          13MB
rnemet/echo   0.0.2   188f310d88e2   6 hours ago          19.1MB
rnemet/echo   0.0.1   6fc43a3d85eb   7 hours ago          133MB
```

Using `distroless/static-debian11:nonroot` as the target image, I get an image size of around 15.5MB, and I'm running as a non-root user without any workarounds. The `USER` instruction is also available. My Dockerfile is clean and easy to read. I like it.

One last try, with `cgo` enabled, in Dockerfile_5:

```dockerfile
ARG GO_VERSION=1.20.3

FROM golang:${GO_VERSION}-buster as builder

WORKDIR /app

COPY go.mod go.sum /app/
RUN go mod download -x

COPY . /app/
RUN go build -ldflags="-s -w" -o app

FROM gcr.io/distroless/base-debian11:nonroot

WORKDIR /app

COPY --from=builder /app/app /app/

ENTRYPOINT [ "/app/app" ]
```

After building it, the results are:

```bash
❯ docker images rnemet/echo
REPOSITORY    TAG     IMAGE ID       CREATED          SIZE
rnemet/echo   0.0.5   030cf87dd36e   3 minutes ago    33.5MB
rnemet/echo   0.0.4   f04110959f3c   15 minutes ago   15.5MB
rnemet/echo   0.0.3   0280e6cfb546   5 hours ago      13MB
rnemet/echo   0.0.2   188f310d88e2   6 hours ago      19.1MB
rnemet/echo   0.0.1   6fc43a3d85eb   7 hours ago      133MB
```

So, the smallest image is the one using `scratch` as the target, but it requires some workarounds. The second smallest uses `distroless/static-debian11:nonroot` as the target image, with no workarounds needed.
When you compare Distroless images with the `scratch` image, you can see that the latter is smaller. But I trust Google, and I do not have to do any workarounds if I use Distroless images. In the case of the Go app I had only one workaround; if I needed to pack another app, based on Java or NodeJS, I could end up doing more work with a `scratch` image. Distroless images already come prepackaged with some tools, so I don't have to do any workarounds.

### Base Image Conclusion

Let's summarize the results:

| image:tag | base image | size | non-root user | no shell | comments |
|-----------|------------|------|---------------|----------|----------|
| rnemet/echo:0.0.1 | debian:buster | 133MB | no | no | - |
| rnemet/echo:0.0.2 | scratch | 19.1MB | no | no | does not work |
| rnemet/echo:0.0.3 | scratch | 13MB | yes | yes | static build, stripped debug information, requires workaround for non-root user |
| rnemet/echo:0.0.4 | distroless/static-debian11:nonroot | 15.5MB | yes | yes | static build, stripped debug information |
| rnemet/echo:0.0.5 | distroless/base-debian11:nonroot | 33.5MB | yes | yes | - |

In the end, I got what I wanted: a fast build, a small image, a non-root user, no shell access, and no workarounds. I'm happy with the results.

I'm not telling you to use Distroless images without any doubts. Maybe for your case [Alpine Linux](https://hub.docker.com/_/alpine) is better, as it is tiny. Or you prefer [Wolfi](https://github.com/wolfi-dev/). Or start from `scratch`. As you can see, there will be some trade-offs. You need to consider your app's needs, the effort you are willing to put into your Dockerfile, and your trust in the creator of the base image.

Ah, one more thing: please try to avoid `latest` tags. Use versioned base images whenever possible. You'll have much more control and less headache.

## Nice to Have: Labels

Add some metadata to your images. It can come in handy later on.
Like:

```dockerfile
LABEL version="0.0.1-beta"
LABEL vendor="rnemet.dev"
LABEL release-date="2024-02-12"
```

You could also use `ARG` to set labels. Like:

```dockerfile
ARG VERSION=0.0.1-beta
ARG VENDOR=rnemet.dev
ARG RELEASE_DATE=2024-02-12

LABEL version="${VERSION}"
LABEL vendor="${VENDOR}"
LABEL release-date="${RELEASE_DATE}"
```

## Conclusion

I hope you enjoyed this post and learned something new. If you found this post helpful, please share it with your friends. You can subscribe to my newsletter at the bottom of this post and/or share this post. I would appreciate it.

If you like what I write, check out my [newsletter](https://rnemet.substack.com/) and [blog](https://rnemet.dev).

## References

* [BuildKit](https://docs.docker.com/build/buildkit/)
* [Linux namespaces](https://www.linux.com/news/understanding-and-securing-linux-namespaces/)
* [Docker user namespaces](https://docs.docker.com/engine/security/userns-remap/)
* [Scratch image](https://hub.docker.com/_/scratch)
* [Building an image from scratch](https://docs.docker.com/build/building/base-images/#create-a-simple-parent-image-using-scratch)
* [Distroless images](https://github.com/GoogleContainerTools/distroless)
* [Wolfi](https://github.com/wolfi-dev/)
* [Alpine Linux](https://hub.docker.com/_/alpine)
madmaxx
1,438,479
Comparison of Apache AGE with Other Graph Databases and Graph Processing Frameworks
There are many graph databases and graph processing frameworks available, each with its own strengths...
0
2023-04-17T10:54:49
https://dev.to/hamza_ghouri/comparison-of-apache-age-with-other-graph-databases-and-graph-processing-frameworks-4di5
apacheage
There are many graph databases and graph processing frameworks available, each with its own strengths and weaknesses. Here is a comparison of Apache AGE with some of the other popular solutions:

**Apache Spark GraphX:** Apache Spark GraphX is a graph processing framework built on top of Apache Spark. It provides a distributed processing engine for graph processing and can handle large-scale graphs efficiently. However, it does not have a built-in graph database and requires integration with an external database.

**Neo4j:** Neo4j is a popular graph database that provides a native graph storage and processing engine. It is known for its fast query performance and scalability. However, it can be expensive to run at scale, and it does not have built-in support for distributed processing.

**JanusGraph:** JanusGraph is an open-source graph database that uses Apache Cassandra or Apache HBase for distributed storage and Apache TinkerPop for graph processing. It provides a scalable and distributed system for managing large-scale graphs. However, it can be complex to set up and configure.

Compared to these solutions, Apache AGE has the following advantages:

**Distributed processing:** Apache AGE provides a distributed storage system and uses Apache Spark for distributed processing, allowing it to handle large-scale graphs efficiently.

**Multiple programming language support:** Apache AGE supports multiple programming languages, including Python, Java, and Scala, making it easy to integrate with existing applications and workflows.

**Open-source:** Apache AGE is open-source software, meaning that it is free to use and has an active development community.

**Cypher query language:** Apache AGE uses the Cypher query language, which is similar to SQL but designed specifically for graph databases, making it easy to write and understand complex graph queries.
Overall, Apache AGE is a good choice for managing and analyzing large-scale graphs and relationships, providing a scalable and distributed system with support for multiple programming languages and a user-friendly query language.
hamza_ghouri
1,438,492
Implementing Login with Metamask, send Ether, user registration using React, NodeJS, Sequelize and GraphQL
## In this Project we are going to learn how you can implement a login functionality using React on...
0
2023-04-17T12:22:41
https://dev.to/olivermengich/implementing-login-with-metamask-send-ether-user-registration-using-react-nodejs-sequelize-and-graphql-35k7
web3, frontend, javascript, ethereum
[](url)## <u>In</u> this Project we are going to learn how you can implement a login functionality using React on frontend NodeJS backend with GraphQL and stores data on an SQL database. When creating DAPPs there’s always a need to implement functionality where users can login there metamask account to your applications once they are registered. In this article, we will be generating JWT tokens to allow users to access protected routes in NodeJS. Let’s look at the authentication flow in the image below. ![Auth flow](https://miro.medium.com/v2/resize:fit:720/format:webp/1*OGjgv9BT4w71MVYdi02MTg.png) ### Prerequisites * NodeJS * React * Metamask So, let’s get started. Go to your terminal, create a new folder name it any name you like. I’ll call it web3-login. ``` pnpm create vite@latest code web3-login ``` Now in this folder, let’s create another directory here, call it backend, and install a couple of dependencies: ``` mkdir backend cd backend npm init -y npm i bcryptjs web3 ethereumjs-util uuid express sequelize jsonwebtoken npm i express-graphql graphql sqlite3 npm i -D nodemon mkdir graphql mkdir database models cd graphql && mkdir resolvers && mkdir schema ``` Now, on your frontend run the following in terminal: ``` npm i @metamask/detect-provider web3 ``` Your folders should look like this: ![folder_vieq](https://miro.medium.com/v2/resize:fit:622/format:webp/1*Zao4zK_eoM7EgpnHqLhruw.png) Now, in your backend directory, make a new file. call it index.js. type in the following lines of code: ``` const express = require('express'); const {graphqlHTTP} = require('express-graphql'); // we will make this folder. 
Keep following const graphqlSchema = require('./graphql/schema/schema.graphql'); const graphqlResolver = require('./graphql/resolvers/resolvers.graphql.js'); const sequelize = require('./database/db'); const app = express(); app.use(express.json()); // you can also use the CORS library of nodejs app.use((req, res, next)=>{ res.setHeader("Access-Control-Allow-Origin","*"); res.setHeader("Access-Control-Allow-Methods","POST,GET,OPTIONS"); res.setHeader("Access-Control-Allow-Headers", 'Content-Type, Authorization'); if(req.method ==="OPTIONS"){ return res.sendStatus(200); } next(); }); // creates a graphql server app.use('/users',graphqlHTTP({ schema: graphqlSchema, rootValue: graphqlResolver, graphiql: true })) sequelize .sync() .then(res=>{ // only connects to app when app.listen(4000,()=>{ console.log("Backend Server is running!!"); }) }) .catch(err=>console.error); ``` Now, go to your database folder. create a new file. call it db.js. copy and paste this line of code. Also create a new file with the .sqlite extension. e.g. db.sqlite ``` const sequelize = require('../database/db'); const { DataTypes } = require('sequelize'); const User = sequelize.define('Users', { id: { type: DataTypes.UUIDV1, allowNull: false, primaryKey: true }, email: { type: DataTypes.STRING, allowNull: false, unique: true }, password: { type: DataTypes.STRING(64), allowNull: false }, address: { type: DataTypes.STRING, allowNull: false, unique: true }, nonce: { type: DataTypes.INTEGER, allowNull: false, unique: true }, },{ timestamps: true } ); module.exports = User; ``` We’ll use the nonce parameter for cryptographic communication. We store the address of user for verification. 1. Now, go to your graphql/schema folder 2. Create a new file. call it graphql.schema.js 3. Paste in the following lines of code ``` const {buildSchema} = require('graphql'); //we build the schema here. module.exports = buildSchema(` type User{ id: ID! email: String! password: String address: String! nonce: Int! 
} input UserInput{ email: String! password: String! address: String! } type AuthData{ userId: ID! token: String! tokenExpiration: Int! } type Token{ nonce: Int } type jwtToken{ token: String, message: String } type RootQuery { loginMetamask(address: String!): Token! login(email: String!, password: String!): AuthData! signatureVerify(address: String!,signature: String!): jwtToken! users: [User!] } type RootMutation { register(userInput: UserInput): User! } schema { query: RootQuery mutation: RootMutation } `); ``` 1. Go to graphql/resolvers folder 2. Create a new file. call it graphql.resolvers.js 3. Let's start with user registration ``` const User = require('../../models/user.model.js'); module.exports = { //from our schema, we pass in the user email, password and address, // a nonce is generated register: (args) => { return User.create({ id: uuid.v1(), ...args.userInput, nonce: Math.floor(Math.random() * 1000000) }).then(res=>{ console.log(res); return res }).catch(err=>{ console.log(err); }) }, } ``` Now, we go on the frontend: ``` return( <div className="login_metamask"> <h1>Metamask</h1> <div className="success"> <h3>Succcessfull Transaction</h3> <div className="animation-slider"></div> </div> <section className="metamask__action"> <button onClick={Enable_metamask} className="enable_metamask"> <img src={require("../images/metamask.png")} alt="metamaskimage" /> <h2>Enable Metamask</h2> </button> <button onClick={send_metamask} className="enable_metamask"> <img src={require("../images/metamask.png")} alt="metamaskimage" /> <h2>Send Ether</h2> </button> <button onClick={Login_metamask} className="enable_metamask"> <img src={require("../images/metamask.png")} alt="metamaskimage" /> <h2>Login with Metamask</h2> </button> <button onClick={Logout_metamask} className="enable_metamask"> <img src={require("../images/metamask.png")} alt="metamaskimage" /> <h2>LOGOUT</h2> </button> </section> </div> ) ``` ![Image of 
metamsk](https://miro.medium.com/v2/resize:fit:1400/format:webp/1*WDhvFTB9mqosjryWAYtG8A.png) Now, we need to enable meta mask. When you click the “ENABLE METAMASK” button, the code will run and pop up window will open. Key in your password. ``` import Web3 from "web3"; import detectEthereumProvider from "@metamask/detect-provider"; async function Enable_metamask(){ const provider = await detectEthereumProvider(); if (provider) { window.web3 = new Web3(provider); web3 = window.web3; ethereum = window.ethereum; const chainId = await ethereum.request({ method: 'eth_chainId' }); const accounts = await ethereum.request({ method: 'eth_requestAccounts' }); account = accounts[0]; console.log(chainId, accounts); } } ``` Now, let’s implement the login with Metamask functionality: ``` async function Login_metamask(){ //1. First we query for the user nonce in the backend passing in the user account address let requestBody={ query: ` query { loginMetamask(address: "${account}"){ nonce } } ` } const handleSignMessage = (nonce, publicAddress) => { return new Promise((resolve, reject) => web3.eth.personal.sign( web3.utils.fromUtf8(`Nonce: ${nonce}`), publicAddress, (err, signature) => { if (err) return reject(err); return resolve({ publicAddress, signature }); } ) ); } //2. if metamask is enabled we send in the request if(web3 && ethereum && account){ fetch('http://localhost:4000/users', { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify(requestBody), }) .then(res=>{ if(res.status !== 200 && res.status !==201){ //3. if we get an error, respond error throw new Error("Failed"); } return res.json(); }) .then(data => { console.log(data); //4. we log retrieve the nonce const nonce = data.data.loginMetamask.nonce; console.log(nonce); if(nonce != null){ //5. we then generate a signed message. 
and send it to the backend return handleSignMessage(nonce,account) .then((signedMessage)=>{ console.log(signedMessage.signature) requestBody = { query: ` query { signatureVerify(address: "${account}",signature: "${signedMessage.signature}"){ token message } } ` } fetch('http://localhost:4000/users', { method: 'POST', headers: { 'Content-Type': 'application/json', }, body: JSON.stringify(requestBody), }) .then(response =>{ if(response.status !== 200 && response.status !==201){ throw new Error('Failed'); } return response.json(); }) .then(data => { console.log(data); }) .catch(err => console.error(err)); }) }else{ //Redirect the user to the registration site. console.log('Please Register at our site. ') } }) .catch((error) => { console.error('Error encountered:', error); }); }else{ await Enable_metamask(); } } ``` Now, on our backend we receive this request and process it: ``` const User = require('../../models/user.model.js'); const uuid = require('uuid'); const jwt = require('jsonwebtoken'); const ethUtil = require('ethereumjs-util'); function VerifySignature(signature,nonce) { const msg = `Nonce: ${nonce}`; //convert the message to hex const msgHex = ethUtil.bufferToHex(Buffer.from(msg)); //recover the signer's address from the signature const msgBuffer = ethUtil.toBuffer(msgHex); const msgHash = ethUtil.hashPersonalMessage(msgBuffer); const signatureBuffer = ethUtil.toBuffer(signature); const signatureParams = ethUtil.fromRpcSig(signatureBuffer); const publicKey = ethUtil.ecrecover( msgHash, signatureParams.v, signatureParams.r, signatureParams.s ); const addressBuffer = ethUtil.publicToAddress(publicKey); const secondaddress = ethUtil.bufferToHex(addressBuffer); return secondaddress; } module.exports = { //from our schema, we pass in the user email, password and address, // a nonce is generated register: (args) => { return User.create({ id: uuid.v1(), ...args.userInput, nonce: Math.floor(Math.random() * 1000000) }).then(res=>{ console.log(res); return res }).catch(err=>{ console.log(err); }) }, loginMetamask: function({address}){ return User.findOne({ where: { address } }) .then(user=>{ if(!user){ // if user doesn't
exist, return null return{ nonce: null } } return { //else return the nonce of user nonce: user.dataValues.nonce } }) .catch(err=>{ throw err; }) }, signatureVerify: async function({address, signature}){ const user = await User.findOne({ where: { address } }); if(!user){ return{ token: null, message: 'User not Found. Sign In again' } } // then verify the signature sent by the user let secondaddress = VerifySignature(signature,user.nonce) if(address.toLowerCase() === secondaddress.toLowerCase()){ //change the user nonce so the same signature cannot be replayed user.nonce = Math.floor(Math.random() * 1000000); await user.save() const token = await jwt.sign({address: user.address,email: user.email},'HelloMySecretKey',{expiresIn: '1h'}); return{ token: token, message: 'Signature verified. Login successful.' } }else { return{ token: null, message: 'Signature verification failed. Sign In again' } } }, } ``` From this, a JSON web token will be returned to the client. We can use this to authenticate against protected routes on our server. Now, for the user experience: {% embed https://www.youtube.com/embed/vkgj_lCKPtY %} #### Wrapping Up **This is a simple yet effective authentication mechanism with Metamask. Find the source code on GitHub: https://github.com/OliverMengich/web3LoginSendEtherMetamask.git Thank you.**
olivermengich
1,438,546
Best Practices to protect an RDS MySQL Database ✅
Amazon RDS is a very popular choice for creating MySQL databases in the cloud. Many modern companies...
0
2023-04-17T11:32:10
https://dev.to/aws-builders/best-practices-to-protect-an-rds-mysql-database-2f95
aws, mysql, devops, tutorial
Amazon RDS is a very popular choice for creating MySQL databases in the cloud. Many modern companies use it to store their business data. However, as with any other database, securing these databases requires special attention to protect against potential threats and vulnerabilities. In this article, we will explore 10 best practices for securing your AWS RDS MySQL database instance. ## 1. Use strong passwords and rotate them The use of strong passwords is an essential measure to protect your database instance. A strong password consists of a random combination of upper and lower case letters, numbers and special characters. It should be at least 20 characters long. It is also important to change your password regularly. Passwords should be changed at regular intervals, for example every 60 to 90 days. Use the MySQL [validate_password](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Concepts.PasswordValidationPlugin.html) plugin, which provides additional security like password expiration, password reuse restrictions or password failure tracking. ## 2. Keep MySQL Updated One of the major advantages of the managed services offered by AWS is that you don’t have to manage them and can activate automatic updates! It would be silly to do without this… Granted, it doesn’t protect against 0-day flaws, but it’s already very good to protect against known (and patched!) flaws. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sgj7sncu9hxakupuu70i.png) ## 3. Change default port This good practice protects against bots that regularly scan the internet for instances that are not sufficiently protected. It is very basic but it would be a shame to do without it. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kznfhr6wzn57hoduqckc.png) ## 4. Use different users with different permissions A user should be created for each use. 
For example, if we have a script that makes backups of our application, we will not use the same user as the application. A read-only user is sufficient here. The root user should only be used to administer the instance, not to use it! ## 5. Disconnect your RDS instance from the internet If you have the luxury of being able to disconnect your instance from the internet and make it accessible only from within its VPC network, do it! ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dn5quz5711pq987fewgt.png) ## 6. Use a Firewall An AWS Security Group can be used to allow only certain IP addresses to connect to your instance's port: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0bawwe5v50jv7woe867a.png) ## 7. Use MySQL engine host check Instead of creating your MySQL users like this: `CREATE USER 'new_user'@'%' IDENTIFIED BY 'password';`, use the MySQL engine host check: ```sql # Fixed IP address CREATE USER 'new_user'@'123.123.123.123/255.255.255.255' IDENTIFIED BY 'password'; # Private IP range from your subnet CREATE USER 'new_user'@'192.168.0.0/255.255.255.0' IDENTIFIED BY 'password'; ``` ## 8. Use SSL Implement encryption in transit using SSL: ```sql ALTER USER 'new_user'@'123.123.123.123/255.255.255.255' REQUIRE SSL; ``` ## 9. Grant desired privileges to desired databases Instead of granting your MySQL user privileges like this: `GRANT ALL PRIVILEGES ON *.* TO 'new_user'@'123.123.123.123/255.255.255.255';`, grant desired privileges to desired databases: ```sql GRANT SELECT, INSERT, UPDATE, DELETE ON my_db.* TO 'some_user'@'123.123.123.123/255.255.255.255'; ``` ## 10. Use 2FA or 3FA MySQL 8.0.27 and higher supports multifactor authentication (MFA), such that accounts can have up to three authentication methods. IAM authentication can also enable multi-factor authentication. _**If you liked this post, you can find more on my blog https://adrien-mornet.tech/ 🚀**_
adrienmornet
1,461,679
brokerow
Reviews of current forex brokers — is a broker a scammer and a fraud, or is it worth taking a closer look at it from...
0
2023-05-08T20:19:49
https://dev.to/amolicamil17888/brokerow-lc7
brokerów, inwestycje
Reviews of current forex brokers: is a broker a scammer and a fraud, or is it worth taking a closer look at it as a potential investment? Traders' opinions on investing with Montana Trading LTD <a href="https://forum2.pl/index.php?topic=152.0">Montana Trading LTD</a> The financial industry is open to innovation, and Montana Trading LTD is a brokerage house that is quickly gaining recognition thanks to its innovative approach and its commitment to its clients' success. With a team of talented and passionate specialists, Montana Trading LTD is redefining the brokerage business, delivering outstanding service and results to its growing client base. In this review, we will look at the founding history of the Montana Trading LTD broker, the unique features that set it apart from the competition, and the reasons for its dynamic growth in the financial market.
amolicamil17888
1,461,925
Automatically reconnect to the database when the connection is lost
Automatically reconnect DB when connection...
0
2023-05-09T03:04:04
https://dev.to/daitran1412/tu-dong-ket-noi-lai-csdl-khi-mat-ket-noi-16f8
{% stackoverflow 76205544 %}
daitran1412
1,462,045
How to create a translate project with Django
This article will create a translator app using Django, Django is a popular Python web framework....
0
2023-05-09T06:11:25
https://dev.to/abdullafajal/how-to-create-a-translate-project-with-django-hl0
django, translate, python
This article will create a translator app using Django. Django is a popular Python web framework. This app will allow users to input text in one language and receive a translation in another language. We will be using the translate python package to do the translation. If you want to learn Python and Django, then our courses are available, you can see them. Let us begin without wasting your time at all. I think Django will already be installed in your system. If not, then you can install it like this: ``` pip install django ``` And install the translate package: ``` pip install translate ``` Now let's create our project: ``` django-admin startproject translator ``` A project named translator will be created, you can do cd translator and go inside it. ``` cd translator ``` You have to create an app, you can give any name to the project and app, we have given it the name app. ``` python manage.py startapp app ``` Now you have to mention this app in your project settings in INSTALLED_APPS ``` INSTALLED_APPS = [ "django.contrib.admin", "django.contrib.auth", "django.contrib.contenttypes", "django.contrib.sessions", "django.contrib.messages", "django.contrib.staticfiles", "app", # new ] ``` Now we need to create a view that will handle the translation. Open up app/views.py and add the following code: ``` from django.shortcuts import render from translate import Translator # Create your views here. def home(request): if request.method == "POST": text = request.POST["translate"] to_lang = request.POST["tolanguage"] from_lang = request.POST["fromlanguage"] translator = Translator(to_lang=to_lang, from_lang=from_lang) translation = translator.translate(text) context = { "translation": translation, } return render(request, "home.html", context) return render(request, "home.html") ``` This code defines a view function called home() which is associated with the route for the homepage ("/") in a Django web application.
The function starts by checking the request method. If the request method is "POST", the function extracts data from the form submitted through the POST method. The text variable contains the text entered in the form, the to_lang variable contains the target language selected in the form and the from_lang variable contains the source language selected in the form. A new instance of the Translator class from the translate module is then created, passing in the to_lang and from_lang variables as parameters. The translate() method of the Translator instance is then called with text as the argument to translate the text. The translated text is then stored in the translation variable. Finally, a dictionary named context is created, which contains the translated text as a value associated with the key "translation". The home() function then renders the "home.html" template, passing in the context dictionary as a context variable. If the request method is not "POST", the function simply renders the "home.html" template without any context variable. We have to create the template mentioned in the views, and for that, we have to create a directory named templates inside our app, in that, we will write the code of the template. [Read More Here](https://espere.in/How-to-create-a-translate-project-with-Django/)
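One piece not shown above is the URL configuration that connects the home view to the site root. A minimal sketch, assuming the project and app names used in this article (translator and app):

```python
# translator/urls.py — wire the home view to the root URL (sketch)
from django.contrib import admin
from django.urls import path

from app import views

urlpatterns = [
    path("admin/", admin.site.urls),
    path("", views.home, name="home"),
]
```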
abdullafajal
1,462,070
User interface for the coffee shop mobile application
A post by Bima Laroi Bafih
0
2023-05-09T07:16:42
https://dev.to/bimaexz/ui-4igo
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/34tqu8momiu0f63dr3yn.jpg)
bimaexz
1,462,134
The Functional Programming jungle: Ramda versus monads and monoids
There exist many tutorials and introductory blogs on Functional Programming (FP from now...
22,921
2023-05-09T11:16:37
https://dev.to/mjljm/the-functional-programming-jungle-ramda-versus-monads-and-monoids-4agm
typescript, functional, ramda, transduce
There exist many tutorials and introductory blogs on Functional Programming (FP from now on). However, I think too many are theoretical, scary and very time consuming. If you already are an FP nerd, skip this series of articles. It won’t bring anything new. This series aims at giving a simple and yet clear introduction to functional programming, what it is, what it is not and what benefits you can draw from it. In this first article, I will share my experience of converting to FP and will try to guide you through the jungle of FP libraries. ## Jumping into the FP jungle Like most programmers around the place, I was an Object-Oriented Programming (OOP) fan. However, it had already dawned upon me that the available JS tools for array and list management were a bit limited. So, I would never fail to import Ramda or Lodash when my code took me to data manipulation. That included using array map, reduce or forEach functions. At some point, a bit curious about all these weird words that you find in the Ramda doc (words like functors, transducers, transformers,…) , I started digging more deeply. ## Ramda and the likes Back then, my perception was that a functional programmer is a person who sticks to his well-known OOP paradigm but writes lots of small functions and makes extensive use of Ramda or the likes when it comes to manipulating data. The internet is full of blogs and articles that will reinforce that cliché. If you look for “functional programming library javascript” on your favorite search engine, you will find dozens of libraries like Ramda, Rambda, Rambdax, Remeda, Lodash, Bluebird, Curry,… Each of these libraries has its active community that discusses in great details the benefits of extensively using reduce or transduce when it comes to analyzing or managing data. The usual “10 best FP libraries for 2023” websites mainly put the focus on execution speed and function count. 
The Grail seems to be that new library that does as much as all the previous ones but much faster and with more available functions. So, can we reduce Functional Programming to an extensive and clever use of the reduce function (accidental pun)? ## Monads and monoids However, if you can spare the time, you might also notice while roaming around on the internet a bunch of other libraries written by seemingly mad scientists who keep throwing Monads and Monoids at you. Examples of such libraries are TS-belt, Folktale, FP-TS, Effect… All these libraries seem very scary at first glance. You will find lots of chitchat on forums discouraging you from looking further in that direction. The main reasons invoked are: - not very widespread in the IT community: other developers will not be able to understand and maintain your code. - difficult to integrate: FP does not go well with existing frameworks. - beautiful but too theoretical: coders have to be practical and efficient. It’s enough to put a bit of Ramda here and there in your OOP dev. It’s easy to let oneself be convinced to look away from monads and monoids. The names are (purposefully?) scary. Most of this stuff is written by mathematicians and academics. Reading articles on these topics is a time-consuming hassle (it really is). ## So why would you want to go further than Ramda? Confining oneself to using Ramda now and again amounts to swimming in the baby pool without ever wanting to dive into the large deep ocean. FP is not about providing utilities for arrays. Ramda definitely was a necessary step in the history of Functional Programming. But, as often in science, it has been absorbed and outperformed by the broader concepts it gave birth to. The map function that we all use with arrays has been theorized and can in fact be applied to many more objects.
Nowadays, functional programming is about efficient function composition, data aggregation, asynchronous and synchronous code management, error and resource handling, code reliability and testing… All these topics are very current in modern development. Going into the monadic world is definitely taking a lead in development techniques. However, I think it unnecessary to talk long hours about monads and monoids and the mathematics underpinning them. That would make me very smart but wouldn’t help you much. Instead, I suggest we take a pragmatic example and try to figure out how OOP and FP approaches would differ in that particular case. We can then pragmatically discuss the pros and cons of each solution. That will be the topic of the next article in this series.
mjljm
1,462,288
How we implemented the card animation in Appwrite Cloud Public Beta
For the announcement of the public beta of Cloud, we wanted to create something unique that everyone...
0
2023-05-09T11:54:31
https://dev.to/appwrite/how-we-implemented-the-card-animation-in-appwrite-cloud-public-beta-4npb
webdev, frontend, svelte, css
For the announcement of the public beta of Cloud, we wanted to create something unique that everyone could call their own. We decided to create a personalized card, with animation and interactivity sprinkled in, to really give the whole experience a “special” feel. The Cloud Beta Card page in the Appwrite Console ![Recording of the cloud card on console](https://cloud.appwrite.io/v1/storage/buckets/645a32f6c1de73d0babb/files/645a337ec39e649e2063/view?project=645a32f155b26e6044f9) When our design team showed us the design, we LOVED it. But soon, reality struck, and I realized I was going to have to *implement* this design. It needs to look good, work on multiple browsers, and perform well. Quite the challenge! ![Designers happy with their design, while frontend developer drowns in fear](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ze0ul8qow977nquqba0o.png) Nevertheless, we pulled it off with the power of Svelte and CSS, and I'm excited to share what I learned with everyone. ## Inspiration Our inspiration for this feature came from [https://poke-holo.simey.me/](https://poke-holo.simey.me/), which features a similar card animation effect for Pokemon cards. This project, like Appwrite, is open-source, and like our console, built with Svelte! Meaning we could learn a lot from it. ## The Implementation Process The final card animation can be broken down into pieces. We’ll be going over the main ones in isolation: - “Popping up” the card on click - Rotating the card - Card shine You can preview the code and output here: [https://appwrite-card-snippets.vercel.app/](https://appwrite-card-snippets.vercel.app/) If you’re curious, you can also check out the [source code](https://github.com/appwrite/console/blob/cloud-cards/src/routes/card/Card.svelte) of the card element, where we integrate all of these pieces and some *extra* details ✨ ### Set up the base HTML structure The card needs a back and front side to it.
Since we’re doing animations in the 3D space, we also use the `perspective` CSS attribute. Make sure to bring your 3D glasses! ![https://media3.giphy.com/media/Gjy7TJTRD3dixQNNB7/giphy.gif?cid=ecf05e471wtvkifjomcdhcyk6tmnz9yh28ep7zleqe3cj0cq&ep=v1_gifs_search&rid=giphy.gif&ct=g](https://media3.giphy.com/media/Gjy7TJTRD3dixQNNB7/giphy.gif?cid=ecf05e471wtvkifjomcdhcyk6tmnz9yh28ep7zleqe3cj0cq&ep=v1_gifs_search&rid=giphy.gif&ct=g) The result of this markup is the following: ```html <div class="card"> <div class="card-inner"> <div class="card-back"> <img src="https://cloud.appwrite.io/v1/cards/cloud-back?mock=normal" alt="The back of the Card" loading="lazy" width="450" height="274" /> </div> <div class="card-front"> <img src="https://cloud.appwrite.io/v1/cards/cloud?mock=normal" alt="The front of the card" width="450" height="274" /> </div> </div> </div> <style> .card { perspective: 1000px; } .card-inner { display: grid; transition: transform 0.8s; transform-style: preserve-3d; } /* Do an horizontal flip when you move the mouse over the flip box container */ .card:hover .card-inner { transform: rotateY(180deg); } /* Position the front and back side */ .card-front, .card-back { grid-area: 1/1; backface-visibility: hidden; } /* Rotate the back side 180 degrees */ .card-back { transform: rotateY(180deg); } </style> ``` You can preview it [here](https://appwrite-card-snippets.vercel.app/structure). ### Popping up the card We want the card to pop up when the user clicks it, so they can have a closer look. While popping up, we also want the card to spin around because it’s more fun that way 🕺 We will use Svelte’s `spring` store to achieve this effect. Whenever we set the store to a new value, they’ll smoothly transition to it instead of immediately changing. We’ll need two stores, `scale` for controlling the card size and `rotateDelta` to control the rotation. 
```html <script> import { spring } from 'svelte/motion'; let active = true; const smooth = { stiffness: 0.03, damping: 0.45 }; const scale = spring(1, smooth); const rotateDelta = spring(0, smooth); function popup() { scale.set(1.45); rotateDelta.set(360); } function retreat() { scale.set(1); rotateDelta.set(0); } $: if (active) { popup(); } else { retreat(); } $: style = [`--scale: ${$scale}`, `--rotateDelta: ${$rotateDelta}deg`].join(';'); </script> <div class="card" {style}> <button class="card-inner" on:click={() => (active = !active)}> <div class="card-back"> <img src="https://cloud.appwrite.io/v1/cards/cloud-back?mock=normal" alt="The back of the Card" loading="lazy" width="450" height="274" /> </div> <div class="card-front"> <img src="https://cloud.appwrite.io/v1/cards/cloud?mock=normal" alt="The front of the card" width="450" height="274" /> </div> </button> </div> <style> /* Button reset */ button { background: none; border: none; padding: 0; cursor: pointer; outline: inherit; } .card { perspective: 1000px; } .card-inner { display: grid; transform: scale(var(--scale)) rotateY(var(--rotateDelta)); transform-style: preserve-3d; } /* Position the front and back side */ .card-front, .card-back { grid-area: 1/1; backface-visibility: hidden; } /* Rotate the back side 180 degrees */ .card-back { transform: rotateY(180deg); } </style> ``` You can preview the result [here](https://appwrite-card-snippets.vercel.app/popup). ### Rotating the card with the cursor Now comes my favorite part, rotating the card around with your cursor! We want to create an enjoyable experience by letting the user control the card beyond just popping it up and allowing them to move the card freely. We’ll also be using the `spring` store here, but we’ll only need one this time, `rotate`, which has an x and y-axis. 
```html <script lang="ts"> import { spring } from 'svelte/motion'; const smooth = { stiffness: 0.066, damping: 0.25 }; const rotate = spring({ x: 0, y: 0 }, smooth); const round = (num: number, fix = 3) => parseFloat(num.toFixed(fix)); function getMousePosition(e: MouseEvent | TouchEvent) { if ('touches' in e) { return { x: e?.touches?.[0]?.clientX, y: e?.touches?.[0]?.clientY, }; } else { return { x: e.clientX, y: e.clientY, }; } } const interact = (e: MouseEvent | TouchEvent) => { const { x: clientX, y: clientY } = getMousePosition(e); const el = e.target as HTMLElement; const rect = el.getBoundingClientRect(); // get element's current size/position const absolute = { x: clientX - rect.left, // get mouse position from left y: clientY - rect.top, // get mouse position from right }; const center = { x: round((100 / rect.width) * absolute.x) - 50, y: round((100 / rect.height) * absolute.y) - 50, }; rotate.set({ x: round(-(center.x / 3.5)), y: round(center.y / 2), }); }; const interactEnd = () => { setTimeout(() => { rotate.set({ x: 0, y: 0 }); }, 500); }; $: style = [`--rotateX: ${$rotate.x}deg`, `--rotateY: ${$rotate.y}deg`].join(';'); </script> <div class="card" {style}> <div class="card-inner" on:pointermove={interact} on:mouseout={interactEnd} on:blur={interactEnd}> <div class="card-back"> <img src="https://cloud.appwrite.io/v1/cards/cloud-back?mock=normal" alt="The back of the Card" loading="lazy" width="450" height="274" /> </div> <div class="card-front"> <img src="https://cloud.appwrite.io/v1/cards/cloud?mock=normal" alt="The front of the card" width="450" height="274" /> </div> </div> </div> <style> .card { perspective: 1000px; } .card-inner { display: grid; transform-style: preserve-3d; transform: rotateY(var(--rotateX)) rotateX(var(--rotateY)); transform-origin: center; } /* Position the front and back side */ .card-front, .card-back { grid-area: 1/1; backface-visibility: hidden; } /* Rotate the back side 180 degrees */ .card-back { transform: 
rotateY(180deg); } </style> ``` You can see the result [here](https://appwrite-card-snippets.vercel.app/rotate). ### Glare Last but not least, we can also add a little bit of shine to the card, augmenting the 3D feel. ```html <script lang="ts"> import { spring } from 'svelte/motion'; const smooth = { stiffness: 0.066, damping: 0.25 }; const glare = spring({ x: 0, y: 0, o: 0 }, smooth); const round = (num: number, fix = 3) => parseFloat(num.toFixed(fix)); function getMousePosition(e: MouseEvent | TouchEvent) { if ('touches' in e) { return { x: e?.touches?.[0]?.clientX, y: e?.touches?.[0]?.clientY, }; } else { return { x: e.clientX, y: e.clientY, }; } } const interact = (e: MouseEvent | TouchEvent) => { const { x: clientX, y: clientY } = getMousePosition(e); const el = e.target as HTMLElement; const rect = el.getBoundingClientRect(); // get element's current size/position const absolute = { x: clientX - rect.left, // get mouse position from left y: clientY - rect.top, // get mouse position from top }; glare.set({ x: round((100 / rect.width) * absolute.x), y: round((100 / rect.height) * absolute.y), o: 1, }); }; const interactEnd = () => { setTimeout(() => { glare.update((old) => ({ ...old, o: 0 })); }, 500); }; $: style = [`--glareX: ${$glare.x}%`, `--glareY: ${$glare.y}%`, `--glareO: ${$glare.o}`].join( ';' ); </script> <div class="card" {style}> <div class="card-inner" on:pointermove={interact} on:mouseout={interactEnd} on:blur={interactEnd}> <div class="card-back"> <img src="https://cloud.appwrite.io/v1/cards/cloud-back?mock=normal" alt="The back of the Card" loading="lazy" width="450" height="274" /> </div> <div class="card-front"> <img src="https://cloud.appwrite.io/v1/cards/cloud?mock=normal" alt="The front of the card" width="450" height="274" /> <div class="card-glare" /> </div> </div> </div> <style> .card { perspective: 1000px; } .card-inner { display: grid; transform-style: preserve-3d;
transform: rotateY(var(--rotateX)) rotateX(var(--rotateY)); transform-origin: center; } /* Position the front and back side */ .card-front, .card-back { grid-area: 1/1; backface-visibility: hidden; } .card-front { display: grid; } .card-front > * { grid-area: 1/1; } /* Rotate the back side 180 degrees */ .card-back { transform: rotateY(180deg); } .card-glare { border-radius: 14px; transform: translateZ(1px); z-index: 4; background: radial-gradient( farthest-corner circle at var(--glareX) var(--glareY), rgba(255, 255, 255, 0.8) 10%, rgba(255, 255, 255, 0.65) 20%, rgba(0, 0, 0, 0.5) 90% ); mix-blend-mode: overlay; opacity: calc(var(--glareO) * 0.5); } </style> ``` You can view the result [here](https://appwrite-card-snippets.vercel.app/glare). ## The End Result The end result was a smooth and visually appealing card animation that added a touch of interactivity to our dashboard. We hope this detailed breakdown of our implementation process is helpful for other developers looking to add similar effects to their web applications. And we also hope it inspired a bit of awe with the magic that is frontend development 🪄 Thank you for choosing Appwrite Cloud Beta for your cloud computing needs!
thomasglopes
1,462,316
The Complete Guide to OOP In Javascript
We know that Javascript is a multi-paradigm language. That is, it supports functional as well as...
0
2023-06-02T07:17:45
https://dev.to/merudra754/the-complete-guide-to-oop-in-javascript-2lk0
javascript, webdev, programming, development
We know that Javascript is a multi-paradigm language. That is, it supports functional as well as object oriented programming. In this blog, I am explaining the concept of OOP in Javascript in detail. ## What is Object Oriented Programming - Object Oriented Programming is a programming paradigm based on the concept of objects. - Objects are used to model (describe) real-world or abstract features. - Objects contain data (properties) and behaviour (methods). By using objects we can pack data and corresponding behaviour in one block. - OOP was developed with the goal of organizing code, to make it more flexible and easier to maintain (avoid spaghetti code). There are six very important terminologies related to OOP: 1] **Classes** - A class is a blueprint or template to build new objects. - Classes are non-tangible logical entities, that is, they don't occupy memory. - A class can exist without its object but the converse is not true. 2] **Objects** - Objects are instances of a class. - They are real world entities, i.e. they occupy memory. 3] **Data Abstraction** - Data abstraction means hiding the unnecessary details and only showing the useful data to the user. 4] **Data Encapsulation** - Encapsulation means wrapping up of properties and behaviours in a single unit. - Encapsulation is achieved by making all the properties and some methods private in a class. - Encapsulation leads to data hiding. - Getters are used to read values and setters are used to change the class state, hence restricting outsiders from manipulating the class state. - Encapsulation promotes loose coupling, i.e. other sections of the code can be changed or refactored without affecting other parts. - Encapsulation helps in maintaining security and avoiding accidental bugs, because the properties/fields are hidden from the outer world. - In Javascript, encapsulation can be achieved by using the ES6 classes as well as function closures.
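To make the closure approach concrete, here is a small sketch (a hypothetical counter, not taken from any library): the count variable lives only inside the closure, so outside code can read it through the getter but can never touch it directly.

```javascript
// Encapsulation with a function closure: `count` is private.
function createCounter() {
  let count = 0; // not reachable from outside the closure
  return {
    increment() { count += 1; },
    // getter exposes the value without exposing the variable
    get value() { return count; }
  };
}

const counter = createCounter();
counter.increment();
counter.increment();
console.log(counter.value); // 2
console.log(counter.count); // undefined — the field is hidden
```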
5] **Polymorphism** - Polymorphism means the ability of a method to act in more than one way, depending upon the situation. - A child class can override the methods present in its parent class. - It promotes the DRY (Don't Repeat Yourself) principle. 6] **Inheritance** - It means the ability of an object to inherit properties and behaviours from its parent object. - Inheritance promotes reusability and maintainability. <hr> Now that we have learnt the important terminologies of OOP, let us learn how OOP is actually implemented in Javascript. OOP in Javascript is completely different from other traditional object oriented programming languages like C++ and Java. - Everything in Javascript is either a primitive or an object. **In Javascript, Object Oriented Programming is implemented using Prototypes and Prototypal Inheritance** **Prototypes** Prototypes are the mechanism by which Javascript objects inherit features from one another. **Prototypal Inheritance** Before going to the definition of prototypal inheritance, let me ask you a question. You write the following code in the console. <kbd>![Prototypal Inheritance Demo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9sw4d84mhcokdx4jl15y.png)</kbd> Here, you have made only one property in the Student object, but you can see in the following image that there are other properties and methods already available in the Student object along with the **name** property. <kbd> ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4cf2nifdetugmhof98z.png) </kbd> So how is that possible? It is possible because of prototypal inheritance. Prototypal inheritance in Javascript is the linking of the prototype of a parent object to a child object, so that the child can share and utilize the properties of the parent. Prototypes are hidden objects that are used to share the properties and methods of a parent class with child classes. 
The prototype contains methods that are accessible to all objects linked to that prototype. ## Classical OOP vs OOP in JS <kbd> ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gn11au38vo2gdupq2qrz.png) </kbd> - In classical OOP, the child class has all the public properties and methods of its parent class. That is, the flow of data is from parent to child. - In Javascript, the child object delegates (entrusts) method and property lookups to its prototype object. That is, the flow of data is from child to parent. <hr> ## How do we actually create prototypes? And how do we link objects to prototypes? How can we create new objects, without having classes? Prototypes can be created in three ways: 1. Constructor functions 2. ES6 classes 3. Object.create() method Now, I will explain each way in detail :) ## 1. Constructor Functions When a regular function is called using the **new** keyword, the function acts as a constructor function and returns a brand new object, whose properties and methods can be set by passing parameters and attaching those values to the **this** keyword. ```javascript let Student = function (firstName,lastName){ this.firstName = firstName this.lastName = lastName } let rudra = new Student("Rudra","Pratap") console.log(rudra) // Student { firstName: 'Rudra', lastName: 'Pratap' } console.log(rudra.firstName) // Rudra ``` Now, we can use this **Student** constructor function to create as many other objects as we want. ```javascript let rudra = new Student("Rudra","Pratap") console.log(rudra.firstName) // Rudra let tim = new Student("Tim","Cook") console.log(tim.lastName) // Cook let zuck = new Student("Mark","Zuck") console.log(zuck.firstName) // Zuck ``` <br> Every constructor function has a property called prototype. And that prototype property contains at least two more properties: the definition of the constructor function and a **\_\_proto\_\_** object. 
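As a quick sketch of this relationship (reusing the Student example from above), you can verify both links in code:

```javascript
let Student = function (firstName, lastName) {
  this.firstName = firstName
  this.lastName = lastName
}

let rudra = new Student("Rudra", "Pratap")

// The prototype of every instance IS the constructor's prototype object
console.log(Object.getPrototypeOf(rudra) === Student.prototype) // true

// The prototype object points back to the constructor via .constructor
console.log(Student.prototype.constructor === Student) // true
```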
<kbd> ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nuhojg7bfdlkb4uzf5j0.png) </kbd> Every object of a constructor function has access to the properties and methods present in the prototype of its parent constructor function. <br> **Note:** Anonymous functions can't be made constructor functions. Regular functions can be made constructors. <kbd> ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eu69hz5092pyk79fjt0y.png) </kbd> Anonymous functions can't be made constructors. <kbd> ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yfaypdvxi53fncejq30a.png) </kbd> <br> Let's suppose you want a method **introduceMyself()** that can be called on each object. How can we do it? **First solution [Naive]** ```javascript let Student = function (firstName,lastName){ this.firstName = firstName this.lastName = lastName this.introduceMyself = function(){ console.log(`Hello, my name is ${this.firstName} ${this.lastName}.`) } } let rudra = new Student("Rudra","Pratap") rudra.introduceMyself() // Hello, my name is Rudra Pratap. let tim = new Student("Tim","Cook") tim.introduceMyself() // Hello, my name is Tim Cook. ``` This will work as we wanted, but it is very inefficient. Suppose you are working on a large project having thousands of objects constructed by the **Student** constructor function; each of the objects will carry its own copy of the **introduceMyself()** method, and as a result a large amount of memory is wasted just to store a method which is common to all objects. **Second Solution [Optimized]** An efficient solution would be to add the **introduceMyself()** method to the constructor function's prototype. 
```javascript let Student = function (firstName,lastName){ this.firstName = firstName this.lastName = lastName } Student.prototype.introduceMyself = function(){ console.log(`Hello, my name is ${this.firstName} ${this.lastName}.`) } ``` As we already know, the object has access to the prototype of its parent constructor function through the **\_\_proto\_\_** object. Whenever we try to access a method or a property on an object using dot or bracket notation, that method/property is first searched for inside the object itself; if it is found there, it is used, and if it is not found there, the Javascript engine looks into the prototype of its parent constructor function and tries to find that method/property. ```javascript let rp = new Student("Rudra","Pratap") rp.introduceMyself() // Hello, my name is Rudra Pratap. ``` <br> Hence, we can make many objects from a constructor function and only have to define the methods once, inside its prototype. All its child objects will automatically inherit those methods. ```javascript let Student = function (firstName,lastName){ this.firstName = firstName this.lastName = lastName } Student.prototype.introduceMyself = function(){ console.log(`Hello, my name is ${this.firstName} ${this.lastName}.`) } let rp = new Student("Rudra","Pratap") rp.introduceMyself() // Hello, my name is Rudra Pratap. let tim = new Student("Tim","Cook") tim.introduceMyself() // Hello, my name is Tim Cook. let zuck = new Student("Mark","Zuck") zuck.introduceMyself() // Hello, my name is Mark Zuck. ``` <hr> ## 2. ES6 Classes - ES6 classes are just syntactic sugar over constructor functions. - They are not like the classes found in traditional OOP languages like Java and C++. - They are just a transformed form of constructor functions; under the hood they use the concept of prototypes and prototypal inheritance. 
- Classes were added to Javascript in 2015 so that programmers from other backgrounds, like Java and C++, can write object-oriented code more easily. - Using classes, we don’t have to manually mess with the prototype property. <br> Classes can be defined in two ways: class declarations and class expressions. ```javascript let Person = class { ... } // class expression class Person{ // class declaration .... } ``` Every class has a constructor method (a default one is provided if you don't define it). **Constructor Methods** - The constructor method is a special method of a class for creating and initializing an object instance of that class. - A class can't have more than one constructor method. - Classes are not hoisted. <br> **Class Declaration: constructor functions vs classes** 1. using constructor function ```javascript let Student = function (firstName,lastName){ this.firstName = firstName this.lastName = lastName } ``` 2. using class ```javascript class Student { constructor(firstName,lastName){ this.firstName = firstName this.lastName = lastName } } ``` <br> **Defining methods: constructor functions vs classes** 1. In constructor function ```javascript let Student = function (firstName,lastName){ this.firstName = firstName this.lastName = lastName } Student.prototype.introduceMyself = function(){ console.log(`Hello, my name is ${this.firstName} ${this.lastName}.`) } ``` 2. In class ```javascript class Student { constructor(firstName,lastName){ this.firstName = firstName this.lastName = lastName } introduceMyself(){ console.log(`Hello, my name is ${this.firstName} ${this.lastName}.`) } } ``` Now we know that there is a slight difference in the syntax of constructor functions and classes, but the way to instantiate them is exactly the same. ```javascript let rp = new Student("Rudra","Pratap") rp.introduceMyself() // Hello, my name is Rudra Pratap. ``` <hr> ## 3. 
Object.create() As I have already written, OOP in Javascript can be achieved with constructor functions, ES6 classes and Object.create(); it's time to discuss the last one. Object.create() is used to manually set the prototype of an object to any other object that we want. ```javascript let Person = { greet:function(){ console.log(`Hi ${this.name}`) } } let rp = Object.create(Person) rp.name="Rudra" rp.greet() // Hi Rudra console.log(rp.__proto__ === Person) // true console.log(rp.__proto__.__proto__ === Object.prototype) // true console.log(rp.__proto__.__proto__.__proto__) // null ``` ```javascript let Person = { greet:function(){ console.log(`Hi ${this.name}`) } } let Teacher = Object.create(Person) Teacher.task = function(){ console.log("I am teaching") } let MathTeacher = Object.create(Teacher) MathTeacher.name = "Rudra" MathTeacher.greet() // Hi Rudra MathTeacher.task() // I am teaching console.log( MathTeacher.__proto__ === Teacher ) // true console.log( MathTeacher.__proto__.__proto__ === Person ) // true console.log( MathTeacher.__proto__.__proto__.__proto__ === Object.prototype ) // true console.log( MathTeacher.__proto__.__proto__.__proto__.__proto__ ) // null ``` Object.create() is the least common way to implement prototypal inheritance, but if needed, we can use it more programmatically, like ```javascript let Person = { init:function(firstName,lastName){ this.firstName = firstName this.lastName = lastName }, greet: function(){ console.log(`Hi ${this.firstName} ${this.lastName}`) } } let rudra = Object.create(Person) rudra.init("Rudra","Pratap") // it is not a constructor rudra.greet() // Hi Rudra Pratap ``` **Puzzle** Find the output of the following snippet ```javascript var Employee = { company: 'xyz' } var emp1 = Object.create(Employee); delete emp1.company console.log(emp1.company); ``` Solution: The answer is ‘xyz’, because company is not an own property of the emp1 object but lives on its prototype, Employee; the delete operator only removes own properties, so the inherited value is still visible. 
If you want to delete the property company, then ```javascript var Employee = { company: 'xyz' } var emp1 = Object.create(Employee); delete emp1.__proto__.company console.log(emp1.company) // undefined console.log(Employee) // {} ```
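As a follow-up sketch to the puzzle (reusing its object names), `hasOwnProperty` and `Object.getPrototypeOf` are handy for checking whether a property is an own property or an inherited one:

```javascript
var Employee = { company: 'xyz' }
var emp1 = Object.create(Employee)
emp1.name = 'Rudra'

// hasOwnProperty only reports properties stored on the object itself
console.log(emp1.hasOwnProperty('name'))    // true
console.log(emp1.hasOwnProperty('company')) // false, it is inherited

// Object.getPrototypeOf is the standard way to read an object's prototype
// (preferred over the non-standard __proto__ accessor)
console.log(Object.getPrototypeOf(emp1) === Employee) // true
```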
merudra754
1,462,385
The Benefits of Using Node.js for Real-time Data Processing
Node.js has become increasingly popular in recent years, particularly for real-time data processing....
0
2023-05-09T12:59:29
https://dev.to/saint_vandora/the-benefits-of-using-nodejs-for-real-time-data-processing-3f0l
javascript, node, productivity, tutorial
Node.js has become increasingly popular in recent years, particularly for real-time data processing. With its powerful event-driven architecture, Node.js is a perfect fit for real-time applications that need to process and serve data quickly and efficiently. In this article, we'll take a look at the benefits of using Node.js for real-time data processing. ## What is Node.js? Node.js is an open-source, cross-platform runtime environment that allows developers to use JavaScript on the server-side. It was developed by Ryan Dahl in 2009 and has since grown into one of the most popular server-side technologies available. Node.js is built on top of Google's V8 JavaScript engine, which provides lightning-fast performance. Additionally, Node.js uses an event-driven, non-blocking I/O model, which makes it ideal for real-time applications. ## Benefits of Using Node.js for Real-time Data Processing - **Speed** Node.js is known for its speed, and this is especially important for real-time data processing. With Node.js, you can handle large volumes of data quickly and efficiently, without having to worry about performance bottlenecks. This is because Node.js uses non-blocking I/O, which allows it to handle multiple requests simultaneously without blocking the event loop. - **Scalability** Scalability is another important factor for real-time data processing, and Node.js excels in this area. Because of its event-driven architecture and non-blocking I/O, Node.js is highly scalable, allowing you to handle increasing amounts of data without sacrificing performance. - **Ease of Use** Node.js is easy to learn and use, particularly if you're already familiar with JavaScript. This is because you can use the same language on both the client and server-side, which simplifies the development process. Additionally, Node.js has a large and active community, which means you can find plenty of resources and support online. 
- **Real-time Communication** Real-time communication is essential for many modern applications, and Node.js makes it easy to implement. With mature WebSocket libraries such as ws and Socket.IO, you can establish a persistent connection between the client and server, allowing for real-time communication without the need for polling. - **Third-party Libraries** Node.js has a vast ecosystem of third-party libraries, making it easy to add additional functionality to your real-time applications. These libraries include everything from database connectors to authentication frameworks, allowing you to quickly build complex applications without reinventing the wheel. ## Conclusion Node.js is an excellent choice for real-time data processing, thanks to its speed, scalability, ease of use, real-time communication capabilities, and vast ecosystem of third-party libraries. If you're building a real-time application, whether it's a chat app, a gaming platform, or a data visualization tool, Node.js is definitely worth considering. Its powerful event-driven architecture and non-blocking I/O make it an ideal choice for processing and serving data in real-time, and its ease of use and large community make it easy to get started. Thanks for reading... **Happy Coding!**
saint_vandora
1,462,414
How to open asf files on mac?
It just so happens that the ASF file format is difficult to open on Mac. Since QuickTime is not...
0
2023-05-09T13:27:25
https://dev.to/annawat/how-to-open-asf-files-on-mac-3eoi
asf, file, mac, player
It just so happens that the ASF file format is difficult to open on a Mac. Since QuickTime is not compatible with this format, users need to resort to another video player application for Mac to solve this problem. A program that helps here, and that easily plays [asf file mac](https://mac.eltima.com/how-to-open-asf-files-on-mac/) files, is Elmedia Video Player. Elmedia Player is a player that easily plays ASF files on Mac. It's a reliable and easy-to-use solution that guarantees smooth playback of not only the file format mentioned above, but almost any video or audio file. Its feature set is very diverse; you will find everything you need.
annawat
1,462,417
Top 10 Best Practices for Coding That Every Developer Should Follow
Coding is an essential part of software development, and it is essential to write clean, efficient,...
0
2023-05-20T03:51:45
https://dev.to/thealphamerc/top-10-best-practices-for-coding-that-every-developer-should-follow-3ab5
programming, learning, development, coding
Coding is an essential part of software development, and it is important to write clean, efficient, and maintainable code. In this article, we will discuss the top 10 best practices that every developer should follow while coding. <img width="100%" style="width:100%" src="https://media.giphy.com/media/lT9Y1nrHdZWX9QoSH0/giphy.gif"> Following these best practices can help you to produce high-quality code that is easy to read, debug, and maintain. #1. Write Readable Code Writing readable code is the most crucial aspect of coding. Your code should be easy to read and understand, and other developers should be able to follow it. One of the best ways to write readable code is to use appropriate naming conventions for variables, functions, classes, and so on. #### Example ```javascript // ❌ Poorly written code let x = [2, 3, 5, 7, 11, 13]; for (let i = 0; i < x.length; i++) { for (let j = i + 1; j < x.length; j++) { if (x[i] + x[j] === 14) { console.log(x[i], x[j]); } } } // ✅ Better written code let primes = [2, 3, 5, 7, 11, 13]; for (let i = 0; i < primes.length; i++) { for (let j = i + 1; j < primes.length; j++) { if (primes[i] + primes[j] === 14) { console.log(primes[i], primes[j]); } } } ``` By using appropriate naming conventions and formatting, the second block of code is much more readable and easier to understand than the first block of code. #2. Use Consistent Coding Style Consistency in coding style is essential for a team of developers working on the same project. It saves time, reduces confusion, and improves the maintainability of the code. Adopt a coding style guide and stick to it. #### Example ```javascript // ❌ Inconsistent coding style function add(x, y) { return x+y; } // ✅ Consistent coding style function add(x, y) { return x + y; } ``` The second block of code has a consistent coding style, which makes it easier to read and understand the code. #3. 
Avoid Magic Numbers or Strings Magic numbers or strings are hard-coded values in the code that can make it difficult to read and maintain. Use constants or variables to replace them to improve the readability and maintainability of the code. #### Example ```javascript // ❌ Magic numbers and strings function getArea(width, height) { if (width == 10 && height == 5) { return 50; } if (width == 5 && height == 10) { return 50; } } // ✅ Constants instead of magic numbers and strings const WIDTH_1 = 10; const HEIGHT_1 = 5; const WIDTH_2 = 5; const HEIGHT_2 = 10; function getArea(width, height) { if (width == WIDTH_1 && height == HEIGHT_1) { return WIDTH_1 * HEIGHT_1; } if (width == WIDTH_2 && height == HEIGHT_2) { return WIDTH_2 * HEIGHT_2; } } ``` By using constants instead of magic numbers or strings, the code becomes more self-explanatory and easier to understand. #4 Write Modular Code Modular code makes the code more organized and enhances its maintainability. It is much easier to update or add new features to modular code than to spaghetti code. Break your code into smaller, easier-to-manage modules or functions that carry specific responsibilities. 
#### Example ```javascript // Monolithic Code function calculatePay(employee) { var pay = 0; if (employee.role == 'Manager') { pay = employee.salary + 1000; } if (employee.role == 'Developer') { pay = employee.salary; } if (employee.role == 'Tester') { pay = employee.salary - 1000; } return pay; } // Modular Code function getRoleBonus(role) { if (role == 'Manager') { return 1000; } if (role == 'Developer') { return 0; } if (role == 'Tester') { return -1000; } return 0; } function calculateBasePay(employee) { return employee.salary; } function calculateTotalPay(employee) { return calculateBasePay(employee) + getRoleBonus(employee.role); } ``` In this example, modular code is used to break up a monolithic function into smaller, easier-to-understand, maintainable functions. #5. Use Comments Adding comments to your code is a great way to document your code and make it more understandable. Do not hesitate to add comments explaining why a particular piece of code has been written. #### Example ```javascript // ❌ Poorly commented code function add(x, y) { // function to add two numbers return x + y; // returns the sum of two numbers } // ✅ Better commented code function add(x, y) { // This function accepts two numbers x and y // It returns the sum of x and y return x + y; } ``` Adding comments that explain how the code works can help other developers understand and maintain the code. #6. Use Descriptive Function and Method Names Using descriptive function and method names can make your code more readable by providing information about what a given function or method does. 
#### Example ```javascript // ❌ Poor function and method names function f(x, y) { for (let i = 0; i < x.length; i++) { if (x[i] == y) { return i; } } return -1; } class Foo { x() {} } // ✅ Descriptive function and method names function indexOfItemInArray(array, item) { for (let i = 0; i < array.length; i++) { if (array[i] == item) { return i; } } return -1; } class Bar { getXCoordinate() {} } ``` #7 Use Version Control Version control is one of the essential tools for managing code. Use a version control system like Git or SVN to keep track of changes to your code and to collaborate better with your team members. #### Example ```sh git init git add . git commit -m "Initial commit" git push ``` Using a version control system like Git makes it easier to track changes, collaborate with team members and roll back changes when needed. #8. Use Debugging Tools Debugging your code is an essential part of the development process. Make use of debugging tools provided by your integrated development environment (IDE), or use logging libraries such as Winston for better error handling. #### Example ```javascript function addNumbers(x, y) { if (typeof x !== 'number' || typeof y !== 'number') { throw new TypeError('Both arguments must be numbers'); } return x + y; } try { console.log(addNumbers(5, '7')); // throws a TypeError, caught below } catch (error) { console.log(error.message); // Both arguments must be numbers } ``` #9. Test Your Code Write test cases to ensure the code is working correctly. Write unit tests and functional tests to identify errors at various stages of development. Adopt Test-Driven Development (TDD) to write tests before writing code. #### Example ```javascript // Unit test example function testAddNumbers() { if (addNumbers(2, 3) === 5 && addNumbers(1, -1) === 0 && addNumbers(0, 0) === 0 && addNumbers(-1, -1) === -2) { return true; } return false; } // Functional Test Example function testLogin() { if (login('admin', 'admin123') === true && login('guest', 'password') === false) { return true; } return false; } ``` #10. 
Refactor Regularly Refactoring is a technique used to improve the quality of code by cleaning up messy code. Refactor your code regularly to eliminate unnecessary dependencies, remove repetitive code and get rid of any potential bottlenecks. #### Example ```javascript // ❌ Poorly written code with duplication and lots of statements function calculateSalary(numberOfHours, hourlyPay, overtime, bonus) { let salary = 0; if (numberOfHours > 40) { if (overtime) { salary += (40 * hourlyPay) + (numberOfHours - 40) * hourlyPay * 1.5; } else { salary += 40 * hourlyPay; } } else { salary += numberOfHours * hourlyPay; } if (bonus) { salary *= 1.1; } return salary; } // ✅ Better written code without duplication and fewer statements. function calculateSalary(numberOfHours, hourlyPay, overtime, bonus) { let salary = Math.min(numberOfHours, 40) * hourlyPay; if (overtime && numberOfHours > 40) { salary += (numberOfHours - 40) * hourlyPay * 1.5; } if (bonus) { salary *= 1.1; } return salary; } ``` #Conclusion Following these best practices can make a significant difference in the quality of code you produce. Writing clean and efficient code not only makes coding easier but also makes it more enjoyable. It helps you avoid potential errors, reduce debugging time and improve the overall quality of your work. By adopting these best practices, developers can achieve more efficient code development, reduce errors, and respond faster to new requirements. So, start implementing these best practices today and watch your code quality soar to new heights. Thank you for taking the time to read my article! If you found the information helpful and would like to stay up-to-date on the latest programming tips and tricks, you can follow me on [Twitter](https://twitter.com/theAlphaMerc), [LinkedIn](https://www.linkedin.com/in/thealphamerc), and [GitHub](https://github.com/TheAlphamerc), where I share articles, code snippets, and programming-related insights regularly. 
By following me, you will never miss out on any useful updates, and that way we can constantly improve our knowledge on programming together. Looking forward to seeing you in my social media community!
thealphamerc
1,462,591
Install Postgres on Ubuntu from Source
About Postgres A robust relational database management system (RDBMS) that is open-source...
0
2023-05-09T17:48:05
https://dev.to/arun3sh/install-postgres-1lfo
postgres, apacheage, systemprogramming, programming
## About Postgres PostgreSQL is a robust, open-source relational database management system (RDBMS) that has grown in popularity in recent years. Since its initial release in 1996, a committed development community has continued to develop and support it. One of PostgreSQL's distinguishing characteristics is its capacity for handling sophisticated SQL features like stored procedures, triggers, and views. Because of this, it is a fantastic option for enterprise-level applications that need complicated data handling. The scalability of PostgreSQL is a key strength. It is appropriate for high-traffic applications since it supports many concurrent connections and can manage huge datasets. Additionally, PostgreSQL is known for being a trustworthy and safe database management system. To safeguard sensitive data from unauthorized access, it offers strong data encryption and access control features. Additionally, PostgreSQL places a high priority on extensibility, making it simple to add new functions and data types. As a result, it functions as a versatile database management system that can be modified to suit specific business requirements. Overall, PostgreSQL is a feature-rich database management system that is adaptable and capable of handling challenging data administration jobs. It's a great option for a variety of applications thanks to its scalability, security, and flexibility. ## Install Postgres on Ubuntu Let's install Postgres on Ubuntu. Here I am going to use Postgres version 14. 
**Install the dependencies required to build PostgreSQL:** ```bash sudo apt update sudo apt install build-essential libreadline-dev zlib1g-dev libssl-dev libxml2-dev libxslt-dev ``` **Download the source code for PostgreSQL from the official website:** ```bash curl -LO https://ftp.postgresql.org/pub/source/v14.7/postgresql-14.7.tar.gz ``` **Extract the source code archive:** ```bash tar xvf postgresql-14.7.tar.gz ``` **Change to the directory where the source code was extracted:** ```bash cd postgresql-14.7 ``` **Configure the build options by running the following command:** ```bash ./configure ``` **Build the PostgreSQL binaries by running:** ```bash make ``` **Install PostgreSQL by running:** ``` sudo make install ``` **Create a postgres system user and a data directory it owns (initdb refuses to run as root), then initialize the database cluster:** ```bash sudo adduser --system --group postgres sudo mkdir -p /usr/local/pgsql/data sudo chown postgres /usr/local/pgsql/data sudo -u postgres /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data ``` **Start the PostgreSQL server by running:** ```bash sudo -u postgres /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start ``` **You can then connect to the PostgreSQL server by running:** ```bash sudo -u postgres /usr/local/pgsql/bin/psql ``` That's it! You now have a working installation of PostgreSQL from source on your Ubuntu system. The default configuration will build the server and utilities, as well as all client applications and interfaces that require only a C compiler. All files will be installed under `/usr/local/pgsql` by default. ## Issues I faced: **configure: error: readline library not found** ```bash sudo apt install libreadline-dev ```
arun3sh
1,462,678
Синтаксис С++
Ассаламу алейкум уважаемые программисты, смотрите Синтаксис в языке программирования C++. Я хотел бы...
0
2023-05-09T18:05:05
https://dev.to/islomali99/sintaksis-s-5pk
webdev, beginners, programming, cpp
Assalamu alaykum, dear programmers. Let's look at the syntax of the C++ programming language. I would like to ask you to look at the code below in order to understand its syntax. ``` #include <iostream> using namespace std; int main() { cout << "Hello World"; return 0; } ``` Now let's break down each line of the code. - **Line:** `#include <iostream>` pulls in a library (#include). "iostream" stands for input/output stream; it lets us work with input and output objects such as `cout` (see line 5 of the code). - **Line:** `using namespace std` means that we can use names for objects and variables from the standard library. - Don't worry if you don't fully understand how `#include <iostream>` and `using namespace std` work. Just think of them as something that will almost always appear in your programs. - **Line:** the blank line is skipped. C++ ignores whitespace. `Another thing that always appears in a C++ program is int main(). It is called a function. Any code inside the curly braces {} will be executed. The code inside the braces {} is read as the sequence of statements that run when the program starts. If code is written outside the braces {}, the program will not run, because the code we write must be inside the braces {}!` - **Line:** `cout` (pronounced "see-out") is an object used together with the insertion operator (`<<`) to output/print text. In our example, "Hello World" will be printed. `cout` comes from the combination `"c"` + `"out"`, that is, `"c"` for the C++ programming language and `"out"` for output. - In the C++ programming language, every statement ends with a semicolon `;`. **For example:** `cout << "Hello, world";` `The body of the program is written inside the int main() function (remember that it can also be written on a single line).` ``` int main() {cout << "Hello World"; return 0;} ``` - **Line:** `return 0;` ends the main function. 
- **Line:** } Don't forget to add the closing curly brace to actually finish the main function. `Without using namespace std;:` ``` #include <iostream> int main() { std::cout << "Biz boshladik!"; return 0; } ```
islomali99
1,462,711
Making the right choice: WunderGraph or ApolloOS for your API architecture??
Written by our Founder and CEO... My name is Jens, and I'm the Founder &amp; CEO of WunderGraph,...
0
2023-05-15T13:00:00
https://wundergraph.com/blog/wundergraph_vs_apollo
graphql, webdev, javascript, programming
![WunderGraph or Apollo?](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mxsbxzcwc4kkg0w6pmoc.png) Written by our Founder and CEO... My name is Jens, and I'm the Founder & CEO of WunderGraph, but most importantly, I'm a Developer - always have been and always will be. At WunderGraph, we're a small but growing team, and I'm still actively involved in developing the product. That's why what I'm sharing with you isn't just from a CEO's perspective - it's from the viewpoint of a developer who's in the trenches every day. While WunderGraph and Apollo have similar goals, their approaches differ - and understanding their histories can shed some light on that. Let's take a closer look at both. ## Apollo, successor to Meteor, the "best way" to build with GraphQL Geoff Schmidt, the CEO of Apollo, describes Apollo as the successor to Meteor. > Apollo began life as the Meteor 2 data system, designed from all of the learnings from Livedata, which ultimately could be summarized as: there needed to be an abstraction layer in between the client and the server, rather than embedding MongoDB queries in the client. --- Geoff Schmidt, CEO Apollo Meteor ran into the limitations of tightly coupling the client and the server. Interestingly, we can see this pattern repeat itself, with solutions that generated a GraphQL API from the Database, and exposed it to the client. So, the idea behind Apollo is to create an abstraction layer between client and server. Additionally, Apollo introduced the concept of the Supergraph, the idea to implement a Graph in a distributed way. This can also be found in their headline: > The GraphQL developer platform. It's all about helping companies to build GraphQL APIs. ## Should you use Apollo for GraphQL? Or is there another way? WunderGraph is a new way to think about APIs, leveraging GraphQL. The history of WunderGraph is different. 
I've worked in Enterprise Software Development for a while and realised that there's a huge problem with API composition & development. We spent so much time composing APIs of different protocols, transports, and so on. Early on, I had the idea that it should be possible to query every system using GraphQL, removing the overhead of having to write manual code to mediate protocols and implementations. While I was working on the solution, a GraphQL Engine to automatically mediate between a GraphQL layer and the underlying systems, I realized that my thinking was wrong and limiting. We don't need more abstractions. We don't need better ways to build GraphQL APIs. In fact, the goal shouldn't be to build GraphQL APIs. My understanding of APIs was wrong. While using GraphQL as the language for API composition is promising, it's important to shift away from a "server-full" mindset. This shift can help us create more flexible and scalable solutions that meet the needs of modern development. ## Treating APIs as dependencies can transform your development process Let's try and understand this. What if we treated APIs as dependencies? Let me explain this a bit more in depth. The traditional, Apollo-way of adding another internal or external API or database to a system is to extend "THE Graph". However, I've realized that the number of APIs is only growing, and so we'd have to continuously extend our Graphs with every new API we'd like to use. There needs to be a better way to build a system that can handle an exponentially growing number of APIs. We can borrow the solution from existing programming paradigms: it's called "dependency management" or, better known, "package management". If you need another "package", you can npm install or go get it. The package manager manages the dependencies of your project and helps you to update them. Now, APIs are not code. You can't just "add an API" to your application. It's a lot more complicated than that... or is it? 
## GraphQL as the language for API composition The Apollo-way is to build Subgraphs and compose them into a Supergraph. If an application needs to use multiple APIs, it talks to the Supergraph. That's what I call "API-full". WunderGraph takes a different approach. We don't build one monolithic Supergraph. We also don't have to adopt GraphQL and Subgraphs all across the organization. In fact, WunderGraph embraces diversity in API styles. WunderGraph can compose APIs of different styles, like REST, GraphQL, Apollo Federation (Subgraphs and Supergraphs), gRPC, Kafka, PostgreSQL, MySQL, MongoDB, etc... But instead of building a single, monolithic API, WunderGraph allows you to build a unique composition for each application. That's the whole point of "package management". In an exponentially growing system of APIs, you will eventually run out of resources to connect to "gigantic" Supergraphs. It's the same with npm. You only install the packages you need, otherwise your node_modules folder would grow beyond the limits of your system. In more technical terms, WunderGraph allows you to compose multiple API dependencies into a single unified Graph with just a few lines of code. ```typescript const weather = introspect.graphql({ apiNamespace: 'weather', url: 'https://web.archive.org/web/20230115160917/https://graphql-weather-api.herokuapp.com/', }); const countries = introspect.graphql({ apiNamespace: 'countries', url: 'https://web.archive.org/web/20230115160917/https://countries.trevorblades.com/', }); configureWunderGraphApplication({ apis: [weather, countries], }); ``` This is how you would create an API composition for your application, where you are combining a weather and a countries API. Then you would define your GraphQL Operations, and WunderGraph generates an API Gateway for you that "executes" your API composition. ## Should you expose GraphQL APIs? How we expose APIs is another topic which distinguishes WunderGraph from Apollo. 
Apollo, by default, exposes your GraphQL APIs over HTTP. You then have to add additional 3rd party libraries to lock your API down, add authentication, authorization, and so on. So, their approach is to give the user a very basic server implementation and leave security to the user. The WunderGraph approach is to never expose a GraphQL API at all, and manage all aspects, from security to authentication, authorization and caching out of the box. We provide an end-to-end solution that implements OWASP recommendations by default. WunderGraph is locked-down from the beginning; you don't have to add any additional libraries or modules. WunderGraph does the following out of the box: - Input Validation using JSON-Schema - Injection Prevention / Process Validation (at compile time) - Query Limiting (Depth), Query Cost Analysis (at compile time) - Server-side batching and caching - Timeouts - Access Control protection with RBAC and ABAC - mitigates batching attacks - prevents introspection attacks With Apollo, you're responsible for handling the security of your GraphQL APIs on your own. But should you really be focusing on security, or is this a framework concern, and you should focus on the application itself? Interestingly, in the beginning, Geoff was hesitant to allow clients to send arbitrary queries to the server. > Similarly, and also to accommodate larger and more complex apps, we would move UI component data dependency specification from the backend (a DDP "publication") into the UI components, by letting the frontend send a query over the wire to the backend. The reason we didn't do this in the first place was security—I thought it would be hard to convince people to use Meteor if the client was sending arbitrary clients to the server. --- Geoff Schmidt, CEO Apollo It seems that they've dropped these concerns in favour of adoption. But as you will see, doing the right thing security-wise doesn't have to come with a lot of effort. 
But let's say you still want to expose a GraphQL API and handle everything on your own. WunderGraph still allows you to expose a [GraphQL API](https://docs.wundergraph.com/docs/guides/expose-a-graphql-api-from-wundergraph#the-solution-exposing-your-graph-ql-from-wunder-graph), even though we still recommend against it! ## Comparing the WunderGraph Developer Workflow with ApolloOS To make this work, we've developed the WunderGraph Compiler and Runtime, which takes the API composition and Operations as input, and compiles them into a JSON-RPC API, while also generating a type-safe client for ease of use. Let's break down the WunderGraph flow: - Add API dependencies to your project - Define GraphQL Operations - WunderGraph generates a JSON-RPC API and a type-safe client - Deploy Serverless API Gateway alongside your frontend Now let's compare this flow with Apollo: - Wrap all "API dependencies" in Subgraphs - Compose the Subgraphs - Deploy the Supergraph - Add Apollo client to your Frontend - Define GraphQL Operations - Lock down the API, add authentication, authorization, caching, and so on via additional extensions The development workflow is quite similar, but the WunderGraph Architecture comes with batteries included. ## Open Source GraphQL Solutions While I tried to focus on Architecture and Developer Workflow, there are a few other things I'd like to point out regarding Open Source. WunderGraph supports Apollo Federation out of the box, not just the Subgraph implementation, but it's actually possible to use WunderGraph as an Apollo Federation Gateway replacement. Our approach to GraphQL doesn't just make Federation a lot more secure, but is also very efficient, [as our benchmarks show.](https://github.com/wundergraph/federation-benchmarks) That being said, our Gateway is not just super fast and secure, it's also truly open source. 
Apollo re-licensed their Federation implementation using the [Elastic 2.0 license](https://www.elastic.co/licensing/elastic-license), while WunderGraph is licensed under the Apache 2.0 license. The [GraphQL Engine of WunderGraph](https://github.com/wundergraph/graphql-go-tools) is licensed under the MIT license. This means no other company is allowed to use Apollo's implementation to provide a hosted GraphQL Gateway, while this is possible with WunderGraph. In fact, the WunderGraph Federation implementation is a separate package ([graphql-go-tools](https://github.com/wundergraph/graphql-go-tools)) which is MIT licensed, and it's being commercially used by many other companies. We're even collaborating with some of them actively to maintain and improve it. The Elastic 2.0 license works against such collaborative efforts, as really only a single company benefits from it. ## The elephant in the room: Pricing Given the recent changes in pricing from Apollo, I think it's important to highlight the key differences. Although WunderGraph is open source, we do offer a cloud solution that includes free and paid tiers. Both WunderGraph Cloud and Apollo have a generous free tier, but there are two major differences between them. 1. **In ApolloOS, introspection queries do count as operations in GraphOS. In WunderGraph, introspection happens at build time, so introspection queries do not count as operations.** 2. **In the free tier of ApolloOS, you are on shared infrastructure up to 100 TPS. In WunderGraph Cloud, each user gets their own dedicated infrastructure.** ### Paid cloud plans With ApolloOS, pricing starts at $500 per month, while with WunderGraph Cloud your pricing is $25 per month. ### Enterprise plans The biggest advantage of WunderGraph vs ApolloOS is the ability to self-host. Self-hosting is an enterprise feature of ApolloOS. With WunderGraph, you are given the ability to self-host from the start for free. 
Per Apollo's [Documentation](https://www.apollographql.com/docs/graphos/routing/self-hosted/): > Self-hosted supergraphs are an enterprise-only feature for organizations with advanced performance or compliance requirements. Per WunderGraph's [Documentation](https://docs.wundergraph.com/docs/self-hosted): > WunderGraph is fully open source and can be hosted with your favourite cloud provider ## Summary Do you think in Graphs, or in API dependencies? Should you have a single monolithic Supergraph, or small, lightweight API compositions per application, deployed on "Serverless" API Gateways? Should we adopt "THE GRAPH" for all our APIs, or use GraphQL as a "meta-API" style to compose a variety of API styles? Is it easier to achieve incremental adoption with a GraphQL-only model, or by using GraphQL only virtually, leaving existing systems as is? Ask yourself which model is best for your organization today, but also allows you to scale in the future. Do you believe in exponential growth of APIs, like we do, or do you think all APIs can be rewritten as Subgraphs? Our goal is to build the GitHub for APIs. We want to enable new ways of collaboration between API providers and consumers. We believe the API era has only just begun, and will be fueled by the idea of "API dependencies". GitHub and Open Source brought together developers to build and collaborate on common solutions, allowing for re-usability and composition. There's barely any npm package without dependencies. You never have to start from scratch, but can build on top of existing packages. With APIs, we're really just starting to enable this kind of collaboration. The solution is not the Supergraph. It's on-demand composition of any API style, and a model of collaboration that is similar to GitHub, like issues when there's a problem with an API dependency, discussions to propose and discuss API changes, watchers to get notified of API changes, and so on.
slickstef11
1,462,723
Configuring HashiCorp Vault In AWS For High Availability In Kubernetes
Regardless of what your Kubernetes environment looks like, whether it’s one cluster or fifty...
0
2023-05-09T19:05:58
https://dev.to/thenjdevopsguy/configuring-hashicorp-vault-in-aws-for-high-availability-in-kubernetes-nod
kubernetes, hashicorp, devops, cloud
Regardless of what your Kubernetes environment looks like, whether it’s one cluster or fifty clusters, at some point you will have a secret, password, or API key that you need to store in an encrypted fashion for one of your containerized workloads. Because Kubernetes Secrets are stored as Base64 plain text in Etcd (the Kubernetes data store), you’ll see a lot of engineers going the third-party route. In this blog post, you’ll learn about one of the most popular third-party routes, Vault. ## Prerequisites To follow along with this blog post in a hands-on fashion, you should have the following: - An EKS cluster running. - A KMS key. - Access to create IAM users. If you don’t have these prerequisites, that’s okay! You can still read, follow along, and understand how it all works. ## What Is Vault Before diving into the Vault implementation, let’s discuss what Vault is. Vault is a secret manager from HashiCorp. Outside of it being so popular in the Kubernetes space, HashiCorp Vault has very much been the go-to secret store for all types of workloads. Whether you’re running containerized environments or running on VMs or even running on bare-metal, you can store your secrets in Vault and utilize them within your environment. When you’re thinking of a “secret”, it’s anything that you want to be encrypted at rest and in transit. You don’t want the contents of the secret to ever be in plain text (other than when the endpoint is reading the value). A secret could be anything from an API key to a password to even a username. Regardless of what the content of the secret is, it’s essentially anything that you wish to have as an encrypted value. ## AWS KMS and Auto Unseal One of the key aspects of HA is the fact that there will be several servers, or in the case of Kubernetes, Pods, that are running Vault. Because of that, you very rarely want to manually go through and unseal each instance of Vault. What’s the unseal process? When Vault is started, it starts as sealed. 
This means Vault knows how to access the storage for the secrets, but it doesn’t know how to decrypt it. You can think of it like a bank vault. You need a certain access key to access the bank vault. When you unseal Vault, you’re getting the “key” (like a bank vault) to decrypt Vault itself so you can begin to store and utilize secrets. Why does this matter? Because if you have five Pods running Vault, that means you’d have to perform the unseal process manually across all of them. Instead, you can use Auto Unseal. When you use Auto Unseal, you’ll just need to manually unseal one Vault Pod. After that one Vault Pod is unsealed, the rest of the Pods get automatically unsealed. The best way to Auto Unseal in AWS is by using AWS KMS (in the prerequisites of this blog post). To set up what’s needed for the unseal process, you’ll need a KMS key and an IAM user to authenticate to AWS from Vault. First, create the Vault Namespace. ```shell kubectl create namespace vault ``` Next, create the Kubernetes Secret with the IAM user's access key and secret key to authenticate to AWS. ```shell kubectl create secret generic -n vault eks-creds \ --from-literal=AWS_ACCESS_KEY_ID="" \ --from-literal=AWS_SECRET_ACCESS_KEY="" ``` Once complete, you can set up the Vault configuration. ## Vault Helm Config Now that the Namespace and `eks-creds` Kubernetes Secret are created, let’s learn how to implement Vault in an HA fashion. For the purposes of utilizing Kubernetes, the best way to go about this implementation is by using Helm. Because the `values.yaml` for the Vault Helm config is so large, let’s break it down into chunks below. First, you’d set the configuration to global and ensure that the injector exists so Vault can be injected into the Pods as a sidecar container. 
```yaml # Vault Helm Chart Value Overrides global: enabled: true injector: enabled: true # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/ image: repository: "hashicorp/vault-k8s" tag: "latest" ``` Next, create the storage for the Raft algorithm backend. ```yaml server: # This configures the Vault Statefulset to create a PVC for data # storage when using the file or raft backend storage engines. # See https://www.vaultproject.io/docs/configuration/storage/index.html to know more dataStorage: enabled: true # Size of the PVC created size: 20Gi # Location where the PVC will be mounted. mountPath: "/vault/data" # Name of the storage class to use. If null it will use the # configured default Storage Class. storageClass: null # Access Mode of the storage device being used for the PVC accessMode: ReadWriteOnce # Annotations to apply to the PVC annotations: {} ``` Set the resource limits and requests along with the readiness probe to ensure that the Vault Pods are getting the resources (CPU, memory) that they need along with confirming that they’re running as expected. ```yaml # These Resource Limits are in line with node requirements in the # Vault Reference Architecture for a Small Cluster resources: requests: memory: 8Gi cpu: 2000m limits: memory: 16Gi cpu: 2000m # For HA configuration and because we need to manually init the vault, # we need to define custom readiness/liveness Probe settings readinessProbe: enabled: true path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204" livenessProbe: enabled: true path: "/v1/sys/health?standbyok=true" initialDelaySeconds: 60 ``` Create the audit storage and add the environment variables for the Kubernetes Secret you created in the previous section to authenticate to AWS for the purposes of utilizing AWS KMS for the unsealing process. ```yaml # This configures the Vault Statefulset to create a PVC for audit logs. 
# See https://www.vaultproject.io/docs/audit/index.html to know more auditStorage: enabled: true standalone: enabled: false # Authentication to AWS for auto unseal extraSecretEnvironmentVars: - envName: AWS_ACCESS_KEY_ID secretName: eks-creds secretKey: AWS_ACCESS_KEY_ID - envName: AWS_SECRET_ACCESS_KEY secretName: eks-creds secretKey: AWS_SECRET_ACCESS_KEY ``` Next, create the HA configuration for Vault. Notice in the `seal` block that the AWS KMS key ID (`kms_key_id`) is blank. You’ll have to input this for your environment. ```yaml # Run Vault in "HA" mode. ha: enabled: true replicas: 3 raft: enabled: true setNodeId: false config: | ui = true listener "tcp" { tls_disable = 1 address = "[::]:8200" cluster_address = "[::]:8201" } seal "awskms" { region = "us-east-1" kms_key_id = "" } storage "raft" { path = "/vault/data" retry_join { leader_api_addr = "http://vault-0.vault-internal:8200" } retry_join { leader_api_addr = "http://vault-1.vault-internal:8200" } retry_join { leader_api_addr = "http://vault-2.vault-internal:8200" } } service_registration "kubernetes" {} ``` Lastly, enable the Vault UI so you can access the Vault dashboard. ```yaml # Vault UI ui: enabled: true serviceType: "LoadBalancer" serviceNodePort: null externalPort: 8200 ``` All together, the `override-values.yaml` Helm config should look like the below. ```yaml # Vault Helm Chart Value Overrides global: enabled: true injector: enabled: true # Use the Vault K8s Image https://github.com/hashicorp/vault-k8s/ image: repository: "hashicorp/vault-k8s" tag: "latest" resources: requests: memory: 256Mi cpu: 250m limits: memory: 256Mi cpu: 250m server: # This configures the Vault Statefulset to create a PVC for data # storage when using the file or raft backend storage engines. # See https://www.vaultproject.io/docs/configuration/storage/index.html to know more dataStorage: enabled: true # Size of the PVC created size: 20Gi # Location where the PVC will be mounted. 
mountPath: "/vault/data" # Name of the storage class to use. If null it will use the # configured default Storage Class. storageClass: null # Access Mode of the storage device being used for the PVC accessMode: ReadWriteOnce # Annotations to apply to the PVC annotations: {} # Use the Enterprise Image image: repository: "hashicorp/vault" tag: "latest" # These Resource Limits are in line with node requirements in the # Vault Reference Architecture for a Small Cluster resources: requests: memory: 8Gi cpu: 2000m limits: memory: 16Gi cpu: 2000m # For HA configuration and because we need to manually init the vault, # we need to define custom readiness/liveness Probe settings readinessProbe: enabled: true path: "/v1/sys/health?standbyok=true&sealedcode=204&uninitcode=204" livenessProbe: enabled: true path: "/v1/sys/health?standbyok=true" initialDelaySeconds: 60 # This configures the Vault Statefulset to create a PVC for audit logs. # See https://www.vaultproject.io/docs/audit/index.html to know more auditStorage: enabled: true standalone: enabled: false # Authentication to AWS for auto unseal extraSecretEnvironmentVars: - envName: AWS_ACCESS_KEY_ID secretName: eks-creds secretKey: AWS_ACCESS_KEY_ID - envName: AWS_SECRET_ACCESS_KEY secretName: eks-creds secretKey: AWS_SECRET_ACCESS_KEY # Run Vault in "HA" mode. 
ha: enabled: true replicas: 3 raft: enabled: true setNodeId: false config: | ui = true listener "tcp" { tls_disable = 1 address = "[::]:8200" cluster_address = "[::]:8201" } seal "awskms" { region = "us-east-1" kms_key_id = "" } storage "raft" { path = "/vault/data" retry_join { leader_api_addr = "http://vault-0.vault-internal:8200" } retry_join { leader_api_addr = "http://vault-1.vault-internal:8200" } retry_join { leader_api_addr = "http://vault-2.vault-internal:8200" } } service_registration "kubernetes" {} # Vault UI ui: enabled: true serviceType: "LoadBalancer" serviceNodePort: null externalPort: 8200 ``` Once you save the `override-values.yaml` file, run the helm installation with the following. ```shell helm install vault hashicorp/vault \ -f ./override-values.yaml \ --namespace vault ``` ## Vault Configuration Now that Vault is running, you’ll have to take two steps: - Initialize Vault - Unseal one Vault Pod Run the following command to initialize Vault. ```shell kubectl exec --stdin=true --tty=true vault-0 -n vault -- vault operator init ``` Once the command runs, you’ll see five unseal keys that get printed to the terminal. Run the following command THREE (3) times, inputting a different unseal key from the output above each time. For example, the command above will output five keys, so you can use keys 1, 2, and 3. ```shell kubectl exec --stdin=true --tty=true vault-0 -n vault -- vault operator unseal ``` Once complete, Vault will be unsealed and the other Pods will be auto-unsealed with KMS.
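One last note that ties back to the start of this post: the reason third-party secret managers like Vault are so popular is that Kubernetes Secrets are only Base64-encoded, not encrypted. You can demonstrate for yourself that Base64 is trivially reversible in any shell (the `supersecret` value below is just a placeholder):

```shell
# Base64 is an encoding, not encryption: anyone with access to Etcd
# can recover the plain text of a Kubernetes Secret this way.
encoded=$(printf '%s' 'supersecret' | base64)
echo "$encoded"    # c3VwZXJzZWNyZXQ=

decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"    # supersecret
```

This is exactly why Vault encrypts secret contents at rest and in transit instead of merely encoding them.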
thenjdevopsguy
1,462,879
Let’s convert that variable in C#
Learn C#: Conversions Table of Contents: Learn C#: Variables This post Introduction: My name is...
0
2023-05-09T21:47:46
https://dev.to/thesnowmanndev/learn-c-conversions-53jf
csharp, beginners, programming, learning
**Learn C#: Conversions** _**Table of Contents:**_ 1. [Learn C#: Variables](https://dev.to/thesnowmanndev/learn-c-variables-34bf) 2. This post _**Introduction:**_ My name is Kyle, I am aiming to write small, short-read articles on Dev.to to help people learn C#. This is the second post of many where I write a summarized version of my recent blog posts. Let's get into it! --- _**Conversions in C#:**_ In the previous post I wrote about simple types (value types) in the C# language. For this post, I will be writing about converting variables in the C# language. Most importantly, I will be explaining two conversions: Implicit Conversions and Explicit Conversions. --- _**Implicit Conversions:**_ In C# you can convert simple types into other types without having to cast (I will get into this soon). For example, you can change an int variable into a long, float, double, or decimal. The conversion will be done automatically by the compiler when certain conditions are met. To find out more about those conditions, read [my blog post](https://kylemartin0819.wordpress.com/2023/05/09/conversions-in-c/) to learn more in depth about conversions. ``` int number = 123456789; long biggerNumber = number; ``` Essentially, a variable defined and assigned earlier in code can be converted to another variable type as long as those language specifications are met. --- _**Explicit Conversions:**_ In C# you can convert variables by explicitly telling the compiler to convert the variable. Explicit conversion is also known as casting. For example, a double can be explicitly converted to a byte, short, int, long, char, float, or decimal by specifying the target type in the assignment. Casting can result in precision loss. If the double value is 1337.12 and you cast it into an int, it will become 1337. 
``` double x = 1337.12; int y = (int)x; // y is now 1337 ``` On [my blog](https://kylemartin0819.wordpress.com/2023/05/09/conversions-in-c/) I also dive deeper into explaining Explicit Conversions - like I said earlier, my goal is to create summarized or condensed guides on Dev.to. --- _**Convert Class**_ In C# the System namespace provides a class named Convert that contains many methods for converting objects or variables as well. For example, you can use Convert.ToBoolean() to convert a string variable into a bool. ``` string doorOpenText = "True"; bool doorOpen = Convert.ToBoolean(doorOpenText); ``` --- Again, I am aiming to consistently write condensed dev posts to summarize my lengthier blog posts. I have recently created a blog to capture the knowledge I have gained over the past five years of learning computer science and programming. Currently, I am fixated on C#. I know Dev.to really isn't a website where there are a lot of C# developers or learners, and it is more JavaScript / front-end centric. I still am hoping to help some learners. If you are interested in reading a lengthier post, check out my blog: https://kylemartin0819.wordpress.com/2023/05/09/conversions-in-c/ Don't forget to leave an emoji interaction, comment, or follow! My profile also has links to my GitHub where I post small applications, programming challenges, or learnings from online courses or books. I plan on doing programming book reviews as well someday. Thanks for reading! Kyle
thesnowmanndev
1,463,096
Optional Chaining in JavaScript(?.)
The Optional Chaining Operator denoted with a question mark and a dot (?.) is a feature introduced in...
0
2023-05-10T04:07:27
https://dev.to/stevepurpose/optional-chaining-in-javascript-2635
webdev, javascript, api, objects
The Optional Chaining Operator, denoted with a question mark and a dot (?.), is a feature introduced in JavaScript ES2020. I decided to write about this operator because for a very long time I used to be very confused when I was going through code and saw it. It looks like the ternary operator, but I could not quite grasp its use until I started writing more advanced React.js code. The Optional Chaining Operator is generally used to avoid the `TypeError` which occurs when we try to access the property of a non-existent field in nested objects. The nested object could be data from an API. Anyone who has ever worked with a REST API, for instance, can attest to the nesting of data that goes on there. The Optional Chaining Operator also leads to more user-friendly errors. A non-existent field is undefined, and as we know, data types like undefined and null are primitive data types, and one thing about primitives is that they don't have properties. Let us illustrate with the code below. ```javascript const person={ name:'Steve', address:{ city:'Abuja', street:'5 garki lamido' } } ``` 1. If we try to access `person.name` in the above code we get Steve as output. ```javascript console.log(person.name) ``` 2. But if we try to access `person.country`: ```javascript console.log(person.country) ``` We get undefined above because we don't have a country field. Now if we try to access `person.country.name` ```javascript console.log(person.country.name) ``` Recall from 2 we got `undefined`, then we tried to access `undefined.name` and we got a `TypeError: cannot read properties of undefined`. Primitives cannot have properties - never forget that. Bring in Mr. Optional Chaining to guard against the `TypeError` which we are getting. So if we were to try ```javascript console.log(person.country?.name) ``` _we get undefined as output_. 
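Since API responses are where this usually bites, here is a small standalone example (the `response` object and its fields are made up for illustration) showing how `?.` keeps a missing field from crashing our code:

```javascript
// A hypothetical API response where the shipping address is missing
const response = {
  user: {
    name: 'Ada',
    billing: { city: 'Lagos' }
    // no "shipping" field came back from the server
  }
};

// Without ?. this line would throw:
// TypeError: Cannot read properties of undefined (reading 'city')
// console.log(response.user.shipping.city);

// With ?. the chain short-circuits to undefined instead of throwing
console.log(response.user.shipping?.city); // undefined
console.log(response.user.billing?.city);  // Lagos
```

If the server later starts returning a `shipping` field, the same line simply starts printing the city - no code changes needed.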
## Use With Methods Suppose we try to call a method on the above object. As we can see, there is no method in our `person` object, so if we call a method named `methodMan()` ```javascript console.log(person.methodMan()) ``` we would get an error message telling us it is not a function. But if we were to do: ```javascript console.log(person.methodMan?.()) ``` we would get undefined. > ### Note We can use optional chaining multiple times in nested objects. ```javascript const building={ name:"better days ahead", place:"school", details:{ location:"mayfield way", education:"basic" } } ``` let us try to access `building.found.age.education` ```javascript console.log(building.found.age.education) ``` we get a `TypeError: cannot read properties of undefined`. Notice that we don't have `found` and `age` in our object. So let us do some Optional Chaining. ```javascript console.log(building.found?.age?.education) ``` _we get undefined_ ## Conclusion So we can see that the `?.` operator is a very good error-handling operator that can help generate user-friendly errors when we are working on our applications.
stevepurpose
1,463,118
New Community to check out
Hello everyone, I am excited to announce the launch of ShiftSync, a new community dedicated to...
0
2023-05-10T04:35:08
https://dev.to/isolderea/new-community-to-check-out-4e2c
Hello everyone, I am excited to announce the launch of ShiftSync, a new community dedicated to quality engineering. Whether you're a developer, tester, or DevOps specialist, ShiftSync is the place to be. I am thrilled to be part of a platform where we can share our expertise and collaborate on all things quality engineering. Whether you have a question about automation testing, need advice on deployment strategies, or want to discuss the latest industry trends, ShiftSync is the perfect place to do it. At ShiftSync, we believe that quality engineering is the foundation of successful software development. That's why we've created a community that focuses on the importance of QA and the value it brings to the entire software development lifecycle. So, if you're passionate about quality engineering and want to connect with like-minded individuals, join us at ShiftSync. Let's build a community where we can learn, grow, and inspire each other to create better software. Tag a friend and let them know about this exciting new community. Thank you and see you at ShiftSync! https://shiftsync.tricentis.com/?utm_source=is-md&utm_medium=referral&utm_campaign=mp_community_launch_global_en_2023-05 #automation #testing #qa #software #developer #share #devops #softwaredevelopment #thankyou #quality #engineering #like #community
isolderea
1,463,198
Generative AI with Azure OpenAI (GPT-4 Overview)
Hey, there! In this article we'll learn about Generative AI, and how to use the Azure OpenAI service...
0
2023-05-23T17:04:30
https://dev.to/esdanielgomez/generative-ai-with-azure-openai-gpt-4-overview-5el5
ai, openai, azure, tutorial
Hey, there! In this article, we'll learn about Generative AI and how to use the Azure OpenAI service for natural language interaction with GPT-4. ## **Generative AI overview** Generative Artificial Intelligence is a type of AI that creates new and original content based on what it has learned from previous data. It can be used to generate text, images, music, and more. In short, it's an AI that generates content based on what is described in the input (prompt). ####**Azure OpenAI** Azure OpenAI is part of Azure Cognitive Services and provides access to generative AI models. These are the workloads that we can consider: - Generating Natural Language. - Generating Code. - Generating Images. For our Azure OpenAI overview, we'll demonstrate the first scenario: *generating natural language* content through text. ## **Exploring Azure OpenAI** ####**Create Azure OpenAI service:** As a first step, within Azure we'll have to create a new OpenAI resource like any other. Here, in addition to the subscription and resource group, we must specify the name, region, and pricing tier for the new resource. ![Create Azure OpenAI resource](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2x83c4hn41t2uy075duo.png) With the service created, we can see the resource overview, keys & endpoints, and the *Go to Azure OpenAI Studio* option to go to the low-code dashboard. ![Azure OpenAI resource](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yovohtgezrhjhcwsukke.png) ####**Azure OpenAI Studio:** And here we have it: the Azure OpenAI portal (https://oai.azure.com/). ![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u7ouzw7lamfp2i77zbiz.png) The first thing we must do is create a new deployment to specify which model we want to use. In our example we'll use the `gpt-4-32k` model for the natural language interaction between the assistant and the users. 
![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fsb19dpan19cym460js.png) Here we can learn more about the available models: [Azure OpenAI Service models](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/models#gpt-4-models). ####**Chat playground** With the model established, we can now go to the playground to interact with it: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3hvhktg65stpwwwd093p.png) To adjust our chat assistant, it is important to take into account three aspects/roles: - **System message (prompt):** Instructions we can give the model about how it should behave and any context it should reference when generating a response. - **User:** The user's question or comment. - **Assistant:** The assistant's response. ####**Example:** Let's suppose that we want our assistant to be an expert on Azure resources, and that it only responds if the resource is considered PaaS or IaaS. For this scenario we would have this System message (prompt): `You are an assistant that knows about Azure resources, and who only answers if the resource name is of type Platform as a Service (PaaS), or Infrastructure as a Service (IaaS).` We can also give the assistant examples so that it has more context: `User: Azure App Service.` `Assistant: Platform as a Service (PaaS)` `User: Azure SQL Database.` `Assistant: Platform as a Service (PaaS)` `User: Azure Virtual Machine.` `Assistant: Infrastructure as a Service (IaaS)` The recommendation is always to provide useful information so that the assistant has good context on the topic to be discussed. ####**Interaction with the example:** ![Chat examples](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y2grmb8qnf7n5bo0bkow.png) ####**Parameters for the model:** Within our *Chat Playground* area, we can use the parameters option to tune our model: - **Temperature:** Controls randomness.
At higher temperatures, the answers can be more creative and unexpected. - **Max length (tokens):** Sets a limit on the number of tokens per model response. - **Top probabilities:** Similar to temperature, this controls randomness but uses a different method. You can learn more about it [here](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/chatgpt-quickstart?pivots=programming-language-studio&tabs=command-line). ## **Thanks for reading!** If you have any questions or ideas in mind, it will be a pleasure to be in touch with you and exchange knowledge together. If you would like to learn more about Azure OpenAI, this Microsoft Learn module is a good option: [Introduction to Azure OpenAI Service](https://learn.microsoft.com/en-us/training/modules/explore-azure-openai/). See you on [Twitter](https://twitter.com/esDanielGomez) / [LinkedIn](https://www.linkedin.com/in/esdanielgomez).
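As a code-level companion to the playground walkthrough above, here is a minimal sketch that sends the same system message and few-shot examples from Python. It assumes the legacy `openai` SDK (pre-1.0) with its Azure configuration; the endpoint, key environment variables, and the deployment name `gpt-4-32k` are placeholders you would take from your own resource, and the actual API call only runs when credentials are present.

```python
import os

# Few-shot setup mirroring the playground example: a system message plus
# user/assistant pairs that teach the expected answer format.
messages = [
    {"role": "system", "content": (
        "You are an assistant that knows about Azure resources, and who only "
        "answers if the resource name is of type Platform as a Service (PaaS), "
        "or Infrastructure as a Service (IaaS).")},
    {"role": "user", "content": "Azure App Service."},
    {"role": "assistant", "content": "Platform as a Service (PaaS)"},
    {"role": "user", "content": "Azure Virtual Machine."},
    {"role": "assistant", "content": "Infrastructure as a Service (IaaS)"},
    {"role": "user", "content": "Azure Kubernetes Service."},
]

# Only call the service when credentials are configured (placeholder env vars).
if os.environ.get("AZURE_OPENAI_KEY") and os.environ.get("AZURE_OPENAI_ENDPOINT"):
    import openai
    openai.api_type = "azure"
    openai.api_base = os.environ["AZURE_OPENAI_ENDPOINT"]  # https://<resource>.openai.azure.com/
    openai.api_version = "2023-05-15"
    openai.api_key = os.environ["AZURE_OPENAI_KEY"]
    response = openai.ChatCompletion.create(
        engine="gpt-4-32k",   # the deployment name created in the Studio
        messages=messages,
        temperature=0.2,      # low temperature -> less random answers
        max_tokens=100,
    )
    print(response["choices"][0]["message"]["content"])
```

The message list is exactly what the playground builds for you behind the scenes, so it is a convenient way to move an experiment from the Studio into an application.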
esdanielgomez
1,463,567
Great Blogger Collection
https://www.cnblogs.com/wangiqngpei557/
0
2023-05-10T14:01:37
https://dev.to/dingzhanjun/great-blogger-collection-1ja
https://www.cnblogs.com/wangiqngpei557/
dingzhanjun
1,464,458
Python Dictionaries
Creating a Dictionary in Python Creating a Python dictionary is as simple as placing the...
0
2023-05-11T07:01:47
https://dev.to/atchukolanaresh/python-dictionaries-d14
## Creating a Dictionary in Python Creating a Python dictionary is as simple as placing the required key-value pairs within curly brackets "{}". A colon ":" separates each key from its value, and multiple key-value pairs are separated by commas ",". The syntax of declaring a dictionary in Python is as follows –

```python
my_dict={'Name':'Ravi','Age':'32'}
```

Now let's look at some different ways of creating a Python dictionary –

```python
#creating an empty dictionary
my_dict={}
#creating a dictionary with string keys
fruits={'1':'apple','2':'banana','3':'cherry'}
#creating a dictionary with mixed keys
random_dict={'1':'red','Name':'Anushka'}
print(my_dict)
print(fruits)
print(random_dict)
```

**Output** –

```python
{}
{'1': 'apple', '2': 'banana', '3': 'cherry'}
{'1': 'red', 'Name': 'Anushka'}
```

Another way to create a dictionary in Python is with the built-in dict() function! Let's see how we can do that –

```python
Dict = dict([(1, 'Scaler'), (2, 'Academy')])
print("\nCreate a Dictionary by using dict(): ")
print(Dict)
```

**Output** –

```python
Create a Dictionary by using dict():
{1: 'Scaler', 2: 'Academy'}
```

## Updating Dictionary in Python We can do multiple things when trying to update a dictionary in Python. Some of them are – 1. Add a new entry. 2. Modify an existing entry. 3. Add multiple values to a single key. 4. Add a nested key.
Now let's try to implement these through code –

```python
#creating an empty dictionary
my_dict={}
print(my_dict)
#adding elements to the dictionary one at a time
my_dict[1]="James"
my_dict[2]="Jim"
my_dict[3]="Jake"
print(my_dict)
#modifying an existing entry
my_dict[2]="Apple"
print(my_dict)
#adding multiple values to a single key
my_dict[4]="Banana,Cherry,Kiwi"
print(my_dict)
#adding nested key values
my_dict[5]={'Nested' :{'1' : 'Scaler', '2' : 'Academy'}}
print(my_dict)
```

**Output –**

```python
{}
{1: 'James', 2: 'Jim', 3: 'Jake'}
{1: 'James', 2: 'Apple', 3: 'Jake'}
{1: 'James', 2: 'Apple', 3: 'Jake', 4: 'Banana,Cherry,Kiwi'}
{1: 'James', 2: 'Apple', 3: 'Jake', 4: 'Banana,Cherry,Kiwi', 5: {'Nested': {'1': 'Scaler', '2': 'Academy'}}}
```

## Deleting Dictionary Elements Now that we have covered how to update Python dictionaries, let's look at how we can either remove individual entries or clear the dictionary entirely. To remove an entry with a given key, we use the "del" keyword. Let's check out its implementation –

```python
my_dict={1: 'James', 2: 'Apple', 3: 'Jake', 4: 'Banana,Cherry,Kiwi'}
del my_dict[4]
print(my_dict)
```

**Output –**

```python
{1: 'James', 2: 'Apple', 3: 'Jake'}
```

Using del we deleted the entry for a specific key in the above example. Now, if we want to clear an entire dictionary in Python, we will use the [clear()](https://www.scaler.com/topics/clear-in-python/) method. Consider the following code –

```python
my_dict={1: 'James', 2: 'Apple', 3: 'Jake', 4: 'Banana,Cherry,Kiwi'}
my_dict.clear()
print(my_dict)
```

**Output** –

```python
{}
```

As you can see, the entire content of the dictionary "my_dict" was removed. Next, let's look at different ways of iterating over a dictionary.

### Use the for loop along with the items() method:

```python
my_dict={1: 'James', 2: 'Apple', 3: 'Jake', 4: 'Banana,Cherry,Kiwi'}
for x in my_dict.items():
    print(x)
```

**Output** –

```python
(1, 'James')
(2, 'Apple')
(3, 'Jake')
(4, 'Banana,Cherry,Kiwi')
```

### Use the for loop along with the values() method:

```python
my_dict={1: 'James', 2: 'Apple', 3: 'Jake', 4: 'Banana,Cherry,Kiwi'}
for x in my_dict.values():
    print(x)
```

**Output** –

```python
James
Apple
Jake
Banana,Cherry,Kiwi
```

### Properties of Python Dictionary Keys I am sure by now you have realised that [Python dictionary values](https://www.scaler.com/topics/python-dictionary-values/) have no restrictions and can be any Python object. The same is not true for keys. So there are specific properties of keys we must be aware of to work with Python dictionaries efficiently – 1. Duplicate keys are not allowed. When duplicate keys are encountered in Python, the last assignment is the one that wins. For example –

```python
my_dict={1: 'James', 2: 'Apple', 3: 'Jake', 1: 'Banana,Cherry,Kiwi'}
print(my_dict)
```

**Output** –

```python
{1: 'Banana,Cherry,Kiwi', 2: 'Apple', 3: 'Jake'}
```

2. Keys must be immutable – they can only be numbers, strings, or tuples. We have seen several examples earlier in this article of how this works. Click here to learn more about [Python Dictionary Keys() Method](https://www.scaler.com/topics/python-dictionary-keys/). ### Built-in Python Dictionary Functions & Methods Now that we know how to work with Python dictionaries, let's look at some dictionary functions and dictionary methods. ### Dictionary Functions cmp(dict1,dict2): It compares the items of both dictionaries and returns true if the value of the first dictionary is greater than the second, else returns false (Python 2 only – cmp() was removed in Python 3). len(dict): It gives the total number of items in the dictionary. str(dict): It produces a printable string representation of the dictionary. all(dict): It returns true if all the keys in the dictionary are true. any(dict): It returns true if any key in the dictionary is true. sorted(dict): It returns a new sorted list of keys in the dictionary. ### Python Dictionary Methods dict.clear(): It removes all the elements of the dictionary.
dict.copy(): It returns a shallow copy of the dictionary. dict.pop(): It removes the element with the specified key and returns its value. dict.get(): It is used to get the value of the specified key. dict.fromkeys(): It creates a new dictionary with keys from seq and values set to value. dict.items(): It returns the list of the dictionary's (key, value) tuple pairs. dict.keys(): It returns the list of keys of the dictionary. dict.update(dict2): It adds dict2's key-value pairs to dict. dict.has_key(): It returns true if the specified key is present in the dictionary, else false (Python 2 only – removed in Python 3, where the `in` operator is used instead).
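A small sketch combining several of the methods listed above; the names used here (`person`, `copy_of_person`) are made up for illustration and are not from the original article.

```python
# Demonstrating a few of the listed dictionary methods working together.
person = dict.fromkeys(["name", "age"], None)  # {'name': None, 'age': None}
person.update({"name": "Ravi", "age": 32})     # add/modify entries in bulk
copy_of_person = person.copy()                 # independent shallow copy

print(person.get("city", "unknown"))  # get() with a default avoids KeyError
age = person.pop("age")               # removes 'age' and returns 32
print(sorted(person))                 # ['name'] - sorted list of remaining keys
print(len(person), age)               # 1 32
```

Note that `pop()` both removes the entry and hands back its value, which is why it is often preferred over `del` when the removed value is still needed.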
atchukolanaresh
1,463,670
Establishing trust for an APT repository without a key
Debian-based systems warn you about security when downloading from an untrusted repository...
0
2023-05-10T15:27:15
https://dev.to/aciklab/anahtari-olmayan-apt-deposuna-guvenilirligin-saglanmasi-4018
apt, trusted, guven
Debian-based systems warn you about security when you download from an untrusted repository. After running "apt update" you will most likely hit an error like the following:

```
Err:1 https://www.xyz.com/ubuntu stable InRelease
Certificate verification failed: The certificate is NOT trusted. The certificate issuer is unknown. Could not handshake: Error in the certificate verification. [IP: X.Y.Z.K 443]
```

The proper fix for this error is to find a suitable key and add it under the /etc/apt/trusted.gpg.d/ directory. If you don't have a key, however, you need to trust the repository as an exception. # The legacy-compatible way In older releases, expressions like [trusted=yes] were used to grant trusted-exception status to various repositories. In current releases this feature no longer works. # The current solution The solution here is to create a file, for example named 10depo.conf, inside the */etc/apt/apt.conf.d/* directory, with the following content:

```
Acquire::https::www.xyz.com::Verify-Peer "false";
```

In this file, simply replace www.xyz.com with the domain of the repository you want to trust. After this step, when you run "apt update" you will see that APT trusts the repository and can fetch the package metadata.
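The steps above can be sketched as a small shell snippet. Here `www.xyz.com` is the placeholder domain from the error message, and `CONF_DIR` defaults to a scratch directory so the sketch is safe to try; on a real system it would be `/etc/apt/apt.conf.d`, you would run it with sudo, and then run `apt update`.

```shell
# Create the APT override that skips TLS peer verification for one repo.
# Assumption: www.xyz.com stands in for your repository's domain.
CONF_DIR="${CONF_DIR:-$(mktemp -d)}"   # real system: /etc/apt/apt.conf.d
printf 'Acquire::https::www.xyz.com::Verify-Peer "false";\n' \
  > "$CONF_DIR/10depo.conf"
cat "$CONF_DIR/10depo.conf"
```

Keep in mind this disables certificate verification for that one domain only; every other repository is still checked as usual.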
aliorhun
1,463,916
The Ultimate Guide To Choosing The Right Hoodie Or Sweatshirt
When it comes to choosing the right hoodie or sweatshirt, there are many factors to consider such as...
0
2023-05-10T19:22:36
https://dev.to/dressesmax/the-ultimate-guide-to-choosing-the-right-hoodie-or-sweatshirt-2f4b
clothingbrand, fashion, clothing
When it comes to choosing the right hoodie or sweatshirt, there are many factors to consider such as fabric, style, fit, and color/design. It's important to choose a hoodie/sweatshirt that not only looks good but also feels comfortable and fits well. This guide is designed to provide helpful tips and information on how to choose the right hoodie or sweatshirt that fits your style and needs. By following the advice in this guide, you'll be able to make an informed decision and find the perfect hoodie or a sweatshirt that will keep you cozy and stylish. Table of Contents: Factors to Consider When Choosing a Hoodie/Sweatshirt Pullover v. Zip up Fit Color and Design How to Choose the Right Size? How to Care for your Hoodie/Sweatshirt? How to Style Your Hoodie or Sweatshirt? Conclusion   Factors to Consider When Choosing a Hoodie/Sweatshirt The fabric of a hoodie or sweatshirt is a crucial factor to consider when choosing the right one. There are several common types of fabrics that are used in hoodies and sweatshirts, each with its own advantages and disadvantages. Cotton is a popular choice as it is soft, breathable, and easy to care for. It is also hypoallergenic and gentle on the skin. However, it tends to shrink and wrinkle easily, so it may require more attention when washing and drying. Polyester is another common fabric used in hoodies and sweatshirts. It is known for its durability, [wrinkle resistance](https://www.dressesmax.com/), and moisture-wicking properties. This makes it a good choice for active wear and outdoor activities. Polyester also holds its shape well and dries quickly, making it a popular choice for those who want a low-maintenance garment. However, it can sometimes feel less breathable and less soft than cotton. Fleece is a popular choice for colder weather as it provides excellent insulation and warmth. It is made from synthetic materials and is known for its softness, durability, and resistance to pilling. 
Fleece is often used as a lining in hoodies and sweatshirts to provide added warmth and comfort. Blends of cotton and polyester are also commonly used, as they combine the benefits of both fabrics, creating a soft and durable garment that is easy to care for. Pullover v. Zip up The style of a hoodie or sweatshirt can vary widely, and it's important to choose one that matches your personal style and preferences. Pullover hoodies are the most traditional style, with a front pocket and no zippered closure. They are often more casual and relaxed in style. Zip-up hoodies, on the other hand, offer a more versatile option with the ability to be worn open or closed. They often have pockets on both sides of the front zipper, allowing for more storage space. Hooded hoodies and sweatshirts offer an added layer of warmth and protection from the elements, making them a popular choice for cooler weather. However, non-hooded options are also available and may be preferred for a more streamlined look. [Crewneck sweatshirts](https://www.dressesmax.com/) have a simple, round neckline and are often more fitted in style. V-neck options have a V-shaped neckline, which can add a touch of sophistication to a casual look. Sleeve length can also vary, with options ranging from short sleeves to long sleeves and everything in between. Choosing the right style can make all the difference in how comfortable and confident you feel in your hoodie or sweatshirt.   Fit The fit of a hoodie or sweatshirt is another important consideration when choosing the right one. Regular fit hoodies and sweatshirts offer a classic and comfortable fit, with a relaxed silhouette that is not too tight or too loose. They are a great choice for everyday wear and can be easily layered with other clothing items. Slim fit options, on the other hand, are designed to be more form-fitting and are often preferred by those who want a more tailored look. 
They hug the body and can highlight the wearer's physique, making them a popular choice for athletic wear or more formal occasions. Oversized fit hoodies and sweatshirts have gained popularity in recent years, offering a relaxed and casual look that can be stylish and comfortable. They are designed to be loose and baggy, providing plenty of room to move around in. This style can be worn in a variety of ways, including as a statement piece or as a comfortable layering option. However, it's important to be mindful of the proportions of an oversized fit to avoid looking sloppy or shapeless. Ultimately, the right fit will depend on your personal preferences and the intended use of the hoodie or sweatshirt. Color and Design Color and design are important aspects of choosing the right hoodie or sweatshirt. Solid options are versatile and can be easily matched with a variety of other clothing items. Patterned options, on the other hand, can add interest and texture to an outfit. From stripes to florals, there are a variety of patterns to choose from depending on your personal style. Graphic designs have become increasingly popular in recent years, featuring everything from logos to artwork to slogans. They can add a unique and eye-catching element to your outfit, making a statement or showcasing your interests. Plain options, on the other hand, offer a more classic and timeless look. They can be dressed up or down and are a great choice for those who prefer a more understated style. The right color and design will depend on your personal preferences and the occasion for which you plan to wear the [hoodie or sweatshirt.](https://www.dressesmax.com/)   How to Choose the Right Size? Choosing the right size is crucial to ensuring your hoodie or sweatshirt is comfortable and fits well. The first step is to measure yourself using a tape measure, taking into account your chest, waist, and hip measurements. 
These measurements can then be compared to the size chart provided by the manufacturer to determine the best size for you. It's important to note that different brands may have slightly different size charts, so it's important to check each one before making a purchase. If you're unsure about your size or between sizes, trying on the hoodie or sweatshirt in-store can be a great option. This allows you to get a better sense of the fit and how it looks on your body. When trying on a hoodie or sweatshirt, make sure to move around and raise your arms to ensure it's comfortable and not too restrictive. Additionally, consider the material and how it may shrink or stretch over time with wear and washing. By taking the time to choose the right size, you can ensure your hoodie or sweatshirt will be comfortable and functional for everyday wear. How to Care for Your Hoodie/Sweatshirt? Caring for your hoodie or sweatshirt properly is important to ensure it lasts as long as possible. Washing and drying instructions can vary depending on the material and manufacturer, so it's important to read the care label carefully before washing. In general, most hoodies and sweatshirts can be machine washed in cold water and tumble dried on a low setting. However, some may require special care, such as hand washing or air drying, to maintain their shape and color. Proper storage is also important to prevent damage and maintain the quality of your hoodie or sweatshirt. It's best to store them in a cool, dry place, away from direct sunlight, to prevent fading or discoloration. Additionally, hanging them up can help to maintain their shape and prevent wrinkles. If you need to fold them, be sure to avoid creasing or folding along the graphic or design. By following these simple care tips, you can keep your hoodie or sweatshirt looking and feeling great for years to come. How to Style Your Hoodie or Sweatshirt? 
Styling a hoodie or sweatshirt can be fun and versatile, allowing you to create different looks depending on the occasion and your personal style. Layering with other clothing items, such as a denim jacket or a leather bomber, can add a touch of sophistication to a casual outfit. Dressing up or down can also make a big difference, with a hoodie or sweatshirt pairing well with everything from jeans and sneakers to dress pants and boots. Accessorizing with hats, scarves, and jewelry can also help to elevate your look, whether you prefer a sleek, minimalist style or a more eclectic, bohemian vibe. Pairing with different footwear options, such as sneakers, boots, or even heels, can also change the overall aesthetic of your outfit. Finally, choosing the right colors and designs to match your personal style is key, with everything from bold graphics and patterns to classic neutral tones and solid colors available to suit your preferences. Conclusion In summary, choosing the right hoodie or sweatshirt involves considering a variety of factors, including the fabric, fit, color and design, and size. Measuring yourself and using size charts can help to ensure you choose the right size, while properly caring for your hoodie or sweatshirt can help it last as long as possible. Ultimately, the best hoodie or sweatshirt for you will depend on your personal style and needs. Whether you prefer a classic solid color or a bold graphic design, taking the time to choose the right one can help you feel comfortable and confident in your outfit. If you are still looking to explore your options between whether to buy a
dressesmax
1,464,058
Free hosting providers for front-end & back-end applications
I have created a list of free web hosting service providers. I have checked each and every website in...
0
2023-05-10T21:42:41
https://dev.to/richixaws/free-hosting-providers-for-front-end-back-end-applications-b3f
java, javascript, microservices
I have created a list of free web hosting service providers. I have checked each and every website in the past weeks! - My youtube channel - Vuelancer hosting providers **Freemium Resources**
Heroku - https://heroku.com - Deploying front & backend apps
Netlify - https://netlify.com
Vercel (Zeit) - https://vercel.com - Deploying front & backend apps at no cost
Firebase - https://firebase.com - Deploying front & backend apps at no cost
Surge - https://surge.sh
Render - https://render.com
Hostman - https://hostman.com - Always free for frontend; pay only for backend apps (free credits are also available)
Glitch - https://glitch.com - Deploying front & backend apps at no cost
Deno deploy - https://deno.com/deploy - Site for deploying apps created using Deno
Fly - https://fly.io - Deploying front & backend apps at no cost
Fleek - https://fleek.co
Begin - https://begin.com
Stormkit - https://stormkit.io - Deploying front & backend apps at no cost
Deta - https://deta.sh - Deploying Node.js and Python apps and APIs. They support most web frameworks like Express, Koa, Flask, and FastAPI. They also provide a very fast and powerful NoSQL database for free. (updated)
Commonshot - https://commons.host - Static web hosting and CDN
Heliohost - https://heliohost.org - PHP, Ruby on Rails, Perl, Django, Java (JSP)
Fast - https://fast.io (newly added)
Kintohub - https://kintohub.com - Supports static sites, web apps, and backend apps (updated)
Hubspot - https://hubspot.com - CRM tool for marketing, sales, content management, and customer service (newly added)
Torus.host - https://torus.host - Needs an AWS account
Bip - https://bip.sh - Static web hosting provider (updated)
Cloudflare pages - https://pages.cloudflare.com - Static frontend hosting provider (updated)
Qoddi - https://qoddi.com/ - PaaS App Hosting Platform (newly added)
Thank you! We can also use GitHub Pages, GitLab, or Bitbucket to deploy static web pages!
**Premium providers** AWS GCP Digitalocean Microsoft Azure Vultr Alibaba Cloud Linode **Final thoughts** Use VERCEL for any front-end application as well as medium-level backend apps. Use HEROKU for any level backend apps. Other websites are also useful when you are deploying static sites. But I recommend you to go through each website and find which one suits your project.
richixaws
1,464,321
Dictionary Methods of Python
1. items() :- It gives a list containing a tuple for each key value...
0
2023-05-11T04:29:56
https://dev.to/ankit_dagar/dictionary-methods-of-python-1ioe
### 1. items() :- It gives a list-like view containing a tuple for each key-value pair example:-

```
>>> thisdict = {
...     "brand": "Ford",
...     "model": "Mustang",
...     "year": 1964
... }
>>> thisdict.items()
dict_items([('brand', 'Ford'), ('model', 'Mustang'), ('year', 1964)])
```

### 2. clear() :- It removes all the elements from the dictionary example:-

```
>>> thisdict.clear()
>>> thisdict
{}
```

### 3. copy() :- It gives a shallow copy of the dictionary example:-

```
>>> newdict=thisdict.copy()
>>> newdict
{'brand': 'Ford', 'model': 'Mustang', 'year': 1964}
```

### 4. fromkeys() :- It is used to create a new dictionary with keys taken from an iterable (here, the keys of the old dictionary), all set to the given value example:-

```
>>> newDict=dict.fromkeys(thisdict,'ankit')
>>> newDict
{'brand': 'ankit', 'model': 'ankit', 'year': 'ankit'}
```

### 5. get() :- It gives the value of the specified key example:-

```
>>> newDict.get('brand')
'ankit'
```

### 6. keys() :- It gives a list-like view containing the dictionary's keys example:-

```
>>> newDict.keys()
dict_keys(['brand', 'model', 'year'])
```

### 7. pop() :- It removes the element with the specified key and returns its value example:-

```
>>> newdict.pop('brand')
'Ford'
>>> newdict
{'model': 'Mustang', 'year': 1964}
```

### 8. popitem() :- It removes and returns the last inserted key-value pair example:-

```
>>> newdict.popitem()
('year', 1964)
>>> newdict
{'model': 'Mustang'}
```

### 9. setdefault() :- It gives the value of the specified key. If the key does not exist, it inserts the key with the specified value example:-

```
>>> newdict.setdefault('year',1964)
1964
>>> newdict
{'model': 'Mustang', 'year': 1964}
```

### 10. update() :- It updates the dictionary with the specified key-value pairs example:-

```
>>> fruits1 = {'apple': 2, 'banana': 3}
>>> fruits2 = {'orange': 4, 'pear': 1}
>>> fruits1.update(fruits2)
>>> fruits1
{'apple': 2, 'banana': 3, 'orange': 4, 'pear': 1}
```

### 11. values() :- It returns a list-like view of all the values in the dictionary example:-

```
>>> fruits1
{'apple': 2, 'banana': 3, 'orange': 4, 'pear': 1}
>>> fruits1.values()
dict_values([2, 3, 4, 1])
```
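One detail worth knowing beyond the list above: get() accepts an optional second argument that is returned when the key is missing, which avoids the KeyError that square-bracket access raises. A small illustrative sketch (not from the original article):

```python
fruits1 = {'apple': 2, 'banana': 3}

print(fruits1.get('apple'))      # 2
print(fruits1.get('cherry'))     # None: missing key, no default given
print(fruits1.get('cherry', 0))  # 0: the supplied default is returned

# Square-bracket access, by contrast, raises KeyError for a missing key.
try:
    fruits1['cherry']
except KeyError:
    print('cherry is not in the dictionary')
```

This makes get() the safer choice when a key may legitimately be absent, for example when counting occurrences or reading optional settings.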
ankit_dagar
1,464,361
String Methods
Introduction The aim of this paper is to help the reader get familiar with the standard...
0
2023-05-11T05:42:44
https://dev.to/utsavdhall/string-methods-pg2
python
- ## Introduction The aim of this article is to help the reader get familiar with the standard string functions in Python. - ## Functions

1. `.count()`: This function returns the count of a specific value inside a string; the value is passed as an argument. **Syntax:**

```python
s="aabbzzz"
character_to_be_counted='z'
frequency=s.count(character_to_be_counted) #This is the generic syntax
print(frequency)
```

- The snippet above prints the number of times **"z"** appears in the string **"s"**, which is `3`

2. `.find() and .index()`: The reason I have grouped these two together is that they serve almost the same functionality; both of these functions return the first index where a specified value is found. However, when they fail to find the value, `.index()` raises a **ValueError** exception and `.find()` returns -1. **Syntax:**

```python
s="aabbcc"
answer=s.find("a")
answer2=s.index("a")
print(answer,answer2)
```

- Both of these functions return `0` (the first occurrence of **"a"**)

3. `.format()`: This function places text inside a given string. The user needs to define placeholders inside the string so the function knows where to place specific values. **Syntax:**

```python
text="The price of {fruit} is {price}"
modified_string=text.format(fruit="mango",price=35)
print(modified_string)
```

- The output for the snippet above is `The price of mango is 35`
- `format` maps the values to their respective placeholders.

4. `.isalpha()`: This function returns a boolean value after checking whether all characters in a string are alphabetic or not. **Syntax:**

```python
text1="Company"
text2="Company10"
check_alpha1=text1.isalpha()
check_alpha2=text2.isalpha()
print(check_alpha1,check_alpha2)
```

- The above snippet prints `True` and `False` respectively.

5. `.replace()`: This function replaces a given character or phrase with another one. An optional argument can be added, specifying the number of existing occurrences the user wants to replace.
**Syntax:**

```python
text="I like Bananas"
current_phrase="Bananas"
new_phrase="Oranges"
modified_text=text.replace(current_phrase,new_phrase) #This is the generic syntax
print(modified_text)
```

- The snippet above prints `I like Oranges`

6. `.upper() and .lower()`: Both of these functions have similar functionality; `.upper()` changes all characters in a string to uppercase and `.lower()` changes all the characters to lowercase. **Syntax:**

```python
text="AaAbC"
upper_text=text.upper()
lower_text=text.lower()
print(upper_text,lower_text)
```

- The snippet above prints `AAABC` and `aaabc` respectively.

7. `.split()`: This function splits the string on the basis of the given argument, which is whitespace by default. **Syntax:**

```python
text1="This is a sentence"
new_list=text1.split()
print(new_list)
```

- The above snippet prints `['This', 'is', 'a', 'sentence']`, splitting the string by spaces.

8. `.strip()`: This function removes leading and trailing characters from a string; by default it strips whitespace. **Syntax:**

```python
text1="  abcd  "
new_text1=text1.strip()
print(new_text1)
```

- The above snippet prints `abcd`, removing all spaces from the right and the left end.

9. `.join()`: This function is the polar opposite of the `.split()` function. It will help you out if you want to join a list into a string using a separator. **Syntax:**

```python
list=["This","is","a","sentence"]
separator=" "
answer=separator.join(list)
print(answer)
```

- The snippet above prints `This is a sentence`, joining the words with the specified separator.

10. `.isdigit()`: Returns `True` if all characters in the given string are digits, else returns `False`. **Syntax:**

```python
digit="10273"
non_digit="adefre13"
print(digit.isdigit())
print(non_digit.isdigit())
```

- The snippet above prints `True` and `False` respectively.
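The functions above are often combined. As a hypothetical example (the input string here is made up), normalizing messy user input can chain `.strip()`, `.lower()`, `.split()`, and `.join()` in a single expression:

```python
# Normalize messy input: trim the ends, lowercase everything,
# then collapse inner runs of whitespace down to single spaces.
raw = "   Hello   WORLD  from   Python   "
normalized = " ".join(raw.strip().lower().split())
print(normalized)  # hello world from python
```

The trick is that `.split()` with no argument splits on any run of whitespace, so rejoining with a single space collapses the gaps.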
utsavdhall
1,464,455
Python Strings
Strings: What is a string in python? Technically speaking, an immutable data...
0
2023-05-11T06:58:51
https://dev.to/atchukolanaresh/python-strings-6cd
# Strings: ## What is a string in python? Technically speaking, an **immutable data sequence** is known as a string in Python. In simple words, as discussed in the case of a crossword, a Python string is nothing but an array of characters, but a computer does not understand characters. It only understands the language of 0's and 1's. So these characters are converted to numbers that the computer can understand. This conversion is known as encoding. ASCII and Unicode are some of the popular encodings used. To put it simply, Python strings are sequences of **Unicode characters**. ## Creating a String in Python A string in Python can be easily created using single, double, or even triple quotes. The characters that form the string are enclosed within any of these quotes. **Note:** Triple quotes are generally used when we are working with multiline strings and docstrings in Python. Let's take a look at an example where we will create strings using the three types of quotes –

**Code:**

```python
single_string='Hello'
double_string="Hello World"
triple_string="""Welcome to Scaler"""
print(single_string)
print(double_string)
print(triple_string)
```

**Output:**

```python
Hello
Hello World
Welcome to Scaler
```

## Different Methods of Python String Module Python has a plethora of built-in string methods that can be used for string manipulation. Let's take a look at some of the commonly used ones.
capitalize(): Takes the first character of the string and converts it to upper case
casefold(): Converts the entire string into lowercase characters (a more aggressive version of lower())
center(): This function centers the string within a field of a specified width
count(): The count() function will return the count of the specified character in the string
encode(): The encoded version of the string is returned
endswith(): As the function name suggests, it tests if a string ends with a specified substring
expandtabs(): This function returns a copy of the string with tab characters expanded to the specified tab size
find(): Looks for a specified substring inside another string and returns the index where it was found
format(): The function we studied above; formats the specified values in the string
format_map(): Performs the same function as format(), but takes a single mapping as its argument
index(): Looks for a specified substring inside another string and returns the index where it was found
isalnum(): Boolean function that would return true if all the characters in the string are alphanumeric, i.e. contain only numbers or alphabets and no special characters
[isalpha()](https://www.scaler.com/topics/isalpha-in-python/): Boolean function that returns True if all the characters in the string are alphabets
isascii(): If all the characters in the string are ASCII characters, then this boolean function returns True
isdecimal(): If all the characters in the string are decimals (numbers), this boolean function returns True
isdigit(): Boolean function that would return true if all the characters in the string are digits
isidentifier(): If the string is a valid identifier, then this boolean function will return True
islower(): If all the characters in the string are in lowercase, then this boolean function returns True
isnumeric(): If all the characters in the string are numeric, then this boolean function returns True
isspace(): If all the characters in the string are whitespace, then this function returns True
isupper(): If all the characters in the string are in uppercase, then this boolean function returns True
join():
This function takes an iterable and converts it into a string lower(): This function converts the string into lowercase replace(): Returns a string where a specified value is replaced with a specified value rfind(): [rfind() function in python](https://www.scaler.com/topics/rfind-in-python/) searches the string for a specified value and returns the last index of where it was found rindex(): Searches the string for a specified value and returns the last index of where it was found rstrip(): Returns a right trim version of the string split(): Splits the string at the specified separator, and returns a list splitlines(): [splitlines() in Python](https://www.scaler.com/topics/splitlines-in-python/) splits the string at line breaks and returns a list startswith(): The [startswith() in python](https://www.scaler.com/topics/startswith-in-python/) returns true if the string starts with the specified value strip(): Returns a trimmed version of the string, i.e. without spaces at the start and end of the string upper(): [upper()](https://www.scaler.com/topics/upper-in-python/) converts a string into upper case code : ```python string1 = "123" string2 = "abcd" string3 = "12!cdef" string4 = "ab cd" print(string1.isnumeric()) print(string1.isalnum()) print(string2.isalpha()) print(string3.isalnum()) print(string4.isalpha()) ``` **Output:** ```plaintext True True True False False ``` ### Syntax of Python String Split ```python str.split(separator, maxsplit) ``` str: variable containing the input string. Datatype – string. ## Parameters of Split Function in Python - **separator:** This is the delimiter. The string splits at this specified delimiter. This is optional, i.e., you may or may not specify a separator. In case no separator has been specified, the default separator is space. In the split() function, we can have one or more characters as delimiters, or we can also have two or more words as delimiters. 
- **maxsplit:** This specifies the maximum number of times the string should be split. This is optional, i.e., you may or may not specify the maxsplit count. If it is not specified, the default value is -1, i.e., there is no limit on the number of splits. Any negative value works the same as when no value is specified.

## Example of Python String Split()

### Without Specifying Any Maxsplit

### Case 1:

```python
str = 'InterviewBit is a great place for learning'
print(str.split())
```

In the above case, no separator has been specified, so the default separator, i.e., space, is used to split the string. Also, no maxsplit count is given, so the default value is -1, i.e., no limit on the number of splits. The string is therefore split wherever a space is found.

### Output:

```python
['InterviewBit', 'is', 'a', 'great', 'place', 'for', 'learning']
```

**Case 2**

```python
str = 'InterviewBit, is a, great place, for learning'
print(str.split(','))
```

In the above case, the separator has been specified as a comma (","). Since there is no _maxsplit_ count, the default value is again -1, i.e., no limit on the number of splits.

### Output:

```python
['InterviewBit', ' is a', ' great place', ' for learning']
```

## join():

## Syntax of join() Function in Python

The syntax of join() is as follows:

```python
stringSeparator.join(iterable)
```

## Parameter of join() Function in Python

The join method takes a single parameter:

- iterable: The iterable can be of any type, including List, Tuple, Dictionary, Set, and String.

## Return Value of join() Function in Python

**Return Type:** string

It returns a string after concatenating all elements of the iterable.
## Example of join() Function in Python

An example to show the working of join is as follows:

```python
elements = ['A', 'B', 'C', 'D']
str1 = ""
str2 = "-"

str1 = str1.join(elements)
str2 = str2.join(elements)

print(str1)
print(str2)
```

**Output:**

```plaintext
ABCD
A-B-C-D
```
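The `maxsplit` parameter described in the split() section above never got its own example; here is a short sketch demonstrating it:

```python
text = "InterviewBit is a great place for learning"

# maxsplit=2: split at most twice; the rest of the string stays as one piece
print(text.split(" ", 2))
# ['InterviewBit', 'is', 'a great place for learning']

# maxsplit=-1 (the default): no limit on the number of splits
print(text.split(" ", -1))
# ['InterviewBit', 'is', 'a', 'great', 'place', 'for', 'learning']
```

Note that once the maxsplit count is exhausted, the remainder of the string is returned unsplit as the final list element.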
atchukolanaresh
1,465,149
# Build a web server with Rust and tokio - Part 0: the simplest possible GET handler
Welcome to this series of blog posts where we will be exploring how to build a web server from...
0
2023-05-11T19:06:08
https://dev.to/geoffreycopin/-build-a-web-server-with-rust-and-tokio-part-0-the-simplest-possible-get-handler-1lhi
rust, tokio, webdev
Welcome to this series of blog posts where we will be exploring how to build a web server from scratch using the Rust programming language. We will be taking a hands-on approach, maximizing our learning experience by using as few dependencies as possible and implementing as much logic as we can. This will enable us to understand the inner workings of a web server and the underlying protocols that it uses. By the end of this tutorial, you will have a solid understanding of how to build a web server from scratch using Rust and the tokio library. So, let's dive in and get started on our journey!

In this first part, we'll be building a barebones web server that can only answer GET requests with a static Not Found response. This will give us a good starting point to build upon in the following tutorial.

## Setting up our project

First, we need to create a new Rust project. We'll use the following crates:

* [tokio](https://docs.rs/tokio/1.28.0/tokio/): async runtime
* [anyhow](https://docs.rs/anyhow/1.0.44/anyhow/): easy error handling
* [maplit](https://docs.rs/maplit/1.0.2/maplit/): macro for creating HashMaps
* [tracing](https://docs.rs/tracing/0.1.27/tracing/): structured logging
* [tracing-subscriber](https://docs.rs/tracing-subscriber/0.2.19/tracing_subscriber/): instrumentation

```bash
cargo new webserver
cargo add tokio --features full
cargo add anyhow maplit tracing tracing-subscriber
```

## Anatomy of a simple GET request

In order to actually see what a GET request looks like, we'll set up a simple server listening on port 8080 that will print the incoming requests to the console.
This can be done with `netcat`:

```bash
nc -l 8080
```

Now, if we open a new terminal and use `curl` to send a simple GET request to our server, we should see the following output:

<img src="https://raw.githubusercontent.com/geoffreycopin/http_server/gh-pages/blog/img/coloured-get.png">

Let's break down the request parts:

* <span style="background-color: #F8676A">the method:</span> indicates the action to be performed on the resource. In this case, we are performing a GET request, which means we want to retrieve the resource
* <span style="background-color: #D36FEB">the path:</span> uniquely identifies the resource. In this case, we are requesting the root path `/`
* <span style="background-color: #6D88FD">the protocol:</span> the protocol version. At this stage, we will always assume HTTP/1.1
* <span style="background-color: #0F8D9F">the headers:</span> a set of key-value pairs that provide additional information about the request. Our request contains the `Host` header, which indicates the host name of the server, the `User-Agent` header, which describes the client software that is making the request, and the `Accept` header, which indicates the media types that are acceptable for the response. We'll go into more details about headers in a later tutorial

We'll use the following `struct` to represent requests in our code:

```rust
// req.rs
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct Request {
    pub method: Method,
    pub path: String,
    pub headers: HashMap<String, String>,
}

#[derive(Debug, Copy, Clone, Eq, PartialEq, Hash)]
pub enum Method {
    Get,
}
```

Parsing the request is just a matter of splitting the request string into lines. The first line contains the method, path and protocol separated by spaces. The following lines contain the headers, followed by an empty line.

```rust
// req.rs
use std::{collections::HashMap, hash::Hash};

use tokio::io::{AsyncBufRead, AsyncBufReadExt};

// [...]
impl TryFrom<&str> for Method { type Error = anyhow::Error; fn try_from(value: &str) -> Result<Self, Self::Error> { match value { "GET" => Ok(Method::Get), m => Err(anyhow::anyhow!("unsupported method: {m}")), } } } pub async fn parse_request(mut stream: impl AsyncBufRead + Unpin) -> anyhow::Result<Request> { let mut line_buffer = String::new(); stream.read_line(&mut line_buffer).await?; let mut parts = line_buffer.split_whitespace(); let method: Method = parts .next() .ok_or(anyhow::anyhow!("missing method")) .and_then(TryInto::try_into)?; let path: String = parts .next() .ok_or(anyhow::anyhow!("missing path")) .map(Into::into)?; let mut headers = HashMap::new(); loop { line_buffer.clear(); stream.read_line(&mut line_buffer).await?; if line_buffer.is_empty() || line_buffer == "\n" || line_buffer == "\r\n" { break; } let mut comps = line_buffer.split(":"); let key = comps.next().ok_or(anyhow::anyhow!("missing header name"))?; let value = comps .next() .ok_or(anyhow::anyhow!("missing header value"))? .trim(); headers.insert(key.to_string(), value.to_string()); } Ok(Request { method, path, headers, }) } ``` ## Accepting connections Now that we know how to parse a request, we can start accepting connections. Each time a new connection is established, we'll spawn a new task to handle it in order to keep the main thread free to accept new connections. ```rust // main.rs use tokio::{io::BufStream, net::TcpListener}; use tracing::info; mod req; static DEFAULT_PORT: &str = "8080"; #[tokio::main] async fn main() -> anyhow::Result<()> { // Initialize the default tracing subscriber. 
    tracing_subscriber::fmt::init();

    let port: u16 = std::env::args()
        .nth(1)
        .unwrap_or_else(|| DEFAULT_PORT.to_string())
        .parse()?;

    let listener = TcpListener::bind(format!("0.0.0.0:{port}")).await.unwrap();
    info!("listening on: {}", listener.local_addr()?);

    loop {
        let (stream, addr) = listener.accept().await?;
        let mut stream = BufStream::new(stream);

        // do not block the main thread, spawn a new task
        tokio::spawn(async move {
            info!(?addr, "new connection");

            match req::parse_request(&mut stream).await {
                Ok(req) => info!(?req, "incoming request"),
                Err(e) => {
                    info!(?e, "failed to parse request");
                }
            }
        });
    }
}
```

We can now run our server on port `8081` with the following command: `cargo run -- 8081`. Sending a GET request to `localhost:8081` should print the following output:

```
 INFO http_server: listening on: 0.0.0.0:8081
 INFO http_server: new connection addr=127.0.0.1:49351
 INFO http_server: incoming request req=Request { method: Get, path: "/", headers: {"Host": "localhost", "User-Agent": "curl/7.87.0", "Accept": "*/*"} }
```

## Sending a response

At this stage, we'll answer every request with a static `Not found` page. Our response will have the following format:

<img src="https://raw.githubusercontent.com/geoffreycopin/http_server/gh-pages/blog/img/coloured-response.png">

Let's explore the different parts of the response:

* the status line: contains the protocol version, the status code and a human-readable status message
* the response headers: encoded in the same way as for the request. Our response contains the `Content-Length` header, which specifies the length of the response body, and the `Content-Type` header, which indicates that the response body is encoded in HTML. The headers are followed by an empty line.
* the response body: contains the actual data that will be displayed in the browser.
We used an empty HTML document for brevity.

We'll use the following `struct` to represent responses in our code:

```rust
// resp.rs
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt};

#[derive(Debug, Clone)]
pub struct Response<S: AsyncRead + Unpin> {
    pub status: Status,
    pub headers: HashMap<String, String>,
    pub data: S,
}

#[derive(Debug, Copy, Clone, Eq, PartialEq, Hash)]
pub enum Status {
    NotFound,
}

impl Display for Status {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        match self {
            Status::NotFound => write!(f, "404 Not Found"),
        }
    }
}
```

The `data` field is generic over the type of the response body to account for future use cases where we might want to send a stream of data.

Creating a response from an HTML string is straightforward:

```rust
// resp.rs
use std::io::Cursor;

use maplit::hashmap;

// [..]

impl Response<Cursor<Vec<u8>>> {
    pub fn from_html(status: Status, data: impl ToString) -> Self {
        let bytes = data.to_string().into_bytes();

        let headers = hashmap! {
            "Content-Type".to_string() => "text/html".to_string(),
            "Content-Length".to_string() => bytes.len().to_string(),
        };

        Self {
            status,
            headers,
            data: Cursor::new(bytes),
        }
    }
}
```

Sending a response is a bit more involved. We'll use the `AsyncWrite` trait to write the response to a generic output stream.

```rust
// resp.rs
use std::{
    collections::HashMap,
    fmt::{Display, Formatter},
    io::Cursor,
};

use maplit::hashmap;
use tokio::io::{AsyncRead, AsyncWrite, AsyncWriteExt};

// [...]
impl<S: AsyncRead + Unpin> Response<S> {
    pub fn status_and_headers(&self) -> String {
        let headers = self
            .headers
            .iter()
            .map(|(k, v)| format!("{}: {}", k, v))
            .collect::<Vec<_>>()
            .join("\r\n");

        format!("HTTP/1.1 {}\r\n{headers}\r\n\r\n", self.status)
    }

    pub async fn write<O: AsyncWrite + Unpin>(mut self, stream: &mut O) -> anyhow::Result<()> {
        stream
            .write_all(self.status_and_headers().as_bytes())
            .await?;

        tokio::io::copy(&mut self.data, stream).await?;

        Ok(())
    }
}

impl Display for Status {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        match self {
            Status::NotFound => write!(f, "404 Not Found"),
        }
    }
}
```

## Putting it all together

We'll use the following document as our `404` page:

```html
<!-- static/404.html -->
<!DOCTYPE html>
<html lang="en">
<head>
    <title>Page Not Found</title>
    <style>
        body {
            background-color: #f8f8f8;
            font-family: Arial, sans-serif;
            font-size: 16px;
            color: #333;
        }

        .container {
            max-width: 600px;
            margin: 0 auto;
            padding: 40px 20px;
            text-align: center;
            border: 1px solid #ddd;
            border-radius: 5px;
            background-color: #fff;
            box-shadow: 0 2px 4px rgba(0,0,0,0.1);
        }

        h1 {
            font-size: 48px;
            margin-bottom: 20px;
            color: #333;
        }

        p {
            font-size: 24px;
            margin-bottom: 40px;
        }
    </style>
</head>
<body>
<div class="container">
    <h1>404</h1>
    <p>The page you are looking for could not be found.</p>
</div>
</body>
</html>
```

We can now use our `Response` struct to send a `Not found` page to the client when we receive a request:

```rust
// main.rs
// [...]
let resp = resp::Response::from_html(
    resp::Status::NotFound,
    include_str!("../static/404.html"),
);

resp.write(&mut stream).await.unwrap();
// [...]
```

Navigating to `localhost:8081` should now display our `Not found` page. That's a good start, but we're still far from a fully functional web server. In the next part, we'll add support for serving static files.

You can find the code for this part [here](https://github.com/geoffreycopin/http_server).

Looking for a Rust dev?
[Let’s get in touch!](mailto:copin.geoffrey@gmail.com)
geoffreycopin
1,465,228
A Step-by-Step Guide to Easily Creating Azure Virtual Machines with PowerShell
In a recent blog article, we demonstrated how to Create a Virtual Machine in Less Than a Minute with...
0
2023-05-11T21:33:50
https://dev.to/henriettatkr/a-step-by-step-guide-to-easily-creating-azure-virtual-machines-with-powershell-1kib
azure, tutorial, beginners, devops
In a recent blog article, we demonstrated how to [Create a Virtual Machine in Less Than a Minute with ARM Template and Azure Quick Start](https://dev.to/henriettatkr/create-a-virtual-machine-in-less-than-a-minute-with-arm-template-and-azure-quick-start-1bbb). However, there are other ways of creating an Azure virtual machine, and in this blog article, we will show you how to achieve this using PowerShell. ## Creating an Azure virtual machine using PowerShell PowerShell is a task automation and configuration management framework from Microsoft, consisting of a command-line shell and associated scripting language. It allows you to perform various tasks and automate processes, including creating and managing virtual machines in the cloud. Let's get started with the steps to create an Azure virtual machine using PowerShell. ## Step 1: Log in to the Azure Portal using this [link](https://azure.microsoft.com/en-gb/free/) ## Step 2: Once you are logged in, you should see a search bar at the top of the page. To the right of the search bar, there should be an icon that looks like a **>_** symbol as highlighted in the image below. This is the Cloud Shell icon. Click on the Cloud Shell icon to open the Cloud Shell window. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k23gakz8hbi7s4zulhir.png) ## Step 3: Select the PowerShell option to open the PowerShell environment within the Cloud Shell. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/goqtt3wgfiues422ja0x.png) ## Step 4: Fill in the storage account and file sharing fields if requested to create storage. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wxvitcgmi1yi9kzp7gn4.png)

## Step 5: Create a Resource Group

To create a resource group, you can use the command below

```
New-AzResourceGroup -Name 'myCoolRg' -Location 'EastUS'
```

**Note:** Replace **_myCoolRg_** with the name of your resource group and **_EastUS_** with the location of your choice.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q56ftj9v61z51wo7nn9r.png)

## Step 6: Once you have created a resource group, you can create a new virtual machine using the code below

```
New-AzVM -ResourceGroupName 'myCoolRg' -Name "myvMps" -Location "West US" -VirtualNetworkName "myVnetPS" -SubnetName "mySubnetPS" -SecurityGroupName "myNSGPS" -PublicIpAddressName "myPublicIpPs"
```

Once you hit enter, you will be prompted to supply a username and password

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cv7n7bbt8bxuoe2m2auk.png)

## Step 7: Verify the virtual machine creation

Once the virtual machine creation process is complete, you will see the screen below

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1xxjrbh5wfdc3byiek9i.png)

## Step 8: Navigate to the Azure Resource Group page in the portal and click on the Resource Group created to see the running Virtual Machine

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9xyhjl98zsrakuhuor5v.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3lgsfc7p5n773l4udat0.png)

## Step 9: The Virtual Machine can be deleted using this command

```
Remove-AzResourceGroup -Name 'myCoolRg'
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0yqcashqyq0thcycb63u.png)

Creating an Azure virtual machine using PowerShell is a straightforward process, and you can create yours using the steps above.
henriettatkr
1,465,294
I need supporter another people
A post by Benson Onditi
0
2023-05-11T21:39:29
https://dev.to/oino/i-need-supporter-another-people-4oca
oino
1,465,342
Generics In TypeScript
by Ekisowei Daniel Generics play a crucial role in programming, as they enable creating type-safe...
0
2023-05-11T23:53:53
https://blog.openreplay.com/generics-in-typescript/
typescript, webdev
by [Ekisowei Daniel](https://blog.openreplay.com/authors/ekisowei-daniel) <blockquote><em>Generics play a crucial role in programming, as they enable creating type-safe functions without specifying the exact type beforehand but allowing constraints and checks on the programmer’s types. This article introduces the concept of generics, lists their advantages, and shows how to use them.</em></blockquote> [Generics](https://www.typescriptlang.org/docs/handbook/2/generics.html/) in TypeScript allow developers to write reusable and flexible code by abstracting over types. Using generics, developers can create functions, classes, and interfaces that work with any type rather than being limited to a specific type. The ability to create a component that can operate over several types rather than just one is one of the main tools in the toolbox for creating reusable elements in programming languages such as C# and Java. As a result, users can use different types while consuming these components. This article introduces the idea of Generics in [TypeScript](https://www.w3schools.com/typescript/typescript_intro.php///) and when to use them. It describes how generics can be used with functions, classes, interfaces, and types and gives examples of their use in real-world situations. These examples are clearly defined so you can follow them in your preferred integrated development environment. ## Advantages of Generics The list of advantages that generics offer in TypeScript is as follows: - We can securely store one object using generics without storing the other types. - Using generics, we can avoid having to typecast any variables or functions at the time of calling. - Generics are typically checked at compile time to ensure no problems at runtime. - Generics can help improve code performance by allowing TypeScript to optimize it for specific types of data, reducing the type-checking needed at runtime. 
- Using generics makes code more readable and understandable, making it easier for others to work with and maintain.

## Using Generics in Functions

Using generics in functions, you can create code that can handle various data types. It also makes your code more flexible and reusable, as it can be applied to different input types without requiring individual functions for each type. As a result, your code becomes easier to maintain and comprehend. Developers define a placeholder type within angle brackets, such as `<T>`, and use that placeholder type within the function to implement generics.

Here's an example of a function that returns the first element of an array in TypeScript using generics:

```typescript
function firstElement<T>(arr: T[]): T {
  return arr[0];
}
```

To use the function with different types of arrays, the specific type is passed when the function is called. For example:

```typescript
function firstElement<T>(array: T[]): T | undefined {
  return array[0];
}

const numbers = [1, 2, 3];
const firstNumber = firstElement<number>(numbers);
console.log(firstNumber); // 1

const names = ["Daniel", "Micheal", "Charlie"];
const firstName = firstElement<string>(names);
console.log(firstName); // 'Daniel'
```

In this example, the function is used with two arrays of different types, numbers (an array of numbers) and names (an array of strings). By passing the specific type when the function is called, the function can work with the appropriate data type.

## Using Generics in Classes and Interfaces

Generics can also be used in classes and interfaces. For example, a class representing a stack can be written with a generic type to support any data type.
```typescript class Stack<T> { private data: T[] = []; push(item: T) { this.data.push(item); } pop(): T | undefined { return this.data.pop(); } } let numberStack = new Stack<number>(); numberStack.push(1); numberStack.push(2); console.log(numberStack.pop()); // 2 console.log(numberStack.pop()); // 1 let stringStack = new Stack<string>(); stringStack.push("a"); stringStack.push("b"); console.log(stringStack.pop()); // 'b' console.log(stringStack.pop()); // 'a' ``` To use the class with different data types, the specific type is passed when the class is instantiated. For example: ```typescript let numbers = new Stack<number>(); numbers.push(1); numbers.push(2); console.log(numbers.pop()); // 2 console.log(numbers.pop()); // 1 let names = new Stack<string>(); names.push("Alice"); names.push("Bob"); console.log(names.pop()); // 'Bob' console.log(names.pop()); // 'Alice' ``` In this example, the class is used to create two stacks of different types, numbers (a stack of numbers) and names (a stack of strings). The class can work with the appropriate data type by passing the specific type when the class is instantiated. <div style="background-color:#efefef; border-radius:8px; padding:10px; display:block;"> <hr/> <h3><em>Session Replay for Developers</em></h3> <p><em>Uncover frustrations, understand bugs and fix slowdowns like never before with <strong><a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a></strong> — an open-source session replay suite for developers. It can be <strong>self-hosted</strong> in minutes, giving you complete control over your customer data.</em></p> <img alt="OpenReplay" style="margin-top:5px; margin-bottom:5px;" width="768" height="400" src="https://raw.githubusercontent.com/openreplay/openreplay/main/static/openreplay-git-hero.svg" class="astro-UXNKDZ4E" loading="lazy" decoding="async"> <p><em>Happy debugging! 
<a href="https://openreplay.com" target="_blank">Try using OpenReplay today.</a></em></p>
<hr/>
</div>

## Built-in Generic Type and Interfaces

TypeScript provides several built-in generic types and interfaces, such as `Array`, `Promise`, and `Map`, commonly used in JavaScript. These types are defined with a generic type and can be used with any data type.

For example, `Array` is a generic type representing an ordered collection of elements. It can be used with any data type, such as numbers or strings, by passing the specific type when the Array is created.

```typescript
let numbers = [1, 2, 3];
let names = ['Daniel', 'Micheal', 'Charlie'];
```

Another example is the Promise type, which represents a value that may not be available yet but will be at some point in the future. The Promise type is also defined with a generic type, representing the value that will be available in the future.

```typescript
let promise: Promise<string> = new Promise((resolve, reject) => {
  setTimeout(() => {
    resolve("Hello, World!");
  }, 1000);
});

promise.then((value: string) => console.log(value)); // 'Hello, World!'
```

Finally, `Map` is a generic type representing a collection of key-value pairs. The generic types for the keys and values can be different.

```typescript
let map = new Map<string, number>();
map.set("Daniel", 25);
map.set("Michael", 30);
console.log(map.get("Daniel")); // 25
```

In addition to using these built-in generic types and interfaces, you can also extend them to add additional functionality. For example, you could create a custom Stack class that extends the built-in Array type, similar in spirit to the Stack class in the previous example. This allows you to take advantage of the built-in functionality of the Array type while adding your own custom functionality.

## Conclusion

Generics in TypeScript provide an essential feature for writing reusable and flexible code.
They allow developers to abstract over types, making it possible to write functions, classes, and interfaces that work with any data type. This feature of TypeScript makes the code more readable and maintainable and reduces the amount of duplicated code.
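The classes-and-interfaces section above only shows a generic class; as a complementary sketch (the names `KeyValuePair` and `describe` are illustrative, not from the article), generics also work on interfaces and on constrained type parameters:

```typescript
// A generic interface describing a key-value pair.
interface KeyValuePair<K, V> {
  key: K;
  value: V;
}

// A generic function whose type parameter is constrained to
// anything that exposes a numeric `length` property.
function describe<T extends { length: number }>(item: T): string {
  return `length is ${item.length}`;
}

const pair: KeyValuePair<string, number> = { key: "age", value: 25 };
console.log(pair.key, pair.value); // age 25
console.log(describe("hello"));    // length is 5
console.log(describe([1, 2, 3]));  // length is 3
```

The `extends` constraint keeps the function generic while still letting the compiler verify that every argument has the property the function body relies on.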
asayerio_techblog
1,465,419
Book Summary: RESTful Web Clients - Enabling Reuse Through Hypermedia
by Mike Amundsen “RESTful Web Clients: Enabling Reuse Through Hypermedia” provides...
0
2023-05-16T11:04:40
https://fagnerbrack.com/book-summary-restful-web-clients-enabling-reuse-through-hypermedia-c81256a55102
rest, softwareengineering, programming, frontend
--- title: Book Summary: RESTful Web Clients - Enabling Reuse Through Hypermedia published: true date: 2023-05-11 22:01:38 UTC tags: rest,softwareengineering,programming,frontenddevelopment canonical_url: https://fagnerbrack.com/book-summary-restful-web-clients-enabling-reuse-through-hypermedia-c81256a55102 --- #### by Mike Amundsen “RESTful Web Clients: Enabling Reuse Through Hypermedia” provides insights into designing and building RESTful web clients that effectively consume and interact with web services. The book emphasizes the importance of leveraging hypermedia and existing web standards to create reusable and flexible clients. It talks about REST as an Architectural Style for Distributed Systems based on [Roy Fielding's Dissertation](https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm), not the way REST is used in the wild, which is RPC over HTTP using JSON. **DISCLAIMER:** The code examples are not taken from the book. ![](https://cdn-images-1.medium.com/max/1024/1*GpwwdQ51DeyLezEVBy1VHg.png) The most valuable points from the book are: #### Understanding RESTful Web Services The book starts by providing a solid understanding of RESTful web services, focusing on the concepts of resources, representations, and hypermedia. #### Hypermedia-Driven Web Clients Mike emphasizes the importance of designing web clients that are driven by hypermedia, allowing them to adapt to changes in the web service without requiring updates. For example, a web client uses hypermedia controls provided by the web service to navigate through resources and perform actions, ensuring that the client can adapt to changes in the API without breaking. 
``` // Control response for media types with +json suffix { actions: [{ name: 'prev', target: 'http://server.com/resource-a', method: 'POST' }, { name: 'next', target: 'http://server.com/resource-b', method: 'PUT' }] }; ``` #### Leveraging Web Standards The book highlights the importance of using existing web standards such as HTML, CSS, and JavaScript to build RESTful web clients, enabling reuse and reducing development effort. For example, a developer builds a web client using HTML and JavaScript, allowing it to be used across various platforms and devices without the need for additional development. ``` <!-- Navigation interception using Code on Demand --> <script> [...document.querySelectorAll('form')].forEach((action) => { action.addEventListener('submit', (event) => { event.preventDefault(); // Custom handling of navigation }); }); </script> <!-- Control response for text/html media type --> <form name="prev" method="POST" action="http://server.com/resource-a"> <button type="submit"></button> </form> <form name="next" method="POST" action="http://server.com/resource-b"> <button type="submit"></button> </form> ``` #### Web Client Patterns and Anti-Patterns The author discusses various patterns and anti-patterns for building RESTful web clients, helping developers avoid common pitfalls and build more robust and maintainable clients. For example, a developer learns to avoid tightly coupling the web client to a specific API structure, ensuring that the client remains flexible and adaptable to changes in the web service. 
```
const response = {
  actions: [{
    name: 'prev',
    target: 'http://server.com/resource-a',
    method: 'POST'
  }, {
    name: 'next',
    target: 'http://server.com/resource-b',
    method: 'PUT'
  }]
};

// This:
const query = queriable(response);
const action = {
  url: query('actions[name="prev"]->["target"]') || '', // self
  method: query('actions[name="prev"]->["method"]') || 'GET'
};
const result = await fetch(action.url, { method: action.method });

// Instead of this:
const action = {
  url: response.actions[0].target,
  method: response.actions[0].method
};
const result = await fetch(action.url, { method: action.method });
```

#### Building Custom Web Clients

The book provides guidance on building custom web clients that can work with different media types and hypermedia controls, increasing the reusability and flexibility of the clients.

For example, a developer creates a custom web client that can handle various media types through Content Negotiation, such as XML and JSON, allowing it to work with different web services without requiring significant modification.

```
// `url` is the resource being requested; other fetch options elided
const HALResponse = await fetch(url, {
  headers: { 'Accept': 'application/hal+json' }
});

const HTMLResponse = await fetch(url, {
  headers: { 'Accept': 'text/html' }
});
```

By applying the concepts and best practices presented in “RESTful Web Clients: Enabling Reuse Through Hypermedia,” developers can build more flexible, maintainable, and reusable web clients that effectively interact with RESTful web services inside and outside an organization.

Thanks for reading. If you have feedback, contact me on [Twitter](https://twitter.com/FagnerBrack), [LinkedIn](https://www.linkedin.com/in/fagnerbrack/) or [Github](http://github.com/FagnerMartinsBrack).
fagnerbrack
1,465,612
Medusa (3/4): Commerce Modules and Features
Medusa offers an impressive amount of e-commerce features all available in the medusa package or...
22,956
2023-05-12T07:01:31
https://dev.to/ntyrberg/medusa-34-commerce-modules-and-features-2da2
webdev, javascript, react, opensource
Medusa offers an impressive number of e-commerce features, all available in the medusa package or separately as independent modules. This is the third part of a four-part series where the previous parts covered the basics of Medusa and their vision and history. This part will give an outline of some of the commerce modules and what features they offer. The final part will go into detail on how the Medusa Admin works. ## Modules and Features For more technical details on how Medusa’s commerce modules and features work, you can visit the team’s Documentation at: [https://docs.medusajs.com/](https://docs.medusajs.com/) ### Cart & Checkout Comes with basic cart functionality allowing users to add, update and remove items easily. Developers can choose to integrate any third-party provider of shipping and payment options during checkout, either through existing plug-ins or by creating their own. ### Orders Customers place orders to purchase a product. Admins can manage orders, for example capture payment and fulfill items in the order. Admins are also able to edit an order by adding, removing, or updating its items, either by force or by confirmation from the customer. It is also possible to create draft orders as well as to return and swap items or replace orders with new ones. ### Multi-Warehouse Medusa recently updated its Multi-Warehouse module that allows merchants to manage inventory and store products in multiple locations within the same commerce application. Through the admin page (see next article), it is also simple to manage from which location to allocate a product to an order or where to return items. ### Customers Customers can choose to shop as guests or create an account. They are able to make purchases as guests but can also manage order history and details as account holders. Admins can segment customers into groups, allowing for easily directed marketing and discounts for specific groups. 
### Products Admins using Medusa can manage an unlimited number of product variants. Products can be managed through changes in price, inventory, product details, etc., and developers are able to implement custom product information as well. Products can also be organized into categories, allowing customers to navigate and filter while browsing. ### Gift cards Medusa handles gift cards in a versatile way. Customers can both purchase and use gift cards on the storefront. Developers are able to implement custom logic to send gift cards to specific customers as well as implement shipping profiles that specifically handle the fulfillment of the card. It is also possible to create custom gift cards using the admin page. ### Price lists Price lists are special prices applied to products based on a set of conditions. Price lists can be used to add special prices or override current prices for specific products simultaneously. Developers are able to change the default pricing logic and implement custom solutions for how customers view prices, which can speed up admin work compared to changing prices on individual products. ### Regions & Currencies Regions are handled very flexibly using Medusa. Admins are able to specify an unlimited number of custom regions. Each region can have a custom set of settings such as currency, payment gateways, tax providers, etc. Currencies can be associated with one or many regions, and developers are in full control of the price of a product per currency and per region. A customer within a specific region will automatically get products, prices, shipping options, taxes and more specified for that region. ### Taxes Taxes are closely related to regions since they often differ geographically. Medusa provides a default tax provider to calculate rates, but developers are able to integrate custom solutions as well. 
### Sales channels Medusa allows for a large number of different sales channels, such as the web storefront, mobile apps or selling directly through social media platforms. For each channel, it is possible to choose which products belong to the channel, and products can be made available on multiple channels simultaneously. Each order is associated with a given channel, allowing for easy management and follow-up. ## The mechanics of Medusa Within each module described above, Medusa utilizes a set of abstractions in its architecture. This section aims to explain the overall architecture of Medusa's commerce modules. ### Headless architecture As mentioned earlier, Medusa is a headless backend and is built in Node.js on top of Express. In the Medusa core package, you get access to all commerce modules such as orders, inventory, cart and products. ### Entities and Services The backend connects to a database that stores the e-commerce store’s data. Tables in the database are represented by *entities* built on top of TypeORM. An example would be the order entity, which represents the order table in the database. To manipulate *entities*, *services* are created as TypeScript or JavaScript classes with utility methods for retrieval and other purposes. The *services* can be accessed throughout the Medusa backend through dependency injection and act as bundled helper methods that represent a certain area of functionality in Medusa. ### Endpoints Rather than having a tightly-coupled frontend, which is the case for traditional e-commerce platforms, the Medusa backend provides *endpoints*, which are REST APIs used by frontends like storefronts or admin interfaces. Each commerce module contains a set of *endpoints* specific to the functionalities that it provides, which are available within the Medusa backend. ### Events Medusa also uses an event-driven architecture to manage events. When an action occurs, such as an order being placed, an *event* is triggered. 
To handle an *event*, Medusa connects to a *service* that implements a pub/sub model. Messages sent by the pub/sub model are received by *subscribers*. *Subscribers* are TypeScript or JavaScript classes that add their methods as handlers for specific events. These handler methods are then executed only when an *event* is triggered. Using the example of an order being placed, a *service* specific to ordering (the *publisher*) then sends a message to the warehouse (the *subscriber*), which initiates shipping, updates the inventory and notifies customers. ### Loaders and plugins The Medusa backend architecture can be tailored to suit individual requirements, with resources like *entities*, *services* and *endpoints* created without modifying the backend. Loaders are used to load both Medusa and custom resources such as plugins, which can be packaged for reuse in other Medusa backends or published for others to use. Existing plugins can also be installed into a Medusa backend. To learn more about the Medusa Admin, read the next article:
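As a footnote, the events-and-subscribers flow described above can be sketched in plain JavaScript. This is an illustration only, not Medusa's actual code: the tiny `EventBus` stands in for Medusa's `eventBusService`, and `OrderPlacedSubscriber` is a made-up example class.

```javascript
// Minimal in-memory stand-in for Medusa's eventBusService (illustrative only)
class EventBus {
  constructor() {
    this.handlers = {};
  }
  subscribe(eventName, handler) {
    (this.handlers[eventName] ??= []).push(handler);
  }
  emit(eventName, data) {
    (this.handlers[eventName] ?? []).forEach((handler) => handler(data));
  }
}

// A subscriber class registers its methods as handlers for specific events,
// mirroring how Medusa resolves subscribers via dependency injection
class OrderPlacedSubscriber {
  constructor({ eventBusService }) {
    eventBusService.subscribe("order.placed", this.handleOrderPlaced);
  }
  handleOrderPlaced = (order) => {
    // e.g. initiate shipping, update inventory, notify the customer
    shippingQueue.push(order.id);
  };
}

const shippingQueue = [];
const eventBusService = new EventBus();
new OrderPlacedSubscriber({ eventBusService });

// The publisher emits the event; only then does the handler run
eventBusService.emit("order.placed", { id: "order_123" });
console.log(shippingQueue); // → [ 'order_123' ]
```

The key property is the decoupling: the publisher never references the subscriber directly, so new handlers can be added without touching the ordering service.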
ntyrberg
1,465,681
Hexagon rotating gradient border
Hexagon rotating gradient border This CodePen was created by Temani Afif This...
20,957
2023-05-13T07:12:00
https://dev.to/jon_snow789/hexagon-rotating-gradient-border-4dh2
Hexagon rotating gradient border --- #### This CodePen was created by [Temani Afif](https://codepen.io/t_afif) {% codepen https://codepen.io/t_afif/pen/VwEXoaw %} --- >This codepen was not created by me, all rights belong to its respective owners --- ### Check Our Latest Post {% link https://dev.to/jon_snow789/glassmorphism-css-generator-k90 %} --- Thanks for Reading ❤️! Check my website [Demo coding](https://democoding.in/) for updates about my latest CSS Animation, CSS Tools, and some cool web dev tips. Let's be friends! Don't forget to subscribe to our channel : [Demo code](https://www.youtube.com/@democode) --- {% twitter 1655834579394252800 %}
jon_snow789
1,465,688
How nuvo Importer SDK 2.0 Transforms the Way You Onboard Data
Data import use cases vary across industries, from e-commerce to construction, HR, fintech, or...
0
2023-05-12T08:10:30
https://www.getnuvo.com/blog/importer-sdk-2-0-transforms-the-way-you-onboard-data
webdev
Data import use cases vary across industries, from e-commerce to construction, HR, fintech, or energy. However, dealing with messy and ever-changing customer data is a common challenge for B2B software companies. With this in mind, our AI-assisted nuvo Data Importer was built to provide our clients with a seamless, scalable, and secure data import experience covering even the most complex edge cases. And foremost, it offers the highest data privacy and security standards on the market. During the past years, we have continuously advanced our solution based on new challenges, formats, and industry-specific use cases to provide our clients with the most powerful import solution. Now, after months of research and development, we proudly announce the release of Importer SDK 2.0! This version introduces many new powerful features that extend advanced data validation and cleaning capabilities, increase customizability, and speed up implementation. Here are seven of the most notable new features: - Start the importer at any step, at any event, and with your preferred data format using the Dynamic Import feature - Transpose and de-nest data; merge and split columns; join sheets; and perform many other complex data manipulations with the dataHandler feature - Set up conditional dropdown fields with the Value-Based Dropdown feature - Parse nested JSON files with allowNestedData - Up to 95% loading time reduction during columns and option matching using our new architecture - Fully customize your importer with only a few lines of code using the Simplified Styling feature - Support multiple languages even faster with our default translations using our language property via the Simplified Translation feature Let’s dive into how these new features transform the way you onboard data. 
‍ ## Start the Importer at Any Step, at Any Event, and with Your Preferred Data Format Providing your users with the best possible data import experience inevitably comes with high flexibility requirements. The Dynamic Import feature allows you to start the importer at any step, at any event, and with your preferred data format. This enables you to cover various use cases like allowing your users to import complex file structures such as .txt and zip files or to use nuvo not only as an importer but also as a data management UI, where users can edit their existing data. Moreover, you can start the import by fetching data from any API instead of uploading a file manually. This is game-changing when you want to enable your users to migrate data from their CRM, ERP, PIM, or other applications to yours. Sounds too good to be true? Check out the documentation or reach out to us to learn how the [Dynamic Import](https://docs.getnuvo.com/sdk/dynamic-import/) feature can cover your use case. ![Dynamic Import Feature](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/okrmndzw1p3ayeeouxn1.png) ## Perform Complex Data Manipulations Dealing with customer data can be a nightmare for onboarding and engineering teams due to its varying structure, size, and format. During the past years, we have seen many wild edge cases across industries. Hence, developing a feature that can deal with even the messiest input data became a high priority – dataHandler. The dataHandler feature allows for solving complex data manipulation scenarios, such as transposing data, merging and splitting columns, joining sheets, de-nesting data, and more. Unlike our cleaning functions that iterate through every entry or access only one column at a time, the dataHandler functions (headerStep and reviewStep) work on the entire data at once. This gives you complete control over the input data directly after the upload and gives you access to modify the data after the mapping step. 
In addition, it allows you to automatically add, delete, and modify columns and rows in the data, helping you to manage different input file structures and giving greater flexibility for data manipulation. Whether you need to transform a few columns or an entire dataset, the dataHandler functions provide the flexibility and power required to do the job efficiently. Go to our [documentation](https://docs.getnuvo.com/sdk/data-handler/) for more details and some sample code sandboxes to test it out! ## Conditional Rendering of Dropdowns Dropdown or category fields are often used in data import and migration processes. But mapping and validating data against hundreds or even thousands of different dropdown options or categories can be challenging, frustrating, and highly error-prone for the user. Additionally, when the dropdown fields depend on the values of certain columns or other conditions, complexity multiplies for both the user and the engineering team setting up an importer. With our Value-Based Dropdown feature, you can now control which options are displayed in a dropdown column based on the value(s) of other columns in the same row. We achieve this by allowing you to link dropdown options with other columns by using specific operators such as AND, OR, GTE (greater than or equal to), LTE (less than or equal to), and others. By applying these operators, you can define even complex conditions that determine whether or not to display a given dropdown option. Once the logic is defined, dropdown options are automatically updated based on the values in the linked columns in the “Review Entries” step. Learn more about it in our documentation. ![Value-based Dropdown Feature](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lkyw5y3n66ac5cpr4pvg.png) ## Parsing Nested JSON Files Multi-dimensional or grouped data (also called nested data) is something a lot of companies struggle with when importing and reformatting customer data. 
Breaking up these dimensions into a two-dimensional structure that can be displayed as a table takes up significant time and effort for engineering teams and is a key challenge our clients brought to us. Our new feature allowNestedData solves this issue by allowing you to de-nest .json files based on pre-defined rules. The de-nesting process involves replacing arrays with underscores "_" and objects with periods "." to facilitate the display of data in a 2D table. Find more information in our [documentation](https://docs.getnuvo.com/sdk/settings/#allownesteddata-beta). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kmggba8cwtomhj4f7t61.png) ## Up to 95% Loading Time Reduction During Columns and Option Matching You have hopefully noticed a significant reduction in loading time during the “Match Columns” step already since the March release. For Importer SDK 2.0, we enhanced the matching module and mechanism further. With SDK 2.0, we process the column headers on the backend side by default, which reduces the matching time by up to 95% in comparison to our January version. Additionally, we have added an optional functionality (processingEngine === “node”) to reduce the mapping time even further by also processing the uploaded spreadsheet content on the backend side. Please be aware that migrating to SDK 2.0 does not automatically apply this option. By default, SDK 2.0 won't process the spreadsheet content of your users. Install the latest version and try it out with your own sample file. We'd love to hear your feedback! Furthermore, check out the new optional SDK 2.0 functionality in our documentation to speed up the mapping process even further by allowing the option mapping on the backend side. 
To speed up the implementation even further, we have significantly simplified the styling options, allowing you to fully customize and white-label the importer using only a handful of properties within the global class. Additionally, we implemented a simple way to change the UI language, which enables you to implement multi-language support significantly faster. You can apply nine different languages by only changing the language key within the settings. Of course, you can always override the default text or add additional languages by using our i18nOverrides functionality. If your language is not yet included in the language property, please reach out to us, and we will be happy to add it. Check out our [translation guide](https://docs.getnuvo.com/sdk/multi-language/) for more details. ![Simplified styling with only a handful of properties](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/micislxnkkskuza347he.png) Simplified styling with only a handful of properties And that’s a wrap! We hope these new features help you say goodbye to non-scalable import scripts and to manually cleaning or reformatting customer files. Do you want to hear more? Join nuvo’s Co-Founders Michael Zittermann (CEO) and Ben Hartig (CTO) on Tuesday, May 16 at 10:30 am CEST for a 30-minute session on the new capabilities and find out how they help you create the best possible data import experience. [Register now](https://app.livestorm.co/nuvo/how-nuvo-importer-sdk-transforms-the-way-you-onboard-data)
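As an aside, the de-nesting rules described in the allowNestedData section above (array indices joined with "_", object keys with ".") can be sketched in a few lines of JavaScript. This is a simplified illustration of the idea only, not nuvo's actual implementation — the `denest` helper name is made up.

```javascript
// Flatten nested JSON into one 2D-table row:
// array indices are joined with "_" and object keys with "."
function denest(value, prefix = "") {
  const row = {};
  if (Array.isArray(value)) {
    value.forEach((item, i) => {
      const key = prefix ? `${prefix}_${i}` : String(i);
      Object.assign(row, denest(item, key));
    });
  } else if (value !== null && typeof value === "object") {
    for (const [k, v] of Object.entries(value)) {
      const key = prefix ? `${prefix}.${k}` : k;
      Object.assign(row, denest(v, key));
    }
  } else {
    // Leaf value: emit one column under the accumulated key
    row[prefix] = value;
  }
  return row;
}

// Example: one nested record flattened into table columns
const flat = denest({ name: "Widget", tags: [{ id: 1 }, { id: 2 }] });
// flat → { name: 'Widget', 'tags_0.id': 1, 'tags_1.id': 2 }
```

Each leaf of the nested structure becomes one column, so a whole array of such records maps cleanly onto the rows of a spreadsheet-style table.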
getnuvo
1,465,691
APIs vs SDKs: Why you should always have both
What are APIs and SDKs? We explore the different use cases that each addresses, what great APIs or...
0
2023-05-22T22:18:16
https://dev.to/speakeasy/apis-vs-sdks-why-you-should-always-have-both-4ahh
api, sdk, openapi, javascript
> What are APIs and SDKs? We explore the different use cases that each addresses, what great APIs or SDKs look like, and explain why you need both. ## What’s an API?​ An API, or Application Programming Interface, allows different software applications to communicate with each other. For example, a developer in an e-commerce company might create an “orders” API that enables other services to easily retrieve orders made by a certain user. Consumers of the API may be external developers or even other developers within the same company. There are several different API technologies. Let’s look into the most popular in 2023: REST, GraphQL and gRPC - **REST (Representational State Transfer)**: REST is the most widely used approach for creating APIs, primarily due to its simplicity and compatibility with HTTP. It dictates structured access to resources via well-known CRUD (Create/Read/Update/Delete) patterns. A common pattern in modern web development is to create a front-end written in React or a similar framework, which fetches data from and communicates with a back-end server via a REST API. - **GraphQL**: GraphQL is a newer API technology that enables API consumers to request only the data they need. This reduces bandwidth required and improves performance, and is particularly suitable in situations where a REST API returns large amounts of unnecessary data. However, GraphQL is more complex to implement and maintain, and users need to have a deeper understanding of the underlying data models and relationships in order to construct the right queries. - **gRPC (Google Remote Procedure Call)**: gRPC is a high-performance, open-source framework designed for low-latency and highly-scalable communication between microservices. gRPC is strongly-typed, which helps catch errors earlier in the development process and improves reliability. However, gRPC ideally requires support for HTTP/2 and protocol buffers, which many web and mobile clients may not support natively. 
Also note that far fewer developers are familiar with gRPC than REST, which can limit adoption. For these reasons, gRPC is mainly used for internal microservice communications. In summary, REST remains the most popular API technology due to its simplicity and widespread adoption. GraphQL and gRPC are popular for specific use cases. ## Why are APIs important?​ There are several use cases for APIs: - **Leverage the Best**: Third party APIs provide developers easy access to services that would take years to build. With just a few lines of code, developers can add payment capabilities (e.g. [Stripe](https://speakeasyapi.dev/post/apis-vs-sdks-difference/www.stripe.com/)), mapping data (e.g. [Google Places API](https://developers.google.com/maps/documentation/places/web-service/overview)), or transactional email (e.g. [Resend](http://www.resend.com/)) into their application, all powered by 3rd parties. This allows developers to stay focused on the value add of their app, and outsource the rest. - **Scale Applications**: By providing a well-defined programmatic interface, APIs allow developers to build their applications more efficiently, as they can update or replace individual components without affecting the entire system. - **Collaborate Efficiently**: APIs can be used internally to facilitate collaboration between teams by standardizing communication and providing a clear contract for developers to work against. Common functionalities are developed into services and APIs so that they can be leveraged by every team – accelerating the development of new applications and reducing duplicative work. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/24h1tkqybnzh0vs5p9rg.png) ## What is an SDK?​ An SDK, or Software Development Kit, contains libraries, documentation, and other tools that aim to make it easy to integrate with an API in a specific language. 
It simplifies integrating with an API – without the developer needing to dig through docs to patch together details like how the API handles auth, query parameters, or returns response objects. The idea is that providing an intuitive and streamlined way to integrate with an API enables developers to quickly and easily access the API's functionality. Which is, after all, the goal of an API developer right? If this sounds a little academic, hang on for the code examples shortly. A typical SDK contains: - **API client**: A pre-built client that handles communication with the API, taking care of low-level details like HTTP requests, authentication, and error handling. - **Data models**: Classes or structures that represent the API's data objects, providing a strongly-typed and native representation of the data returned by an API. - **Helper functions and utilities**: These assist developers in performing common tasks related to the API, such as data serialization, pagination, or error handling. - **Documentation and code samples**: These should be written specifically in the SDK’s language. ## Why use an SDK? Why are they important?​ It’s worth diving a bit deeper on why SDKs for an API are so important. Without an SDK, developers have to manually handle the API integration process. This is time-consuming and error-prone. For example, developers need to: - Read the API documentation, and especially the expected request and response data formats. - Write code to form and send HTTP requests to the relevant API endpoints, parse the responses into the right data structures, and handle any errors that might occur. - Manage the low-level details of authentication, retries, and rate limiting. The latter often are left out in the initial stages of integrating with an API, but are incredibly important if the API is being used in critical workflows. 
Overall, this approach tends to resemble “trial and error”, which creates a high risk of users introducing bugs into the integration. API users end up frustrated and less motivated to use the API. However, with a language-idiomatic SDK, developers can avoid these common pitfalls. This results in a faster integration process, and greater API usage as API consumers are provided with a reliable, well-supported – even enjoyable! – experience. SDKs offer several advantages over the APIs alone: - **Faster development**: SDKs provide pre-built functions that help developers accomplish tasks quickly. For example, an SDK for an e-commerce API might include a pre-built function and parameters for placing an order. - **Standardized / streamlined data and type definitions**: SDKs can ensure that data returned by an API is handled in a standard and recommended manner. - **Consistency**: By offering standardized components and best practices, SDKs promote consistency, making maintenance and management much easier. This might be particularly important for internal APIs, where you don’t want every API user to create an idiosyncratic implementation. - **Breaking change mitigation**: When an API introduces breaking changes, the SDK can in some cases be updated to accommodate those changes “under-the-hood” – while maintaining a consistent interface for the developers. - **Streamlined documentation**: SDK docs focus on achieving specific outcomes and abstract away many low-level details, making them more usable, easier to understand, and ultimately more effective for developers. ## Example of integrating to an API with and without an SDK​ To highlight these differences, let’s look at an example of what integrating with an e-commerce API might look like, first without an SDK and then with one. The use case will be enabling a new customer to place an order. This requires fetching information about the product being ordered, creating a new customer, and creating the order itself. 
**First, here’s what integrating might look like without an SDK:** ```javascript const fetch = require('node-fetch'); const apiKey = 'your_api_key'; const baseUrl = 'https://api.ecommerce.com/v1'; const headers = { 'Authorization': `Bearer ${apiKey}`, 'Content-Type': 'application/json' }; const productName = 'Awesome Widget'; const customer = { firstName: 'John', lastName: 'Doe', email: 'john.doe@example.com' }; const quantity = 2; async function placeOrder(productName, customer, quantity) { try { // Step 1: Get product information const productResponse = await fetch(`${baseUrl}/products`, { headers }); if (productResponse.status !== 200) { throw new Error(`Could not fetch products. Status code: ${productResponse.status}`); } const productData = await productResponse.json(); const product = productData.products.find(p => p.name === productName); if (!product) { throw new Error(`Product '${productName}' not found.`); } // Step 2: Create a new customer const customerResponse = await fetch(`${baseUrl}/customers`, { method: 'POST', headers, body: JSON.stringify({ customer }) }); if (customerResponse.status !== 201) { throw new Error(`Could not create customer. Status code: ${customerResponse.status}`); } const customerData = await customerResponse.json(); const customerId = customerData.customer.id; // Step 3: Place the order const orderResponse = await fetch(`${baseUrl}/orders`, { method: 'POST', headers, body: JSON.stringify({ order: { customerId, items: [ { productId: product.id, quantity } ] } }) }); if (orderResponse.status !== 201) { throw new Error(`Could not place order. Status code: ${orderResponse.status}`); } console.log('Order placed successfully!'); } catch (error) { console.error(`Error: ${error.message}`); } } placeOrder(productName, customer, quantity); ``` Note that the API consumer would need to construct all this code themself. 
They would need to refer to the API documentation to figure out which APIs should be called, what the response data structures look like, which data needs to be extracted, how to handle auth, what error cases might arise and how to handle them… oof! **Now here’s the SDK version of this code:** ```javascript const { EcommerceClient } = require('ecommerce-sdk'); const apiKey = 'your_api_key'; const client = new EcommerceClient(apiKey); const productName = 'Awesome Widget'; const customer = { firstName: 'John', lastName: 'Doe', email: 'john.doe@example.com' }; const quantity = 2; async function placeOrder(productName, customer, quantity) { try { await client.placeOrder(productName, customer, quantity); console.log('Order placed successfully!'); } catch (error) { console.error(`Error: ${error.message}`); } } placeOrder(productName, customer, quantity); ``` Notice how much simpler and more concise it is. Authentication is handled automatically with the developer just needing to copy in their key. Pre-built functions mean the developer doesn’t need to parse through pages of API docs to stitch together the required calls and associated data extraction themselves. Error handling and retries are built-in. Overall, a far easier and superior experience. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u0qkti4e90wv2oii7bv0.png) ## What’s the difference between SDKs and APIs? In summary, APIs & SDKs are symbiotic. Let’s talk about coffee to draw the analogy better. You can think of APIs as the fundamental, bare metal interfaces that enable applications or services to communicate. In our analogous example, APIs are like going to a coffee shop and getting a bag of beans, a grinder, a scale, filter paper, a coffeemaker/brewer, kettle, and an instruction guide. Good luck making a delicious brew! SDKs on the other hand are critical to enabling APIs to reach their full potential, by providing a rapid, ergonomic way to access the API’s underlying functionality. 
In our coffee example, SDKs are more akin to telling a skilled barista “I’d like a latte please”. The barista does all of the work of assembling the ingredients, and you get to focus on the end result. ## API and SDK best practices​ Now we know what APIs and SDKs do, what should you keep in mind as you’re building them, to ensure they fulfill the promises we’ve outlined above? Here are some “gotchas!” to watch out for when building awesome APIs: - **Design carefully**: It can be extremely difficult to get users to change how they use an API once it’s in production. Avoiding unnecessary breaking changes, where possible, will save you many headaches and irate users later. - **Documentation**: In addition to an “API reference” that details every endpoint and response, consider creating a “usage guide” that walks users through how to use APIs in sequence to accomplish certain tasks. - **Authentication**: Creating and sending users API keys manually works fine for an MVP, but has obvious security and scalability challenges. An ideal solution is to offer a self-service experience where end-users can generate and revoke keys themselves. For more on API auth, [check out our guide](https://speakeasyapi.dev/post/api-auth-guide/). - **Troubleshooting and support**: Users will inevitably run into issues. It’s easy for members of the team to quickly get inundated with support requests. Try to provide self-service tools for troubleshooting API issues, such as logging and monitoring, and community support channels. Building great SDKs presents a different set of considerations. Keep these in mind if you want to offer a great SDK to your users: - **How stable is the underlying API?** If the API is undergoing frequent changes, it might be particularly challenging to manually keep the SDKs up-to-date and in sync with the API. - **Creation and maintenance cost**: Creating native language SDKs for all your customers’ preferred languages can be a huge hiring and skills challenge. 
Each language SDK also has to be updated every time the API changes – ideally in lockstep to avoid the SDK and API being out of sync. This is time-consuming and costly. Many companies have deprecated or scaled back their SDKs after misjudging the work required. - **Testing and validation**: Plan for thorough testing of the SDKs across different platforms and languages, including unit tests, integration tests, and end-to-end tests, to ensure the SDKs are reliable and compatible with the API. - **Documentation**: Provide clear examples and code snippets in each language to make the SDKs easy to use and understand. ## APIs, SDKs and Speakeasy​ As discussed above, APIs and SDKs are both incredibly important tools. Unfortunately however, creating SDKs for every API has been out of reach for most teams, given their cost, the skills required, and distraction from the core product roadmap. **Until now.** At Speakeasy, we believe that every API deserves an amazing SDK – one that is idiomatic, reliable, low-cost, always-in-sync, and easy to manage. And that’s exactly what you get with Speakeasy: - **Idiomatic**: we know how important it is that the SDK feels ergonomic. That’s why we built a generator from the ground up with a focus on creating idiomatic SDKs in a range of languages. - **Reliable**: our SDKs have been battle-tested in real-world scenarios through our customers. And we handle all support and troubleshooting – so you don’t have to. - **Low-cost**: until now, only the largest companies have been able to afford an engineering team to focus on SDKs. Speakeasy provides access to best-in-class SDKs for a fraction of the price of a full engineering team. - **Always-in-sync**: our SDKs rebuild after every API spec change. - **Easy to manage**: we offer a fully-featured web application so you can always understand the current status of your SDK generation. 
[Read our case study with Vessel](https://speakeasyapi.dev/post/case-study-vessel/), a unified API for CRM and sales, to learn more about us. If you have an API spec, why not try it for yourself now? Just click the button below to generate your first SDKs for free.
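One of the API best practices above, a self-service experience for generating and revoking API keys, can be sketched in a few lines. This is a hypothetical, minimal illustration: the `ApiKeyStore` class and its methods are my own invention, not any vendor's API. The key idea is that the raw key is shown to the user exactly once and only its hash is stored server-side.

```python
import hashlib
import secrets

class ApiKeyStore:
    """Minimal self-service API key manager: issue, verify, revoke."""

    def __init__(self):
        self._hashes = set()  # store only hashes, never raw keys

    def issue(self):
        # The raw key is returned to the user exactly once.
        raw = secrets.token_urlsafe(32)
        self._hashes.add(hashlib.sha256(raw.encode()).hexdigest())
        return raw

    def verify(self, raw):
        return hashlib.sha256(raw.encode()).hexdigest() in self._hashes

    def revoke(self, raw):
        self._hashes.discard(hashlib.sha256(raw.encode()).hexdigest())

store = ApiKeyStore()
key = store.issue()
print(store.verify(key))   # True: a freshly issued key is valid
store.revoke(key)
print(store.verify(key))   # False: a revoked key is rejected
```

A real service would persist the hashes in a database and scope keys per user, but the issue/verify/revoke lifecycle stays the same.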
ndimares
1,465,760
9 Pillars to Build a Rock-Solid SaaS Platform
In today’s fast-paced world, where businesses are shifting gears and treading on the digital...
0
2023-05-12T10:16:02
https://dev.to/cygnismedia/9-pillars-to-build-a-rock-solid-saas-platform-1ebm
saas, startup, devops, webdev
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z7b5ibpdk8a2amh83laz.jpg) In today’s fast-paced world, where businesses are shifting gears and treading on the digital transformation journey, Software as a Service (SaaS) platforms have become indispensable. In this article, we’ll uncover the nine key elements of a successful SaaS platform to help you navigate the entwined, uncharted paths of web application and software development. **[Read now](https://www.cygnismedia.com/blog/9-pillars-to-build-saas-platform/)**
cygnismedia
1,465,812
The Complete Guide to Selenium Automation Testing
Selenium, a robust tool designed for automating web-based applications, is an indispensable...
0
2023-05-12T11:01:14
https://dev.to/shubhankarn9/the-complete-guide-to-selenium-automation-testing-47cc
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/46t02qg3dx2lkutb65vt.jpg) Selenium, a robust tool designed for automating web-based applications, is an indispensable resource for developers, testers, and quality assurance professionals due to the wide range of functionalities it provides for software testing. In this guide, we will give you a comprehensive overview of Selenium automation testing, covering its benefits, components, test case writing, debugging, automation frameworks, and advanced topics. ## Benefits of Using Selenium for Automation Testing There are several benefits to using Selenium for software testing, making it a highly valuable tool for automating web-based applications. These benefits include: - Increased efficiency and productivity in the software testing process. - Improved accuracy and consistency of test results. - Reduced time and cost required for testing. - The ability to test across multiple platforms and browsers. - Enhanced test coverage and scalability. ## Getting Started with Selenium Before getting started with Selenium, you must meet some prerequisites: a fundamental understanding of a programming language such as Java, Python, or C#; a text editor such as Sublime Text or Notepad++, or an Integrated Development Environment (IDE) such as Eclipse or IntelliJ IDEA; and a web browser such as Google Chrome or Mozilla Firefox. Once you meet these requirements, you can proceed to set up the Selenium environment by downloading and installing the Selenium WebDriver, which is the core component of Selenium. Additionally, Selenium has other components, including Selenium IDE, Selenium Grid, and Selenium Remote Control (RC), that are essential for automation testing. 
## Writing Selenium Test Cases Selenium test cases can be written using programming languages such as Java, Python, or C#, or using Selenium IDE, a simple Integrated Development Environment (IDE) utilized for creating Selenium test cases without the need for coding. To ensure the effectiveness and efficiency of the test cases, it is important to follow best practices, such as writing clear, concise, and maintainable test cases, organizing test cases into suites for easy management, using appropriate and meaningful test case names, and implementing proper error handling and reporting mechanisms. ## Debugging and Troubleshooting Selenium Test Cases Debugging and troubleshooting are critical aspects of Selenium automation testing. Common errors that can occur while writing Selenium test cases include element not found errors, stale element reference errors, element not interactable errors, and timeout errors. To debug and troubleshoot these errors, you can utilize several techniques, such as inspecting web elements using the browser developer tools, adding wait times to ensure that elements are loaded before interaction, and using implicit and explicit waits to manage synchronization issues. ## Performance Optimization Performance optimization is another essential aspect of Selenium automation testing. To optimize the performance of Selenium test cases, you can utilize efficient locators to identify web elements, minimize the use of Thread.sleep() statements, and use appropriate wait times to avoid unnecessary delays. ## Selenium Automation Frameworks An automation framework is a set of guidelines and best practices used for designing and developing automated tests. Selenium automation frameworks provide a structure for organizing test cases, managing test data, and generating test reports. There are several types of Selenium automation frameworks, including data-driven frameworks, keyword-driven frameworks, and hybrid frameworks. 
When selecting the appropriate automation framework for your project, it is essential to consider factors such as your team's technical expertise, the size and complexity of your application, and your testing goals and objectives. ## Advanced Topics in Selenium Selenium offers numerous advanced features and functionalities that can help you take your automation testing to the next level. Some of the most common advanced topics in Selenium include integration with CI/CD pipelines, cross-browser testing with Selenium, and parallel testing with Selenium Grid. These functionalities enable you to automate your testing as part of your continuous integration and delivery process, test your web application across multiple browsers, and run your test cases in parallel across multiple machines and browsers, which ultimately reduces the time it takes to complete your test runs. ## Conclusion Selenium is a powerful tool for automating your software testing process. By following the best practices and techniques outlined in this guide, you can write effective and efficient Selenium test cases, debug and troubleshoot errors, and optimize your test cases for better performance. Testrig Technologies is an excellent resource for anyone looking to learn about [Selenium Automation Testing](https://automationtestingcompany.com/). Testrig's commitment to quality and customer satisfaction makes them a reliable partner for businesses looking to improve their testing processes and achieve their quality goals.
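The implicit and explicit waits mentioned under debugging deserve a closer look. An explicit wait is essentially a polling loop, and the pattern can be sketched in plain Python without a browser at all. The `wait_until` helper below is illustrative, not a Selenium API; Selenium's `WebDriverWait` follows the same shape, with an expected condition in place of the plain callable.

```python
import time

def wait_until(condition, timeout=10.0, poll=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    This mirrors the explicit-wait pattern: instead of a fixed sleep,
    re-check every `poll` seconds and return as soon as possible.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)

# Simulate an element that "appears" after a short delay.
start = time.monotonic()
element = wait_until(lambda: "button" if time.monotonic() - start > 0.2 else None,
                     timeout=2.0, poll=0.05)
print(element)  # "button", found well before the 2-second timeout
```

This is why explicit waits beat `Thread.sleep()`-style delays: the loop returns the moment the condition is met, rather than always paying the worst-case wait.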
shubhankarn9
1,465,821
Always experiment first
This is a quickie post! One of the things people getting into doing websites is that they mess up...
0
2023-05-12T11:19:00
https://dev.to/merri/always-experiment-first-5854
html, css, javascript, webdev
This is a quickie post! One of the mistakes people getting into building websites make is that they mess up their way of working by developing straight in production. Back in the day this often meant FTP: making changes locally on files and then copying the files into production. There is however a big problem with this way of working: the production site is optimized for production use! This means there are things like caching that get in your way. You might end up doing silly things like disabling caching in your production `.htaccess` file, or something similar. The above is solving the wrong issue. What you should instead consider is how you do your work. Instead of developing straight in production (and then having problems) you should isolate your development from production. There are multiple ways to do this. One way is to have a local development environment. However this has one downside: you may need to access it from your phone for testing things out. So, while most of your development can happen on your local machine, there are times you need another option. But how? One solution is to use services like [Codepen](https://codepen.io): instead of trying out your new stuff in production you can write the stuff you need to experiment with in isolation. This way you can even develop and debug on your phone! It is possible to connect a keyboard to your phone. Another alternative is to go for frameworks. Tools like Astro, deployed to Vercel, let you skip a lot of the issues and complexity related to development vs production, such as the caching problem. Astro provides its own local development environment, and you can also build a production static site locally and then serve it with for example `npx http-server public`. While there is some initial learning curve to Astro and Vercel, they also let you avoid getting neck deep into some other topics when you're focusing on just getting things done. So what can you learn from here? 
Always consider your ways of working before solving problems by writing more code. If you're making a new feature or learning some new cool CSS and you're still not sure if things work ok in a multitude of different browsers, experiment in isolation! It saves you time and allows fast iteration.
merri
1,465,921
Description of important clauses in SQL and Apache age query format part 2
In this blog we will discuss some of the important clauses in Apache age here. For part 1 you can...
0
2023-05-12T13:01:20
https://dev.to/talhahahae/description-of-important-clauses-in-sql-and-apache-age-query-format-part-2-5dip
apache, apacheage, postgressql, sql
In this blog we will discuss some of the important clauses in Apache AGE. For part 1 you can visit [here](https://dev.to/talhahahae/description-of-important-clauses-in-sql-and-apache-age-query-format-part-1-8j8) ## Match: The MATCH clause allows you to specify the patterns Cypher will search for in the database. Cypher: ``` SELECT * FROM cypher('graph_name', $$ MATCH (v) RETURN v $$) as (v agtype); ``` The above query returns all the vertices in the graph. ## Return: The RETURN clause specifies what the query returns, such as a node or its properties. Cypher: ``` SELECT * FROM cypher('graph_name', $$ MATCH (n {name: 'B'}) RETURN n $$) as (n agtype); ``` The above query returns node n. ## Order by: ORDER BY sorts the output based on properties of the returned items. Cypher: ``` SELECT * FROM cypher('graph_name', $$ MATCH (n) WITH n.name as name, n.age as age ORDER BY name RETURN name, age $$) as (name agtype, age agtype); ``` The query returns the nodes sorted by name. ## Limit: LIMIT constrains the number of output records. Cypher: ``` SELECT * FROM cypher('graph_name', $$ MATCH (n) RETURN n.name ORDER BY n.name LIMIT 3 $$) as (names agtype); ``` The query returns the names of the first 3 nodes. ## Create: The CREATE clause is used to create graph vertices and edges. Cypher for creating a vertex: ``` SELECT * FROM cypher('graph_name', $$ CREATE (n) $$) as (v agtype); ``` The above query returns nothing. Cypher for creating an edge between 2 nodes: ``` SELECT * FROM cypher('graph_name', $$ MATCH (a:Person), (b:Person) WHERE a.name = 'Node A' AND b.name = 'Node B' CREATE (a)-[e:RELTYPE]->(b) RETURN e $$) as (e agtype); ``` The above query returns the created edge. ## Delete: The DELETE clause is used to delete vertices and edges. You cannot delete a vertex without also deleting the edges that start or end on it: either explicitly delete those edges first, or use DETACH DELETE. 
Cypher: ``` SELECT * FROM cypher('graph_name', $$ MATCH (v:Useless) DELETE v $$) as (v agtype); ``` The above query deletes the matched vertices. ## Remove REMOVE and DELETE are completely different clauses: REMOVE is used to remove properties from vertices and edges. Cypher: ``` SELECT * FROM cypher('graph_name', $$ MATCH (andres {name: 'Andres'}) REMOVE andres.age RETURN andres $$) as (andres agtype); ``` The returned node has no age property in it. References: 1. https://age.apache.org
talhahahae
1,465,970
Datacenter Proxies: The Ultimate Solution for Collecting Big Data
Datacenter proxies have become an indispensable tool in collecting big data. Their ability to...
0
2023-05-12T14:05:59
https://dev.to/dexodata/datacenter-proxies-the-ultimate-solution-for-collecting-big-data-4hmg
proxy, datacenterproxies, dexodata, bigdata
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iyeizt1ea9sl4p1ln5yp.jpg) Datacenter proxies have become an indispensable tool in collecting big data. Their ability to extract large amounts of data from the web helps provide useful insights when analyzing a range of activities including _market research, competitive analysis, and customer segmentation_. By shielding online activities, datacenter proxies ensure safety from potential cyber threats. With a proper proxy, harvesting and processing big data can be done easily without much effort. Given their value, datacenter proxies are essential for businesses working with big data. As businesses gather and analyze larger data sets to make informed decisions, they require reliable proxies more than ever. Datacenter proxies perform better than residential IPs while scraping, thanks to _faster speeds, higher anonymity, and optimized data throughput_. If your organization frequently undertakes large-scale data scraping, datacenter proxies should be an integral part of your digital infrastructure. Datacenter proxies play a crucial role in data security, web scraping, and accessing geo-restricted content. They offer numerous benefits and can be used in a variety of applications. Consider these examples to see how you can utilize the power of datacenter proxies for various use cases: - Safeguard sensitive information - Bypass geographic limitations - Extract large volumes of information for analysis, price comparisons, sentiment analysis, or trend forecasting. As the value of data continuously increases, incorporating datacenter proxies into a range of applications has become essential. From securing data to accessing geo-restricted content to web scraping, these proxies offer versatile and varied uses. For instance, datacenter proxies can facilitate sensitive data access by providing secure locations. 
They can also be leveraged to extract key information from web pages, providing businesses with valuable insights to power their marketing initiatives. Additionally, datacenter proxies offer unrestricted browsing and content access, helping end-users harness the vast online information to great advantage. Indeed, datacenter proxies have numerous benefits and are a worthy tool for data protection and online data acquisition. _Premium datacenter proxy services distinguish themselves by offering superb performance, reliability, and security_. Superior proxies guarantee swift connection speeds and unparalleled stability, with ample scalability to meet customer needs. They must also provide robust security measures that safeguard user privacy and ensure the confidentiality of sensitive data. High-quality datacenter proxies integrate smoothly with an array of software tools, web browsers, and operating systems. Our experienced 24/7 customer support team is readily available to guide and assist clients throughout the big data collection journey. Here are the main functions of our server proxies: - Obfuscate IP addresses to enhance security by concealing the client and destination server addresses from one another - Heighten efficiency by caching commonly requested content from destination servers - Incorporate content filtering mechanisms, like blocking or restricting access to websites containing malicious or inappropriate content - Bypass network restrictions, such as administrators blocking specific websites - Log and monitor the requests made to destination servers, providing valuable oversight for tracking purposes. Datacenter proxies can provide security by hiding both the client and destination server’s IP addresses. Along with that, they might increase server performance by caching frequently-requested content. 
Additionally, server proxies can be utilized for several other purposes such as content filtering, bypassing restrictive network protocols, and monitoring and logging server activity. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z3k6q6idbxoklygc82zy.jpg) ## Boost Your Data Collection Efforts with Datacenter Proxies Datacenter proxies are powerful enough to unleash the full potential of big data analysis. In doing so, they allow businesses to gain insights into market trends, pricing dynamics, customer choices, etc. By leveraging these insights through data-driven decision-making, enterprises can maintain their competitiveness. Additionally, by collecting feedback, customer reviews, and behavioural patterns, companies can refine their products and services. _Improved customer satisfaction and loyalty will follow, paving the way for innovative solutions._ Having access to vast amounts of data can also fuel research and development efforts, leading to groundbreaking innovations that optimize operations and save costs. Proficient data collection and analysis are crucial for progress in today’s data-rich setting. _Datacenter proxies are an essential tool for unlocking big data’s potential, and propelling your business forward._ Offering unmatched speed, security, and flexibility, they establish a rock-solid foundation for data-driven decision-making that promises future growth. Unleash big data’s full potential with confidence, backed by the reliability of datacenter proxies. Harness their power to transform your business into an unstoppable force. ## Embark on Your Data-Driven Journey Now Begin your data-driven journey right now and connect with digital scientists and entrepreneurs who have harnessed the power of datacenter proxies to elevate their pursuits. As you embark on your odyssey, monitor, learn, address, and respond to all challenges that arise. Remember, with the right proxy companions, no insight is out of reach. 
If you’re inspired to delve deeper into digital realms and explore beyond, our site’s blog offers additional informative content to propel your journey.
dexodatamarketing
1,466,140
SQL Server vs MySQL: Difference, Performance, and Features
In the world of database management systems, SQL Server and MySQL are two of the most popular and...
0
2023-05-12T17:08:13
https://dev.to/devartteam/sql-server-vs-mysql-difference-performance-and-features-1kho
mysql, sql, database, dbforge
In the world of database management systems, SQL Server and MySQL are two of the most popular and widely used solutions. Both platforms offer solid features and strong performance, but which one is best for your needs? Find out here: https://blog.devart.com/mysql-vs-sql-server.html ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajitumykq3f5r0xhd28i.png)
devartteam
1,466,353
Ecalcular
A post by eudychalas
0
2023-05-12T21:01:17
https://dev.to/eudychalas/ecalcular-5eh5
codepen, programming, eudchalas
{% codepen https://codepen.io/ECHALAS/pen/BaqYEPr %}
eudychalas
1,466,560
Visualizing Data in dotnet with Polyglot Notebooks and SandDance
SandDance lets you make amazing interactive data visualizations in your Polyglot Notebooks. Here's how it works.
22,917
2023-05-13T02:55:33
https://accessibleai.dev/post/polyglot_sanddance/
datavisualization, dotnet, vscode, dataviz
--- title: "Visualizing Data in dotnet with Polyglot Notebooks and SandDance" published: true canonical_url: https://accessibleai.dev/post/polyglot_sanddance/ description: SandDance lets you make amazing interactive data visualizations in your Polyglot Notebooks. Here's how it works. tags: DataVisualization,Dotnet,VSCode,DataViz series: Polyglot Notebooks cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9mntw0kc3qi612tzuhmb.png # cover_image: https://direct_url_to_image.jpg # Use a ratio of 100:42 for best results. # published_at: 2023-05-13 02:51 +0000 --- As I've been doing more and more [dotnet development in notebooks with Polyglot Notebooks](https://accessibleai.dev/post/introducing_polyglot_notebooks/) I've found myself wanting more options to create rich data visualizations inside of VS Code the same way I could use Plotly for data visualization using Python and Jupyter Notebooks. Thankfully, [Microsoft SandDance](https://www.microsoft.com/en-us/research/project/sanddance/) exists and fills some of that gap in terms of doing rich data visualization from dotnet code. In this article I'll talk more about what SandDance is, show you how you can use it inside of a Polyglot Notebook in VS Code, and show you a simple way you can use it without needing a Polyglot Notebook. ## What is SandDance? SandDance is a [Microsoft Research project](https://www.microsoft.com/en-us/research/project/sanddance/) designed to fluidly visualize data in an interactive manner. SandDance supports different visualization types and aggregating, sorting, and colorizing data points to help the user perform complex data analysis. 
![SandDance](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2rfjnwuassnaidhud0d8.png) SandDance can be integrated into a variety of different environments from a [web application](https://microsoft.github.io/SandDance/app/) to a [Power BI integration](https://appsource.microsoft.com/en-US/product/power-bi-visuals/WA200000430), but most interesting to me is the extension to Polyglot Notebooks that allows you to directly pipe any data source you want into SandDance for analysis. {% embed https://youtu.be/QQmEtdbPXeE %} ## Loading Data in Polyglot Notebooks In order to work with SandDance, we'll need to install the `SandDance.InteractiveExtension` NuGet package. Along with that we'll install a few other extensions that make it easier to load and work with tabular data. ``` #r "nuget:SandDance.InteractiveExtension,*-*" #r "nuget:DataView.InteractiveExtension,*-*" #r "nuget:Microsoft.ML.DataView" #r "nuget:Microsoft.Data.Analysis" ``` Once these NuGet packages are downloaded and installed into the notebook you can start loading your data with a C# code cell: ```csharp using System.IO; using System.Collections.Generic; using Microsoft.Data.Analysis; using Microsoft.ML; // Load a CSV file into a DataFrame string contents = File.ReadAllText("Titanic.csv"); DataFrame ratings = DataFrame.LoadCsvFromString(contents); // Display the first 3 rows of the DataFrame below the cell ratings.Head(3) ``` This will load the entire CSV file into an interactive `DataFrame` and then display the `DataFrame` below the cell as shown below: ![DataFrame](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qf2k07mlj3bry89c2dil.png) The `Microsoft.Data.Analysis.DataFrame` class is one I want to explore at more length, but at first glance it appears to be very analogous to a Pandas DataFrame in Python. 
## Sending Data to SandDance Once you have a `DataFrame`, you can convert it to a `TabularDataResource` and then call the `ExploreWithSandDance` extension method to begin visualizing your data. ```csharp using Microsoft.DotNet.Interactive.Formatting.TabularData; TabularDataResource tabular = ratings.ToTabularDataResource(); SandDanceDataExplorer explorer = tabular.ExploreWithSandDance(); explorer.Display() ``` This will open SandDance immediately below the cell using the data source you provided. ![SandDance in a Polyglot Notebook](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vzoxsjgo5asosl8vlqbb.png) The user experience in Polyglot with SandDance currently feels a bit cramped to me and I do not know of an easy way of maximizing the SandDance output to a full window (aside from the steps in the next section) or customizing the behavior of the SandDance widget at the moment. That being said, both SandDance and Polyglot Notebooks are relatively young technologies and will likely evolve and grow over time. ## SandDance without Polyglot Notebooks If SandDance is interesting to you and you don't want to use it inside of Polyglot Notebooks, there's a VS Code extension that allows you to run SandDance for any `.csv` file you have saved to disk. First, [install the SandDance extension](https://marketplace.visualstudio.com/items?itemName=msrvida.vscode-sanddance) into VS Code. Next, right click on any `.csv` file to show the `View in SandDance` context menu option. ![View in SandDance](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vhaehbgx0x111jhmh4df.png) From there you'll be able to explore data in SandDance as if you had imported it with Polyglot Notebooks. In fact, I often prefer this way of working with SandDance because it gives you more screen space to work with and SandDance as an extension respects your VS Code theme while the version that integrates with Polyglot Notebooks does not. 
## Closing Thoughts SandDance is an amazing experience, but I don't see it meeting all of my data visualization needs at the moment. Specifically, I can see myself wanting to make small lightweight visualizations with a number of pre-configured settings and wanting those visualizations to be embedded in my notebook. This way I could make a scatter plot of a specific type and embed it in my dashboard and share it with other people for a more guided experience like I can in a Jupyter Notebook with visualization libraries like Plotly. Still, SandDance is a great tool to have in your toolbox and I am hopeful about the direction this technology is growing.
integerman
1,466,626
Cohort Retention Analysis from A-Z in Tableau
I recently learnt how to carry out cohort retention analysis in Tableau. But most of the articles I...
0
2023-05-13T04:07:48
https://dev.to/alumassy/cohort-retention-analysis-from-a-z-in-tableau-4l0a
dataanalysis, tableau, businessintelligence, tutorial
I recently learnt how to carry out cohort retention analysis in Tableau. But most of the articles I came across were just using Tableau merely as a visualisation tool for the cohort retention rate results. For example, they’d use SQL for calculation and then Tableau for visualization or use Excel for calculation and then visualise in Tableau. I found there’s a more efficient way to do all this: calculating the cohort retention rate and also visualising the results all in one place — Tableau. Isn’t that awesome? In this article, you’ll learn: - What cohort retention analysis is - Why it’s important - What data is required for cohort retention analysis & if it’s not available, how to calculate it (derived columns/calculated field) - How to create the cohort retention table - How to interpret the cohort retention rate ## Prerequisites - You should have a [Tableau public account](https://id.tableau.com/register?clientId=wcS7HwY98qdfgBREHT7Xoln7ipc75U0a) to be able to create and publicly share your visualisation after you’re done. - You should know the [basics](https://www.tableau.com/blog/getting-ready-publish-your-first-data-visualization) of Tableau. In this article, I assume you’re already familiar with the Tableau environment. I’ll focus on teaching you how to use Tableau for cohort analysis specifically. ## Understand cohort retention analysis Cohort analysis is used by businesses to understand the behaviour, patterns and trends of their customers so that they can subsequently tailor their products and services to the identified cohorts. You might ask yourself what a cohort is. A cohort is simply a group of people, in this case customers, that share common characteristics such as time and size. Therefore, cohort analysis is an analysis of several different cohorts to get a better understanding of their behaviour, patterns, and trends. There are different types of cohorts to analyze. 
They include: - Time-based cohorts - Segment-based cohorts - Size-based cohorts In this case, the cohorts you are going to create are time-based. Specifically, the cohort analysis you're going to do is retention-based. You are going to look at the time frame in which a certain group of people made their first purchase and then track the percentage of them that made subsequent purchases in future quarters. ## Get Data The dataset you are going to be using is the famous Superstore Dataset. You can access the dataset by downloading it from [Kaggle](https://www.kaggle.com/datasets/vivek468/superstore-dataset-final). Open Tableau Public and connect to the Superstore data. Go ahead and analyse the rows and columns to know what kind of data about the superstore was documented. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddhkzvs1ltfhr7yqtrik.png) Proceed to the Sheet 1 tab. This is going to be your workspace and where you’re going to subsequently create your visualisation. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hc7owqgvqdsuwd7hd37u.png) ## Data points of interest To carry out cohort analysis the following data points are required. You need: - Unique identifier. In this case, your unique identifier is the Customer ID - First purchase date. This refers to the date the customer made their first purchase from the business and became a customer. This initial date is going to come in handy in creating a cohort group. - Revenue data ## Create calculated fields ### Customers’ First purchase date (quarter) As mentioned earlier, the first purchase date is used in assigning a cohort to each customer. Since the first purchase date field is not readily available in the dataset, you are going to derive it. This is known as a calculated field. A calculated field is a numeric or date field that derives its data from the calculation of the data in other fields that are readily present in the dataset. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lyt5y27i6dn5d4qq9qp5.png) Through the ‘first purchase’ calculated field, you’re going to create quarter and year cohorts which will be assigned to customers depending on the date they made their first purchase. Earlier in this article, I defined a cohort as a group of people that share common characteristics like time. In this case, a cohort is going to be a group of customers that made their first purchase in the same quarter and same year. If you explored this Superstore dataset at the start, you might have realised that this data spans four years (2014–2017). If I decided to make month cohorts, it would make the cohort table very big and hard to analyse at just a glance. So I deemed it efficient to form cohorts based on the quarter and year in which a customer made their first purchase. What I’m trying to say is that you can create time-based cohorts based on other time parameters like day, week or month and it wouldn’t matter. The calculation to establish the quarter in which a customer made their first purchase is as below: ``` DATE({ FIXED [Customer ID] : MIN(DATETRUNC('quarter', [Order Date])) }) ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4wa2sv44t4mncpfg2r0a.png) ### Customers per first quarter This calculated field is going to be used to establish the number of unique customers that made their first purchase in each quarter. This calculation builds on the first calculated field that you've just completed. ``` { FIXED [Customers First Purchase Quarter] : COUNTD([Customer ID]) } ``` ### Retention Rate ``` COUNTD([Customer ID])/SUM([Customers per First Quarter]) ``` ## Assemble the cohort table This is about assembling the calculated fields you’ve created to form a cohort retention table. **Step 1:** Click on `Customers First Purchase` field and drag it to the rows. 
{% embed https://youtu.be/KeX1TrLM5tw %} **Step 2:** On the rows, click on the drop-down, then switch the specifications from year to quarter and from continuous to discrete. {% embed https://youtu.be/sE7Mrv7Pc3c %} **Step 3:** Click on the `Order Date` field and drag it to the columns. Click the drop-down in the columns to switch the specifications from year to quarter and from continuous to discrete. {% embed https://youtu.be/xhqHdHADhx4 %} **Step 4:** Drag the `Customers per first quarter` field you created from the measures to the dimensions. And then drag it to the rows. {% embed https://youtu.be/Mjx5RP4oXeQ %} **Step 5:** Click the `Retention Rate` field you previously created and drag it to the text tile. Click the drop-down to format number to percentage of one decimal place. {% embed https://youtu.be/ba6Orsc5qMY %} **Step 6:** Drag the `Retention Rate` field from the text tile to the colour tile. Then drag `Customer ID` field to the tooltip tile. On the customer ID tool tip, click the drop down to change from attribute to a measure of count distinct. Then finally make the values visible by clicking T on the tool bar. {% embed https://youtu.be/sXfLa8paAwU %} **_Bonus tip:_** You can further customize the look of the table by going over to the [coolors](https://coolors.co/) website to pick out some unique colours that appeal to you. ## Interpret the retention rate ### Going down the view Going down the view of the cohort table, look at the first column and second column, you see 1. different year and quarter groups (cohorts) 2. the number of customers that made their first purchase in each of those periods. ### Going across the view Going across the view (3rd column), you see the percentage of customers that continued to make purchases at the superstore 0 through 15 quarters after making their first purchase. For example, 160 customers made their first purchase in 2014 Q2. 
Of those 160 customers, 24.4% and 36.3% of them came back to make purchases in 2014 Q3 and 2014 Q4 respectively. And so forth…

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s9h9ix5ydk3qs29es8cu.png)

## Conclusion

Creating calculated fields is inevitable in carrying out cohort analysis in Tableau. And with creating calculated fields comes the need to use functions. If you are fairly new to the concept of Tableau functions, I understand there might be some knowledge gaps for you to fill. I recommend you check out this Tableau article on [functions](https://help.tableau.com/current/pro/desktop/en-us/functions.htm) to gain a deeper understanding of the subject.
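The same retention logic can be reproduced outside Tableau as a sanity check. Below is a minimal plain-Python sketch of the three calculated fields above (first-purchase quarter, cohort size, retention rate); the toy orders are invented for illustration, not the Superstore data:

```python
from collections import defaultdict
from datetime import date

def quarter(d):
    """Return (year, quarter) for a date, e.g. date(2014, 5, 20) -> (2014, 2)."""
    return d.year, (d.month - 1) // 3 + 1

def retention_table(orders):
    """orders: list of (customer_id, order_date) pairs.
    Returns {cohort: {order_quarter: retention_rate}}, where a customer's
    cohort is the quarter of their first purchase (the FIXED LOD idea)."""
    # Quarter of each customer's first purchase.
    first = {}
    for cust, d in orders:
        if cust not in first or d < first[cust]:
            first[cust] = d
    cohort_of = {cust: quarter(d) for cust, d in first.items()}

    # Distinct customers per cohort, and per (cohort, order quarter).
    cohort_size = defaultdict(set)
    active = defaultdict(set)
    for cust, d in orders:
        cohort_size[cohort_of[cust]].add(cust)
        active[(cohort_of[cust], quarter(d))].add(cust)

    # Retention rate = active customers / cohort size.
    return {
        cohort: {q: len(custs) / len(cohort_size[cohort])
                 for (c, q), custs in active.items() if c == cohort}
        for cohort in cohort_size
    }

orders = [("A", date(2014, 4, 10)), ("A", date(2014, 7, 2)),
          ("B", date(2014, 5, 20)), ("B", date(2015, 1, 15)),
          ("C", date(2014, 8, 1))]
table = retention_table(orders)
print(table[(2014, 2)])  # {(2014, 2): 1.0, (2014, 3): 0.5, (2015, 1): 0.5}
```

Both customers A and B form the 2014 Q2 cohort; only A returns in Q3 and only B in 2015 Q1, giving the 0.5 retention values.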
alumassy
1,466,686
Some Code i Have made
A post by Toonify
0
2023-05-13T06:46:49
https://dev.to/toonify/some-code-i-have-made-2475
meme, coding, pen, webdev
****
toonify
1,466,789
Answer: How to Create the event before cell change with cancel in Excel. For the active cell
answer re: How to Create the event before...
0
2023-05-13T09:27:25
https://dev.to/oscarsun72/answer-how-to-create-the-event-before-cell-change-with-cancel-in-excel-for-the-active-cell-48g
{% stackoverflow 76238510 %}
oscarsun72
1,466,809
Mind blowing journey
Warning. You must have a linux distribution to follow this...
0
2023-05-13T10:49:06
https://dev.to/grafeno30/mind-blowing-journey-blm
## Warning: you must have a Linux distribution to follow this post

**TOC:**

- Pseudocode
- Executable (binary machine code)
- Scripts - Bash script (text file)
- Assembler - Netwide Assembler (NASM)
- Programming language - C language
- Programming language - C++ language
- Programming language - Java language
- Programming language - Python

### Pseudocode

Pseudocode is a plain language description of the steps in an algorithm. It is intended for human reading.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1r5m2ph8np2650lf91o9.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/enb21axiz0bpn4oe5o3y.png)

```
Algorithm Sum
Var Sum = 0
Begin
  Read a, b
  Sum = a + b
  Write Sum
End
```

### Executable (Binary machine code)

Machine code instructions for a physical CPU.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pi0gr4d29z0ww2qw20xn.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fc1jcy7ebvjre28mawx.png)

**EXE** is a common filename extension denoting an executable file for Microsoft Windows.

**ELF**: The Executable and Linkable Format is a common standard file format for executable files, object code, shared libraries….

If you run the command **file** you will get information about the executable. Try the following: `file /usr/bin/ls` (Please, pause and try it!)

“ls” is a command, but it is also an executable program (ELF).

To display the file headers of an ELF file: `readelf -h /usr/bin/ls` (Please, pause and try it!)

**Magic numbers** is the name given to constant sequences of bytes (usually) at the beginning of files, used to mark those files as being of a particular file format. They serve a similar purpose to file extensions.

How to see inside the executable? `hexdump -C /usr/bin/ls | less` (Please, pause and try it!)

You can see what system calls are made using the command “strace”: `strace /usr/bin/ls | more` (Please, pause and try it!)
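The magic-number idea is easy to verify programmatically. Here is a short Python sketch that checks for the ELF magic bytes `\x7fELF` at the start of a file; it writes the bytes to a scratch file so it runs anywhere, but on a real system you could point `is_elf` at `/usr/bin/ls`:

```python
import os
import tempfile

def is_elf(path):
    """True if the file starts with the ELF magic number b'\\x7fELF'."""
    with open(path, "rb") as f:
        return f.read(4) == b"\x7fELF"

# Create a scratch file carrying the ELF magic bytes, then test it.
fd, fake = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"\x7fELF" + b"\x00" * 12)

print(is_elf(fake))  # True
os.remove(fake)
```

This is exactly what the **file** command does internally, only with a much larger database of magic numbers.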
### Scripts - Bash script (text file)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x4ichtgvi67w2xvvp7oa.png)

Follow the next steps:

1) `nano hello.sh`

2) Add the following code:

```
#!/bin/bash
echo "Hello World"
```

3) Set the script's executable permission by running the **chmod** command: `chmod +x hello.sh`

4) Run or execute the script using the following syntax: `./hello.sh`

You can also skip steps 3 and 4 by running the script through the interpreter directly: `bash ./hello.sh`

### Assembler - Netwide Assembler (NASM)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ru2ydtaxsmwsoxmxgq2w.png)

Let's install NASM:

```
sudo apt update
sudo apt install nasm
```

The Netwide Assembler (NASM) is an assembler for the Intel x86 architecture. It can be used to write 16-bit, 32-bit and 64-bit (x86-64) programs.

List of available registers:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a32h4xte5po6t4apa6zg.png)

We are going to use the following system calls:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/49lrdqzulzqj52bvl78x.png)
`nano hello.asm`:

```
global _start

section .text
_start:
    mov rax, 1        ; system call for write
    mov rdi, 1        ; file handle 1 is stdout
    mov rsi, message  ; address of string to output
    mov rdx, 13       ; number of bytes
    syscall           ; invoke operating system to do the write
    mov rax, 60       ; system call for exit
    xor rdi, rdi      ; exit code 0
    syscall           ; invoke operating system to exit

section .data
message: db "Hello, World", 10  ; note the newline at the end
```

Execute the following command: `nasm -felf64 hello.asm && ld hello.o && ./a.out`

From the text file called hello.asm an object file “hello.o” is created, and then the “ld” command creates an executable file “a.out” (ld is the command that links object files).

To display the file headers of the ELF file: `readelf -h ./a.out`

### Programming language - C language

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aymhq3avec4kqkspliq9.png)

Let's install the gcc compiler:

```
sudo apt install build-essential
sudo apt install gcc
gcc --version
```

gcc had around 15 million lines of code in 2019 (it is one of the biggest free programs).

Let's create a C program:

1. `nano hello.c`:

```
#include <stdio.h>

int main (void)
{
  printf ("Hello, world!\n");
  return 0;
}
```

2. `gcc -Wall hello.c -o hello2`
3. `./hello2`

### Programming language - C++ language

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vdsb3vl1l5fu005lhns3.png)

Let's create a C++ program:

1) Install the C++ compiler:

```
sudo apt install g++
g++ --version
```

2) `nano hello_cpp.cpp`:

```
#include <iostream>

int main(int argc, char** argv)
{
    std::cout << "Hello, world!"
              << std::endl;
    return 0;
}
```

3) `g++ -Wall -pedantic -o hello2 hello_cpp.cpp`
4) `./hello2`

### Programming language - Java language

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/funy0kv53khs0y4ec3du.png)

Let's create a Java program:

1) Install the JDK and JRE:

```
sudo apt install default-jdk
sudo apt install default-jre
```

2) `nano hello.java`:

```
class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
```

3) `javac hello.java`

This creates a file called HelloWorld.class (it is in bytecode, which is code that can be executed by the Java VM).

4) `file HelloWorld.class` (this command is optional, it is only for didactic purposes)

Output: HelloWorld.class: compiled Java class data, version 55.0

5) `hexdump -C HelloWorld.class` (this command is optional, it is only for didactic purposes)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u6srfnzt3fguau0vkg3b.png)

6) `java HelloWorld`

Output: Hello, World!

### Programming language - Python

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/khyn1peoaqf8ksebj4ip.png)

Don't have Python installed? Use the following script: [install_python3.sh](https://github.com/grafeno30/myscripts/blob/master/install_python3.sh)

Let's create a Python program:

1) `nano hello.py`:

```
print("Hello world!")
```

2) `python3 hello.py`

Output: Hello world!

Sources:

- [https://en.wikipedia.org/](https://en.wikipedia.org/)
- [https://www.geeksforgeeks.org/readelf-command-in-linux-with-examples/](https://www.geeksforgeeks.org/readelf-command-in-linux-with-examples/)
- [https://www.linux.org/threads/how-to-read-a-executable-file-in-linux.7976/](https://www.linux.org/threads/how-to-read-a-executable-file-in-linux.7976/)
grafeno30
1,466,872
Smooth Sailing with Docker: A Beginner's Guide to Containerization
Welcome to Docker world where containers bring magic to the world of software development and...
0
2023-05-14T09:31:52
https://dev.to/jeptoo/everything-docker-4eni
docker, beginners, cloudcomputing
Welcome to the Docker world, where containers bring magic to software development and deployment!

Why did the Docker container never ask for help? Because it couldn't find a container support group – it was too self-contained!

Now that we've shared a giggle or a laugh, let's embark on a Docker exploration. In this article you'll learn Docker fundamentals and its installation... let's dive in!

## TABLE OF CONTENTS

[Introduction](#introduction)
[Docker Features](#docker-features)
[Docker Architecture](#docker-architecture)
[Installing Docker Desktop](#installing-docker-desktop)
[Conclusion](#conclusion)

## Introduction

Docker is a container management platform used to develop, package and deploy applications automatically.

- **Container** - an instance of an image that allows developers to package an application with all the parts it needs, such as libraries and other dependencies.
- **Image** - a file with multiple layers used to execute code in a Docker container; it is built from a set of instructions used to create Docker containers.

### How Docker works

Docker uses containerization, where applications are packaged into containers that have everything they need to run (code, libraries, dependencies). Docker uses OS-level virtualization to create the containers, ensuring that each container operates as an isolated and self-contained unit.

**Docker Engine** - a client-server application that hosts containers. It has three main components:

- Server (Docker daemon): creates and manages Docker images, containers, networks and volumes.
- REST API: allows programs to interact with the daemon, issuing instructions to build, run and manage containers.
- Client: the Docker command-line interface (CLI), which allows interaction with Docker using docker commands.

## Docker Features

**Scalability**

Docker containers are lightweight and hence easily scalable.
Their portability makes it simple to manage workloads, scaling apps and services up or down as demand changes in real time.

**Swarm**

Swarm is a clustering and scheduling tool for Docker containers. Swarm uses the Docker API as its front end, so various tools can control it. It also lets us control a cluster of Docker hosts as a single virtual host. It's a self-organizing group of engines that enables pluggable backends.

**Security**

Docker saves secrets in the swarm itself. Docker containers provide a high level of isolation between different applications, preventing them from interacting with or affecting each other, hence a more secure and stable platform for running multiple apps on a single host.

**Routing Mesh**

The routing mesh enables a connection even if there is no task running on the node. It routes incoming requests for published ports on available nodes to an active container.

## Docker Architecture

In a nutshell, the client talks to the Docker daemon, which builds, runs and distributes Docker containers. The client can run on the same system as the daemon or connect to it remotely. The client and daemon interact through a REST API over a network.

![docker](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywtl0f2lu6qjobquovjs.jpeg)

**DOCKER CLIENT**

The Docker client uses commands and the REST API to communicate with the server (the Docker daemon). When you run a command in a Docker client terminal, the terminal sends it to the daemon, which receives it in the form of a REST API request.

The Docker client can communicate with more than one Docker daemon, and it uses the CLI to run commands such as `docker build`, `docker pull` and `docker run`.

**DOCKER HOST**

Provides an environment to execute and run applications. It contains the Docker daemon, containers, images, networks and storage.

**DOCKER REGISTRY**

Manages and stores Docker images. It is of two types:

- _Public Registry_ - also called Docker Hub; it can be used by everyone.
- _Private Registry_ - used to share images within an enterprise.

**DOCKER OBJECTS**

![OBJ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w9y6jpd2cllpm0a56fgi.png)

- **Docker Images** - the read-only binary templates used to create Docker containers. Images enable collaboration between developers.
- **Docker Containers** - a container holds the entire package needed to run an application. Containers need fewer resources, which is a plus. A container is a running copy of a template, while the image is the template itself.
- **Docker Networking** - provides isolation for Docker containers and links a container to one or more networks.

**Types of Docker Network**

- _Bridge_ - the default network driver for containers, used when multiple containers communicate on the same Docker host.
- _Host_ - used when we don't need network isolation between the container and the host.
- _None_ - disables all networking.
- _Overlay_ - enables containers running on different Docker hosts to communicate; it allows Swarm services to communicate with each other.
- _Macvlan_ - used when we want to assign MAC (Media Access Control) addresses to containers.

**Docker Storage**

It is used to store data in containers. Docker has the following storage options:

- _Data Volumes_ - provide the ability to create persistent storage; volumes can be named and listed, along with the containers associated with them.
- _Directory Mounts_ - mounts a host directory into a container; this is the best option.
- _Storage Plugins_ - provide the ability to connect to external storage platforms.

## Installing Docker Desktop

We can install Docker on any operating system, but Docker runs natively on Linux distributions. We will install Docker Desktop for Ubuntu Linux.

**Prerequisites**

To install Docker Desktop, ensure that you:

- Have a 64-bit version of either Ubuntu Jammy Jellyfish 22.04 (LTS) or Ubuntu Impish Indri 21.10. Docker Desktop is supported on x86_64 (or amd64) architecture.
- Meet [the system requirements](https://docs.docker.com/desktop/install/linux-install/#system-requirements)

**Steps**

1. Set up [Docker's package repository](url)
2. Download the latest [DEB package](https://desktop.docker.com/linux/main/amd64/docker-desktop-4.19.0-amd64.deb?utm_source=docker&utm_medium=webreferral&utm_campaign=docs-driven-download-linux-amd64)
3. Install the package with apt:

```
sudo apt-get update
sudo apt-get install ./docker-desktop-<version>-<arch>.deb
```

Launch Docker Desktop using the terminal:

```
systemctl --user start docker-desktop
```

or search for _Docker Desktop_ in the _Applications_ menu and open it.

Check the Docker binary versions by running:

```
docker compose version
docker --version
docker version
```

To enable Docker Desktop to start when you sign in:

```
systemctl --user enable docker-desktop
```

To stop Docker Desktop:

```
systemctl --user stop docker-desktop
```

- If you are using Windows you can install it from [here](https://docs.docker.com/desktop/install/windows-install/#:~:text=Double%2Dclick%20Docker%20Desktop%20Installer,bottom%20of%20your%20web%20browser.) or if you are using Mac you can find the Docker installation [here](https://docs.docker.com/desktop/install/mac-install/)

## Conclusion

Remember, Docker is a powerful tool with numerous advanced features and use cases. Continue learning by referring to the official Docker documentation and community resources. Embrace the world of containerization and elevate your software development and deployment workflows with Docker. Happy containerizing!
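To connect the image and container concepts above to practice, here is a minimal, hypothetical Dockerfile for a small Python app. The file names and the app itself are invented for illustration; this is a sketch, not part of the article's setup:

```dockerfile
# Each instruction below produces one read-only layer of the image.
FROM python:3.11-slim

WORKDIR /app

# Copy the dependency list first so the pip layer is cached across rebuilds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code.
COPY . .

# The command a container started from this image will run.
CMD ["python", "app.py"]
```

You would build the image with `docker build -t myapp .` and then start a container, a running copy of that template, with `docker run myapp`.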
jeptoo
1,466,893
AI Video Generation - Where are we today?
Have you ever wondered how AI is changing the world of video generation? In recent months, there...
0
2023-05-13T19:07:33
https://dev.to/killswitchh/ai-video-generation-where-are-we-today-4ceg
ai, programming, todayilearned, news
Have you ever wondered how AI is changing the world of video generation? In recent months, there have been major advances in AI video generation. This technology is still in its early stages, but it has the potential to revolutionize the way we create and consume video content.

In this blog post, we will discuss the latest advances in AI video generation and explore the potential applications of this technology. We will also provide some examples of how AI video generation is being used today. Let's read on to learn more about this exciting new technology!

## Generative AI for Images

A video is nothing but a set of arranged images displayed in rapid succession. Let's have a look at generative AI imaging before we get to videos. This is a very popular topic and it has already affected a lot of sectors, including digital art and content creation.

When it comes to AI image generation there are 4 main players:

- MidJourney
- DALL-E
- Adobe Firefly
- Stable Diffusion

These models are trained on massive datasets of images, and they can use this data to create new images that are both realistic and creative. I went ahead and gave the same prompt to all 4 models.

`PROMPT: Award-winning landscape photography`

![AI collage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j85w6qek87oxcc0yn3ua.png)

## AI for videos

It all started with [this](https://www.youtube.com/watch?v=XQr4Xklqzw8) viral video of Will Smith eating spaghetti.

![will smith](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/onuv9i1l11ntd1swch50.jpg)

ModelScope is a diffusion model from Alibaba that can generate new videos from prompts. chaindrop used ModelScope to create a video of Will Smith eating spaghetti. They first generated the video at 24 FPS, then used Flowframes to increase the FPS to 48 and slow it down to half speed.

Of course, ModelScope isn't the only game in town in the emerging field of text2video.
Runway debuted "Gen-2" and Stability AI also released an [SDK](https://stability.ai/blog/stable-animation-sdk) for videos.

![Dance v2v gif](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nn83x9kjxiq7b8g2yw9s.gif)

With the Stable Animation SDK, you can create animations in three different ways:

1. Text to animation
2. Text + image to animation
3. Video + text to animation

While this is not anything new, and a lot of community-made solutions like Deforum (https://lnkd.in/gb4eqbzg) have existed for the past few months, the ease of use of an SDK will enable a lot of creative projects.

![Human](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vl8o97rd0l4vq72jipy4.gif)

ControlNet was also released for Stable Diffusion. [ControlNet](https://github.com/lllyasviel/ControlNet) is a neural network that adds extra conditions to diffusion models to control image generation. This allows for unprecedented levels of control over the content, style, and composition of generated images.

![building gif](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zbptf9oeo24pk2sxwako.gif)

Can't wait to see what else the world has to offer!
killswitchh
1,466,964
Bose Web Scraping Tutorial
In This tutorial you will learn about the Bose framework, a framework which provides an easier and...
0
2023-05-13T14:01:28
https://dev.to/kumarchandan1991/bose-web-scraping-tutorial-3lff
webscraping, python, tutorial, webscrapingtools
![Featured](https://www.omkar.cloud/bose/assets/images/featured-0bacb14445a1b4cf367c10cd01454000.jpg)

In this tutorial you will learn about the Bose framework, a framework which provides an easier and more structured way of using Selenium for web scraping. Think of it as a Swiss Army knife for web scraping.

When using Selenium to scrape websites, there is usually a lot of boilerplate work involved, such as:

- Downloading the appropriate Chrome driver for Selenium
- Creating the driver by specifying the driver's path, which can be challenging on Windows
- Specifying the correct ChromeOptions to make Selenium undetectable by bot protection sites
- Passing profiles and user agents to Selenium
- Debugging in case of errors

The Bose framework solves all these problems and puts an end to these nuisances for all developers. It is Django for web scrapers.

We will use the Bose framework to scrape quotes.toscrape.com, a website that lists quotes from famous authors. Along the way, I will also share some great features of the Bose framework. So let's get started.

## Getting Started

First, let's download the Bose starter project by cloning the starter template:

```bash
git clone https://github.com/omkarcloud/bose-starter my-bose-project
```

Then, change into the `my-bose-project` directory, install the dependencies, and start the project:

```bash
cd my-bose-project
python -m pip install -r requirements.txt
python main.py
```

Whenever we use Selenium, we need the Chrome driver version that matches our browser. In the past you may have visited the Chrome website to download the correct driver, but with Bose, when you run the project for the first time, the correct driver is automatically downloaded into the `build/` directory.

## Scraping quotes.toscrape.com

We are going to scrape quotes.toscrape.com for quotes and their authors.
![Dashboard](https://www.omkar.cloud/bose/assets/images/dashboard-6536dc47aa2adc244e41bcbebb81a568.png)

Write the following code in src/scraper.py:

```python
from selenium.webdriver.common.by import By
from bose import BaseTask, Wait, Output

class Task(BaseTask):
    def run(self, driver):
        driver.get("https://quotes.toscrape.com/")
        els = driver.get_elements_or_none_by_selector('div.quote', Wait.SHORT)

        items = []
        for el in els:
            text = driver.get_element_text(el.find_element(By.CSS_SELECTOR, "span.text"))
            author = driver.get_element_text(el.find_element(By.CSS_SELECTOR, "small.author"))
            item = {
                "text": text,
                "author": author,
            }
            items.append(item)

        Output.write_finished(items)
```

This code defines a Python class Task that inherits from the BaseTask class. All scraping tasks must inherit from BaseTask to perform scraping.

In the run method of Task, we receive a driver object as a parameter, which is an instance of BossDriver. BossDriver extends the Selenium WebDriver to add many powerful utility methods for scraping. For example, here `get_elements_or_none_by_selector` finds all elements matching the selector div.quote and waits up to 4 seconds (Wait.SHORT) to find them. The same thing in plain Selenium would have been quite verbose.

Then, the method initializes an empty list items and iterates over the els list, which contains the div elements with the class name quote. For each element, it extracts the text of the quote and the author name using the find_element and get_element_text methods, then appends a dictionary containing the quote text and author name to the items list.

Finally, we use the Output object from Bose, which simplifies reading and writing JSON and CSV files in Selenium projects, to write the items to output/finished.json via the Output.write_finished method.
---

Now, to run the project, execute the command:

```bash
python main.py
```

You will see it start scraping quotes.toscrape.com and put the results in output/finished.json.

![Result](https://www.omkar.cloud/bose/assets/images/result-908f901c7b8961b3d84853c103a277f9.png)

Furthermore, the /tasks/1/ directory will also be generated. Web scraping can often be fraught with errors, such as incorrect selectors or pages that fail to load. When debugging with raw Selenium, you may have to sift through logs to identify the issue. Fortunately, Bose makes it simple for you to debug by storing information about each run. If you look in the tasks/1/ directory, you will notice that it contains the following files:

### `task_info.json`

It contains information about the task run, such as the duration of the run, the IP details of the task, and the user agent, window size and profile used to execute it.

![task info](https://www.omkar.cloud/bose/assets/images/task-info-1ad8d89552138e2edc900434144dfbe0.png)

### `final.png`

This is a screenshot captured before the driver was closed.

![final](https://www.omkar.cloud/bose/assets/images/final-d2ca24d2717d17576eb8233ad0cd2b10.png)

### `page.html`

This is the HTML source captured before the driver was closed. Very useful in case your selectors failed to select elements.

![Page](https://www.omkar.cloud/bose/assets/images/page-cffce10976b4bf201b49a479c2340075.png)

### `error.log`

In case your task crashed due to an exception, we also store error.log, which contains the error that caused the crash. This is very helpful for debugging.

![error log](https://www.omkar.cloud/bose/assets/images/error-log-9ebb09dca133b2d7df1ae6cfc67df909.png)

## Exception handling

In Bose, when an exception occurs in a scraping task, the browser will remain open instead of immediately closing. This is useful for debugging purposes, as it allows you to see the live browser state when the exception occurred.
For example, if we replace the code in scraper.py with the following code, which selects a non-existing selector, causing an exception to be raised, and run it:

```python
from selenium.webdriver.common.by import By
from bose import BaseTask, Wait, Output

class Task(BaseTask):
    def run(self, driver):
        driver.get("https://quotes.toscrape.com/")
        els = driver.get_elements_or_none_by_selector('div.some-not-existing-selector', Wait.SHORT)

        items = []
        for el in els:
            text = driver.get_element_text(el.find_element(By.CSS_SELECTOR, "span.text"))
            author = driver.get_element_text(el.find_element(By.CSS_SELECTOR, "small.author"))
            item = {
                "text": text,
                "author": author,
            }
            items.append(item)

        Output.write_finished(items)
```

You will notice that the browser does not close, and Bose prompts you to press enter to close the browser. This feature is very handy when you are trying to debug your code in the browser state where the exception occurred.

![error prompt](https://www.omkar.cloud/bose/assets/images/error-prompt-cdcbd3f0419dd68b5d8264c2c96661f6.png)

## Browser Configuration

Bose makes it easy to configure the Selenium driver with different options, such as:

- which profile to use
- which user agent to use
- which window size to use

You can easily configure these options using the BrowserConfig class. For example, here's how you can configure the driver to use the Chrome 106 user agent, a window size of 1280x720, and profile 1:

```python
from selenium.webdriver.common.by import By
from bose import BaseTask, BrowserConfig, UserAgent, WindowSize

class Task(BaseTask):
    browser_config = BrowserConfig(
        user_agent=UserAgent.user_agent_106,
        window_size=WindowSize.window_size_1280_720,
        profile=1,
    )

    def run(self, driver):
        driver.get("https://quotes.toscrape.com/")
```

In this example, we set the `BrowserConfig` using the `browser_config` property of the `Task` class.
## Using Undetected Driver

Bose also supports the `undetected_driver` library, which provides a robust driver to help evade detection by anti-bot services like Cloudflare. Although it is slower to start, it is much less detectable. To use it, pass the `use_undetected_driver` option to `BrowserConfig`, like so:

```python
from selenium.webdriver.common.by import By
from bose import BaseTask, BrowserConfig, UserAgent, WindowSize

class Task(BaseTask):
    browser_config = BrowserConfig(
        use_undetected_driver=True,
        user_agent=UserAgent.user_agent_106,
        window_size=WindowSize.window_size_1280_720,
        profile=1,
    )
```

## Outputting Data in Bose

Bose provides great support for outputting data as CSV, Excel, or JSON using the `Output` class. To use it, call the `write` method for the type of file you want to save. All data will be saved in the `output/` folder:

```python
from bose import Output

data = [
    {
        "text": "\u201cThe world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.\u201d",
        "author": "Albert Einstein"
    },
    {
        "text": "\u201cIt is our choices, Harry, that show what we truly are, far more than our abilities.\u201d",
        "author": "J.K. Rowling"
    }
]

Output.write_json(data, "data.json")
Output.write_csv(data, "data.csv")
Output.write_xlsx(data, "data.xlsx")
```

## Using LocalStorage

Just like modern browsers have a local storage module, Bose has incorporated the same concept in its framework. You can import the LocalStorage object from Bose to persist data across browser runs, which is extremely useful when scraping large amounts of data. The data is stored in a file named `local_storage.json` in the root directory of your project.

Here's how you can use it:

```python
from bose import LocalStorage

LocalStorage.set_item("pages", 5)
print(LocalStorage.get_item("pages"))
```

## Conclusion

In summary, Bose is an excellent framework that simplifies the boring parts of Selenium and web scraping.
We encourage you to read the reference of BossDriver at [Boss Driver](https://www.omkar.cloud/bose/docs/reference/boss-driver/), which is an extended version of Selenium that adds some great methods to help you in scraping.
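As an aside, the idea behind LocalStorage above, persisting key-value data to a JSON file between runs, can be sketched in a few lines of plain Python. This is a concept sketch only, not Bose's actual implementation, and the file name used here is made up:

```python
import json
import os

class JsonStorage:
    """Concept sketch of a JSON-file-backed key-value store."""

    def __init__(self, path="demo_storage.json"):
        self.path = path

    def _load(self):
        # Read the whole store from disk; an absent file means an empty store.
        if os.path.exists(self.path):
            with open(self.path) as f:
                return json.load(f)
        return {}

    def set_item(self, key, value):
        data = self._load()
        data[key] = value
        with open(self.path, "w") as f:
            json.dump(data, f)

    def get_item(self, key, default=None):
        return self._load().get(key, default)

storage = JsonStorage("demo_storage.json")
storage.set_item("pages", 5)
print(storage.get_item("pages"))  # 5
os.remove("demo_storage.json")
```

Because every value round-trips through JSON, anything JSON-serializable (numbers, strings, lists, dicts) survives between runs.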
kumarchandan1991
1,467,203
Maintaining a Python Package with GitHub Actions
What I built A hangman game created using Python and distributed as a Python package...
0
2023-05-20T16:43:04
https://dev.to/viniciusenari/maintaining-a-python-package-with-github-actions-2bfn
githubhack23, python, github, opensource
## What I built

A hangman game created using Python and distributed as a Python package through [PyPI](https://pypi.org/). It utilizes GitHub Actions to test and lint the code whenever there is a push or pull request to the main branch. Additionally, whenever a new release is created, the new version of the package is automatically deployed to PyPI.

### Category Submission:

Wacky Wildcards

### App Link

Package on PyPI: https://pypi.org/project/hangman-package/

### Screenshots

Demonstration of the game in GitHub Codespaces:

![Demonstration](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hl3mlq6va7jurugoip92.gif)

### Description

The game can be installed using pip:

```
pip install hangman-package
```

To play, you just need to import the package, create an instance of the Hangman class and call the `play` method.

```python
from hangman import Hangman

hangman = Hangman()
hangman.play()
```

You can pass a list of words to the Hangman class to use custom words. By default, the words are a list of programming language names.

```python
hangman = Hangman(words=['apple', 'banana', 'orange', 'melon'])
```

### Link to Source Code

GitHub repository: https://github.com/viniciusenari/hangman-package

### Permissive License

MIT License

## Background

(What made you decide to build this particular app? What inspired you?)

I wanted to build something with GitHub Actions to learn the basics of how it works. Since I am most comfortable working with Python, I looked for workflow templates available for Python and found two that could be helpful in maintaining a Python package.

![Workflow templates](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ng385ksaf8fs82h1tu1f.png)

I wasn't sure what to build for a package, so I ended up going with the first idea that came to mind: a hangman game. While it's not an original idea, it served its purpose of allowing me to utilize and learn more about GitHub Actions.
### How I built it

(How did you utilize GitHub Actions or GitHub Codespaces? Did you learn something new along the way? Pick up a new skill?)

I utilized two GitHub Actions workflows, one for [testing and linting](https://github.com/viniciusenari/hangman-package/blob/main/.github/workflows/python-package.yml) and another one for [publishing my package](https://github.com/viniciusenari/hangman-package/blob/main/.github/workflows/python-publish.yml).

The workflow for testing and linting is triggered whenever there is a push or a pull request to the main branch. It utilizes the pytest library to run tests and flake8 to lint, ensuring that the code follows conventions and works correctly. It runs against multiple Python versions.

The workflow for publishing is triggered when a new release is created. It utilizes the build command to create a distributable version of the package and publishes it to PyPI.

Overall, I learned how to utilize these workflows to automate important tasks in my development process, making it easier to maintain code quality and distribute my package.

In addition to GitHub Actions, I also utilized GitHub Codespaces to experiment with my package, which let me test it without relying on my local development environment.

### Additional Resources/Info

https://en.wikipedia.org/wiki/Hangman_(game)
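For readers who haven't seen one, a test-and-lint workflow of the kind described here looks roughly like the sketch below. This is a generic example, not the repository's actual file; the Python versions and branch name are assumptions:

```yaml
name: Test and lint

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Run the same job once per Python version.
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      # Lint with flake8, then run the test suite with pytest.
      - run: pip install flake8 pytest
      - run: flake8 .
      - run: pytest
```

The publish workflow follows the same shape but fires `on: release` and ends with a build-and-upload step instead of the lint and test steps.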
viniciusenari
1,467,210
An Overview of Node.js Build Tools and Task Runners
Node.js Build Tools and Task Runners are an essential part of any modern web development process....
0
2023-05-13T21:01:41
https://dev.to/saint_vandora/an-overview-of-nodejs-build-tools-and-task-runners-2ghm
webdev, javascript, node, tutorial
Node.js Build Tools and Task Runners are an essential part of any modern web development process. They help developers automate repetitive tasks, streamline the build process, and ultimately save time and money. In this article, we'll provide an overview of the most popular Node.js build tools and task runners and discuss how they can benefit your development process. ## What are Node.js Build Tools and Task Runners? Build tools and task runners are tools that help automate various tasks in the development process. They can help with tasks such as compiling code, optimizing assets, running tests, and deploying code. They are especially useful in large-scale projects, where there are many moving parts and many developers working on the same codebase. Node.js has several build tools and task runners that can be used to streamline the development process. Some of the most popular ones are Grunt, Gulp, and Webpack. Let's take a closer look at each of these tools. - **Grunt** Grunt is one of the oldest Node.js build tools and task runners. It was released in 2012 and has been a popular choice for developers ever since. Grunt has a plugin-based architecture, which means that developers can easily extend its functionality with plugins. One of the biggest advantages of Grunt is its ease of use. It has a simple configuration file that developers can use to specify the tasks they want to run. Grunt also has a large community of developers, which means that there are many plugins available to help with a wide range of tasks. - **Gulp** Gulp is another popular Node.js build tool and task runner. It was released in 2014 and has quickly gained popularity among developers. Like Grunt, Gulp has a plugin-based architecture and can be extended with plugins. One of the biggest advantages of Gulp is its performance. Gulp uses streams to pass data between tasks, which means that it can process files more quickly than Grunt. Gulp also has a simple configuration file and is easy to use. 
- **Webpack** Webpack is a powerful build tool and task runner that is often used in modern web development. It is especially useful for projects that use frameworks like React or Angular. Webpack can be used to bundle and optimize assets, as well as to compile code. One of the biggest advantages of Webpack is its flexibility. It can be configured to work with a wide range of project types and can handle complex dependencies with ease. Webpack also has a large community of developers and is constantly being updated with new features. ## Conclusion Node.js build tools and task runners are essential for modern web development. They help automate repetitive tasks, streamline the build process, and ultimately save time and money. Grunt, Gulp, and Webpack are some of the most popular tools available and each has its own strengths and weaknesses. When choosing a build tool or task runner, it's important to consider the needs of your project and the skills of your development team. With the right tool, you can make your development process more efficient and enjoyable for everyone involved. Thanks for reading... **Happy Coding!**
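As a footnote to the tools above: at their core, all three revolve around the same idea of registering named tasks and running them on demand. Here is a toy, dependency-free sketch of that idea in plain Node.js (illustrative only — the names are made up and this is not the real Grunt or Gulp API):

```javascript
// Toy task runner: register named tasks, then run them in order.
// Real tools like Grunt and Gulp layer plugins, file watching,
// and streams on top of this basic idea.
const tasks = new Map();

function task(name, fn) {
  tasks.set(name, fn);
}

function run(...names) {
  for (const name of names) {
    const fn = tasks.get(name);
    if (!fn) throw new Error(`Unknown task: ${name}`);
    console.log(`Running "${name}"...`);
    fn();
  }
}

// Example tasks, analogous to lint/build steps in a gulpfile
task('lint', () => console.log('linting sources'));
task('build', () => console.log('bundling assets'));

run('lint', 'build');
```

Seeing the pattern stripped down like this makes the tools easier to compare: they differ mainly in how tasks are composed (config objects vs. streams vs. dependency graphs), not in the core idea.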
saint_vandora
1,467,250
My Journey as a Beginner in Full Stack Development
As someone who is new to the world of full stack development, I recently completed the Phase 1 Full...
0
2023-05-13T21:49:18
https://dev.to/binil_tz/my-journey-as-a-beginner-in-full-stack-development-2llm
development, beginners, github, javascript
As someone who is new to the world of full stack development, I recently completed the Phase 1 Full Stack Development Course and wanted to share my experience. This course covered a wide range of topics, from front-end development to back-end development, and provided me with a solid foundation for building web applications. _What I Learned_ Throughout the course, I learned the following key concepts and technologies: _HTML, CSS, JSON, and JavaScript_ These are the fundamental building blocks of web development and are essential for creating dynamic and interactive web pages. HTML is used to structure content on the page, while CSS is used to style the content and give it visual appeal. JavaScript is used to add interactivity and functionality to the web page, such as responding to user input and manipulating the content on the page. JSON is a lightweight data interchange format that is commonly used to send and receive data between client and server. During the course, I learned how to create basic HTML and CSS layouts, how to use JSON to exchange data between client and server, and how to use JavaScript to add interactivity and functionality to my web pages. I also learned about best practices for organizing and structuring my code, which helped me write more maintainable and scalable code. _Git and Github_ Git is a popular version control system that allows developers to track changes to their code over time, collaborate with other developers, and maintain different versions of their code. Github is a web-based platform that provides hosting for Git repositories and offers a variety of collaboration and project management tools. During the course, I learned how to use Git to track changes to my code, create branches, merge changes, and collaborate with other developers. I also learned how to use Github to host my code, collaborate with other developers, and manage project tasks and issues. 
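To make the JSON part above concrete, here is a tiny generic example (not from the course material) of how JavaScript moves between objects and the JSON text exchanged with a server:

```javascript
// Serialize a JavaScript object into a JSON string —
// the form data takes when sent between client and server
const student = { name: 'Ada', phase: 1, topics: ['HTML', 'CSS', 'JavaScript'] };
const payload = JSON.stringify(student);

// Parse the JSON string back into a usable object on the other side
const parsed = JSON.parse(payload);
console.log(parsed.name);      // "Ada"
console.log(parsed.topics[0]); // "HTML"
```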
_Slack, Piazza, and Canvas_ Slack, Piazza, and Canvas are communication and collaboration tools commonly used in online courses and in the workplace. Slack is a messaging and collaboration platform that allows teams to communicate and collaborate in real time. Piazza is a Q&A platform that allows students to ask and answer questions related to course material. Canvas is an online learning management system that provides course materials, assignments, and grades to students. During the course, I learned how to use these tools to communicate with my classmates and instructors, ask and answer questions related to course material, and access course materials and assignments. _Conclusion_ Overall, the Phase 1 Full Stack Development Course provided me with a solid foundation in front-end and back-end development, as well as important tools for collaboration and project management. I look forward to building on these skills in future courses and projects and to sharing my knowledge.
binil_tz
1,467,472
React: Abstract Design Pattern-DRY & Single Shared Responsibility(Part-2)
React: Abstract Design Pattern-DRY &amp; Single Shared Responsibility(Part-2) ...
0
2023-05-14T06:19:26
https://javascript.plainenglish.io/react-abstract-design-pattern-dry-single-shared-responsibility-9fbef42a6e56
react, webdev, javascript, design
--- title: React: Abstract Design Pattern-DRY & Single Shared Responsibility(Part-2) published: true date: 2023-05-14 05:54:34 UTC tags: reactjs,webdevelopment,javascript,design canonical_url: https://javascript.plainenglish.io/react-abstract-design-pattern-dry-single-shared-responsibility-9fbef42a6e56 --- ### React: Abstract Design Pattern-DRY & Single Shared Responsibility(Part-2) ### Understanding the Abstract Method pattern in React and exploring how it can be implemented in your applications. ![img](https://cdn-images-1.medium.com/max/1024/1*KGprQ-67EafNQzDbPEEnjw.jpeg) This is in continuation with the last article on **React — Abstract Design pattern**. This article is basically on the potential areas where performance issues could arise.[Link to last article](https://vivekdogra02.medium.com/react-abstract-design-pattern-dry-single-shared-responsibility-part-1-c63dfac5eb8c) #### In continuation with the last post code, there are a few potential areas where performance issues could arise, 1. **Rendering large datasets** : If the **transactions** array contains a large number of items, rendering them all at once can impact performance. To mitigate this, consider implementing pagination or virtualization techniques to load and render only the visible portion of the dataset. 2. **Dynamic function creation** : The **getFormatAmount** function dynamically creates the **formatAmount** function based on the transaction type. While this approach provides flexibility, it can impact performance if the function creation process becomes complex or computationally expensive. Ensure that the function creation logic remains simple and efficient to avoid any performance bottlenecks. 3. **Complex transaction processing** : If the processing or manipulation of transaction data within the **TransactionItem** component becomes computationally intensive, it may lead to performance issues. Analyze the operations performed on each transaction and optimize them as necessary. 
Consider leveraging memoization or caching techniques to avoid unnecessary calculations or redundant operations. 4. **Inefficient rendering updates:** If there are frequent updates to the **transactions** array or individual transactions, it can trigger unnecessary re-renders. To optimize this, ensure that the **transactions** array and its items are immutably updated to prevent unintended re-renders. Additionally, consider implementing more granular component updates using React's **shouldComponentUpdate** or **React.memo** to prevent unnecessary rendering of components that have not changed. It’s important to profile and analyze the performance of your application using tools like **React DevTools, Chrome DevTools, or other performance monitoring tools**. By identifying and addressing specific performance bottlenecks, you can optimize the code and improve the overall performance of your financial app. #### To avoid potential performance issues in the provided code, here are some suggestions: 1. **Implement pagination or virtualization:** If the **transactions** array contains a large number of items, consider implementing pagination or virtualization techniques. Instead of rendering all transactions at once, load and render only a subset of transactions based on the current viewport or user interaction. This approach improves initial load time and reduces rendering overhead. 2. **Optimize the dynamic function creation** : The **getFormatAmount** function dynamically creates the **formatAmount** function based on the transaction type. To optimize performance, ensure that the function creation logic remains lightweight and efficient. Avoid complex computations or heavy operations within the function creation process. If possible, precompute or memoize the format functions to avoid repetitive calculations. 3. **Streamline transaction processing:** Analyze the operations performed on each transaction within the **TransactionItem** component. 
Look for opportunities to optimize data processing and manipulation. Use efficient algorithms and data structures to minimize computational overhead. Consider memoization techniques to cache expensive calculations or avoid redundant operations. 4. **Use immutability and granular updates** : Ensure that the **transactions** array and its items are updated in an immutable manner. Avoid mutating the original array directly, as it can lead to unnecessary re-renders. Instead, create new arrays or objects when making updates. Additionally, utilize React's **shouldComponentUpdate** lifecycle method or the **React.memo** higher-order component to prevent unnecessary re-renders of components that have not changed. 5. **Profile and optimize** : Utilize **performance profiling tools** such as React DevTools or Chrome DevTools to identify performance bottlenecks. Measure the rendering times, identify components with excessive re-renders, and analyze the performance of critical code sections. Optimize the identified areas by refactoring, simplifying computations, or implementing caching strategies. By implementing these suggestions, you can improve the performance of your application, ensuring efficient rendering and minimizing unnecessary computations. **Regular performance testing and profiling** will help you identify specific areas for optimization and ensure your financial app performs optimally. 
**Here’s an updated version of the code with optimizations to address the performance concerns:**

```
import React, { useMemo } from 'react';

const FinancialApp = ({ transactions }) => {
  // Pagination or virtualization logic here;
  // as a placeholder, fall back to the full list
  const visibleTransactions = transactions;

  const getFormatAmount = useMemo(() => {
    const formatAmount = (amount) => `$${amount}`;
    return (type) => {
      switch (type) {
        case 'expense':
          return (amount) => `- $${amount}`;
        case 'income':
          return (amount) => `+ $${amount}`;
        case 'transfer':
          return (amount) => `$${amount}`;
        default:
          return formatAmount;
      }
    };
  }, []);

  return (
    <div>
      {/* Rendering logic with pagination or virtualization */}
      {visibleTransactions.map((transaction) => (
        <div key={transaction.id}>
          <TransactionItem
            transaction={transaction}
            formatAmount={getFormatAmount(transaction.type)}
          />
        </div>
      ))}
    </div>
  );
};

const TransactionItem = ({ transaction, formatAmount }) => {
  // Transaction item rendering logic here
  return (
    <div>
      <span>{transaction.description}</span>
      <span>{formatAmount(transaction.amount)}</span>
    </div>
  );
};
```

**In the updated code:**

1. The **getFormatAmount** function is **memoized** using the **useMemo** hook. This ensures that the function is only created once and doesn't get re-computed on subsequent renders unless the dependencies change. It improves performance by avoiding unnecessary function re-creation.
2. **The rendering logic includes placeholders for pagination or virtualization techniques.** The `visibleTransactions` variable currently falls back to the full array; based on your specific requirements, you can replace it with logic that renders only a subset of transactions based on the current viewport or user interactions. This approach improves performance by reducing the number of rendered items.
3. The **TransactionItem** component receives the **formatAmount** function as a prop. This allows the component to access the appropriate format function without dynamically creating it on each render. It reduces redundancy and optimizes rendering by avoiding unnecessary function calls. 
Remember to adapt the code to your specific needs, including the implementation of **pagination or virtualization** logic. These optimizations provide a starting point for improving performance, but the exact implementation will depend on your application’s requirements and complexity. ### Here are some additional customization options for the Pagination component, each with its own code example: 1. **Custom Styling:** ``` const Pagination = ({ currentPage, totalPages, onPageChange }) => { const pages = [...Array(totalPages).keys()].map((page) => page + 1); return ( <div className="pagination-container"> {pages.map((page) => ( <button key={page} onClick={() => onPageChange(page)} className={currentPage === page ? 'active' : ''} > {page} </button> ))} </div> ); }; ``` In this example, we add a custom CSS class 'pagination-container' to the parent div element. You can then define the styling for the pagination component using CSS to match your desired look and feel. **2. Previous and Next Buttons:** ``` const Pagination = ({ currentPage, totalPages, onPageChange }) => { const pages = [...Array(totalPages).keys()].map((page) => page + 1); return ( <div> <button onClick={() => onPageChange(currentPage - 1)} disabled={currentPage === 1}> Previous </button> {pages.map((page) => ( <button key={page} onClick={() => onPageChange(page)} className={currentPage === page ? 'active' : ''} > {page} </button> ))} <button onClick={() => onPageChange(currentPage + 1)} disabled={currentPage === totalPages}> Next </button> </div> ); }; ``` In this example, we add previous and next buttons to allow the user to navigate to the previous and next pages, respectively. The buttons are disabled when the user is on the first or last page to prevent unnecessary navigation. **3. 
Compact View:** ``` const Pagination = ({ currentPage, totalPages, onPageChange }) => { const pages = [...Array(totalPages).keys()].map((page) => page + 1); return ( <div> {pages.map((page) => ( <button key={page} onClick={() => onPageChange(page)} className={currentPage === page ? 'active' : ''} > {page} </button> ))} <span>Page {currentPage} of {totalPages}</span> </div> ); }; ``` In this example, we remove the previous and next buttons and instead display the current page number and total number of pages. This provides a more compact view of the pagination component. **4. Custom Labels:** ``` const Pagination = ({ currentPage, totalPages, onPageChange }) => { const pages = [...Array(totalPages).keys()].map((page) => page + 1); return ( <div> {pages.map((page) => ( <button key={page} onClick={() => onPageChange(page)} className={currentPage === page ? 'active' : ''} > {page} </button> ))} <button onClick={() => onPageChange(currentPage + 1)} disabled={currentPage === totalPages}> Next Page </button> </div> ); }; ``` In this example, we customize the label of the next button to display “Next Page” instead of just “Next”. This provides a more descriptive and user-friendly label for navigation. **5. Page Size Selection:** ``` const Pagination = ({ currentPage, totalPages, onPageChange, pageSizeOptions, pageSize, onPageSizeChange }) => { const pages = [...Array(totalPages).keys()].map((page) => page + 1); return ( <div> {pages.map((page) => ( <button key={page} onClick={() => onPageChange(page)} className={currentPage === page ? 'active' : ''} > {page} </button> ))} <select value={pageSize} onChange={(e) => onPageSizeChange(e.target.value)}> {pageSizeOptions.map((option) => ( <option key={option} value={option}> {option} </option> ))} </select> </div> ); }; ``` In this example, we introduce a page size selection feature. It includes a dropdown select element where the user can choose the number of items to display per page. 
The pageSizeOptions array contains the available page size options, and the pageSize state represents the currently selected page size. The onPageSizeChange function is called when the user selects a new page size.

**6. Custom Page Range:**

```
const Pagination = ({ currentPage, totalPages, onPageChange, pageRange }) => {
  // Clamp the window so it never runs off either end of the page list
  const startPage = Math.max(1, currentPage - Math.floor(pageRange / 2));
  const endPage = Math.min(totalPages, currentPage + Math.floor(pageRange / 2));
  // Page numbers are 1-based, slice indices are 0-based
  const pages = [...Array(totalPages).keys()]
    .map((page) => page + 1)
    .slice(startPage - 1, endPage);

  return (
    <div>
      {pages.map((page) => (
        <button
          key={page}
          onClick={() => onPageChange(page)}
          className={currentPage === page ? 'active' : ''}
        >
          {page}
        </button>
      ))}
    </div>
  );
};
```

In this example, we introduce a custom page range feature. The pageRange represents the number of consecutive page buttons to display around the current page. The startPage and endPage variables calculate the range of pages to be shown based on the current page and the page range, clamped so the window stays within 1 to totalPages. The pages array is then sliced (adjusting for 0-based slice indices) to obtain the desired range of page numbers.

Feel free to mix and match these customization options or come up with your own variations based on your application's specific needs. The goal is to tailor the Pagination component to fit your desired functionality and design.

**By leveraging the abstract method pattern in React,** we can build a robust and modular financial app that effectively manages various types of transactions and ensures clean, reusable code.

### Conclusion:

By implementing the Abstract Method pattern in React using a functional approach, we can achieve code reuse, remove redundancy, and avoid code smells. The abstract component provides a common structure and behavior, while child components extend it to define specific rendering and logic. This approach promotes a cleaner and more maintainable codebase, making it easier to scale and modify our applications. 
Remember, the Abstract Method pattern is just one of many patterns that can be leveraged in React development. It's important to evaluate its suitability based on the specific needs and complexity of your application. By applying design patterns thoughtfully, we can enhance code quality and build better applications.

**I hope you liked it. By leveraging this pattern,** you can build applications that are easier to manage and evolve over time. Understanding and applying design patterns in React empowers developers to write clean, efficient, and robust code. Start incorporating the Abstract Design pattern into your React projects and experience the benefits firsthand.

Happy coding!

### Thanks for reading

- 👏 Please clap for the story and follow me 👉
- 📰 [View more content](https://medium.com/@vivekdogra02)
- 🔔 Follow me: [LinkedIn](https://www.linkedin.com/in/vivek-dogra-7404a530/) | [Twitter](https://twitter.com/VivekDo07905087) | [Github](https://github.com/vivekdogra02)

* * *
vivekdogra02
1,467,476
How to create custom views programmatically
Hello wonderful community 👋. As a new iOS developer I'm starting to learn how to create views...
0
2023-05-14T06:31:13
https://dev.to/msa_128/how-to-create-custom-views-programmatically-2cfm
ios, beginners, native, uikit
Hello wonderful community 👋. As a new iOS developer I'm starting to learn how to create views programmatically, and it has been tricky coming from Storyboards 😅. In this tutorial, I'll guide you step by step through programmatically creating views in iOS. It's an opportunity for me to share what I've learned, with the hope of helping others who are also exploring this area. Let's start coding! 😀

## Starting a new Xcode project

> I'm using Xcode version 14.3 and targeting iOS 16

1. Open Xcode and click on New Project; we will get this window:

![new project window](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vp8go9y797myrjl7coeq.png)

Here we are going to select the simplest template available, iOS > App

2. Now let's fill in some basic information:

- Product Name: the name of our app.
- Team: If you have an Apple Developer account, you should select it here; otherwise, leaving it as None is fine (it will limit you to testing your app in the simulator only).
- Interface: Since we are doing an example with `UIKit`, we need to select Storyboard (don't worry, we are not actually going to use it).
- Language: Select Swift.

The rest of the options can be unchecked; they won't be used in this simple example.

![app basic init](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/19z58mvrq97svs4ox7pu.png)

3. Click on Next, select a location on your computer to save the project, and finally click on Create. We now have a simple app ready for us. 🎉

![basic project template](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9zpwdyvf473g0kpnxnta.png)

## Deleting storyboard references from our project

Now it's time to delete the storyboard file and the configuration associated with it. In this simple project we have two storyboard files, `Main` and `LaunchScreen`. We only need to delete `Main`, because that is the one loaded when our app starts, and we want to provide our own UI using code. 
![storyboard files](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82fl7grirpnadjgv5ol7.png)

1. Delete `Main` from the project; when prompted, click Move to Trash.

![deleting main](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yb8e0whtaxksrcexfaor.png)

2. Delete the references to this storyboard: open `Info.plist` and delete the value called `Storyboard Name` by clicking the minus icon.

![info deleting storyboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/47osbt6h6ytgsrc7vh7s.png)

3. Delete the storyboard reference in the app target. This is important because we no longer have the storyboard file, so if we tried to compile we would get an error about the missing reference.

![target storyboard delete reference](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z0psfkflntunkkzp29l6.png)

Now run your app and you'll get a nice black screen; this means you deleted the storyboard correctly!

![simulator black screen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/id8wb2ugtlyvokmu6jjr.png)

## Configuring the root window programmatically

Now you have an empty screen: nothing was loaded, and that's a problem. The storyboard used to do all of this configuration automatically for you. Now it's your responsibility to configure the screen that will be shown when your app starts.

1. Open `SceneDelegate.swift` and configure the root window of your app. Locate the method called `func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions)`. Once you've found it, replace the method's body with the following:

```swift
guard let windowScene = (scene as? UIWindowScene) else { return }
let window = UIWindow(windowScene: windowScene)
let viewController = ViewController()
window.rootViewController = viewController
self.window = window
window.makeKeyAndVisible()
```

This code does several things:

- It tries to create a `windowScene` from the passed `scene` object; this object represents the scene of our app.
- It creates a new `window`; this object will contain your app's user interface.
- It creates a `viewController`; this object is responsible for controlling your view objects. This class is already in your template (`ViewController.swift`), so there is nothing special about its name.
- It assigns the new ViewController as the root view controller, i.e., the first view controller used to start our app's UI.
- It assigns the newly created window to our class property, `self.window = window`.
- It makes the window visible on the screen, `window.makeKeyAndVisible()`.

After that configuration your app interface is back to life. If you run it at this point you'll still get a black screen; that's OK, because we haven't configured anything on our View objects yet.

## Creating our simple interface programmatically

Now it's time to add some code to make our interface show something on the screen!

### ViewController life cycle

The `UIViewController` has a specific life cycle, i.e., certain predefined methods are called at specific moments of the object's life. Since we are creating our interface programmatically, we need to understand two methods of this life cycle.

- `func loadView()`: This method is used to load our custom views and do some configuration on them before they appear on screen.
- `func viewDidLoad()`: This method is called once the ViewController is loaded into memory; here you can add more customization and configuration to your views.

As you can guess, the object first runs `loadView` and then `viewDidLoad`. 
### Creating an empty `UIView`

Each `ViewController` creates a default empty View, but we want our own custom one, so let's start by doing that.

1. Create a new file for our custom view, let's call it `CustomView.swift`, and add the following code inside:

```swift
// CustomView.swift
import UIKit

final class CustomView: UIView {

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .yellow
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

Our view inherits from `UIView`, so we need to define at least these two constructors (init methods). The one we need to focus on is the first one; here we are adding a background color.

2. Create the custom view object and show it on the screen. To achieve this, let's open `ViewController.swift` and, inside the class, create a new object of our view and assign it as the new view in `loadView`. You should end up with code similar to this:

```swift
// ViewController.swift
import UIKit

class ViewController: UIViewController {

    lazy var customView = CustomView()

    override func loadView() {
        // we are creating a class property because we may have delegates
        // assign your delegates here, before view
        view = customView
    }

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}
```

One important point to mention: the view property is sized by default to fill the entire phone screen, so we don't need to assign any constraints to it. The `CustomView` will take up the whole screen.

Now run your app and you'll see a yellow screen.

![yellow view](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zxcloqmrsh5zfnrmgbi.png)

At this point, you should have the following files in your project:

![current files](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9335lsbq7rwnf22lrh37.png)

Great, now you have loaded an empty custom view onto your screen!

## Adding a simple label on the empty View

To finish this tutorial, let's add a simple label using Auto Layout. 
1. Create a `UILabel` property on the `CustomView` class and give it some default values like a title and color. You don't need to provide any dimensions; we will do this with Auto Layout. Your `CustomView` class should look something like this:

```swift
// CustomView.swift
import UIKit

final class CustomView: UIView {

    // 1. Creating the new element
    lazy var label: UILabel = { // internal label, not the same as the external label
        let label = UILabel()
        label.text = "Hello World"
        label.textAlignment = .center
        label.textColor = .white
        label.backgroundColor = .black
        return label
    }()

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .yellow
        // 2. Adding the new element into the view
        addSubview(label)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

If you run your code, the new label won't appear on the screen; that's because the label has a size of 0 and its position is the screen's origin (0,0). This happened because we didn't provide a `CGRect` initializer. It's OK, we are going to fix this with Auto Layout.

2. Add Auto Layout constraints to give the object an area so it can appear in the view. Also remember to set the property `translatesAutoresizingMaskIntoConstraints` to false; if we don't do this, our view may present problems rendering with Auto Layout. Remember that each UI element is represented as a box, so each edge has a name we can refer to in our code when assigning the constraints.

![ui box](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/myp5gc6ctdrfmn7b96ml.png)

Your code inside `CustomView.swift` should look something like this:

```swift
// CustomView.swift
import UIKit

final class CustomView: UIView {

    // 1. Creating the new element
    lazy var label: UILabel = { // internal label, not the same as the external label
        let label = UILabel()
        label.text = "Hello World"
        label.textAlignment = .center
        label.textColor = .white
        label.backgroundColor = .black
        return label
    }()

    override init(frame: CGRect) {
        super.init(frame: frame)
        backgroundColor = .yellow
        // 2. Adding the new element into the view
        addSubview(label)
        // 3. Add the Auto Layout constraints
        setUpLabelConstraints()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func setUpLabelConstraints() {
        // Needed to avoid Auto Layout conflicts
        label.translatesAutoresizingMaskIntoConstraints = false
        // Pin the top of the label to the safe area of the custom view with a vertical separation of 64 points
        label.topAnchor.constraint(equalTo: self.safeAreaLayoutGuide.topAnchor, constant: 64).isActive = true
        // Pin the left side of the label to the safe area of the custom view with a horizontal separation of 18 points
        label.leftAnchor.constraint(equalTo: self.safeAreaLayoutGuide.leftAnchor, constant: 18).isActive = true
        // Pin the right side of the label to the safe area of the custom view with a horizontal separation of -18 points
        // note: if this value were positive, the element would extend 18 points past the screen edge
        label.rightAnchor.constraint(equalTo: self.safeAreaLayoutGuide.rightAnchor, constant: -18).isActive = true
        // Set the height of the label to be equal to 48 points
        label.heightAnchor.constraint(equalToConstant: 48).isActive = true
    }
}
```

Now let's run the app; a custom label should appear on the screen. This custom label is a child of the custom view, which was created by the ViewController and set as its default view.

![final result](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/diojj4u4rsbokeyf6w68.png)

## Conclusion

Congratulations, you created an app's UI using only code! 
I hope this simple tutorial helps you understand this topic. There is a lot more to cover, but this is a good first step. Thank you for reading! Until next time.
msa_128
1,467,531
Python and Visual Studio Code on Windows 10.
GitHub: Salah Ud Din - 4yub1k Goals: Download the Python setup. Install Python. Download...
0
2023-05-14T18:39:14
https://dev.to/4yub1k/easy-how-to-set-up-python-and-visual-studio-code-on-windows-10-5gon
python, vscode, programming, tutorial
[GitHub: Salah Ud Din - 4yub1k](https://github.com/4yub1k)

#### Goals:
- Download the Python setup.
- Install Python.
- Download the VScode setup.
- Install VScode.
- Install the VScode extension "Python".
- Run your first Python script.
- Video tutorial (complete).

#### Download Python:
Go to the official Python page: [[Link](https://www.python.org/)]. Open the Downloads drop-down and click "Windows" in the submenu.

![Download Python](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/csxvneova79x38v5dorq.png)

Now, under "Stable Releases", scroll down and search for the required version. As I need to install the latest Python version for **["Windows 64-bit"](https://www.python.org/ftp/python/3.11.1/python-3.11.1-amd64.exe)**, I will select "Windows installer (64-bit)". You can check whether your Windows version is 64- or 32-bit at Microsoft: [[Windows version](https://support.microsoft.com/en-us/windows/32-bit-and-64-bit-windows-frequently-asked-questions-c6ca9541-8dce-4d48-0415-94a3faa2e13d)].

![Python Versions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97v53ge7mtpm3t2pwohv.png)

#### Install Python:
Open the downloaded Python installer; you will see the window shown in the image below. Check "Add python.exe to PATH", then click "**Install Now**".

![Install Python](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5w9asbqp6tkhjlokptic.png)

![Python installation successful](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oa790arnu94dn45xuas3.png)

Python installation successful.

#### Download VScode:
Go to the official VScode page: [[Link](https://code.visualstudio.com/)] and click Download at the top right corner.

![Download Vscode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yqqokaqhf87ct6dzbk8b.png)

As we don't want VScode to run for all users, we will install the "User Installer". Click x64 next to User Installer to download it.
![VScode Version](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/upbqhtlzz72kegx0kxa6.png)

#### Install VScode:
Open the VScode .exe you downloaded.

![Install VScode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ozqi6ffjeh7pl4ryshw2.png)

Click "I accept the agreement", and keep clicking Next. If you want a VScode shortcut on the Desktop, make sure to check the box shown below.

![VScode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/901i23bvqf7hx0hikmfv.png)

![VScode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5oin2kll7vpje5iqnd8c.png)

VScode installation completed; click Finish to open VScode.

#### Install Python Extension:
Now we need to install a very powerful VScode extension for Python. For more about it, see [[Python Extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)]. Click on Extensions as shown.

![VScode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aedqtned7xrcjyy1ieuv.png)

Search for Python in the search box.

![VScode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9gsbesq3irpnvv7cfkkr.png)

Click the one from Microsoft.

![VScode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d7hdsbaqhthdpd1jqbdo.png)

Then click Install and wait for the installation to complete. We are ready!

![VScode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fxbn1ymxacorolo1bwj1.png)

#### Run Python Script:
Create a file, fileName.py. We will use "first" as the file name here: 'first.py'. Go to the "Terminal" menu and click "New Terminal".

![Python VScode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iqb2huknm59cldw2548m.png)

It will open the command line for you.

![VScode Terminal](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qxi2zj4xok6wtchvc9wt.png)

Enter "py fileName.py" to run the script.
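The contents of the script itself can be anything; a minimal `first.py` to verify the setup (its contents here are just an illustrative suggestion, not part of the original tutorial) could be:

```python
# first.py - a minimal script to verify the Python installation
import sys

def greet(name):
    """Return a small greeting that also reports the running Python version."""
    return f"Hello, {name}! Running Python {sys.version_info.major}.{sys.version_info.minor}"

if __name__ == "__main__":
    print(greet("world"))
```

Running `py first.py` should then print the greeting in the terminal.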
![VScode](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v82hove6eg3xna04z37x.png)

Video Link: [[Python & VScode Installation](https://www.youtube.com/watch?v=D8swqvpFDM8&ab_channel=NerdyAyubi)]

{% youtube D8swqvpFDM8 %}

Thank you. 😎👍
4yub1k
1,467,778
Java Message Service (JMS)
Java Message Service (JMS) is a messaging API that provides a standard way for Java applications to...
0
2023-05-14T14:43:57
https://dev.to/sandeepseeram/java-message-service-jms-2h8k
java, jms, pubsub
Java Message Service (JMS) is a messaging API that provides a standard way for Java applications to send and receive messages. JMS is a loosely coupled messaging system, which means that the sender and receiver of a message do not need to be running at the same time. JMS is also a reliable messaging system, which means that messages are not lost or corrupted.

JMS is used in a variety of applications, including:
• Enterprise application integration (EAI)
• Business-to-business (B2B) integration
• Web services
• Cloud computing

JMS provides two messaging domains:
• Point-to-point messaging
• Publish-subscribe messaging

In point-to-point messaging, there is a one-to-one relationship between the sender and receiver of a message. In publish-subscribe messaging, there is a one-to-many relationship between the sender and the receivers of a message.

JMS provides a number of features that make it a powerful messaging API, including:
• Message persistence
• Message acknowledgment
• Message delivery guarantees
• Message filtering
• Message expiration
• Message transformation

JMS is a mature and widely used messaging API. It is supported by a variety of messaging vendors, including IBM, Oracle, and Red Hat.

Here are some of the benefits of using JMS:
• Loosely coupled messaging: the sender and receiver of a message do not need to be running at the same time.
• Reliable messaging: messages are not lost or corrupted.
• Standardized API: Java applications can communicate with any JMS provider through the same interfaces.
• Vendor-neutral: applications are not locked into a particular vendor.
• Mature and widely used: there is a large body of knowledge and expertise available.
Java program that sends messages via JMS:

```java
import javax.jms.*;
import javax.naming.*;

public class JmsMessageSender {
    public static void main(String[] args) throws NamingException, JMSException {
        // Set up the JNDI context to access the JMS provider
        Context context = new InitialContext();
        ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("ConnectionFactory");
        Destination destination = (Destination) context.lookup("queue/MyQueue");

        // Create a JMS connection and session
        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Create a JMS message producer
        MessageProducer producer = session.createProducer(destination);

        // Create a text message and send it
        TextMessage message = session.createTextMessage("Hello, world!");
        producer.send(message);

        // Clean up resources
        producer.close();
        session.close();
        connection.close();
    }
}
```

Java program that receives messages from the JMS queue named "MyQueue":

```java
import javax.jms.*;
import javax.naming.*;

public class JmsMessageReceiver {
    public static void main(String[] args) throws NamingException, JMSException {
        // Set up the JNDI context to access the JMS provider
        Context context = new InitialContext();
        ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("ConnectionFactory");
        Destination destination = (Destination) context.lookup("queue/MyQueue");

        // Create a JMS connection and session
        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Create a JMS message consumer
        MessageConsumer consumer = session.createConsumer(destination);

        // Start the connection
        connection.start();

        // Receive messages until the process is stopped
        while (true) {
            Message message = consumer.receive();
            if (message instanceof TextMessage) {
                TextMessage textMessage = (TextMessage) message;
                System.out.println("Received message: " + textMessage.getText());
            } else {
                System.out.println("Received message of unsupported type: " + message.getClass().getName());
            }
        }
    }
}
```
sandeepseeram
1,467,798
Every new developer should know about these 10 GitHub repositories.
Github is like Facebook for programmers. It is not unfair to give this amazing place this name. After...
0
2023-05-14T16:41:27
https://dev.to/danmusembi/every-new-developer-should-know-about-these-10-github-repositories-39ef
**GitHub is like Facebook for programmers**. It is not unfair to give this amazing place that name. After all, this site not only lets you share your code and keep track of changes, but it also helps you connect with other great coders from all over the world. Many developers love to spend time on GitHub studying projects, learning new things, connecting with other developers, and contributing to open-source projects. GitHub's success is shown by the fact that it has more than 37 million users and more than 100 million repositories, which shows how much developers love this great site. Congratulations if you're a programmer who often visits GitHub; I've compiled a list of repositories you may wish to add as favourites, depending on the topics you're interested in.

1. [FreeCodeCamp](https://github.com/freeCodeCamp/freeCodeCamp)

A non-profit group and one of the best online open-source communities where you can learn to code and help others. You can find more than 306k stars and more than 23k forks on their GitHub page. They have a big community with a great site where people can help each other and get better at coding. There are new issues and pull requests every week. Put this on your list of favourites if you want to learn from and work with millions of other people.

2. [Tensorflow](https://github.com/tensorflow/tensorflow)

Visit the Tensorflow repository on GitHub if you want a math tool used in machine learning and neural networks. TensorFlow is open-source software that lets you quickly do graph-based computation. It was made by the Google Brain team, a group of engineers and researchers, and is used at Google for both research and production. There are more than 138k stars and more than 78k forks on GitHub for this project. You will start with setting up Tensorflow and then move on to a more in-depth look at the topic. Note that this can be done in the Python language.
3. [Free Programming Books](https://github.com/EbookFoundation/free-programming-books)

"Free Programming Books" is a GitHub repository that collects free programming books from many different fields and languages. It has a wide range of resources for self-study and career growth. The community keeps the repository up to date so that it stays useful, and it gives everyone free access to high-quality training material. It is a valuable tool for programmers of all levels because it encourages the open sharing of information and makes it easier to learn at your own pace.

4. [Coding Interview University](https://github.com/jwasham/coding-interview-university)

Are you preparing for interviews with major tech firms like Google, Microsoft, Facebook, and Amazon? If so, and if you want thorough instructions and tools, this is one of the best repositories on GitHub to help you prepare for interviews at these firms. To advance to a software engineering career at these firms as an experienced software or web developer, you will need a background in computer science. This repository has a wealth of information that will help you better understand computer science and prepare for interviews with various IT businesses.

5. [Bootstrap](https://github.com/twbs/bootstrap)

The popular web development framework Bootstrap has over 137,000 stars and over 67,000 forks on GitHub. Starting with the installation, you'll have access to detailed instructions and links to related resources. Make this your go-to resource for learning about this widely used framework.

6. [Public APIs](https://github.com/public-apis/public-apis)

As a developer, dealing with application programming interfaces (APIs) is typically essential. This repository provides a comprehensive index of publicly available APIs, streamlining development. These APIs may be used without cost and are organized in a way that makes discovery simple. By bookmarking this repository, programmers have ready access to a large library of APIs that can be included in their applications with little effort.

7. [Facebook React](https://github.com/facebook/react)

The Facebook React GitHub repository is the canonical home for the Facebook-built React JavaScript library. A gathering place for the React community, it houses the library's code, documentation, and issue tracker. The repository gives developers a central hub for tracking the progress of React and sharing their contributions. It also hosts associated projects and packages that expand React's functionality for web and mobile development, such as React DOM and React Native. The Facebook React repository is an essential part of the React ecosystem, helping developers all across the globe.

8. [Awesome Python](https://github.com/vinta/awesome-python)

Here is another repository, this time for the Python community. The best Python libraries, frameworks, and other tools are all collected in one repository on GitHub, including tools for creating podcasts. You should save this page if you want to learn to code in Python.

9. [Security Guide for Developers](https://github.com/jcavell/security-guide-for-developers)

The "Security Guide for Developers" is a large GitHub project that gives developers the tools and best practices they need to make their apps safer. It covers a wide range of security themes, such as secure coding techniques, authentication and authorization, data protection, input validation, and secure communication. The repository has instructions, code samples, and tools that developers can use to find and fix common security holes. It's a great resource for developers who want to make safe and reliable apps that protect private data and keep out potential threats. Regular updates make sure the repository stays useful and keeps pace with how security is changing.
10. [Awesome Interview Questions](https://github.com/DopplerHQ/awesome-interview-questions)

The "Awesome Interview Questions" GitHub project is a collection of interview questions and other resources meant to help job seekers prepare for interviews. It covers a wide range of technical and non-technical areas, such as algorithms, data structures, system design, behavioural questions, and more. The repository is a great way for candidates to learn about typical interview questions and get an idea of what skills and knowledge companies are looking for. It is updated regularly by the community to make sure it is current and comprehensive.

Finally, these 10 GitHub repositories are essential for beginning developers. They support study, exploration, and growth across many areas, languages, and technologies, spanning free programming books, full web development curricula, popular frameworks, and curated lists of resources. They are doorways to the broad world of programming, enabling learning, skill development, and awareness of industry trends. By using these repositories, new developers can learn programming ideas, get hands-on experience, benefit from the developer community, and build the abilities, knowledge, and foundation needed to start a successful programming career.
danmusembi
1,467,802
➡️👷💪 Linear Class Builder
TLDR: Github @reggi/linear-builder-class Code Generates Classes using the Linear Builder Class...
0
2023-05-14T16:49:24
https://dev.to/reggi/linear-class-builder-1j5m
typescript, deno, oop, javascript
> **TLDR**: [Github @reggi/linear-builder-class](https://github.com/reggi/linear-builder-class) code-generates classes using the Linear Builder Class pattern

My goal is to define types and ensure that the generic values "match" from one method to the next.

```ts
new Example()
  .stringOrNumber(1)
  .toObject((knowsItsANumber) => {
    return { yepStillKnows: knowsItsANumber }
  })
  .chainMe(({ yepStillKnows }) => {
    return yepStillKnows + 1
  })
  .done(stillANumber => {
    console.log(stillANumber) // number
  })
```

> Note: this 👆 is `example_two`

One common approach to achieve this is to maintain separate class instances that are returned from each method, instead of returning `this` and having a regular class where all methods live on that one class. With the latter approach, each "set" value can't be dynamically typed. By passing in a value and returning a new class instance, you can keep track of the types. This is a linear process because you can only set one property at a time, in the same sequence. Due to the way that generics work, the only way to do this effectively is to create and return new class instances, which can result in a lot of boilerplate code. That's where this tool comes in: it creates the scaffolding classes required to pull this off.

What is a [builder class](https://refactoring.guru/design-patterns/builder/typescript/example)? It's a class-creation pattern where class methods generally return `this` and you chain methods together while internal class state manages changes; an example is something like `builder.getProduct().listParts()`.

With this project, I was interested in generating the code for a class from a small definition:

```
request*: Request
pathname*: string
match: MatchHandler<M>
data: DataHandler<M, D>
component: ComponentHandler<M>
```

> Note: the `*` syntax means the type is a "native" type and it shouldn't be included when using the `importsFrom` option.
Builds a class with these methods:

```ts
new Leaf()
  .request(new Request('https://example.com'))
  .pathname('/hello-world')
  .match(async (_req, ctx) => {
    const match = new URLPattern({ pathname: ctx.pathname }).exec(ctx.url.pathname)
    const age = ctx.url.searchParams.get('age')
    if (!match) return null
    if (!age) throw new Error('missing age param')
    const x = await Promise.resolve({ age })
    return x
  })
  .data(async data => {
    const x = await Promise.resolve({ ...data })
    return { age: parseInt(x.age) }
  })
  .component(props => {
    return <div>{props.age}</div>
  })
```

Where the return value is a class with these properties:

```
{
  request: Request
  pathname: string,
  match: MatchHandler<M>,
  data: DataHandler<M, D>,
  component: ComponentHandler<M>,
}
```

This example was created using Deno. You can build the code using:

```
deno run -A ./example/build.ts > example/leaf.ts
```

and run the usage example using:

```
deno run example/usage.tsx
```

[Cover Attribution Creative Commons](https://pxhere.com/en/photo/1626734)
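The linear-builder idea itself (each setter returns a *new* instance so earlier values are carried forward, and steps run in a fixed order) is language-agnostic. As an illustrative sketch, not part of this project, the same shape in Python looks like:

```python
# Sketch of a linear builder: each step returns a NEW object rather than
# mutating `self`, so values set earlier are carried forward step by step.
class Step2:
    def __init__(self, value):
        self._value = value

    def to_object(self, fn):
        # fn receives the stored value and its result becomes the new value
        return Step2(fn(self._value))

    def done(self, fn):
        # terminal step: hand the accumulated value to the caller
        return fn(self._value)

class Example:
    def string_or_number(self, value):
        # returns a new class instance instead of `self`
        return Step2(value)

result = (
    Example()
    .string_or_number(1)
    .to_object(lambda n: n + 1)   # 1 -> 2
    .done(lambda n: n * 10)       # 2 -> 20
)
```

Python can't track the evolving types statically the way the TypeScript generics do, but the chaining and per-step state handoff are the same.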
reggi
1,467,963
Apache AGE: Advanced Features for Efficient Graph Data Management in PostgreSQL
Introduction Apache AGE is a PostgreSQL extension that enables users to store and manage...
0
2023-05-14T20:43:50
https://dev.to/ahmedmohamed/apache-age-advanced-features-for-efficient-graph-data-management-in-postgresql-43n3
postgres, apacheage
## Introduction

Apache AGE is a PostgreSQL extension that enables users to store and manage graph data in a relational database environment. It aims to provide a unified storage solution for both relational and graph model data, allowing users to use standard ANSI SQL and openCypher, a popular graph query language. This article explores some of the advanced features of Apache AGE that can enhance data management.

## Hierarchical Organization of Graph Labels

Apache AGE offers hierarchical graph label organization: users can create label groups that contain other label groups or labels. This feature can be useful in managing and analyzing large and complex data sets with multiple layers of relationships.

## Indexing Properties on Vertices and Edges

Furthermore, Apache AGE enables users to create property indexes on vertices and edges using standard SQL syntax, which can greatly improve query performance and reduce the time needed to extract insights and patterns from graph data.

## Support for the Cypher Query Language

One of the major advantages of Apache AGE is its support for Cypher, a popular query language for graph databases. This feature allows users to easily query relationships and patterns within the data, leveraging this powerful tool within a PostgreSQL-based environment.

## SQL and Cypher Hybrid Querying

Apache AGE also supports hybrid querying, which means users can work with both structured and graph data in a single database environment, leading to faster development cycles, efficient data analysis, and a streamlined overall workflow.

## Multi-graph Querying

Another valuable feature of Apache AGE is its support for querying multiple graphs. By analyzing multiple graphs together, users can identify patterns and insights that may not be visible in a single graph data set. This feature is particularly useful in social network analysis scenarios, where users may need to analyze relationships between individuals or groups across multiple graphs.

## Support for Full PostgreSQL Features

Finally, AGE's full support for PostgreSQL functionality and its API allows seamless integration into existing PostgreSQL-based workflows.

## Conclusion

Overall, Apache AGE provides an efficient and comprehensive solution for managing graph data within a PostgreSQL environment, making it a valuable tool for developers and data scientists who need to manage and analyze graph data within a relational database environment.

REF:
[age-manual](https://age.apache.org/age-manual/master/intro/overview.html)
[ApacheAGE](https://age.apache.org/)
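As a sketch of the hybrid SQL/Cypher querying described above: AGE exposes openCypher through its `cypher()` set-returning SQL function, so a graph query can be composed as an ordinary SQL statement. The small Python helper below (the graph name, helper name, and query are illustrative assumptions, not from the article) shows the composition; on a live database the resulting string would be passed to a PostgreSQL driver such as psycopg2.

```python
def age_query(graph: str, cypher: str, columns: str = "(v agtype)") -> str:
    """Compose an Apache AGE hybrid statement: openCypher embedded in SQL.

    AGE's cypher() function returns rows like any other table expression,
    so the result can be filtered or joined with plain SQL.
    """
    return f"SELECT * FROM cypher('{graph}', $$ {cypher} $$) AS {columns};"

# Illustrative: find a person's friends in a graph called 'my_graph'
sql = age_query("my_graph", "MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN b")
```

Only the statement composition is shown here; executing it requires a running PostgreSQL instance with the AGE extension loaded.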
ahmedmohamed
1,467,973
Useful Linux commands
🔹ls - List files and directories 🔹cd - Change the current directory 🔹mkdir - Create a new directory...
0
2023-05-14T21:23:26
https://dev.to/justplegend/useful-linux-commands-1af8
linux, beginners
🔹ls - List files and directories
🔹cd - Change the current directory
🔹mkdir - Create a new directory
🔹rm - Remove files or directories
🔹cp - Copy files or directories
🔹mv - Move or rename files or directories
🔹chmod - Change file or directory permissions
🔹grep - Search for a pattern in files
🔹find - Search for files and directories
🔹tar - Manipulate tarball archive files
🔹vi - Edit files using a text editor
🔹cat - Display the contents of files
🔹top - Display processes and resource usage
🔹ps - Display process information
🔹kill - Terminate a process by sending a signal
🔹du - Estimate file space usage
🔹ifconfig - Configure network interfaces
🔹ping - Test network connectivity between hosts

**PERMISSIONS IN LINUX**

Source: KodeKloud
![permissions-kodekloud](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/69k45rek67pwjmoo80hc.png)

Source: bytebytego
![permissions-byte](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oedzefm13cbk8z9rk28q.jpg)

Links:
1. [40 Useful commands](https://www.hostinger.com/tutorials/linux-commands)
2. [Most used commands](https://kinsta.com/blog/linux-commands/)
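The permission diagrams above boil down to three bits per digit (read = 4, write = 2, execute = 1), one digit each for owner, group, and others. A small illustrative Python sketch that decodes an octal mode like the ones passed to `chmod`:

```python
def mode_to_string(mode: int) -> str:
    """Decode a 3-digit octal permission mode (as used by chmod) into rwx form."""
    flags = [(4, "r"), (2, "w"), (1, "x")]
    parts = []
    for shift in (6, 3, 0):  # owner, group, others
        digit = (mode >> shift) & 0o7
        parts.append("".join(ch if digit & bit else "-" for bit, ch in flags))
    return "".join(parts)

# chmod 754 file -> owner rwx, group r-x, others r--
print(mode_to_string(0o754))  # prints "rwxr-xr--"
```

For example, `chmod 644 file` gives `rw-r--r--`: the owner can read and write, while everyone else can only read.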
justplegend
1,468,023
Generics com Java
Conteúdos Motivação Generic Methods Generic Classes Generics Interfaces Bounded...
0
2023-05-14T22:57:39
https://dev.to/patriciaclares/generics-com-java-fk2
webdev, java, programming, softwareengineering
## Contents

- [Motivation](#motivation)
- [Generic Methods](#generic-methods)
- [Generic Classes](#generic-classes)
- [Generics Interfaces](#generics-interfaces)
- [Bounded Generics](#bounded-generics)
- [Multiple Bounds](#multiple-bounds)
- [Wildcards](#wildcards)
- [Type erasure](#type-erasure)

### Motivation

Generics were introduced in Java SE 5.0 to spare the developer from excessive casting during development and to reduce bugs at runtime. Look at the example below of code without generics:

```java
List list = new LinkedList();
list.add(new Integer(1));
// On the line below the compiler will complain, because it doesn't know the return type
Integer i = list.iterator().next();
```

So, to silence the compiler, we need to add a cast:

```java
Integer i = (Integer) list.iterator().next();
```

There is no guarantee that the list contains only integers, since it is possible to add other kinds of objects to it:

```java
List list = List.of(0, "a");
```

which can cause an exception at runtime. It would be much easier to specify just once the type of object we are working with, making the code easier to read, avoiding excessive casts, and also preventing potential problems at runtime.
```java
// On the line below we are specifying the type of our List
List<Integer> list = new LinkedList<>();
list.add(1);
Integer i = list.iterator().next();
```

### Generic Methods

Let's start with the characteristics of a generic method. Look at the code below:

```java
public static <T, G> Set<G> fromArrayToEvenSet(T[] a, Predicate<T> filterFunction, Function<T, G> mapperFunction) {
    return Arrays.stream(a)
            .filter(filterFunction)
            .map(mapperFunction)
            .collect(Collectors.toSet());
}
```

The defining characteristic of a generic method is the type parameter list, in angle brackets, right before the method's return type; it declares the generic types of the parameters we are receiving, in our case ***T*** and ***G***.

In the code above, our generic function receives an array ***a*** that can be of any type (*String, int, etc.*). Next, we receive a function responsible for filtering the contents of the array; note that this function is of type *Predicate<T>*, since it has a single responsibility: returning true or false.
Finally, we receive a function responsible for transforming one object into another; in this case the function is of type *Function<T, G>*, because it works on an object of type ***T*** to return a different object that we are calling ***G***.

Here is the complete example:

```java
public class Main {
    public static void main(String[] args) {
        Integer[] intArray = {1, 2, 3, 4, 5};
        final var stringEvens = fromArrayToEvenSet(intArray, Main::toNumeric, Main::isEven);
        System.out.println(stringEvens); // [number: 4, number: 2]
    }

    private static Numeric toNumeric(final int number) {
        return new Numeric(number);
    }

    public static boolean isEven(final int number) {
        return number % 2 == 0;
    }

    public static <T, G> Set<G> fromArrayToEvenSet(T[] a, Function<T, G> mapperFunction, Predicate<T> filterFunction) {
        return Arrays.stream(a)
                .filter(filterFunction)
                .map(mapperFunction)
                .collect(Collectors.toSet());
    }

    static class Numeric {
        private final int number;

        Numeric(int number) {
            this.number = number;
        }

        @Override
        public String toString() {
            return "number: " + this.number;
        }
    }
}
```

### Generic Classes

We have already seen that a method can be generic, but what happens if the class itself is generic? When we use a generic class, we need to declare, with the angle brackets, how many generic types we are going to work with. See the example below:

```java
public class Example<T> {
    public void doSomething(final T parameter) {
        System.out.println("parameter: " + parameter);
    }
}
```

If we try to add another object as a parameter to the method, the compiler will complain, because we didn't declare it on the class. We could add this generic type on the method itself with its own type parameter list, as we saw before, but let's do it on the class.
***On the class:***

```java
public class Example<T, G> {
    public void doSomething(final T parameter, final G parameter2) {
        System.out.println("parameter: " + parameter);
        System.out.println("parameter2: " + parameter2);
    }
}
```

Now we need to instantiate this class and use the method:

```java
public class Main {
    public static void main(String[] args) {
        // Note that for each of the instances we are passing different types
        final var integerExample = new Example<Integer, String>();
        final var listExample = new Example<List, Double>();
        final var doubleExample = new Example<Double, Character>();

        // And when using the method, we need to respect the types we specified on the instance.
        integerExample.doSomething(1, "Olá");
        listExample.doSomething(List.of("1", 2, "3", 4), 4.88);
        doubleExample.doSomething(10.99, 'C');
    }
}
```

### Generics Interfaces

As we saw with classes, interfaces follow the same rule:

```java
public interface ExampleInterface<T> {
    void doSomething(final T parameter);
}
```

Using the interface with the Integer type:

```java
// Note that we are specifying here the type the interface will receive.
public class Example implements ExampleInterface<Integer> {
    // And then the method we are overriding takes that same type
    @Override
    public void doSomething(Integer parameter) {
        System.out.println("parameter: " + parameter);
    }
}
```

Using the interface with the List type:
```java
public class Example implements ExampleInterface<List<String>> {
    @Override
    public void doSomething(List<String> parameter) {
        System.out.println("parameter: " + parameter);
    }
}
```

### Bounded Generics

Bounded means restricted/limited: we can limit the types a method accepts. For example, we can specify that the method accepts all subclasses of a type (or the type itself), which in this example also means our generic type inherits the behavior of Number:

```java
public static <T extends Number> Set<Integer> fromArrayToSet(T[] a) {
    return Arrays.stream(a)
            .map(Number::intValue)
            .collect(Collectors.toSet());
}
```

In the example above, we limited the generic type ***T*** to accept only subclasses of the superclass *Number*. So what would happen if we tried to pass an array of Strings as our *T[]* parameter?

```java
String[] stringArray = {"a", "b", "c"};
// On the line below the compiler will complain about invalid String instances where Number is expected
final var stringEvens = fromArrayToSet(stringArray);
```

### Multiple Bounds

As we saw in ***Bounded Generics***, we can limit who can use our generic method, and we can restrict it even further using interfaces. Look at the code below:

```java
public class Main {
    public static void main(String[] args) {
        final Person wizard = new Wizard();
        final Person muggle = new Muggle();
        startWalkAndEat(wizard);
        startWalkAndEat(muggle);
    }

    public static <T extends Person> void startWalkAndEat(T a) {
        a.walk();
        a.eat();
    }

    static class Muggle extends Person {}

    static class Wizard extends Person implements Comparable {
        @Override
        public int compareTo(Object o) {
            return 0;
        }
    }

    static class Person {
        public void walk() {}
        public void eat() {}
    }
}
```

Both a Muggle and a Wizard can eat and walk because both are people, which was the restriction we added: only subclasses (*child classes*) of ***Person***, and ***Person*** itself (*the parent class*), can use the ***startWalkAndEat()*** method. But what if we added one more restriction on top of the current one, so that only classes implementing the ***Comparable*** interface are allowed? What would happen?

```java
public class Main {
    public static void main(String[] args) {
        final Wizard wizard = new Wizard();
        final Muggle muggle = new Muggle();
        startWalkAndEat(wizard);
        // The line below now gives a compilation error, because the Muggle class doesn't implement the Comparable interface
        startWalkAndEat(muggle);
    }

    // Adding the new restriction
    public static <T extends Person & Comparable> void startWalkAndEat(T a) {
        a.walk();
        a.eat();
    }

    static class Muggle extends Person {}

    static class Wizard extends Person implements Comparable {
        @Override
        public int compareTo(Object o) {
            return 0;
        }
    }

    static class Person {
        public void walk() {}
        public void eat() {}
    }
}
```

As noted in the code, it is no longer possible to pass the Muggle class to the ***startWalkAndEat()*** method, because that class does not implement the ***Comparable*** interface.

### Wildcards

Look at the code below:

```java
public class Example<T extends Number> {
    public long sum(List<T> numbers) {
        return numbers.stream().mapToLong(Number::longValue).sum();
    }
}
```

We will use it this way:

```java
public class Main {
    public static void main(String[] args) {
        final var example = new Example<>();
        List<Number> numbers = new ArrayList<>();
        numbers.add(5);
        numbers.add(10L);
        numbers.add(15f);
        numbers.add(20.0);
        example.sum(numbers);
    }
}
```

We are passing several kinds of numbers into the list. But what if we created a list of Integers? Since Integer is a Number, there should be no problem, right? Wrong! That is why the wildcard was born; see the code below.
```java
public class Main {

    public static void main(String[] args) {
        final var example = new Example();

        List<Number> numbers = new ArrayList<>();
        numbers.add(5);
        numbers.add(10L);
        numbers.add(15f);
        numbers.add(20.0);

        // This works
        example.sum(numbers);

        List<Integer> numbersInteger = new ArrayList<>();
        numbersInteger.add(5);
        numbersInteger.add(10);
        numbersInteger.add(15);
        numbersInteger.add(20);

        // But here we get a compilation error
        example.sum(numbersInteger);
    }
}
```

`List<Integer>` and `List<Number>` are not related the way _Integer_ and _Number_ are; they only share the common parent (List<?>). To solve this problem we can use a wildcard:

```java
public class Example {

    public long sum(List<? extends Number> numbers) {
        return numbers.stream().mapToLong(Number::longValue).sum();
    }
}
```

This way, both lists will work. It is important to note that wildcards can only be used to constrain method parameters; unlike generic type parameters, they cannot be declared on the class or method itself. It would also be possible to write it without wildcards, as follows:

```java
public class Example {

    public <T extends Number> long sum(List<T> numbers) {
        return numbers.stream().mapToLong(Number::longValue).sum();
    }
}
```

But that way we restrict ***T*** to Numbers only. In this case, the wildcard is more flexible.

### Type erasure

This mechanism made it possible to support generics at compile time, but not at runtime. In practice, this means the Java compiler uses the generic type at compile time to check the typing of the data, but at runtime all generic types are replaced by the corresponding raw type.
```java
public class MinhaLista<T> {
    private T[] array;

    public MinhaLista() {
        this.array = (T[]) new Object[10];
    }

    public void add(T item) {
        array[0] = item;
    }

    public T get(int index) {
        return array[index];
    }
}
```

After compilation:

```java
public class MinhaLista {
    private Object[] array;

    public MinhaLista() {
        this.array = (Object[]) new Object[10];
    }

    public void add(Object item) {
        // ...
    }

    public Object get(int index) {
        return array[index];
    }
}
```

To learn more about type erasure, see this [Baeldung article](https://www.baeldung.com/java-type-erasure).
patriciaclares
1,468,108
Day 104 of #365DaysOfCode: Implementing OAuth 2.0 and Exploring Python
Hey, guys! Day 104 of #365DaysOfCode has been productive. Today, I worked on implementing OAuth 2.0...
0
2023-05-15T00:25:53
https://dev.to/arashjangali/day-104-of-365daysofcode-implementing-oauth-20-and-exploring-python-1kkf
webdev, javascript, beginners, programming
Hey, guys! Day 104 of #365DaysOfCode has been productive. Today, I worked on implementing OAuth 2.0 for authentication in my app. I focused on setting up separate GoogleStrategy instances and authentication routes for each user model, an important step to enhance security and user experience. On day 4 of #100DaysOfPython, I explored control flow and logical operators, gaining a deeper understanding of their functionality in Python. I registered the app on Google Developer Console and imported Passport.js and Express session. To accommodate different user types, I need to create separate authentication routes and corresponding GoogleStrategy instances.
arashjangali
1,468,116
Passwordless encryption with public key for GitHub
Public keys registered for authentication on GitHub and GitLab can be obtained by anyone. I made a...
0
2023-05-15T00:57:28
https://dev.to/yoshi389111/passwordless-encryption-with-public-key-for-github-kb6
go, cryptography, github, gitlab
Public keys registered for authentication on GitHub and GitLab can be obtained by anyone. I made a command that can encrypt/decrypt files using that public key and your own private key.

This command can handle encrypted files without a password. (Of course, a passphrase is still needed if one is registered on the private key itself.) However, there is no need to pass a password from the sender to the recipient.

I named the command `git-caesar`. Don't worry, it does not use the [Caesar cipher](https://en.wikipedia.org/wiki/Caesar_cipher).

It's a new command, so there may be bugs. I'm not an expert in encryption technology, so I'm looking forward to hearing from people who know more.

This command is heavily influenced by [QiiCipher](https://github.com/Qithub-BOT/QiiCipher). QiiCipher is a shell-script-based tool that encrypts/decrypts small files using RSA public keys registered on GitHub. QiiCipher aims to work with only standard shell functions plus OpenSSL/OpenSSH, while this command is implemented in the Go language to make it easier to handle.

## Public key for GitHub and GitLab authentication

It's common to register an ssh public key on GitHub or GitLab for authentication during `git push`. Anyone can obtain this registered public key.

* GitHub public key URL: `https://github.com/USER_NAME.keys`
* GitLab public key URL: `https://gitlab.com/USER_NAME.keys`

These public keys are primarily for signing, but some algorithms can also be used for encryption. Using such a public key, passwordless encryption/decryption becomes possible.

The supported public key encryption algorithms are RSA (key length of 1024 bits or more), ECDSA, and ED25519.
* RSA (key length of 1024 bits or more)
  - public key prefix: `ssh-rsa`
* ECDSA
  - P256 -- public key prefix: `ecdsa-sha2-nistp256`
  - P384 -- public key prefix: `ecdsa-sha2-nistp384`
  - P521 -- public key prefix: `ecdsa-sha2-nistp521`
* ED25519 -- public key prefix: `ssh-ed25519`

Other key types, namely DSA, ECDSA-SK, ED25519-SK, and RSA with a key length of less than 1024 bits, are not supported.

* DSA -- Excluded because it is deprecated on GitHub/GitLab, and I did not research how to implement it.
  - public key prefix: `ssh-dss`
* ECDSA-SK, ED25519-SK -- As far as I have researched, I have determined that this is not possible. At least, it cannot be achieved simply by using a library.
  - public key prefix: `sk-ecdsa-sha2-nistp256@openssh.com`
  - public key prefix: `sk-ssh-ed25519@openssh.com`
* RSA (key length less than 1024 bits) -- Judged to be a security problem. Also, since the key length is short, there are restrictions on encrypting the key.

## Message body encryption

Even when public-key cryptography is used, it is common to encrypt the body of the message with symmetric-key cryptography. This command uses the symmetric-key cipher AES in AES-256-CBC mode.

```go
package aes

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
)

func Encrypt(key, plaintext []byte) ([]byte, error) {
	// pad the message with PKCS#7
	padding := aes.BlockSize - len(plaintext)%aes.BlockSize
	padtext := append(plaintext, bytes.Repeat([]byte{byte(padding)}, padding)...)

	ciphertext := make([]byte, aes.BlockSize+len(padtext))
	iv := ciphertext[:aes.BlockSize]
	encMsg := ciphertext[aes.BlockSize:]

	// generate initialization vector (IV)
	_, err := rand.Read(iv)
	if err != nil {
		return nil, err
	}

	// encrypt message (AES-CBC)
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	cbc := cipher.NewCBCEncrypter(block, iv)
	cbc.CryptBlocks(encMsg, padtext)

	return ciphertext, nil
}

func Decrypt(key, ciphertext []byte) ([]byte, error) {
	// extract the initialization vector (IV)
	iv := ciphertext[:aes.BlockSize]
	encMsg := ciphertext[aes.BlockSize:]

	// create a decrypter in CBC mode
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	cbc := cipher.NewCBCDecrypter(block, iv)

	// decrypt ciphertext
	msgLen := len(encMsg)
	decMsg := make([]byte, msgLen)
	cbc.CryptBlocks(decMsg, encMsg)

	// unpad the message with PKCS#7
	plaintext := decMsg[:msgLen-int(decMsg[msgLen-1])]
	return plaintext, nil
}
```

First, prepare a shared key (32 bytes = 256 bits) by some method, such as random numbers or key exchange, then encrypt with AES-256-CBC using that shared key.

A random number called an initialization vector (IV) is required for encryption. An IV is like the salt used when hashing a password: even if the same data is encrypted, the ciphertext will be different. It is fine to publish it. This IV must be passed to the recipient, so I insert it at the beginning of the ciphertext. When receiving, the first block is extracted as the IV and then the rest is decrypted.

Also, AES-256-CBC requires padding to match the block length. There are several padding methods; this command uses PKCS#7.

The ciphertext can be passed as-is; the important point is how to share the shared key used for encryption with the other party.

## For RSA public keys

RSA public keys can also be used for encryption, though only for sufficiently small data.
However, primitive encryption methods are easy to misuse for non-experts (for example, it is easy to make mistakes like using a block cipher in ECB mode). So I used RSA-OAEP, a cipher scheme built on RSA.

```go
package rsa

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
)

func Encrypt(pubKey *rsa.PublicKey, plaintext []byte) ([]byte, error) {
	return rsa.EncryptOAEP(sha256.New(), rand.Reader, pubKey, plaintext, []byte{})
}

func Decrypt(prvKey *rsa.PrivateKey, ciphertext []byte) ([]byte, error) {
	return rsa.DecryptOAEP(sha256.New(), rand.Reader, prvKey, ciphertext, []byte{})
}

func Sign(prvKey *rsa.PrivateKey, message []byte) ([]byte, error) {
	hash := sha256.Sum256(message)
	return rsa.SignPKCS1v15(rand.Reader, prvKey, crypto.SHA256, hash[:])
}

func Verify(pubKey *rsa.PublicKey, message, sig []byte) bool {
	hash := sha256.Sum256(message)
	err := rsa.VerifyPKCS1v15(pubKey, crypto.SHA256, hash[:], sig)
	return err == nil
}
```

As a result of testing, an RSA key of about 800 bits seems to be enough to encrypt a 32-byte key. Since the supported key length is 1024 bits or more, a 32-byte key can be encrypted without problems.

The shared key for AES-256-CBC is encrypted using the recipient's RSA public key and sent to the other party, who can then decrypt it with their own RSA private key.

## For ECDSA public keys

ECDSA is an algorithm for signing, so it cannot be used directly for encryption/decryption. Related to ECDSA, however, is the ECDH key-exchange algorithm, and ECDSA keys can be used almost as-is for ECDH.

With key exchange, the sender uses the sender's private key and the recipient's public key, while the receiver uses the recipient's private key and the sender's public key; both sides derive the same key. When this exchanged key is used to encrypt/decrypt with a symmetric-key cipher such as AES, there is no need to pass the key itself.
```go
package ecdsa

import (
	"crypto/ecdsa"
	"crypto/rand"
	"crypto/sha256"
	"encoding/asn1"
	"math/big"

	"github.com/yoshi389111/git-caesar/caesar/aes"
)

func Encrypt(peersPubKey *ecdsa.PublicKey, message []byte) ([]byte, *ecdsa.PublicKey, error) {
	curve := peersPubKey.Curve

	// generate temporary private key
	tempPrvKey, err := ecdsa.GenerateKey(curve, rand.Reader)
	if err != nil {
		return nil, nil, err
	}

	// key exchange
	exchangedKey, _ := curve.ScalarMult(peersPubKey.X, peersPubKey.Y, tempPrvKey.D.Bytes())
	sharedKey := sha256.Sum256(exchangedKey.Bytes())

	// encrypt AES-256-CBC
	ciphertext, err := aes.Encrypt(sharedKey[:], message)
	if err != nil {
		return nil, nil, err
	}
	return ciphertext, &tempPrvKey.PublicKey, nil
}

func Decrypt(prvKey *ecdsa.PrivateKey, peersPubKey *ecdsa.PublicKey, ciphertext []byte) ([]byte, error) {
	curve := prvKey.Curve

	// key exchange
	exchangedKey, _ := curve.ScalarMult(peersPubKey.X, peersPubKey.Y, prvKey.D.Bytes())
	sharedKey := sha256.Sum256(exchangedKey.Bytes())

	// decrypt AES-256-CBC
	return aes.Decrypt(sharedKey[:], ciphertext)
}

type sigParam struct {
	R, S *big.Int
}

func Sign(prvKey *ecdsa.PrivateKey, message []byte) ([]byte, error) {
	hash := sha256.Sum256(message)
	r, s, err := ecdsa.Sign(rand.Reader, prvKey, hash[:])
	if err != nil {
		return nil, err
	}
	sig, err := asn1.Marshal(sigParam{R: r, S: s})
	if err != nil {
		return nil, err
	}
	return sig, nil
}

func Verify(pubKey *ecdsa.PublicKey, message, sig []byte) bool {
	hash := sha256.Sum256(message)
	signature := &sigParam{}
	_, err := asn1.Unmarshal(sig, signature)
	if err != nil {
		return false
	}
	return ecdsa.Verify(pubKey, hash[:], signature.R, signature.S)
}
```

Note: As will be described later, the key pair for the sender is generated each time.

## For ED25519 public keys

ED25519 is also an algorithm for signing, so it cannot be used for encryption/decryption either. Related to ED25519 is the X25519 key-exchange algorithm. Calculations based on the ED25519 key yield the X25519 key.
However, the reverse is not possible: an ED25519 key cannot be recovered from an X25519 key, because the conversion is lossy.

```go
package ed25519

import (
	"crypto/ecdh"
	"crypto/ed25519"
	"crypto/sha512"
	"math/big"
)

func toX2519PrivateKey(edPrvKey *ed25519.PrivateKey) (*ecdh.PrivateKey, error) {
	key := sha512.Sum512(edPrvKey.Seed())
	return ecdh.X25519().NewPrivateKey(key[:32])
}

// p = 2^255 - 19
var p, _ = new(big.Int).SetString("7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed", 16)
var one = big.NewInt(1)

func toX25519PublicKey(edPubKey *ed25519.PublicKey) (*ecdh.PublicKey, error) {
	// convert to big-endian
	bigEndianY := toReverse(*edPubKey)

	// turn off the first bit
	bigEndianY[0] &= 0b0111_1111

	y := new(big.Int).SetBytes(bigEndianY)
	numer := new(big.Int).Add(one, y)          // (1 + y)
	denomInv := y.ModInverse(y.Sub(one, y), p) // 1 / (1 - y)

	// u = (1 + y) / (1 - y)
	u := numer.Mod(numer.Mul(numer, denomInv), p)

	// convert to little-endian
	littleEndianU := toReverse(u.Bytes())

	// create x25519 public key
	return ecdh.X25519().NewPublicKey(littleEndianU)
}

func toReverse(input []byte) []byte {
	length := len(input)
	output := make([]byte, length)
	for i, b := range input {
		output[length-i-1] = b
	}
	return output
}
```

Key exchange using X25519 is performed as follows.
```go
package ed25519

import (
	"crypto/ecdh"
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"

	"github.com/yoshi389111/git-caesar/caesar/aes"
)

func Encrypt(otherPubKey *ed25519.PublicKey, message []byte) ([]byte, *ed25519.PublicKey, error) {
	// generate temporary key pair
	tempEdPubKey, tempEdPrvKey, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return nil, nil, err
	}

	// convert ed25519 public key to x25519 public key
	xOtherPubKey, err := toX25519PublicKey(otherPubKey)
	if err != nil {
		return nil, nil, err
	}

	// convert ed25519 private key to x25519 private key
	xPrvKey, err := toX2519PrivateKey(&tempEdPrvKey)
	if err != nil {
		return nil, nil, err
	}

	// key exchange
	sharedKey, err := exchangeKey(xPrvKey, xOtherPubKey)
	if err != nil {
		return nil, nil, err
	}

	// encrypt AES-256-CBC
	ciphertext, err := aes.Encrypt(sharedKey, message)
	if err != nil {
		return nil, nil, err
	}
	return ciphertext, &tempEdPubKey, nil
}

func Decrypt(prvKey *ed25519.PrivateKey, otherPubKey *ed25519.PublicKey, ciphertext []byte) ([]byte, error) {
	// convert ed25519 public key to x25519 public key
	xOtherPubKey, err := toX25519PublicKey(otherPubKey)
	if err != nil {
		return nil, err
	}

	// convert ed25519 private key to x25519 private key
	xPrvKey, err := toX2519PrivateKey(prvKey)
	if err != nil {
		return nil, err
	}

	// key exchange
	sharedKey, err := exchangeKey(xPrvKey, xOtherPubKey)
	if err != nil {
		return nil, err
	}

	// decrypt AES-256-CBC
	return aes.Decrypt(sharedKey, ciphertext)
}

func exchangeKey(xPrvKey *ecdh.PrivateKey, xPubKey *ecdh.PublicKey) ([]byte, error) {
	exchangedKey, err := xPrvKey.ECDH(xPubKey)
	if err != nil {
		return nil, err
	}
	sharedKey := sha256.Sum256(exchangedKey)
	return sharedKey[:], nil
}

func Sign(prvKey *ed25519.PrivateKey, message []byte) ([]byte, error) {
	hash := sha256.Sum256(message)
	sig := ed25519.Sign(*prvKey, hash[:])
	return sig, nil
}

func Verify(pubKey *ed25519.PublicKey, message, sig []byte) bool {
	hash := sha256.Sum256(message)
	return ed25519.Verify(*pubKey, hash[:], sig)
}
```

Using this exchanged key, encryption/decryption can be achieved in the same way as with ECDSA.

## Multiple public keys, different types of keys

GitHub and GitLab accounts can have multiple public keys. When working on multiple PCs/environments, using a single private key increases the risk of leakage through copying it around. To avoid that, you generate a key pair in each environment and register each public key on GitHub, etc. In this command, I wanted the receiver of the ciphertext to be able to decrypt it with any of their private keys in such a case. Also, it is normal for the other party's key algorithm to differ from your own. So I decided to encrypt like this.

First, the shared key that encrypts the message body is generated with random numbers.

* For RSA:
  1. Encrypt the shared key using the recipient's RSA public key.
  2. Pass the encrypted shared key to the other party.
* For ECDSA, ED25519:
  1. Generate a one-time key pair.
  2. Perform key exchange with the one-time private key and the recipient's public key.
  3. Encrypt the shared key with the exchanged key.
  4. Pass the encrypted shared key and the one-time public key to the other party.

The container holding the encrypted shared key and the one-time public key is called an envelope. The recipient selects the envelope that can be decrypted with their own private key, restores the shared key, and decrypts the ciphertext.

## Signature verification

In addition to encryption/decryption, this command also signs the "encrypted message body" with the sender's private key. The recipient can check for spoofing and tampering by verifying the signature using the sender's public key published on GitHub, etc. Signature verification against GitHub etc. is optional; decryption alone is also possible without it.

## Ciphertext file structure

The ciphertext generated by this command is a ZIP file containing the following two files.
* `caesar.json` - a JSON file containing the following items
  - signature data
  - signer's public key
  - version information
  - list of envelopes
* `caesar.cipher` - the encrypted message body

Please note the following:

* This ZIP file does not retain the file name information from before encryption. If necessary, the sender should tell the recipient the file name separately.
* This ZIP file does not hold information about where the sender's public key is located (GitHub account name, URL, etc.).
* Only one file is encrypted at a time. If you want to encrypt multiple files at once, archive them in advance.

## Installation

Requires Go 1.20 or higher. Run the following command to install or upgrade:

```
go install github.com/yoshi389111/git-caesar@latest
```

To uninstall, run the following command:

```
go clean -i github.com/yoshi389111/git-caesar
```

## Usage

```
Usage:
  git-caesar [options]

Application Options:
  -h, --help                  print help and exit.
  -v, --version               print version and exit.
  -u, --public=<target>       github account, url or file.
  -k, --private=<id_file>     ssh private key file.
  -i, --input=<input_file>    the path of the file to read. default: stdin
  -o, --output=<output_file>  the path of the file to write. default: stdout
  -d, --decrypt               decryption mode.
```

* `-u` specifies the location of the peer's public key. If the value looks like a GitHub username, the key is fetched from `https://github.com/USER_NAME.keys`. If it starts with `http:` or `https:`, it is fetched from the web. Otherwise, it is treated as a file path. If you need to specify a file whose name looks like a GitHub username, give it as a path (e.g. `-u ./octacat`). Required for encryption. For decryption, signature verification is performed if specified.
* `-k` specifies your private key. If not specified, `~/.ssh/id_ecdsa`, `~/.ssh/id_ed25519` and `~/.ssh/id_rsa` are searched in order and the first one found is used.
* `-i` specifies the input file: the plaintext file to encrypt when encrypting, or the ciphertext file to decrypt when decrypting. If not specified, standard input is read.
* `-o` specifies the output file. If not specified, output goes to standard output.
* `-d` selects decryption mode. Without it, the command runs in encryption mode.

## Example of use

Encrypt your file `secret.txt` for GitHub user `octacat` and save it as `secret.zip`:

```
git-caesar -u octacat -i secret.txt -o secret.zip
```

Same situation, but using `~/.ssh/id_secret` as the private key:

```
git-caesar -u octacat -i secret.txt -o secret.zip -k ~/.ssh/id_secret
```

Decrypt GitLab user `tanuki`'s file `secret.zip` and save it as `secret.txt`:

```
git-caesar -d -u https://gitlab.com/tanuki.keys -i secret.zip -o secret.txt
```

Same situation, without signature verification:

```
git-caesar -d -i secret.zip -o secret.txt
```

## GitHub Repository

The GitHub repository is below.

https://github.com/yoshi389111/git-caesar
yoshi389111
1,468,184
MusicStar.AI - Create Music with A.I.
Are you an artist looking for inspiration? Or a fan who wants to know what it feels like to be a...
0
2023-05-15T02:39:02
https://dev.to/musicstarai/musicstarai-create-music-with-ai-1i1l
music
Are you an artist looking for inspiration? Or a fan who wants to know what it feels like to be a star? MusicStar.AI is designed for anyone, regardless of musical talent, who wants to make professional-sounding music. MusicStar.AI provides the tools you need, whether you're a music professional working on your next hit or a music fan wishing to create music like your favorite artist. [https://musicstar.ai](https://musicstar.ai)
musicstarai
1,468,239
Perl Weekly #616 - Camel in India
Originally published at Perl Weekly 616 Hi there, When I say India, I mean Indian Subcontinent i.e....
20,640
2023-05-15T04:21:13
https://perlweekly.com/archive/616.html
perl, news, programming
---
title: Perl Weekly #616 - Camel in India
published: true
description:
tags: perl, news, programming
canonical_url: https://perlweekly.com/archive/616.html
series: perl-weekly
---

Originally published at [Perl Weekly 616](https://perlweekly.com/archive/616.html)

Hi there,

When I say <strong>India</strong>, I mean the <strong>Indian Subcontinent</strong>, i.e. Bangladesh, Bhutan, India, Maldives, Nepal, Pakistan and Sri Lanka. Having said that, my prime concern is <strong>India</strong> right now. You must have realised the context of <strong>Camel</strong> by now. Today I would like to talk about the popularity of <strong>Perl</strong> in <strong>India</strong>. I remember when I left <strong>India</strong> in <strong>2000</strong>, Perl was on the rise, and that is how I got into <strong>Perl Programming</strong>. After moving to <strong>England</strong>, I noticed how frequently the <strong>Perl</strong> fans meet up and share ideas. I got the opportunity to attend some and realised how this can boost the popularity of the language and spread the positive word among newbies. Regular <strong>Perl Mongers</strong> meets also keep the subject live and active.

Ever since <strong>COVID</strong> happened, all these in-person events initially stopped. But the good news is that we are getting back on track with events like <a href="https://tprc.to/tprc-2023-tor">The Perl and Raku Conference 2023</a> and the <a href="https://perlkohacon.fi">Perl and Koha Conference in Helsinki</a>. I have decided to attend <strong>The Perl and Raku Conference in Canada</strong>. I have submitted a talk proposal too, though a little late. I am still waiting for the acceptance. The other conference unfortunately clashes with my planned visit to <strong>India</strong>.

Going back to the main topic, I am wondering if we ever had any <strong>Perl Conference in India</strong>. I am aware of active <strong>Perl Mongers</strong> groups but have never heard of their activities.
In the year <strong>2021</strong>, <strong>Will Braswell</strong> reached out to me with a proposal to do <strong>The Perl Conference in India</strong>. Because of <strong>COVID</strong>, we put it on hold then. At the time, we decided to do a month-long tour covering the major cities, spending 2-3 days in each city. I suggested the best time would be <strong>August</strong>, as I would have a long summer break. We would definitely need sponsors; ideally, if we do it under the banner of <strong>The Perl Foundation</strong>, then the reach would be even greater, in my humble opinion. The local <strong>Perl Mongers</strong> groups can be handy in organising the event. I would be eager to know if any <strong>Perl Mongers</strong> group is interested in these events. I would expect the groups down south to be more active, as they are highly aware of technologies in general.

I just realised this is my <strong>130th edition</strong> of the weekly newsletter. This month I also completed <strong>5 years</strong> as co-editor of the weekly newsletter.

Enjoy the rest of the newsletter. Please do share your ideas and suggestions.

-- Your editor: Mohammad S. Anwar.

## Announcements

### [2023 Stack Overflow Developer Survey](https://stackoverflow.az1.qualtrics.com/jfe/form/SV_czLVsbnGnF4Q04e)

Please take the survey and share your Perl experience.

---

## Articles

### [Welcome new contributors with the first-timers-only tag](https://blogs.perl.org/users/dean/2023/05/welcome-new-contributors-with-the-first-timers-only-tag.html)

Nice attempt to encourage first-timers to contribute.

### [SVG px conversion in Perl](https://github.polettix.it/ETOOBUSY/2023/05/08/svg-px-conversion/)

Nice little example showing how to convert px to points in SVG.

### [How to Send and Receive Email with Perl](https://blogs.perl.org/users/den/2023/05/how-to-send-and-receive-email-with-perl.html)

The simplest and easiest way to deal with email. Nice attempt.
### [SVG Text elements](https://github.polettix.it/ETOOBUSY/2023/05/09/svg-text-elements/)

### [SVG to PDF::Collage](https://github.polettix.it/ETOOBUSY/2023/05/10/svg-to-collage/)

A possible way to generate a template for PDF::Collage in a graphical-ish way.

---

## Discussion

### [PTS 2023 - Tux](https://blogs.perl.org/users/tux/2023/05/pts-2023.html)

Another great report by one of the most respected participants. You really don't want to miss it.

### [PTS 2023 - Chad Granum](https://blogs.perl.org/users/chad_exodist_granum/2023/05/pts-2023.html)

A bonus event report for all of us by Chad Granum.

---

## The Weekly Challenge

<a href="https://theweeklychallenge.org/">The Weekly Challenge</a> by <a href="http://www.manwar.org/">Mohammad Anwar</a> will help you step out of your comfort-zone. You can even win prize money of a $50 Amazon voucher by participating in the weekly challenge. We pick one winner at the end of the month from among all of the contributors during the month. The monthly prize is kindly sponsored by Peter Sergeant of <a href="https://perl.careers/">PerlCareers</a>.

### [The Weekly Challenge - 217](https://theweeklychallenge.org/blog/perl-weekly-challenge-217)

Welcome to a new week with a couple of fun tasks: "Sorted Matrix" and "Max Number". If you are new to the weekly challenge, why not join us and have fun every week? For more information, please read the <a href="https://theweeklychallenge.org/faq">FAQ</a>.

### [RECAP - The Weekly Challenge - 216](https://theweeklychallenge.org/blog/recap-challenge-216)

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Registration Number" and "Word Stickers" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

### [Sticky Numbers](https://raku-musings.com/sticky-numbers.html)

The line-by-line description of the source code is very handy for following the process. You even get a bonus link to the official documentation. Keep up the great work.
### [Car Nicknaming](https://dev.to/oldtechaa/perl-weekly-challenge-216-car-nicknaming-4hko)

New language features, never used before, are in action this week. Nice one, well done.

### [PWC216 - Registration Number](https://github.polettix.it/ETOOBUSY/2023/05/11/pwc216-registration-number/)

Too many questions, but the solution is too simple. How? Do check out the solution.

### [PWC216 - Word Stickers](https://github.polettix.it/ETOOBUSY/2023/05/12/pwc216-word-stickers/)

Quite a detailed discussion of the solution, kudos for the effort. Thanks for sharing.

### [The Weekly Challenge 6^3](https://github.com/manwar/perlweeklychallenge-club/tree/master/challenge-216/james-smith#readme)

Find the best use of Perl magic as always. Highly recommended.

### [Perl Weekly Challenge 216: Registration Number](https://blogs.perl.org/users/laurent_r/2023/05/perl-weekly-challenge-216-registration-number.html)

Just one this week, making good use of Raku's power as always. Thanks for your contributions.

### [words, grep and letters!](https://fluca1978.github.io/2023/05/08/PerlWeeklyChallenge216.html)

Two choices, a grep one and a regex one. You pick your favourite. Nice one.

### [Perl Weekly Challenge 216](https://wlmb.github.io/2023/05/08/PWC216/)

A true one-liner in Perl without any gimmicks. Just love it. Thanks for everything.

### [Alphabetical exercises](http://ccgi.campbellsmiths.force9.co.uk/challenge/216)

A compact one-liner in Perl using the cool combination of join, sort, uniq and split. Brilliant work.

### [The Weekly Challenge #216](https://hatley-software.blogspot.com/2023/05/robbie-hatleys-solutions-to-weekly_13.html)

Loved the breakdown of complications in task #2. Nice writing, I must admit.

### [Word Registration](https://blog.firedrake.org/archive/2023/05/The_Weekly_Challenge_216__Word_Registration.html)

Enjoy the Ruby implementation this week with great details. Thanks for your contributions.
### [Perl Weekly Challenge #216](https://github.com/manwar/perlweeklychallenge-club/blob/master/challenge-216/shimon-ben-avraham/raku/ch-1.md)

First blog for the weekly challenge, well crafted. Well done.

### [Letter frequency](https://dev.to/simongreennet/letter-frequency-338m)

Just love the combination of Perl and Python. It reminds me of when I used to contribute. Great work, keep it up.

---

## Perl Tutorial

A section for newbies and for people who need some refreshing of their Perl knowledge. If you have questions or suggestions about the articles, let me know and I'll try to make the necessary changes. The included articles are from the <a href="https://perlmaven.com/perl-tutorial">Perl Maven Tutorial</a> and are part of the <a href="https://leanpub.com/perl-maven">Perl Maven eBook</a>.

---

## Rakudo

### [2023.19 Pakku](https://rakudoweekly.blog/2023/05/08/2023-19-pakku/)

---

## Weekly collections

### [NICEPERL's lists](http://niceperl.blogspot.com/)

<a href="https://niceperl.blogspot.com/2023/05/cdxliii-28-great-cpan-modules-released.html">Great CPAN modules released last week</a>;<br><a href="https://niceperl.blogspot.com/2023/05/dlv-metacpan-weekly-report-assign.html">MetaCPAN weekly report</a>;<br><a href="https://niceperl.blogspot.com/2023/05/dlxxx-stackoverflow-perl-report.html">StackOverflow Perl report</a>.

---

## Events

### [The Perl and Raku Conference 2023](https://tprc2023.sched.com/)

July 11-13, 2023, Toronto, Canada

### [Perl and Koha](https://perlkohacon.fi/)

August 14-18, 2023, Helsinki, Finland

---

## <a href="https://perl.careers/?utm_source=perlweekly&utm_campaign=perlweekly&utm_medium=perlweekly">Perl Jobs by Perl Careers</a>

### [Perl Programmer with Rust Experience - UK Remote](https://job.perl.careers/2oc)

Are you a talented Perl programmer with Rust experience looking to work for a cutting-edge enterprise tech publisher that’s at the forefront of the industry?
Look no further than our client, a renowned publisher that provides unique news and stimulating perspectives on the enterprise tech that powers businesses across the globe.

### [Adventure Awaits! Senior Perl roles in Malaysia, Dubai and Malta](https://job.perl.careers/ga9)

Clever folks know that if you’re lucky, you can earn a living and have an adventure at the same time. Enter our international client: online trading is their game, and they’re looking for Perl folks with passion, drive, and an appreciation for new experiences.

### [UK Remote Perl Programmer](https://job.perl.careers/d5q)

If you’re a talented Perl programmer with a passion for delivering high-quality work and a desire to learn and grow with a small and focused team, we want to hear from you! Our client's tech stack includes Debian, Apache, Nginx, Exim, Redis, MySQL/MariaDB, and PostgreSQL, as well as open-source tools written in Perl (DBIx and Plack), Bash, HTML, CSS, and JavaScript.

---

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics. Want to see more? See the [archives](https://perlweekly.com/archive/) of all the issues. Not yet subscribed to the newsletter? [Join us free of charge](https://perlweekly.com/subscribe.html)!

(C) Copyright [Gabor Szabo](https://szabgab.com/)

The articles are copyright the respective authors.
szabgab
1,468,576
Browser Games With JavaScript
been developing web apps and games, and right now I'm making this Rpg Game that is Made by a...
0
2023-05-15T11:18:44
https://dev.to/anradev/browser-games-with-javascript-2enb
babylonjs, javascript, web3, html
I've been developing web apps and games, and right now I'm making an RPG game built with a JavaScript game engine called Babylon.js
anradev
1,468,581
Building a Delivery Workflow with TypeScript (TS)
What do you do when you’re hungry and there is no way to cook food? Definitely, you rely on a food...
0
2023-05-14T20:00:00
https://orkes.io/blog/building-a-delivery-workflow-with-typescript/
typescript, netflixconductor, orchestration, microservices
What do you do when you’re hungry and there is no way to cook food? Definitely, you rely on a food delivery application. Have you ever wondered how this delivery process works? Well, let me walk you through how Conductor helps orchestrate the delivery process.

In this article, you will learn how to build a delivery workflow using Conductor, an open-source microservice and workflow orchestration framework. Conductor handles the process as a workflow that divides the delivery process into individual blocks. Let’s see this in action now!

## Delivery Workflow

Consider that we get a request in the delivery app to send a package from an origin to a destination. The application has the details of both the registered clients and riders. It should connect the best-fitting rider to deliver the package. So, the application gets the registered riders list, picks the nearest riders, and lets them compete to win the ride.

Looks simple? Yeah! Conductor comes into action here! You can simply build your delivery application by connecting small blocks together using Conductor.

## What you need!

- A list of registered riders.
- A way to let our riders know they have a possible delivery.
- A method for our riders to compete or be the first to select the ride.

## Building the application

Let’s begin to bake our delivery app. First, we need some APIs for operations such as getting the riders list, notifying the riders, and so on. We will make use of [dummy JSON](https://dummyjson.com/), which provides us with fake APIs. In this case, we will use the user API for pulling our registered riders, and for notifying a rider about a possible ride, we will use the posts API.

Since we are creating this workflow as code, instead of using the workflow diagram, let's start with the tests and build our workflow app from scratch. For demonstration purposes, we will be using [Orkes Playground](https://play.orkes.io/), a free Conductor platform.
However, the process would be the same for Netflix Conductor.

## Workflow as Code

### Project Setup

First, you need to set up a project:

1. Create an npm project with `npm init` and install the SDK with `npm i @io-orkes/conductor-javascript`.
2. You'll need to add jest and typescript support. For this, copy and paste the **jest.config.js** and **tsconfig.json** files into your project in the root folder. Then add the following scripts and devDependencies to your **package.json**:

```
"scripts": {
    "test": "jest"
  },
"devDependencies": {
    "@tsconfig/node16": "^1.0.2",
    "@types/jest": "^29.0.3",
    "@types/node": "^17.0.30",
    "@types/node-fetch": "^2.6.1",
    "@typescript-eslint/eslint-plugin": "^5.23.0",
    "@typescript-eslint/parser": "^5.23.0",
    "eslint": "^6.1.0",
    "jest": "^28.1.0",
    "ts-jest": "^28.0.1",
    "ts-node": "^10.7.0",
    "typescript": "^4.6.4"
  },
```

3. Run `npm install` (or `yarn`) to fetch them.

So, now you’ve created your project. As we are creating the workflow as code, next, let's create two files: **mydelivery.ts** and **mydelivery.test.ts**. By writing our code along with the test, you will get instant feedback and know exactly what happens with every step.

## Creating Our Workflow

Let’s begin creating our workflow. Initially, we need to calculate the distance between two points, i.e., the rider and the package to be delivered. We leverage this distance to calculate the shipment cost too. So let's create a workflow that can be reused in both situations. Let the first workflow be **calculate_distance**, which outputs the result of a function.
So in our **mydelivery.ts**, let's update the following code:

```
import {
  generate,
  TaskType,
  OrkesApiConfig,
} from "@io-orkes/conductor-javascript";

export const playConfig: Partial<OrkesApiConfig> = {
  keyId: "your_key_id",
  keySecret: "your_key_secret",
  serverUrl: "https://play.orkes.io/api",
};

export const calculateDistanceWF = generate({
  name: "calculate_distance",
  inputParameters: ["origin", "destination"],
  tasks: [
    {
      type: TaskType.INLINE,
      name: "calculate_distance",
      inputParameters: {
        expression: "12",
      },
    },
  ],
  outputParameters: {
    distance: "${calculate_distance_ref.output.result}",
    identity: "${workflow.input.identity}", // Some identifier for the call will make sense later on
  },
});
```

Now in our test file, create a test that generates the workflow so we can look at it later on the Playground.

```
import {
  orkesConductorClient,
  WorkflowExecutor,
} from "@io-orkes/conductor-javascript";
import { calculateDistanceWF, playConfig } from "./mydelivery";

describe("My Delivery Test", () => {
  const clientPromise = orkesConductorClient(playConfig);
  describe("Calculate distance workflow", () => {
    test("Creates a workflow", async () => {
      // const client = new ConductorClient(); // If you are using Netflix conductor
      const client = await clientPromise;
      const workflowExecutor = new WorkflowExecutor(client);
      await expect(
        workflowExecutor.registerWorkflow(true, calculateDistanceWF)
      ).resolves.not.toThrowError();
      console.log(JSON.stringify(calculateDistanceWF, null, 2));
    });
  });
});
```

Now, run `npm test`. We have just created our first workflow, which basically prints the output of its task. If you look at the generated JSON, you'll notice that there are some additional attributes apart from the ones we’ve given as inputs. That's because the generate function will generate default values, which you can overwrite later. You'll also notice that the output parameter references **"${calculate_distance_ref.output.result}"**, using the generated task reference name.
If you don't specify a _taskReferenceName_, it will generate one by adding **_ref** to the specified name. To reference a task output or a given task, we always use the _taskReferenceName_. Another thing to notice is the true value passed as the first argument of the registerWorkflow function. This flag specifies that the workflow will be overwritten, which is required since we will run our tests repeatedly. Let's create a test to actually run the workflow now. You can add the origin and destination parameters previously known by the workflow definition (Workflow input parameters). We are not using it for now, but it is relevant in the further steps. ``` test("Should calculate distance", async () => { // Pick two random points const origin = { latitude: -34.4810097, longitude: -58.4972602, }; const destination = { latitude: -34.4810097, longitude: -58.491168, }; // const client = new ConductorClient(); // If you are using Netflix conductor const client = await clientPromise; const workflowExecutor = new WorkflowExecutor(client); // Run the workflow passing an origin and a destination const executionId = await workflowExecutor.startWorkflow({ name: calculateDistanceWF.name, version: 1, input: { origin, destination, }, }); const workflowStatus = await workflowExecutor.getWorkflow(executionId, true); expect(workflowStatus?.status).toEqual("COMPLETED"); // For now we expect the workflow output to be our hardcoded value expect(workflowStatus?.output?.distance).toBe(12); }); ``` Now, run `yarn test`, and great, we have our first workflow execution run! ## Calculating Actual Distance Next, we need to calculate the actual or approximate distance between the two points. 
To get the distance between two points in a sphere, we could use the [Haversine](http://www.movable-type.co.uk/scripts/latlong.html) formula, but since we don't want a direct distance (because our riders can't fly :P), we will implement something like [Taxicab geometry](https://en.wikipedia.org/wiki/Taxicab_geometry). ### Calculating distance using an INLINE Task An INLINE task can be utilized in situations where the code is required to be simple. The INLINE task can take input parameters and an expression. If we go back to our **calculate_distance** workflow, it takes no context and returns a hard-coded object. Now, let’s modify our inline task to take the origin and destination to calculate the approximate distance. ``` export const calculateDistanceWF = generate({ name: "calculate_distance", inputParameters: ["origin", "destination"], tasks: [ { name: "calculate_distance", type: TaskType.INLINE, inputParameters: { fromLatitude: "${workflow.input.from.latitude}", fromLongitude: "${workflow.input.from.longitude}", toLatitude: "${workflow.input.to.latitude}", toLongitude: "${workflow.input.to.longitude}", expression: function ($: any) { return function () { /** * Converts from degrees to Radians */ function degreesToRadians(degrees: any) { return (degrees * Math.PI) / 180; } /** * * Returns total latitude/longitude distance * */ function harvisineManhatam(elem: any) { var EARTH_RADIUS = 6371; var a = Math.pow(Math.sin(elem / 2), 2); // sin^2(delta/2) var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a)); // 2* atan2(sqrt(a),sqrt(1-a)) return EARTH_RADIUS * c; } var deltaLatitude = Math.abs( degreesToRadians($.fromLatitude) - degreesToRadians($.toLatitude) ); var deltaLongitude = Math.abs( degreesToRadians($.fromLongitude) - degreesToRadians($.toLongitude) ); var latitudeDistance = harvisineManhatam(deltaLatitude); var longitudeDistance = harvisineManhatam(deltaLongitude); return Math.abs(latitudeDistance) + Math.abs(longitudeDistance); }; }, }, }, ], 
outputParameters: {
    distance: "${calculate_distance_ref.output.result}",
  },
});
```

If we run the test now, it will fail, because the result is no longer the hard-coded 12 from the first version of **calculate_distance**. Following Red-Green-Refactor, let's adjust the test: if we pick the origin and destination to be the same point, the expected distance is 0. The assertion also becomes **.toEqual(0)**, since the task returns an object. So we can fix that in the test.

**Note**: Key takeaway from the above case: the expression is written as plain ES5 JavaScript in the editor, not as a string. However, it cannot use closures over the rest of the file’s code, and the returned function has to be written in ES5; otherwise, the tests will fail.

Running the test now registers a new workflow, overwriting the old one.

### Finding Best Rider

Now that we have the **calculate_distance** workflow, we can think of it as a function that can later be invoked from a different project/file. Let's create workflow number two, **findNearByRiders**, which will hit a microservice that pulls the registered riders list.

### Hitting Microservice

We can use the HTTP task to hit something as simple as an HTTP microservice. The HTTP task takes some input parameters and hits an endpoint with our configuration — similar to cURL or Postman. We will be using [dummy json](https://dummyjson.com/users), which returns a list of users with an address. Consider this address as the last reported address of each rider.
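As a side note, the distance logic from the inline task above can be sanity-checked outside of Conductor. The following is a standalone TypeScript re-implementation of the same math; the function names here are my own, not part of the SDK or the workflow definitions:

```typescript
// Standalone re-implementation of the inline task's distance logic,
// for sanity-checking outside of Conductor.
const EARTH_RADIUS_KM = 6371;

function degreesToRadians(degrees: number): number {
  return (degrees * Math.PI) / 180;
}

// Haversine applied to a single-axis delta, as in the inline expression.
function axisDistance(deltaRadians: number): number {
  const a = Math.pow(Math.sin(deltaRadians / 2), 2);
  const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
  return EARTH_RADIUS_KM * c;
}

// "Taxicab" distance: latitude distance plus longitude distance.
function taxicabDistance(
  from: { latitude: number; longitude: number },
  to: { latitude: number; longitude: number }
): number {
  const deltaLat = Math.abs(
    degreesToRadians(from.latitude) - degreesToRadians(to.latitude)
  );
  const deltaLon = Math.abs(
    degreesToRadians(from.longitude) - degreesToRadians(to.longitude)
  );
  return axisDistance(deltaLat) + axisDistance(deltaLon);
}

// For two identical points the result is exactly 0,
// matching the .toEqual(0) assertion in the adjusted test.
console.log(taxicabDistance(
  { latitude: -34.4810097, longitude: -58.4972602 },
  { latitude: -34.4810097, longitude: -58.4972602 }
)); // 0
```

Running this locally is a quick way to confirm the formula before embedding it as an inline expression in the workflow.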
``` export const nearByRiders = generate({ name: "findNearByRiders", tasks: [ { type: TaskType.HTTP, name: "get_users", taskReferenceName: "get_users_ref", inputParameters: { http_request: { uri: "http://dummyjson.com/users", method: "GET", }, }, }, ], outputParameters: { possibleRiders: "${get_users_ref.output.response.body.users}", }, }); ``` Our **findNearByRiders** workflow hits an endpoint and returns the list of all available riders. Let's write the test. ``` describe("NearbyRiders", () => { // As before, we create the workflow. test("Creates a workflow", async () => { const client = await clientPromise; const workflowExecutor = new WorkflowExecutor(client); await expect( workflowExecutor.registerWorkflow(true, nearByRiders) ).resolves.not.toThrowError(); }); test("Should return all users with latest reported address", async () => { const client = await clientPromise; const workflowExecutor = new WorkflowExecutor(client); const executionId = await workflowExecutor.startWorkflow({ name: nearByRiders.name, input: { place: { latitude: -34.4810097, longitude: -58.4972602, }, }, version: 1, }); //Let's wait for the response... await new Promise((r) => setTimeout(() => r(true), 2000)); const workflowStatus = await client.workflowResource.getExecutionStatus( executionId, true ); expect(workflowStatus.status).toBe("COMPLETED"); expect(workflowStatus?.output?.possibleRiders.length).toBeGreaterThan(0); console.log("Riders", JSON.stringify(workflowStatus?.output, null, 2)); }); }); ``` If we run our test, it should pass since the number of users is around 30. Looking at the printed output, you can see that the whole structure is being returned by the endpoint. Our workflow is incomplete because it only returns the list of every possible rider. But we need to get the distance between the riders and the packages. For this, we must run our previous workflow **calculate_distance** for every rider on the fetched list. Let’s prepare the data to be passed to the next workflow. 
Here, we utilize the JQ Transform task, which runs a JQ query over the JSON data.

### JSON_JQ_TRANSFORM Task

Let's add the JQ task.

```
export const nearByRiders = generate({
  name: "findNearByRiders",
  tasks: [
    {
      type: TaskType.HTTP,
      name: "get_users",
      taskReferenceName: "get_users_ref",
      inputParameters: {
        http_request: {
          uri: "http://dummyjson.com/users",
          method: "GET",
        },
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "summarize",
      inputParameters: {
        users: "${get_users_ref.output.response.body.users}",
        queryExpression:
          ".users | map({identity:{id,email}, to:{latitude:.address.coordinates.lat, longitude:.address.coordinates.lng}} + {from:{latitude:${workflow.input.place.latitude},longitude:${workflow.input.place.longitude}}})",
      },
    },
  ],
  outputParameters: {
    possibleRiders: "${get_users_ref.output.response.body.users}",
  },
});
```

From the task definition, you can see that the JQ variable `users` is bound to the output of the HTTP task, and the query then extracts each user's address. The expected result should have the structure {identity:{id,email}, to:{latitude,longitude}, from:{latitude,longitude}}.

### Dot Map method

At this point, we have an array with all possible riders and a workflow that calculates the distance between two points. We must combine these to calculate the distance between the package and each rider so that the nearby riders can be chosen. In JavaScript, when transforming every item in an array, we usually leverage the map method, which takes a function that is applied to every item. Here, we need to map our riders through a “function” — the distance calculation. Let’s create a dot-map workflow for this. This workflow takes the array of riders as an input parameter, plus the workflow ID of **calculate_distance** to run on each rider. Note that this new workflow will work for any array and workflow ID provided and is not limited to the riders and the **calculate_distance** workflow.
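Before looking at the Conductor definition and its test, it may help to see what **workflowDotMap** amounts to conceptually: an asynchronous map, where each array item is handed to one sub-workflow instance (FORK_JOIN_DYNAMIC) and the results are joined back into an array (JOIN). A rough, SDK-free sketch of that idea, where an ordinary async function stands in for the sub-workflow:

```typescript
// SDK-free analogy of workflowDotMap: fan out one async "sub-workflow"
// per array item, then join the results into an output array.
async function dotMap<T, R>(
  inputArray: T[],
  mapperWorkflow: (item: T) => Promise<R>
): Promise<R[]> {
  // Promise.all plays the role of the JOIN task.
  return Promise.all(inputArray.map((item) => mapperWorkflow(item)));
}

// Example: a trivial stand-in for the calculate_distance sub-workflow.
const double = async (n: number) => n * 2;

dotMap([1, 2, 3], double).then((result) => {
  console.log(result); // [2, 4, 6]
});
```

The Conductor version below achieves the same fan-out/join with JQ tasks that build the dynamic task list at runtime.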
``` describe("Mapper Test", () => { test("Creates a workflow", async () => { const client = await clientPromise; await expect( client.metadataResource.create(workflowDotMap, true) ).resolves.not.toThrowError(); }); test("Gets existing workflow", async () => { const client = await clientPromise; const wf = await client.metadataResource.get(workflowDotMap.name); expect(wf.name).toEqual(workflowDotMap.name); expect(wf.version).toEqual(workflowDotMap.version); }); test("Can map over an array using a workflow", async () => { const client = await clientPromise; const workflowExecutor = new WorkflowExecutor(client); const from = { latitude: -34.4810097, longitude: -58.4972602, }; const to = { latitude: -34.494858, longitude: -58.491168, }; const executionId = await workflowExecutor.startWorkflow({ name: workflowDotMap.name, version: 1, input: { inputArray: [{ from, to, identity: "js@js.com" }], mapperWorkflowId: "calculate_distance", }, }); await new Promise((r) => setTimeout(() => r(true), 1300)); const workflowStatus = await client.workflowResource.getExecutionStatus( executionId, true ); expect(workflowStatus?.status).toBe("COMPLETED"); expect(workflowStatus?.output?.outputArray).toEqual( expect.arrayContaining([ expect.objectContaining({ distance: 2.2172824347556963, }), ]) ); }); }); ``` #### Workflow ``` export const workflowDotMap = generate({ name: "workflowDotMap", inputParameters: ["inputArray", "mapperWorkflowId"], tasks: [ { type: TaskType.JSON_JQ_TRANSFORM, name: "count", taskReferenceName: "count_ref", inputParameters: { input: "${workflow.input.inputArray}", queryExpression: ".[] | length", }, }, { type: TaskType.JSON_JQ_TRANSFORM, name: "dyn_task_builder", taskReferenceName: "dyn_task_builder_ref", inputParameters: { input: {}, queryExpression: 'reduce range(0,${count_ref.output.result}) as $f (.; .dynamicTasks[$f].subWorkflowParam.name = "${workflow.input.mapperWorkflowId}" | .dynamicTasks[$f].taskReferenceName = "mapperWorkflow_wf_ref_\\($f)" | 
.dynamicTasks[$f].type = "SUB_WORKFLOW")', }, }, { type: TaskType.JSON_JQ_TRANSFORM, name: "dyn_input_params_builder", taskReferenceName: "dyn_input_params_builder_ref", inputParameters: { taskList: "${dyn_task_builder_ref.output.result}", input: "${workflow.input.inputArray}", queryExpression: 'reduce range(0,${count_ref.output.result}) as $f (.; .dynamicTasksInput."mapperWorkflow_wf_ref_\\($f)" = .input[$f])', }, }, { type: TaskType.FORK_JOIN_DYNAMIC, inputParameters: { dynamicTasks: "${dyn_task_builder_ref.output.result.dynamicTasks}", dynamicTasksInput: "${dyn_input_params_builder_ref.output.result.dynamicTasksInput}", }, }, { type: TaskType.JOIN, name: "join", taskReferenceName: "join_ref", }, { type: TaskType.JSON_JQ_TRANSFORM, name: "to_array", inputParameters: { objValues: "${join_ref.output}", queryExpression: ".objValues | to_entries | map(.value)", }, }, ], outputParameters: { outputArray: "${to_array_ref.output.result}", }, }); ``` - From the above example workflow, we get the number of arrays. - At "dyn_task_builder", we create a SubWorkflow task template for every item within the array. - At "dyn_input_params_builder", we prepare the parameters to pass on to each SubWorkflow. - Using FORK_JOIN_DYNAMIC, we create each task using our previously created template and pass the corresponding parameters. After the join operation, use a JSON_JQ_TRANSFORM task to extract the results and return an array with the transformations. ## Calculating distance between package and riders Given that we now have the origin and destination points, let us modify the **NearbyRiders** workflow so that using the riders' last reported locations, we get the distance between the package and the riders. To achieve this, we pull the riders from the microservice, calculate the distance to the package and sort them by the distance from the package. ``` describe("NearbyRiders", () => { // As before, we create the workflow. 
test("Creates a workflow", async () => { const client = await clientPromise; const workflowExecutor = new WorkflowExecutor(client); await expect( workflowExecutor.registerWorkflow(true, nearByRiders) ).resolves.not.toThrowError(); }); // First, let's test that the API responds to all the users. test("Should return all users with latest reported address", async () => { const client = await clientPromise; const workflowExecutor = new WorkflowExecutor(client); const executionId = await workflowExecutor.startWorkflow({ name: nearByRiders.name, input: { place: { latitude: -34.4810097, longitude: -58.4972602, }, }, version: 1, }); // Let’s wait for the response... await new Promise((r) => setTimeout(() => r(true), 2000)); const workflowStatus = await client.workflowResource.getExecutionStatus( executionId, true ); expect(workflowStatus.status).toBe("COMPLETED"); expect(workflowStatus?.output?.possibleRiders.length).toBeGreaterThan(0); }); // So now we need to specify input parameters, else we won't know the distance to the package test("User object should contain distance to package", async () => { const client = await clientPromise; const workflowExecutor = new WorkflowExecutor(client); const executionId = await workflowExecutor.startWorkflow({ name: nearByRiders.name, input: { place: { latitude: -34.4810097, longitude: -58.4972602, }, }, version: 1, }); // Let’s wait for the response... 
await new Promise((r) => setTimeout(() => r(true), 2000));
    const nearbyRidersWfResult =
      await client.workflowResource.getExecutionStatus(executionId, true);
    expect(nearbyRidersWfResult.status).toBe("COMPLETED");
    nearbyRidersWfResult.output?.possibleRiders.forEach((re: any) => {
      expect(re).toHaveProperty("distance");
      expect(re).toHaveProperty("rider");
    });
  });
});
```

#### Workflow

```
export const nearByRiders = generate({
  name: "findNearByRiders",
  inputParameters: ["place"],
  tasks: [
    {
      type: TaskType.HTTP,
      name: "get_users",
      taskReferenceName: "get_users_ref",
      inputParameters: {
        http_request: {
          uri: "http://dummyjson.com/users",
          method: "GET",
        },
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "summarize",
      inputParameters: {
        users: "${get_users_ref.output.response.body.users}",
        queryExpression:
          ".users | map({identity:{id,email}, to:{latitude:.address.coordinates.lat, longitude:.address.coordinates.lng}} + {from:{latitude:${workflow.input.place.latitude},longitude:${workflow.input.place.longitude}}})",
      },
    },
    {
      type: TaskType.SUB_WORKFLOW,
      name: "distance_to_riders",
      subWorkflowParam: {
        name: "workflowDotMap",
        version: 1,
      },
      inputParameters: {
        inputArray: "${summarize_ref.output.result}",
        mapperWorkflowId: "calculate_distance",
      },
    },
    {
      type: TaskType.JSON_JQ_TRANSFORM,
      name: "riders_picker",
      taskReferenceName: "riders_picker_ref",
      inputParameters: {
        ridersWithDistance: "${distance_to_riders_ref.output.outputArray}",
        queryExpression:
          ".ridersWithDistance | map( {distance:.distance, rider:.identity}) | sort_by(.distance) ",
      },
    },
  ],
  outputParameters: {
    possibleRiders: "${riders_picker_ref.output.result}",
  },
});
```

This will give us a list of riders with their distance to the package, sorted by distance from the package.

## Picking a Rider

Now we have all the required data, such as package origin/destination, riders, and their distance from the package. Next, we’ll pre-select N riders, notify them of the possible ride, and ensure that a rider picks the ride.
And for this last part, we will create a worker who will randomly select one. ``` export const createRiderRaceDefinition = (client: ConductorClient) => client.metadataResource.registerTaskDef([ { name: "rider_race", description: "Rider race", retryCount: 3, timeoutSeconds: 3600, timeoutPolicy: "TIME_OUT_WF", retryLogic: "FIXED", retryDelaySeconds: 60, responseTimeoutSeconds: 600, rateLimitPerFrequency: 0, rateLimitFrequencyInSeconds: 1, ownerEmail: "youremail@example.com", pollTimeoutSeconds: 3600, }, ]); export const pickRider = generate({ name: "pickRider", inputParameters: ["targetRiders", "maxCompetingRiders"], tasks: [ { name: "do_while", taskReferenceName: "do_while_ref", type: TaskType.DO_WHILE, inputParameters: { amountOfCompetingRiders: "${workflow.input.maxCompetingRiders}", riders: "${workflow.input.targetRiders}", }, loopCondition: "$.do_while_ref['iteration'] < $.amountOfCompetingRiders", loopOver: [ { taskReferenceName: "assigner_ref", type: TaskType.INLINE, inputParameters: { riders: "${workflow.input.targetRiders}", currentIteration: "${do_while_ref.output.iteration}", expression: ($: { riders: { distance: number; rider: { id: number; email: string }; }[]; currentIteration: number; }) => function () { var currentRider = $.riders[$.currentIteration - 1]; return { distance: currentRider.distance, riderId: currentRider.rider.id, riderEmail: currentRider.rider.email, }; }, }, }, { type: TaskType.HTTP, name: "notify_riders_of_ride", taskReferenceName: "notify_riders_of_ride", inputParameters: { http_request: { uri: "http://dummyjson.com/posts/add", method: "POST", body: { title: "Are you available to take a ride of a distance of ${assigner_ref.output.result.distance} km from you", userId: "${assigner_ref.output.result.riderId}", }, }, }, }, ], }, { type: TaskType.SIMPLE, name: "rider_race", inputParameters: { riders: "${workflow.input.targetRiders}", }, }, ], outputParameters: { selectedRider: "${rider_race_ref.output.selectedRider}", }, }); ``` To 
select the rider and notify them, we use the DO_WHILE task. As a simulation, we let the riders know that there is a ride they may be interested in, notifying them in order from the nearest rider to the farthest. Finally, we simulate with a SIMPLE task that a rider has accepted our ride. For this, we need to register the task first; by doing so, we let Conductor know that a worker will be handling it. The actual [worker](https://orkes.io/content/getting-started/first-workflow-application) needs to be set up for the scheduled tasks to be executed. Otherwise, the workflow will sit in a SCHEDULED state, waiting for a worker that never picks up the task.

## Setting up Worker

To implement the worker, we need to create an object of type RunnerArgs. The worker takes a taskDefName, which should match our SIMPLE task's name. You may have multiple workers waiting for the same task; however, the first one to poll for work with that task name gets the job done.

```
export const riderRespondWorkerRunner = (client: ConductorClient) => {
  const firstRidertoRespondWorker: RunnerArgs = {
    taskResource: client.taskResource,
    worker: {
      taskDefName: "rider_race",
      execute: async ({ inputData }) => {
        const riders = inputData?.riders;
        const [aRider] = riders.sort(() => 0.5 - Math.random());
        return {
          outputData: {
            selectedRider: aRider.rider,
          },
          status: "COMPLETED",
        };
      },
    },
    options: {
      pollInterval: 10,
      domain: undefined,
      concurrency: 1,
      workerID: "",
    },
  };
  const taskManager = new TaskRunner(firstRidertoRespondWorker);
  return taskManager;
};
```

### Workflow

```
// Having the nearby riders, we want to filter out those willing to take the ride.
// For this, we will simulate a POST where we ask the rider if he is willing to take the ride
describe("PickRider", () => {
  test("Creates a workflow", async () => {
    const client = await clientPromise;
    await expect(
      client.metadataResource.create(pickRider, true)
    ).resolves.not.toThrowError();
  });
  test("Every iteration should have the current driver", async () => {
    const client = await clientPromise;
    await createRiderRaceDefinition(client);

    const runner = riderRespondWorkerRunner(client);
    runner.startPolling();
    // Our ‘N’ pre-selected riders
    const maxCompetingRiders = 5;
    const targetRiders = [
      {
        distance: 12441.284548668005,
        rider: {
          id: 15,
          email: "kminchelle@qq.com",
        },
      },
      {
        distance: 16211.662539905119,
        rider: {
          id: 8,
          email: "ggude7@chron.com",
        },
      },
      {
        distance: 17435.548525470404,
        rider: {
          id: 29,
          email: "jissetts@hostgator.com",
        },
      },
      {
        distance: 17602.325904122146,
        rider: {
          id: 20,
          email: "aeatockj@psu.edu",
        },
      },
      {
        distance: 17823.508069312982,
        rider: {
          id: 3,
          email: "rshawe2@51.la",
        },
      },
      {
        distance: 17824.39318092907,
        rider: {
          id: 7,
          email: "dpettegre6@columbia.edu",
        },
      },
      {
        distance: 23472.94011516013,
        rider: {
          id: 26,
          email: "lgronaverp@cornell.edu",
        },
      },
    ];
    const workflowExecutor = new WorkflowExecutor(client);
    const executionId = await workflowExecutor.startWorkflow({
      name: pickRider.name,
      input: {
        maxCompetingRiders,
        targetRiders,
      },
      version: 1,
    });
    await new Promise((r) => setTimeout(() => r(true), 2500));

    const workflowStatus = await client.workflowResource.getExecutionStatus(
      executionId,
      true
    );
    expect(workflowStatus.status).toEqual("COMPLETED");
    // We check our task and select the number of riders we are after.
const doWhileTaskResult = workflowStatus?.tasks?.find( ({ taskType }) => taskType === TaskType.DO_WHILE ); expect(doWhileTaskResult?.outputData?.iteration).toBe(maxCompetingRiders); expect(workflowStatus?.output?.selectedRider).toBeTruthy(); runner.stopPolling(); }); }); ``` ## Baking Delivery App - Combining blocks Finally, we have all our ingredients ready. Now, let’s bake our delivery app together. In a nutshell, when we have a client with a package request with the origin and destination points, we need to pick the best rider to deliver the package from the origin to the destination. As a bonus, let’s compute the delivery cost and make it less expensive if our client is paying by card instead of cash. So, we run the **nearbyRiders** workflow passing the origin as an input parameter. This would give a list of possible riders, of which one would be picked based on “who answers first”. Next, we calculate the distance from the origin to the destination to compute the cost. Therefore, the workflow delivers the output with the selected rider and the shipping cost. 
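As an aside, the pricing rule we are about to wire into the SWITCH task can be expressed as a small plain function. The rates here (20 or 40 per unit of distance plus a base fee of 20) are the illustrative values used by the workflow's inline expressions, not real pricing:

```typescript
type PaymentMethod = "card" | "cash";

// Mirrors the SWITCH task: card payments get the cheaper per-distance rate.
// Rates are illustrative values, matching the workflow's inline expressions.
function computeTotalCost(distance: number, paymentMethod: PaymentMethod): number {
  const BASE_FEE = 20;
  const perUnit = paymentMethod === "card" ? 20 : 40;
  return distance * perUnit + BASE_FEE;
}

console.log(computeTotalCost(2.5, "card")); // 70
console.log(computeTotalCost(2.5, "cash")); // 120
```

In the workflow, the same branch is selected by a value-param SWITCH on `paymentMethod`, and the result is stored via SET_VARIABLE.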
### Workflow

```
export const deliveryWorkflow = generate({
  name: "deliveryWorkflow",
  inputParameters: ["origin", "packageDestination", "client", "paymentMethod"],
  tasks: [
    {
      taskReferenceName: "possible_riders_ref",
      type: TaskType.SUB_WORKFLOW,
      subWorkflowParam: {
        version: nearByRiders.version,
        name: nearByRiders.name,
      },
      inputParameters: {
        place: "${workflow.input.origin}",
      },
    },
    {
      taskReferenceName: "pick_a_rider_ref",
      type: TaskType.SUB_WORKFLOW,
      subWorkflowParam: {
        version: pickRider.version,
        name: pickRider.name,
      },
      inputParameters: {
        targetRiders: "${possible_riders_ref.output.possibleRiders}",
        maxCompetingRiders: 5,
      },
    },
    {
      taskReferenceName: "calculate_package_distance_ref",
      type: TaskType.SUB_WORKFLOW,
      subWorkflowParam: {
        version: calculateDistanceWF.version,
        name: calculateDistanceWF.name,
      },
      inputParameters: {
        from: "${workflow.input.origin}",
        to: "${workflow.input.packageDestination}",
        identity: "commonPackage",
      },
    },
    {
      type: TaskType.SWITCH,
      name: "compute_total_cost",
      evaluatorType: "value-param",
      inputParameters: {
        value: "${workflow.input.paymentMethod}",
      },
      expression: "value",
      decisionCases: {
        card: [
          {
            type: TaskType.INLINE,
            taskReferenceName: "card_price_ref",
            inputParameters: {
              distance: "${calculate_package_distance_ref.output.distance}",
              expression: ($: { distance: number }) =>
                function () {
                  return $.distance * 20 + 20;
                },
            },
          },
          {
            type: TaskType.SET_VARIABLE,
            inputParameters: {
              totalPrice: "${card_price_ref.output.result}",
            },
          },
        ],
      },
      defaultCase: [
        {
          type: TaskType.INLINE,
          taskReferenceName: "non_card_price_ref",
          inputParameters: {
            distance: "${calculate_package_distance_ref.output.distance}",
            expression: ($: { distance: number }) =>
              function () {
                return $.distance * 40 + 20;
              },
          },
        },
        {
          type: TaskType.SET_VARIABLE,
          inputParameters: {
            totalPrice: "${non_card_price_ref.output.result}",
          },
        },
      ],
    },
  ],
  outputParameters: {
    rider: "${pick_a_rider_ref.output.selectedRider}",
    totalPrice: "${workflow.variables.totalPrice}",
  },
});
```

## Wrapping Up

And our app is finally ready. Building an app this way resembles ordinary app development, but here we put together small building blocks to form one large workflow. Following this article along with [Orkes Playground](https://play.orkes.io/), you can seamlessly visualize the building blocks. You can make further improvements to the application by focusing on a particular block without losing the perspective of the application as a whole.

You can test out Conductor for free in [Orkes Playground](https://play.orkes.io/), or if you’re looking for a cloud version, you may have a sneak peek at [Orkes Cloud](https://orkes.io/cloud/).
rizafarheen
1,468,713
Angular 16 Signals a new way of managing application state.
Before going into the topic let's brief …How Angular currently deals with reactive state...
0
2023-05-15T13:31:06
https://dev.to/imbhanu47/angular-16-signals-a-new-way-of-managing-application-state-2aim
angular16, angular, solidjs, javascript
**Before going into the topic, let's briefly review how Angular currently deals with reactive state management.**

- We all know that in Angular, the primary way to manage state changes is through services, components, and data binding using properties and events, or RxJS. Angular provides features like two-way data binding, event binding, and property binding to facilitate the flow of data between components and their templates.
- On the other hand, [Solid.js](https://www.solidjs.com/) is a separate JavaScript library that introduces the concept of "Signals" for managing reactive state in web applications. It is not directly related to Angular. But now, in Angular 16, the concept of signals (still under developer preview) has been introduced in `@angular/core`.

**What is this Signal actually…?**

- Signals are a new way of managing state changes in Angular applications, inspired by Solid.js. Signals are functions that return a value (`get()`) and can be updated by calling them with a new value (`set()`). Signals can also depend on other signals, creating a reactive value graph that automatically updates when any dependency changes. Signals can be used with RxJS observables, which are still supported in Angular v16, to create powerful and declarative data flows.
- Signals offer several advantages over the traditional change detection mechanism in Angular, which relies on Zone.js to monkey-patch browser APIs and trigger change detection globally. Signals allow you to run change detection only in affected components, without traversing the entire component tree or using Zone.js. This improves runtime performance and reduces complexity.

I know this theory is boring. Let's dive into the action.

Let's say you have an e-commerce application where users can add items to their shopping cart. You want to display the total price of the items and update it every time a new item is added or removed.
Here's how you can use Signals to achieve this:

```
@Component({
  selector: 'my-cart',
  template: `
    <ul>
      <li *ngFor="let item of itemList()">
        {{item.name}} - ${{item.price}}
        <button (click)="removeItem(item)">Remove</button>
      </li>
    </ul>
    Total Price: ${{totalPrice()}}
  `,
})
export class CartComponent {
  items = [
    { name: 'Product A', price: 10 },
    { name: 'Product B', price: 15 },
    { name: 'Product C', price: 20 },
  ];

  // Define a signal for the list of items
  itemList = signal(this.items);

  // Define a computed value for the total price
  totalPrice = computed(() => {
    return this.itemList().reduce((acc, curr) => acc + curr.price, 0);
  });

  removeItem(item) {
    // Update the itemList signal by removing the selected item
    this.itemList.set(this.itemList().filter((i) => i !== item));
  }
}
```

In this example, we define a signal `itemList` for the list of items in the cart and a computed value `totalPrice` that depends on `itemList`. Whenever an item is removed from the cart, we update the `itemList` signal, which triggers the re-calculation of `totalPrice`. Note that the template iterates over `itemList()` rather than the plain `items` array, so the view updates whenever the signal changes.

> Using `computed()`, we can recalculate several other signal values when some signal changes. Any change to a signal notifies all dependents that listen to it.

```
// To set the value of a signal, you call the signal function like this:
variableName = signal<type>(value);

// And to get the value out of the signal, you call variableName as a function:
<p>{{ variableName() }}</p>
```

To change the value of a signal, there are two methods available: `update` and `mutate`.

```
variableName = signal<any>({ a: 'Test' });

variableName.mutate(currentValue => currentValue.b = 'Another Test');

// or

variableName = signal<CustomerItem[]>([{ customer: 'Bhanu' }]);

variableName.mutate(customerArray => customerArray.push({ customer: 'Prakash' }));
```

**Effect**

Effects are a concept that developers might already know from NGRX.
Besides just reading the value by calling the signal by name, like `variableName()`, effects trigger an action that has no impact on the signal's value but is executed whenever the value of the signal changes. The effect is also triggered when the function is executed for the first time. This means it is not only triggered once the signal's value changes; it is executed the first time the code runs over the effect.

```
public weatherData$: Observable<WeatherData>;
public dateSignal = signal<number>(Date.now());

public effectRef = effect(() => {
  this.weatherData$ = this.httpClient.get<WeatherData>('/weatherData?date=' + this.dateSignal());
});
```

If an effect contains an if/else statement with one signal in the if branch and another signal in the else branch, it registers with both signals. It is only executed when one of those two signals changes; it is not executed when a third signal that is not mentioned in the effect changes.

The `effect()` function returns an Effect. It can be manually scheduled or destroyed. There are three methods on the Effect:

1. `schedule()`: Schedule the effect for manual execution.
2. `destroy()`: Shut down the effect, removing it from any upcoming scheduled executions.
3. `consumer`: Direct access to the effect's Consumer for advanced use cases.

**computed**

The computed function is like an effect. The difference is that it returns a new (and immutable) signal of type `Signal` instead of an `Effect`. This means that we can recalculate several other signal values if some signal changes.
```
import { Component, computed, WritableSignal, signal, Signal } from '@angular/core';

@Component({
  selector: 'signal-test-component',
  template: '{{valueX()}}'
})
export class SignalTestComponent {
  public valueX: WritableSignal<number> = signal<number>(5);
  public valueY: WritableSignal<number> = signal<number>(5);
  public valueZ: Signal<number>;

  constructor() {
    console.log('The number is: ' + this.valueX());
    this.valueZ = computed(() => this.valueX() + this.valueY());
  }
}
```

`valueZ` will always depend on `valueX` and `valueY` and therefore has no methods to update or mutate; it does not need them. That's the difference between a `WritableSignal` and a `Signal`.

A nice visual explanation by [TeckShareSkk](https://medium.com/r/?url=https%3A%2F%2Fyoutu.be%2Fn5023Q7vGr0).

Sample e-commerce app using [Angular 16 Signals](https://medium.com/r/?url=https%3A%2F%2Fgithub.com%2Fuibhanu5%2FAngular-16-Signals%2Ftree%2Fmaster).

_I hope you find this post helpful. Thanks for reading._
imbhanu47
1,468,731
React Basics Part 01 -- Switching Components with the Router Based on the URL
Install the Router npm install react-router-dom @types/react-router-dom added 15 packages...
22,994
2023-05-15T15:24:08
https://dev.to/kaede_io/react-ji-chu-part-01-url-niying-zite-router-dekonponentowochu-sifen-keru-38k3
react, router
## Installing the Router

```bash
npm install react-router-dom @types/react-router-dom

added 15 packages in 2s
```

Install react-router-dom and its types.

---

## Switching Components with the Router in App

App.tsx

```ts
import './App.css';
import {
  BrowserRouter,
  Route,
  Routes,
} from "react-router-dom"

const App = () => {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/test" element={<Test />} />
      </Routes>
    </BrowserRouter>
  );
}

export const Home = () => <h2>Home</h2>
export const Test = () => <h2>Test</h2>

export default App;
```

* BrowserRouter
* Routes
* Route / -> Home
* Route /test -> Test

As shown here, wrap the Route components inside Routes and BrowserRouter.

---

## The Result

With this, the browser renders:

1. Home at / (root)
   * ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tig1wdwx0f48cnfmadj2.png)
2. Test at /test
   * ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00kgbx12zm7ja7xhq7k1.png)
kaede_io
1,468,771
Do you really need to be good at Mathematics to be successful at programming?
When it comes to the world of programming, there has always been a longstanding debate about the...
0
2023-05-15T14:45:08
https://dev.to/agentebimene/do-you-really-need-to-be-good-at-mathematics-to-be-successful-at-programming-1mb0
webdev, programming, beginners, productivity
When it comes to the world of programming, there has always been a longstanding debate about the importance of mathematics in becoming a successful programmer. Some argue that a strong foundation in mathematics is crucial for mastering programming skills, while others believe that mathematics is not necessarily a prerequisite for becoming proficient in coding. So, do you really need to be exceptionally good in mathematics to excel in programming? Let’s explore this question and shed light on the relationship between mathematics and programming. First and foremost, it is important to acknowledge that mathematics and programming share several fundamental concepts. Both disciplines involve logic, problem-solving, and critical thinking. The ability to analyze problems, break them down into smaller components, and formulate logical solutions is a skillset that is valuable in both mathematics and programming. Furthermore, mathematical principles like algebra, calculus, and discrete mathematics underpin various aspects of programming, such as algorithms, data structures, and computational complexity. However, it is essential to note that programming is a vast field, and not all programming tasks require advanced mathematical knowledge. Many everyday programming tasks involve creating user interfaces, developing web applications, or building databases, where a solid understanding of mathematics may not be directly applicable. In these cases, programming proficiency relies more on practical coding skills, problem-solving abilities, and familiarity with programming languages and frameworks. Moreover, programming languages themselves have evolved to provide abstractions and libraries that handle complex mathematical operations, making it easier for programmers to work without delving deeply into the underlying mathematics. 
High-level programming languages like Python, JavaScript, and Ruby enable developers to focus on the logical structure of their code without needing an in-depth understanding of mathematical algorithms. That being said, there are certainly areas of programming where advanced mathematics plays a crucial role. Fields like scientific computing, data science, machine learning, cryptography, and game development require a deeper understanding of mathematical concepts. In these domains, algorithms, statistical analysis, linear algebra, and calculus become indispensable tools for solving complex problems. Proficiency in mathematics can help programmers optimize algorithms, develop efficient models, and make data-driven decisions. It is worth mentioning that while mathematics can enhance problem-solving skills and provide valuable tools for certain programming tasks, it is not the only path to success in programming. Programming also requires creativity, logical reasoning, attention to detail, and an ability to think algorithmically. These skills can be honed through practice, experience, and a deep understanding of programming principles, regardless of mathematical aptitude. Ultimately, the answer to the question of whether you need to be really good in mathematics to be good at programming depends on the specific area of programming you’re interested in. If you aspire to work in specialized fields where mathematical knowledge is essential, investing time in understanding and mastering relevant mathematical concepts will undoubtedly benefit you. However, for many other areas of programming, while a basic understanding of mathematics is helpful, it is not necessarily a prerequisite for becoming a skilled programmer. While mathematics and programming are interconnected disciplines, being really good in mathematics is not an absolute requirement to excel in programming. 
Strong mathematical skills can be advantageous in certain specialized fields of programming, but practical coding abilities, problem-solving aptitude, and a deep understanding of programming principles are equally vital. With determination, practice, and a love for learning, anyone can become proficient in programming, regardless of their mathematical prowess. Furthermore, it is important to remember that programming is a collaborative field. Many successful software development projects involve teams with diverse skill sets, including programmers with varying degrees of mathematical knowledge. Collaborating with mathematicians or domain experts who possess strong mathematical backgrounds can complement your programming skills and lead to more robust and innovative solutions. Moreover, the availability of numerous online resources, tutorials, and coding boot camps has made programming more accessible to individuals with diverse backgrounds. These resources often focus on teaching practical programming skills, algorithms, and problem-solving techniques, without assuming a high level of mathematical proficiency. This accessibility has opened doors for aspiring programmers who may not have had extensive exposure to mathematics but possess a passion for coding and a willingness to learn. It’s also important to note that programming is not a static field. As technology advances, new tools, frameworks, and libraries emerge, simplifying complex tasks and reducing the reliance on mathematical knowledge. Modern programming languages and development environments continue to evolve, offering more intuitive and user-friendly interfaces that reduce the need for manual mathematical calculations. In addition, many programming roles are not solely focused on writing code. Software engineering encompasses various aspects, such as project management, system design, user experience, and quality assurance. 
These areas require a broader skill set that includes communication, creativity, problem-solving, and critical thinking, rather than relying solely on mathematical prowess. Ultimately, while mathematics can undoubtedly provide a strong foundation and enhance certain aspects of programming, it is not the sole determinant of success in the field. The most important qualities for a programmer include a passion for learning, logical thinking, perseverance, and the ability to adapt to new technologies and trends. Programming is a dynamic discipline that rewards continuous learning and practical application. In conclusion, while a solid understanding of mathematics can be advantageous in certain programming domains, it is not an absolute prerequisite for becoming a skilled programmer. Programming encompasses a wide range of tasks and roles, some of which require advanced mathematical knowledge, while others focus more on practical coding skills and problem-solving abilities. With dedication, practice, and a growth mindset, anyone can become proficient in programming, regardless of their mathematical background. The key lies in understanding the specific requirements of the programming domain you are interested in and continually expanding your knowledge and skills in that area.
agentebimene
1,469,085
C++ Operators.
Assalamu alaikum, dear programmer, let's talk about operators. Operators are used to...
0
2023-05-15T19:23:35
https://dev.to/islomali99/opieratory-s-3ehk
beginners, webdev, programming, tutorial
Assalamu alaikum, dear programmer. Let's talk about operators. Operators are used to perform operations on variables and values. In the example below, we use the `+` operator to add two values.

```
#include <iostream>
using namespace std;

int main() {
  int a = 100 + 50;
  cout << a;
  return 0;
}
```

The `+` operator is often used to add two values together, as in the example above. It can also be used to add a variable and a value, or a variable and another variable.

```
#include <iostream>
using namespace std;

int main() {
  int n1 = 100 + 50;  // 150 (100 + 50)
  int n2 = n1 + 250;  // 400 (150 + 250)
  int n3 = n2 + n2;   // 800 (400 + 400)
  cout << n3;
  return 0;
}
```

C++ divides operators into the following groups:

1. Arithmetic operators
2. Assignment operators
3. Comparison operators
4. Logical operators
5. Bitwise operators

`An operator is used to perform operations on variables and values.`

`Arithmetic operators`

`Arithmetic operators are used to perform common mathematical operations.`

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vizhs74dosay9gt06n2e.jpeg)

`Assignment operators`

Assignment operators are used to assign values to a variable. In the following example, we use the assignment operator (`=`) to assign the value `10` to a variable named `x`:

```
#include <iostream>
using namespace std;

int main() {
  int x = 10;
  cout << x;
  return 0;
}
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pxaxwq58gipyfwryj659.png)

**Comparison operators**

`Comparison operators are used to compare two values.`

Explanation: the return value of a comparison is `true (1) or false (0).`

**A list of all comparison operators:**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/urwenotijk3modbvzg4o.png)

**Logical operators**

Logical operators are used to determine the logic between variables or values:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5zrrxd0js9590q0inae5.jpeg)
islomali99
1,469,471
Spinning into Action: A Step-by-Step Guide to Creating a Stunning Loader in HTML and CSS
Loaders, also known as spinners, are a common element used in web design to indicate that content...
0
2023-05-16T05:11:34
https://dev.to/devxvaibhav/spinning-into-action-a-step-by-step-guide-to-creating-a-stunning-loader-in-html-and-css-j01
html, learning, webdev, loaders
![spinningLoader](https://images.unsplash.com/photo-1567382728468-08a3dcf3dd3c?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=870&q=80)

Loaders, also known as spinners, are a common element used in web design to indicate that content is being loaded or processed. They not only provide visual feedback to users but also enhance the overall user experience. In this step-by-step guide, we will walk you through the process of creating a stunning loader using HTML and CSS. By the end of this tutorial, you'll have the skills to create your own captivating loaders that will leave your users in awe. And yes, this post is not fully mine; I took help from ChatGPT to create this informative post for you all 😄.

## Understanding the Concept

Before we dive into the coding process, let's take a moment to understand the concept behind a loader. A loader typically consists of animated elements that spin or move in a continuous loop. The motion creates an illusion of activity, indicating that the content is being loaded in the background. By applying CSS animations, we can achieve this effect and bring our loader to life.

## Step 1: Setting Up the HTML Structure

First, let's set up the HTML structure for our loader. We'll use a `<div>` element with a class name to identify our loader container. Inside the container, we'll create multiple child elements that represent the animated elements of our loader. Here's an example:

```
<div class="loader">
  <div class="loader-element"></div>
  <div class="loader-element"></div>
  <div class="loader-element"></div>
  <!-- Add more loader elements if desired -->
</div>
```

In this example, we have three loader elements, but feel free to add more if you want a more intricate design.

## Step 2: Styling the Loader with CSS

Now that we have our HTML structure in place, let's move on to styling the loader using CSS. We'll define the size, color, animation, and positioning of the loader elements.
Here's an example of how we can style the loader:

```
.loader {
  display: flex;
  justify-content: center;
  align-items: center;
  height: 200px; /* Adjust the height to fit your design */
  gap: .3rem;
}

.loader-element {
  width: 20px;  /* Adjust the size of the loader element */
  height: 20px; /* Adjust the size of the loader element */
  background-color: #333; /* Set the background color */
  border-radius: 50%; /* Create a circular shape */
  animation: beat 2s ease-in-out infinite; /* Apply the animation */
}

@keyframes beat {
  0% {
    transform: scale(0); /* Start at 0 scale */
  }
  50% {
    transform: scale(1); /* Scale up to 100% */
  }
  100% {
    transform: scale(0); /* Scale back down to 0 */
  }
}
```

In this CSS code, we first set the `.loader` class to a flex container to horizontally and vertically center the loader elements within it. Adjust the height property to fit your design requirements.

Next, we style the `.loader-element` class, setting its size, background color, and border radius to create a circular shape. The animation property applies the beat animation we define next.

The `@keyframes` rule defines the beat animation, which scales each loader element from 0 up to full size and back down to 0 continuously over a duration of 2 seconds. This creates the pulsing in-and-out effect.

## Step 3: Incorporating the Loader

Now that we have defined the HTML structure and CSS styles for our loader, it's time to incorporate it into your web page. Follow these steps:

- Open your HTML editor or your preferred text editor and create a new HTML file.
- Paste the copied HTML code and CSS code into the file.
- Save the file with a descriptive name, such as "loader.html" and "loader.css".
- Open the saved HTML file in your web browser to see the loader in action.

By incorporating this code into your web page, you will have a stunning loader that pulses and adds visual interest while your content loads.
Feel free to customize the loader further by adjusting the size, colors, or animation properties to suit your website's design and branding. You can also experiment with different loader element shapes, additional CSS animations, or transitions to make it even more visually appealing.

You can see this loader in action [here](https://replit.com/@NewId7/loader).

## Conclusion

Creating a stunning loader using HTML and CSS is a fantastic way to engage your users and provide visual feedback during content loading or processing. By following the step-by-step guide outlined above, you now have the skills to design and implement your own captivating loader. Feel free to experiment and customize the loader further to match your website's style and requirements. With your newfound knowledge, you can enhance the user experience and add a touch of elegance to your web projects.
devxvaibhav
1,469,489
Dynamic Workflows using Code in Netflix Conductor
Conductor is a popular platform for building resilient stateful applications by creating workflows...
0
2023-05-16T05:56:10
https://orkes.io/blog/dynamic-workflows-using-code-in-netflix-conductor/
microservices, java, python, clojure
[Conductor](https://github.com/conductor-oss/conductor) is a popular platform for building resilient stateful applications by creating workflows that span across services. You can try out the workflows from this article at [Playground](https://play.orkes.io/), a free hosted version of Conductor. ## What is a Workflow? Before diving into how to create workflows using Conductor, let’s first define what a workflow is. In simple terms, a workflow is a series of tasks or steps executed in a specific order to accomplish a goal. Workflows are used to automate complex processes and ensure that all the necessary steps are completed in a logical sequence. Conductor workflows are composed of Tasks and Operators. **Workflow = {Tasks + Operators}** **Tasks** are services encapsulating the business logic that runs outside the Conductor server and are implemented as a Microservice, Lambda, or Worker. Workers run outside the Conductor server and can be implemented in any supported language. A workflow can contain multiple workers written in different languages. **Operators** are primitives from programming languages that are used to control the flow of execution inside a workflow. Conductor supports operators such as switch, loop, fork/join, and sub-workflows, allowing you to define complex workflows. ## Separation of Workflows from Re-usable Services Conductor promotes clear separation between application workflow and services that are used as building blocks of the workflow as ‘Tasks’. This separation ensures that the tasks follow the single responsibility principle and are generally stateless in nature. This model ensures two things: 1. Re-usability of services across many workflows promoting greater collaboration across teams. 2. Stateless Task Workers can quickly scale up/down based on the volume while maintaining the state only at the Conductor server. ## Workflow definition – JSON or Code? 
**Why not both?**

The Conductor server stores workflow definitions as JSON on the server side. However, this does not mean users are restricted to expressing their workflows as JSON alone. Conductor supports creating workflows using code and executing both pre-registered as well as dynamic workflows expressed using code.

### Simple Example

Let's take an example of a simple two-task workflow:

![Two-Task workflow in Conductor](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5kvennmhjqnuxh75gbhb.png)

#### JSON

```json
{
  "name": "simple_two_task_workflow",
  "version": 1,
  "schemaVersion": 2,
  "tasks": [
    {
      "name": "task1",
      "taskReferenceName": "task1",
      "inputParameters": {},
      "type": "SIMPLE"
    },
    {
      "name": "task2",
      "taskReferenceName": "task2",
      "inputParameters": {},
      "type": "SIMPLE"
    }
  ]
}
```

The same workflow in different languages:

#### Java

```java
ConductorWorkflow workflow = new ConductorWorkflow(workflowExecutor);
workflow.setName("simple_two_task_workflow");
workflow.setVersion(1);

//Add tasks
workflow.add(new SimpleTask("task1", "task1"));
workflow.add(new SimpleTask("task2", "task2"));

//Execute
workflow.executeDynamic(/* input */);
```

#### Go

```go
conductorWorkflow := workflow.NewConductorWorkflow(executor.NewWorkflowExecutor(client.NewAPIClient(nil, nil)))
conductorWorkflow.Name("simple_two_task_workflow").Version(1)

//Add Tasks
conductorWorkflow.
    Add(workflow.NewSimpleTask("task1", "task1")).
    Add(workflow.NewSimpleTask("task2", "task2"))

//Execute
conductorWorkflow.StartWorkflow(/* input */)
```

#### Python

```python
workflow = ConductorWorkflow(
    executor=workflow_executor,
    name='simple_two_task_workflow',
    version=1,
)

# Two tasks
task1 = SimpleTask('task1', 'task1')
task2 = SimpleTask('task2', 'task2')

# Add tasks to the workflow using the >> operator
workflow = workflow >> task1 >> task2

# Execute the workflow
workflow.start_workflow(...)  # input
```

Creating workflows using code opens up use cases where it might be impossible to define workflows using static definitions: for example, when the number of tasks and their flow depend on data that is only known at runtime.

## Complex Example

Let's take an example of a hypothetical workflow that is created dynamically based on the user data:

```java
ConductorWorkflow workflow = new ConductorWorkflow(workflowExecutor);
workflow.setName("complex_dynamic_workflow");
workflow.setVersion(1);

//Get the list of users to send notifications to
List<UserInfo> users = getUsers();
Task<?>[] tasks = new Task[users.size()];
int counter = 0;
for (UserInfo user : users) {
    if (user.sendEmail) {
        SimpleTask task = new SimpleTask("send_email", "send_email_" + counter);
        task.input("email", user.email);
        tasks[counter++] = task;
    } else {
        SimpleTask task = new SimpleTask("send_sms", "send_sms_" + counter);
        task.input("phone", user.phone);
        tasks[counter++] = task;
    }
}

//Run all the tasks in parallel
workflow.add(new ForkJoin("run_in_parallel", tasks));

//Execute the workflow and get a future to wait for completion
CompletableFuture executionFuture = workflow.execute(new HashMap<>());

//Alternatively, kick off the workflow if it's going to be a long-running workflow
String workflowId = workflowClient.startWorkflow(
        new StartWorkflowRequest().withWorkflowDef(workflow.toWorkflowDef()));
```

In the above example, we load the user list from a backend store and, for each user, create a task to send an SMS or email.
A workflow is created by adding the appropriate notification task for each user and is then executed. Depending on how long the workflow takes to complete, Conductor provides a way to wait for workflow completion using `Futures`, or to kick off the workflow and get back a workflow execution id that the `workflowClient` can use to monitor the execution (it can also be searched and viewed in the UI).

You can write workflows using code in Java, Go, Python, C#, JavaScript, and even Clojure with Conductor. Visit [Conductor SDK on GitHub](https://github.com/conductor-sdk) for the latest SDKs with fully working example apps.

## Dynamic Workflows and Conductor

The ability to dynamically create workflows using code allows developers to address very complex use cases where it is impossible to pre-define workflows. With Conductor, you can still do this, with the full power of Conductor visualization that allows you to visualize the entire execution in the UI. It's like having your cake and eating it too!

## Summary

Netflix Conductor is a powerful platform that lets you create the most complex workflows while making it very easy to handle runtime scenarios, with powerful debugging and visualization tools that reduce the mean time to detect and resolve issues in the production environment.

Be sure to check out [Conductor](https://github.com/conductor-oss/conductor) on GitHub. Our Orkes team provides [Conductor Playground — a free version of Conductor](https://play.orkes.io/).

[Join our Slack Community](https://join.slack.com/t/orkes-conductor/shared_invite/zt-xyxqyseb-YZ3hwwAgHJH97bsrYRnSZg)
rizafarheen
1,469,495
How To Choose The Best User Acceptance Tools?
User Acceptance Testing (UAT) is a crucial step in the software development lifecycle that involves...
0
2023-05-16T06:07:41
https://officialpanda.com/2023/03/09/how-to-choose-the-best-user-acceptance-tools/
user, acceptance, tools
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7u00xb4uu7d4aszhh7hy.png)

User Acceptance Testing (UAT) is a crucial step in the software development lifecycle that involves testing a software product to ensure that it meets the needs of end users. UAT is typically carried out by a team of end users or business analysts who test the software in a real-world environment to validate its functionality, usability, performance, and compatibility.

To carry out UAT effectively, it's important to have the right tools that can help automate the testing process, track bugs and issues, and facilitate collaboration between team members. A UAT tool is a software application that is designed to support the UAT process by providing a range of features such as test case management, bug tracking, collaboration, reporting, and analytics.

**How To Choose The Best User Acceptance Tools?**

There are several user acceptance testing tools available in the market, ranging from open-source solutions to commercial products. These tools offer a range of features that can help streamline the UAT process, improve collaboration between team members, and ensure that the software product meets the needs of end users. Below are some tips to choose the best user acceptance tool.

**Identify your requirements**: Before you start evaluating different UAT tools, it's essential to identify your requirements. What features do you need in a UAT tool? What is your budget? What are the project timelines? Answering these questions can help you narrow down your search and select a tool that meets your specific needs.

**Consider ease of use**: The UAT tool you choose should be easy to use and require minimal training for your team members. Look for a tool that has an intuitive user interface and requires little technical expertise.

**Look for integration capabilities**: It should integrate with the other tools you are using in your development process.
Look for a tool that can integrate with your project management tool, bug tracking tool, and version control tool.

**Check for reporting and analytics capabilities**: It should have robust reporting and analytics capabilities. It should be able to generate reports that provide insights into the status of the testing process, identify issues and bugs, and track progress.

**Evaluate customer support**: Customer support is a crucial factor to consider when selecting a UAT tool. Look for a tool that provides reliable customer support, including documentation, user guides, and technical support.

**Consider scalability**: The UAT tool you choose should be scalable and able to grow with your project. Look for a tool that can handle large volumes of test cases, users, and data.

**Trial the tool**: Finally, it's a good idea to trial the UAT tool before making a final decision. Most UAT tools offer a free trial period, during which you can test the tool's features and functionality and evaluate its suitability for your project.

**Conclusion**

Choosing the right User Acceptance Testing (UAT) tool is essential for the success of any software development project. A good UAT tool can help streamline the UAT process, improve collaboration between team members, and ensure that the software product meets the needs of end users. Opkey is a no-code test automation platform that offers a comprehensive suite of testing tools to streamline testing efforts. Ultimately, the choice of testing tools and methods will depend on the specific needs of your project, and it's important to evaluate different options and choose the one that meets your requirements.
rohitbhandari102