id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
500,517 | How Mandarine Framework could surpass NestJS | Introduction Mandarine is a relatively new typescript framework that runs on Deno. Its obj... | 0 | 2020-10-28T20:49:36 | https://dev.to/andreespirela/how-mandarine-framework-could-surpass-nestjs-13c3 | nestjs, mandarine, typescript, deno | # Introduction
Mandarine is a relatively new typescript framework that runs on [Deno](https://deno.land). Its objective is very simple: to provide as many built-in functionalities as possible for application development.
Mandarine is one of the few production-ready packages in Deno and, [by far, the best server-side framework in Deno](https://dev.to/andreespirela/why-mandarine-is-by-far-the-best-server-side-framework-in-deno-52dd).
Mandarine was mainly inspired by [Spring Boot](https://spring.io/projects/spring-boot) and ships with a big set of built-in tools for development, such as [Dependency Injection](https://www.mandarinets.org/docs/master/mandarine/dependency-injection), [Built-in authentication system](https://www.mandarinets.org/docs/master/mandarine/auth-introduction), [MQL (Mandarine Query Language)](https://www.mandarinets.org/docs/master/mandarine/mandarine-query-language), [ORM (with MQL)](https://www.mandarinets.org/docs/master/mandarine/data-interaction), Built-in session middleware, CORS middleware, and much more.
For more information about what Mandarine has and why it is _perhaps_ the **best server-side framework** in Deno, [click here](https://dev.to/andreespirela/why-mandarine-is-by-far-the-best-server-side-framework-in-deno-52dd).
# Mandarine & NestJS
Mandarine & NestJS are perhaps brothers from different mothers: they are really similar in terms of syntax & features. Some would definitely argue Mandarine has a much _better & simpler_ syntax than NestJS, because NestJS tends to adopt the Angular pattern of modules while Mandarine only cares about declarations: there's no such thing as a "module" in Mandarine.
Mandarine does not have the concept of "modules" or "providers": everything is a plain declaration of ES6 classes & decorators, and it's ready to be used. NestJS, on the other hand, abstracts this a little more.
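To make that contrast concrete, here is a purely illustrative sketch (my own; not the real Mandarine or NestJS API) of decorator-style registration with no module layer. The decorator function is applied manually so it runs as plain JavaScript:

```javascript
// Hypothetical registry-based sketch, not actual framework code.
const registry = [];

// Decorator factory: records the class so the framework can discover it later.
const Controller = (prefix) => (cls) => {
  registry.push({ prefix, cls });
  return cls;
};

class CatController {
  hello() {
    return "meow";
  }
}

// Equivalent of writing `@Controller("/cats")` above the class:
Controller("/cats")(CatController);

// The framework finds the controller directly; no module wrapper needed.
console.log(registry[0].prefix); // "/cats"
```

NestJS, by contrast, would additionally require the class to be listed in a module's `controllers` array before the framework can see it.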
Finally, a friendly reminder that Mandarine & NestJS run on different runtimes: Mandarine is for Deno while NestJS is for NodeJS.
# Mandarine: Richer In Functionalities
Mandarine is perhaps richer in functionalities. This does not mean that Mandarine has all the features NestJS has, not at all. It means that Mandarine has many other built-in tools that make development easier and reduce the boilerplate of many packages, with only one goal: every major functionality you write in Mandarine _should be_ a Mandarine-powered feature.
To explain this a little better, let's look at some of the features Mandarine has:
- [Built-in Authentication](https://www.mandarinets.org/docs/master/mandarine/auth-complete-guide) (No need to write the logic behind securing endpoints, or authorizing users as it is already provided)
- [Built-in Repositories](https://www.mandarinets.org/docs/master/mandarine/data-repositories) (No need to code the logic behind database queries as Mandarine's ORM is fully powered by MQL)
- [MQL (Mandarine Query Language)](https://www.mandarinets.org/docs/master/mandarine/mandarine-query-language)
- [Session Middleware](https://www.mandarinets.org/docs/master/mandarine/session-middleware) (No need to use external packages to manage sessions, as this is already built into Mandarine, with the option to create your own session container)
- [CORS Middleware](https://www.mandarinets.org/docs/master/mandarine/cors-middleware)
- [Serving Static Content](https://www.mandarinets.org/docs/master/mandarine/serving-static-content)
- [`X-Response-Time` Header Middleware](https://www.mandarinets.org/docs/master/mandarine/built-in-response-time-header)
- [Resource Handlers](https://www.mandarinets.org/docs/master/mandarine/resource-handlers) (Interceptors for different kinds of static content)
- Pipes
- Custom Decorators
And of course, there are a few more important features, but listing them all is not the purpose of this post.
> For more information about Mandarine's features, visit [its official documentation](https://www.mandarinets.org/docs/master/mandarine/introduction)
# Mandarine surpassing NestJS, Really?
> Extraordinary claims require extraordinary evidence
Many of you may be thinking: how could a Deno framework possibly be better or more stable than a NodeJS one?
Let's start with some facts:
- Deno is at version 1.5.0 and it is production-ready; this has been stated even by Ryan Dahl (creator of both NodeJS & Deno)
- Deno is richer in functionalities.
- There is already room to argue that NodeJS and Deno are both good enough for production environments; some would argue Deno is even better since it provides a set of built-in dev tools (like bundling, linting...) & first-class TypeScript support.
With that said, we can put Mandarine & NestJS at the same level of the discussion in terms of real-world stability.
# Mandarine's Plan To Be Better... Much Better
Deno has an excellent relationship with Rust; in fact, you can connect Rust-written libraries to Deno & call the methods inside those libraries from JS. When you think about it, the sky is the limit. We are talking about a direct connection between a very powerful language like Rust (which some argue is much better and simpler than C++) and a very popular language like JS. **This is key to Mandarine's plan for success**.
As stated in [this blog post](https://www.mandarinets.org/posts/making-mandarine-the-most-stable-framework-with-rust-):
> If Deno is not capable of providing a required feature for stability, we would use Rust again to cover these needs.
That means Mandarine is not only a TypeScript framework but will also be powered by Rust.
### Why does this matter?
Rust is a very popular language (the most loved language for 4 years in a row according to [Stackoverflow](http://stackoverflow.com/)), it has been out there for 10 years, and we can all agree it is very stable. As a matter of fact, Google's Chromium project is exploring Rust and C++ interoperability ([click here for more](https://www.chromium.org/Home/chromium-security/memory-safety/rust-and-c-interoperability))
Now, with Deno, Rust & Mandarine together, Mandarine would connect every requested feature that Deno cannot provide to its Rust core. In other words, Rust would provide functionalities to Mandarine, bringing Rust's stability & limitless internal processes to Mandarine (the JS/TS world).
A quick example of what this means: you want to use Mandarine to modify images, but there are no packages in Deno to do that (nor does Deno provide such functionality). It doesn't matter: Mandarine will provide you a Rust-powered way to modify images that you can call from your JS/TS application.
Of course, this is a very vague example; Mandarine's Rust core is meant to go along with Mandarine's goals. But I mention it to give a better perspective on what this means.
### What does this have to do with NestJS & Mandarine?
With the support of Rust, Mandarine can provide functionalities that do not yet exist in Deno or NodeJS, which will give Mandarine an advantage in both runtimes.
One of the main features of NestJS (which Mandarine is missing) is microservices. Microservices in NestJS are entirely coded in TypeScript (JS). On the other hand, Mandarine could provide a communication interface for microservices written in Rust but used from the JS side, thus providing performance and more stability in terms of maintenance.
### Is It Happening Already?
One of the goals Mandarine has had for quite a long time is to provide a very stable, multi-threaded database driver, not only for Deno but for its internal usage.
This week, Mandarine came out with [`Mandarine Postgres`](https://deno.land/x/mandarine_postgres@v2.1.6), a PostgreSQL Rust driver that you can use from Deno, which officially made Mandarine a framework with mixed codebases.
This driver makes use of [`tokio-postgres`](https://github.com/sfackler/rust-postgres) under the hood, a widely-used Rust driver for postgres which has been out there for more than 4 years.
This, again, is one of the examples of "What Deno can't provide, Rust can".
# The End
This post is not meant to dismiss NestJS's awesome work and features, but rather to present both facts & views on how Mandarine _could_ in the future surpass NestJS in terms of functionality and perhaps stability with the Deno core.
This post is also meant to show how Deno can get to be more stable than NodeJS in some scenarios, or how packages in Deno can take advantage of Rust for the better.
The different opinions towards NestJS are just that: opinions. It is up to the reader to interpret this post. While it is true that Mandarine may be at the same level as NestJS, it is also true that NestJS has been out there for much longer and is widely used globally because of the excellent work its team has done & provided to different companies around the globe.
----------------
<img src="https://raw.githubusercontent.com/mandarineorg/mandarineorg_web/master/app/mandarine-static/assets/images/headers/orange.png" width="300" height="300" />
## Simple Usage
[Click here](https://www.mandarinets.org/docs/master/mandarine/hello-world) to see a quick example on how to get started with Mandarine
## Mandarine on social media
- [Twitter](https://twitter.com/mandarinets)
- [Discord](https://discord.gg/qs72byB)
<img src="https://raw.githubusercontent.com/mandarineorg/mandarineorg_web/master/app/mandarine-static/assets/images/headers/black.png" width="300" height="300" /> | andreespirela |
500,541 | Hi, there. I'm having an issue that i really don't understand. | Hi, there. I'm having an issue that i really don't understand. I have noticed that sometimes my compo... | 0 | 2020-10-28T21:47:20 | https://dev.to/sylvastudio/hi-there-i-m-having-an-issue-that-i-really-don-t-understand-1hpc | react, sass, javascript, webdev | Hi, there. I'm having an issue that I really don't understand. I have noticed that sometimes the components I group into a parent component go against their normal HTML positioning: they stack on top of or behind the other components nested with them inside the same parent. Please, I need help understanding this. Even when I get tired and decide to position them with CSS, it still does not work. For example, I group a Navbar, TodoForm and TodoList inside another component called Home, and I put the Home component inside App.js. | sylvastudio |
500,618 | Running a self managed kubernetes cluster on AWS | In this post I want to walk through the steps one needs to take to run a self managed kubernetes clus... | 0 | 2020-10-29T03:01:12 | https://dev.to/pksinghus/running-a-self-managed-kubernetes-cluster-on-aws-mhm | kubernetes, aws, devops | In this post I want to walk through the steps one needs to take to run a self managed kubernetes cluster on AWS. One reason you might want to do this is to run the newest version of kubernetes, not yet available as a managed service.
We are going to use `kubeadm` to install kubernetes components and use AWS integrations for things like load balancing, CIDR ranges, etc. I am going to install kubernetes version `1.19` on `ubuntu server 18.04 LTS`. `docker` will be the container runtime.
The control plane will be setup manually. The worker nodes will be part of an autoscaling group. Autoscaling will be managed by [`cluster-autoscaler`](https://github.com/kubernetes/autoscaler).
Both the control plane and data plane will be deployed in private subnets. There is no reason for any of these instances to be in public subnets.
For convenience, I am going to use the same `AMI` for both control plane and data plane.
Things I struggled with were correctly using the flag `--cloud-provider=aws` and tagging instances with `kubernetes.io/cluster/kubernetes`.
### Creating the AMI
Start an instance with base `ubuntu 18.04` image. Assign a key pair while launching so that we can `ssh` into the instance.
Let's install `kubeadm`, `kubelet` and `kubectl` first. Following the directions at https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/, with slight changes, we need to run this script
```
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```
Swap needs to be disabled on the instance.
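One common way to do that on Ubuntu (these are my commands, not from the original post; run them on the instance, and note the `sed` line comments out swap entries in `/etc/fstab` so the change survives reboots):

```shell
# Turn swap off immediately
sudo swapoff -a
# Comment out any swap lines in /etc/fstab so swap stays off after a reboot
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
```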
Install `docker` -
```
sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get install -y apt-transport-https curl jq
sudo apt-get install docker.io -y
sudo systemctl enable docker.service
sudo systemctl start docker.service
```
Worker nodes need to use the AWS CLI, so let's install it -
```
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
```
The kubelet needs to run with the flag `--cloud-provider=aws`. We can apply this change at one place in this AMI and it will be carried over to all instances launched with this image.
Edit the file `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and change the line to include the `cloud-provider` flag at the end -
```
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cloud-provider=aws"
```
At this point we can create an `AMI` from this instance. Let's call it `k8s-image`.
### Setting up control plane
#### Load Balancer
Because our control plane is going to be highly available, its instances are going to run behind a load balancer. So I created a classic `ELB` which forwards `TCP` traffic on port `6443`, the default port used by the control plane. I was unable to get the `NLB` to work in this experiment.
I set up the load balancer to forward traffic to private subnets in 3 availability zones, where the control plane will be located. Let's say the DNS name of the load balancer is `internal-k8s-111.us-east-1.elb.amazonaws.com`
These security groups were manually created -
#### ELB security group
I created a security group for `ELB` which allowed ingress on port `6443` from the security group attached to worker nodes mentioned below. Let's call it `k8s-loadbalancer-sg`
#### Control plane/Worker node security group
For convenience, I used the same security group for both control plane and data plane instances. This security group allowed ingress from the load balancer security group mentioned above on port `6443`. It also allowed ingress on all ports if the source was this security group itself. Let's call it `k8s-custom-sg`. I also enabled ssh on port `22`.
### IAM role
Let's create an IAM role which will be attached to our instances. I am going to call it `CustomK8sRole`. The same IAM role will be attached to both control plane and data plane instances in this experiment. In the real world you would only grant the minimum necessary privileges to an IAM role.
You will need to update this role to provide access to all the AWS services we are going to mention in this article.
### Creating the first machine
Launch an instance using our custom AMI `k8s-image` and IAM role `CustomK8sRole`.
Tag the node so that it can be discovered by worker nodes, e.g., with the key `k8s-control-plane`. The tag key is sufficient; the value will not be used.
Add another tag `kubernetes.io/cluster/kubernetes` with the value of `owned`. This seems to be necessary if we are using a cloud provider. Here `kubernetes` is the cluster name which is the default name used by kubeadm. If you have chosen a different name for the cluster, change the tag key accordingly.
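For reference, the same tagging can be done from the AWS CLI (my sketch, not from the original post; the instance ID below is a placeholder, and the command requires AWS credentials):

```shell
# Replace i-0123456789abcdef0 with your control plane instance's ID
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=k8s-control-plane,Value= \
         Key=kubernetes.io/cluster/kubernetes,Value=owned
```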
Change the hostname of the instance -
```
sudo hostnamectl set-hostname \
$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
```
This is being done manually for control plane. It will be automated for worker nodes.
For API server and controller manager to integrate with `aws` cloud provider we need to start them with the flag `--cloud-provider=aws`. I couldn't find a way to tell `kubeadm` to do it using command line args. So we are going to run `kubeadm` with a config file `config.yaml`. If we use a config file then all arguments need to go in the file, including the API server (control plane) endpoint. Control plane endpoint is the DNS name of ELB we created above.
Create the file `config.yaml` -
```
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
extraArgs:
cloud-provider: aws
controllerManager:
extraArgs:
cloud-provider: aws
configure-cloud-routes: "false"
controlPlaneEndpoint: internal-k8s-111.us-east-1.elb.amazonaws.com:6443
```
The kubelet also needs to run with this flag but that configuration is baked into this custom AMI because of the change we have made to the file `10-kubeadm.conf`, so we don't need a section to configure the kubelet.
We do not need to provide CIDR ranges for pod IPs and service IPs in this configuration file because we will use the AWS VPC CNI plugin.
Attach this instance to the ELB. It will initially show as `OutOfService`. That's OK because nothing is running on the instance yet, but it needs to be attached now because this is the controller endpoint the kubelet will try to access. After the API server starts running on the instance, it will show as `InService`.
Start kubeadm -
```
sudo kubeadm init --config config.yaml --upload-certs
```
This should be it. The only error I came across in this step was a security group misconfiguration where the ELB was unable to communicate with the kubelet.
It should print a message about how you can add additional control plane and data plane nodes to the cluster.
If there are errors, debug them using `journalctl -a -xeu kubelet -f`.
Set up the `.kube/config` file as indicated in the message and you should see one master node running -
```
kubectl get nodes
```
### CNI
Nothing is going to work until you install a CNI plugin. Following AWS's documentation about AWS CNI plugin, install it substituting correct value for `<region-code>` -
```
curl -o aws-k8s-cni.yaml https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.7.5/config/v1.7/aws-k8s-cni.yaml
sed -i -e 's/us-west-2/<region-code>/' aws-k8s-cni.yaml
kubectl apply -f aws-k8s-cni.yaml
```
### Adding control plane nodes
Start up as many machines as you would like using the custom image and custom IAM role.
Add the tags `kubernetes.io/cluster/kubernetes` (value `owned`) and `k8s-control-plane` (with an empty value), as done for the first node.
Change the host name before doing anything and add the machine to the ELB.
Run the command as printed by the previous step.
```
sudo hostnamectl set-hostname \
$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
kubeadm join internal-k8s-111.us-east-1.elb.amazonaws.com:6443 --token 111.111 \
--discovery-token-ca-cert-hash sha256:111 \
--control-plane --certificate-key 111
```
### Worker nodes
The command for a worker node to join the cluster was printed when we setup our first control plane node. It looks something like this -
```
kubeadm join internal-k8s-111.us-east-1.elb.amazonaws.com:6443 --token 111.111 \
--discovery-token-ca-cert-hash sha256:111
```
So we need a token generated on the control plane node, and we need to know the control plane endpoint and the certificate hash. We have these values available now because we just set up our control plane. But how do we handle worker nodes getting added to the cluster by autoscaling a month down the road? It is not a good idea to hardcode these values.
EC2 instances allow us to provide a script as `UserData` which executes immediately after the instance launches. We can use `UserData` to automate this process.
Our script will make use of AWS CLI which is conveniently baked into our image.
We will also make use of [AWS Systems Manager Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html) to execute commands remotely on a control plane node. I benefited greatly from the [provided examples](https://docs.aws.amazon.com/systems-manager/latest/userguide/walkthrough-cli.html).
Our instances will need to be part of an autoscaling group (ASG). The ASG defines how many instances we currently want to run, the minimum number of instances that should always be running, and the maximum number of instances it will allow to run. The ASG also declares the subnets where the instances can be launched.
The ASG needs a Launch Configuration, which defines the AMI that will be used to launch the instances. We will use our custom AMI.
We can also declare resource tags which will be applied to instances launched using this Launch Configuration. The resource tag that needs to be configured here is `kubernetes.io/cluster/kubernetes` with the value of `owned`.
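As a rough CLI sketch (my own; all names, IDs and sizes below are placeholders, not values from this article, and the commands require AWS credentials), the Launch Configuration and ASG could be created like this:

```shell
# Launch configuration pointing at the custom AMI and IAM role
aws autoscaling create-launch-configuration \
  --launch-configuration-name k8s-worker-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.medium \
  --iam-instance-profile CustomK8sRole \
  --security-groups sg-0123456789abcdef0 \
  --user-data file://userdata.sh

# ASG over the private subnets, with the cluster tag propagated to instances
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name k8s-worker-asg-1 \
  --launch-configuration-name k8s-worker-lc \
  --min-size 1 --max-size 5 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaa,subnet-bbb" \
  --tags Key=kubernetes.io/cluster/kubernetes,Value=owned,PropagateAtLaunch=true
```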
Our script will query for EC2 instances which have been tagged with `k8s-control-plane`, which all of our control plane nodes are. Then it will execute a command remotely on the first node from this list using `aws ssm send-command` to generate a new token and generate the command to join the cluster as a worker node. It will then execute this command and finally execute another remote command to delete the token which was generated.
Here's the `UserData` which we can supply to our launch configuration. The SSM agent is already installed on control plane nodes because we used AWS managed ubuntu image. Our `CustomK8sRole` will need to have policies added that allow it to execute these commands.
```
#!/bin/bash
sudo hostnamectl set-hostname \
$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)
instances=$(aws ec2 describe-instances --filters "Name=tag-key,Values=k8s-control-plane" | jq -r ".Reservations[].Instances[].InstanceId")
echo "control plane instances- $instances"
instance=$(echo $instances| cut -d ' ' -f 1)
echo "working with instance- $instance. Generating token."
sh_command_id=$(aws ssm send-command \
--instance-ids "${instance}" \
--document-name "AWS-RunShellScript" \
--comment "Generate kubernetes token" \
--parameters commands="kubeadm token generate" \
--output text \
--query "Command.CommandId")
sleep 5
echo "Receiving token"
result=$(aws ssm list-command-invocations --command-id "$sh_command_id" --details | jq -j ".CommandInvocations[0].CommandPlugins[0].Output")
token=$(echo $result| cut -d ' ' -f 1)
echo "generating join command"
sh_command_id=$(aws ssm send-command \
--instance-ids "${instance}" \
--document-name "AWS-RunShellScript" \
--comment "Generate kubeadm command to join worker node to cluster" \
--parameters commands="kubeadm token create $token --print-join-command" \
--output text \
--query "Command.CommandId")
sleep 10
echo "getting result"
result=$(aws ssm list-command-invocations --command-id "$sh_command_id" --details | jq -j ".CommandInvocations[0].CommandPlugins[0].Output")
join_command=$(echo ${result%%---*})
echo "executing join command"
$join_command
echo "deleting kubernetes token"
sh_command_id=$(aws ssm send-command \
--instance-ids "${instance}" \
--document-name "AWS-RunShellScript" \
--comment "Delete kubernetes token" \
--parameters commands="kubeadm token delete $token" \
--output text \
--query "Command.CommandId")
sleep 5
result=$(aws ssm list-command-invocations --command-id "$sh_command_id" --details | jq -j ".CommandInvocations[0].CommandPlugins[0].Output")
echo $result
```
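The script above leans on two small shell string tricks: `cut -d ' ' -f 1` to take the first space-separated field, and `${result%%---*}` to strip everything from the first `---` onward. A quick standalone demo (not part of the UserData script):

```shell
result="kubeadm join 10.0.0.1:6443 --token abc.def --- extra trailing output"
join_command="${result%%---*}"   # remove the longest suffix starting at '---'
echo "$join_command"

# take the first space-separated field
token=$(echo "abc.def some other text" | cut -d ' ' -f 1)
echo "$token"
```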
### Cluster Autoscaler
Download the yaml manifest `curl -O https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-one-asg.yaml`
We are using the `one-asg` manifest in this case. But if your setup uses persistent volumes then you will have to use the `multi-asg` manifest, configuring one ASG per availability zone.
Replace `k8s-worker-asg-1` in the file with the name of your ASG and edit the section for certificates like so -
```
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
```
The location mentioned in the file `/etc/ssl/certs/ca-bundle.crt` is incorrect for our setup.
Apply the manifest `kubectl apply -f cluster-autoscaler-one-asg.yaml`.
### Calico
To be able to provide network policies, calico is one option -
`kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.7.5/config/v1.7/calico.yaml`
### AWS Load Balancer Controller
The AWS ALB Ingress Controller is now known as AWS Load Balancer Controller. It can create Application Load Balancers for the services that you want to expose to the internet.
I was able to deploy the [echoserver](https://kubernetes-sigs.github.io/aws-load-balancer-controller/guide/walkthrough/echo_server/). The public subnets had to be tagged with `kubernetes.io/role/elb` and private subnets with `kubernetes.io/role/internal-elb`.
| pksinghus |
500,789 | HACKTOBERFEST 2020 | About Me Hi! I am Manali Biswas. I am new to the DEV community as I joined it during Hackt... | 0 | 2020-10-29T06:15:23 | https://dev.to/manalibiswas/hacktoberfest-2020-3anm | hacktoberfest |
### About Me
Hi! I am Manali Biswas. I am new to the DEV community, as I joined it during Hacktoberfest this month. I am a Computer Engineering student at Delhi Technological University. This year, I heard a lot about Hacktoberfest, with all the messages floating around on social media. After understanding that it's a celebration of open source, I decided to dump my laziness and go make at least 4 PRs!
### Background
Before this, I had participated in GirlScript Summer of Code 2020. That was my first experience participating in large-scale open source development. I learned a lot of things that time, like making a pull request and contributing to node.js web applications. I like to make web applications (MERN stack) and have started exploring ML now.
### Progress
To be honest, I started off Hacktoberfest as a pastime. But then I realized that I could try out something new and create something that I will love to look back on. So I have started working on a productivity web app, in which I am exploring some calendar APIs. So far, I have integrated the Google Calendar API and made a webpage to enable the user to set up timers. My Hacktoberfest is complete for this year and I have ordered my swag too!
### Contributions
I have contributed to a DS-ALGO repo by adding C++ STL stack and queue implementations. In that, I made a program using many of the STL functions provided for queue and stack. There were some things I learned along the way, e.g., the `emplace` function. Apart from that, I added the timer and Google Calendar API integrations to my productivity app, and added some styling.
### Reflections
Overall, my Hacktoberfest 2020 was good; I learned some things and it was a good experience. I did not know much about Hacktoberfest before, and now that I do, I think it is a great initiative. It is highly encouraging to see such fests, because open source gives birth to a lot of new ideas. I would love to participate again and contribute to open source! | manalibiswas |
503,462 | Back to Basics: Event Delegation | Event Delegation is an old trick to keep your JavaScript DOM enhancements independent of content | 0 | 2020-11-01T14:23:30 | https://dev.to/codepo8/back-to-basics-event-delegation-5742 | javascript, webdev, tricks | ---
title: Back to Basics: Event Delegation
published: true
description: Event Delegation is an old trick to keep your JavaScript DOM enhancements independent of content
tags: javascript, webdevelopment, tricks
cover_image:
---

> Back to basics is [a series of small posts](https://christianheilmann.com) where I explain basic, dependency free web techniques I keep using in my projects. These aren't revelations but helped me over the years to build sturdy, easy to maintain projects.
One of my favourite tricks when it comes to building web interfaces is Event Delegation.
Events don't just happen on the element you apply them to. Instead, they travel all the way down the DOM tree to the element where the event occurred and back up again. These phases of the event lifecycle are called [event bubbling and event capture](https://developer.mozilla.org/en-US/docs/Learn/JavaScript/Building_blocks/Events#Event_bubbling_and_capture).
The practical upshot of this is that you don't need to apply event handlers to every element in the document. Instead, often one handler on a parent element is enough. In the long ago, this was incredibly important as older browsers often had memory leaks connected with event handling.
Say you have a list of links, and instead of following these links you want to do something in code when the user clicks on them:
```html
<ul id="dogs">
<li><a href="#dog1">Dog1</a></li>
<li><a href="#dog2">Dog2</a></li>
<li><a href="#dog3">Dog3</a></li>
<li><a href="#dog4">Dog4</a></li>
<li><a href="#dog5">Dog5</a></li>
<li><a href="#dog6">Dog6</a></li>
<li><a href="#dog7">Dog7</a></li>
<li><a href="#dog8">Dog8</a></li>
<li><a href="#dog9">Dog9</a></li>
<li><a href="#dog10">Dog10</a></li>
</ul>
```
You could loop over each of the links and assign a click handler to each:
```javascript
const linkclicked = (e,l) => {
console.log(l);
output.innerHTML = l.innerHTML;
e.preventDefault();
};
const assignhandlers = elm => {
let links = document.querySelectorAll(`${elm} a`);
links.forEach(l => {
l.addEventListener('click', e => {linkclicked(e,l)});
});
}
assignhandlers('#dogs');
```
You can [try this event handling example here](https://codepo8.github.io/talks-back-to-basics/event-handling.html) and the code is available on GitHub ([event.handling.html](https://github.com/codepo8/talks-back-to-basics/blob/main/event-handling.html)).
This works, but there are two problems:
1. When the content of the list changes, you need to re-index the list (as in, call `assignhandlers()` once more)
2. You only react to the links being clicked, if you also want to do something when the list items are clicked you need to assign even more handlers.
You can try this by clicking the "Toggle more dogs" button in the example. It adds more items to the list and when you click them, nothing happens.
With event delegation, this is much easier:
```javascript
document.querySelector('#dogs').
addEventListener('click', e => {
// What was clicked?
let t = e.target;
// does it have an href?
if (t.href) {
console.log(t.innerText); // f.e. "Dog5"
output.innerHTML = t.innerText;
}
// if the list item was clicked
if (t.nodeName === 'LI') {
// print out the link
console.log(t.innerHTML);
output.innerHTML = t.innerHTML;
}
e.preventDefault(); // Don't follow the links
});
```
You can [try this event delegation example here](https://codepo8.github.io/talks-back-to-basics/event-delegation.html) and the code is available on GitHub ([event-delegation.html](https://github.com/codepo8/talks-back-to-basics/blob/main/event-delegation.html)). If you now click the "Toggle more dogs" button, and click any of the links with puppies, you'll see what it still works.
There are a few things you can do to determine what element the click event happened on. The most important bit here is the `let t = e.target;` line, which stores the element that is currently reported by the event capturing/bubbling cycle. If I want to react to a link, I check if a `href` exists on the target. If I want to react to a list item, I compare the `nodeName` to `LI`. Notice that node names are always uppercase if you do that kind of checking.
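One refinement worth knowing (my own addition, not from the original article): `Element.closest()` matches the clicked node or any of its ancestors against a CSS selector, which also handles clicks on markup nested inside the links. A small reusable helper:

```javascript
// Generic delegation helper: one listener on the parent, and the handler
// fires for any current or future descendant matching `selector`.
function delegate(parent, type, selector, handler) {
  parent.addEventListener(type, (e) => {
    // closest() walks up from the click target looking for a match
    const match = e.target.closest && e.target.closest(selector);
    if (match && parent.contains(match)) handler(e, match);
  });
}

// Usage with the dogs list from above:
// delegate(document.querySelector('#dogs'), 'click', 'a', (e, link) => {
//   output.innerHTML = link.innerText;
//   e.preventDefault();
// });
```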
I really like event delegation as it gives me a lot more flexibility and I don't have to worry about changes in content. The handler just lies in wait until it is needed.
| codepo8 |
503,519 | All the resources you need for Frontend development | I've read many articles about resources for Frontend development. But every time I needed a resource... | 0 | 2020-11-01T16:21:16 | https://dev.to/codingknite/all-the-resources-you-need-for-frontend-development-3jeo | webdev, resources, beginners, tutorial | I've read many articles about resources for Frontend development. But every time I needed a resource that I'd read about, I had to go through the hassle of looking for the article again.
To solve this problem I decided to put all these resources in [one place](https://github.com/developer-resources). Somewhere I could just look if I needed a resource.
I also decided to include some resources for those that are learning new technologies.
In the list I included resources for HTML, CSS, JavaScript, React, Icons, Illustrations, fonts and many more.
I combined all these resources and put them in [this Repository](https://github.com/developer-resources/frontend-development).
I also made the repo open source so that others can include resources that I didn't.
Feel free to check it out and contribute to it if you can. I hope it helps you out. | codingknite |
585,009 | Selenium Web Driver Vs Proxies API | The world of web scraping is varied and complex and Proxies API sits at one of the most crucial junct... | 0 | 2021-02-18T09:52:21 | https://proxiesapi.com/blog/selenium-web-driver-vs-proxies-api.html.php | The world of web scraping is varied and complex and Proxies API sits at one of the most crucial junctions. Allowing web scrapers/crawlers to bypass IP blocks by using a single API endpoint to access our 20 million-plus high-speed proxies on rotation.
One of the questions we get frequently is how we are different from services like OctoParse or Diffbot. Many times it is like comparing apples and oranges, but when we send this comparison table to our customers' developer team, their CXO, or their marketing or SEO team, they can typically tell quite easily whether we are a suitable service or not.
So here is how we are different from Selenium Web Driver...
Selenium was built for automating tasks on web browsers but is very effective in web scraping as well...
Here you are controlling the Firefox browser and automating a search query...
It's language agnostic, so here is the same thing accomplished using JavaScript.
Selenium Web Driver vs Proxies API

| Aspect | Proxies API | Selenium Web Driver |
| --- | --- | --- |
| Who is it for? | Developers | Developers |
| Cost | 1,000 free calls; starts at $49 pm | Open source |
| API access | Yes | Yes |
| Size of project | Enterprise, medium, small | Enterprise, medium, small |
| Easy to set up | Single API call for everything | Manual setup |
| Product/Service | Product | Product |
| Rotating proxies | Yes | No |
| Single API? | Yes | No |
| Desktop app | No | No |
| Visual scraping | No | No |

| proxiesapi | |
503,559 | DuckDB: an embedded DB for data wrangling | Last week, ThoughtWorks released it's latest edition of the Technology Radar. One of the new entries... | 0 | 2020-11-01T19:14:58 | https://dev.to/volkmarr/duckdb-an-embedded-db-for-data-wrangling-4hfm | datascience, sql, python, duckdb | Last week, ThoughtWorks released its latest edition of the [Technology Radar](https://www.thoughtworks.com/radar/platforms). One of the new entries to the platform section was [DuckDB](https://duckdb.org). This new DB sounded interesting, so I decided to check it out.
# What is DuckDB
ThoughtWorks describes it as
> DuckDB is an embedded, columnar database for data science and analytical workloads. Analysts spend significant time cleaning and visualizing data locally before scaling it to servers. Although databases have been around for decades, most of them are designed for client-server use cases and therefore not suitable for local interactive queries. To work around this limitation analysts usually end up using in-memory data-processing tools such as Pandas or data.table. Although these tools are effective, they do limit the scope of analysis to the volume of data that can fit in memory. We feel DuckDB neatly fills this gap in tooling with an embedded columnar engine that is optimized for analytics on local, larger-than-memory data sets.
Similar to SQLite, it's a relational database, that supports SQL, without the necessity of installing and managing an SQL server. Additionally, it is optimized to be super-fast, even with large datasets, that don't fit in memory.
# Test drive
### Test data creation
To test a database, first you need some data. So I created a [python script](https://gist.github.com/VolkmarR/c4dba35037e2a4e438189ec90269bcbc) and used [Faker](https://github.com/joke2k/faker) to create the following CSV files:
```
persons.csv (10.000 rows)
id,name,street,city,email
1,Ronald Montgomery,300 Smith Heights Apt. 722,Shannonview,arellanotyler@ramirez.com
books.csv (10.000 rows)
1,978-0-541-64306-5,Exclusive systemic knowledge user,1,27.31
orderItems.csv (1.000.000 rows)
id,person_id,book_id,quantity,date
1,7001,47034,3,2020-08-16
```
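The linked script relies on Faker for realistic values. If you only need files of the same shape, a stdlib-only sketch (with obviously fake values) works too — the column layout below mirrors the samples above:

```python
import csv
import random

random.seed(42)  # reproducible output

# persons.csv — same columns as the sample above
with open('persons.csv', 'w', newline='') as f:
    w = csv.writer(f)
    w.writerow(['id', 'name', 'street', 'city', 'email'])
    for i in range(1, 10_001):
        w.writerow([i, f'Person {i}', f'{i} Main St', 'Springfield',
                    f'person{i}@example.com'])

# orderItems.csv — each row points at a person and a book
with open('orderItems.csv', 'w', newline='') as f:
    w = csv.writer(f)
    w.writerow(['id', 'person_id', 'book_id', 'quantity', 'date'])
    for i in range(1, 1_000_001):
        w.writerow([i, random.randint(1, 10_000), random.randint(1, 10_000),
                    random.randint(1, 5), '2020-08-16'])
```

books.csv can be generated the same way; `read_csv_auto` doesn't care where the files came from.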
### Installation of DuckDB
In order to use it, you have to install the DuckDB library. This is done using `pip install duckdb==0.2.2`
### The test
For the test, I defined the following task: create a CSV file that contains the total amount of the sold books (quantity * price) per category.
This is the code to solve this task
```python
import duckdb
from time import time
start = time()
# Connect to database.
# If no filename is specified, the db will be created in memory
conn = duckdb.connect()
# Create tables and load data from CSV files
conn.execute("CREATE TABLE persons as Select * from read_csv_auto ('persons.csv')")
conn.execute("CREATE TABLE books as Select * from read_csv_auto ('books.csv')")
conn.execute("CREATE TABLE orderItems as Select * from read_csv_auto ('orderItems.csv')")
# Execute the query to get the result and use copy to export it as CSV file
conn.execute("""copy (SELECT category, round(sum(quantity * price), 2) amount FROM orderItems
inner Join persons on person_id = persons.id
inner Join books on book_id = books.id
group by category
order by category) to 'result.csv' (HEADER)""")
# Print execution time
print("Executed in ", time() - start)
```
The execution time is around 2 seconds on my PC and the result file looks like this:
```
category,amount
1,13203562.05
2,13120658.42
3,12378199.17
4,12183193.4
5,13450846.14
6,13111841.91
7,12438200.33
8,12750379.26
9,12881481.69
10,12118417.6
```
# Summary
So what do I think about DuckDB after this quick test? I have to say, I really like it. I've worked with SQL for a long time, and thanks to DuckDB, I can reuse this skill to wrangle data. I can work in memory and seamlessly switch to using a database file if the data exceeds memory.
What do you think? Ready to give DuckDB a try? BTW: it also [plays nice with pandas](https://duckdb.org/docs/api/python). | volkmarr
504,059 | Getting my hands dirty with coding !!! | Hi ✨developers ✨, I am 👤 Suraez aka Suraj from Nepal. I am a CS student and passionate about coding a... | 0 | 2020-11-02T11:42:20 | https://dev.to/suraez/getting-my-hands-dirty-with-coding-2il3 | hacktoberfest | Hi ✨developers ✨, I am 👤 Suraez aka Suraj from Nepal. I am a CS student passionate about coding and programming. I joined DEV 'cause when it came to learning something, there were always articles written by DEV people with thorough explanations. So I would also like to share my learnings and learn from other DEV people at the same time.

I participated in the Hacktoberfest challenge this year (#hacktoberfest2020). Apparently, this open source competition has been around for more than 5 or 6 years. Had I known about this competition before, I would have participated every year. So, I look forward to getting notified about such events and competitions through DEV.
Since I am from Nepal, I doubted whether I could participate in this Hacktoberfest challenge, but DigitalOcean, Intel and DEV made it all easy. Kudos to DigitalOcean, Intel and DEV for giving students like us a platform to leave a mark on the world.
Here are the 4 PRs that counted for Hacktoberfest2020 challenge:
https://github.com/furaxdev/portofolio/pull/1
https://github.com/Arvind19999/meslaApp/pull/1
https://github.com/pujamudbhari/myportfolio/pull/1
https://github.com/prakriti75/quote-generator/pull/1
Since this challenge (#hacktoberfest2020) literally made me see how I can contribute to this huge open source community, I am gonna contribute as much as I can. As of now, I intend to participate in Google Summer of Code (GSoC).
Thanks for reading my post. ✍
| suraez |
504,757 | MY HACKTOBERFEST EXPEREINCE | This was my first Hacktoberfest . Being a fresher, I always thought that my level was not that high f... | 0 | 2020-11-03T03:38:16 | https://dev.to/vishesht27/my-hacktoberfest-expereince-2gbl | hacktoberfest, opensource, github, git | This was my first Hacktoberfest. Being a fresher, I always thought that my level was not high enough to clear this event. But when I started, I started with the goal of learning new things, not just completing the event.
The things I learned from Hacktoberfest 2020:
1. Git and Github
2. Java
3. Working on issues
| vishesht27 |
516,266 | Setting Up UFW on Ubuntu Server | UFW (Uncomplicated Firewall) is a program that allows you to internally control ports on your Linux... | 0 | 2020-11-16T12:47:03 | https://bowlerdesign.tech/posts/setting-up-ufw-on-ubuntu-server/ | ---
title: Setting Up UFW on Ubuntu Server
published: true
date: 2019-12-15 11:29:00 UTC
tags:
canonical_url: https://bowlerdesign.tech/posts/setting-up-ufw-on-ubuntu-server/
---

[UFW](https://help.ubuntu.com/community/UFW) (Uncomplicated Firewall) is a program that allows you to internally control ports on your Linux instance. This gives you the ability to forward ports from your machine.
The common use of a firewall is to control the ports that have access from the outside world, for instance, running a website would need ports `80`/`443` exposed on your network to be able to route your site.
UFW is different: think of port forwarding, but between local instances. You can lock down internal exposure to just port `22` (`ssh`), for example.
## Why? What's the point?
Security.
In the coming weeks, I'll be writing blog posts on how I have set up my jumpbox server. UFW plays a key part in my setup. I have used UFW to only open port 22 on my jumpbox server. I can `ssh` in, but that's it. No other ports can be attacked or sniffed.
From the jumpbox server I can then **only** `ssh` into my other internal instances. This means that if I wanted to `ssh` into server `B` I would have to go via the jumpbox `A`.
## I think I get it, how can I install it?
We're running [Ubuntu Server on a Raspberry Pi](https://ubuntu.com/download/raspberry-pi). But these instructions are for all Debian instances, the Raspberry Pi is irrelevant for this tutorial.
### Let's install
1. Install UFW
`sudo apt-get install ufw`
1. Check the status of UFW
`sudo ufw status`
You should see that UFW is **disabled**
1. Let's allow some ports, it's really important that you allow your `ssh` port, otherwise you can lose access when we get round to enabling UFW.
The default `ssh` port is **22** unless you have changed the default port.
`sudo ufw allow 22`
You can use the above command to allow the necessary ports for your instance. We're just going to stick with port `22` for this example.
1. Let's enable UFW
`sudo ufw enable`
## That's it
You now have a firewall running on your local instance, locked down to be only accessible by port `22`.
In the future, if you're running services on this box, you'll need to expose any other ports that you want to have access outside of your machine. Let's say you set up [OpenVPN](https://openvpn.net/), you have to expose port `1194` on the machine it's running on.
## Thanks for reading
Thanks for reading, hope I've helped in some way!
<!--kg-card-end: markdown--> | edleeman | |
525,139 | 💥 Best of #explainlikeimfive | If you can't explain it simply, you don't understand it well enough. -- Albert Einstein Explain... | 26,922 | 2020-11-25T20:20:03 | https://dev.to/jmfayard/best-of-explainlikeimfive-3a0f | bestofdev, explainlikeimfive, beginners | > If you can't explain it simply, you don't understand it well enough.
> -- Albert Einstein
Explain me like I'm five is the best tag on DEV.to and you should definitely follow it
{% tag explainlikeimfive %}
I have compiled for you some incredibly pedagogical answers on this website. Specifically I selected answers that were enlightening yet concise, and that explained with words only - no code.
There's lots of great content here, so **🔖 bookmark the article** if you want to read it when you have more time.
Without further ado, here is the selection:
<!-- TOC -->
- [DNS](#dns)
- [TCP](#tcp)
- [Websockets](#websockets)
- [GraphQL](#graphql)
- [What is programming?](#what-is-programming)
- [What is a programming language?](#what-is-a-programming-language)
- [GitHub](#github)
- [Recursion](#recursion)
- [Dependency Injection](#dependency-injection)
- [Optionals](#optionals)
- [Promises](#promises)
- [Async Await](#async-await)
- [Open-Source](#open-source)
- [Smoke Testing](#smoke-testing)
- [Vue](#vue)
- [React](#react)
- [Redux](#redux)
- [What is DevOps?](#what-is-devops)
- [CI / CD](#ci--cd)
- [Containers](#containers)
- [Docker & Kubernetes](#docker--kubernetes)
<!-- /TOC -->
# Developer roles
{% comment i9k7 %}
# Networking
## DNS
{% comment 127gi %}
## TCP
{% comment ehm %}
## Websockets
{% comment g0g %}
## GraphQL
{% comment hdi %}
# Programming
## What is programming?
{% comment hh25 %}
## What is a programming language?
{% comment hfil %}
## GitHub
{% comment 3jei %}
## Recursion
{% comment 62mf %}
## Dependency Injection
> When you go and get things out of the refrigerator for yourself, you can cause problems. You might leave the door open, you might get something Mommy or Daddy doesn't want you to have. You might even be looking for something we don't even have or which has expired.
> What you should be doing is stating a need, "I need something to drink with lunch," and then we will make sure you have something when you sit down to eat.
[John Munsch](https://stackoverflow.com/questions/1638919/how-to-explain-dependency-injection-to-a-5-year-old/1638961#1638961)
## Optionals
{% comment 3395 %}
## Promises
{% comment 9e0g %}
## Async Await
{% comment nmjk %}
# Methodologies
## Open-Source
{% comment 4444 %}
## Smoke Testing
{% comment 14a0f %}
# Frameworks
## Vue
{% comment hbbp %}
## React
{% comment ha19 %}
## Redux
{% comment oi6 %}
# DevOps
## What is DevOps?
{% comment 3ghf %}
## CI / CD
{% comment bjlp %}
## Containers
{% comment f7l %}
## Docker & Kubernetes
{% comment filh %}
## Did I miss something?
If you have found other explanations which are enlightening and concise, and which use only analogies, words and images, no code, please add them in the comments :)
---
_That was it, thanks for reading! If you’d like to ask a reader question, you can do so at my [“ask me” page](https://jmfayard.dev/contact/) on https://jmfayard.dev/_ | jmfayard |
790,772 | Frontend Mentor - Stats Preview Card Component | Stats Preview Card Component design from the website Frontend... | 0 | 2021-08-13T15:38:02 | https://dev.to/aituos/frontend-mentor-stats-preview-card-component-58mk | frontendmentor, webdev, css | 
Stats Preview Card Component design from the website Frontend Mentor.
https://www.frontendmentor.io/challenges/stats-preview-card-component-8JqbgoU62
You can see my finished version here:
[Github repo](https://github.com/Aituos/FM--Stats-Preview-Card-Component) | [Live version](https://aituos.github.io/FM--Stats-Preview-Card-Component/)
For me, there were two challenges in this... uh, challenge, that I didn't really know how to tackle.
First of all:
## The image you get is in grayscale
Okay cool, I thought, the solution would be to throw it into something like GIMP and just colorize it - but there's nothing to learn from that really. I already know how to do it. I knew it was possible to do this with just CSS and I recently saw a video on the very subject. Obviously I couldn't remember anything from the video, so I searched for it with no luck.
(Takes a deep breath)
The next hours were full of convoluted ideas involving absolute positioning of precisely-sculpted half-transparent pinkish divs over the image or using ::after pseudo-elements to achieve the desired end result and it was all *horrible*.
After two or three days I finally caved and looked for a solution on YouTube. This is the video I selected:
https://www.youtube.com/watch?v=2tlbKm8_4mg
I watched the first few minutes, then decided to get distracted and look at the comments. One comment in particular casually mentioned background-blend-mode.
background-blend-mode...
:weary:
BACKGROUND BLEND MODE, oh my god.
It was relatively quick and painless after that. I simply created a div, set its background-image and background-color, and set its background-blend-mode to soft-light. I then tweaked the color to make it look as close to the preview as possible. Done. Three lines of code (and I think it might be possible to do with just two).
Later on I also found out about mix-blend-mode, which would allow me to use an img element and colorize it, but for now I decided to just leave the divs.
And the second issue was...
## The layout itself
The entire card is simple, but there's one element that changes its position - the image. In the mobile version it's at the top, so naturally I'd place it at the beginning of the html. But the desktop version has the image on the right, which means that in the html I'd place it at the end.
Since you are given two images anyway, I decided to create two elements and hide them with "display: none" when needed.
I'm honestly not sure how a problem like this could be solved otherwise. Imagine a website where the exact same image is supposed to appear in multiple places - depending on screen size and on what the user does. I mean, you can do it with just html and css, but I feel like javascript would be very helpful.
Anyway, this made me think of a few things. First of all - if I hide something with "display: none", is it still visible to screen readers? Because that could lead to a lot of confusion... Fortunately the short answer is **no**, it's as if the element was never there.
Second of all - if one of the images is hidden right from the start, is it still downloaded and loaded in the background? Well, **yes**. It is. But! There's a "picture" element that can be used to solve this problem. I haven't tried it myself yet, but there's a very good explanation of how it works on MDN:
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/picture | aituos |
581,355 | Object-Oriented JavaScript — Object Properties | Check out my books on Amazon at https://www.amazon.com/John-Au-Yeung/e/B08FT5NT62 Subscribe to my em... | 0 | 2021-01-24T21:54:48 | https://thewebdev.info/2020/08/17/object-oriented-javascript%e2%80%8a-%e2%80%8aobject-properties/ | webdev, programming, beginners, javascript | **Check out my books on Amazon at https://www.amazon.com/John-Au-Yeung/e/B08FT5NT62**
**Subscribe to my email list now at http://jauyeung.net/subscribe/**
JavaScript is partly an object-oriented language.
To learn JavaScript, we got to learn the object-oriented parts of JavaScript.
In this article, we’ll look at object properties.
### Object Properties and Attributes
Object properties have their own attributes.
They include the enumerable and configurable attributes.
And they’re both booleans.
Enumerable controls whether the property shows up when the object's properties are enumerated, for example in a `for...in` loop.
And configurable, if it's `false` , means the property can't be deleted and its attributes can't be changed.
We can use the `Object.getOwnPropertyDescriptor` method by writing:
```
let obj = {
name: 'james'
}
console.log(Object.getOwnPropertyDescriptor(obj, 'name'));
```
And we get:
```
{value: "james", writable: true, enumerable: true, configurable: true}
```
from the console log.
### Object Methods
ES6 comes with various object methods.
### Copy Properties using Object.assign
We can use the `Object.assign` method to copy properties.
For instance, we can write:
```
let a = {}
Object.assign(a, {
age: 25
})
```
Then `a` is:
```
{age: 25}
```
We copy the `age` property to the `a` object, so that’s what we get.
`Object.assign` can take multiple source objects.
For instance, we can write:
```
let a = {}
Object.assign(a, {
a: 2
}, {
c: 4
}, {
b: 5
})
```
Then `a` is:
```
{a: 2, c: 4, b: 5}
```
All the properties from each object will be copied.
If there are any conflicts:
```
let a = {
a: 1,
b: 2
}
Object.assign(a, {
a: 2
}, {
c: 4
}, {
b: 5
})
console.log(a)
```
then the later sources take precedence.
So `a` is `{a: 2, b: 5, c: 4}` after the call.
### Compare Values with Object.is
We can compare values with `Object.is` .
It’s mostly the same as `===` , except that `NaN` is equal to itself.
And `+0` is not the same as `-0` .
For instance, if we have:
```
console.log(NaN === NaN)
console.log(-0 === +0)
```
Then we get:
```
false
true
```
And if we have:
```
console.log(Object.is(NaN, NaN))
console.log(Object.is(-0, +0))
```
We get:
```
true
false
```
### Destructuring
Destructuring lets us decompose object properties into variables.
For instance, instead of writing:
```
const config = {
server: 'localhost',
port: '8080'
}
const server = config.server;
const port = config.port;
```
We can write:
```
const config = {
server: 'localhost',
port: '8080'
}
const {
server,
port
} = config
```
It’s much shorter than the first example.
Destructuring also works on arrays.
For instance, we can write:
```
const arr = ['a', 'b'];
const [a, b] = arr;
```
Then `a` is `'a'` and `b` is `'b'` .
It’s also handy for swapping variable values.
For instance, we can write:
```
let a = 1,
b = 2;
[b, a] = [a, b];
```
Then `b` is 1 and `a` is 2.
### Built-in Objects
JavaScript comes with various constructors.
They include ones like `Object` , `Array` , `Boolean` , `Function` , `Number` , and `String` .
These can create various kinds of objects.
Utility objects includes `Math` , `Date` , and `RegExp` .
Error objects can be created with the `Error` constructor.
### Object
The `Object` constructor can be used to create objects.
So these are equivalent:
```
const o = {};
const o = new Object();
```
The 2nd one is just longer.
They both inherit from the `Object.prototype` which has various built-in properties.
For instance, there’s the `toString` method to convert an object to a string.
And there's the `valueOf` method to return a representation of an object.
For simple objects, `valueOf` just returns the object. So:
```
console.log(o.valueOf() === o)
```
And we get `true` .
### Conclusion
Objects have various properties and methods.
They inherit various methods that let us convert to different forms.
| aumayeung |
581,730 | Infrastructure as Code: the 5 Questions to Ask before You Start | According to Wikipedia, infrastructure as code (IaC) is the process of managing and provisioning comp... | 0 | 2021-03-15T15:41:42 | https://dev.to/langyizhao/infrastructure-as-code-the-5-questions-to-ask-before-you-start-1i0p | According to Wikipedia, infrastructure as code (IaC) is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
Although I don’t think I can define it any better, I believe that “as code” implies slightly more than those definition files. Especially since over the years, we as developers have accumulated so many best practices and principles dealing with code, there is no reason we can’t apply those experiences to the infrastructure if we make it as code.
## Question 0: Should I consider Infrastructure as Code?
Consider this a bonus question, but an important one before embarking on any path towards IaC. Let’s assume that if you are reading this article, you are either considering IaC or more likely than not, already starting your journey.
Infrastructure as Code (IaC) is an inevitable phase of the DevOps movement. Automation is a cornerstone of DevOps culture [1], and after you have finally automated your codebase with version control and CI/CD (or at least CI) pipelines, manual maneuvers of the underlying infrastructure feel out of place. People may call it by different names, like playbooks, cookbooks, manifests, and templates, but it is all infrastructure code.
One common objection to Infrastructure as Code (or perhaps any automation) is "We don’t make changes often enough to justify automating them" [2].

Intuitively, it may feel true that changes happen much less often at the infrastructure level than at the code level.
But wait a moment, have you considered configuration changes? How about security patches (like updating to the latest AMI if you are using AWS)? Scaling up your cluster before Black Friday? Have you ever restored your database from a snapshot?
These are all changes related to infrastructure and can be made easier with IaC.
Even when infrastructure changes are infrequent, there are many additional benefits to a stack created via IaC. For example, it takes much less effort to fully control your resources (e.g. cleaning them up completely or spinning them up again later), which is ideal for any POC type of work, especially in cloud environments. I've seen teams unknowingly keep huge orphan EC2 and RDS instances running forever, contributing steady revenue to Amazon, only because they started them manually and lost track of them very quickly.
So from our perspective, the answer to whether you should consider IaC (even if you don’t adopt it) is most definitely yes.
## Question 1: Infrastructure as Code or no Infrastructure at all (a.k.a. serverless)?
It is beneficial to have your infrastructure as code. But sometimes less is more, and you don't even need your own infrastructure - your cloud provider takes care of it for you under Function-as-a-Service (FaaS), or serverless.
I am a heavy user of AWS Lambda Functions because some tasks are so decoupled from other services that they are literally nothing but functions.
A simple scheduled job to send out notifications to either humans or external systems? A good candidate for a serverless function. Of course, you can have a cron job in an EC2 instance but that would be crazy to have its own instance (a waste of resource and a burden for maintenance), and probably too coupled to have the instance shared with many other unrelated jobs.
A filter to route important info via a webhook of your log aggregator to your Slack? Good for serverless too. You can write, test, and even deploy one in an hour and forget about it.
A popular cloud-agnostic choice of tooling is the [Serverless Framework](https://github.com/serverless/serverless). Regarding languages, Go, Python, and Node.js all work very well. C#, Java (or JVM languages using the Java runtime) are not preferred for AWS Lambda because of the infamous cold-starts [3].
My own rule of thumb is that if I can handle a task with no more than 10 files of source code (excluding external libraries) in Python or Node.js, I would consider using FaaS. Avoid being too obsessed, though: the tendency to split services small enough to fit into functions can turn microservices into nano services, which is considered an anti-pattern by many [4].
All being said, investing in serverless options for a subset of your services doesn't contradict the codification of your infrastructure in the big picture. You still can consider a not-so-standalone serverless function as a part of your infrastructure, connected with other parts of your system by API calls or streaming (SQS, Kinesis, Kafka, etc.)
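For a feel of how small such a function can be, here is a minimal, hypothetical Lambda-style handler in Python — the event shape and the log-filtering logic are invented for illustration, and no cloud account is needed to exercise it locally:

```python
def handler(event, context):
    """Entry point in the style of an AWS Lambda handler.

    Filters the 'important' records out of a (hypothetical) log-webhook
    payload — the kind of small glue task that fits FaaS well.
    """
    records = event.get("records", [])
    important = [r for r in records if r.get("level") == "ERROR"]
    return {
        "statusCode": 200,
        "body": f"forwarded {len(important)} of {len(records)} events",
    }


# Invoked locally — unit testing a function like this needs no infrastructure
result = handler({"records": [{"level": "ERROR"}, {"level": "INFO"}]}, None)
print(result["body"])  # → forwarded 1 of 2 events
```

Because the handler is just a plain function, you can test it in milliseconds and let the platform worry about scaling and scheduling.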
## Question 2: To Terraform or not to Terraform?
Among Infrastructure as Code tools, Terraform's popularity has undeniably grown in the last few years. Two of the four teams I most recently worked with were using Terraform as the primary tool (combined with other tools such as Kubernetes), one other was using plain CloudFormation, and another went completely serverless. To decide whether Terraform is the right choice for your infrastructure, we can start with some comparisons.
### Terraform vs. Cloud-specific Options
Every major cloud provider invented their own template format that you can use to draft your infrastructure code such as ARM Template from Azure and CloudFormation from AWS.
A very obvious advantage of using these formats is that no one can be more familiar with their cloud than the people developing them, thus you can expect that those templates reflect all of the latest cool features the cloud provider has added or is planning to add. If using cloud-agnostic options, you would sometimes have to wait for the community to come up with a solution to provision something only offered by a specific cloud provider. Tools with a larger community and ecosystem, like Terraform, would likely react faster than those with a smaller community.
On the other hand, using the template from a certain cloud provider would inevitably result in vendor lock-in. You can't run your CloudFormation on GCP or Azure, and it is usually harder than it looks to translate among templates.
Using cloud-agnostic tools like Terraform makes it theoretically easier to switch among cloud providers. Some companies run stacks on multi-cloud with intentional data redundancy in each cloud because, in reality, you can't escape vendor lock-in if that vendor holds all your data.
Besides the concern of vendor lock-in, Terraform is also useful for the polycloud strategy, which is slightly different from multi-cloud. Polycloud implies you leverage components from different cloud providers in the same infrastructure stack. As an extreme case, you want a GCP Compute Engine instance to run some Azure Machine Learning task and save the result into AWS S3. Using Terraform with 3 providers configured is a much saner choice than writing one template for each cloud.
### Terraform vs. Other Cloud-agnostic Options
Excluding the cloud-specific options still leaves a lot of options out there. Big players including Chef, Puppet, and Ansible (though some may argue that they are more server configuration tools than infrastructure creating tools [2]), and my favorite is Kubernetes (commonly stylized as K8s).
Kubernetes is incredibly powerful and has an even larger ecosystem than Terraform. However, it might be unfair to compare them, because Kubernetes is a container orchestration system that happens to meet the needs of Infrastructure as Code, while Terraform is scoped much more narrowly.
In Kubernetes, you define all the resources in your infrastructure via manifest files (usually in the form of YAML), and the control plane manages them for you, almost magically. Within an established K8s cluster, you need to know even fewer details about the underlying hardware than with Terraform to provision new stacks. And it's much less likely that you'll mess things up if role-based access control (RBAC) is configured correctly in that cluster.
However, setting up a Kubernetes cluster, especially for the first time, can be very hard. It is easier if your cloud provider offers managed options such as AWS EKS or GCP GKE, but it is still pretty involved to get it configured properly.
OpenShift is a similar option to Kubernetes; in fact, it runs K8s underneath and provides an extra abstraction layer. Some may find it useful, but for me vanilla K8s provides just the right level of abstraction, and yet another layer feels a little redundant and doesn't justify the new moving parts introduced.
Besides these established options, many newer, smaller players have emerged that you may be interested in. [Pulumi](https://github.com/pulumi/pulumi) enables you to write your infrastructure code with the same languages you would usually use for your application code, such as Python, JS, TS, and Go, in case you are not a fan of the declarative DSL models used by most tools.
### Other Pros and Cons of Terraform
The module system is a major strength of Terraform that doesn't exist natively in many cloud-agnostic or cloud-specific options. Not only does it dramatically increase the reusability of your infrastructure code, but you also get the invaluable opportunity to distill the differences between environments into just a couple of files.
```
.
├── modules
│ ├── backend
│ │ ├── main.tf
│ │ ├── user_data.sh
│ │ └── variables.tf
│ ├── documentDB
│ │ ├── main.tf
│ │ └── variables.tf
│ ├── frontend
│ │ ├── install_docker_compose.sh
│ │ ├── main.tf
│ │ ├── routings.tf
│ │ └── variables.tf
│ ├── grafana
│ │ ├── main.tf
│ │ └── variables.tf
│ ├── postgreSQL
│ │ ├── main.tf
│ │ └── variables.tf
│ └── redis
│ ├── main.tf
│ └── variables.tf
├── production
│ ├── main.tf
│ └── variables.tf
├── qa
│ ├── main.tf
│ └── variables.tf
└── staging
├── main.tf
└── variables.tf
```
In the above example, all the differences in configuration between the environments are strictly scoped to the stack-level `variables.tf` files, and those files will be very concise if your module-level `variables.tf` files have sensible defaults.
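To illustrate (the module and variable names here are hypothetical, not taken from the project above), a module-level default plus a stack-level override might look like:

```hcl
# modules/redis/variables.tf: a sensible default shared by all environments
variable "node_type" {
  type        = string
  description = "ElastiCache node type"
  default     = "cache.t3.micro"
}

# production/main.tf: only override what actually differs in this environment
module "redis" {
  source    = "../modules/redis"
  node_type = "cache.m5.large"
}
```

Environments that are happy with the default simply omit the argument, so each stack-level file documents only what makes that environment special.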
The biggest gripe about working with Terraform is the fragility of its "state file", which is `terraform.tfstate` by default. It is understandable that a state must be persisted somewhere to manage any dynamic infrastructure, but many tools abstract the trouble away from you. For example, CloudFormation provides drift detection as a service, through the console or API, and Kubernetes strives to be as stateless as possible.
Terraform leaves the duty of safekeeping the state of the stack to you, the applier of the infrastructure code. There are a few different ways of managing the state file, but the last thing you would want to do is to keep it publicly in your repo because anyone can open the file with a text editor and peek at the plaintext secrets inside. For AWS environments consider using the "s3" backend to store the state file. A properly secured private S3 bucket is required and a DynamoDB table is used for locking (the same bucket and lock table can be used for multiple stacks).
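A minimal sketch of that backend block (the bucket and table names are placeholders) might look like:

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"       # placeholder: a private, versioned S3 bucket
    key            = "staging/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"          # placeholder: DynamoDB table used for state locking
  }
}
```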
I highly recommend enabling the versioning feature on the cloud storage service containing your state file so you can restore your corrupted state file from the last working version. If you can't enable it or you store your state file with a local backend, you may want to back up the state file before applying anything nontrivial.
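If the state bucket is itself managed with Terraform, enabling versioning is a small addition (AWS provider 3.x-era syntax; the resource and bucket names are illustrative):

```hcl
resource "aws_s3_bucket" "tf_state" {
  bucket = "my-terraform-state" # placeholder name

  versioning {
    enabled = true # lets you restore the state file from an earlier version
  }
}
```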
Another not-so-obvious issue with Terraform is version compatibility. Terraform is not even 1.0 at the time of writing, more than 6 years after its initial release. Being sub-1.0, on the bright side, implies it's still rapidly evolving, but it also means each version can break backward compatibility and be forgiven. Hashicorp does provide helper commands and tools to migrate, but from my experience upgrading many stacks either from 0.11 to 0.12 or 0.12 to 0.13, there is always a chance of missing something and corrupting your state file. Again, do keep a backup of your state file! With Kubernetes, I've had far fewer issues upgrading minor versions.
## Question 3: When or how frequently should I apply my IaC changes?
After you finish coding your infrastructure, you will need to apply it to the real world to have your stack instantiated. You can choose the way to apply based on the requirements of how soon and how often changes will be made.
The fastest way to create your infrastructure from code or apply changes is to run it manually on CLI. Usually, the only other thing you need is the right access from your cloud provider. For AWS, it could be an STS token to assume an admin role or an ID/secret pair in the `.aws` folder or as environment variables.
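For example, with AWS the manual flow can be as simple as exporting credentials and running the usual commands (the values are placeholders):

```sh
export AWS_ACCESS_KEY_ID="AKIA..."         # placeholder
export AWS_SECRET_ACCESS_KEY="..."         # placeholder
export AWS_SESSION_TOKEN="..."             # only needed for temporary STS credentials

terraform init
terraform plan
terraform apply
```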
If the idea of creating stacks directly from your laptop scares you or your security team, you can run the same commands after SSHing into a dedicated host with preset system roles just enough for managing your piece of infrastructure. This manual approach gives you more flexibility in the timing of applying stack changes, with the drawback of noncompliance in many companies, unless it's just a POC project.
A more widely accepted approach is to embed your infrastructure change into a CI pipeline. There are lots of plugins pairing CI platforms with IaC tools such as [the Terraform Plugin on Jenkins](https://plugins.jenkins.io/terraform/). But more often than not, you don't even need any plugin to apply your infrastructure code in your CI. In Gitlab CI, for example, I only need to set the job image as `hashicorp/terraform` with the image tag matching my script to run any Terraform commands I would usually run in my CLI.
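As a sketch (the job name and image tag are illustrative; the empty `entrypoint` lets GitLab run the script inside the `hashicorp/terraform` image, which otherwise uses `terraform` as its entrypoint):

```yaml
plan:
  image:
    name: hashicorp/terraform:0.13.5
    entrypoint: [""]
  script:
    - terraform init
    - terraform plan -out=plan.tfplan
```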
There are also other ways to apply your changes with third-party tools. [Atlantis](https://github.com/runatlantis/atlantis), for example, applies your changes when a Pull Request containing changes in Terraform files is submitted into Gitlab. Using the Github Pull Request page as the main interaction point is an increasingly popular choice for many modern tools (e.g. the static code analysis platform [MuseDev](https://www.muse.dev/)) because it is one of the few points in the development lifecycle that draws the attention of the entire team.
Regardless of your choice, there is always the risk of building your stack into something monolithic, which slows you down in the long run. It is almost always helpful to make your changes small, modular, and independent of each other. This is especially true if you agree with the "Treat your servers like cattle, not pets" mindset [5], which means you should have the majority of your system disposable and reproducible.
For example, if you use Terraform, as we discussed above, the concept of modules will be your friend. You may want to start creating modules early and keep your root-level `main.tf` file as small as possible: ideally only a set of `module` blocks beside the mandatory `terraform` and `provider` blocks. Moreover, considering the risk of state file corruption mentioned earlier, you may want to avoid having too many modules in the same stack sharing the same `tfstate` file.
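Following that advice, a stack-level `main.tf` for the tree shown earlier could be little more than (the `environment` variable and provider settings are assumptions):

```hcl
terraform {
  required_version = "~> 0.13"
}

provider "aws" {
  region = "us-east-1"
}

module "frontend" {
  source      = "../modules/frontend"
  environment = var.environment
}

module "backend" {
  source      = "../modules/backend"
  environment = var.environment
}
```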
Want to be even more confident to apply your change as frequently as you want? Consider testing your Infrastructure code, which is the next question to ask.
## Question 4: How to Test my Infrastructure Code?

When you catch a colleague testing his code in production, will you recall this famous "I test in production" meme and laugh at him?
Well, how about when his code is "Infrastructure as Code"? Will your reaction change to "Oh, testing THAT in production is OK"? But is infrastructure code really so different from the application code we are familiar with that it is not testable in lower environments?
To answer the question let's first admit that infrastructure code IS harder to test. For one thing, you can hardly create a miniature version of the full infrastructure stack in your laptop, thus local testing is not always feasible. For another, mocking an upstream infrastructure is much harder than mocking an upstream service like in your application code because of the difference in the level of abstraction.
Fortunately, being unable to test something locally doesn't mean you have to test it in production. You should still have at least one environment lower than production, whether you call it testing, QA, staging, or something else. Deploying to such an environment is itself a test of your infrastructure code.
Assertions make tests possible, and for each IaC tool there are companion tools to help you verify the actual end state of an infrastructure change against the desired one. For Terraform there is [terratest](https://github.com/gruntwork-io/terratest), for CloudFormation there is [taskcat](https://github.com/aws-quickstart/taskcat), and [inspec](https://github.com/inspec/inspec) is for Chef. Even if you don't use these assertion tools, simply running the integration tests prepared for your application code would often expose potential problems in your infrastructure code. If you use CI/CD to apply your infrastructure changes, put the pipeline steps of applying changes to a lower environment AND integration testing before the step that changes production infrastructure.
However, testing and assertion in a lower environment only make sense when you run the same infrastructure code for your production environment. Take some time to modularize and reuse the infrastructure code you share among environments instead of copy/paste them around.
Sometimes, instead of full-fledged assertive testing, static code analysis can be a much more pragmatic target to automate. And yes, you can do static code analysis on your infrastructure. Most IaC tools include CLI commands to validate and lint your infrastructure code out of the box, and third-party tools can go beyond that to find certain misconfigurations and security-related issues as well. Below is an example of [checkov](https://github.com/bridgecrewio/checkov) scanning an IaC stack coded with Terraform, triggered via [MuseDev](https://www.muse.dev/).

## Question 5: Will IaC mitigate my Tech Debt?
Infrastructure as Code, by itself, is just a tool; like the cloud, DevOps, or AI, it will not magically solve your problem. You have to decide if it is the right tool for your particular fix.
We usually consider technical debt as a combination of design debt and code debt [6], but here let's take each level apart and evaluate its relationship with IaC:
### IaC and Design Debt
Here we limit the term "design" at the architectural level, rather than the application level.
Although in an ideal world a good architect should defer all the decisions at the infrastructure level as long as possible [7] to avoid premature constraints, the rubber has to meet the road eventually. To materialize the system, you have to decide on the exact type of DB, queue, server, logging, monitoring, alerting, and everything that you initially abstracted away from the developers of the application code. And some of the decisions are bound to be wrong the first time however experienced you are, because of the lack of information, which would be collected only after the initial decisions are made.
You will want to be able to refactor your infrastructure, continuously in some cases, after you start to get more information. You will want to be agile. And this is when you reap the benefits of Infrastructure as Code.
In places where IaC was not adopted at all or was implemented incompletely as an afterthought, I've seen infrastructure such as application servers drift from their original state so much that no one dares to make changes to them. Some call these "snowflake systems" (not to be confused with the Snowflake data warehouse) because they are so unique that no one can reproduce them [2]. With such systems in place, any attempt to refactor your code would either fail or be dramatically slowed down, falling into the automation fear spiral [8].
Contrarily, a proper implementation of IaC results in disposable and replaceable modules of infrastructure whose failures you can quickly troubleshoot and resolve. The effort and risk of swapping new components in and old ones out is finally reduced to a level where you can experiment with and refactor the details of your architectural design. And this is how IaC helps you solve your tech debt.
### IaC and Code Debt
When you talk about spaghetti, smelly, inflexible, or legacy code, you know you are talking about the code-debt side of tech debt.
Unfortunately, Infrastructure as Code, however good it is, will not help you refactor your application code. In fact, these two types of code shouldn't even know the details of each other.
Ideally, thanks to the power of the cloud and containerization, your application layer should be decoupled from the runtime platform hosting it, and the runtime platform should be even further decoupled from the infrastructure platform supporting it. Your application code and infrastructure code should be in parallel dimensions.
As a result, you still need your good old red/green/refactor cycles to gradually improve the quality of your application code. Even more, you may want to start refactoring your infrastructure code as well from now on, since it is not only "real" code, but we also have ways to test it (see Question 4).
Fortunately, nowadays we have plenty of tools to help us fight code debt. For example, static code analysis tools with a properly configured ruleset can easily give you actionable refactoring suggestions for source code in almost any language [9], and modern Continuous Assurance platforms such as [MuseDev](https://www.muse.dev/) are all you need to integrate those tools into your own repo.
## Summary or TLDR
* Q0: Should I consider Infrastructure as Code?
> Suggestion: Yes if you have a cloud environment, public, hybrid or private.
* Q1: Infrastructure as Code or Serverless?
> Suggestion: For small standalone services, try serverless. For complex architecture, IaC or both.
* Q2: To Terraform or not to Terraform?
> Suggestion: Use Terraform if you worry about vendor lock-in. Be aware of other options (such as K8S) if you worry about getting locked into Terraform itself.
* Q3: When or how frequently should I apply my IaC changes?
> Suggestion: The smaller the unit of your infrastructure stack is, the more frequently you can change it.
* Q4: How to Test my Infrastructure Code?
> Suggestion: Lint your infrastructure code before applying your changes. Optionally use assertions to verify the deployment.
* Q5: Will IaC mitigate my Tech Debt?
> Suggestion: IaC helps you fix your design debt, but you need other tools to fix your code debt.
[1] [DevOpsCulture](https://martinfowler.com/bliki/DevOpsCulture.html)
[2] [Infrastructure as Code, 2nd Edition - Kief Morris](https://books.google.com/books/about/Infrastructure_as_Code_2nd_Edition.html?id=VYtAzQEACAAJ)
[3] [New for AWS Lambda – Predictable start-up times with Provisioned Concurrency | AWS Compute Blog](https://aws.amazon.com/blogs/compute/new-for-aws-lambda-predictable-start-up-times-with-provisioned-concurrency/)
[4] [Microservices, Nanoservices, Teraservices, and Serverless - DZone Microservices](https://dzone.com/articles/microservices-nanoservices-teraservices-and-server-1#)
[5] [The History of Pets vs Cattle and How to Use the Analogy Properly | Cloudscaling](http://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle/)
[6] [Technical debt - Wikipedia](https://en.wikipedia.org/wiki/Technical_debt)
[7] [A Little Architecture](https://blog.cleancoder.com/uncle-bob/2016/01/04/ALittleArchitecture.html)
[8] [Infrastructure as Code: The Automation Fear Spiral | ThoughtWorks](https://www.thoughtworks.com/insights/blog/infrastructure-code-automation-fear-spiral)
[9] [List of tools for static code analysis - Wikipedia](https://en.wikipedia.org/wiki/List_of_tools_for_static_code_analysis)

*langyizhao*

---

# 3 Useful Websites Everyone Should Know! 2021 🔥

*published 2021-01-25*

1. https://t.co/vZrZ3XyPqG
(Send Notes That Self-Destruct)
2. https://t.co/DSTOyCfqw4
(Username, Domain & Trademark Search.)
3. https://t.co/XdiEPd7TXO
(Take beautiful, high-resolution screen captures of websites.)

*mahmoud38021*

---
title: How to use TypeScript with GraphQL
published: true
description:
series: TypeScript + GraphQL + TypeGraphQL
tags: tutorial, webdev, javascript, graphql
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/s2t6cb5wgfsrdk1d4sw7.png
canonical_url: https://www.takeshape.io/articles/how-to-use-typescript-with-graphql/
---
GraphQL is a powerful query language that allows you to build flexible APIs. It lets you define a type system for your data, so when you execute a query, it returns only the data you need.
GraphQL can offer a better developer experience when used with TypeScript because they are both typed languages. TypeScript is a typed superset of JavaScript that extends it by adding types. So, using these technologies together will certainly help you to build predictable and strongly-typed APIs.
In this tutorial, I will first explain why these technologies are worth combining, and then show you how to use TypeScript with GraphQL by building an API from scratch using TypeGraphQL.
## Prerequisites
This tutorial assumes that you have some experience using TypeScript, particularly TypeScript classes and decorators. Knowledge of GraphQL will come in handy but is not mandatory.
In this guide, we will be using [TypeGraphQL](https://typegraphql.com/), which is a modern framework for building GraphQL API using Node.js and TypeScript.
## Why use TypeScript with GraphQL
TypeScript is a popular programming language developed and maintained by Microsoft. It is a superset of JavaScript that uses static type-checking to make your code predictable.
Over the years, TypeScript has proven to be a valuable language for large codebases. TypeScript enhances code quality with its types, which adds many benefits, such as robustness, understandability, and predictability.
GraphQL solves the problem of over-fetching or under-fetching APIs. It provides one single endpoint for all requests using a _`Post`_ method to get exactly the data you need, nothing more and nothing less. In this way, GraphQL makes your queries flexible, and your API readable and easy to learn.
TypeScript and GraphQL both rely on types to make your code understandable. However, GraphQL types can only be defined in a GraphQL schema using the method `buildSchema` or a file with `.gql` extension. The GraphQL types are not supported on GraphQL resolvers because resolvers are just regular JavaScript functions, not GraphQL code. TypeScript solves this issue because, as we mentioned earlier, it’s a superset of JavaScript. So, It can set types on the GraphQL resolvers. This is the reason why using TypeScript along with GraphQL makes sense.
GraphQL handles the types for the GraphQL schemas, and TypeScript sets the types on the GraphQL resolvers. However, because you are handling multiple languages at once, strongly-typed APIs built with Node.js, GraphQL, and TypeScript can be a challenge to keep consistent and maintain.
Maintaining consistency between your schema and resolvers is what TypeGraphQL intends to solve. TypeGraphQL allows you to use TypeScript classes and decorators to create the schema, types, and resolvers of your API. It uses TypeScript to build the entire GraphQL API.

illustration
So far, we have learned why pairing TypeScript with GraphQL can be useful and why TypeGraphQL is handy for building and maintaining GraphQL APIs that use TypeScript.
Without further ado, let’s dive into the practice part and build up the GraphQL API using TypeGraphQL.
## Setting up
To use TypeScript and GraphQL, we first need to create a new Node.js app.
Open your command-line interface (CLI) and run this command:
```bash
yarn init
```
Or for `npm`
```bash
npm init
```
You’ll need to respond to a few configuration questions which will emit a `package.json` file. Next, install the dependencies needed for this tutorial.
```bash
yarn add express apollo-server-express graphql reflect-metadata type-graphql class-validator
```
For `npm`
```bash
npm install express apollo-server-express graphql reflect-metadata type-graphql class-validator
```
We will break these packages down later and explain what they do. For now, let’s install their types so TypeScript can understand the libraries.
```bash
yarn add -D @types/express @types/node nodemon
```
Or
```bash
npm install -D @types/express @types/node nodemon
```
Note that we installed `nodemon` as well to enable live-reloading whenever a change occurs.
Here’s what each of the installed libraries do:
- `express` is a minimalist web framework for Node.js
- `apollo-server-express` is a middleware that allows using `express` in an Apollo GraphQL server.
- `reflect-metadata` enables TypeScript decorators to add the ability to augment a class and its members when the class is defined. It’s a dependency of TypeGraphQL.
- `class-validator` allows TypeGraphQL to use decorator and non-decorator based validation.
Next, we need to structure the project as follows:
```bash
src
| ├── resolvers
| | └── todoResolver.ts
| └── schemas
| | └── Todo.ts
| └── index.ts
├── tsconfig.json
├── package.json
└── yarn.lock
```
Here, there are four items to underline:
- The entry point of the server (`index.ts`).
- The `schemas` folder that contains the GraphQL Schema of the project.
- The `resolvers` folder that holds the resolvers of the API.
- The `tsconfig.json` file tells TypeScript how to compile the code.
With this in place, we can now add a script to start the server in the `package.json` file.
```json
"scripts": {
"start": "nodemon --exec ts-node src/index.ts"
}
```
This script will start the server using `nodemon`. And whenever our code is updated, it will restart.
Let’s now configure the `tsconfig.json`.
```json
{
"compilerOptions": {
"emitDecoratorMetadata": true,
"experimentalDecorators": true
}
}
```
These two properties should be set to `true` to be able to use TypeScript decorators in the project.
We can now build a GraphQL Schema for the API.
## Build the GraphQL Schema
TypeGraphQL allows you to build a schema using TypeScript classes and decorators. It’s just syntactic sugar because under the hood TypeGraphQL will still generate regular GraphQL code. We will see the code generated later - for now, let’s create the schema.
- `schemas/Todo.ts`
```typescript
import { Field, ObjectType, InputType } from 'type-graphql'

@ObjectType()
export class Todo {
  @Field() id: number
  @Field() title: string
  @Field() description: string
  @Field() status: boolean
}

@InputType()
export class TodoInput implements Partial<Todo> {
  @Field() title: string
  @Field() description: string
}
```
At first, the syntax might look weird; however, it’s relatively simple to understand. It’s just TypeScript decorators and classes.
Here, the `@ObjectType()` provided by TypeGraphQL enables creating a new object or schema. The `Todo` class reflects the shape of a Todo object, and the `TodoInput` defines the expected data for adding a new Todo.
Now, let’s write the same code using GraphQL.
```graphql
type Todo {
id: ID!
title: String!
description: String!
status: Boolean!
}
input TodoInput {
title: String!
description: String!
}
```
As you can see, the logic is the same. The only difference is that here, we don’t use TypeScript.
Now we’re ready to create our GraphQL resolvers.
## Create the GraphQL resolver
Unlike GraphQL, TypeGraphQL puts the GraphQL query or mutation in the resolvers. The name of the function will be used as an endpoint when querying or mutating data.
- `resolvers/todoResolver.ts`
```typescript
import { Query, Resolver, Mutation, Arg } from 'type-graphql';
import { Todo, TodoInput } from '../schemas/Todo';

@Resolver((of) => Todo)
export class TodoResolver {
  private todos: Todo[] = []

  @Query((returns) => [Todo], { nullable: true })
  async getTodos(): Promise<Todo[]> {
    return await this.todos
  }

  @Mutation((returns) => Todo)
  async addTodo(
    @Arg('todoInput') { title, description }: TodoInput
  ): Promise<Todo> {
    const todo = {
      id: Math.random(), // not really unique
      title,
      description,
      status: false,
    }
    await this.todos.push(todo)
    return todo
  }
}
```
Here, we use the `Resolver` decorator to create a new GraphQL resolver that returns a Todo. Next, we build a GraphQL query to fetch all Todos.
After that, we define a mutation query that expects a `title`, and a `description` to add a new Todo on the array of data.
By the way, you don’t need to use async/await here, because these operations won’t take time to complete. But I add it here as a reference for when you need to deal with a real server.
Let’s now convert the code to GraphQL.
```graphql
type Mutation {
addTodo(todoInput: TodoInput!): Todo!
}
type Query {
getTodos: [Todo!]
}
```
With this in place, we can build the server that uses the schema and resolver we’ve just created.
## Create the Server
- `src/index.ts`
```typescript
import 'reflect-metadata';
import { ApolloServer } from 'apollo-server-express';
import * as Express from 'express';
import { buildSchema } from 'type-graphql';
import { TodoResolver } from './resolvers/todoResolver';

async function main() {
  const schema = await buildSchema({
    resolvers: [TodoResolver],
    emitSchemaFile: true,
  })

  const app = Express()
  const server = new ApolloServer({
    schema,
  })

  server.applyMiddleware({ app })

  app.listen(4000, () =>
    console.log('Server is running on http://localhost:4000/graphql')
  )
}

main()
```
As you can see here, we import `TodoResolver`, which needs to be passed as a resolver to the `buildSchema` method. With that, TypeGraphQL can build a new GraphQL Schema based on the Todo resolver.
Next, we pass the `schema` object (it contains the GraphQL schemas and resolvers) to Apollo to create the server.
Setting the property `emitSchemaFile: true` allows TypeGraphQL to generate a `schema.gql` file at build-time.
Let’s check if the app works. Run the following command:
```bash
yarn start
```
Or
```bash
npm start
```
Visit `http://localhost:4000/graphql`, and then add this code block below to GraphQL Playground to create a new Todo.
```typescript
mutation {
addTodo(todoInput: { title: "Todo 1", description: "This is my todo" }) {
title
description
status
}
}
```
The Todo object should be created successfully!

todo-created
Now query for the newly created Todo using the following GraphQL query.
```graphql
{
getTodos {
title
description
status
}
}
```
You should see that all Todos have been returned.

all-todos
Great! Our app looks good.
We have now finished building a GraphQL API using TypeScript.
You can find the finished project in this [Github repo](https://github.com/ibrahima92/typescript-graphql-api)
Thanks for reading
## GraphQL in TakeShape
TakeShape provides a flexible GraphQL API to manage your content easily. It gives you the ability to immediately see how changes to your content model will impact your API using the API Explorer. You don’t have to build any backend on your own, everything is set up for you. TakeShape automatically generates a secure GraphQL API to expose all of your content and services.
## Next steps
Check out these resources to dive deeper into the content of this tutorial:
- [TypeGraphQL Docs](https://typegraphql.com/docs/introduction.html)
- [TypeScript Decorators Docs](https://www.typescriptlang.org/docs/handbook/decorators.html)
- [TypeScript Classes Docs](https://www.typescriptlang.org/docs/handbook/classes.html)
- [TypeGraphQL Examples](https://typegraphql.com/docs/examples.html)
- [GraphQL Docs](https://graphql.org/learn/)
*ibrahima92*

---

# Vue Router 4 – Scroll Behavior

*published 2021-01-25*

**Check out my books on Amazon at https://www.amazon.com/John-Au-Yeung/e/B08FT5NT62**
**Subscribe to my email list now at http://jauyeung.net/subscribe/**
**Vue Router 4 is in beta and it’s subject to change.**
To build a single page app easily, we got to add routing so that URLs will be mapped to components that are rendered.
In this article, we’ll look at how to use Vue Router 4 with Vue 3.
### Scroll Behavior
We can change the scroll behavior with the Vue Router.
To do that, we add the `scrollBehavior` method to the object that we pass into the `createRouter` method.
For example, we can write:
```
<!DOCTYPE html>
<html lang="en">
<head>
<script src="https://unpkg.com/vue@next"></script>
<script src="https://unpkg.com/vue-router@4.0.0-beta.7/dist/vue-router.global.js"></script>
<title>App</title>
</head>
<body>
<div id="app">
<router-view></router-view>
<p>
<router-link to="/foo">foo</router-link>
<router-link to="/bar">bar</router-link>
</p>
</div>
<script>
const Foo = {
template: `<div>
<p v-for='n in 100'>{{n}}</p>
</div>`
};
const Bar = {
template: "<div>bar</div>"
};
const routes = [
{
path: "/foo",
component: Foo
},
{
path: "/bar",
component: Bar
}
];
const router = VueRouter.createRouter({
history: VueRouter.createWebHistory(),
routes,
scrollBehavior(to, from, savedPosition) {
return { left: 0, top: 500 };
}
});
const app = Vue.createApp({});
app.use(router);
app.mount("#app");
</script>
</body>
</html>
```
We return an object with the `left` and `top` properties.
`left` is the x coordinate and `top` is the y coordinate we want to scroll to when the route changes.
`to` has the route object of the route we’re moving to.
And `from` has the route object we’re moving from.
Now when we click on the router links, we move to somewhere near the top of the page.
`savedPosition` has the position that we scrolled to in the previous route. It is only available on back/forward (popstate) navigation.
We can use the `savedPosition` object as follows:
```
<!DOCTYPE html>
<html lang="en">
<head>
<script src="https://unpkg.com/vue@next"></script>
<script src="https://unpkg.com/vue-router@4.0.0-beta.7/dist/vue-router.global.js"></script>
<title>App</title>
</head>
<body>
<div id="app">
<router-view></router-view>
<p>
<router-link to="/foo">foo</router-link>
<router-link to="/bar">bar</router-link>
</p>
</div>
<script>
const Foo = {
template: `<div>
<p v-for='n in 100'>{{n}}</p>
</div>`
};
const Bar = {
template: `<div>
<p v-for='n in 150'>{{n}}</p>
</div>`
};
const routes = [
{
path: "/foo",
component: Foo
},
{
path: "/bar",
component: Bar
}
];
const router = VueRouter.createRouter({
history: VueRouter.createWebHistory(),
routes,
scrollBehavior(to, from, savedPosition) {
if (savedPosition) {
return savedPosition;
} else {
return { left: 0, top: 0 };
}
}
});
const app = Vue.createApp({});
app.use(router);
app.mount("#app");
</script>
</body>
</html>
```
When the `savedPosition` object is defined, we return it.
Otherwise, we scroll to the top when we click on the router links.
Now when we navigate back after scrolling, we return to the position we previously scrolled to instead of jumping to the top of the page.
### Conclusion
We can scroll to a given position on the page with Vue Router 4 with the `scrollBehavior` method.
*aumayeung*

---

# Introduction to Python

*published 2021-08-15*

![image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bik1i6jo7zlwbmbvrxp4.jfif)
Python is a programming language created by Guido van Rossum and it was released in 1991.
The language is used in
- Server side web development using frameworks like [Django](https://www.djangoproject.com/) or [Flask](https://flask.palletsprojects.com/en/2.0.x/)
- Science and mathematics using tools like [scipy](http://scipy.org/) or [Pandas](http://pandas.pydata.org/)
- Desktop GUIs using the [TK GUI library](http://wiki.python.org/moin/TkInter)
- Software Development using various python tools like [Scons](http://www.scons.org/),[Buildbot](http://buildbot.sourceforge.net/) or [roundup](http://roundup.sourceforge.net/)
- Business applications
The functions stated above are just among the few areas where Python is used.
### Why use Python?
- Python runs on multiple platforms (Windows, Linux,Mac etc)
- Python has a very simple syntax, It was designed for readability and it has some similarities to the English language
- Python supports functional programming, procedural programming and Object Oriented Programming
### First Python Program (Hello World)
Steps:
- Install [Python](https://www.python.org/downloads/) in your pc
- Open your favourite IDE and paste this line of code:
`print("Hello, World!")`. The output should be *Hello, World!*
Python is easy to pick up whether you're a first time programmer or you're experienced with other languages. Visit the official [Python site](https://www.python.org/) for more information.
| kamula |
583,204 | Eryn - React Native Template | Always starting a project with React Native leads me to make some changes and make choices on how it... | 0 | 2021-01-26T15:55:04 | https://caian.dev/posts/eryn-react-native-template/ | reactnative, react, mobile | Always starting a project with React Native leads me to make some changes and make choices on how it should be structured. In general, I think in:
- Directories Structure
- Common Libraries
- Libraries Configuration
Even though some choices can vary between projects, some of them stay constant. I noticed that I could create a template to make life easier and simpler, while still leaving room for the different kinds of projects that could appear.
I also noticed that I could create a "foundation" for everyone who wants to use this template, documenting the project and architectural choices along with some more general Q&A about my vision of the project structure.
So, I present you 🌲Eryn Template! A React Native template that can be used directly with the CLI, gives you support to scale your application, and stays open to community contributions! Use it with the command:
```sh
$ npx react-native init MyProject --template react-native-template-eryn
```
### References
- [GitHub Project](https://github.com/caiangums/react-native-template-eryn/)
- [Docs](https://caiangums.github.io/react-native-template-eryn/)
- Cover Photo by <a href="https://unsplash.com/@jaymantri?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Jay Mantri</a> on <a href="https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText">Unsplash</a> | caiangums |
583,221 | javascript return? | return quirks, is this just js thingy? | 0 | 2021-01-26T15:23:50 | https://dev.to/johndoesup/javascript-return-4p00 | help | ---
title: javascript return?
published: true
description: return quirks, is this just js thingy?
tags: #help
---
```js
function e(){
return 4, false;
}
e()
// e () -> false
function f() {
return 4 + 4, true;
}
f()
// f() -> true
```
Shouldn't I expect it to return 4, or the sum of 4 + 4? Why do my functions `e()` and `f()` always return the second value?
| johndoesup |
583,293 | Show console outputs based on environment | A project can be beautiful from the outside, but if the browser console is full of messy outputs it w... | 0 | 2021-01-26T18:02:36 | https://giuliachiola.dev/posts/show-console-outputs-based-on-environment/ | javascript | A project can be beautiful from the outside, but if the browser console is full of messy outputs it will immediately seem confusing and careless 😅
## Using local storage + a custom script
In this script we:
- assign `window.console` to a custom variable named `consoleWrap`
- create a "state" variable `devMode` and save it in the browser local storage. We will use it to determine if we are in development or production mode!
- instead of the default `console.log()` function, use the new `consoleWrap.debug.log()`, which will show output in the browser console only when the `devMode` local storage var is `'true'`.
```js
// main.js
let consoleWrap = {};
if (localStorage.devMode === 'true') {
  consoleWrap.debug = window.console
} else {
  // no-op methods, so calls such as consoleWrap.debug.log() don't throw
  const noop = () => {}
  consoleWrap.debug = { log: noop, info: noop, warn: noop, error: noop }
}
```
```js
// other-file.js
consoleWrap.debug.log('Hello!')
```
To set the `devMode` in browser local storage, please add this line in browser console:
```js
// browser console
localStorage.devMode = 'true'
> Hello!
```
> 🧨 **!important**
>
> local storage values are strings 🤭, so we have to assign the variable as string `localStorage.devMode = 'true'` and check its value as string `localStorage.devMode === 'true'`.
## Using vue env + webpack + loglevel
In a Vue project we already have webpack installed, and not shipping noisy `console.log()` output in the production JS bundle is an efficient way to save kilobytes! 😏
**Loglevel** to the rescue!
- [loglevel](https://github.com/pimterry/loglevel)
> Minimal lightweight simple logging for JavaScript. loglevel replaces console.log() and friends with level-based logging and filtering, with none of console's downsides.
Install it in development packages:
```shell
npm install loglevel --save-dev
```
In every JS file we would need to output something, we have to:
- import _loglevel_
- use its syntax, where `log.debug` == `console.log`
```js
import log from 'loglevel';
log.debug('This output will be in both development and production mode')
```
Why did we talk about webpack above? 😅
Well, webpack will not add to the JS bundle code that can never be executed — for example, a condition that will never match:
```js
if ((2 + 2) === 5) {
// This code will never see the sunlight! 😢
}
```
so if we use node `ENV` variables settings:
```shell
# .env
VUE_APP_DEBUG=true
```
```shell
# .env.production
VUE_APP_DEBUG=false
```
we can add all console outputs we want to our project
```js
import log from 'loglevel';
// env values are strings: a bare `if (process.env.VUE_APP_DEBUG)` is also
// truthy for the string 'false', so compare explicitly
if (process.env.VUE_APP_DEBUG === 'true') {
  log.debug('This output will be in development mode, but not in production mode')
}
```
and none of them will output in the final JS bundle! 🎉
| giulia_chiola |
583,636 | JavaScript Best Practices - Returns, scopes, and if statements | Check out my books on Amazon at https://www.amazon.com/John-Au-Yeung/e/B08FT5NT62 Subscribe to my em... | 0 | 2021-01-27T00:01:44 | https://thewebdev.info/2020/07/18/javascript-best-practices-returns-scopes-and-if-statements/ | webdev, programming, javascript, codequality | **Check out my books on Amazon at https://www.amazon.com/John-Au-Yeung/e/B08FT5NT62**
**Subscribe to my email list now at http://jauyeung.net/subscribe/**
JavaScript is a very forgiving language. It’s easy to write code that runs but has mistakes in it.
In this article, we’ll look at returning values in certain callbacks, using block-scoped variables, cleaning up the `if` statements, and returning clearly.
* * *
### Returning Values in Callbacks Passed Into Array Methods
Many JavaScript array methods, like `map`, `from`, `every`, `some`, `reduce`, `find`, `findIndex`, `reduceRight`, `filter`, and `sort`, take a callback.
The callbacks that are passed in should return some value so that these methods can return a proper value, allowing us to do other things. For instance, the following code is probably useless and a mistake:
```
const arr = [1, 2, 3];
const mapped = arr.map(() => {});
```
Then we see that the value of `mapped` is `[undefined, undefined, undefined]`. It’s probably not what we want. Therefore, we should make sure that the callbacks that these array methods take return a value so that we don’t get something unexpected.
We shouldn’t have callbacks that don’t return any value unless we’re 100% sure that we want to do this.
* * *
### Use Block-Scoped Variables
If we use `var` to declare variables, we should treat them as if they’re block-scoped so that we don’t cause confusion for ourselves and other developers.
For instance, we can write something like this:
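Something along these lines (a reconstruction, not the author's exact snippet):

```javascript
function doSomething() {
  if (true) {
    var foo = true;
    // ...
    var foo = true; // declared a second time in the same block — allowed, but confusing
  }
  console.log(foo); // true — `var` is function-scoped, so foo is visible here
}

doSomething();
```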
`foo` logs `true` as we expected, but the confusion is that we declared `var foo` twice in one `if` block.
We should treat `var` like a block-scoped variable to reduce confusion, so we should instead write:
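For example (a reconstruction of the omitted snippet):

```javascript
function doSomething() {
  var foo = true;

  if (foo) {
    // work with foo here — reassign if needed, but don't redeclare it
    foo = true;
  }

  console.log(foo); // true
}
```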
Better yet, we should use `let` for block-scoped variables:
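A reconstruction of the `let` version:

```javascript
function doSomething() {
  if (true) {
    let foo = true;
    // let foo = true; // SyntaxError: 'foo' has already been declared
    console.log(foo); // true
  }
  // foo is not visible here — `let` is scoped to the block above
}
```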
`let` is always block-scoped so that we can’t put the variables anywhere we want like with `var`.
* * *
### Class Method Should Reference this
Instance methods should reference `this` in a JavaScript class. Otherwise, there’s no point in it being an instance method. If we don’t need to reference `this` in a class method, then it should be a static method.
For instance, if we have:
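A reconstructed example — an instance method that never uses `this`:

```javascript
class Foo {
  bar() {
    // no reference to `this` anywhere
    return 'baz';
  }
}
```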
The `bar` shouldn’t be an instance method since it doesn’t reference `this` . Instead, we should make it a static method as follows:
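A reconstruction of the static version:

```javascript
class Foo {
  static bar() {
    // callable as Foo.bar(), without an instance
    return 'baz';
  }
}
```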
* * *
### Limiting Linearly Independent Paths Through a Piece of Code
We shouldn’t have too many `else if` blocks in any `if` statement to reduce complexity. To make code easy to read, we should reduce the number of paths that an `if` statement can take.
For instance, we can write:
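The lost example presumably chained several `else if` branches, e.g. (a reconstruction):

```javascript
// Five linearly independent paths through one if statement
function getGrade(score) {
  if (score >= 90) {
    return 'A';
  } else if (score >= 80) {
    return 'B';
  } else if (score >= 70) {
    return 'C';
  } else if (score >= 60) {
    return 'D';
  } else {
    return 'F';
  }
}
```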
We should consider reducing the cases of `if` statements.
Photo by [Rod Long](https://unsplash.com/@rodlong?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral).
* * *
### Return Statements Should Specify a Value or Not Specify a Value
In JavaScript, the `return` statement can produce `undefined` in multiple ways. `return` with nothing after it returns `undefined`. `return` with `void` followed by an expression also returns `undefined`.
`return` with `undefined` after it also returns `undefined`.
We should consider returning with nothing or returning `undefined` or an expression explicitly to make what we’re trying to return or not return clear.
So we should either write:
```
const bar = () => {
return;
}
```
or:
```
const bar = () => {
return 1;
}
```
or:
```
const bar = () => {
return undefined;
}
```
The `void` operator is rarely used, so most developers aren’t familiar with it. Its use is also limited. Therefore, it’s not used often.
* * *
### Conclusion
Many array methods take a callback that should return a value — `map`, `reduce`, `filter`, etc. all expect their callbacks to return values.
Class instance methods should reference `this`. If they don’t, then they should be static methods.
`if` statements shouldn’t be too complex. Keep the number of paths to a minimum.
Finally, `return` should return values or nothing rather than using the `void` operator with it. | aumayeung |
583,978 | How to Install Java & Set Environment Variables | In this course, we will study the How to Install Java and how to Set Environment Variables in java an... | 0 | 2021-01-27T10:51:21 | https://dev.to/alimammiya/how-to-install-java-set-environment-variables-42j3 | java, beginners, tutorial, programming | <p>In this course, we will study the <a href="https://usemynotes.com/how-to-install-java/">How to Install Java</a> and how to Set Environment Variables in java and Installation of Java in Windows Operating System</p>
<h2>How to install Java?</h2>
<p>Java is supported in many platforms like Windows, Linux, Solaris, etc. All these platforms have their own installation methods. We will not be covering installation procedures for all the platforms that support Java but only Windows operating system for the moment.</p>
<h2>Installation of Java in Windows Operating System</h2>
<p>Whether you are using a 32-bit or a 64-bit operating system, the installation procedure is similar.</p>
<h3>Step 0: Check if you already have Java installed on your computer.</h3>
<p>Most of today’s desktop applications require Java to work. They require Java to be installed on the host system to properly get executed. So if you have one of those applications in your computer then you probably might have Java already installed on your computer.</p>
<p>To confirm whether you have Java installed on your computer or not, follow these steps -</p>
<ol>
<li>Hold the <strong>Windows key</strong> and press <strong>R</strong> on your keyboard. This will open up the Run dialog box.</li>
<li>Now enter <strong>cmd</strong> in that dialog box and click <strong>OK</strong>. This will open the Command Prompt window.</li>
<li>In the <strong>Command Prompt</strong>, type <strong>java -version</strong> and press <strong>Enter</strong> on your keyboard.</li>
<li>If you see this type of output below with java version, then congratulations you have Java installed on your computer and no need to continue to further steps.</li>
<li>If you don’t see an output similar to the above image, then follow the next step.</li>
</ol>
<h3>Step 1: Know whether you have a 32-bit or 64-bit operating system.</h3>
<p>Knowing your operating system type is important for installing Java. It comes for both 32-bit and 64-bit operating systems. If you know the type of operating system you have to install Java then you can move to the next step. If you don’t know your system type or having trouble finding it out then follow these simple steps:</p>
<ol>
<li>Open <strong>File Explore</strong>r that comes with your Windows operating system.</li>
<li>On the left side panel, Right-click on <strong>This PC</strong> to open the context menu and then click on <strong>Properties</strong>.</li>
<li>This will open up the <strong>System Properties</strong> of your computer. Now, under the <strong>System</strong>section, check out the <strong>System type</strong> of your computer.</li>
</ol>
<h3>Step 2: Download Java Development Kit (JDK)</h3>
<ul>
<li>Head to the official Java download website to browse through the JDK packages. Link: <a href="https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html" rel="nofollow">https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html</a></li>
<li>Scroll down the page in the link given above and click on the appropriate link based on the operating system you wish to install.</li>
<li>A pop window may appear to accept the license agreement. If you wish to read it then you can or just select the checkbox accepting the license agreement and click the Download button. </li>
<li>You will then be redirected to a login page where you will have to sign in to start the download.
<ol>
<li>Don’t have an Oracle Account? Follow this link to create one. Link: (https://profile.oracle.com/myprofile/account/create-account.jspx)</li>
<li>Already have an Oracle Account? Then fill out the username and password and sign in to start the download.</li>
</ol>
</li>
<li>Once downloaded, open the installer package and follow the installation wizard. During installation, you may change the installation path if needed. But the default settings during installation is recommended.</li>
</ul>
<p>After the completion of the installation, just repeat<strong> Step 0</strong> to check whether the installation is successful.</p>
<h2>Setting up Environment Variable</h2>
<p>Environment variables are set to enable the system to access externally installed applications or executables. These variables are in the form of a Key-Value pair. They are generally used by the system and other applications to access an external application or program that does a specific task.</p>
<p>We will set environment variables to make sure our system has access to Java. Also, it will enable us to use Java executables globally in a system</p>
<h2>How to set environment variables for Java in Windows 10?</h2>
<ol>
<li>Locate the Java installation on your PC. If you have installed Java using default options, then it can be located in one of the following locations:</li>
<table class="table table-bordered table-striped">
<tbody>
<tr>
<td>C:/Program Files/Java/jdk-&lt;version&gt;/bin - For 64-bit OS<br>
C:/Program Files (x86)/Java/jdk-&lt;version&gt;/bin - For 32-bit OS</td>
</tr>
</tbody>
</table>
<p>Copy the location of the bin folder of the Java installation which contains java and javac.</p>
<li>Open <strong>File Explorer</strong> that comes with your Windows operating system.</li>
<li>On the left side panel, Right-click on <strong>This PC</strong> to open the context menu and then click on <strong>Properties</strong>.</li>
<li>This will open up the <strong>System Properties</strong> of your computer. On the left panel, click on <strong>Advanced system settings</strong>.</li>
<li>Under the <strong>Advanced tab</strong>, click on <strong>Environment Variables</strong>.</li>
<li>Under the <strong>System Variables</strong> section, locate the <strong>Path</strong> variable and double-click on it. (If there is no <strong>Path</strong> variable then create a new one)</li>
<li>Click on <strong>New</strong> and paste the location of the <strong>bin</strong> folder of the Java installation from step 1 and click on <strong>OK</strong>.</li>
<li>You have successfully set the Java environment variable. Now you may close all the other dialog boxes that were opened since step 1.</li>
</ol> | alimammiya |
640,658 | Setting Up Webpack for ReScript | As much as I strongly prefer ES6 modules, using them with ReScript (formerly BuckleScript / ReasonML)... | 0 | 2021-03-24T15:25:51 | https://webbureaucrat.gitlab.io/articles/setting-up-webpack-for-rescript/ | bucklescript, reason, rescript, webpack | ---
title: Setting Up Webpack for ReScript
published: true
date: 2021-03-20 00:00:00 UTC
tags: ["bucklescript", "reasonml", "rescript", "webpack"]
canonical_url: https://webbureaucrat.gitlab.io/articles/setting-up-webpack-for-rescript/
---
As much as I _strongly_ prefer ES6 modules, using them with ReScript ([formerly BuckleScript / ReasonML](https://rescript-lang.org/blog/bucklescript-is-rebranding)) and ServiceWorkers just isn't practical right now. I'm writing this article so that I can easily grab the configuration the next time I need it. This is a beginner's guide because I am a webpack beginner, and, well, _everyone_ is a ReScript beginner right now.
### Some basic setup
1. Open your _bsconfig.json_ and set the `module` property of the `package-specs` object to `"commonjs"` if it is not already set.
2. Install webpack locally by running `npm i webpack webpack-cli`.
### Configuring webpack
It's important to note for your configuration _where_ your javascript _.bs.js_ files are output, and this is controlled by the `in-source` property of the same `package-specs` object in _bsconfig.json_. This guide assumes `in-source` is `false` (because, quite frankly, that's my preference) but it means that the _.bs.js_ outputs get buried in a deeply nested folder structure.
This is a sample _webpack.config.js_ file based on those assumptions.
```js
const path = require('path');
module.exports =
{
entry:
{
index: "./lib/js/src/index.bs.js",
about: "./lib/js/src/about.bs.js"
},
output:
{
filename: "[name].js",
path: path.resolve(__dirname, "dist/"),
}
};
```
This configuration assumes that we should process two output files _index.bs.js_ and _about.bs.js_ (and their dependencies) and then outputs each bundled file by their name ("index" and "about") into the folder called _dist/_. The resulting bundles are _dist/index.js_ and _dist/about.js_.
### Including webpack in the build
You're welcome to run `npx webpack` any time you want to regenerate your bundled files, but it's a good automation practice to add it to your build command like so:
```json
"scripts":
{
"build": "npx bsb -make-world && npx webpack",
"start": "npx bsb -make-world -w",
"clean": "npx bsb -clean-world"
}
```
### In conclusion
I'm still not a fan of script bundlers and avoid them wherever possible, but when it's not possible, it's nice to have a configuration pasta on hand. In a future article, I'll talk about my main use for webpack: ServiceWorkers. | webbureaucrat |
583,994 | Bitcoin T shirt | Hi, I offer you a Classic “The Bitcoin Family” T-Shirt design in black. Let everyone know that you ac... | 0 | 2021-01-27T11:05:37 | https://dev.to/dwieroneto/bitcoin-t-shirt-4m54 | tshirt | Hi, I offer you a Classic “The Bitcoin Family” T-Shirt design in black. Let everyone know that you accept BTC. This is the best T-shirt quality, with thick cotton material.
100% Cotton material
High-quality screenprint
Slim fit ==> Grab yours here ==> https://www.bonfire.com/the-bitcoin-family/ | dwieroneto |
584,253 | ReScript: Adding new actions to an existing useReducer | Previously we updated a React component to use the useReducer hook in rescript-react. In this post, w... | 11,015 | 2021-01-27T18:04:08 | https://willcodefor.beer/posts/compiler-help-when-updating-variants-in-rescript/ | rescript, react | [Previously](https://dev.to/believer/rescript-using-usereducer-in-reasonreact-bin) we updated a React component to use the `useReducer` hook in rescript-react. In this post, we'll add a couple of new actions to our reducer and see how the compiler helps us with adding these new features.
```rescript
type action = Toggle | Display | Hide
```
We start by adding two new actions to the `action` type called `Display` and `Hide`. After we save we'll get an error in the compiler saying that we haven't covered all cases in our reducer's pattern match. It even tells us that we are missing `(Display|Hide)`. This is exactly what we want!
```reasonml
Warning number 8 (configured as error)
6 │ let make = () => {
7 │ let (state, dispatch) = React.useReducer((state, action) => {
8 │ switch action {
9 │ | Toggle =>
. │ ...
13 │ }
14 │ }
15 │ }, HideValue)
16 │
You forgot to handle a possible case here, for example:
(Display|Hide)
```
Let's add the new actions to our reducer.
```rescript
switch action {
| Display => DisplayValue
| Hide => HideValue
| Toggle =>
...
}
```
By handling both the `Display` and `Hide` case the compiler will be happy, but we don't have anything that triggers our new actions so let's add those next.
```rescript
<Button onClick={_ => dispatch(Display)}> {React.string("Display value")} </Button>
<Button onClick={_ => dispatch(Hide)}> {React.string("Hide value")} </Button>
```
By adding two `<Button>` components that trigger our new actions when clicked we've successfully added the new functionality to our `useReducer`. The complete updated example looks like this
```rescript
type state = DisplayValue | HideValue
type action = Toggle | Display | Hide
@react.component
let make = () => {
let (state, dispatch) = React.useReducer((state, action) => {
switch action {
| Display => DisplayValue
| Hide => HideValue
| Toggle =>
switch state {
| DisplayValue => HideValue
| HideValue => DisplayValue
}
}
}, HideValue)
<div>
{switch state {
| DisplayValue => React.string("The best value")
| HideValue => React.null
}}
<Button onClick={_ => dispatch(Toggle)}> {React.string("Toggle value")} </Button>
<Button onClick={_ => dispatch(Display)}> {React.string("Display value")} </Button>
<Button onClick={_ => dispatch(Hide)}> {React.string("Hide value")} </Button>
</div>
}
``` | believer |
584,267 | Ant Design component customization and bundle optimization | Easily replace original components with your custom wrappers and efficiently decrease bundle size. St... | 0 | 2021-01-30T14:57:09 | https://dev.to/kopivan/ant-design-component-customization-and-bundle-optimization-5c2j | typescript, javascript, programming | Easily replace original components with your custom wrappers and efficiently decrease bundle size. Step-by-step tutorial with React, TypeScript, Styled-Components.
____
I’m Ivan Kopenkov, a front-end developer. In this article, I will tell you about the approaches I have used for the UI library component customization. You will also learn how to significantly decrease bundle size, cutting off all the unnecessary modules Ant Design takes there.
In our case, we are making wrappers for original Ant Design components inside the project, changing their appearance, and developing their logic. At the same time, we import both customized and original components right from the ant-design module. That saves tree shaking functionality and makes complex library components use our wrappers instead of original nested elements.
If you are already or about to use Ant Design, this article will provide you with a better and more effective way to do so. Even if you have chosen another UI library, you might be able to implement these ideas.
# Problems with using UI libraries
UI libraries provide developers with a variety of ready-to-use components that are commonly required in any project. Usually, such components are covered with tests, and they support the most common use cases.
If you’re going to use one of these libraries, you should be ready to face the next two problems:
1. Surely, every project requires UI components to be modified. The components must match the project design. Moreover, it’s often needed to develop or change some components’ logic for particular use cases.
2. The majority of UI libraries include more components, icons, and utilities than will be used in one project, at least in its early stages. But all these files might be put into the bundle, which can dramatically increase the initial loading time for your app.
The first issue is solved by the customization of library components, and the second is tackled by bundle optimization. Some libraries, including Ant Design, are already adapted for tree shaking, which lets the bundler automatically exclude unused modules from the bundle.
However, even if you use Ant Design, built-in tree shaking support will not be enough to achieve an effective bundle size. All the icons of this library will be included in the bundle, as well as the entire Moment.js library with every localization file, since it is a dependency for some Ant components. Moreover, if some of the Ant components are re-exported in one file, each of them will be added to the bundle, even if only one of them is used.
# Methods of customization
Let’s begin by defining available solutions for customization of UI library components.
### 1. Redefinition of global classes (CSS only)
This is the simplest method. You just need to add styles for global CSS classes, which are used by UI library components.
The cons:
* The behavior and logic of components can’t be changed or added.
* CSS-in-JS may be used in this way, but only for global class definition, without the superpowers of this solution.
* Global class usage causes unwanted style mixing: the same classes might be used in other parts of a project, and the selected UI-library may be used by third-party modules on the same site.
Indeed, the only advantage of this method is its simplicity.
### 2. Local wrappers for components
This method is more advanced, and it involves creating a separate file in your project for every component that you need to customize. Inside such a file, you make a new component, which renders inside itself the optional one from the UI-library.
The pros:
* It lets you customize the styles of the components and also modify component logic.
* You can use all the powers of CSS-in-JS at the same time.
The cons:
* If an original component is used widely across the project, you will need to change all its imports to your new wrapper’s source. It can be quite time-consuming depending on the component usage broadness.
* Suppose you use IDE autocomplete to automatically import selected components, using this approach. In that case, you will need to pay attention to the component you select from the list because you will have at least two of them: the customized one and the original one. It’s easy to forget about this and pick the original component or even accidentally leave imports of some original ones after creating a new wrapper.
And the most important thing: many of the components are complex, and they use inside themselves other components of the same library. Since the original components have absolutely no idea about our wrappers, they will continue to use the original ones inside themselves, ignoring the logic or appearance changes made in wrappers. For example, such an Ant Design component as AutoComplete renders inside itself the components Input and Select. At the same time, inside List are used Grid, Pagination, and Spin. The same thing with Password, Search, and Textarea, which are the dependencies for Input, and so on.
### 3. Forking the UI library repository
Making a private copy of the original UI library repository seems to be the most powerful and the most complicated approach at once.
The pros:
* It gives you maximum freedom in appearance customization and logic modification.
* There is the opportunity to reuse the same forked UI library in other projects.
The cons:
* You could meet some complications when you try to pull the original repository updates to the forked one.
* It can be quite inconvenient for developers to continuously modify components in a separate repository to meet the main project’s requirements.
# How we have been customizing Ant components
After a long discussion, our team decided to use the Ant Design UI library for new projects. My responsibility was to create a boilerplate for a new project, which will be used later to launch other projects. It is crucial for us to change styles and also to modify and add logic for components.
We didn’t want to fork the Ant Design repository because we had a bad experience with separating components into a detached repo. While developing [MCS](http://mcs.mail.ru/), we used the Semantic UI library, storing its components in a separate repository, and never found a convenient way of working with that. At first, we shared this repository with another project ([b2c-cloud](https://cloud.mail.ru/)), developing different themes for each project. But that was inconvenient, and changes for one project could accidentally affect another, so at some point we forked from this repository again. Eventually, we moved the wrappers from the detached repository into the project, and we’re pretty happy with that.
I’ve chosen the second approach to create wrappers directly in the project. At the same time, I wanted customized components to be imported right from the antd module. This allows us to avoid changing imports of already used components when we make wrappers for them. This also saves tree shaking and makes complex components automatically use custom wrappers instead of original components inside themselves.
After that, I will tell you how meeting these requirements was achieved step by step, and you will understand how to implement the same approach in other projects.
### Step 1. Files with wrappers
In the folder where project components are stored, I made a new catalog for future wrappers, called antd. Here, we gradually added new files for wrappers, depending on our demands in modification. Every file is a composition, a wrapper component rendering an original one imported from a UI library. Let’s look at the simplified example of such a file:
{% gist https://gist.github.com/ikopenkov/74b8c4e70e6c02e76170f583165227f9 %}
To demonstrate a method of style customization, I just changed the component background color using Styled Components. To show the method of logic customization, I added the `tooltipTitle` parameter to additionally render a tooltip when it is passed.
### Step 2. Change component imports with aliases to wrappers
Now let’s consider how to make a builder (here: Webpack) change the original path of modules imported from the root of antd to the path of our wrappers.
We should create an index.ts file in the root folder with wrappers src/components/antd and copy into this file the content of the file located at node_modules/antd/lib/index.d.ts. Then, using the massive replace tool of some IDE, we change every import path from ./componentName to antd/lib/componentName.
By this point, the file should contain the following:
{% gist https://gist.github.com/ikopenkov/e8d9de905d50e9fda51f8890d931217d %}
Then, we change the import paths of the components for which we made the wrappers. In this case, we should import Button from src/components/antd/Button:
{% gist https://gist.github.com/ikopenkov/412597f7c932a436802b57f5fafce913 %}
Now we only need to configure Webpack to use these paths as the aliases to the Ant components. I’ve made a simple tool that makes the set of aliases:
{% gist https://gist.github.com/ikopenkov/64fe0800d39ab529f49de20e1beadf2c %}
> Worth noting, here is **the solution to the problem when complex components use original nested elements instead of the custom ones**. A piece of code in the file AntAliases.ts that finds relative imports of nested components inside the complex ones located in the Ant Design library folder files. It then creates aliases for these imports, making complex components use our custom wrappers for the nested components.
The resolve section of our Webpack config looks like this:
{% gist https://gist.github.com/ikopenkov/8f6bd2a6e42314577a22ec7d6162303a %}
### Step 3. TypeScript support (optional)
The first two steps are enough on their own. However, if you use TypeScript and change the interfaces of original components in your wrappers (as I did in the example, having added the additional property tooltipTitle), then you will need to add aliases to the TypeScript config as well. In this case, it's much simpler than it was with Webpack; you simply add the path of the file with the wrapper imports from the previous step to tsconfig.json:
{% gist https://gist.github.com/ikopenkov/b549d69eb20370f102f1d64f737ba922 %}
### Step 4. Variables (optional)
As we use Styled Components for our projects, it’s pretty convenient for us to declare style variables in a single ts file and import some of them where we need them. Ant Design styles were written using Less.js, which allows us to build styles in our project, injecting our variables using less-loader. Thus, it’s a great opportunity to use the same variables inside our components and wrappers, as well as to build styles of the original components with them.
Because our style guide implies naming variables and functions in camelCase, we initially defined the variables in camelCase. Ant Design's less files use kebab-case for variable naming, so we automatically transform and export these variables in kebab-case as well.
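That camelCase-to-kebab-case transformation can be sketched in a few lines (the variable names here are illustrative, not our real ones):

```javascript
// JS-side style variables, named in camelCase per our style guide:
const styleVars = {
  primaryColor: '#1890ff',
  borderRadiusBase: '4px',
};

// Convert a camelCase name to the kebab-case form Ant Design's less files expect:
const toKebab = (name) => name.replace(/[A-Z]/g, (c) => `-${c.toLowerCase()}`);

// The object handed to less-loader:
const lessVars = Object.fromEntries(
  Object.entries(styleVars).map(([key, value]) => [toKebab(key), value])
);

console.log(lessVars); // { 'primary-color': '#1890ff', 'border-radius-base': '4px' }
```

This way, the same values feed both our Styled Components code (in camelCase) and the Ant Design less build (in kebab-case).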
Our file with style variable declarations in short form looks like this:
{% gist https://gist.github.com/ikopenkov/a3230c90cdd5d353bfe705465526531e %}
You can see the complete list of Ant Design variables in [this file](https://github.com/ant-design/ant-design/blob/master/components/style/themes/default.less).
We do injection of variables and building of less-files by adding less-loader into the Webpack configuration:
{% gist https://gist.github.com/ikopenkov/8b473d8f0dca056aefe31b1aa1270bf6 %}
### The component example
Once you have completed the first two steps, everything should work fine. Let’s have a look at the code in which we use the modified component:
{% gist https://gist.github.com/ikopenkov/2483f2cbd76ffd247dea79c21b268802 %}
### The problem with Grid and Radio
You can omit this part if you don’t plan to make Grid and Radio render wrapped components inside themselves instead of the original ones. The problem is caused by the fact that Grid is not really a separate component. In fact, its source, located at node_modules/antd/es/grid/index.js, contains only re-exports of the components Col and Row.
All the other complex components already use our wrappers, thanks to the aliases we made. But when we use Grid, it will still import the original Col and Row because of its file content. To fix this, take the following steps.
To illustrate this case, I created a wrapper for Col and made its background red by default.
{% gist https://gist.github.com/ikopenkov/a21da5e466f52b3fa62c1246083f0e73 %}
Then I rendered the original List component as a test, wanting it to render the modified Col for its columns.
{% gist https://gist.github.com/ikopenkov/b34892363150695a46eb5169bac82ee3 %}
To make List use exactly our wrapper instead of the default Col, we created a new file that replaces the original re-exports located in node_modules/antd/es/grid/index.js with paths to our wrappers. We saved this new file as src/components/antd/Grid.tsx, and here is its content:
{% gist https://gist.github.com/ikopenkov/8e758db3332bbaaf13a0b1c391202183 %}
Now we only need to set the path to this file in the constant SPECIAL_ALIASES defined in AntAliases.tsx:
{% gist https://gist.github.com/ikopenkov/1a6103bbab25216eb5d23236449f6540 %}
Finally, the customization part is over. Now List will render our Col wrapper as its columns. To customize Row as well, just make a wrapper and change the Row path at src/components/antd/Grid.tsx. It's not very convenient to do, but you only need it for two components: Grid and Radio. That said, during the last year we haven't had any demand for that in our projects.
## Bundle optimization
### Tree shaking
As I mentioned, the latest version of Ant Design is adapted for tree shaking right out of the box. Its previous versions weren’t, so we used to use babel-plugin-import to drop the unused code. I assume that the other libraries without built-in tree shaking support can achieve this, at least partially, using this plugin.
### Styles import
Despite native tree shaking support, we didn’t drop babel-plugin-import and continue to use it to automatically get styles of a component when we import its js-code. Using it, no excess styles are added to the bundle, and developers don’t need to think about style dependencies. Now, it’s impossible to forget to import the styles of some components.
The plugin is enabled in the babel.config.js file in the root of the project:
{% gist https://gist.github.com/ikopenkov/4fbdb48e4533e33781490ff95f3b78fb %}
### Moment.js
At this time, the bundle consists of the following modules:

Ant Design uses Moment.js, which pulls all of its localization files into the bundle. You can see in the image how dramatically this increases the bundle size. If you don’t need the components that depend on Moment.js, such as DatePicker, you can simply cut this library out, for example, by adding an alias for Moment.js that points to an empty file.
As we’re still using Moment.js in our projects (ignoring the fact that its creators have recently deprecated it 😅), we didn’t need to fully eliminate it. We just excluded the useless localization files from being added to the bundle, leaving only the supported languages (en and ru).
It became possible thanks to ContextReplacementPlugin, delivered with Webpack:
{% gist https://gist.github.com/ikopenkov/b622da07115e05327bdb41abe0c536de %}
Now we can make sure that redundant files are eliminated, as in the next screenshot:

> If you use Lodash and/or Ramda and want to exclude their unused files from the bundle, but at the same time you don’t want to import every function from their separate files, you can just add to your Babel config [babel-plugin-lodash](https://github.com/lodash/babel-plugin-lodash) and [babel-plugin-ramda](https://github.com/megawac/babel-plugin-ramda).
### Icons
Webpack Bundle Analyzer screenshots above show that the heaviest part of the bundle is the Ant Design built-in icon set. This happens because Ant Design exports icons from a single file.
We use unique custom icons in our projects, so we don’t need this file at all. You can cut it off, just as with Moment.js, by making an alias to an empty file. However, I want to illustrate the ability to keep only the required default icons if you want to use them.
For that reason, I added the file src/components/antd/Icons.tsx, keeping only the Spinner icon, which is used to render a button in the "loading" state:
{% gist https://gist.github.com/ikopenkov/f9de2e93e2245ea0afb899cd49ce7801 %}
I also added an alias to this file into the Webpack config.
{% gist https://gist.github.com/ikopenkov/3387634610ce7382528fb1015269acf8 %}
And now we just need to render the button itself:
{% gist https://gist.github.com/ikopenkov/3cb2df1469e88a28ec654b8531af6c15 %}
As a result, we get the bundle with only the one icon we used instead of getting the full pack of icons as before:

Optionally, you can easily replace the default icons with custom ones using the same file we’ve just created.
## Conclusion
Finally, every unused component of Ant Design has been cut off by Webpack. At the same time, we continue to import any component, whether it is a wrapper or an original one, from the root of the library.
Moreover, during development, TypeScript will show proper types for customized components as it was with Button from the example above, for which we added the additional property tooltipTitle.
If we decide to customize another component in the project, even a widely used one, we will just need to add a file with the wrapper and change the path of that component in the file with re-exports located at src/components/antd/index.ts.
We’ve been using this approach for more than a year in two different projects, and we still haven’t found any flaws.
___
You can see the ready-to-use boilerplate with a prototype of this approach and the examples described in this article [in my repository](https://github.com/ikopenkov/ant-customization). Along with this solution, we test our components using Jest and React Testing Library. This will be addressed in a different post, as it includes a few tricky elements.
| kopivan |
584,427 | Swimm Live Demo | Get Beta Access at Swimm's first Live Demo Event. Join Swimm with your team to create and access unl... | 0 | 2021-01-27T19:42:02 | https://dev.to/omerr/swimm-live-demo-32pn | tutorial, devtool, documentation, opensource | Get Beta Access at Swimm's first Live Demo Event.
Join Swimm with your team to create and access unlimited tutorials coupled with your repository, or enjoy complimentary tutorials contributed by the community.
https://www.eventbrite.com/e/swimm-live-demo-version-035-tickets-137414651923
Swimm makes development more fluid with smart docs that are synced and coupled with your code. Never let onboarding, outdated documents, or project switching slow you down.
www.swimm.io
| omerr |
584,489 | Iniciando mi camino con Python | Mi nombre es Wilson, de 40 años que para muchos es muy viejo pero yo todavía no siento esa edad. Esto... | 0 | 2021-01-27T20:58:30 | https://dev.to/wilgutl/iniciando-mi-camino-con-python-11c5 | python, beginners, 100daysofcode | My name is Wilson, 40 years old, which many consider very old, but I don't feel that age yet. I'm finishing my software engineering degree at university. The truth is they teach a lot of basic things there, but little of it sticks as conscious knowledge.
For many years I have wanted to be part of the developer community, no matter the language, and I have always complained that to get started they ask for at least two years of experience, and no company offers the chance to start from zero unless someone helps you. However, that changed a few months ago. What changed? My way of thinking. I came to the conclusion that I never had guidance from anyone: not a teacher, not a friend, not a stranger.
If someone had told me this back then, I'm sure I would have been immersed in that world long ago. They say that if you don't program every day you don't like programming, and I disagree with that; I simply wasn't aware of how things work.
No, I haven't gotten in yet, but I'm willing to follow the advice I've heard over the last few months: if you don't have experience at a company, then you have to start building it yourself with your own projects, and that's why I'm here.
A few weeks ago I started studying programming from scratch with Python. I had already had some academic experience with Java, and I built a simple PHP project a couple of years ago, and I must say the experience with Python has been pleasant.
While searching for courses, I found one whose goal is to write code for 100 days, and I have liked it a lot, because every day I try to create something that, although very basic, has managed to challenge me mentally several times.
In the upcoming blog posts I will start publishing the basic exercises I have done. I won't start with the very first ones, but below I will leave the repository where they are hosted, in case anyone feels like giving me ideas on how to improve any of them.
To finish, in short, my goal with these posts is to publish the exercises I am doing and uploading to my GitHub repository, and I will explain them, as I understand them, as best I can to help me reinforce the topics covered. I hope more good things come out of this, and if anyone has read this all the way to the end, thank you very much for your patience.
[100 Days of Code repository](https://github.com/WilsonGLan/Reto_100_dias.git)
| wilgutl |
584,541 | Better Perl with subroutine signatures and type validation | Did you know that you could increase the readability and reliability of your Perl code with one featu... | 0 | 2021-01-27T22:30:47 | https://phoenixtrap.com/2021/01/27/better-perl-with-subroutine-signatures-and-type-validation/ | perl, programming, signatures, types | Did you know that you could increase the readability and reliability of your Perl code with one feature? I'm talking about subroutine signatures: the ability to declare what arguments, and in some cases what types of arguments, your functions and methods take.
Most Perl programmers know about the [`@_`](https://perldoc.pl/perlvar#@_) variable (or `@ARG` if you [`use English`](https://perldoc.pl/English)). When a subroutine is called, `@_` contains the parameters passed. It's an array (thus the `@` sigil) and can be treated as such; it's even the default argument for [`pop`](https://perldoc.pl/functions/pop) and [`shift`](https://perldoc.pl/functions/shift). Here's an example:
```perl
use v5.10;
use strict;
use warnings;
sub foo {
my $parameter = shift;
say "You passed me $parameter";
}
```
Or for multiple parameters:
```perl
use v5.10;
use strict;
use warnings;
sub foo {
my ($parameter1, $parameter2) = @_;
say "You passed me $parameter1 and $parameter2";
}
```
(What's that `use v5.10;` doing there? It enables all features that were introduced in [Perl 5.10](https://perldoc.pl/perl5100delta), such as the [`say`](https://perldoc.pl/perlfunc#say) function. We'll assume you type it in from now on to reduce clutter.)
**We can do better**, though. [Perl 5.20](https://perldoc.pl/perl5200delta) (released in 2014; why haven't you upgraded?) introduced the experimental [`signatures`](https://perldoc.pl/perlsub#Signatures) feature, which, as described above, allows parameters to be declared right where you define the subroutine. It looks like this:
```perl
use experimental 'signatures';
sub foo ($parameter1, $parameter2 = 1, @rest) {
say "You passed me $parameter1 and $parameter2";
say "And these:";
say for @rest;
}
```
You can even set defaults for optional parameters, as seen above with the `=` sign, or slurp up remaining parameters into an array, like the `@rest` array above. For more helpful uses of this feature, consult the [perlsub](https://perldoc.pl/perlsub#Signatures) manual page.
**We can do better still**. The [Comprehensive Perl Archive Network (CPAN)](https://www.cpan.org/) contains several modules that both enable signatures, as well as validate parameters are of a certain type or format. (Yes, Perl can have types!) Let's take a tour of some of them.
## [Params::Validate](https://metacpan.org/pod/Params::Validate)
This module adds two new functions, `validate()` and `validate_pos()`. `validate()` introduces *named parameters*, which make your code more readable by naming the parameters being passed at the call site. It looks like this:
```perl
use Params::Validate;
say foo(parameter1 => 'hello', parameter2 => 'world');
sub foo {
my %p = validate(@_, {
parameter1 => 1, # mandatory
parameter2 => 0, # optional
} );
return $p{parameter1}, $p{parameter2};
}
```
If all you want to do is validate un-named (positional) parameters, use `validate_pos()`:
```perl
use Params::Validate;
say foo('hello', 'world');
sub foo {
my @p = validate_pos(@_, 1, 0);
return @p;
}
```
Params::Validate also has fairly deep support for *type validation*, enabling you to validate parameters against [simple types](https://metacpan.org/pod/Params::Validate#Type-Validation), [method interfaces](https://metacpan.org/pod/Params::Validate#Interface-Validation) (also known as "duck typing"), [membership in a class](https://metacpan.org/pod/Params::Validate#Class-Validation), [regular expression matches](https://metacpan.org/pod/Params::Validate#Regex-Validation), and [arbitrary code callbacks](https://metacpan.org/pod/Params::Validate#Callback-Validation). As always, consult the [documentation](https://metacpan.org/pod/Params::Validate) for the nitty-gritty details.
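As a hedged sketch of those richer specs (the parameter names and values here are made up for illustration), a single `validate()` call can mix type constants, regex checks, and defaults:

```perl
use v5.10;
use Params::Validate qw(validate :types);

sub set_page {
    my %p = validate(@_, {
        title => { type => SCALAR },                  # must be a plain scalar
        tags  => { type => ARRAYREF, default => [] }, # optional, with a default
        slug  => { regex => qr/^[a-z0-9-]+$/ },       # must match this pattern
    });
    return "$p{title} ($p{slug})";
}

say set_page(title => 'Hello', slug => 'hello-world'); # prints "Hello (hello-world)"
```

Passing a slug such as `'Hello World!'` would make `validate()` die with a descriptive error instead of silently accepting bad input.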
## [MooseX::Params::Validate](https://metacpan.org/pod/MooseX::Params::Validate)
MooseX::Params::Validate adds type validation via the [Moose](https://metacpan.org/pod/Moose) object-oriented framework's type system, meaning that anything that can be defined as a [Moose type](https://metacpan.org/pod/distribution/Moose/lib/Moose/Manual/Types.pod) can be used to validate the parameters passed to your functions or methods. It adds the [`validated_hash()`](https://metacpan.org/pod/MooseX::Params::Validate#validated_hash(-\\@_,-\%parameter_spec-)), [`validated_list()`](https://metacpan.org/pod/MooseX::Params::Validate#validated_list(-\\@_,-\%parameter_spec-)), and [`pos_validated_list()`](https://metacpan.org/pod/MooseX::Params::Validate#pos_validated_list(-\\@_,-$spec,-$spec,-...-)) functions, and looks like this:
```perl
package Foo;
use Moose;
use MooseX::Params::Validate;
say __PACKAGE__->foo(parameter1 => 'Mouse');
say __PACKAGE__->bar(parameter1 => 'Mice');
say __PACKAGE__->baz('Men', 42);
sub foo {
my ($self, %params) = validated_hash(
\@_,
parameter1 => { isa => 'Str', default => 'Moose' },
);
return $params{parameter1};
}
sub bar {
my ($self, $param1) = validated_list(
\@_,
parameter1 => { isa => 'Str', default => 'Moose' },
);
return $param1;
}
sub baz {
my ($self, $foo, $bar) = pos_validated_list(
\@_,
{ isa => 'Str' },
{ isa => 'Int' },
);
return $foo, $bar;
}
```
Note that the first parameter passed to each function is a reference to the `@_` array, denoted by a backslash.
MooseX::Params::Validate has several more things you can specify when listing parameters, including [roles](https://metacpan.org/pod/distribution/Moose/lib/Moose/Manual/Roles.pod), [coercions](https://metacpan.org/pod/distribution/Moose/lib/Moose/Manual/Types.pod#COERCION), and dependencies. The [documentation for the module](https://metacpan.org/pod/MooseX::Params::Validate) has all the details. **We use this module at work a lot**, and even use it without Moose when validating parameters passed to test functions.
## [Function::Parameters](https://metacpan.org/pod/Function::Parameters)
For a different take on subroutine signatures, you can use the [Function::Parameters](https://metacpan.org/pod/Function::Parameters) module. Rather than providing helper functions, it defines two new Perl keywords, `fun` and `method`. It looks like this:
```perl
use Function::Parameters;
say foo('hello', 'world');
say bar(param1 => 'hello');
fun foo($param1, $param2) {
return $param1, $param2;
}
fun bar(:$param1, :$param2 = 42) {
return $param1, $param2;
}
```
The colons in the `bar()` function above indicate that the parameters are named, and need to be specified by name when the function is called, using the `=>` operator as if you were specifying a hash.
In addition to [defaults](https://metacpan.org/pod/Function::Parameters#Default-arguments) and the [positional](https://metacpan.org/pod/Function::Parameters#Simple-parameter-lists) and [named](https://metacpan.org/pod/Function::Parameters#Named-parameters) parameters demonstrated above, Function::Parameters supports [type constraints](https://metacpan.org/pod/Function::Parameters#Type-constraints) (via [Type::Tiny](https://metacpan.org/pod/Type::Tiny)) and Moo or Moose [method modifiers](https://metacpan.org/pod/Function::Parameters#Method-modifiers). (If you don't know what those are, the [Moose](https://metacpan.org/pod/distribution/Moose/lib/Moose/Manual/MethodModifiers.pod) and [Class::Method::Modifiers](https://metacpan.org/pod/Class::Method::Modifiers) documentation are helpful.)
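As a small, hedged illustration of those type constraints (the function name and types here are my own, not taken from the module's docs), a Type::Tiny type can be written directly before a parameter:

```perl
use v5.10;
use Function::Parameters;
use Types::Standard qw(Str Int);

# Str and Int come from Types::Standard; Function::Parameters resolves
# the bareword before each variable as a type constraint.
fun greet(Str $name, Int $times = 1) {
    return "Hello, $name! " x $times;
}

say greet('World', 2); # prints "Hello, World! Hello, World! "
```

Calling `greet('World', 'two')` would then throw a type-constraint error at the call site.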
**I'm not a fan** of modules that add new syntax for common tasks like subroutines and methods, if only because of the extra effort in updating tooling like syntax highlighters and [Perl::Critic](http://perlcritic.com/) code analysis. Still, this may appeal to you, especially if you're coming from other languages with similar syntax.
## [Type::Params](https://metacpan.org/pod/Type::Params)
Speaking of [Type::Tiny](https://metacpan.org/pod/Type::Tiny), it includes its own parameter validation library called [Type::Params](https://metacpan.org/pod/Type::Params). **I think I would favor this for new work**, as it's compatible with both Moo and Moose but doesn't require them.
Type::Params has a number of functions, none of which are provided by default, so you'll have to import them explicitly when `use`ing the module. It also introduces a separate step for compiling your validation specification to speed up performance. It looks like this:
```perl
use Types::Standard qw(Str Int);
use Type::Params qw(compile compile_named);
say foo('hello', 42);
say bar(param1 => 'hello');
sub foo {
state $check = compile(Str, Int);
my ($param1, $param2) = $check->(@_);
return $param1, $param2;
}
sub bar {
state $check = compile_named(
param1 => Str,
param2 => Int, {optional => 1},
);
my $params_ref = $check->(@_);
return $params_ref->{param1}, $params_ref->{param2};
}
```
The features of Type::Tiny and its bundled modules are pretty vast, so I suggest once again that you [consult the documentation](https://metacpan.org/pod/Type::Tiny::Manual) on how to use it.
## [Params::ValidationCompiler](https://metacpan.org/pod/Params::ValidationCompiler)
At the [top of the documentation to Params::Validate](https://metacpan.org/pod/Params::Validate#DESCRIPTION), you'll notice that the author recommends instead his [Params::ValidationCompiler](https://metacpan.org/pod/Params::ValidationCompiler) module for faster performance, using a compilation step much like Type::Params. It provides two functions for you to import, [`validation_for()`](https://metacpan.org/pod/Params::ValidationCompiler#validation_for(...)) and [`source_for()`](https://metacpan.org/pod/Params::ValidationCompiler#source_for(...)). We'll concentrate on the former since the latter is mainly useful for debugging.
It looks like this:
```perl
use Types::Standard qw(Int Str);
use Params::ValidationCompiler 'validation_for';
my $validator = validation_for(
params => {
param1 => {
type => Str,
default => 'Perl is cool',
},
param2 => {
type => Int,
optional => 1,
},
},
);
say foo(param1 => 'hello');
sub foo {
my %params = $validator->(@_);
return @params{'param1', 'param2'};
}
```
As you can see, it supports type constraints, defaults, and optional values. It can also collect extra arguments into a list (it calls this feature "slurpy"), and can even return an object instead of a hash to make typos easier to catch (a typoed hash key silently yields undef, while a typoed method call throws an error). There's a bit more to this module, so please [read the documentation](https://metacpan.org/pod/Params::ValidationCompiler) to examine all its features.
## Conclusion
One of Perl's mottos is **"there's more than one way to do it,"** and you're welcome to choose whatever method you need to enable signatures and type validation. Just remember to be consistent and have good reasons for your choices, since the overall goal is to improve your code's reliability and readability. And be sure to share your favorite techniques with others, so they too can develop better software. | mjgardner |
584,550 | Overcoming Impostor Syndrome | It’s been over 4months since I promised I was going to write about how I overcame Impostor syndrome.... | 0 | 2021-01-28T07:43:49 | https://dev.to/dammyton/overcoming-impostor-syndrome-33oe | impostorsyndrome | It’s been over 4 months since I promised I was going to write about how I overcame impostor syndrome. I’m so sorry it came late.
> But right within me I knew I was nervous and fighting impostor syndrome. "Impostor syndrome sucks"... Thanks to Segun Onilude (a friend), who advised me, and his advice did help. I will definitely write about how I overcame impostor syndrome in my next article. (Excerpt from [My first month experience as a Frontend Developer Intern at Mira Technologies](https://dev.to/dammyton/my-first-month-experience-as-a-frontend-developer-intern-at-mira-technologies-4ecb))
Nevertheless, here are some of the ways that can help you overcome impostor syndrome.
### What is imposter syndrome?
Imposter syndrome describes feelings of severe inadequacy and self-doubt that can leave people fearing that they will be exposed as a “fraud”, usually in their work lives.
It’s fairly typical for people with imposter syndrome to encounter thoughts such as “I feel like a fake”, “I don’t trust my own talents and skills”, “Anyone could've done that” and “I don’t deserve my success”. This has really affected a lot of people regardless of their success or academic accomplishments.
> “Next time you’re in a situation that feels completely outside your comfort zone, don’t focus on your failures. Consider it your opportunity to learn from your missteps and to bring forth a new perspective that others may not have.” - Andy Molinsky
For several years, I nursed within me the idea that I was not good enough. Even when opportunities came by, I had to walk away because I felt I was not good enough; there was a part of me telling me I wasn't the right fit. Seeing this pattern emerge, I realized I'd been experiencing imposter syndrome.
### How to Get Through It
Here are few tips I’ve made to help me whenever I’m struggling with imposter syndrome.
### • Know your Identity:
There is a need to understand who you are; how you see yourself really matters, as it has a way of shaping your life either positively or negatively. Stop dwelling on your self-doubt. "When I feel overwhelmed because I just can’t do it all, I say to myself that I’m remarkable and I tell myself to get it done." You aren’t here to live the life of another person. You are just you.
Losing is just part of the game. Don’t glorify failure, but don’t let it make you feel like you’re not a real contender either. Instead, think and say positive thoughts and reinforce them by looking at your accomplishments. Stand firm in who God says you are when you feel like a fraud and you can start to see how you're impacting lives. It works!
God intentionally created us for a purpose: “For we are God’s masterpiece. He has created us anew in Christ Jesus, so we can do the good things he planned for us long ago” (Eph. 2:10). Without question, the Imposter Syndrome will hold us back, preventing us from enjoying and sharing all God has for us but always remember that you’re gifted, wonderful, blessed, loved and you are not in this world by a mistake. I encourage you to Saturate your mind with the truth of God’s Word. It is filled with reminders of His unconditional love for you!
### • Celebrate Yourself Always:
After a success, have you ever dismissed it as just good luck or timing? Sometimes we forget that we are worth it. To help show yourself that you're actually doing well, keep track of your wins in a private document; this will act as a success reminder. Alongside working on your impostor syndrome directly, it can also be beneficial to work on your confidence and self-esteem.
Make it a habit to document your wins, testimonials, photos, and awards, and then revisit them when you feel like an imposter. Stop comparing yourself with others. Learn to value your own strengths, and once you start respecting your own potential, you will soon realize that you have a lot to offer.
### • Say “YES” to opportunities:
While it might be intimidating to take on a role you're not sure you can succeed in, know that you were asked to do it for a reason, and there's nothing wrong with learning new things and asking questions along the way. This helps you learn, grow, and advance in your career.
Irrespective of how you feel, it takes a lot of courage to pursue challenges even when you're doubtful. Trust me, you can do it. The world needs you to make a difference.
I hope these tips of mine help. Cheers to pushing past your fears and sharing your gifts with the world! | dammyton |
584,566 | Back to Basics: Operators, Operators, Operators | This series discusses the building blocks of JavaScript. Whether you're new to the language, you're p... | 10,733 | 2021-01-28T00:14:09 | https://dev.to/alisabaj/back-to-basics-operators-operators-operators-3l3h | javascript, operators, beginners, technicalinterviews | This series discusses the building blocks of JavaScript. Whether you're new to the language, you're preparing for a technical interview, or you're hoping to brush up on some key JS concepts, this series is for you.
Today's post is about operators. In this post, I'll go over some of the most common operators you'll come across in JavaScript, but it's by no means an exhaustive list. At the bottom of this post, you can find a link to the MDN documentation, which has information on other kinds of JavaScript operators.
- [What is an operator?](#what-is-an-operator)
- [Assignment operators](#assignment-operators)
- [Comparison operators](#comparison-operators)
- [Arithmetic operators](#arithmetic-operators)
- [Logical operators](#logical-operators)
- [String operators](#string-operators)
- [Ternary (conditional) operator](#ternary-conditional-operator)
## What is an operator?
In JavaScript, an operator is a way to compare or assign values, or perform operations. There are many different types of operators.
There are _binary_ operators, _unary_ operators, and a _ternary_ operator in JavaScript. "Binary" means that there are **two** values, or _operands_, involved, with one coming before the operator, and one coming after the operator. An example of a binary operator is `1 + 2`. In this example, `1` and `2` are the operands, and `+` is the operator.
A "unary" operator means that there is only **one** operand. The operand is either before the operator, or after the operator. An example of a unary operator is `x++` (don't worry if you're unfamiliar with this syntax, we'll talk about below).
The "ternary" operator in JavaScript involves **three** operands. It's used as a shortened version of an `if...else` statement, and is therefore also known as a "conditional" operator. An example of a ternary operator is `num >= 0 ? "Positive" : "Negative"`. In this example, the three operands are `num >= 0`, `"Positive"`, and `"Negative"`, and the operators that separate them are `?` and `:`.
## Assignment operators
An **assignment** operator is a binary operator. It assigns a value to the left operand based on the value of the right operand.
The most common assignment operator is `=`, as in `a = b`. In this example, `a` is the left operand, and it's assigned the value of `b`, which is the right operand.
There are also _compound assignment operators_. Compound assignment operators typically combine assignment and arithmetic operators in a shortened version. For example, `a += b` is a shortened version of `a = a + b`.
Below is a table of some of the most common assignment operators:
| Operator name | Shortened operator | Longform version | Example |
| ------------------------- | ------------------ | ---------------- | --------- |
| Assignment operator | a = b | a = b | `x = 4;` |
| Addition assignment | a += b | a = a + b | `x += 4;` |
| Subtraction assignment | a -= b | a = a - b | `x -= 4;` |
| Multiplication assignment | a \*= b | a = a \* b | `x *= 4;` |
| Division assignment | a /= b | a = a / b | `x /= 4;` |
| Remainder assignment | a %= b | a = a % b | `x %= 4;` |
Let's see some examples of the above operators:
```javascript
let x = 10;
console.log((x += 3)); // x = 10 + 3 -> x = 13
let y = 8;
console.log((y -= 3)); // y = 8 - 3 -> y = 5
let z = 3;
console.log((z *= 3)); // z = 3 * 3 -> z = 9
let m = 6;
console.log((m /= 3)); // m = 6 / 3 -> m = 2
let n = 7;
console.log((n %= 3)); // n = 7 % 3 -> n = 1
```
## Comparison operators
A **comparison** operator is a binary operator. It compares the two operands, and returns `true` or `false` depending on the comparison.
One comparison operator is less than, or `<`. For example, `1 < 2` would return `true`, because `1` is less than `2`.
When comparing two values of different types, JavaScript does something called **type conversion**. This means that if you're comparing a string with an integer, for example, JavaScript will try to convert the string into a number so that the values can actually be compared. There are two comparison operators that **don't** do type conversion: strict equal, `===`, and strict not equal, `!==`. Strict equal and strict not equal do not convert values of different types before performing the operation.
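For example, here is how loose and strict equality differ when the operand types don't match:

```javascript
console.log(1 == "1");    // true: "1" is converted to the number 1 first
console.log(1 === "1");   // false: different types, no conversion happens
console.log(0 == false);  // true: false is converted to the number 0
console.log(0 === false); // false: a number is never strictly equal to a boolean
```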
Below is a table of comparison operators in JavaScript:
| Operator name | Operator symbol | Operator function | Example |
| --------------------- | --------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------- |
| Equal | `==` | Returns `true` if the operands are equal, and `false` if the operands are not equal. | `4 == "4"` (returns true) |
| Not equal | `!=` | Returns `true` if the operands are not equal, and `false` if the operands are equal. | `4 != "5"` (returns true) |
| Strict equal | `===` | Returns `true` if the operands are of the same type and are equal, and `false` if the operands are the same type and are not equal or are different types. | `4 === 4` (returns true) |
| Strict not equal | `!==` | Returns `true` if the operands are the same type but are not equal or are different types, and `false` if the operands are of the same type and are equal. | `4 !== "4"` (returns true) |
| Greater than | `>` | Returns `true` if the left operand is greater than the right operand, and `false` if the left operand is less than or equal to the right operand. | `4 > 3` (returns true) |
| Greater than or equal | `>=` | Returns `true` if the left operand is greater than or equal to the right operand, and `false` if the left operand is less than the right operand. | `4 >= "4"` (returns true) |
| Less than | `<` | Returns `true` if the left operand is less than the right operand, and `false` if the left operand is greater than or equal to the right operand. | `4 < "5"` (returns true) |
| Less than or equal | `<=` | Returns `true` if the left operand is less than or equal to the right operand, and `false` if the left operand is greater than the right operand. | `4 <= 7` (returns true) |
Let's see some examples of the above operators:
```javascript
let x = 5;
let y = 2;
let z = 7;
let m = "5";
let n = "6";
x == m; // 5 == "5" -> true
x != y; // 5 != 2 -> true
x === z; // 5 === 7 -> false
x !== m; // 5 !== "5" -> true
x > y; // 5 > 2 -> true
x >= z; // 5 >= 7 -> false
x < n; // 5 < "6" -> true
x <= m; // 5 <= "5" -> true
```
## Arithmetic operators
An **arithmetic** operator can be a binary or unary operator. As a binary operator, it takes two numerical values as the operands, performs an arithmetic operation, and returns a numerical value. As a unary operator, it takes one numerical value, performs an operation, and returns a numerical value.
One arithmetic operator is the plus sign, `+`, which is used to add two numbers. For example, `4 + 6` would return `10`. Below is a table of some of the arithmetic operators in JavaScript:
| Operator name | Operator symbol | Operator function | Example |
| -------------- | --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
| Addition | `+` | Binary operator. Returns the result of adding two operands. | `4 + 6` returns 10 |
| Subtraction | `-` | Binary operator. Returns the result of subtracting one operand from another. | `5 - 2` returns 3 |
| Multiplication | `*` | Binary operator. Returns the result of multiplying two operands. | `3 * 4` returns 12 |
| Division | `/` | Binary operator. Returns the result of dividing one operand by another. | `9 / 3` returns 3 |
| Remainder | `%` | Binary operator. Returns the integer remainder of dividing one operand by another. | `10 % 3` returns 1 |
| Increment | `++` | Unary operator. Adds `1` to the operand. If it comes before the operand (`++z`), it returns the value of the operand _after_ adding `1`. If it comes after the operand (`z++`), it returns the value of the operand _before_ adding `1`. | If `z = 4`, `++z` returns `5`, and `z++` returns `4`. |
| Decrement | `--` | Unary operator. Subtracts `1` from the operand. If it comes before the operand (`--z`), it returns the value of the operand _after_ subtracting `1`. If it comes after the operand (`z--`), it returns the value of the operand _before_ subtracting `1`. | If `z = 4`, `--z` returns `3`, and `z--` returns `4`. |
| Exponentiation | `**` | Binary operator. Returns the result of raising one operand to the power of the other operand. | `5 ** 2` returns 25 |
Let's see some examples of the above operators:
```javascript
let x = 3;
let y = 5;
let z = 6;
let a = 2;
let b = 7;
console.log(x + y); // 3 + 5 -> 8
console.log(y - x); // 5 - 3 -> 2
console.log(x * z); // 3 * 6 -> 18
console.log(z / x); // 6 / 3 -> 2
console.log(y % x); // 5 % 3 -> 2
console.log(a++); // 2
console.log(--b); // 6
console.log(y ** x); // 5 * 5 * 5 -> 125
```
## Logical operators
A **logical** operator can be a binary operator or a unary operator. As a binary operator, it typically takes two Boolean values, evaluates them, and returns a Boolean value.
The unary logical operator in JavaScript is the logical NOT. It takes one operand and evaluates if it can be converted to the Boolean value `true`.
Below is a table of logical operators in JavaScript:
| Operator name | Operator symbol | Operator function | Example |
| --- | --- | --- | --- |
| Logical AND | `&&` | Returns `true` if both operands are `true`, and returns `false` if at least one of the operands is `false`. | `true && true` (returns true) `true && false` (returns false) |
| Logical OR | `\|\|` | Returns `true` if at least one operand is `true`, and returns `false` if both operands are `false`. | `true \|\| false` (returns true) `false \|\| false` (returns false) |
| Logical NOT | `!` | Returns `false` if the operand can be converted to `true`, and returns `true` if the operand cannot be converted to `true`. | `!true` (returns false) `!false` (returns true) |
Let's see some examples of the above operators:
```javascript
true && true; // true
true && false; // false
false && false; // false
true || true; // true
true || false; // true
false || false; // false
!true; // false
!false; // true
```
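Because logical NOT first converts its operand to a Boolean, it also works on non-Boolean values:

```javascript
console.log(!"hello"); // false - a non-empty string converts to true
console.log(!0);       // true  - 0 converts to false
console.log(!null);    // true  - null converts to false
```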
## String operators
A **string** operator is a binary operator. It takes two strings and combines them into a single string using `+`, which in this case is called the _concatenation operator_. String _concatenation_ means combining two string values together.
An example of a string operator is `console.log("Happy " + "birthday")`, which console logs the string `"Happy birthday"`.
There is also a shortened version of the string operator, which is `+=`. For example:
```javascript
let string1 = "birth";
let string2 = "day";
console.log((string1 += string2)); // "birthday"
```
## Ternary (conditional) operator
A **conditional** operator, or ternary operator, is used with three operands. It evaluates a condition and then returns one of two values, depending on whether that condition is true.
The ternary operator is structured like the following:
```
condition ? expressionIfTrue : expressionIfFalse
```
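For example, here is a small sketch of the ternary operator in action (the variable names are just for illustration):

```javascript
const age = 20;
const status = age >= 18 ? "adult" : "minor";
console.log(status); // "adult"
```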
Ternary operators are discussed at length [in this post](https://dev.to/alisabaj/back-to-basics-conditional-statements-in-javascript-2feo#the-ternary-operator).
---
This post just went over some of the more common operators you'll use and come across in JavaScript. There are many more operators, including bitwise operators and relational operators, and I encourage you to learn more about them in the MDN documentation [here](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Expressions_and_Operators). | alisabaj |
584,595 | Convolutional Sequence to Sequence | Recurrent neural networks (RNNs) with LSTM or GRU units are the most prevalent tools for NLP research... | 0 | 2021-01-28T00:36:51 | https://dev.to/divyarkamat/convolutional-sequence-to-sequence-4i5l | Recurrent neural networks (RNNs) with LSTM or GRU units are the most prevalent tools for NLP researchers, and provide state of the art results on many different NLP tasks, including language modeling (LM), neural machine translation (NMT), sentiment analysis, and so on. However, a major drawback of RNNs is that since each word in the input sequence are processed sequentially, they are slow to train.
More recently, Convolutional Neural Networks, traditionally used to solve computer vision problems, have also found prevalence in tackling NLP tasks like sentence classification, text classification, sentiment analysis, text summarization, machine translation, and answer relations.
Back in 2017, a team of researchers from Facebook AI research released an interesting paper about [Sequence to Sequence learning with Convolutional neural networks(CNNs)](https://arxiv.org/pdf/1705.03122.pdf), where they tried to apply CNNs to problems in Natural Language Processing.
In this post, I'll try to summarize this paper and how CNNs are being used in machine translation.
**What are Convolutional Neural Networks and their effectiveness for NLP?**
Convolutional Neural Networks (CNNs) were originally designed to perform deep learning for computer vision tasks, and have proven highly effective. They use the concept of a "convolution": a sliding window or "filter" that passes over the image, identifying important features and analyzing them one at a time, then reducing them down to their essential characteristics and repeating the process.
Now, let's see how the CNN process can be applied to NLP.
Neural networks can only learn to find patterns in numerical data, so before we feed text into a neural network as input, we have to convert each word into a numerical value. We start with an input sentence broken up into words and transformed into word embeddings: low-dimensional representations generated by models like word2vec or GloVe, or by a custom embedding layer. The text is organized into a matrix, with each row representing the word embedding for one word. The CNN's convolutional layer "scans" the text like it would an image and breaks it down into features.

The following image illustrates how the convolutional "filter" slides over a sentence, three words at a time. This is called a 1D convolution because the kernel moves in only one dimension. It computes an element-wise product of the weights of each word and the weights of the convolutional filter. The resulting output is a feature vector that contains about as many values as there were input embeddings, so the input sequence size does matter.

![A convolutional filter sliding over a sentence three words at a time](https://dkamatbloghome.files.wordpress.com/2021/01/download.gif)
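The sliding-window computation above can be sketched in a few lines of numpy (the sizes here are made-up toy values):

```python
import numpy as np

embed = np.random.randn(7, 4)   # a 7-word sentence with embedding dimension 4
kernel = np.random.randn(3, 4)  # one filter spanning 3 words at a time

# slide the filter one word at a time, summing the element-wise products
features = np.array([
    np.sum(embed[i:i + 3] * kernel) for i in range(len(embed) - 2)
])
print(features.shape)  # (5,) - one feature value per 3-word window
```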
A convolutional neural network includes many of these kernels (filters), and, as the network trains, the kernel weights are learned. Each kernel is designed to look at a word and its surrounding word(s) in a sequential window, and to output a value that captures something about that phrase. In this way, the convolution operation can be viewed as window-based feature extraction.
We'll be building a machine learning model to go from one sequence to another, using PyTorch and TorchText. This will be done on German to English translations, but the models can be applied to any problem that involves going from one sequence to another, such as summarization, i.e. going from a sequence to a shorter sequence in the same language.

Before we delve deep into the code, let's first understand the model architecture as described in the paper. Let's recall our general RNN-based encoder-decoder model:
![RNN-based encoder-decoder model](https://dkamatbloghome.files.wordpress.com/2021/01/seq2seq1.png)

We use our encoder (green) over the embedded source sequence (yellow) to create a context vector (red). We then use that context vector with the decoder (blue) and a linear layer (purple) to generate the target sentence.
## How does the convolutional sequence to sequence model work?
The architecture proposed by the authors for sequence to sequence modeling is entirely convolutional. The diagram below outlines the structure of the convolutional sequence to sequence model.
![Convolutional sequence to sequence architecture](https://dkamatbloghome.files.wordpress.com/2021/01/convseq2seq0.png)

Like any RNN-based sequence to sequence structure, the CNN-based model uses an encoder-decoder architecture. Here, however, both the encoder and the decoder are composed of stacked convolutional layers with a special type of activation function called Gated Linear Units, and in the middle there is an attention function. The encoder extracts features from the source sequence, while the decoder learns to estimate the function that maps the encoder's hidden states and its previously generated words to the next word. The attention tells the decoder which hidden states of the encoder to focus on.
This model introduces the concept of **positional embedding**. What do we mean by positional embedding? In a CNN we process all the words in a sequence simultaneously, so it is impossible to capture sequence-order information the way we do in RNNs (a timeseries-based model). To use the order of the sequence, the absolute position of each token needs to be injected into the model, and we need to send this information to the network explicitly. A positional embedding works just like a regular word embedding, but instead of mapping words, it maps the absolute position of a word to a dense vector. The positional embedding output is added to the word embedding. With this additional information, the model knows which part of the context it is handling.
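A minimal numpy sketch of this positional embedding idea (all sizes are toy values chosen for illustration):

```python
import numpy as np

vocab_size, max_len, emb_dim = 100, 50, 8
word_emb = np.random.randn(vocab_size, emb_dim)  # one row per word in the vocab
pos_emb = np.random.randn(max_len, emb_dim)      # one row per absolute position

tokens = np.array([5, 17, 3, 42])       # token indices for a 4-word sentence
positions = np.arange(len(tokens))      # absolute positions 0, 1, 2, 3

# element-wise sum of the word embedding and the positional embedding
combined = word_emb[tokens] + pos_emb[positions]
print(combined.shape)  # (4, 8)
```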
![Word embeddings combined with positional embeddings](https://dkamatbloghome.files.wordpress.com/2021/01/positional_emb.png)

The paper also applies residual connections between the blocks in both the encoder and the decoder, which allows for a deeper convolutional network.
**Why residual connections?**
Models with many layers often rely on shortcut or residual connections. When we stack up convolutional layers to form a deeper model, it becomes harder and harder to optimize: the model has a lot of parameters, resulting in poor performance, and the gradients can start exploding and become very difficult to handle. This is solved by adding a residual block (skip connection), i.e. adding the previous block's output directly onto the current block's output. This technique makes the learning process easier and faster, enables the model to go deeper, and helps improve accuracy.
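The residual connection itself is just an element-wise addition of a block's input onto its output. As a sketch (the `conv_block` here is a stand-in for a real convolutional block):

```python
import numpy as np

def conv_block(x):
    # stand-in for a real convolution; any transform that keeps the shape works
    return np.tanh(x)

def with_residual(x):
    # skip connection: add the input straight back onto the block's output
    return conv_block(x) + x

x = np.random.randn(6, 8)  # 6 tokens, hidden dimension 8
print(with_residual(x).shape)  # (6, 8)
```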
## Encoder
Let's now have a closer look at the Encoder structure.

- We take the German sentence, add the `<sos>` and `<eos>` tokens at the start and end of the sentence, and split it into tokens. We pad because the CNN layers are going to reduce the length of the sentence; to maintain the same sentence length, we add padding.
- We first send it to the embedding layer to get the word embeddings. We also need to encode the position of each word, so we send the positions (the index positions of the words) to another, similar embedding layer to get the positional embeddings.
- We then do an element-wise sum of the word embedding and the positional embedding, which results in a combined embedding vector (this layer knows the word and also encodes the location of the word).
- This vector goes into a fully connected layer, because we need to convert it into a particular dimension and also to help increase capacity and extract information; basically, to convert these simple numbers into something more complex (like a rearranging of features).
- The output of each of these FC layers is sent simultaneously to the multiple convolution blocks.
- For each input that goes into a convolutional block, we get an individual output.
- This output is sent to another fully connected layer, because the output of the convolution needs to be converted back into the embedding dimension of the encoder.
- The final vector will have an embedding size equal to the number of dimensions we want.
- We also add a skip connection: the output of the final FC layer is added to the element-wise sum of the word and positional embeddings, i.e. we send the whole word along with its position to the decoder, since the convolutional layers might lose the positional information.
Finally, the encoder block sends two outputs to the decoder: the conved output and the combined vector (a combination of the transformed vector and the embedding vector).
In other words, if we have 6 tokens, we get 12 context vectors, 2 context vectors per token: one conved and one combined.
## Convolutional Blocks
Let's now see the convolutional block within the encoder architecture.

- As mentioned earlier, we pass the padded input sentence to the CNN block. This is because the CNN layer reduces the length of the sentence, and we need to ensure that the length of the sentence going into the convolution block equals the length of the sentence coming out of it.
- We then convolve the padded input using a kernel size of 3 (an odd size).
- The output of this is sent to a special kind of activation: the GLU (Gated Linear Unit) activation.
### How does the GLU activation work?
The output of the convolutional layer, i.e. the input to the GLU, is split into two halves, A and B. Half of the input (A) goes through a sigmoid, and we then take the element-wise product of the two halves. The sigmoid acts as a gate, determining which parts of B are relevant to the current context: the larger the value of an entry in sigmoid(A), the more important the corresponding entry in B. This gating mechanism enables the model to select the effective parts of the input features for predicting the next word. Since the GLU reduces the input to half, we double the number of channels going into the convolution block.

- Then we add a residual connection: the combined vector is added as a skip connection to the output of the GLU. This is done to avoid the issues associated with deep stacks of convolutional layers; the skip connection ensures a smooth flow of gradients.
This concludes a single convolutional block. Subsequent blocks take the output of the previous block and perform the same steps. Each block has its own parameters; they are not shared between blocks. The output of the last block goes back to the main encoder, where it is fed through a linear layer to get the conved output and then element-wise summed with the embedding of the token to get the combined output.
## Decoder
The decoder is very similar to the encoder, but with a few changes.

- We pass the whole target sentence in for the prediction. Like the encoder, we first pass the tokens to the embedding layer to get the word and positional embeddings.
- We add the word and positional embeddings with an element-wise sum and pass the result to the fully connected layer, which then feeds the convolutional layer.
- The convolutional layer accepts two additional inputs, the encoder conved and encoder combined outputs (this is how encoder information is fed into the decoder). We also pass the embedding vector as a residual connection to the convolution layer. Unlike in the encoder, the residual (skip) connection goes only into the convolution block; it does not go to the output of the convolution block, because we have to use that information to predict the output.
- This goes to a two-layer linear network (FC layers) to make the final prediction.
## Decoder Conv Blocks
Let's now see the decoder convolutional blocks. They are similar to the ones within the encoder, but with a few changes.

For the encoder, the input sequence is padded so that the input and output lengths are the same, and we pad the target sentence in the decoder as well, for the same reason. However, for the decoder we only pad at the beginning of the sentence; this padding makes sure the target of the decoder is shifted by one word from its input. Since we process the whole target sequence simultaneously, we need a way not only to let the filter carry the token we have to the next stage, but also to make sure the model does not learn to output the next word in the sequence by directly copying it, without actually learning how to translate.
<!-- the attension in the middle can be computed simultaneously for the length of the kernel (queries) to parallelize the training process. However, during testing we need to wait for the next word to be generated in order to proceed to the next time step -->
If we don't pad at the beginning (as shown below), the model will see the next word while convolving and will literally copy it to the output, without learning to translate.

## Attention
The model also adds attention in every decoder layer, and the authors demonstrate that each attention layer adds only a negligible amount of overhead. The model uses both the encoder conved and encoder combined outputs to figure out where exactly the encoder wants the model to focus while making the prediction.
- First, we take the conved output of a word from the decoder and do an element-wise sum with the decoder input embedding to generate a combined embedding.
- Next, we calculate the attention between the combined embedding generated above and the encoder conved output, to find how well it matches the encoder conved output.
- Then, this is used to calculate a weighted sum over the encoder combined output, applying the attention.
- This is then projected back up to the hidden dimension size, and a residual connection to the initial input of the attention layer is applied.
This can be seen as attention with multiple 'hops', compared to single-step attention.
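The attention steps above can be sketched with numpy (random toy tensors stand in for the real decoder and encoder outputs):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

trg_len, src_len, emb_dim = 4, 6, 8
combined = np.random.randn(trg_len, emb_dim)    # decoder conved + embedding
enc_conved = np.random.randn(src_len, emb_dim)
enc_combined = np.random.randn(src_len, emb_dim)

attention = softmax(combined @ enc_conved.T)    # match against encoder conved
attended = attention @ enc_combined             # weighted sum of encoder combined
print(attended.shape)  # (4, 8)
```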
## Seq2Seq
For the final part of the implementation, we'll implement the seq2seq model. This will stitch the Encoder and Decoder together:
- First, the `<eos>` token is sliced off the target sentence, as we do not input this token to the decoder.
- The encoder receives the input/source sentence and produces two context vectors for each word:
  - encoder_conved (the output from the final encoder conv. block), and
  - encoder_combined (encoder_conved plus, element-wise, the source embedding plus the positional embeddings).
- The decoder takes in the whole target sentence at once to produce the prediction of the output/target sentence.
## Inference
The following are the steps we take during inference:
- First, ensure our model is in evaluation mode, which it should "always" be for inference.
- When a new unseen sentence is passed in, we first convert it to lower case and tokenize the sentence.
- Append the `<sos>` and `<eos>` tokens.
- Map the tokens to their indexes, i.e. their corresponding integer representations from the vocab.
- Convert it to a tensor and use the unsqueeze operation to add a batch dimension.
- Feed the source sentence into the encoder inside a `torch.no_grad()` block, to ensure no gradients are calculated, which reduces memory consumption and speeds things up.
- Create a list to hold the output sentence, initialized with an `<sos>` token.
- While we have not hit a maximum length:
  - convert the current output sentence prediction into a tensor with a batch dimension,
  - place the current output and the two encoder outputs into the decoder,
  - get the next output token prediction from the decoder,
  - add the prediction to the current output sentence prediction,
  - break if the prediction was an `<eos>` token.
- Convert the output sentence from indexes to tokens.
- Return the output sentence (with the `<sos>` token removed) and the attention from the last layer.
# Conclusion
Compared to RNN models, convolutional models have two advantages.
- First, they run faster, because convolutions can be performed in parallel. By contrast, an RNN needs to wait for the values of the previous timesteps to be computed.
- Second, they capture dependencies of different lengths between words easily. In a stack of CNN layers, the bottom layers capture closer dependencies while the top layers extract longer (more complex) dependencies between words.
Having said that when comparing RNN vs CNN, both are commonplace in the field of Deep Learning. Each architecture has advantages and disadvantages that are dependent upon the type of data that is being modeled.
***From the abstract of the paper, the authors claim to outperform the accuracy of deep LSTMs in WMT’14 English-German and WMT’14 English-French translation at an order of magnitude faster speed, both on GPU and CPU*.**
The PyTorch implementation of this can be found [here](https://github.com/divya-r-kamat/NLP-With-PyTorch/blob/master/Convolutional_Sequence_to_Sequence/German_to_English_Translation_using_Convolutional_Seq2Seq.ipynb).
| divyarkamat | |
584,783 | Using React 17 in the angular 11 gives the error React is not defined when makes prod build but working fine in dev build. | Hi I am following this URL to integrate React in angular https://medium.com/@zacky_14189/embedding-re... | 0 | 2021-01-28T07:02:27 | https://dev.to/sumitkumar151294/using-react-17-in-the-angular-11-gives-the-error-react-is-not-defined-when-makes-prod-build-but-working-fine-in-dev-build-gfo | Hi I am following this URL to integrate React in angular https://medium.com/@zacky_14189/embedding-react-components-in-angular-the-easy-way-60f796b68aef
But when I make a production build, I get the following error in the browser console:
```
main.597a58e09c20f4baf37b.js:1 ERROR ReferenceError: React is not defined
    at e.render (14.30350cdd70a75b908595.js:1)
    at e.ngAfterViewInit (14.30350cdd70a75b908595.js:1)
    at Jt (main.597a58e09c20f4baf37b.js:1)
    at $t (main.597a58e09c20f4baf37b.js:1)
    at Zt (main.597a58e09c20f4baf37b.js:1)
    at on (main.597a58e09c20f4baf37b.js:1)
    at main.597a58e09c20f4baf37b.js:1
    at on (main.597a58e09c20f4baf37b.js:1)
    at Rn (main.597a58e09c20f4baf37b.js:1)
    at main.597a58e09c20f4baf37b.js:1
```
 | sumitkumar151294 | |
585,029 | Moving forward with Previous | In this series, I'm documenting my experiences with attempting to write and execute a machine learnin... | 10,975 | 2021-01-28T12:55:29 | https://dev.to/goyder/moving-forward-with-previous-3bc4 | python, machinelearning, retro | In this series, I'm documenting my experiences with attempting to write and execute a machine learning program in Python 1.6 on a NeXT hardware emulator running NeXTSTEP.
As I outlined in the first article, my motivations carrying out this project are basically:
* Curiosity
* Interest in picking up technical skills
Today's article is going to be more focused on the latter motivation, as I figure out the best way to run the 30-something-year-old NeXTSTEP operating system on my current machine (a Linux desktop with 9-ish year old hardware, for reference).
## Goal
Our purpose here is relatively straightforward: let's explore our options for [emulation platforms](https://en.wikipedia.org/wiki/Emulator) to let us run NeXTSTEP, pick one, and get it going.

*(I'd love to actually carry all of this out via an actual NeXT machine, but their age, importance, and the fact that not very many were actually built or sold means that it's very difficult to find one.)*
## Challenges and decisions
### What platform to use?
One of the key challenges I encountered straight off the bat is trying to figure out what hardware I want to emulate. Two options jump out at me:
* First, there is a emulator available that replicates the NeXT machines, charmingly called ["Previous"](http://previous.alternative-system.com/). (Points for that.)
* I'd actually very briefly played around with this years ago but gave up quickly as I didn't really know what I was doing, and the learning curve was significant.
* The other option is to run one of the later versions of NeXTSTEP that were made available on non-NeXT hardware. (These versions were known as [OPENSTEP](https://winworldpc.com/product/nextstep/4x), of course not to be confused with the API spec [OpenStep](https://en.wikipedia.org/wiki/OpenStep)) Notably, "non-NeXT hardware" means x86 hardware, meaning it can be run under [VirtualBox](https://www.virtualbox.org/).
The documentation and guides in both cases seem... dicey. I daresay there'll be interesting debugging regardless.

*Depicted: debugging.*
Reviewing both of the options, I decided to go with Previous because:
* It's the most true to the spirit of this project (it's the closest to authentic hardware, and the non-Intel chipset it's emulating makes this even more pointlessly quixotic!)
* The VirtualBox option is only available for later versions of NeXTSTEP which were ported to x86 chipsets.
* It *does* seem to have the most support, features, and, most importantly, [active forum community](http://www.nextcomputers.org/forums).
Seriously, the value of an active and devoted forum for troubleshooting cannot be overstated.
### Getting Previous installed
Having decided on Previous, I jumped into figuring out how to get it running on my machine. Off the bat, I was a touch nervous, primarily because the [latest news article on the homepage](http://previous.alternative-system.com/index.php/news) was from 2016, and that started with *"very long time since last updated this homepage..."*. This didn't scream "strongly supported and frequently updated" to me, but at least they're at a 1.X version.
With [no prebuilt builds available to download](http://previous.alternative-system.com/index.php/download), I set about following the instructions on how to [build from source](http://previous.alternative-system.com/index.php/build). Again, these instructions are old, but they do look technically possible, so I adapted them into a script and gave it a go. (The script is available below.)
I type `./install.sh` into the shell and begin a few minutes of tense waiting while the compilation process proceeds...
Watching intensely, no significant errors crop up...
*Stay on target... stay on target...*
And bang. The build executes flawlessly first time, to my amazement. I launch the still-warm, freshly built executable and am greeted with a blank launch menu:

I've succeeded. Now I just have to explore what it is I've succeeded in doing.
## What next?
Okay, it seems like we have a hardware emulator going - now we have to get some software to run on it.
Stay tuned.
---
## Series review
### Where have we got to?
At this point, we have:
* Compiled Previous.
(*As this project proceeds, this trail of breadcrumbs will be more useful.*)
### What did we find?
In this sesh, we:
* Found the [Previous website](http://previous.alternative-system.com/), and
* [Adapted the install script](https://gist.github.com/goyder/003704dc37b19928ddb1988ee31ee0bc).
### What could we explore further?
* Other emulation options - how well do the Virtualbox emulators work? Are there significant differences?
* What's the feasibility of the online implementations? If I write a ML algorithm in one of the [Docker-run systems](https://virtuallyfun.com/wordpress/2016/02/07/nextstep-in-your-browser/) have I created the very worst containerised ML solution possible? | goyder |
585,238 | Difference between 'extends' and 'implements' in TypeScript | Today, a friend ask about the difference between extends and implements. class Media { format: s... | 0 | 2021-01-28T15:45:09 | https://dev.to/danywalls/difference-between-extends-and-implements-in-typescript-32i0 | typescript, oop | Today, a friend asked about the difference between `extends` and `implements`.
```typescript
class Media {
  format: string;
}

class Video extends Media {}   // inherits `format` from Media
class Image implements Media {} // error: must declare `format` itself
```
The short answer for him was:
extends: The class gets all the methods and properties from the parent, so you don't have to implement them yourself.
implements: The class has to implement the methods and properties itself; it only inherits the type's shape, not its code.
I hope this can help someone with the same question.
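To make the difference concrete, here is a slightly fuller sketch (the class names, properties, and values are invented for illustration):

```typescript
class Media {
  format = "unknown";
  describe(): string {
    return `format: ${this.format}`;
  }
}

// extends: Video inherits both the property and the method, no extra work.
class Video extends Media {}

// implements: Image only promises to match Media's shape, so it has to
// declare `format` and `describe` itself.
class Image implements Media {
  format = "png";
  describe(): string {
    return `image format: ${this.format}`;
  }
}

console.log(new Video().describe()); // format: unknown
console.log(new Image().describe()); // image format: png
```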
| danywalls |
585,311 | AWS CloudFormation Templates: Getting Started. Available Now | I'm excited to announce a new course is available now in the Pluralsight library, The AWS CloudFor... | 0 | 2021-01-28T17:23:44 | https://dev.to/distinctlyminty/aws-cloudformation-templates-getting-started-available-now-5fg3 | I'm excited to announce a new course is available now in the Pluralsight library.
The AWS CloudFormation Templates: Getting Started course shows you step by step how you can use CloudFormation Templates to codify your infrastructure, providing you with a single source of truth and allowing you to automate the deployment of your infrastructure. You will explore how to create resources, discover how to use parameters and outputs and how to define mappings. When you’re finished with this course, you’ll have the skills and knowledge of CloudFormation templates needed to create templates for your own infrastructure.
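To give a feel for those pieces, here is a minimal template sketch (the resource, mapping names, and AMI id are invented placeholders, not taken from the course) combining a parameter, a mapping, and an output:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  EnvType:
    Type: String
    AllowedValues: [dev, prod]
    Default: dev
Mappings:
  EnvToInstanceType:
    dev:  { InstanceType: t3.micro }
    prod: { InstanceType: t3.large }
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !FindInMap [EnvToInstanceType, !Ref EnvType, InstanceType]
      ImageId: ami-0abcdef1234567890   # placeholder AMI id
Outputs:
  InstanceId:
    Value: !Ref AppInstance
```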
Find it now only on [Pluralsight](https://pluralsight.com). | distinctlyminty | |
585,545 | Passing Arguments into Svelte Actions | Check out my books on Amazon at https://www.amazon.com/John-Au-Yeung/e/B08FT5NT62 Subscribe to my em... | 0 | 2021-01-28T21:37:21 | https://thewebdev.info/2020/05/15/passing-arguments-into-svelte-actions/ | webdev, programming, javascript, svelte | **Check out my books on Amazon at https://www.amazon.com/John-Au-Yeung/e/B08FT5NT62**
**Subscribe to my email list now at http://jauyeung.net/subscribe/**
Svelte is an up and coming front end framework for developing front end web apps.
It’s simple to use and lets us create results fast.
In this article, we’ll look at how to pass parameters into actions.
Adding Parameters
=================
Actions can take arguments; the action function is called with the element it belongs to and the argument we pass in.
For instance, we can create an app where we display a message when we click and hold a button for a specified amount of time, which we can change with a slider.
To make this, we write the following code:
`longpress.js`
```
export const longpress = (node, duration) => {
  let timer;

  const handleMousedown = () => {
    timer = setTimeout(() => {
      node.dispatchEvent(new CustomEvent("longpress"));
    }, duration);
  };

  const handleMouseup = () => {
    clearTimeout(timer);
  };

  node.addEventListener("mousedown", handleMousedown);
  node.addEventListener("mouseup", handleMouseup);

  return {
    destroy() {
      node.removeEventListener("mousedown", handleMousedown);
      node.removeEventListener("mouseup", handleMouseup);
    },
    update(newDuration) {
      duration = newDuration;
    }
  };
};
```
`App.svelte` :
```
<script>
import { longpress } from "./longpress.js";
let pressed = false;
let duration = 2000;
</script>
<label>
<input type=range bind:value={duration} max={2000} step={100}>
{duration}ms
</label>
<button use:longpress={duration}
on:longpress="{() => pressed = true}"
on:mouseenter="{() => pressed = false}"
>
press and hold
</button>
{#if pressed}
<p>You pressed for {duration}ms</p>
{/if}
```
In the code above, we created a `longpress` action that takes a `duration` as an argument.
We have the `update` method in the object we return to update the `duration` when it’s passed in.
Then when we click the mouse, the `mousedown` event is emitted, and then the `handleMousedown` is called.
We dispatch the custom `longpress` event after the specified `duration` via `setTimeout` .
Then when the mouse button is released, `handleMouseup` is called, and then `clearTimeout` is called.
Then in `App.svelte` , we have the button that we long press to see the message. We have the slider to adjust how long the long press lasts until we see the message.
This works because we listened to the `longpress` event emitted by the button, which is attached to the `longpress` action with the `use:longpress` directive.
When we first hover over the button, the `mouseenter` event is emitted and `pressed` is set to `false` .
When the `longpress` event is emitted from the `longpress` action, which happens when we hold the button for long enough, `pressed` is set to `true` .
Then the message is displayed when `pressed` is `true` .
If we need to pass in multiple arguments, we pass in one object with multiple properties like:
```
use:longpress={{duration, foo}}
```
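This contract is easy to see outside Svelte too. Here is a plain-JavaScript sketch (with a made-up `fakeNode` object standing in for a real DOM element, and no real event listeners) of when the framework calls each callback:

```javascript
// Simplified action: records the duration on the node instead of
// wiring up real DOM listeners.
const longpress = (node, duration) => {
  node.duration = duration;
  return {
    // called whenever the argument changes (e.g. the slider moves)
    update(newDuration) {
      duration = newDuration;
      node.duration = newDuration;
    },
    // called when the element is removed
    destroy() {
      node.duration = null;
    }
  };
};

const fakeNode = {};
const handle = longpress(fakeNode, 2000); // Svelte calls this on mount
console.log(fakeNode.duration); // 2000
handle.update(500);                       // the argument changed
console.log(fakeNode.duration); // 500
handle.destroy();                         // the element was removed
```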
Conclusion
==========
We can pass a single argument to an action. This lets us adjust our actions in the way that we want.
The `update` function is required to update the value when the argument is updated. | aumayeung |
585,555 | Sharing Code Between Svelte Component Instances with Module Context | Check out my books on Amazon at https://www.amazon.com/John-Au-Yeung/e/B08FT5NT62 Subscribe to my em... | 0 | 2021-01-28T21:40:52 | https://thewebdev.info/2020/08/10/sharing-code-between-svelte-component-instances-with-module-context/ | svelte, webdev, programming, javascript | **Check out my books on Amazon at https://www.amazon.com/John-Au-Yeung/e/B08FT5NT62**
**Subscribe to my email list now at http://jauyeung.net/subscribe/**
Svelte is an up and coming front end framework for developing front end web apps.
It’s simple to use and lets us create results fast.
In this article, we’ll look at how to share variables between component instances with module context.
### Module Context
Svelte components can contain code that is shared between multiple component instances.
To do this, we can declare the script block with `context='module'` . The code inside it will run once, when the module first evaluates, rather than when a component is instantiated.
For instance, we can use it to create multiple audio elements that can only have one audio element play at a time by stopping the ones that aren’t being played as follows:
`App.svelte` :
```
<script>
import AudioPlayer from "./AudioPlayer.svelte";
</script>
<AudioPlayer
src="https://file-examples.com/wp-content/uploads/2017/11/file_example_MP3_700KB.mp3"
/>
<AudioPlayer
src="https://file-examples.com/wp-content/uploads/2017/11/file_example_MP3_1MG.mp3"
/>
<AudioPlayer
src="http://www.hochmuth.com/mp3/Haydn_Cello_Concerto_D-1.mp3"
/>
```
`AudioPlayer.svelte` :
```
<script context="module">
let current;
</script>
<script>
export let src;
let audio;
let paused = true;
const stopOthers = () => {
if (current && current !== audio) current.pause();
current = audio;
}
</script>
<article>
<audio
bind:this={audio}
bind:paused
on:play={stopOthers}
controls
{src}
></audio>
</article>
```
In the code above, we have the `AudioPlayer` component which has the code that’s shared between all instances in:
```
<script context="module">
let current;
</script>
```
This lets us call `pause` on any instance of the audio element that isn’t the one that’s being played as we did in the `stopOthers` function. `stopOthers` is run when we click play on an audio element.
Then we included the `AudioPlayer` instances in the `App.svelte` component.
Therefore, when we click play on one of the audio elements, we pause the other ones.
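The idea itself isn't Svelte-specific. Here is a plain-JavaScript sketch (the player objects and names are invented for illustration) of module-level state shared across per-instance state:

```javascript
// "context=module" equivalent: created once, shared by every player.
let current = null;

// Each call is like instantiating one component: per-instance state.
function createPlayer(name) {
  const player = {
    name,
    playing: false,
    play() {
      // stopOthers: pause whichever instance is currently playing
      if (current && current !== player) current.playing = false;
      current = player;
      player.playing = true;
    }
  };
  return player;
}

const a = createPlayer("a");
const b = createPlayer("b");
a.play();
b.play(); // pauses a, because both instances share `current`
console.log(a.playing, b.playing); // false true
```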
### Exports
Anything that’s exported from a `context='module'` script block becomes an export of the module itself. For instance, we can create a button to stop all the audio elements in `AudioPlayer` that is currently playing as follows:
`App.svelte` :
```
<script>
import AudioPlayer, { stopAll } from "./AudioPlayer.svelte";
</script>
<button on:click={stopAll}>Stop All</button>
<AudioPlayer
src="https://file-examples.com/wp-content/uploads/2017/11/file_example_MP3_700KB.mp3"
/>
<AudioPlayer
src="https://file-examples.com/wp-content/uploads/2017/11/file_example_MP3_1MG.mp3"
/>
<AudioPlayer
src="http://www.hochmuth.com/mp3/Haydn_Cello_Concerto_D-1.mp3"
/>
```
`AudioPlayer.svelte` :
```
<script context="module">
let elements = new Set();
export const stopAll = () => {
elements.forEach(element => {
element.pause();
});
};
</script>
<script>
import { onMount } from 'svelte';
export let src;
let audio;
onMount(() => {
elements.add(audio);
return () => elements.delete(audio);
});
</script>
<article>
<audio
bind:this={audio}
controls
{src}
></audio>
</article>
```
In the code above, we add all the audio element instances to the `elements` set, which we created in the `context='module'` block, as each `AudioPlayer` component is mounted.
Then we export the `stopAll` function, which has all the audio element instances from `elements` , and we call `pause` on all of them by looping through them and calling `pause` .
In `App.svelte` , we import the `stopAll` function from `AudioPlayer` and then call it when we click the Stop All button.
Then when we click play on one or more audio elements, they’ll all be paused when we click the Stop All button.
Note that we can’t have a default export because the component is the default export.
### Conclusion
We can use a module context script block to add code that is shared between all instances of a component.
If we export functions inside module context script blocks, we can import the function and then call it. | aumayeung |
585,575 | Browser extension - Setup and test | I just published a new extension on Chrome and Firefox that allows anyone to run Code Tours from the... | 10,972 | 2021-01-29T15:56:35 | https://dev.to/qmenoret/browser-extension-setup-and-test-2kbc | javascript, typescript, extension | I just published a new extension on Chrome and Firefox that allows anyone to run Code Tours from the Github UI. More information about Code Tours and the extension in this blog post.
{% link https://dev.to/doctolib/run-code-tours-without-leaving-github-1dj3 %}
I thought it would be nice to write a series about how you could do exactly the same, step by step.
This second blog post will focus on how to set up the environment to develop a Browser Extension.
## The manifest file
All we’ve seen in the previous post must now be bundled together to be loaded into the browser. In order to do so, you will need to have a folder (let’s call it `extension`) containing the different scripts to be loaded, an icon for your extension, and a Manifest file to tell the browser what to load, and where to find it. The manifest for our extension looks like this:
```json
{
"name": "Code tours Github",
"version": "0.0.1",
"description": "Allows to run code tours in your browser",
"manifest_version": 2,
"minimum_chrome_version": "60",
"background": {
"scripts": ["background.js"]
},
"permissions": ["https://render.githubusercontent.com/*", "https://github.com/*"],
"icons": {
"128": "code-tour.png"
},
"content_scripts": [
{
"run_at": "document_start",
"matches": ["https://github.com/*/*"],
"js": ["github.js"]
}
]
}
```
Let’s deep dive into the different properties.
### Describing your extension
The properties `name`, `description` and `version` are here to describe your extension. The `name` will be displayed on the Chrome Web Store (or Firefox Addons store) and when you hover over the extension icon. The `description` will also be displayed in the Store by default. Be sure to name and describe your extension properly, as a poor description is a cause for rejection (we'll see more about submitting the extension in a later blog post).
The `version` should only be incremented when you release a new version.
### A nice logo!
The `icon` property should be the path to a nice image you want to show in the extension toolbar of the browser. It will also be shown in the Store page so make sure to have a decent resolution for it (128x128 will do).
### Starting your scripts
The `background` and `content_scripts` sections list the different scripts you want to load. Just give it a relative path to the script from the manifest file. For the Content Scripts, you also need to explicitly state in which pages it should be included via the `matches` (or `exclude_matches`) properties.
### Permissions
Depending on the actions you want to perform from your extension, you will need to require some [permissions](https://developer.chrome.com/extensions/declare_permissions). You should list them in your manifest file. For instance, you could require:
* `bookmarks` to be able to manage the bookmarks of the browser
* `nativeMessaging` if you want to be able to start external programs
* Any URL you want to be able to query with authentication (you can always do a simple GET without authentication, but if you want to query content where you need the cookies, you will need to declare it)
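Put together, the corresponding manifest entry might look like this (the values below are purely illustrative):

```json
"permissions": [
  "bookmarks",
  "nativeMessaging",
  "https://example.com/*"
]
```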
You could also dynamically require them from your background script, but I would recommend putting all the permissions that are required for your extension to work in the manifest file, so your user can't reject them (either they accept, or they won't install the extension).
One important note: *do not ask for more permissions than you need*, you will have to justify all of them during the review process when you submit your extension, and this is yet another cause for rejection.
## A minimal working extension
In order to demonstrate, just create a folder called `extension` with 3 files in it.
manifest.json:
```json
{
"name": "My extension",
"version": "0.0.1",
"description": "A small test!",
"manifest_version": 2,
"minimum_chrome_version": "60",
"background": {
"scripts": ["background.js"]
},
"content_scripts": [
{
"run_at": "document_start",
"matches": ["http://*/*", "https://*/*"],
"js": ["content_script.js"]
}
]
}
```
content_script.js:
```js
console.log('content script loaded')
```
background.js:
```js
console.log('background script loaded')
```
Now let’s load it in the browser!
## Loading the extension
Now that we have a folder with our content scripts, background, and manifest, we can load it into our Browser.
### Chrome
For chrome, just go to [chrome://extensions](chrome://extensions). First, activate the developer mode in the top right corner.

Then select “Load Unpacked” on the top left.

It will prompt a file picker. Select the `extension` folder (the one containing the `manifest.json` file). Your extension is loaded and can be reloaded from this same page.
Note that if you have a background script associated with your extension, you will have an “Inspect views: background page” link. This opens the dev tools linked to your background script, allowing you to check the logs.

Now every time you will edit a file and want to reload the extension, you can click the reload button:

### Firefox
For Firefox, it’s as easy. Just go to the [about:debugging](about:debugging) page, click on “This Firefox”:

Then click “Load temporary addon”:

In the same way as for Chrome you will be prompted for a file. Select the `manifest` file (not the folder) and your extension will be loaded.
You will have access to an “Inspect” button granting you access to the devtools of the background page.

### The result
In both cases, when inspecting the background script, you will see “background script loaded” appear in the console, and on every page you visit, you will see “content script loaded”, as if it was part of the website code.
## Going further
In my case, I went with using TypeScript, which required me to transpile my code to generate what we described before, and use Webpack to generate the different script files. You can find the resulting code [here](https://github.com/qmenoret/code-tours-github/tree/6d18419ce71ec0392f921c169225fc541ac2fbd6).
To get started faster, you can find a lot of resources with _ready to use_ repositories, such as [chrome-extension-typescript-starter](https://github.com/chibat/chrome-extension-typescript-starter) for TypeScript. It’s a good way to get started quickly.
### Conclusion
We just deep dived into how to setup and test a browser extension. In the next post, we’ll start implementing some real features! Feel free to follow me here if you want to check the next one when it's out:
{% user qmenoret %}
_________________
Photo by [Ricardo Gomez Angel](https://unsplash.com/@ripato?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText)
| qmenoret |
585,733 | Working with Docker and other unix-things on Windows like on macOS or Linux | Install WSL 2 Install Ubuntu (or other distro) into WSL Place your project files in Linux inside WSL... | 0 | 2021-01-29T04:14:38 | https://dev.to/grawl/working-with-docker-and-other-unix-things-on-windows-like-on-macos-or-linux-4o6p | windows, wsl, docker, jetbrains | 1. [Install WSL 2](https://docs.microsoft.com/en-us/windows/wsl/install-win10)
1. [Install Ubuntu (or other distro) into WSL](https://docs.microsoft.com/en-us/windows/wsl/install-win10#step-6---install-your-linux-distribution-of-choice)
1. Place your project files in Linux inside WSL using a path like `\\wsl$\Ubuntu\home\username\Work`. You can access them using Explorer, since it's a network location for Windows. Just enter `\\wsl$\` into the location bar and you will see the available WSL locations.

1. Open it in JetBrains IDE

1. Install [Docker Desktop for Windows](https://www.docker.com/products/docker-desktop)
1. Enable WSL 2 support in Docker Dashboard Settings

1. Open the WSL command line, go to the project folder and initialize the Docker configuration. For example: `docker-compose up -d`.
1. Now you can see that Docker Desktop gives you access to the containers you just built.

1. Now you can work with containers in the JetBrains IDE Services pane, connecting to the Docker daemon using the "Docker for Windows" option.


TIP: after installing Docker Desktop, you may notice that you can open the Docker frontend in the browser, but not a frontend served from WSL (like `npm run start`). To fix that, create a [`.wslconfig` file](https://docs.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig) in your home folder (`C:\Users\username`) and fill it in:
```
[wsl2]
memory=4GB
swap=0
localhostForwarding=true
```
The key you want to change is `localhostForwarding`.
Found it [here](https://github.com/microsoft/WSL/discussions/2471#discussioncomment-113621). | grawl |
585,740 | 3 Software Engineering podcasts you must subscribe to now | Software Engineering podcasts have been my companion for a long time. I have listened to software eng... | 0 | 2021-01-29T04:22:22 | https://geshan.com.np/blog/2020/09/software-engineering-podcasts-you-must-subscribe-to/ | podcast, webdev | Software Engineering podcasts have been my companion for a long time. I have listened to software engineer podcasts for more than a [decade](https://geshan.com.np/blog/2009/02/3-drupal-podcasts-you-must-be-fool-to/). Software Engineering podcasts give you the latest news and views in a great format that is equally easy to consume too. You can also read the previous parts [1](https://geshan.com.np/blog/2015/10/3-podcasts-every-software-engineer-slash-developer-should-subscribe-to/), [2](https://geshan.com.np/blog/2016/05/3-podcasts-every-software-engineer-slash-developer-should-subscribe-to-part-2/), [3](https://geshan.com.np/blog/2017/01/3-podcasts-every-software-engineer-slash-developer-should-subscribe-to-part-3/), and [4](https://geshan.com.np/blog/2019/07/podcasts-every-software-engineer-slash-developer-should-subscribe-to-part-4/). Here are the 3 podcasts all software engineers should subscribe to now:

## [Command Line Heroes (from Redhat) by Saron Yitbarek](https://www.redhat.com/en/command-line-heroes)
Command line heroes is a gem of a podcast. The topics are well researched and in-depth. It covers a wide range of topics that are very relevant to all software engineers. I remember listening to the episode about [serverless](https://www.redhat.com/en/command-line-heroes/season-2/at-your-serverless) from Season 2. It was a breath of fresh air having comments from Saron, bytes from the archive, and great add-ons by multiple guests including Andrea Passwater.

This [award-winning](https://www.shortyawards.com/11th/command-line-heroes) podcast defines itself as:
> Command Line Heroes tells the epic true tales of how developers, programmers, hackers, geeks, and open source rebels are revolutionizing the technology landscape.
Amongst other seasons, my favorite was Season 3. There first episode about [Python](https://www.redhat.com/en/command-line-heroes/season-3/pythons-tale) was simply amazing and enlightening. I was happy to know about the Python programming language’s benevolent dictator for life: [Guido van Rossum](https://en.wikipedia.org/wiki/Guido_van_Rossum) who created this multipurpose language.
Saron has other podcasts too, like [Coding Newbie](https://www.codenewbie.org/podcast), which is also great. Command Line Heroes' episodes are not that long, but the way the music, commentary, and other elements are blended together shows the mark of a great podcaster. The opening generally hooks you into listening to the whole episode.
## [Developing Up By Mike Miles](https://www.developingup.com/)
Developing up is an amazing software engineering podcast. The good aspect of this podcast is that it focuses on the non-technical side of our careers. I think having great soft skills is crucial to doing a software engineering job well, as well as indispensable for climbing the career ladder. I remember pretty well the episode where Mike and Karl talk about [public speaking](https://www.developingup.com/episodes/43). It was definitely a great one among others.

Developing up describes itself as:
> A podcast focused on the non-technical side of being a developer because your career is about more than the code you write.
With 50 episodes since 2016, listening to this software engineering podcast you will surely learn some needed non-technical skills. Ranging from [Pair programming](https://www.developingup.com/episodes/48) and [code reviews](https://www.developingup.com/episodes/46) to [working remotely](https://www.developingup.com/episodes/40) and [imposter syndrome](https://www.developingup.com/episodes/30).
The episodes are relatively short and Mike does a great job of asking amazing questions. This brings out insightful and relatable information. I would really recommend you to subscribe to this software engineering podcast.
## [The SAAS Podcast by Umer Khan](https://saasclub.io/saas-podcast/)
As usual, for the third software engineering podcast, I have something that is more related to the business side of things. As software engineers, if we understand the domain and why we are doing things a certain way, our work becomes much more meaningful.
One episode I remember pretty well is the chat between [Dave and Umer](https://saasclub.io/podcast/clickfunnels-self-funded-saas-startup/) about ClickFunnels. Dave describes the ClickFunnels journey and how it made millions of dollars.

The SAAS podcast defines itself as:
> Over 250 in-depth interviews with proven SaaS founders and entrepreneurs. Get actionable insights to help you build, grow, and scale your SaaS business
There are many other episodes I like from The SAAS software engineering podcast, like the one where Krish talks about [scaling](https://saasclub.io/podcast/saas-subscriptions-and-billing-chargebee/) a SAAS business. Omer does a great job of researching and interviewing his guests. The episodes are a bit long but worth the time.
## Conclusion
Even though I have listened to podcasts a lot less due to not commuting to work for the past ~6 months, I hope you enjoy listening to the above software engineering podcasts. Happy listening. | geshan |
585,747 | Let's Git It | At one point a codenewbie, I found myself thoroughly confused by git commands, how to use them, when... | 0 | 2021-01-29T17:08:26 | https://dev.to/gabbinguyen/let-s-git-it-13m6 | git, codenewbie, beginners | At one point a codenewbie, I found myself thoroughly confused by git commands: how to use them, when to use them, and so on. While there are many online resources outlining the commands, most of what I found wasn't 'beginner-friendly' or robust enough for my liking. In short, the documentation confused me even more. It seemed to me as if the people who wrote the docs were simply trying to help *themselves* remember the commands, as opposed to teaching a newbie what the commands meant.
For any code beginner stumbling across my blog, this post is for you! This is git, simplified.
#### Git Clone
*git clone* targets an existing Git repository and makes a copy of it on your local machine. You can sync the local clone with the remote using a few commands (which I will go over later in the post). After cloning it, you can *cd* into the directory to begin coding.

#### Git Checkout
Speaking of branches, *git checkout* switches between branches; with the *-b* flag it creates a new branch and switches you onto it. The command for that is *git checkout -b "your-branch-name-here"*

#### Git Branch
*git branch* is a command used to check which branch you are currently working out of. A use case: when you're working on multiple features with other collaborators, you don't necessarily want to be working off the same branch. To keep things from getting mixed up or breaking, you want to check out a new branch to code your portion of the project. In this case, it's good practice to check which branch you're on before pushing any changes.

#### Git Add
*git add* isn't quite the same as a traditional 'save' command. *git add* sends all the changes made to the staging area, from which they can then be saved with a different command. There are different forms of the *add* command that stage certain sets of changes:

#### Git Commit
*git commit* is the command used to save your local copy of the project. The most commonly used form is *git commit -m "your-message-here"*, where you include a message describing the changes you made.

#### Git Push
*git push* is the command used to upload the locally saved changes to the remote repository. After you push, other collaborators can pull your branch, merge your changes with theirs, and work with the modifications you made.
#### Git Pull
*git pull* fetches any commits from the remote branch and merges them into your branch. When you're collaborating with others, the command you'll be using frequently is *git pull origin "name-of-branch-you-are-pulling-from"*
In this example, my project partner's branch is called 'testbranch.' I want to merge what they have with my portion of the project in my own working branch, so I call testbranch at the end of the pull command.

This pull command only merges the work on your local copy. To share the merged work with the rest of your team, you will then add, commit, and push this newly merged branch for others to pull from.
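Putting the commands together, a condensed terminal session might look like this (branch and file names are placeholders, and everything runs in a throwaway directory so nothing real is touched):

```shell
# Work in a scratch repo.
cd "$(mktemp -d)"
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "initial commit"

# Create and switch to a feature branch.
git checkout -q -b my-feature
git rev-parse --abbrev-ref HEAD   # prints: my-feature

# Stage and save a change.
echo "hello" > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "add notes"
git rev-list --count HEAD         # prints: 2

# With a remote configured, you would then share and sync:
#   git push origin my-feature
#   git pull origin testbranch
```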
These are just the basic git commands; while there are many more that you can utilize, having a good understanding of the basics will help any beginner get a jump start into collaboration. Happy coding!
References:
- https://stackoverflow.com/questions/572549/difference-between-git-add-a-and-git-add?rq=1
- https://www.atlassian.com/git/tutorials/saving-changes | gabbinguyen |
585,881 | Most Common Mistakes To Avoid During Website Redesigning | Websites are the visual representation of your business and are not only a medium to reach a large nu... | 0 | 2021-01-29T06:38:00 | https://dev.to/ltdsolace/most-common-mistakes-to-avoid-during-website-redesigning-1l61 | webdev | Websites are the visual representation of your business and are not only a medium to reach a large number of users; they also build credibility and your users' trust in the business. This is a main reason why entrepreneurs seek professional help from web design companies to get their websites designed. But designing a website takes time and energy, especially to do it right and get the results that you want. Redesigning a website can be a tricky process, and you must consider lots of factors to avoid any negative effects. So here, we will discuss the major mistakes that you should avoid while redesigning or upgrading your current website. Let us see each one in detail.
# Mistakes To Avoid During Website Redesigning-
**1. Not Analyzing Current Web Design-**
Many entrepreneurs reach out to a web development company to get their website revamped without knowing the defects of their current website. Prior to finding a solution for any issue, you should discover what the actual issue is. You should analyze the current website thoroughly and know why you are considering a redesign, which major problems you want to solve first, and so on.
Discover the opportunities that you are missing and the things that are and aren't working on your current website. These points should be analyzed and considered while finalizing the new design of the website.
**2. Not Setting The Goals-**
Remember that website redesign is not about using the latest technologies and techniques. You have to understand why you are taking this step and investing so much money and time in redesigning. For instance, an eCommerce business might want to change an older design that had complicated navigation to something much simpler, knowing how vital navigation is on a shopping website. Setting proper final goals will give a clear view of things. This will assist you with understanding the process and dividing the workload by compartmentalizing and assigning a specific set of tasks to separate teams. In this way, the website architecture and workflow become a more seamless and streamlined process. On the other hand, not choosing your final objectives and starting in an aimless way may compel you to change the complete strategy at the very last hour. This is one of the most common website redesign mistakes.
**3. Aesthetics Over Functionality-**
In the current times, everything has to be eye-catching and attractive, but don't ignore the importance of functionality for a website. An attractive website with poor functionality may attract users but will fail to retain them. Though the main aim of a website redesign is to make it more attractive than the previous one, you should also focus on making it work better than the previous one. Easy navigation and a smart layout are important components of a good website, and they must not be ignored under any circumstances. Your website must be easy to use, and users must easily be able to find whatever they are looking for.
**4. Ignoring The Redesign Budget-**
The next critical aspect to consider and decide on is the budget, a frequent source of issues during a website redesign. The expense of upgrading a site depends on the different features and functionalities you have to integrate into it. So, you should establish a realistic budget for redesigning the website. After all, the website is the face of your digital presence. It can help you reach many leads and convert them into sales to accomplish a higher ROI. The longer the redesigning process takes, the longer server resources will be consumed, not to mention the team's salary. So, it is necessary to decide on a budget and then move ahead with planning.
**5. Not Providing Adequate Time To Website Designers-**
It requires a lot of time to upgrade a website, and you should have enough patience to provide sufficient time to redesign it. It doesn't mean that you have to wait for years; just fix the deadlines so the work can be done on time. But do not set unrealistic deadlines. There are many complications involved in a website redesign that you have to work through. Many people approach customized website design services but can't provide sufficient time. Website redesigning with predefined templates doesn't take as much time as a custom website design.
**6. Not Choosing The Proper Technology-**
Technology decisions play an important role in a website redesign. Once you have set the final goals and divided the tasks among separate teams, the next step is to choose a technology to maximize output within the shortest possible time. The selection of technology is important not only for the present but also for future use. Using the latest technology available in the market buys the team a considerable amount of time before the next redesign occurs. For instance, if speed and a single-page application are the requirements, the best option could be going with React.js. Similarly, if your website is in Python, then Django as a framework could be the perfect choice.
**7. Not Paying Adequate Attention To The Content-**
Choosing design over content is one of the most common website redesign mistakes to avoid. This approach can nullify your whole redesign strategy and waste time and money. It may be very enticing to jump in on impulse and start designing the website, paying no attention to the content that forms the core of your website's design. The correct approach is to opt for content creation and moderation first and then shift to design. Go with lead-oriented, engaging content that focuses on the target audience's demands. The goal of a web redesign should be to complement and enhance your content instead of taking the limelight away from it.
**8. Not Making A Responsive Website-**
Responsiveness is a website's ability to scale down or up according to the screen on which it is displayed. For example, when the buttons on your website scale down on a mobile screen, they can become so small that users' fingers accidentally press a wrong button, one they don't want to click.
About 52% of web traffic comes from mobile devices, so it is necessary to scale everything down so that mobile users have a great user experience. Hence, whenever going for a website redesign, ensure that your website is responsive.
**9. Not Providing A Proper Contact Information-**
Your clients must be able to connect with you easily. Customers don't like to spend a lot of effort finding ways to reach you online. So always remember to add a 'Contact Us' page link in the primary navigation, and place a clear CTA button in the header with the official phone number of the business. Allow users to contact you via contact form submission, email, phone, social media pages, etc. Also make sure to reply promptly to maintain trust in the relationship with your clients.
**10. Not Setting A Realistic Launch Date-**
After a website is properly designed, the question we automatically find ourselves asking is: when do we launch it? This is important, because businesses may have tight constraints and fixed deadlines for meeting their targets. There are many factors to consider when setting a launch date. Setting an unrealistic launch date will either make the website suffer in terms of quality, or force you to settle for less than you expected. So it is better to start as early as possible and set a realistic date for the website's relaunch.
| ltdsolace |
586,002 | so amazing to be part of this group | A post by Selunati | 0 | 2021-01-29T09:56:04 | https://dev.to/selunati/so-amazing-to-be-part-of-this-group-378g | selunati | ||
586,034 | Container v0.1 Released - with Attributes Injection | https://github.com/apexpl/container/ A lightweight, straight forward dependency injection container... | 0 | 2021-01-29T11:35:43 | https://dev.to/apexpl/container-v0-1-released-with-attributes-injection-2gn6 | php | https://github.com/apexpl/container/ A lightweight, straightforward dependency injection container that simply works, and works well. It supports a config file and all standard injection methods -- constructor, setter, and annotation, plus attributes. It also includes a `Di` wrapper class that allows container methods to be accessed statically for greater simplicity and efficiency. | apexpl |
586,158 | The must listen Podcast If you are a CEO | Business wars is amazing podcast series. In which the host David Brown discusses the business strateg... | 0 | 2021-01-29T14:35:37 | https://dev.to/alimemonzx/the-must-listen-podcast-if-you-are-a-ceo-5h93 | podcast, leadership, watercooler | Business Wars is an amazing podcast series in which the host, David Brown, discusses the business strategies of, and competition between, the biggest businesses in the world.
The best series so far for me was Netflix vs Blockbuster. Blockbuster was the pioneer of the movie rental business. In 2000, Netflix offered Blockbuster a partnership, which Blockbuster turned down, and the rest is history.
Do check out this 8-episode series on Spotify.
{% spotify spotify:episode:1it9SpNQQ4VmbwWipKj52o %}
Some amazing series from Business Wars:
* **Nike vs Adidas**
* **eBay vs PayPal**
* **Marvel vs DC**
* **Red bull vs Monster**
* **Coke vs Pepsi**
* **McDonalds vs Burger King**
* **Ferrari vs Lamborghini**
Have you heard any of these series? If yes, do let me know which one is your favourite in the comments section. | alimemonzx |
586,217 | Scilla - Recon | Scilla is a recon tool. It's a CLI tool for collecting dns records, directories, subdomains and open... | 0 | 2021-01-29T15:40:00 | https://dev.to/edoardottt/scilla-recon-1ald | github, go, security, recon | Scilla is a recon tool: a CLI tool for collecting DNS records, directories, subdomains, and open ports, given a single domain.
https://github.com/edoardottt/scilla | edoardottt |
586,476 | Azure Calling Functions | Your frontend is able to call an Azure Function using any type of HTTP Request. Simply ensure that... | 0 | 2021-02-09T22:31:57 | https://dev.to/jwp/azure-calling-functions-3c45 | Your frontend is able to call an Azure Function using any type of HTTP Request.
Simply ensure that the function name is preceded with the segment /api/ as shown here:
```
https://xyz.azurewebsites.net/api/Function1
```
However, to get this to work, the Function application's CORS configuration must include the Origin found in the request header.
From there it's functions all the way.
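For illustration, a minimal frontend call could look like the sketch below. The host and function name reuse the placeholders from above, so substitute your own values:

```javascript
// Build the function URL: the function name sits under the /api/ segment.
function buildFunctionUrl(host, functionName) {
  return `https://${host}/api/${functionName}`;
}

const url = buildFunctionUrl("xyz.azurewebsites.net", "Function1");

// In the browser, any HTTP verb works; here is a simple GET with fetch.
if (typeof window !== "undefined") {
  fetch(url)
    .then((res) => res.json())
    .then((data) => console.log(data))
    // A CORS rejection surfaces here if your origin isn't configured.
    .catch((err) => console.error(err));
}
```

If the request fails with a CORS error, that's the configuration mentioned above rather than the function itself.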
JWP2021 Azure Function CORS
| jwp | |
665,432 | Lessons learned from Junior to Senior Developer | Lessons Learned from Junior to Senior Developer | 0 | 2021-05-18T19:54:12 | https://dev.to/vincentntang/lessons-learned-from-junior-to-senior-developer-2dob | ---
title: Lessons learned from Junior to Senior Developer
published: true
description: Lessons Learned from Junior to Senior Developer
tags:
//cover_image: https://direct_url_to_image.jpg
---
3 years ago, I decided to change careers and become a software developer. I had no formal training in software development and decided to learn on my own. Through countless YouTube and Udemy tutorials, and hackathons, I landed my first job working on construction management software.
Fast forward to today. I'm a senior frontend developer in charge of delegating & delivering features affecting millions of users at a time. I've come a long way since then and I'm writing this post reflecting on what I learned along the way.
This isn't an exhaustive list but just lessons I learned along the way
## Working with stakeholders
- If you demo to stakeholders, record a video of the demo in advance as a backup.
- Live coding/testing has a high chance of failure
- The frontend is always highly scrutinized by non-tech stakeholders since it's visual and easier to understand
- When you demo, talk slow and move your mouse slow. Things that are obvious to you are this way since you spent several weeks on a feature. Stakeholders are seeing your work for the first time
## Working with management
- If you need to estimate a task, assume it's way harder than it is
- Deliver bad news early and good news frequently
- If your PM is messaging you directly for updates frequently, either there's an issue with the process or you are doing things wrong.
- Push back deadlines as far back as possible during planning phases. You'll thank yourself later. It's better to underpromise and overdeliver
## Working with engineers
- Create living documentation upfront and do it regularly, it saves everyone time. Google docs is your friend
- If you get stuck on a problem for more than 3 hours, ask for help.
- Make your teammates look good and give credit where it's due. The analogy is if you hang out with awesome friends chances are you are awesome as well
- Make an ADR (Architectural Decision Record) when you implement big changes to a codebase. It prevents you (and others) from 2nd guessing why certain things were done 6 months ago
## Code Reviews
- For code reviews, critique the code, not the person
- Make your code reviews awesome by explaining every question a person might come up with
## Leading a team
- Mentoring developers can be fun and enjoyable, but you need to spend time and effort on it
- Plan out your ticket action plan for the team over several days to iterate and improve
- Spend time onboarding newer developers on the project, and write documentation on questions they have. They're looking at things from a fresh perspective
- Always give good candid feedback to your teammates when possible
- Emphasize the big picture and give room for others to be creative/ take ownership of features
- When you pair program, one person is the driver, the other is the instructor
## Time Management
- Plan large focus periods of work where you won't get interrupted
- Don't forget we're all people too. Plan your vacation and sick days
- Think win-win and lump related tasks together and get them done in one go
## Reducing Technical Debt
- Don't over-engineer things unless it's warranted
- If you use two booleans that conflict with each other, use an enum instead
- Type safety is effective at reducing technical debt.
- Always use pure functions when possible. This makes the app reusable and obvious at a glance
- Write more tests against iterative and less obvious code
- Use undefined instead of sentinel values. Sentinel values are magic values that get checked in seemingly random places in the codebase
- Always use a linter. It saves you time and effort
- Don't have competing sources of truth. Use as few scoped variables as possible, and use as few derived/computed variables as possible
- If you need to tackle legacy debt, migrate one feature at a time
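To illustrate the conflicting-booleans point above, here is a small sketch in plain JavaScript, with a frozen object standing in for an enum (the state names are made up for the example):

```javascript
// Two booleans (isLoading, isError) allow the impossible state
// isLoading && isError. A single status value rules that out.
const Status = Object.freeze({
  IDLE: "idle",
  LOADING: "loading",
  ERROR: "error",
  SUCCESS: "success",
});

// A pure function of the status: easy to test, obvious at a glance.
function render(status) {
  switch (status) {
    case Status.LOADING: return "Spinner";
    case Status.ERROR:   return "Error banner";
    case Status.SUCCESS: return "Data";
    default:             return "Nothing yet";
  }
}
```

In TypeScript the same idea is usually expressed as a union type or an `enum`.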
## Architectural Rules
- Most systems use soft deletes. This means companies don't actually delete your data, it's just marked as inactive
- Use caching to reduce the amount of work service layers need to do. But don't treat it as the source of truth if possible.
- Never trust the frontend. The backend should handle the vast majority of business logic. Avoid database business logic if possible, it's hard to maintain.
- If you don't need high scalability on your project, just ship it out and don't worry about it until needed
- Don't put too many shiny things in the app. It increases the risk of failure. Pick preferably popular libraries with good documentation
- Feature flagging helps you deploy and deliver features to other teams while still deploying regularly
## Database Rules
- Keep your database designs as simple as possible to reduce complex JOIN SQL statements
- NoSQL datastores don't have ACID requirements, so it's great when you design a system where users don't talk to each other
## Summary
Let me know what you learned over the years and add it in a comment :) | vincentntang | |
586,797 | Project 41 of 100 - Writing Responsive CSS in JSX | Hey! I'm on a mission to make 100 React.js projects ending March 8th. Please follow my dev.to profile... | 0 | 2021-01-30T07:26:54 | https://dev.to/jameshubert_com/project-41-of-100-writing-responsive-css-in-jsx-2me6 | react, 100daysofcode, javascript | *Hey! I'm on a mission to make 100 React.js projects ending March 8th. Please follow my dev.to profile or my [twitter](https://www.twitter.com/jwhubert91) for updates and feel free to reach out if you have questions. Thanks for your support!*
Link to today's deployed app: [Link](https://www.dolcesdollhouse.com/)
Link to the repo: [github](https://github.com/jwhubert91/dolces-dollhouse)
This was a short project riding the heels of the first draft of a client website. The website is optimized for mobile but we wanted to just put a couple of styles in so that it's comprehensible to desktop viewers. The primary challenge was the static background image element saying (paraphrased from Spanish) "More coming soon".
For mobile screens I wanted to take advantage of the `background-size: cover` CSS property to simply cover the space. On quite large screens though this can look ridiculous since only maybe a small part of the image will be showing.
In pure CSS this is easily solved by a media query. Simply state the screen size you are optimizing for (say- over 570px wide) and set the styles within the brackets.
In this project I am actually making use of `props` to pass an image URL to the component so that the component is reusable. Since I'm setting the CSS `background` property within the JSX, it overrides some of the other styles in the actual stylesheet.
The CSS is loaded in the `<head>` of the HTML page well before the image URL is passed to this component, so this got in the way of me being able to give the image the `background-size: cover` style.
Instead, we can use Javascript to check for the screen size with the following built-in vanilla Javascript property of the `window` object:
```
window.innerWidth
```
This property returns the inner width of the window thus allowing you to set styles for different screen sizes.
The problem? Well this property is only called once on page load. Frankly for mobile this is just fine, but if someone is looking at it on a browser and decides to resize the window and push it to the side, it won't be called again. To call the property again when the window is resized we take advantage of a callback on the `window.onresize` method. I am using React `useState` to set a windowWidth variable to the size of the screen on page load and whenever the screen is resized.
```
const [windowWidth,setWindowWidth] = useState(0);
window.onresize = () => {
setWindowWidth(window.innerWidth);
}
```
Now that we have a variable that updates whenever the screen changes sizes we can create separate style objects to pass to the JSX element for each screen size, like here, where I use a ternary:
```
const innerImageStyle = windowWidth < 570 ?
({
width: '100%',
height: '80vh',
backgroundImage: `url(${props.imageUrl})`,
backgroundSize: 'cover',
}) :
({
width: '100%',
height: '500px',
backgroundImage: `url(${props.imageUrl})`,
backgroundSize: 'contain',
});
```
As you can see if it's a mobile device (less than 570 pixels wide) then we want the background to cover the space available. If it's a desktop computer, we want the background to repeat (client's choice here but I like it).
I'm sure we could also do this using the React useEffect hook, but this is intended to be a simple version and I honestly feel like the built-in Javascript window methods are underestimated and under-taught for how powerful they are.
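For reference, here is a sketch of the subscription logic a `useEffect` version would wrap. The hook wiring is shown only in the comment; the helper and its names are my own for the example:

```javascript
// Inside a component this would be:
//   useEffect(() => subscribeToWidth(window, setWindowWidth), []);
// useEffect calls the returned cleanup function on unmount.
function subscribeToWidth(target, onWidthChange) {
  const handler = () => onWidthChange(target.innerWidth);
  handler(); // report the initial width immediately
  target.addEventListener("resize", handler);
  return () => target.removeEventListener("resize", handler);
}
```

The advantage over assigning `window.onresize` directly is that each component adds and removes its own listener without clobbering anyone else's.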
Let me know what you think and how you do conditional styling for React components in the comments... | jameshubert_com |
586,858 | Excel Formulas to Find the Reverse VLOOKUP Value ~ Clear-Cut Example | Normally, the Excel VLOOKUP function searches the values from left to right in the table. Have you ev... | 0 | 2021-02-08T10:27:29 | https://geekexcel.com/excel-formulas-reverse-vlookup/ | excelformula, excelformulas | ---
title: Excel Formulas to Find the Reverse VLOOKUP Value ~ Clear-Cut Example
published: true
date: 2021-01-30 09:11:33 UTC
tags: ExcelFormula,Excelformulas
canonical_url: https://geekexcel.com/excel-formulas-reverse-vlookup/
---
Normally, the Excel **[VLOOKUP function](https://geekexcel.com/how-to-use-excel-vlookup-function-in-office-365/)** searches the values from left to right in the table. Have you ever tried to look up backward in Excel? In this article, I’m going to talk about how to **find the reverse vlookup in Excel Office 365**. Let’s get into this article!! Get an official version of **MS Excel** from the following link: [https://www.microsoft.com/en-in/microsoft-365/excel](https://www.microsoft.com/en-in/microsoft-365/excel)
## General Formula:
- To get the reverse lookup value from a range, use the below formula.
**=[VLOOKUP](https://geekexcel.com/how-to-use-excel-vlookup-function-in-office-365/)(A1,[CHOOSE](https://geekexcel.com/how-to-use-choose-function-in-microsoft-excel-2013/)({3,2,1},column1,column2,column3),3,0)**
## Syntax Explanations:
- **VLOOKUP** – This function will help to lookup data in a range or table by row. Read more on the **[VLOOKUP function](https://geekexcel.com/how-to-use-excel-vlookup-function-in-office-365/)**.
- **CHOOSE** – In Excel, the **[CHOOSE function](https://geekexcel.com/how-to-use-choose-function-in-microsoft-excel-2013/)** helps to return a value from the list of value arguments using a given position or index.
- [**Column**](https://geekexcel.com/use-column-function-in-microsoft-excel-365-simple-methods/) – It represents the input data.
- **Comma symbol (,)** – It is a separator that helps to separate a list of values.
- **Parenthesis** **()** – The main purpose of this symbol is to group the elements.
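For reference, the same backward lookup is often written with the INDEX and MATCH functions instead of rearranging columns with CHOOSE. Using the column names from the general formula above, a sketch of that alternative would be:

**=INDEX(column1,MATCH(A1,column3,0))**

Here MATCH finds the row of the lookup value in column3, and INDEX returns the value from that row in column1.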
## Practical Example:
Refer to the below example image. Here we will show how to get the backward lookup.
- First, we will enter the input values in **Column B** , **Column C** , and **Column D**.
<figcaption id="caption-attachment-21545">Input Ranges</figcaption>
- Then, enter the above-given formula to the blank cell where we want to display the result.
<figcaption id="caption-attachment-21546">Enter the formula</figcaption>
- Press the **ENTER** key to get the reverse vlookup value.
<figcaption id="caption-attachment-21547">Result</figcaption>
## Wrap-Up:
Here, we have explained the simple formula used to **find the reverse Vlookup in Excel**. Hope you like it. Please feel free to state your **query** or **feedback** for the above article. Click here to know more about **[Geek Excel](https://geekexcel.com/)** and **[Excel Formulas](https://geekexcel.com/excel-formula/)!!**
### Read Also:
- **[Excel Formulas to Reverse Order of Characters in a Cell with Functions!!](https://geekexcel.com/excel-formulas-to-reverse-order-of-characters-in-a-cell-with-functions/)**
- **[How to Select, Deselect, and Reverse Multiple Ranges in Excel 365?](https://geekexcel.com/how-to-select-deselect-and-reverse-multiple-ranges-in-excel-365/)**
- **[Reverse Text String or Words Order in Excel Office 365!!](https://geekexcel.com/reverse-text-string-or-words-order-in-excel-office-365/)**
### Video Tutorial:
<video id="video-21544-2" width="770" height="384" loop="1" autoplay="1" preload="metadata" controls="controls"><source type="video/webm" src="https://eadn-wc04-371788.nxedge.io/cdn/wp-content/uploads/2021/01/Iu2nhwmmn8-1.webm?_=2"></source><a href="https://eadn-wc04-371788.nxedge.io/cdn/wp-content/uploads/2021/01/Iu2nhwmmn8-1.webm">https://geekexcel.com/wp-content/uploads/2021/01/Iu2nhwmmn8-1.webm</a></video> | excelgeek |
587,853 | Deploying An Asp.Net WebApi and MySql DataBase Container to Kubernetes (Part-2)-Deployment | Read Part 1 Setting up Kubernetes and Kubernetes Dashboard using Docker Desktop. In this article, we... | 0 | 2021-01-31T14:26:13 | https://dev.to/gbengelebs/deploying-an-asp-net-webapi-and-mysql-database-container-to-kubernetes-part-2-deployment-3g82 | docker, kubernetes, csharp, mysql | Read Part 1 [Setting up Kubernetes and Kubernetes Dashboard using Docker Desktop](https://dev.to/gbengelebs/deploying-an-asp-net-webapi-and-mysql-database-container-to-kubernetes-part-1-setup-1ll9). In this article, we will be deploying a WebApi and MySql Database to Kubernetes. To understand the process, here is a visual representation of the different components we will be creating.

To deploy an application to Kubernetes we need to create a Deployment which is a blueprint of the application itself; we expose our deployment to other components by linking it to a service that acts as a gateway to the pods which is our application.
## What is a Deployment?
A Deployment object contains the required information about how pods will be created, how they will run, and how they will communicate with each other; it describes how an application will be set up and configured. For this tutorial, we will be creating two Kubernetes Deployments: one for the MySQL database and the other for the API.
A Kubernetes Deployment can be created by configuring a YAML file.

This YAML file configuration is sent to the Kube API Server, which then feeds the configuration to the brain of Kubernetes, the **etcd** store. The Controller Manager and the Scheduler request this configuration data through the API Server in order to bring the application to its desired state as defined in the deployment configuration.
The pods as defined in the configuration file are then installed and run on the worker nodes.
## CREATING THE DATABASE DEPLOYMENT
- Download the starting source code from [here](https://github.com/GbengaElebsDev/TestApi).
- Create a YAML file called mySqlDeployment.yaml in the root of the application.
- Paste this code snippet.
{% gist https://gist.github.com/GbengaElebs/5e033e0c6069b9eb25fab79039445fa0 %}
## Lets Go through the YAML file
- **apiVersion**: Specifies which version of the Kubernetes API you're using to create this object.
- **kind**: What kind of object you want to create. In this case a Deployment.
- **metadata**: Data that describes and gives information about other data, It helps uniquely identify the object, including a name string, UID, and an optional namespace.
- **spec**: What state you desire for the object. This contains the attributes of the deployment. The attributes of the spec are specific to its kind. So a **Deployment** will have a different spec from a **Service**.
- **template**: The template describes the pods that will be created.
The template has its own configuration and specification and it applies to the pods that are going to be created within the deployment.
- **ImagePullPolicy**: Never - This means Kubernetes should not attempt to pull the image from Docker Hub but should use the local image instead.
- The spec within the template is like the specifications of the pod. Its container name, the image it is based on, its environment variables, and the port it should open.
- **port**: This is the port number that the pod will be listening on.
- **replicas**: This is the number of pods that should be created from this deployment. Replicas allow us to easily scale our pods and ultimately our application.
- **Labels and Selectors **: These are connecting elements in our case, we are labeling the deployment (mysql-deployment), and whenever we need to attach the deployment to another object like a service we can easily attach it with the label name. We label the pod template as (mysql8) and we attach it to the deployment by using a selector and specifying the *matchLabels* tag - (mysql8). This way deployments know which pods belong to it.
- **resources**: This is to specify the compute resources for this container.
- **env**: This is to specify the environment variables that we will be used to authenticate and connect to the MYSQL database.
*1024Mi is equivalent to 1GiB; 1000m is equivalent to 1 CPU core, so 500m is half a core*.
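Since the gist isn't rendered inline here, a minimal sketch of the shape these fields take is shown below. The image tag, password value, and resource numbers are illustrative only; refer to the gist for the exact file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    app: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql8        # the deployment owns pods carrying this label
  template:
    metadata:
      labels:
        app: mysql8
    spec:
      containers:
        - name: mysql8
          image: mysql:8          # illustrative tag
          imagePullPolicy: Never  # use the local image
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: example      # illustrative only
          resources:
            limits:
              memory: "1024Mi"
              cpu: "500m"
```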
- Navigate to the root of the project in the command prompt.
- Apply the deployment using this command.
```
$ kubectl apply -f mysqlDeployment.yaml
deployment.apps/mysql-deployment created
```
If we check for the pods
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-deployment-558fc46595-h5bg8 1/1 Running 0 4m23s
```
One pod has been created and it has started running
### VOLUMES AND VOLUME MOUNTS
Because pods can easily be destroyed, we create a volume to ensure data in the MySQL pod persists even when the pod is destroyed. Think of a volume as hard-drive storage that attaches to the pod.
**a persistent volume (PV) is the "physical" volume on the host machine that stores the persistent data. A persistent volume claim (PVC) is a request for the platform to create a persistent volume for you, and then PVs are attached to pods via a PVC**. So in order to create a Persistent Volume, you need to create a Persistent Volume Claim.
- Create a YAML file called (my-sql-pv-claim.yaml).
- Paste the code snippet as shown below.
{% gist https://gist.github.com/GbengaElebs/00058c238eb9225169196b848531e218 %}
As with the other deployments we specify the kind, metadata, and labels we then specify the access modes. This defines the way pods can interact with the volumes.
**AccessModes**
> **ReadWriteOnce**: Use this if you need to write to the volume but don't require that multiple pods be able to write to it. We have just one MySQL pod, so we use this.
Read more: https://stackoverflow.com/questions/57798267/kubernetes-persistent-volume-access-modes-readwriteonce-vs-readonlymany-vs-read
At the tail end of the (my-sql-pv-claim.yaml) file, we use a configMap to store a MYSQL initialization script to create a Users table.
*A ConfigMap is an API object that lets you store configuration for other objects to use*.
*To include multiple YAML documents in a single file, separate them using `---`.*
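As the gist isn't rendered inline, here is a minimal sketch of the claim-plus-ConfigMap shape described above, with the two documents separated by `---`. The storage size and SQL statement are illustrative stand-ins; the object names match the kubectl output further down:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # illustrative size
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-initdb-config
data:
  initdb.sql: |
    -- illustrative stand-in for the Users table script
    CREATE TABLE IF NOT EXISTS Users (id INT PRIMARY KEY);
```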
- Go to the mySqlDeployment.yaml
- Add this code Snippet
- The full mySqlDeployment.yaml file is now like this.
{% gist https://gist.github.com/GbengaElebs/f724388b65bf5406eb236ec354346d32 %}
*A volume mount mounts a volume into the container at a specified path.* The MySQL container mounts the PersistentVolume at (/var/lib/mysql), and the MySQL table-creation initialization script is mounted at (/docker-entrypoint-initdb.d), as specified in the [MYSQL Docker Hub Documentation](https://hub.docker.com/_/mysql).
- Apply the claim.
```
$ kubectl apply -f mysql-pv-claim.yaml
persistentvolumeclaim/mysql-pv-claim created
configmap/mysql-initdb-config created
```
- ReApply the mySqlDeployment.
```
$ kubectl apply -f mysqlDeployment.yaml
deployment.apps/mysql-deployment configured
```
*For the MySQL deployment, don't forget to create the claim first; otherwise, you will run into a mounting issue.*
### CREATING THE DATABASE SERVICE
Every Kubernetes Deployment needs to sit behind a service. A service can be defined as a logical set of pods. Because pods are dynamic and change frequently, it is important to provide a stable interface to a set of pods that does not change. A service provides that interface and maintains a stable IP address. The service's IP and name can be used by external components that want to interact with the pods.
- Create a YAML file called (mySqlService.yaml).
- Paste the code snippet as shown below.
{% gist https://gist.github.com/GbengaElebs/9dd4a12623fc854a132db0798fe536f4 %}
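The gist isn't rendered inline; a NodePort service along the lines described would look roughly like the sketch below (the external nodePort itself is assigned by Kubernetes unless specified):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql8-service
spec:
  type: NodePort
  selector:
    app: mysql8       # routes traffic to pods carrying this label
  ports:
    - port: 3306
      targetPort: 3306
```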
- Apply the deployment using this command.
- check whether the service is up.
```
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysql8-service NodePort 10.109.131.242 <none> 3306:30750/TCP 5m
```
We use a NodePort type of service since we have only one pod. NodePort opens a specific port on each node of the cluster, and traffic on that port is forwarded directly to the service. So there is really no need for load balancing requests, since we only have a single pod for our database.
## CREATING THE API DEPLOYMENT
- Navigate to the root of the project where the (docker-compose.yml) file is and run this command to build the docker images.
```
docker-compose -f docker-compose.yml up
```
- Create a file called (deployment.yaml) in the root of the project.
- Paste this code snippet.
{% gist https://gist.github.com/GbengaElebs/28ede99f6edb9ba5efca48795d9610f7 %}
It is similar to the mySqlDeployment.yaml, with a few changes to the image and the pod name. We set DBHOST to (mysql-service), the name of the database service, and then specify the environment variables needed to connect to the MySQL service.
## CREATING THE API SERVICE
- Create another file called (service.yaml).
- Paste this code snippet.
{% gist https://gist.github.com/GbengaElebs/5782c9d25af13b4d886aef3976bab8b2 %}
Just like the Deployment, we specify the **Kind** tag as a service and then we bind the service to our deployment using a selector(testapi-deployment). We use a LoadBalancer type of service. This helps balance requests across our pods in the deployment.
Let's apply them to Kubernetes.
```
$ kubectl apply -f deployment.yaml
deployment.apps/testapi-deployment created
```
```
$ kubectl apply -f service.yaml
service/testapi-service created
```
- The API and MYSQL services and deployments have been created.
```
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h54m
mysql8-service NodePort 10.106.223.74 <none> 3306:30211/TCP 10m
testapi-service LoadBalancer 10.111.82.6 localhost 8080:31852/TCP 5s
```
- Our service has been started on **localhost** port **8080**.
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-deployment-558fc46595-h5bg8 1/1 Running 0 11m
testapi-deployment-6b6866d579-9dh5p 1/1 Running 0 29s
```
- Go to your Kubernetes dashboard to Monitor the pods and service.

- Navigate to the URL of the service. In my case, it is [http://localhost:8080/swagger/index.html](http://localhost:8080/swagger/index.html) and test the API endpoints.
- Lets scale the API pod
```
$ kubectl scale --replicas=3 deployment/testapi-deployment
deployment.apps/testapi-deployment scaled
```
- Check the number of pods. Three pods have been created.
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-deployment-558fc46595-h5bg8 1/1 Running 1 43h
testapi-deployment-669f97c8f6-26c4g 1/1 Running 1 42h
testapi-deployment-669f97c8f6-fq5tg 0/1 Pending 0 6s
testapi-deployment-669f97c8f6-mxcz2 1/1 Running 0 6s
```
- One of the pods is pending. Let's check the reason. We use *describe* to get more information about a Kubernetes object.
```
$ kubectl describe pods testapi-deployment-669f97c8f6-fq5tg
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 65s (x4 over 2m25s) default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
```
- We see the reason below. Insufficient CPU. We can then choose to add more resources to the Kubernetes Cluster.
- The LoadBalancer service will now balance the request across the 2 API pods. If any of the pods die the Controller Manager will ensure that it is recreated to bring the application to its desired state. Hence providing a highly stable, scalable, and performant system.
- Finished source code [here](https://github.com/GbengaElebsDev/TestApi/tree/Finished).
## CONCLUSION
- In this article we have :
- Gone through the different components of the Kubernetes system.
- Discussed Persistent Volumes and Persistent Volume claims in Kubernetes
- Explained the meaning of some Kubernetes objects.
- Deployed an ASP.NetCore WebApi with MYSQL Database to Kubernetes.
## MORE RESOURCES
I just discovered this awesome channel [TechWorld with Nana](https://www.youtube.com/channel/UCdngmbVKX1Tgre699-XLlUA). Check out some of the videos here
- [Kubernetes Architecture explained ](https://www.youtube.com/watch?v=umXEmn3cMWY)
- [Kubernetes Components explained! Pods, Services, Secrets, ConfigMap](https://www.youtube.com/watch?v=Krpb44XR0bk)
| gbengelebs |
591,806 | Go to ~base install in ubuntu | I relalized that there is no free space in my root parition. I installed a lot of stuff with a lot of... | 0 | 2021-02-04T09:17:20 | https://dev.to/bodnarlajos/go-to-base-install-in-ubuntu-3k4j | ubuntu | I realized that there was no free space left in my root partition. I had installed a lot of software with a lot of libraries, but much of it has become unnecessary now.
What can you do about it?
I tried to remove all the packages that were installed after the first boot.
You can start with the Ubuntu manifest file, which contains all the packages necessary for a working system.
Create a list of the packages installed on your computer:
<code>sudo apt list --installed</code>
Download Ubuntu's manifest file from here: https://releases.ubuntu.com/20.04/ubuntu-20.04.1-desktop-amd64.manifest
Make a diff:
<code>diff original.pkgs current.pkgs</code>
Prepare this file with your favorite editor and pick out the unnecessary packages. The result should look like <code>sudo apt remove --purge ...</code>.
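The steps above can be sketched as a small shell session. The file contents here are simplified stand-ins for the real `apt list --installed` and manifest formats, just to show the normalize-then-compare idea:

```shell
# Tiny demo lists (stand-ins for the real files):
printf 'bash/focal\ncmake/focal\nvlc/focal\n' > current.pkgs
printf 'bash amd64 5.0\nvlc amd64 3.0\n' > manifest.txt

# Normalize both lists to bare package names.
cut -d/ -f1 current.pkgs | sort -u > current.names
awk '{print $1}' manifest.txt | sort -u > manifest.names

# Packages installed now but absent from the stock manifest:
comm -23 current.names manifest.names
```

On the demo data this prints `cmake`, the one package not present in the manifest.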
It is not perfect, because I still use ~4GB more than the first install did, but it was enough to reach ~10GB of free space in my root partition.
If you have a working script or another approach that you use, please let me know :)
592,348 | Responsive Navigation Menu for 2021🎖️|| CSS JS | Let's Build a Responsive Navigation Hamburger Menu in 2021 from Scratch for both desktop & mobile... | 0 | 2021-02-13T21:58:09 | https://dev.to/joyshaheb/responsive-navigation-menu-for-2021-css-js-34pk | css, beginners, tutorial, javascript | Let's Build a Responsive Navigation Hamburger Menu in 2021 from Scratch for both desktop & mobile screen🎖️
# Table of Contents -
* [Codepen](#level-1)
* [Setup](#level-2)
* [HTML](#level-3)
* [SCSS](#level-4)
* [JavaScript](#level-5)
* [Conclusion](#level-6)
# Codepen <a name="level-1"></a>
{% codepen https://codepen.io/joyshaheb/pen/mdOOrNd %}
# Youtube
{% youtube dmrigoFKlLc %}
# Setup <a name="level-2"></a>
Come over to [Codepen.io](https://codepen.io) or any code editor and write this in SCSS 👇
```SCSS
// Changing default styles
*{
margin: 0px;
padding: 0px;
box-sizing: border-box;
}
body{
font-family: sans-serif;
width: 100%;
min-height: 100vh;
font-size: 25px;
overflow-x: hidden;
}
// Managing all our breakpoints in this map
$bp : (
mobile : 480px,
tablet : 768px,
desktop : 1440px,
);
// Conditional Media query Mixins
// To save time
@mixin query($screen){
@each $key, $value in $bp{
@if ($screen == $key){
@media (max-width : $value){@content;}
}
}
}
```
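To make the `query` mixin above concrete, here is a hypothetical usage and the CSS it would compile to (the `.card` class is just an example, not part of this tutorial):

```SCSS
// Input
.card {
  width: 50%;
  @include query(mobile) {
    width: 100%;
  }
}

// Compiled CSS output
.card { width: 50%; }
@media (max-width: 480px) {
  .card { width: 100%; }
}
```

Because the mixin looks the breakpoint up in the `$bp` map, adding a new screen size later only requires one new map entry.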
# HTML <a name="level-3"></a>
Let's start Coding. Write these in HTML 👇
```HTML
<!-- Using the BEM naming system -->
<!-- The Parent holding everything -->
<div class="container">
<!-- This section carries our header -->
<div class="header">
<!-- Logo here -->
<div class="header__logo">LOGO</div>
<!-- Button Management -->
<div class="header__btn">
<i id="open" class='header__btn-open bx bx-menu' ></i>
<i id="close" class='header__btn-close bx bx-x hide'></i>
</div>
<!-- menu Items here -->
<div id="menu" class="header__menu slide">
<div class="item-1">
<!-- Using Radio buttons to toggle back & forth -->
<input type="radio" checked name="A" id="a">
<label for="a">Home</label>
</div>
<div class="item-2">
<input type="radio" name="A" id="b">
<label for="b">About</label>
</div>
<div class="item-3">
<input type="radio" name="A" id="c">
<label for="c">Services</label>
</div>
<div class="item-4">
<input type="radio" name="A" id="d">
<label for="d">Contacts</label>
</div>
</div>
</div>
<!-- This section carries our content -->
<div class="main">
<div class="main__header">Welcome !</div>
<div class="main__text">
Lorem ipsum dolor sit amet.
</div>
</div>
</div>
```
# SCSS <a name="level-4"></a>
```SCSS
// Style rules for desktop screen
.header{
display: flex;
flex-direction: row;
justify-content: space-between;
align-items: center;
background-color: #c1c1c1;
height: 10vh;
padding: 0px 10px;
&__logo{
cursor: pointer;
}
&__btn{
display: none;
}
&__menu{
display: flex;
flex-direction: row;
[class ^="item-"]{
padding-left: 15px;
cursor: pointer;
}
}
}
// Style rules for .main class
.main{
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
width: 100%;
height: 80vh;
text-align: center;
}
// Style Rules for Label
label:hover{
color : white;
cursor: pointer;
}
input[type = "radio"]{
display: none;
}
input[type = "radio"]:checked + label{
color: white;
text-decoration: underline;
}
// Media query rules for mobile screen
@include query(mobile){
.header{
justify-content:center;
&__btn{
display: flex;
position: absolute;
right : 10px;
font-size: 40px;
cursor: pointer;
}
&__menu{
flex-direction: column;
align-items: center;
justify-content: space-evenly;
position: absolute;
z-index: 1;
right: 0px;
top: 10vh;
background-color: #c1c1c1;
width: 100%;
height: 90vh;
transition: all 0.4s ease;
}
}
}
// Style rules when btn is clicked
.hide{
display: none;
}
.slide{
right : -100%;
}
```
# JavaScript <a name="level-5"></a>
```Javascript
// Selecting id from HTML in JS
let open = document.getElementById("open"),
close = document.getElementById("close"),
menu = document.getElementById("menu");
// Creating a reuseable function
const common = (x, y, z) => {
x.addEventListener("click",()=>{
x.classList.toggle("hide");
y.classList.toggle("hide");
// defining conditions in if statements
if (z === "slide-in") {
menu.classList.toggle("slide");
}
if (z === "slide-out") {
menu.classList.toggle("slide");
}
})
}
// Calling functions here
common(open,close,"slide-in");
common(close,open,'slide-out');
```
## Credits

## Read Next :
{% link https://dev.to/joyshaheb/master-css-flexbox-2021-build-5-responsive-layouts-css-2021-3n9k %}
{% link https://dev.to/joyshaheb/acing-css-grid-model-in-2021-with-5-exercises-css-2021-51ci %}
# Conclusion <a name="level-6"></a>
Here's Your Medal For reading till the end ❤️
## Suggestions & Criticisms Are Highly Appreciated ❤️


* **YouTube[ / Joy Shaheb](https://youtube.com/channel/UCHG7IJuST_BXJkne-0u0Xtw)**
* **Twitter[ / JoyShaheb](https://twitter.com/JoyShaheb)**
* **Instagram[ / JoyShaheb](https://www.instagram.com/joyshaheb/)**
| joyshaheb |
597,511 | Running distributed k6 tests on Kubernetes | 📖What you will learn What the operator pattern is and when it is useful Deploying the k6... | 0 | 2021-02-11T11:38:12 | https://k6.io/blog/running-distributed-tests-on-k8s | performance, cloud, testing, kubernetes | ---
title: Running distributed k6 tests on Kubernetes
published: true
date: 2021-02-11 00:00:00 UTC
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/tqbj0asgy45ip9owh711.png
canonical_url: https://k6.io/blog/running-distributed-tests-on-k8s
tags: #performance #cloud #testing #kubernetes
---
> ### 📖What you will learn
>
> - What the operator pattern is and when it is useful
> - Deploying the k6 operator in your kubernetes cluster
> - Running a distributed k6 test in your own cluster
> #### ⚠️ Experimental
>
> The project used in this article is experimental and changes a lot between commits. Use at your own discretion.
[](/blog/static/49d58b70df40a0fa1aa75dd1f6d1f670/acdd1/operator.png)
## Introduction
One of the questions we often get in the forum is how to run distributed k6 tests on your own infrastructure. While we believe that [running large load tests](https://k6.io/docs/testing-guides/running-large-tests) is possible even when running on a single node, we do appreciate that this is something some of our users might want to do.
There are at least a couple of reasons why you would want to do this:
- You run everything else in Kubernetes and would like k6 to be executed in the same fashion as all your other infrastructure components.
- You have access to a couple of high-end nodes and want to pool their resources into a large-scale stress test.
- You have access to multiple low-end or highly utilized nodes and need to pool their resources to be able to reach your target VU count or Requests per Second (RPS).
## Prerequisites
To be able to follow along in this guide, you’ll need access to a Kubernetes cluster, with enough privileges to apply objects.
You’ll also need:
- [Kustomize](https://github.com/kubernetes-sigs/kustomize/)
- [Kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [Make](https://www.gnu.org/software/make/)
## The Kubernetes Operator pattern
The [operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) is a way of extending Kubernetes so that you may use [custom resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) to manage applications running in the cluster. The pattern aims to automate the tasks that a human operator would usually do, like provisioning new application components, changing the configuration, or resolving problems that occur.
This is accomplished using custom resources which, for the scope of this article, could be compared to the traditional service requests that you would file to your system operator to get changes applied to the environment.
[](/blog/static/8bc25b5fb3de365092d17de6121c3280/d9c41/pattern.png)
The operator will listen for changes to, or creation of, K6 custom resource objects. Once a change is detected, it will react by modifying the cluster state, spinning up k6 test jobs as needed. It will then use the parallelism argument to figure out how to split the workload between the jobs using [execution segments](https://k6.io/docs/using-k6/options#execution-segment).
## Using the k6 operator to run a distributed load test in your Kubernetes cluster
We'll now go through the steps required to deploy, run, and clean up after the k6 operator.
### Cloning the repository
Before we get started, we need to clone the operator repository from GitHub and navigate to the repository root:
```
$ git clone https://github.com/k6io/operator && cd operator
```
### Deploying the operator
Deploying the operator is done by running the command below, with kubectl configured to use the context of the cluster that you want to deploy it to.
First, make sure you are using the right context:
```
$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* harley harley harley
jean jean jean
ripley ripley ripley
```
Then deploy the operator bundle using make. This will also apply the roles, namespaces, bindings and services needed to run the operator.
```
$ make deploy
/Users/simme/.go/bin/controller-gen "crd:trivialVersions=true" rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
cd config/manager && /Users/simme/.go/bin/kustomize edit set image controller=ghcr.io/k6io/operator:latest
/Users/simme/.go/bin/kustomize build config/default | kubectl apply -f -
namespace/k6-operator-system created
customresourcedefinition.apiextensions.k8s.io/k6s.k6.io created
role.rbac.authorization.k8s.io/k6-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/k6-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/k6-operator-proxy-role created
clusterrole.rbac.authorization.k8s.io/k6-operator-metrics-reader created
rolebinding.rbac.authorization.k8s.io/k6-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/k6-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/k6-operator-proxy-rolebinding created
service/k6-operator-controller-manager-metrics-service created
deployment.apps/k6-operator-controller-manager created
```
### Writing our test script
Once that is done, we need to create a config map containing the test script. For the operator to pick up our script, we need to name the file `test.js`. For this article, we’ll be using the test script below:
```
import http from 'k6/http';
import { check } from 'k6';
export let options = {
stages: [
{ target: 200, duration: '30s' },
{ target: 0, duration: '30s' },
],
};
export default function () {
const result = http.get('https://test-api.k6.io/public/crocodiles/');
check(result, {
'http response status code is 200': result.status === 200,
});
}
```
Before we continue, we'll run the script once locally to make sure it works:
```
$ k6 run test.js
```
If you’ve never written a k6 test before, we recommend that you start by reading [this getting started article from the documentation](https://k6.io/docs/getting-started/running-k6), just to get a feel for how it works.
Let’s walk through this script and make sure we understand what is happening: We’ve set up two stages that will run for 30 seconds each. The first one will ramp up linearly to 200 VUs over 30 seconds. The second one will ramp down to 0 again over 30 seconds.
In this case the operator will tell each test runner to run only a portion of the total VUs. For instance, if the script calls for 40 VUs, and `parallelism` is set to 4, the test runners would have 10 VUs each.
Each VU will then loop over the default function as many times as possible during the execution. It will execute an HTTP GET request against the URL we’ve configured, and make sure that the server responds with HTTP status 200. In a real test, we'd probably throw in a sleep here to emulate the think time of the user, but as the purpose of this article is to run a distributed test with as much throughput as possible, I've deliberately skipped it.
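The even split described above can be sketched in plain JavaScript. This is a simplified illustration, not the operator's actual code: the real operator hands each job a k6 execution segment (a rational fraction of the test), which produces the same even division of VUs:

```javascript
// Sketch: splitting a total VU count across `parallelism` jobs.
// When the count doesn't divide evenly, the first `remainder`
// jobs each take one extra VU.
function splitVUs(totalVUs, parallelism) {
  const base = Math.floor(totalVUs / parallelism);
  const remainder = totalVUs % parallelism;
  return Array.from({ length: parallelism }, (_, i) =>
    i < remainder ? base + 1 : base,
  );
}

console.log(splitVUs(40, 4)); // [ 10, 10, 10, 10 ]
console.log(splitVUs(7, 3));  // [ 3, 2, 2 ]
```

So with our script's 200 peak VUs and `parallelism: 4`, each runner job ends up responsible for 50 VUs, which matches the runner logs shown later.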
### Deploying our test script
Once the test script is done, we have to deploy it to the kubernetes cluster. We’ll use a `ConfigMap` to accomplish this. The name of the map can be whatever you like, but for this demo we'll go with `crocodile-stress-test`.
If you want more than one test script available in your cluster, you just repeat this process for each one, giving the maps different names.
```
$ kubectl create configmap crocodile-stress-test --from-file /path/to/our/test.js
configmap/crocodile-stress-test created
```
> #### ⚠️ Namespaces
>
> For this to work, the k6 custom resource and the config map needs to be deployed in the same namespace.
Let’s have a look at the result:
```
$ kubectl describe configmap crocodile-stress-test
Name: crocodile-stress-test
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
test.js:
----
import http from 'k6/http';
import { check } from 'k6';
export let options = {
stages: [
{ target: 200, duration: '30s' },
{ target: 0, duration: '30s' },
],
};
export default function () {
const result = http.get('https://test-api.k6.io/public/crocodiles/');
check(result, {
'http response status code is 200': result.status === 200,
});
}
Events: <none>
```
The config map contains the content of our test file, labelled as test.js. The operator will later search through our config map for this key, and use its content as the test script.
### Creating our custom resource (CR)
To communicate with the operator, we’ll use a custom resource called `K6`. Custom resources behave just as native kubernetes objects, while being fully customizable. In this case, the data of the custom resource contains all the information necessary for k6 operator to be able to start a distributed load test:
```
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
name: k6-sample
spec:
parallelism: 4
script: crocodile-stress-test
```
For Kubernetes to know what to do with this custom resource, we first need to specify what API Version we want to use to interpret its content, in this case `k6.io/v1alpha1`. We’ll then set the kind to K6, and give our resource a name.
As the specification for our custom resource, we now have the option to use a couple of different properties:
#### Parallelism
Configures how many k6 test runner jobs the operator should spawn.
#### Script
The name of the config map containing our `test.js` file.
#### Separate
Whether the operator should allow multiple k6 jobs to run concurrently at the same node. The default value for this property is `false`, allowing each node to run multiple jobs.
#### Arguments
Allowing you to pass arguments to each k6 job, just as you would from the CLI. For instance `--tag testId=crocodile-stress-test-1`, `--out cloud`, or `--no-connection-reuse`.
### Deploying our Custom Resource
We will now deploy our custom resource using kubectl, and by that, start the test:
```
$ kubectl apply -f /path/to/our/k6/custom-resource.yml
k6.k6.io/k6-sample created
```
Once we do this, the k6 operator will pick up the changes and start the execution of the test. This looks somewhat along the lines of what is shown in this diagram:
[](/blog/static/8c12a4c120f2f4feed3d7284df4be089/14945/pattern-k6.png)
Let’s make sure everything went as expected:
```
$ kubectl get k6
NAME AGE
k6-sample 2s
$ kubectl get jobs
NAME COMPLETIONS DURATION AGE
k6-sample-1 0/1 12s 12s
k6-sample-2 0/1 12s 12s
k6-sample-3 0/1 12s 12s
k6-sample-4 0/1 12s 12s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
k6-sample-3-s7hdk 1/1 Running 0 20s
k6-sample-4-thnpw 1/1 Running 0 20s
k6-sample-2-f9bbj 1/1 Running 0 20s
k6-sample-1-f7ktl 1/1 Running 0 20s
```
The pods have now been created and put in a paused state until the operator has made sure they’re all ready to execute the test. Once that’s the case, the operator deploys another job, k6-sample-starter which is responsible for making sure all our runners start execution at the same time.
Let’s wait a couple of seconds and then list our pods again:
```
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
k6-sample-3-s7hdk 1/1 Running 0 76s
k6-sample-4-thnpw 1/1 Running 0 76s
k6-sample-2-f9bbj 1/1 Running 0 76s
k6-sample-1-f7ktl 1/1 Running 0 76s
k6-sample-starter-scw59 0/1 Completed 0 56s
```
All right! The starter has completed and our tests are hopefully running. To make sure, we may check the logs of one of the jobs:
```
$ kubectl logs k6-sample-1-f7ktl
[...]
Run [100%] paused
default [0%]
Run [100%] paused
default [0%]
running (0m00.7s), 02/50 VUs, 0 complete and 0 interrupted iterations
default [1%] 02/50 VUs 0m00.7s/1m00.0s
running (0m01.7s), 03/50 VUs, 13 complete and 0 interrupted iterations
default [3%] 03/50 VUs 0m01.7s/1m00.0s
running (0m02.7s), 05/50 VUs, 41 complete and 0 interrupted iterations
default [4%] 05/50 VUs 0m02.7s/1m00.0s
[...]
```
And with that, our test is running! 🎉 After a couple of minutes, we’re now able to list the jobs again to verify they’ve all completed:
```
$ kubectl get jobs
NAME COMPLETIONS DURATION AGE
k6-sample-starter 1/1 8s 6m2s
k6-sample-3 1/1 96s 6m22s
k6-sample-2 1/1 96s 6m22s
k6-sample-1 1/1 97s 6m22s
k6-sample-4 1/1 97s 6m22s
```
### Cleaning up
To clean up after a test run, we delete all resources using the same yaml file we used to deploy it:
```
$ kubectl delete -f /path/to/our/k6/custom-resource.yml
k6.k6.io "k6-sample" deleted
```
Which deletes all the resources created by the operator as well, as shown below:
```
$ kubectl get jobs
No resources found in default namespace.
$ kubectl get pods
No resources found in default namespace.
```
> #### ⚠️ Deleting the operator
>
> If you for some reason would like to delete the operator altogether, just run `make delete` from the root of the project.
>
> The idea behind the operator however, is that you let it remain in your cluster between test executions, only applying and deleting the actual K6 custom resources used to run the tests.
## Things to consider
While the operator makes running distributed load tests a lot easier, it still comes with a couple of drawbacks or gotchas that you need to be aware of and plan for. For instance, the lack of metric aggregation.
We’ll go through in detail how to set up the monitoring and visualisation of these test runs in a future article, but for now, here’s a list of things you might want to consider:
### Metrics will not be automatically aggregated
Metrics generated by running distributed k6 tests using the operator won’t be aggregated, which means that each test runner will produce its own results and end-of-test summary.
**To be able to aggregate your metrics and analyse them together, you’ll either need to:**
1) Set up some kind of monitoring or visualisation software and configure your K6 custom resource to make your jobs output there.
2) Use [logstash](https://github.com/elastic/logstash), [fluentd](https://github.com/fluent/fluentd), splunk, or similar to parse and aggregate the logs yourself.
### Thresholds are not evaluated across jobs at runtime
As the metrics are not aggregated at runtime, your thresholds won’t be evaluated using aggregation either. Currently, the best way to solve this is by setting up alarms for passed thresholds in your monitoring or visualisation software instead.
### Overpopulated nodes might create bottlenecks
You want to make sure your k6 jobs have enough cpu and memory resources to actually perform your test. Using parallelism alone might not be sufficient. If you run into this issue, experiment with using the separate property.
### Experimental
As mentioned in the beginning of the article, the operator _is_ experimental, and as such it might change a lot from commit to commit.
### Total cost of ownership
The k6 operator significantly simplifies the process of running distributed load tests in your own cluster. However, there still is a maintenance burden associated with self-hosting. If you'd rather skip that, as well as the other drawbacks listed above, and instead get straight to load testing, you might want to have a look at the [k6 cloud offering](https://k6.io/cloud).
## See also
- [The k6 operator project on GitHub](https://github.com/k6io/operator)
---
#### 🙏🏼 Thank you for reading!
If you enjoyed this article and would like to read others like it in the future, it would definitely make us happy campers if you hit the ❤️ or 🦄 buttons.
To not miss out on any of our future content, make sure to press the follow button.
Want to get in touch with us? Hit us up either in the comments below or [on Twitter](https://twitter.com/k6_io) | simme |
602,107 | Reactive Forms And Form Validation In Angular With Example | This tutorial we are learn how to create Reactive Forms And Form Validation In Angular With Example v... | 0 | 2021-02-13T03:55:26 | https://dev.to/robertlook/reactive-forms-and-form-validation-in-angular-with-example-5062 | angular, node | In this tutorial, we will learn how to create reactive forms and form validation in Angular with a simple example, shown below:
Reactive forms provide a model-driven approach to handling form inputs whose values change over time. To use reactive forms, we need to import "ReactiveFormsModule" from the Angular forms library. We will use the FormControl, FormGroup, FormArray, and Validators classes with reactive forms in Angular.
**[Reactive Forms And Form Validation In Angular With Example](https://www.phpcodingstuff.com/blog/reactive-forms-and-form-validation-in-angular-with-example.html)**
**src/app/app.component.html**
```
<h1>Reactive Forms And Form Validation In Angular With Example - phpcodingstuff.com</h1>
<form [formGroup]="form" (ngSubmit)="submit()">
<div class="form-group">
<label for="name">Name</label>
<input formControlName="name" id="name" type="text" class="form-control">
<span *ngIf="form.controls.name.touched && form.controls.name.invalid" class="text-danger">Name is required.</span>
</div>
<div class="form-group">
<label for="email">Email</label>
<input formControlName="email" id="email" type="text" class="form-control">
<span *ngIf="form.controls.email.touched && form.controls.email.invalid" class="text-danger">Email is required.</span>
</div>
<div class="form-group">
<label for="body">Body</label>
<textarea formControlName="body" id="body" type="text" class="form-control"> </textarea>
<span *ngIf="form.controls.body.touched && form.controls.body.invalid" class="text-danger">Body is required.</span>
</div>
<button class="btn btn-primary" type="submit">Submit</button>
</form>
```
**src/app/app.component.ts**
```
import { Component } from '@angular/core';
import { FormGroup, FormControl, Validators} from '@angular/forms'; // <----------------- This code---------------
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent {
constructor() { }
ngOnInit(): void {
}
form = new FormGroup({
name: new FormControl('', Validators.required),
email: new FormControl('', Validators.required),
body: new FormControl('', Validators.required)
});
submit(){
console.log(this.form.value);
}
}
```
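To see what `Validators.required` does conceptually, here is a minimal standalone sketch. This is a simplified illustration only, not Angular's actual implementation (the real validator operates on an `AbstractControl`, not a raw string):

```typescript
// Simplified sketch of a "required" validator: returns an error
// object for empty or whitespace-only input, and null when valid.
type ValidationErrors = { [key: string]: boolean } | null;

function requiredValidator(value: string | null): ValidationErrors {
  return value == null || value.trim() === '' ? { required: true } : null;
}

console.log(requiredValidator(''));    // { required: true }
console.log(requiredValidator('Bob')); // null
```

This mirrors how the template above works: when the control's value is empty, the control reports `invalid`, and the `*ngIf` error span becomes visible once the field has been touched.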
Original source : *[https://www.phpcodingstuff.com/blog/reactive-forms-and-form-validation-in-angular-with-example.html](https://www.phpcodingstuff.com/blog/reactive-forms-and-form-validation-in-angular-with-example.html "https://www.phpcodingstuff.com/blog/reactive-forms-and-form-validation-in-angular-with-example.html")* | robertlook |
602,111 | Github Search and BLoC | In case it helped :) Website: https://web.flatteredwithflutter.com/#/ We will cover briefly... | 0 | 2021-02-13T04:36:35 | https://dev.to/aseemwangoo/github-search-and-bloc-3gpi | computerscience, productivity, programming, flutter | *In case it helped :)*
<a href="https://www.buymeacoffee.com/aseemwangoo" target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Pass Me A Coffee!!" style="height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;" ></a>
<!-- wp:paragraph -->
<p><strong><em>Website: </em></strong><a href="https://web.flatteredwithflutter.com/#/" rel="noreferrer noopener" target="_blank"><strong><em>https://web.flatteredwithflutter.com/#/</em></strong></a></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>We will cover briefly about</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li>Integrate Github API</li><li>Define UI states </li><li>Create Search BLoC</li><li>Update UI as per states</li></ol>
<!-- /wp:list -->
{% youtube 7eQgZ6QQwxs %}
<!-- wp:heading {"level":3} -->
<h3>Integrate Github API</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We define an abstract class (aka <strong>contract</strong>), which includes one method.</p>
<!-- /wp:paragraph -->
<!-- wp:image -->
<figure class="wp-block-image"><img src="https://cdn-images-1.medium.com/max/1600/1*G4KaI2oqwmMPQ49ZLmwFVQ.png" alt="Github Search and BLoC"/><figcaption>Github Search and BLoC</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>Github has a <a href="https://stackoverflow.com/questions/53747159/whats-the-correct-endpoint-to-access-github-repositories-on-github-api" rel="noreferrer noopener" target="_blank">public endpoint</a> exposed for searching the repositories, and we append the user-defined search term to it.</p>
<!-- /wp:paragraph -->
<!-- wp:preformatted -->
<pre class="wp-block-preformatted">https://api.github.com/search/repositories?q='YOUR SEARCH TERM'</pre>
<!-- /wp:preformatted -->
<!-- wp:paragraph -->
<p>So now, we implement the abstract class in our <strong>GithubApi</strong> (our implementation class name).</p>
<!-- /wp:paragraph -->
<!-- wp:preformatted -->
<pre class="wp-block-preformatted">class GithubApi implements GithubSearchContract</pre>
<!-- /wp:preformatted -->
<!-- wp:paragraph -->
<p>and our search function looks like this</p>
<!-- /wp:paragraph -->
<!-- wp:image -->
<figure class="wp-block-image"><img src="https://cdn-images-1.medium.com/max/1600/1*5I2qKaWZ8F7We_yv3h6m0w.png" alt="Github Search and BLoC"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>where we call the API, fetch the results, and convert them into the <strong>SearchResult</strong> model.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Define UI states</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We formulate all the possible states our UI can have and then define them.</p>
<!-- /wp:paragraph -->
<!-- wp:preformatted -->
<pre class="wp-block-preformatted">enum States {<br> noTerm,<br> error,<br> loading,<br> populated,<br> empty,<br>}</pre>
<!-- /wp:preformatted -->
<!-- wp:paragraph -->
<p>We create a base class (<strong>SearchState</strong>), and each state (defined above) will implement this base class.</p>
<!-- /wp:paragraph -->
<!-- wp:preformatted -->
<pre class="wp-block-preformatted">@immutable
class SearchState extends BlocState {
SearchState({this.state});
final States state;
}
abstract class BlocState extends Equatable {
@override
List<Object> get props => [];
}</pre>
<!-- /wp:preformatted -->
<!-- wp:paragraph -->
<p>Our <strong>SearchState </strong>class is internally extending <a rel="noreferrer noopener" href="https://pub.dev/packages/equatable" target="_blank">equatable</a>. Equatable does the heavy lifting for equality comparisons between two objects.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4>Implement UI states</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>All the values inside the enum correspond to a UI state, currently, we have <strong>5 values inside our enum</strong>, hence we will <strong>create 5 states</strong>.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>class SearchNoTerm extends SearchState {
SearchNoTerm() : super(state: States.noTerm);
}
class SearchError extends SearchState {
SearchError() : super(state: States.error);
}
class SearchLoading extends SearchState {
SearchLoading() : super(state: States.loading);
}
class SearchPopulated extends SearchState {
final SearchResult result;
SearchPopulated(this.result) : super(state: States.populated);
}
class SearchEmpty extends SearchState {
SearchEmpty() : super(state: States.empty);
}</code></pre>
<!-- /wp:code -->
<!-- wp:paragraph -->
<p>As we see here, each of the UI states also includes the respective value from the enum. For instance,</p>
<!-- /wp:paragraph -->
<!-- wp:quote -->
<blockquote class="wp-block-quote"><p><strong>SearchNoTerm state</strong> has the value of <strong>States.noTerm</strong> , and so on</p></blockquote>
<!-- /wp:quote -->
<!-- wp:paragraph -->
<p>The results are only included in the <strong>SearchPopulated</strong> <strong>state</strong>, which has a <strong>SearchResult(our model class)</strong> parameter.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Create Search BLoC</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>The time has come, to create our much-anticipated BLoC.</p>
<!-- /wp:paragraph -->
<!-- wp:image -->
<figure class="wp-block-image"><img src="https://cdn-images-1.medium.com/max/1600/1*T9wvQB0FjG481mqykxr1sQ.png" alt="Github Search and BLoC"/><figcaption>Github Search and BLoC</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>The idea behind bloc is to <strong>expose</strong> <strong>sinks</strong> (for user-defined events) and react as per those events by <strong>emitting the respective states</strong>.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>We define our search bloc which takes in the <strong>implementation of Github API as a parameter</strong>.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>class SearchBloc {
factory SearchBloc(GithubSearchContract api) {
//.....
}
// Sink exposed to UI
final Sink<String> onTextChanged;
// State exposed to UI
final Stream<SearchState> state;
}</code></pre>
<!-- /wp:code -->
<!-- wp:quote -->
<blockquote class="wp-block-quote"><p>We expose the <strong>onTextChanged sink</strong> and emit the <strong>stream of searchstate</strong>.</p></blockquote>
<!-- /wp:quote -->
<!-- wp:heading {"level":4} -->
<h4>1. onTextChanged Sink</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We use <a href="https://pub.dev/packages/rxdart" rel="noreferrer noopener" target="_blank">RxDart</a> for defining what goes inside our sink.</p>
<!-- /wp:paragraph -->
<!-- wp:quote -->
<blockquote class="wp-block-quote"><p>RxDart adds additional capabilities to Dart <a href="https://api.dart.dev/stable/dart-async/Stream-class.html" rel="noreferrer noopener" target="_blank">Streams</a> and <a href="https://api.dart.dev/stable/dart-async/StreamController-class.html" rel="noreferrer noopener" target="_blank">StreamControllers</a>.</p></blockquote>
<!-- /wp:quote -->
<!-- wp:code -->
<pre class="wp-block-code"><code>factory SearchBloc(GithubSearchContract api) {
final onTextChanged = PublishSubject<String>();
final state = onTextChanged
.distinct()
.debounceTime(const Duration(milliseconds: 500))
.switchMap<SearchState>((String term) => _helpers.eventTyping(term))
.startWith(SearchNoTerm());
return SearchBloc._(api, onTextChanged, state);
}</code></pre>
<!-- /wp:code -->
<!-- wp:paragraph -->
<p>We create a <a href="https://pub.dev/documentation/rxdart/latest/rx/PublishSubject-class.html" rel="noreferrer noopener" target="_blank"><strong>PublishSubject</strong></a> of type string as we would be searching a string term.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>PublishSubject</strong>: It emits all the subsequent items of the source Observable at the time of subscription.</p>
<!-- /wp:paragraph -->
<!-- wp:quote -->
<blockquote class="wp-block-quote"><p>Unlike a <code>BehaviorSubject</code>, a <code>PublishSubject</code> doesn't retain/cache items, therefore, a new <code>Observer</code> won't receive any past items</p></blockquote>
<!-- /wp:quote -->
<!-- wp:code -->
<pre class="wp-block-code"><code>final subject = PublishSubject<int>();
// observer1 will receive all data and done events
subject.stream.listen(observer1);
subject.add(1);
subject.add(2);
// observer2 will only receive 3 and done event
subject.stream.listen(observer2);
subject.add(3);
subject.close();</code></pre>
<!-- /wp:code -->
<!-- wp:image -->
<figure class="wp-block-image"><img src="https://cdn-images-1.medium.com/max/1600/0*FL3hhOXm-oNiJSU2.png" alt="Publish Subject"/><figcaption>Publish Subject</figcaption></figure>
<!-- /wp:image -->
<!-- wp:separator -->
<hr class="wp-block-separator"/>
<!-- /wp:separator -->
<!-- wp:heading {"level":4} -->
<h4>2. Filtering sink</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Now we need to filter the items entering our <strong>sink</strong>. We use <strong>distinct</strong> to skip data events if they are equal to the previous data event.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>The returned stream provides the same events as this stream, except that it never provides two consecutive data events that are equal.</p>
<!-- /wp:paragraph -->
<!-- wp:quote -->
<blockquote class="wp-block-quote"><p><a href="https://rxmarbles.com/#distinct" rel="noreferrer noopener" target="_blank">Interactive description for distinct</a>.</p></blockquote>
<!-- /wp:quote -->
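<!-- wp:paragraph -->
<p>As a quick standalone sketch (not taken from the app's source), this is how <strong>distinct</strong> behaves on a plain stream:</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>Stream.fromIterable([1, 1, 2, 2, 3])
    .distinct()
    .listen(print); // prints 1, 2, 3</code></pre>
<!-- /wp:code -->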
<!-- wp:separator -->
<hr class="wp-block-separator"/>
<!-- /wp:separator -->
<!-- wp:heading {"level":4} -->
<h4>3. Debounce</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We wait for the user to stop typing for 500ms before running a search. This is achieved using <a rel="noreferrer noopener" href="https://pub.dev/documentation/rxdart/latest/rx/DebounceExtensions/debounceTime.html" target="_blank">debounce</a>.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>Stream.fromIterable([1, 2, 3, 4])
    .debounceTime(Duration(seconds: 1))
    .listen(print); // prints 4</code></pre>
<!-- /wp:code -->
<!-- wp:heading {"level":4} -->
<h4>4. switchMap</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Call the Github API with the given search term. If another search term is entered, switchMap will ensure the previous search is discarded.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>This can be useful when you only want the very latest state from asynchronous APIs, for example.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>RangeStream(4, 1)
    .switchMap((i) => TimerStream(i, Duration(minutes: i)))
    .listen(print); // prints 1</code></pre>
<!-- /wp:code -->
<!-- wp:paragraph -->
<p>Finally, we call the Github API:</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>Stream<SearchState> eventTyping(String term) async* {
if (term.isEmpty) {
yield SearchEmpty();
} else {
yield* Rx.fromCallable(() => api.search(term))
.map((result) =>
result.isEmpty ? SearchEmpty() : SearchPopulated(result))
.startWith(SearchLoading())
.onErrorReturn(SearchError());
}
}</code></pre>
<!-- /wp:code -->
<!-- wp:list -->
<ul><li>If the term is empty, we emit the <strong>SearchEmpty</strong> state</li><li>Otherwise, we call the API and bundle the results into the <strong>SearchPopulated</strong> state</li><li>In case of an error, we emit the <strong>SearchError</strong> state</li></ul>
<!-- /wp:list -->
<!-- wp:separator -->
<hr class="wp-block-separator"/>
<!-- /wp:separator -->
<!-- wp:heading {"level":3} -->
<h3>Update UI as per states</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Our bloc exposes a stream (called <strong>state</strong>). In our UI, we simply listen to this stream and react to the state emitted.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>StreamBuilder<SearchState>(
  builder: (context, snapshot) {
    final state = snapshot.data;
    if (state is SearchLoading) {
      return const _Loading();
    } else if (state is SearchEmpty || state is SearchNoTerm) {
      return const _Empty();
    } else if (state is SearchError) {
      return const _Error();
    } else if (state is SearchPopulated) {
      return const _DisplayWidget();
    }
    return const _Internal();
  },
  initialData: SearchNoTerm(),
  stream: searchBloc.state,
)</code></pre>
<!-- /wp:code -->
*In case it helped :)*
<a href="https://www.buymeacoffee.com/aseemwangoo" target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Pass Me A Coffee!!" style="height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;" ></a>
{% youtube 7eQgZ6QQwxs %}
<!-- wp:quote -->
<blockquote class="wp-block-quote"><p>Hosted URL : <a href="https://web.flatteredwithflutter.com/#/" rel="noreferrer noopener" target="_blank">https://web.flatteredwithflutter.com/#/</a></p></blockquote>
<!-- /wp:quote -->
<!-- wp:quote -->
<blockquote class="wp-block-quote"><p><a href="https://github.com/AseemWangoo/experiments_with_web" rel="noreferrer noopener" target="_blank">Source code for Flutter Web App..</a></p></blockquote>
<!-- /wp:quote --> | aseemwangoo |
602,167 | Thinking About Writing As A Means Of Livelihood | In this post, I muse over some of the thinking I've been doing lately about what it'll take for me to write fulltime and to figure ways out to have automatic passive income. | 0 | 2021-02-13T06:48:39 | https://arihantverma.com/posts/2021/01/23/thinking-about-writing-as-means-of-livelihood/ | writing, passingincome, stories, essay | ---
title: Thinking About Writing As A Means Of Livelihood
published: true
description: In this post, I muse over some of the thinking I've been doing lately about what it'll take for me to write fulltime and to figure ways out to have automatic passive income.
tags: ["writing", "passingincome", "stories", "essay"]
cover_image: "https://images.unsplash.com/photo-1434030216411-0b793f4b4173?ixid=MXwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHw%3D&ixlib=rb-1.2.1&auto=format&fit=crop&w=1650&q=80"
canonical_url: https://arihantverma.com/posts/2021/01/23/thinking-about-writing-as-means-of-livelihood/
---
## History
In May 2015, during the last days of college, I kept expressing a wish which nobody cared to reply or attend to. They had no reason to. I would casually say to my friends, _I wish we had something of our own, some of us, so that we wouldn't have to go away from each other and these memories and this friendship_. Some of my batchmates ended up [doing their things](http://fossbytes.com). None of my close friends did. Ever since, I've been dreaming of starting something of my own.
Thinking about starting something of one's own, [Arjit Raj](https://www.instagram.com/arjitraj_/) comes to mind. We were neighbours in the final year of college in 2014-15. I remember our constant brainstorming sessions. He was fondly called Andaman (where his folks lived back then). He was the topper of the mechanical engineering batch. He always dreamt big. Right from the first year, he set targets and achievable goals. He'd sleep at 9pm, get up at 4am, and work towards whatever he wanted to do.
I remember in 2014, early on, as we entered our final year of engineering, he set a target to become a [Quora top writer](https://www.quora.com/profile/Raj-Arjit) by December. He did it by September. In 2nd year, he had a bad internship experience. He decided to do something about it so that no Indian student would ever have to face what he did. He started curating internship experiences on [internfeel](http://internfeel.com). He included me in his endeavour and for some time I edited and posted internship experiences on the wordpress website.
[He made Udemy courses](http://udemy.com/user/arjit/) before it was cool and earned from them. He quit his first job out of college in about a couple of years and just started [making](https://www.instagram.com/spacetechie/) [stuff](https://www.kickstarter.com/projects/rajarjit/the-rocket-deck?ref=5bugvs). Arjit always pampered me. I knew I didn't deserve his praise, but he still kept at it. I don't know what he initially saw in me. But he eventually decided to stop.
We had a falling out when I promised to remake internfeel.com from scratch but never did. He was the first person to call me out on my tendency to over-commit and not deliver at a consistent rate. After that I never promised anybody anything I couldn't do for them. In fact, I almost stopped committing. I haven't talked with him in years. Now that I've quit my job and have actively started a thought process to figure out ways of making a good living without being employed, only Arjit comes to mind.
## Scratch To Quit Job
I'm not special for having urges to quit my job. I'm not even one of those people who have it hard with their bosses. Yet, I've constantly felt the urge to quit. In the last 2.4 years, it has taken a staggering amount of commitment not to quit. I'm glad I held on, and I learnt a lot along the way. Most of all, I learnt a very important skill in life – how to keep at something you do not particularly enjoy doing, with a sense of responsibility, ownership and gratefulness.
Though a lower salary has been a reason to constantly mull over whether all the work I was putting in was worth it, something bigger drove the urges to quit, more than getting paid less. To feel free is the simplest way I can put it. If I can find ways to monetize the things I love to do, what better than that? That's what I started to think about a couple of months ago. Even if there'd be an initial struggle, wouldn't the long term benefits come more quickly?
Ever since December last year, I've established one thing – whatever I do, if I don't find ways to earn an income on my own by the side, I'm going to keep having constant urges to quit. So this year I decided to start.
## Taking A Serious Look At Writing
I love to write. I write [short stories](/posts/2021/01/14/changing-narration-voices-and-tenses/), [poetry](/writing), [essays](/tags) and technical articles about code gymnastics. But I never wrote consistently. So I made two goals for myself. I want to get paid writing two kinds of articles.
1. Writing for news / literary magazines.
2. Writing for tech / coding magazines.
I want to write about stories and books that I read and hear. This way I'll get to read and listen more, and possibly get paid. Sounds really cool, doesn't it? I'm going to make it possible, and document everything – failures and successes, as I do so.
I want to write about the code that I read, because I want to read more code. So basically I'm just playing deals with myself.
I recently pitched an article to [Smashing Magazine](https://www.smashingmagazine.com) and got accepted. They pay for each article you write. Getting paid to do something of my own is the best incentive there has ever been for me.
Meanwhile, I'm working hard to bring to life an idea I've had for quite a while. That'll be the first ever end-to-end product I'd have worked on :) In all honesty, apart from all the reasons for trying to make this product — having a sustainable passive income while I sleep being one of them — I'm doing this to show it to Arjit, so that we could collaborate like we used to, but with me putting in as much effort, life and vigour as he does.
This year I only have one resolution — establish this cycle:
make something self sustaining ➡️ earn through it ➡️ invest it back in learning ➡️ make something else with that learning ➡️ ∞
| arihantverma |
602,445 | My javascript / tech / web development newsletter for 13-02-2021 is out! | A roundup of all the best javascript, tech and web development links posted to my linkblog in the past week | 0 | 2021-02-13T13:37:50 | https://dev.to/mjgs/my-javascript-tech-web-development-newsletter-for-13-02-2021-is-out-130i | javascript, tech, webdev, discuss | ---
title: My javascript / tech / web development newsletter for 13-02-2021 is out!
published: true
description: A roundup of all the best javascript, tech and web development links posted to my linkblog in the past week
tags: javascript, tech, webdev, discuss
---
In this week’s edition:
Governments, DevOps CI/CD, GameStop, SSH Tunnels, Jobs’ keynotes, OSS, audioOnUnix, Jack Dorsey, blogging, NFTs, CAPEX, Apple, Disney, Jamstack, websockets, k8s, Kafka, Lambda, Tesla, cool podcasts...
https://markjgsmith.substack.com/p/mark-smiths-newsletter-13-02-2021
Would love to hear any comments and feedback you have.
[@markjgsmith](https://twitter.com/markjgsmith) | mjgs |
602,491 | Top 12 libraries for NextJS, React apps and React Native apps for i18n and react localization | Best i18n libraries for React web, React Native, Expo and all other React apps. Check how to react localization should look like. | 0 | 2021-02-13T16:17:09 | https://simplelocalize.io/blog/posts/the-most-popular-react-localization-libraries/ | javascript, react, typescript | ---
title: Top 12 libraries for NextJS, React apps and React Native apps for i18n and react localization
published: true
description: "Best i18n libraries for React web, React Native, Expo and all other React apps. Check how to react localization should look like."
tags: [javascript, react, typescript]
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/7zgnz6iv5ds8bh60pxiu.png
canonical_url: https://simplelocalize.io/blog/posts/the-most-popular-react-localization-libraries/
---
## Update 2021-02-22
Hey! If you want to read the most recent version of this post, then checkout the [original post on my blog](https://simplelocalize.io/blog/posts/the-most-popular-react-localization-libraries/). I update it regularly! 🌱
## Star [awesome-i18n](https://github.com/jpomykala/awesome-i18n) repository
A full [list of localization libraries and tools on Github](https://github.com/jpomykala/awesome-i18n) can be found on GitHub. Feel free to add your resources there. 🤙
## ⚛️ Libraries list for React localization
Check out my list of the best React libraries which I gathered. I focus mainly on ReactJS, React Native and Expo libraries.
### 1. [react-native-localize](https://github.com/zoontek/react-native-localize)
A toolbox for your React Native app localization

### 2. [FormatJS](https://formatjs.io)
Internationalize your web apps with react-intl library. Check also FormatJS CLI for message extraction below!

### 3. [FormatJS CLI](https://formatjs.io/docs/tooling/cli/)
Extract messages from project with FormatJS library

### 4. [react-i18nify](https://github.com/sealninja/react-i18nify)
Simple i18n translation and localization components and helpers for React

### 5. [react-persian](https://github.com/evandhq/react-persian)
react-persian is a set of react components for Persian localization

### 7. [react-i18next](https://react.i18next.com/)
Internationalization framework for React / React Native which is based on i18next. The i18next-community created integrations for frontend-frameworks such as React, AngularJS, Vue.js and many more.

### 8. [bloodyowl/react-translate](https://github.com/bloodyowl/react-translate)
Internationalization for react

### 9. [next-translate](https://github.com/vinissimus/next-translate)
Easy i18n for Next.js +10

### 10. [react-translated](https://github.com/amsul/react-translated)
A dead simple way to add complex translations in a React project

### 11. [React-intl hooks](https://github.com/CreateThrive/react-intl-hooks)
React-intl-hooks is a small and fast library that you can use to replace Format.js components.

### 12. [SimpleLocalize CLI](https://simplelocalize.io/docs/api/get-started/)
SimpleLocalize CLI allows you to upload and manage translation keys in the cloud for free!

#### 1. Upload translations in JSON files

#### 2. Manage them in translation editor

#### 3. Download ready to use JSON files

👋 Thanks for reading! [Follow me on Twitter](https://twitter.com/jakub_pomykala)
| jpomykala |
602,583 | Vuetify and MDB | Both Vuetify and MDB implement the Material Design Design Language. Whereas Vuetify implements Materi... | 0 | 2021-02-24T12:38:57 | https://drmsite.blogspot.com/2021/02/vuetify-and-mdb.html | css, html, javascript, thoughts | ---
title: Vuetify and MDB
published: true
date: 2021-02-13 14:18:00 UTC
tags: CSS,HTML,JavaScript,Thoughts
canonical_url: https://drmsite.blogspot.com/2021/02/vuetify-and-mdb.html
---
Both [Vuetify](https://vuetifyjs.com/en/) and [MDB](https://mdbootstrap.com) implement the Material Design Design Language. Whereas Vuetify implements Material Design using the Vue JavaScript framework, MDB implements it atop the Bootstrap CSS framework. We're looking at a few specific terms there, aren't we? We have JavaScript and CSS frameworks and Design Language - but what do we mean by those terms?
I covered the primary three JavaScript frameworks in my [previous book](https://shop.bcs.org/store/221/detail/workgroup?id=3-221-9781780174761), and there is no shortage of articles comparing and contrasting Angular, React and Vue (the three most popular JavaScript frameworks at the time of writing). A JavaScript framework provides a developer with a blueprint, and often concrete artefacts, to use when building an application. Rather than coding everything from the ground up - perhaps utilising functions from a JavaScript library - a JavaScript framework offers structure (the degree and rigidity of this structure depend upon how opinionated the framework is) that will then be decorated with the application's business logic.
JavaScript frameworks are a vast area of contention so I should note that I use VueJS the most because it's the one I use professionally. I prefer it because of familiarity and because it has the lowest adoption barrier - seemingly being the closest to VanillaJS. Again, their primary benefit is that they allow developers to hit the ground running in terms of how an application will be structured, especially those frameworks that are more opinionated. We've used the word "opinionated" a few times now - it is a term which is closely related to the friction developers find when developing applications. If the developer follows the framework designers' accepted application design, they feel less friction while developing the application. Should they attempt something outside those guidelines, and the framework be very opinionated, then they can find the going harder; they will feel more friction. The constraints of an opinionated framework can be comforting; depending upon the confidence of the developer. JavaScript frameworks do involve some measure of learning as well as obedience to the design dictates of their designers; the less opinionated the framework, the less deference is required to others' decisions, and the developer needs to have more confidence in their abilities. That is not to suggest that skilled developers don't use frameworks - just that they might have chosen their framework because it conforms with their preferred approach.
Similarly, CSS frameworks make the developer's life convenient, and they do that by removing an awful lot of the fear, uncertainty and dread at the start of a project. Initiating a project can be terrifying and might be analogous to an artist creating new work, being confronted with - and subsequently terrified of - the blank canvas. If you've been provided with a ready-made tranche of CSS, then a significant number of visual design and development decisions have already been made for you. While comforting, it is like being swaddled in a vast blanket and can be somewhat constricting. I can't have been alone in noticing that many sites started to look like facsimiles of the Bootstrap site just after the framework's release.
This tendency to homogenisation is, to a degree, ameliorated by Bootstrap theming. MDB takes this theming to the next level by adding custom elements and components not ordinarily available within Bootstrap, excluding multiple external libraries.
Criticism of both CSS and JavaScript frameworks is rife and, to an extent, understandable. The constraints they provide, while offering countless boons to the developer, might explain their proliferation. Developers can be opinionated - how else might you interpret the continuing arguments over the relative benefits of [Vim](https://www.vim.org) over [Emacs](https://www.gnu.org/software/emacs/). Personally, if I have to log into a Linux server, I generally use [Nano](https://www.nano-editor.org), much to the disdain of friends who have spent the time to learn the minutiae of Vim or Vi. The criticism means that developers sometimes seek to break out of those confines while, in turn, constraining others within similar bonds of conformity by developing competing CSS or JavaScript frameworks. One has to appreciate such dedication. Developing a framework is a challenging and often thankless task - especially as, once birthed, they are likely to be exposed to a vast ecosystem of other frameworks, all competing for developer adoption.
Developers will be exposed to multiple CSS and JavaScript frameworks throughout their career - either by choice or imposed by their employer. As such, a new developer is left in a quandary about what to learn. The web fundamentals (HTML, CSS and JS) can sometimes be left behind in the scramble to learn the latest framework with the most-posted jobs - concentrating on learning the fundamentals provides a foundation where the developer can learn to appreciate the relative merits of a CSS or JavaScript framework. I once was asked to implement a CSS framework, which I will not name, which was very concrete and opinionated in its naming convention for the classes used. Those classes were the be-all and end-all of the framework and wholly dictated the appearance of elements in the UI. The developer could add class attributes to HTML elements to make them act in ways contrary to how they should by default, which left me feeling very uncomfortable.
That's not to say that developers should ignore innovations within the field, but discernment is required before jumping on to the latest bandwagon. Indeed, the new developer's primary focus should be the fundamentals of HTML, CSS and JS - in that order. Your first professional role, or independent study, will likely provide more than enough exposure to CSS and JavaScript frameworks.
Now that we're more aware of the frameworks, let's look at Design Languages. [Nate Baldwin](https://medium.com/thinking-design/what-is-a-design-language-really-cd1ef87be793) suggests that if you have spent any time developing anything on the internet, you have either already created a Design Language of your own or implemented someone else's. That includes creating an eBay listing or implementing a custom frame on your Facebook Profile. Baldwin's article goes into Design Languages' details. He notes that they are made up of many disparate elements in the same way that our written or spoken languages are. He also points out that, despite their ubiquity, the visual interfaces that Design Languages influence are remarkably complex mechanisms to glean and impart information. As such, we need to be conscious of their impact on our users and aware of their importance.
Being made up of many various elements, a Design Language is a tricky beast and worth studying. Even the name itself can be problematic, with designers calling them Design Languages or Design Systems - some front-end frameworks are even worthy of the name Design Language and [cod;tas](https://coditas.com) host a [curated list of them](https://design-languages.com). I've used three of the twenty-nine Design Languages listed during my career to date (at the time of writing). Still, of those three, I keep returning to Material Design - though I'm becoming more and more enamoured of IBM's Living Language. I should also note that I've also developed within the constraints of private, corporate, Design Languages for clients, some of which went so far as to have distinct and restricted corporate typefaces.
As an aside, I should also note that when I worked primarily on local government contracts, the predominant thematic colour was purple. After all, Purple was historically restricted to royalty and the elite due to the dye's original exorbitant costs. Thankfully I was no longer working in such a milieu when the UK Independence Party co-opted that particular colour. One can only imagine that the only solution to theming such sites today is to use the whole rainbow of colours represented on [Wikipedia's list of United Kingdom political party meta attributes](https://en.wikipedia.org/wiki/Wikipedia:Index_of_United_Kingdom_political_parties_meta_attributes).
As a further aside, I spent some time thinking about political parties' colours in the UK. I wrote a [blog post](https://dev.to/mouseannoying/polital-spectrum-3a8-temp-slug-7488688) with, not my thoughts per se, but my findings, from the Wikipedia article linked above and display the political parties, sorted using their hue, saturation and lightness (HSL) values.
Working within a Design Language's strictures is similar, but not the same as working within a CSS or JavaScript framework's bindings. It is related in that working with a Design Language means that the overarching application has a consistent look (in the same way as working with a CSS framework) and feel (in the same way as working within a JavaScript framework). Further, introducing elements from other Design Languages is likely to present your users' to some discordance and lead to confusion in the same way as CSS and JavaScript frameworks don't often play well together.
That is not to say we should not seek to challenge our users by introducing innovation. But those challenges should be sprinkled sparingly through the application, rather than at every turn of the user. Using a Design Language, we help our users feel confident that their interaction will lead to expected results by allowing them to feel confident in the application's consistency. Further, a documented Design Language will allow other team members - should you enjoy work with others - feel as though they know how to progress with preliminary development before the input of a dedicated front-end developer. John Rhea discusses the introduction of dissonance to an application in his book [Beginner Usability: A Novice's Guide to Zombie Proofing Your Website](https://www.sitepoint.com/premium/books/beginner-usability). He notes that users will be familiar with interacting with websites in specific ways, though interacting with previous websites. He also notes that to introduce dissonance, one must already be conscious of the rules implied by a Design Language.
But what is a Design Language? In the case of Material Design, Material is the metaphor which inspires the Design Language. Real-world objects act as the inspiration for user interface elements. Content is organised using cards, lists and sheets, navigations occurs when users interact with navigation draws and tabs; actions are initiated using buttons. Nearly all elements have a subtle rounding because, in nature, right angles are rare. I'm conscious that answering what a Design Language is is difficult to define; it is an aesthetic feeling towards an application and is made up of [visual and conceptual standards](https://www.uxpin.com/studio/blog/design-language/). UXPin, the above quote's originator, says that a Design Language collects and standardises user interface components and patterns, a style guide, and some semantics documentation. Both UXPin and [Gleb Kuznetsov](https://www.smashingmagazine.com/2020/03/visual-design-language-building-blocks/) note that a Design Language must relate to the brand's corporate identity. Should you be tasked with developing an application for a brand, you must examine their other assets - physical or internet-based. This examination will furnish you with a feeling about how your application should look, even if it's only related to any logos to be used or colour-schemes to implement.
We started this by examining what we mean by CSS frameworks, JavaScript frameworks and Design Languages; we'll now look at the relationship between MDB and Vuetify and Material Design. Both MDB and Vuetify implement the Material Design Language, using significantly different techniques. Up until the release of version 5, MDB also required the inclusion of the jQuery JavaScript library. The Bootstrap CSS framework itself required jQuery before version 5; now it only needs the [Popper](https://popper.js.org) JavaScript library to enable proper positioning of tooltips and popover elements. MDB now has its own, dedicated, JavaScript library and no longer requires jQuery.
MDB adds Material Design concepts to the Bootstrap CSS framework along with a significant number of discrete, JavaScript-powered, user interface elements. Vuetify does pretty much the same but adds Material Design principles to the Vue JavaScript framework. Bootstrap and MDB's reliance on JavaScript means that both approaches aren't all that different, especially when considering the initial reliance Bootstrap had on jQuery. The primary differences are how Vuetify forces the developer to write the application. MDB decorates the HTML, whereas Vuetify replaces common HTML elements with its components. If building with HTML is analogous to building with Lego - which it can sometimes seem to be - then creating your first application with Vuetify is similar to building with a completely different construction toy such as Meccano.
Perhaps I might be accused of being an HTML purist, but using a `v-container` element is little different to using a `div` and adding a class of `container` - it does seem to be a case of replacing HTML elements for the sake of it. Vuetify does have a sensible naming convention, so that you can mostly guess what is required next while building your application. A `v-container` likely needs to have a `v-row` within it, and that `v-row` is crying out for at least one `v-col`. You know, seeing as Bootstrap and Vuetify both share a twelve-point grid system, that the `v-col` will have a `cols` attribute with a number between 1 and 12 as its value. But why bother with separate elements when adding a hierarchy of div elements with the classes of `container`, `row` and `col-*` will work just as well? It all smacks of overkill and using custom elements for the sake of it.
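To make the comparison concrete, here is a sketch of the two grid markups side by side (the Vuetify component names and Bootstrap classes are as described above; the column widths are arbitrary):

```html
<!-- Vuetify: dedicated components for the grid -->
<v-container>
  <v-row>
    <v-col cols="6">Half width</v-col>
    <v-col cols="6">Half width</v-col>
  </v-row>
</v-container>

<!-- Bootstrap / MDB: plain divs decorated with classes -->
<div class="container">
  <div class="row">
    <div class="col-6">Half width</div>
    <div class="col-6">Half width</div>
  </div>
</div>
```

Both produce the same twelve-point layout; the difference is purely whether the grid lives in custom elements or in class attributes.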
I guess that there's not a great deal to differentiate the two approaches to implementing Material Design. Vuetify has the closest affinity to those developers already used to working with Vue. In contrast, MDB is likely to feel the most natural for those developers used to traditional application development using so-called monolithic application structures, which don't take advantage of Single Page Application (SPA) architecture. I am conscious that I don't use SPAs professionally. To a greater or lesser degree, I'm not upset about not working with a SPA though, and I feel an affinity with [Chris Ferdinandi](https://gomakethings.com/about/) when he notes that:
["Browsers are an amazing piece of technology. They give you so much for free, just baked right in."](https://gomakethings.com/an-alternative-to-single-page-apps-multi-page-apps-with-service-workers/)
["Single page apps break all that, and force you to recreate it with JavaScript."](https://gomakethings.com/an-alternative-to-single-page-apps-multi-page-apps-with-service-workers/)
Along with Vuetify, another prominent Vue Component Library implements the Material Design Language, [Vue Material](https://vuematerial.io). Vue Material is closely related to, and partners with [Creative Tim](https://www.creative-tim.com/) who develop [Vue Material Kit](https://www.creative-tim.com/product/vue-material-kit). This situation mirrors MDB, as they also offer a paid-for product with more components.
Whatever your chosen CSS or JS framework or Design Language, the important thing is not to confuse your users too much, all while keeping your clients happy. | mouseannoying |
602,701 | How to verify school email addresses in Node.js | In this post, we look at how school email addresses can be verified easily and quickly in Node.js. T... | 0 | 2021-02-13T17:50:14 | https://www.marvinschopf.com/2021/02/how-to-verify-school-email-addresses-in-node-js/ | node, javascript, howto, tutorial | **In this post, we look at how school email addresses can be verified easily and quickly in Node.js.**
This is especially useful when a service wants to give certain perks or benefits to students or teachers. Often this is done using paid enterprise service providers, but in the vast majority of cases, verification can also be done quickly and free of charge using the user's email address.
Unfortunately, one disadvantage of most modules for checking school emails is that they only check if the domain ends in ".edu", which eliminates all international educational institutions as they cannot use an ".edu" domain.
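To make the shortcoming concrete, here's a minimal sketch of such a naive suffix check (the `isEduOnly` helper is hypothetical, purely for illustration):

```javascript
// Naive check: accepts only ".edu" domains
function isEduOnly(email) {
  const domain = email.split("@").pop().toLowerCase();
  return domain.endsWith(".edu");
}

console.log(isEduOnly("student@stanford.edu")); // true
console.log(isEduOnly("student@uni-heidelberg.de")); // false, despite being a university domain
```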
The module used in this article is based on the Jetbrains dataset, which contains thousands of international educational institutions and is constantly growing.
_**Disclaimer:** I am the developer of the module mainly used in this post._
## Requirements
The only requirement for verifying a user's student status is a confirmed email address (or, more precisely, the domain of that email address).
## Installation
The installation of the required modules in an already initialised and set up Node.js project can easily be done with `npm`:
```bash
npm install swot-node
```
Or using `yarn`:
```bash
yarn add swot-node
```
## Usage
First we import the installed library:
```javascript
const swot = require("swot-node");
```
After that, the use is very simple. Any URL containing a domain can be entered as input. This does not necessarily have to be an email address, but it makes the most sense when verifying students, for example.
The use is asynchronous via Promises or `async` / `await`:
```javascript
swot.isAcademic("example@stanford.edu").then((response) => {
if (response) {
// The email belongs to an educational institution!
console.log("The email belongs to an educational institution!");
} else {
// The email does not belong to an educational institution!
console.log("The email does not belong to an educational institution!");
}
});
```
It is also possible to get the name(s) of the educational institution:
```javascript
swot.getSchoolNames("example@stanford.edu").then((response) => {
if (response === false) {
// URL does not belong to an academic institution
console.log("URL does not belong to an academic institution");
} else if (response === true) {
// URL ends on a TLD reserved for academic institutions, but has no entry of its own in the database
console.log(
"URL ends on a TLD reserved for academic institutions, but has no entry of its own in the database"
);
} else {
// Domain has an entry and there are also names in the database
console.log(response);
// => [ 'Stanford University' ]
}
});
```
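The promise chains above can equally be written with `async`/`await`. Here's the general shape, with a stand-in async function in place of the library so the sketch runs on its own (the tiny domain list is purely illustrative; the real module consults the full dataset):

```javascript
// Stand-in for swot.isAcademic, illustrative only
async function isAcademic(email) {
  const academicDomains = ["stanford.edu", "mit.edu"];
  const domain = email.split("@").pop().toLowerCase();
  return academicDomains.includes(domain);
}

async function main() {
  if (await isAcademic("example@stanford.edu")) {
    console.log("The email belongs to an educational institution!");
  } else {
    console.log("The email does not belong to an educational institution!");
  }
}

main();
```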
The exact possible return values of the functions can be found in the [documentation](https://swot.js.org) of the library.
## Full example
```javascript
const swot = require("swot-node");
// Just check if email belongs to an academic institution
swot.isAcademic("example@stanford.edu").then((response) => {
if (response) {
// The email belongs to an educational institution!
console.log("The email belongs to an educational institution!");
} else {
// The email does not belong to an educational institution!
console.log("The email does not belong to an educational institution!");
}
});
// Check if email belongs to an academic institution and get name(s) of institution
swot.getSchoolNames("example@stanford.edu").then((response) => {
if (response === false) {
// URL does not belong to an academic institution
console.log("URL does not belong to an academic institution");
} else if (response === true) {
// URL ends on a TLD reserved for academic institutions, but has no entry of its own in the database
console.log(
"URL ends on a TLD reserved for academic institutions, but has no entry of its own in the database"
);
} else {
// Domain has an entry and there are also names in the database
console.log(response);
// => [ 'Stanford University' ]
}
});
```
## Conclusion
To check in Node.js whether an email address belongs to a student, it is not necessary to use a paid commercial service.
Instead, you can simply use free open source software, which is maintained by the community and thus benefits from a much larger and higher-quality data set.
More about the library `swot-node` can be found in the [documentation](https://swot.js.org). | marvinschopf |
602,826 | Create self signed certificates for Kubernetes using cert-manager | Install Cert manager in Kubernetes Read this for up-to-date instructions: https://cert-man... | 0 | 2021-02-13T21:37:22 | https://dev.to/amritanshupandey/create-self-signed-certificates-for-kubernetes-using-cert-manager-403n | kubernetes | ## Install Cert manager in Kubernetes
Read this for up-to-date instructions: https://cert-manager.io/docs/installation/kubernetes/
```bash
# Kubernetes 1.16+
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.2/cert-manager.yaml
```
## Create a keypair secret
In this step, we create a new k8s secret that contains the TLS CA certificate and key used by cert-manager to issue new certificates. As a prerequisite, we need a CA certificate and its associated key, base64-encoded.
```yaml
apiVersion: v1
kind: Secret
metadata:
name: ca-key-pair
namespace: default
data:
tls.crt: <tls-cert-base64-encoded>
tls.key: <tls-key-base64-encoded>
```
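If you don't already have a CA, one way to produce the key pair and its base64-encoded values is with `openssl` (the file names and subject here are just examples):

```shell
# Generate a self-signed CA certificate and key (example subject)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=k8s.xps.lan"

# Base64-encode them for the Secret manifest
base64 -w0 ca.crt > ca.crt.b64   # value for tls.crt
base64 -w0 ca.key > ca.key.b64   # value for tls.key
```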
## Create an issuer
Issuers are used by Cert manager to issue new certificates
```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: ca-issuer
namespace: default
spec:
ca:
secretName: ca-key-pair
```
## Create certificates
This creates a new certificate using the issuer and the CA key pair created earlier. In the following example, the certificate is stored as the k8s secret `k8s-xps-lan` in the default namespace.
```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: k8s-xps-lan
namespace: default
spec:
secretName: k8s-xps-lan
issuerRef:
name: ca-issuer
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
commonName: k8s.xps.lan
organization:
- XPS.LAN
dnsNames:
- gitlab.xps.lan
- minio.xps.lan
- registry.xps.lan
- k8s.xps.lan
- kibana.xps.lan
- elastic.xps.lan
```
In a separate post, we will see how this certificate can be used by ingress-nginx and other applications. | amritanshupandey |
602,896 | Animated Sonar Screen - CSS Only | Old ship styled sonar screen built with CSS only. Animated rotary effect which hides and reveals ship... | 0 | 2021-02-13T22:26:55 | https://dev.to/nikolab/animated-sonar-screen-css-only-3p9f | codepen, css, animation, frontend | <p>Old ship styled sonar screen built with CSS only. Animated rotary effect which hides and reveals ships is built with conic-gradient() function so it only works in Chrome/Firefox. </p>
{% codepen https://codepen.io/nikolab/pen/poNEgKW %}
<p>Feel free to comment, fork, upgrade and use for any purpose.</p>
If you liked this article, consider buying me a <a href="https://www.buymeacoffee.com/nikolab">coffee</a>. | nikolab |
603,102 | What are CSS Variables? | CSS Variables is a big win for Front-end Developers. It brings the power of variable to CSS, which re... | 0 | 2021-02-14T04:27:38 | https://dev.to/rahxuls/what-are-css-variables-1p1h | webdev, css, programming, codenewbie | CSS Variables is a big win for Front-end Developers. It brings the power of variable to CSS, which results in less repetition, better readability and more flexibility.
---
In this example below, it's much better to create a variable for the `#ff6f69` colour than to repeat it.
```css
/*THE OLD WAY */
#title {
color: #ff6f69;
}
.quote {
color: #ff6f69;
}
```
THE NEW WAY
```css
:root {
--red: #ff6f69; /* <--- Declaring Variable */
}
/*USING THE VARIABLE*/
#title {
color: var(--red);
}
.quote{
color: var(--red);
}
```
More flexibility in case you want to change colour.
---
### Benefits
- Reduces repetition in your stylesheet
- They don't require any transpilling to work
- They live in the DOM, which opens up a ton of benefits
---
### Declaring a Local Variable
You can also create local variables, which are accessible only to the element they're declared on and to its children. This makes sense to do when you know a variable will only be used in a specific part of your app.
```css
.alert {
--alert-color: #ff6f69; /*This variable can now be used by its children*/
}
.alert p {
color: var(--alert-color);
border: 1px solid var(--alert-color);
}
```
If you tried to use the `--alert-color` variable somewhere else in your application, it simply wouldn't work. The browser would just ignore that line of CSS.
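A related detail worth knowing: `var()` accepts a fallback as its second argument, which is used whenever the variable isn't defined in scope. This makes local variables safer to consume:

```css
.alert p {
  /* Falls back to #333 if --alert-color isn't set on an ancestor */
  color: var(--alert-color, #333);
}
```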
---
### Easier responsiveness with Variables
You can, for example, change the variables based upon the width of the screen:
```css
:root {
--main-font-size: 16px;
}
@media all and (max-width: 600px) {
:root {
--main-font-size: 12px;
}
}
```
And with those few lines of code, you have updated the main font-size across your entire app when viewed on small screens.
---
### How to access variables with JavaScript
Grabbing a CSS variable in JavaScript takes three lines of code.
```javascript
var root = document.querySelector(':root');
var rootStyles = getComputedStyle(root);
var mainColor = rootStyles.getPropertyValue('--main-color');

console.log(mainColor);
// -> '#ff6f69'
```
To update CSS variables:
```javascript
root.style.setProperty('--main-color','#88d8b0')
```
---
> Currently, 77% of global website traffic supports CSS Variables.
⌚Thanks For Reading | Happy Coding ☕
Get weekly newsletter of amazing articles I posted this week and some offers or announcement. Subscribe from <a href="https://mailchi.mp/9f73b65b9c38/rahulism" target="_blank">Here</a>
<a href="https://www.buymeacoffee.com/rahxuls" target="_blank"> <img src="https://res.cloudinary.com/rahulism1/image/upload/v1608182430/bmc_nbxakd.png"></a>
| rahxuls |
603,404 | useHug: Creating custom React Hooks 🥰 | Learn how to pull behaviors out of components and into custom hooks with React | 0 | 2021-02-14T15:16:01 | https://dev.to/headwayio/usehug-creating-custom-react-hooks-1edc | react, webdev, javascript | ---
title: useHug: Creating custom React Hooks 🥰
published: true
description: Learn how to pull behaviors out of components and into custom hooks with React
tags: react, webdev, javascript
//cover_image: https://direct_url_to_image.jpg
---
Building custom hooks is a great way to encapsulate behaviors and reuse them throughout your application. To demonstrate this, we're going to build out the idea of "hugging" elements of our UI. Our huggable behavior will:
- Change the mouse cursor on hover (we want our user to know what needs a hug).
- Scale the element down on click (this is a firm hug, some squishiness is expected).
- Change the mouse cursor while clicking (to show our appreciation).
I find the first step to making something reusable is to use it once, so let's implement this in a component:
```jsx
import React, { useState } from "react";
import { animated, useSpring } from "react-spring";
const Huggable = () => {
const [hovering, setHovering] = useState(false);
const [pressed, setPressed] = useState(false);
const animationProps = useSpring({
transform: `scale(${pressed ? 0.8 : 1})`
});
const onMouseEnter = () => setHovering(true);
const onMouseLeave = () => {
setHovering(false);
setPressed(false);
};
const onMouseDown = () => setPressed(true);
const onMouseUp = () => setPressed(false);
let className = "huggable";
if (pressed) {
className += " hugging-cursor";
} else if (hovering) {
className += " huggable-cursor";
}
return (
<animated.div
className={className}
onMouseEnter={onMouseEnter}
onMouseLeave={onMouseLeave}
onMouseDown={onMouseDown}
onMouseUp={onMouseUp}
style={animationProps}
role="button"
>
Hug me!
</animated.div>
);
};
export default Huggable;
```
There are a few things going on here so we'll take a closer look:
```jsx
const [hovering, setHovering] = useState(false);
const [pressed, setPressed] = useState(false);
```
There are two states we want to track here: whether the user is hovering and whether they have pressed the button.
```jsx
const animationProps = useSpring({
transform: `scale(${pressed ? 0.8 : 1})`
});
```
We take advantage of react-spring's `useSpring` hook to create an animation. We could also use CSS transitions here, but react-spring does a lot of math for us to give us really good looking animations without much work.
```jsx
const onMouseEnter = () => setHovering(true);
const onMouseLeave = () => {
setHovering(false);
setPressed(false);
};
const onMouseDown = () => setPressed(true);
const onMouseUp = () => setPressed(false);
```
These event handlers will be used to manage our hovering / pressed state, which in turn will drive our behavior.
```jsx
let className = "huggable";
if (pressed) {
className += " hugging-cursor";
} else if (hovering) {
className += " huggable-cursor";
}
```
We set a `className` here dynamically based on our pressed / hovering state. This is used to control some basic styles as well as custom cursors when hovering. This might have been a little easier had I used JSS or styled components, but this served my needs just fine and will hopefully make sense to a wider audience.
```jsx
return (
<animated.div
className={className}
onMouseEnter={onMouseEnter}
onMouseLeave={onMouseLeave}
onMouseDown={onMouseDown}
onMouseUp={onMouseUp}
style={animationProps}
role="button"
>
Hug me!
</animated.div>
);
```
Finally, our markup. Not much to see here as we're just passing down the props we defined above, but it's worth pointing out the `animated` tag, which is required by react-spring.
Here's what we've got so far:

Not bad! Now let's try and isolate what we want to encapsulate in a hook. We know this should be applicable to any element, so we won't want to use any of the markup. That leaves the state management, event handlers, the animation, and our classes:
```jsx
const [hovering, setHovering] = useState(false);
const [pressed, setPressed] = useState(false);
const animationProps = useSpring({
transform: `scale(${pressed ? 0.8 : 1})`
});
const onMouseEnter = () => setHovering(true);
const onMouseLeave = () => {
setHovering(false);
setPressed(false);
};
const onMouseDown = () => setPressed(true);
const onMouseUp = () => setPressed(false);
let className = "huggable";
if (pressed) {
className += " hugging-cursor";
} else if (hovering) {
className += " huggable-cursor";
}
```
If we copy that into its own function, it looks something like this:
```jsx
const useHug = () => {
const [hovering, setHovering] = useState(false);
const [pressed, setPressed] = useState(false);
const style = useSpring({
transform: `scale(${pressed ? 0.8 : 1})`
});
const onMouseEnter = () => setHovering(true);
const onMouseLeave = () => {
setHovering(false);
setPressed(false);
};
const onMouseDown = () => setPressed(true);
const onMouseUp = () => setPressed(false);
let className = "";
if (pressed) {
className += "hugging-cursor";
} else if (hovering) {
className += "huggable-cursor";
}
//TODO: return...?
};
```
All that's left now is what we want to return. This is an important decision as it defines what consuming components can do with our hook. In this case, I really want a consumer to be able to import the hook as one object and spread it over an html element, like so:
```jsx
const huggableProps = useHug();
return <a href="/contact" {...huggableProps}>Contact Us</a>
```
This makes our hook easy to consume and use while keeping some flexibility in case an element wants to pick and choose what events to use. In order to do that we have to leave off our state variables, since they aren't valid properties for html elements. This is what our return statement winds up looking like:
```jsx
return {
onMouseDown,
onMouseEnter,
onMouseLeave,
onMouseUp,
className,
style
};
```
Now that we've got our hook, the only thing left to do is to use it:
```jsx
export default function App() {
const { className, ...hugProps } = useHug();
const buttonHugProps = useHug();
return (
<div className="App">
<animated.section className={`huggable ${className}`} {...hugProps}>
I like hugs!
</animated.section>
<br />
<br />
<animated.button {...buttonHugProps} type="button">
buttons need hugs too
</animated.button>
</div>
);
}
```
In the above example we've implemented our `useHug` hook in two ways, by taking all of the props and spreading them out over an element, and another by separating out the `className` prop and using that to compose a css class with our consuming element's existing className. We also make use of the `animated` tag to ensure our app animates correctly with react-spring.
Although this example may seem kind of silly, a lot of the process for extracting logic into a custom hook would remain the same, no matter what you're building. As you identify patterns in your code it's a good practice to look for ways you can abstract application logic or behavior in the same way you would abstract a common UI element like a modal or input. This approach can help set you up for success as your application grows over time and prevent future developers (or future you) from reinventing the wheel on something you've already implemented a few times.
If you'd like to see the full code, [here it is on codesandbox](https://codesandbox.io/s/huggable-qt088?file=/src/App.js). Feel free to fork it and play around, I'd love to see what you come up with! | chrisheld927 |
603,457 | linux command tips | liste of linux commande i user thid week | 0 | 2021-12-10T17:38:32 | https://dev.to/fredlag/linux-command-tips-4l36 | ---
title: linux command tips
published: true
description: liste of linux commande i user thid week
tags:
//cover_image: https://direct_url_to_image.jpg
---
Hi all, this is my second post on dev. It's not an original post, but here I'll describe the Linux commands I used last week.
## CTRL+r command
This command helps me a lot. With it, I can search through my past commands. Very useful when you run the same commands again and again.
## !! command
I use this command to replay the last command. Example: `nano /var/log/syslog` fails because it wasn't run with sudo.
Just type `sudo !!` and the command opens the syslog file directly.
## echo $? command
With `echo $?` I can see the exit code returned by the last program.
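A quick way to see it in action:

```shell
# Run a command that fails, then inspect its exit code
ls /path-that-does-not-exist 2>/dev/null
status=$?
echo "exit code: $status"   # non-zero because ls failed
```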
## CPU burn
If you want to test a program under full CPU load, this keeps one core busy: `dd if=/dev/zero of=/dev/null`
To load several cores at once and stop them all afterwards: `fullcpu() { dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null & }; fullcpu; read; killall dd`
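If you just want a short, self-terminating load burst instead of an open-ended burn, you can bound `dd` with a `count`:

```shell
# Copy a fixed amount of zeroes so the command finishes on its own
dd if=/dev/zero of=/dev/null bs=1M count=256 2>/dev/null && finished=yes
echo "$finished"
```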
## Number of open ssh connections
`sudo netstat -tulpn | grep LISTEN` lists listening sockets; to count established SSH sessions specifically, try `sudo netstat -tnpa | grep 'ESTABLISHED.*sshd' | wc -l`. | fredlag |
603,696 | How to build a Quiz Game in Python | Hello everyone, today we are going to create a fun Quiz Game in python. How does it work?... | 11,147 | 2021-02-14T18:34:32 | https://dev.to/mindninjax/how-to-build-a-quiz-game-in-python-10ik | python, tutorial, beginners, codenewbie | Hello everyone, today we are going to create a fun **Quiz Game in python**.
## How does it work?
Our quiz game will be asking questions to the player to which player has to reply with the right answer. Each question will have 3 attempts. If the player fails to answer the question within 3 attempts then the game will move on to the next question and the player will receive zero points. But if the player gives the right answer to the question then, he will get 1 point. At the end of the game, the total points scored by the player are displayed.

I hope the abstract working of the game is clear to everyone, now let's move on to the project setup.
## Project Setup
Before we start coding this project, we need some questions and answers for our game.
In our case, we are going to use some easy superhero based questions.
Feel free to use your own questions or answers for the game. Our questions and answers will be stored in a separate python file in a form of a **python dictionary**.
**Here how it looks:**
```python
quiz = {
1 : {
"question" : "What is the first name of Iron Man?",
"answer" : "Tony"
},
2 : {
"question" : "Who is called the god of lightning in Avengers?",
"answer" : "Thor"
},
3 : {
"question" : "Who carries a shield of American flag theme in Avengers?",
"answer" : "Captain America"
},
4 : {
"question" : "Which avenger is green in color?",
"answer" : "Hulk"
},
5 : {
"question" : "Which avenger can change it's size?",
"answer" : "AntMan"
},
6 : {
"question" : "Which Avenger is red in color and has mind stone?",
"answer" : "Vision"
}
}
```
You can learn more about **python dictionaries** from [here](https://www.programiz.com/python-programming/dictionary).
We won't be able to cover much about dictionaries in this tutorial, but basically a dictionary is a data structure that stores data in a single, organized & easy-to-access form.
You can think of a dictionary as being similar to a list, but there are some key differences between lists & dictionaries:
- Lists are enclosed within **`[]`** square brackets while dictionaries are enclosed in **`{}`** curly braces.
- Individual elements of lists are accessed using the **`index`** of the element, while individual elements of dictionaries are accessed through **`key:value`** pairs, where **`key`** is the identifier and **`value`** is its corresponding data or value.
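A quick illustration of the two access styles:

```python
# A list is indexed by position; a dictionary is indexed by key
heroes_list = ["Tony", "Thor"]
heroes_dict = {"iron_man": "Tony", "thor": "Thor"}

print(heroes_list[0])           # Tony
print(heroes_dict["iron_man"])  # Tony
```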
You must make sure that your dictionary should be in the same format as above or else you may need to make necessary changes to the code to make it work for you. **Feel free to ask questions on my social handles or post your question below in discussions/comments.**
Now I assume that you have your questions & answers ready. Make sure that your Q&A python file is in the same folder as your main quiz game python file which we will start coding in just a second.
Now let's jump to coding.
## Let's Code
The first thing we always do is import required modules into our code. Luckily for this project, we don't need any specific module. However, we still need to import the Q&A python file we created in the previous step.
We have named our Q&A python file as **`questions.py`**. **Here's how we will import it:**
```python
from questions import quiz
```
We are asking python to import the **`quiz`** dictionary which contains our question & answers from the file **`questions.py`**.
Now let's get to the structure of our game...
**Pay close attention! As this might feel a bit complicated...**
Now we are going to initialize a variable to keep track of the score.
```python
score = 0
```
Now it's time to ask the questions to our player.
For that, we need to create a **`for`** loop which will iterate through all the questions.
```python
# Here remember 'quiz' is our dictionary and 'question' is our temp variable
for question in quiz:
pass
```
Now as previously mentioned, the player will have 3 attempts for each question to get the right answer.
Let's create a variable to keep track of the attempts.
```python
# Here remember 'quiz' is our dictionary and 'question' is our temp variable
for question in quiz:
attempts = 3
```
Now let's create a **`while`** loop within our **`for`** loop, which will run only while the player has attempts left.
```python
# Here remember 'quiz' is our dictionary and 'question' is our temp variable
for question in quiz:
attempts = 3
# this while loop will run until player has more than 0 attempts left
while attempts > 0:
pass
```
Great! Now let's print the questions and take the response from our player. We'll use our good old **`print()`** & **`input()`** functions for that.
```python
# Here remember 'quiz' is our dictionary and 'question' is our temp variable
for question in quiz:
attempts = 3
while attempts > 0:
print(quiz[question]['question']) # this will print the current iteration of the for loop
answer = input("Enter Answer: ")
```
Awesome! The response of the player will be stored in the **`answer`** variable.
Now we will use a function which will check if the answer provided by the player is right or wrong. We will name that function as **`check_ans()`.** For now, let's focus on our **`for`** loop and then we will see how this function works.
```python
# Here remember 'quiz' is our dictionary and 'question' is our temp variable
for question in quiz:
attempts = 3
while attempts > 0:
print(quiz[question]['question']) # this will print the current iteration of the for loop
answer = input("Enter Answer: ")
check = check_ans(question, answer, attempts, score)
```
**We will pass 4 parameters to our function, which are:**
- **`question` -** the current iteration of **`for`** loop
- **`answer` -** the answer provided by player
- **`attempts` (optional) -** an optional parameter of number of attempts left
- **`score` (optional) -** an optional parameter of the current score of the player
We will store the output of our function in the **`check`** variable.
Now we are going to use **`if`** statements to increase score if the answer provided by the player is right.
```python
# Here remember 'quiz' is our dictionary and 'question' is our temp variable
for question in quiz:
attempts = 3
while attempts > 0:
print(quiz[question]['question']) # this will print the current iteration of the for loop
answer = input("Enter Answer: ")
check = check_ans(question, answer, attempts, score)
if check:
score += 1
break
attempts -= 1
```
Here if the answer given by the player is right then the score will **increase by 1**, the **`while`** loop will break, and the **`for`** loop will move on to the next question.
But if the answer is wrong, then the player will lose one attempt, and the while loop will continue until either the right answer is provided or the player runs out of attempts.
Here finally, our **`for`** loop ends!
Are we forgetting something? 🤔
Oh we forgot the implementation of our **`check_ans()`** function... Let's cover that quickly!
```python
def check_ans(question, ans, attempts, score):
if quiz[question]['answer'].lower() == ans.lower():
return True
else:
return False
```
Here is our function... Let's break it down!
Here an **`if`** statement will compare the answer provided by the player with the correct answer from our dictionary.
If the answer is right then it will return **`True`** or else it will return **`False`**.
Let's add a few print statements to notify the player if his answer is right or wrong.
```python
def check_ans(question, ans, attempts, score):
if quiz[question]['answer'].lower() == ans.lower():
print(f"Correct Answer! \nYour score is {score + 1}!")
return True
else:
print(f"Wrong Answer :( \nYou have {attempts - 1} left! \nTry again...")
return False
```
This looks good, right?
**You did it! Be proud of yourself 🤩**
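Putting the pieces together, here's a self-contained sketch of the whole game loop. To keep it runnable without user interaction, `input()` is replaced by a scripted list of answers and the dictionary is trimmed to two questions:

```python
quiz = {
    1: {"question": "What is the first name of Iron Man?", "answer": "Tony"},
    2: {"question": "Who is called the god of lightning in Avengers?", "answer": "Thor"},
}

# Pretend player input: one wrong guess, then two right ones
scripted_answers = iter(["Steve", "Tony", "Thor"])

def check_ans(question, ans):
    return quiz[question]["answer"].lower() == ans.lower()

score = 0
for question in quiz:
    attempts = 3
    while attempts > 0:
        print(quiz[question]["question"])
        answer = next(scripted_answers)
        if check_ans(question, answer):
            score += 1
            break
        attempts -= 1

print(f"Final score: {score}")  # Final score: 2
```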
## Some Ideas to try out
Here are some quick ideas you can try with this project.
- **Make it multiplayer -** Try modifying this game so that more than one player can enjoy it at once. You can do this by simply adding an additional **`for`** loop over the names of the players, storing each player's score separately. The player with the highest score wins the game.
- **Use MCQ format -** Not just quizzes, you can also use it to conduct MCQ tests. All you have to do is modify the print function to print the multiple choices, and the player has to pick the right answer.
- **Use an API -** Make use of an interesting API to automatically fetch questions from the web so you don't have to get into the hassle of creating the questions and answers on your own. One of my favorite is the [Superhero API](https://superheroapi.com/).
## Source Code
You can find the complete source code of this project here -
[mindninjaX/Python-Projects-for-Beginners](https://github.com/mindninjaX/Python-Projects-for-Beginners/tree/master/Quiz%20Game)
## Support
Thank you so much for reading! I hope you found this beginner project useful.
If you like my work please consider [Buying me a Coffee](https://buymeacoff.ee/mindninjaX) so that I can bring more projects, more articles for you.

Also if you have any questions or doubts feel free to contact me on [Twitter](https://twitter.com/mindninjaX), [LinkedIn](https://www.linkedin.com/in/mindninjax/) & [GitHub](https://github.com/mindninjaX). Or you can also post a comment/discussion & I will try my best to help you :D | mindninjax |
603,780 | Node Worker Service - Youtube GIF Maker Using Next.js, Node and RabbitMQ | Hello everyone, This Article is the third part of the series Youtube GIF Maker Using Next.js, Node an... | 11,201 | 2021-02-14T22:15:44 | https://dev.to/ragrag/node-worker-service-youtube-gif-maker-using-next-js-node-and-rabbitmq-2g51 | webdev, node, react, javascript | Hello everyone,
This Article is the third part of the series Youtube GIF Maker Using Next.js, Node and RabbitMQ.
In this article we will dive into building the worker service of our Youtube to GIF converter. This Article will contain some code snippets but the whole project can be accessed [on github](https://github.com/ragrag/youtube-gif) which contains the full source code. You can also view the [app demo](ytgif.vercel.app). The following topics will be covered here
* [Functionalities](#functionalities)
* [Flow Chart](#flow-chart)
* Implementation
* RabbitMQ Service
* [Consuming Tasks From the Queue](#consuming-tasks-from-the-queue)
* [Message Acknowledgment](#message-acknowledgment)
* Conversion Service
* [Downloading Youtube Video](#downloading-youtube-video)
* [Converting Video to GIF](#converting-video-to-gif)
* [Uploading GIF to Google Cloud Storage](#uploading-gif-to-google-cloud-storage)
* [Putting It All Together](#putting-it-all-together)
* [Closing Thoughts](#closing-thoughts)
## Functionalities

As you can see, the service worker is responsible for:
* Consuming tasks from the task queue
* Converting a part of a youtube video to a GIF
* Uploading the GIF to a cloud storage
* Updating the job gifUrl and status in database
## Flow Chart
This flow chart will simplify how the service worker is works

## Implementation
### RabbitMQ Service
#### Consuming Tasks From the Queue
Just like the RabbitMQ Service from the backend server in the previous part of this series, the RabbitMQ Service in the service worker is similar except for one single function, **startConsuming()**
```ts
//rabbitmq.service.ts
import amqp, { Channel, Connection, ConsumeMessage } from 'amqplib';
import Container, { Service } from 'typedi';
import { Job } from '../entities/jobs.entity';
import ConversionService from './conversion.service';
@Service()
export default class RabbitMQService {
private connection: Connection;
private channel: Channel;
private queueName = 'ytgif-jobs';
constructor() {
this.initializeService();
}
private async initializeService() {
try {
await this.initializeConnection();
await this.initializeChannel();
await this.initializeQueues();
await this.startConsuming();
} catch (err) {
console.error(err);
}
}
private async initializeConnection() {
try {
this.connection = await amqp.connect(process.env.NODE_ENV === 'production' ? process.env.RABBITMQ_PROD : process.env.RABBITMQ_DEV);
console.info('Connected to RabbitMQ Server');
} catch (err) {
throw err;
}
}
private async initializeChannel() {
try {
this.channel = await this.connection.createChannel();
console.info('Created RabbitMQ Channel');
} catch (err) {
throw err;
}
}
private async initializeQueues() {
try {
await this.channel.assertQueue(this.queueName, {
durable: true,
});
console.info('Initialized RabbitMQ Queues');
} catch (err) {
throw err;
}
}
public async startConsuming() {
const conversionService = Container.get(ConversionService);
this.channel.prefetch(1);
console.info(' 🚀 Waiting for messages in %s. To exit press CTRL+C', this.queueName);
this.channel.consume(
this.queueName,
async (msg: ConsumeMessage | null) => {
if (msg) {
const job: Job = JSON.parse(msg.content.toString());
console.info(`Received new job 📩 `, job.id);
try {
await conversionService.beginConversion(
job,
() => {
this.channel.ack(msg);
},
() => {
this.channel.reject(msg, false);
},
);
} catch (err) {
console.error('Failed to process job', job.id, err);
}
}
},
{
noAck: false,
},
);
}
}
```
**startConsuming()** will consume a message from the queue, parse its JSON object and then delegate the conversion process to the ConversionService.
All the ConversionService needs for the conversion process is the Job object, as well as two callbacks used to either acknowledge or reject the message from the queue (discussed below).
Also notice that in this example we use
```ts
this.channel.prefetch(1);
```
We will talk about what this means at the end of this part of the series.
#### Message Acknowledgment
To remove a task from the queue (indicating that the service finished processing the task, whether successfully or not) we need to do **manual acknowledgment**.
This can be done in amqplib by using either
```ts
channel.ack(msg);
```
to indicate a positive message acknowledgement, or
```ts
// Second parameter specifies whether to re-queue the message or not
channel.reject(msg, false);
```
to indicate a negative message acknowledgement.
Notice that on error we do not re-queue the message; we consider it a 'failed conversion'. How to handle this is ultimately left up to the programmer.
See more on [RabbitMQ Message Acknowledgement](https://www.rabbitmq.com/confirms.html)
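To make the ack/reject semantics concrete, here is a minimal in-memory sketch of what the broker does with each outcome. This is not amqplib, just a toy mental model:

```typescript
// Toy illustration of broker-side ack/reject bookkeeping.
// A real broker (RabbitMQ) does this for you; this only models the outcomes.
class ToyQueue<T> {
  private messages: T[] = [];
  readonly deadLettered: T[] = [];

  publish(msg: T) { this.messages.push(msg); }
  take(): T | undefined { return this.messages.shift(); }

  // Positive ack: the message is gone for good (nothing left to do here).
  ack(_msg: T) {}

  // Negative ack: either re-queue for another consumer, or drop/dead-letter.
  reject(msg: T, requeue: boolean) {
    if (requeue) this.messages.push(msg);
    else this.deadLettered.push(msg);
  }
}

const q = new ToyQueue<string>();
q.publish('job-1');
q.publish('job-2');

const first = q.take()!;
q.ack(first);              // processed successfully

const second = q.take()!;
q.reject(second, false);   // failed conversion, do not re-queue

const remaining = q.take(); // undefined: the queue has been drained
```

With `requeue: true` the failed job would instead go back on the queue and be retried, which is the trade-off the article leaves to the programmer.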
### Conversion Service
This service contains the core logic of our service worker.

It exposes a function **beginConversion()** that is called from the RabbitMQ Service when consuming a message
```ts
public async beginConversion(job: Job, { onSuccess, onError }: { onSuccess: () => void; onError: () => void }) {
...
}
```
This function will perform all the steps necessary for the conversion; it will then call either **onSuccess()** or **onError()** depending on whether it succeeds or fails.
These are the steps necessary for converting a YouTube video to a GIF:
* Downloading Youtube Video
* The youtube video is downloaded locally
* Converting downloaded video to GIF
* The video is converted into a GIF (only the selected range by start/end times is converted)
* Uploading GIF to Google Cloud Storage
* Updating the database
* call **onSuccess()** or **onError()** accordingly
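The steps above can be sketched as a single pipeline. The step bodies here are placeholder stubs (the real implementations are covered section by section below); this only shows the overall control flow:

```typescript
// High-level shape of the conversion pipeline. Each step is a stub here;
// the real implementations are discussed in the following sections.
type StepLog = string[];

async function runPipeline(
  _jobId: string,
  { onSuccess, onError }: { onSuccess: () => void; onError: () => void },
  log: StepLog,
) {
  try {
    log.push('download');  // 1. download the YouTube video locally
    log.push('convert');   // 2. convert the selected range to a GIF
    log.push('upload');    // 3. upload the GIF to cloud storage
    log.push('db-update'); // 4. mark the job as done in the database
    onSuccess();           // 5. positive ack back to RabbitMQ
  } catch {
    onError();             // negative ack on any failure
  }
}

const log: StepLog = [];
let acked = false;
runPipeline('job-1', { onSuccess: () => { acked = true; }, onError: () => {} }, log);
```

Note how the callbacks keep the ConversionService decoupled from RabbitMQ: it never touches the channel itself.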
Let's start by downloading the YouTube video locally.
#### Downloading Youtube Video
To download the YouTube video locally, we use the go-to package for that task, [ytdl-core](https://github.com/fent/node-ytdl-core).
A function **downloadVideo()** is responsible for this; it takes the YouTube video URL/ID and returns a [ReadableStream](https://nodejs.org/api/stream.html#stream_readable_streams) that we can use to save the video file locally, as well as its extension (e.g. mp4, avi, etc.).
```ts
//conversion.service.ts
import { Readable } from 'stream';
import ytdl from 'ytdl-core';
import YoutubeDownload from '../common/interfaces/YoutubeDownload';
private async downloadVideo({ youtubeId, youtubeUrl }: YoutubeDownload): Promise<{ video: Readable ; formatExtension: string }> {
const info = await ytdl.getInfo(youtubeId);
const format: ytdl.videoFormat = info.formats[0];
if (!format) throw new Error('No matching format found');
const video = ytdl(youtubeUrl, {
format,
});
return { video, formatExtension: format.container };
}
public async beginConversion(job: Job, { onSuccess, onError }: { onSuccess: () => void; onError: () => void }) {
try {
console.info('Started Processing Job :', job.id);
const { video, formatExtension } = await this.downloadVideo({
youtubeId: job.youtubeId,
youtubeUrl: job.youtubeUrl,
});
const srcFileName = `./src/media/temp.${formatExtension}`;
video.on('progress', (chunkLength, downloaded, total) => {
//... Logic for showing progress to the user..i.e progress bar
});
video.pipe(
fs
.createWriteStream(srcFileName)
.on('open', () => {
//Video download started
console.log('Downloading Video');
})
.on('finish', async () => {
//Video finished downloading locally in srcFileName
console.info('Downloaded video for job ', job.id);
//...Logic for converting the locally downloaded video to GIF
})
.on('error', async () => {
//...handle failure logic
}),
);
} catch (err) {
//...handle failure logic
}
}
```
#### Converting Video to GIF
To convert local videos to GIFs we will use [ffmpeg.wasm](https://github.com/ffmpegwasm/ffmpeg.wasm), which is essentially a WebAssembly port of FFmpeg. You can think of this process as running [FFmpeg](https://ffmpeg.org/ffmpeg.html) inside Node asynchronously to do the conversion: no spawning of external processes, no external tool dependencies, etc., which is both powerful and simple.
```ts
//conversion.service.ts
import { createFFmpeg, fetchFile, FFmpeg } from '@ffmpeg/ffmpeg';
import GifConversion from '../common/interfaces/GifConversion';
//...somewhere in our code
const ffmpeg = createFFmpeg({
log: false,
progress: p => {
progressBar.update(Math.floor(p.ratio * 100));
},
});
await ffmpeg.load();
//Converts a video range to GIF from srcFileName to destFileName
private async convertToGIF({ startTime, endTime, srcFileName, destFileName, formatExtension }: GifConversion) {
try {
console.info('Converting Video to GIF');
this.ffmpeg.FS('writeFile', `temp.${formatExtension}`, await fetchFile(srcFileName));
await this.ffmpeg.run(
'-i',
`temp.${formatExtension}`,
'-vcodec',
'gif',
'-ss',
`${startTime}`,
'-t',
`${endTime - startTime}`,
'-vf',
'fps=10',
`temp.gif`,
);
await fs.promises.writeFile(destFileName, this.ffmpeg.FS('readFile', 'temp.gif'));
console.info('Converted video to gif');
} catch (err) {
throw err;
}
}
public async beginConversion(job: Job, { onSuccess, onError }: { onSuccess: () => void; onError: () => void }) {
try {
console.info('Started Processing Job :', job.id);
const srcFileName = `./src/media/temp.${formatExtension}`;
const destFileName = `./src/media/temp.gif`;
//... Video download logic
// GIF Conversion
await this.convertToGIF({
startTime: job.startTime,
endTime: job.endTime,
srcFileName,
destFileName,
formatExtension,
});
} catch (err) {
//...handle failure logic
}
}
```
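One detail worth calling out in the arguments above: `-ss` seeks to `startTime`, while `-t` takes a *duration*, which is why we pass `endTime - startTime`. A small helper (hypothetical, not part of ffmpeg.wasm) makes that explicit:

```typescript
// Builds the argument list passed to ffmpeg.run() above.
// Note that -t is a duration, not an end timestamp, hence endTime - startTime.
function buildGifArgs(startTime: number, endTime: number, formatExtension: string): string[] {
  if (endTime <= startTime) throw new Error('endTime must be after startTime');
  return [
    '-i', `temp.${formatExtension}`,
    '-vcodec', 'gif',
    '-ss', `${startTime}`,
    '-t', `${endTime - startTime}`,
    '-vf', 'fps=10',
    'temp.gif',
  ];
}

const args = buildGifArgs(30, 35, 'mp4');
// Could then be spread into the call: await ffmpeg.run(...args);
```

Centralizing the argument list like this also makes the clip-length validation a natural place to reject bad jobs before running FFmpeg at all.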
#### Uploading GIF to Google Cloud Storage
After the local video file is converted to a GIF, we can finally upload it to Google Cloud Storage.
First we will have a CloudStorageService that's responsible for just that; in our case we use Google Cloud Storage.
```ts
import { Storage } from '@google-cloud/storage';
import * as _ from 'lodash';
import { Service } from 'typedi';
@Service()
class CloudStorageService {
private storage;
private BUCKET_NAME;
constructor() {
const privateKey = _.replace(process.env.GCS_PRIVATE_KEY, new RegExp('\\\\n', 'g'), '\n');
this.BUCKET_NAME = 'yourbucketname';
this.storage = new Storage({
projectId: process.env.GCS_PROJECT_ID,
credentials: {
private_key: privateKey,
client_email: process.env.GCS_CLIENT_EMAIL,
},
});
}
async uploadGif(gifImage: Buffer, uploadName: string) {
try {
const bucket = await this.storage.bucket(this.BUCKET_NAME);
uploadName = `ytgif/${uploadName}`;
const file = bucket.file(uploadName);
await file.save(gifImage, {
metadata: { contentType: 'image/gif' },
public: true,
validation: 'md5',
});
return `https://storage.googleapis.com/${this.BUCKET_NAME}/${uploadName}`;
} catch (err) {
throw new Error('Something went wrong while uploading image');
}
}
}
export default CloudStorageService;
```
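Note the `_.replace` call on `GCS_PRIVATE_KEY`: keys stored in environment variables usually have their newlines escaped as literal `\n`, so they must be un-escaped before use. A plain `String.prototype.replace` works just as well as lodash here:

```typescript
// Env vars often store multi-line keys with escaped newlines ("\\n").
// This restores real newlines so the key can be parsed; no lodash required.
function normalizePrivateKey(raw: string): string {
  return raw.replace(/\\n/g, '\n');
}

// Simulated value as it would come out of process.env (key material is fake).
const fromEnv = '-----BEGIN PRIVATE KEY-----\\nabc123\\n-----END PRIVATE KEY-----';
const key = normalizePrivateKey(fromEnv);
```

Without this step the Storage client would be handed a single-line string and fail to parse the PEM key.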
We can now use it like this to upload the generated GIF:
```ts
//conversion.service.ts
import Container from 'typedi';
import CloudStorageService from './cloudStorage.service';
private async uploadGifToCloudStorage(destFileName, uploadName): Promise<string> {
try {
console.info('Uploading gif to cloud storage');
const gifImage = await fs.promises.readFile(destFileName);
const cloudStorageInstance = Container.get(CloudStorageService);
const gifUrl = await cloudStorageInstance.uploadGif(gifImage, `gifs/${uploadName}`);
return gifUrl;
} catch (err) {
throw err;
}
}
public async beginConversion(job: Job, { onSuccess, onError }: { onSuccess: () => void; onError: () => void }) {
try {
const destFileName = `./src/media/temp.gif`;
//... Video download logic
//... Video conversion logic
const gifUrl = await this.uploadGifToCloudStorage(destFileName, job.id);
} catch (err) {
//...handle failure logic
}
}
```
#### Handling success/failure
Handling success and failure is pretty simple. First, we have to update the job in the database:
**In case of success:**
Set the job status to 'done' and set gifUrl to the URL of the GIF uploaded to Google Cloud Storage.
**In case of failure:**
Set the job status to 'error'
After that we will call **onSuccess()** or **onError()**, which will handle the positive/negative RabbitMQ message acknowledgment:
```ts
public async beginConversion(job: Job, { onSuccess, onError }: { onSuccess: () => void; onError: () => void }) {
try {
const destFileName = `./src/media/temp.gif`;
//... Video download logic
//... Video conversion logic
const gifUrl = await this.uploadGifToCloudStorage(destFileName, job.id);
//Success scenario
await this.jobService.updateJobById(job.id as any, { status: 'done', gifUrl });
console.info(`Finished job ${job.id}, gif at ${gifUrl}`);
onSuccess();
} catch (err) {
//Failure scenario
console.error('Failed to process job', job.id);
await this.jobService.updateJobById(job.id as any, { status: 'error' });
onError();
}
}
```
#### Putting it all together
Putting it all together, as well as adding a CLI progress bar with [cli-progress](https://github.com/AndiDittrich/Node.CLI-Progress), the ConversionService looks like this:
```ts
import Container, { Service } from 'typedi';
import JobsService from './jobs.service';
import ytdl from 'ytdl-core';
import { Readable } from 'stream';
import { Job } from '../entities/jobs.entity';
import { createFFmpeg, fetchFile, FFmpeg } from '@ffmpeg/ffmpeg';
import fs from 'fs';
import cliProgress from 'cli-progress';
import CloudStorageService from './cloudStorage.service';
import GifConversion from '../common/interfaces/GifConversion';
import YoutubeDownload from '../common/interfaces/YoutubeDownload';
const progressBar = new cliProgress.SingleBar({}, cliProgress.Presets.shades_classic);
@Service()
export default class ConversionService {
private ffmpeg: FFmpeg = null;
constructor(private jobService = new JobsService()) {}
public async initializeService() {
try {
this.ffmpeg = createFFmpeg({
log: false,
progress: p => {
progressBar.update(Math.floor(p.ratio * 100));
},
});
await this.ffmpeg.load();
} catch (err) {
console.error(err);
}
}
private async downloadVideo({ youtubeId, youtubeUrl }: YoutubeDownload): Promise<{ video: Readable; formatExtension: string }> {
const info = await ytdl.getInfo(youtubeId);
const format: ytdl.videoFormat = info.formats[0];
if (!format) throw new Error('No matching format found');
const video = ytdl(youtubeUrl, {
format,
});
return { video, formatExtension: format.container };
}
private async convertToGIF({ startTime, endTime, srcFileName, destFileName, formatExtension }: GifConversion) {
try {
console.info('Converting Video to GIF');
this.ffmpeg.FS('writeFile', `temp.${formatExtension}`, await fetchFile(srcFileName));
progressBar.start(100, 0);
await this.ffmpeg.run(
'-i',
`temp.${formatExtension}`,
'-vcodec',
'gif',
'-ss',
`${startTime}`,
'-t',
`${endTime - startTime}`,
'-vf',
'fps=10',
`temp.gif`,
);
progressBar.stop();
await fs.promises.writeFile(destFileName, this.ffmpeg.FS('readFile', 'temp.gif'));
console.info('Converted video to gif');
} catch (err) {
throw err;
}
}
private async uploadGifToCloudStorage(destFileName, uploadName): Promise<string> {
try {
console.info('Uploading gif to cloud storage');
const gifImage = await fs.promises.readFile(destFileName);
const cloudStorageInstance = Container.get(CloudStorageService);
const gifUrl = await cloudStorageInstance.uploadGif(gifImage, `gifs/${uploadName}`);
return gifUrl;
} catch (err) {
throw err;
}
}
public async beginConversion(job: Job, { onSuccess, onError }: { onSuccess: () => void; onError: () => void }) {
try {
await this.jobService.updateJobById(job.id as any, { status: 'processing' });
console.info('Started Processing Job :', job.id);
const { video, formatExtension } = await this.downloadVideo({
youtubeId: job.youtubeId,
youtubeUrl: job.youtubeUrl,
});
const srcFileName = `./src/media/temp.${formatExtension}`;
const destFileName = `./src/media/temp.gif`;
video.on('progress', (chunkLength, downloaded, total) => {
let percent: any = downloaded / total;
percent = percent * 100;
progressBar.update(percent);
});
video.pipe(
fs
.createWriteStream(srcFileName)
.on('open', () => {
console.log('Downloading Video');
progressBar.start(100, 0);
})
.on('finish', async () => {
progressBar.stop();
console.info('Downloaded video for job ', job.id);
await this.convertToGIF({
startTime: job.startTime,
endTime: job.endTime,
srcFileName,
destFileName,
formatExtension,
});
const gifUrl = await this.uploadGifToCloudStorage(destFileName, job.id);
await this.jobService.updateJobById(job.id as any, { status: 'done', gifUrl });
console.info(`Finished job ${job.id}, gif at ${gifUrl}`);
onSuccess();
})
.on('error', async () => {
progressBar.stop();
console.error('Failed to process job', job.id);
await this.jobService.updateJobById(job.id as any, { status: 'error' });
onError();
}),
);
} catch (err) {
await this.jobService.updateJobById(job.id as any, { status: 'error' });
onError();
throw err;
}
}
}
```
## Closing Thoughts
Remember how we used **channel.prefetch(1)** when we started consuming from the queue?
```ts
this.channel.prefetch(1);
```
What this does is make sure that each queue consumer gets only one message at a time. This ensures that the load is distributed evenly among our consumers, and whenever a consumer is free it will be ready to process more tasks.
Read more about this from [RabbitMQ Docs](https://www.rabbitmq.com/tutorials/tutorial-two-javascript.html).
This also means that if we want to scale our conversion jobs/worker services, we can simply add more replicas of this service.
Read more about the [Competing Consumers](https://docs.microsoft.com/en-us/azure/architecture/patterns/competing-consumers) pattern.
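As a toy illustration of why fair dispatch helps: the sketch below simulates jobs always going to whichever worker is free next, which is effectively what `prefetch(1)` gives you. A faster worker naturally ends up handling more jobs instead of a slow worker hoarding unacked messages (this is a simulation, not amqplib):

```typescript
// Toy model of fair dispatch with prefetch(1): each consumer holds at most
// one unacked message, so each job goes to whichever worker is free next.
function dispatch(jobs: number[], timePerJob: number[]): number[][] {
  const freeAt = timePerJob.map(() => 0);            // when each worker is next free
  const assigned: number[][] = timePerJob.map(() => []);
  for (const job of jobs) {
    // the worker with the lowest "next free" time takes the job
    let w = 0;
    for (let i = 1; i < freeAt.length; i++) {
      if (freeAt[i] < freeAt[w]) w = i;
    }
    assigned[w].push(job);
    freeAt[w] += timePerJob[w]; // worker is busy for its processing time
  }
  return assigned;
}

// Worker 0 takes 1 time unit per job, worker 1 takes 3.
const assigned = dispatch([1, 2, 3, 4, 5, 6, 7, 8], [1, 3]);
```

With a large fixed prefetch instead, the broker could hand a slow worker a long backlog up front, leaving fast workers idle.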
That's it for our service worker! Now we can start digging into the client side of the app!
Remember that the full source code can be viewed on the [github repository](https://github.com/ragrag/youtube-gif)
In the next part of the series we will see how we can implement the Next.js client, which will send GIF conversion requests and display converted GIFs!
| ragrag |
603,891 | Validation using schematron in java by example | We write a simple example in java to run a validation using Schematron. The framework Schematron is a way to do easy validation of XML documents and create output that is easily readable by someone working with these documents. | 0 | 2021-03-01T04:44:17 | https://dev.to/kalaspuffar/validation-using-schematron-in-java-by-example-4om6 | ---
title: Validation using schematron in java by example
published: true
description: We write a simple example in java to run a validation using Schematron. The framework Schematron is a way to do easy validation of XML documents and create output that is easily readable by someone working with these documents.
tags:
cover_image: https://i.ytimg.com/vi/0OCULBADZr4/maxresdefault.jpg
---
{% youtube 0OCULBADZr4 %}
We write a simple example in java to run a validation using Schematron. The framework Schematron is a way to do easy validation of XML documents and create output that is easily readable by someone working with these documents. | kalaspuffar | |
603,911 | Being RESTful Is Not Always Best For The User | The RESTful convention has been invaluable regarding creating a consistency in the internet allowing... | 0 | 2021-02-15T01:16:31 | https://dev.to/davidnnussbaum/being-restful-is-not-always-best-for-the-user-13p0 | ruby, sinatra, rest | The RESTful convention has been invaluable regarding creating a consistency in the internet allowing for improved communication. However, as with most things in life, there are exceptions. My project involved having a page on which people can enter their medical histories and subjective reports. It would not be convenient for the user to enter part of the information on one page and then to proceed to the next page. This is separate from notes which a person may or may not make use of and therefore is appropriate to put on a separate page.
This is the page being discussed:
------------------------------------------------------------------
Medical Information Notepad
Medical Information For Peter
General History:
Medical Conditions:
Medications:
Allergies:
Current Treatments:
Surgeries:
Immunizations With Dates:
The Current Problem:
Location of the Problem:
Any Observed Changes:
What You Are Feeling:
What Is The Level Of Discomfort On A Scale Of 1 To 10:
How Long This Has Been Going On:
Press Here To Enter The Above Information:
Press Here To Logout Without Saving The Information:
-----------------------------------------------------------------
The issue with this arrangement regarding the history and subjective areas is that they come from two separate tables. Therefore, the name on the route cannot be RESTful and simply use the name of the table since we have two tables represented on the page.
The schema is as follows:
-----------------------------------------------------------------
create_table "comments", force: :cascade do |t|
t.text "identifier"
t.text "note"
t.text "items_to_discuss"
t.text "questions"
t.integer "patient_id"
end
create_table "histories", force: :cascade do |t|
t.text "diagnoses"
t.text "medications"
t.text "allergies"
t.text "current_treatments"
t.text "surgeries"
t.text "immunizations_with_dates"
t.integer "patient_id"
end
create_table "patients", force: :cascade do |t|
t.string "username"
t.string "password_digest"
end
create_table "subjectives", force: :cascade do |t|
t.text "location"
t.text "observed_changes"
t.text "sensation_changes"
t.string "scale_1_to_10"
t.text "length_of_time"
t.integer "patient_id"
end
end
------------------------------------------------------------------
This is the code for the page being discussed. As you can see, the two tables have their respective information entered separately even though two tables are being updated:
------------------------------------------------------------------
<div class="p-3 mb-2 bg-light text-dark">
<h1 class='text-center'><span class="border border-3">Medical Information Notepad</span></h1>
<h1 class='text-center'><span class="border border-3">Medical Information For <%= @patient.username %></span></h1>
<form method="post" action="/patients/<%= @patient.id %>/info">
<h3><u>General History:</u></h3>
Medical Conditions: <input type="diagnoses" name="histories[diagnoses]"><br>
Medications: <input type="medications" name="histories[medications]" ><br>
Allergies: <input type="allergies" name="histories[allergies]"><br>
Current Treatments: <input type="current_treatments" name="histories[current_treatments]"><br>
Surgeries: <input type="surgeries" name="histories[surgeries]"><br>
Immunizations With Dates: <input type="immunizations_with_dates" name="histories[immunizations_with_dates]"><br>
<h3><u>The Current Problem:</u></h3>
Location of the Problem:<input type="location" name="subjectives[location]"><br>
Any Observed Changes: <input type="observed_changes" name="subjectives[observed_changes]"><br>
What You Are Feeling: <input type="sensation_changes" name="subjectives[sensation_changes]"><br>
What Is The Level Of Discomfort On A Scale Of 1 To 10: <input type="scale_1_to_10" name="subjectives[scale_1_to_10]"><br>
How Long This Has Been Going On:<input type="length_of_time" name="subjectives[length_of_time]"><br>
<h3><u>Press Here To Enter The Above Information:</u> <input type="submit" value="Save Information"></h3>
</form>
<form method="GET" action="/logout">
<input type="hidden" value="DELETE" name="_method">
<h3><u>Press Here To Logout Without Saving The Information:</u> <input type="submit" value="Logout"></h3>
</form>
</div>
Finally, the following code allows for the saving of information for both sections from the same page:
post '/patients/:id/info' do
redirect_if_not_logged_in
patient = Patient.find(session["patient_id"])
history = History.new(:diagnoses => params[:histories][:diagnoses])
history.medications = params[:histories][:medications]
history.allergies = params[:histories][:allergies]
history.current_treatments = params[:histories][:current_treatments]
history.surgeries = params[:histories][:surgeries]
history.immunizations_with_dates = params[:histories][:immunizations_with_dates]
history.patient_id = patient.id
history.save
subjective = Subjective.create(:location => params[:subjectives][:location])
subjective.observed_changes = params[:subjectives][:observed_changes]
subjective.sensation_changes = params[:subjectives][:sensation_changes]
subjective.scale_1_to_10 = params[:subjectives][:scale_1_to_10]
subjective.length_of_time = params[:subjectives][:length_of_time]
subjective.patient_id = patient.id
subjective.save
redirect "/patients/#{patient.id}/info"
end
------------------------------------------------------------------
As is evident from the above code, the route ends with /info which is not a RESTful term as /show would be. However, it does convey the meaning of what is presented on the page.
In conclusion, RESTful convention should indeed be used as frequently as possible. This presentation gives an example where being non-RESTful can be justified.
| davidnnussbaum |
603,930 | I want to know weather a fresher learn Java(hibernate, spring) or python(django) | A post by AKHIL | 0 | 2021-02-15T03:01:45 | https://dev.to/a4akhil007/i-want-to-know-weather-a-fresher-learn-java-hibernate-spring-or-python-django-38co | beginners, career, webdev | a4akhil007 | |
604,097 | Introduction to AWS and AWS Compute Services | I spent over a week reading about cloud deployment models and service models, AWS compute services, i... | 0 | 2021-02-15T04:31:16 | https://dev.to/aws-builders/introduction-to-aws-and-aws-compute-services-2dh4 | aws, awscomputeservices, security, cloud | <!-- wp:paragraph -->
<p>I spent over a week reading <strong>about cloud deployment models and service models, AWS compute services, and their security and compliance.</strong> I have started by explaining what we mean by cloud and its benefits. </p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3><strong><em>Q.What is cloud?</em></strong></h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>It refers to <strong>servers that are accessed over a network, typically the Internet: on-demand shared resources offering compute, storage, databases, analytics and much more that can be deployed and scaled with ease</strong>. By using cloud computing, users and companies don't have to manage physical servers themselves or run software applications on their own machines; they can focus on their own application code.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><em>AWS is a cost-effective solution for businesses as it uses a pay-as-you-go model. </em></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>Cloud benefits</strong>,</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li>You can access as many or as few resources/services as you need, and scale up and down as required with only a few minutes’ notice. You can scale horizontally or vertically.</li><li>Increased agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.</li><li>Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers, so you can focus on your code and its efficiency.</li><li>Easily deploy your application in multiple regions around the world. This means you can provide lower latency and a better experience for your customers at minimal cost.</li></ol>
<!-- /wp:list -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://image.slidesharecdn.com/hsbcandawsday-awsfoundations-170709164032/95/hsbc-and-aws-day-aws-foundations-5-638.jpg?cb=1499618785" alt=""/><figcaption>Credits: https://image.slidesharecdn.com/hsbcandawsday-awsfoundations-170709164032/95/hsbc-and-aws-day-aws-foundations-5-638.jpg?cb=1499618785</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><strong>Q.What do you mean by Compute?</strong></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Compute can be thought of as processing power required by application to process and execute it's tasks. A physical server within a data center would be considered a Computer resource as it may have multiple CPU's and many Gigabytes of RAM.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3><strong><em>Q.What is Cloud Computing?</em></strong></h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Cloud computing provides a simple way to access servers, storage, databases and a broad set of application services over the Internet. </p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>A cloud services platform such as Amazon Web Services owns and maintains the network-connected hardware required for these application services, while you provision and use what you need via a web application.</strong> You can access as many resources as you need, almost instantly, and only pay for what you use.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://www.novelvox.com/wp-content/uploads/2020/05/Cloud-Computing-Models.jpg" alt=""/><figcaption>Credits: https://www.novelvox.com/wp-content/uploads/2020/05/Cloud-Computing-Models.jpg</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><strong>There are different cloud deployment models. A simple problem statement: you want to travel from point A to point B, and the options available are,</strong></p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li>A <strong>public provider</strong> owns and operates all the hardware needed to run a public cloud. For our problem statement, <em><strong>think of it as using a bus for transportation</strong>.</em><ol><li>Low cost for the ticket.</li><li>Limited seats, and less reliability and security in reaching the place on time.</li></ol></li><li>A <strong>private cloud</strong> belongs to a specific organization. That organization controls the system and manages it in a centralized fashion. For our problem statement, <em><strong>think of it as using your own car for transportation.</strong></em><ol><li>High cost and maintenance required.</li><li>Fixed capacity, but highly secure in reaching the place.</li><li>Full control over the vehicle.</li></ol></li><li>A <strong>hybrid cloud</strong> is a combination of two or more infrastructures; every model within a hybrid is a separate system, but they are all part of the same architecture. For our problem statement, <strong><em>think of it as renting a private taxi.</em></strong><ol><li>Cost-effective compared with owning your own car.</li><li>Secure and flexible up to a certain extent.</li><li>Can be very complex and may cater to specific use cases or destinations.</li></ol></li></ol>
<!-- /wp:list -->
<!-- wp:image {"align":"center","width":705,"height":318,"sizeSlug":"large"} -->
<div class="wp-block-image"><figure class="aligncenter size-large is-resized"><img src="https://www.smactechlabs.com/wp-content/uploads/2020/04/internet-network-1.jpg" alt="" width="705" height="318"/><figcaption>Credits: https://www.smactechlabs.com/wp-content/uploads/2020/04/internet-network-1.jpg</figcaption></figure></div>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><strong>There are different cloud service models. Each type of cloud service, and deployment method, provides you with different levels of control, flexibility, and management. </strong></p>
<!-- /wp:paragraph -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://www.cloudflare.com/img/learning/serverless/glossary/platform-as-a-service-paas/saas-paas-iaas-diagram.svg" alt=""/><figcaption>Credits: https://www.cloudflare.com/img/learning/serverless/glossary/platform-as-a-service-paas/saas-paas-iaas-diagram.svg</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><strong><em>A simple problem statement: you want to eat pizza, and the options available are,</em></strong></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>With traditional on-premises services, you have to make everything at home.</strong> From setting the dining table and owning an oven, to making the pizza dough, tomato sauce and other ingredients. </p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li><strong><em>Infrastructure as a Service (IaaS)</em></strong> provides you with the highest level of flexibility and management control over your IT resources. For our problem statement, which is eating pizza,<ol><li><em>The vendor will manage getting all the ingredients from the market, such as pizza dough and tomato sauce. You have to focus on the resources you already own, such as the dining table and oven.</em></li></ol></li><li><strong><em>Platform as a Service (PaaS)</em></strong> manages the underlying infrastructure (hardware and operating systems) and allows you to focus on the deployment and management of your applications. For our problem statement, which is eating pizza,<ol><li><em>The vendor will manage getting all the ingredients from the market and using an oven to make the pizza for you. You have to focus on managing the dining table.</em></li></ol></li><li><strong><em>Software as a Service (SaaS)</em></strong> is a completed product that is run and managed by the service provider. With a SaaS offering you do not have to think about how the service is maintained or how the underlying infrastructure is managed; you only need to think about how you will use that particular piece of software.<ol><li><em>The vendor will manage getting all the ingredients from the market, using an oven to make the pizza for you, and setting up the dining table. You have to focus on enjoying the pizza.</em></li></ol></li></ol>
<!-- /wp:list -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://www.valueblue.nl/wp-content/uploads/2017/08/Management-Iaas-Saas-Paas-Cloud.jpg" alt=""/><figcaption>Credits: https://www.valueblue.nl/wp-content/uploads/2017/08/Management-Iaas-Saas-Paas-Cloud.jpg</figcaption></figure>
<!-- /wp:image -->
<!-- wp:heading {"level":5} -->
<h5><strong>A few AWS Cloud computing services,</strong></h5>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p><strong>1. Elastic Compute Cloud (EC2):</strong> Allows you to deploy virtual servers within your AWS environment. Most people will require an EC2 instance within their environment as part of at least one of their solutions. The configuration of an EC2 instance depends on,</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li>Amazon Machine Images (AMIs): templates of pre-configured EC2 instances that let you quickly launch your instance.</li><li>Instance types: chosen based on parameters such as vCPUs, memory, storage, and so on.</li><li>Instance purchasing options: you can choose from On-Demand, Spot, Reserved and other purchasing options.</li><li>Storage options: depending on the instance selected, you can choose between<ol><li>Persistent storage</li><li>Ephemeral storage</li></ol></li></ol>
<!-- /wp:list -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="http://programmerprodigycode.files.wordpress.com/2021/02/b2ed1-ami.png" alt=""/><figcaption>Credits: http://programmerprodigycode.files.wordpress.com/2021/02/b2ed1-ami.png</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><em>For more information on EC2, <a rel="noreferrer noopener" href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html" target="_blank">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html</a></em></p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":5} -->
<h5>Q.<strong>What is a container?</strong></h5>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>It holds everything an application needs to run, packaged within the container.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>2. EC2 Container Service (ECS):</strong> Runs Docker-enabled applications packaged as containers across a cluster of EC2 instances, without a complex cluster management system.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong><em>An Amazon ECS cluster is composed of a collection of EC2 instances; these instances still operate in much the same way as a single EC2 instance. A cluster can only scale within a single region.</em></strong></p>
<!-- /wp:paragraph -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://image.slidesharecdn.com/6-170607222654/95/backing-up-amazon-ec2-with-amazon-ebs-snapshots-june-2017-aws-online-tech-talks-6-638.jpg?cb=1496874517" alt=""/><figcaption>Credits: https://image.slidesharecdn.com/6-170607222654/95/backing-up-amazon-ec2-with-amazon-ebs-snapshots-june-2017-aws-online-tech-talks-6-638.jpg?cb=1496874517</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><em>For more information on ECS, <a rel="noreferrer noopener" href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html" target="_blank">https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html</a></em></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>3. Elastic Container Registry (ECR):</strong> Provides a secure location to store and manage your Docker images.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong><em>This is a fully managed service, so you don't need to provision any infrastructure to allow you to create this registry of docker images.</em></strong></p>
<!-- /wp:paragraph -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://d1.awsstatic.com/diagrams/product-page-diagrams/Product-Page-Diagram_Amazon-ECR.bf2e7a03447ed3aba97a70e5f4aead46a5e04547.png" alt=""/><figcaption>Credits: https://d1.awsstatic.com/diagrams/product-page-diagrams/Product-Page-Diagram_Amazon-ECR.bf2e7a03447ed3aba97a70e5f4aead46a5e04547.png</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><em>For more information on ECR, <a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html" target="_blank" rel="noreferrer noopener">https://docs.aws.amazon.com/AmazonECR/latest/userguide/what-is-ecr.html</a></em></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>4. Elastic Kubernetes Service (EKS):</strong> Kubernetes is a container orchestration tool designed to automate the deployment, scaling, and operation of containerized applications.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong><em>EKS allows you to run Kubernetes across your AWS infrastructure without having to take care of provisioning and running the Kubernetes management infrastructure in what's referred to as the control plane.</em></strong></p>
<!-- /wp:paragraph -->
<!-- wp:image {"align":"center","width":571,"height":236,"sizeSlug":"large"} -->
<div class="wp-block-image"><figure class="aligncenter size-large is-resized"><img src="https://rssoftware.files.wordpress.com/2020/11/image-1.png" alt="" width="571" height="236"/><figcaption>Credits: https://rssoftware.files.wordpress.com/2020/11/image-1.png</figcaption></figure></div>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><em>For more information, <a href="https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html" target="_blank" rel="noreferrer noopener">https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html</a></em></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>5. AWS Elastic Beanstalk:</strong> An AWS-managed service that takes your web application code and automatically provisions and deploys the required resources within AWS to make the web application operational.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong><em>An ideal service for developers who are not yet familiar with the necessary AWS skills.</em></strong></p>
<!-- /wp:paragraph -->
<!-- wp:image {"align":"center","width":468,"height":279,"sizeSlug":"large"} -->
<div class="wp-block-image"><figure class="aligncenter size-large is-resized"><img src="https://mindmajix.com/blogs/images/Complete-Resource-Control.png" alt="" width="468" height="279"/><figcaption>Credits: https://mindmajix.com/blogs/images/Complete-Resource-Control.png</figcaption></figure></div>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><em>For more information, <a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html" target="_blank" rel="noreferrer noopener">https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html</a></em></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>6. AWS Lambda:</strong> A serverless compute service that allows you to run your application code without having to manage EC2 instances.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong><em>Serverless means that you do not need to worry about provisioning and managing your own compute resources to run your code; instead, this is managed and provisioned by AWS.</em></strong></p>
<!-- /wp:paragraph -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://42vnof42im1n3ecs8l2w7ez1-wpengine.netdna-ssl.com/wp-content/uploads/2020/08/product-page-diagram_Lambda-HowItWorks.68a0bcacfcf46fccf04b97f16b686ea44494303f.png" alt=""/><figcaption>Credits: https://42vnof42im1n3ecs8l2w7ez1-wpengine.netdna-ssl.com/wp-content/uploads/2020/08/product-page-diagram_Lambda-HowItWorks.68a0bcacfcf46fccf04b97f16b686ea44494303f.png</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><em>You only have to pay for compute power while Lambda is in use via its functions.</em></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>Components of AWS Lambda</strong>,</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li>The Lambda function is composed of your own code, which you want Lambda to invoke as per the defined triggers.</li><li>Event sources are AWS services that can be used to trigger your Lambda functions.</li><li>A trigger is essentially an operation from an event source that causes the function to be invoked.</li><li>Downstream resources are resources that are required during the execution of your Lambda function.</li><li>Log streams help to identify and troubleshoot issues with your Lambda function.</li></ol>
<!-- /wp:list -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://image.slidesharecdn.com/gettingstartedwithawslambdaandtheserverlesscloud-160818170616/95/getting-started-with-aws-lambda-and-the-serverless-cloud-by-jim-tran-principal-solutions-architect-aws-10-638.jpg?cb=1473363544" alt=""/><figcaption>Credits: https://image.slidesharecdn.com/gettingstartedwithawslambdaandtheserverlesscloud-160818170616/95/getting-started-with-aws-lambda-and-the-serverless-cloud-by-jim-tran-principal-solutions-architect-aws-10-638.jpg?cb=1473363544</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><em>For more information on AWS Lambda, <a href="https://docs.aws.amazon.com/lambda/latest/dg/welcome.html" target="_blank" rel="noreferrer noopener">https://docs.aws.amazon.com/lambda/latest/dg/welcome.html</a></em></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>7. AWS Batch:</strong> Used to manage and run batch computing workloads within AWS.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong><em>Primarily used in specialist use cases that require a vast amount of compute power across a cluster of compute resources to complete batch processing of a series of tasks.</em></strong></p>
<!-- /wp:paragraph -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://acloudxpert.com/wp-content/uploads/2019/10/product-page-diagram-AWS-Batch_digital-media-1.ebb47115d796e652c924f69c498f89fd12aa8ea5.png" alt=""/><figcaption>Credits: https://acloudxpert.com/wp-content/uploads/2019/10/product-page-diagram-AWS-Batch_digital-media-1.ebb47115d796e652c924f69c498f89fd12aa8ea5.png</figcaption></figure>
<!-- /wp:image -->
<!-- wp:list {"ordered":true} -->
<ol><li>Jobs: classed as the unit of work that is to be run by AWS Batch.</li><li>Job definitions: define specific parameters for the jobs themselves and dictate how a job will run and with what configuration.</li><li>Job queues: jobs are placed into a job queue, where they reside until scheduled to run.</li><li>Job scheduling: takes care of when a job should be run and from which compute environment.</li></ol>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p><em>For more information, <a href="https://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html" target="_blank" rel="noreferrer noopener">https://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html</a></em></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong>8. Lightsail:</strong> Much like an EC2 instance, but without as many configurable steps throughout its creation.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p><strong><em>It has been designed to be simple, quick, and very easy to use at a low cost point, for small-scale use cases by small businesses or single users.</em></strong></p>
<!-- /wp:paragraph -->
<!-- wp:image {"align":"center","width":595,"height":324,"sizeSlug":"large"} -->
<div class="wp-block-image"><figure class="aligncenter size-large is-resized"><img src="https://i1.wp.com/davidveksler.com/files/2019/10/wordpress-architecture.png?ssl=1" alt="" width="595" height="324"/><figcaption>Credits: https://i1.wp.com/davidveksler.com/files/2019/10/wordpress-architecture.png?ssl=1</figcaption></figure></div>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><em>For more information, <a href="https://lightsail.aws.amazon.com/ls/docs/en_us/articles/what-is-amazon-lightsail" target="_blank" rel="noreferrer noopener">https://lightsail.aws.amazon.com/ls/docs/en_us/articles/what-is-amazon-lightsail</a></em></p>
<!-- /wp:paragraph -->
<!-- wp:heading {"textAlign":"center","level":4} -->
<h4 class="has-text-align-center"><em><strong>Common use cases of cloud computing</strong>,</em></h4>
<!-- /wp:heading -->
<!-- wp:list {"ordered":true} -->
<ol><li>Migration of Production services</li><li>To avoid traffic bursting</li><li>Backup and Disaster recovery</li><li>Web hosting</li><li>Big data analytics</li></ol>
<!-- /wp:list -->
<!-- wp:heading -->
<h2 id="paragraph_9iT3F78NI-3"><strong><em>Security and Compliance</em></strong></h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p id="paragraph_Id3O10SuWY1">In the cloud, you don’t have to manage physical servers or storage devices. Instead, you use software-based security tools to monitor and protect the flow of information into and out of your cloud resources.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p id="paragraph_hx3O10CvWY1"><strong>The AWS Cloud enables a shared responsibility model. </strong></p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li><strong>AWS manages security of the cloud; you are responsible for security in the cloud.</strong></li><li><strong>You retain control of the security you choose to implement to protect your own content, platform, applications, systems, and networks, no differently than you would in an on-site data center.</strong></li></ol>
<!-- /wp:list -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://www.cloudtechnologyexperts.com/wp-content/uploads/2017/07/shared-model.png" alt=""/><figcaption>Credits: https://www.cloudtechnologyexperts.com/wp-content/uploads/2017/07/shared-model.png</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p id="paragraph_VW3O10CuWY1"><strong>Benefits of AWS Security</strong></p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li>The AWS infrastructure puts strong safeguards in place to help protect your privacy. All data is stored in highly secure AWS data centers.</li><li>Cut costs by using AWS data centers. Maintain the highest standard of security without having to manage your own facility.</li><li>Security scales with your AWS Cloud usage. No matter the size of your business, the AWS infrastructure is designed to keep your data safe.</li></ol>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p id="paragraph_ca6u2SMsld0">AWS Cloud Compliance enables you to understand the robust controls in place at AWS to maintain security and data protection in the cloud. As systems are built on top of AWS Cloud infrastructure, compliance responsibilities will be shared. </p>
<!-- /wp:paragraph -->
<!-- wp:image {"sizeSlug":"large"} -->
<figure class="wp-block-image size-large"><img src="https://phoenixnap.com/blog/wp-content/uploads/2019/06/security-vs-compliance-1-e1560797221187.jpg" alt=""/><figcaption>Credits: https://phoenixnap.com/blog/wp-content/uploads/2019/06/security-vs-compliance-1-e1560797221187.jpg</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><em>For more information, <a rel="noreferrer noopener" href="https://awseducate.instructure.com/courses/197/pages/aws-cloud-computing-fundamentals?module_item_id=9215" target="_blank">https://awseducate.instructure.com/courses/197/pages/aws-cloud-computing-fundamentals?module_item_id=9215</a></em></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>I will be spending the next couple of weeks focusing on AWS storage and databases. Let me know where I could improve.</p>
<!-- /wp:paragraph -->
<!-- wp:jetpack/contact-form {"subject":"New feedback received from your website","to":"kakabisht07@gmail.com"} -->
<!-- wp:jetpack/field-email {"required":true} /-->
<!-- wp:jetpack/field-textarea {"label":"How could i improve?"} /-->
<!-- wp:jetpack/button {"element":"button","text":"Send Feedback","textColor":"black","backgroundColor":"light-gray","className":"is-style-outline"} /-->
<!-- /wp:jetpack/contact-form --> | hridyeshbisht |
604,313 | Perl programmers are Mandalorians | The Perl programmers were a clan-based cultural group that was composed of members from multiple sp... | 0 | 2021-02-16T17:53:44 | https://dev.to/thibaultduponchelle/perl-programmers-are-mandalorians-l0b | perl, mandalorians | 
***The Perl programmers were a clan-based cultural group that was composed of members from multiple species all bound by a common culture, creed, and code. They originated on the planet Unix in the Outer Rim Territories and had a particularly important role in Internet history as legendary hackers/gurus.***
Experienced Perl programmers are powerful and respected, but sometimes suffer from being rejected by other cultures that do not embrace Perl qualities.
***Perl masters were some of the most feared warriors in the galaxy.***
They are now spread all over the universe, sometimes exiled. They often prefer to stay discrete.

***Perl programmers respect the creed:***
***- sharing - portability - stability -***

When people meet a Perl programmer, they have a strange reaction then look at him and say "so you're one of them?"

Perl programmers are **Mandalorians**

| thibaultduponchelle |
604,382 | Cleanup your Azure DevOps Service Principals | We’ve all been there, out of convenience you auto generate a service principal to connect a Azure Dev... | 0 | 2021-02-15T12:03:19 | https://jamescook.dev/cleanup-azure-devops-service-principals | azure, devops, cloud, security | We’ve all been there: out of convenience, you auto-generate a service principal to connect an Azure DevOps project to your Azure subscriptions. As long as IAM (Identity and Access Management) is configured correctly within your subscription, it’s so easy to just click, create, go and never look back. However, there can be implications in doing this, some of which I’ve identified below:
- duplicate service principals with the same access rights;
- access to resources that shouldn’t be accessible;
- duplicate display names causing mismanagement.
These can cause a knock-on effect whereby security is impacted or pipeline disruption is caused by user error. This post will address how we can go about cleaning up after ourselves and implement best practices when creating service principals for use with Azure DevOps.
To make any changes, you will need:
- to be a Project Administrator or hold permissions within a project to manage service principals;
- the right amount of permissions within a subscription to manage IAM;
- Azure Active Directory permissions to manage App registrations.
## Identifying duplicate service principals
To identify if there are duplicate service principals within DevOps, you first need to go into a project where you are using them. Once within a project, select **Project Settings** from the bottom left of the screen and then **Service connections** under the heading **Pipelines**. From here you should be able to see the list of service principals connected to your project, select each service principal that connects to Azure and select the **Edit** button. You will see the details of the subscription this service connection is connected to and if it's connected to a resource group.

If the resource group field is blank, this can mean the service principal has contributor permissions on the subscription and all resources. Repeat these steps to go into each service principal to identify if any are configured to the same subscription and/or resource group.
To dig deeper into the permissions of the service principal, cancel out of the edit window and select **Manage service connection roles** under the **Details** heading. This will lead you to the IAM of the subscription or resource group where you can identify what permissions have been set.
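If you have many projects, checking each connection by hand gets tedious. As a rough sketch (not part of the original walkthrough), duplicate detection is just grouping connections by their subscription/resource-group scope; the field names below are illustrative assumptions, not the exact Azure DevOps API schema:

```javascript
// Flag service connections that share the same subscription and
// resource-group scope. Field names are assumed for illustration.
function findDuplicateConnections(connections) {
  const seen = new Map();
  const duplicates = [];
  for (const conn of connections) {
    // A blank resource group means the connection is scoped to the
    // whole subscription, which is itself worth reviewing.
    const scope = `${conn.subscriptionId}/${conn.resourceGroup || '*'}`;
    if (seen.has(scope)) {
      duplicates.push({ scope, first: seen.get(scope), duplicate: conn.name });
    } else {
      seen.set(scope, conn.name);
    }
  }
  return duplicates;
}
```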
## Is a service principal in use?
Within the **Service connections** page under **Project Settings**, select a service principal and select **Usage history**. From here you can identify what pipelines have used the service principal and the dates. You should be able to identify if the service principal is no longer needed based on its usage.

You may want to consult those who have utilised the service principal in the past for their pipelines, to see if it is going to be used again in the future or if they are using another connection.
## Renaming duplicate display names in Azure Active Directory and DevOps
When you auto generate a service principal within DevOps to the same subscription multiple times, you will notice the display name will be in the exact same format. There are two places where you need to rename service principals so they are identifiable within DevOps and Azure AD. First, within the **Service connections** page, you can select the **Edit** option within a service principal's configuration page and enter a new display name within the field **Service connection name**.

The second place to rename is within Azure AD. You will need to select **Manage Service Principal** within the service principal connection page.

Once the page loads, select **Branding** in the left side menu and enter the display name within the **Name** field (remember to click save once completed).

## Set who can use service principals within DevOps
Open the service principal from within the **Service connections** page and select the three dotted icon in the top right of the screen and select **Security**.

From within this window, you can specify who can manage or use the connection, what pipelines are allowed to use the service principal and what other projects can use the connection.
## Before creating any new service principals
1. Check you don’t have a service principal already created with the access requirements you need;
2. Identify what level of access you need for the service principal to complete the work. You can edit the default contributor permissions so they have stricter access;
3. Consider who should have permissions to manage the service principal and who can use it. The smaller the list, the more secure your environment is;
4. Remove the option “Grant access permission to all pipelines”, and manually configure which pipelines should have access to use the service principal.
| officialcookj |
604,387 | Weekly Challenge 100 | Challenge 100 TASK #1 › Fun Time The task You are given a time (12 hour / 24 h... | 0 | 2021-02-15T12:12:40 | https://dev.to/simongreennet/weekly-challenge-100-4lf6 | perl, perlweeklychallenge | [Challenge 100](https://perlweeklychallenge.org/blog/perl-weekly-challenge-100/)
# TASK #1 › Fun Time
## The task
You are given a time (12 hour / 24 hour).
Write a script to convert the given time from 12 hour format to 24 hour format and vice versa.
Ideally we expect a one-liner.
## My solution
Let's start with the one liner.
```perl
($h,$m,$a)=($ARGV[0]=~/^(\d+):(\d+)\s*([ap]m)?$/);printf $a?("%02d:%02d",($h%12+($a eq"pm"?12:0)),$m):("%02d:%02d %s",($h%12)||12,$m,$h>=12?"pm":"am")
```
Yes, it works. But it really doesn't explain what I'm doing in any great detail. So the solution I am submitting actually explains the above in more detail.
Now I'm sure when they figured out time many millennia ago, computers were not even a pipe dream. This unfortunately makes time a difficult thing to handle in the digital age. Specifically, 12:00<b>p</b>m is one minute after 11:59<b>a</b>m. And [Internet time](https://en.wikipedia.org/wiki/Swatch_Internet_Time) never really took off, thankfully.
For this task I read a string, and split it into an `$hour`, `$minute` and `$apm` (what is that part actually called?). I then check that the `$hour` is valid (1-12 if am/pm is specified, 0-23 if not).
It's then just a matter of showing the converted time. If we are going from 12 hours to 24 hours, the new hour is `$hour%12`, and we add 12 if the indicator is 'pm'. Going the other way, we set `$apm` to 'pm' if `$hour >= 12`. We then use `$hour%12`, and use 12 if the new hour is 0. In all cases, the minutes value remains unchanged.
One gotcha I had was `./ch-1.pl 12:40 pm` didn't give me the result I desired. This is because `@ARGV` is actually two values `12:40` and `pm`. Using `./ch-1.pl "12:40 pm"` fixes this (at least in bash).
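For readers who don't speak Perl, here is the same hour arithmetic unrolled in JavaScript (a sketch of the logic, not the submitted solution):

```javascript
// 12h -> 24h: hour % 12, plus 12 if "pm" (so 12am becomes 0).
// 24h -> 12h: hour % 12, mapped back to 12 when it lands on 0.
function convertTime(input) {
  const match = input.match(/^(\d+):(\d+)\s*([ap]m)?$/i);
  if (!match) throw new Error(`unparseable time: ${input}`);
  const [, h, minute, apm] = match;
  const hour = Number(h);
  const pad = (n) => String(n).padStart(2, '0');
  if (apm) {
    const hour24 = (hour % 12) + (apm.toLowerCase() === 'pm' ? 12 : 0);
    return `${pad(hour24)}:${minute}`;
  }
  return `${pad(hour % 12 || 12)}:${minute} ${hour >= 12 ? 'pm' : 'am'}`;
}
```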
## Examples
```
» ./ch-1.pl "5:15 pm"
17:15
» ./ch-1.pl "19:15"
07:15 pm
```
# TASK #2 › Triangle Sum
## The task
You are given a triangle array.
Write a script to find the minimum path sum from top to bottom.
## My solution
Let's start with some fundamentals
* The smallest number in a choice doesn't lead to the smallest sum. Take `[1], [1,2], [8,9,1]` as an example. In this case, choosing the 1 in the second row won't result in the smallest sum. Therefore we need to walk all paths.
* The number of possible paths is 2<sup>@rows - 1</sup>.
* As noted in the task, the x value remains the same if going left, or becomes x + 1 if going right.
The task can be broken down into three parts.
1. It's sometimes a challenge to parse the input correctly. For this task, I slurp up all the input, and use a regular expression to extract the numbers. Once this is done, we know that the first row contains one number, the second row two numbers, the third row three numbers, and so on.
1. I then walk each path, and find the lowest value. For this I have a counter `$i` from 0 to 2<sup>$#rows</sup>-1. For each iteration, we use binary arithmetic to determine if we will walk left or right.
1. Once we have figured out the minimum path, we print the solution.
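Sketched in JavaScript (the submitted solution is Perl; this just mirrors the three steps above), the path walk looks like:

```javascript
// Walk all 2^(rows - 1) paths: bit k of the counter decides whether
// step k keeps the same x (left) or moves to x + 1 (right).
function minTriangleSum(rows) {
  const steps = rows.length - 1;
  let best = null;
  for (let i = 0; i < 2 ** steps; i++) {
    let x = 0;
    const path = [rows[0][0]];
    for (let k = 0; k < steps; k++) {
      if ((i >> k) & 1) x += 1; // go right
      path.push(rows[k + 1][x]);
    }
    const sum = path.reduce((a, b) => a + b, 0);
    if (best === null || sum < best.sum) best = { sum, path };
  }
  return best;
}
```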
## Examples
```
» ./ch-2.pl "[ [1], [2,4], [6,4,9], [5,1,7,2] ]"
1 + 2 + 4 + 1 = 8
» ./ch-2.pl "[ [3], [3,1], [5,2,3], [4,3,1,3] ]"
3 + 1 + 2 + 1 = 7
``` | simongreennet |
604,517 | MaiD: Hackable newsfeed reader with Postman. | How much time do you spend reading stuff on the internet?? If your answer is 'A lot' then MaiD is th... | 0 | 2021-02-15T13:48:12 | https://dev.to/nirmaljuluru/maid-hackable-newsfeed-reader-with-postman-2o6i | javascript | **How much time do you spend reading stuff on the internet??**
If your answer is *'A lot'* then MaiD is the solution you need.
**MaiD?**
If you are thinking "what the fluff is MaiD?", then let me simplify it for you.
*MaiD = Mail in your feeD*
Before you read the rest of the article, this is how a feed looks like: [Sample feed](https://pmg-feed-maid.udaykrishna.com/feedfiles/9f1dd4a8-a118-4d8a-91df-70e25f0b52f6.html)
**Backstory**
It all started on New Year. Like everyone else, I was working on New Year goals. One of them was to limit my time on the internet. For a software developer whose life is to make others spend their time online, it sounds ironic. I know.
Anyway, I listed down the apps/websites that I was spending my time on.
These came out as the top ones:
1. Reddit
2. Twitter
3. Medium
4. Hacker News
5. Dev.to
On average I am spending 3 hours a day reading stuff. Yes, 3 hours! And the worst part is I am reading stuff that I don't even need. I am getting distracted by the suggested posts and feed.
I thought to myself, wouldn't it be cool if there was an app that could email me daily all the top posts and articles from my favorite reading sources?
As a curious hacker, I started searching for solutions that can help me build personalized feeds. I was surprised that there aren't any solutions for this problem. Sure, there are many RSS readers, but they are also suggesting more articles/blogs to read 😅
I discussed with some of my friends and many are facing the same issue as well. Some were even willing to pay for a solution. Hmm, looks like I am onto something.
So I discussed with Uday and we thought of building a solution that would collect all the articles and posts and send you a custom feed.
Like a bunch of nerds, we started with the system design and architecture. During this time, I also came across a Postman hackathon. Postman introduced workspaces (which are nothing but collections of API calls). And we realized we could just use workspaces instead of creating a backend from scratch.
Yep, you read that right. We built our entire backend on Postman. That means no hassle of creating DB tables, handling auth, saving user preferences, etc.
Here is how we built:
1. We added all the feed sources as requests in a collection. Requests can return either XML (RSS feeds) or JSON.
2. We also created util functions to parse the response and store it in a variable. These functions are run once a request is run successfully.
3. We used postman scripts to handle the workflow(ie. To make requests in a specific order)
4. We added a space theme (for a better reading experience) to the feed and uploaded it to AWS S3.
5. The feed link is shared in the email.
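As a rough idea of what the util function in step 2 does for an RSS source (this is a simplified sketch, not the actual workspace code, and a real Postman script would persist the result with `pm.collectionVariables.set(...)`):

```javascript
// Pull title/link pairs out of an RSS response body. A regex parse is
// a simplification here; the field handling is illustrative only.
function extractRssItems(xml) {
  const items = [];
  const itemRe = /<item>([\s\S]*?)<\/item>/g;
  let match;
  while ((match = itemRe.exec(xml)) !== null) {
    const body = match[1];
    const title = (body.match(/<title>([\s\S]*?)<\/title>/) || [])[1];
    const link = (body.match(/<link>([\s\S]*?)<\/link>/) || [])[1];
    if (title && link) items.push({ title: title.trim(), link: link.trim() });
  }
  return items;
}
```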
So if anyone wants to use maid, all they need to do is:
1. Go to mailjet and request for api access(free btw).
2. Fork the [postman workspace](https://www.postman.com/read-stack/workspace/maid-mail-in-your-feed/overview) and use the mailjet credentials.
3. If you want to add any custom sources,add it as a new request and the util functions to handle logic.
**Note:**
1. You can use the AWS API key provided to upload to AWS -- Details in [postman workspace](https://www.postman.com/read-stack/workspace/maid-mail-in-your-feed/overview)
2. By default the theme is space; you can also customize it -- Details in the postman workspace.
3. If you prefer watching video on [Youtube](https://www.youtube.com/watch?v=1EqbrCyg9Rs&feature=youtu.be) | nirmaljuluru |
605,261 | Web Font Loading & The Status Quo | Let's start with the obvious: there's lots of great posts out there on font loading (which all tend... | 0 | 2021-02-18T01:31:06 | https://whistlr.info/2021/font-loading/ | javascript, html, font, webfonts | ---
title: Web Font Loading & The Status Quo
published: true
date: 2021-02-16 00:00:00 UTC
tags: javascript, html, font, webfonts
canonical_url: https://whistlr.info/2021/font-loading/
---
Let's start with the obvious: there's lots of great posts out there on font loading (which all tend to be 27 pages long for some reason) and using the `font-display` CSS property, and… you get the idea. These all _accept_ the status quo—that fonts cannot load synchronously like your CSS—and just describe ways to mask that.
But, it's my website, and I know exactly what fonts the user is going to need. So why can't I ask the browser to put a small font onto the critical path before a page displays at all? As an engineer, I find the lack of choice frustrating. 😠
I don't have a perfect solution, but this post lays out my gripes, a fallback solution via base64-encoding your fonts, and a platform suggestion. To start, here's the fundamental issue, shown via animation.
<img src="https://storage.googleapis.com/hwhistlr.appspot.com/assets/emojityper-2020-08-10.webp" width="768" height="524" />
While there's variants on this problem, there's two things happening here:
1. "Emojityper" displays with the system font first
2. The loaded font is _bigger_ than the system font—we see [layout shift](https://web.dev/cls/), which I'm paid by my employer to tell you is bad (it _is_ bad, but I'm also paid to tell you)
The status quo solution is to use the `font-display` CSS property (and some friends). And to be fair, traditional CSS can solve both of these problems. However, these issues are typically solved by _not displaying the offending text_ until its font arrives—even though the rest of your page is rendered.
The most frustrating issue here is that this "flash" takes all of a few frames—maybe 50-60ms. This is the choice I'd like: to delay rendering by a small amount of time. My opinion on this UX is that users are going to be more delighted by a page ready-to-go rather than one affected by a flash that confuses a user's eyes for mere milliseconds. 👀
### Case Study
On [developer.chrome.com](https://developer.chrome.com), we actually inline all of our stylesheets and images (largely SVGs) into each page's HTML in order to reduce the number of requests and make the page load faster. We're really happy with this solution, because for most users, their network is going to deliver that whole _single_ payload incredibly quickly.
Despite this sheer duplication of assets across every HTML page, our fonts still go to the network, and new users will still see a flash.
## Loading in general
For background on loading, see my [recent interactive post](https://whistlr.info/2020/understanding-load/). The TL;DR from that post is that the _only_ thing that can block a page from rendering is loading external CSS. And for fonts, your browser will asynchronously load a font when glyphs from it are needed—e.g., for the heading font of this blog, that's immediately, but only once the stylesheet has first arrived.
Here, I'm actually using two tricks to get you the font earlier (although neither prevents the flash and layout shift):
- I use `<link rel="preload" ... />` to request the font early, although this only helps if you have an external CSS file (if it's inlined in `<style>`, the font URL is _right there_)
- I also send the font via [HTTP2 Server Push](https://en.wikipedia.org/wiki/HTTP/2_Server_Push) _before_ any HTML goes to the user, although it seems like browser vendors are removing support for this due to misuse
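For reference, a font preload hint looks something like this (the file name is a placeholder; note that font preloads require the `crossorigin` attribute even for same-origin fonts):

```html
<link rel="preload" href="/fonts/your-font-eeb16h.woff2" as="font" type="font/woff2" crossorigin />
```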
Regardless of what you think of this post, preloading your font is a good idea. Modern HTTP is very good at sending you lots of files at once, so the earlier your user's font can get on that train, the better. 🚂🚋🚋
Font files should also be [fingerprinted](https://web.dev/love-your-cache/#fingerprinted-urls) and cached forever for future loads. I digress, but this loading issue—like so many—is only about the user's 1<sup>st</sup> load. With the advent of service workers, we as web developers have almost complete control over the user's 2<sup>nd</sup> load.
## Solutions, today
This is a tricky one. We can actually include a font inline in your blocking CSS file—by base64 encoding it, which has ~33% space overhead. There are no extra network requests here, and decoding is done in a blocking way.
```css
@font-face {
font-family: 'Carter One';
src: url('data:application/font-woff2;charset=utf-8;base64,d09GMgABAAAAAG74ABI...') format('woff2');
font-weight: normal;
font-style: normal;
}
```
Many folks argue that [base64 is a bad idea](https://csswizardry.com/2017/02/base64-encoding-and-performance-part-2/). Although, in that case study, the size of the image isn't listed—about 220k—and the author fundamentally disagrees with my assertion that fonts _can_ be critical resources.
There is cost here, both in space and decoding time. If you're going to base64 a font to avoid the flash, how can you minimize the cost?
- I find that most Latin custom fonts are about ~20k, and I wouldn't base64 anything substantially larger than that—keep it to a single font at most. (I'd use the [system font](https://whistlr.info/2020/system-font/) for body text, and leave a custom font for your headings or hero text.)
- Put the font declaration in a unique CSS file that's cached forever. Unlike the rest of your CSS, which you might change, the font is not going to change over time.
```
<!-- These will be downloaded in parallel -->
<link rel="stylesheet" href="./base64-encoded-font-eeb16h.css" />
<link rel="stylesheet" href="./styles-cakl1f.css" />
```
- Only ship woff2—[95%+ of users](https://caniuse.com/woff2) have support
- This is advanced, but if you can control what your user gets on their 2<sup>nd</sup> load (e.g., via a Service Worker), then you _could_ serve the user a real, cached woff2 as well and then use only it for repeat loads.
## Anti-patterns
There are other ways to ensure users don't see any part of your page before the fonts load. But they're going to involve JavaScript and that's just a rabbit hole that increases your site's complexity _real fast_. 📈
You could mark every part of your page as hidden via a CSS class, and then only remove it once you see a font arrive. You could do this via the [Font Loading API](https://developer.mozilla.org/en-US/docs/Web/API/CSS_Font_Loading_API) or by literally measuring the rendering size of a test `<div>` until it changes. These are not good solutions.
(This is something I happily do on [Santa Tracker](https://santatracker.google.com), but we literally have a loading screen, _lean in_ to a slow load, and the entire site requires JS. It's not suitable for _sites_.)
## A standards plea
Last year, a proposal was made to add [Priority Hints](https://wicg.github.io/priority-hints/).
Right now, this proposal is _just_ for hints about the importance of network traffic.
But perhaps it could include a hint choice of `critical` which informs a browser that this preload _may_ block page rendering—if it arrives quickly, of course.
```
<!-- Preload this font and block until used, with limited budget -->
<link rel="preload"
importance="critical"
href="/carter-one.woff2?v11"
as="font"
type="font/woff2"
crossorigin />
<!-- This could work for as="style", as="fetch" or others -->
<link rel="preload"
importance="critical"
href="/important-data.json"
as="fetch"
crossorigin />
```
This would allow for standards-based developer _choice_, and because it's a purely additive attribute, would have a sensible fallback for unsupported browsers (i.e., not to block the page at all). There's also a wide range of resources [you can preload](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/link#attributes), so it could be a versatile tool. ⚒️
## Summary
I find a lack of control over font loading frustrating, and using base64 for small fonts can help you if this problem frustrates you too. And if you find yourself trying to preload similarly-sized images 🖼️ to make your page work, that's actually one of the biggest signs this approach might help you—to me, that font is just as important as that site logo or navigation button. 🍔
To be clear, this can be a [footgun](https://en.wiktionary.org/wiki/footgun)—don't block page loading for minutes because 100k of fonts haven't arrived—use base64 sparingly to avoid a flash or [layout shift](https://web.dev/cls/). I don't think it makes sense for every site. I'm not even sure I'm going to implement this strategy on this blog.
Yet, to revisit the [developer.chrome.com](https://developer.chrome.com) case study from earlier: there, we happily inline images and our stylesheets. I don't think we should inline the fonts directly on the page—they're ~20k files which _never change_—but moving them to a synchronous, fingerprinted (and cached forever) stylesheet containing just the base64 font may be on the cards.
➡️ Let me know what you think on [Twitter](https://twitter.com/intent/tweet?text=Hi%20@samthor%20I%20think%20base64%20is%20the%20worst%20and%20fonts%20should%20not%20be%20critical%20to%20a%20page%20load%20%F0%9F%94%A5). | samthor |
605,450 | Web Scraping Yahoo Cryptocurrency Indices using Python
| The aim of this article is to get you started on real-world problem solving while keeping it super... | 0 | 2021-07-23T17:35:28 | https://proxiesapi.com/blog/Web-Scraping-Yahoo-Cryptocurrency-Indices-using-Python.php | The aim of this article is to get you started on real-world problem solving while keeping it super simple so you get familiar and get practical results as fast as possible.
https://youtu.be/vGUO6klO9O4 | proxiesapi | |
605,627 | JavaScript Interview Question #38: Can you add multiple arrays in JavaScript? | Can you add multiple arrays in JavaScript? What’s the output? . . . . . . . . . . . .... | 11,099 | 2021-05-06T11:40:55 | https://learn.coderslang.com/js-test-38-adding-3-arrays-of-integers/ | javascript, beginners, codenewbie, webdev | 
Can you add multiple arrays in JavaScript? What’s the output?
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
The function `add(x, y, z)` applies the `+` operator to the provided arguments. Or, simply put, adds them up.
In line 5 we provide it with 3 arrays.
Whenever you try to add arrays in JavaScript, they are first converted to strings. Each element is separated from the next one by a comma (with no space in between). In our case:
- `1,2`
- `3,4`
- `5,6`
Then these strings are concatenated, or “glued” together to make a result.
---
**ANSWER**: a string `1,23,45,6` will be printed to the console.
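You can verify this in the browser console or Node (assuming `add` is the plain three-argument function from the snippet):

```javascript
// The + operator converts array operands to strings first:
// [1, 2] becomes "1,2", so the "sums" are really concatenations.
const add = (x, y, z) => x + y + z;

console.log(add([1, 2], [3, 4], [5, 6])); // → "1,23,45,6"
```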
[Learn Full-Stack JavaScript](https://js.coderslang.com) | coderslang |
605,959 | Get JSON Data from API Call and pass it to component as a parameter | I have a URL that I am returning Json data, that is need to pass as a parameter to react component, t... | 0 | 2021-02-16T15:46:06 | https://dev.to/asp1979/get-json-data-from-api-call-and-pass-it-to-component-as-a-parameter-d7h | fetch, apicall, jsondata, passingparametertocomponent | I have a URL that I am returning Json data, that is need to pass as a parameter to react component, that component is: https://rsuitejs.com/components/multi-cascader
and here is my code:
```jsx
fetch('http://ABCServer/LabMag/Home/GetCategories')
  .then(response => response.json())
  .then(json => console.log(json));

ReactDOM.render(<div>
  <p>Cascade: </p>
  <MultiCascader data={json} style={{ width: 424 }} uncheckableItemValues={b}/>
  <hr />
  <p>Not cascaded:</p>
  <MultiCascader cascade={false} data={data} style={{ width: 224 }} />
</div>, document.getElementById('root'));
```
</div>, document.getElementById('root')); | asp1979 |
606,116 | Install a Haskell compiler on Macbook with chip M1 | Hi everyone! I struggled with this after getting my new macbook (Apple Silicon (arm64) based macs),... | 0 | 2021-02-16T19:13:25 | https://dev.to/raquelhn/install-a-haskell-compiler-on-macbook-with-chip-m1-50k1 | haskell, programming | Hi everyone!
I struggled with this after getting my new MacBook (Apple Silicon (arm64) based Macs). The instructions can be very confusing for a newbie, so I thought a post would be useful. I followed the instructions of Moritz Angermann @angerman_io

The first step is to open the terminal and install Rosetta 2:

```
/usr/sbin/softwareupdate --install-rosetta --agree-to-license
```
After this, let's install Nix:

```
sudo curl -L https://nixos.org/nix/install | sh
```
Then just follow the instructions (you just have to copy a path). Once it's installed, you can set up a Nix compiler, as explained in Moritz's tweet, by typing this:

```
nix-shell -p haskell.compiler.ghc882
```
Finally, type `ghci` and you should get the `Prelude>` prompt.

That is it!! You have Haskell. I hope it works and helps.
Raquel
| raquelhn |
606,226 | dev.to bug with creating a new post | I was going to file a bug on GitHub, but as a non-contributor, I wasn't sure if that would be more co... | 0 | 2021-02-16T20:16:46 | https://dev.to/jasterix/dev-to-bug-with-creating-a-new-post-3809 | bug, help | I was going to file a bug on GitHub, but as a non-contributor, I wasn't sure if that would be more counterproductive.
**Issue**: The title field to create a new post is shorter than the font
**Description**: The title field when you create a new post is too short and cuts off the text. This makes it impossible to see what you're typing
**To reproduce:**
1. Click on "Write a post"
2. Click into the title field. Once you click on the field, it immediately resizes.
**Browser**: This issue is present when using both Chrome and Firefox
### Screenshots:
**Before**:

**After**:


**My workaround**:
1. Open the dev console
2. Change the height (23px in Chrome, 24px in Firefox) to 80px
**Before**:

**After**:

**The Code**:
```html
<div data-testid="article-form__title" class="crayons-article-form__title">
<textarea type="text" id="article-form-title" placeholder="New post title here..." autocomplete="off" class="crayons-textfield crayons-textfield--ghost fs-3xl m:fs-4xl l:fs-5xl fw-bold s:fw-heavy lh-tight" aria-label="Post Title" autofocus="" style="height: 23px;"></textarea>
</div>
```
| jasterix |
607,799 | Know The Web: HTTP Cookie 🍪 | In this post, we are going to learn about Cookie certainly not the edible one. We'll discuss cookie p... | 11,371 | 2021-02-17T12:55:21 | https://souvikinator.netlify.app/blog/web-cookies/ | webdev, httpcookie, javascript, security | In this post, we are going to learn about Cookie certainly not the edible one. We'll discuss **cookie properties** and **security stuff** related to HTTP cookies, also **create cookie** on the way so make sure that you and your patience grab milk and cookie, enjoy the post.
## Cookie Time!
While using Facebook, Instagram, or any other services online, did you notice that once you logged into these services, you don't have to log in again when you visit these sites later?
You searched for shoes and the next moment when you visit any site, you get to see *ads* related to shoes.
Is there some mind-reading stuff going on?

To define, cookies are small chunks of *temporary data* (key-value pairs) in the browser which help with various functionalities in any web service (as mentioned above). These websites/web services set up cookies in your browser and use them for features like *managing your session on their service/website*, *tracking you*, and stuff like that. They can also be used to remember pieces of information that the user previously entered into form fields, such as names, addresses, passwords (not a good idea😅), and payment card numbers.
Now as these websites/web services are able to access the cookie they place in your browser which makes it clear that, *"every time you make a request to the website/web service, the cookie is sent to the server along with the request"*.
## 🕵️♂️ Sherlock mode ON!
Let's head over to a random site and have a look at their cookies. On the way, I'll explain about the properties. So I am heading to [motherfuckingwebsite.com](https://motherfuckingwebsite.com/). In developer tools open the **Application tab** and then to **cookie > https://mothe...**.
There you get to see the following:
[](https://postimg.cc/Xpvymdg7)
Those with *green* underline are options. **Name** & **Value** are self explanatory. The rest are what we need to understand.
- **Domain**
Each cookie has a domain pattern to which it belongs and can only be accessed by that specific domain pattern.
If a cookie named `cookie-1` is added for `.motherfuckingwebsite.com` (notice the `.`) then `cookie-1` can be accessed by any *subdomain* of *motherfuckingwebsite.com*. Example: `cookie-1` can be accessed by the domain `motherfuckingwebsite.com` as well as its subdomain like `www.motherfuckingwebsite.com` or `www2.motherfuckingwebsite.com` and so on.
If a cookie named `cookie-2` is added for a subdomain `xyz.motherfuckingwebsite.com` then it can be only accessed by its subdomain and itself. Example: `cookie-2` can be accessed by subdomain `xyz.motherfuckingwebsite.com` and its subdomain `abc.xyz.motherfuckingwebsite.com` and so on.
you can read more at [RFC2109](https://tools.ietf.org/html/rfc2109)
- **Path**
Suppose you want to make a cookie accessible only under a specific path; that's what this option is for. I'll explain in a while.
- **Expires/Max-age**
As I mentioned right at the start, *"cookies are temporary data"*, i.e. they have a *validity duration* after which they expire. How is the *validity duration* determined? By the web service/website. Whenever a website/web service creates a cookie, it also mentions its lifetime.
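The **Domain** matching described above can be sketched as a small predicate. (This is a simplification for cookies that carry a domain pattern as described; real browsers also distinguish host-only cookies and consult the Public Suffix List, which this ignores.)

```javascript
// Would a cookie scoped to `cookieDomain` be visible on `host`?
// A leading dot means "this domain and all of its subdomains".
function domainMatches(cookieDomain, host) {
  const d = cookieDomain.replace(/^\./, "").toLowerCase();
  const h = host.toLowerCase();
  return h === d || h.endsWith("." + d);
}

domainMatches(".motherfuckingwebsite.com", "www.motherfuckingwebsite.com");     // true
domainMatches("xyz.motherfuckingwebsite.com", "abc.xyz.motherfuckingwebsite.com"); // true
domainMatches("xyz.motherfuckingwebsite.com", "motherfuckingwebsite.com");      // false
```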
**HttpOnly**, **Secure** and **SameSite** will be explained in the security section.
Okay! enough talks. Let's create some cookies, heat up your oven (browser)
## 👨💻 The Client Way
First we'll discuss creating cookies from the client side, i.e. from the browser using JS, which is pretty easy.
`document.cookie`
How about having a look at existing cookie using JS? Just use `document.cookie` in the console and you'll see the following:
[](https://postimg.cc/JGdBL0T3)
Notice that each cookie is separated by a semicolon (`;`).
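Because `document.cookie` is just one semicolon-separated string, a common first step is to parse it into an object. A small illustrative helper (it assumes values were stored URI-encoded; plain values pass through unchanged):

```javascript
// Turn a cookie string like "a=1; b=2" into { a: "1", b: "2" }.
function parseCookies(cookieString) {
  const out = {};
  for (const part of cookieString.split(";")) {
    const p = part.trim();
    const eq = p.indexOf("=");
    if (eq === -1) continue; // skip empty/malformed entries
    out[decodeURIComponent(p.slice(0, eq))] = decodeURIComponent(p.slice(eq + 1));
  }
  return out;
}

// In a browser: parseCookies(document.cookie)
parseCookies("itsME=1; itsME2=2"); // → { itsME: "1", itsME2: "2" }
```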
- creating simple cookie
```js
document.cookie="itsME=1"
```
> **NOTE**: The above code doesn't override existing cookies. It only creates a new one.
[](https://postimg.cc/0zTMz31b)
You can see it's defined for domain `motherfuckingwebsite.com` now as per the properties we have discussed above, `www.motherfuckingwebsite.com` should not be able to access the cookie `itsME`.
[](https://postimg.cc/HjYJ8XR4)
and we don't see the cookie that we created hence we verified the properties.
- setting up cookie path
How about adding the **Path** option to our cookie? Let's go...
```js
document.cookie="itsME=7; path=/test";
```
The above code will only set the cookie for `motherfuckingwebsite.com/test`, and it can only be accessed there. Here is the example:
[](https://postimg.cc/Vr1JRC6w)
*Image 1*: we are accessing cookie from `motherfuckingwebsite.com` and there is no such cookie.
*Image 2*: we are accessing cookie from `motherfuckingwebsite.com/test` and we can see it.
- Setting cookie lifetime
Let's create a cookie with an expiry date. Now we can do this in two ways.
1. **Expires**: Takes *date* as value.
```js
//86400e3 is same as 86400000 i.e 24 hours in milliseconds
var exp_date=new Date(Date.now()+86400e3);
//refer template literals in JS if not familiar with ${}
document.cookie=`itsME2=2;expires=${exp_date.toGMTString()}`;
```
2. **Max-age**: Takes time (in *seconds*) as value.
```js
//86400 i.e 24 hours in seconds
document.cookie=`itsME3=3;max-age=86400`;
```
Above we have created both the cookie with a validity of 24 hrs. from the time the cookie was created. Here you can compare all three cookies we have set so far.
[](https://postimg.cc/QVmTf01y)
Notice! In the **Expires/Max-age** column you can see `ItsME2` and `ItsME3` have a date and time, but `ItsME` shows *session*. This is because when you don't mention any expiry time for a cookie, the browser considers it a **session cookie** and it expires as soon as you close the browser. Go ahead, give it a try.
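To avoid assembling these strings by hand every time, you could wrap the options in a tiny helper (just a sketch, covering only the attributes used so far in this post):

```javascript
// Build a cookie string like "itsME3=3; path=/test; max-age=86400".
function buildCookie(name, value, { path, maxAgeSeconds } = {}) {
  let cookie = `${encodeURIComponent(name)}=${encodeURIComponent(value)}`;
  if (path) cookie += `; path=${path}`;
  if (maxAgeSeconds != null) cookie += `; max-age=${maxAgeSeconds}`;
  return cookie;
}

// In a browser: document.cookie = buildCookie("itsME3", "3", { maxAgeSeconds: 86400 });
buildCookie("itsME", "7", { path: "/test" }); // → "itsME=7; path=/test"
```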
> 💡 Head over to [didthanoskill.me](http://www.didthanoskill.me/) and look for cookie from the URL bar. You'll see *1 cookie* in use. When you do `document.cookie` in the browser console, an empty string is returned which is weird. Go to the *Application* tab in the developer tool and there also you'll see no cookie. Any idea why is so? *Hint*: have a look at the source and if still don't understand then run the debugger in the dev tool to understand why is it happening so?
## 🖥️ The Server Way
We saw the client's Way of creating cookies. Now we'll create a cookie from the server-side and I'll use **NodeJS** and **express** for this.
Basically what happens is that when the client makes a *request* to the server, the server responds with a *response* which contains *headers*, and among those headers there is a `Set-Cookie` entry which tells the browser to create a cookie.
- creating a simple cookie.
```js
const app=require("express")();
app.get("/",(req,res)=>{
//setting response header
res.setHeader("set-cookie",["itsSERVER1=h1"]);
res.send("this is https://localhost:2000/");
});
app.listen(2000,()=>{
console.log(">2000<");
})
```
and we have it.
- setting up cookie path
```js
const app=require("express")();
app.get("/",(req,res)=>{
/*can also use res.setHeader() instead of
res.cookie()*/
res.cookie("itsSERVER1","h1");
//for path /hahahayes
res.cookie("itsSERVER2","yeet!",{path:"/hahahayes"});
res.send("this is https://localhost:2000/");
});
app.get("/hahahayes",(req,res)=>{
res.send("this is https://localhost:2000/hahahayes");
});
app.listen(2000,()=>{
console.log(">2000<");
});
```
gives following result:
[](https://postimg.cc/v4gxHyjJ)
[](https://postimg.cc/ZBcCD5Bd)
so on and so forth for other options as well.
## 🔒 Security
Security is a very important topic of discussion over here. As mentioned earlier, services like social media use various cookies to keep you logged in. If such cookies get into the hands of attackers, they can easily break into your account, and the rest you know.
When user privacy is a concern, it's important that any web app implementation invalidate cookie data after a certain timeout instead of relying on the browser to do it.
If you are using cookies to store some data and later rendering it in the DOM (which is a super duper bad practice), then make sure to keep the formatting valid: values should be escaped using the built-in `encodeURIComponent` function:
```js
var cookie_name="mycookie";
var cookie_value="myvalue";
document.cookie = `${encodeURIComponent(cookie_name)}=${encodeURIComponent(cookie_value)}`;
```
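The same encoding has to be undone with `decodeURIComponent` when the value is read back. A quick round trip with an illustrative value:

```javascript
// Characters like ";", "=" and "," would break the cookie format,
// so they are percent-encoded on write and decoded on read.
const rawValue = "a;b=c,d";
const pair = `${encodeURIComponent("mycookie")}=${encodeURIComponent(rawValue)}`;
// pair === "mycookie=a%3Bb%3Dc%2Cd": no literal ";" or "=" remains in the value,
// so splitting on the first "=" is safe.
const [name, value] = pair.split("=").map(decodeURIComponent);
```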
In section **The Client Way**, we easily accessed the website's cookie using JavaScript, so an attacker may find a vulnerability like [XSS](https://portswigger.net/web-security/cross-site-scripting) which enables them to execute malicious JS code on the website and bypass logins. From a developer's point of view, it's really hard to keep track of XSS especially in humongous applications with a lot of features. Due to this, some inbuilt security features are there in cookies, which prevent such attacks even if the attacker is able to execute some code.
You can check out [Hack this site basic 10](http://souvikinator.netlify.app/blog/hack-this-site-basic-10/) which demonstrates, what careless use of cookies can lead to.
**HttpOnly** is an option used by web servers when they set cookies. This option forbids any JavaScript access to the cookie. This is a precautionary measure to protect from certain attacks.
```js
//server side
const app=require("express")();
app.get("/",(req,res)=>{
/*can also use res.setHeader() instead of
res.cookie()*/
res.cookie("itsSERVERsecure","100",{httpOnly:true});
res.send("this is https://localhost:2000/");
});
app.listen(2000,()=>{
console.log(">2000<");
});
```
and you'll see a tick mark (✔️) under HttpOnly in the *Application tab* (developer tools). Try accessing it using JS.
If your cookie contains sensitive content then you may want to send it only over **HTTPS**. To accomplish this you have to include the **secure** option as shown below.
```js
//client side
document.cookie = "ItsMeSecure=6; secure";
```
```js
//server side
const app=require("express")();
app.get("/",(req,res)=>{
/*can also use res.setHeader() instead of
res.cookie()*/
res.cookie("itsSERVERsecure","100",{secure:true});
res.send("this is https://localhost:2000/");
});
app.listen(2000,()=>{
console.log(">2000<");
});
```
**SameSite** prevents the browser from sending the cookie along with cross-site requests. Possible values are *lax*, *strict*, or *none*.
The **lax** value will send the cookie for all same-site requests and top-level navigation GET requests. This is sufficient for user tracking, but it will prevent many [CSRF attacks](https://portswigger.net/web-security/csrf). This is the default value in modern browsers.
The **strict** value will prevent the cookie from being sent by the browser to the target site in all *cross-site browsing* contexts, even when following a regular link.
The **none** value explicitly states no restrictions will be applied. The cookie will be sent in all requests—both cross-site and same-site.
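On the server, all of these flags end up as attributes on the `Set-Cookie` response header (express's `res.cookie` accepts `httpOnly`, `secure` and `sameSite` options and serializes them for you). A purely illustrative serializer, just to show what the header value looks like:

```javascript
// Serialize a Set-Cookie header value with the security attributes above.
function serializeSetCookie(name, value, { httpOnly, secure, sameSite } = {}) {
  let header = `${name}=${value}`;
  if (httpOnly) header += "; HttpOnly";
  if (secure) header += "; Secure";
  if (sameSite) header += `; SameSite=${sameSite}`;
  return header;
}

serializeSetCookie("sid", "abc123", { httpOnly: true, secure: true, sameSite: "Lax" });
// → "sid=abc123; HttpOnly; Secure; SameSite=Lax"
```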
So make sure that you use cookies wisely 🦉.
Feel free to point out any issues or suggest improvements in the content.
🥳 So it's time to wrap up the post with a quote
> "Opportunities don't happen. You create them" -Chris Grosser
| souvikinator |
607,255 | GameDevHQ Intensive Training Week 01 / Week 08 [Day 2] | I will honestly lie if I tell you that today was easy. For the second day of week 1, personally, it... | 0 | 2021-02-17T00:06:19 | https://dev.to/rehtsestudio/gamedevhq-intensive-training-week-01-week-08-day-2-1124 | unity3d, gamedev | I will honestly lie if I tell you that today was easy. For the second day of week 1, personally, it took me a while to figure it out.
Today's task was to create the spawn manager, where we need to control the spawning of the enemy and to optimize it with object pooling from the beginning.
Spawn Manager
- Create start & end points on the map
- Turn the SpawnManager into a Singleton
- Spawn Enemies at the start position
- Randomize enemy types
- Assign target destination once spawned
- The appropriate delay between spawns
- Keep Environment tidy when spawning
- Recycle Enemies not Destroyed
- Wave System (Amount to spawn = 10 * currentWave)
- Use Object Pooling
The spawn manager actually started out very easy: all you have to do is create the starting point and end point where the enemies are going to walk the path, then create a spawn manager, turn it into a Singleton, and randomize the enemy types. When I read about the object pooling system, that's when I drew a blank for at least 3 hours🤣.
In order to make it fully work, I decided to take a small break and then get back to it. Once I analyzed what I needed to do, I created a simple pooling system: at the beginning of the spawn I instantiate 10 enemy objects, and depending on the wave that number will either increase, or the same 10 enemies will be reused and run depending on the number of the wave.
Day 2 really put my brain to work. My goal for each week is to tackle every task in the first two days of the week and have the rest of the week to optimize or play around with ideas for these projects.
Next up will be to see how I can optimize or play around with the spawn manager and the EnemyAI.
Day 2 done
To be continued
| rehtsestudio |
607,511 | Pengenalan Bootstrap | Bootstrap merupakan framework CSS yang menyediakan class dan komponen yang siap dipakai, sehingga kit... | 0 | 2021-02-17T06:29:03 | https://dev.to/syhn/pengenalan-bootstrap-2p58 | Bootstrap merupakan framework CSS yang menyediakan class dan komponen yang siap dipakai, sehingga kita tidak perlu menulis kode CSS dari nol, cukup hanya memanggil class yang telah disediakan oleh Bootstrap
| syhn | |
607,639 | How to Deploy your Website using Vercel | Hey Guys! Previously, I made a blog on how to deploy your website to GitHub Pages. If you... | 11,411 | 2021-02-17T09:38:21 | https://dev.to/therickedge/how-to-deploy-your-website-using-vercel-4499 | deploy, vercel, programming, coding | ## Hey Guys!
Previously, I made a blog on how to deploy your website to GitHub Pages.
If you did not check that out yet, check it out over <a href = "https://dev.to/code2rithik/how-to-deploy-your-website-using-github-pages-34hc">here</a>
But this time, we are Deploying our website using Vercel.
### 1. First, create an account if you don't have one. But if you do, then move to Step 4:

### 2. You will be redirected to the Dashboard. Then, Click on the Drop Down Menu and select "Add GitHub Org or Account":

### 3. Then, you will get a window that pops up. Select the option "Install". This won't install any application on your device though:

If it is successful, then you will get a window like this:

### 4. Now, head back over to your dashboard and click on "New Project":

### 5. Now, select your GitHub username in the dropdown:
### 6. Now select the GitHub repository that you want to host:

### 7. Now, select your account:

### 8. Then, proceed by clicking on next:

### 9. Next, enter the details and click on Deploy:

### 10. Then, it will take some time to process:

### 11. After a while, you will be able to see a Congratulations Screen:

### 12. If you want to visit your website, click on Visit and it will take you to the website:


## Bonus:
If you want more links to the same website, check the inbox of the email registered with your GitHub account; you will receive an email from the Vercel team with different links to the same website. (It will take some time to receive the email):

Hope you liked this tutorial, if you did click on the like button and comment down below for assistance.
Remember...
# Keep Coding Y'All 👨🏻💻
| therickedge |
🔥 Assembly: listing and reading Embedded Resources

❓ Step 0: what are Embedded Resources?

Developing in C# has always led me to use Embedded Resources, and I must say that without them a developer's life wouldn't be as interesting as it is now.

The article explains ...

1️⃣ How do I adjust my csproj file?

2️⃣ How do I find out which Embedded Resources are available in my project?

3️⃣ How can I use them?

... and now all that's left is to read the post and get answers to those questions.
📰 [Full Article](https://blog.devandreacarratta.it/assembly-embedded-resource-streamreader/?utm_source=devto&utm_medium=coding-tips&utm_campaign=dotnet&utm_content=assembly-embedded-resource-streamreader)
Have a good day! | devandreacarratta
607,790 | Step into Your Customers’ Shoes to Build Impactful SaaS Apps | Before you get started with your next SaaS project, make sure you put yourself into your customers’ p... | 0 | 2021-02-17T12:30:04 | https://dev.to/chirag191094/step-into-your-customers-shoes-to-build-impactful-saas-apps-2eh6 | Before you get started with your next SaaS project, make sure you put yourself into your customers’ place as that will help you envision and better understand what exactly is required. In this article, I will talk about the various aspects of SaaS app development. Let’s begin with the introduction of SaaS first.

## What is SaaS and How Does It Benefit the Businesses?
SaaS refers to Software as a Service – meaning software is delivered and maintained by way of a subscription model. Hosted on the cloud, SaaS products don’t need to reside on your computer which means you don’t need to worry about the storage space either. SaaS products don’t usually come with a lifetime license and there is no complex process to have these. You can use SaaS programs anywhere from any device and the best part is they can run without internet as well. If you are also looking for SaaS app development for your business, you must know the benefits that come along with it. Let’s quickly go over these benefits:
• SaaS is secure as cloud service providers are responsible for ensuring maximum data security
• SaaS is cost-efficient and there is no need to buy any hardware or maintain it
• SaaS is fully reliable as well. Since servers are located across the globe, it doesn’t matter even if one goes down. Your app will still continue to remain online
• SaaS is scalable and you can scale the app up/down as conveniently as a few clicks to fit your requirement
• No need to develop SaaS solutions from the scratch
Other than these, SaaS-based apps can be run on multiple devices with just a login, quite contrary to conventional software solutions. Another significant benefit of SaaS is that users can test the software even before they buy the subscription.
### Have SaaS Business Ideas? Turn Them into Reality!
Turning the best SaaS business ideas into reality is easy and possible only if you have the right [SaaS app development company](https://devtechnosys.com/saas-application-development.php) at your service. A SaaS app can be used as a separate business wherein you build a system for others or for the purpose of generating additional revenue. Wondering how would you generate additional revenue by building a SaaS product for your business? Let me tell you how?
You build a SaaS product, use it for your business, and then share the product with other businesses who need it. This way you help those businesses automate the processes and eventually add more bucks to your bank. What’s more, these products can be customized to meet specific business needs.
### Challenges of SaaS App Development
Well, all isn’t just great about building SaaS products there exist some intimidating challenges too. As it is with every project, SaaS business ideas aren’t always going to be a great hit and the risk of failure is always there.The challenges that may come across include:
**• Lack of trust** – If your customers don’t trust your SaaS model, you may have to struggle hard to sell your product, especially if you are a new entrant in the domain.
**• Fewer audiences** –If your target market is small, your SaaS product will not yield desired profitability. To avoid this, you will need a horizontal development by increasing functionality. Analyze your customers’ needs, market trends, and what your competitors are doing?
**• Poor app idea** – If your SaaS business idea turns out to be poor, mainly because of poor execution, there's no point in creating an app that doesn't cater to the needs of your customers.
### Hire Mobile App Development Company with Competent SaaS App Developers
The only way to ensure your SaaS app idea turns out to be a pleasant reality is: hire mobile app developers who have already worked on various SaaS products. A team of competent and dedicated developers with prior experience in SaaS can help you in this. But how would you determine that the mobile application development company you are going to hire is worthy of being assigned your project or not? Let’s discuss further!
### What to Consider When Looking for Mobile App Developers for Hire?
There are several important parameters on which you’ll have to judge the shortlisted companies in this regard and these include:
**SaaS development expertise**
When it comes to building great apps, experience does matter a lot. Only an experienced team of developers can help you shape a SaaS product that meets all your expectations and needs.
**Portfolio**
Make sure you scrutinize the portfolio of the company before you finally decide to hire mobile app developers for your project. The project/product portfolios, and case studies can help you better understand the methodology, competencies, and skills. In short, you get absolute clarity of their worthiness for your project.
**Client testimonials**
Before you hire dedicated developers for your app, don’t forget to read what their existing and past clients have to say about their competency, professionalism, and dedication to the role.
**Check their online presence**
Visit their website, social media presence, and other online platforms where they are present to know more about their overall performance as an app developer.
**Methodology and approach**
Learn about the methodology and approach your shortlisted and top mobile app development companies follow as that will help you make an informed decision.
**Technology stack**
Check whether the company you intend to hire offers [full-stack development services](https://devtechnosys.com/fullstack-development.php) or not, because the tools & technologies that are used in the development have a huge impact on the end SaaS product.
Depending on your target audience and the mobile devices they use, you will need to hire an Android app development company or an iPhone app development company that can convert your SaaS business idea into a pleasant reality.
### Conclusion
Whether you are looking to hire an Android app development company or an iPhone app development company, make sure you analyze, review, and study the shortlisted agencies. Do your homework well before giving out your SaaS app development project.
| chirag191094 | |
607,800 | Detecting on the front end whether a page is opened in a WebView | First time using this, I'm not very familiar with it. I used code I found, but it still couldn't detect it. function isWebview() { var useragent = navigator.userAgent; va... | 0 | 2021-02-17T13:01:11 | https://dev.to/douknowccy/webview-32fg | First time using this, I'm not very familiar with it.
I used the code I found, but it still couldn't detect the WebView.
```javascript
function isWebview() {
  var useragent = navigator.userAgent;
  var rules = [
    "WebView",
    "webview",
    "(iPhone|iPod|iPad)(?!.*Safari/)",
    "Android.*(wv|.0.0.0)",
  ];
  var regex = new RegExp(`(${rules.join("|")})`, "ig");
  return Boolean(useragent.match(regex));
}
```
Later I installed vConsole.
After opening the page inside the app, I could see the keyword needed for detection:
after adding 'wv' to the rules, the front-end page could finally detect the WebView.
```javascript
function isWebview() {
  var useragent = navigator.userAgent;
  var rules = [
    // "wv" is the key marker: Android System WebView adds it to the user agent
    "wv",
    "WebView",
    "webview",
    // iOS WebView: an Apple device user agent without "Safari/"
    "(iPhone|iPod|iPad)(?!.*Safari/)",
    "Android.*(wv|.0.0.0)",
  ];
  var regex = new RegExp(`(${rules.join("|")})`, "ig");
  return Boolean(useragent.match(regex));
}
``` | douknowccy | |
607,947 | Anomaly Detection For IoT Using Open Distro For ElasticSearch | This post is the third in series on using the AWS ecosystem for IoT applications. Previously, we int... | 0 | 2021-02-17T15:23:12 | https://dev.to/tejpochiraju/anomaly-detection-for-iot-using-open-distro-for-elasticsearch-14kj | ios, aws, elasticsearch, machinelearning | > This post is the third in a series on using the AWS ecosystem for IoT applications. Previously, we integrated AWS IoT with [Timestream/Quicksight](https://iotready.co/blog/metal-to-alerts-with-aws-iot-timestream-quicksight/) and [ElasticSearch/Kibana](https://iotready.co/blog/metal-to-alerts-with-aws-iot-elasticsearch-kibana/).
## Why
Anomaly detection is the foundation for applications such as Predictive Maintenance, which in turn is the driving force behind most industrial IoT deployments. Now that the _essentials_ of sensors, communication, storage and visualisation have largely been solved, attention has turned to machine learning based analytics. Cue the [new features from AWS IoT](https://aws.amazon.com/iot-analytics/) and Open Distro For ElasticSearch - the latter is the focus of this article.
## What are we going to build?
We will:
1. Simulate a smart grid sensor capable of measuring current, voltage, temperature and humidity
2. Train an anomaly detector in ODFE on each of these metrics or `features` as ODFE calls them.
3. Simulate various grades of anomalies and verify that the detector is working
4. Integrate the anomaly detector with Kibana's alerts ([previously discussed here](/blog/metal-to-alerts-with-aws-iot-elasticsearch-kibana))
### Simulated Smart Grid Sensor
Our simulated sensor helps monitor and predict failures in the medium voltage (MV) transmission grid. The sensor has the following nominal specifications:
- Voltage between 23kV and 25kV
- Current between 0A and 600A
- Temperature between 30C and 100C
- Humidity between 20% and 80%
> Values in these ranges are considered **good**. Anything outside is an **anomaly**.
### Anomaly Detection in ODFE
ODFE uses the Random Cut Forest (RCF) algorithm for anomaly detection. RCF is an unsupervised algorithm which analyses the data and identifies patterns. Data points that do not fit into these patterns are classified as anomalies and can include, amongst others:
- Spikes
- Changes in periodicity
- Unclassifiable data points
Each anomaly is given a score - low scores correspond to _normal_ and high scores to _anomalous_ data points. Read more about RCF in these references:
- [Real-time anomaly detection in ODFE](https://opendistro.github.io/for-elasticsearch/blog/odfe-updates/2019/11/real-time-anomaly-detection-in-open-distro-for-elasticsearch/)
- [RCF with AWS Sagemaker](https://docs.aws.amazon.com/sagemaker/latest/dg/randomcutforest.html)
- [RCF Algorithm on Manning](https://freecontent.manning.com/the-randomcutforest-algorithm/)
#### ElasticSearch Instance
Assuming you followed the previous post, you will already have an ElasticSearch instance running. However, we need at least 2 CPU cores to use anomaly detection. I am using a `t2.medium` instance for this post.
## Building Our First Anomaly Detector
Like before, we will start our simulator to inject sensor data into ElasticSearch. I started a script to simulate 21 sensors sending data every 10s.
```python
import random

# One reading per sensor every 10 s; all values inside the nominal ranges.
current = random.uniform(0.0, 600.0)
voltage = random.uniform(23000.0, 25000.0)
temperature = random.uniform(30, 100)
humidity = random.uniform(20.0, 80.0)
```
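As a side note, here is a minimal sketch (not from the article) of how each simulated reading could be shaped into an ElasticSearch document. The index name `sensors` and the field names are assumptions for illustration.

```python
# Hedged sketch: shape one simulated reading into an ElasticSearch document.
# The "sensors" index name and the field names are illustrative assumptions.
import random
from datetime import datetime, timezone

def make_reading(sensor_id):
    # One document per reading; the detector's features aggregate over these fields.
    return {
        "sensor_id": sensor_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "current": random.uniform(0.0, 600.0),
        "voltage": random.uniform(23000.0, 25000.0),
        "temperature": random.uniform(30, 100),
        "humidity": random.uniform(20.0, 80.0),
    }

# Sending would use the official Python client, e.g.:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch("https://localhost:9200")
#   es.index(index="sensors", document=make_reading(7))
```

Each simulated sensor would emit one such document every 10 seconds.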
The injected data looks a bit like this:

### Initialisation / Training
The ODFE documentation has an [excellent guide](https://opendistro.github.io/for-elasticsearch-docs/docs/ad/#get-started-with-anomaly-detection) for setting up a detector. Following that we end up with a configuration that looks like this:

Note that:
- I picked `Detector Interval = 5 minutes` and `Window Delay = 2 minutes`
- The documentation suggests smaller intervals make the system more real-time but consume more CPU, which sounds about right.
- You are allowed to add up to 5 features per detector - this seems to be an ODFE limitation rather than that of the RCF algorithm itself.
- I have chosen to track the `max()` value for each metric. You can use any of the standard ElasticSearch aggregations.
- Once configured, the detector took between 30-60 minutes to initialise and go live.
- I made the mistake of trying to enable the detector on a `t2.small` instance and kept running into an `unknown error`. This disappeared once I changed the instance size to `t2.medium`.
### Anomaly Generation
Once the detector was **Live**, I started generating anomalies in about 60% of the data points using the following snippet. Note that I am randomly introducing anomalies into one or all of the four metrics.
```python
# r in 1..10: branches 1-6 (60% of draws) inject an anomaly into one or all metrics.
r = random.randint(1, 10)
if r == 1:
    current = 0
    voltage = 0
    temperature = 0
    humidity = 0
elif r == 2:
    current = 1000
    voltage = 100000
    temperature = 300
    humidity = 150
elif r == 3:
    current = 1000
    voltage = random.uniform(23000.0, 25000.0)
    temperature = random.uniform(30, 100)
    humidity = random.uniform(20.0, 80.0)
elif r == 4:
    current = random.uniform(0.0, 600.0)
    voltage = 100000
    temperature = random.uniform(30, 100)
    humidity = random.uniform(20.0, 80.0)
elif r == 5:
    current = random.uniform(0.0, 600.0)
    voltage = random.uniform(23000.0, 25000.0)
    temperature = 300
    humidity = random.uniform(20.0, 80.0)
elif r == 6:
    current = random.uniform(0.0, 600.0)
    voltage = random.uniform(23000.0, 25000.0)
    temperature = random.uniform(30, 100)
    humidity = 150
else:
    current = random.uniform(0.0, 600.0)
    voltage = random.uniform(23000.0, 25000.0)
    temperature = random.uniform(30, 100)
    humidity = random.uniform(20.0, 80.0)
```
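The chain of `elif` branches above can also be expressed as a small table of per-metric spike values. This is just a refactoring sketch with the same 60/40 split, not code from the article.

```python
import random

# Refactoring sketch: "normal" generators plus out-of-range spike values per metric.
NORMAL = {
    "current": lambda: random.uniform(0.0, 600.0),
    "voltage": lambda: random.uniform(23000.0, 25000.0),
    "temperature": lambda: random.uniform(30, 100),
    "humidity": lambda: random.uniform(20.0, 80.0),
}
SPIKE = {"current": 1000, "voltage": 100000, "temperature": 300, "humidity": 150}

def sample(r=None):
    """One reading; r follows the 1..10 draw (branches 1-6 inject anomalies)."""
    if r is None:
        r = random.randint(1, 10)
    reading = {k: gen() for k, gen in NORMAL.items()}
    if r == 1:                     # every metric drops to zero
        reading = {k: 0 for k in NORMAL}
    elif r == 2:                   # every metric spikes out of range
        reading = dict(SPIKE)
    elif 3 <= r <= 6:              # exactly one metric spikes
        key = list(NORMAL)[r - 3]
        reading[key] = SPIKE[key]
    return reading
```

The table makes it easy to add a metric or change a spike value in one place.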
The resulting timeseries charts, in their glorious randomness, look like this:

### Anomaly Detection
And just like that, the detector is triggered within the first time interval. This is great - with little knowledge of machine learning and zero code, we set up a self-taught anomaly detector!

### Hang On...
If you examine the anomaly grades, you will notice that the grade reduces in each time interval until the detector no longer considers the signals to be anomalous. This reminds me of an old joke,
> Said the guru to his disciple, "Next year is going to be really difficult for you. You will not meet your family or friends for a long time and you will witness a lot of suffering. In fact you won't be able to step outside your own house!". "And the year after?", asked the disciple. The guru replied, "You will get used to it".
Jokes aside, **why** is this happening?
The anomaly detector, as mentioned above, is self-taught. And it keeps learning - even as anomalous data streams in. As the kind folk at ODFE explained to me, if 5% of your data is anomalous, is it really anomalous or, in fact, the *new normal*? The anomaly detector, naturally, adapts to this new normal and gives these signals a decreasing grade until they are fully *normalised*.
This makes sense, it's just not what I expected.
### How do you solve this?
> If your data has infrequent anomalies, there's nothing to fix. The existing plugin already works well!
Freezing the anomaly detection data model by stopping the learning phase should solve this problem. I have opened a [feature request](https://github.com/opendistro-for-elasticsearch/anomaly-detection/issues/388) for this very use case.
A good suggestion from the ODFE team was to use a combination of rule based detection algorithms and ML based anomaly detection. This makes sense, especially since there are a few other issues with this domain-agnostic approach:
- All signals, across all devices, in a time interval are given a single anomaly grade. We may need to classify these anomalies for priorities and, more importantly, identify the specific devices which are reporting anomalous data.
- We may need different anomaly detection for each SKU. E.g. 300A current is anomalous for a sensor rated at 200A but normal for a 500A sensor. With ODFE, we would need to send data from each SKU to a different index and set up separate detectors for each.
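To illustrate the hybrid suggestion, a rule-based check per SKU can be as simple as a threshold table consulted alongside the ML detector. The SKU names and ratings below are hypothetical examples, not values from the article.

```python
# Illustrative rule-based check: per-SKU rated limits (hypothetical values).
RATINGS = {
    "sku-200A": {"current_max": 200.0},
    "sku-500A": {"current_max": 500.0},
}

def rule_based_anomalies(sku, reading):
    """Return the metrics in `reading` that violate this SKU's rated limits."""
    limits = RATINGS[sku]
    violations = []
    if reading["current"] > limits["current_max"]:
        violations.append("current")
    return violations

# The article's example: 300 A is anomalous for a 200 A-rated sensor,
# but normal for a 500 A-rated one.
```

Such a check needs no training, is per-device by construction, and can run before the data ever reaches the detector.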
## Integrating Alerts
Once an anomaly detector has been set up, it can be used as a source in the existing Alerts plugin for ODFE. We have previously discussed this - all that changes is that we define our monitor using our new anomaly detector. Yes, that's all!

## Conclusions and Next Steps
Anomaly detection is a relatively new feature in ODFE and is already good at detecting anomalies; it only gets tripped up when the anomalies are frequent or persistent. If the [feature request](https://github.com/opendistro-for-elasticsearch/anomaly-detection/issues/388) is accepted and built, we are in job-done territory for simple use cases.
For sensitive applications like smart grids and perhaps industrial monitoring, we are exploring solutions that combine intelligence on the cloud and at the edge. Over the coming weeks and months, we will write about our work with:
- Rule based calibration and detection at the edge
- [Fuzzy logic](https://www.sciencedirect.com/science/article/pii/S0888613X96001168) based fault diagnosis at the edge
- ML at the edge using projects such as [TinyML](https://www.tinyml.org/)
## Ideas, questions or corrections?
Write to us at [hello@iotready.co](mailto:hello@iotready.co) | tejpochiraju |
608,120 | CSS animation-delay Property | The CSS animation-delay property is used to specify the delay for the start of an animation. This is... | 0 | 2021-02-19T06:59:15 | https://sharepointanchor.com/learn-css/css-animation-delay-property/ | learncss, cssa | ---
title: CSS animation-delay Property
published: true
date: 2021-02-17 11:59:25 UTC
tags: LearnCSS,CSSA
canonical_url: https://sharepointanchor.com/learn-css/css-animation-delay-property/
---
The [CSS](https://sharepointanchor.com/learn-css/ "CSS") animation-delay property is used to **specify the delay for the start of an animation**. This is one of the CSS3 properties. The animation-delay value is defined in **seconds** (s) or **milliseconds** (ms). Its **default value** is **0** and **negative** values are also allowed.
The animation-delay property accepts the following values:
- **time**
- **initial**
- **inherit**
## Animation-delay Characteristics:
| **Characteristic** | **Value** |
| --- | --- |
| **Initial value** | 0s |
| **Applies to** | all elements, **`::before`** and **`::after`** pseudo-elements |
| **Inherited** | no |
| **Computed value** | as specified |
| **Animation type** | discrete |
| **JavaScript syntax** | **`object.style.animationDelay = "1s";`** |
## Syntax:
```
animation-delay: time | initial | inherit;
```
## Values:
| **Value** | **Description** |
| --- | --- |
| time | This value **defines the number of seconds (s) or milliseconds (ms) to wait** before the animation will start. It is an optional one. |
| initial | It will **set the property to its default value**. |
| inherit | This value **inherits the property from its parent element**. |
## Example of the animation-delay property:
The following code sets the animation delay to 3 seconds. Thus, the animation starts after 3 seconds.
```
<!DOCTYPE html>
<html>
<head>
  <style>
    div {
      width: 120px;
      height: 120px;
      background: #00B69E;
      position: relative;
      animation: delay 5s infinite;
      animation-delay: 3s;
    }
    @keyframes delay {
      from {
        left: 0px;
      }
      to {
        left: 300px;
      }
    }
  </style>
</head>
<body>
  <h2>Animation-delay example</h2>
  <p>Here the animation starts after 3 seconds.</p>
  <div></div>
</body>
</html>
```
## Result:
The following image shows the output of the above code.
<figcaption>Animation-delay property</figcaption>
## Example of animation-delay property with a negative value:
In the code below, we use the animation-delay property with a negative value (-2 seconds).
```
<!DOCTYPE html>
<html>
<head>
  <style>
    div {
      width: 100px;
      height: 100px;
      background: #38558C;
      position: relative;
      animation: delay 5s 1;
      animation-delay: -2s;
    }
    @keyframes delay {
      from {
        left: 0px;
      }
      to {
        left: 300px;
      }
    }
  </style>
</head>
<body>
  <h2>Animation-delay example with negative value.</h2>
  <p>Here, the animation will start as if it had already been playing for 2 seconds.</p>
  <div></div>
</body>
</html>
```
## Result:
After executing the above code, you will get the result as shown in the image below.
<figcaption>Animation-delay with negative value</figcaption>
## Example of animation-delay property with milliseconds:
In this code, we apply the animation-delay property with 300 milliseconds.
```
<!DOCTYPE html>
<html>
<head>
  <style>
    div {
      width: 120px;
      height: 120px;
      background: #8F3E87;
      position: relative;
      animation: delay 5s 1;
      animation-delay: 300ms;
    }
    @keyframes delay {
      from {
        left: 0px;
      }
      to {
        left: 300px;
      }
    }
  </style>
</head>
<body>
  <h2>Animation-delay example in milliseconds.</h2>
  <p>Here, the animation will start after 300ms.</p>
  <div></div>
</body>
</html>
```
## Result:
By running the above code, you will get the result as shown in the image below.
<figcaption>Animation-delay with 300 ms</figcaption>
## Browser-Support:
<figcaption>Browser-support</figcaption>
The post [CSS animation-delay Property](https://sharepointanchor.com/learn-css/css-animation-delay-property/) appeared first on [Share Point Anchor](https://sharepointanchor.com). | anchorshare |
608,140 | Multi-account AWS environments with superwerker | Managing and securing multiple AWS accounts gets complex. superwerker is a free and open-source... | 0 | 2021-02-17T16:33:34 | https://sbstjn.com/blog/superwerker-aws-multi-account-environment/ | aws, cloud, opensource, github | Managing and securing multiple AWS accounts gets complex. [superwerker](https://superwerker.cloud) is a free and open-source solution to automate the setup and management of your multi-account AWS environments. Based on our experiences at [superluminar](https://superluminar.io/), we teamed up with [kreuzwerker](https://kreuzwerker.de/) from Berlin to bundle prescriptive best practices from multiple years of cloud consulting and created [superwerker](https://superwerker.cloud).

Available as an official [AWS Quick Start](https://aws.amazon.com/quickstart/architecture/superwerker/), [superwerker](https://superwerker.cloud) helps you to set up various AWS services recommended for AWS cloud environments consisting of multiple AWS accounts.
[Read more »](https://sbstjn.com/blog/superwerker-aws-multi-account-environment/) | sbstjn |
608,177 | Day 4 | Day 4/100 of #100DaysOfCode scroll-animation | Codewars | Node.js Hours coded: 3.2 Lines of code: 2... | 11,311 | 2021-02-17T17:31:46 | https://www.linkedin.com/feed/update/urn:li:activity:6767854732536115201/ | 100daysofcode | Day 4/100 of #100DaysOfCode
scroll-animation | Codewars | Node.js
Hours coded: 3.2
Lines of code: 246
Keystrokes: 3666
1. Completed the "scroll-animation" project. It's part of the "50 Projects in 50 Days" Udemy Course.
Technology: HTML, CSS, JavaScript DOM
2. I tried to solve the "(4kyu) Next bigger number with the same digits" problem on Codewars. The result was Passed: 95, Failed: 55. I tried for more than 2 hours but could not pass all the tests.
3. I enjoyed my time practicing with the Postman tool. It's a great tool for dissecting RESTful APIs.
via @software_hq's #vscode extension https://lnkd.in/ggsUNjy | rb_wahid |
608,207 | React - how to create dynamic table | Hello Coders! 👋 😊 In this article, I would like to show you how to create a dynamic table... | 0 | 2021-02-17T18:51:21 | https://dirask.com/posts/How-to-create-customized-dynamic-table-in-React-pqa53p | react, javascript, webdev, html | ### Hello Coders! 👋 😊
In this article, I would like to show you how to create a dynamic table in React.
__Before we start__, I would highly recommend you to check out __runnable examples__ for the solution on our website:
[How to create customized dynamic table in React](https://dirask.com/posts/How-to-create-customized-dynamic-table-in-React-pqa53p)
The final effect of this post:

The example below presents how to create a dynamic table from an array. The table consists of a header and some data records. While creating the records, use the `map()` function to convert them into elements.
Remember that each record should have a unique key 🗝️ - it helps React optimally manage changes in the DOM. Such a key may be, for example, the `id` assigned to an element of the table.
```jsx
import React from 'react';

const tableStyle = {
  border: '1px solid black',
  borderCollapse: 'collapse',
  textAlign: 'center',
  width: '100%'
};

const tdStyle = {
  border: '1px solid #85C1E9',
  background: 'white',
  padding: '5px'
};

const thStyle = {
  border: '1px solid #3498DB',
  background: '#3498DB',
  color: 'white',
  padding: '5px'
};

const App = () => {
  const students = [
    { id: 1, name: 'Bob', age: 25, favFruit: '🍏' },
    { id: 2, name: 'Adam', age: 43, favFruit: '🍌' },
    { id: 3, name: 'Mark', age: 16, favFruit: '🍊' },
    { id: 4, name: 'John', age: 29, favFruit: '🍒' }
  ];

  return (
    <div>
      <table style={tableStyle}>
        <tbody>
          <tr>
            <th style={thStyle}>Id</th>
            <th style={thStyle}>Name</th>
            <th style={thStyle}>Age</th>
            <th style={thStyle}>Favourite Fruit</th>
          </tr>
          {students.map(({ id, name, age, favFruit }) => (
            <tr key={id}>
              <td style={tdStyle}>{id}</td>
              <td style={tdStyle}>{name}</td>
              <td style={tdStyle}>{age}</td>
              <td style={tdStyle}>{favFruit}</td>
            </tr>
          ))}
        </tbody>
      </table>
    </div>
  );
};

export default App;
```
You can run this example [here](https://dirask.com/posts/How-to-create-customized-dynamic-table-in-React-pqa53p)
That's how it works.
If you found this solution useful, you can react to this post or just leave a comment to let me know what you think. Thanks for reading! 😊
## Write to us!
If you have any problem to solve or questions that no one can answer related to a React or JavaScript topic, or you're looking for mentoring write to us on [dirask.com -> Questions](https://dirask.com/questions) | diraskreact |
608,221 | The Top 3 Office 365 Backup And Recovery Solutions | Having a comprehensive and customizable Office 365 backup and recovery solution is increasingly impor... | 0 | 2021-02-17T19:13:03 | https://dev.to/hugholssen/the-top-3-office-365-backup-and-recovery-solutions-42hf | recovery, office365, backup, microsoft |
Having a comprehensive and customizable <a href="https://vmarena.com/altaro-office-365-backup-solution/">Office 365 backup</a> and recovery solution is increasingly important for Office 365 clients.
Backup and recovery solutions capture a copy of a file, database, or even an entire computer at a given time and save the data to a secondary storage device so that users can restore it in the future. Microsoft does not provide a native backup for Microsoft Office 365.
Standard settings only protect data for 30-90 days on average, leading to many complications when organizations believe their systems are backed up and later discover that the data is missing.
Luckily, I am about to share with you the top 3 backup solutions for Office 365, starting with Altaro:
## 1. Altaro Office 365 Backup
Altaro Software is a market-leading vendor for backup and recovery solutions for SMBs and MSPs. Altaro <a href="https://www.altaro.com/office-365-backup/">Office 365 Backup</a> is their MS O365 replication and restoration software solution, which focuses on backing up your Office 365 mailboxes and files stored in OneDrive and SharePoint.
Office 365 Backup is cloud-based, making it easy to deploy and configure. The solution is also easy to manage via a central cloud management console, from which administrators can configure complete and granular data recovery.
## 2. Veeam Backup for Microsoft Office 365
<a href="https://www.veeam.com/backup-microsoft-office-365.html">Veeam</a> is one of the global market leaders in backup and recovery solutions, holding the largest market share in EMEA (Europe, the Middle East, and Africa).
Veeam offers a range of solutions to suit business needs, with their Backup & Replication solution being one of their most popular products.
## 3. Commvault Backup & Recovery
Commvault is a market leader in data and information management, offering intelligent, scalable solutions. Their powerful, practical products have resulted in <a href="https://www.commvault.com/">Commvault</a> being recognized as a leader for eight consecutive years in the Gartner Magic Quadrant for Data Center Backup and Recovery Solutions.
## Wrap Up
I know it’s a tough choice, and all of the companies we mentioned offer some great products. But ultimately the decision is yours, so share your thoughts on which of our __top 3 Office 365 backup and recovery solutions__ you think is the best.
| hugholssen |
610,509 | Why Programmers don't make money? | I am not aware of anything in your mind, but if you are reading this post, you probably don’t agree w... | 0 | 2021-02-18T06:49:48 | https://dev.to/songa1/why-programmers-don-t-make-money-3kal | earnmoneyprogramming, programming, programmersdontmakemoney, problemsolving | I am not aware of anything in your mind, but if you are reading this post, you probably don’t agree with me. Right? And I know that it would be hard for many people to believe this but if you’re going to read this to the end, you will understand what I mean.
When I started programming, I was very curious. I knew nothing about programming. All I wanted was to know how systems work and how people create applications. I also wondered why so many of the richest people are programmers. How do they make money? Is programming really the way to become the richest person?
As I went deeper into learning different programming languages, I started to understand that programmers don’t just make money. Programming is not about making money, and becoming a programmer is not a way to become rich, even though it may lead you there.
As days passed, I understood that programmers solve problems. They identify challenges and develop solutions for the problems they identify. And what about making money?
**Let me get to that.**
When you develop solutions, you are solving some problem, but also at the same time, you can make money from it. And many people are wrong when they think that money comes first. When you put money on the front, you are not only poisoning your mind, you are also limiting the level of what you can achieve. I am not ignoring the fact that many new programmers join programming clubs or communities because they are thirsty for money.
Because they have heard someone saying that programmers make a lot of money, they think that they will also make a great amount of money. I can understand them. But let me ask you something, what causes most people to give up?
Well, for some it’s because they are not aware of what they need, for others it’s because of society and current environmental influences, and for others, it’s because they didn’t get what they thought they would get or because they got it and they think they don’t need to work anymore.
I am now talking about those who give up because they think they have found what they wanted.
Let’s take the example of someone who came into programming looking for money: he develops some kind of application and luckily makes a lot of money, and then, because he thinks he has found what he wanted, he may even forget that he is a programmer. That’s like students: many of them study for an exam, but they don’t actually know when the real exam will come.
But when your commitment is to solve problems, you will always solve problems because there is always a new problem. And if you are to make money, you will make money constantly.
I am not telling you to leave programming if you are looking for money. Actually, you have a point too, but your goals should be clear. Making money is not a goal; it may be an outcome of your achievements towards your goal, or even a reward for consistently achieving it.
So, keep this in mind, _Programmers don’t make money, they solve problems._ And before deciding to become one, make your goals clear.
| songa1 |
610,679 | Mathematical Calculator | Dealing with math homework statistics seems to be a daunting task though; It has its benefits that sh... | 0 | 2021-02-18T11:04:15 | https://dev.to/jinjohn38/mathematical-calculator-mme | Dealing with statistics homework can seem a daunting task, but it has its benefits. The first is that you prepare and further improve your problem-solving skills; another is that by doing your math homework you are continually exposed to the material, which helps you get better. Math homework help speeds up the process of resolving questions and disputes and checking your completed results. For this reason, it is a good idea to keep several online websites in mind that can help speed up your math work, as such choices can make a big difference.
<a href="https://www.standarddeviationcalculator.io/"> https://www.standarddeviationcalculator.io/ </a>
Online Stats Calculator
There are high-quality online statistics calculators that provide the mathematical help you need for homework. One excellent example of such a website is easycalculation.com. Although the page addresses a number of statistical problems, it offers more than just online statistics calculations: root mean square, probability, and factorial are just some of the online calculators provided by the page. As with other standalone calculators, make sure you submit your data in order to solve for the parameters you need.
Search the website online. The left pane of the site contains a number of links that, when clicked, take you to a specific page. Since I'm mainly <a href="https://medium.com/nyu-a3sr-data-science-team/how-to-talk-about-statistics-and-data-in-an-era-of-public-distrust-a-guide-for-statistical-e51bdb7965a2"> talking about statistics </a> and asking for help with statistics homework, click the statistics link. You will then see hyperlinks to the calculators mentioned previously. Go through and look at several calculators, at least with the intention of understanding how to use them. A nice thing about the online calculators on the website is that they usually give you the exact formula used in the calculations. If the problem is caused by the terminology of a particular statistical term, the website also offers discussions of phrases that are confusing or unintelligible. This is especially useful if you need statistics homework help, but be careful when using it: while each website will do its best to provide you with the most accurate information, there is still a chance it gives you incorrect information. | jinjohn38 |