Domain Extensions and Your Project
2020-03-22T21:24:01
https://dev.to/kailyons/domain-extensions-and-your-project-1b1f
**NOTICE: I tried to limit how many links I reference; there should only be one, but if I mention multiple, please do not click them or go to them, as I have no control over them. You have been warned.**

Let's make it clear: I am a domain extension nut. A weird hobby of mine is domain extensions, and also explaining how I feel about them. This article is meant to help you pick which TLDs to use. TLDs are Top-Level Domains, also known as domain extensions: the .com, .to, .org, .net, .io, and about a trillion others to pick from. I want to go through four main sections: first, the domain extensions to avoid at all costs; second, the okay-enough domain extensions; third, the extensions you should look into; and finally, the extensions that are clever for some projects. Let's jump into it.

# AVOID THESE EXTENSIONS!!

Non-applicable trade domains (.accountant, .shiksha, .builders, etc.). Notice the word "non-applicable." In short, if your project isn't a build tool or accounting software, stay away from these domains. Any skill or trade TLD that doesn't apply to your project is best avoided.

Free domains. These are an ABSOLUTE must to avoid. You know how domains like .cc usually represent sketchier sites because they are cheap? Free domains usually belong to the sketchiest. I have seen them host a plethora of sites: malware, projects running stolen code, and a plethora of different scams, crams, and whams. One notable example was a URL shortener that abused TinyURL, did file hosting on the Discord CDN while lying about it, ran a Pastebin service whose "backend" I never identified, and did free hosting on Glitch while claiming it was self-hosted. I will write an article about this in the future. As a side note, "free domains" here means Freenom domains; Freenom is a TLD seller who gives out free domains. Freenom itself is a tiny bit sketchy, but not too bad.
It is mostly the TLDs themselves. Also, don't worry, .js.org people, all is good; you will be mentioned *later.*

# Okay Domains

Generics: your .com, .co, .org, .net, even .cc, .to, and .me. These TLDs usually carry meaning by nature: .net can mean network, .org suits general organizations, .co reads as an elegant product, .com as an average product, .cc as a cheaper product, and so on. These TLDs are also best known as "not just software" domains.

# Good Domain Extensions

These are the ones to focus on. .ai, .dev, and .io are the key three here. They are good for any job, but they have a clear dev stance to them. .codes, .app, .cloud, .gg, and so on also qualify. Some TLDs that other people might avoid can be good too, like my favorite domain I ever owned, viruses-to.download, which a friend of mine now has. Probably don't visit the page; he is using it for file hosting and plans to do virus-testing things there, so be warned. Also, in the next year or so neither of us might have it anymore.

# Special extensions

These TLDs are the best on the market; your project will love them. Remember .js.org? Yeah, that's one of these: free, with a clear JavaScript objective. .sh is another good one, evoking shell scripts in Bash. .py is another good one that is wonderful for Python developers. .wtf has a lot of potential, but personally I see it as a gross TLD because of one specific website about... not being nice to animals... in a non-violent but graphic way. Moving on. The ones I love the most are .one, .is, and more.

# Bonus: Where to buy them

Now, I am not sponsored; if I were, I would have something better than a dual-core laptop with four gigs of RAM. Anyway, I always recommend Hover, 101Domain, and Namecheap as mains, name.com as a backup, and in general I stay away from GoDaddy. There are other possibly good registrars out there, but these are what I use. Have fun and be safe online.
# Interesting happenstance

While trying to find TLD screenshots and to un-forget some TLDs, I typed in a random string of keyboard spam and found that one of the results, under the .cm TLD, was already taken. Weird.
kailyons
Starting out with GraphQL
Understanding the Purpose, and some key early tips
2020-03-24T14:13:23
https://dev.to/heroku/starting-out-with-graphql-5g0m
json, graphql, beginners, webdev
---
title: Starting out with GraphQL
published: true
description: Understanding the Purpose, and some key early tips
tags: json, graphql, beginners, web-dev
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/h4fr5nleot3nw21n043v.png
---

## What is GraphQL?

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. By running a GraphQL server (e.g. [Apollo GraphQL](https://www.apollographql.com/)), your existing applications can send parameterised requests and get back JSON responses.

## What is it good for?

Is GraphQL the answer to all of life's problems? Maaaaaaybe? But it isn't a database or a web server, the two things it's most often confused for. Let's dive in.

GraphQL is a communication standard. Its goal is to let you request all the data you need with a single, fairly compact request. It was born as an attempt to improve on REST APIs for populating web pages with data. The classic conundrum looks like this:

WEBPAGE: When Bob comments on a photo, I want to show a tooltip with profile pics of Alice and Bob's top 5 mutual friends.
APP: Okay, here's Bob's 'user' record.
WEBPAGE: This has all his friends' IDs; I need their mutual friends.
APP: Hey, good news, I built an endpoint that takes two users' IDs and returns their mutual friends.
WEBPAGE: Great!
APP: They have 137 mutual friends.
WEBPAGE: Geez, I want the top 5 by date, but... okay, now can I get their profile pictures?
APP: Sure, here's the first friend's 'user' record.
WEBPAGE: I need-
APP: Here's the second friend's complete 'user' record.
WEBPAGE: You don't just have the photos?
APP: Nope! Here's the third friend's 'user' record. Geez, this is taking a while, huh? SOMEone's on 3G, amirite?

-fin-

What's wrong with this picture? In general, two things:

* Overfetching: while most REST APIs will have a way to ask for a 'top 5', there's usually no way to ask for *some* information.
We only wanted Bob's mutual friends, and after that we only wanted the mutuals' photos, not their full profiles.
* Multiple round trips: the very last request, for all 5 mutual friends' user profiles, could be done in parallel, but all the steps before that have to happen synchronously, waiting for a full reply before the next step can happen. If you think this isn't a problem, you need to be better informed about [the public's level of broadband access](https://www.brookings.edu/wp-content/uploads/2016/07/Broadband-Tomer-Kane-12315.pdf).

In this scenario, no 'bad engineering' happened with this REST API; in fact, the work's been done to return filtered and scoped lists for some requests! But it is true that the front-end page team doesn't have access to set up the exact API endpoints they need. This is an important point and I want to emphasize it a bit.

> If you have full access to alter your REST API, you don't need GraphQL

If your pal the backend developer is working with you to set up this feature, they can absolutely set up user/views/top5mutualpics and give you just the data you need. But the trouble starts as your operation grows and features on the front end need to be delivered without API changes. This probably means your org is growing, your user base is growing, and you expect the frontend to grow and change without updates to your API, so it's probably a good thing!

## Benefits of GraphQL

GraphQL allows you to request data to the depth and in the shape that you need.
It also implicitly lets you scope your request to get only the fields you need:

```graphql
{
  hero {
    name
  }
}
```

The response we get back will be JSON in this shape:

```json
{
  "data": {
    "hero": {
      "name": "R2-D2"
    }
  }
}
```

_This example is done on the lovely Star Wars API (SWAPI) endpoint; check out its [GraphQL interface here](https://graphql.org/swapi-graphql/)_

So there's no need to create separate /profile, /profile_posts, and /profile_vitals endpoints to get more focused versions of the data. The goal here is to have GraphQL "wrap around" your existing REST API endpoints and provide a new, unified interface that lets you query all the things.

# Tips for the beginner writing GraphQL queries

I saw an amazing talk from [Sean Grove](https://twitter.com/sgrove) of OneGraph, who works on maintaining GraphiQL, the rad graphical explorer for GraphQL. He talked about adding automations to GraphQL to point new query writers in the direction of more efficient GraphQL queries. The query language is supposed to be easy, so these points shouldn't add significantly to the weight of writing new queries.

## Optimize with variables

GraphQL lets you parameterise queries. Here we are making a query asking for a particular hero that matches the film name "NEW HOPE", plus the names of their friends:

```graphql
hero("NEW HOPE") {
  name
  friends {
    name
  }
}
```

This looks pretty good, but _updating_ this query will require some string manipulation by our GraphQL client (e.g. the React web app that will be asking for data). Also, later queries with different parameters will not benefit from any caching, since the GraphQL server will see each one as a whole new query.
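The remedy is to keep one static query string and pass parameters separately. Here is a minimal sketch of what that looks like on the wire: the `{query, variables}` JSON body is GraphQL's conventional HTTP transport, but the endpoint URL and the `buildPayload` helper are made up for illustration.

```javascript
// The query text stays constant; only the variables object changes,
// so the server can recognise (and cache) the same parsed query.
const query = `
  query HeroNameAndFriends($episode: Episode) {
    hero(episode: $episode) {
      name
      friends { name }
    }
  }
`;

// GraphQL requests are conventionally POSTed as a {query, variables} JSON body.
function buildPayload(episode) {
  return JSON.stringify({ query, variables: { episode } });
}

// usage (illustrative):
// fetch('https://example.com/graphql', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: buildPayload('EMPIRE'),
// });
```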
So it's better to add a variable to the query, then re-use the same query over and over:

```graphql
query HeroNameAndFriends($episode: Episode) {
  hero(episode: $episode) {
    name
    friends {
      name
    }
  }
}
```

_This example and others in this article are cribbed from the [GraphQL learn pages](https://graphql.org/learn/queries/)_

Now we can update the episode variable and re-run the same query; it'll impact the client less AND return faster.

## Set defaults for your variables

If you love the other devs on your team, or even future you, you'll set defaults on your variables to make sure each query succeeds:

```graphql
query HeroNameAndFriends($episode: Episode = JEDI) {
  hero(episode: $episode) {
    name
    friends {
      name
    }
  }
}
```

Later you can re-use this query as

```
HeroNameAndFriends('EMPIRE')
```

and benefit from caching!

## Write more DRY ('don't repeat yourself') queries with fragments

It's an amazing feature that you get to specify exactly the fields you want back from a GraphQL query, but after a while this can get... kinda tedious:

```graphql
hero(episode: $episode) {
  name
  height
  weight
  pets {
    name
    height
    weight
  }
  friends {
    name
    height
    weight
  }
}
```

In a real query we might be asking for photos, IDs, and friends' IDs over and over again as the query gains more clauses. Surely there's a way to ask for `name`, `height`, and `weight` all at once? Yup! Define a fragment like so:

```graphql
fragment criticalInfo on Character {
  name
  height
  weight
}
```

_Note that Character is just a label I'm using in this example, i.e. a character in a story_

Now our query is _much_ more compact:

```graphql
hero(episode: $episode) {
  ...criticalInfo
  pets {
    ...criticalInfo
  }
  friends {
    ...criticalInfo
  }
}
```

# Ready to dive in and go further?

My next article will cover how to host your first GraphQL server on Heroku, and after that how to build your first service architecture.
If all this is interesting to you, your next step should be the full series on [GraphQL queries right from the GraphQL team](https://graphql.org/learn/queries/) on their [learn page](https://graphql.org/learn/). If you want to really learn GraphQL, I cannot recommend highly enough ["Learning GraphQL"](http://shop.oreilly.com/product/0636920137269.do) by [Alex Banks](http://www.oreilly.com/pub/au/6913) and [Eve Porcello](http://www.oreilly.com/pub/au/6914).

_This and the several articles that will follow it are brought to you by my [favorite train read of the last few weeks](https://www.amazon.com/Learning-GraphQL-Declarative-Fetching-Modern-ebook/dp/B07GBJZX1L)._
nocnica
Introduction to Scaleway Elements Kubernetes Kapsule with Gloo and Knative …
2020-03-23T00:00:57
https://medium.com/@abenahmed1/introduction-%C3%A0-scaleway-elements-kubernetes-kapsule-avec-gloo-et-knative-62dcfc7f966f
scaleway, kubernetes, serverless, docker
---
title: Introduction to Scaleway Elements Kubernetes Kapsule with Gloo and Knative …
published: true
date: 2020-03-22 23:53:06 UTC
tags: scaleway,kubernetes,serverless,docker
canonical_url: https://medium.com/@abenahmed1/introduction-%C3%A0-scaleway-elements-kubernetes-kapsule-avec-gloo-et-knative-62dcfc7f966f
---

![](https://cdn-images-1.medium.com/max/1024/1*a21ZH3r1im3ofZyjQuikEw.jpeg)

Scaleway has unveiled new services (some still in beta) within its new Scaleway Elements range.

![](https://cdn-images-1.medium.com/max/1024/1*O1Rr2KNApr93Q_oc4FalpA.jpeg)

> Scaleway Elements, representing the public cloud ecosystem, is made up of five product categories: Compute, Storage, Networking, Internet of Things, and Artificial Intelligence. [Scaleway Elements](https://www.scaleway.com/fr/scaleway-elements/)

![](https://cdn-images-1.medium.com/max/524/0*rIRVH_58ac4eFw-H.jpg)

I am going to look at the beta service named Scaleway Elements Kubernetes Kapsule, which lets you run containerized applications in a Kubernetes environment managed by Scaleway. The service is currently available in the Paris availability zone in France and supports, at minimum, the latest minor version of the last 3 major Kubernetes versions. [Betas & previews](https://www.scaleway.com/fr/betas/)

From the Scaleway Elements web console, I launch a managed Kubernetes cluster with nodes of type DEV1-M (3 vCPUs, 4 GB RAM, and 40 GB of NVMe disk):

![](https://cdn-images-1.medium.com/max/1018/1*exyowhusHmCMQU6NzDyksQ.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*C6AC5iEoCRBHdtS_iZLDWw.jpeg)

with several options that determine the price. The price depends on the resources allocated to the Kubernetes cluster, such as the number and type of nodes, the use of load balancers, and persistent volumes. Nodes are billed at the same price as the equivalent compute instances.
The Kubernetes control plane is provided at no extra cost:

![](https://cdn-images-1.medium.com/max/1024/1*yja5Mkh3cbe80gpSxkf0Ow.jpeg)

I choose here not to enable the Kubernetes dashboard and not to install an Ingress Controller:

![](https://cdn-images-1.medium.com/max/1024/1*nGduKlu0YyGqtKUPhaYIcw.jpeg)

Launching the cluster creation:

![](https://cdn-images-1.medium.com/max/1024/1*bzMI0w2efSD3g4TO75fH5w.jpeg)

which, once finished, returns the endpoint along with a wildcard domain:

![](https://cdn-images-1.medium.com/max/1024/1*g83luEsAzk9zfzwWem2lgw.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*fIP3YK6qYWDd9EqGxM4O-g.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*Jq7nwjFoH-v4tM_0DPn9lw.jpeg)

A Load Balancer is indeed assigned to me, with this wildcard domain pointing to the public IP addresses of each of the nodes making up the cluster:

![](https://cdn-images-1.medium.com/max/1024/1*L6Qye80krop8doialc1tvA.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*geR-P2yBJYFf6_J8K-gWFw.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*4_PzML6cThDmY_4ssZil-g.jpeg)

I can download the Kubeconfig file to use with the Kubectl client to manage the Kubernetes cluster from the command line:

![](https://cdn-images-1.medium.com/max/1024/1*6_5BvzS8q38iyL8vh2cRpA.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*5GaUI6Tt_hEitQIhWWjBDw.jpeg)

I then connect this cluster to the Weave Cloud service, which, with Weave Scope and Cortex, gives me a way to monitor it. Weave Cloud is an operations platform that acts as an extension of your container orchestration infrastructure, providing Deploy (continuous delivery), Explore (visualization and troubleshooting), and Monitor (Prometheus monitoring).
These features work together to help ship features faster and resolve problems more quickly: [What is Weave Cloud & Documentation](https://www.weave.works/docs/cloud/latest/overview/)

![](https://cdn-images-1.medium.com/max/682/0*Ac01vBsWsbTz7nr-.png)

![](https://cdn-images-1.medium.com/max/1024/1*OOinZpL8SaU2b5PMm_MrCQ.jpeg)

I can then visualize my three worker nodes:

![](https://cdn-images-1.medium.com/max/1024/1*i1Pq-xRqwE1QBhWAKmsy0Q.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*kYsA2mdGSNFQ01-oAoRc9A.jpeg)

and connect to them. I take the opportunity to install the ZeroTier agent on them, to link them to a VPN network via the shell console provided with Weave Scope:

![](https://cdn-images-1.medium.com/max/1024/1*RCj0atpDZONaCxIBnSwp9w.jpeg)

My three nodes are connected to the ZeroTier VPN service:

![](https://cdn-images-1.medium.com/max/1024/1*YtscI1ZueR8_6QeRKdXpVw.jpeg)

I can therefore proceed to install MetalLB, to get a load-balancing service integrated with the addressing plan defined in ZeroTier: [MetalLB](https://metallb.universe.tf/)

```
openssl rand -base64 128 | kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey=-
```

![](https://cdn-images-1.medium.com/max/1024/1*CWjZkiW4PcK7I-YMddPoYA.png)

```
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.2/manifests/metallb.yaml
```

and this configuration file:

![](https://cdn-images-1.medium.com/max/1024/1*ud_TqOIpZdPm_idkEY-kUg.jpeg)

MetalLB is active in the cluster:

![](https://cdn-images-1.medium.com/max/1024/1*hPelAHGCrEAHUhsxYaOmWA.jpeg)

To allow the installation of Knative, I install Gloo, after first downloading the Glooctl binary from its repository on GitHub: [solo-io/gloo](https://github.com/solo-io/gloo/releases)
![](https://cdn-images-1.medium.com/max/1024/1*wUSFQqoYep-qcPtkYqxMQQ.jpeg)

Installing Knative Serving in the cluster:

![](https://cdn-images-1.medium.com/max/1024/1*gK-rj1LLcKHBYFihZ_TrhQ.png)

![](https://cdn-images-1.medium.com/max/1024/1*WVLqlUqz41EtMHH35n6_Bw.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*toeSIKnMSElvWOlTt0USyw.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*KJAvgGKtopJpZfLmxhxxiw.jpeg)

with a proxy for Knative Serving that obtained an IP address via MetalLB:

![](https://cdn-images-1.medium.com/max/1024/1*6vY1-Yc8TfwCiruZfyQlzg.jpeg)

First test of Knative Serving with the Docker image of the famous Helloworld in Go:

![](https://cdn-images-1.medium.com/max/1024/1*AfJ0Oh_Huj6uQfxmtEXh4w.png)

which responds via the proxy:

![](https://cdn-images-1.medium.com/max/1024/1*mNDjRyNfH1WgxZECYvwMxQ.png)

Another test with an Azure Functions Docker image for Linux: [Azure/azure-functions-docker](https://github.com/Azure/azure-functions-docker) via this YAML manifest:

![](https://cdn-images-1.medium.com/max/1024/1*CCgk2BX0C8oLETTwGvmowQ.png)

![](https://cdn-images-1.medium.com/max/1024/1*IRBX5p9KOhNYAnSwNukSZQ.png)

I modify the Headers section of my web browser to access this test function:

![](https://cdn-images-1.medium.com/max/1024/1*LB7pHniQLvRrVnqwBs-6KA.jpeg)

and the function responds:

![](https://cdn-images-1.medium.com/max/1024/1*pn8jStipv4EilwBHMZrK_w.jpeg)

Viewing this function in Weave Cloud, along with its associated containers:

![](https://cdn-images-1.medium.com/max/1024/1*vWFxoQXsLCJOkCgEChlWrg.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*D6ZnzFDgrS90s-3jSoIMhg.jpeg)

Likewise for the perennial FC demonstrator:

![](https://cdn-images-1.medium.com/max/1024/1*oXejTLR9D8VS8bGjE7OYqg.png)

![](https://cdn-images-1.medium.com/max/1024/1*Ogymp2oWgX5tXC3yjOyeMg.png)

![](https://cdn-images-1.medium.com/max/1024/1*UW9uGTYWxh1ccyE0DZ7O7A.jpeg)

Once again modifying the Headers section of the web browser:
![](https://cdn-images-1.medium.com/max/1024/1*O6CAQ9JlfSoyUzcZZviXrQ.jpeg)

The demonstrator is accessible:

![](https://cdn-images-1.medium.com/max/1024/1*krPw6DOzJY3ljuBzqJhRDw.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*qksvHt85253Eq-Um5ri2zA.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*2lPaT96TCd17Kt84kGCHgQ.jpeg)

In Weave Cloud I view the containers associated with this demonstrator and with Knative Serving:

![](https://cdn-images-1.medium.com/max/1024/1*8qQeNNfv8orRQK_ZymtaLQ.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*u5SWjPHIYf13xspII6OwFA.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*yGjL0pVmewi_-4zpSRmD7A.jpeg)

as well as the main metrics of the Kubernetes cluster:

![](https://cdn-images-1.medium.com/max/1024/1*Zsf2rzqbQOfOJT1EusXi7w.jpeg)

![](https://cdn-images-1.medium.com/max/1024/1*V1W4FfRIXOnyKZC11WaVyw.jpeg)

Scaleway Elements Kubernetes Kapsule is set to grow richer in features, even though for now its Kubernetes clusters should be considered _stateless_. If you need a stateful application, you can use persistent volumes. The storageClass for Scaleway Block Storage volumes is defined by default, so it does not need to be specified …

![](https://cdn-images-1.medium.com/max/379/0*Hf_Yd6YKUbiM1g_i.jpg)

To be continued! …
deep75
String Operator Differences Between Java and .NET
2020-03-23T05:35:25
https://dev.to/composite/-596p
java, csharp, techtalks, korean
> 2015-04-06 10:30:11 from blog.hazard.kr

## 1. The `==` and `!=` operators

### .NET

.NET overloads the `==` operator to compare value equality using `String.Equals`.

### Java

In Java, `String` is a class just as in .NET, but since Java does not support operator overloading, `==` can only compare references and therefore cannot compare values. Developers must call `String.equals` directly.

## 2. The + operator

### .NET:

```cs
string s = "asd" + b + "qwe";
//>> string s = string.Concat("asd", b, "qwe");
```

[String.cs](http://www.dotnetframework.org/default.aspx/4@0/4@0/DEVDIV_TFS/Dev10/Releases/RTMRel/ndp/clr/src/BCL/System/String@cs/1305376/String@cs) — how .NET's Concat works:

```cs
[System.Security.SecuritySafeCritical]  // auto-generated
public static String Concat(String str0, String str1) {
    // Contract is an internal class for testing and validation.
    Contract.Ensures(Contract.Result<String>() != null);
    Contract.Ensures(Contract.Result<String>().Length ==
        (str0 == null ? 0 : str0.Length) + (str1 == null ? 0 : str1.Length));
    Contract.EndContractBlock();

    if (IsNullOrEmpty(str0)) {
        if (IsNullOrEmpty(str1)) {
            return String.Empty;
        }
        return str1;
    }

    if (IsNullOrEmpty(str1)) {
        return str0;
    }

    int str0Length = str0.Length;

    // .NET natively allocates a buffer sized for the combined length
    // of all the strings to concatenate
    String result = FastAllocateString(str0Length + str1.Length);

    // then inserts each string into the buffer in order
    FillStringChecked(result, 0, str0);
    FillStringChecked(result, str0Length, str1);

    // and returns the resulting string.
    return result;
}
```

### Java:

```java
String s = "asd" + b + "qwe";
//>> String s = new StringBuffer().append("asd").append(b).append("qwe").toString();
```

[StringBuffer.java](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/lang/StringBuffer.java)

Java implements the string concatenation operator through the `StringBuffer` class, and the mechanism differs from .NET's.

### How .NET and Java string concatenation differ

.NET: up front, allocates a buffer sized for the full length of all strings to be joined, inserts them, then returns the result.

Java: `StringBuffer` allocates an initial capacity and, each time a string is appended, grows the buffer by a fixed amount as needed before inserting, then returns the string. (The default growth increment is +16.)

### What .NET and Java string concatenation have in common

When appending strings in a loop, it is better for performance to use `StringBuilder` in .NET and `StringBuffer` in Java.

That's all.
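The same trade-off exists in JavaScript, whose strings are likewise immutable. This sketch (my own illustration, not from the original post) contrasts naive `+=` accumulation with collecting parts and joining once, which plays the role of `StringBuilder`/`StringBuffer`; note that modern JS engines often optimize `+=` internally, so treat this as an illustration of the principle.

```javascript
// Naive accumulation: conceptually, each += may copy the accumulated string.
function concatNaive(parts) {
  let s = '';
  for (const p of parts) s += p;
  return s;
}

// Buffered accumulation: collect parts (amortized growth, like StringBuffer),
// then perform a single final join.
function concatBuffered(parts) {
  const buf = [];
  for (const p of parts) buf.push(p);
  return buf.join('');
}
```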
composite
Mousetrap JS
2020-03-23T09:26:56
https://dev.to/chrisleboeuf/mousetrap-js-5d38
So you came here to learn about mousetraps, right? Mousetraps are great for capturing rodents with a spring-loaded mechanism. Totally kidding! This isn't about those kinds of mousetraps. This is about a neat JavaScript library for fairly simple and easy keybinding!

[Mousetrap](https://github.com/ccampbell/mousetrap) has many awesome capabilities. It's extremely lightweight with no external dependencies. You can get Mousetrap with a simple ``npm install mousetrap`` and require it in your app. Do that and you can start using mousetraps like a pro! Let's get right into it.

First, there is ``Mousetrap.bind``. Let's look at some examples!

```js
// single keys
Mousetrap.bind('4', function() { console.log('4'); });
Mousetrap.bind("?", function() { console.log('show shortcuts!'); });
Mousetrap.bind('esc', function() { console.log('escape'); }, 'keyup');

// combinations
Mousetrap.bind('command+shift+k', function() { console.log('command shift k'); });

// map multiple combinations to the same callback
Mousetrap.bind(['command+k', 'ctrl+k'], function() {
    console.log('command k or control k');

    // return false to prevent default browser behavior
    // and stop event from bubbling
    return false;
});

// gmail style sequences
Mousetrap.bind('g i', function() { console.log('go to inbox'); });
Mousetrap.bind('* a', function() { console.log('select all'); });

// Alphabet!
Mousetrap.bind('a b c d e f g', function() { console.log("Now I know my abc's"); });
```

So bind literally allows you to bind specific sets of keys to a specified callback method. On top of that, if for whatever reason you want to, you can even overwrite default keyboard shortcuts. You can also specify whether your shortcut is a keyup, keydown, or keypress by adding a third argument to the bind method. This way you can bind multiple types of keypresses to the same key or combination of keys. And that leads to the next thing: ``Mousetrap.unbind``.
With this method, you can unbind a single key or an array of keyboard events. If you previously used bind with a specific kind of keypress, then you must specify the same kind of keypress in the unbind:

```js
Mousetrap.bind('b', () => { console.log('b was pressed') }, 'keydown');

// This is how you must do it if you specified a specific keypress
Mousetrap.unbind('b', 'keydown');
```

Next, Mousetrap has a neat way of triggering a keyboard event. If for whatever reason you want to fire off an event that you previously bound to a key, you can easily trigger it with ``Mousetrap.trigger``:

```js
Mousetrap.trigger('b');
```

This method can also take an optional argument for the type of keypress, like the other functions.

Finally, we will look at one last method: ``Mousetrap.reset``. The reset method removes everything you have bound to Mousetrap. This can be useful if you want to change contexts in your application without refreshing the page in your browser. Internally, Mousetrap keeps an associative array of all the events to listen for, so reset does not actually remove or add event listeners on the document; it just sets the array to be empty.

This is only some of the functionality of Mousetrap. You can see the rest [here](https://craig.is/killing/mice). Mousetrap is an awesome, easy-to-use library that I highly recommend if you want simple keybinding events.
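The "associative array" design described above can be pictured with a toy sketch. This is not Mousetrap's actual source, just an illustration of the idea: bindings live in a plain lookup table, so resetting only empties the table rather than touching DOM event listeners.

```javascript
// A lookup table from key string to callback, standing in for the
// associative array Mousetrap keeps internally.
const bindings = Object.create(null);

function bind(keys, callback) {
  bindings[keys] = callback;
}

function trigger(keys) {
  // Fire the callback if one is registered for this key combination.
  if (bindings[keys]) return bindings[keys]();
}

function unbind(keys) {
  delete bindings[keys];
}

function reset() {
  // No listeners are added or removed; only the lookup table is cleared.
  for (const k of Object.keys(bindings)) delete bindings[k];
}
```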
chrisleboeuf
Rewriting to Haskell–Configuration
2020-03-23T08:00:28
https://odone.io/posts/2020-03-23-rewriting-haskell-configuration.html
functional, haskell, servant
You can keep reading here or [jump to my blog](https://odone.io/posts/2020-03-23-rewriting-haskell-configuration.html) to get the full experience, including the wonderful pink, blue and white palette.

---

This is part of a series:

- [Rewriting to Haskell–Intro](https://odone.io/posts/2020-02-26-rewriting-haskell-intro.html)
- [Rewriting to Haskell–Project Setup](https://odone.io/posts/2020-03-03-rewriting-haskell-setup.html)
- [Rewriting to Haskell–Deployment](https://odone.io/posts/2020-03-14-rewriting-haskell-server.html)
- [Rewriting to Haskell–Automatic Formatting](https://odone.io/posts/2020-03-19-rewriting-haskell-formatting.html)

---

Coming from Rails, we are used to employing YAML files to configure a web application, which is why we decided to do the same with Servant. As a matter of fact, we now have a `configuration.yml` file:

```yml
database:
  username: stream
  database: stream_development
  password: ""
application:
  aws_s3_access_key: "ABCD1234"
  aws_s3_secret_key: "EFGH5678"
  aws_s3_region: us-east-1
  aws_s3_bucket_name: stream-demo-bucket
```

That is great for development, but how can we run tests against the test database? It turns out that the package we use to parse the YAML file allows the use of ENV variables:

```yml
database:
  username: stream
  database: _env:DATABASE:stream_development
  password: ""
application:
  aws_s3_access_key: "ABCD1234"
  aws_s3_secret_key: "EFGH5678"
  aws_s3_region: us-east-1
  aws_s3_bucket_name: stream-demo-bucket
```

That is, now we can just run `DATABASE=stream_test stack test`!
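What the `_env:NAME:default` convention does can be pictured with a small sketch. This is a hypothetical re-implementation in JavaScript for illustration only; the real substitution logic lives in the Haskell yaml package.

```javascript
// Resolve a config value of the form "_env:NAME:default":
// use the environment variable NAME if set, otherwise the default.
// Any other value is returned untouched.
function resolveEnvValue(value, env = process.env) {
  if (typeof value !== 'string') return value;
  const match = /^_env:([^:]+):?(.*)$/.exec(value);
  if (!match) return value; // plain literal, use as-is
  const [, name, fallback] = match;
  return env[name] !== undefined ? env[name] : fallback;
}
```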
In the repository we actually keep a `configuration.yml.example` file and git-ignore `configuration.yml` to avoid leaking credentials:

```yml
database:
  username: stream
  database: _env:DATABASE:stream_development
  password: ""
application:
  aws_s3_access_key: "REPLACE_ME"
  aws_s3_secret_key: "REPLACE_ME"
  aws_s3_region: us-east-1
  aws_s3_bucket_name: ll-stream-demo
```

For production we use [Ansible](https://www.ansible.com/) (with Ansible Vault) to put the correct `configuration.yml` in place. Plus, we instruct [Hapistrano](https://hackage.haskell.org/package/hapistrano) to make that file available for each deployment:

```yml
linked_files:
  - haskell/configuration.yml
```

To read the configuration inside the Servant application we use [`loadYamlSettings`](https://www.stackage.org/haddock/lts-15.5/yaml-0.11.3.0/Data-Yaml-Config.html#v:loadYamlSettings) from the [yaml](https://www.stackage.org/package/yaml) package:

```hs
loadYamlSettings
  :: FromJSON settings
  => [FilePath] -- ^ run time config files to use, earlier files have precedence
  -> [Value]    -- ^ any other values to use, usually from compile time config. overridden by files
  -> EnvUsage
  -> IO settings
```

In other words, given a type `settings` that is an instance of `FromJSON`, we can decode YAML files into a value of that type.
And this is how we do it for Stream:

```hs
data Configuration = Configuration
  { configurationDatabaseUser :: String,
    configurationDatabaseDatabase :: String,
    configurationDatabasePassword :: String,
    configurationApplicationAwsS3AccessKey :: AccessKey,
    configurationApplicationAwsS3SecretKey :: SecretKey,
    configurationApplicationAwsS3Region :: Region,
    configurationApplicationAwsS3BucketName :: BucketName
  }

instance FromJSON Configuration where
  parseJSON (Object x) = do
    database <- x .: "database"
    application <- x .: "application"
    Configuration
      <$> database .: "username"
      <*> database .: "database"
      <*> database .: "password"
      <*> application .: "aws_s3_access_key"
      <*> application .: "aws_s3_secret_key"
      <*> application .: "aws_s3_region"
      <*> application .: "aws_s3_bucket_name"

loadConfiguration :: IO Configuration
loadConfiguration = loadYamlSettings ["./configuration.yml"] [] useEnv
```

---

Get the latest content via email from me personally. Reply with your thoughts. Let's learn from each other. Subscribe to my [PinkLetter](https://odone.io#newsletter)!
riccardoodone
Kissing JavaScript #2 globals.js
2020-03-23T08:14:26
https://dev.to/bittnkr/kissing-javascript-2-globals-js-2b1k
javascript
Have you ever asked why you must type ```JavaScript const { readFileSync } = require('fs') ``` every time you need to read a file or use any other file-handling function? In my DRY obsession, this bothers me a lot. To me, the first requirement for writing simpler code is simply writing less code. One of my strategies to avoid the repetition is the use of global variables. In the [first post](https://dev.to/bittnkr/kissing-javascript-1174) of this series, there was a piece of code I didn't comment on: ```JavaScript if (typeof window == 'object') window.test = test else global.test = test ``` This code makes the `test()` function globally available (in Node.js and in the browser), so I only need to require the file once for the entire application. Traditionally (before strict mode), if you wrote `x = 10` not preceded by `var` or `const`, that variable would automatically become a global variable. An accidental global variable is a bad thing because it can replace another variable with the same name declared in another part of the code or in a library, or simply leak out of the function scope. For this reason, ES5 introduced the `"use strict";` directive. One of the things this mode does is disallow implicit global variables. After that, most libraries avoided using global variables so as not to pollute the global space. So, I have good news for you: the global space is now almost deserted and is free to be used at will. Yes, **you** are the owner of the global space now, and you can use it to make your life simpler. So my second tip is just this: create a file named `globals.js` and put in it everything you want to have always at hand. Here is a model with part of my `globals.js`, with some ideas for nice globals: ```JavaScript // check if the fs variable already exists in the global space; if so, just return if (global.fs) return // a shortcut for the rest of the file var g = (typeof window == 'object') ? 
window : global g.fs = require('fs') g.fs.path = require('path') // make path part of fs g.json = JSON.stringify g.json.parse = JSON.parse // from the previous article g.test = require('./test.js') // plus whatever you want always available ``` Now just put the following line in the main file of your Node.js project: ```JavaScript require('./globals.js') ``` After that, anywhere in your project where you need a function of the `fs` module, you just need to type: ```JavaScript var cfg = fs.readFileSync('cfg.json') ``` without any require(). I know this is not the most complex or ingenious article you have ever read here on dev.to, but I'm sure the wise use of the global space can save you a lot of typing. A last word: in this time of so much bad news, I want to give you another little tip: turn off the TV and give a tender kiss to someone you love who loves you back (despite the distancing propaganda we are bombarded with). Tell her how important she is to your life and how much you would miss her if she were gone. (The same goes for him.) In my own life, every time I faced death I realized that the most important thing, the only thing that really matters and that we will carry with our souls to the afterlife, is the love we cultivate. So, I wish you a life with lots of kisses and love. From my heart to you all. 😘
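Back to the code: the pattern above can be condensed into a single runnable sketch (a hypothetical condensation of the article's two files, with a guard against re-running):

```javascript
// globals.js pattern condensed into one file: attach shortcuts to the
// global object once, so later code can use them without require()
const g = (typeof window === 'object') ? window : global

if (!g.json) {               // guard: only set up the globals once
  g.json = JSON.stringify
  g.json.parse = JSON.parse
}

// elsewhere in the project, after globals.js was required once:
const text = json({ ok: true })   // no require() needed here
console.log(text)                 // {"ok":true}
console.log(json.parse(text).ok)  // true
```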
bittnkr
286,461
Covid Counter
Another Corona Count tracker
0
2020-03-23T08:50:20
https://dev.to/barelyhuman/covid-counter-61l
corona, covid, counter
I created this because I was bored and didn't have anything creative to make and since everyone seems to be posting theirs. Here's a minimal version that gets you the counts of various incidents [Link](https://corona.siddharthgelera.com/)
barelyhuman
286,830
[Git Tutorial] git commit -am: Updating a modified file in Git
To update a file that was modified in the repository, there are two ways. $ git add &lt;...
5,484
2020-03-23T20:25:57
https://dev.to/womakerscode/tutorial-git-adicionando-um-arquivo-modificado-no-git-116c
github, am, git, braziliandevs
To update a file that was modified in the repository, there are two ways. ``` $ git add <file> ``` - **$** indicates that you should use a **regular user** to run this operation. - **add** will add to git the file(s) that come next. - type the file name without the **< >** signs. followed by the **commit** ``` $ git commit -m 'your message here' ``` Example: Here we have the index.html file, which was modified. ![modified file](https://dev-to-uploads.s3.amazonaws.com/i/5y1vdqg8u0lidlt8jex2.png) Adding the file with the **git add** command ![git add on the modified file](https://dev-to-uploads.s3.amazonaws.com/i/aqde5qpcn3ydxcar5snt.png) And making the **commit** ![committing the file](https://dev-to-uploads.s3.amazonaws.com/i/1pbn886459v3ljhxbiqk.png) ## Shortcut It is also possible to **commit the modifications** through a **shortcut**: ``` $ git commit -am 'adding the file modification' ``` The **-a** flag stages all files that were modified, without the need to add each one individually. Example: Here we have several modified files ![several modified files](https://dev-to-uploads.s3.amazonaws.com/i/kf4xw8bhu8fjfbmicaxr.png) Using the shortcut ![using the shortcut](https://dev-to-uploads.s3.amazonaws.com/i/c3ivq3uujgzmwqnqo4d9.png) **Note:** It is important to notice that if there is a new file (not yet tracked by **git**), the **git commit -am** command commits **only the tracked files that were modified**. Example: We have a file that was modified (**nintendo64.html**) and a new file (**estilo-mobile.css**) not yet tracked. ![a modified file and an untracked one](https://dev-to-uploads.s3.amazonaws.com/i/eufw6m04fws7qd13poxg.png) Using the **git commit -am** shortcut, we can see that only the modified file was sent to the **index**. 
![committing the modified file through the shortcut](https://dev-to-uploads.s3.amazonaws.com/i/j26exubcinfvv7x6y9y7.png) ## Discarding modifications If you want to discard the modifications made to a file, just type ``` $ git checkout <file_name> ``` Example: We have two files that were modified and we want to discard the modifications (reverting the files to their state before the changes). ![two modified files](https://dev-to-uploads.s3.amazonaws.com/i/1t10uvm5cqogmvpxylbs.png) Using the **git checkout** command with both files at the same time: ![using checkout to discard changes](https://dev-to-uploads.s3.amazonaws.com/i/90wso0tsgd0g9is1159j.png)
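As a text version of the screenshots above, here is a quick throwaway shell session (hypothetical file contents; assumes git is installed) showing a modification being discarded with `git checkout`:

```shell
# Throwaway demo repository in a temp directory
cd "$(mktemp -d)"
git init -q .
echo "version 1" > index.html
git add index.html
git -c user.email=demo@example.com -c user.name=demo commit -qm "add index.html"
echo "version 2" > index.html   # local modification we want to discard
git checkout -- index.html      # discard it (on Git 2.23+, git restore index.html does the same)
cat index.html                  # prints "version 1" again
```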
danielle8farias
286,519
Make your react apps compatible with IE
Installation npm install react-app-polyfill Enter fullscreen mode ...
0
2020-03-23T11:39:51
https://dev.to/k_penguin_sato/make-your-react-apps-compatible-with-ie-4g82
react
# Installation ```bash npm install react-app-polyfill ``` or ```bash yarn add react-app-polyfill ``` # Import entry points Import the packages at the top of your `index.tsx` or `index.jsx`. ```js import 'react-app-polyfill/ie9'; import 'react-app-polyfill/ie11'; import 'react-app-polyfill/stable'; ``` That's it! Now your react app should run on IE without any errors. # Resources - [react-app-polyfill](https://github.com/facebook/create-react-app/tree/master/packages/react-app-polyfill)
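These entry points patch missing globals in place. As a quick sanity check (not from the article; the feature names below are just examples of APIs old IE lacks and core-js-based polyfills provide), you can verify a few of them exist before relying on them:

```javascript
// A few APIs missing in old IE; in any modern or polyfilled
// environment these checks all pass
const ok =
  typeof Array.from === 'function' &&
  typeof Object.assign === 'function' &&
  typeof Promise !== 'undefined' &&
  typeof Symbol !== 'undefined'

console.log(ok) // true in a modern or polyfilled environment
```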
k_penguin_sato
286,524
[Rails]Implement session-based authentication from scratch
Here is how you can implement session-based authentication functionality in your rails application...
0
2020-03-23T11:50:32
https://dev.to/k_penguin_sato/rails-implement-session-based-authentication-from-scratch-2631
rails
Here is how you can implement session-based authentication functionality in your Rails application without using any gem (besides bcrypt for password hashing). # Create author resources Run the commands below. ``` $ rails generate model Author name:string email:string password_digest:string $ rails generate controller Authors $ rails generate migration add_index_to_authors_email # Add index $ rake db:migrate ``` # Set validations Add validations for `name` and `email`. ```ruby # models/author.rb class Author < ApplicationRecord VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i.freeze validates :name, presence: true, length: { maximum: 50 } validates :email, presence: true, length: { maximum: 255 }, format: { with: VALID_EMAIL_REGEX }, uniqueness: { case_sensitive: false } end ``` # Add secure password to Author Your users will enter a password and its confirmation in the form. On the server, the password is hashed with bcrypt and only the resulting digest is stored in the DB; a digest cannot be reversed back into the password even if it leaks. At login, you hash the submitted password again and check whether it matches the stored digest, and if it does, you allow the user to log in. (Note that the password itself still travels over the connection, so serve the form over HTTPS.) ## Add has_secure_password It's quite easy to set up in Rails. Simply put `has_secure_password` in the Author model. (Also add a minimum length for passwords.) ```ruby class Author < ApplicationRecord # # other code # validates :password, length: { minimum: 6 } has_secure_password end ``` `has_secure_password` - Enables you to store the hashed password in your DB as password_digest - Lets you use password and password_confirmation params and validations for them. - Lets you use the `authenticate` method. ### Add bcrypt gem Add `gem 'bcrypt'` to your Gemfile and run `bundle install`. 
```ruby gem 'bcrypt' ``` ## Check if it's working correctly Run the commands in the rails console to see if you can use the `authenticate` method. The `authenticate` method returns false if the given password was wrong and returns the author object if the given password was correct. ```ruby $ Author.create(name:"test", email:"test@email.com", password:"000000") $ Author.first.authenticate('test') //=> false $ Author.first.authenticate('000000') //=> #<Author:0x0000560ee2e0a1b8 id: 1, name: "test", email: "test@email.com", password_digest: "$2a$12$bQQu49N3xNCKO8StooXLBOqwwCAv7NbPqt3aG35AFDHRUgh.C8BgO", created_at: Mon, 30 Sep 2019 08:40:11 UTC +00:00, updated_at: Mon, 30 Sep 2019 08:40:11 UTC +00:00> ``` # Sign up functionality Let's start by setting up the routes for users to sign up. ```ruby Rails.application.routes.draw do resources :authors get '/signup', to: 'authors#new' post '/signup', to: 'authors#create' ``` Add the code below to the author controller. ```ruby class AuthorsController < ApplicationController def show @author = Author.find(params[:id]) end def new @author = Author.new end def create @author = Author.new(author_params) if @author.save redirect_to @author else render 'new' end end private def author_params params.require(:author).permit(:name, :email, :password, :password_confirmation) end end ``` Lastly, create a signup page and show page for each user under `views/authors/`. 
```erb # views/authors/show.html.erb <%= @author.name %> <%= @author.email %> ``` ```erb # views/authors/new.html.erb <% provide(:title, 'Sign up') %> <h1>Sign up</h1> <div class="row"> <div class="col-md-6 col-md-offset-3"> <%= form_for(@author) do |f| %> <%= f.label :name %> <%= f.text_field :name, class: 'form-control' %> <%= f.label :email %> <%= f.email_field :email, class: 'form-control' %> <%= f.label :password %> <%= f.password_field :password, class: 'form-control' %> <%= f.label :password_confirmation, "Confirmation" %> <%= f.password_field :password_confirmation, class: 'form-control' %> <%= f.submit "Create my account", class: "btn btn-primary" %> <% end %> </div> </div> ``` # Sign in/out `HTTP` is a stateless protocol. So we use sessions to maintain the user state. The `new` action is used to put information for a new session and `create` action is used to actually create a new session. And the `destroy` action is used to delete a session. ## Set up routes Set up routes for `sessions`. ```ruby Rails.application.routes.draw do resources :authors # Create new users get '/signup', to: 'authors#new' post '/signup', to: 'authors#create' # Sessions get '/login', to: 'sessions#new' post '/login', to: 'sessions#create' delete '/logout', to: 'sessions#destroy' end ``` ## Create a session controller ``` $ rails generate controller Sessions ``` Add the code to `SessionsController`. ```ruby class SessionsController < ApplicationController def new end def create author = Author.find_by(email: params[:session][:email].downcase) if author && author.authenticate(params[:session][:password]) log_in author redirect_to author else render 'new' end end def destroy log_out redirect_to root_url end end ``` And add the code to `SessionHelper` and include session helper in `ApplicationController`. The `session` used in the code below is the built-in `session` method in Rails. 
```ruby module SessionsHelper def log_in(author) session[:author_id] = author.id end def current_author @current_author ||= Author.find_by(id: session[:author_id]) if session[:author_id] end def logged_in? !current_author.nil? end def log_out session.delete(:author_id) @current_author = nil end end ``` ```ruby class ApplicationController < ActionController::Base protect_from_forgery with: :exception include SessionsHelper end ``` ## Remember me functionality First of all, add a column called `remember_digest` to `Author`. ``` $ rails generate migration add_remember_digest_to_authors remember_digest:string ``` Update code in the Author model. Each method has its description in the code. ```ruby class Author < ApplicationRecord attr_accessor :remember_token VALID_EMAIL_REGEX = /\A[\w+\-.]+@[a-z\d\-.]+\.[a-z]+\z/i.freeze validates :name, presence: true, length: { maximum: 50 } validates :email, presence: true, length: { maximum: 255 }, format: { with: VALID_EMAIL_REGEX }, uniqueness: { case_sensitive: false } validates :password, length: { minimum: 6 } has_secure_password class << self # Return the hash value of the given string def digest(string) cost = ActiveModel::SecurePassword.min_cost ? BCrypt::Engine::MIN_COST : BCrypt::Engine.cost BCrypt::Password.create(string, cost: cost) end # Return a random token def generate_token SecureRandom.urlsafe_base64 end end # Create a new token -> encrypt it -> stores the hash value in remember_digest in DB. def remember self.remember_token = Author.generate_token update_attribute(:remember_digest, Author.digest(remember_token)) end # Check if the given value matches the one stored in DB def authenticated?(remember_token) BCrypt::Password.new(remember_digest).is_password?(remember_token) end def forget update_attribute(:remember_digest, nil) end end ``` Update the session helper. 
```ruby module SessionsHelper def log_in(author) session[:author_id] = author.id end def current_author if (author_id = session[:author_id]) @current_author ||= Author.find_by(id: author_id) elsif (author_id = cookies.signed[:author_id]) author = Author.find_by(id: author_id) if author && author.authenticated?(cookies[:remember_token]) log_in author @current_author = author end end end def logged_in? !current_author.nil? end # Make the author's session permanent def remember(author) author.remember cookies.permanent.signed[:author_id] = author.id cookies.permanent[:remember_token] = author.remember_token end # Delete the permanent session def forget(author) author.forget cookies.delete(:author_id) cookies.delete(:remember_token) end def log_out forget(current_author) session.delete(:author_id) @current_author = nil end end ``` Update the session controller. ```ruby class SessionsController < ApplicationController def new end def create author = Author.find_by(email: params[:session][:email].downcase) if author && author.authenticate(params[:session][:password]) log_in author params[:session][:remember_me] == '1' ? remember(author) : forget(author) redirect_to author else render 'new' end end def destroy log_out redirect_to root_url end end ``` Lastly, add `remember_me` checkbox in the view. ```erb <div class="login-form"> <h2>Log in</h2> <%= form_for(:session, url: login_path) do |f| %> <%= f.email_field :email, autofocus: true, autocomplete: "email", placeholder: 'Email', class: 'login-input'%><br/> <%= f.password_field :password, autocomplete: "current-password", placeholder: 'Password', class: 'login-input' %> <div class="check-field"> <%= f.check_box :remember_me %> <%= f.label :remember_me %> </div> <%= f.submit "Log in", class: 'btn btn-outline-primary login-btn' %> <% end %> </div> ``` # Authorization Add the following methods to the author controller. 
```ruby class AuthorsController < ApplicationController before_action :authenticate_author ## Other code ## private def author_params params.require(:author).permit(:name, :email, :password, :password_confirmation) end def authenticate_author unless logged_in? flash[:danger] = "Please log in." redirect_to login_url end end def correct_author @author = Author.find(params[:id]) redirect_to(root_url) unless current_author?(@author) end end ``` Add the `current_author?` method to the session helper. ```ruby module SessionsHelper def current_author?(author) author == current_author end end ``` That's it! Now you should have a simple authentication functionality on your rails app! # References - [Ruby on Rails チュートリアル:実例を使って Rails を学ぼう](https://railstutorial.jp/chapters/sign_up?version=5.1#sec-unsuccessful_signups)
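As a final aside, the remember-token round trip used above can be sketched standalone outside Rails. This sketch uses SHA-256 purely to stay dependency-free; the article's actual code uses BCrypt, which is the right choice in production because it is deliberately slow:

```ruby
# Conceptual sketch of the remember-me round trip (SHA-256 stands in for
# BCrypt here only to keep the sketch dependency-free)
require 'securerandom'
require 'digest'

token  = SecureRandom.urlsafe_base64          # stored in the signed cookie
digest = Digest::SHA256.hexdigest(token)      # stored as remember_digest in the DB

# On a later visit, the cookie's token is hashed again and compared:
puts Digest::SHA256.hexdigest(token) == digest      # true  -> log the author in
puts Digest::SHA256.hexdigest('tampered') == digest # false -> ignore the cookie
```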
k_penguin_sato
286,534
The happiest countries Worldwide
Happiness is not a simple goal but is about making progress when it's as elusive as ever. Being happy...
0
2020-03-23T12:15:13
https://dev.to/silviosmith3/the-happiest-countries-worldwide-27oe
Happiness is not a simple goal; it is about making progress even when it's as elusive as ever. Being happy often means continually finding satisfaction, contentment, a feeling of joy, and a sense that your life is meaningful through all kinds of problems, and it does not depend on finding ease or comfort. Nobody is jolly or elated all the time, but some individuals are definitely more fulfilled than others. Depending on the purpose of the research, happiness is often measured using objective indicators (data on crime, income, civic engagement and health) and subjective methods, such as asking people how frequently they experience positive and negative emotions. Finland is the happiest country for the third year in a row, with its capital Helsinki also ranked the happiest city. Number two is Denmark, according to the 2020 study, followed by Switzerland in third place and Iceland in fourth. The UK ranks 13th and the USA 18th. 'A happy social environment, whether urban or rural, is one where people feel a sense of belonging, where they trust and enjoy each other and their shared institutions,' said Professor John F. Helliwell of the University of British Columbia, who co-edited the report. 'Generally, we find that the average happiness of city residents is more often than not higher than the average happiness of the general country population, especially in countries at the lower end of economic development.' The least happy cities ranked were Kabul, Afghanistan; Sanaa, Yemen; Gaza, Palestine; Port-au-Prince, Haiti; and Juba, South Sudan. But these are just statistics. Happiness is inside everyone, and it doesn't matter where you live, whether you have lots of money, or whether you wear expensive clothes. It is all about finding happiness inside you wherever you are: in Denmark, in Finland, in Afghanistan or somewhere else.
silviosmith3
286,545
The Grand Summer Internship Fair
https://internshala.com/the-grand-summer-internship-fair?utm_source=eap_whatsapp&amp;utm_medium=33547...
0
2020-03-23T12:36:16
https://dev.to/coolrocks/the-grand-summer-internship-fair-3cc9
startup, codenewbie, contributorswanted
https://internshala.com/the-grand-summer-internship-fair?utm_source=eap_whatsapp&utm_medium=3354716 Hey, in the wake of COVID-19, this year's 'Grand Summer Internship Fair - India's largest online fair' brings 1,200+ work-from-home and summer internships in dream companies like OnePlus, Xiaomi, Capgemini, HCL, TVS, and many more. All this with a guaranteed stipend of up to INR 75,000! So, register for the fair now (link above) and also win rewards up to INR 30,000.
coolrocks
286,678
On the Coronavirus
The last few weeks have been tough for all of us. I wanted to share with you my personal experience, a...
0
2020-03-23T14:45:10
https://dev.to/stopachka/on-the-coronavirus-ebg
The last few weeks have been tough for all of us. I wanted to share with you my personal experience, and the mindset I’m relying on to move forward. # Looking Back ## The calm before the storm I remember first hearing about this in mid-January. My friend was about to head out to China, and she was worried about it. She tends to over-worry, so I reassured her and jokingly told her that she would be fine. One week in, my coworker visited from China. From his eyes and his stories I could tell this was serious. I called my friend and found out she came back early. Since I was 18, I constantly thought about exponential growth, tail risks, and black swans [1]. This fit the bill — I understood it conceptually. But it stopped there. Conceptually. From February to mid-March, I was going through the *motions* of preparation. Though I thought I understood that the world could shift in a few days, my understanding was only hypothetical. In early February I told my parents to buy up food, and ordered food in San Francisco as well. In some respects I was preparing, but in another, I was simply fitting preparation to the amount of time I had. I didn’t think it was important enough to change priorities. As things escalated, I increased my attention, but never to the level that this deserved. I canceled my plan to go to New York and tried to distance more. Again this was going through the motions — even with all this happening, the most important thing on my mind was my existing work and personal projects. Even when we were told to work from home. Even as I saw the markets begin to crash, and a significant portion of my personal wealth disappear with them. ## The storm Towards the end of the week, it began to hit me. I realized that we were in much worse shape than China. Complete social isolation was on the way. We could enter a serious recession. On the same day, two of my friends and I decided to move to a cabin and isolate there. 
We thought we’d go for a month, starting Wednesday. By the evening we decided to go for two months, the very next day. The timing was on the nose, as the very next day shelter in place was announced in San Francisco. The next 48 hours was a blur. We got everything together, I ended up liquidating my entire portfolio, and we got out of San Francisco. I remember feeling like I was in a war zone — making multiple drastic, high impact decisions a day. After a night of being stranded, we arrived safe and sound in the cabin. After those 48 hours, my eyes cleared up. # Looking Ahead As we move towards the present, there’s uncertainty all around us. Many of us are worried about our loved ones. We’re worried about the future. Will hospitals flood with patients and will military cars carry coffins? How long will this last? In a matter of days, many have lost their jobs and many have lost significant wealth. Are we about to experience the great depression? That’s a lot of uncertainty, but we can come together and manage it. Here’s how I’m thinking about it: ## Short Term 1. **Amor Fati [2]** Character is forged through adversity and judged by action rather than thought. Will you let the panic consume you or will you strengthen your resolve? Will you focus inward or will you focus outward? Many of us have felt fear and when we feel fear, the reaction is knee-jerk. As you act, keep this top of mind: how you behave now, no matter what you think inside, is what determines your character. Let this idea guide you gently: you can feel fear of course, and you can make mistakes, but keeping the idea top of mind will gently evolve and shape your behavior. 2. **Come together** Some have experienced a significant loss of wealth, yet still won’t have to worry about their livelihood. Others have lost their jobs. Some have families that are in trouble. We have experienced pain, we are all in different circumstances, and we all have some ways that we can support our community. 
We can’t fix this overnight. No big brother can make sweeping changes. Let’s do what we can as individuals, whether that’s financial support, a phone call, or a kind word--it all counts. Use this adversity to come together. 3. **Roll with the punches** When there’s volatility and change, the panic and fear can make it hard to adjust. Yet, we must adjust. If you’re in quarantine — what can you do *because* of it? How can you grow and how can you be helpful? Adjusting will calm you and give you mental clarity: whether that’s adjusting what you do at work, reading new books, or finding new ways to connect — make the change. ![First time I’ve made a dish in 7 years](https://user-images.githubusercontent.com/984574/77328677-1b01e200-6cda-11ea-8ffe-1cb6ba97ce71.png "First time I’ve made a dish in 7 years") _First time I’ve made a dish in 7 years_ ## Long Term I think long term, we face two primary fears. **The first, health: will we lose lives?** This is fundamentally up to us. What we do today will ultimately decide tomorrow. Physically isolate, wash your hands, and stay safe. This is directly under our control, so let’s give ourselves completely to it. **The second, wealth: will we enter a great depression?** We may feel that the world will change and we won’t keep up — what if we lose our wealth, lose our job, and our skills aren’t relevant anymore? What if we’re not safe, never mind that we may never achieve our dreams? Let’s break this down. **Kill the fear:** Even if you lose all your wealth and your job, and your skills aren’t relevant anymore, *you still have your wits*. The skills you have today didn’t just pop up when you were born. You *learned* them. You *will* learn and adapt to whatever is next. **Evolve the vision:** Instead of judging your future by outcome (how much wealth you have), judge it by character: *what kind of person will you be?* You *will* be the kind of person who generates value, who is strengthened by adversity. 
Your character and your behavior are under your control. ## Putting it together Thinking about it, both the short term and the long term fall under one idea: **focus on what you can control.** You can only control your character and your actions. So focus on that, and judge yourself only by that. *Thanks to Bipin Suresh, whose stoic ideas inspired the realization that all of these actions fit under one umbrella.* *Thanks to Jacky Wang and Luba Yudasina for convincing me to include my personal story.* *Thanks to Bipin Suresh, Victoria Chang, Luba Yudasina, Mark Shlick, Jacky Wang, Aamir Patel, Nino Parunashvili, Daniel Woelfel, Avand Amiri, Abraham Sorock for reviewing drafts of this essay* [1] Black Swans: Rare events in certain domains, where their magnitude is so large that in the long run they are all that matter. See Nassim Taleb’s [Incerto](https://en.wikipedia.org/wiki/Nassim_Nicholas_Taleb#Incerto) for the concept and some of the most profound essays on risk [2] Amor Fati: “To love one’s fate” — from [Nietzsche](https://en.wikipedia.org/wiki/Amor_fati)
stopachka
286,696
100daysofcode Flutter
day 14/100 of #100daysofcode #Flutter Learning Flutter, from the Udemy course by @maxedapps Learning a...
0
2020-03-23T15:19:26
https://dev.to/triyono777/100daysofcode-flutter-jn2
100daysofcode, flutter
day 14/100 of #100daysofcode #Flutter Learning Flutter, from the Udemy course by @maxedapps. Learning about navigating between screens: push, pop, the stack concept, pushNamed, pushNamed with arguments, and sending data to the next screen. https://github.com/triyono777/100-days-of-code/blob/master/log.md
triyono777
286,735
React: Simple Auth Flow
Now that we know how to use useState, useReducer and Context, how can we put these concepts into our...
5,550
2020-03-23T17:03:40
https://dev.to/koralarts/react-simple-auth-flow-3fbf
tutorial, beginners, react
Now that we know how to use `useState`, `useReducer` and Context, how can we put these concepts into our projects? An easy example is to create a simple authentication flow. We'll first set up the `UserContext` using React Context. ```react import { createContext } from 'react' const UserContext = createContext({ user: null, hasLoginError: false, login: () => null, logout: () => null }) export default UserContext ``` Now that we've created a context, we can start using it in our wrapping component. We'll also use `useReducer` to keep the state of our context. ```react import UserContext from './UserContext' const INITIAL_STATE = { user: null, hasLoginError: false } const reducer = (state, action) => { ... } const App = () => { const [state, dispatch] = useReducer(reducer, INITIAL_STATE) return ( <UserContext.Provider> ... </UserContext.Provider> ) } ``` Our reducer will handle two action types -- `login` and `logout`. ```react const reducer = (state, action) => { switch(action.type) { case 'login': { const { username, password } = action.payload if (validateCredentials(username, password)) { return { ...state, hasLoginError: false, user: {} // assign user here } } return { ...state, hasLoginError: true, user: null } } case 'logout': return { ...state, user: null } default: throw new Error(`Invalid action type: ${action.type}`) } } ``` After implementing the reducer, we can use `dispatch` to call these actions. We'll create functions that we'll pass to our provider's value. ```react ... const login = (username, password) => { dispatch({ type: 'login', payload: { username, password } }) } const logout = () => { dispatch({ type: 'logout' }) } const value = { user: state.user, hasLoginError: state.hasLoginError, login, logout } return ( <UserContext.Provider value={value}> ... 
</UserContext.Provider> ) ``` Now our value gets updated when our state updates, and since we passed down the login and logout functions, we'll have access to those values in the child components. We'll make two components -- `LoginForm` and `UserProfile`. We'll render the form when there's no user and the profile when a user is logged in. ```react ... <UserContext.Provider value={value}> {user && <UserProfile />} {!user && <LoginForm />} </UserContext.Provider> ... ``` Let's start with the login form; we'll use `useState` to manage our form's state. We'll also grab the context so we have access to `login` and `hasLoginError`. ```react const { login, hasLoginError } = useContext(UserContext) const [username, setUsername] = useState('') const [password, setPassword] = useState('') const onUsernameChange = evt => setUsername(evt.target.value) const onPasswordChange = evt => setPassword(evt.target.value) const onSubmit = (evt) => { evt.preventDefault() login(username, password) } return ( <form onSubmit={onSubmit}> ... {hasLoginError && <p>Error Logging In</p>} <input type='text' onChange={onUsernameChange} /> <input type='password' onChange={onPasswordChange} /> ... </form> ) ``` If we're logged in, we need access to the user object and the logout function. ```react const { logout, user } = useContext(UserContext) return ( <> <h1>Welcome {user.username}</h1> <button onClick={logout}>Logout</button> </> ) ``` Now, you have a simple authentication flow in React using different ways we can manage our state! [Code Sandbox](https://codesandbox.io/s/react-context-authentication-otjqv)
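Because the reducer is a pure function, you can sanity-check the whole login/logout flow without rendering anything. A quick sketch (with a stubbed `validateCredentials` and hypothetical `admin`/`secret` credentials):

```javascript
// Exercising the article's reducer outside React, as plain function calls
const validateCredentials = (u, p) => u === 'admin' && p === 'secret' // stub

const reducer = (state, action) => {
  switch (action.type) {
    case 'login': {
      const { username, password } = action.payload
      if (validateCredentials(username, password)) {
        return { ...state, hasLoginError: false, user: { username } }
      }
      return { ...state, hasLoginError: true, user: null }
    }
    case 'logout':
      return { ...state, user: null }
    default:
      throw new Error(`Invalid action type: ${action.type}`)
  }
}

let state = { user: null, hasLoginError: false }
state = reducer(state, { type: 'login', payload: { username: 'admin', password: 'wrong' } })
console.log(state.hasLoginError) // true
state = reducer(state, { type: 'login', payload: { username: 'admin', password: 'secret' } })
console.log(state.user.username) // admin
```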
koralarts
286,748
How to style forms with CSS: A beginner’s guide
Written by Supun Kavinda✏️ Apps primarily collect data via forms. Take a generic sign-up form, for...
0
2020-04-15T13:51:32
https://blog.logrocket.com/how-to-style-forms-with-css-a-beginners-guide/
css, tutorial
--- title: How to style forms with CSS: A beginner’s guide published: true date: 2020-03-23 16:00:30 UTC tags: css, tutorial canonical_url: https://blog.logrocket.com/how-to-style-forms-with-css-a-beginners-guide/ cover_image: https://dev-to-uploads.s3.amazonaws.com/i/eovw1sezwhviumgl2smt.png --- **Written by [Supun Kavinda](https://blog.logrocket.com/author/supunkavinda/)**✏️ Apps primarily collect data via forms. Take a generic sign-up form, for example: there are several fields for users to input information such as their name, email, etc. In the old days, websites just had plain, boring HTML forms with no styles. That was before CSS changed everything. Now we can create more interesting, lively forms using the latest features of CSS. Don’t just take my word for it. Below is what a typical HTML form looks like without any CSS. ![Simple HTML Form](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/html-form.png?resize=441%2C58&ssl=1) Here’s that same form jazzed up with a bit of CSS. ![Form Styled With CSS](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/form-styled-with-css.png?resize=539%2C179&ssl=1) In this tutorial, we’ll show you how to recreate the form shown above as well as a few other amazing modifications you can implement to create visually impressive, user-friendly forms. We’ll demonstrate how to style forms with CSS in six steps: 1. Setting [`box-sizing`](https://developer.mozilla.org/en-US/docs/Web/CSS/box-sizing) 2. CSS selectors for input elements 3. Basic styling methods for text input fields 4. Styling other input types 5. UI pseudo-classes 6. Noncustomizable inputs Before we dive in, it’s important to understand that there is no specific style for forms. The possibilities are limited only by your imagination. This guide is meant to help you get started on the path to creating your own unique designs with CSS. Let’s get started! 
[![LogRocket Free Trial Banner](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/f760c-1gpjapknnuyhu8esa3z0jga.png?resize=1200%2C280&ssl=1)](https://logrocket.com/signup/) ## 1. Setting `box-sizing` I usually set `* {box-sizing:border-box;}` not only for forms, but for entire webpages. When you set it, the declared width of every element will include its padding. For example, set the width and padding as follows. ```jsx .some-class { width:200px; padding:20px; } ``` The `.some-class` without `box-sizing:border-box` will have a rendered width of `240px` instead of `200px`, which can be an issue. That’s why most developers use `border-box` for all elements. Below is a better version of the code. It also supports the `:before` and `:after` pseudo-elements. ```jsx *, *:before, *:after { box-sizing: border-box; } ``` Tip: The `*` selector selects all the elements in the document. ## 2. CSS selectors for input elements The easiest way to select input elements is to use [CSS attribute selectors](https://dev.to/bnevilleoneill/advanced-css-selectors-for-common-scenarios-3gl6). ```jsx input[type=text] { /* input elements with type="text" attribute */ } input[type=password] { /* input elements with type="password" attribute */ } ``` These selectors will select all the input elements in the document. If you need more specific selectors, you’ll need to add classes to the elements. ```jsx <input type="text" class="signup-text-input" /> ``` Then: ```jsx .signup-text-input { /* styles here */ } ``` ## 3. Basic styling methods for single-line text input fields Single-line fields are the most common input fields used in forms. Usually, a single-line text input is a simple box with a border (this depends on the browser). Here’s the HTML markup for a single-line field with a placeholder.
```jsx <input type="text" placeholder="Name" /> ``` It will look like this: ![Default HTML Input Field](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/default-html-input-field.png?resize=214%2C59&ssl=1) You can use the following CSS properties to make this input field more attractive. - Padding (to add inner spacing) - Margin (to add a margin around the input field) - Border - Box shadow - Border radius - Width - Font Let’s zoom in on each of these properties. ### Padding Adding some inner space to the input field can help improve clarity. You can accomplish this with the `padding` property. ```jsx input[type=text] { padding: 10px; } ``` ![Input Field With Padding](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/input-field-padding.png?resize=200%2C53&ssl=1) ### Margin If there are other elements near your input field, you may want to add a margin around it to prevent clustering. ```jsx input[type=text] { padding:10px; margin:10px 0; /* add top and bottom margin */ } ``` ![Input Field With and Without Margins](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/input-field-margin.jpeg?resize=700%2C400&ssl=1) ### Border In most browsers, text input fields have borders, which you can customize. ```jsx .border-customized-input { border: 2px solid #eee; } ``` You can also remove a border altogether. ```jsx .border-removed-input { border: 0; } ``` Tip: Be sure to add a background color or `box-shadow` when the border is removed. Otherwise, users won’t see the input. Some web designers prefer to display only the bottom border because it feels a bit like writing on a line in a notebook. ```jsx .border-bottom-input { border:0; /* remove default border */ border-bottom:1px solid #eee; /* add only bottom border */ } ``` ### Box shadow You can use the CSS [`box-shadow`](https://www.w3schools.com/cssref/css3_pr_box-shadow.asp) property to add a drop shadow. You can achieve a range of effects by playing around with the property’s five values.
```jsx input[type=text] { padding:10px; border:0; box-shadow:0 0 15px 4px rgba(0,0,0,0.06); } ``` ![Input Field With Box Shadow](https://i2.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/input-field-box-shadow.png?resize=251%2C142&ssl=1) ### Border radius The `border-radius` property can have a massive impact on the feel of your forms. By curving the edges of the boxes, you can significantly alter the appearance of your input fields. ```jsx .rounded-input { padding:10px; border-radius:10px; } ``` ![Input Field With Border Radius](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/input-field-border-radius.png?resize=356%2C122&ssl=1) You can achieve another look and feel altogether by using `box-shadow` and `border-radius` together. ![Input Field With Border Radius and Box Shadow](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/input-field-border-radius-box-shadow.png?resize=388%2C79&ssl=1) ### Width Use the `width` property to set the width of inputs. ```jsx input { width:100%; } ``` ### Fonts Most browsers use a different font family and size for form elements. If necessary, we can inherit the font from the document. ```jsx input, textarea { font-family:inherit; font-size: inherit; } ``` ## 4. Styling other input types You can style other input types such as text area, radio button, checkbox, and more. Let’s take a closer look. ### Text areas Text areas are similar to text inputs except that they allow multiline inputs. You’d typically use these when you want to collect longer-form data from users, such as comments, messages, etc. You can use all the basic CSS properties we discussed previously to style text areas. The [`resize`](https://www.w3schools.com/cssref/css3_pr_resize.asp) property is also very useful in text areas. In most browsers, text areas are resizable along both the x and y axes (value: `both`) by default. You can set it to `both`, `horizontal`, or `vertical`. 
Check out this text area I styled: {% codepen https://codepen.io/SupunKavinda/pen/dyomzez %} In this example, I used `resize:vertical` to allow only vertical resizing. This practice is used in most forms because it prevents annoying horizontal scrollbars. Note: If you need to create auto-resizing text areas, you’ll need to use a [JavaScript approach](https://stackoverflow.com/questions/454202/creating-a-textarea-with-auto-resize), which is outside the scope of this article. ### Checkboxes and radio buttons The default checkbox and radio buttons are hard to style and require more complex CSS (and HTML). To style a checkbox, use the following HTML code. ```jsx <label>Name <input type="checkbox" /> <span></span> </label> ``` A few things to note: - Since we’re using `<label>` to wrap the `<input>`, if you click any element inside `<label>`, the `<input>` will be clicked - We’ll hide the `<input>` because browsers don’t allow us to modify it much - `<span>` creates a custom checkbox - We’ll use the `input:checked` [pseudo-class selector](https://developer.mozilla.org/en-US/docs/Learn/CSS/Building_blocks/Selectors/Pseudo-classes_and_pseudo-elements) to get the checked status and style the custom checkbox Here’s a custom checkbox (see the comments in the CSS for more explanations): {% codepen https://codepen.io/SupunKavinda/pen/yLNKQBo %} Here’s a custom radio button: {% codepen https://codepen.io/SupunKavinda/pen/eYNMQNM %} We used the same concept (`input:checked`) to create custom elements in both examples. In browsers, checkboxes are box-shaped while radio buttons are round. It’s best to keep this convention in custom inputs to avoid confusing the user. ### Select menus Select menus enable users to select an item from multiple choices.
```jsx <select name="animal"> <option value="lion">Lion</option> <option value="tiger">Tiger</option> <option value="leopard">Leopard</option> </select> ``` You can style the `<select>` element to look more engaging. ```jsx select { width: 100%; padding:10px; border-radius:10px; } ``` ![Styled Select Element](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/select-element-styled.png?resize=465%2C156&ssl=1) However, you cannot style the dropdown (or `<option>` elements) because they are styled by default depending on the OS. The only way to style those elements is to use [custom dropdowns with JavaScript](https://medium.com/@kyleducharme/developing-custom-dropdowns-with-vanilla-js-css-in-under-5-minutes-e94a953cee75). ### Buttons Like most elements, buttons have default styles. ![Default Button](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/default-button.png?resize=88%2C45&ssl=1) ```jsx <button>Click Me</button> ``` Let’s spice this up a bit. ```jsx button { /* remove default behavior */ appearance:none; -webkit-appearance:none; /* usual styles */ padding:10px; border:none; background-color:#3F51B5; color:#fff; font-weight:600; border-radius:5px; width:100%; } ``` ![Button Customized With CSS](https://i2.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/customized-button.png?resize=601%2C75&ssl=1) ## 5. UI pseudo-classes Below are some [UI pseudo-classes](https://developer.mozilla.org/en-US/docs/Learn/Forms/UI_pseudo-classes) that are commonly used with form elements. 
These can be used to show notices based on an element’s attributes: - `:required` - `:valid` and `:invalid` - `:checked` (we already used this) These can be used to create effects on each state: - `:hover` - `:focus` - `:active` ### Generated messages with `:required` To show a message that an input is required: ```jsx <label>Name <input type="text"> <span></span> </label> <label>Email <input type="text" required> <span></span> </label> ``` And the CSS: ```jsx label { display:block; } input:required + span:after { content: "Required"; } ``` ![CSS-Generated Content Alerting the User of a Required Input Field](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/css-generated-content-input-required.png?resize=286%2C62&ssl=1) If you remove the `required` attribute with JavaScript, the `"Required"` message will be removed automatically. Note: `<input>` cannot contain other elements. Therefore, it cannot contain the `:after` or `:before` pseudo-elements. Hence, we need to use another `<span>` element. We can do the same thing with the `:valid` and `:invalid` pseudo-classes. ### `:hover` and `:focus` `:hover` selects an element when the mouse pointer hovers over it. `:focus` selects an element when it is focused. These pseudo-classes are often used to create transitions and slight visual changes. For example, you can change the width, background color, border color, shadow strength, etc. Using the `transition` property with these properties makes those changes much smoother. Here are some hover effects on form elements (try hovering over the elements). {% codepen https://codepen.io/SupunKavinda/pen/yLNKZqg %} When users see elements subtly change when they hover over them, they get the sense that the element is actionable. This is an important consideration when designing form elements. Did you notice that (in some browsers) a blue outline appears when focusing on form elements? You can use the `:focus` pseudo-class to remove it and add more effects when the element is focused.
The following code removes the focus outline for all elements. ```jsx *:focus {outline:none !important} ``` To add a focus outline: ```jsx input[type=text]:focus { background-color: #ffd969; border-color: #000; // and any other style } ``` Have you seen search inputs that scale when focused? Try this input. {% codepen https://codepen.io/SupunKavinda/pen/KKpoJJa %} ## 6. Noncustomizable inputs Styling form elements has historically been a tall order. There are some form elements that we don’t have much control over styling. For example: - `<input type="color">` - `<input type="file">` - `<progress>` - `<option>`, `<optgroup>`, `<datalist>` These elements are provided by the browser and styled based on the OS. The only way to style these elements is to use custom controls, which are created using stylable HTML elements such as `div`, `span`, etc. For example, when styling `<input type="file">`, we can hide the default input and use a custom button. Custom controls for form elements are developed for most major JavaScript libraries. You can find them on [GitHub](https://github.com/). ## Conclusion You should now understand how to style simple form elements and how to use custom controls when browser input fields are difficult to style. As I stated at the beginning of this post, these are only the basic building blocks of CSS form styling. You should use these tips as a foundation to let your imagination run wild. As a closing tip, remember to make all your forms [responsive](https://developer.mozilla.org/en-US/docs/Learn/CSS/CSS_layout/Responsive_Design). Looking for more form design inspiration? Check out this [CodePen collection](https://codepen.io/collection/KuDsH/). * * * ## Is your frontend hogging your users' CPU? As web frontends get increasingly complex, resource-greedy features demand more and more from the browser. 
If you’re interested in monitoring and tracking client-side CPU usage, memory usage, and more for all of your users in production, [try LogRocket.](https://logrocket.com/signup/) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/403rye2ptzx994uhktk1.png) [LogRocket](https://logrocket.com/signup/) is like a DVR for web apps, recording everything that happens in your web app or site. Instead of guessing why problems happen, you can aggregate and report on key frontend performance metrics, replay user sessions along with application state, log network requests, and automatically surface all errors. Modernize how you debug web apps — [Start monitoring for free.](https://logrocket.com/signup/) * * * The post [How to style forms with CSS: A beginner’s guide](https://blog.logrocket.com/how-to-style-forms-with-css-a-beginners-guide/) appeared first on [LogRocket Blog](https://blog.logrocket.com).
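Circling back to step 1 of the tutorial, the `box-sizing` arithmetic can be sanity-checked with plain numbers. This sketch (a hypothetical helper, not part of the original article) mirrors how the browser computes rendered width under each model:

```javascript
// Rendered width for the .some-class example (width: 200px; padding: 20px).
// Under content-box, padding and border are added on top of the declared
// width; under border-box they are carved out of it.
function renderedWidth({ width, padding, border = 0, boxSizing = 'content-box' }) {
  if (boxSizing === 'border-box') {
    return width
  }
  return width + 2 * padding + 2 * border
}

console.log(renderedWidth({ width: 200, padding: 20 })) // 240
console.log(renderedWidth({ width: 200, padding: 20, boxSizing: 'border-box' })) // 200
```

The `240` vs `200` difference is exactly the surprise that `* { box-sizing: border-box; }` removes.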
bnevilleoneill
286,768
Quickly build schema-based forms in React with uniforms
Written by Godwin Ekuma✏️ uniforms are React libraries for building form-based web UIs from every...
0
2020-04-15T15:02:24
https://blog.logrocket.com/quickly-build-schema-based-forms-in-react-with-uniforms/
react, tutorial, webdev
--- title: Quickly build schema-based forms in React with uniforms published: true date: 2020-03-23 17:30:51 UTC tags: react, tutorial, webdev canonical_url: https://blog.logrocket.com/quickly-build-schema-based-forms-in-react-with-uniforms/ cover_image: https://dev-to-uploads.s3.amazonaws.com/i/3ej4i4blkyostjign2ux.png --- **Written by [Godwin Ekuma](https://blog.logrocket.com/author/godwinekuma/)**✏️ [uniforms](https://uniforms.tools/) are React libraries for building form-based web UIs from every schema. A schema is a formal representation of data, data types, allowable values, default values, required values, etc. These web UIs are designed for accepting, modifying, and presenting data and are usually embedded within an application. In this tutorial, we’ll demonstrate how you can use uniforms to efficiently build schema-based forms in React. ## Why do you need uniforms? Manually writing HTML templates and the logic for data binding is hard, especially in a relatively large application. Forms are even trickier because they usually involve functionalities that are more advanced than [data binding](https://dev.to/bnevilleoneill/form-input-binding-in-vue-1298-temp-slug-8717084), such as validation and submission. uniforms eliminate the stress of writing templates and the JavaScript logic for data binding. They facilitate form rendering and take care of state management, validation, and submission. Below are the core features of uniforms, according to the [official documentation](https://uniforms.tools/docs/what-are-uniforms/). - Automatic forms generation - Fields capable of rendering every schema - Helper for creating custom fields with one line - Inline and asynchronous form validation - Various schemas integration - Wide range of themes support [![LogRocket Free Trial Banner](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/f760c-1gpjapknnuyhu8esa3z0jga.png?resize=1200%2C280&ssl=1)](https://logrocket.com/signup/) ## How do uniforms work?
uniforms are defined by the following. 1. **Schema** — Compatible schemas include [GraphQL schema](https://graphql.org/learn/schema/), [JSON Schema](https://json-schema.org/), [`uniforms-bridge-simple-schema`](https://www.npmjs.com/package/uniforms-bridge-simple-schema), and [`uniforms-bridge-simple-schema-2`](https://www.npmjs.com/package/uniforms-bridge-simple-schema-2) 2. **Theme** — The theme is a package that contains prestyled form components from any of today’s [popular style libraries](https://dev.to/bnevilleoneill/top-10-react-component-libraries-for-2020-4cm7), such as AntDesign, Bootstrap 3, Bootstrap 4, Material Design, Semantic, unstyled HTML, etc. 3. **Schema bridge** — A bridge is a unified schema mapper that uniforms’ internals use to operate on the schema data, validate the form, and generate form errors. uniforms has a predefined schema-to-bridge mapper, [uniforms-bridge-json-schema](https://www.npmjs.com/package/uniforms-bridge-json-schema), that can be used to [create a schema bridge](https://uniforms.tools/docs/uth-bridge-concept/) ## Using uniforms Let’s say the marketing team at your company wants to collect lead information and you’ve agreed to help. Your task is to use uniforms to create a form for users to contact the marketing team. ### Installation To use uniforms, you must first install the dependent packages. We’ll use JSON Schema to specify the data format, Bootstrap 4 as our UI theme, and [Ajv](https://github.com/epoberezkin/ajv) for schema validation. To install the required packages run the command below. ```jsx npm install uniforms uniforms-bridge-json-schema uniforms-bootstrap4 bootstrap@4.4.1 ajv ``` ### Create a schema Define the shape of the form by defining a plain JSON, which is a valid part of a JSON Schema.
```jsx // schema.js import { LongTextField } from 'uniforms-bootstrap4'; const schema = { title: 'Lead Form', type: 'object', properties: { name: { type: 'string' }, email: { type: 'string' }, phone: { type: 'integer', minimum: 0, maximum: 100 }, reason: { type: 'string', options: [ { label: 'Product Questions', value: 'product-questions' }, { label: 'Online Order Support', value: 'online-order-support' }, { label: 'Sales-support', value: 'sales-support' }, { label: 'Events', value: 'events' } ] }, message: { type: 'string', uniforms: { component: LongTextField } } }, required: ['name', 'email', 'message'] }; ``` ### Create a bridge For uniforms to make use of any schema, you must create a bridge for the schema. The following schemas are compatible with `uniforms`. - `GraphQLBridge` in `uniforms-bridge-graphql` - `JSONSchemaBridge` in `uniforms-bridge-json-schema` - `SimpleSchema2Bridge` in `uniforms-bridge-simple-schema-2` - `SimpleSchemaBridge` in `uniforms-bridge-simple-schema` ```jsx import { JSONSchemaBridge } from 'uniforms-bridge-json-schema'; const bridge = new JSONSchemaBridge(schema); ``` Though JSON Schema is easy to use with uniforms, it doesn’t come with validation out of the box. You must manually define a validator to use on your contact form. Let’s use Ajv for validation: ```jsx import Ajv from 'ajv'; const ajv = new Ajv({ allErrors: true, useDefaults: true }); function createValidator(schema) { const validator = ajv.compile(schema); return model => { validator(model); if (validator.errors && validator.errors.length) { throw { details: validator.errors }; } }; } const schemaValidator = createValidator(schema); ``` Now that you have a validator, you can include it as part of the bridge.
```jsx const bridge = new JSONSchemaBridge(schema, schemaValidator); ``` At this point, the `schema.js` file should look like this: ```jsx import Ajv from 'ajv'; import { JSONSchemaBridge } from 'uniforms-bridge-json-schema'; import { LongTextField } from 'uniforms-bootstrap4'; const ajv = new Ajv({ allErrors: true, useDefaults: true }); const schema = { title: 'Lead Form', type: 'object', properties: { name: { type: 'string' }, email: { type: 'string' }, phone: { type: 'integer', minimum: 0, maximum: 100 }, reason: { type: 'string', options: [ { label: 'Product Questions', value: 'product-questions' }, { label: 'Online Order Support', value: 'online-order-support' }, { label: 'Sales-support', value: 'sales-support' }, { label: 'Events', value: 'events' } ] }, message: { type: 'string', uniforms: { component: LongTextField } } }, required: ['name', 'email', 'message'] }; function createValidator(schema) { const validator = ajv.compile(schema); return model => { validator(model); if (validator.errors && validator.errors.length) { throw { details: validator.errors }; } }; } const schemaValidator = createValidator(schema); const bridge = new JSONSchemaBridge(schema, schemaValidator); export default bridge; ``` ### Add the schema to a form Uniforms’ theme packages include a component called `AutoForm` that generates a form from the schema. Pass the schema to `AutoForm` to generate a form. ```jsx import React from "react"; import "./styles.css"; import { AutoForm, AutoFields, ErrorsField, SubmitField } from 'uniforms-bootstrap4'; import LeadSchema from './schema'; export default function App() { return ( <div className="App"> <div className="uniforms"> <AutoForm schema={LeadSchema} onSubmit={(e) => {console.log(e)}}> <h4>Have a question?
Contact Sales</h4> <AutoFields /> <ErrorsField /> <SubmitField /> </AutoForm> </div> </div> ); } ``` ![Schema-Based React Form Built With uniforms](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/schema-based-react-form-1.png?resize=720%2C285&ssl=1) ![Schema-Based React Form With Error Prompts](https://i2.wp.com/blog.logrocket.com/wp-content/uploads/2020/03/schema-based-react-form-error-prompts.png?resize=720%2C402&ssl=1) ## Conclusion Now you have the basic knowledge you need to create schema-based forms in React using uniforms. The library comes with myriad other prebuilt form elements you can explore in the [uniforms documentation](https://uniforms.tools/docs/api-fields/). The snippets used in this tutorial come from an [example app](https://codesandbox.io/s/agitated-feynman-itwiy?from-embed). You’re welcome to clone it and play with it yourself. * * * ## Full visibility into production React apps Debugging React applications can be difficult, especially when users experience issues that are difficult to reproduce. If you’re interested in monitoring and tracking Redux state, automatically surfacing JavaScript errors, and tracking slow network requests and component load time, [try LogRocket.](https://www2.logrocket.com/react-performance-monitoring) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/eq752g8qhbffxt3hp9t4.png) [LogRocket](https://www2.logrocket.com/react-performance-monitoring) is like a DVR for web apps, recording literally everything that happens on your React app. Instead of guessing why problems happen, you can aggregate and report on what state your application was in when an issue occurred. LogRocket also monitors your app's performance, reporting with metrics like client CPU load, client memory usage, and more. The LogRocket Redux middleware package adds an extra layer of visibility into your user sessions. LogRocket logs all actions and state from your Redux stores. 
Modernize how you debug your React apps — [start monitoring for free.](https://www2.logrocket.com/react-performance-monitoring) * * * The post [Quickly build schema-based forms in React with uniforms](https://blog.logrocket.com/quickly-build-schema-based-forms-in-react-with-uniforms/) appeared first on [LogRocket Blog](https://blog.logrocket.com).
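One detail worth calling out from the tutorial: the `createValidator` wrapper doesn't depend on Ajv's internals, only on the throw-`{ details }` contract that the bridge relies on. Here is a sketch with a hand-rolled stand-in for `ajv.compile` (an assumption for illustration -- the tutorial itself uses Ajv) that checks only the `required` keyword:

```javascript
// Stand-in for ajv.compile: validates only the `required` keyword and
// records errors on the validator function, mimicking Ajv's shape.
function compile(schema) {
  const validator = (model) => {
    const errors = (schema.required || [])
      .filter((field) => model[field] === undefined)
      .map((field) => ({ keyword: 'required', params: { missingProperty: field } }))
    validator.errors = errors.length ? errors : null
    return validator.errors === null
  }
  return validator
}

// Same wrapper shape as in the tutorial: throw { details } on failure.
function createValidator(schema) {
  const validator = compile(schema)
  return (model) => {
    validator(model)
    if (validator.errors && validator.errors.length) {
      throw { details: validator.errors }
    }
  }
}

const schemaValidator = createValidator({ required: ['name', 'email'] })
schemaValidator({ name: 'Ada', email: 'ada@example.com' }) // passes silently
try {
  schemaValidator({ name: 'Ada' })
} catch (e) {
  console.log(e.details[0].params.missingProperty) // "email"
}
```

Because the bridge only sees this thrown `{ details }` object, any validator that honors the same contract can be swapped in for Ajv.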
bnevilleoneill
286,807
Almost 80% of companies in Brazil ship code at least once in a week, according to study
Between January 29th and March 5th, we conducted a survey to get a technology landscape of the Brazil...
0
2020-03-23T19:29:35
https://sourcelevel.io/blog/almost-80-of-companies-in-brazil-ship-code-at-least-once-in-a-week-according-to-study
developmenttools, deploy, leadtime, metrics
--- title: Almost 80% of companies in Brazil ship code at least once in a week, according to study published: true date: 2020-03-23 19:03:31 UTC tags: Development Tools,deploy,lead time,metrics canonical_url: https://sourcelevel.io/blog/almost-80-of-companies-in-brazil-ship-code-at-least-once-in-a-week-according-to-study --- Between January 29th and March 5th, we conducted a survey to map the technology landscape of Brazilian startups and companies. We released the [full report](https://sourcelevel.io/technology-landscape-in-brazilian-startups-and-companies) with the results of the study. The questionnaire had all of its questions in Portuguese, to restrict the audience. We used our communication channels, mainly Twitter and LinkedIn, to reach respondents. Although the survey received answers from Brazilians working for companies outside Brazil, 301 out of 349 respondents work for Brazilian companies. Respondents working for companies based in all five regions of the country (North Region, Northeast Region, Central-West Region, Southeast Region, and South Region) answered the form. One of the more than 30 questions asked how frequently their companies deploy. 41.72% of the respondents said they deploy at least once a day, while another 37.57% deploy at least once a week. Together they account for almost 80%, meaning that most companies deploy to production at least once a week. This makes sense given the number of companies that have adopted agile and the digital transformation wave currently underway. At a certain point, we had to contend with the fact that there were too few answers from women and non-binary people. Despite our efforts, we got nearly 10% of the responses from women. When searching for other studies, we found that the number of women in tech varies from 8% to 25%, depending on the survey. Our research is within this range, although we wish we had received more responses from women and non-binary people.
[![](https://sourcelevel.io/wp-content/uploads/hiring-in-brazilian-companies-diversity-minorities-developers-410x1024.png)](https://sourcelevel.io/wp-content/uploads/hiring-in-brazilian-companies-diversity-minorities-developers.png) Below, I listed the main topics of the survey. ## General questions - **Companies and IT department profiles**: this set of questions gives an idea of the industry, the number of employees, and time in the market. The questionnaire also included questions about the location of the headquarters and the work arrangement. Based on that information, we can tell that the answers cover many different combinations of company profiles. - **Research participant profile**: questions around participants’ demographic and professional information are crucial for the survey. They draw the universe of the respondents and give credibility to the data. - **Hiring Process:** Are companies concerned about hiring people belonging to minority groups and increasing diversity? How many developers did companies hire in the last 12 months? Those are some of the questions around this topic. ## Development Flow We covered the development flow from conception to deployment. The survey had a specific question about the methodology adopted by the company. - **Conception**: this phase of development is where the idea becomes an epic or a feature. Data show the most used software to document software demands and how companies are handling changes during development. - **Development**: the survey covers the programming language, coding platform, version control system, and practices of the teams. This section is very insightful. - **Code Review:** How do companies tackle code review? Is it performed just within teams, or is it an institutionalized practice? - **Testing:** The most exciting finding in this category of questions is that few companies fully automate tests, and even fewer have tests run by a QA person or a specialist.
- **Deploying:** this section surveys which CI/CD software is used, among other questions like whether deploys are automated and their frequency. - **Metrics of the development process:** The most used metrics for development, according to our study, are Lead Time and Throughput. Among other questions, our study also reveals the most desired metrics. ## Are you interested in the numbers? We prepared a **report in English** , including all the questions surveyed. **[Read the full report for free!](https://sourcelevel.io/technology-landscape-in-brazilian-startups-and-companies#access-full-report)** The post [Almost 80% of companies in Brazil ship code at least once in a week, according to study](https://sourcelevel.io/blog/almost-80-of-companies-in-brazil-ship-code-at-least-once-in-a-week-according-to-study) appeared first on [SourceLevel](https://sourcelevel.io).
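As a quick check on the headline figure, the "almost 80%" is simply the sum of the daily and weekly cohorts reported in the article:

```javascript
// The two deploy-frequency cohorts reported in the survey, in percent.
const deployDaily = 41.72  // deploy at least once a day
const deployWeekly = 37.57 // deploy at least once a week

// Their sum is the "almost 80%" in the title.
const atLeastWeekly = deployDaily + deployWeekly
console.log(atLeastWeekly.toFixed(2) + '%') // "79.29%"
```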
georgeguimaraes
286,871
I made a "Simon Game" variation using VueJS 🎮
An "infinite" game based on "Simon Game"
0
2020-03-23T22:28:25
https://dev.to/felipperegazio/i-made-a-simon-game-variation-using-vuejs-3nmc
showdev, game, vue, frontend
--- title: I made a "Simon Game" variation using VueJS 🎮 published: true description: An "infinite" game based on "Simon Game" tags: #showdev #game #vuejs #frontend cover_image: https://felippe-regazio.github.io/memory-lights/img/memory-lights.png --- Recently I made this kind of [Simon Game](https://en.wikipedia.org/wiki/Simon_(game)) using VueJS. > Simon is an electronic game of memory skill invented by Ralph H. Baer and Howard J. Morrison, working for toy design firm Marvin Glass and Associates,[1] with software programming by Lenny Cope. The device creates a series of tones and lights and requires a user to repeat the sequence. In my implementation, the previous sequence is not memorized. Every time you hit the right sequence, a completely new variation with a new step is proposed by the computer. This version is also "infinite": test your limits. Here is the game link: https://felippe-regazio.github.io/memory-lights/ Here is the source code: https://github.com/felippe-regazio/memory-lights-source My best was level 12 🙃
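The rule described above -- a brand-new random sequence each round instead of appending one step to the old one -- can be sketched in a few lines of plain JavaScript (an illustration of the rule, not the game's actual Vue source):

```javascript
const COLORS = ['red', 'green', 'blue', 'yellow']

// Each round proposes a completely new random sequence whose length
// equals the current level, rather than extending the previous one
// as the classic Simon does.
function newSequence(level) {
  const seq = []
  for (let i = 0; i < level; i++) {
    seq.push(COLORS[Math.floor(Math.random() * COLORS.length)])
  }
  return seq
}

// A guess wins the round only if it matches the sequence exactly.
function checkGuess(sequence, guess) {
  return sequence.length === guess.length &&
    sequence.every((color, i) => color === guess[i])
}

const seq = newSequence(3)
console.log(seq.length) // 3
console.log(checkGuess(seq, seq.slice())) // true
```

Since nothing carries over between rounds, the game can go on indefinitely -- which is where the "infinite" claim comes from.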
felipperegazio
286,872
I.can.has("cheezeburger")
When I was a youngin', there was a site called icanhas.cheezeburger.com... I have been eyeing the mo...
0
2020-03-23T22:31:41
https://dev.to/omnoms/i-can-has-cheezeburger-39pg
When I was a youngin', there was a site called icanhas.cheezeburger.com... I have been eyeing the mocha/chai/expect framework just for the sake of understanding/learning how it works in relation to the JavaScript internals to get that kind of behavior. And I was reminded of the old website, icanhas.cheezeburger.com, when I saw someone name their check for undefined or null in a nested object "has". So... I set out to improve on that code (because it was relying on implicit checks) as well as making my own funny version of it. May I introduce the "I" function.

```js
var food = {
  hamburger: false
}

I(food).can.has("cheezeburger");
```

So this is using ES2015-style code, in that it's not a class that you strictly need to new up. It also uses Object.defineProperty to make getters/setters, and it's compatible with IE11. However, it also uses an anti-pattern which really isn't OK strictly speaking, since there's a getter that modifies state. But the way this function is used is not a common/natural pattern in the JavaScript world anyway, only in unit tests, so you could argue that it's OK.

So let's walk through it. The first thing is that the function needs to keep to a fluent-like API where you can just keep on chaining methods/functions/properties from it, because they all return a modified version of the function. That's why a getter that modifies state is considered OK here: it's a necessary evil of the fluent-like API, providing functionality that would otherwise have required a function call/invocation. This means that every property needs to return an instance of this unless it's a terminating expression/word, like ".is.undefined" or similar. Now, undefined was unfortunately a protected word.
I was not able to override it in ES2015-style code, so I would have to dig a little deeper to see how mocha/chai does it.

> After initial checks it seems that it's using an Object internal function to override the default for a property if it already exists. So I'm not gonna do that for this.

So non-terminal getters always returning the same instance is how we chain them without using a function call, since a property-getter is accessed just like any JavaScript object property:

```
var normalObj = { prop: "value" }

> normalObj.prop
"value"

var objWithGetter = {}
Object.defineProperty(objWithGetter, 'prop', { get: function() { return "value"; }});

> objWithGetter.prop
"value"
```

Now it may not be apparent why this is an advantage, but it sort of gives you access to a function that can perform evaluation at access time rather than pre-computing a value into a normal property. This is the key ingredient for this type of functionality.

So what properties do we want? Well.... "I" is taken care of as the function name. We could choose to make it a static-like function that takes no parameters, but then we'd have to instantiate it much later in the chain, and there's no real advantage to that. So, true to the mocha/chai pattern, we'll add the object we're working with when calling the function: `I(food)`.

_can_, as a property, really doesn't add anything to the chain. It could potentially add a flag saying that following this statement in the chain the end property has to be a function, or something, but it's a stretch of the mind to attribute the name "can" as a moniker for a function. For me, it doesn't add anything, so all it does is just return the same instance untouched.

_has_, as a property, is just a function that checks for the existence of something and returns true/false.
The function itself is based off of this [SO article](https://stackoverflow.com/questions/23808928/javascript-elegant-way-to-check-nested-object-properties-for-null-undefined) with a few improvements. It would be a very small function/class if we ended it there, so I extended it with a _not_ as well as null/undef checks, to make it slightly more useful.

_not_, as a property, actually modifies the state of the instance before returning an instance reference. It flips an internal boolean when used, so technically speaking you could chain it indefinitely and it would toggle on/off all the time.

_null_, as a property, just checks if the initial object provided is equal to null.

_undef_, as a property, checks if the initial object provided is equal to undefined.

So with these properties, _null_, _undef_ and _has_ are the terminators, as they don't return an instance of the function/class when they return a value. Ofc you could make it so that they do, and add a value property that contains the end result, so that you always have an instance to work with. But I guess that's just a flavor of choice.

Trying to reconcile not instantiating/newing up a class/function but still maintaining an instance for the sake of syntax was a bit trickier than I wanted, so I ended up just making sure that we're not overwriting "this.<prop>" whenever the function was called. This, in the end, became cleaner than doing it the "proper" way.
```js
function I(obj) {
  this._obj = obj;
  this._negate = false;

  if (!this.can) Object.defineProperty(this, 'can', {
    get: function() { return this; }
  });

  if (!this.is) Object.defineProperty(this, 'is', {
    get: function() { return this; }
  });

  if (!this.not) Object.defineProperty(this, 'not', {
    get: function() {
      this._negate = !this._negate;
      return this;
    }
  });

  function everyProp(currObj) {
    return function(prop) {
      if (typeof currObj === "undefined" || currObj === null || !(prop in currObj)) return false;
      currObj = currObj[prop];
      return true;
    };
  }

  this.has = function(key) {
    var tObj = this._obj;
    const returnObj = key.split(".").every(everyProp(tObj));
    if (this._negate) return !returnObj;
    return returnObj;
  };

  if (!this.undef) Object.defineProperty(this, 'undef', {
    get: function() {
      if (this._negate) return typeof this._obj !== "undefined";
      return typeof this._obj === "undefined";
    }
  });

  if (!this.null) Object.defineProperty(this, 'null', {
    get: function() {
      if (this._negate) return this._obj !== null;
      return this._obj === null;
    }
  });

  return this;
}

var food = {
  hamburger: false
}

console.log(I(food).can.not.not.has("cheezeburger"));
```

So can I?

`node has.js`

> false

FeelsBadMan
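Since `has` splits its key on ".", it also walks nested objects, which the example above never shows. Here is a small sketch of that behavior. The trimmed copy of `I` below and the `config` object are mine, for illustration only; the `instanceof` guard is an addition that keeps the call-without-`new` style working even in strict mode:

```javascript
// Trimmed-down copy of the I() helper, keeping only the can/not/has parts.
function I(obj) {
  // Illustration-only guard: lets I(x) work without `new`, even in strict mode.
  if (!(this instanceof I)) return new I(obj);
  this._obj = obj;
  this._negate = false;
  // Non-terminal getters just hand the instance back so the chain continues.
  Object.defineProperty(this, 'can', { get: function() { return this; } });
  Object.defineProperty(this, 'not', {
    get: function() { this._negate = !this._negate; return this; }
  });
  // has() walks one dotted segment at a time through the wrapped object.
  this.has = function(key) {
    var curr = this._obj;
    var found = key.split('.').every(function(prop) {
      if (typeof curr === 'undefined' || curr === null || !(prop in curr)) return false;
      curr = curr[prop];
      return true;
    });
    return this._negate ? !found : found;
  };
}

var config = { server: { tls: { cert: 'dev.pem' } } };

console.log(I(config).can.has('server.tls.cert'));    // true
console.log(I(config).can.has('server.tls.key'));     // false
console.log(I(config).can.not.has('server.tls.key')); // true
```

The dotted lookup is the same `key.split(".").every(...)` walk as in the full version; only the undef/null terminators were dropped to keep the sketch short.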
omnoms
286,937
Train Simulation Problems - Update
Thanks for yesterday. Hey everyone, So, I have gotten pretty far with the Train Simulatio...
0
2020-03-23T23:40:12
https://dev.to/sonandrew/train-simulation-problems-update-15fm
java, help
## Thanks for yesterday.

Hey everyone,

So, I have gotten pretty far with the Train Simulation from the help I got yesterday. Thank you to all who helped! I really appreciate it. Most of the problems I had were with things I thought I had fixed already but didn't. I think I just needed some extra eyes on this problem. Usually, everyone is by themselves trying their hardest to figure out a problem, but sometimes all you need to do is put your ego and pride aside and ask for help.

## Update.

To update you on this project, I have pretty much solved all those pesky problems I had, but then others have popped up...

> **Peskipiksi pesternomi!**
> **It had no effect...**

The first issue was that the number of stations entered was not how many stations you went to. So let's say I entered *3* as the number of stations. I would instead get *100* stations, because the getCurrentStation method was not actually returning the current station.

The other major issue I had involved the station name. It turns out I was trying to get a char instead of an integer to return the current station index. **SMH...** I felt so dumb, but that's why you ask for help. I thought I had already corrected it when I copied and changed it from another method.

## Current problems.

1. With these problems solved, I am now having a problem with getting the current station... Yes, this is unexpected considering I just fixed it so it wouldn't print out *100* stations anymore. Now it is only giving me one station, when it is supposed to be adding *1* after each station and subtracting *1* if the stopCounter is not less than the number of stations.
2. The other problem I am having is with the display of the Usage Report. It is supposed to be shown at the end of the program after you have already run a simulation. If the current train's create counter is less than *0*, then it is supposed to display the usage report with the user name, the create counter, and the run counter.
So far I have not been able to figure out why this is happening.

## I am asking once again.

Well, like the great **Bernard Sanders**, it's that time again when I ask you for your help. I am learning a lot from these posts and think I will continue to post about these school projects. If you can lend your eyes toward [this project](https://gist.github.com/sonAndrew/dde70230fe184eb40993ed81f114a713) and tell me if you see any discrepancies in the code that would cause these problems, I would greatly appreciate it.

Thanks for reading.
sonandrew
286,979
Deploying Service Based Architecture Application on Amazon’s ECS (Elastic Container Service)
Link: https://blog.joshsoftware.com/2019/03/12/deploying-service-based-architecture-application-on-am...
0
2020-03-24T03:03:06
https://dev.to/shekhar12020/deploying-service-based-architecture-application-on-amazon-s-ecs-elastic-container-service-4mcl
Link: https://blog.joshsoftware.com/2019/03/12/deploying-service-based-architecture-application-on-amazons-ecs-elastic-container-service/

Why deploy a container for each service? Deploying all services on a single machine is possible, but we should refrain from it. If we deploy all services on a single machine, then we are not utilising the benefits of a service-based architecture (except a manageable/easy-to-upgrade codebase).
shekhar12020
286,983
Can anyone suggest a good library for OCR in expo
I have tried using ejecting with expo to bare react native project and then using "react-native-tesse...
0
2020-03-24T03:21:52
https://dev.to/keyurpatel8118/can-anyone-suggest-a-good-library-for-ocr-in-expo-4h93
I have tried ejecting with Expo to a bare React Native project and then using the "react-native-tesseract-ocr" library (https://github.com/jonathanpalma/react-native-tesseract-ocr). But it's not working out for me.
keyurpatel8118
287,023
How To Build Web Components Using Stencil JS
Stencil is a compiler that generates Web Components (more specifically, Custom Elements). Stencil co...
0
2020-03-24T05:55:55
https://enappd.com/blog/build-web-components-using-stencil-js/51
react, stenciljs, component, javascript
<main role="main"><article class=" u-minHeight100vhOffset65 u-overflowHidden postArticle postArticle--full is-supplementalPostContentLoaded is-withAccentColors" lang="en"><div class="postArticle-content js-postField js-notesSource editable" id="editor_6" g_editable="true" role="textbox" contenteditable="true" data-default-value="Title Tell your story…" spellcheck="false"><section name="e1ca" class="section section--body section--first section--last"><div class="section-divider"><hr class="section-divider"></div><div class="section-content"><div class="section-inner sectionLayout--insetColumn"><p name="6de2" class="graf graf--p graf-after--figure">Stencil is a compiler that generates Web Components (more specifically, Custom Elements). Stencil combines the best concepts of the most popular frameworks into a simple build-time tool, and the resulting web components can be used everywhere in your JavaScript projects (Angular, React, Vue) without copying the same thing again and again. You can also use them in vanilla JavaScript.</p><blockquote name="7c66" class="graf graf--blockquote graf-after--p">Stencil was created by the <a href="http://ionicframework.com/" class="markup--anchor markup--blockquote-anchor" rel="noopener" target="_blank">Ionic Framework</a> team to help build faster, more capable components that worked across all major frameworks.</blockquote><h3 name="6832" class="graf graf--h3 graf-after--blockquote">Goals and Features of&nbsp;Stencil</h3><p name="cc8c" class="graf graf--p graf-after--h3">Stencil aims to combine the best concepts of the most popular front-end frameworks into a compile-time tool rather than a run-time tool.
It’s important to stress that Stencil’s goal is to <em class="markup--em markup--p-em">not</em> become or be seen as a “framework”; rather, the goal is to provide a great developer experience and the tooling expected from a framework.</p><h4 name="a64a" class="graf graf--h4 graf-after--p">Virtual DOM</h4><p name="e443" class="graf graf--p graf-after--h4">Basically, the <strong class="markup--strong markup--p-strong">virtual DOM</strong> (VDOM) is a programming concept where an ideal, or “<strong class="markup--strong markup--p-strong">virtual</strong>”, representation of a UI is kept in memory and synced with the “real” <strong class="markup--strong markup--p-strong">DOM</strong> by a library.</p><h4 name="cf06" class="graf graf--h4 graf-after--p">Async rendering (inspired by React&nbsp;Fiber)</h4><p name="9900" class="graf graf--p graf-after--h4">We are also able to make an <strong class="markup--strong markup--p-strong">asynchronous</strong> call before <strong class="markup--strong markup--p-strong">rendering</strong> the data implemented in a class.</p><h4 name="89d8" class="graf graf--h4 graf-after--p">Reactive data-binding</h4><p name="358c" class="graf graf--p graf-after--h4">Reactive data-binding simply means that a flow of changes in your <strong class="markup--strong markup--p-strong">data</strong> drives action. Whether the change comes from both the DOM and the <strong class="markup--strong markup--p-strong">data</strong> in your application or just one of those, does not really matter.</p><h4 name="d87c" class="graf graf--h4 graf-after--p">TypeScript</h4><p name="2669" class="graf graf--p graf-after--h4">TypeScript is an open-source programming language developed and maintained by Microsoft. It is a strict syntactical superset of JavaScript and adds optional static typing to the language.
TypeScript is designed for development of large applications and trans compiles to JavaScript</p><h4 name="0eff" class="graf graf--h4 graf-after--p">JSX</h4><p name="d9c3" class="graf graf--p graf-after--h4"><strong class="markup--strong markup--p-strong">JSX</strong> is a preprocessor step that adds XML syntax to JavaScript. You can definitely use Stencil without <strong class="markup--strong markup--p-strong">JSX</strong> but <strong class="markup--strong markup--p-strong">JSX</strong> makes Stencil components a lot more elegant. Just like XML, <strong class="markup--strong markup--p-strong">JSX </strong>tags have a tag name, attributes, and children. If an attribute value is enclosed in quotes, the value is a string.</p><h4 name="9b1c" class="graf graf--h4 graf-after--p">Live reload</h4><p name="eaec" class="graf graf--p graf-after--h4"><strong class="markup--strong markup--p-strong">Live reloading reloads</strong> or refreshes the entire app when a file changes.</p><h4 name="da39" class="graf graf--h4 graf-after--p">Web Standards</h4><p name="9d87" class="graf graf--p graf-after--h4">Components generated by Stencil, in the end, are built on top of web components, so they work in any major framework or with no framework at all. Additionally, other standards heavily relied on include ES Modules and dynamic imports which have proven to replace traditional bundlers which add unnecessary complexities and run-time JavaScript. 
By using web-standards, developers can learn and adopt a standard API documented across the world, rather than custom framework APIs that continue to change.</p><h4 name="a30d" class="graf graf--h4 graf-after--p">Wide Browser&nbsp;Support</h4><p name="18e5" class="graf graf--p graf-after--h4">For the small minority of browsers that do not support modern browser features and APIs, Stencil will automatically polyfill them on-demand.</p><h4 name="58ea" class="graf graf--h4 graf-after--p">Automatic Optimizations</h4><p name="4bdb" class="graf graf--p graf-after--h4">There are countless optimizations and tweaks developers must do to improve performance of components and websites. With a compiler, Stencil is able to analyze component code as an input, and generate optimized components as an output.</p><h4 name="b006" class="graf graf--h4 graf-after--p">Run-time Performance</h4><p name="a81a" class="graf graf--p graf-after--h4">Instead of writing custom client-side JavaScript which every user needs to download and parse for the app to work, Stencil instead prefers to use the already amazing APIs built directly within the browser. These APIs include Custom Elements</p><h4 name="88b8" class="graf graf--h4 graf-after--p">Tiny API</h4><p name="44ac" class="graf graf--p graf-after--h4">Stencil purposely does not come with a large custom API which needs to be learned and re-learned, but rather heavily relies on, you guessed it, web standards. Again, our goal is to not create yet-another-framework, but rather provide tooling for developers to generate future-friendly components using APIs already baked within the browser. 
The smaller the API, the easier to learn, and the less that can be broken.</p><h3 name="dacb" class="graf graf--h3 graf-after--p">Getting Started</h3><h4 name="c27e" class="graf graf--h4 graf-after--h3">Steps we will follow for adding Stencil to our simple JavaScript application</h4><ol class="postList"><li name="c42a" class="graf graf--li graf-after--h4">Creating a simple Stencil component</li><li name="580a" class="graf graf--li graf-after--li">Modify this component according to our requirements</li><li name="80ae" class="graf graf--li graf-after--li">Add this component into our JavaScript application</li></ol><h4 name="35cc" class="graf graf--h4 graf-after--li">Creating a simple Stencil component</h4><p name="21f0" class="graf graf--p graf-after--h4">Stencil requires a recent LTS version of <a href="https://nodejs.org/" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">NodeJS</a> and npm. Make sure you’ve installed and/or updated Node before continuing.</p><blockquote name="7ddf" class="graf graf--blockquote graf-after--p"><em class="markup--em markup--blockquote-em">Note that you will need to use npm 6 or higher.</em></blockquote><p name="f5c5" class="graf graf--p graf-after--blockquote">To create a project with Stencil, open your system terminal and type</p><pre name="e4f3" class="graf graf--pre graf-after--p">npm init stencil</pre><figure tabindex="0" contenteditable="false" name="cfa7" class="graf graf--figure graf-after--pre"><div class="aspectRatioPlaceholder is-locked" style="max-width: 700px; max-height: 404px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 57.8%;"></div><img class="graf-image" data-image-id="1*KtN2Rl0RTPT49jpmychtVw.png" data-width="1101" data-height="636" src="https://cdn-images-1.medium.com/max/720/1*KtN2Rl0RTPT49jpmychtVw.png"><div class="crosshair u-ignoreBlock"></div></div><br/><figcaption class="imageCaption" contenteditable="true" data-default-value="Type caption for image
(optional)">Stencil Init</figcaption></figure><p name="8241" class="graf graf--p graf-after--figure">Stencil can be used to create standalone components or entire apps. After running init you will be provided with a prompt so that you can choose the type of project to start.</p><p name="d0c2" class="graf graf--p graf-after--p"><strong class="markup--strong markup--p-strong">In this Blog, we will cover components part of the stencil.</strong></p><p name="5d0c" class="graf graf--p graf-after--p">so after clicking on component, it will ask you for the component name</p><figure tabindex="0" contenteditable="false" name="d0e8" class="graf graf--figure graf-after--p"><div class="aspectRatioPlaceholder is-locked" style="max-width: 700px; max-height: 404px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 57.8%;"></div><img class="graf-image" data-image-id="1*K_tkptBjX2U2E3fvqUq-Rg.png" data-width="1103" data-height="637" src="https://cdn-images-1.medium.com/max/720/1*K_tkptBjX2U2E3fvqUq-Rg.png"><div class="crosshair u-ignoreBlock"></div></div><br/><figcaption class="imageCaption" contenteditable="true" data-default-value="Type caption for image (optional)">Component name&nbsp;Stencil</figcaption></figure><p name="c31e" class="graf graf--p graf-after--figure">After the name, it will ask for confirmation of the name</p><figure tabindex="0" contenteditable="false" name="74cd" class="graf graf--figure graf-after--p"><div class="aspectRatioPlaceholder is-locked" style="max-width: 700px; max-height: 408px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 58.3%;"></div><img class="graf-image" data-image-id="1*4DlY4vbaCQWj6JSUiW1EgA.png" data-width="1101" data-height="642" src="https://cdn-images-1.medium.com/max/720/1*4DlY4vbaCQWj6JSUiW1EgA.png"><div class="crosshair u-ignoreBlock"></div></div><br/><figcaption class="imageCaption" contenteditable="true" data-default-value="Type caption for image (optional)">Confirm Stencil</figcaption></figure><p 
name="0b68" class="graf graf--p graf-after--figure">when you press ‘<strong class="markup--strong markup--p-strong">Y</strong>’ here and press <strong class="markup--strong markup--p-strong">enter</strong> key. it will create your component</p><figure tabindex="0" contenteditable="false" name="240b" class="graf graf--figure graf-after--p"><div class="aspectRatioPlaceholder is-locked" style="max-width: 700px; max-height: 404px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 57.699999999999996%;"></div><img class="graf-image" data-image-id="1*bhGZgWwIfHbIZfN990XYSg.png" data-width="1105" data-height="638" src="https://cdn-images-1.medium.com/max/720/1*bhGZgWwIfHbIZfN990XYSg.png"><div class="crosshair u-ignoreBlock"></div></div><br/><figcaption class="imageCaption" contenteditable="true" data-default-value="Type caption for image (optional)">Component Create&nbsp;Stencil</figcaption></figure><p name="bc87" class="graf graf--p graf-after--figure">so once it finishes go to your project folder and run</p><pre name="7743" class="graf graf--pre graf-after--p">npm start</pre><p name="9aa2" class="graf graf--p graf-after--pre">to spin up the development server and it should automatically open in a new tab in your browser if it doesn’t you can find it on <strong class="markup--strong markup--p-strong">localhost:3333</strong></p><figure tabindex="0" contenteditable="false" name="9c3c" class="graf graf--figure graf-after--p"><div class="aspectRatioPlaceholder is-locked" style="max-width: 700px; max-height: 394px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 56.3%;"></div><img class="graf-image" data-image-id="1*wH9-6ObOlqSwQOQyZ_dM1A.png" data-width="1920" data-height="1080" src="https://cdn-images-1.medium.com/max/720/1*wH9-6ObOlqSwQOQyZ_dM1A.png"><div class="crosshair u-ignoreBlock"></div></div><br/><figcaption class="imageCaption" contenteditable="true" data-default-value="Type caption for image (optional)">Stencil 
app&nbsp;started</figcaption></figure><p name="e62d" class="graf graf--p graf-after--figure">and you should see ` Hello, World! I’m Stencil ‘Don’t call me a framework’ JS`</p><p name="e9ca" class="graf graf--p graf-after--p">Okay, so now let’s look into the folder we got from the command to find out what is responsible for this output. I have opened the project folder with Visual Studio Code; of course, you can use WebStorm, Atom, Sublime, whatever you like. Inside this folder, you can see a couple of folders and files. The files are mostly config files. You won’t see a webpack config file; Stencil might use webpack behind the scenes, but it doesn’t expose the configuration to you, because Stencil itself is a compiler and follows the idea of doing that logic for you.</p><p name="2951" class="graf graf--p graf-after--p">In Stencil, you don’t need to configure webpack to bundle all your files and frameworks together, because again, you are not using a framework here.</p><p name="1e95" class="graf graf--p graf-after--p">Now let’s have a look at the source folder, in which we write our own components. As you might guess, inside the components folder you will find one folder named my-component with two files inside of it.</p><p name="8b60" class="graf graf--p graf-after--p">When you check my-component.tsx, it might look a bit like Angular to you: it uses TypeScript and decorators. Then again, it looks a bit like React, because we have the render method where we return some HTML, but that is JSX. We also have a css file where we can style our own component.</p><p name="1e0c" class="graf graf--p graf-after--p">We also have an index.html file in the source folder where we have added our component.
But again, in the end, we just split out a couple of javascript files you can drop into any project, But for testing, it gives us an index.html file</p><h4 name="32b7" class="graf graf--h4 graf-after--p">Modify this component according to our requirements</h4><p name="3873" class="graf graf--p graf-after--h4">Now we will modify existing my-component.tsx file and makes one component which is responsible for creating custom modal.</p><figure tabindex="0" contenteditable="false" name="3b2a" class="graf graf--figure graf--iframe graf-after--p is-defaultValue"><div class="aspectRatioPlaceholder is-locked"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 35.699999999999996%;"></div><div class="iframeContainer">{% gist https://gist.github.com/enappd/91bb802faca33009dbee5a40637d6591.js %}</div></div></figure><p name="c495" class="graf graf--p graf-after--figure">So here we have modified our my-component.tsx file for creating custom modal.</p><figure tabindex="0" contenteditable="false" name="4571" class="graf graf--figure graf-after--p is-defaultValue"><div class="aspectRatioPlaceholder is-locked" style="max-width: 300px; max-height: 533px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 177.7%;"></div><img class="graf-image" data-image-id="1*AkbAbyZr8zAPIVO23z9nwQ.jpeg" data-width="300" data-height="533" src="https://cdn-images-1.medium.com/max/720/1*AkbAbyZr8zAPIVO23z9nwQ.jpeg"><div class="crosshair u-ignoreBlock"></div></div></figure><p name="4e2c" class="graf graf--p graf-after--figure">In this example, we have used 4 decorators</p><ol class="postList"><li name="5e41" class="graf graf--li graf-after--p">Component&nbsp;:- <a href="https://stenciljs.com/docs/component#component-decorator" class="markup--anchor markup--li-anchor" rel="noopener" target="_blank">@Component()</a> declares a new web component</li></ol><p name="3e1c" class="graf graf--p graf-after--li">Each Stencil Component must be decorated with a <code class="markup--code 
markup--p-code">@Component()</code> decorator from the <code class="markup--code markup--p-code">@stencil/core</code> package. In the simplest case, developers must provide an HTML <code class="markup--code markup--p-code">tag</code> name for the component. Often times, a <code class="markup--code markup--p-code">styleUrl</code> is used as well, or even <code class="markup--code markup--p-code">styleUrls</code>, where multiple different style sheets can be provided for different application modes/themes.</p><p name="0988" class="graf graf--p graf-after--p">2. Prop&nbsp;:- <a href="https://stenciljs.com/docs/properties#prop-decorator" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">@Prop()</a> declares an exposed property/attribute</p><p name="4a5b" class="graf graf--p graf-after--p">Props are custom attribute/properties exposed publicly on the element that developers can provide values for. Children components should not know about or reference parent components, so Props should be used to pass data down from the parent to the child. Components need to explicitly declare the Props they expect to receive using the <code class="markup--code markup--p-code">@Prop()</code> decorator. Props can be a <code class="markup--code markup--p-code">number</code>, <code class="markup--code markup--p-code">string</code>, <code class="markup--code markup--p-code">boolean</code>, or even an <code class="markup--code markup--p-code">Object</code> or <code class="markup--code markup--p-code">Array</code>. By default, when a member decorated with a <code class="markup--code markup--p-code">@Prop()</code> decorator is set, the component will efficiently re-render.</p><p name="731b" class="graf graf--p graf-after--p">3. 
Method&nbsp;:- <a href="https://stenciljs.com/docs/methods#method-decorator" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">@Method()</a> declares an exposed public method</p><p name="a7c5" class="graf graf--p graf-after--p">The <code class="markup--code markup--p-code">@Method()</code> decorator is used to expose methods on the public API. Functions decorated with the <code class="markup--code markup--p-code">@Method()</code> decorator can be called directly from the element, ie. they are intented to be callable from the outside!</p><p name="a0f2" class="graf graf--p graf-after--p">4. State&nbsp;:- <a href="https://stenciljs.com/docs/state#state-decorator" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">@State()</a> declares an internal state of the component</p><p name="d3a2" class="graf graf--p graf-after--p">The <code class="markup--code markup--p-code">@State()</code> decorator can be used to manage internal data for a component. This means that a user cannot modify this data from outside the component, but the component can modify it however it sees fit. Any changes to a <code class="markup--code markup--p-code">@State()</code> property will cause the components <code class="markup--code markup--p-code">render</code> function to be called again.</p><p name="a8f9" class="graf graf--p graf-after--p">Here you can modify CSS and tsx according to your requirements.</p><p name="d774" class="graf graf--p graf-after--p">All the css related to your component should go into your-component.css file and you can write the logic for the code in your-component.tsx file</p><h4 name="7615" class="graf graf--h4 graf-after--p">How to use This Component In Simple Java script&nbsp;project</h4><p name="b982" class="graf graf--p graf-after--h4">Integrating a component built with Stencil to a project without a JavaScript framework is straight forward. If you’re using a simple HTML page, you can add your component via a script tag. 
like this:</p><pre name="70df" class="graf graf--pre graf-after--p">&lt;script type="module" src="/build/stencilcomponent.esm.js"&gt;&lt;/script&gt;</pre><pre name="a3d0" class="graf graf--pre graf-after--pre">&lt;script nomodule src="/build/stencilcomponent.js"&gt;&lt;/script&gt;</pre><p name="2f1b" class="graf graf--p graf-after--pre">So after this modification, our index file will look something like this</p><figure tabindex="0" contenteditable="false" name="3b06" class="graf graf--figure graf--iframe graf-after--p is-defaultValue"><div class="aspectRatioPlaceholder is-locked"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 35.699999999999996%;"></div><div class="iframeContainer">{% gist https://gist.github.com/enappd/e9eeb1f9730df5e4feb7c9485e31f901.js %}</div></div></figure><p name="1298" class="graf graf--p graf-after--figure">In this project, I have created some more components following a mobile UI; you can check the <a href="https://github.com/enappd/StencilComponents" class="markup--anchor markup--p-anchor" rel="noopener" target="_blank">complete source code here for the example</a></p><figure tabindex="0" contenteditable="false" name="a449" class="graf graf--figure graf-after--p is-defaultValue"><div class="aspectRatioPlaceholder is-locked" style="max-width: 300px; max-height: 533px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 177.7%;"></div><img class="graf-image" data-image-id="1*cUWw6f2WU7UMchF73h61Dw.jpeg" data-width="300" data-height="533" src="https://cdn-images-1.medium.com/max/720/1*cUWw6f2WU7UMchF73h61Dw.jpeg"><div class="crosshair u-ignoreBlock"></div></div></figure><h3 name="f388" class="graf graf--h3 graf-after--figure">Conclusion</h3><p name="c8cb" class="graf graf--p graf-after--h3">We looked at how to create a custom component in Stencil and how to use it in a simple JavaScript project.
We have also learned how to pass props from a parent component to a child component and how to call a child component's function from the parent.</p><h3 name="047c" class="graf graf--h3 graf-after--p">Next Steps</h3><p name="178a" class="graf graf--p graf-after--h3">Now that you have learned how to create custom components in Stencil and how to use them in a simple JavaScript project, you can also try:</p><ul class="postList"><li name="9f48" class="graf graf--li graf-after--p is-selected"><a href="https://enappd.com/blog/ionic-4-paypal-payment-integration-for-apps-and-pwa/16" class="markup--anchor markup--li-anchor" rel="nofollow noopener noopener noopener noopener noopener noopener noopener noopener" target="_blank">Ionic 4 PayPal payment integration&#8202;—&#8202;for Apps and PWA</a></li><li name="aa95" class="graf graf--li graf-after--li"><a href="https://enappd.com/blog/ionic-4-stripe-payment-integration-with-firebase-for-apps-and-pwa/17" class="markup--anchor markup--li-anchor" rel="nofollow noopener noopener noopener noopener noopener noopener noopener noopener" target="_blank">Ionic 4 Stripe payment integration&#8202;—&#8202;for Apps and PWA</a></li><li name="fc67" class="graf graf--li graf-after--li"><a href="https://enappd.com/blog/how-to-integrate-apple-pay-in-ionic-4-apps/21" class="markup--anchor markup--li-anchor" rel="nofollow noopener noopener noopener noopener noopener noopener noopener noopener" target="_blank">Ionic 4 Apple Pay integration</a></li><li name="b1fc" class="graf graf--li graf-after--li"><a href="https://enappd.com/blog/twitter-login-in-ionic-4-apps-using-firebase/24" class="markup--anchor markup--li-anchor" rel="noopener" target="_blank">Twitter login in Ionic 4 with Firebase</a></li><li name="57fe" class="graf graf--li graf-after--li"><a href="https://enappd.com/blog/facebook-login-in-ionic-4-apps-using-firebase/25" class="markup--anchor markup--li-anchor" rel="noopener" target="_blank">Facebook login in Ionic 4 with Firebase</a></li><li
name="97f4" class="graf graf--li graf-after--li"><a href="https://medium.com/enappd/using-geolocation-and-beacon-plugins-in-ionic-4-754b41304007" class="markup--anchor markup--li-anchor" target="_blank">Geolocation</a> in Ionic 4</li><li name="34a6" class="graf graf--li graf-after--li"><a href="https://medium.com/enappd/qr-code-scanning-and-optical-character-recognition-ocr-in-ionic-4-95fd46be91dd" class="markup--anchor markup--li-anchor" target="_blank">QR Code and scanners</a> in Ionic 4 and</li><li name="de81" class="graf graf--li graf-after--li"><a href="https://medium.com/enappd/how-to-translate-in-ionic-4-globalization-internationalization-and-localization-31ec5807a8bc" class="markup--anchor markup--li-anchor" target="_blank">Translations in Ionic 4</a></li></ul><p name="2930" class="graf graf--p graf-after--li">If you need a base to start your next Ionic 4 app, you can make your next awesome app using <a href="https://store.enappd.com/product/ionic-4-full-app/" class="markup--anchor markup--p-anchor" rel="noopener nofollow noopener noopener nofollow noopener noopener nofollow noopener noopener noopener noopener noopener noopener noopener noopener" target="_blank">Ionic 4 Full App</a></p><figure tabindex="0" contenteditable="false" name="89dc" class="graf graf--figure graf-after--p"><div class="aspectRatioPlaceholder is-locked" style="max-width: 700px; max-height: 442px;"><div class="aspectRatioPlaceholder-fill" style="padding-bottom: 63.2%;"></div><img class="graf-image" data-image-id="1*2BzL8TesnBHuazHr3VA4SQ.jpeg" data-width="760" data-height="480" src="https://cdn-images-1.medium.com/max/720/1*2BzL8TesnBHuazHr3VA4SQ.jpeg"><div class="crosshair u-ignoreBlock"></div></div><br/><figcaption class="imageCaption" contenteditable="true" data-default-value="Type caption for image (optional)">Use Ionic 4 Full app template for your next awesome&nbsp;app</figcaption></figure><p name="099a" class="graf graf--p graf--empty graf-after--figure 
graf--trailing"><br></p></div></div></section></div></article></main>
abhijeetrathor2
287,121
Truly there is no difference in implementation. Integrating Spring with Jersey and integrating Jersey with Spring
More interesting and arguably references are Jersey and Spring, especially integrating Jersey to Spr...
0
2020-04-07T02:54:55
https://dev.to/urunov/truly-there-is-no-difference-in-implementation-integrating-spring-with-jersey-and-integrating-jersey-with-spring-m2k
rest, spring, jersey, java
More interesting, and often debated, is the relationship between Jersey and Spring: how to integrate Jersey with Spring, or the other way around. This article tries to satisfy that curiosity. [Catch up with the source code in which we implemented Spring and Jersey.](https://github.com/Hamdambek/SpringBoot-Projects-FullStack/tree/master/Part-4%20Spring%20Boot%20REST%20API)

Really, there is no difference in implementation: integrating Spring with Jersey and integrating Jersey with Spring mean the same thing as far as code is concerned.

# What is Jersey?

Briefly, about Jersey: the Jersey RESTful Web Services framework is an open source, production-quality framework for developing RESTful web services in Java. It provides support for the JAX-RS APIs and serves as the JAX-RS (JSR 311 & JSR 339) reference implementation.

# What is JAX-RS?

The Java API for RESTful Web Services (JAX-RS) is a Java programming language API specification that provides support for creating web services according to the Representational State Transfer (REST) architectural pattern. JAX-RS uses annotations to simplify the development and deployment of web service clients and endpoints. JAX-RS is an official part of Java EE.

This article assumes you understand Jersey, and are thinking about integrating Spring into your Jersey application.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/uqvr69zi0mlvh2eh8459.jpg)

Figure-1. Three-tier application architecture.

In this example, the business layer and persistence layer may seem redundant, but in a real application the service layer also handles the business logic of the domain: not just a simple “find data”, but also “manipulate data”. The persistence layer should only be concerned with database interactions, and not with any business logic.

In a REST application, there is no presentation layer. But does that mean that this architecture doesn’t apply to REST applications? Absolutely not. We should still adhere to this separation of concerns in REST applications. It is just good design.

With Spring, its REST layer is implemented in its MVC framework. The MVC framework is widely known for its MVC capabilities with the use of controllers. But with a little tweaking of the controllers and the annotations used, the controllers can easily become REST controllers, where instead of returning models and views, you return RESTful representation objects.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/a9ajrzoiacc8tycm10t7.jpg)

Figure-2. A multi-layered architecture, based on the “Law of Demeter”:

* the first layer is the REST support implemented with Jersey; it has the role of a facade and delegates the logic to the business layer
* the business layer is where the logic happens
* the data access layer is where the communication with the persistence storage (in our case the MySQL database) takes place

[Source code](https://github.com/Hamdambek/SpringBoot-Projects-FullStack/tree/master/Part-4%20Spring%20Boot%20REST%20API)

```java
@SpringBootApplication
public class JerseyApplication implements WebApplicationInitializer {

    public static void main(String[] args) {
        SpringApplication.run(JerseyApplication.class, args);
    }

    @Override
    public void onStartup(ServletContext sc) throws ServletException {
        sc.setInitParameter("contextConfigLocation", "noop");
        AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
        context.register(SpringConfig.class);
        sc.addListener(new ContextLoaderListener(context));
        sc.addListener(new RequestContextListener());
    }
}
```

Reference:

1. [WHY AND HOW TO USE SPRING WITH JERSEY?](https://psamsotha.github.io/jersey/2015/10/19/why-use-spring-with-jersey.html#mavenDeps)
2. [Tutorial – REST API design and implementation in Java with Jersey and Spring](https://www.javacodegeeks.com/2014/08/tutorial-rest-api-design-and-implementation-in-java-with-jersey-and-spring.html)
urunov
287,140
From LAMP to MERN: Understanding similarities and differences
Not so long ago, when I started my journey as a Full Stack Web Developer, the stack that appealed to...
0
2020-03-24T09:00:14
https://dev.to/diegotech/from-lamp-to-mern-understanding-similarities-and-differences-4o64
webdev, node, php, react
Not so long ago, when I started my journey as a Full Stack Web Developer, the stack that appealed to me the most was initially the LAMP stack. This is because I love Laravel but, like many others, I have seen myself inevitably adapting to more updated stacks like the MERN stack. In this article, I will talk about how I used my understanding of the LAMP stack to move on to the MERN stack, and caught up with the JavaScript-as-a-server-side-language trend.

Let’s start by breaking both stacks down:

LAMP:

- L: Linux Operating System (I am not mentioning WAMP because I simply hate Windows).
- A: Apache HTTP server.
- M: MySQL Relational Database Management System.
- P: PHP programming language (could be Python or Perl).

MERN:

- M: MongoDB Database Management System.
- E: ExpressJS Web Application Framework for NodeJS.
- R: ReactJS, a JavaScript library for building User Interfaces (UI — Frontend).
- N: NodeJS server, or just call it Server-side JavaScript.

Okay, great… and? Well, let’s cross-match a little bit... And in order to do so, let’s think about how the app communicates internally across its stack elements.

- First, the most important difference between the stacks is the language used on the server. A LAMP stack application communicates with its server (Apache) using the P (PHP, Python or Perl), while a MERN stack application communicates with its server (NodeJS) using the N (NodeJS). You might ask yourself, what the heck is that supposed to mean?… Turns out that, until recent years, JavaScript could only run in the browser, meaning that it handled data only on the front-end of an application. But then came NodeJS, a runtime environment that allowed JavaScript to run on the server. That is why we call NodeJS the server-side JavaScript. So cross-match number one: (N)odeJS is to the MERN stack what (P)HP and Apache are to the LAMP stack. Paradox.

- Second, a server-side language is a good thing, but a server-side framework is even better. Here is where our stacks’ cross-matching goes a bit off. PHP has frameworks that simplify server creation and management (Laravel, Symfony), and so does NodeJS. The difference is that the framework NodeJS uses to manage and delegate server requests is actually part of the MERN stack… you’ve guessed it! It is ExpressJS, which leads us to cross-match number two: ExpressJS is to NodeJS in the MERN stack what PHP (Laravel, Symfony) is to Apache in the LAMP stack. In reality, NodeJS does not need ExpressJS to create an HTTP server, just as Apache does not need Laravel or Symfony, but does need PHP, Python or Perl. ExpressJS is a framework that simplifies the creation and development of a NodeJS server.

- Third, so far we have mostly talked about LAMP’s and MERN’s respective servers, and the respective server-side languages for each. Now, let’s talk about the why: databases. This is a pretty obvious but interesting cross-match: while LAMP uses an SQL-based database management system, MERN uses a JavaScript-based database management system through the use of a JSON-like syntax. This is where you notice the power of MERN… it is simply JavaScript all across. In the LAMP stack you have to worry about handling SQL, PHP and maybe even some JavaScript for the front-end. For the MERN stack, learning JavaScript gives you a foundation of knowledge for the whole stack. I mean… it is 3 vs. 1 languages, you tell me what’s simpler… Anyways… cross-match number three: MongoDB is to the MERN stack what MySQL is to the LAMP stack.

- Fourth cross-match… ReactJS is to the MERN stack what nothing is to the LAMP stack. Here is the funny thing… you could actually use ReactJS to style the front-end of a LAMP stack application, and you probably know that. But notice how there is no element accounting for the front-end responsibility in the LAMP stack. This is because the LAMP stack architecture is more server based than client based. The elements of the LAMP stack do not mention any User Interface related technologies, while the MERN stack does (ReactJS).

- Finally… a stack is a stack but… in my junior, humble opinion, unlike the LAMP stack definition, the MERN stack definition pretty much covers all the elements of a modern web app:
  - MongoDB, a database management system;
  - Express, a speedy and simple server-side framework;
  - React, a technology able to create modern UIs;
  - Node, a flexible server-side language and server environment.

Note that all the elements of the MERN stack were built with JavaScript in mind, and that JavaScript is, in fact, The Web-App Development Language.

This is my humble perspective on the similarities and differences between the most used web app stacks today. I might be wrong about some things, so feedback on my opinion is very welcome! Thank you for reading through to the very end, and I hope this article helped you in any way possible! If you liked it, please give this article a thumbs up and share it with fellow developers. Now, get your eyes back on your editor!!!
diegotech
287,151
Galaxies far far away
Complexity vs Usefulness
0
2020-03-24T09:33:37
https://dev.to/tomavelev/galaxies-far-far-away-a90
softwaredevelopment, complexity
---
title: Galaxies far far away
published: true
description: Complexity vs Usefulness
tags: softwaredevelopment, complexity
---

Creating something of value, creating something for other (non-tech) people that is actually used, timing it correctly, and many more factors are counterintuitive to a coder/programmer focused on complexity in software development. In most of my tools I am digging in the code shit and in the things colleagues find meaningful - abstractions, abstractions, abstractions - testability, changeability, SOLID, OOP, compiled, interpreted, reused, readable, garbage-collected code, scalability, etc. And all these things probably do matter at some point. But in my last tool I've linked reading Excel and writing Word files, and created a simple web front end - no CSS, no JavaScript. Nobody gives a shit what is client and what is server, whether it is binary, multi-core, or optimal, or whether the code is well written, as long as the software does what is desired.
tomavelev
287,452
What is Cloud Mining? [An Overview]
Cloud computing first came to the market around 2006. It was a revolution in the sector of real-time...
0
2020-03-25T22:49:57
https://blog.coincodecap.com/what-is-cloud-mining-and/?utm_source=rss&utm_medium=rss&utm_campaign=what-is-cloud-mining-and
crypto, cryptomining
---
title: What is Cloud Mining? [An Overview]
published: true
date: 2020-03-24 20:12:31 UTC
tags: Crypto,crypto-mining
canonical_url: https://blog.coincodecap.com/what-is-cloud-mining-and/?utm_source=rss&utm_medium=rss&utm_campaign=what-is-cloud-mining-and
---

[Cloud computing](https://en.wikipedia.org/wiki/Cloud_computing) first came to the market around 2006. It was a revolution in the sector of real-time computation over the internet. People could use enormous computational power anywhere in the world by paying a small fee, without owning the hardware in person. The mainframes somewhere in a remote location would do all the work for you. It revolutionized the way people do computational work. In a cloud, you can run pretty much everything, from software to a whole operating system.

[Bitcoin](https://blog.coincodecap.com/tag/bitcoin/) came into existence in the year 2008, but it got popular around the year 2014. We are already aware of [Bitcoin mining](https://blog.coincodecap.com/what-is-bitcoin-mining-and-how-it-works/) and [Ethereum mining](https://blog.coincodecap.com/how-ethereum-mining-works-an-overview/). The sole purpose of mining is to solve a [complex mathematical puzzle](https://blog.coincodecap.com/how-bitcoin-mining-work/) to update the ledger with blocks of valid transactions. Lately, competition in mining has gone up tremendously as the popularity of different cryptocurrencies went up. Miners used various techniques (both hardware and software) to increase their mining capability.

## **How did cloud mining come into the picture?**

Many people are interested in mining cryptocurrencies but don’t want to go through the tedious process of setting up the hardware and software. So, cloud mining is an escape route for them. It is a beautiful implementation to bridge the gap between mining and mining enthusiasts.

In cloud mining, people pay to borrow hashing power (the capability to calculate hashes per second). They can utilize the hashing power to mine cryptocurrencies from their personal computer. A user has to choose the hash rate which he/she wants to put into [crypto mining](https://blog.coincodecap.com/tag/crypto-mining/). After paying the fee, a share will be allotted to the user, just like in the case of a [mining pool](https://blog.coincodecap.com/a-simple-guide-to-mining-pools/). Now, he/she can mine cryptocurrencies!

## **What are the advantages of Cloud Mining?**

- There is no hassle of setting up mining equipment.
- The electricity cost is low.
- Anyone from anywhere in the world can use it.
- The monetary fee is low compared to setting up mining rigs.
- Providers give you options to allot different hash rates to different cryptocurrencies.

## **What are the disadvantages of Cloud Mining?**

- The hash rate you have chosen may not be suitable for the present difficulty level in mining.
- Internet connectivity plays an important role. With a poor internet connection, cloud mining will be difficult.
- Many cases of [**fraud and scam**](https://thenextweb.com/hardfork/2019/12/11/bitcoin-mining-scam-722m/) organizations have come up in recent years.

## **So, is cloud mining for you?**

If you want to give [crypto mining](https://blog.coincodecap.com/tag/crypto-mining/) a try, then cloud mining is one way with minimum cost. If you think that you can get more hashing power at the same price as cloud mining, then obviously cloud mining is not for you. So it all comes down to market conditions and personal interests.

**Also Read:**

- **[A Candid Explanation of Bitcoin](https://blog.coincodecap.com/a-candid-explanation-of-bitcoin/)**
- **[A Simple Guide to Mining Pools](https://blog.coincodecap.com/a-simple-guide-to-mining-pools/)**
- **[How Bitcoin Mining Work? [Technical]](https://blog.coincodecap.com/how-bitcoin-mining-work/)**
- **[Replace By Fee and Unconfirmed Transactions in Bitcoin](https://blog.coincodecap.com/what-is-replace-by-fee-in-bitcoin/)**

The post [What is Cloud Mining? [An Overview]](https://blog.coincodecap.com/what-is-cloud-mining-and/) appeared first on [CoinCodeCap Blog](https://blog.coincodecap.com).
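As a footnote on the "share of hashing power" idea: a rented share's expected reward is simply proportional to your fraction of the total network hash rate. A back-of-the-envelope sketch (all numbers are illustrative placeholders, not live network values, and real payouts also subtract provider fees):

```javascript
// Expected daily reward for a rented hash-rate share. Very rough model:
// your fraction of the network hash rate times the total daily block reward.
function expectedDailyReward(yourHashrate, networkHashrate, blocksPerDay, blockReward) {
  return (yourHashrate / networkHashrate) * blocksPerDay * blockReward;
}

// e.g. renting 100 TH/s on a hypothetical 100,000,000 TH/s network,
// with 144 blocks per day and a 6.25-coin block reward:
const daily = expectedDailyReward(100, 100000000, 144, 6.25);
console.log(daily.toFixed(6)); // roughly 0.0009 coins per day, before fees
```

This also shows why the first disadvantage above matters: if difficulty (and thus the network hash rate) rises, the same rented share earns less.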
coinmonks
287,168
Is Value an Object
Little function to check if a value is an object: function isObject(val){ return ( val != nul...
5,579
2020-03-24T10:10:21
https://dev.to/nombrekeff/is-value-an-object-2hbc
javascript, snippets
A little function to check if a value is an object:

```js
function isObject(val) {
  return (
    val != null &&
    typeof val === 'object' &&
    Array.isArray(val) === false
  );
}
```

> **Notice** that Date, RegExp, etc. will also pass the check.
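A few quick checks make the behaviour, including the Date/RegExp note, concrete (the helper is repeated so the example is self-contained):

```javascript
// Same helper as above, repeated so this example runs on its own.
function isObject(val) {
  return (
    val != null &&
    typeof val === 'object' &&
    Array.isArray(val) === false
  );
}

console.log(isObject({}));         // true: plain objects pass
console.log(isObject([]));         // false: arrays are filtered out
console.log(isObject(null));       // false: null is excluded by the != null check
console.log(isObject(new Date())); // true: Date instances pass, per the note
console.log(isObject(/abc/));      // true: so do RegExp instances
console.log(isObject('hi'));       // false: primitives fail the typeof check
```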
nombrekeff
287,309
WP Snippet #010 Filter posts by (Acf) meta values
A code snippet showing how to get and filter WordPress posts by custom (Acf) meta field values using the get_posts function.
0
2020-03-24T18:37:45
https://since1979.dev/snippet-010-filter-posts-by-acf-meta-values/
wordpress, webdev, php
---
title: WP Snippet #010 Filter posts by (Acf) meta values
published: true
description: A code snippet showing how to get and filter WordPress posts by custom (Acf) meta field values using the get_posts function.
canonical_url: https://since1979.dev/snippet-010-filter-posts-by-acf-meta-values/
cover_image: https://since1979.dev/wp-content/uploads/2020/03/wp-snippet-010-filter-posts-by-meta-value.jpg
tags: wordpress, webdev, php
---

[Originally posted on my website on March 24th 2020](https://since1979.dev/snippet-010-filter-posts-by-acf-meta-values/)

Getting a list of posts filtered by meta value
----------------------------------------------

Let's say we have a tech blog where we have posts talking about different tips for specific operating systems. Of course we could use [taxonomies](https://wordpress.org/support/article/taxonomies/) to categorize these posts by Os. But if for some reason we need to use [custom meta fields](https://wordpress.org/support/article/custom-fields/) we will need a way to filter our posts. In these situations we can use meta queries like shown below.

### Get posts by single meta value

So we have a list of posts, all with a custom field named "Os". I used [Acf](https://www.advancedcustomfields.com/) to create a custom select field, but this will also work with native custom meta fields. Now we want to list all posts that are about "MacOs". For this we can use the code snippet below.

{% gist https://gist.github.com/vanaf1979/3318b4b258bc92270f4b4e30249df018 %}

Here we first create a new array called *$args*. This *$args* array consists of two keys. The first one is the *post_type* we want to select. In our case we want the posts, so we set this to *post*. This could of course also be a custom [post type](https://wordpress.org/support/article/post-types/).

Next we set a key/value pair for *meta_query*. With the *meta_query* argument we can start selecting posts by meta key. *Meta_query* accepts an array of arrays, each representing a specific select statement. In our case we want to select all posts where the meta field "Os" is equal to "MacOs", so we set the following key/value pairs:

- **Key (Os):** The name of the meta field.
- **Value (MacOs):** The value the meta key should have.
- **Compare (=):** The comparison operator to use.

Next we use the *[get_posts](https://developer.wordpress.org/reference/functions/get_posts/)* function and pass it our *$args* array. If all is well this should retrieve all posts where MacOs is the Os. And finally we simply print out the results.

Note: When using the *get_posts* function we can also specify array keys for numberposts, category, include, exclude and suppress_filters. See the docs [here](https://developer.wordpress.org/reference/functions/get_posts/).

### Get posts by meta list value

If we want to show all the posts from both MacOs and Linux we can change our meta_query array like shown below.

{% gist https://gist.github.com/vanaf1979/5000096b18139fcbaf7f6406fe4e9620 %}

Here, instead of passing a string to the *value* key, we pass an array containing the possible values we want to select. In our case we pass MacOs and Linux. For this to work we also have to change the compare key to 'IN'. The *get_posts* function should now get all the posts that have either a meta value of MacOs or Linux.

### More to come

As you can see, filtering posts with the *meta_query* can be pretty powerful, and these code snippets only scratched the surface of what we can do. I will publish some more snippets on this topic soon.

#### Follow

Found this post helpful? Follow me on twitter [@Vanaf1979](https://twitter.com/Vanaf1979) or here on Dev.to [@Vanaf1979](https://dev.to/vanaf1979) to be notified about new articles, and other WordPress development related resources.

**Thanks for reading and stay safe**
vanaf1979
287,326
OpenNMS On the Horizon – ARM, CircleCI, Documentation, SNMPv3, Time-Series, Flows, and More!
It's time for OpenNMS On the Horizon! In the last week we wondered... what is time? Does time exist?...
0
2020-04-20T21:04:32
https://www.opennms.com/en/blog/2020-03-24-opennms-on-the-horizon-march-24th-2020-arm-circleci-documentation-snmpv3-time-series-flows-and-more/
ooh, arm, bgp, circleci
---
title: OpenNMS On the Horizon – ARM, CircleCI, Documentation, SNMPv3, Time-Series, Flows, and More!
published: true
date: 2020-03-24 15:55:46 UTC
tags: OOH,arm,bgp,circleci
canonical_url: https://www.opennms.com/en/blog/2020-03-24-opennms-on-the-horizon-march-24th-2020-arm-circleci-documentation-snmpv3-time-series-flows-and-more/
cover_image: https://i.imgur.com/74fmzvG.png
---

It's time for OpenNMS On the Horizon! In the last week we wondered... what _is_ time? Does time exist? Is it Monday? Oh, crap, it's Tuesday! I totally forgot to do OOH! Um. Anyway... so we worked on ARM support for Docker, CircleCI updates, documentation improvements, BGP, SNMPv3, time-series, flows, and more.

<!-- git log -author=bamboo@opennms.org -invert-grep -all -no-merges -color=always -since='2020-03-16 00:00:00' -until='2020-03-24 00:00:00' -format='%Cblue%ai %Cgreen%aN %Creset%s %Cblue(%H)%Cred%d' -author-date-order | sort | less -R -->

## Github Project Updates

### **Internals, APIs, and Documentation**

- Ronny updated the Minion docker images to use JICMP and JICMP6 rather than JNA.
- Ronny finished updating the Minion docker images to support ARM builds. Horizon 27 and higher will support x86\_64, arm64, and arm/v7. 👏
- Bonnie worked on updating the documentation to recommend CentOS 8 for Horizon 25+.
- Sean added ZSTD compression support to our Kafka config.
- Bonnie added thresholding documentation.
- Christian added support for parsing BGP capabilities and adding AFI/SAFI statistics as metrics.
- I got CircleCI builds working on the foundation branches back to `foundation-2016`.
- Chandra did more fixes related to SNMPv3 and engine IDs.
- Chandra did some work on adding Jolokia features to Minion and Sentinel.
- Chandra worked on updating the sink API to use protobuf 3.
- Patrick continued his work on the new timeseries API.
- Chandra worked on writing enriched (classified and tagged) flow data to Kafka.

### **Web, ReST, UI, and Helm**

- Ron Roskens worked on fixing persisted calendar report display.
- I added auto-merge support to the Helm CircleCI config.

## Calendar of Events

### **April Releases - April 7th, 2020**

The next OpenNMS release day is April 7th, 2020. Unless we run into major issues, we're hoping to release Horizon 26 in April, which includes support for BMP telemetry collection.

### **[OpenNMS Training](https://hs.opennms.com/training-registration-2020) - Moonachie, New Jersey - April 27th through May 1st, 2020**

The OpenNMS Group [still hopes to be offering training](https://hs.opennms.com/training-registration-2020) at SecureWatch 24 Fusion Center in Moonachie, New Jersey the week of April 27th. 8 seats are available, and the deadline for signing up is April 17th.

## Until Next Week…

If there’s anything you’d like me to talk about in a future OOH, or you just have a comment or criticism you’d like to share, don’t hesitate to [say hi](mailto:twio@opennms.org).

- Ben

<!-- https://github.com/OpenNMS/twio-fodder/blob/ee50b0f05f3f93a66c242448a70fdca3993b972a/scripts/twio-issues-list.pl -->

## Resolved Issues Since Last OOH

- [HELM-233](https://issues.opennms.org/browse/HELM-233): auto-merge helm develop -> master in CircleCI
- [MIB-3](https://issues.opennms.org/browse/MIB-3): running mib2openNMS - Segmentation fault
- [MIB-7](https://issues.opennms.org/browse/MIB-7): mib2opennms - problem of OID in mask tag
- [MIB-9](https://issues.opennms.org/browse/MIB-9): mib2opennms : set options -6 and -w as default
- [NMS-2558](https://issues.opennms.org/browse/NMS-2558): Multiple Default SNMP community Strings
- [NMS-2867](https://issues.opennms.org/browse/NMS-2867): OpenNMS MIB: Converting Events to MIB
- [NMS-3045](https://issues.opennms.org/browse/NMS-3045): Create Java SNMP command line utils
- [NMS-3458](https://issues.opennms.org/browse/NMS-3458): Event Configuration error results in success event after reloading
- [NMS-12438](https://issues.opennms.org/browse/NMS-12438): persisted defaultCalendarReport database reports are broken
- [NMS-12476](https://issues.opennms.org/browse/NMS-12476): Backport CircleCI pipeline to foundation-2018
- [NMS-12481](https://issues.opennms.org/browse/NMS-12481): Docker Image Improvements
- [NMS-12482](https://issues.opennms.org/browse/NMS-12482): Reduce Minion docker image size
- [NMS-12483](https://issues.opennms.org/browse/NMS-12483): Publish arm64 and armhf Docker images for Minion
- [NMS-12484](https://issues.opennms.org/browse/NMS-12484): Use jicmp (and jicmp6) by default in Minion Docker images
- [NMS-12553](https://issues.opennms.org/browse/NMS-12553): Add support for per AFI/SAFI statistics
- [NMS-12570](https://issues.opennms.org/browse/NMS-12570): Add support for Local RIB
- [NMS-12571](https://issues.opennms.org/browse/NMS-12571): Parse BGP Capabilities
- [NMS-12574](https://issues.opennms.org/browse/NMS-12574): Apply more sensible defaults to OpenBMP kafka producer
- [NMS-12603](https://issues.opennms.org/browse/NMS-12603): Backport CircleCI pipeline to foundation-2017
- [NMS-12604](https://issues.opennms.org/browse/NMS-12604): Update installation requirements re: CentOS 8
- [NMS-12607](https://issues.opennms.org/browse/NMS-12607): Backport CircleCI pipeline to foundation-2016
- [NMS-12615](https://issues.opennms.org/browse/NMS-12615): PR's fail circleci RPM build steps due to missing GPG setup
rangerrick
287,359
Webpack 4 : Quick Start Guide
Webpack is one of most commonly used module bundlers available now. It eases the developer's job and...
0
2020-03-27T06:59:37
https://dev.to/saileshsubramanian/webpack-4-quick-start-guide-54n4
webpack, javascript, webapps, es6
Webpack is one of most commonly used module bundlers available now. It eases the developer's job and provides blazing fast performance coupled with amazing features. From the days of task runners like Grunt and Gulp to Module Bundlers , front-end application development has never been so easier and engaging as today. ><i>"Webpack is a static module bundler for modern JavaScript applications. When webpack processes your application, it internally builds a dependency graph which maps every module your project needs and generates one or more bundles"</i> <a href="https://webpack.js.org/concepts/" target="_blank" style="font-size: 14px;">Read the core concepts from here</a> Please note that the sole purpose of this article is to help to quickly build a neat webpack build configuration for a webapps. If you are more interested in learning the basics/core concepts of webpack please refer the above mentioned link to get to know about core concepts of webpack. <b>Let's Dive in</b> 1. Create a directory <pre>mkdir webpack101 && cd webpack101</pre> 2. Use NPM or Yarn for package management <pre>npm init</pre> OR <pre>yarn init</pre> It will generate the `package.json` file. Yarn is my favorite , so throughout in this guide yarn will be used. 3. Install webpack locally(recommended) <pre>yarn add --dev webpack webpack-cli </pre> You can see the webpack being added as dev dependencies in package. 4. Now lets create a sample project with our usual stuffs.You can find the source files [here](https://github.com/SaileshSubramanian/webpackGuide) Now the project structure is ready let's bring in the main player `webpack.config.js`. Create the `webpack.config.js` in the root. 6. Now that the initial configuration is ready,lets modify our `package.json` to add the build command. 7. Now let's run the build command <pre>yarn build</pre> 8. We now have a `bundle.js` inside the dist folder.For the sake of cache busting , include `[chunkhash]` in the output js file configuration of webpack. 
So each time the generated js file will be in the format `bundle.[chunkhash].js`. Naturally our `dist` folder will be cluttered with many files. So we need to add `clean-webpack-plugin`. ```javascript const { CleanWebpackPlugin } = require('clean-webpack-plugin'); ...... plugins: [ new CleanWebpackPlugin(), .... ] ``` But that does not the serve the whole purpose. So let's add more to the `webpack.config.js`. **Working with ES6** Let's modify our `index.js` and add some behaviour using ES6. Since the code is in ES6 we need to transpile it so that the browser can understand. Here loaders come for the rescue, and do the code transformation for us. 9. Adding Babel to the project. We specify the rules in the module section to add the each loaders in `webpack.config.js`. <i>The `test` property identifies which file or files should be transformed. The `use` property indicates which loader should be used to do the transforming.</i> <pre>yarn add --dev babel-loader @babel/core @babel/preset-env</pre> Modify the our `webpack.config.js` as below. ```javascript module:{ rules: [ { test: /\.(js|jsx)$/, exclude: /(node_modules)/, use: { loader: 'babel-loader', options: { presets: ["@babel/preset-env"] } } } ] } ``` 10. Add a `.babelrc` file with contents as below. ```javascript { "presets": [ "@babel/preset-env" ] } ``` But how do we see the magic happening? So let's add the `webpack-dev-server` to run the project locally. <pre>yarn add --dev webpack-dev-server</pre> Also modify the package.json the script to run the dev server and then run `yarn serve`. ```javascript "serve": "webpack-dev-server --open --config webpack.config.js" ``` 11. With the css preprocessors taking the significant role in the web development these days , lets create sass files and add loaders to transform and bundle it. <pre>yarn add --dev style-loader css-loader node-sass sass-loader</pre> The `mini-css-extract-plugin` helps us to extract all styles and bundle it in our dist directory. 
Use `MiniCssExtractPlugin.loader` instead of style-loader if you need a separate `bundle.css` file, as the style-loader injects all the styles into the `head` element of your html.

<pre>yarn add --dev mini-css-extract-plugin</pre>

Add the loaders to our `webpack.config.js` as below.

```javascript
const MiniCssExtractPlugin = require('mini-css-extract-plugin');
......
plugins: [
  new MiniCssExtractPlugin({
    filename: "bundle.[chunkhash].css"
  }),
  ....
]
.....
{
  test: /\.(sa|sc|c)ss$/,
  use: [
    { loader: MiniCssExtractPlugin.loader },
    { loader: "css-loader" },
    { loader: "sass-loader" }
  ]
}
```

Now comes the role of plugins. We need to modify our HTML files, copy some of the assets to the build folder and so on, and to do that we need to add certain webpack plugins.

12. Adding `HtmlWebpackPlugin`: it generates an HTML file with the generated bundle files, both js & css, integrated in the `script` and `link` tags. We can even specify a template as well.

<pre>yarn add --dev html-webpack-plugin</pre>

Now modify our `webpack.config.js` to add the plugin.

```javascript
var HtmlWebpackPlugin = require('html-webpack-plugin');
.............
plugins: [
  new HtmlWebpackPlugin({
    title: 'My App',
    template: './src/index.html',
    'meta': {
      'viewport': 'width=device-width, initial-scale=1, user-scalable=no'
    }
  })
]
```

What about assets like fonts and images? Let's add `copy-webpack-plugin`. The reason `file-loader` was not used is that it only handles assets referenced in our modules.

<pre>yarn add --dev copy-webpack-plugin</pre>

Add the configuration for the plugin as well inside `webpack.config.js`.

```javascript
const CopyPlugin = require('copy-webpack-plugin');

new CopyPlugin([
  {
    from: './src/assets',
    to: 'assets'
  }
])
```

And finally all our `assets` are copied to the build directory.

<b>Preparing for Different environments</b>

We could actually maintain separate webpack configuration files for development and production deployment, with the production files having production configurations included.
Let's create `webpack.common.config.js`. Remove all the contents from the current `webpack.config.js` and paste them in the new file. Change the output path option to `path: path.resolve(__dirname, '../dist'),`

Add the below script in the `webpack.config.js` to configure different environments.

```javascript
const webpackMerge = require('webpack-merge');
const commonConfig = require('./webpack.common.config.js');

module.exports = ({ env }) => {
  const envConfig = require(`./webpack.${env}.config.js`);
  return webpackMerge(commonConfig, envConfig);
};
```

Make sure you have added `webpack-merge` as a dev dependency with yarn.

Now we can create `webpack.dev.config.js` and `webpack.prod.config.js`. Include the development-specific feature config in the `webpack.dev.config.js` as below. If these settings existed in your `webpack.common.config`, remove them to avoid unexpected results.

```javascript
module.exports = {
  mode: "development",
  devServer: {
    port: 3000,
    hot: true,
    contentBase: './dist'
  },
  devtool: "inline-source-map"
}
```

Same for the `webpack.prod.config.js`. I leave it up to you whether you require a source map in prod mode.

```javascript
module.exports = {
  mode: "production",
  devtool: "source-map"
}
```

Modify the scripts in `package.json` to look more meaningful and run for different environments.

```javascript
"scripts": {
  "serve": "webpack-dev-server --open --config build-config/webpack.config.js --env.env=dev",
  "build:dev": "webpack --config build-config/webpack.config.js --env.env=dev",
  "build:prod": "webpack --config build-config/webpack.config.js --env.env=prod"
}
```

You can again go for the optimization techniques available with other webpack plugins in production mode. Since v4, webpack does the optimization for you based on the `mode`, but you can override it with your own configurations. `uglify-js` and `optimise-css-assets` are the most popular.

Thanks for reading. I hope that was informative. If you have any corrections or suggestions, please let me know in the comments section.
Happy Coding !!
saileshsubramanian
287,376
Logging in ASP .NET Core 3.1
This is the twelfth of a new series of posts on ASP .NET Core 3.1 for 2020. In this series, we’ll c...
0
2020-03-24T18:27:05
https://wakeupandcode.com/logging-in-asp-net-core-3-1/
webdev, csharp, dotnet
---
title: Logging in ASP .NET Core 3.1
published: true
date: 2020-03-24 14:00:00 UTC
tags: webdev, csharp, dotnet
canonical_url: https://wakeupandcode.com/logging-in-asp-net-core-3-1/
---

![](https://wakeupandcode.com/wp-content/uploads/2020/01/aspnetcore-az-banner.png)

This is the twelfth of a new [series of posts](https://wakeupandcode.com/aspnetcore/#aspnetcore2020) on ASP .NET Core 3.1 for 2020. In this series, we’ll cover 26 topics over a span of 26 weeks from January through June 2020, titled **ASP .NET Core A-Z!** To differentiate from the [2019 series](https://wakeupandcode.com/aspnetcore/#aspnetcore2019), the 2020 series will mostly focus on a growing single codebase ([NetLearner!](https://wakeupandcode.com/netlearner-on-asp-net-core-3-1/)) instead of new unrelated code snippets each week.

Previous post:

- [Key Vault for ASP .NET Core 3.1 Web Apps](https://wakeupandcode.com/key-vault-for-asp-net-core-3-1-web-apps/)

**NetLearner on GitHub**:

- Repository: [https://github.com/shahedc/NetLearnerApp](https://github.com/shahedc/NetLearnerApp)
- v0.12-alpha release: [https://github.com/shahedc/NetLearnerApp/releases/tag/v0.12-alpha](https://github.com/shahedc/NetLearnerApp/releases/tag/v0.12-alpha)

# In this Article:

- [L is for Logging in ASP .NET Core](#L)
- [Log Messages](#messages)
- [Logging Providers](#providers)
- [JSON Configuration](#config)
- [Log Categories](#categories)
- [Exceptions in Logs](#exceptions)
- [Structured Logging with Serilog](#serilog)
- [References](#refs)

# L is for Logging in ASP .NET Core

You _could_ write a fully functional ASP .NET Core web application without any logging. But in the real world, you _should_ use some form of logging. This blog post provides an overview of how you can use the _built-in_ logging functionality in ASP .NET Core web apps.
While we won’t go deep into 3rd-party logging solutions such as [Serilog](https://serilog.net/) in this article, you should definitely consider a robust semantic/structured logging solution for your projects.

![](https://wakeupandcode.com/wp-content/uploads/2020/03/logging-types-31-1024x427.png)<figcaption> Logging providers in ASP .NET Core 3.1</figcaption>

# Log Messages

The simplest log message includes a call to the extension method **ILogger.Log()** by passing on a **LogLevel** and a text string. Instead of passing in a LogLevel, you could also call a specific Log method such as **LogInformation()** for a specific LogLevel. Both examples are shown below:

```
// Log() method with LogLevel passed in
_logger.Log(LogLevel.Information, "some text");

// Specific LogXX() method, e.g. LogInformation()
_logger.LogInformation("some text");
```

[LogLevel](https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.loglevel?view=dotnet-plat-ext-3.1) values include Trace, Debug, Information, Warning, Error, Critical and None. These are all available from the namespace **Microsoft.Extensions.Logging**. For a more structured logging experience, you should also pass in meaningful variables/objects following the templated message string, as all the Log methods take in a set of parameters defined as “params object[] args”.

```
public static void Log(
    this ILogger logger,
    LogLevel logLevel,
    string message,
    params object[] args);
```

This allows you to pass those values to specific logging providers, along with the message itself. It’s up to each logging provider on how those values are captured/stored, which you can also configure further. You can then query your log store for specific entries by searching for those arguments. In your code, this could look something like this:

```
_logger.LogInformation("some text for id: {someUsefulId}", someUsefulId);
```

Even better, you can add your own **EventId** for each log entry. You can facilitate this by defining your own set of integers, and then passing an int value to represent an EventId. The [EventId](https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.logging.eventid?view=dotnet-plat-ext-3.1) type is a struct that includes an implicit operator, so it essentially calls its own constructor with whatever int value you provide.

```
_logger.LogInformation(someEventId, "some text for id: {someUsefulId}", someUsefulId);
```

In the [NetLearner.Portal](https://github.com/shahedc/NetLearnerApp/tree/master/src/NetLearner.Portal) project, we can see the use of a specific integer value for each EventId, as shown below:

```
// Step X: kick off something here
_logger.LogInformation(LoggingEvents.Step1KickedOff, "Step {stepId} Kicked Off.", stepId);

// Step X: continue processing here
_logger.LogInformation(LoggingEvents.Step1InProcess, "Step {stepId} in process...", stepId);

// Step X: wrap it up
_logger.LogInformation(LoggingEvents.Step1Completed, "Step {stepId} completed!", stepId);
```

The integer values can be whatever you want them to be. An example is shown below:

```
public class LoggingEvents
{
    public const int ProcessStarted = 1000;
    public const int Step1KickedOff = 1001;
    public const int Step1InProcess = 1002;
    public const int Step1Completed = 1003;
    ...
}
```

# Logging Providers

The default template-generated web apps include a call to **CreateDefaultBuilder()** in [Program.cs](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Portal/Program.cs), which automatically adds the Console and Debug providers. As of ASP.NET Core 2.2, the EventSource provider is also automatically added by the default builder. When on Windows, the EventLog provider is also included.
```
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
```

**NOTE**: As mentioned in an [earlier post in this blog series](https://wakeupandcode.com/generic-host-builder-in-asp-net-core-3-1/), the now-deprecated Web Host Builder has been replaced by the Generic Host Builder with the release of .NET Core 3.0.

If you wish to add your own set of logging providers, you can expand the call to CreateDefaultBuilder(), clear the default providers, and then add your own. The built-in providers now include **Console**, **Debug, EventLog, TraceSource** and **EventSource.**

```
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        })
        .ConfigureLogging(logging =>
        {
            // clear default logging providers
            logging.ClearProviders();

            // add built-in providers manually, as needed
            logging.AddConsole();
            logging.AddDebug();
            logging.AddEventLog();
            logging.AddEventSourceLogger();
            logging.AddTraceSource(sourceSwitchName);
        });
```

The screenshots below show the log results viewable in Visual Studio’s Debug Window and in the Windows 10 Event Viewer. Note that the EventId’s integer values (that we had defined) are stored in the EventId field as numeric value in the Windows Event Viewer log entries.
![VS2019 Output panel showing debug messages](https://wakeupandcode.com/wp-content/uploads/2020/03/logging-vs2019-output-1024x319.png)<figcaption> VS2019 Output panel showing debug messages </figcaption>

![](https://wakeupandcode.com/wp-content/uploads/2020/03/logging-event-log-1024x694.png)<figcaption>Windows Event Viewer showing log data</figcaption>

For the _Event Log provider_, you’ll also have to add the following NuGet package and corresponding using statement:

```
Microsoft.Extensions.Logging.EventLog
```

For the _Trace Source provider_, a “source switch” can be used to determine if a trace should be propagated or ignored. For more information on the Trace Source provider and the Source Switch it uses, check out the official docs at:

- SourceSwitch Class (System.Diagnostics): [https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.sourceswitch?view=netcore-3.1](https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.sourceswitch?view=netcore-3.1)

For more information on adding logging providers and further customization, check out the official docs at:

- _Add Providers section_ of **Logging in ASP.NET Core**: [https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging#add-providers](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging#add-providers)

# JSON Configuration

One way to configure each Logging Provider is to use your appsettings.json file. Depending on your environment, you could start with appsettings.Development.json or [App Secrets](https://wakeupandcode.com/your-web-app-secrets-in-asp-net-core/) in development, and then use environment variables, [Azure Key Vault](https://wakeupandcode.com/key-vault-for-asp-net-core-3-1-web-apps/) in other environments.
You may refer to earlier blog posts from 2018 and 2019 for more information on the following:

- Your Web App Secrets in ASP .NET Core: [https://wakeupandcode.com/your-web-app-secrets-in-asp-net-core/](https://wakeupandcode.com/your-web-app-secrets-in-asp-net-core/)
- Key Vault for ASP .NET Core Web Apps: [https://wakeupandcode.com/key-vault-for-asp-net-core-3-1-web-apps/](https://wakeupandcode.com/key-vault-for-asp-net-core-3-1-web-apps/)

In your local JSON config file, your configuration uses the following syntax:

```
{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "Category1": "Information",
      "Category2": "Warning"
    },
    "SpecificProvider": {
      "ProviderProperty": true
    }
  }
}
```

The configuration for **LogLevel** sets one or more categories, including the **Default** category when no category is specified. Additional categories (e.g. System, Microsoft or any custom category) may be set to one of the aforementioned LogLevel values. The **LogLevel** block can be followed by one or more provider-specific blocks (e.g. **Console**) to set its properties, e.g. **IncludeScopes**. Such an example is shown below.

```
{
  "Logging": {
    "LogLevel": {
      "Default": "Debug",
      "System": "Information",
      "Microsoft": "Information"
    },
    "Console": {
      "IncludeScopes": true
    }
  }
}
```

To set logging filters in code, you can use the **AddFilter()** method for specific providers or all providers in your [Program.cs](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Portal/Program.cs) file. The following syntax can be used to add filters for your logs.

```
.ConfigureLogging(logging =>
    logging.AddFilter("Category1", LogLevel.Level1)
           .AddFilter<SomeProvider>("Category2", LogLevel.Level2));
```

In the above sample, the following placeholders can be replaced with:

- _**CategoryX**_: System, Microsoft, custom categories
- **LogLevel**._**LevelX**_: Trace, Debug, Information, Warning, Error, Critical, None
- _**SomeProvider**_: Debug, Console, other providers

To set the EventLog level explicitly, add a section for “EventLog” with the default minimum “LogLevel” of your choice. This is useful for Windows Event Logs, because the Windows Event Logs are logged for Warning level or higher, by default.

```
{
  "Logging": {
    "LogLevel": {
      "Default": "Debug"
    },
    "EventLog": {
      "LogLevel": {
        "Default": "Information"
      }
    }
  }
}
```

# Log Categories

To set a category when logging an entry, you may set the string value when creating a logger. If you don’t set a value explicitly, the fully-qualified namespace + class name is used. In the [WorkhorseModel class](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Portal/Pages/Workhorse.cshtml.cs) seen in the NetLearner.Portal project, the log results seen in the Debug window started with:

```
NetLearner.Portal.Pages.WorkhorseModel
```

This is the _category name_ created using the class name passed to the constructor in WorkhorseModel as shown below:

```
private readonly ILogger _logger;

public WorkhorseModel(ILogger<WorkhorseModel> logger)
{
    _logger = logger;
}
```

If you wanted to set this value yourself, you could change the code to the following:

```
private readonly ILogger _logger;

public WorkhorseModel(ILoggerFactory logger)
{
    _logger = logger.CreateLogger("NetLearner.Portal.Pages.WorkhorseModel");
}
```

The end results will be the same. However, you may notice that there are a couple of differences here:

1. Instead of **ILogger**<_classname_> we are now passing in an **ILoggerFactory** type as the logger.
2.
Instead of just assigning the injected logger to the private **_logger** variable, we are now calling the factory method **CreateLogger()** with the desired string value to set the category name.

# Exceptions in Logs

In addition to EventId values and Category Names, you may also capture [Exception information](https://wakeupandcode.com/handling-errors-in-asp-net-core-3-1/) in your application logs. The various Log extensions provide an easy way to pass an exception by passing the Exception object itself.

```
try
{
    // try something here
    throw new Exception();
}
catch (Exception someException)
{
    _logger.LogError(eventId, someException, "Trying step {stepId}", stepId);
    // continue handling exception
}
```

Checking the Event Viewer, we may see a message as shown below. The **LogLevel** is shown as “Error” because we used the **LogError()** extension method in the above code, which is forcing an Exception to be thrown. The details of the Exception are displayed in the log as well.

![Windows Event Viewer showing error log entry](https://wakeupandcode.com/wp-content/uploads/2020/03/logging-event-log-error-1024x694.png)<figcaption>Windows Event Viewer showing error log entry </figcaption>

# Structured Logging with Serilog

At the very beginning, I mentioned the possibilities of structured logging with 3rd-party providers. There are many solutions that work with ASP .NET Core, including (but not limited to) [elmah](https://elmah.io/), [NLog](https://nlog-project.org/) and [Serilog](https://serilog.net/). Here, we will take a brief look at Serilog.

Similar to the built-in logging provider described throughout this article, you should include variables to assign template properties in all log messages, e.g.

```
Log.Information("This is a message for {someVariable}", someVariable);
```

To make use of Serilog, you’ll have to perform the following steps:

1. grab the appropriate NuGet packages: [Serilog](https://www.nuget.org/packages/Serilog/), [Hosting](https://www.nuget.org/packages/Serilog.Extensions.Hosting), various [Sinks](https://github.com/serilog/serilog/wiki/Provided-Sinks), e.g. [Console](https://www.nuget.org/packages/Serilog.Sinks.Console)
2. use the Serilog namespace, e.g. **using Serilog**
3. create a new LoggerConfiguration() in your Main() method
4. call UseSerilog() when creating your Host Builder
5. write log entries using methods from the Log static class.

For more information on Serilog, check out the following resources:

- Getting Started: [https://github.com/serilog/serilog/wiki/Getting-Started](https://github.com/serilog/serilog/wiki/Getting-Started)
- Writing Log Events: [https://github.com/serilog/serilog/wiki/Writing-Log-Events](https://github.com/serilog/serilog/wiki/Writing-Log-Events)
- Setting up Serilog in ASP.NET Core 3: [https://nblumhardt.com/2019/10/serilog-in-aspnetcore-3/](https://nblumhardt.com/2019/10/serilog-in-aspnetcore-3/)

# References

- Logging in ASP.NET Core: [https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/](https://docs.microsoft.com/en-us/aspnet/core/fundamentals/logging/)
- ASP.NET Core Logging with Azure App Service and Serilog: [https://devblogs.microsoft.com/aspnet/asp-net-core-logging/](https://devblogs.microsoft.com/aspnet/asp-net-core-logging/?WT.mc_id=-blog-shchowd)
- Explore .NET trace logs in Azure Application Insights with ILogger: [https://docs.microsoft.com/en-us/azure/azure-monitor/app/ilogger](https://docs.microsoft.com/en-us/azure/azure-monitor/app/ilogger)
- Azure Application Insights for ASP.NET Core: [https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core](https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net-core)
- Don’t let ASP.NET Core Console Logging Slow your App down:
[https://weblog.west-wind.com/posts/2018/Dec/31/Dont-let-ASPNET-Core-Default-Console-Logging-Slow-your-App-down](https://weblog.west-wind.com/posts/2018/Dec/31/Dont-let-ASPNET-Core-Default-Console-Logging-Slow-your-App-down)
shahedc
287,517
When will my cover image change?
I updated the cover images on 2 of my posts. I was really hoping the new image would show up in the t...
0
2020-03-24T21:35:02
https://dev.to/vickilanger/when-will-my-cover-image-change-2km
help
I updated the cover images on 2 of my posts. I was really hoping the new image would show up in the twitter cards. It's been several hours and no luck yet. Am I forever stuck with the old images? Is it a cache thing? How do I fix it? Am I just being impatient? These are the posts I changed. {% post https://dev.to/vickilanger/code-questions-bot-42io %} {% post https://dev.to/vickilanger/that-s-it-that-s-the-tweet-send-3e0h %}
vickilanger
287,584
Learn React Hook by building a Simple Blog App
React hook tutorial for beginner
0
2020-04-16T01:26:23
https://dev.to/kingdavid/learn-react-hook-by-building-a-simple-blog-app-22i2
react, hook
---
title: Learn React Hook by building a Simple Blog App
published: true
description: React hook tutorial for beginner
tags: React, Hook
---

# What is React?

React is a popular JavaScript library developed by Facebook for building user interfaces. It uses the concept of a Virtual DOM to render elements into the browser DOM, because it is a popular belief that manipulating the browser DOM directly can be very slow and costly. React developers often manipulate the virtual DOM and let React take care of updating the browser DOM.

One of the biggest advantages of using React is the reusability of code. Components can be easily reused throughout an application or even across different applications, saving time and effort for developers. This is especially helpful in large projects where there may be a lot of repetitive code. Additionally, React's modular architecture allows for easier testing and debugging, which can further speed up the development process.

## Hooks in React

Hooks are functions that let you “hook into” React state and lifecycle features from function components. Before the arrival of Hooks, state and React lifecycle methods could only be used in a class component. Starting from version 16.8, React rolled out a lot of features that enable developers to hook into React state without having to write a single class component.

Hooks provide a way for developers to manage state and other React features without needing to convert their functional components to class components. They also make it easier to reuse stateful logic across different components, as the logic can be encapsulated in a custom hook.

# What we’re building

We are building a simple frontend CRUD blog app where a user can create a post, read the post, update the post, and delete the post without making any API request to the server.
You can view the final project here: https://react-simple-blog.now.sh or download the source code here: https://github.com/tope-olajide/react-simple-blog

# The Setup

To follow along with this tutorial and get the app running, we are going to download and install the latest version of Node.js. (I am currently using version 12.13.1 for this tutorial)

Next, you'll launch your Command-Line Interface, install React and create a new project by running the following command:

```
npx create-react-app react-simple-blog
```

The above command will create a new directory called react-simple-blog and install React and its dependencies in it.

To make sure React is working, launch your CLI, navigate to the ``react-simple-blog`` folder (or whatever you chose to name the folder) and run ``npm start`` to start your React development server.

Once the server is running, React will automatically launch your browser and navigate to http://localhost:3000/ in it, which is the default homepage for our React app. If all goes well, you should see the create-react-app splash screen that looks like this:

![alt text](https://res.cloudinary.com/temitope/image/upload/v1585201127/react-blog/react-splash.png "React welcome splash screen")

## Modify the `App.js` file

Let's update the `App.js` to display a welcoming message instead of the default splash screen. Navigate to ``react-simple-blog/src`` on your computer, then open the `App.js` file in your editor, and replace everything in it with the following code:

```javascript
import React from "react";

const App = () => {
  return (
    <div>
      <h1>Hello World</h1>
    </div>
  );
};

export default App;
```

Here, we modified the App component to display *Hello World*. Your browser should automatically refresh and display a similar output like this:

![alt text](https://res.cloudinary.com/temitope/image/upload/v1585202891/react-blog/code-output-3.png "code output 3")

The first line imports React from the node-modules.
In the third line, we created a functional component called App, using the JavaScript fat arrow function. Then we render the following JSX elements:

```javascript
return (
  <div>
    <h1>Hello World</h1>
  </div>
);
```

In the last line, we exported the App component so that it can be used later.

# What is JSX?

JSX stands for JavaScript Syntax Extension. It has a syntax familiar from plain HTML and it can also be used directly in your JavaScript file, but no browser can read it without transpiling it first. JSX can be transpiled into JavaScript code by using a preprocessor build-tool like Babel. Babel has already been pre-installed with create-react-app, so you don't have to worry about configuring the app to transform your JSX code into JavaScript. You can read more about JSX [here](https://Reactjs.org/docs/introducing-jsx.html)

Navigate to ``react-simple-blog/src`` and open the `index.js` file in your editor. This file serves as the starting point of the application and it is responsible for rendering the main component, which is the "App" component, to the `root` element located in the `public/index.html` file.

In the `index.js` file, the `App` component is imported on line 4 and then rendered to the browser using the `ReactDOM.render` method on line 7. This means that when the user loads the webpage, the `App` component will be displayed inside the `root` div. This is how React components are rendered to the browser - by being inserted into the DOM through the `ReactDOM.render` method.

Next, we are going to delete some files that we are not using but came bundled with create-react-app.
Navigate to ``react-simple-blog/src`` and delete the following files: *App.css*, *App.test.js*, *index.css*, *logo.svg*, and *setupTests.js*

After that, open your index.js file and delete the third line:

![alt text](https://res.cloudinary.com/temitope/image/upload/v1585202891/react-blog/code-output-2.png "code output 2")

Since we have removed the index.css file, there is no reason to import it again in index.js, else we might end up with a "failed to compile" error. By now, you should have just 3 files left in the src folder (i.e. App.js, index.js and serviceWorker.js).

Let's create a new folder called Components inside the src folder. This folder will be used to store the remaining components we'll be building for this app: ``react-simple-blog/src/Components``.

Inside the Components folder, create a new file called ``CreateNewPost.jsx``. From its name, you can easily guess what this new file will be used for. Let us add the following code into the newly created `CreateNewPost.jsx` file:

```javascript
import React from "react";

const CreateNewPost = () => {
  return (
    <>
      <form>
        <h1>Create New Post</h1>
        <input type="text" placeholder="title" size="39" required></input>
        <br />
        <br />
        <textarea placeholder="contents" rows="8" cols="41" required></textarea>
        <br />
        <br />
        <button>Save Post</button>
      </form>
    </>
  );
};

export default CreateNewPost;
```

If you have been following along with this tutorial from the beginning and you're familiar with HTML, there should be nothing strange to you here except for this opening and closing empty tag: ``<> </>``, which is a short syntax for ``<React.Fragment> </React.Fragment>``. Using fragments instead of ``<div></div>`` is a little bit faster and has less memory usage.

All we did here was create a new form that will be used to collect the user's input. Also, it is good to know that a React component name starts with an uppercase letter, because this is how React distinguishes components from regular HTML elements or other functions.
By convention, only components that start with a capital letter are considered to be React components, while lower-case elements are assumed to be standard HTML elements. To display the CreateNewPost component, we need to import it first into the App component and render it. To do that, open up the `App.js` and add the following code below the import React statement:

``import CreateNewPost from './Components/CreateNewPost'``

To render the CreateNewPost component, we'll replace ```<h1>Hello World</h1>``` with ```<CreateNewPost />```, so the App component looks like this:

```javascript
import React from "react";
import CreateNewPost from './Components/CreateNewPost'

const App = () => {
  return (
    <div>
      <CreateNewPost />
    </div>
  );
};

export default App;
```

You may refresh your browser if React hasn't done so. If everything went right, you should see an output that looks almost the same as this one:

![alt text](https://res.cloudinary.com/temitope/image/upload/v1585202891/react-blog/code-output-4.png "code output 4")

We are not adding any CSS for now. All styling will be done towards the end of this app. The 'Save Post' button does nothing for now; we'll add some functionalities to it once we're done with creating all the necessary components.

Our next task is to create the Post component, which will be responsible for displaying each individual post. If you are finding the concept of multiple components confusing, don't worry. As you see these components working together, everything will become clearer.

Let's create a new file inside the Components folder called `Post.jsx` and add the following code:

```javascript
import React from 'react';

const Post = () => {
  return (
    <>
      <section>
        <h3>Post title will appear here</h3>
        <p>Post contents will appear here</p>
        <button>Edit</button>
        <button>Delete</button>
      </section>
    </>
  )
}

export default Post
```

In the code above, we import the React library, which is a prerequisite for writing React code.
We then define the `Post` component as a function using arrow function syntax. Inside the function, we use JSX to create a section element with the title and content of a blog post, as well as two buttons for editing and deleting the post. These buttons are not working for now; we'll make them work later once we're done with building the remaining components.

To display the Post component, navigate to the `App.js` file and update it with the following code:

```javascript
import React from "react";
import Posts from './Components/Post'

const App = () => {
  return (
    <>
      <Posts />
    </>
  );
};

export default App;
```

After refreshing your browser, you should have a typical output that looks like this:

![alt text](https://res.cloudinary.com/temitope/image/upload/v1585202891/react-blog/code-output-5.png "code output 5")

Next, you will create a new component named `ModifyPost`, which will be used for modifying a selected post. As the name suggests, we want this component to be rendered by React only when a user clicks the `Edit` button. To create it, navigate to the `Components` directory and create a new file called `ModifyPost.jsx`. Then add the following code to the `ModifyPost.jsx` file:

```javascript
import React from "react";

const ModifyPost = () => {
  return (
    <>
      <form>
        <h1>Modify Post</h1>
        <input type="text" placeholder="title" size="39" required></input>
        <br />
        <br />
        <textarea placeholder="contents" rows="8" cols="41" required></textarea>
        <br />
        <br />
        <button>Update Post</button>
      </form>
    </>
  );
};

export default ModifyPost;
```

## Create the `DisplayAllPosts` component

The final component we will be creating for this tutorial is the `DisplayAllPosts` component. This component will serve as a parent component to `CreatePost`, `ModifyPost`, and `Post`.
Navigate to the `Components` directory and create a new file called `DisplayAllPosts.jsx`, then add the following code to it:

```javascript
import React from 'react';
import CreateNewPost from './CreateNewPost'

const DisplayAllPosts = () => {
  return (
    <>
      <CreateNewPost />
    </>
  )
}

export default DisplayAllPosts
```

Here, we created a new component called `DisplayAllPosts` and rendered the `CreateNewPost` component inside it.

Now that we have finished building the components, it's time to bring them to life. As mentioned earlier, we intentionally did not add CSS to the components at this stage; the styling will be implemented once all the functionality of this app is complete.

The next step is to capture user input as they type into the text field and save it directly into a component state variable. To do this, we'll use the first React hook in this tutorial, called `useState`.

## State in React

In React, state is an object that stores information about how a component behaves and renders. It is central to React and helps handle the changing parts of a user interface. The component keeps track of its state object, and you change it by calling the updater function that `useState` returns (the hooks equivalent of the `setState()` method in class components).

Here are a few key points to keep in mind about state in React:

- State in React is changeable and dynamic, meaning it can be updated throughout the lifecycle of a component.
- A state holds information about the component it was declared in, and the component that declares a state is the owner of that state. No other component can directly access or modify it.
- When the state of a component changes, the component re-renders itself, updating the corresponding elements in the DOM to reflect the new state. This makes it an efficient way to manage dynamic data and user input in React applications.

When we declare a state variable with `useState`, it returns an array with two items.
The first item is the current value (the state), and the second item is its updater function that is used to update the state. The array items returned from the `useState` function are destructured into the `state` and `setState` variables respectively.

Having gained a brief understanding of the `useState` concept, let's modify the `DisplayAllPosts` component as follows:

```javascript
import React, { useState } from 'react';
import CreateNewPost from './CreateNewPost'

const DisplayAllPosts = () => {
  const [title, setTitle] = useState("");
  const [content, setContent] = useState("");

  const savePostTitleToState = event => {
    setTitle(event.target.value);
    console.log(title)
  };

  const savePostContentToState = event => {
    setContent(event.target.value);
    console.log(content)
  };

  return (
    <>
      <CreateNewPost
        savePostTitleToState={savePostTitleToState}
        savePostContentToState={savePostContentToState}
      />
    </>
  )
}

export default DisplayAllPosts
```

Here, two state variables, `title` and `content`, are created using the `useState` hook along with their corresponding updater functions, `setTitle` and `setContent`. Two functions, `savePostTitleToState` and `savePostContentToState`, are also defined to save user input values into the corresponding state variables. `console.log()` statements are added to these functions so we can watch the input value as the user types. Finally, the two functions are passed as props to the `CreateNewPost` component. Props are used to pass data, including functions and state, from a parent component (`DisplayAllPosts` in this case) to its child components, such as `CreateNewPost`.

The next step involves using the props data passed down from the parent component, `DisplayAllPosts`, inside its child component, `CreateNewPost`.
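To make the value-and-updater pair more concrete, here is a toy, non-React sketch of the idea. React's real `useState` is implemented inside the library and works per component and per render; the `useToyState` and `render` names below are invented purely for illustration:

```javascript
// A toy model of useState: one stored value, and an updater that
// triggers a "re-render" so the component reads the latest value.
let storedValue;
let initialized = false;
let lastOutput;

function useToyState(initialValue) {
  if (!initialized) {
    storedValue = initialValue;
    initialized = true;
  }
  const setState = (next) => {
    storedValue = next;
    render(); // a state change causes a re-render
  };
  return [storedValue, setState];
}

function render() {
  const [title, setTitle] = useToyState("untitled");
  lastOutput = `<h3>${title}</h3>`;
  return setTitle;
}

const setTitle = render();
console.log(lastOutput); // <h3>untitled</h3>
setTitle("My first post"); // triggers a re-render with the new value
console.log(lastOutput); // <h3>My first post</h3>
```

The key point mirrored here is the last bullet above: calling the updater is what causes the component function to run again with the fresh value.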
Open the `CreateNewPost` component and update it to look like this:

```javascript
import React from "react";

const CreateNewPost = props => {
  return (
    <>
      <form>
        <h1>Create New Post</h1>
        <input
          type="text"
          onChange={props.savePostTitleToState}
          placeholder="title"
          size="39"
          required
        ></input>
        <br />
        <br />
        <textarea
          onChange={props.savePostContentToState}
          placeholder="contents"
          rows="8"
          cols="41"
          required
        ></textarea>
        <br />
        <br />
        <button>Save Post</button>
      </form>
    </>
  );
};

export default CreateNewPost;
```

To see the changes you have made, refresh your browser and open your browser console (press `ctrl+shift+i` if you are using Chrome) to view the captured data. Enter some text in the input fields and, if everything works correctly, you should see output similar to this:

![alt text](https://res.cloudinary.com/temitope/image/upload/v1585268496/react-blog/code-output-6.gif "code output 6")

We will now move on to the next step, which involves saving the post title and content captured from the user into a new state variable called `allPosts`. This will be done by handling the click of the `Save Post` button.

In the ``DisplayAllPosts.jsx`` file, create a new state called `allPosts`:

```const [allPosts, setAllPosts] = useState([]);```

After that, we'll create a new function called `savePost`:

```javascript
const savePost = () => {
  const id = Date.now();
  setAllPosts([...allPosts, { title, content, id }]);
  console.log(allPosts);
};
```

The `savePost()` function will be used to save the current post's `title` and `content` into the `allPosts` state variable. The function first generates a unique ID using the `Date.now()` method and then uses the `setAllPosts()` function, which is provided by the `useState()` hook, to add a new post to the `allPosts` array. The new post is an object with `title`, `content`, and `id` properties that are set to the values of the `title`, `content`, and `id` variables, respectively.
Finally, the function logs the `allPosts` array to the console to show that the new post has been added. (Because state updates are asynchronous, the logged array may lag one update behind what you just saved.)

Let's update the `savePost` function to reset the `title` and `content` once the post has been added to the `allPosts` array:

```javascript
const savePost = () => {
  const id = Date.now();
  setAllPosts([...allPosts, { title, content, id }]);
  setTitle("");
  setContent("");
  console.log(allPosts);
};
```

## Clearing the input fields with `useRef`

We need to make sure a user can easily create a new post without having to manually delete the old content from the input fields. Clearing the state value will not affect the input field value in the DOM. To locate the input fields in the DOM and clear their value, we are going to use another React hook called `useRef`.

In React, the `useRef` hook creates a reference to a specific element or value in a component. It returns an object that can be updated without causing a re-render. With this hook, you can access and manipulate the properties of HTML elements like buttons or input fields, or store and access previous values of a component's props or state.

We are going to import ``useRef`` by updating the React import statement like this:

```import React, { useState, useRef } from "react";```

Next, we will initialize the refs:

```
const getTitle = useRef();
const getContent = useRef();
```

Then pass down the refs to the `CreateNewPost` component as props:

```javascript
<CreateNewPost
  savePostTitleToState={savePostTitleToState}
  savePostContentToState={savePostContentToState}
  getTitle={getTitle}
  getContent={getContent}
/>
```

After that, we'll navigate to `CreateNewPost.jsx` and make it use the new props data we passed down to it.
The `CreateNewPost` component should look like this:

```javascript
import React from "react";

const CreateNewPost = props => {
  return (
    <>
      <form>
        <h1>Create New Post</h1>
        <input
          type="text"
          onChange={props.savePostTitleToState}
          placeholder="title"
          size="39"
          required
          ref={props.getTitle}
        ></input>
        <br />
        <br />
        <textarea
          onChange={props.savePostContentToState}
          placeholder="contents"
          rows="8"
          cols="41"
          required
          ref={props.getContent}
        ></textarea>
        <br />
        <br />
        <button>Save Post</button>
      </form>
    </>
  );
};

export default CreateNewPost;
```

The `ref` attribute attaches the `getTitle` and `getContent` refs passed in as props to the `input` and `textarea` fields, so the parent component can read and clear their values.

Open the `DisplayAllPosts.jsx` file and update the `savePost` function to look like this:

```javascript
const savePost = (event) => {
  event.preventDefault();
  const id = Date.now();
  setAllPosts([...allPosts, { title, content, id }]);
  console.log(allPosts);
  getTitle.current.value = "";
  getContent.current.value = "";
};
```

We used `event.preventDefault()` to prevent the default refreshing behaviour of an HTML form when a user clicks on the submit button. The last two lines of the function clear the input fields by setting the value of the `getTitle` and `getContent` refs to an empty string.

To use the `savePost` function, we'll pass it down as props to the `CreateNewPost` component.
Let's update the return statement in `DisplayAllPosts.jsx` file: ``` return ( <> <CreateNewPost savePostTitleToState={savePostTitleToState} savePostContentToState={savePostContentToState} getTitle={getTitle} getContent={getContent} savePost={savePost} /> </> ); ``` Open the `CreateNewPost` component and make it use the `savePost` function we passed down to it like this: ```javascript import React from "react"; const CreateNewPost = props => { return ( <> <form onSubmit={props.savePost}> <h1>Create New Post</h1> <input type="text" onChange={props.savePostTitleToState} placeholder="title" size="39" required ref={props.getTitle} ></input> <br /> <br /> <textarea onChange={props.savePostContentToState} placeholder="contents" rows="8" cols="41" required ref={props.getContent} ></textarea> <br /> <br /> <button>Save Post</button> </form> </> ); }; export default CreateNewPost; ``` Each time a user submits a post by clicking on the Save Post button, the `onSubmit()` event will trigger the `savePost` function we created earlier. 
The `DisplayAllPosts` component should look like this:

```javascript
import React, { useState, useRef } from "react";
import CreateNewPost from "./CreateNewPost";

const DisplayAllPosts = () => {
  const [title, setTitle] = useState("");
  const [content, setContent] = useState("");
  const [allPosts, setAllPosts] = useState([]);

  // Initialize useRef
  const getTitle = useRef();
  const getContent = useRef();

  const savePostTitleToState = event => {
    setTitle(event.target.value);
  };

  const savePostContentToState = event => {
    setContent(event.target.value);
  };

  const savePost = event => {
    event.preventDefault();
    const id = Date.now();
    setAllPosts([...allPosts, { title, content, id }]);
    console.log(allPosts);
    getTitle.current.value = "";
    getContent.current.value = "";
  };

  return (
    <>
      <CreateNewPost
        savePostTitleToState={savePostTitleToState}
        savePostContentToState={savePostContentToState}
        getTitle={getTitle}
        getContent={getContent}
        savePost={savePost}
      />
    </>
  );
};

export default DisplayAllPosts;
```

You can now refresh the browser and open the browser console to check that the captured data is being saved correctly into the `allPosts` state variable. You should see a similar output:

![alt text](https://res.cloudinary.com/temitope/image/upload/v1585275453/react-blog/code-output-7.png "code output 7")

Now that the post data is captured successfully, it's time to display it in the `DisplayAllPosts` component. But before that, let's render the `CreateNewPost` component only when a user clicks on the `Create New` button and remove it as soon as the user clicks the `Save Post` button.
To do that, let's update the `DisplayAllPosts` component to look like this:

```javascript
import React, { useState, useRef } from "react";
import CreateNewPost from "./CreateNewPost";

const DisplayAllPosts = () => {
  const [title, setTitle] = useState("");
  const [content, setContent] = useState("");
  const [allPosts, setAllPosts] = useState([]);
  const [isCreateNewPost, setIsCreateNewPost] = useState(false);

  // Initialize useRef
  const getTitle = useRef();
  const getContent = useRef();

  const savePostTitleToState = event => {
    setTitle(event.target.value);
  };

  const savePostContentToState = event => {
    setContent(event.target.value);
  };

  const toggleCreateNewPost = () => {
    setIsCreateNewPost(!isCreateNewPost)
  }

  const savePost = event => {
    event.preventDefault();
    const id = Date.now();
    setAllPosts([...allPosts, { title, content, id }]);
    console.log(allPosts);
    getTitle.current.value = "";
    getContent.current.value = "";
    toggleCreateNewPost()
  };

  if (isCreateNewPost) {
    return (
      <>
        <CreateNewPost
          savePostTitleToState={savePostTitleToState}
          savePostContentToState={savePostContentToState}
          getTitle={getTitle}
          getContent={getContent}
          savePost={savePost}
        />
      </>
    );
  }

  return (
    <>
      <h2>All Posts</h2>
      <br/>
      <br/>
      <button onClick={toggleCreateNewPost}>Create New</button>
    </>
  )
};

export default DisplayAllPosts;
```

We created a new state variable called `isCreateNewPost` and initialized it with the boolean value `false`. Then we created another function called `toggleCreateNewPost`; this function makes the `isCreateNewPost` state variable switch between true and false. If the previous value of `isCreateNewPost` is `true`, `toggleCreateNewPost` changes it to `false`; otherwise, to `true`.

We added a new button labelled `Create New`. This button calls the `toggleCreateNewPost` function when it is clicked. Finally, we created a conditional statement that only renders the ``CreateNewPost`` component if the ``isCreateNewPost`` boolean value is true.
This process of rendering a component only when a condition is met is called **conditional rendering** in React.

After refreshing the browser, you can preview your changes. When you click on the ``Create New`` button, it should render the ``CreateNewPost`` component like so:

![alt text](https://res.cloudinary.com/temitope/image/upload/v1585277138/react-blog/code-output-8.gif "code output 8")

When you enter the post title and contents and click on the ``Save Post`` button, it should save them and render the ``DisplayAllPosts`` component again, but the post will not be displayed yet.

To display all the posts, we need to modify the Post component to receive the props that will be passed down to it from its parent component, `DisplayAllPosts`. Open ``Post.jsx`` and modify it to look like this:

```javascript
import React from 'react';

const Post = (props) => {
  return (
    <>
      <section>
        <h3>{props.title}</h3>
        <p>{props.content}</p>
        <button>Edit</button>
        <button>Delete</button>
      </section>
    </>
  )
}

export default Post
```

Now that we are done with the ``Post`` component, let's modify ``DisplayAllPosts`` to look like this:

```javascript
import React, { useState, useRef } from "react";
import CreateNewPost from "./CreateNewPost";
import Post from "./Post";

const DisplayAllPosts = () => {
  const [title, setTitle] = useState("");
  const [content, setContent] = useState("");
  const [allPosts, setAllPosts] = useState([]);
  const [isCreateNewPost, setIsCreateNewPost] = useState(false);

  // Initialize useRef
  const getTitle = useRef();
  const getContent = useRef();

  const savePostTitleToState = event => {
    setTitle(event.target.value);
    console.log(title)
  };

  const savePostContentToState = event => {
    setContent(event.target.value);
    console.log(content)
  };

  const toggleCreateNewPost = () => {
    setIsCreateNewPost(!isCreateNewPost);
  };

  const savePost = event => {
    event.preventDefault();
    const id = Date.now();
    setAllPosts([...allPosts, { title, content, id }]);
    console.log(allPosts);
    getTitle.current.value = "";
    getContent.current.value = "";
    toggleCreateNewPost();
  };

  if (isCreateNewPost) {
    return (
      <>
        <CreateNewPost
          savePostTitleToState={savePostTitleToState}
          savePostContentToState={savePostContentToState}
          getTitle={getTitle}
          getContent={getContent}
          savePost={savePost}
        />
      </>
    );
  }

  return (
    <>
      <h2>All Posts</h2>
      {!allPosts.length ? (
        <div>
          <h3>There is nothing to see here!</h3>
        </div>
      ) : (
        allPosts.map(eachPost => {
          return (
            <Post
              id={eachPost.id}
              key={eachPost.id}
              title={eachPost.title}
              content={eachPost.content}
            />
          );
        })
      )}
      <br />
      <br />
      <button onClick={toggleCreateNewPost}>Create New</button>
    </>
  );
};

export default DisplayAllPosts;
```

Here the `DisplayAllPosts` component has been modified to display the post data. If the ``allPosts`` array is empty, it displays `There is nothing to see here!` on the screen; otherwise, it uses the `array.map()` method to loop through the `allPosts` array and pass down each post's `id`, `title`, and `content` as props to the ``Post`` component, with the `id` also serving as the `key`.

Refresh your browser, click on the ``Create New`` button, enter some values into the title and contents fields, and click on save. If everything goes well, you should have a similar output that looks like this:

![alt text](https://res.cloudinary.com/temitope/image/upload/v1585202891/react-blog/code-output-9.gif "code output 9")

You can click on the `Create New` button to add more posts and see all the posts being rendered to the screen.

So far, we are done with the C and R (Create and Read) features of our CRUD app. The next feature we are going to implement is the Update feature. This feature will enable the user to modify a selected post once the `Edit` button is clicked.
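Before diving into Update, it's worth noting that the Create-and-Read cycle we just built rests on two plain JavaScript array operations: spreading the old posts into a new array, and mapping each post to its rendered output. Both can be sanity-checked outside React (the markup string below simply mimics what the `Post` component renders, for illustration only):

```javascript
// Append immutably: spread the old posts into a brand-new array.
const allPosts = [{ id: 1, title: "First", content: "Hello" }];
const nextPosts = [...allPosts, { id: 2, title: "Second", content: "World" }];

// Map each post to its rendered markup, mirroring what the Post component does.
const rendered = nextPosts.map((p) => `<h3>${p.title}</h3><p>${p.content}</p>`);

console.log(allPosts.length);  // 1, the original array is never mutated
console.log(nextPosts.length); // 2
console.log(rendered[1]);      // <h3>Second</h3><p>World</p>
```

Producing a new array instead of mutating the old one is exactly what lets React detect that the state has changed and re-render.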
Open the `DisplayAllPosts.jsx` file and create a new state called `isModifyPost` below the `isCreateNewPost` state:

```
const [isModifyPost, setIsModifyPost] = useState(false);
```

We will use this state to render the `ModifyPost` component once the `isModifyPost` boolean value is true. Next, we'll create another function called `toggleModifyPostComponent` just below the `toggleCreateNewPost` function:

```javascript
const toggleModifyPostComponent = () => {
  setIsModifyPost(!isModifyPost)
}
```

This function switches the value of the `isModifyPost` variable between `true` and `false`. If the previous value of the variable is `false`, it changes it to `true`, and if the previous value is `true`, it changes it to `false`.

Let's create another state called `editPostId`, below the `isModifyPost` state:

```
const [editPostId, setEditPostId] = useState("");
```

This state variable will be used to save the `id` of the post that a user wants to modify. After that, you will create another function called `editPost` below the `toggleModifyPostComponent` function:

```javascript
const editPost = id => {
  setEditPostId(id);
  toggleModifyPostComponent();
};
```

This function will be passed down to the `Post` component and called from inside the `Post` component with the `id` of the post that the user clicks on as its parameter. The `setEditPostId` function saves the post id into the `editPostId` state, while the `toggleModifyPostComponent` function renders or removes the `ModifyPost` component depending on the `isModifyPost` state's boolean value.

We are saving the `id` of the post that a user wants to modify into the `editPostId` state variable because we want the `updatePost` function to have access to it. Now, we will create a new function called `updatePost`.
This function will be used to update the modified post:

```javascript
const updatePost = (event) => {
  event.preventDefault();
  const updatedPost = allPosts.map(eachPost => {
    if (eachPost.id === editPostId) {
      return {
        ...eachPost,
        title: title || eachPost.title,
        content: content || eachPost.content
      };
    }
    return eachPost;
  });
  setAllPosts(updatedPost);
  toggleModifyPostComponent();
};
```

In this code, we used the `map()` method to loop through each post in the `allPosts` array to locate the post that the user wants to edit, based on the post `id` stored in the `editPostId` state variable. Then, we used the spread syntax (`...`) to modify only the `title` and `content` of the post, while keeping the `id` of the post unchanged. To prevent empty values from being saved when the user updates a post without making changes, we used the `OR` operator (`||`) to fall back to the previous post title and content.

The next thing we need to do is render the `ModifyPost` component if the ``isModifyPost`` state variable is ``true``. In ``DisplayAllPosts.jsx``, let's add the following code below the ``if (isCreateNewPost){...}`` statement:

```javascript
else if (isModifyPost) {
  const post = allPosts.find(post => {
    return post.id === editPostId;
  });
  return (
    <ModifyPost
      title={post.title}
      content={post.content}
      updatePost={updatePost}
      savePostTitleToState={savePostTitleToState}
      savePostContentToState={savePostContentToState}
    />
  );
}
```

What we are trying to achieve here is to preload the input fields in the `ModifyPost` component with the data of the selected post. We also passed down the `updatePost`, `savePostTitleToState`, and `savePostContentToState` functions to the `ModifyPost` component. We have used `savePostTitleToState` and `savePostContentToState` before in the `CreateNewPost` component to save user input values to the state variables. Now we are going to use the props that were passed to the ``ModifyPost`` component.
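As a quick aside, the heart of `updatePost` is ordinary array mapping with an `||` fallback, and the same logic can be exercised in isolation as plain JavaScript (the `updatePostById` helper below exists only for this illustration):

```javascript
// Plain-JS version of the update logic: map over the posts, replacing only
// the one whose id matches, and fall back to the old values for empty input.
function updatePostById(allPosts, editPostId, title, content) {
  return allPosts.map((eachPost) => {
    if (eachPost.id === editPostId) {
      return {
        ...eachPost,
        title: title || eachPost.title,
        content: content || eachPost.content,
      };
    }
    return eachPost;
  });
}

const posts = [
  { id: 1, title: "First", content: "Hello" },
  { id: 2, title: "Second", content: "World" },
];

// Only the title changes; the empty content falls back to the old value.
const updated = updatePostById(posts, 2, "Second (edited)", "");
console.log(updated[1].title);   // Second (edited)
console.log(updated[1].content); // World
console.log(updated[0].title);   // First, untouched
```

Note how the empty string is falsy, so `content || eachPost.content` keeps the existing content, which is exactly the behaviour described above.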
Open the `ModifyPost.jsx` and update its code to look like this:

```javascript
import React from "react";

const ModifyPost = props => {
  return (
    <>
      <form>
        <h1>Modify Post</h1>
        <input
          defaultValue={props.title}
          onChange={props.savePostTitleToState}
          type="text"
          placeholder="title"
          size="39"
        ></input>
        <br />
        <br />
        <textarea
          defaultValue={props.content}
          placeholder="contents"
          onChange={props.savePostContentToState}
          rows="8"
          cols="41"
        ></textarea>
        <br />
        <br />
        <button onClick={props.updatePost}>Update Post</button>
      </form>
    </>
  );
};

export default ModifyPost;
```

We set the default values of the input fields that will be rendered to the user with the post `title` and `content` that were passed down to this component. We also gave the submit button an onClick event which will call the ``updatePost`` function that was passed down to the ``ModifyPost`` component.

One more thing before we test the ``ModifyPost`` component: we need to trigger it once a user clicks on the edit button. Let's pass down the `editPost` function from the `DisplayAllPosts` component to the `Post` component. Let's modify the `DisplayAllPosts` component to render the `Post` component like this:

```javascript
return (
  <>
    <h2>All Posts</h2>
    {!allPosts.length ? (
      <div>
        <h3>There is nothing to see here!</h3>
      </div>
    ) : (
      allPosts.map(eachPost => {
        return (
          <Post
            id={eachPost.id}
            key={eachPost.id}
            title={eachPost.title}
            content={eachPost.content}
            editPost={editPost}
          />
        );
      })
    )}
    <br />
    <br />
    <button onClick={toggleCreateNewPost}>Create New</button>
  </>
);
```

Now we are going to update the Post component to use the `editPost` function that was passed to it.
The `Post` component should look like this:

```javascript
import React from "react";

const Post = ({ title, content, editPost, id }) => {
  return (
    <>
      <section>
        <h3>{title}</h3>
        <p>{content}</p>
        <button onClick={() => editPost(id)}>Edit</button>
        <button>Delete</button>
      </section>
    </>
  );
};

export default Post;
```

Before running the app, let's make sure the `DisplayAllPosts.jsx` file looks like this:

```javascript
import React, { useState, useRef } from "react";
import CreateNewPost from "./CreateNewPost";
import Post from "./Post";
import ModifyPost from "./ModifyPost"

const DisplayAllPosts = () => {
  const [title, setTitle] = useState("");
  const [content, setContent] = useState("");
  const [allPosts, setAllPosts] = useState([]);
  const [isCreateNewPost, setIsCreateNewPost] = useState(false);
  const [isModifyPost, setIsModifyPost] = useState(false);
  const [editPostId, setEditPostId] = useState("");

  // Initialize useRef
  const getTitle = useRef();
  const getContent = useRef();

  const savePostTitleToState = event => {
    setTitle(event.target.value);
  };

  const savePostContentToState = event => {
    setContent(event.target.value);
  };

  const toggleCreateNewPost = () => {
    setIsCreateNewPost(!isCreateNewPost);
  };

  const toggleModifyPostComponent = () => {
    setIsModifyPost(!isModifyPost)
  }

  const editPost = id => {
    setEditPostId(id);
    console.log(id)
    toggleModifyPostComponent();
  };

  const updatePost = (event) => {
    event.preventDefault();
    const updatedPost = allPosts.map(eachPost => {
      if (eachPost.id === editPostId) {
        console.log([eachPost.id, editPostId])
        return {
          ...eachPost,
          title: title || eachPost.title,
          content: content || eachPost.content
        };
      }
      console.log(eachPost)
      return eachPost;
    });
    setAllPosts(updatedPost);
    toggleModifyPostComponent();
  };

  const savePost = event => {
    event.preventDefault();
    const id = Date.now();
    setAllPosts([...allPosts, { title, content, id }]);
    console.log(allPosts);
    setTitle("");
    setContent("");
    getTitle.current.value = "";
    getContent.current.value = "";
    toggleCreateNewPost();
  };

  if (isCreateNewPost) {
    return (
      <>
        <CreateNewPost
          savePostTitleToState={savePostTitleToState}
          savePostContentToState={savePostContentToState}
          getTitle={getTitle}
          getContent={getContent}
          savePost={savePost}
        />
      </>
    );
  } else if (isModifyPost) {
    const post = allPosts.find(post => {
      return post.id === editPostId;
    });
    return (
      <ModifyPost
        title={post.title}
        content={post.content}
        updatePost={updatePost}
        savePostTitleToState={savePostTitleToState}
        savePostContentToState={savePostContentToState}
      />
    );
  }

  return (
    <>
      <h2>All Posts</h2>
      {!allPosts.length ? (
        <div>
          <h3>There is nothing to see here!</h3>
        </div>
      ) : (
        allPosts.map(eachPost => {
          return (
            <Post
              id={eachPost.id}
              key={eachPost.id}
              title={eachPost.title}
              content={eachPost.content}
              editPost={editPost}
            />
          );
        })
      )}
      <br />
      <br />
      <button onClick={toggleCreateNewPost}>Create New</button>
    </>
  );
};

export default DisplayAllPosts;
```

Now we can refresh the browser to view the changes.

Finally, we will implement the last and probably the easiest feature of our CRUD app: the `Delete` feature. This feature will enable a user to remove a specific post by clicking the delete button.

Open the `DisplayAllPosts.jsx` file and create a `deletePost` function below the `editPost` function:

```javascript
const deletePost = id => {
  const modifiedPost = allPosts.filter(eachPost => {
    return eachPost.id !== id;
  });
  setAllPosts(modifiedPost);
};
```

The `deletePost` function takes in the `id` of the post that a user wants to remove as its parameter. We used the JavaScript array method `filter()` to remove the post that matches the `id`.

Next, we are going to pass down the `deletePost` function from the `DisplayAllPosts` component to the `Post` component.
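As an aside, the `filter()` logic above is plain JavaScript and can be verified on its own: it returns a new array containing every post except the one whose `id` matches (the `deletePostById` helper below exists only for this illustration):

```javascript
// Plain-JS version of deletePost: keep every post whose id does NOT match.
function deletePostById(allPosts, id) {
  return allPosts.filter((eachPost) => eachPost.id !== id);
}

const posts = [
  { id: 1, title: "First", content: "Hello" },
  { id: 2, title: "Second", content: "World" },
];

const remaining = deletePostById(posts, 1);
console.log(remaining.length);   // 1
console.log(remaining[0].title); // Second
console.log(posts.length);       // 2, the original array is untouched
```

Like `map()` and the spread operator, `filter()` never mutates the original array, which is what makes it a natural fit for React state updates.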
To do that, we are going to update the `Post` component rendered in `DisplayAllPosts.jsx` by adding `deletePost={deletePost}` to it like so:

```javascript
return (
  <>
    <h2>All Posts</h2>
    {!allPosts.length ? (
      <div>
        <h3>There is nothing to see here!</h3>
      </div>
    ) : (
      allPosts.map(eachPost => {
        return (
          <Post
            id={eachPost.id}
            key={eachPost.id}
            title={eachPost.title}
            content={eachPost.content}
            editPost={editPost}
            deletePost={deletePost}
          />
        );
      })
    )}
    <br />
    <br />
    <button onClick={toggleCreateNewPost}>Create New</button>
  </>
);
```

Finally, we will make use of the `deletePost` function we passed down to the Post component by opening the `Post.jsx` file and updating it to look like this:

```javascript
import React from "react";

const Post = ({ title, content, editPost, id, deletePost }) => {
  return (
    <>
      <section>
        <h3>{title}</h3>
        <p>{content}</p>
        <button onClick={() => editPost(id)}>Edit</button>
        <button onClick={() => deletePost(id)}>Delete</button>
      </section>
    </>
  );
};

export default Post;
```

When a user clicks on the `Delete` button, it invokes the `deletePost` function we passed down to the `Post` component with the `id` of the current post. If all goes well, we should have a similar output that looks like this:

![alt text](https://res.cloudinary.com/temitope/image/upload/v1585202891/react-blog/code-output-10.gif "code output 10")

## Summary

This tutorial showed us how to create a basic blog frontend using React without making any API requests to a backend server. We didn't need to rely on a server to provide the data to our application; instead, we used React to create and manage our data.

We also learned how to manipulate state by performing CRUD (Create, Read, Update, Delete) operations on the state data. By doing so, we were able to add, read, update, and delete posts from our blog application. We also used conditional rendering in our application to switch between two components.
Conditional rendering allowed us to display a specific component only when a particular condition is met. This way, we could show or hide certain parts of our application based on user interactions or other factors.

Overall, this tutorial provided a foundation for creating more complex React applications, using React's features to manage data and manipulate the user interface.

Thanks for reading. The full code used in this tutorial can be found here: https://github.com/tope-olajide/react-simple-blog.

If you enjoyed my post, please share it with others and give it some emojis so people can see it.
kingdavid
287,774
Fake news’ foe: Machine Learning and Twilio
Fake news has become a huge issue in our digitally-connected world and it is no longer limited to lit...
0
2020-03-25T08:33:07
https://dev.to/twilio/fake-news-foe-machine-learning-and-twilio-5fln
machinelearning, python, tensorflow, twiliohackathon
**Fake news** has become a huge issue in our digitally-connected world, and it is no longer limited to little squabbles -- fake news spreads like wildfire and impacts millions of people every day.

How do you deal with such a sensitive issue? Countless articles are churned out every day on the internet -- how do you tell real from fake? It's not as easy as turning to a simple fact-checker, which is typically built on a story-by-story basis. As developers, can we turn to machine learning?

In this series we will see two approaches to predicting whether a given article is fake or not. In this first article we will take the more traditional supervised approach of detecting fake news by training a model on labelled data, and we will use the Twilio WhatsApp API to infer from our model. In the next article we will see how we can use advanced pre-trained NLP models like BERT, GPT-2, XLNet, Grover, etc., to achieve our goal.

Let's start with a bit of background.

What is Fake News?
------------------

`According to 30seconds.org:`

"Fake news" is a term used to refer to fabricated news. Fake news is an invention -- a lie created out of nothing -- that takes the appearance of real news with the aim of deceiving people. This is what is important to remember: the information is false, but it seems true.

`According to Wikipedia:`

"Fake news (also known as junk news, pseudo-news, or hoax news) is a form of news consisting of deliberate disinformation or hoaxes spread via traditional news media (print and broadcast) or online social media."

The usage of the web as a medium for consuming information is increasing daily. The amount of information loaded into social media at any point is enormous, posing a challenge to validating the truthfulness of that information. The main motivation for this framework is that, on average, 62% of US adults rely on social media as their main source of news.
The quality of news generated on social media has substantially declined over the years. Fake news is often generated intentionally by unknown sources, and there are existing methodologies to individually validate a user's trustworthiness, the truthfulness of the news, and user engagement on social media. But analysing these features individually doesn't consider the holistic factors of measuring news credibility. Hence, combining this auxiliary information with the news content itself to measure credibility is a promising route to focus on. There have been techniques that validate the writing style of users in order to classify news content, but these methods also have their outliers and error rates.

Aim:
----

We will be building a WhatsApp-based service which will accept news headlines from the user and predict whether the given news is fake or not.

Requirements:
-------------

- A Twilio account --- [sign up for a free one here](www.twilio.com/referral/YmyL1H)
- A Twilio WhatsApp sandbox --- [configure one here](https://www.twilio.com/console/sms/whatsapp/sandbox)
- [Set up your Python and Flask developer environment](https://www.twilio.com/docs/usage/tutorials/how-to-set-up-your-python-and-flask-development-environment) --- make sure you have Python 3 downloaded, as well as [ngrok](https://ngrok.com/)
- TensorFlow

Let's build:
------------

Now we know what fake news is and why it's a major issue. Let's jump into building a solution to fight this problem.

We will be using the [LIAR Dataset](https://www.cs.ucsb.edu/~william/data/liar_dataset.zip) by William Yang Wang, which he used in his research paper titled "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection.

The original dataset comes with the following columns:

- Column 1: the ID of the statement ([ID].json).
- Column 2: the label.
- Column 3: the statement.
- Column 4: the subject(s).
- Column 5: the speaker.
- Column 6: the speaker's job title.
- Column 7: the state info.
- Column 8: the party affiliation.
- Column 9-13: the total credit history count, including the current statement.
  * 9: barely true counts.
  * 10: false counts.
  * 11: half true counts.
  * 12: mostly true counts.
  * 13: pants on fire counts.
- Column 14: the context (venue / location of the speech or statement).

For simplicity, we have converted it to a 2-column format:

- Column 1: Statement (News headline or text).
- Column 2: Label (Label class contains: True, False)

You can find the modified dataset here. Now that we have a dataset, let's start building a machine learning model.

`Step 1: Preprocessing:`

Data preprocessing is the process of preparing raw data and making it suitable for a machine learning model. It is the first and most crucial step while creating a machine learning model. When starting a machine learning project, it is not always the case that we come across clean, formatted data, and before doing any operation with the data, it is essential to clean it and put it into a consistent format. That is what the data preprocessing tasks are for. The file `preprocessing.py` contains all the preprocessing functions needed to process the input documents and texts. First we read the train, test, and validation data files, then perform some preprocessing like tokenizing and stemming. Some exploratory data analysis is also performed, such as examining the response variable distribution, along with data quality checks for null or missing values.
```python
# NLTK stemmer and stopword list used below -- the project defines these in
# preprocessing.py; the exact stemmer is assumed here for completeness.
from nltk.corpus import stopwords as nltk_stopwords
from nltk.stem import SnowballStemmer

eng_stemmer = SnowballStemmer('english')
stopwords = set(nltk_stopwords.words('english'))

#Stemming
def stem_tokens(tokens, stemmer):
    stemmed = []
    for token in tokens:
        stemmed.append(stemmer.stem(token))
    return stemmed

#process the data
def process_data(data, exclude_stopword=True, stem=True):
    tokens = [w.lower() for w in data]
    tokens_stemmed = tokens
    tokens_stemmed = stem_tokens(tokens, eng_stemmer)
    tokens_stemmed = [w for w in tokens_stemmed if w not in stopwords]
    return tokens_stemmed

#creating ngrams
#unigram
def create_unigram(words):
    assert type(words) == list
    return words

#bigram
def create_bigrams(words):
    assert type(words) == list
    skip = 0
    join_str = " "
    Len = len(words)
    if Len > 1:
        lst = []
        for i in range(Len-1):
            for k in range(1, skip+2):
                if i+k < Len:
                    lst.append(join_str.join([words[i], words[i+k]]))
    else:
        #set it as unigram
        lst = create_unigram(words)
    return lst
```

`Step 2: Feature Selection:`

For feature selection, we have used methods like a simple bag-of-words and n-grams, followed by term-frequency weighting such as tf-idf. We have also explored word2vec and POS tagging to extract features, though POS tagging and word2vec have not been used at this point in the project.
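To make the tf-idf weighting mentioned above more concrete, here is a minimal, pure-Python sketch of the idea (illustrative only; the project itself relies on sklearn's vectorizers):

```python
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: tf-idf weight} dict per tokenized document."""
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

docs = [["fake", "news"], ["real", "news"]]
weights = tfidf(docs)
# "news" appears in every document, so its idf (and hence its weight) is 0,
# while "fake" and "real" get positive weights
```

Real vectorizers add smoothing and normalization on top of this, but the intuition is the same: terms that appear everywhere carry little signal.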
We are looking at the following features:

```python
def features(sentence, index):
    """ sentence: [w1, w2, ...], index: the index of the word """
    return {
        'word': sentence[index],
        'is_first': index == 0,
        'is_last': index == len(sentence) - 1,
        'is_capitalized': sentence[index][0].upper() == sentence[index][0],
        'is_all_caps': sentence[index].upper() == sentence[index],
        'is_all_lower': sentence[index].lower() == sentence[index],
        'prefix-1': sentence[index][0],
        'prefix-2': sentence[index][:2],
        'prefix-3': sentence[index][:3],
        'suffix-1': sentence[index][-1],
        'suffix-2': sentence[index][-2:],
        'suffix-3': sentence[index][-3:],
        'prev_word': '' if index == 0 else sentence[index - 1],
        'next_word': '' if index == len(sentence) - 1 else sentence[index + 1],
        'has_hyphen': '-' in sentence[index],
        'is_numeric': sentence[index].isdigit(),
        'capitals_inside': sentence[index][1:].lower() != sentence[index][1:]
    }
```

`Step 3: Classification:`

Here we have built all the classifiers for fake news detection. The extracted features are fed into different classifiers. We have used Naive Bayes, Logistic Regression, Linear SVM, Stochastic Gradient Descent, and Random Forest classifiers from sklearn. Each of the extracted feature sets was used with all of the classifiers. After fitting each model, we compared the F1 scores and checked the confusion matrices.

```
n-grams & tfidf confusion matrix and F1 scores

#Naive bayes
[841 3647]
[427 5325]
f1-Score: 0.723262051071

#Logistic regression
[1617 2871]
[1097 4655]
f1-Score: 0.70113000531

#svm
[2016 2472]
[1524 4228]
f1-Score: 0.67909201429

#sgdclassifier
[ 10 4478]
[ 13 5739]
f1-Score: 0.718731637053

#random forest
[1979 2509]
[1630 4122]
f1-Score: 0.665720333284
```

After fitting all the classifiers, the 2 best performing models were selected as candidate models for fake news classification. We performed parameter tuning by applying the GridSearchCV method to these candidate models and chose the best performing parameters for each classifier.
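As a rough sketch of what that tuning step could look like, here is a tf-idf + Logistic Regression pipeline tuned with GridSearchCV. The parameter grid and the toy data below are illustrative assumptions, not the project's actual code:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Candidate model: tf-idf features feeding a Logistic Regression classifier
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Illustrative grid; the real project would search a wider space
param_grid = {"clf__C": [0.1, 1.0, 10.0]}
search = GridSearchCV(pipeline, param_grid, cv=2, scoring="f1")

# Toy data standing in for the LIAR statements and labels
statements = [
    "taxes went up last year",
    "taxes went down last year",
    "crime fell sharply in the city",
    "crime rose sharply in the city",
]
labels = [1, 0, 1, 0]
search.fit(statements, labels)
best_model = search.best_estimator_
```

`best_model` is then what gets evaluated on the held-out test set and eventually persisted for the prediction step.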
Finally, the selected model was used for fake news detection, returning a probability of truth. In addition, we extracted the top 50 features from our term-frequency tf-idf vectorizer to see which words are most important in each of the classes. We also used precision-recall and learning curves to see how the training and test sets perform as we increase the amount of data fed to our classifiers.

`Step 4: Prediction:`

Our final and best performing classifier was Logistic Regression, which was then saved to disk under the name final_model.sav. Once you clone this repository, this model will be copied to your machine and used by the prediction.py file to classify the fake news. It takes a news article as input from the user, then the model produces the final classification output, which is shown to the user along with the probability of truth.

```python
import pickle  # needed to load the saved model

def detecting_fake_news(var):
    #retrieving the best model for prediction call
    load_model = pickle.load(open('final_model.sav', 'rb'))
    prediction = load_model.predict([var])
    prob = load_model.predict_proba([var])
    return prediction, prob
```

`Step 5: Integrating Twilio WhatsApp API:`

We have to write code that accepts a news headline or text from the Twilio WhatsApp API and passes it to our model for prediction. For this we will use a Python Flask API server. [You can follow a similar process for the SMS API as well.] The following script will do that:

```python
from flask import Flask, request
import prediction
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route('/sms', methods=['POST'])
def sms():
    resp = MessagingResponse()
    inbMsg = request.values.get('Body')
    pred, confidence = prediction.detecting_fake_news(inbMsg)
    resp.message(
        f'The news headline you entered is {pred[0]!r} and corresponds to {confidence[0][1]!r}.')
    return str(resp)

if __name__ == '__main__':
    app.run()
```

Now you have to generate an endpoint which can be accessed through the Twilio WhatsApp Sandbox.
![Running Flask App](https://s3.amazonaws.com/fininity.tech/Blog_images/t3.png)

Your Flask app will need to be visible from the web so Twilio can send requests to it. Ngrok lets us do this. With it installed, run the following command in a new terminal tab, from the directory your code is in: `ngrok http 5000`

![ngrok](https://s3.amazonaws.com/fininity.tech/Blog_images/terminal-1.png)

Grab that ngrok URL to configure the Twilio WhatsApp Sandbox. We will try this on WhatsApp! So let’s go ahead and do it (either in our Sandbox if you want to do testing, or on your main WhatsApp Sender number if you have one provisioned). The screenshot below shows the Sandbox page:

![Configuring Twilio Sandbox](https://s3.amazonaws.com/fininity.tech/Blog_images/twilio-console.png)

And we’re good to go! Let’s test our application on WhatsApp! We can send some news headlines or facts to this sandbox and, if everything works as expected, get predictions in return.

![Test App](https://s3.amazonaws.com/fininity.tech/Blog_images/whatsapp.png)

Hooray! Want to try this yourself? The complete code is available on [GitHub](https://github.com/jbahire/fake-news-foe).

## What's next

This was a very basic implementation with limited data, but I really hope it is sufficient to give you an idea of the cool things you can do with Tensorflow and Twilio. You can try to tweak this project and use various datasets to build something even cooler! So what are you planning to build? Tell me in the comments below or hit me up on [Twitter](https://twitter.com/jayesh_ahire1) with your ideas and I will be happy to collaborate!

In the next part, we will see how we can use advanced pre-trained NLP models like BERT, GPT-2, XLNet, Grover, etc., to achieve our goal!

### References:

1. ["Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection](https://arxiv.org/abs/1705.00648)
2. [Twilio WhatsApp API](https://www.twilio.com/docs/sms/whatsapp/api)
3. [Fake News Detection with LIAR Dataset](https://github.com/nishitpatel01/Fake_News_Detection)
4. [What is Fake News?](https://30secondes.org/en/module/what-is-fake-news/)
5. [FEVER: a large-scale dataset for Fact Extraction and VERification](https://arxiv.org/pdf/1803.05355.pdf)
jbahire
287,805
Web Crawler Service
Web crawling or web data scraping has become increasingly popular in recent years....
0
2020-03-25T09:49:13
https://dev.to/octoparsehola/servicio-de-web-crawler-rastreador-web-3p26
webdev, database, datascience, python
Web crawling or web data scraping has become increasingly popular in recent years. Scraped data can be used for all kinds of analysis, even predictions. By analyzing data, people can gain insight into an industry and take on their competitors. This shows how useful and necessary it is to obtain high-quality data at greater speed and at scale. Moreover, the growing demand for data has driven the rapid growth of web crawler services. http://www.octoparse.es/blog/gratis-web-scraping-herramientas-en-l%C3%ADnea

Web crawler services are easy to find if you search on Google. More precisely, they are a kind of paid, customized service. Whenever you want a website or some dataset crawled, you pay the service provider and then receive the scraped data you asked for. One thing to note: be careful which service provider you choose, and state your data requirements as clearly and unambiguously as possible.

I will suggest some web crawler services http://www.octoparse.es/blog/servicio-de-web-crawler that I have used or learned about, for your reference. In any case, evaluating these services is difficult, since they evolve continuously to serve customers better. The best way to decide is to take your requirements and what each service offers, map them against each other, and rank them yourself.

**1. DataHen** http://www.datahen.com/?utm_campaign=AdWords&utm_source=ppc

DataHen is known as a professional web crawler service provider. It offers a comprehensive, patient service covering every level of data crawling or scraping requirements, from individuals to startups to enterprises. You won't need to buy or learn any scraping software when using DataHen. Even when sites that require authentication block crawlers, they can fill in forms to get through.
The user interface is easy to understand: as you can see below, you only need to fill in the required information and they will deliver the data you need crawled.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/hq3zx6iscyjzeyf6mks5.png)

**2. Grepsr** https://www.grepsr.com/

Grepsr is a powerful crawler service platform that serves many kinds of user data-crawling needs. To communicate better with users, Grepsr provides a fairly clear and comprehensive requirements-gathering user interface, shown below. There are also three editions of Grepsr's paid plan, from Starter to Enterprise. Users can choose any plan based on their respective crawling needs.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/p6dn89ios4530ldpkiot.png)

**3. Octoparse** https://www.octoparse.es/

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/4wiayx1ekfi2d9bdrztq.gif)

Octoparse should be described as a web scraping tool http://www.octoparse.es/blog/construir-un-web-rastreador-para-principiantes , although it also offers a customized data crawler service. The Octoparse Web Crawler Service is powerful as well. Tasks can be scheduled to run on the cloud platform, which includes at least 6 cloud servers working simultaneously. It also supports IP rotation, which prevents certain websites from blocking you. In addition, the Octoparse API allows users to connect their systems to their scraped data in real time: users can import Octoparse data into their own database or use the API to request access to their account's data. Octoparse also offers a free edition plan, which can cover users' basic scraping or crawling needs.
Anyone can use it to scrape or crawl data after registering an account. The only catch is that you need to learn how to configure basic scraping rules to crawl the data you need; in any case, the configuration skills are easy to pick up. The user interface is clear and easy to understand, as you can see in the figure below. By the way, their support service is professional: users with any questions can contact them directly and get feedback and solutions as soon as possible.
octoparsehola
287,856
New Year Resolutions Of Every Website Tester In 2020
From automated browser testing to starting your own blog, here are some of the most interesting and intriguing new year resolutions for every website tester to aim in 2020.
0
2020-03-25T11:31:29
https://www.lambdatest.com/blog/new-year-resolutions-of-every-website-tester-in-2020/
testing, development, productivity, webdev
---
title: New Year Resolutions Of Every Website Tester In 2020
published: true
description: From automated browser testing to starting your own blog, here are some of the most interesting and intriguing new year resolutions for every website tester to aim in 2020.
tags: testing, development, productivity, webdev
canonical_url: https://www.lambdatest.com/blog/new-year-resolutions-of-every-website-tester-in-2020/
cover_image: https://www.lambdatest.com/blog/wp-content/uploads/2020/01/New-Year-Resolution.jpg
---

Were you able to work upon your resolutions for 2019? I may sound comical here, but my 2019 resolution, being a web developer, was to take a leap into web testing in my free time. Why? So I could understand the release cycles from a tester’s perspective. I wanted to wear their shoes and see the SDLC through their eyes. I also thought that it would help me groom myself better as an all-round IT professional. I may have made a few bad decisions in 2019, but being a part-time web tester was not one of them. On the contrary, it worked like a charm. I was able to provide better development estimates after relating to the loads associated with a tester’s bandwidth. I also made sure to follow best practices for avoiding [cross browser compatibility issues](https://www.lambdatest.com/blog/9-ways-to-avoid-cross-browser-compatibility-issues/?utm_source=dev&utm_medium=Blog&utm_campaign=Nikhil-25032020&utm_term=Nikhil) and ended up saving a considerable amount of time for my testing team. The trick about working on new year resolutions is that you need to start taking ‘incremental steps’ to bring the change, as breaking the shackles is always difficult. Mind you, it is difficult but definitely not impossible!

![i feel like every year has a new energy.](https://www.lambdatest.com/blog/wp-content/uploads/2020/01/image1.gif)

_Src: Buzzfeed_

I don’t want to sound condescending; I love being a web developer. However, I am also becoming fond of web testing.
Now, my job as a full-time web developer helps me to catch up with the recent trends in web development. This is why I decided that this year I will be venturing more as a part-time web tester to become a more productive resource for my team.

**Now, what should I target as a website tester for 2020?**

I thought this question would be best answered by my tech team at LambdaTest, so I went to every person on board and asked them about their targets and new year resolutions for 2020. I came up with some intriguing pointers and interesting targets. In this article, I will be sharing these new year resolutions of the LambdaTest crew for every website tester out there.

## I Will Follow The Latest Trends In Web & Technology

It is important to follow the happenings in the outside world, especially in your line of work. There is not a single source of information on the internet that can provide you with all the information, and you need to spend effort discovering those ‘gold mines’. Follow thought leaders in the field of web technologies, automation testing, etc. on social media platforms like LinkedIn, Twitter, etc. You can also join/subscribe to communities/groups like Google Groups, Reddit, Stack Overflow, etc. and consume information that is relevant to you. Even if you are a newbie in website/web application testing, you should try answering questions on Stack Overflow, even if you only have partial information about a potential solution to the problem. Most importantly, make sure to follow these top 21 Selenium automation testing blogs of 2020.

## I Will Venture Into Automation Testing

The field of website testing is constantly evolving, and demand for automation testing of websites/web applications is picking up pace. There will be demand for manual testers as not all testing activities can be automated. However, the demand for automation testers will comfortably outpace the demand for manual testers.
With emerging technologies like artificial intelligence (AI), machine learning (ML), etc. in automation testing, enterprises will be willing to invest more in automated testing than manual testing.

[Manual Testing vs Automation Testing: Check Out The Differences](https://www.lambdatest.com/blog/difference-between-manual-and-automation-testing/)

SDETs (Software Development Engineers in Test) or full-stack QAs are considered advanced versions of software testers. They are in huge demand as they have knowledge of both software development and testing. It’s never too late to start something! Getting started with automation testing will be the best thing you can do to kick-start your career in 2020.

## I Will Learn Different Programming Languages

The Stack Overflow Developer Survey 2019 has an insightful list of [most loved, dreaded, and wanted programming languages](https://insights.stackoverflow.com/survey/2019#technology-_-most-loved-dreaded-and-wanted-languages). The survey results could be a good starting point to learn, from a tester’s perspective, which programming languages are in demand. Each one of us in the automation testing field is likely highly proficient in at most 1-2 programming languages, but you can never be complacent with that. Learning new programming languages will keep you more prepared for rapid changes in the software testing industry. It is a known fact that shifting gears from one programming language to another is not that difficult, as the basic concepts of programming remain the same. Automation tests for websites/web applications can be written in different programming languages like Java, JavaScript, C#, Ruby, Python, etc. Being proficient in different languages will help you outclass the rest of your team and set new standards for them in the long run.

## I Will Blog About My Learnings

The old statement that there is something to learn from each one of us still holds good.
In 2020, make it a point to document your experiments, learnings, and experiences in a technical blog. Getting started with blogging and sticking to a schedule will be extremely difficult in the beginning, but slowly it will become a part of your routine. There is so much to learn from others’ experiences, and starting a blog will not only help you improve your soft skills but also help you network with and learn from other testers. Start a blog on a free blogging platform and slowly move to a custom domain once blogging becomes a part of your lifestyle.☺ Just like you might be searching for solutions to a technical snag that you are facing, there might be many others who have written about a potential solution on their blog. Blogging on different topics related to website testing will help you in your professional journey as it has innumerable, intangible returns!

## I Will Automate Browser Testing

Testing websites/web applications on different combinations of browsers, devices, and operating systems is very important for ensuring consistent behavior and experience for your website visitors. Cross browser testing is and will always remain relevant as far as websites/web applications are concerned. However, have you ever thought about how much time you devote to manual cross browser testing? For a simple regression testing round, you may have to test your web application on hundreds of browser + OS combinations. It sounds exhausting, doesn’t it? This is precisely why you should be focusing on developing a skill around automated browser testing; it will open doors to new avenues of testing. Selenium is the most popular test automation framework for web applications, and Appium is the most popular test automation framework for mobile browsers. Selenium 4, the upcoming release of Selenium, has many enhancements, and the excitement around its release should get you excited about the test framework as well.
Getting started with Selenium is easy as it is compatible with popular programming languages. Start with automated browser testing on a local Selenium Grid and eventually shift to UI testing using a remote online Selenium Grid such as LambdaTest. LambdaTest can help you perform cross browser testing on 2000+ real browsers & operating systems hosted on the cloud. You can perform manual and automated browser testing along with the benefits of third-party integrations. The best part is that you can leverage [parallel testing with Selenium](https://www.lambdatest.com/blog/speed-up-automated-parallel-testing-in-selenium-with-testng/?utm_source=dev&utm_medium=Blog&utm_campaign=Nikhil-25032020&utm_term=Nikhil) and reduce your test cycles by multiple folds.

## I Will Be Super-Attentive In Product Meetings

Are you a fan of Agile Scrums? Well, so are many enterprises handling complex development projects and deadlines. The daily scrums definitely help to bring more transparency to the team. However, one may counter that too many meetings can kill productivity and may end up being monotonous. Nevertheless, there is always some takeaway from meetings, particularly product meetings. Product meetings can be fulfilling as you get to know from the product owners about the nuances of the product features and what thoughts went into designing & building the product. You can build a product management mindset from such meetings, and that mindset can help you come up with better scenarios that will eventually be useful in improving the product. Add the resolution of staying very attentive and taking notes in product meetings.☺

## I Will Learn Various Test Automation Frameworks

Selenium is one of the biggest names in automation testing frameworks. The advantage of the Selenium framework is that it is open-source, compatible with popular programming languages like C#, Python, Ruby, Java, etc.
and major browser vendors like Microsoft Edge, Google Chrome, Internet Explorer, Mozilla Firefox, Safari, etc. Being globally accepted, Selenium has a wide user community, which has led to the introduction of multiple language-specific test automation frameworks compatible with Selenium automation testing. Python, a programming language that is gaining immense popularity, has a range of [test automation frameworks](https://www.lambdatest.com/blog/top-5-python-frameworks-for-test-automation-in-2019/?utm_source=dev&utm_medium=Blog&utm_campaign=Nikhil-25032020&utm_term=Nikhil) like PyUnit, pytest, Robot, Behave, etc. Similarly, there are frameworks for other languages too. For example:

1. [Top Java testing frameworks](https://www.lambdatest.com/blog/top-5-java-test-frameworks-for-automation-in-2019/?utm_source=dev&utm_medium=Blog&utm_campaign=Nikhil-25032020&utm_term=Nikhil)
2. [Top JavaScript testing frameworks](https://www.lambdatest.com/blog/top-javascript-automation-testing-framework/?utm_source=dev&utm_medium=Blog&utm_campaign=Nikhil-25032020&utm_term=Nikhil)
3. [Top PHP testing frameworks](https://www.lambdatest.com/blog/best-9-php-frameworks-in-2019-for-test-automation/?utm_source=dev&utm_medium=Blog&utm_campaign=Nikhil-25032020&utm_term=Nikhil)

So if you thought you were an expert in Selenium after working with a couple of frameworks, think again! There is still a lot to learn. Each framework has its own troubleshooting process, capabilities, and more. Getting started with these test frameworks should not be difficult as most of them are compatible with Selenium WebDriver, and prior knowledge of Selenium will always be an added advantage.

## I Will Focus On Testing For Handheld Devices

The entire spectrum of web testing has changed with the penetration of mobile phones. Irrespective of the customer base, web-based businesses have to focus on mobile testing as mobile phones are becoming an integral part of the business.
There are plenty of mobile test automation tools (Appium, Robotium, UI Automator, MonkeyTalk, Selendroid, etc.) that will gain more prominence in the future. Appium is the most popular tool for automating native, mobile web, and hybrid applications on iOS, Android, and Windows desktop platforms, and it is widely used for automated browser testing on mobile devices. Cross browser testing with Appium can be further extended as it can be used with cloud-based cross browser testing platforms like LambdaTest. If you are an automation tester with knowledge of mobile test automation tools, you can do wonders in 2020, as mobile web testing is one of the leading [software testing trends](https://www.lambdatest.com/blog/testing-trends-to-look-out-for-in-2019/?utm_source=dev&utm_medium=Blog&utm_campaign=Nikhil-25032020&utm_term=Nikhil) for 2020.

## I Will Learn Nuances Of CI/CD

With continuous integration and continuous delivery, development teams are involved in frequent code changes that are pushed to the appropriate branch. Jenkins, JIRA, Travis CI, GitLab, Bamboo, etc. are some of the popular tools used for CI/CD. Learning these tools will further enhance your skills: along with testing, you can also play a role in release management, letting you wear the hats of both a tester and a release manager. There are many MOOC courses on these topics. Start with a free course to get acquainted with the subject and opt for a paid course when required.

## I Will Garner Skills in AI and ML

AI and ML are now being used in codeless test automation, including the latest Selenium IDE. With the rising demand for project methodologies like Agile, DevOps, CI/CD, etc., enterprises are looking for flexible QA approaches. The usage of Machine Learning (ML) for automation in testing will garner more attention in 2020 as it helps dynamically write new test cases based on user interactions by mining their logs and behavior.
AI and ML algorithms are great at further optimizing the [process of test automation](https://www.lambdatest.com/blog/machine-learning-for-automation-testing/?utm_source=dev&utm_medium=Blog&utm_campaign=Nikhil-25032020&utm_term=Nikhil) and can be used in conjunction with the Selenium framework to accelerate the automation testing process. Having know-how of machine learning algorithms that can be used for automated browser testing will give you an edge in this hyper-competitive industry.

## I Will Pay More Attention To Performance

We don’t have to reiterate the importance of performance for a consumer-facing website/web application. As web testers, you already know the implications that a slow page load or any other performance issue can have on the business. Irrespective of whether you are working on client-side or server-side testing, adding complementary skills will definitely help you scale up your profile. Learning server-side/back-end testing would require a considerable amount of effort, hence you should break that task into smaller sub-tasks. Some of the popular server-side tools are Locust.io, Multi-Mechanize, Apache Bench, Httperf, JMeter, etc. Google PageSpeed Insights is one of the best tools available for gauging performance on the client side. In fact, you can [measure page load times with Selenium](https://www.lambdatest.com/blog/how-to-measure-page-load-times-with-selenium/?utm_source=dev&utm_medium=Blog&utm_campaign=Nikhil-25032020&utm_term=Nikhil) as well. If you are familiar with the Selenium framework, it could be a great starting point to venture into performance testing. Learning these tools will improve your depth & understanding of automated testing, as you can visualize a clear, wholesome picture from the client side as well as the server side.

## I Will Build My Personal Brand

There are some things that will reap positive results only after many years, and one of them is your personal brand.
Personal branding has become extremely important, and the best way to build it is by letting relevant people know about your skills and spreading your knowledge. Present at relevant conferences, upload your presentations online, and start uploading your learnings/code to platforms like GitHub so that others can also benefit from your work. It also helps you improve your own skills.

## I Will Improve My Soft Skills

When you are in a discussion, it is important to step into the other person’s shoes and adapt your language to his/her skill set and position. When you are presenting something to business stakeholders (i.e. product managers, QA managers, VPs, etc.), it is important to make them aware of the business side of software/web testing. Instead of big presentations that contain defect conversion charts, convert that information into a 1-2 page slide deck that walks them through the business risks and delivery timelines. This is not a simple task, but it should be learned. Good communication skills (oral and written) are important traits of a good tester, as testers have to constantly communicate with relevant stakeholders in the team.

## I Won’t Miss Out On Exploratory Testing

Sometimes, it is perfectly fine to venture into unknown territories, and the same principle can also be applied to testing. As a tester, you keep performing your day-to-day duties of test planning, execution, and test case enhancement. Sometimes, you can also look at testing features that were developed a long time back, have not been updated, and are still being used by your users. In 2020, make it a habit to take some time out of your busy schedule and perform exploratory testing of certain features in the product. This will help you improve your understanding of the product and vitalize your creativity and lateral thinking.
## It’s Curtains

![It’s Curtains](https://www.lambdatest.com/blog/wp-content/uploads/2020/01/image3.gif)

These are the New Year resolutions that were shared by the tech team, and I believe they would be valid for every website tester who wishes to thrive in the industry. The software testing field is going through a rapid transformation, so set realistic deadlines to complete these goals.☺

> **“If you are not learning something new each day, you are not testing.”
> -Jerry (Gerald) Weinberg**

One rule of thumb that has worked for me is making sure there is some progress every single day, so that you are in a comfortable position to achieve your New Year resolutions. As website testers, what are your top resolutions for 2020? Do share your key resolutions by leaving a note about them in the comments section of the blog. Happy new year and happy testing! 🙂

[![cross_browser_testing_tool](https://www.lambdatest.com/blog/wp-content/uploads/2018/11/Adword-Cyber2.jpg)](https://accounts.lambdatest.com/register/?utm_source=dev&utm_medium=Blog&utm_campaign=Nikhil-25032020&utm_term=Nikhil)
nikhiltyagi04
287,918
Washing your code: avoid mutation
Mutations happen when we change a JavaScript object or array without creating a new variable or reassigning an existing one. Mutations make code harder to understand and can lead to hard-to-find bugs.
4,121
2020-03-25T13:39:13
https://blog.sapegin.me/all/avoid-mutation/
javascript, programming, cleancode
--- published: true title: 'Washing your code: avoid mutation' description: Mutations happen when we change a JavaScript object or array without creating a new variable or reassigning an existing one. Mutations make code harder to understand and can lead to hard-to-find bugs. canonical_url: https://blog.sapegin.me/all/avoid-mutation/ series: Washing your code tags: - javascript - programming - cleancode --- <a href="https://leanpub.com/washingcode/"><img src="https://d33wubrfki0l68.cloudfront.net/28e22fbb71fee8adfa141f08f2296ef28eb1669e/6c21d/images/washing-your-code-cover-small.jpg" width="150" height="194" align="right" /></a> _You’re reading an excerpt of my upcoming book on clean code, “Washing your code: write once, read seven times.” [Preorder it on Leanpub](https://leanpub.com/washingcode/) or [read a draft online](https://github.com/sapegin/washingcode-book/)._ --- Mutations happen when we change a JavaScript object or array without creating a new variable or reassigning an existing one: ```js const puppy = { name: 'Dessi', age: 9 }; puppy.age = 10; ``` Here we’re _mutating_ the original `puppy` object by changing its `age` property. Mutations are often problematic. Consider this function: ```js function printSortedArray(array) { array.sort(); for (const item of array) { console.log(item); } } ``` The problem here is that the `.sort()` array method mutates the array we’re passing into our function, likely not what we’d expect when calling a function named `printSortedArray`. Some of the problems with mutation: - Mutation may lead to unexpected and hard-to-debug issues, where data becomes incorrect somewhere, and you have no idea where it happens. - Mutation makes code harder to understand: at any time, an array or object may have a different value, so we need to be very careful when reading the code. - Mutation of function arguments makes the behavior of a function surprising. 
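The first problem on this list deserves a concrete illustration. The following sketch (all names are invented for this example) shows a function that looks like it builds a new object but actually corrupts shared state, because `Object.assign` mutates its first argument:

```javascript
// Hypothetical example: a shared "defaults" object gets silently corrupted
const defaultOptions = { verbose: false };

function makeOptions(overrides) {
  // Bug: Object.assign treats its first argument as the mutation target,
  // so every call rewrites the shared defaultOptions object
  return Object.assign(defaultOptions, overrides);
}

const options = makeOptions({ verbose: true });
console.log(options.verbose); // -> true
console.log(defaultOptions.verbose); // -> true: the shared "default" has changed too
```

Every later caller now sees `verbose: true` as the “default”, and the bug surfaces far from the line that caused it.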
_Immutability_ or _immutable data structures_, meaning that to change a value we have to create a new array or object, would solve this problem. Unfortunately, JavaScript doesn’t support immutability natively, and all solutions are more crutches than actual solutions. But even just _avoiding_ mutations in our code makes it easier to understand. Also, don’t forget that `const` in JavaScript only prevents reassignments — not mutations. We’ve discussed reassignments in the previous chapter, [Avoid reassigning variables](/all/avoid-reassigning-variables). ### Avoid mutating operations One of the most common use cases for mutation is updating an object: <!-- const hasStringModifiers = m => m.match(/^[ \w]+$/) --> ```js function parseExample(content, lang, modifiers) { const example = { content, lang }; if (modifiers) { if (hasStringModifiers(modifiers)) { example.settings = modifiers .split(' ') .reduce((obj, modifier) => { obj[modifier] = true; return obj; }, {}); } else { try { example.settings = JSON.parse(modifiers); } catch (err) { return { error: `Cannot parse modifiers` }; } } } return example; } ``` <!-- expect(parseExample('pizza', 'js')).toEqual({content: 'pizza', lang: 'js'}) expect(parseExample('pizza', 'js', '{"foo": true}')).toEqual({content: 'pizza', lang: 'js', settings: {foo: true}}) expect(parseExample('pizza', 'js', 'foo bar')).toEqual({content: 'pizza', lang: 'js', settings: {foo: true, bar: true}}) --> Here we’re creating an object with three fields, one of which, `settings`, is optional. And we’re doing it by mutating the initial `example` object when it should have an optional field. I prefer to see the whole object shape in a single place instead of having to read the whole function to find all possible object shape variations. Usually it doesn’t matter whether a property has an `undefined` value or doesn’t exist at all. I haven’t seen many cases where it mattered for a good reason. 
We also have a special error case here that returns an entirely different object with a lone `error` property. But it’s really a special case because none of the properties of the two objects overlap, and it doesn’t make sense to merge them. I use ternaries for simple cases, and extract code to a function for more complex cases. Here we have a good case for the latter because of a nested condition and a `try`/`catch` block. Let’s refactor it: <!-- const hasStringModifiers = m => m.match(/^[ \w]+$/) --> ```js function getSettings(modifiers) { if (!modifiers) { return undefined; } if (hasStringModifiers(modifiers)) { return modifiers.split(' ').reduce((obj, modifier) => { obj[modifier] = true; return obj; }, {}); } return JSON.parse(modifiers); } function parseExample(content, lang, modifiers) { try { return { content, lang, settings: getSettings(modifiers) }; } catch (err) { return { error: `Cannot parse modifiers` }; } } ``` <!-- expect(parseExample('pizza', 'js')).toEqual({content: 'pizza', lang: 'js'}) expect(parseExample('pizza', 'js', '{"foo": true}')).toEqual({content: 'pizza', lang: 'js', settings: {foo: true}}) expect(parseExample('pizza', 'js', 'foo bar')).toEqual({content: 'pizza', lang: 'js', settings: {foo: true, bar: true}}) --> Now it’s easier to understand what the code does, and the possible shapes of the return object are clear. We’ve also removed all mutations and reduced nesting a little. ### Beware of the mutating array methods Not all methods in JavaScript return a new array or object. [Some methods mutate](https://doesitmutate.xyz/) the original value in place. For example, `push()` is one of the most commonly used. Replacing imperative code, full of loops and conditions, with declarative code is one of my favorite refactorings. And one of the most common suggestions I give in code reviews. 
Consider this code: <!-- const ProductOptions = () => null const product1 = {name: 'pizza', colors: [], sizes: []} const product2 = {name: 'pizza', colors: [], sizes: []} --> ```jsx const generateOptionalRows = () => { const rows = []; if (product1.colors.length + product2.colors.length > 0) { rows.push({ row: 'Colors', product1: <ProductOptions options={product1.colors} />, product2: <ProductOptions options={product2.colors} /> }); } if (product1.sizes.length + product2.sizes.length > 0) { rows.push({ row: 'Sizes', product1: <ProductOptions options={product1.sizes} />, product2: <ProductOptions options={product2.sizes} /> }); } return rows; }; const rows = [ { row: 'Name', product1: <Text>{product1.name}</Text>, product2: <Text>{product2.name}</Text> }, // More rows... ...generateOptionalRows() ]; ``` Here we have two ways of defining table rows: a plain array with always visible rows, and a function that returns optional rows. The latter mutates the original array using the `.push()` method. Array mutation itself isn’t the most significant issue of this code. However, code with mutations likely hides other issues — mutation is a good sign to look closer. Here the main problem is imperative array building and different ways for handling required and optional rows. Replacing imperative code with declarative and eliminating conditions often makes code more readable and maintainable. Let’s merge all possible rows into a single declarative array: <!-- const ProductOptions = () => null const product1 = {name: 'pizza', colors: [], sizes: []} const product2 = {name: 'pizza', colors: [], sizes: []} --> ```jsx const rows = [ { row: 'Name', product1: <Text>{product1.name}</Text>, product2: <Text>{product2.name}</Text> }, // More rows... 
{ row: 'Colors', product1: <ProductOptions options={product1.colors} />, product2: <ProductOptions options={product2.colors} />, isVisible: (product1, product2) => product1.colors.length > 0 || product2.colors.length > 0 }, { row: 'Sizes', product1: <ProductOptions options={product1.sizes} />, product2: <ProductOptions options={product2.sizes} />, isVisible: (product1, product2) => product1.sizes.length > 0 || product2.sizes.length > 0 } ]; const visibleRows = rows.filter(row => { if (typeof row.isVisible === 'function') { return row.isVisible(product1, product2); } return true; }); ``` Now we’re defining all rows in a single array. All rows are visible by default unless they have an `isVisible` function that returns `false`. We’ve improved code readability and maintainability: - there’s only one way of defining rows; - no need to check two places to see all available rows; - no need to decide which method to use to add a new row; - easier to make an existing row optional by adding an `isVisible` function to it. Here’s another example: <!-- const options = {foo: 1} const task = {parameters: {foo: {message: 'Foo'}, bar: {initial: 2}}} --> ```js const defaults = { ...options }; const prompts = []; const parameters = Object.entries(task.parameters); for (const [name, prompt] of parameters) { const hasInitial = typeof prompt.initial !== 'undefined'; const hasDefault = typeof defaults[name] !== 'undefined'; if (hasInitial && !hasDefault) { defaults[name] = prompt.initial; } prompts.push({ ...prompt, name, initial: defaults[name] }); } ``` <!-- expect(defaults).toEqual({foo: 1, bar: 2}) expect(prompts).toEqual([{name: 'foo', initial: 1, message: 'Foo'}, {name: 'bar', initial: 2}]) --> At first sight, this code doesn’t look very bad: it converts an object into an array by pushing new items into the `prompts` array. But if we take a closer look, there’s another mutation inside a condition in the middle that mutates the `defaults` object. 
And this is a bigger problem because it’s easy to miss while reading the code. The code is actually doing two loops: one to convert the `task.parameters` object to the `prompts` array, and another to update `defaults` with values from `task.parameters`. I’d split them to make it clear: <!-- const options = {foo: 1} const task = {parameters: {foo: {message: 'Foo'}, bar: {initial: 2}}} --> ```js const parameters = Object.entries(task.parameters); const defaults = parameters.reduce( (acc, [name, prompt]) => ({ ...acc, [name]: options[name] !== undefined ? options[name] : prompt.initial }), { ...options } ); const prompts = parameters.map(([name, prompt]) => ({ ...prompt, name, initial: defaults[name] })); ``` <!-- expect(defaults).toEqual({foo: 1, bar: 2}) expect(prompts).toEqual([{name: 'foo', initial: 1, message: 'Foo'}, {name: 'bar', initial: 2}]) --> Other [mutating array methods](https://doesitmutate.xyz/) to watch out for are: - [.copyWithin()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/copyWithin) - [.fill()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/fill) - [.pop()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/pop) - [.push()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/push) - [.reverse()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reverse) - [.shift()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/shift) - [.sort()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/sort) - [.splice()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/splice) - [.unshift()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/unshift) ### Avoid mutation of function arguments Objects or arrays that are passed to a function can 
be mutated inside that function, and this affects the original object: ```js const mutate = object => { object.secret = 'Loves pizza'; }; const person = { name: 'Chuck Norris' }; mutate(person); // -> { name: 'Chuck Norris', secret: 'Loves pizza' } ``` <!-- expect(person).toEqual({ name: 'Chuck Norris', secret: 'Loves pizza' }) --> Here the `person` object is mutated inside the `mutate` function. Function argument mutation can be intentional and accidental, and both are problematic: - It’s harder to understand how a function works and how to use it because it doesn’t return a value but changes one of the incoming arguments. - Accidental argument mutation is even worse because function consumers don’t expect it. And it can lead to hard-to-find bugs when a value that is mutated inside a function is later used somewhere else. Consider this example: ```js const addIfGreaterThanZero = (list, count, message) => { if (count > 0) { list.push({ id: message, count }); } }; const getMessageProps = ( adults, children, infants, youths, seniors ) => { const messageProps = []; addIfGreaterThanZero(messageProps, adults, 'ADULTS'); addIfGreaterThanZero(messageProps, children, 'CHILDREN'); addIfGreaterThanZero(messageProps, infants, 'INFANTS'); addIfGreaterThanZero(messageProps, youths, 'YOUTHS'); addIfGreaterThanZero(messageProps, seniors, 'SENIORS'); return messageProps; }; ``` <!-- expect(getMessageProps(1, 5, 0, 2, 0)).toEqual([ {count: 1, id: 'ADULTS'}, {count: 5, id: 'CHILDREN'}, {count: 2, id: 'YOUTHS'} ]) --> It converts a bunch of number variables to a `messageProps` array that groups people of different ages with their count: ```js [ { id: 'ADULTS', count: 7 }, { id: 'SENIORS', count: 2 } ]; ``` The problem with this code is that the `addIfGreaterThanZero` function mutates the array we’re passing to it. This is an example of an intentional mutation: it’s required for this function to work. However, it’s not the best API for what this function does. 
We can change this function to return a new array instead: ```js const addIfGreaterThanZero = (list, count, message) => { if (count > 0) { return [ ...list, { id: message, count } ]; } return list; }; ``` <!-- let array = [{count: 1, id: 'ADULTS'}] array = addIfGreaterThanZero(array, 0, 'CHILDREN') array = addIfGreaterThanZero(array, 2, 'YOUTHS') expect(array).toEqual([{count: 1, id: 'ADULTS'}, {count: 2, id: 'YOUTHS'}]) --> But I don’t think we need this function at all: ```js const MESSAGE_IDS = [ 'ADULTS', 'CHILDREN', 'INFANTS', 'YOUTHS', 'SENIORS' ]; const getMessageProps = ( adults, children, infants, youths, seniors ) => { return [adults, children, infants, youths, seniors] .map((count, index) => ({ id: MESSAGE_IDS[index], count })) .filter(({ count }) => count > 0); }; ``` <!-- expect(getMessageProps(1, 5, 0, 2, 0)).toEqual([ {count: 1, id: 'ADULTS'}, {count: 5, id: 'CHILDREN'}, {count: 2, id: 'YOUTHS'} ]) --> Now it’s easier to understand what the code does. There’s no repetition, and the intent is clear: the `getMessageProps` function converts a list of values to an array of objects and removes “empty” items. We can simplify it further: ```js const MESSAGE_IDS = [ 'ADULTS', 'CHILDREN', 'INFANTS', 'YOUTHS', 'SENIORS' ]; const getMessageProps = (...counts) => { return counts .map((count, index) => ({ id: MESSAGE_IDS[index], count })) .filter(({ count }) => count > 0); }; ``` <!-- expect(getMessageProps(1, 5, 0, 2, 0)).toEqual([ {count: 1, id: 'ADULTS'}, {count: 5, id: 'CHILDREN'}, {count: 2, id: 'YOUTHS'} ]) --> But this makes the function API less discoverable and can make editor autocomplete less useful. It also gives the wrong impression that the function accepts any number of arguments and that the count order is unimportant — the number and order of arguments were clear in the previous iteration. 
We can also use the `.reduce()` method instead of `.map()` / `.filter()` chaining: ```js const MESSAGE_IDS = [ 'ADULTS', 'CHILDREN', 'INFANTS', 'YOUTHS', 'SENIORS' ]; const getMessageProps = (...counts) => { return counts.reduce((acc, count, index) => { if (count > 0) { acc.push({ id: MESSAGE_IDS[index], count }); } return acc; }, []); }; ``` <!-- expect(getMessageProps(1, 5, 0, 2, 0)).toEqual([ {count: 1, id: 'ADULTS'}, {count: 5, id: 'CHILDREN'}, {count: 2, id: 'YOUTHS'} ]) --> I’m not a huge fan of `.reduce()` because it often makes code harder to read and intent less clear. With `.map()` / `.filter()` chaining, it’s clear that we’re first converting an array to another array with the same number of items, and then removing array items we don’t need. With `.reduce()` it’s less obvious. So I’d stop two steps ago with this refactoring. Probably the only valid reason to mutate function arguments is performance optimization: when you work with a huge piece of data, and creating a new object or array would be too slow. But like with all performance optimizations: measure first to know whether you actually have a problem, and avoid premature optimization. ### Make mutations explicit if you have to use them Sometimes we can’t avoid mutations, for example, because of an unfortunate language API that does mutation. Array’s `.sort()` method is an infamous example of that: ```js const counts = [6, 3, 2]; const puppies = counts.sort().map(n => `${n} puppies`); ``` <!-- expect(puppies).toEqual(['2 puppies', '3 puppies', '6 puppies']) --> This example gives the impression that the `counts` array isn’t changing, and we’re just creating a new `puppies` array with the sorted values. But the `.sort()` method returns a sorted array _and_ mutates the original array at the same time. This kind of code is hazardous and can lead to hard-to-find bugs. Many developers don’t realize that the `.sort()` method is mutating because the code _seems_ to work fine. 
It’s better to make the mutation explicit: ```js const counts = [6, 3, 2]; const sortedCounts = [...counts].sort(); const puppies = sortedCounts.map(n => `${n} puppies`); ``` <!-- expect(puppies).toEqual(['2 puppies', '3 puppies', '6 puppies']) --> Here we’re making a shallow copy of the `counts` array using the spread syntax and then sorting it, so the original array stays the same. Another option is to wrap a mutating API into a new API that doesn’t mutate original values: ```js function sort(array) { return [...array].sort(); } const counts = [6, 3, 2]; const puppies = sort(counts).map(n => `${n} puppies`); ``` <!-- expect(puppies).toEqual(['2 puppies', '3 puppies', '6 puppies']) --> Or use a third-party library, like Lodash and its [`sortBy` function](https://lodash.com/docs#sortBy): ```js const counts = [6, 3, 2]; const puppies = _.sortBy(counts).map(n => `${n} puppies`); ``` <!-- expect(puppies).toEqual(['2 puppies', '3 puppies', '6 puppies']) --> ### Updating objects Modern JavaScript makes it easier to do immutable data updates thanks to [the spread syntax](http://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax). Before the spread syntax we had to write something like: ```js const prev = { coffee: 1 }; const next = Object.assign({}, prev, { pizza: 42 }); // -> { coffee: 1, pizza: 42 } ``` <!-- expect(next).toEqual({coffee: 1, pizza: 42}) --> Note the empty object as the first argument: it was necessary; otherwise, `Object.assign` would mutate the initial object: it considers the first argument as a target. It mutates the first argument and also returns it — this is a very unfortunate API. Now we can write: ```js const prev = { coffee: 1 }; const next = { ...prev, pizza: 42 }; ``` <!-- expect(next).toEqual({coffee: 1, pizza: 42}) --> This does the same thing but is less verbose, and there’s no need to remember `Object.assign` quirks. 
And before the [Object.assign](http://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) in ECMAScript 2015, we didn’t even try to avoid mutations: it was too painful. Redux has a great [page on immutable update patterns](https://redux.js.org/recipes/structuring-reducers/immutable-update-patterns): it describes patterns for updating arrays and objects without mutations, and it’s useful even if you don’t use Redux. And still, spread syntax quickly gets incredibly verbose: ```js function addDrink(meals, drink) { return { ...meals, lunch: { ...meals.lunch, drinks: [...meals.lunch.drinks, drink] } }; } ``` <!-- const next = addDrink({breakfast: 'none', lunch: {food: 'pasta', drinks: ['tea']}}, 'coffee'); expect(next).toEqual({breakfast: 'none', lunch: {food: 'pasta', drinks: ['tea', 'coffee']}}) --> We need to spread each level of the object to change a nested value; otherwise, we’ll _overwrite_ the initial object with a new one: ```js function addDrink(meals, drink) { return { ...meals, lunch: { drinks: [drink] } }; } ``` <!-- const next = addDrink({breakfast: 'none', lunch: {food: 'pasta', drinks: ['tea']}}, 'coffee'); expect(next).toEqual({breakfast: 'none', lunch: {drinks: ['coffee']}}) --> Here we’re keeping only the first level of properties of the initial object: `lunch` and `drinks` will have only the new properties. Also, spread and `Object.assign` only do shallow cloning: only the first level properties are copies, but all nested properties are references to the original object, meaning mutation of a nested property mutates the original object. Keeping your objects as shallow as possible might be a good idea if you update them often. While we’re waiting for JavaScript [to get native immutability](https://github.com/tc39/proposal-record-tuple), there are two non-exclusive ways we can make our lives easier today: - prevent mutations; - simplify object updates. 
**Preventing mutations** is good because it’s so easy to miss them during code reviews, and then spend many hours debugging weird issues. One way to prevent mutations is to use a linter. ESLint has several plugins that try to do just that, and we’ll discuss them in the Tooling chapter. [eslint-plugin-better-mutation](https://github.com/sloops77/eslint-plugin-better-mutation) disallows any mutations, except for local variables in functions. This is a great idea because it prevents bugs caused by the mutation of shared objects but allows you to use mutations locally. Unfortunately, it breaks even in simple cases, such as a mutation occurring inside `.forEach()`. Another way to prevent mutations is to mark all objects and arrays as read-only in TypeScript or Flow. For example, using the `readonly` modifier in TypeScript: ```ts interface Point { readonly x: number; readonly y: number; } ``` Or using the `Readonly` utility type: ```ts type Point = Readonly<{ x: number; y: number; }>; ``` And similar for arrays: ```ts function sort(array: readonly any[]) { return [...array].sort(); } ``` Note that both the `readonly` modifier and the `Readonly` utility type are shallow, so we need to add them to all nested objects too. [eslint-plugin-functional](https://github.com/jonaskello/eslint-plugin-functional) has a rule to require read-only types everywhere, which may be more convenient than remembering to do that yourself. Unfortunately, it only supports the `readonly` modifier but not the `Readonly` utility type. I think it’s a good idea, because there’s no runtime cost, though it makes type definitions more verbose. I’d prefer [an option in TypeScript](https://github.com/microsoft/TypeScript/issues/32758) to make all types read-only by default with a way to opt out. Similar to making objects read-only on the type level, we can make them read-only at runtime with `Object.freeze`. 
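As a minimal sketch of what runtime freezing buys us (assuming strict mode, where a write to a frozen object throws; in sloppy mode the write is silently ignored):

```javascript
'use strict';

// Object.freeze makes the object's own properties non-writable at runtime
const point = Object.freeze({ x: 1, y: 2 });

try {
  point.x = 100; // throws in strict mode
} catch (err) {
  console.log(err instanceof TypeError); // -> true
}

console.log(point.x); // -> 1, the object is unchanged
```

Failing loudly during development is usually preferable to a mutation silently slipping through.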
`Object.freeze` is also shallow, so we’d have to use a library like [deep-freeze](https://github.com/substack/deep-freeze) to ensure that nested objects are also frozen, and we might want to have freezing only in development since it can otherwise slow our app down. I don’t think freezing is worth it on its own unless it is part of another library. **Simplifying object updates** is another option that we can combine with mutation prevention. The most popular way to simplify object updates is to use the [Immutable.js](https://immutable-js.github.io/immutable-js/) library: ```js import { Map } from 'immutable'; const map1 = Map({ food: 'pizza', drink: 'coffee' }); const map2 = map1.set('drink', 'vodka'); // -> Map({ food: 'pizza', drink: 'vodka' }) ``` I’m not a big fan of it because it has a completely custom API that one has to learn. Also, converting arrays and objects from plain JavaScript to Immutable.js and back every time we need to work with any native JavaScript API or almost any third-party API is annoying and feels like Immutable.js creates more problems than it solves. Another option is [Immer](https://immerjs.github.io/immer/), which allows you to use any mutating operations on a _draft_ version of an object, without affecting the original object in any way. Immer intercepts each operation, and creates a new object: ```js import produce from 'immer'; const map1 = { food: 'pizza', drink: 'coffee' }; const map2 = produce(map1, draftState => { draftState.drink = 'vodka'; }); // -> { food: 'pizza', drink: 'vodka' } ``` And Immer will freeze the resulting object in development. ### Even mutation is not so bad sometimes In rare cases, imperative code with mutations isn’t so bad, and rewriting it in a declarative way without mutations doesn’t make it better. Consider this example: <!-- // TODO import adddays from date-fns? 
--> ```js const getDateRange = (startDate, endDate) => { const dateArray = []; let currentDate = startDate; while (currentDate <= endDate) { dateArray.push(currentDate); currentDate = addDays(currentDate, 1); } return dateArray; }; ``` <!-- // TODO --> Here we’re making an array of dates to fill a given date range. I don’t have good ideas on how to rewrite this code without an imperative loop, reassignment, and mutation. And here we can live with this: - all “bad” things are isolated in a small function; - the function has a meaningful name; - the code is clear enough; - the function is pure: it doesn’t have any internal state and avoids mutating its arguments. It’s better to have simple and clear code with mutations than complex and messy code without them. But if you do use mutations, it’s wise to isolate them to a small function with a meaningful name and clear API. --- Start thinking about: - Rewriting imperative code with mutations in a pure declarative way to improve its readability. - Keeping the complete object shape in a single place; when you create a new object, make its shape as clear as possible. - Deduplicating logic and separating “what” from “how.” - Avoiding mutation of function arguments to prevent hard-to-find bugs. - Using `.map()` / `.filter()` chaining instead of `.reduce()`. - Making mutations explicit if you have to use them. - Preventing mutations in your code using a linter or read-only types. --- _If you have any feedback, [tweet me](https://twitter.com/iamsapegin), [open an issue](https://github.com/sapegin/washingcode-book/issues) on GitHub, or email me at [artem@sapegin.ru](mailto:artem@sapegin.ru). [Preorder the book on Leanpub](https://leanpub.com/washingcode/) or [read a draft online](https://github.com/sapegin/washingcode-book/blob/master/manuscript/book.md)._
sapegin
287,971
Building a site from scratch. Part 2 - First routing
Structuring Having chosen the main categories, I needed to start building the website stru...
5,597
2020-03-25T15:05:17
https://buaiscia.github.io/posts/building-site-from-scratch-first-routing
beginners, tutorial, react, webdev
--- canonical_url: "https://buaiscia.github.io/posts/building-site-from-scratch-first-routing" --- ## Structuring Having chosen the main categories, I needed to start building the website structure. The tree looks like the following: - 📂 __src__ - 📄 [App.css](src/App.css) - 📄 [App.js](src/App.js) - 📂 __Components__ - 📂 __About__ - 📂 __Candles__ - 📂 __Ceramics__ - 📂 __Contact__ - 📂 __Gingerbread__ - 📄 [Landing.jsx](src/Components/Landing.jsx) - 📂 __Misc__ - 📂 __Woodcarving__ - 📂 __Containers__ - 📂 __HOC__ - 📂 __Layout__ - 📄 [index.css](src/index.css) - 📄 [index.js](src/index.js) By the way, I used <a href="https://github.com/michalbe/md-file-tree" target="__blank">md-file-tree by @michalbe</a> to generate the tree in the VSCode terminal. ## Routing My App.js only imports the Layout component: <h3>App.js</h3> ```jsx render() { return ( <Layout /> ); } ``` The Layout is a HOC (Higher-Order Component) which will eventually include the landing page, the routing to the other pages/components, and the navigation (which will be hidden on the landing page): <h3>Layout.jsx</h3> ```jsx <BrowserRouter> <React.Fragment> <Switch> <Route path='/' exact component={Landing} /> <Route path='/about' component={About} /> <Route path='/contact' component={Contact} /> </Switch> </React.Fragment> </BrowserRouter> ``` BrowserRouter is the react-router-dom component that is necessary to create the routing. React.Fragment wraps the children without adding an extra DOM node (instead of the previously needed wrapper divs). Switch makes sure that once a route matches, the remaining routes are not checked: rendering stops at the first match. 
Route defines the various paths to the components/pages (I haven't created all of them yet). Then, for now, I just tested whether the links on the landing page were working: <h3>Landing.jsx</h3> ```jsx class Landing extends Component { render() { console.log(this.props); return ( <React.Fragment> <h1>Landing page</h1> <Link to='/about'>About</Link> <Link to='/contact'>Contact</Link> </React.Fragment> ) } } ``` The routing props are passed from the Layout to its children, in this case the Landing component. In fact, since I added a console.log of the props, you can see the history, location and match props, which can be used later to run custom logic on them. Eventually I will add Suspense for lazy loading, but for now it would be pointless, as it would only make the components take longer to load. I think I'll create another Container component, which will function as the main page outside of the landing page and will render the child components. This way I can separate the root path from the others in a clean way. <h3>Bonus(es)</h3> <strong>1 - </strong>I started the project with create-react-app... but it was installing only the node modules and package.json, and nothing else. What was wrong? I found out that having create-react-app installed globally was breaking the flow. So I had to: <li>npm uninstall -g create-react-app</li> <li>npm cache clean --force</li> After that I was able to run ```npx create-react-app my-app``` without issues. <strong>2 - </strong>What's the difference between ```npx create-react-app my-app``` and ```npm install create-react-app -g```? npx is npm's package runner (bundled with npm since version 5.2). So npx create-react-app my-app says: fetch the create-react-app (CRA) tool and run it to scaffold my new app. A global install is no longer needed; it used to be the way to run the command directly from the terminal, like create-react-app my-app. 
Also, you can check out the CRA templates by <a href="https://twitter.com/fragileglass" target="__blank">Derek Shanks</a>, which automatically add <a href="https://www.npmjs.com/~dbshanks" target="__blank">react-router-dom and Sass</a>. <strong>3 - </strong>I was wondering: should I create my React files with the <strong>JS</strong> or <strong>JSX</strong> extension? <a href="https://stackoverflow.com/questions/46169472/reactjs-js-vs-jsx" target="__blank">Here's the discussion</a> about the topic... Given that, I decided to opt for .jsx (at least I'll get a nice icon in VSCode ;) That's all for today! Thanks for reading, and if you liked it, please share it. Original post <a href="https://buaiscia.github.io/posts/website-from-scratch-react-firebase_2/" target="__blank">here on my blog</a>. Alex
buaiscia
287,992
How to choose an online course
Table of contents: What are Free Learning Tutorials?, go to canonical section What are MOOCs?, go...
0
2020-05-04T08:40:09
https://www.reactgraphql.academy/blog/how-to-choose-an-online-course
remotelearning, react, javascript
--- title: How to choose an online course published: true date: 2020-03-24 23:00:00 UTC tags: remote learning, react js, javascript canonical_url: https://www.reactgraphql.academy/blog/how-to-choose-an-online-course --- Table of contents: - What are Free Learning Tutorials?, [go to canonical section](https://www.reactgraphql.academy/blog/how-to-choose-an-online-course#what_are_free_learning_tutorials) - What are MOOCs?, [go to canonical section](https://www.reactgraphql.academy/blog/how-to-choose-an-online-course#what_are_moocs) - What Is Remote Training?, [go to canonical section](https://www.reactgraphql.academy/blog/how-to-choose-an-online-course#what_is_remote_training) - Make an informed decision, [go to canonical section](https://www.reactgraphql.academy/blog/how-to-choose-an-online-course#make_an_informed_decision) - Sign Up for Remote Training Today, [go to canonical section](https://www.reactgraphql.academy/blog/how-to-choose-an-online-course#sign_up_for_remote_training_today) Following the success of our Remote Training programme early this year, we wanted to clarify the difference between our training, free online tutorials and MOOCs. To make sure you get the most out of any online course, we think it’s important to understand what the options are and where the value lies in each. This will help you make an informed decision about which one works best for you. So today we’ll be looking at the difference between remote training, free learning tutorials and MOOCs. First up, let’s look at free learning tutorials. ## What are Free Learning Tutorials? Free learning tutorials are a form of online training that anyone can access at any time. They usually consist of blog posts describing how to do something, and can even be accompanied by videos that take you on a generic tour of your subject or explain a specific feature of a topic. 
These tutorials can act as a nice introduction to a subject like GraphQL and can get you up to speed on a topic by providing the basic knowledge you'll need in a new area. Furthermore, because these tutorials are readily available, you can use them as a refresher for topics you haven't touched on for a while. Free learning tutorials often lack structured content, though. You might find some introductions to different subjects or an explanation of a specific topic, but you'll rarely find a series of tutorials following an order that allows learning a subject from beginning to end. If you want to learn an advanced subject, you need to know certain things beforehand. For example, in order to learn design systems in React you need to know what component-based CSS is, and it's easier to understand what that is if you know what components are. Therefore it makes sense to learn [React basics](https://www.reactgraphql.academy/react/curriculum?tab=React%20Fundamentals%20Part-time&section=session1) first, then [CSS-in-JS](https://www.reactgraphql.academy/react/curriculum/?tab=Advanced%20React%20Part-time&section=session6), and finally [design systems](https://www.reactgraphql.academy/react/curriculum?tab=Advanced%20React%20Part-time&section=session7), following all the steps in-between. Another pain point of free online tutorials is that students can't engage with the material or their tutors. There aren't tasks to complete to find out how well you've learned something, and while some blogs do let you post comments and ask questions, this isn't always the case. Free online tutorials also don't offer feedback. Feedback is about answering the questions that learners don't know they should ask because they aren't aware of them.
In conclusion, free learning resources are: - Good as an introduction to a new topic - Readily available - Free to access - Perfect for refreshing knowledge - Good for exploring a specific topic On the other hand, they provide: - Little structured content - Little to no feedback or collaboration - Few credentials to earn - No networking opportunities ## What are MOOCs? A MOOC (Massive Open Online Course) is an online-only course that prides itself on its very wide reach and accessibility. Thousands of people can enjoy these lessons, learning the basic skills they need to code in a new language. Unlike free online tutorials, MOOCs offer more engagement between students and tutors. For instance, some MOOCs offer group chats and even Q&A functionality. Not only do MOOCs offer interactive elements, but they also offer structured content. The course allows students to learn a new topic in a logical way, where each new module or lesson builds upon the knowledge learned in the previous modules. Some courses offer certificates for completion, with end-of-course assessments to ensure that you've learned the topic and new skill. This is a great opportunity to show management that you've completed a training course. MOOCs are also a great way to provide the foundations for further knowledge development, such as through remote training. But there are some limitations to MOOCs. Firstly, they offer limited feedback because MOOC tutors are unable to identify knowledge areas in which students are lacking. Similarly, there is little collaboration, with students unable to explain to their peers what they think they have learned. Finally, you have no accountability when you learn in a MOOC or free online tutorial. If you don't do the work, no one will hold you accountable and your learning will stagnate as a result.
In conclusion, MOOCs: - Offer a solid introduction to a topic - Present structured content - Offer some credentials to earn - Offer interactive elements, like Q&As and Group Chat However, they do have a few limitations, including: - Limited feedback - Limited collaboration - No accountability - Few networking opportunities ## What Is Remote Training? We can easily think of Remote Training as a normal in-person lecture, only done online using video-call tools. Remote training is different from online tutorials and MOOCs, which are aimed at thousands of students. Instead, this training is live and often involves a limited number of students. The limited number means tutors can interact with students in real-time, so they can focus on areas where the latest cohort of students struggle most. Likewise, students can get the feedback they need to thrive. It is also an environment in which you feel compelled to learn and progress. Working with others and having a closer relationship with your tutor creates the peer pressure students need to put the time in and do the work. The added benefit is that when your peers see your hard work pay off, they'll remember - at a later point, those new connections you made could land you a dream job. While in an online course the students must have the willpower to do the exercises, in remote training the group exercises make learning fun and naturally pull the student into active participation. Remote training also offers practice tests that ensure you've learned the new skill adequately. On completion of these tests, students earn a credential to certify their new skill. Free online tutorials and MOOCs provide a fantastic starting point for developers learning a new topic, but with remote training, they can cement and advance their new skills. In conclusion, what can you get from Remote Training?
- Gain the feedback you need to grow and learn - Work in a collaborative environment to develop skills - Earn credentials with practical tests - Benefit from a structured course designed to strengthen learning - Learn all the latest features with up-to-date content - Engaging and fun training with group exercises - Network with peers On the other hand, Remote Training is also: - More expensive - Demanding of commitment, as you need to adapt to a certain schedule and show up at a given time - Not as flexible in terms of timing: you must wait for the next cohort to start. ## Make an informed decision What's your goal? Are you looking for a specific tutorial because you forgot part of a procedure, or because you need to round out your knowledge? Or are you looking for online lectures that are easily accessible and available anytime? If you're looking for active, learning-by-doing training instead, Remote Training is what we suggest: live and interactive courses accessible from your home or office. You won't need to move or travel and you'll still get all the advantages of in-person training. ## Sign Up for Remote Training Today At React GraphQL Academy, we provide professional and experienced React and GraphQL remote training for advanced developers looking to expand their skill set in a new area. If you're looking to further your knowledge beyond that of free online tutorials and MOOCs, our remote training is for you. To find out more about the training and the curriculum, head to our dedicated [GraphQL Remote Training page](https://reactgraphql.academy/graphql/training/part-time-course/online/) or [React Remote Training page](https://reactgraphql.academy/react/training/part-time/remote/). Alternatively, you can get in touch with our expert team to discuss the training in further detail, find out about pricing, and learn what you'll get out of the part-time dev training.
But to get a real idea of what our training looks like, join us for a trial session before the training kicks off. We run a 3-hour trial of the React training frequently; check this page to see when the next one is: [https://reactgraphql.academy/react/training/react-trial/](https://reactgraphql.academy/react/training/react-trial/). Join us, and if you like what you learn you can secure your spot for the full training later.
reactgraphqlacademy
287,998
Trusting Your Gut Feeling To Take Action
Introduction I think in times like this, we should really trust our own gut on what we sho...
0
2020-03-25T15:46:34
https://www.maxongzb.com/trusting-your-gut-feeling-to-take-action-reading-time-4-mins/
startup, productivity, beginners
# Introduction I think in times like this, we should really trust our own **gut** on what we should do, instead of putting off a decision and chucking it aside for another day to think about, until it is too late to take action and we suffer the consequences of inaction. Part of it has been me growing up to be **resourceful** and find a path, even when the path I was embarking on was unproven, with little to no references to draw from. Like deciding at the age of 11 to work towards a technical diploma from a local polytechnic, despite not being academically inclined, when graduating with a degree was beyond my wildest dreams. Wanting to be part of the startup ecosystem in Singapore as a developer. Taking the leap of faith to start a technical blog despite knowing I'm bad at writing, coming from a family that has spoken Chinese and Chinese dialects to communicate with each other since I was born. # Avoiding Analysis Paralysis I believe in the need to gather & analyse information to make a decision, and in standing on the shoulders of giants by borrowing mental models from a diverse pool of people, in order to reflect, decide, plan and take action. The last thing that you really want is to be perpetually gathering information and being fearful of making decisions, because you feel you need 100% of the information before deciding on a course of action. This is why agile practices like Scrum exist, and why the common startup mantra is "move fast & break things": so that we gather rapid feedback and challenge our assumptions while learning by doing, and don't make costly and irreversible mistakes. # What is The OODA Loop? The [OODA loop][1] is really a deep topic at the level of strategy, decision making, organisational adaptability and learning. It was created by an ex-fighter pilot and military theorist called [John Boyd][2], who was involved in the creation of the iconic F15 and F16 fighter jets and tested the conventional wisdom of his time.
Namely, the debate between single-engine vs multi-engine fighter jets, in terms of fuel consumption and the distance the fighter jet could cover. If John Boyd were still alive, he would be giving a lecture that takes about 7 hours to cover the whole concept of OODA, which he gave tirelessly to every officer he could get his hands on in the US military, in order to incorporate his methodology into their decision-making processes and organisational adaptive learning. ## The OODA Loop Breakdown: * **Observe** - To observe the current environment, competitors and other industries, and continuously gather information and case studies to help in the later stages of the loop. * **Orient** - A self-reflection process to make sense of the information from the **observe** stage, in a quest to find **mismatches** in the current environment that you can act upon. Aka finding the third door, or the weakness in the armour, and running with it. * **Decide** - To make a decision and create a course of action to take advantage of the mismatch in the environment. * **Action** - To execute the decision and loop through the OODA again, getting ready for the next course of action to take advantage of the mismatch. The key to the OODA loop is that you have to make the loop go faster, whether at an individual or organisational level, so that the gap between you and present or future competitors becomes wider as you take advantage of the mismatches in the present environment in chaotic times.
We can take advantage of the **mismatch** in the present to **thrive** in this crisis, instead of merely **surviving** it. If you like this article, do **sign up** for my [Adventurer's Newsletter](https://maxongzb.activehosted.com/f/1), which contains interesting content I stumble across over the week in the areas of **Python**, **Web Development** and **Startups**. You can also **follow** me to get the **latest** updates of my articles on **Dev**. The original post was on [Trusting Your Gut Feeling To Take Action - Reading Time: 4 Mins](https://www.maxongzb.com/trusting-your-gut-feeling-to-take-action-reading-time-4-mins/) and cover image by [Muzammil Soorma on Unsplash](https://unsplash.com/photos/R11bppS4q8o) # Reference * [The Science of Analysis Paralysis: How Overthinking Kills Your Productivity & What You Can Do About It][3] * [How To Make The Star Employees You Need][4] * [John Boyd: The Fighter Pilot Who Changed the Art of War][2] * [Go with your Gut: Trusting Your Intuition][5] * [The Overthinker’s Guide for Taking Action: A Complete Guide][6] * [The Innovator's Dilemma][7] * [OODA Loop][8] [1]: https://taylorpearson.me/ooda-loop/ [2]: https://www.amazon.com/Boyd-Fighter-Pilot-Who-Changed/dp/0316796883 [3]: https://doist.com/blog/analysis-paralysis-productivity/ [4]: https://mastersofscale.com/marissa-mayer-how-to-make-the-star-employees-you-need/ [5]: https://www.inc.com/geil-browning/go-with-your-gut-trusting-your-intuition.html [6]: https://startupbros.com/overthinkers-guide-taking-action-complete-guide/ [7]: https://www.amazon.com/Innovators-Dilemma-Technologies-Cause-Great/dp/1565114159 [8]: https://taylorpearson.me/ooda-loop/#Orient
steelwolf180
288,009
TypeScript Function Overloading
Being able to handle the ability to call a single function with variable inputs is really nice. Somet...
0
2020-03-25T16:03:32
https://dev.to/thinkster/typescript-function-overloading-k9i
typescript, webdev, productivity, programming
Being able to handle the ability to call a single function with variable inputs is really nice. Sometimes we need a function to do different things based on the parameters given. When doing this be sure to consider code smells like I detailed in my last email. Let's look at a simple example we're all probably familiar with: optional parameters. ![image](https://cdn-images-1.medium.com/max/800/0*KvSgVF9vq4k3lSJ2?) Here we see three uses of the slice method. Slice returns a portion of an array. It has two parameters, each optional. The first is the index to start from, the second is the index to end at. Straightforward. We can also use default parameters. In the above, you can consider the first parameter to have a default value of 0. (slice actually just has that first parameter marked as optional, but the effect is the same) A reasonable signature for this may be the following: ![image](https://cdn-images-1.medium.com/max/800/0*ba_WZGuIEqurVeC0?) This gives us all kinds of flexibility in creating signatures in TypeScript that give us good intellisense and clues as to how to use the method. Just look at the intellisense that Visual Studio code gives us for the array.slice method. ![image](https://cdn-images-1.medium.com/max/800/0*sPGgD5lqyH9re2ll?) This is great, but there are some scenarios that optional and default parameters just don't cover. Let's look at a different scenario. What if we are writing a method that creates database connections? We have two scenarios. First, we just receive a connection string with an optional timeout. Second, we get an IP address, port number, username, password, and optional timeout. Our scenarios look like this: ![image](https://cdn-images-1.medium.com/max/800/0*zBX1H2SAkc6wIi6g?) So how can we solve this? **Option 1:** We could try to use optional and default parameters here. Something like this: ![image](https://cdn-images-1.medium.com/max/800/0*l_k5OwBpMfJHC2sb?) But WOW look how messy and even misleading that is. 
Not expressive at all. **Option 2:** different methods ![image](https://cdn-images-1.medium.com/max/800/0*DSMzNvgIxqMjzUya?) That's ok, but it really feels clunky. If we have more scenarios, it gets worse. **Option 3:** Parameter Object. This is commonly used in JavaScript for just this scenario. ![image](https://cdn-images-1.medium.com/max/800/0*Re7ZC7FfJvU2SgEr?) This works just fine, but it doesn't communicate through the signature what the parameters are or what the various configurations look like. You can, with TypeScript, give the parameter object a defined shape, but it's pretty complex based on our possible configurations, and is just less expressive. I personally think parameter objects can be ok, but they are possibly overused. Destructuring with an interface is kind of a better parameter object. You can check out a blog on that here. Sometimes this method fits the bill, but let's look at another option: **Function Overloading** With function overloading, we can create different signatures of a function, making each expressive, giving us IntelliSense, and ultimately a single method handles all the calls. Let's look at the basics of this, and then we'll solve our problem. Here's the basic syntax in TypeScript: ![image](https://cdn-images-1.medium.com/max/800/0*nSQtvnlt4JRLFhrg?) The way this works is we give each signature without a body, and then we provide a method that is a superset of ALL signatures and actually put our function body in there. There can only be ONE implementation. It's the last one. We can't have 2 different implementations. Inside our method body we have to branch based on the parameters, like so: ![image](https://cdn-images-1.medium.com/max/800/0*lR5qpweFbNSAVWTN?) The signatures show up in TypeScript intellisense, but the third signature, the ACTUAL implementation, doesn't actually show up. So TypeScript would allow only strings and numbers, not other types.
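In case the screenshots above don't load, the basic overloading pattern described here looks roughly like this — a minimal sketch using a made-up `double` function, not the exact code from the images:

```typescript
// Overload signatures: these are what callers (and IntelliSense) see.
function double(value: string): string;
function double(value: number): number;
// The implementation signature is a superset of ALL the overloads above.
// It does NOT show up in IntelliSense, so callers may only pass
// strings and numbers, not other types.
function double(value: string | number): string | number {
  // Branch on the runtime type to tell which overload was used.
  if (typeof value === "string") {
    return value + value;
  }
  return value * 2;
}

console.log(double("ab")); // "abab" (string overload)
console.log(double(21));   // 42 (number overload)
```

Calling something like `double(true)` would be a compile-time error, because only the overload signatures — not the implementation signature — are callable from the outside.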
Now that we've seen how to do this, let's solve our problem above: ![image](https://cdn-images-1.medium.com/max/800/0*fXAmOAT4xo7nPxp8?) Here's the correct syntax. Notice that the final signature is the actual implementation, and it's a superset of the first two. Since the types line up, we're able to use specific types. Often times you use the any type or a union type when the types for a specific parameter vary. Notice that I named the first parameter just p1. This is because it's either a connection string OR an IP Address. Again, this won't show up in the IntelliSense when I try to call this method. So in the implementation, I'll determine which variation of my signature was called, and take the appropriate action. Possibly even assigning p1 to either a local ipAddress or connectionString variable. That really just depends on the implementation. A start of a possible implementation for the actual method would be this: ![image](https://cdn-images-1.medium.com/max/800/0*qpNk7I2Pcdwuwtf6?) This gives us the basic branch based on which signature the user selected. Again, the great thing here is that TypeScript does the checking for us at compile time. So if the 3rd argument is a string, then we know which signature the caller used. Note: don't use the typical !!username method to determine if something was passed in. Use the type as your branching criteria. This method isn't the solution to every scenario. As we get multiple scenarios that are very complex, parameter objects make more sense. Also, sadly we can't use default values in overloads. But getting comfortable with overloading functions just gives you another simple way to handle a somewhat common scenario in a simple fashion. I hope you find it useful. 
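Since the article's code lives in images, here is a rough, self-contained sketch of what the finished `createConnection` overloads could look like as plain text. The `DbConnection` shape, its field names, and the default timeout of 30 are placeholders invented for illustration, not details from the original:

```typescript
// Placeholder result type, just for illustration.
interface DbConnection {
  target: string;
  timeout: number;
}

// Scenario 1: a connection string plus an optional timeout.
function createConnection(connectionString: string, timeout?: number): DbConnection;
// Scenario 2: IP address, port, username, password, optional timeout.
function createConnection(
  ipAddress: string,
  port: number,
  username: string,
  password: string,
  timeout?: number
): DbConnection;
// Implementation: a superset of both signatures. `p1` is either a
// connection string OR an IP address, so it gets a neutral name.
function createConnection(
  p1: string,
  p2?: number,
  username?: string,
  password?: string, // unused in this sketch; a real version would authenticate
  timeout: number = 30
): DbConnection {
  if (typeof p2 === "number" && username !== undefined) {
    // Second signature: build the target from IP address and port.
    return { target: p1 + ":" + p2, timeout: timeout };
  }
  // First signature: p1 is the connection string, p2 (if given) the timeout.
  return { target: p1, timeout: p2 !== undefined ? p2 : timeout };
}

const byString = createConnection("Server=db;Database=app", 10);
console.log(byString.timeout); // 10

const byParts = createConnection("10.0.0.1", 5432, "admin", "secret");
console.log(byParts.target);   // "10.0.0.1:5432"
```

Note that, as mentioned above, the overload signatures themselves can't carry default values — the `timeout: number = 30` default lives only on the implementation signature, which callers never see.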
Check out our [100 Algorithms challenge](https://thinkster.io/tutorials/100-algorithms-challenge?utm_source=devto&utm_medium=blog&utm_term=typescriptfunctionoverloading&utm_content=&utm_campaign=blog) and [all our courses](https://thinkster.io?utm_source=devto&utm_medium=blog&utm_term=typescriptfunctionoverloading&utm_content=&utm_campaign=blog) on JavaScript, Node, React, Angular, Vue, Docker, etc. Happy Coding! Enjoy this discussion? Sign up for our newsletter [here](https://thinkster.io/?previewmodal=signup?utm_source=devto&utm_medium=blog&utm_term=typescriptfunctionoverloading&utm_content=&utm_campaign=blog). Visit Us: [thinkster.io](https://thinkster.io?utm_source=devto&utm_medium=blog&utm_term=typescriptfunctionoverloading&utm_content=&utm_campaign=blog) | Facebook: @gothinkster | Twitter: @GoThinkster
josepheames
288,444
Add Extra Custom Data To Ninja Forms Submissions
Automatically add extra data to Ninja Forms Submissions with 1 easy step Capturing all the...
0
2020-03-26T22:15:07
https://nimblewebdeveloper.com/blog/add-extra-data-to-ninja-forms-submissions
wordpress, php, webdev
--- title: Add Extra Custom Data To Ninja Forms Submissions published: true date: 2019-10-21 13:00:00 UTC tags: wordpress, php, webdev canonical_url: https://nimblewebdeveloper.com/blog/add-extra-data-to-ninja-forms-submissions --- ## Automatically add extra data to Ninja Forms Submissions with 1 easy step Capturing all the data you need in form submissions can be critical to running successful campaigns and gathering all the data you need. I use Ninja Forms for a lot of client websites because it's quick and easy to spin up, and pretty reliable. One complaint I often have about Ninja Forms, though, is that it doesn't really play nice with developers. (I think they've done this on purpose in order to sell more add-ons. Hey fair play to them). Something I often need is the ability to collect extra data on a Ninja Forms submission. You have some ability to collect extra data by using hidden fields and capturing page url or user id etc (see [this post](https://ninjaforms.com/blog/user-data-wordpress-forms/) from Ninja Forms). That's all well and good (and works fine). But sometimes I need to add extra data to all or a great number of forms. Or in this case, the client doesn't want to do it every time they create a form. ## How it works The solution is actually quite simple, and most of the functionality _is_ actually provided by Ninja Forms! It's just poorly documented. **What we're going to do:** - Add a Wordpress hook to listen for a Ninja Form being displayed - Inject a Javascript that listens for the form submit action - Add some 'extra' data to the Ninja Form - Sit back, open a beer - we're done. Yep it's that easy! Ninja Forms has the ability to add 'extra' data to a form submission which it stores as **post\_meta** on the submission. Currently Ninja Forms uses this to store calculations (and I'm assuming various add-ons store other data here too). ## Enough! 
Just show me the code!

Ok, I'm going to wrap this into a nice little class so we don't pollute the global namespace. If you were an uncultured swine, you could do this whole thing in about 5 lines. But we don't roll like that!

```php
<?php
//Setup our class
if (!class_exists('Nimble_NF_ExtraData')) {
    class Nimble_NF_ExtraData {
        var $form_ids = []; //Store the form IDs we want to modify
        var $script_added = false;

        public function __construct() {
            //This is the earliest available hook that fires when
            //a ninja form is being displayed.
            //We could add our script on all pages, but this
            //way we don't add it if it's not needed!
            add_action('ninja_forms_before_form_display', array($this, 'addHooks'));
        }

        public function addHooks($form_id) {
            //You could test the form id here if you want
            $this->form_ids[] = $form_id;

            //Make sure we only add the script once
            if (!$this->script_added) {
                add_action('wp_footer', array($this, 'add_extra_to_form'), 99);
                $this->script_added = true;
            }
        }

        public function add_extra_to_form() {
            ?>
            <script>
            (function() {
                var form_ids = [<?php echo join(", ", $this->form_ids); ?>];
                nfRadio.channel("forms").on("before:submit", function(e) {
                    //Make sure the form being submitted is one we want to modify
                    if (form_ids.indexOf(+e.id) === -1) return;

                    //Get any existing extra data
                    var extra = e.get('extra');

                    //Merge in new extra data
                    //EG the post ID
                    extra.post_id = <?php the_ID(); ?>;

                    //Set the extra data
                    e.set('extra', extra);
                });
            })();
            </script>
            <?php
        }
    }
}

new Nimble_NF_ExtraData();
```

That's all there is to it: your extra data will be added to the submission as post\_meta. In this example you would get a post meta entry with the key **post\_id**. You can apply this in a Code Snippet, stick it in your functions file, or build out a plugin!
sebtoombs
288,015
Ideas For Website
As software engineers most of us have personal websites but I got my personal website a few months ag...
0
2020-03-25T16:10:58
https://dev.to/nyamador/ideas-for-website-36ij
html, javascript, dns
As software engineers, most of us have personal websites. I got mine, [Nyamador.me](https://nyamador.me), a few months ago, but I feel like I'm under-utilizing it. What are you using your domains for? Ideas are welcome.
nyamador
288,122
Are there virtual/remote JavaScript meetups?
Now that we can't meet in person anymore I'm wondering whether there are some virtual meetups related...
0
2020-03-25T18:16:41
https://dev.to/joshx/are-there-virtual-remote-javascript-meetups-5863
javascript, meetup, remote
Now that we can't meet in person anymore I'm wondering whether there are some virtual meetups related to JavaScript?
joshx
288,140
Community culture in a time of crisis
Many of us working in software development take working from home for granted. For a large chunk of u...
0
2020-03-25T18:51:28
https://mgkeen.com/blog/community-culture-in-a-crisis
culture, leadership, community
Many of us working in software development take working from home for granted. For a large chunk of us, adapting to the current crisis has been relatively simple from a “how do I work remote” point of view. However this isn’t a simple working-from-home scenario, and it comes with many new challenges that we’ve never had to deal with before. ## A new environment Quite simply, working from home alone is not the same as working from home with a family. Without being able to go outside. For a month. At least. I don’t have any children, but even working in the same room as my partner all day has been an eye-opening experience. Apparently I’m with a “we can think blue sky on this” kind of person. It also impacts my usual work-from-home routine. Her job requires her to be on the phone with clients at a moment’s notice, and so watching shit TV to relax at lunch time isn’t an option (I know, I’ve got it REALLY bad). And then there are those with young children. One of our team has a two-year-old daughter. In our team’s daily social meeting she seems to delight in making as much noise as possible. We all find it quite cute and entertaining, but I can’t begin to imagine what working with that _all day_ is like. ## Mental health is going to get worse Some countries have been in lockdown for a while, but for many of us this is just beginning. Today is the first official day of lockdown in NZ, though many have been practising it all week at least. We’ve likely got a long road of this ahead of us. A lot more stress and anxiety is coming. Uncertainty, sick loved ones you can’t visit, being stuck a long way from friends and family, cabin fever — the list goes on. It will be different sources for all of us, but it will impact all of us. ## What can we do to help? For many of us, our teams at work will be the main link to the outside world. As such we need to treat each other less like a team, and more like a community.
Teams are often about achieving a goal, and their culture often focuses on how best to achieve that goal together. For most of us the workplace is usually quite separate from the rest of our lives, and so this approach can work quite well. But now our home lives and our work lives have been smooshed together with little warning, and that separation isn’t so clear. Communities are all about supporting each other through life. I was already a strong believer that companies should do as much as they can to support their employees outside of work, but now it’s vital. Not just for employers either, but as a core principle of our team culture. So what can we do? **Be flexible**. Live in a different time zone to your family and need to take some time during the day to talk to them? Need to play with your daughter for an hour or two to keep her in a good mental state? Need to get outside whilst it’s not raining? Go ahead. Many of us work in jobs where the exact hours we work don’t really matter that much. Sure we need some overlap to discuss things, but in general a culture of good asynchronous communication can allow us to work very flexibly. **Be kind**. Kid screaming in the back of a call? Cat jumping in front of the camera? This kind of stuff is going to happen a lot over the coming months. We need to make sure it isn’t a source of stress. We need to actively reassure each other that these things aren’t a problem, just a fact of life right now. **Be caring**. Quite simply, ask each other how it’s going. Ask what you can do to help. Take a load off others when you can. This is the most important part we can play in helping our coworkers maintain a healthy mental state. In short, **be a community**.
mgkeen
288,141
Laravel Signature Pad Example Tutorial
Hey Artisan Hope you are doing well. In this tutorial i am going to show you laravel 7 signature pad...
0
2020-03-25T18:52:53
https://www.codechief.org/article/laravel-signature-pad-example-tutorial-from-scratch
laravel, signature, pad
Hey Artisan, hope you are doing well. In this tutorial I am going to show you a Laravel 7 signature pad example: how we can create a signature pad and save the signature into the public directory. https://www.codechief.org/article/laravel-signature-pad-example-tutorial-from-scratch
hafizpustice05
288,258
Why I detest React Hooks
React Hooks has been the new hotness ever since it was introduced. I have heard many people discuss a...
0
2020-03-25T23:54:03
https://dev.to/allentv/why-i-detest-react-hooks-20da
react, hooks, javascript
React Hooks has been the new hotness ever since it was introduced. I have heard many people discuss how hooks help them write less code with the same functionality, and how hooks are more performant since everything is now functions. There have also been many articles published online touting that we should ditch classes for functions altogether, as fewer lines of code (LOC) is always better. What gets me is how folks think brevity is always better and that trying to be clever with their code is somehow the best way to write code. I disagree on both fronts. Brevity should not come at the expense of clarity, as code is written for humans, not machines. Any code you write today will be encountered by you or someone else in your team again in the next 6 months. Being able to still understand the context behind that block of code and make changes confidently is what well-written code is all about. I always prefer to be explicit rather than implicit. And React Hooks seems to me like a clever hack. Having converted multiple class-based React components to functional components using Hooks, I feel like the function is bloated and violates the Single Responsibility Principle (SRP). The hook-related code seems to float around in the function definition, separating the main section of how the component will be rendered from the function signature. Compare this to a class-based React component, where every section is clearly separated into functions that are named after what they represent in the React lifecycle or what action they perform as event handlers. Compare that to the `useEffect` hook, which tries to consolidate the mount, update and unmount processes into one. No UI engineer would be confused when they implement lifecycle methods in a class, but they would certainly be stumped when they first implement this hook and see the code within `useEffect` being invoked 3 times.
Also, trying to adopt the Redux patterns within React seems like moving from being a library to being a framework. React is a very good UI library and gives you the freedom to use whatever works in other areas. Pushing towards the Redux pattern of using reducers and dispatchers is a bad move in my book. I am not sure if that is because the creator of Redux is now part of the React team. This reminds me of how the React team was pushing for using mixins in the beginning, even when a lot of folks had been burnt using them in other languages or in Javascript. The React team has since denounced the use of mixins. I hope React will stay an excellent UI library that is the go-to standard for high-performance UIs and stop trying to be a framework. I would love to see more investment in tooling, especially create-react-app, making it easier to build UIs by standardizing some of the conflicting conventions that developers run into when they start React projects. This is an aspect that I like about the Go programming language, where they have published an article about writing idiomatic Go code to make sure folks follow the same conventions. The tools that Go ships with take out most of the friction that teams usually have, making any open-source Go code look very much the same. I look forward to seeing more improvements in the library that let developers focus more on implementing business features as fast as possible and reduce friction in writing tests by generating test code for the most common scenarios like clicking a button, shallow rendering, etc.
allentv
288,273
[Git] The Slightly Tricky Behavior of Rebase
git rebase across multiple branches A---B---C---D master \ X---Y---Z dev \...
0
2020-03-26T00:17:11
https://dev.to/dyoshimitsu/git-43o8
git
## git rebase across multiple branches ```text A---B---C---D master \ X---Y---Z dev \ P---Q dev2 ``` Move the dev branch onto the tip of the master branch: ```shell $ git rebase master dev ``` ```text X'--Y'--Z' dev / A---B---C---D master \ X---Y---P---Q dev2 ``` > The `'` marks denote new commits created by the rebase, which rewrote the existing ones. The dev2 branch now appears to have been created from commit B. ## git rebase on a branch containing a merge ```text P---M---N dev / / X---Y---Z / A---B---C---D master ``` Move the dev branch onto the tip of the master branch: ```shell $ git rebase master dev ``` ```text A---B---C---D master \ X---Y---Z---P---N dev ``` The merge commit M has disappeared. The moral of the story: rebase is hard. (What I really want to argue is: how about using merge instead?)
dyoshimitsu
288,362
Hack on Docker
I tried out Facebook's Hack. (Hello...
0
2020-03-26T02:11:39
https://dev.to/dongri/docker-hack-2bde
docker, hack
--- title: Hack on Docker published: true description: tags: Docker, hack --- I tried out Facebook's Hack. (Hello World only) https://github.com/facebook/hhvm/wiki#installing-pre-built-packages-for-hhvm I didn't want to dirty my own server, so I ran it in Docker. ``` $ vim Dockerfile ``` ```dockerfile FROM ubuntu RUN apt-get update RUN apt-get install -y python-software-properties RUN apt-get install -y wget RUN add-apt-repository ppa:mapnik/boost RUN wget -O - http://dl.hhvm.com/conf/hhvm.gpg.key | sudo apt-key add - RUN echo deb http://dl.hhvm.com/ubuntu precise main | sudo tee /etc/apt/sources.list.d/hhvm.list RUN apt-get update RUN apt-get install -y hhvm ``` ``` $ docker build -t hack . ..... $ docker images $ docker run -i -t hack bash root@c5b48b4e3988:/# vi helloworld.hh ``` ```php <?hh echo "Hello World\n"; ``` ``` root@c5b48b4e3988:/# hhvm helloworld.hh Hello World root@c5b48b4e3988:/# ``` Where I got stuck! ```php <?hh echo "Hello World\n"; ?> ``` HipHop Fatal error: syntax error, unexpected T_HH_ERROR, expecting $end in /hellworld.hh on line 2 Writing the closing ?> tag produces the error above
dongri
288,390
How to be Pythonic? Design a Query Language in Python
I gave a talk at PyAmsterdam today and it was a lovely community. I get the chance to answer some que...
0
2020-03-26T11:49:04
https://cheuk.dev/2020/03/20/how-to-be-pythonic/
talk, python, querylanguage, terminusdb
--- title: How to be Pythonic? Design a Query Language in Python published: true date: 2020-03-20 00:00:00 UTC tags: Talk,Python,Query Language,TerminusDB canonical_url: https://cheuk.dev/2020/03/20/how-to-be-pythonic/ cover_image: https://dev-to-uploads.s3.amazonaws.com/i/793u48f9tqucjg8hzdyt.png --- I gave a talk at [PyAmsterdam](https://py.amsterdam/2020/03/25/virtual-pyamsterdam-from-home-stayathome.html) today and it was a lovely community. I got the chance to answer some questions that had been puzzling me for a while. I asked people to vote on [DirectPoll](https://directpoll.com/) (first time trying it) so I would know what the community thinks. ## Is Pythonic a thing? It is a question that I have been pondering since I was a naive Python data scientist. "Why can't I just do it in a for loop?" went through my mind all the time. Why do we have to follow certain conventions in coding? Is Pythonic a thing, or just peer pressure? ![Poll of Is Pythonic a Thing](https://cheuk.dev/assets/images/is_pythonic_a_thing.png) Almost 90% of you think that it really is a thing. (35 votes) ## Who likes SQL? For me, I am not a fan of SQL. Back in my first data science job, I was furious about writing thousands of lines of SQL just to get some aggregated results. Joining tables is not fun, as mistakes can be made easily. As we are designing a new query language in TerminusDB, I want to know what people think about SQL. ![Poll of Who like SQL](https://cheuk.dev/assets/images/who_like_sql.png) I am surprised that 70% of you like SQL! Hmmmmm… (38 votes) ## Which one do you prefer? During the time I was translating WOQLjs to [WOQLpy](http://blog.terminusdb.com/2020/01/20/design-a-query-language-client-for-pythonistas-and-data-scientists/) I wondered how I should make query building more "Pythonic". What would Pythonistas prefer? 
Chainable calls like `WOQLQuery().doctype("journey").label("Journey")` or Pandas DataFrame style, multi-parameter calls like `WOQLQuery().doctype(id="journey", label="Journey")`. (I failed to show the result in the talk, so here you go!) ![Poll of Which one do you prefer](https://cheuk.dev/assets/images/which_one_do_you_prefer.png) Since Pandas, the most popular data manipulation library in Python, uses multi-parameter calls, I am not surprised that 80% of you would prefer that. (25 votes) * * * If you have missed the talk, you can now watch it [here](/videos/hl7xl7kurkg/). If you want to catch me streaming live, follow me on [Twitch](https://www.twitch.tv/cheukting_ho).
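For readers curious how both styles can coexist, here is a minimal, hypothetical builder sketch (the `Query` class and its fields are made up for illustration; this is not the real WOQLpy API): returning `self` is what enables chaining, while keyword arguments enable the Pandas-style multi-parameter call.

```python
class Query:
    """Toy query builder supporting both call styles."""

    def __init__(self):
        self.parts = {}

    def doctype(self, id=None, label=None):
        # multi-parameter style: Query().doctype(id="journey", label="Journey")
        self.parts["id"] = id
        if label is not None:
            self.parts["label"] = label
        return self  # returning self makes the chainable style possible

    def label(self, label):
        self.parts["label"] = label
        return self

chained = Query().doctype("journey").label("Journey")
multi = Query().doctype(id="journey", label="Journey")
assert chained.parts == multi.parts == {"id": "journey", "label": "Journey"}
```

Both calls build the same query description, so the choice between the two styles is purely about ergonomics.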
cheukting_ho
288,400
Security tips for Djangonauts
Secure your Django application and be a star
0
2020-03-26T03:07:45
https://dev.to/hayleydenb/security-tips-for-djangonauts-3mbf
security, python, django
--- title: Security tips for Djangonauts published: true description: Secure your Django application and be a star tags: security, python, django --- Lucky you, you user of the web framework for perfectionists with deadlines (AKA Django). The Django team has put a lot of thought into their security practices. I have summarized some of the best tips to keep your Django project secure. See all ten tips [here](https://snyk.io/blog/django-security-tips/) ### Throttle user authentications Django provides a lot of security features baked in, but the authentication system does not inherently protect against brute force attacks. A malicious actor could hit your system with numerous login attempts, and potentially get in. If this kind of attack is of concern for your project, use a project like Django Defender to lock out users after too many login attempts. ### Protect your source code Protecting your source code may seem to be an obvious step, but it is a multi-faceted step and is, therefore, worth exploring. One way to protect your source code is to make sure that it is not included in your web server's root directory. If it is, there is a possibility that it is served, or that part of it is executed in a way that you had not planned. And although it goes without saying, if your project is sensitive, be sure to use a private repository on GitHub, Bitbucket, or Gitlab. Also, make sure to never check your secrets into your version control system, regardless of whether you intend to use a private repo. It is possible that a private repository does not always stay private, and someone with access to a private repo cannot always be trusted. ### Use raw queries and custom SQL with caution While it is tempting to write raw queries and custom SQL, doing so may open the door for an attack. Django's object-relational-mapping (ORM) framework is designed to make querying your database easy. Querysets are constructed using query parameterization. 
The query's parameters have been abstracted away from the query's SQL code. A user attempting to perform a SQL injection (executing arbitrary SQL on a database) is going to find it much harder if you always use the ORM. Django does allow the use of raw queries, but their use is not recommended. If you do use them, take extra care to properly escape any parameters. If you find the Django ORM to be insufficient for your needs, it is possible to use a different ORM within Django. SQLAlchemy is an example of an ORM that can be used with Django. If there is an ORM that better suits your project, making use of it is preferable to writing large amounts of raw SQL. ### Don’t let the perfect get in the way of the good Every security step you take is a step in the right direction. Django may be for perfectionists with deadlines, but code doesn’t have to be perfect to reap security benefits. Implementing the concepts discussed above, to the best of your ability, can dramatically improve the security of your code and result in a healthier, more resilient project. Happy coding, Pythonistas! *** I am a Developer Advocate at [Snyk](https://snyk.io). This post originally appeared on the Snyk blog, with even more security tips. Find the full article [here](https://snyk.io/blog/django-security-tips/) as well as an easily shareable PDF.
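The parameterization idea is not Django-specific. Here is a framework-free illustration using Python's stdlib `sqlite3` module, showing why a parameterized query blocks a classic injection payload that naive string interpolation lets through:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

malicious = "x' OR '1'='1"

# Parameterized: the driver treats the payload as a plain string value,
# so no user matches and the injection fails.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
assert safe == []

# String interpolation: the payload becomes part of the SQL itself
# (WHERE name = 'x' OR '1'='1') and matches every row.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{malicious}'"
).fetchall()
assert unsafe == [("alice",)]
```

The Django ORM builds its queries the first way for you, which is why sticking to it makes injection much harder.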
hayleydenb
288,408
Guidance: Plan for Life
This will guide you towards setting up an eternal (lasting or existing forever; without end or beginn...
0
2020-03-26T03:17:13
https://dev.to/coltonehrman/guidance-plan-for-life-2pkh
This will guide you towards setting up an eternal *(lasting or existing forever; without end or beginning)* plan for your life. ## Tools: - [Todist](https://todoist.com/r/colton_ehrman_xzlapc) ## Sections * [What To-Do NOW?](#what-to-do) * [Create A Blueprint](#create-blueprint) * [Create Weekly To-Do List](#create-weekly) * [Create Today's To-Do List](#create-today) ### What to do NOW? <a name="what-to-do"></a> I am adding this section based on the fact that I run into this problem a lot of times. Where I am working on something and asking myself, *"Should I be working on this right now, or maybe something else?"*, or when I am questioning what I should do at this moment *(usually after I finished doing something pretty productive)*. It seems that this may be my biggest problem and most likely the reason that pushed me into writing this whole article and doing what I am doing today. So, let me think this out for a minute. I have trouble coming up with what "direction" to go in. My mind is cluttered with all these ideas and possibilities of what to do next. My first step to solving this was to write it all down with the idea of getting it out of my head and onto something that I can plainly visualize and hopefully better sort/organize. I feel that this is a great first step in the *right* direction. Now I need to focus on untangling this mess that I just threw up, and figure out what is worth keeping and what to throw away. Being somewhat of a hoarder *(along the lines of, "I don't know if I will need this later, so I will keep it for now")* I will probably save the task of weeding through it for later. Let's just focus on finding the *right* direction for now. I have everything *(mostly)* that I am currently thinking out of my head and down on something I can look at and visualize. 
Now, I am going to pick the first thing that excites me the most, the thing that I can see myself doing for a long period of time without worrying about getting tired of it and dropping it all of a sudden. Because **another problem** I have is sticking to a specific goal instead of juggling a bunch of things around and usually dropping them all one-by-one. Now that I have finally **picked one thing**, I am going to focus on this singular idea for the most part of my days. Starting **_NOW_**, I will use this idea as a sort of blueprint to start up my day. *How can I break down this idea into specific tasks and short-term goals?* This is the question I will ask myself. *To give some practical advice for this article, I will show you exactly what I am using to follow along with this "junk-of-information".* I **picked** the idea of *learning flutter* out of the other ideas I had spit out from my head, as my focus. ![List of ideas](https://i.imgur.com/pws0lXr.png) *If you pick something that makes you think "I don't know if I will be able to stick with this for very long", or that leaves you feeling uneasy, then you probably didn't pick the __right__ thing. Although I will say, I am feeling a little anxious about the thing I picked, but it is not a concern for me at this moment since I am doing this all as an experiment and learning process.* **Now**, let me take this idea, and start devising specific tasks and goals that revolve around it for my everyday to-do lists. First, I am going to put together a basic to-do list for my day tomorrow, then I will be able to insert my specific tasks into my day. Here is my basic list, with typical tasks that I do every day. I put this together using the [blueprint](#create-blueprint) section. ![Basic to-do list](https://i.imgur.com/PwDBTuJ.png) This gives me something to start with and an idea of how my day will look so far. Now I can begin incorporating the other tasks into this list. 
My over-arching focus is **learning flutter**, so I am going to come up with things to do that embrace this idea. To start, I want to reflect on my current status, such as what I have at my disposal. Luckily, I recently purchased two courses that *teach flutter*. Also, I happen to be working on a couple of projects with *some other people* which involve me building a flutter app. So, at this point it appears that I have the resources necessary to begin putting together specific tasks related to **learning flutter**. Let me go ahead and add some specific tasks into my list **NOW**. *Here are the two __general__ tasks I came up with.* ![Two general tasks](https://i.imgur.com/I7uj4ix.png) I decided to pick the Udemy course as my focus right now since it doesn't require other people's involvement and I can work on it at my own pace. Also, I know where I am currently at in the course is building a flutter app, so I wanted to add a specific task for that as well. Even though these two tasks are so related, they help define my *actions* for the day, which is what I am trying to do. This is just the start, I **now** need to break these tasks up into even more specific *actions* that I can take. *Here is what I came up with* ![Added specific tasks](https://i.imgur.com/jF7LiP2.png) Not going to lie, this part was difficult for me, and I am still not fully satisfied with it. Nonetheless, it is a starting point and will do for now. I have broken up the general tasks into more specific *actions*. Tomorrow, I will be able to go through my task list with little to no thinking involved, just **taking action**. Also, I have added some comments to each task, forcing myself to explain why I am doing this, and the reasoning involved *(just in case I need a reminder)*. ![Task comment](https://i.imgur.com/QZo2LcV.png) I feel satisfied with my current progression at this point, and believe my process will only get better over time and practice. 
I foresee myself running into some issues tomorrow, with some specific tasks I have put in the list, but plan on tackling that when the time comes. For now, I will focus on the end goal. Whenever I encounter a problem, I plan on documenting it and going over a possible solution. *To be continued...* ### Create A Blueprint <a name="create-blueprint"></a> **Put together a general to-do list blueprint with things that you need to do repeatedly** *This one is pretty obvious, but here are some ideas and tips to help you - this blueprint can apply to daily to-dos, weekly, monthly, etc.* **_How a blueprint list may look_** ![Basic to-do list](https://i.imgur.com/PwDBTuJ.png) 1. What is something you always end up needing to do? *Wake up, shower, eat, sleep, pay bills, etc.* - Try to add tasks that you *want* to start doing into your blueprint. *This will force you to build better habits* - Set times and reminders for tasks to get into a routine. - Play around with different times to see what works best for you. *Maybe you study better around bedtime instead of the morning* - No task is too small to add to the list. *Pretend you are a computer and you are writing out a daily algorithm for your body to follow and make it through the day, every second counts* ### Create Weekly To-Do List <a name="create-weekly"></a> **Let's create a weekly to-do list.** *I would recommend setting up a [blueprint](#create-blueprint) that you can use.* 1. Ask yourself - *What is something I end up doing every week?* - Are you working on something at the moment? *Break up that project into specific daily tasks, which can then be more detailed the day of or before. The idea is to start breaking up the project at a high level* - Have something you've been meaning to get done? 
*Look through your current daily to-dos, and pick one that seems pretty open to add that task to* - When making up a weekly to-do, it is also a good idea to start looking at upcoming events/plans and organizing them better for your schedule. ### Create Today's To-Do List <a name="create-today"></a> **Let's create today's to-do list, or preferably tomorrow's if you want to stay ahead of the game!** *I would recommend setting up a [daily to-do list blueprint](#create-blueprint) that you can use.* 1. Ask yourself - *What do I need to get done this day?* - Are you working on something at the moment? *Start breaking up that project into small tasks, don't worry about overloading your day, just write it down so it can get done* - What have you been thinking about lately? *I really want to clean my room. I wish my truck was clean. Ughh, when will I finally make time to read that book* - Is there anything you wish was better organized in your life? *Clean up and organize your cluttered phone screen of apps. Go through your digital files and clean out / organize them* - Are there any activities/habits you should remove from your life? *Identify and replace those with literally anything that is better for you* ## Goals: - Stop asking yourself what to do every day - Have a concrete guideline of what to do now - Stop questioning what's next or looking ahead - Focus on the important things in life - Declutter your life - Attain *(succeed in achieving something that one desires and has worked for)* a clear mind
coltonehrman
288,454
Making apps during quarantine!
Boring, huh? Quarantine, a different perspective of "staying home as usual", only it's unu...
5,608
2020-03-26T05:05:02
https://dev.to/pasenidis/making-apps-during-quarantine-1hl6
node, api, express, coronavirus
## Boring, huh? Quarantine, a different perspective on "staying home as usual", only it's unusual and you can't go out if you get bored. Bad, huh? Eventually it makes you bored - so bored that I created a COVID-19 tracker. But how does it work? I mean, what makes it different from the many other crappy trackers? Well, this one is developed by two people & it contains time charts :) (https://covid-19-system.herokuapp.com/developers) ## What is this tracker all about? I mean, now you can compare two time periods (e.g.: December & March) Kinda useless? Maybe, but social media likes using phrases like "COVID-19's infection rate has risen, 5% more than it was in February" and things like that. Who knows, maybe journalists will use this thing. The funny part is that the API wasn't even created by us, yeah - you heard right! Basically, we will soon be utilizing a second API which is also not ours! That's open-source for you, beginners! (yes, contributing especially is amazing). Back to our topic: we haven't even implemented a custom API, although I may do that later. Anyways, we will be adding more charts, country search, better mobile responsiveness & much more. Now, let's see how this thing works under the hood... ## Exploring the project So, if you `git clone` the site repository you will basically download the repository. Let's start exploring it - open the **src** folder to get started. See? There are many files; some are for Pug, others are for browser JS, there is also some CSS; nevertheless there are many things in that repo. ## But how do they talk? Well, if you type `npm start`, a Node Express server will start. Express is responsible for the routes & some minor things inside the repo. Then comes Pug, an HTML pre-processor, something like a library that replaces placeholders inside HTML with real content! 
Next up is the public directory, which contains CSS files and JavaScript that runs in the browser (not related to Node; it's linked by Pug). This code fetches information from an API that you can find on the GitHub project repository as soon as this article ends. [1] This was a brief documentation; I am not gonna dive deeper - you will be able to do that yourself when the major release is ready! Let's not forget to mention the developers; * Me, (Edward, also the writer of this post) * Lean, (Tasos, a cool dude who has built everything from Discord bots to an Arduino-to-Discord webhook system) ## Some important links [1]. https://github.com/pasenidis/covid19-stats [2]. https://github.com/pasenidis [3]. https://github.com/TasosY2K
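The "replaces placeholders inside HTML with real content" job described for Pug above can be illustrated with a toy function (this is not Pug's actual syntax or implementation, just the underlying concept):

```javascript
// Toy template renderer: swaps {{name}} placeholders for values from a
// data object, the core idea behind pre-processors like Pug.
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : ""
  );
}

const html = render("<h1>{{country}}: {{cases}} cases</h1>", {
  country: "Greece",
  cases: 1000,
});
// html === "<h1>Greece: 1000 cases</h1>"
```

Pug adds much more on top (indentation-based syntax, loops, mixins), but the mental model of "template + data in, HTML out" is the same.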
pasenidis
292,245
How to upgrade Rails
I don’t write much about Rails here but whoa, two posts in a row! Well, it turns out that I thought I...
0
2020-03-26T09:05:53
https://flaviabastos.ca/2020/03/26/how-to-upgrade-rails/
howto, rails
--- title: How to upgrade Rails published: true date: 2020-03-26 08:25:00 UTC tags: how-to,rails canonical_url: https://flaviabastos.ca/2020/03/26/how-to-upgrade-rails/ --- I don’t write much about Rails here but whoa, two posts in a row! Well, it turns out that I thought I should record another lesson I learned while upgrading Rails: how to do it, meaning, what are the practical steps one should take to upgrade Rails? During my research at work I came across many resources, including [Rails’ own upgrade guide](https://edgeguides.rubyonrails.org/upgrading_ruby_on_rails.html), but after running three upgrades, I think I got my own process. And here is the key take-away: this is what worked for ME. There might be many issues with this process – don’t DM me! – but if in the future I need to run another upgrade, I will consult these notes (yeah, take these as my field notes! There!) ## The main issue to be resolved: dependencies The Rails version on a project can only be upgraded after all dependencies are properly resolved. At work we use [bundler](https://bundler.io/) as a gem manager, so all information below takes that into account. There are **two approaches** for dealing with dependency management: **one approach** is to go through the Gemfile and check and update all installed gems. This might take quite some time depending on how many gems the project has, but it’s also a very conservative approach. If this is your first time upgrading, do this! The **other approach** is to try to update the Rails gem version in the Gemfile to start with and run `bundle update rails`. This will most likely throw an error similar to the one below. The next step is to identify which gems are preventing the project from being upgraded. Note that this second approach **fails** the upgrade process, which means that the rails version is still the old one. 
It can also be tricky to identify which gem is holding the upgrade back: most of the time the error messages are not very clear, and dependencies might depend on yet another dependency, making it even harder to find the starting point. Another problem with this approach is that at the end, only the required gems will have been upgraded, when in an ideal scenario, [projects are always up to date with dependencies](https://wp.me/pa0b0y-6b). Typical dependency error message when attempting to upgrade Rails: ```text Bundler could not find compatible versions for gem "activemodel": In Gemfile: rails (~> 4.0.13) was resolved to 4.0.13, which depends on activerecord (= 4.0.13) was resolved to 4.0.13, which depends on activemodel (= 4.0.13) draper (~> 1.0) was resolved to 1.4.0, which depends on activemodel (>= 3.0) simple_form (~> 3.1.1) was resolved to 3.1.1, which depends on activemodel (~> 4.0) ``` > The best way, in my opinion, to deal with an error similar to the one above is to check each dependency (gem) listed in the error message for a version conflict. One gem might need to be upgraded to resolve all other dependencies. Note that all active\*, action\*, etc. are Rails related and there’s no way to upgrade those apart from Rails itself. > > Yeah, it’s tricky to find which gem is throwing the error. No easy way out of that… ¯\_(ツ)\_/¯ Despite all the cons, on my last two upgrades I used this second approach. ## More on updating gems Before updating a gem, you need to check it for compatibility issues: the latest gem version might not be compatible with your Rails target version. In this case, aim to have the latest possible version you can. Here’s an outline of what you should look for when trying to upgrade gems: - Check the gem version installed: in the app, run `gem list <gem_name_here>`. This will show all versions of the gem installed in the app. 
If there’s more than one, another good place to check is the `Gemfile.lock` - Go to [RubyGems.org](https://rubygems.org/) and search for the installed gem. Here you can see the latest gem version and lots of information about that gem - Go to the source code for the gem and look for a `CHANGELOG.md` or similar file (VERSIONS, CHANGES, HISTORY, NEWS, etc). Check if there are any breaking changes between your currently installed version and the latest, and also if there are any compatibility issues with your Rails target version in any of the gem’s versions. It might be the case that you can’t update the gem to latest because it doesn’t work with your Rails target version. - If there is no changelog-type information, check the file `<gem_name>.gemspec` in the root of the source code. That file shows any gem dependencies, if any - If no information is found in the source code, try Googling `<gem_name> rails <your_target_version> support` - If any issues prevent the gem from being updated to latest, make sure you specify the desired version in the `Gemfile`. It’s a good idea to use a conservative indicator here (`~> 3.5.0`, for example). ## How to update a gem - In the app, run `bundle update <gem_name>`. In most cases this should update the gem to the desired version: there will be a nice message highlighting the change - If the gem version doesn’t change, look for possible dependencies or requirements (for example, some gems will only work with the Rails target version. In this case, those gems must be updated **at the same time** as the Rails version, by passing the gem name along with `rails` to the bundle update command). 
- Restart your server (again, this can vary, but I have puma, so: `pumactl restart`) so the new gem version gets implemented - Load the app and do a quick check - It’s a good idea to commit each gem update separately, so you can roll back if any issues are identified ## Updating the Rails version After all dependencies are updated as required, you can update Rails itself and any gem that needs to be updated at the same time. - Change the Rails version in the `Gemfile` to the target version - Run `bundle update rails` - If everything goes well, there should be a list of gems updated. If you check the rails version (`rails --version`), it will show the new version - Restart the server - If this fails, the error message will list the dependency issues, which means that one or more gems are not running the required version yet and the upgrade process will have failed. Afterwards, run the full test suite to make sure all tests pass. Warning: some (or many!) will probably fail. > Pushed my last commit of 2019, finally fixing all the 400+ tests that have been failing. Mood: [pic.twitter.com/4nhd3BXEps](https://t.co/4nhd3BXEps) > > — Flávia Bastos (@FlaSustenido) [December 31, 2019](https://twitter.com/FlaSustenido/status/1212043151692771329?ref_src=twsrc%5Etfw) <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> * * * > Did this help you? Let me know on [Twitter](https://twitter.com/FlaSustenido)! > _The post [How to upgrade Rails](https://wp.me/pa0b0y-6r) was originally published at _[flaviabastos.ca](https://flaviabastos.ca/)
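Putting the pinning advice above together, a hypothetical `Gemfile` fragment might look like this (the gem names and version numbers are examples for illustration, not recommendations):

```ruby
# Example Gemfile fragment: pin the Rails target version and any gem that
# can't go to latest, using the pessimistic operator (~>) so patch releases
# are allowed but breaking version jumps are not.
source 'https://rubygems.org'

gem 'rails', '~> 5.0.7'        # the upgrade target (example version)
gem 'simple_form', '~> 3.5.0'  # held back for compatibility (example version)
```

With the versions pinned this way, `bundle update rails simple_form` can move both gems together, which matches the "update at the same time" case described above.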
flaviabastos
292,247
Generating public and private keys with ed25519 and x25519
OTP-22.3-rc2 added support for ed25519. Erlang/OTP 23 [erts-11.0] [source] [64-bit] [smp:12:12] [ds:12:12:1...
0
2020-03-26T09:07:20
https://dev.to/voluntas/ed25519-x25519-4eln
erlang, crypto
OTP-22.3-rc2 added support for ed25519. ```erlang Erlang/OTP 23 [erts-11.0] [source] [64-bit] [smp:12:12] [ds:12:12:10] [async-threads:1] [hipe] [sharing-preserving] Eshell V11.0 (abort with ^G) 1> crypto:generate_key(eddsa, ed25519). {<<125,35,162,182,22,21,162,141,50,135,15,227,58,228,192, 29,126,65,238,25,99,54,18,220,253,153,209,222,...>>, <<162,78,62,125,24,238,130,139,208,62,159,201,51,207,143, 240,70,226,152,30,39,89,65,172,131,198,164,...>>} 2> crypto:generate_key(ecdh, x25519). {<<240,91,161,68,198,253,253,158,164,169,36,43,229,150,53, 196,178,35,147,54,62,156,121,213,54,156,135,231,...>>, <<0,171,160,172,83,207,11,26,217,171,220,241,161,245,193, 171,32,209,157,20,7,88,248,186,231,5,219,...>>} ``` The return value is {public key, private key}. compute_key can be executed with the keys generated by either call, so it makes little difference which one you use.
voluntas
292,253
Angular Modules - Custom Lazy Load Strategy
Contents: What are modules Example of a module Lazy-loading feature modules Default way Using cus...
0
2020-03-26T12:40:58
https://dev.to/bogicevic7/angular-modules-custom-lazy-load-strategy-474b
angular, tutorial, beginners, typescript
Contents: - What are modules - Example of a module - Lazy-loading feature modules - Default way - Using custom preload strategy - Summary Before we tackle module lazy loading and custom preloading strategies, let's start from the beginning and figure out what modules actually are. **What are modules** As you may know, Angular apps are module-based and Angular has its own modularity system called **NgModules**. NgModules are like containers for a closely related set of capabilities; for example, you can have auth, user, booking, and shared modules that can contain components, directives, pipes, services and other cohesive blocks of code. Each module can import functionality that is exported from other NgModules, and export selected functionality for use by other NgModules, achieving encapsulation. **Example of a module** ```javascript import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; @NgModule({ imports: [ BrowserModule ], providers: [ MyLogger ], declarations: [ AppComponent ], exports: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { } ``` In order to define a module we need to use the `NgModule()` class decorator and pass metadata to it; here are the commonly used properties: `imports`: an array of other modules that are imported, in this case by `AppModule`; we import other modules when we want to reuse, for example, components or directives defined in them. `providers`: an array of services defined in `AppModule`; they become accessible in all parts of the app. `declarations`: an array of components, directives, and pipes that belong to this `NgModule`. `exports`: an array of components, directives, and pipes that belong to this `NgModule` but that we want to make publicly visible, or should I say, importable and reusable by other modules. `bootstrap`: The main application view, which hosts all other app views. 
Only the root `NgModule` should set the bootstrap property.

By default, `NgModules` are eagerly loaded, which means that as soon as the app loads, so do all the `NgModules`, whether or not they are immediately necessary. [Here](https://stackblitz.com/edit/eager-load) you can see an example of module eager loading (please open the console to see the outputs; both modules are loaded at the same time).

**Lazy-loading feature modules**

For large apps with lots of routes and modules, consider lazy loading, a design pattern that loads `NgModules` as needed. Lazy loading helps keep initial bundle sizes smaller, which in turn helps decrease load times.

_Default way_

[Here](https://stackblitz.com/edit/angular-ll) you can see an example of a module being lazy-loaded (please open the console; at the beginning only `AppModule` is loaded, and after you click the `Go to login` button, `AuthModule` is loaded on demand).

_Using custom preload strategy_

[Here](https://stackblitz.com/edit/angular-custom-preloading-strategy) you can see an example with a custom lazy-load strategy: you can provide additional metadata for a module and load it after a certain period of time. This is useful when you know your application flow; for example, while the user is on module 1, perhaps filling out forms, module 2 can be downloaded in the background using the custom preload strategy. (Please open the console; at the beginning only `AppModule` is loaded, and after 5 seconds `AuthModule` is loaded as well.)

**Summary**

To wrap up, lazy loading is a really useful and powerful feature available by default in the Angular framework. You should use it regularly in order to have smaller initial bundles and improve application performance. Please leave a comment down below if you would like to see in-depth explanations of the examples used above. Thanks for reading!
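The delay-based decision the custom-strategy example relies on can be sketched in plain TypeScript. This is a minimal sketch, not the linked StackBlitz code: the `RouteLike` shape and the `preloadDelayMs` data key are assumptions made for illustration, and Angular's real `PreloadingStrategy.preload(route, load)` works with `Route` objects and returns an Observable rather than a Promise.

```typescript
// Minimal model of a route carrying optional preload metadata
// (hypothetical shape; Angular's Route has many more fields).
interface RouteLike {
  path: string;
  data?: { preloadDelayMs?: number };
}

// Custom strategy in miniature: routes that declare `preloadDelayMs`
// are loaded after that delay; routes without it are never preloaded.
function preload<T>(route: RouteLike, load: () => T): Promise<T | null> {
  const delay = route.data?.preloadDelayMs;
  if (delay === undefined) {
    return Promise.resolve(null); // no metadata: skip preloading
  }
  return new Promise(resolve => setTimeout(() => resolve(load()), delay));
}
```

With this shape, the article's 5-second `AuthModule` preload would be expressed as a route like `{ path: 'login', data: { preloadDelayMs: 5000 } }`.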
bogicevic7
292,287
Using absolute (alias) imports in Javascript and VSCode
Learn how to import javascript or typescript modules with absolute paths in webpack &...
0
2020-03-26T10:50:55
https://nimblewebdeveloper.com/blog/absolute-alias-imports-in-javascript-vscode
javascript, webdev, productivity, vscode
---
title: Using absolute (alias) imports in Javascript and VSCode
published: true
date: 2020-03-25 13:00:00 UTC
tags: javascript, webdev, productivity, vscode
canonical_url: https://nimblewebdeveloper.com/blog/absolute-alias-imports-in-javascript-vscode
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/drv13j4vklt9oiwxlsop.png
---

Learn how to import javascript or typescript modules with absolute paths in webpack & VSCode.

Developers love productivity hacks. Get more done in less time, more time for... more coding? Something like that anyway, and I'm no different. One of my favourite little productivity and "tidiness" hacks lately is **absolute imports** in javascript apps: the ability to import modules from an absolute path rather than a relative path. Using relative imports works great for small projects and examples in blog posts, but for larger projects, relative imports can quickly become a nightmare. Hopefully this tip helps you out too!

**Note**: this tip is specific to webpack. If you're not using webpack the first part of this will not work! The part pertaining to VSCode is still relevant.

## What are relative and absolute module imports?

In javascript, relative module imports usually look something like this (in ES6):

```javascript
// e.g. src/pages/index.js
import myComponent from '../components/myComponent'
import someUtil from './utils/someUtil'
// ...
```

In this example, the component **myComponent** is imported from the _relative_ path `../components/myComponent`. Why is this path 'relative'? Because the path is relative to the current file. The single dot or double dots at the beginning of the import path, followed by the directory separator (slash), indicate either the same directory as the current file or a directory one level above.
As you can see, if we have a large project with a deeply hierarchical directory structure, we might end up with relative imports like:

```javascript
import myComponent from '../../../../myComponent'
```

And that's going to get annoying real fast!

## Why use absolute module imports?

Relative imports aren't all bad. I'm not saying never use them! On the contrary, it's a good idea to use relative module imports sometimes. For example, if you have closely related files that might be considered part of the same larger module, which are probably located in the same directory, you would almost definitely want to use a relative import.

Much of the time, however, relative imports are used throughout the whole codebase, and this can get messy really quickly as the project grows in scale. Relative imports just work straight out of the box, with zero config necessary, whereas absolute imports require a (very) small amount of configuration.

## Webpack configuration for absolute imports

To enable absolute imports we'll need to make a small change to our webpack config. (Note: if you're using create-react-app you might have difficulty customizing your webpack config.)

It's really easy to configure webpack to look for your source files using an absolute path. All we need to do is add some **aliases** to the **resolve** section of the webpack config. For example, a vanilla webpack.config.js might look like this (see the [webpack docs on resolve settings](https://webpack.js.org/configuration/resolve/)):

```javascript
const path = require('path');

module.exports = {
  //...
  resolve: {
    alias: {
      '@Components': path.resolve(__dirname, 'src/components/'),
      '@Utilities': path.resolve(__dirname, 'src/utilities/')
    }
  }
};
```

Now we can use these aliases like:

```javascript
import myComponent from '@Components/myComponent'
import someUtil from '@Utilities/someUtil'
```

Which is awesome! No longer do we need to know where the component we want is relative to our current file. Nice!
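To make the alias mechanics concrete, here is a toy resolver sketching the string rewrite an alias map performs. This is purely illustrative — webpack's real resolver also handles extensions, `node_modules` lookup, and filesystem checks — and the alias names simply mirror the config above.

```typescript
// Alias map mirroring the webpack `resolve.alias` example above.
const aliases: Record<string, string> = {
  '@Components': 'src/components',
  '@Utilities': 'src/utilities',
};

// Rewrite an aliased request to a project-relative path;
// requests that match no alias pass through unchanged.
function resolveAlias(request: string): string {
  for (const [alias, target] of Object.entries(aliases)) {
    if (request === alias || request.startsWith(alias + '/')) {
      return target + request.slice(alias.length);
    }
  }
  return request;
}

// resolveAlias('@Components/myComponent') → 'src/components/myComponent'
```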
## Use webpack alias in VSCode

Being able to import our components and utilities with absolute paths is awesome, but it can still get annoying typing out "@Components/myComponent..." every time. Fortunately the lovely people behind Visual Studio Code thought of this. You can use a **jsconfig.json** file in the root of your project to tell VSCode about your webpack aliases. Add a file called **jsconfig.json** to the root of your project with the following code:

```javascript
// jsconfig.json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      // Assuming your components/utilities live in ./src
      // Update this path as necessary
      "@Components/*": ["./src/components/*"],
      "@Utilities/*": ["./src/utilities/*"]
    }
  },
  // Add any build/compiled folders here to stop vscode searching those
  "exclude": ["node_modules", "build"]
}
```

You might need to update the paths in the config to match your project. This config assumes your components folder is in ./src, which may or may not be the case.

## Now you should have absolute webpack imports that work in vscode

Hopefully this quick tip helped you set up your project for faster, more productive development using absolute imports with webpack aliases. Got more tips? Let me know! Everyone likes a good productivity hack!
sebtoombs
292,295
IT Outsourcing 2020 Overview & Trends
In today's world of ever-changing requirements, fleeting markets and the incredibly fast pace of life...
4,831
2020-03-26T11:03:29
https://www.romexsoft.com/blog/it-outsourcing-trends/
devlive, todayilearned, todayisearched
In today's world of ever-changing requirements, fleeting markets, and the incredibly fast pace of life, everybody naturally wants to optimize everything. In the present conditions, companies who want to be innovative are methodically searching for ways to optimize costs, human hours, and operations. A lot of them choose IT outsourcing services, as they provide operational flexibility, generous cost savings, and access to specific skills.

Remote IT outsourcing has changed the way businesses operate today. The fundamental shift in IT outsourcing has been from low-risk functions towards business activities and strategic directions, such as idea generation. IT outsourcing providers are expected to deliver not only solutions to present problems but also to help companies differentiate themselves in the world market, create new opportunities, and gain competitive advantages.

Companies expect IT outsourcing to ensure:

- High-quality solutions
- Cost efficiency
- Excellent performance
- Cutting-edge technologies
- Customer-centricity
- Strong partnerships
- A multitalented pool
- Affordable resources
- Overhead reduction

Over time, organizations grow naturally and become more complex. For companies who want to survive, the automation-first approach is becoming a strategic imperative. The evolution of AI will remain exponential, synthetic intuition is the way to go, and leaders should leverage the expertise of IT outsourcing companies for the development of a digital transformation strategy.

Let's dive into the most anticipated trends in IT outsourcing for 2020:

IT Outsourcing: Robotic Process Automation (RPA)

Artificial Intelligence (AI) and Machine Learning (ML) provide a practical and real opportunity to automate processes and delegate them to machines. The combination of these technologies has progressed into Robotic Process Automation (RPA), an innovative form of business process automation technology.
Read More: https://www.romexsoft.com/blog/it-outsourcing-trends/
annaboy75634026
292,306
Remove time in DataGridView and when it is exported to PDF
Please help me guys! I want the time in Date_Purchased(date) to be removed in datagridview. Because w...
0
2020-03-26T11:24:30
https://dev.to/annie85159209/remove-time-in-datagridview-and-when-it-is-exported-to-pdf-1k20
Please help me guys! I want the time in Date_Purchased (a date column) to be removed in the DataGridView, because whenever I export the grid to PDF, it shows both the date and the time. I only want the date, and want to remove the time, especially in the exported PDF. Here's the sample piece of code:

```vb
Public Sub NewInventory()
    Dim NI As SqlCommand = con.CreateCommand
    NI.CommandText = "Insert into Items_table(Room_ID, PC_Number, Item_Name, Date_Purchased) VALUES (@Room_ID, @PC_Number, @Item_Name, @Date_Purchased);"
    NI.Parameters.AddWithValue("@Room_ID", Room_ID)
    NI.Parameters.AddWithValue("@PC_Number", PC_Number)
    NI.Parameters.AddWithValue("@Item_Name", Item_Name)
    NI.Parameters.AddWithValue("@Date_Purchased", DatePurchasedDTP.Value)
    NI.ExecuteNonQuery()
    MessageBox.Show("New item created.")
End Sub

' For the DataGridView
Public Sub GetRoomItems(RoomID As String)
    Dim getItems As String = "Select Item_ID, PC_Number, Item_Name, Date_Purchased FROM Items_table WHERE Room_ID=" + RoomID
    Dim adapter As New SqlDataAdapter(getItems, connection)
    Dim table As New DataTable()
    adapter.Fill(table)
    InventoryDataGrid.DataSource = table
End Sub

' For exporting to PDF
Private Sub ExportButton_Click(sender As Object, e As EventArgs) Handles ExportButton.Click
    connection.Close()
    connection.Open()
    Dim pdfTable As New PdfPTable(ReportDataGridView.ColumnCount)
    pdfTable.DefaultCell.Padding = 1
    pdfTable.WidthPercentage = 100
    pdfTable.DefaultCell.HorizontalAlignment = Element.ALIGN_CENTER
    Dim ptable As New Font(iTextSharp.text.Font.FontFamily.HELVETICA, 11, iTextSharp.text.Font.BOLD, BaseColor.BLACK)

    For Each column As DataGridViewColumn In ReportDataGridView.Columns
        Dim cell As New PdfPCell(New Phrase(New Chunk(column.HeaderText, ptable)))
        cell.HorizontalAlignment = Element.ALIGN_CENTER
        cell.FixedHeight = 30
        pdfTable.AddCell(cell)
    Next

    For Each row As DataGridViewRow In ReportDataGridView.Rows
        For Each cell As DataGridViewCell In row.Cells
            pdfTable.AddCell(cell.Value.ToString)
        Next
    Next

    Dim folderpath As String = "C:\PDFs\"
    If Not Directory.Exists(folderpath) Then
        Directory.CreateDirectory(folderpath)
    End If

    Using sfd As New SaveFileDialog()
        sfd.ShowDialog()
        sfd.OverwritePrompt = True
        sfd.Title = "Save As"
        sfd.AddExtension = True
        sfd.DefaultExt = ".pdf"
        Using stream As New FileStream(sfd.FileName & ".pdf", FileMode.Create)
            Dim pdfdoc As New Document(PageSize.LETTER, 36.0F, 36.0F, 36.0F, 36.0F)
            PdfWriter.GetInstance(pdfdoc, stream)
            pdfdoc.Open()
            pdfdoc.Add(pdfTable)
            pdfdoc.Close()
            stream.Close()
            If File.Exists("path") Then
                File.AppendAllText("path", "contents")
            End If
        End Using
    End Using
End Sub
```

If you ask me what the data type of Date_Purchased is, it is date. I used date, not datetime, and I'm still confused why the time is still in the PDF whenever I export it. Please help me! Thank you so much. #sqlserver2017express,#VB.NET,#System,#Thesis
annie85159209
292,322
Easily Calculate Summary of Selected Rows with WinForms DataGrid
We are happy to announce that in our Essential Studio 2020 Volume 1 beta release, we added summary ca...
0
2020-03-27T11:22:56
https://www.syncfusion.com/blogs/post/easily-calculate-summary-of-selected-rows-with-winforms-datagrid.aspx
csharp, dotnet, productivity
---
title: Easily Calculate Summary of Selected Rows with WinForms DataGrid
published: true
date: 2020-03-26 11:30:07 UTC
tags: csharp, dotnet, productivity
canonical_url: https://www.syncfusion.com/blogs/post/easily-calculate-summary-of-selected-rows-with-winforms-datagrid.aspx
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/qnpcpbisw55femtau3lo.jpg
---

We are happy to announce that in our [Essential Studio 2020 Volume 1 beta](https://www.syncfusion.com/forums/152560/essential-studio-2020-volume-1-beta-release-v18-1-0-36-is-available-for-download) release, we added summary calculation for selected rows in the [WinForms DataGrid](https://www.syncfusion.com/winforms-ui-controls/datagrid) control. This blog provides a walk-through of how to calculate summaries for selected rows and how to use the available options.

The DataGrid allows users to calculate summaries for:

- Selected rows.
- All rows.
- All rows until rows are selected (mixed rows).

The [SummaryCalculationUnit](https://help.syncfusion.com/cr/windowsforms/Syncfusion.Data.WinForms~Syncfusion.Data.SummaryCalculationUnit.html) enumeration is used to perform these operations. To calculate a summary of selected rows in the DataGrid, set the [SfDataGrid.SummaryCalculationUnit](https://help.syncfusion.com/cr/windowsforms/Syncfusion.SfDataGrid.WinForms~Syncfusion.WinForms.DataGrid.SfDataGrid~SummaryCalculationUnit.html) property to **SelectedRows**. In the following code example, a summary is calculated for group caption summary rows. This summary calculation support is available for group and table summary rows too.
```csharp
this.sfDataGrid.SummaryCalculationUnit = Syncfusion.Data.SummaryCalculationUnit.SelectedRows;
this.sfDataGrid.SelectionMode = GridSelectionMode.Multiple;
this.sfDataGrid.AutoGenerateColumns = true;
SalesInfoCollection sales = new SalesInfoCollection();
sfDataGrid.DataSource = sales.YearlySalesDetails;
```

![Group Caption Summary Rows in DataGrid](https://www.syncfusion.com/blogs/wp-content/uploads/2020/03/Group-Caption-Summary-Rows-in-DataGrid.gif)<figcaption>Group Caption Summary Rows in DataGrid</figcaption>

A summary row can also be considered while calculating the summary.

```csharp
this.sfDataGrid.CaptionSummaryRow = new GridSummaryRow()
{
    Name = "CaptionSummary",
    ShowSummaryInRow = false,
    CalculationUnit = Syncfusion.Data.SummaryCalculationUnit.SelectedRows,
    Title = "Sales details in {ColumnName} : {Key}",
    TitleColumnCount = 1,
    SummaryColumns = new System.Collections.ObjectModel.ObservableCollection<ISummaryColumn>()
    {
        new GridSummaryColumn()
        {
            Name = "SQ1",
            SummaryType = Syncfusion.Data.SummaryType.DoubleAggregate,
            Format = "{Sum:c}",
            MappingName = "Q1",
        },
        new GridSummaryColumn()
        {
            Name = "SQ2",
            SummaryType = Syncfusion.Data.SummaryType.DoubleAggregate,
            Format = "{Sum:c}",
            MappingName = "Q2",
        },
        new GridSummaryColumn()
        {
            Name = "SQ3",
            SummaryType = Syncfusion.Data.SummaryType.DoubleAggregate,
            Format = "{Sum:c}",
            MappingName = "Q3",
        },
        new GridSummaryColumn()
        {
            Name = "SQ4",
            SummaryType = Syncfusion.Data.SummaryType.DoubleAggregate,
            Format = "{Sum:c}",
            MappingName = "Q4",
        },
        new GridSummaryColumn()
        {
            Name = "SQ5",
            SummaryType = Syncfusion.Data.SummaryType.DoubleAggregate,
            Format = "{Sum:c}",
            MappingName = "Total",
        }
    }
};
```

Here, let's use the value **Mixed** for the **SummaryCalculationUnit** property of the DataGrid to calculate a summary of the selected rows; otherwise, the summary for all rows will be calculated.
```csharp
this.sfDataGrid.SummaryCalculationUnit = Syncfusion.Data.SummaryCalculationUnit.Mixed;
this.sfDataGrid.SelectionMode = GridSelectionMode.Multiple;
this.sfDataGrid.AutoGenerateColumns = true;
SalesInfoCollection sales = new SalesInfoCollection();
sfDataGrid.DataSource = sales.YearlySalesDetails;
```

![Calculated Summary for Mixed Rows](https://www.syncfusion.com/blogs/wp-content/uploads/2020/03/Calculated-Summary-for-Mixed-Rows.gif)<figcaption>Calculated Summary for Mixed Rows</figcaption>

I hope you now understand this feature and how to calculate summaries for selected rows in the WinForms DataGrid. You can download a demo of calculating summaries for selected rows from our [GitHub samples](https://github.com/syncfusion/winforms-demos/tree/master/DataGrid.WinForms/Samples/Summaries/Summaries). To learn more about the summaries in the Syncfusion [DataGrid](https://www.syncfusion.com/winforms-ui-controls/datagrid), please refer to our [documentation](https://help.syncfusion.com/windowsforms/datagrid/summaries).

You can download our [2020 Volume 1 beta](https://www.syncfusion.com/forums/152560/essential-studio-2020-volume-1-beta-release-v18-1-0-36-is-available-for-download) release to check out all our new features and controls. If you have any questions or require clarifications about this control, please let us know in the comments section. You can also contact us through our [support forum](https://www.syncfusion.com/forums), [Direct-Trac](https://www.syncfusion.com/support/directtrac/), or [feedback portal](https://www.syncfusion.com/feedback). We are happy to assist you!

The post [Easily Calculate Summary of Selected Rows with WinForms DataGrid](https://www.syncfusion.com/blogs/post/easily-calculate-summary-of-selected-rows-with-winforms-datagrid.aspx) appeared first on [Syncfusion Blogs](https://www.syncfusion.com/blogs).
sureshmohan
292,930
11 best Pluralsight Courses to learn Python, Java, React.js, and Angular
A list of the best programming, software development, big data, and web development courses from Pluralsight for online learning and skill development
0
2020-03-27T08:36:41
https://dev.to/javinpaul/11-of-the-best-pluralsight-courses-programmers-can-take-to-learn-key-programming-skills-during-covid-19-epidemic-463l
programming, course, python, algorithms
---
title: 11 best Pluralsight Courses to learn Python, Java, React.js, and Angular
published: true
description: A list of the best programming, software development, big data, and web development courses from Pluralsight for online learning and skill development
tags: programming, course, python, algorithms
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/t5sb8zinl6vdsdupwujs.png
---

*Disclosure: This post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article.*

Hello guys, these are extraordinary times, the like of which happens every 100 years. With so many people now working from home or under lockdown due to COVID-19 and Coronavirus precautionary measures, I would like to share some useful online courses you can take to learn key technical skills and use this time productively.

Pluralsight is one of the best online learning websites to learn programming and tech skills with expert teaching online, more like a Netflix for software developers. Since learning is an important part of our job, a Pluralsight membership is a great way to stay ahead of your competition. They also provide a **[10-day free trial](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Flearn)** without any commitment, which is a great way not just to access these courses for free but also to check the quality of the courses before joining Pluralsight.

Pluralsight also has Skill IQ tests. If you are curious where you rank among other programmers for a particular skill like React, JavaScript, or Node.js, take the **[Pluralsight Skill IQ test](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fskill-iq)**.
I know it's not easy with whatever is going on around us, but being positive and having a learning mindset also means less stress, which is better for your immune system, probably your best defense against COVID-19, which doesn't have any cure yet. This is an extraordinary time, and you need an extraordinary mindset to get through it.

The best thing you can do during this period is to learn online. You should spend this time learning some key programming skills which can get you a job once this COVID-19 period is over. You can use this time to take courses and build in-demand skills that will help you to get a job or a career once coronavirus is over.

But, to be honest, online learning is not easy; you can easily get overwhelmed with so many resources and so many things to learn. You need to be smart, learn only from the best resources, and build the skills which will help you to get a job or make you job-ready.

Some essential skills everyone should learn in 2020 are:

- Java, Python, and Data Science
- Computer Science concepts, algorithms, data structures, and databases
- Frontend frameworks like React and Angular
- Command-line tools like Linux, Git, and Bash

So, you need to pick some of the best courses you always wanted to check out and stick with them; even if you complete one or two, it's a perfectly utilized weekend. So, here are some of the best [Pluralsight](http://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Flearn) courses you can join or watch while you are at home.
I have tried to include courses from different verticals which cover the above-mentioned skills, like [Java](https://medium.com/javarevisited/10-free-courses-to-learn-java-in-2019-22d1f33a3915), [Web Development](https://dev.to/javinpaul/top-6-courses-to-learn-web-development-best-of-lot-2fae), [Data Structure, and Algorithms](https://javarevisited.blogspot.com/2018/11/top-5-data-structures-and-algorithm-online-courses.html), [Python](https://medium.com/swlh/5-free-python-courses-for-beginners-to-learn-online-e1ca90687caf), [Data Science, and Machine learning](https://becominghuman.ai/9-data-science-and-machine-learning-courses-by-harvard-ibm-udemy-and-others-12a0c7c23ec1), [Big Data](https://dev.to/javinpaul/top-5-courses-to-learn-big-data-and-hadoop-for-beginners-6g8), [Cloud Computing](https://medium.com/javarevisited/top-10-courses-to-learn-amazon-web-services-aws-cloud-in-2020-best-and-free-317f10d7c21d), [SQL](https://javarevisited.blogspot.com/2020/02/top-5-courses-to-learn-microsoft-sql-server-mssql.html), [Linux](https://medium.com/javarevisited/top-10-courses-to-learn-linux-command-line-in-2020-best-and-free-f3ee4a78d0c0), etc., but it's not exhaustive.

[![](https://miro.medium.com/max/1545/1*W9rlOSh5kFtsKYIwtufG0A.png)](http://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Flearn)

## Top 11 Pluralsight Courses for Programmers

You can also share your favorite courses from Pluralsight in the comments, and programmers coming here for suggestions can benefit from them. So, without wasting any more of your time, here are some of the [best Pluralsight Courses](https://javarevisited.blogspot.com/2017/12/top-10-pluralsight-courses-java-and-web-developers.html) you can check out this weekend:

### 1. Python Fundamentals

Python Fundamentals gets you started with Python, a dynamic language popular for web development, big data, science, and scripting, and probably the most important thing you can learn during COVID-19.
Instructors are [Austin Bingham](https://medium.com/@austin.bingham) and @RobertSmallshire

Link to Join: [Python Fundamentals](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fpython-fundamentals)

[![](https://miro.medium.com/max/765/0*E490uaT3IjHT3ULN.jpg)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fpython-fundamentals)

### 2. Java Fundamentals: The Java Language

This is a great Java course for beginners on Pluralsight. It's not the most up-to-date (last updated in December 2015), but everything it teaches is still relevant. You will learn the basics of Java: classes, objects, data types, [Threads](https://javarevisited.blogspot.com/2016/06/5-books-to-learn-concurrent-programming-multithreading-java.html), files, error handling, and other core Java concepts.

The instructor, Jim Wilson, is an experienced software engineer with more than 30 years of experience under his belt.

Link to Join --- [***Java Fundamentals: The Java Language***](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fjava-fundamentals-language)

In short, a good course to learn Java from a beginner's point of view. It provides complete coverage of the Java programming language and serves as a foundation for all Java-based development jobs, like server-side and client-side development, including Android apps.

[![](https://miro.medium.com/max/348/0*kdkSKc_GSDkaRFaB.jpg)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fjava-fundamentals-language)

### 3. React.js: Getting Started

I have recently started working on a project which is using React.js for creating views, so it's now mandatory for me to learn React.js.
I have some idea about it (it's a similar framework to the [Angular framework](https://javarevisited.blogspot.com/2018/06/5-best-courses-to-learn-angular.html), but backed by Facebook), but I have yet to do a deep dive, and that's why I have selected this Pluralsight course to take in 2020.

Link to Join --- [**React.js: Getting Started**](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Freact-js-getting-started) **by Samer Buna**

[![](https://miro.medium.com/max/765/0*gC6Zn2BO4P3PWKwi.png)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Freact-js-getting-started)

This course covers the basics of [React.js](https://javarevisited.blogspot.com/2018/08/top-5-react-js-and-redux-courses-to-learn-online.html) and prepares the student to start developing web applications with the library. It also explains the essential React.js concepts using a sample web application, a game that tests kids' math skills.

### 4. Linux Command Line Interface (CLI) Fundamentals

In this course by [Andrew Mallette](https://medium.com/@andrew.mallette), you will learn to master the command-line shell in Linux and Unix. This is the 2nd of 4 courses that will prepare you for the LPIC-1 and CompTIA Linux+ certifications.

Link to Join: [**Linux Command Line Interface (CLI) Fundamentals**](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Flinux-cli-fundamentals)

[![](https://miro.medium.com/max/765/0*GJUyyvrKXyi6g9Cx.jpg)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Flinux-cli-fundamentals)

### 5. Git Fundamentals

Git is a popular distributed version control system (DVCS). This is one of the top courses from Pluralsight; it teaches you how to create a local repository, commit files, push changes to a remote repository, fix errors in your commits, and many of Git's other features.
Link to Join --- [**Git Fundamentals**](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fgit-fundamentals)

It will also help you understand the difference between the working copy, the staging area, and the repository itself. One of my goals is to master Git in 2020.

[![](https://miro.medium.com/max/348/0*t-MKPLnTasi4Cvj9.png)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fgit-fundamentals)

Even though I know Git and have downloaded a couple of projects from GitHub, I did so using the Git client in Eclipse. I have yet to work with the Git command line, and that's where this Pluralsight course is going to help me. Come learn the power of Git.

### 6. Big Data: The Big Picture

This is one of the new things I am going to explore this year. Big Data technologies like Spark and Hadoop are my focus, but I will also spend some time learning the bigger picture, and that's where this Pluralsight course is going to help me.

Link to Join --- [**Big Data: The Big Picture**](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fbigdata-bigpicture)

In this course, ZDNet's Big Data correspondent [Andrew Brust](https://medium.com/@andrewbrust) teaches you about the concepts, companies, and technologies that make up the Big Data world, and how to devise a strategy for adopting Big Data in your organization.

[![](https://miro.medium.com/max/765/0*fPN0CQ0uFlRa7pYR.png)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fbigdata-bigpicture)

### 7. Understanding Machine Learning

Apart from Big Data technologies, one more thing I would like to explore in 2020 is machine learning. It's getting increasingly popular, and 2020 seems to be the right time to learn about machine learning algorithms.
Link to Join --- [**Understanding Machine Learning**](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Funderstanding-machine-learning)

[![](https://miro.medium.com/max/348/0*vOH4AQsyRv88e7ir.png)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Funderstanding-machine-learning)

If you work in technology today, you need to understand at least the basics of [machine learning](https://javarevisited.blogspot.com/2018/10/data-science-and-machine-learning-courses-using-python-and-R-programming.html), and this Pluralsight course provides a short introduction to the topic that assumes only a basic IT background. If you've been looking for a simple overview of machine learning, this is the course you should take.

### 8. [Spark Fundamentals](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fapache-spark-fundamentals)

Apache Spark is one of the most popular Big Data frameworks and one of the new frameworks I am aiming to explore in 2020.

Link to Join --- [*Spark Fundamentals*](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fapache-spark-fundamentals)

[![](https://miro.medium.com/max/278/0*zoZDALJq45gPFfqp.png)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fapache-spark-fundamentals)

I have already shortlisted some of the best Apache Spark online courses from Pluralsight in an earlier article, and one of them is Apache Spark Fundamentals. This course will teach you how to use Apache Spark to analyze your big data at lightning-fast speeds, leaving Hadoop in the dust! If you need more courses, check out my full list of [Apache Spark](https://javarevisited.blogspot.com/2017/12/top-5-courses-to-learn-big-data-and.html#axzz5bKDxWpoU) courses here.

### 9. [Angular: Getting Started](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fangular-2-getting-started-update)

This is another top Angular course from Pluralsight. It will teach you the Angular fundamentals required to create testable, MVC-style single-page applications with the Angular framework. In particular, you will learn how to bootstrap your Angular application; use Angular markup and expressions; create and use controllers; use built-in services; and create custom services.

Link to Join --- [*Angular: Getting Started*](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fangular-2-getting-started-update)

[![](https://miro.medium.com/max/348/0*entKFEwMipHR6_ta.png)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fangular-2-getting-started-update)

You will also learn test-driven development using [Angular](http://www.java67.com/2018/01/top-5-free-angular-js-online-courses-for-web-developers.html) and the MVC pattern, and learn to turn your application into a SPA using routing, create your own custom elements, and handle events using directives. You can take this course after an introductory Angular course to learn Angular better in 2020.

### 10. [Introduction to Android Development](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fandroid-intro)

This is a great course to learn Android from a beginner's perspective, by John Sonmez, one of the best instructors on Pluralsight. I have already shortlisted some of the best Android courses from Pluralsight in my last article.

Link to Join --- [***Introduction to Android Development***](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fandroid-intro)

That list includes both a beginner and an intermediate course on Android.
If you are also learning Android or improving your Android skills in 2020, you can take a look at my full list of shortlisted Android courses for Java programmers [here](http://javarevisited.blogspot.com/2017/12/top-5-android-online-training-courses-for-Java-developers.html). [![](https://miro.medium.com/max/348/0*jNZrPo1g35S7fTcc.png)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fandroid-intro) ###11\. Algorithms and Data Structures - Part 1 & 2 by Robert Horvick In [this course](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fads-part1), you will learn core data structures and algorithms used in everyday applications. You will learn the trade-offs involved with choosing each data structure, along with traversal, retrieval, and update algorithms. This is part 1 of a two-part series of courses covering algorithms and data structures. The first part covers linked lists, stacks, queues, binary trees, and hash tables, and the second part covers graph and string algorithms. **Link to Join:** [**Algorithms and Data Structures --- Part 1**](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fads-part1) [![](https://miro.medium.com/max/765/0*awP1MP6dKKcRV8yf.jpg)](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fads-part1) Btw, you need a [**Pluralsight membership**](http://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Flearn) to access this course. A monthly subscription costs around $29, but it also gives access to more than 500 courses, which is worth the money.
But, if you don't have a membership, you can still access this course by signing up for the [**10-day free trial**](http://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Flearn), which provides 200 minutes of watch time for free, without any commitment. And, if you need some free resources to complement your learning, here are some more **Free Online Courses** you may like to explore: [5 Free Courses to Learn Git and Github](http://javarevisited.blogspot.sg/2018/01/5-free-git-courses-for-programmers-to-learn-online.html#axzz568Oo1Jao) [The Complete DevOps Developer RoadMap](https://hackernoon.com/the-2018-devops-roadmap-31588d8670cb?gi=1490c6cb9f25) [Top 5 Courses to learn Jenkins](https://javarevisited.blogspot.com/2018/09/top-5-jenkins-courses-for-java-and-DevOps-Programmers.html) [5 Free Eclipse and JUnit Courses for Java Developers](http://www.java67.com/2018/02/5-free-eclipse-and-junit-online-courses-java-developers.html) [10 DevOps Course for Experienced Developers](https://javarevisited.blogspot.com/2018/09/10-devops-courses-for-experienced-java-developers.html) [5 Online training courses to learn Angular for Free](http://www.java67.com/2018/01/top-5-free-angular-js-online-courses-for-web-developers.html) [5 Free course to learn Blockchain technology](http://www.java67.com/2018/02/5-free-blockchain-technology-courses.html) [5 Free course to Kubernetes for DevOps](https://javarevisited.blogspot.com/2019/01/top-5-free-kubernetes-courses-for-DevOps-Engineer.html) That's all, guys. Enjoy these courses on Pluralsight while you are at home and make the best use of this COVID-19 time. I know it's not easy with everything going on, but staying positive and keeping a learning mindset means less stress, which is better for your immune system and probably your best defense against COVID-19, which doesn't have a cure yet. All the best with your learning hackathon, stay safe, be positive, and be healthy. >P.S.
- Pluralsight also has Skill IQ tests; if you are curious about where you rank among other programmers for a particular skill like React, JavaScript, or Node.js, take the **[Pluralsight Skill IQ test](https://pluralsight.pxf.io/c/1193463/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fskill-iq)**.
javinpaul
297,202
Onworks, how to run Ubuntu online
Continues from my last post: Onworks.net is a free Linux emulation site, it allows you to use dis...
0
2020-04-02T10:40:44
https://dev.to/17lwinn/onworks-how-to-run-ubuntu-online-cnm
linux
Continuing from my last post: ------ Onworks.net is a free Linux emulation site that allows you to use distros like Ubuntu for free, with root access and an internet connection. But how do we use it? 1. Go to https://onworks.net 2. Find Ubuntu 19 and click 'run online' 3. A new tab will open; please wait until a piece of red text appears reading something like 'OS ubuntu-19.10-desktop-amd64' 4. Click 'start' Repeat these steps for any distro on the site. FYI, the root/user password is 123456 (it says so in a banner on startup). You can now use this to install many things, but to be honest, I would use Fedora Workstation. -----
17lwinn
301,658
Explaining Bots and Its types - Part 1
Fact: Do you know that a normal person interacts at least 3-4 times a day with a bot? but when? and...
5,830
2020-04-07T16:59:46
https://dev.to/pepipost/explaining-bots-and-its-types-part-1-15po
machinelearning, datascience, beginners, javascript
> <b>Fact</b>: Do you know that a normal person interacts with a bot at least 3-4 times a day? But when? And how? Impressive, isn't it? It is also said that in the future a person will interact more with a bot than with their spouse. Let's unfold the truth behind this... ### <b> Introduction </b> In simple words, bots are dumb machines that are programmed to do repetitive tasks, automated by a human being to save time. This is the definition I arrived at after working on a few bots. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/eu5qnbj3326gpaalps6y.gif) Bots are coming into existence more and more because of their accuracy, speed, and increasingly human-like behavior. If you ask me where you can find a bot, I would reply in just a few words: "<b>Everywhere on the web</b>". The most common type of bot you can experience right now is a voice assistant. Just unlock your phone and speak: - "Hey, Siri" if you have an iPhone. - "Ok, Google" if you have Android. Similarly, you can find Alexa or Google Home/mini, which are home assistants used for many tasks a person wants to execute within the house, like playing music, asking for news, setting reminders, and many more. The above examples are advanced bots contributed by giant technology leaders, but there are many small bots that developers write for their own convenience; these can be chatbots, web crawlers, social bots, and even malicious bots. ### <b> Good Bot vs Bad Bot</b> In this section, let's dive deep into the nature of bots and how they help us with our day-to-day tasks. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/q79ha4tew9gntk4jjvkn.png) So, bots are developed to automate various repetitive tasks, which turns out to be useful in many ways, but some are developed to harm your resources. That is why bots are classified into good bots and bad bots. - <b>What is a Good Bot?</b> - Good bots are built to gain profit for the business.
These bots are beneficial for both businesses and individuals. A simple example: whenever you search for websites, products, or services, you often get close-to-accurate results. How? - This is possible because of the search engine's spider bot, also known as a <b>crawler bot</b>. Bots like - [Googlebot](https://support.google.com/webmasters/answer/182072?hl=en) - Slurp Bot [Yahoo] - Alexa crawler [Amazon Alexa] - Reputed companies deploy these bots following the webmaster's rules for crawling activity and indexing rate, defined in [robots.txt](https://www.techinasia.com/talk/robotstxt-secure-website-content). - Besides these search engine crawlers, there are many different third-party bots like - [Slack bot.](https://api.slack.com/bot-users) (Any complex integration can be built, with notifications posted directly to the channel). - [Telegram bot.](https://core.telegram.org/bots) - Pingdom bot. (website monitoring bot) By this time you should be clear about good bots: any bot that follows the rules, regulations, and policies of the webmaster, and whose activity results in profit for the business, is a good bot. - <b> What is a Bad Bot?</b> - As we know, bad is always the opposite of good. These bots are built by hackers, cybercriminals, and fraudsters engaged in illegal activities. - These bots are programmed to do malicious jobs on the web. - Let's take an example: you have a toy business, and you have a unique toy that you made yourself. Your competitor may build a scraper bot that collects all your content, product reviews, feedback, and details of the new toy you are working on, and publishes fake reviews on other websites. - A second example: bots drive thousands of visits to your website within a minimal span of time, which chokes availability for genuine users. Such bots are highly injurious to brand reputation and end up hampering the website's search engine ranking.
### <b> Types of Bots</b> <b>Good Bots</b> are used to gain profit for the business and also help to build your domain and website health. These bots help by crawling websites for search engine optimization (SEO), collecting information, obtaining marketing analytics, and much more. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/qe87o9zpcwz13agl8uq8.png) - <b>Social Network Bots</b>: These bots are managed and supported by social networking sites like [Facebook](https://developers.facebook.com/docs/workplace/integrations/custom-integrations/bots/) and Twitter. The bots help give visibility to the brand website and drive engagement to their platforms. - <b>Feedfetcher Bots</b>: These bots are used to collect information from different websites and help keep subscribers updated on products, events, and blog posts. - <b>Partner Bots</b>: These are third-party bots that are developed and supported by SaaS organizations like Slack, PayPal, Stripe, and many more. These bots help to integrate directly with programs within the organization. - <b>Monitoring Bots</b>: These bots are programmed to periodically monitor and update us about the uptime and health of servers/websites. - <b>Search Engine Crawlers</b>: These are the most common and maybe the most used bots in this modern world. No matter who you are, you need search engines for simplicity and to get your work done. Let's look at a few <b>Bad Bots</b> ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ek6yj2ehspqzr98i56xx.png) - <b>Scraper Bots</b>: - These scrapers are used to steal vital information, prices, updates, and content. This helps competitors undermine business strategies and target the company's revenue. - A point to remember: competitors often use third-party scrapers to perform this illegal act. - <b>Spam Bots</b>: - These spam bots target community forums, lead collection forms, and comment sections.
- They usually target these sections by adding unwanted promotional advertisements and links, and flood the comment section by trolling users. - The above activities discourage genuine users from commenting or using the forum's information. The main motive of such bots is to insert links to phishing pages built to collect critical user information, including bank accounts, usernames, and passwords. - <b>Scalper Bots</b>: - These bots target ticketing websites; they purchase hundreds to thousands of tickets and sell them through third-party sellers, causing genuine ticket-selling websites to lose their customers. > - Each and every activity of a bot depends on <b>data</b>; it can be training data or real-time data. - I always recommend not exposing your websites over HTTP; instead, always use HTTPS and appropriate mechanisms to restrict such bots from crawling your website. ### <b>Concluding...</b> - In this world of machines and artificial intelligence, I want each and every one of you to learn how bots work and why you might need one. - If you are keen to learn this new paradigm, "<b>first decode it</b>": that is what I have discussed above in this blog. I hope you enjoyed reading! Stay tuned for Part 2, where I will be building a <i>Telegram bot from scratch.</i> Thank You! Do follow and share 🤗 __<i>Ref: [shieldsquare](https://www.shieldsquare.com/what-are-bots/)</i>__
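As noted above, good bots respect the webmaster's crawling rules published in robots.txt. That check can also be done programmatically; here is a minimal Python sketch using only the standard library (the rules, the bot name `GoodBot`, and the URLs are made up for illustration, not taken from the article):

```python
# Sketch: how a well-behaved bot might check robots.txt before crawling.
# The rules string and URLs below are hypothetical examples.
from urllib import robotparser

rules = """
User-agent: *
Disallow: /private/
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# A good bot fetches a page only if can_fetch() allows it.
print(parser.can_fetch("GoodBot", "https://example.com/public/page"))   # True
print(parser.can_fetch("GoodBot", "https://example.com/private/data"))  # False
```

A real crawler would first download the site's robots.txt (for example via `parser.set_url(...)` followed by `parser.read()`) before deciding which pages to visit.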
sahuvikramp
301,716
How to upload images to Amazon S3 using the AWS Amplify Storage module
AWS Amplify is a development platform for building secure, scalable mobile and web applications. It p...
0
2020-04-28T05:42:14
https://dev.to/danielbayerlein/how-to-upload-images-to-amazon-s3-using-the-aws-amplify-storage-module-of6
aws, s3, amplify, react
[AWS Amplify](https://aws.amazon.com/amplify/) is a development platform for building secure, scalable mobile and web applications. It provides several [libraries](https://docs.amplify.aws/lib/q/platform/js) for communication with AWS services. In this blog post I show you how to store images (audio, video, etc. are also possible) on [Amazon S3](https://aws.amazon.com/s3/) using a [React](https://reactjs.org/) application. This example uses the [`@aws-amplify/storage`](https://www.npmjs.com/package/@aws-amplify/storage) and the [`@aws-amplify/auth`](https://www.npmjs.com/package/@aws-amplify/auth) packages. More on this later. To manage the infrastructure I use the [Serverless Framework](https://serverless.com/). ## Amazon S3 and Cognito Identity Pool For the upload we need an S3 bucket to store the files and a Cognito Identity Pool for access control. ### Configure S3 bucket First of all you need an S3 bucket. I create it as a private bucket called `example-bucket`. The CORS configuration is important, otherwise some CORS exceptions occur and the upload will not work. You can also define the allowed methods - in the example `GET` and `PUT` are allowed. ```yaml S3ImageBucket: Type: AWS::S3::Bucket Properties: BucketName: example-bucket AccessControl: Private CorsConfiguration: CorsRules: - AllowedOrigins: - '*' AllowedHeaders: - '*' AllowedMethods: - GET - PUT MaxAge: 3000 ExposedHeaders: - x-amz-server-side-encryption - x-amz-request-id - x-amz-id-2 - ETag ``` ### Configure Cognito Identity Pool After the S3 bucket has been created, a Cognito Identity Pool must be created. I use an existing Cognito User Pool as the provider. This can be configured with the `CognitoIdentityProviders` option. Of course you can also use another provider. In the policy, I specify which actions may be carried out. In this case `s3:GetObject` and `s3:PutObject`.
```yaml CognitoIdentityPool: Type: AWS::Cognito::IdentityPool Properties: IdentityPoolName: ${self:service}-${self:provider.stage}-${self:provider.region}-IdentityPool AllowUnauthenticatedIdentities: false CognitoIdentityProviders: - ClientId: 111xxx111xxx111xxx111 ProviderName: cognito-idp.eu-central-1.amazonaws.com/eu-central-1_XXX CognitoIdentityPoolRoles: Type: AWS::Cognito::IdentityPoolRoleAttachment Properties: IdentityPoolId: Ref: CognitoIdentityPool Roles: authenticated: !GetAtt CognitoAuthRole.Arn CognitoAuthRole: Type: AWS::IAM::Role Properties: Path: / AssumeRolePolicyDocument: Version: '2012-10-17' Statement: - Effect: 'Allow' Principal: Federated: 'cognito-identity.amazonaws.com' Action: - 'sts:AssumeRoleWithWebIdentity' Condition: StringEquals: 'cognito-identity.amazonaws.com:aud': Ref: CognitoIdentityPool 'ForAnyValue:StringLike': 'cognito-identity.amazonaws.com:amr': authenticated Policies: - PolicyName: ${self:service}-${self:provider.stage}-${self:provider.region}-S3CognitoAuthPolicy PolicyDocument: Version: '2012-10-17' Statement: - Effect: 'Allow' Action: - 's3:GetObject' - 's3:PutObject' Resource: - !Join [ '', [ !GetAtt S3ImageBucket.Arn, '/*' ] ] ``` 💡 You can also set a role for unauthenticated users via `unauthenticated` if your application requires access to the S3 bucket. ## The Storage module The `@aws-amplify/storage` module provides a simple mechanism for managing user content for your app in public, protected, or private storage buckets. ### Configure Amplify Storage The configuration is very simple. You only have to set the `bucket` name and the `region` of this S3 bucket. ```js import Storage from '@aws-amplify/storage' Storage.configure({ AWSS3: { bucket: 'example-bucket', region: 'eu-central-1' } }) ``` ## The Auth module Additionally, we need the `@aws-amplify/auth` module so that the application can authenticate itself.
### Configure Amplify Auth The configuration object expects the following parameters: * `region`: Region of your Amazon Cognito * `identityPoolId`: ID of your Amazon Cognito Identity Pool * `userPoolId`: ID of your Amazon Cognito User Pool * `userPoolWebClientId`: Web Client ID of your Amazon Cognito User Pool As code it looks like this: ```js import Auth from '@aws-amplify/auth' Auth.configure({ region: 'eu-central-1', identityPoolId: 'eu-central-1:xxx-xxx-xxx-xxx-xxxxxx', userPoolId: 'eu-central-1_XXX', userPoolWebClientId: '111xxx111xxx111xxx111' }) ``` ## Using Amplify Storage Enough configurations, time for usage. 🎉 With the `Storage.put()` function you can put the data to S3. It returns a `{key: S3 Object key}` object on success. ```js const S3ImageUpload = () => { const onChange = async (file) => { const { key } = await Storage.put('example.png', file, { contentType: 'image/png' }) console.log('S3 Object key', key) } return ( <input type='file' accept='image/png' onChange={(e) => onChange(e.target.files[0])} /> ) } ``` With the return value (`key`) and the function `Storage.get()` you can retrieve the image again. 📖 All Storage functions can be found in the [documentation](https://docs.amplify.aws/lib/storage/upload/q/platform/js).
danielbayerlein
301,734
Neural Network Basics: Gradient Descent
In the previous post, we discussed what a loss function is for a neural network and how it helps us t...
5,321
2020-04-07T12:44:30
https://makshay.com/neural-network-basics-gradient-descent
machinelearning, learninpublic
In the [previous post](https://dev.to/_akshaym/neural-network-basics-training-a-neural-network-47gl), we discussed what a loss function is for a neural network and how it helps us to train the network in order to produce better, more accurate results. In this post, we will see how we can use gradient descent to optimize the loss function of a neural network. ## Gradient Descent Gradient Descent is an iterative algorithm to find the minimum of a differentiable function. It uses the slope of a function to find the direction of descent and then takes a small step in the descent direction in each iteration. This process continues until it reaches the minimum value of the function. ![Gradient Descent](https://dev-to-uploads.s3.amazonaws.com/i/jmzj24jwurnsg5uuy873.png) Let's say we want to optimize a function J(W) with respect to the parameter W. We can summarize the working of the Gradient Descent algorithm as follows: 1. Start at any random point on the function. 2. Calculate the slope of the function at that point. 3. Take a small step in the direction opposite to the slope of the function. 4. Repeat until it reaches the minimum value. The algorithm works in the same way for any N-dimensional differentiable function. The example above shows a 2D plot because it is easier for us to visualize. For our neural network, we need to optimize the value of our loss function J(W) with respect to the weights (W) used in the network. We can write the algorithm as: **Algorithm** ![Summary](https://dev-to-uploads.s3.amazonaws.com/i/np62rev0ganz1s1t6jba.png) ## Learning Rate The parameter <span><img style="width: 18px" src="https://math.now.sh?from=%5Calpha" /></span> is known as the learning rate. It is the rate at which we descend the slope of the function. - A small learning rate means that the algorithm will take small steps in each iteration and will take a long time to converge.
- A very large learning rate can cause the algorithm to overshoot the point of minimum value and then overshoot again in the opposite direction, which can eventually cause it to diverge. To find an optimum learning rate, it is a good idea to start with something small and slowly increase the learning rate if it takes a long time to converge. ## Conclusion To summarize the basics of a neural network: ![Summary](https://dev-to-uploads.s3.amazonaws.com/i/e3kf2cztgngul5g4zoi8.png) 1. The perceptron is the basic building block of a neural network. It multiplies an input with a weight and applies a non-linearity to the product. 2. Perceptrons connect together to form a layer of a neural network. There are multiple sets of weights between different layers. 3. To train the network, we choose a loss function and then optimize the loss function with respect to the weights W using gradient descent. This concludes this series on the basics of neural networks. I would love to hear your views and feedback. Feel free to hit me up on [Twitter](https://twitter.com/_akshaym).
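The four steps described above can be sketched in a few lines of Python for a one-dimensional case (a minimal illustration; the quadratic loss J(w) = (w - 3)^2, the learning rate, and the step count are assumptions chosen for this example, not from the original post):

```python
# Gradient descent on an assumed 1-D loss J(w) = (w - 3)^2,
# whose minimum is at w = 3.

def gradient(w):
    # Slope of the loss: dJ/dw = 2 * (w - 3)
    return 2 * (w - 3)

def gradient_descent(w0, lr=0.1, steps=100):
    w = w0  # step 1: start from an arbitrary point
    for _ in range(steps):
        g = gradient(w)   # step 2: slope at the current point
        w -= lr * g       # step 3: small step opposite to the slope
    return w              # step 4 is the loop: repeat until near the minimum

print(round(gradient_descent(0.0), 4))  # prints 3.0
```

With too large a learning rate (anything above 1.0 for this particular loss), the same loop overshoots and diverges, which is exactly the behavior described in the Learning Rate section.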
_akshaym
301,737
Announcing GraphQL Fireside Chats: A Special Series of Online Events
Photo by Lāsma Artmane on Unsplash We are thrilled to be partnering with This Dot for a special seri...
0
2020-04-16T04:44:11
https://hasura.io/blog/graphql-fireside-chats/
graphql
--- title: Announcing GraphQL Fireside Chats: A Special Series of Online Events published: true date: 2020-04-07 12:11:05 UTC tags: #graphQL canonical_url: https://hasura.io/blog/graphql-fireside-chats/ --- ![](https://hasura.io/blog/content/images/2020/04/lasma-artmane-gd88udnwc2Q-unsplash.jpg)<figcaption>Photo by <a href="https://unsplash.com/@lasmaa?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Lāsma Artmane</a> on <a href="https://unsplash.com/s/photos/bonfire?utm_source=unsplash&amp;utm_medium=referral&amp;utm_content=creditCopyText">Unsplash</a></figcaption> We are thrilled to be partnering with [This Dot](http://thisdot.co) for a special series of GraphQL online events! Starting on April 14th, we will be co-hosting weekly “GraphQL Fireside Chats”, where each week we’ll focus on a different GraphQL related topic. Join the panel of special guests for a roundtable discussion and lively Q&A! The purpose of these events is for the GraphQL community to come together and discuss some key topics in the GraphQL ecosystem and facilitate cross collaboration amongst the community. We are thrilled to be able to connect with fellow GraphQL enthusiasts! Check out the various topics we’ll be discussing below: - **[April 14, 2020](https://www.graphql-meetup.com/#/2020-04-14T12:00-04:00)**: Platform and data centric use cases with GraphQL with special guests, **[Sasha Solomon](https://twitter.com/sachee)**, **[Eve Porcello](https://twitter.com/eveporcello)**, and **[Alex Banks](https://twitter.com/moontahoe)**. - **[April 21, 2020](https://www.graphql-meetup.com/#/2020-04-21T12:00-04:00)**: Performance and monitoring tips/tricks/caveats with GraphQL with special guests, **[Mark Stuart](https://twitter.com/mark_stuart)** and **[Jon Wong](https://twitter.com/jnwng)**. 
- **[April 28, 2020](https://www.graphql-meetup.com/#/2020-04-28T12:00-04:00)**: Authorization and security patterns for GraphQL with special guest, **[German Frigerio](https://twitter.com/gagoar)**. - **[May 12, 2020](https://www.graphql-meetup.com/#/2020-05-12T12:00-04:00)**: Event driven applications with GraphQL & serverless with special guests, **[Simona Cotin](https://twitter.com/simona_cotin)** and **[Christian Nwamba](https://twitter.com/codebeast)**. If you would like to attend the live stream and interact with the speakers during the Q&A, sign up for each event. You'll receive a live link the day of! Looking forward to seeing you there! Enjoyed this article? Join us on [Discord](https://discord.gg/hasura) for more discussions on Hasura & GraphQL! Sign up for our [newsletter](http://eepurl.com/dBUfJ5) to know when we publish new articles.
hasurahq_staff
301,742
Quick Security in VS Code with CodeSweep
I'll start with a disclosure and mention that I'm working for the company that put out this product (...
0
2020-04-07T13:01:12
https://dev.to/coadaflorin/quick-security-in-vs-code-with-codesweep-1b20
vscode, appsec, sast, security
_I'll start with a disclosure and mention that I work for the company that put out this product (HCL AppScan), but I'm writing this as Florin, the guy who writes some code and has an interest in security._ TL;DR: here's a video: https://www.youtube.com/watch?v=zQvonHi4ak8 and the plugin is here: https://hclsw.co/codesweep In recent years everyone has been talking more about security, cybersec, IT security, cyber, etc. If you're a developer and have not heard about any of these, I would be very surprised. One of the key areas companies have been focusing on lately is application security: the area of security that focuses on securing applications and making sure they can't be exploited to hurt the company, the customers, or its partners. You've seen this happen a bunch of times to various companies. Just a quick Google search could produce a very interesting list for you. There are various types of vulnerabilities that could be exploited to cause harm. I'll classify them into 2 main categories: proprietary code (your code) and 3rd party code (libraries). The purpose of HCL AppScan CodeSweep is to help you find potential vulnerabilities in your code as you introduce them. They range from Cross-Site Scripting, SQL Injection, hardcoded credentials, and OS Injection to old encryption algorithms, and so on. Created as a VS Code plugin, the tool will review your files upon saving them. If there's something potentially dangerous, we'll flag it. This gives you two options: fix it or ignore it. It's up to you what risk you're willing to take, but try to understand the issue before jumping to the conclusion that it's not a problem. :) If you have any questions, feel free to join the community. Without adding more words to something that shouldn't take more than 140 characters, you can get the plugin here: https://hclsw.co/codesweep And you can join our community here: https://hclsw.co/CodeSweepCommunityInvite
coadaflorin
301,807
WEB APP REACTJS POKEMON API
https://lnkd.in/e9mg7jA Fala, galera, beleza? Um belo dia me deparei com uma API com todos os dados...
0
2020-04-07T14:13:46
https://dev.to/raphaeldefalcoayres/web-app-reactjs-pokemon-api-2jlp
https://lnkd.in/e9mg7jA Hey everyone, how's it going? One fine day I came across an API with all the data about Pokémon (https://pokeapi.co/), and I instantly felt a wave of nostalgia! As a kid I always followed Pokémon and thought about creating something around it. Well, the time has come! I created this project to test and practice my knowledge of JavaScript ES6 with ReactJS and json-server, and I hope you like the result! Here are the links to the projects, which are open source: front-end: https://lnkd.in/e86vtVu back-end: https://lnkd.in/eHMrq8y Add me on LinkedIn! https://www.linkedin.com/in/raphael-de-falco-ayres-6b053826/ Leave a LIKE and share! Comment with suggestions so I can improve, and suggest new videos too! Thanks, cheers!
raphaeldefalcoayres
308,682
Challenge 1 - Intro to Vue
A post by Ervin Ismu
0
2020-04-14T11:10:41
https://dev.to/ervinismu/challenge-1-intro-to-vue-1l26
codepen
{% codepen https://codepen.io/ervinismu-the-flexboxer/pen/MWaapjM %}
ervinismu
301,813
StartNames - www.startnames.co
Like everybody else amidst this crisis, I have found myself stuck at home without much to do. This su...
0
2020-04-07T14:17:59
https://dev.to/joeltankard/startnames-www-startnames-co-1p3i
showdev, sideprojects
Like everybody else amidst this crisis, I have found myself stuck at home without much to do. This sudden free time has led me back to my one passion: building awesome products. Too often I struggle to come up with exactly what to build... My favorite technique to overcome this is to start with the name. So I built a product around this idea. I'm super excited to announce my first product on Product Hunt, StartNames! I'd love to hear your feedback and hope StartNames can inspire your next project 😄 http://startnames.co
joeltankard
301,824
Spoiler: 5-star reviews are a good start, right?
Who does not want to learn more about the team he is going to work with? Especially when your product...
0
2020-04-07T14:21:01
https://dev.to/hiretester/spoiler-5-star-reviews-are-a-good-start-right-2196
testing
Who does not want to learn more about the team he is going to work with? Especially when your product quality is at stake. If you'd like to know the HireTester team better, follow the link to check out what clients say about us on Clutch: https://clutch.co/profile/hire-tester
akh
301,858
Previously on our Developer Economics newsletter
Our newsletter is filled with dev-related news, jokes, inspirational quotes, events, and, most importantly, resources. Have a look into the resources that developers enjoyed the most last month
0
2020-04-07T16:45:01
https://dev.to/developernationsurvey/previously-on-our-developer-economics-newsletter-8p4
newsletter, resources, learning, tools
--- title: Previously on our Developer Economics newsletter published: true description: Our newsletter is filled with dev-related news, jokes, inspirational quotes, events, and, most importantly, resources. Have a look into the resources that developers enjoyed the most last month tags: newsletter, resources, learning, tools --- Our mission here, in [Developer Economics](https://www.developereconomics.com/), is to help developers become better developers! And how do we achieve that, you may ask? First come the data. We create and publish reports and graphs, based on data from developers who take our surveys, allowing them to benchmark themselves against the global programming community. Secondly, there is our 🥁🥁 [Developer Economics newsletter](https://developereconomics.us1.list-manage.com/subscribe?u=f5cdd9d9e59e9c39c83d7b50f&id=4b023cda2e). A newsletter filled with dev-related news, jokes, inspirational quotes, events, but most importantly, resources 📚. So, let's have a look into the resources that developers enjoyed the most last month, shall we? 💡 Learning [The 25 best programming books of all-time.](https://www.daolf.com/posts/best-programming-books/) A data-backed answer. [DAOLF] [20+ Machine Learning datasets & project ideas.](https://www.kdnuggets.com/2020/03/20-machine-learning-datasets-project-ideas.html) Finding good datasets to work with can be challenging, so this article discusses more than 20 great datasets along with machine learning project ideas for you to tackle today. [KDNUGGETS] [Awesome remote jobs and resources.](https://github.com/lukasz-madon/awesome-remote-job) Here is an awesome repository with content (articles, books, videos) about working remotely. [GITHUB.LUKASZ.MADON] [Remote work survival guide.](https://blog.remotive.io/remote-work-survival-guide/) Employees, Managers, Leaders - here's a short guide on working remotely in 2020.
[REMOTIVE] All the startups threatened by [iOS 14's new features](https://techcrunch.com/2020/03/10/all-the-startups-threatened-by-ios-14s-new-features/). Fitness, wallpaper, and lost item-finding startups could have a big new competitor baked into everyone’s iPhones. Leaks of the code from iOS 14 that Apple is expected to reveal in June signal several new features and devices are on the way. Startups could be at risk due to Apple’s ability to integrate these additions at the iOS level, instantly gain an enormous install base and offer them for free or cheap, as long as they boost sales of its main money maker, the iPhone. [TECHCRUNCH] 🧰 Tools [Free tools and services for businesses during the COVID-19 crisis.](https://www.zdnet.com/article/free-tools-and-services-for-businesses-during-the-covid-19-crisis/) Companies like Atlassian, Okta, Tableau and Intermedia are extending free versions of their offerings to organizations to help them stay afloat during the global pandemic. [ZDNET] [Top React.js tools for developers.](https://dzone.com/articles/top-reactjs-tools-for-developers) Discussing some of the top React tools for developers, including Reactide, React Sight, and React Toolbox. [DZONE] [Unix Toolbox.](http://cb.vu/unixtoolbox.xhtml) A collection of Unix/Linux/BSD commands and tasks which are useful for IT work or for advanced users. This is a practical guide with concise explanations; however, the reader is supposed to know what s/he is doing. [CB.VU] 🖥️ Code [Add dark mode to your website with just a few lines of code.](https://inspiredwebdev.com/add-dark-mode-to-your-website) In this short tutorial we are going to look at how to add support for dark mode to your website in different ways: first with just CSS and lastly with a toggle built with JavaScript. [INSPIREDWEBDEV] Found something you like?
Why not [sign up](https://developereconomics.us1.list-manage.com/subscribe?u=f5cdd9d9e59e9c39c83d7b50f&id=4b023cda2e) to our Developer Economics newsletter and receive fresh developer resources and news directly in your inbox? It's spam-free, we promise 📥 Our next newsletter is coming up on Thursday.
developernationsurvey
301,864
How Django picks a template to render, and why the Best Practice is right
From experimenting with a framework called Django to use for building a Web GUI for a project...
0
2020-04-07T15:30:44
https://dev.to/peepeepopapapeepeepo/django-template-render-best-practice-4d33
django, python, templates
While experimenting with a framework called Django to build a Web GUI for a project, I got as far as [Writing your first Django app, part 3](https://docs.djangoproject.com/en/3.0/intro/tutorial03/) and got confused about the template directory structure, which has to be `project_name/app_name/templates/app_name/`, and about the fact that rendering a template requires referring to it as `app_name/home.html`. The docs say this is a Best Practice. What confused me was... why does the directory structure repeat `app_name` twice? Saying it once should be enough, shouldn't it? And when calling a template to render, we also have to put `app_name` in front of the template name; we are already working inside this app, so why specify it again? > **OK! If it is a Best Practice, this apparent lack of sense must have an explanation.** Let's see what I found (I should say first that I am just learning Django; many of you may already know this 😁). In Django's `settings.py`, the "DjangoTemplates" entry looks like this: ``` python TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] ``` We will focus on these 2 parameters first: 1. `DIRS` is the list of paths (a list of strings) where we want Django to search for templates (default = []) 2. `APP_DIRS` is a boolean that says whether to look for templates inside each app's directory (default = True) The "DjangoTemplates" backend searches for a template in the following order: 1. It searches `DIRS` first, in the order of the list we define; for example, `DIRS = ['path/to/template1', 'path/to/template2', 'path/to/template3']` is searched in this order: > /path/to/template1/ > /path/to/template2/ > /path/to/template3/ 2.
Then it searches the app directories: if `APP_DIRS` is set to `True`, Django looks at the `INSTALLED_APPS` variable in `settings.py`, which is a list of strings such as `INSTALLED_APPS = ['app-a', 'app-b', 'app-c']`, and searches in the order of that list, like this: > /path/to/project/app-a/templates/ > /path/to/project/app-b/templates/ > /path/to/project/app-c/templates/ Putting the examples above together, the full order of directories Django uses to find a template is: > /path/to/template1/ > /path/to/template2/ > /path/to/template3/ > /path/to/project/app-a/templates/ > /path/to/project/app-b/templates/ 👈 we write our template here > /path/to/project/app-c/templates/ With this lookup logic, if we write a template named `home.html` in `app-b` and specify only `home.html` as the name to render, and `app-a` happens to have a `home.html` as well, Django will render `app-a`'s `home.html` instead, because it finds that one first. This behavior is why we follow the Best Practice of creating a subdirectory named after the app underneath that app's templates directory, like this: > /path/to/template1/ > /path/to/template2/ > /path/to/template3/ > /path/to/project/app-a/templates/<mark>app-a</mark>/ > /path/to/project/app-b/templates/<mark>app-b</mark>/ 👈 we write our template here > /path/to/project/app-c/templates/<mark>app-c</mark>/ And when we call a template, we refer to it by a name like `app-b/home.html`. This way, the render never picks up a template from the wrong app.
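The first-match-wins lookup described above is easy to reproduce in a few lines of plain Python, with no Django installed. The `find_template` helper and the `/path/to/project/...` paths below are hypothetical, made up to mirror the example apps; this is only a sketch of the search-order behavior, not Django's actual loader code:

```python
def find_template(name, dirs, installed_apps, existing_files):
    """Return the first candidate path for `name` that exists, else None.

    Mirrors the order described above: DIRS first, then each installed
    app's templates/ directory, first match wins.
    """
    candidates = [f"{d}/{name}" for d in dirs]
    candidates += [f"/path/to/project/{app}/templates/{name}"
                   for app in installed_apps]
    for path in candidates:
        if path in existing_files:
            return path
    return None


# Pretend these template files exist on disk
files = {
    "/path/to/project/app-a/templates/home.html",       # un-namespaced
    "/path/to/project/app-b/templates/home.html",       # un-namespaced
    "/path/to/project/app-b/templates/app-b/home.html"  # namespaced (best practice)
}

# Un-namespaced name: app-a's copy shadows app-b's, because app-a is searched first
print(find_template("home.html", [], ["app-a", "app-b"], files))
# → /path/to/project/app-a/templates/home.html

# Namespaced name (the Best Practice): only app-b's copy can match
print(find_template("app-b/home.html", [], ["app-a", "app-b"], files))
# → /path/to/project/app-b/templates/app-b/home.html
```

Namespacing the template under its own app's name makes the lookup unambiguous, which is exactly what the Best Practice buys you.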
peepeepopapapeepeepo
308,706
Asteroid App: Phase Three
For the Twilio Hackathon, I'm doing an automated WhatsApp account to which you can ask information ab...
5,828
2020-04-14T11:57:25
https://dev.to/savagepixie/asteroid-app-phase-three-1a3m
twiliohackathon, javascript, node
For the Twilio Hackathon, I'm doing an automated WhatsApp account that you can ask for information about the closest asteroid to Earth on a particular date. ## Today's Work Today was the hardest day of work so far. Not because anything was particularly challenging in itself, but because all I did was very unfamiliar terrain for me. I deployed the asteroid app on Heroku, configured the endpoint to properly interpret and respond to Twilio's requests, and set that as the endpoint for incoming messages on Twilio. I'm not going to bore you with the details of all that. Big thanks to @avalander for helping me deploy the app on Heroku, though. It is nice that now I just need to push changes to the repository and they get automatically deployed. The API's endpoint wasn't all that hard to configure, but I had to read through a lot of documentation to get there. It also didn't help that I was parsing a JSON instead of stringifying it, which caused it to throw an error. I didn't get to tidying up the code. But oh well, that'll be another day's trouble. ## Next Steps Now I finally have a bare-bones app. It is extremely simple, but it is completely functional. Next on my list is: - Tidy up the code a bit (this time for real). - Create a fallback URL for error handling. - Allow for some level of customisation when sending a request for asteroid data.
savagepixie
301,886
Flask Delicious Tutorial : Building a Library Management System Part 4 - Focus on Responses
The reference kit for Flask responses
0
2020-04-07T16:24:53
https://dev.to/abdurrahmaanj/flask-delicious-tutorial-building-a-library-management-system-part-4-focus-on-responses-4j18
flask, python, api, tutorial
--- title: Flask Delicious Tutorial : Building a Library Management System Part 4 - Focus on Responses published: true description: The reference kit for Flask responses tags: flask, python, api, tutorial cover_image: https://dev-to-uploads.s3.amazonaws.com/i/up9w1wvic4czas335pzr.png --- Previous: [Part 3: Routes](https://dev.to/abdurrahmaanj/flask-delicious-tutorial-building-a-library-management-system-part-3-routes-d68) This post is a reference for Flask responses: what you need to return from your views. Feel free to come back again. I have configured everything in this repo: [DeliciousFlask-4.1](https://github.com/DeliciousFlask/DeliciousFlask-4.1) Download and run app.py; see [Part 2](https://dev.to/abdurrahmaanj/flask-delicious-tutorial-building-a-library-management-system-part-2-start-with-a-loaded-skeleton-3j96) in the series if you are a beginner in Python. It's the post I wish I had when I started Flask. The repo covers different kinds of returns; the snippets below assume the usual imports (`from flask import Flask, jsonify, render_template, redirect, url_for`). Let's begin by returning a string. ## Returning Strings ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/bc0xx35unylri9vv4kt8.png) Going to _http://127.0.0.1:5000/_ gives us `home` as configured here: ```python @app.route('/') def index(): return 'home' ``` ## Returning Integers ```python @app.route('/return-int') def return_int(): return 1 ``` Going to _http://127.0.0.1:5000/return-int_ gives us: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/zczgu9ddsg1yeylsegq0.png) as it is not possible to return integers. The whole error line says > TypeError: The view function did not return a valid response. The return type must be a string, dict, tuple, Response instance, or WSGI callable, but it was a int. So, you know what to return. ## Returning JSON Python dictionaries map naturally onto JSON, which makes moving data between the two formats easy. We use jsonify to convert a dictionary into a JSON response.
```python @app.route('/return-json') def return_json(): data = { 'name': 'Umar', 'address': 'Port Louis, Mauritius', 'age': 15, 'has_pass': True } return jsonify(data) ``` Going to _http://127.0.0.1:5000/return-json_ gives us: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/6fde5ujkxywoy43o3azo.png) Note that the `True` we returned became the JavaScript `true`, our integer of 15 became a JavaScript number, and our string became a JavaScript string, though it looks the same. JSON means JavaScript Object Notation. ## Returning HTML When you return a string, you are actually returning an HTML string. ```python @app.route('/return-html') def return_html(): return '<h1>I am a BIG header enclosed in a h1 tag</h1>' ``` Going to _http://127.0.0.1:5000/return-html_ gives us: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/r10o1iugev7oy9g7lm7w.png) But of course it is not convenient to build whole pages as strings. Let's see how to return files. ## Return Files ```python @app.route('/return-file') def return_file(): return render_template('index.html') ``` Going to _http://127.0.0.1:5000/return-file_ gives us a blank page ... Try putting some html in the index file found in the templates folder. Flask searches for files in a folder named templates by default. ## Redirect ```python @app.route('/redirect-home') def return_redirect(): return redirect('/') ``` Going to _http://127.0.0.1:5000/redirect-home_ redirects to _http://127.0.0.1:5000/_ ## Redirect to function ```python @app.route('/redirect-function-html') def return_redirect_function(): return redirect(url_for('return_html')) ``` Going to _http://127.0.0.1:5000/redirect-function-html_ redirects to _http://127.0.0.1:5000/return-html_ url_for looks up the function by name and builds its URL, which redirect then sends the client to. ## Redirect in case of blueprints Though we'll cover blueprints later on, I am including this here to serve as a reference. Let's say you have a blueprint named `book`. You want to redirect to the function named `index`.
You do it like this: ```python return redirect(url_for('book.index')) ``` Hope you liked this summary sheet! Stay tuned for the next part! _My mail if you did not understand something: arj.python at gmail dot com_ You have anything you'd want to see in this series? Tell me below!
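A side note on the JSON section above: `jsonify` relies on the same Python-to-JSON type mapping as the standard library's `json` module, so you can see the conversions (Python `True` to JSON `true`, and so on) without running Flask at all. This is only an illustration of the type mapping, not of Flask itself:

```python
import json

# The same sample data used in the /return-json route above
data = {
    'name': 'Umar',
    'address': 'Port Louis, Mauritius',
    'age': 15,
    'has_pass': True,
}

# Python's True serializes to JSON's true; int and str keep their shape
print(json.dumps(data))
# → {"name": "Umar", "address": "Port Louis, Mauritius", "age": 15, "has_pass": true}
```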
abdurrahmaanj
301,900
Currying for front-end developers
Currying is a concept from the computer science world which has become popular in Javascript thanks t...
0
2020-04-07T16:31:58
https://dev.to/danlaush/currying-for-front-end-developers-47hb
javascript, react, vue
Currying is a concept from the computer science world which has become popular in Javascript thanks to the Functional Programming paradigm. It’s the idea of calling a series of functions with a single argument, instead of one function with many arguments: ```js myFunction(a, b, c, d); // vs myFunction(a)(b)(c)(d); ``` This is a pretty heavy-handed simplification and skips over a lot of the true power of currying, but I’m a front-end developer who mostly focuses on the UI. I never made an effort to understand it. It felt very… computer science-y. I didn’t see how I would use it, so I skimmed it and moved on. Then I found myself needing to **conditionally transform some data in a .then() Promise chain**, and suddenly currying was useful and even intuitive. This article explores one use for currying. There are many more benefits and I leave the rest of the internet’s resources to help you with those. Maybe this can be the start of a beautiful journey into functional programming for you. We’ll **start with an example that seems a bit silly** (adding two numbers) in order to understand how it works, and then **move on to an example where it feels more natural** (data fetching, Promises, and transforms). ## Currying add() Normally I would write a function with multiple parameters, and run it by calling it with 2 arguments: ```js function add(a, b) { return a + b; } add(1, 2) // returns 3 ``` Currying is the idea of taking that series of arguments and separating them into multiple function calls that each take a single parameter: ```js function add(a) { return function(b) { return a + b; } } const addFirst = add(1) // returns a new function const sum = addFirst(2) // returns 3 // More succinct: const sumQuick = add(1)(2) // returns 3 ``` 1. Runs the function `add` with `1` as an argument 2. `add` returns a function 3. 
Run this new function with the `2` argument Thanks to Javascript's idea of a closure, when we run the first `add(1)` command we create a context where the value of `a` sticks around. When we call the inner function with `b`, it also has access to the `a` value and can use both of them to return a new value. ## Currying in a real use case That seems a bit obtuse for addition. Where would I actually want or need to use this? Consider Promises: ```js async function getData() { const apiData = await fetch(API_URL); } ``` The `fetch()` function returns a Promise, and when that Promise is successful I can pass the result to a function of my choice. I use this to transform the API response into something more useful for my application: ```js function transformData(fetchResponse) { return { // Here I can modify the data structure given to me by the API // In the getData() function below, const result will // equal whatever I return here. } } async function getData() { const result = await fetch(API_URL).then(transformData); } ``` Notice inside the `.then` we don't run the function with parentheses (`transformData()`), we merely point to it (`transformData`). Javascript will trigger the function to run when it's ready, and it will run it with the value the `fetch()` Promise resolves to. But… what if I need to transform the data in different ways sometimes, depending on when the fetch function is run? ```js function transformData(fetchResponse) { if (meetsSomeCondition) { return { // one data structure } } return { // a different data structure } } ``` Where can we get `meetsSomeCondition` from? ```js // BROKEN async function getData(meetsSomeCondition = false) { const result = await fetch(API_URL).then(transformData(meetsSomeCondition)); } ``` The above code snippet **will not work.** `.then()` needs a pointer to a function - what we've done is run our transformData function, which returns an object. This is where currying is useful.
We'll make our transformData function return a function, so we can run it once with our condition, and return a shiny new function, ready to be called. Then `.then()` can run it with the fetch result when it needs to: ```js function transformData(meetsSomeCondition) { return function(fetchResponse) { if (meetsSomeCondition) { return { // one data structure } } return { // a different data structure } } } async function getData(meetsSomeCondition = false) { const result = await fetch(API_URL).then(transformData(meetsSomeCondition)); } ``` ## Slimming down with ES6 syntax The above syntax is kind of a lot. We can make it look cleaner and hopefully easier to skim using ES6 fat arrows. A quick recap of how fat-arrow functions work: ```js function myFunc(param1, param2) { return whatever; } // vs (multi-line function) const myFunc = (param1, param2) => { const doStuff = param1 + param2; return whatever; } // vs (single-expression function that implicitly returns the result) const myFunc = (param1, param2) => param1 + param2; ``` ### Stage 1: Convert to fat arrows ```js const transformData = (meetsSomeCondition) => { return (fetchResponse) => { if (meetsSomeCondition) { return { // one data structure } } return { // a different data structure } } } ``` ### Stage 2: The inner function is a single expression, so we can implicitly return it ```js const transformData = (meetsSomeCondition) => (fetchResponse) => { if (meetsSomeCondition) { return { // one data structure } } return { // a different data structure } } ``` ### Stage 3: When fat arrow functions only have one parameter, the parentheses can be skipped ```js const transformData = meetsSomeCondition => fetchResponse => { if (meetsSomeCondition) { return { // one data structure } } return { // a different data structure } } ``` ## Summary We learned how currying works, and saw how to use it when fetching data to transform the result based on an outside condition.
```js const transformData = meetsSomeCondition => fetchResponse => { if (meetsSomeCondition) { return { // one data structure } } return { // a different data structure } } const getData = async (meetsSomeCondition = false) => { const result = await fetch(API_URL).then(transformData(meetsSomeCondition)); return result; } ```
danlaush
301,924
About programming languages
So, I've been trying to focus a little bit more on my training, being that I am a junior and all. M...
0
2020-04-07T17:05:34
https://www.codegram.com/blog/about-programming-languages/
programming, languages
So, I've been trying to focus a little bit more on my training, being that I am a junior and all. My current goal is to finish a chapter a day of a book recommended by our CTO "[Metaprogramming Ruby](https://pragprog.com/book/ppmetr2/metaprogramming-ruby-2)". While I achieved my goal today, it has required a little bit of focus. The book seems quite interesting and concentration was good until one question popped in my mind. 💡 What other metaprogramming languages do I know? <blockquote>"Metaprogramming is writing code that writes code." <br><small>Excerpt From: Paolo Perrotta. “Metaprogramming Ruby 2”.</small></blockquote> Of course I lost concentration and I went asking almighty google. And the non-less almighty wikipedia readily answered. Well, not directly to my question but still... And I found out that there are no less than 50 kinds of programming languages!! Some of them are sub-categories and classifications from different angles but still... 50! ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/0gn2k40rlsb2v87wo462.jpg) So I started wandering around the wikipedia classification and identified some languages I knew (oh, yes, forgot to say I'm not a "real junior". I was a developer before, but I'm a junior at web development). While checking that, and learning more about types of programming languages, some rather nice and interesting memories came to mind. And I felt like sharing them, cause, you know... ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/hj7oun5so82vsm9mf3xc.jpg) What surprised me the most was the amount of different types. Of course a certain language can fall under different classifications, but still. I barely remembered compiled and interpreted languages from my university days. Maybe also procedural rang a bell... but that's about it. So, here it comes. My thoughts and memories from working at a certain point in my life with some of those languages. 
If you came here looking for a thorough description of all categories or a very technical-ly post, sorry to disappoint you... this ain't it. Look at it more as a recreational walk through my "programming life". ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/wjbdelvwwt9d6j3ph8w5.jpeg) **Let's start with COBOL** [COBOL](https://en.wikipedia.org/wiki/COBOL) falls under three different classifications: [Compiled](https://en.wikipedia.org/wiki/Compiled_language), [Imperative](https://en.wikipedia.org/wiki/Imperative_programming) and [Procedural](https://en.wikipedia.org/wiki/Procedural_programming). This language was designed in 1959 and yes, my friends, I've used COBOL. It was back in... oh gosh, it was so long ago that I would need to calculate the dates... Anyhow, I remember it was before university, at technical high school. And I remember writing code one day, leaving it to compile overnight, coming back the next day and finding out the results of either the execution or the errors printed on a gigantic perforated-paper printer. Yes, I am that old... ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/6x6ig7x8ramu2uh6yast.jpg) You can imagine how complicated that was. We were doing small simple programs, we were students, but still, a small mistake would mean having to wait yet another day to have the results. It has obviously evolved over the years, and since 2002 it has also been an [Object-oriented](https://en.wikipedia.org/wiki/Object-oriented_programming) language.
**Next in memory lane is ADA** [ADA](https://en.wikipedia.org/wiki/Ada_(programming_language)) falls yet again under many classifications: [Compiled](https://en.wikipedia.org/wiki/Compiled_language), [Concurrent](https://en.wikipedia.org/wiki/Concurrent_computing), [Imperative](https://en.wikipedia.org/wiki/Imperative_programming) - [Procedural](https://en.wikipedia.org/wiki/Procedural_programming), [Multi-paradigm](https://en.wikipedia.org/wiki/Programming_paradigm#Support_for_multiple_paradigms), [Object-oriented](https://en.wikipedia.org/wiki/Object-oriented_programming) and System programming language with manual and deterministic memory management (no link for this, sorry, look it up 👀). The only thing I remember about ADA is that it helped me pass my Operating Systems exam in my last year of university. I remember it was a simple language (it might be because of OO). And I remember telling my friends that studying ADA could actually save us from failing the exam. I thought of it as low investment, high ROI. Turns out it was a good strategy... for me. Many of them didn't listen... and failed. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ev8tc71k0p7cot8vtbez.jpeg) **Let's talk about C** Not that I like [C](https://en.wikipedia.org/wiki/C_(programming_language)), in fact this is the only subject I dropped in university. I simply hated it. My sister, though, loved it. I remember having to write some code to simulate a typewriter. Having to manipulate all the memory directly and all that complicated stuff, I just simply couldn't stand it. Always found it way too complicated. Hate, hate, hate.
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ff7a6lzq5b16ip7rwn4x.jpeg) C is also a [Compiled](https://en.wikipedia.org/wiki/Compiled_language), [Imperative](https://en.wikipedia.org/wiki/Imperative_programming) - [Procedural](https://en.wikipedia.org/wiki/Procedural_programming) and System programming language with manual and deterministic memory management (did you look it up before?). I'll skip a few years and languages, cause I guess you have a life and things to do other than reading this, right? **Let's move on to ABAP** This is an easy one, only one category. As it turns out, [ABAP](https://en.wikipedia.org/wiki/ABAP) is a [4th Generation language](https://en.wikipedia.org/wiki/Fourth-generation_programming_language) (4GL). I didn't even know that existed... ABAP is the language created and used by [SAP](https://en.wikipedia.org/wiki/SAP_SE). The all-mighty says *"It is extracted from the base computing languages Java, C, C++ and Python"*, but I would have compared it (at least in the first years) with COBOL... but what do I know... I didn't even like C! Do I like it? Well, I didn't dislike it, to be honest. I worked with it for about 20 years, so I guess the answer might be yes. It is the only language I've actually used professionally (other than Ruby), so I don't think I have much basis to compare. Or maybe I do...? 🤔 **Ruby. It's a wrap.** In the end, this is the language at the origin of this post. If you are a web developer you most likely know [Ruby](https://en.wikipedia.org/wiki/Ruby_(programming_language)). If you don't, it's never too late. Did you know that Ruby is a [Functional](https://en.wikipedia.org/wiki/Functional_programming) (impure), [Imperative](https://en.wikipedia.org/wiki/Imperative_programming), [Interpreted](https://en.wikipedia.org/wiki/Interpreted_language), [Object-oriented](https://en.wikipedia.org/wiki/Object-oriented_programming), [Meta-programming](https://en.wikipedia.org/wiki/Metaprogramming) language?
Well, now you know. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/mwvmdphro4lkq5ejzbv9.jpeg) I found learning Ruby was easy. It took a while to get used to the fact that so many things were just so easy to achieve. At the beginning of my training I would develop methods to do simple tasks, only to learn later on that they already existed. Now, I am learning that most of those methods exist thanks to Meta-programming, which seems fascinating to me. Well, well, well, after all these interesting facts it is now time for me to go back to learning. There's so much to discover! Oops, almost forgot! Check out the [wikipedia link](https://en.wikipedia.org/wiki/List_of_programming_languages_by_type) that brought me here! What are your favourite languages? Any memories worth sharing? We would love to hear about it. Share your thoughts on [Twitter](https://twitter.com/codegram).
emaroto
301,978
Command to run all the instrumented tests of an Android app
Below is the command; for more details, check the documentation. An important point is that `Debug`...
0
2020-04-07T19:01:04
https://dev.to/viniciusalvesmello/comando-para-rodar-todos-os-testes-instrumentais-do-app-android-4elg
gradle, android, androidstudio, commands
Below is the command; for more details, check the [documentation](https://developer.android.com/studio/test/command-line). An important point: `Debug` is the name of the build variant. ``` ./gradlew connectedDebugAndroidTest mergeAndroidReports --continue ```
viniciusalvesmello
301,988
Building Shopping Cart Actions and Reducers with Redux
A simple guide about the actions and reducers used to build a shopping cart.
0
2020-04-09T11:48:55
https://dev.to/aneeqakhan/building-shopping-cart-actions-and-reducers-with-redux-in5
redux, react, beginners
--- title: Building Shopping Cart Actions and Reducers with Redux published: true description: A simple guide about the actions and reducers used to build a shopping cart. tags: redux, react, beginners cover_image: https://blog.tylerbuchea.com/content/images/size/w2000/2019/04/desktop_16_9-1.gif --- This blog is about the simple actions and reducers required in a shopping cart app. I am not going to write down all the UI used for it; it's only about how you can manage your state in the Redux store and update it accordingly. Here I am writing actions and reducers for these five scenarios: 1. Add to cart 2. Remove from cart 3. Increase quantity of product 4. Decrease quantity of product 5. Discard cart First we need to create three files: `actionTypes.js`, `actions.js` and `reducer.js`. So, first things first, we'll write our `actionTypes.js` file and define all our action types there, like this. ```javascript export const ADD_TO_CART = 'ADD_TO_CART'; export const REMOVE_FROM_CART = 'REMOVE_FROM_CART'; export const ADD_QUANTITY = 'ADD_QUANTITY'; export const SUB_QUANTITY = 'SUB_QUANTITY'; export const EMPTY_CART = 'EMPTY_CART'; ``` Our `actions.js` will now look like this: ```javascript export const addToCart = id => { return { type: ADD_TO_CART, id }; }; export const removeFromCart = id => { return { type: REMOVE_FROM_CART, id, }; }; export const subtractQuantity = id => { return { type: SUB_QUANTITY, id, }; }; export const addQuantity = id => { return { type: ADD_QUANTITY, id, }; }; export const emptyCart = () => { return { type: EMPTY_CART, }; }; ``` Here you need to import your action types from the `actionTypes.js` file above. In these actions we only take a product's id and return it to the reducer along with the respective action type. The empty/discard cart action doesn't need an id; it discards the whole cart.
Before writing the reducer, I want to show you a sample of my products JSON: ```javascript "products": [ { "id": 1, "name": "Perfume", "image": "https://image.shutterstock.com/z/stock-photo-vintage-red-shoes-on-white-background-92008067.jpg", "price": 200, "quantity": 1, "selected": false } ] ``` Now the real work is done in `reducer.js`: ```javascript const initialState = { products: [], }; const ShoppingReducer = (state = initialState, action) => { switch (action.type) { case ADD_TO_CART: return { ...state, products: state.products.map(product => product.id === action.id ? {...product, selected: true} : product, ), }; case REMOVE_FROM_CART: return { ...state, products: state.products.map(product => product.id === action.id ? {...product, selected: false, quantity: 1} : product, ), }; case ADD_QUANTITY: return { ...state, products: state.products.map(product => product.id === action.id ? {...product, quantity: product.quantity + 1} : product, ), }; case SUB_QUANTITY: return { ...state, products: state.products.map(product => product.id === action.id ? { ...product, quantity: product.quantity !== 1 ? product.quantity - 1 : 1, } : product, ), }; case EMPTY_CART: return { ...state, products: state.products.map(product => product.selected ? {...product, selected: false, quantity: 1} : product, ), }; default: return state; } }; export {ShoppingReducer}; ``` So that's it: you have the basic cart functionality done. I hope you like it; do visit my profile for more blogs. Thanks! {% user aneeqakhan %}
aneeqakhan
302,012
OpenNMS Meridian 2018.1.17 (Pandemic) Released
Release 2018.1.17 is a small update to 2018.1.16 that fixes another security issue that affects most...
0
2020-04-20T21:07:20
https://www.opennms.com/en/blog/2020-04-07-opennms-meridian-2018-1-17-pandemic-released/
news, circleci, confd, drools
--- title: OpenNMS Meridian 2018.1.17 (Pandemic) Released published: true date: 2020-04-07 19:20:10 UTC tags: News,circleci,confd,drools canonical_url: https://www.opennms.com/en/blog/2020-04-07-opennms-meridian-2018-1-17-pandemic-released/ cover_image: https://i.imgur.com/74fmzvG.png --- Release 2018.1.17 is a small update to 2018.1.16 that fixes another security issue that affects most current OpenNMS releases. Hat tip to Johannes Moritz for reporting this. The codename for 2018.1.17 is _Pandemic_. ##### Bug - Security issue disclosures, 31 Jan 2020 (Issue [NMS-12513](http://issues.opennms.org/browse/NMS-12513)) - Drools working memory facts are not restored properly on engine reload (Issue [NMS-12586](http://issues.opennms.org/browse/NMS-12586)) - Confd download fails silently on Docker install (Issue [NMS-12642](http://issues.opennms.org/browse/NMS-12642)) ##### Story - Backport CircleCI pipeline to foundation-2018 (Issue [NMS-12476](http://issues.opennms.org/browse/NMS-12476))
rangerrick
303,276
Daily Developer Jokes - Thursday, Apr 9, 2020
Check out today's daily developer joke! (a project by Fred Adams at xtrp.io)
4,070
2020-04-09T12:00:00
https://dev.to/dailydeveloperjokes/daily-developer-jokes-thursday-apr-9-2020-18b7
jokes, dailydeveloperjokes
--- title: "Daily Developer Jokes - Thursday, Apr 9, 2020" description: "Check out today's daily developer joke! (a project by Fred Adams at xtrp.io)" series: "Daily Developer Jokes" cover_image: "https://private.xtrp.io/projects/DailyDeveloperJokes/thumbnail_generator/?date=Thursday%2C%20Apr%209%2C%202020" published: true tags: #jokes, #dailydeveloperjokes --- Generated by Daily Developer Jokes, a project by [Fred Adams](https://xtrp.io/) ([@xtrp](https://dev.to/xtrp) on DEV) ___Read about Daily Developer Jokes on [this blog post](https://xtrp.io/blog/2020/01/12/daily-jokes-bot-release/), and check out the [Daily Developer Jokes Website](https://dailydeveloperjokes.github.io/).___ ### Today's Joke is... ![Joke Image](https://private.xtrp.io/projects/DailyDeveloperJokes/public_image_server/images/5e1259b6d7a73.png) --- *Have a joke idea for a future post? Email ___[xtrp@xtrp.io](mailto:xtrp@xtrp.io)___ with your suggestions!* *This joke comes from [Dad-Jokes GitHub Repo by Wes Bos](https://github.com/wesbos/dad-jokes) (thank you!), whose owner has given me permission to use this joke with credit.* <!-- Joke text: ___Q:___ Did you hear about the programmer that was scared of IDEs? ___A:___ They retreated back into their shell -->
dailydeveloperjokes
303,510
🐋 Docker Cheat Sheet [PDF + Infographic]
Docker is a fantastic tool designed to make it easier to create, deploy, and run applications by usin...
5,758
2020-04-09T15:27:20
https://dev.to/godcrampy/docker-cheat-sheet-pdf-infographic-3lfk
docker, devops, beginners, inthirtyseconds
**Docker** is a fantastic tool designed to make it easier to create, deploy, and run applications by using containers. Here's a cheat-sheet to help you remember common docker commands. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/jmg2rly6n0kyy9hkbty9.png) 📁 [Download](https://github.com/godcrampy/cheat-sheets/blob/master/docker/docker-cheatsheet.pdf) the pdf version of this cheatsheet 🌟 All my cheat-sheets are in this [repo](https://github.com/godcrampy/cheat-sheets/) (Star it!) 🚀 Find me on [Instagram](https://www.instagram.com/godcrampy/) | [Github](https://github.com/godcrampy) | [Twitter](https://twitter.com/godcrampy) | [Website](https://sahil.surge.sh) 😄 Have a wonderful day!
godcrampy
303,519
What are your thoughts on GitLab vs Github?
Hey guys, I'm creating a brand new 2020 edition article on the topic - "GitLab vs Github" And I'm lo...
0
2020-04-09T15:44:54
https://dev.to/codegiantio/what-are-your-thoughts-on-gitlab-vs-github-2oc0
Hey guys, I'm creating a brand new 2020 edition article on the topic "GitLab vs GitHub", and I'm looking for your expert opinions on it. If you can answer these questions, it would be great: GitHub or GitLab? What is the one feature that you really love about the platform you are using? Why do you use your current platform instead of the other? Also, feel free to send me your Twitter account so that I can feature you in my article! Thanks!
codegiantio
306,589
This Was Puzzle Day 2020
A write up of Team Puzz Lightyear’s solutions
0
2020-04-16T14:53:38
https://dev.to/rpalo/this-was-puzzle-day-2020-4ad
cs50, puzzles
--- title: This Was Puzzle Day 2020 published: true description: A write up of Team Puzz Lightyear’s solutions tags: CS50, puzzles --- Pandemic. That’s a terrible word. It starts with P. It means disease that’s everywhere. Means that you have to stay away from friends and public places and fun things. Means you have to fight tooth-and-nail for scraps of toilet paper. Maybe. But, do you know what else starts with P? **PUZZLES.** That’s right. We will not lie down and let this Pandemic get what it wants. COVID SHMOVID. We will still engage our minds and overcome challenges as a team and as a family in a way that is both satisfying and time-consuming. So that is what we did this weekend. Here’s the write up of the solutions for our team: Team Puzz Lightyear. **Spoiler Alert: This is a write up of the solutions to CS50’s Puzzle Day 2020 puzzles. By definition it will contain those solutions.** ## Puzzle 1: Symbolism ![Puzzle 1 is a grid with a couple of letters in each square. Some squares are bordered bold in groups](https://dev-to-uploads.s3.amazonaws.com/i/w5yvfreu4h9pweh0cdus.png) This was the first puzzle we looked at, and it was the last thing we saw as we went to sleep on Sunday night in frustration. What started out as a quick couple of bursts of inspiration ended with us unable to make the final connection. If you don’t believe me, here’s Jenny’s work on the subject. ![My wife’s work scrawled all over the page](https://dev-to-uploads.s3.amazonaws.com/i/7axh44hbkgk0ntme6odt.jpg) We noticed that each of the squares makes up part of a city name. So we looked up cities, and we looked up countries, and we looked up latitudes and elevations, and we looked up airport codes and ISO country codes, and we mixed them around and mapped them and averaged them and… Nothing. Oy. You win this one, Puzzle Day. Except...
“You know,” Jenny mused as we turned off the light to go to sleep, “the real *symbol* of a place is its flag.” “Well, how are you going to make a word with a bunch of flags?” I asked. “I don't know, what's the flag for Mauritius (the country containing Port Louis)?” I reached for my phone. “Looks like... four horizontal bars of color. Matches the four stacked boxes in the clue.” Jenny got quiet. “What about Ukraine (where Kiev is)?” “*Two* horizontal bars. Another match.” *Click.* She turned her light on and ran into the kitchen, and came back with her puzzles and some markers. Oh. Oooooohhhhhh... We ended up with this: ![The grid all colored in.](https://dev-to-uploads.s3.amazonaws.com/i/uu9k2egzc5rt4x89yo9l.png) “That's Ireland’s flag in the lower left!” she shouted. “And Netherlands, and France!” The others turned out to be Lithuania and Bulgaria. And now we had somewhere to go with the clues at the bottom. Doing some more research and some more coloring revealed the final flag. **Answer: Gabon** ## Puzzle 2: Secret Message from US ![Puzzle two is a bunch of encrypted letters split into groups like words.](https://dev-to-uploads.s3.amazonaws.com/i/vc1mxbcuxyewpljf0i65.png) This was a relatively early victory for Jenny and Steve. At first glance, it’s very clearly a message in some kind of cipher. The capitalized “US” in the title had me thinking maybe some kind of Caesar cipher shifting U to S or vice versa? But the punctuation! How in the heck do you shift punctuation? Jenny stared at her fingers on her keyboard in thought. And she stared. And then she took her fingers *off* of the keyboard and squinted a little. And then she grinned and started scribbling frantically. ![The completed puzzle with the decrypted message written above.](https://dev-to-uploads.s3.amazonaws.com/i/wnw0xugcdj9zanpm7h47.jpg) It was actually a “*keyboard cipher.*” Each letter was one away from the encoded letter on a standard American QWERTY keyboard. 
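That decoding step is small enough to sketch in a few lines. This is not the team's actual work, and the direction of the one-key shift (left, wrapping around at the row edges) is an assumption for illustration:

```python
# QWERTY letter rows; decoding maps each letter to the key one position
# to its left on its row, wrapping around at the row edge.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def shift_left(ch):
    lower = ch.lower()
    for row in ROWS:
        if lower in row:
            out = row[(row.index(lower) - 1) % len(row)]
            return out.upper() if ch.isupper() else out
    return ch  # punctuation and spaces pass through unchanged

def decode(text):
    return "".join(shift_left(c) for c in text)
```

Either way, the whole trick hinged on the layout of the QWERTY keyboard itself.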
Which, in fact, was the final answer. **Answer: QWERTY** ## Puzzle 3: Sets ![Puzzle three is a series of lines made up of a couple of capital letters, a colon, and then several more capital letters.](https://dev-to-uploads.s3.amazonaws.com/i/ch7qebgychx5qyqmwno8.png) This was probably the puzzle where we saw the most simultaneous teamwork. Armed with the knowledge that sets are mathematically defined as “a group of things,” we noticed the line: `RN : IVXLCD` Roman numerals! Each of those letters was a Roman numeral. And then we were in business, trying to come up with what each of the other categories could be. Steve very quickly followed up with FOTR (Fellowship of the Ring) containing Frodo, Sam, Gimli, Legolas, Gandalf, Aragorn, Boromir (thanks Jenny!), and Merry. Followed by “Pippin is missing.” Oh. I guess “M” could be missing from Roman numerals? Aha! Now we have a way of getting from Sets to the coveted one-word answer to the puzzle! What followed can only be described as someone who is *very good* at puzzles being *very good at puzzles.* Steve rattled off: - TR is Taxonomic Rank: Order is missing. - SDN is Single Digit Numbers: One is missing. - Oh, and along the way, the overall combined message looks like ?OO: P?M??, so we're probably looking at Order Of Operations: PEMDAS. - DS is Deadly Sins: Envy is missing, further strengthening the PEMDAS theory. At this point, we hit two snags. First, we became very stuck on the remaining Sets—especially CZA (Catherine Zeta Aones? Confederate Zombie Army?). Second, we realized that there weren't quite enough ?’s to make PEMDAS. We were one short. Which was totally fine! The missing one had to be the final answer! Sweet. Now to solve the rest. (Common Zoo Animals?) Ali solved AS: Astrological signs with either Scorpio or Sagittarius missing. Jenny solved SR: Santa’s Reindeer with Donner missing. That was enough to get us to ?OO:PEMDS, making the missing one “addition.” **Answer: ADDITION** P.S. 
After a few more minutes and comical but unlikely guesses, the remaining set revealed itself to be Chinese Zodiac Animals with Ox missing, letting us all relax and take a nap. ## Puzzle 4: SEEDY Round ![Puzzle four is six clues describing companies, each one followed by a year.](https://dev-to-uploads.s3.amazonaws.com/i/rauxs2a2o0wksapnlzhb.png) There wasn't a whole lot to this one. I feel like we stumbled around, made our best guesses and came up with an answer that seemed reasonable. We came up with three companies off the top of our heads right away: 1. Restaurant delivery service: Doordash 2. Coding boot camp: FullStack 3. Frontpage of the internet: Reddit I took us on a wild goose chase thinking that the watch company was Misfit (they *were* funded on Indiegogo). Jenny tracked down another company: DWISS. At this point, Steve noticed that they all had a pair of double letters—just like SEEDY in the title? Taking the double letters so far gave us SOLD??, which seemed believable enough to run with. It *is* CS50 Puzzle Day, and we've done more with less, after all. We found a few clothing companies that fit the bill, and TeePublic had a pair of E’s. Combining that with the fact that I knew about HackerRank (even though the founding date we found didn't line up), and that it spelled SOLDER all together, which was a word that is vaguely tech-startup-related, we declared a shaky, but good enough victory. **Answer: SOLDER?** Actual Answer: BOLDER. The watch company ended up being Pebble, due to the connection that all the companies were funded by Y-Combinator, which means Steve was originally partially right! ## Puzzle 5: Sword Search ![Puzzle five looks like a huge word search with eleven comma-separated numbers between 4 and 10 below.](https://dev-to-uploads.s3.amazonaws.com/i/8698yehhdz20zydujvn1.png) OK. I am going to go ahead and classify this puzzle as **the largest total waste of our time in the entire set of puzzles.** Possibly ever. 
Make no mistake, this was a good puzzle. Perhaps even a great puzzle. We started out *very* stumped, just staring at the numbers and letters with no luck. Jenny ran it through first, finding every word she could find. Steve suggested that “You get the point…” in the subtitle referred to coordinate points and that the comma-separated numbers at the bottom were X, Y pairs. That is, until we realized that there were 11—an odd number. And then, more than a day later, Steve mentioned that he had found “GLADIUS.” Sort of. He found “Tadius.” When I took a second look, I realized that there was a “GL” before the “T” in “TADIUS.” We had a sword name in the word search, as the title promised, with an extra letter. And, by this point, as everyone now knows, extra letters point to puzzle answers. We were in business. In the midst of chasing our crawling baby around the house, improbably, I spotted a word that was almost “ZWEIHANDER.” I didn’t know that zweihander was a word before that moment, but I’ve done just enough Duolingo to know that it means “two-hander” in German, and that seemed like a sword name. Factor in that it contained an extra “S,” and we can count it as a 10-letter clue! After that, we spent a mind-numbing *two-and-a-half hours* staring alternately at the word search and Wikipedia’s list of all the swords ever. And we found them, piece by piece. 
- Khopesh with an extra N - Wakizashi with an extra A - Sica with an extra H (turned out later to be a false positive) - Epee with an extra M (which was an unfortunate find, because it was found down-to-up, telling us that words can appear backward) - Flamberge with an extra E, backward horizontally 😭 - Piandao with an extra H - Shamshir with an extra T - Cutlass with an extra B (backward *and diagonal* up and left 😭 😭 😭) - Saber with an extra W up and right - Odachi with an extra O ![The word search page, covered in my findings.](https://dev-to-uploads.s3.amazonaws.com/i/i4dlb98ccmj7p6z58ak3.jpg) What this left us with was this brick wall: TSNAMEHTBWO De-scramblifying got us nowhere. We got things like STAB THEM NOW, STAB TH WOMEN, NAME TWO BTHS, NAME BOTH TWNS, and more. Nothing definitive, but nothing proving this was the wrong path. And that's where we stopped. **Answer: WHAT ENTOMBS?** Actual Answer: CROSS SWORDS. After watching the solution video involving a bunch of hilts and referring to the letter *just past the “tip” of the sword,* I'm content in the knowledge that I couldn't have possibly made that logic jump, and that we got as far as we could. ## Puzzle 6: Putting It All Together ![Puzzle six is about a dozen groups of letters. Each set in a bit of a random 3x3 grid of their own.](https://dev-to-uploads.s3.amazonaws.com/i/flck4r7fmx8cfmhlh4e5.png) It was late. The baby was asleep, and we had spent a long day puzzling. Most of the California crew were done for the day, and the Nebraska team was likely asleep (or had muted our crazed puzzle chat for bedtime). I pulled a beer out of the fridge. “I'll just take a peek at one more, see if something shakes loose,” I thought. My eyes glazed over as I stared at the page of letter jumble. Having spent many nights while putting the baby to sleep playing sudoku, I started to notice a 3x3 grid situation with each of the groups. Not all groups used all the columns or rows. Some columns were used only rarely.
Maybe I could squash the grids together or overlay them somehow. After a few minutes of typing groups of letters that ended up being meaningless, I gave up. A dead end. But I felt like I should still be able to fit them together somehow. Like... Tetris! I looked at the shapes. And looked again! The frenzied Tetris music started crescendoing in my head. The shapes the letters made were all classic, much-loved Tetris pieces! To the scissors! I got exactly one row of shapes into cutting them out before I remembered why I was bad at crafts and that this was for the birds. To the computer! I got exactly 88 lines and only slightly less than 88 error messages into a program to solve the Tetris puzzle for me before I realized that this was a more complicated problem than I had thought. To the spreadsheet! Ah yes. This was better. I scaled the columns and rows to be little Tetris squares with letters in them and colored them in different colors. Maybe just a few minutes of sliding shapes around before bed. *Two hours later.* It’s fine. Who’s obsessive? Shut up! Anyways, trying different combinations of shapes together and slowly building a sentence yielded the question: ![My completed Tetris puzzle. The letters spell “the last name of the inventor of the jigsaw puzzle is the answer to this puzzle”.](https://dev-to-uploads.s3.amazonaws.com/i/0rl1cdor5u5eyo19ekob.png) **Answer: Spilsbury** ## Puzzle 7: A Poem from Hannah ![Puzzle seven is a poem with several stanzas. The words and rhythm are weird.](https://dev-to-uploads.s3.amazonaws.com/i/e7dkqgmcazqdufw21ksc.png) In the heat and flurry of solving #3, some of us branched out a little bit to start to work on the other puzzles. I sent out the message, “The first line of 7 is a palindrome. So is Hannah in the title.” Ali messaged back, “I was looking at that one. There are a lot of lines that are palindromes except for one letter. But I’m no good at these things.” This was followed in a couple of minutes by this picture. 
![The annotated puzzle with portions highlighted and aibohphobia written in the margin.](https://dev-to-uploads.s3.amazonaws.com/i/5i4v8us2b0ib5brp6snn.png) So. That’s that one handled then. Puzzles are fun when your team is made of geniuses! **Answer: AIBOHPHOBIA** ## Puzzle 8: Stretch Out and Break Up to Get In ![Puzzle eight is a couple of long division problems followed by a longer, more complicated arithmetic problem. The bottom two problems are made of letters.](https://dev-to-uploads.s3.amazonaws.com/i/4epbr0mc7hkz9xaqfmzt.png) Within ten minutes of blowing through #3, Steve sent over this image with the message, “Did I make any mistakes so far?” ![The first division problem, partially solved.](https://dev-to-uploads.s3.amazonaws.com/i/6olev568a4ztv88dwjw9.png) So it was clear that he was eating his Wheaties that morning. The first section required filling in numbers to make the division equation work out. This provided numerical values for `a` - `g`. His completed work: ![The first division problem, fully solved.](https://dev-to-uploads.s3.amazonaws.com/i/z2xadtues55oyojxnlbe.png) The second section was more of a cryptarithmetic problem, assigning single-digit integers from zero to nine to letters to make the calculation work out. A snapshot of Steve’s logic for that part: ![Some logic for what numbers the letters might stand for in the second division problem.](https://dev-to-uploads.s3.amazonaws.com/i/u8n130u05nhq0q3686jg.png) Using the numerical values from the first part, filling them into the equation at the bottom, and calculating a total provided a large number. By this point, Jenny was doing calculations to double-check Steve’s math and here, their logic diverged. Steve only selected the numbers with lines under them for the `a-g` values, but Jenny selected the entire line and came up with a much larger number: 405,913,276. She succinctly explained her methodology with a callback to puzzle #3: > PEMDAS, B****. 
Using the capital-letter-to-digit mapping from the second part, Steve converted his number to “SKHHMRSI.” Jenny’s larger number converted to “LOCKSMITH.” Point, Jenny. **Answer: LOCKSMITH** ## Wrapping up 2020 Puzzle Day Puzzles: beaten (mostly). Pandemic: coped with. Team morale: strong! Overall, I’m thinking this puzzle day was even better than 2019. We grew our team a little, knew more about what to expect going in, and I feel like more of the puzzles were accessible. Or, at least, afterward, when hearing about the solutions, we went, “How in the heck were we supposed to know to do that?!” much less. As long as the world is still here in 2021, you can rest assured, we’ll be here for Puzzle Day. Thanks, CS50!
rpalo
307,712
Electron and React, a successful marriage?
In one of my previous posts, I talked (or rather wrote) about a framework called Electron, which offe...
5,930
2020-04-16T10:08:22
https://dev.to/alexdhaenens/electron-and-react-a-successful-marriage-ncf
javascript, webdev, beginners, tutorial
In one of my previous [posts](https://dev.to/alexdhaenens/electron-the-future-18nc), I talked (or rather wrote) about a framework called [Electron](https://www.electronjs.org/), which offers the possibility to _create cross-platform desktop applications with HTML, CSS and JavaScript_. As soon as I saw it, I had to try it out! The first thing I asked myself, though, after I created my first Electron app was: __since Electron displays web pages, can I use other JavaScript frameworks (such as React) to build and render my web pages?__ The answer is __YES__, and as it turns out combining both offers amazing opportunities! # Short recap In my blog post about Electron, I explained that Electron uses a so-called __main process to display GUIs__. Each GUI renders a web page (which could be an external link or an html file inside your project). __Web pages are run in separate, isolated processes called renderer processes__. Electron offers __IPC__ (inter-process communication) to send messages between the main & renderer processes. Another nice feature is that the __full Node.js API is exposed by Electron__ in both the main and the renderer processes. # Enter React Electron displays a web page inside a GUI. As a developer you must provide the link to that web page; that page is (often) a static html page inside your project folder. There you can add your React script & container, and as soon as the page is displayed, your React app will start. __Your React application therefore runs in the renderer process__. The same holds if you use any other framework (e.g. Angular). As I discussed in the recap, you can communicate between the main and renderer process(es). This provides developers & software engineers with endless possibilities, since your React app runs in that renderer process.
For example, _you can create a menu in the native window (which runs in the main process) and when a certain menu item is clicked, the React app (which runs in the renderer process) navigates to a certain page_. This is done by using the IPC to send a message from the main process to the renderer process, telling it which page to go to. This is amazing! Because Electron makes it possible to use the full Node.js API in both the main and renderer processes, it is possible to let __React use the Node.js API__. This also provides amazing opportunities, since your React app can now use any Node module. This opens many, many doors: making the React app execute bash scripts on the user's computer, reading from or writing to the user's filesystem, ... # Tons of boilerplates Although __setting up a brand-new Electron-React project is not that much work__, there are a lot of things that developers might require or desire for each project: hot reloading, linting and the usage of certain plugins. Setting those up for each project can be cumbersome and time consuming. Luckily for us, there are __amazing boilerplates out there for an Electron-React project__. The Electron documentation contains a [list of recommended ones](https://www.electronjs.org/docs/tutorial/boilerplates-and-clis). Most of those boilerplates are open source, so you can help improve them if you would like. # My opinion In my free time I'm currently building an Electron-React application and so far, I've liked combining them very much, although initially it was a challenge to figure out how Electron works, especially in combination with React. I've used a boilerplate that has all the features I require for developing (hot reloading, linting, Sass compiler, …) so I did not have to set them up myself. In my experience it is a fast way of developing desktop applications.
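The menu-to-navigation example mentioned earlier (a native menu click telling the React app which page to show) could be wired up roughly like this. This is a sketch, not code from a real project: the channel name (`navigate`), the menu labels, and the routes are invented, and the Electron-specific wiring is kept in comments so only the plain message-building logic runs anywhere.

```javascript
// Maps a native menu item label to the route the React app should show.
// The labels and routes here are made up for the example.
function navigationMessage(menuItemLabel) {
  const routes = { Home: '/', Settings: '/settings' };
  return { channel: 'navigate', route: routes[menuItemLabel] || '/' };
}

// In the Electron main process, a menu click could send the message to the
// renderer (assumes `win` is the BrowserWindow showing the React app):
//
//   // inside a Menu template item:
//   // click: (item) => win.webContents.send('navigate', navigationMessage(item.label).route)
//
// And in the renderer process, the React app could listen and navigate:
//
//   const { ipcRenderer } = require('electron');
//   ipcRenderer.on('navigate', (_event, route) => {
//     // e.g. hand the route to your router of choice
//   });
```

The important part is the split: the main process owns the native menu, the renderer owns the React app, and IPC is the only bridge between them.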
There is also another, less obvious benefit: you can actually create a React application and host it online, but also build a desktop version from the same source code by using Electron. You don't have to rewrite anything; only setting up the Electron-React project might take some time. __The same React application code can be used without any modifications__. You can even go further: you could add additional desktop-specific features (adding a menu, …) or change the behavior on desktop (saving user preferences, …) with the same code. Since this uses Electron, it is important to note that the __performance issues introduced by Electron will also arise here.__ Therefore, picking the right technologies for a project is still an important task that must be done before starting.
alexdhaenens
307,745
Career Advice: PHP vs Python vs JS?
I am a Laravel/Vue full stack developer with 2+ years of experience. While searching for new job, I o...
0
2020-04-13T14:39:44
https://dev.to/parthp1808/career-advice-php-vs-python-vs-js-g7p
discuss, php, javascript, python
I am a Laravel/Vue full stack developer with 2+ years of experience. While searching for a new job, I observed that big companies, or small but exciting ones, tend to use technologies like React, Python, C#, etc. And that sort of demotivated me, as I am only proficient in Laravel with a bit of experience in Vue. Should I learn and focus on these technologies? (I love working with Laravel, but I want to work for a company which has an exciting product to work on, or a bigger company. Everyone fancies working for BIG companies, I guess.) If yes, then I am confused between Python or JS (React mostly, as Vue is not used much by bigger companies). Also, I only have experience with web development, which I love, but I don't know about other kinds of development. My dilemma comes down to my wish to work for a great and exciting company for the long term, rather than jumping from one startup to another.
parthp1808
307,751
State of Cloud Communities in France
Karen Tamrazyan wrote a good article that analyzes local communities of 3 major public cloud provider...
0
2020-04-14T06:52:54
https://dev.to/zenika/state-of-cloud-communities-in-france-7ko
cloud, community, meetup
[Karen Tamrazyan](https://www.linkedin.com/in/karentamrazyan/) wrote a good article that analyzes the local communities of the 3 major public cloud providers (AWS, Azure and Google Cloud) in Germany, Austria and Switzerland (German-speaking countries) here: https://www.tamrazyan.com/state-of-cloud-communities-in-german-speaking-countries/ His article made me think about France. He helped me create a similar sheet to superficially compare the regional communities behind the three major public cloud providers: Amazon Web Services (aka AWS), Microsoft Azure and Google Cloud. # Comparison The top right quadrant from the latest Gartner cloud market analysis clearly indicates three global cloud leaders: Amazon Web Services, Microsoft Azure, Google Cloud. ![quadrant](https://cdn-images-1.medium.com/max/1600/0*HLk4BgFc7-QM6iWx) How are we going to compare the communities with each other? As Karen did in his article, we will compare them by the number of registered members, or in other words the group size. It is obvious that the group size is just one metric of many, alongside e.g. the activity rate (how often events take place), the average number of participants, the date of the last event and so on… The member count should serve as a simplified measure of a group's success or importance. The internet is big. What kind of groups are we going to count? Indeed, there are cloud communities as Facebook groups, on messenger networks such as Telegram, on independently hosted forums and so on. In this work we will count only the communities hosted on the Meetup.com platform. Meetup is at the moment the de facto most popular community platform in France. We will only consider active groups, meaning that a group should have at least 1 registered past or upcoming event.
In the locations where multiple communities dedicated to the same cloud technology are present, only the biggest one will be considered, because of the very high probability of member overlap between those groups. The article gives only a snapshot of the situation as of April 12, 2020. The author doesn't intend to update it as the situation changes. It's a photo, not a video. # Cloud communities in France France has a population of over 67 million people. It is one of the most important technological hubs of Europe (competing with Germany and England). Plenty of universities, industrial and service companies belong to the landscape of most of the cities in this country. Even small places can often have lively user groups here, like in Brest, Tours or Cannes. {% gist https://gist.github.com/jlandure/3498ab5e63cec5680106d7713a63b854 %} # Analysis of Cloud Impact As we can clearly see from the table, AWS has by far the biggest communities in France: it demonstrates a tendency to favor this cloud provider. This popularity clearly stems from the opening of [a datacenter in Paris, France at the end of 2017](https://aws.amazon.com/blogs/aws/now-open-aws-eu-paris-region/). Azure has broad community networks that cover the top 5 most populated cities, but is never the biggest community in a city. Google Cloud groups (aka “GDG Cloud” chapters) have the weakest cloud community network in France among the 3 contenders. However, one should not forget the much wider network of general Google Developer Groups (abbreviated “GDG”), which also on occasion run Google Cloud themed meetups. We can observe that Paris, Lyon and Nantes are the top 3 cities where the AWS, Azure and Google Cloud communities can enjoy events and meet together. Lyon is quite particular: it's the only city where Google Cloud has more members than the other cloud communities.
This article highlights that cloud communities don't match the [top “La French Tech” cities](https://medium.com/frenchtech/bienvenue-aux-communaut%C3%A9s-et-capitales-french-tech-labellis%C3%A9es-le-3-avril-2019-f417539d3bd0). However, many cloud providers offer cloud credits for startups, like this [GCP initiative](https://cloud.google.com/developers/startups): [check this page to redeem it](https://goo.gl/zY4pH2). ![frenchtech](https://cdn-images-1.medium.com/max/1600/0*7kDNA0EXz_-qRuUo) There are some large cities in the country, like Montpellier, Bordeaux or Lille and a couple more, where neither Azure nor Google Cloud has a community footprint. These are opportunities for future growth. # What about GDGs? GDGs, Google Developer Groups, are too general to be studied in this article, but GDGs bring together tons of passionate developers in France: 4134 members for GDG Paris, 2360 for GDG Nantes, 2213 for GDG Lille and 2133 for GDG Toulouse. These numbers are boosted by the organization of a DevFest each year, a one- or two-day conference dedicated to sharing about technology and its impact. GDGs also run some Cloud events from time to time; please check out the [listing of GDGs here](https://developers.google.com/community/gdg/groups). ![gdg-france](https://cdn-images-1.medium.com/max/1600/1*41BZw-jnUOPFw0VANQpYDw.png) # Disclaimer The author of this article voluntarily runs the [Google Cloud developer group in Nantes (France)](https://meetup.com/fr-FR/GDG-Nantes) and also the [GDG Cloud Nantes](https://www.meetup.com/fr-FR/GDG-Cloud-Nantes). He's also a [GDE Cloud](https://developers.google.com/community/experts/directory/profile/profile-julien_landur_C3_A9), a community reward from Google Cloud. His opinions are intended to be honest and unbiased. Apart from this research, the following official sources were used: [AWS](https://aws.amazon.com/developer/community/usergroups/europe/), Azure & [Google Cloud](https://cloud.google.com/community/meetups).
Thanks to [Karen Tamrazyan](https://twitter.com/KarenTamrazyan) & [Charles-Henri Guérin](https://twitter.com/charlyx) for the review of the article.
jlandure
307,753
The Concept of a Mock
Hello, this is my first post on dev.to. I'm going to start my journey of helping the community by sha...
0
2020-04-13T14:47:11
https://dev.to/estevaowat/conceito-de-mock-ipd
testing
Hello, this is my first post on dev.to. I'm going to start my journey of helping the community by sharing a bit of what I'm learning in my day-to-day work, and with that I hope to help someone. :) Well, let's get to it...

A mock is used to replace the return value of a very slow function with some expected value. Look at this example:

```csharp
// Instantiating the mock return value
var userMock = new User { id = 123, name = "John Doe" };

public User GetById(string id)
{
    // Goes to the database and fetches the user with that ID
    return user;
}

// Example of making GetById return userMock
_mock.Setup(x => x.GetById(It.IsAny<string>())).Returns(userMock);
```

In this case I'm saying that the GetById function will return userMock. This reduces test time, because the tests don't need to go to the database to fetch the information.

An interesting characteristic of a mock is that I always assume its return value is correct: we don't want to test the function that reads from the database itself, but rather some other function that needs that data.

Well, that's it for today... I'd like your feedback; I hope you enjoy it.
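Purely as an illustration that this pattern isn't specific to one library, here is the same idea sketched with Python's `unittest.mock` (a different stack than the C# example above; the function and field names here are made up):

```python
from unittest.mock import Mock

def greeting(repository, user_id):
    """Code under test: it needs a user, but shouldn't need a real database."""
    user = repository.get_by_id(user_id)
    return f"Hello, {user['name']}!"

# Stub the repository so get_by_id returns instantly, without a database.
repo = Mock()
repo.get_by_id.return_value = {"id": 123, "name": "John Doe"}
```

Calling `greeting(repo, "123")` then returns `"Hello, John Doe!"` without ever touching a database, which is exactly what makes the test fast.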
estevaowat
307,807
Working with SQL Relations in Go - Part 5
Over these series of posts I have been exploring an approach that could be taken when working with SQ...
5,843
2020-04-13T16:12:09
https://andrewpillar.com/programming/2020/04/13/working-with-sql-relations-in-go-part-5/
go, sql, api
Over these series of posts I have been exploring an approach that could be taken when working with SQL relationships in Go. The precursor to all of this was the initial [ORMs and Query Building in Go](https://andrewpillar.com/programming/2019/07/13/orms-and-query-building-in-go/). This explored one aspect of ORMs, the query building, but it didn't address how relationships could also be handled in an idiomatic way. So before I wrap up this series in this final post, let us address the code we have currently.

* [Finishing up the Application](#finishing-up-the-application)
* [Callbacks and Interfaces](#callbacks-and-interfaces)
* [A Note on Generics](#a-note-on-generics)
* [Why Not Make this a Library](#why-not-make-this-a-library)
* [Conclusion](#conclusion)

>**Note:** If you're interested in taking a look at the code I put together for this example application, then take a look at it online here: https://github.com/andrewpillar/blogger.

## Finishing up the Application

Previously, we successfully implemented the `Index` and `Show` methods for the Post entity. However, for the Category entity we need to update the `Show` method so that we return a list of posts for that category. This can be done by utilising the `model.Binder` interface we implemented on the `post.Store` struct.

    // category/handler.go
    package category

    import (
        ...
        "blogger/post"
        ...
    )

    ...

    func (h Handler) Show(w http.ResponseWriter, r *http.Request) {
        ...
        pp, paginator, err := post.NewStore(h.DB, c).Index(r.URL.Query())

        if err != nil {
            // handle error
        }

        data := struct{
            Category *Category
            Prev     string
            Next     string
            Posts    []*post.Post
        }{
            Category: c,
            Prev:     fmt.Sprintf("/category/%d?page=%d", c.ID, paginator.Prev),
            Next:     fmt.Sprintf("/category/%d?page=%d", c.ID, paginator.Next),
            Posts:    pp,
        }

        w.Header().Set("Content-Type", "application/json; charset=utf-8")
        json.NewEncoder(w).Encode(data)
    }

You'll notice here that we imported the `blogger/post` package, which will result in an import cycle.
This can be easily fixed by creating a third sub-package in the `post` and `category` packages called `web` to hold the web handler implementations. The application at this point is mostly finished; if you wish to see a complete example of this then take a look at the repository on GitHub, https://github.com/andrewpillar/blogger. Now let me go about trying to justify the approach I took to this problem.

## Callbacks and Interfaces

When it comes to working with SQL relationships in Go there are going to be similarities between how things are done. For example, we want to load relationships, as well as bind them. Not to mention the obvious similarities between the entities we have: they all have 64-bit integer primary keys, and they each have different relations.

Because of these similarities it is only natural to look to an interface to implement what we need when it comes to relationship loading. So when it comes to writing the actual code we can just take the interface we have, and tell it "load in the relationships I want", without necessarily caring how they're loaded in.

Furthermore, we also implemented a light interface to represent our entity models. The actual logic for binding the models to one another is deferred to a function callback. This makes sense to do, since different models could be bound in different ways. However, with the implementation of the `model.Model` interface we were able to implement the `model.Bind` function to have a generic way of binding our models together, assuming that the models have 64-bit integer keys.

We take these function callbacks a step further, and allow for the description of how these models are bound together via the `model.Relation` function and `model.RelationFunc` type. As you can see, when coupled together, callbacks and interfaces can achieve what we want in a way that is fairly idiomatic. And we managed to do this without having to dip into the `reflect` package.
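To make the callback-and-interface idea concrete, here is a minimal, self-contained sketch of binding loaded child models to their parents over 64-bit integer keys. The real `model.Bind` and `model.RelationFunc` in the repository have different signatures; the shapes below are illustrative assumptions, with an in-memory callback standing in for the SQL query.

```go
package main

import "fmt"

// Post and Category mirror the example app's entities; both use int64 keys.
type Post struct {
	ID         int64
	CategoryID int64
	Title      string
}

type Category struct {
	ID    int64
	Posts []*Post
}

// RelationFunc describes how the related models are loaded for a set of
// parent IDs. In the real application this would run a query.
type RelationFunc func(ids ...int64) ([]*Post, error)

// Bind invokes the callback once for all parents, then attaches each loaded
// post to the category whose primary key it references.
func Bind(load RelationFunc, cc ...*Category) error {
	ids := make([]int64, 0, len(cc))
	for _, c := range cc {
		ids = append(ids, c.ID)
	}

	pp, err := load(ids...)
	if err != nil {
		return err
	}

	for _, c := range cc {
		for _, p := range pp {
			if p.CategoryID == c.ID {
				c.Posts = append(c.Posts, p)
			}
		}
	}
	return nil
}

func main() {
	// Stand-in for a SELECT ... WHERE category_id IN (...) query.
	load := func(ids ...int64) ([]*Post, error) {
		return []*Post{
			{ID: 1, CategoryID: 10, Title: "first"},
			{ID: 2, CategoryID: 20, Title: "second"},
		}, nil
	}

	a, b := &Category{ID: 10}, &Category{ID: 20}
	if err := Bind(load, a, b); err != nil {
		panic(err)
	}
	fmt.Println(len(a.Posts), a.Posts[0].Title) // prints: 1 first
}
```

The key point is that `Bind` never inspects the structs with `reflect`; all it needs is the ID-based matching, and the loading strategy stays in the callback.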
## A Note on Generics

I touched on generic behaviour briefly, so I may as well add my two cents on the whole generics situation in Go. When I first approached Go I rather liked the lack of support for generics, and wouldn't have minded if the language continued without generics. This belief mainly arose from the fear of people abusing generics to write god code (code that is so generic and arbitrary it could do anything, and yet is hard to understand). However, I think my fears in this regard are unfounded, mainly because some people like abusing `interface{}` and `reflect` to achieve this instead.

That being said, I cannot deny that certain things would be easier with generics in Go. It is comforting to see some of the [performance gains](https://blog.tempus-ex.com/generics-in-go-how-they-work-and-how-to-play-with-them/) that can be made via the use of generics in Go. So I would welcome generics in Go, and hope that people use them responsibly.

## Why Not Make this a Library

One final thing I should address before wrapping this up is why I didn't take what I have written and turn it into a library. Well, I didn't turn this into a library for a number of reasons. The first being that I don't want to make any assumptions about how people go about modelling their data. For example, you may use something other than an integer for your primary key, perhaps a string, or a byte array. And I think this is another area where ORMs fall short: making assumptions about the data being worked with.

Second of all, this implementation only contains a handful of functions and interfaces. And because of what I mentioned in the first post, it makes a number of assumptions about the data. Finally, since the implementation is only a handful of functions and interfaces, I don't think this would make for a very substantial library.
Also, I would defer to one of the [Go proverbs](https://go-proverbs.github.io/) here too: "A little copying is better than a little dependency".

## Conclusion

I hope the ideas presented throughout this series of posts will help you when it comes to working with SQL relationships in Go. This is something I have struggled with, especially since the solutions out there for modelling data are lacking, perhaps due to the points I made earlier. I also want to say that these ideas are not gospel, just an approach that I have found works for me. As always, feel free to contact me to discuss this further.
andrewpillar
307,890
TwilioHackathon - DevicePolice Available Now!
What I built DevicePolice is a small tool that helps you curb your habit of spending too...
0
2020-04-13T18:41:56
https://dev.to/htnguy/twiliohackathon-devicepolice-app-5h06
twiliohackathon, react, node, showdev
## What I built

DevicePolice is a small tool that helps you curb your habit of spending too much time on your device. As a result of COVID-19 and the quarantine, more people are staying at home and consequently spending more time on their devices. However, staying at home doesn't mean you have to give up all activities. There are plenty of things that you can do in the comfort of your own home.

## How it works

1. A user signs up for an account using their phone number.
2. They get transported to a dashboard or pieces of one at least 😰
3. They set a timer. How many hours, minutes, or seconds they want.
4. A request is sent to the Node and Express server.
5. The server starts a timer independent of what state the client is in => you can close the browser or refresh the page and the timer is still running.
6. When the timer is over, you get a text message to your phone reminding you to get off your device! It also includes a recommendation for something else you can do :smile:

Note: there is also a feature to **delete** your account if you no longer want to use it. We don't keep any of your information (phone number, etc...) after you delete your account 👍

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/jktvz18v38riodhjqs4k.png)

## Demo Link

Check it out! [Device Police](https://dreamy-easley-c0f78a.netlify.com/)

## Link to Code

[frontend](https://github.com/htnguy/device-police-frontend)

If you just want to try it out locally:

```bash
git clone https://github.com/htnguy/device-police-frontend.git
```

[backend](https://github.com/htnguy/device-police-backend)

```bash
git clone https://github.com/htnguy/device-police-backend
```

## How I built it

### Backend

- NodeJS - server runtime for JS
- Express - web framework
- Mongoose - ODM that makes working with MongoDB feel like heaven
- MongoDB - database for storing users, verification tokens, etc...
- Twilio Node Helper - The whole point of this hackathon => makes interacting with the Twilio SMS API much easier.
- JSONWebTokens - authentication
- Bored API - getting a random activity that you can do instead of being on your device.

### Frontend

- React - the wonderful UI library that we all know and love.
- Gatsby - awesome static site generator
- Axios - making API requests from the client

## Deployments

- Backend - Heroku (it is free and super easy to deploy your node app)
- Frontend - Netlify - great hosting for Gatsby and React apps.

Both of these have continuous integration with GitHub => push new changes => redeploy app :smile:

## Walls that I bumped into

- Coming up with an idea - This was the most difficult aspect of this project. I was very conscious of what other people had done, and wanted to make sure this project didn't seem redundant.
- Authentication - this app does not store any emails or other credentials besides your phone number and a password, so coming up with a reliable and secure way of authenticating a user was one of the top priorities.

### Let me know how I can make it better 💡
htnguy
307,952
Depois do Café - Episode 7 - Working in International Environments
In this pocket edition episode we talk about our experience of working in international environments; we discuss what it was like to start working in English and with people of other nationalities, besides Brazilians.
0
2020-04-13T20:02:59
https://dev.to/depoisdocafe/depois-do-cafe-episodio-7-trabalhando-em-ambientes-internacionais-3p3b
podcast, ambientesinternacionais, portugues
---
title: Depois do Café - Episode 7 - Working in International Environments
published: true
description: In this pocket edition episode we talk about our experience of working in international environments; we discuss what it was like to start working in English and with people of other nationalities, besides Brazilians.
tags: podcast, ambientes-internacionais, portugues
---

{% spotify spotify:episode:1Loh4oPJdB0WjWiV1732qf %}

You can also listen on your favourite app: [iTunes](https://podcasts.apple.com/br/podcast/depois-do-caf%C3%A9-com-airton-zanon/id1480842641), [Breaker](https://www.breaker.audio/depois-do-cafe-com-airton-zanon), [Google Podcasts](https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy9lMGU0MDU4L3BvZGNhc3QvcnNz), [Overcast](https://overcast.fm/itunes1480842641/depois-do-caf-com-airton-zanon), [Pocket Casts](https://pca.st/bpyo3i4y), [Radio Public](https://radiopublic.com/depois-do-caf-com-airton-zanon-6r2oLq), [Spotify](https://open.spotify.com/show/4cqX5o40bClwqtYHv9X7Lp)

In this episode we talk about our experience of working in an international environment here in the Netherlands; we discuss what it was like to start working in English and with people of other nationalities, besides Brazilians.

This was an episode in the Pocket Edition series, in which we will release an episode shorter than usual every week during this lockdown, to talk about what we have been doing after coffee.

----------------------

Participants:

Airton Zanon - [@airtonzanon](https://twitter.com/airtonzanon) (twitter)

Elisa Pedrosa Reis - [@liisapedrosa](https://twitter.com/liisapedrosa) (twitter)

----------------

Links mentioned:

Gavin's YouTube channel: https://www.youtube.com/channel/UCskEPRzGlsYHs_a5SJyCXag

Brazilian Expats Slack channel: https://join.slack.com/t/brazil-tech-expats/shared_invite/zt-975uiifq-CxUOV4cdyMgG9~bF5mCvHg

----------------
For more episodes and to learn more about the podcast, visit https://anchor.fm/depoisdocafe. Follow us on twitter: [@dpsdocafe](https://twitter.com/dpsdocafe)
airtonzanon
308,004
April 25 — Daily CodeNewbie Check-in Thread
A daily thread to ask questions, share progress, and stay accountable!
5,940
2020-04-25T12:00:22
https://dev.to/codenewbieteam/april-25-daily-codenewbie-check-in-thread-8lp
codenewbie, checkin, beginners, discuss
---
title: April 25 — Daily CodeNewbie Check-in Thread
published: true
description: A daily thread to ask questions, share progress, and stay accountable!
tags: codenewbie, checkin, beginners, discuss
series: Daily Checkin
---

You are encouraged to use this daily thread to:

- Ask for help (all questions encouraged)
- Explore topics you’re curious about
- Share something you’ve written or read
- Celebrate your wins
- Stay accountable to your goals
- Support your fellow programmers

Happy Coding!

---

_Interested in joining the twice-weekly live [#CodeNewbie Twitter chats](https://twitter.com/search?q=codenewbie)? You can find us [on Twitter](https://twitter.com/search?q=codenewbie) each Wednesday at 9pm ET (1am GMT on Thursday) and Sunday at 2pm ET (6pm GMT)!_
codenewbiestaff
308,041
Readable code
Programmers play a big role in creating an interactive interface that can be used to bring about the...
0
2020-04-13T22:54:13
https://dev.to/asar358/readable-code-1lie
Programmers play a big role in creating interactive interfaces that bring about real outcomes for the activities that human beings are involved in. In each case, specific code is used to ensure that the program suits its targeted purpose, which varies from one app or website to another according to their needs. Changes in the system may result in the need for adjustments to the existing program. Every developer should strive to write code that is flexible to change as a way of improving its usefulness to the user. Since programs are involved in day-to-day interactions with the world, a change in the world dictates a similar change to programs so that they can continue to increase efficiency.

Once another developer is contracted, that developer can have a difficult time trying to understand the code written by the original programmer. Hence, there is a need for developers to write readable code that is easier for other developers to understand.

**Try figuring out this code:**

![bad code](https://miro.medium.com/max/1400/1*waghFCxAfEwyQIDP9qGOeA.png)

Moreover, the original developer may even forget the code over time and hence be unable to make changes when needed. As a result, the required changes may not be made effectively. To ensure the consistency and flexibility of programs in a changing world, there is a need for readable code that every developer can interpret.

Example of an organized code block:

![good code](https://cdn-media-1.freecodecamp.org/images/jaJg1ODAb7FcbQbWaQ8FwegEmTD4IsTtx7Of)

Below are some common practices to consider when writing code.

**Commenting**: Writing comments in your code helps other developers work out what each code block or function does, enabling them to easily navigate to the part that needs updating or bug fixing.
**Naming Scheme**: Naming functions and variables based on their purpose is an effective way to tell developers what each part of the code does. Camel case is a great convention for naming functions after their purpose. For instance, `sayHi()` is a self-explanatory function that greets a user.

**Indentation**: Proper indentation helps organize code in many ways, making it more readable by showing where blocks of code begin and end.

**DRY Principle**: Don't Repeat Yourself. The DRY principle encourages developers to organize their code to make it reusable instead of repeating the same code over and over.

There are many other ways to write readable code besides the four mentioned above. Readable code is easy to interpret, and hence changes can easily be made. New developers should adopt these practices in their programs since readable code is more convenient to work with.
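As a small illustration of the naming and DRY points above, here is a hypothetical Go example: the first function works but says nothing about itself, while the second version names things after their purpose and factors out the shared logic so it is not repeated.

```go
package main

import "fmt"

// Unreadable: a single-letter name tells the next developer nothing.
func f(a []float64) float64 {
	t := 0.0
	for _, v := range a {
		t += v
	}
	return t / float64(len(a))
}

// Readable: sum does one thing and says so, and any other function that
// needs a total can reuse it instead of repeating the loop (DRY).
func sum(values []float64) float64 {
	total := 0.0
	for _, v := range values {
		total += v
	}
	return total
}

// average reads almost like its definition: the sum divided by the count.
func average(values []float64) float64 {
	return sum(values) / float64(len(values))
}

func main() {
	fmt.Println(average([]float64{2, 4, 6})) // prints: 4
}
```

Both versions compute the same thing; the difference is entirely in how quickly another developer can understand and safely change them.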
asar358
308,106
Git commit not created Error: Command failed: git commit -m “Initialize project using Create React App”
I am a Windows user and I have yarn, node js, git, npm installed. In the part when I create my react...
0
2020-04-14T01:52:35
https://dev.to/irruur/git-commit-not-created-error-command-failed-git-commit-m-initialize-project-using-create-react-app-2dgk
react, node, yarn, windows
I am a Windows user and I have yarn, Node.js, git, and npm installed. When I create my React application with the command `npx create-react-app my-app`, it starts creating the app, but almost at the end it shows this error:

    PS C:\React> npx create-react-app my-app

    Creating a new React app in C:\React\my-app.

    Installing packages. This might take a couple of minutes.
    Installing react, react-dom, and react-scripts with cra-template...

    yarn add v1.22.4
    [1/4] Resolving packages...
    [2/4] Fetching packages...
    info fsevents@1.2.12: The platform "win32" is incompatible with this module.
    info "fsevents@1.2.12" is an optional dependency and failed compatibility check. Excluding it from installation.
    info fsevents@2.1.2: The platform "win32" is incompatible with this module.
    info "fsevents@2.1.2" is an optional dependency and failed compatibility check. Excluding it from installation.
    [3/4] Linking dependencies...
    warning "react-scripts > @typescript-eslint/eslint-plugin > tsutils@3.17.1" has unmet peer dependency "typescript@>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta".
    [4/4] Building fresh packages...
    success Saved lockfile.
    success Saved 13 new dependencies.
    info Direct dependencies
    ├─ cra-template@1.0.3
    ├─ react-dom@16.13.1
    ├─ react-scripts@3.4.1
    └─ react@16.13.1
    info All dependencies
    ├─ @babel/plugin-transform-flow-strip-types@7.9.0
    ├─ @babel/plugin-transform-runtime@7.9.0
    ├─ @babel/plugin-transform-typescript@7.9.4
    ├─ @babel/preset-typescript@7.9.0
    ├─ babel-preset-react-app@9.1.2
    ├─ cra-template@1.0.3
    ├─ eslint-config-react-app@5.2.1
    ├─ react-dev-utils@10.2.1
    ├─ react-dom@16.13.1
    ├─ react-error-overlay@6.0.7
    ├─ react-scripts@3.4.1
    ├─ react@16.13.1
    └─ scheduler@0.19.1
    Done in 19.76s.

    Initialized a git repository.

    Installing template dependencies using yarnpkg...
    yarn add v1.22.4
    [1/4] Resolving packages...
    [2/4] Fetching packages...
    info fsevents@2.1.2: The platform "win32" is incompatible with this module.
    info "fsevents@2.1.2" is an optional dependency and failed compatibility check. Excluding it from installation.
    info fsevents@1.2.12: The platform "win32" is incompatible with this module.
    info "fsevents@1.2.12" is an optional dependency and failed compatibility check. Excluding it from installation.
    [3/4] Linking dependencies...
    warning "react-scripts > @typescript-eslint/eslint-plugin > tsutils@3.17.1" has unmet peer dependency "typescript@>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta".
    warning " > @testing-library/user-event@7.2.1" has unmet peer dependency "@testing-library/dom@>=5".
    [4/4] Building fresh packages...
    success Saved lockfile.
    success Saved 20 new dependencies.
    info Direct dependencies
    ├─ @testing-library/jest-dom@4.2.4
    ├─ @testing-library/react@9.5.0
    ├─ @testing-library/user-event@7.2.1
    ├─ react-dom@16.13.1
    └─ react@16.13.1
    info All dependencies
    ├─ @babel/runtime-corejs3@7.9.2
    ├─ @sheerun/mutationobserver-shim@0.3.3
    ├─ @testing-library/dom@6.16.0
    ├─ @testing-library/jest-dom@4.2.4
    ├─ @testing-library/react@9.5.0
    ├─ @testing-library/user-event@7.2.1
    ├─ @types/prop-types@15.7.3
    ├─ @types/react-dom@16.9.6
    ├─ @types/react@16.9.34
    ├─ @types/testing-library__dom@7.0.1
    ├─ @types/testing-library__react@9.1.3
    ├─ css.escape@1.5.1
    ├─ csstype@2.6.10
    ├─ dom-accessibility-api@0.3.0
    ├─ min-indent@1.0.0
    ├─ react-dom@16.13.1
    ├─ react@16.13.1
    ├─ redent@3.0.0
    ├─ strip-indent@3.0.0
    └─ wait-for-expect@3.0.2
    Done in 6.53s.

    Removing template package using yarnpkg...
    yarn remove v1.22.4
    [1/2] Removing module cra-template...
    [2/2] Regenerating lockfile and installing missing dependencies...
    info fsevents@2.1.2: The platform "win32" is incompatible with this module.
    info "fsevents@2.1.2" is an optional dependency and failed compatibility check. Excluding it from installation.
    info fsevents@1.2.12: The platform "win32" is incompatible with this module.
    info "fsevents@1.2.12" is an optional dependency and failed compatibility check. Excluding it from installation.
    warning " > @testing-library/user-event@7.2.1" has unmet peer dependency "@testing-library/dom@>=5".
    warning "react-scripts > @typescript-eslint/eslint-plugin > tsutils@3.17.1" has unmet peer dependency "typescript@>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta".
    success Uninstalled packages.
    Done in 5.67s.

    Git commit not created Error: Command failed: git commit -m "Initialize project using Create React App"
        at checkExecSyncError (child_process.js:630:11)
        at execSync (child_process.js:666:15)
        at tryGitCommit (C:\React\my-app\node_modules\react-scripts\scripts\init.js:62:5)
        at module.exports (C:\React\my-app\node_modules\react-scripts\scripts\init.js:334:25)
        at [eval]:3:14
        at Script.runInThisContext (vm.js:120:20)
        at Object.runInThisContext (vm.js:311:38)
        at Object.<anonymous> ([eval]-wrapper:10:26)
        at Module._compile (internal/modules/cjs/loader.js:1156:30)
        at evalScript (internal/process/execution.js:94:25) {
      status: 128,
      signal: null,
      output: [ null, null, null ],
      pid: 10420,
      stdout: null,
      stderr: null
    }
    Removing .git directory...

    Success! Created my-app at C:\React\my-app
    Inside that directory, you can run several commands:

      yarn start
        Starts the development server.

      yarn build
        Bundles the app into static files for production.

      yarn test
        Starts the test runner.

      yarn eject
        Removes this tool and copies build dependencies, configuration files and scripts into the app directory. If you do this, you can’t go back!

    We suggest that you begin by typing:

      cd my-app
      yarn start

    Happy hacking!
    PS C:\React>

Also, when I try to start the created React application, I get this error:

    PS C:\React> cd .\my-app\
    PS C:\React\my-app> yarn start
    yarn run v1.22.4
    $ react-scripts start
    i 「wds」: Project is running at http://192.168.56.1/
    i 「wds」: webpack output is served from
    i 「wds」: Content not from webpack is served from C:\React\my-app\public
    i 「wds」: 404s will fallback to /
    Starting the development server...
    events.js:287
    throw er; // Unhandled 'error' event
    ^
    Error: spawn cmd ENOENT
        at Process.ChildProcess._handle.onexit (internal/child_process.js:267:19)
        at onErrorNT (internal/child_process.js:469:16)
        at processTicksAndRejections (internal/process/task_queues.js:84:21)
    Emitted 'error' event on ChildProcess instance at:
        at Process.ChildProcess._handle.onexit (internal/child_process.js:273:12)
        at onErrorNT (internal/child_process.js:469:16)
        at processTicksAndRejections (internal/process/task_queues.js:84:21) {
      errno: 'ENOENT',
      code: 'ENOENT',
      syscall: 'spawn cmd',
      path: 'cmd',
      spawnargs: [ '/s', '/c', 'start', '""', '/b', '"http://localhost:3000"' ]
    }
    error Command failed with exit code 1.
    info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
    PS C:\React\my-app>
irruur
308,128
Logging with App Engine and Stackdriver
Logging with App Engine and Stackdriver In days of old, App Engine's logging statement...
0
2020-04-14T02:41:43
https://whistlr.info/2020/appengine-stackdriver-logging/
appengine, gcp
---
title: Logging with App Engine and Stackdriver
published: true
date: 2020-04-08 00:00:00 UTC
tags: appengine, gcp
canonical_url: https://whistlr.info/2020/appengine-stackdriver-logging/
---

# Logging with App Engine and Stackdriver

In days of old, App Engine's logging statement looked something like this:

```go
c := appengine.NewContext(r)
log.Infof(c, "You made a log! Here's a thing: %v", thing)
```

This appeared in Stackdriver's logging page as a line correctly attributed to the corresponding HTTP request. And if you call e.g., `Warningf` or `Errorf`, the severity of the request itself is increased to match (to the maximum level of severity you log). Easy, right? Well, not anymore.

⚠️ While this post uses Go, it applies to all App Engine runtimes. This is especially true as the reason behind App Engine's massive changes is that by removing functionality, more languages are supported 'for free'.

## New App Engine

Since [the go112 runtime](https://cloud.google.com/appengine/docs/standard/go/go-differences), App Engine, as we knew it for 10+ years, has been effectively deprecated. It's now a place to run a HTTP server with a bit of magic around the edges. Because of this, you now have a couple of points to consider when logging. For background, remember that App Engine still generates a default HTTP log for every request.

If you simply follow the documentation on how to log for go112 and above, you'll encounter two fundamental issues:

- the logs you generate won't be associated with the current HTTP request
- and, each log event you generate will appear inside Stackdriver **on its own line**, not grouped together

Of course, you can see these contextless log messages _adjacent_ to incoming HTTP requests, which could be useful for long-lived tasks. But it's now difficult to 'at-a-glance' see logs generated due to a HTTP request in context.
### How To Log

For background, to log from user code, you can either:

- Log to stdout (via the [built-in log package](https://golang.org/pkg/log/)) or by literally printing to stdout or stderr
- or, log to a named log via [Cloud Logging](https://cloud.google.com/logging/docs/setup/go)
  - Confusingly, the named log you choose _can also be named_ stdout or stderr.
  - The name doesn't really matter except that Stackdriver's Log Viewer will show stdout, stderr and your App Engine logs by default.

Additionally, if you print JSON to stdout, it will be treated _as_ a structured log (as if you called Cloud Logging with "stdout" as the log name). [This is badly documented](https://github.com/GoogleCloudPlatform/golang-samples/issues/802), but there are [third-party loggers](https://github.com/andyfusniak/stackdriver-gae-logrus-plugin) that can help.

### Associate Your Logs

App Engine's [documentation](https://cloud.google.com/appengine/docs/standard/go/writing-application-logs#related-app-logs) is vague on how you associate log events with the current HTTP request. Let's go through the steps. To associate your logs with the top-level, default App Engine log for the HTTP request, you'll need to:

- Parse the `X-Cloud-Trace-Context` header, which includes the Trace ID, Span ID and an optional flag
- Insert the Trace ID into a string like "projects/<projectName>/traces/<traceId>"
- Use this string as the `Trace` field in a structured log
- Ensure you're passing a valid `mrpb.MonitoredResource` that describes the log as `Type: "gae_app"`

This will ensure that your log is nested with the HTTP request, based on `Trace` and `Type`.
![Stackdriver logging](https://storage.googleapis.com/hwhistlr.appspot.com/assets/nested-log.png)<figcaption>shows the nested log statement</figcaption>

However, this log will _still exist_ at the top level—it's just being nested by Stackdriver's UI. A simple workaround here is to use a log name that's not shown by default (only stderr, stdout and the App Engine default logs are shown), so it won't clutter your view.

⚠️ You can set the `HTTPRequest` field of the log entry. But this will appear as if a _whole other_ HTTP request has occurred (as it'll use the structured log format and display "GET", the path, etc) for every individual line you log.

## Putting It Together

The code looks roughly like this:

```go
import (
    "fmt"
    "os"
    "strings"

    "cloud.google.com/go/logging"
    mrpb "google.golang.org/genproto/googleapis/api/monitoredres"
)

const (
    projectName = "your-project-name" // you can also use os.Getenv("GOOGLE_CLOUD_PROJECT") for this in prod
)

var (
    lg *logging.Logger
)

func init() {
    ctx := context.Background()
    loggingClient, _ := logging.NewClient(ctx, fmt.Sprintf("projects/%s", projectName))
    resource := logging.CommonResource(&mrpb.MonitoredResource{Type: "gae_app"})
    lg = loggingClient.Logger("your-app-appengine-client-log", resource) // assign to the package-level lg; := would shadow it
}

func httpHandler(w http.ResponseWriter, r *http.Request) {
    traceID := strings.Split(r.Header.Get("X-Cloud-Trace-Context"), "/")[0]
    lg.Log(logging.Entry{
        Trace:    fmt.Sprintf("projects/%s/traces/%s", projectName, traceID),
        Payload:  "Yes, your log message finally goes here",
        Severity: logging.Info,
    })
    defer lg.Flush()
}
```

Of course, you probably want to write a helper. _Simple_. 🤨

### Caveats

You can't modify the severity of the default App Engine HTTP log. While this _is_ [mentioned in the docs](https://cloud.google.com/appengine/docs/standard/go/writing-application-logs#related-app-logs), it's actually an error—there's no way to do this.
You also can't really test this locally, as App Engine no longer runs via the `dev_appserver`, so no magic headers are provided to you. Local code just won't see the `X-Cloud-Trace-Context` header. A quick way to test if you're in production or not is:

```go
projectName := os.Getenv("GOOGLE_CLOUD_PROJECT")
isProd := projectName != ""
```

## Alternatives

App Engine is no longer designed to help you associate simple log output with its default HTTP logging and provide helpful 'at-a-glance' information. So, let's not work against it: another option is to **write our own logs**.

### Parallel To App Engine Logs

As we know, App Engine generates default HTTP logs. They can't be disabled, which means if you insert _additional_ log statements, you might be fooled into thinking that your application has twice the number of requests. However, if you create logs under a different log name, and aggressively use a different search inside Stackdriver (as you can't set a default), it's possible to see just your own log lines.

You'll need to create two different types of logs.

1. The parent log (this maps to the App Engine log we're trying to replicate)
2. Any individual log statement (generated from a classic `Logf`-style function)

Confusingly, you should create the parent entry last, because it contains information you only know at request completion—e.g., the response size and the request latency. You don't _have_ to specify this data, but Stackdriver will show "undefined" for several fields without it (Stackdriver has a UI for custom fields, but it aggressively tries to include an undocumented number of HTTP-related fields regardless).

As I mentioned before, Stackdriver will associate requests with the same Trace ID. Since we're not logging a real request, you can just make one up. I suggest deriving something from the _real_ ID.
Here's how you might log individual log lines (part 2, above):

```go
res := logging.CommonResource(&mrpb.MonitoredResource{
    Type: "gae_app",
})
fakeTraceID := "_" + r.Header.Get("X-Cloud-Trace-Context") // derived by adding a char
clientLogger := loggingClient.Logger("events", res)
err := clientLogger.LogSync(r.Context(), logging.Entry{
    Payload:  "I'm a log message!",
    Severity: logging.Info, // you decide the level
    Trace:    fakeTraceID,
})
```

Next, you can log information about the whole request (part 1, again, typically after your request is complete):

```go
parentLogger := loggingClient.Logger("sane_requests", res)
err := parentLogger.LogSync(r.Context(), logging.Entry{
    HTTPRequest: &logging.HTTPRequest{
        Request:  r,                                   // use incoming *http.Request from http handler
        RemoteIP: r.Header.Get("X-Appengine-User-Ip"), // not in App Engine's *http.Request

        // nb. These can only be found by wrapping your handler.
        ResponseSize: 1234,
        Latency:      time.Millisecond * 1234,
        Status:       200,
    },
    Payload:  payload,         // the top-level payload is totally freeform (JSON or text)
    Severity: logging.Warning, // you decide what this is
    Trace:    fakeTraceID,     // from previous example
})
```

... phew. There's definitely room for a library to help you here, and then as a reminder, you'll have to ask Stackdriver to show you the "sane_requests" log.

### Orthogonal to App Engine logs

App Engine is going to continue generating its own logs. Many of these logs are likely completely boring: requests for static assets, redirections, etc. Rather than trying to replace the built-in behavior, another suggestion is to just create logs for the most interesting of your handlers. You can follow the above guidance to insert HTTP requests but remember that the request is mutable and _something you can fake_—or even not provide **at all**.
While I mentioned the list was undocumented (it is), I've observed that Stackdriver will show the following fields in its default view:

- `httpRequest.requestMethod`: usually 'GET' or 'POST', but could be 'FOO'
- `httpRequest.status`
- `httpRequest.responseSize`: parsed as bytes
- `httpRequest.latency`: parsed as a time
- `httpRequest.userAgent`
- `httpRequest.remoteIp`: only displayed when the log event is expanded

If any of these fields exist, then Stackdriver will try to display _all_ of them. So the choice is up to you: add single text logging events for the progress of your loggable event, and then provide the top-level logger, which can contain a _real_, _fake_ or _no_ HTTP request.

## Troubleshooting

When writing this blogpost, I found that my App Engine's Service Account (in the form [appid@appspot.gserviceaccount.com](mailto:appid@appspot.gserviceaccount.com)) didn't have permissions to write logs. I think this is because my app is actually quite old–it predates the Cloud Console.

![The Google Cloud IAM page](https://storage.googleapis.com/hwhistlr.appspot.com/assets/logwriter.png)<figcaption>adding a service account to the right groups</figcaption>

If you see security or other errors, you might need to add the service account to your project (it doesn't always show on the IAM page) and give it "Logs Writer" access.

## Parting Thoughts

None of these solutions are ideal. There is an [official bug](https://github.com/googleapis/google-cloud-go/issues/720) from 2017 which I referenced to write this post. Since this behavior remains the same in early 2020, I don't think there are any plans to simplify logging again.
samthor
308,131
A Faster Way to Track Down Bugs
Most of my time working as a software engineer is spent tracking down and fixing bugs. While a good p...
0
2020-04-14T02:41:18
https://dev.to/blackgirlbytes/a-faster-way-to-track-down-bugs-2aod
github, git, productivity, tutorial
*Most of my time working as a software engineer is spent tracking down and fixing bugs. While a good portion of the bugs are my own doing, some were created by other engineers, which makes the task more ambiguous. As the codebase grows, finding bugs becomes increasingly difficult. It's a time sink, and it's super boring to manually click through every suspicious commit to find the culprit. However, you can automate this process by using a tool called git bisect.*

## What is git bisect?

Git bisect is based on a binary search algorithm, but don't worry: you won't need to dust off your old data structures textbook! Git does all the work of repeatedly dividing the commits in half, checking out each candidate, and helping you identify when the change was introduced.

## How do you use git bisect?

To let git know you want to start the process, run:

```
git bisect start
git bisect good <commit SHA, tag, or branch>
git bisect bad <commit SHA, tag, or branch>
```

Mark a commit you know is bug-free as "good" and one where the bug exists (often `HEAD`) as "bad". Git will check out a commit between the two commits you provided and ask you to determine whether it is good or bad, so play around with that copy of your code and see if the bug still exists. If it does, run `git bisect bad`; if it doesn't, run `git bisect good`. Git will keep narrowing down the commits until it identifies the culprit, giving you an easy process of elimination. Using this approach, I get to spend less brainpower finding the bug and more on fixing it.

## Did you find the bug?

If so, run:

```
git bisect reset
```

This tells git to stop bisecting. Then you can check out the offending commit and start refactoring.

*Warning: for this to work well, you will need a linear commit history.* Without a linear history, the log isn't a true reflection of when code changes were made. To achieve one, use git rebase and squash-and-merge.

Comment below to tell me if it worked for you! I originally wrote this post on https://www.blackgirlbytes.com/
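If you can script the good/bad check, git can even answer the good/bad question for you via `git bisect run`. Here's a self-contained demo (in a throwaway repo; names like `check.sh` are just examples) where commit 7 of 10 introduces the "bug":

```shell
# Build a throwaway repo with 10 commits; commits 7+ contain the "bug".
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git config user.email you@example.com
git config user.name you

for i in 1 2 3 4 5 6 7 8 9 10; do
  if [ "$i" -lt 7 ]; then echo ok > app.txt; else echo bug > app.txt; fi
  git add app.txt
  git commit -qm "commit $i"
done

# check.sh exits 0 for a good commit, non-zero for a bad one.
printf '#!/bin/sh\ngrep -q ok app.txt\n' > check.sh
chmod +x check.sh

# bad = HEAD, good = the very first commit; git bisect run does the rest.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
result=$(git bisect run ./check.sh)
git bisect reset
echo "$result"
```

The output names commit 7 as "the first bad commit", with no manual checking out or testing on your part.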
blackgirlbytes
308,155
Nuxt, Meet Vuetify
This article is part of a series on my experiences with Nuxt.js that I built into the nuxt-toolkit by...
5,946
2020-04-14T03:27:20
https://dev.to/overscoremedia/nuxt-meet-vuetify-58fj
nuxt, vue, javascript, vuetify
This article is part of a series on my experiences with Nuxt.js that I built into the [nuxt-toolkit](https://github.com/overscore-media/nuxt-toolkit) by [OverScore Media](https://overscore.media)

{% github https://github.com/overscore-media/nuxt-toolkit no-readme %}

See a live example at https://nuxt-toolkit.overscore.media! :]

-----

Well, Nuxt is great. It's my favourite JS framework for the web. Based on the awesome Vue.js, you can't really go wrong. Nuxt is my go-to for building websites and web apps alike these days, since it can also function as a Static Site Generator. You probably already know about Nuxt.js, so let's begin.

Now, let's add support for the wonderful Vuetify CSS/Vue framework to our Nuxt app.

{% github https://github.com/vuetifyjs/vuetify no-readme %}

If you're using `yarn create nuxt-app`, you can easily select Vuetify.js from the list of options for UI frameworks during the interactive installation process. It could take a while, but the process is fairly straightforward. I personally recommend the following options, but your mileage may vary:

```
? Choose programming language JavaScript
? Choose the package manager Yarn
? Choose UI framework Vuetify.js
? Choose custom server framework None (Recommended)
? Choose Nuxt.js modules Axios, Progressive Web App (PWA) Support
? Choose linting tools ESLint, Prettier, Lint staged files, StyleLint
? Choose test framework None
? Choose rendering mode Universal (SSR)
? Choose development tools jsconfig.json (Recommended for VS Code)
```

Frankly, I'd choose Jest as a test framework (if I felt like I needed it). Once that process is all done, you'll have a bunch of defaults available to you (most of which are really quite good). One caveat is that the default font is Roboto, and I actually haven't been able to effectively change it, which is a bit of a shame (though I don't mind Roboto, so I'm not complaining all that much). There really isn't much more to say at this point.
Vuetify's documentation is pretty comprehensive (though you'll likely be looking things up every few minutes until you get used to it). I particularly like the `v-card`, `v-icon`, `v-stepper`, `v-row`, `v-col`, `v-dialog`, `v-btn`, and `v-divider` components. Check 'em out if you have the chance. If I'm not mistaken, the `@nuxtjs/vuetify` module imports basically all of the Vuetify components, so you'll have access to the full gamut of its capabilities. Vuetify is OP, IMO, so it gives you plenty to work with, and looks great!

That aside, though, don't expect building with Vuetify to be a complete breeze. It's an adventure, if you know what I mean. One particular nuisance, IMO, is that a lot of its CSS uses `!important` rules, so you'll probably end up having to make your own classes with more specificity than Vuetify's... Though [https://vuetifyjs.com/en/customization/theme/](https://vuetifyjs.com/en/customization/theme/) is cool, as it offers a ton of customization options out-of-the-box (configured in `nuxt.config.js` in your Nuxt app).

Overall, Vuetify's an excellent choice for really any web project, so by all means take it for a spin. Love it or hate it, you can't deny that it's powerful and useful in the right hands.

-----

## Some Iconography (Optional and hacky)

Something I noticed about Vuetify is that it loads in an icon font from a CDN: either Material Design Icons or Material Icons, I kinda forget which (yes, there's a difference; the former includes some non-Google community icons, and it's our favourite at OverScore). Personally, I doubt you'll need to change this, but if you do, this is how you can load in icons programmatically.

### Material Design Icons from @mdi/js

#### Step 1: Disable Loading of Icon Font from CDN

In `nuxt.config.js`, add the following code to the `vuetify: { }` object:

```js
defaultAssets: {
  icons: false
},
```

#### Step 2: Load in the Icon Package of your Choice

This is where you can BYOI (Bring Your Own Icons).
Pick your favourite icon set, assuming it has an NPM package with SVG paths you can load in dynamically (like [@mdi/js](http://npmjs.com/@mdi/js)). Then add it to your dependencies list with something like `yarn add @mdi/js` or `npm install --save @mdi/js`.

#### Step 3: Put 'em in your Components

Granted, you really don't have to do it this way; the default does work, and it's actually less work. You'll also probably end up breaking some Vuetify components that expect icons... But this way of doing it gives you a bit more flexibility in terms of what you load in. Since Webpack supports [tree-shaking](https://dev.to/hoangbkit/what-is-tree-shaking-28bp) (and assuming the icon package you use does too), you can load in just the icons you need and are using: no more, no less.

Here's my code (you'll have to repeat this same kind of thing for every component; I never said it was easier/better):

```vue
<template>
  <!-- -->
  <v-app-bar :clipped-left="clipped" fixed app>
    <v-icon
      class="mr-5"
      color="#C9C3B2"
      @click.stop="drawer = !drawer"
    >
      {{ burgerSVG }}
    </v-icon>
    <v-toolbar-title v-text="title" />
  </v-app-bar>
  <!-- -->
</template>

<script>
import { mdiMenu } from '@mdi/js'

export default {
  // ...
  data () {
    return {
      burgerSVG: mdiMenu
    }
  }
}
// ...
</script>
```

Basically, you load in an icon, return it as a named `data` variable, and insert it inside a `<v-icon>` component. Pretty cool, huh? Or not. Take it or leave it. Hopefully it's helpful.

TTYL all. Stay `{ home, safe }` everybody, and keep on coding!
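P.S. If you do dig into the theme customization page linked earlier, here's roughly the shape those options take in `nuxt.config.js` under the `@nuxtjs/vuetify` module. The colour values below are placeholders I made up, not a recommended palette:

```js
// nuxt.config.js -- a sketch of @nuxtjs/vuetify theme options (example values)
export default {
  vuetify: {
    theme: {
      dark: false, // flip to true for a dark theme by default
      themes: {
        light: {
          primary: '#1976D2',
          accent: '#82B1FF',
          error: '#FF5252'
        }
      }
    }
  }
}
```

Anything you set here becomes available app-wide as `color="primary"`, `color="accent"`, etc. on Vuetify components.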
mtpiercey