Columns: id (int64, 5–1.93M) · title (string, 0–128 chars) · description (string, 0–25.5k chars) · collection_id (int64, 0–28.1k) · published_timestamp (timestamp[s]) · canonical_url (string, 14–581 chars) · tag_list (string, 0–120 chars) · body_markdown (string, 0–716k chars) · user_username (string, 2–30 chars)
253,839
Laptops?
Looking for a used or new cheap MacBook if anyone knows somebody selling theirs or if anyone has one...
0
2020-02-02T20:46:55
https://dev.to/jasminelad16/laptops-29ef
hardware, laptops, ios, deals
Looking for a used or new cheap MacBook, if anyone knows somebody selling theirs or if anyone has one to lend. That's all that's missing to start my bootcamp.
jasminelad16
253,889
First WoMakersCode Programming Logic Workshop in Rio de Janeiro
Cover photo credits: Érika Alves On 01/02 we held our community's first programming logic w...
0
2020-02-02T22:16:34
https://dev.to/womakerscode/primeira-oficina-de-logica-de-programacao-womakerscode-no-rio-de-janeiro-2de2
womenintech, wecoded, womakerscode
_Cover photo credits: Érika Alves_ On 01/02 we held the first programming logic workshop of the WoMakersCode RJ community. The free event took place at Sesc Tijuca with the support of DigitalOcean and Alura. ![WoMakersCode and DigitalOcean stickers](https://i.imgur.com/vXEGC8t.jpg) The workshop was very special for us. Because seats were limited we could select only 25 women, yet we received **156 applications** in total, which was a big - and happy! - surprise. Choosing the participants was not easy, because we wanted to pick them all! We heard amazing stories from women ranging from lawyers, architects and journalists to saleswomen and students, all looking for new skills, a career change, or simply a complement to their studies. You can never know too much! Lots of cool things happened during the day, with a few highlights: two women came from São Paulo to attend the event - and at the end we found out that one was from Colombia and the other from Peru! We also had a 15-year-old participant who told us she wants to study programming and follow this path once she finishes high school. Her sister came along just to keep her company, but in the end she sat down at one of the computers and joined in too. Finally, one of the selected women came from Acre; luckily, the event fell right in the period she was on vacation, so she simply combined business with pleasure :) ![Luanda presenting the WoMakersCode community](https://i.imgur.com/P7JMKMn.jpg) _Photo: Amanda Azevedo_ The event started at 9:30 am with our volunteer Luanda Pereira presenting the community, running a short icebreaker among the participants, and then introducing the team of volunteers on duty that day: Gabrielly de Andrade, Daiane Alves, Adrielle Ribeiro, Mariana Coelho, Hillary Sousa and Aline Bezzoco.
After the opening presentation we had a chat with volunteers Gabrielly, Adriele and Hillary about the many branches of the technology field. Next, Daiane and Adriele presented what programming logic is and why it matters in technology, and introduced the pseudo-language used in the workshop, Portugol, through the Portugol Studio IDE. ![Gabrielly talking about the technology field and its branches](https://i.imgur.com/vphqsHF.jpg) ![Adriele teaching how to create functions in Portugol](https://i.imgur.com/7WohA2W.jpg) Along the way, exercises were handed out and the mentors helped the participants, answering questions and assisting them at every step. In the Dojo, our volunteer Gabrielly led the participants through the challenge of building a program called Megasena. We saw participants helping one another, which is gratifying for us women, since we value unity and female leadership in technology. ![Mentors helping the participants](https://i.imgur.com/10E3flG.jpg) ![Mentors helping the participants](https://i.imgur.com/DX5B9TU.jpg) _Photo: Mariana Coelho_ ![Two participants during the Dojo](https://i.imgur.com/LlgiHYQ.jpg) Finally, it was raffle time! We raffled off some t-shirts provided by DigitalOcean, two Alura courses and a ticket to PHPWomen. We thank all the volunteers who stepped up to organize yet another community event, from logistics to content and mentoring - you were amazing! ![Workshop volunteers](https://i.imgur.com/AqmI3XX.jpg) Behind every piece of code there are people, and they have stories to tell. We hope WoMakersCode has been a happy chapter in their lives, and that those who fell in love with programming and chose it as their path can keep studying and go on to work as developers. We hope to see you all again at our next events!
For those who couldn't attend this first edition: stay tuned, because we are considering running another edition later this year :) Follow our [social networks](https://linktr.ee/womakerscode) to keep up with everything happening in the community! **About WoMakersCode** WoMakersCode is a non-profit initiative that promotes female leadership in technology through professional and economic development. We believe that empowering means encouraging participation and collaborative learning and, above all, giving women a voice. We offer the community workshops, events and debates focused on the technology market, oriented toward technical training and the strengthening of personal skills. We work to prepare and encourage women to invest in their careers and pursue their dreams.
alinebezzoco
254,019
Patterns for resilient architecture & optimizing web performance
TL;DR notes from articles I read today. Patterns for resilient architecture: Embracing fail...
0
2020-02-03T13:00:20
https://insnippets.com/tag/issue88/
todayilearned, architecture, webperf, serverless
*TL;DR notes from articles I read today.* ### [Patterns for resilient architecture: Embracing failure at scale](http://bit.ly/38YQm8P) - Build your application to be redundant, duplicating components to increase overall availability across multiple availability zones or even regions. To support this, ensure you have a stateless application and perhaps an elastic load balancer to distribute requests. - Enable auto-scaling not just for AWS services but application auto-scaling for any service built on AWS. Determine your auto-scaling technology by the speed you tolerate - preconfigure custom golden AMIs, avoid running or configuring at startup time, replace configuration scripts with Dockerfiles, or use container platforms like ECS or Lambda functions. - Use infrastructure as code for repeatability, knowledge sharing, and history preservation. Keep the infrastructure immutable: replace components on every deployment instead of updating live systems, always starting with a new instance of every resource (the immutable server pattern). - As a stateless service, treat all client requests independently of prior requests and sessions, storing no information in local memory. Share state with any resources within the auto-scaling group using in-memory object caching systems or distributed databases.
 *[Full post here](http://bit.ly/38YQm8P), 10 mins read* --- ### [Tips to speed up serverless web apps in AWS](http://bit.ly/384p1li) - Keep Lambda functions warm by invoking the Ping function using AWS CloudWatch or Lambda with Scheduled Events and using the Serverless WarmUP plugin. - Avoid cross-origin resource sharing (CORS) by accessing your API and frontend from the same origin point. Set the origin protocol policy to HTTPS when connecting the API Gateway to AWS CloudFront, configure both API Gateway and CloudFront to the same domain, and configure their routing accordingly. - Deploy API gateways as REGIONAL endpoints. - Optimize the frontend by compressing files such as JavaScript and CSS with GZIP before uploading to S3. Use the correct `Content-Encoding: gzip` header, and enable Compress Objects Automatically in CloudFront. - Use the appropriate memory size for Lambda functions; CPU allocation scales with the memory setting.
 *[Full post here](http://bit.ly/384p1li), 4 mins read* --- ### [Optimizing website performance and critical rendering path](http://bit.ly/36Nrcbz) - Many things can lead to high rendering times for web pages - the amount of data transferred, the number of resources to download, length of the critical rendering path (CRP), etc. - To minimize data transferred, remove unused parts (unreachable JavaScript functions, styles with selectors not matching any element, HTML tags always hidden with CSS) and remove all duplicates. - Reduce the total count of critical resources to download by setting media attributes for all links referencing stylesheets and making some styles inlined. Also, mark all script tags as async (not parser blocking) or defer (evaluated at end of page load). - You can shorten the CRP with the approaches above, and also rearrange the code amongst files so that the styles and scripts of above-the-fold content load before you parse or render anything else. - Keep style tags and script tags close to each other in HTML (linewise) to help the browser preloader, and batch HTML updates to avoid multiple layout changes (such as those triggered by window resizing or device orientation). *[Full post here](http://bit.ly/36Nrcbz), 8 mins read* --- *[Get these notes directly in your inbox every weekday by signing up for my newsletter, in.snippets().](https://mailchi.mp/appsmith/insnippets?utm_source=devto&utm_medium=post08&utm_campaign=is)*
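A minimal sketch of the GZIP tip from the serverless notes above (the file and bucket names are made up, and the `aws s3 cp` upload is left as a comment since it needs credentials):

```shell
# Compress a frontend asset before uploading it to S3 (hypothetical file).
printf 'console.log("hello");\n' > app.js
gzip -9 -k app.js   # -9: best compression, -k: keep the original file
# The upload would then set the encoding header, e.g.:
#   aws s3 cp app.js.gz s3://my-bucket/app.js \
#       --content-encoding gzip --content-type application/javascript
gzip -t app.js.gz && echo "gzip ok"
```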
mohanarpit
254,105
[solution] Custom fields right at the heart
Hi, extreme joomlers! A joomler friend, who will recognize himself, asked me how to...
0
2020-02-05T17:11:18
https://dev.to/mralexandrelise/solution-des-champs-personnalises-en-plein-coeur-3k2o
webdev, tutorial, joomla
--- title: [solution] Custom fields right at the heart published: true date: 2019-11-26 09:41:31 UTC tags: webdev,tutorial,joomla canonical_url: --- Hi, extreme joomlers! A joomler friend, who will recognize himself, asked me how to integrate $this->item->jcfields into a module like mod\_articles\_latest. I accepted the challenge, and I'm sharing the result with you. The Joomla! community. The joomler family. Without further ado, discover the code example to use, well commented, to pull off the challenge. Good luck, and see you soon for more tips. [See how it's done](https://gist.github.com/alexandreelise/e04d417c9f911ce2ab2a3e931142e89b "Code example for the articles latest module")
mralexandrelise
254,112
Docker containers deployment in ECS EC2
Hello community, I am looking for deployment best practices. I want to deploy Docker images in ECS c...
0
2020-02-03T09:52:28
https://dev.to/mourik/docker-containers-deployment-in-ecs-ec2-2m2m
terraform, cicd, ecs, aws
Hello community, I am looking for deployment best practices. I want to deploy Docker images to an ECS cluster, and I want to know if Terraform is a good way to perform container deployments in ECS, or whether there is a better way suited to this kind of deployment. Thanks in advance.
mourik
254,200
Build React Native Fitness App #8 : [iOS] Firebase Facebook Login
This tutorial is the eighth chapter of a series in which we build a fitness tracker, an app for tracking workouts, diets...
4,584
2020-02-03T13:47:15
https://kriss.io/build-react-native-fitness-app-8-ios-firebase-facebook-login/
reactnativefirebase, firebase
--- title: Build React Native Fitness App #8 : [iOS] Firebase Facebook Login published: true date: 2020-02-03 05:37:00 UTC tags: react-native-firebase,firebase canonical_url: https://kriss.io/build-react-native-fitness-app-8-ios-firebase-facebook-login/ cover_image: https://cdn-images-1.medium.com/max/1024/0*Zi-H6NonKNlpenCh.png series: Build React native Fitness app --- This tutorial is the eighth chapter of a series in which we build a fitness tracker. The app is used to track workouts, diets, and health activities, analyze the data, and display suggestions; the ultimate goal is to create food and health recommendations using machine learning. We start by creating an app that users want to use and connecting it to Google Health and Apple Health, gathering everything into a dataset that will later be used to train a model. I started with the ultimate goal; still, we begin by creating a React Native app and setting up screen navigation with React Navigation, inspired by a [React native template](http://instamobile.io/) from instamobile. You can view the [previous chapter here](https://kriss.io/category/react-native-fitness/). In this chapter we want to add more ways for the user to authenticate to the app. First we implement Facebook login; in this part we deal with iOS, and we will deal with Android in the next episode. First, we need to install the react-native-fbsdk package

```
yarn add react-native-fbsdk
```

Next, we follow the official document and install the CocoaPods dependencies

```
cd ios ; pod install
```

Now we are done with the React Native side of the setup. #### Configure on Xcode Next, open the project in Xcode and add the code below to Info.plist

```
<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>fb6598530980****</string>
    </array>
  </dict>
</array>
<key>FacebookAppID</key>
<string>65985309808****</string>
<key>FacebookDisplayName</key>
<string>FitnessMaster</string>
```

The result looks like this ![](https://cdn-images-1.medium.com/max/809/0*ZvwaRaK3jssh3cQg.png) #### React native part Next, we come back to
the React Native code. In LoginScreen.js, import the react-native-fbsdk package

```
import {LoginManager, AccessToken} from 'react-native-fbsdk';
```

then add a function to handle the authentication data

```
async FacebookLogin() {
  const result = await LoginManager.logInWithPermissions([
    'public_profile',
    'email',
  ]);
  if (result.isCancelled) {
    throw new Error('User cancelled the login process');
  }
  const data = await AccessToken.getCurrentAccessToken();
  if (!data) {
    throw new Error('Something went wrong obtaining access token');
  }
  const credential = firebase.auth.FacebookAuthProvider.credential(
    data.accessToken,
  );
  await firebase.auth().signInWithCredential(credential);
  alert('Registration success');
  setTimeout(() => {
    navigation.navigate('HomeScreen');
  }, 2000);
}
```

Here, we've implemented the FacebookLogin() function as an asynchronous function. First, we activate the Facebook login. Then, we get an access token in return, which we save to Firebase. When Firebase returns the login credentials, we manually authenticate to Firebase, and after ensuring that everything succeeded, we navigate to the Home screen. Next, we need to add the FacebookLogin() function to the onPress event of the SocialIcon component, which represents the Facebook login button:

```
<TouchableOpacity onPress={() => this.FacebookLogin()}>
  <SocialIcon type="facebook" light />
</TouchableOpacity>
```

#### Activate the Firebase login method Lastly, we need to activate Facebook authentication on Firebase and then add the App ID and App Secret ![](https://cdn-images-1.medium.com/max/1021/0*buBvS57YIZ9i9vFU.png) Now we can log in to our app. #### Conclusion In this chapter, we learned how to add Facebook login on the iOS side; in the next chapter we will add Facebook login again, but on the Android side. _Originally published at _[_Kriss_](https://kriss.io/build-react-native-fitness-app-8-ios-firebase-facebook-login/)_._ * * *
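To make the control flow of FacebookLogin() easier to follow, here is a sketch of its cancellation/error paths with stand-ins for `LoginManager` and `AccessToken`, so it can be run without the SDK or Firebase (the stubs and their return values are made up for illustration):

```javascript
// Stand-ins that mimic react-native-fbsdk's shape; here we simulate a cancel.
const LoginManager = {
  logInWithPermissions: async () => ({ isCancelled: true }),
};
const AccessToken = {
  getCurrentAccessToken: async () => null,
};

async function facebookLoginSketch() {
  const result = await LoginManager.logInWithPermissions(['public_profile', 'email']);
  if (result.isCancelled) {
    throw new Error('User cancelled the login process');
  }
  const data = await AccessToken.getCurrentAccessToken();
  if (!data) {
    throw new Error('Something went wrong obtaining access token');
  }
  return data.accessToken; // the real code exchanges this for a Firebase credential
}

facebookLoginSketch().catch(e => console.log('caught:', e.message));
```

Because the function throws on both failure paths, the caller can surface either message to the user from a single catch block.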
kris
254,231
Heap Sort
Heap sort is useful for getting the min/max item as well as for sorting. We can build the tree as a min heap or...
0
2020-02-03T14:57:58
https://dev.to/drevispas/heap-sort-2fmb
algorithms, programming, java
Heap sort is useful for retrieving the min/max item as well as for sorting. We can build the tree as a min heap or a max heap. A max heap, for instance, keeps every parent no smaller than its children. The following code is for a max heap. Slot 0 of the array is unused, so the elements live at indices 1 through __top__, and __top__ is identical to the heap's size (hence the array is allocated with `sz + 1` slots). `pop()` removes the maximum, parks it just past the end of the heap, and returns it, so popping repeatedly sorts the array in place.

```java
class Heap {
    private int[] arr;
    private int top;

    public Heap(int sz) {
        arr = new int[sz + 1]; // slot 0 unused: the root lives at index 1
        top = 0;
    }

    public void push(int num) {
        arr[++top] = num;
        climbUp(top);
    }

    public int pop() {
        int max = arr[1];   // the root holds the largest value
        arr[1] = arr[top--];
        climbDown(1);
        arr[top + 1] = max; // park the popped value past the end
        return max;
    }

    public int size() {
        return top;
    }

    private void climbUp(int p) {
        if (p <= 1 || arr[p] <= arr[p / 2]) return;
        swapAt(p, p / 2);
        climbUp(p / 2);
    }

    private void climbDown(int p) {
        int np = p * 2;
        if (np > top) return;
        if (np < top && arr[np + 1] > arr[np]) np++; // pick the larger child
        if (arr[p] >= arr[np]) return;
        swapAt(p, np);
        climbDown(np);
    }

    private void swapAt(int p, int q) {
        int t = arr[p];
        arr[p] = arr[q];
        arr[q] = t;
    }
}
```
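A usage sketch of the max heap (the class is repeated here, slightly condensed, so the snippet is self-contained; note it sizes the array `sz + 1` for the 1-based indexing and has `pop()` return the removed maximum): popping repeatedly yields the values in descending order.

```java
public class HeapDemo {
    static class Heap {
        private int[] arr;
        private int top;
        Heap(int sz) { arr = new int[sz + 1]; top = 0; } // slot 0 unused
        void push(int num) { arr[++top] = num; climbUp(top); }
        int pop() {
            int max = arr[1];
            arr[1] = arr[top--];
            climbDown(1);
            arr[top + 1] = max; // parked past the end: in-place sort
            return max;
        }
        int size() { return top; }
        private void climbUp(int p) {
            if (p <= 1 || arr[p] <= arr[p / 2]) return;
            swapAt(p, p / 2);
            climbUp(p / 2);
        }
        private void climbDown(int p) {
            int np = p * 2;
            if (np > top) return;
            if (np < top && arr[np + 1] > arr[np]) np++; // larger child
            if (arr[p] >= arr[np]) return;
            swapAt(p, np);
            climbDown(np);
        }
        private void swapAt(int p, int q) { int t = arr[p]; arr[p] = arr[q]; arr[q] = t; }
    }

    // Push every value, then pop until empty: the pops come out descending.
    static String sortedDescending(int[] nums) {
        Heap h = new Heap(nums.length);
        for (int n : nums) h.push(n);
        StringBuilder out = new StringBuilder();
        while (h.size() > 0) {
            out.append(h.pop());
            if (h.size() > 0) out.append(' ');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(sortedDescending(new int[]{3, 1, 4, 1, 5})); // 5 4 3 1 1
    }
}
```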
drevispas
254,697
Docker stop all processes on Github Actions
How to show all Docker running containers, stop and remove them on Github Actions
0
2020-02-04T01:18:56
https://dev.to/saulsilver/docker-stop-all-processes-on-github-actions-533j
cicd, docker, intermediate
--- title: Docker stop all processes on Github Actions published: true description: How to show all Docker running containers, stop and remove them on Github Actions tags: CI/CD, Docker, intermediate --- ### _Side note_ _Advancing in CI/CD can be challenging due to the lack of intermediate tutorials/blogs about this part of development. It's easy to find simple "how to set up your workflow with minimum jobs" articles that don't really help in production-level projects. I wonder why._ ### Problem I have Docker in one of the projects I am working on and wanted to integrate CI/CD. So I went with Github Actions v2 and started to check how to handle Docker and Docker Compose on Github Actions. There aren't many differences between Github Actions and other deployment workflow providers (e.g. CircleCI, Travis), but I would say the major difference is [Actions](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/about-actions). In the Continuous Deployment step, I need to access the server and stop all running Docker processes. Of course, I cannot stop a docker process by container ID, because the ID is dynamic and changes whenever the process is run. ### Example Consider we have these Docker containers running

```sh
$ docker ps -a
CONTAINER ID   IMAGE                        COMMAND               CREATED          STATUS          PORTS           NAMES
4c01db0b339c   ubuntu:12.04                 bash                  17 seconds ago   Up 16 seconds   3300-3310/tcp   webapp
d7886598dbe2   crosbymichael/redis:latest   /redis-server --dir   33 minutes ago   Up 33 minutes   6379/tcp        redis,webapp/db
```

### Workaround A static identifier is needed to stop the process, so I could refer to each container by its name and stop it.

```sh
docker stop webapp redis
docker rm webapp redis
```

That's fairly okay if you have a few containers running and you are sure other docker containers don't exist.
But it's an assumption I'm not willing to take, and it's also buggy if I add a new container later on and forget to add it to the workflow script. ### Common solution Some of you might be screaming already with this holy grail command.

```sh
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
```

This basically stops/removes all existing container processes no matter what their identifiers are. Nice! Now push that to our Github `remote` and wait for the Github Actions Runner to finish the workflow script, but then the Runner shows the red cross :x: The build has failed with an error about `docker stop requires a parameter`. Github Actions uses the `$` to call variables (e.g. [`GITHUB_REPOSITORY`](https://help.github.com/en/actions/automating-your-workflow-with-github-actions/using-environment-variables#default-environment-variables)), so it looks for a variable called `docker ps...`, but it's undefined since I didn't set it as an environment variable in the workflow. We could escape the Action's default `$` with another `$` so the command becomes

```sh
docker stop $$(docker ps -a -q)
```

However, this also didn't work. I looked it up a bit and didn't find a good explanation for it, but I was spending too much time on this. So I moved on; after all, stopping docker processes isn't the main task of the whole workflow. ### Working solution After prioritizing my tasks for the project, I decided to find a different solution and quickly overcome this problem. Only then, I stumbled upon this piece of bash script which I hadn't seen before.

```sh
ids=$(docker ps -a -q)
for id in $ids
do
  echo "$id"
  docker stop $id && docker rm $id
done
```

The first line assigns the outcome of `docker ps ...` to a variable called `ids`. Then we make a `for` loop to iterate through all the ids, and for each id we stop and remove the process with that id.
Github Actions Runner passed this without any errors so I was happy to have learnt a new trick and move on with my other tasks to finish up the workflow.
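For what it's worth, the same clean-up loop can also be written with `xargs` (a sketch; `-r` is the GNU xargs flag that skips running the command when the input is empty). The last line demonstrates the piping pattern with a stand-in `echo` and the two ids from the example above, so it runs without Docker:

```shell
# The real clean-up would be:
#   docker ps -a -q | xargs -r docker stop
#   docker ps -a -q | xargs -r docker rm
# The same piping pattern, with a stand-in command so it runs anywhere:
printf '%s\n' 4c01db0b339c d7886598dbe2 | xargs -r -n1 echo stopping
```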
saulsilver
254,277
Design Patterns for JavaScript Applications
What are the design patterns that you use while developing your JavaScript applications? Be it Fronte...
0
2020-02-03T16:30:26
https://dev.to/jsandfriends/design-patterns-for-javascript-applications-2fj
discuss, javascript
What are the design patterns that you use while developing your JavaScript applications? Be it Frontend or Middleware.
baskarmib
254,296
How to fetch subcollections from Cloud Firestore with React
More data! First, I add more data to my database. Just to make things more realistic. For...
4,654
2020-02-03T16:54:07
https://dev.to/rossanodan/how-to-fetch-subcollections-from-cloud-firestore-with-react-3n93
google, firebase, firestore, react
--- title: How to fetch subcollections from Cloud Firestore with React published: true description: tags: google, firebase, firestore, react series: Getting started with Firebase Cloud Firestore --- # More data! First, I add more data to my database, just to make things more realistic. For each cinema I add a subcollection `movies` in which I add some `movies`. Each movie has this info

```
name: string,
runtime: string,
genre: string,
release_date: timestamp
```

In Firestore, data can also have a different structure (NoSQL's power!) but, for simplicity, I follow the canonical way. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/k9n8b9rrl4moqck3aczy.png) I add one movie for the first cinema and two movies for the second one. # Fetching the subcollection I make the cinemas list clickable, so when I click an item I load the movies scheduled for that specific cinema. To do this, I create a function `selectCinema` that performs a new `query` to fetch a specific subcollection.

```typescript
...
const selectCinema = (cinema) => {
  database.collection('cinemas').doc(cinema.id).collection('movies').get()
    .then(response => {
      response.forEach(document => {
        // access the movie information
      });
    })
    .catch(error => {
      setError(error);
    });
};
...
{cinemas.map(cinema => (
  <li key={cinema.id} onClick={() => selectCinema(cinema)}>
    <b>{cinema.name}</b> in {cinema.city} has {cinema.total_seats} total seats
  </li>
))}
```

At this point it is easy to manage the show/hide logic with React using the `state`. {%gist https://gist.github.com/rossanodan/e97e19d31000019a3b705412662eca46 %} # A working demo Bare-bones but working. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/dh613zks8s877vy0cxih.gif)
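To make the "access the movie information" step concrete, here is a sketch of shaping the returned documents into plain objects. The `response` constant below is a stand-in mimicking the QuerySnapshot that `.get()` resolves with (it exposes `forEach`, and each doc exposes `id` and `data()`); the movie values are invented for the example:

```javascript
// Stand-in for the QuerySnapshot returned by the subcollection query.
const response = [
  { id: 'movie-1', data: () => ({ name: 'Movie One', runtime: '120 min' }) },
  { id: 'movie-2', data: () => ({ name: 'Movie Two', runtime: '95 min' }) },
];

// Same shape as the real callback: iterate the snapshot and collect
// plain objects that are easy to store in React state.
const movies = [];
response.forEach(document => {
  movies.push({ id: document.id, ...document.data() });
});

console.log(movies.length); // 2
```

With the real SDK you would run this inside the `.then(response => …)` callback and then call something like `setMovies(movies)` to trigger a re-render.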
rossanodan
254,587
🐍 Writing tests faster with pytest.parametrize
Write your Python tests faster with parametrize I think there is no need to explain here h...
492
2020-02-03T21:33:58
https://www.daolf.com/posts/writing-tests-faster/
python, webdev, tutorial, beginners
# Write your Python tests faster with parametrize I think there is no need to explain here how important testing your code is. If somehow you have doubts about it, I can only recommend reading these great resources: - [The testing introduction I wish I had](https://dev.to/maxwell_dev/the-testing-introduction-i-wish-i-had-2dn) - [The importance of software testing](https://www.testdevlab.com/blog/2018/07/importance-of-software-testing/) But more often than not, when working on side projects, we tend to overlook this. I am in no way advocating that every piece of code you write in every situation should be tested; it is perfectly fine to hack something together and never test it. This post was only written to show you how to quickly write Python tests using `pytest` and one particular feature, hoping it will reduce the amount of untested code you write. ## Writing some tests We are going to write some tests for a method that takes an array, removes its odd members, and sorts it. This is the method:

```python
def remove_odd_and_sort(array):
    even_array = [elem for elem in array if not elem % 2]
    sorted_array = sorted(even_array)
    return sorted_array
```

That's it. Let's now write some basic tests:

```python
import pytest

def test_empty_array():
    assert remove_odd_and_sort([]) == []

def test_one_size_array():
    assert remove_odd_and_sort([1]) == []
    assert remove_odd_and_sort([2]) == [2]

def test_only_odd_in_array():
    assert remove_odd_and_sort([1, 3, 5]) == []

def test_only_even_in_array():
    assert remove_odd_and_sort([2, 4, 6]) == [2, 4, 6]

def test_even_and_odd_in_array():
    assert remove_odd_and_sort([2, 1, 6]) == [2, 6]
    assert remove_odd_and_sort([2, 1, 6, 3, 1, 2, 8]) == [2, 2, 6, 8]
```

To run those tests, simply run `pytest <your_file>` and you should see something like this: ![](https://www.daolf.com/images/python-tips-3/screen_1.png) As you can see it was rather simple, but writing 20 lines of code for such a simple method can sometimes be seen as too high a price.
## Writing them faster Pytest is really an awesome test framework, and one of the features I use the most to quickly write tests is the `parametrize` decorator. The way it works is rather simple: you give the decorator the names of your arguments and an array of tuples representing multiple argument values. Your test will then be run once per tuple in the list. With that in mind, all the tests written above can be shortened into this snippet:

```python
@pytest.mark.parametrize("test_input,expected", [
    ([], []),
    ([1], []),
    ([2], [2]),
    ([1, 3, 5], []),
    ([2, 4, 6], [2, 4, 6]),
    ([2, 1, 6], [2, 6]),
    ([2, 1, 6, 3, 1, 2, 8], [2, 2, 6, 8]),
])
def test_eval(test_input, expected):
    assert remove_odd_and_sort(test_input) == expected
```

And this will be the output: ![](https://www.daolf.com/images/python-tips-3/screen_1.png) See? And in only 10 lines. `parametrize` is very flexible; it also allows you to define cases that will break your code, and many other things, as detailed [here](https://docs.pytest.org/en/latest/example/parametrize.html#paramexamples). Of course, this was only a short introduction to this framework, showing you only a small subset of what you can do with it. You can [follow me on Twitter](https://twitter.com/intent/follow?screen_name=PierreDeWulf), I tweet about bootstrapping, indie-hacking, startups and code 😊 Happy coding. If you like these short blog posts about Python you can find my 2 previous ones here:
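One more trick worth knowing (this example is mine, not from the original post): `parametrize` decorators can be stacked, and pytest then runs the test once per combination of the stacked cases, i.e. the cross-product.

```python
# Sketch: stacking two parametrize decorators generates the cross-product
# of their cases (2 base cases x 3 extras = 6 generated tests here).
import pytest

def remove_odd_and_sort(array):
    # The function under test, as defined earlier in the post.
    return sorted(elem for elem in array if not elem % 2)

@pytest.mark.parametrize("extra", [[], [7], [9, 7]])  # odd extras never survive
@pytest.mark.parametrize("base,expected", [
    ([2, 1, 6], [2, 6]),
    ([4, 2], [2, 4]),
])
def test_cross_product(base, extra, expected):
    assert remove_odd_and_sort(base + extra) == expected
```

Since appending odd numbers never changes the expected result, every one of the six generated cases passes with the same `expected` value.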
daolf
254,611
Let's Create a Twitter Bot using Node.js and Heroku (3/3)
Welcome to the third and final installment of creating a twitter bot. In this post, I'll show you how...
0
2020-02-05T20:08:57
https://dev.to/developer_buddy/let-s-create-a-twitter-bot-using-node-js-and-heroku-3-3-agk
node, twitter, javascript, tutorial
Welcome to the third and final installment of creating a twitter bot. In this post, I'll show you how to automate your bot using Heroku. If you haven't had the chance yet, check out [Part 1](https://dev.to/developer_buddy/let-s-create-a-twitter-bot-using-node-js-and-heroku-1-3-43kb) and [Part 2](https://dev.to/developer_buddy/let-s-create-a-twitter-bot-using-node-js-and-heroku-2-3-22g3). After this, you will have your own fully automated Twitter bot. Let's jump in. # 1. Setup Heroku Account You'll want to sign up for a Heroku [account](https://heroku.com). If you have a Github account you'll be able to link the two accounts. ![Heroku Home Page](https://github.com/agyin3/images/blob/master/Twitter%20Bot%20Blog/Screen%20Shot%202020-02-03%20at%204.08.19%20PM.png?raw=true) # 2. Create Your App Once you're all set up with your account, you'll have to create an app. In the top right corner, you'll see a button that says 'New' Click on that and select 'Create New App' ![Heroku Create New App](https://github.com/agyin3/images/blob/master/Twitter%20Bot%20Blog/Screen%20Shot%202020-02-03%20at%204.12.19%20PM.png?raw=true) That should take you to another page where you'll have to name your app. ![Heroku App Description Page](https://github.com/agyin3/images/blob/master/Twitter%20Bot%20Blog/Screen%20Shot%202020-02-03%20at%204.18.12%20PM.png?raw=true) # 3. Install Heroku You can install Heroku a few different ways depending on your OS. If you want to use the CLI to install it, enter the following code in your terminal ### sudo snap install --classic heroku If that didn't work for you, you can find other ways of installing Heroku to your device [here](https://devcenter.heroku.com/articles/heroku-cli) # 4. Prepare For Deployment Open up your terminal and cd into your tweetbot folder. Once inside run this code to log in to your Heroku account. ### heroku login You'll have the option to log in either through the terminal or webpage. 
If you haven't deployed to Github, run the following code; if you have, you can skip this part. ### git init Now you'll want to connect to Heroku's remote git server. Run this code in your terminal. Be sure to replace `<your app name>` with your Heroku app's name ### heroku git:remote -a <your app name> Almost there!!! You just need to set up your access keys on Heroku's server. You can do this directly in the terminal quite easily. Run the following code to get it set up. You're actually just going to be copying them over from your `.env` file

```
heroku config:set CONSUMER_KEY=XXXXXXXXXXXXXXXXXXXXXXXXX
heroku config:set CONSUMER_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
heroku config:set ACCESS_TOKEN=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
heroku config:set ACCESS_TOKEN_SECRET=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```

Sweet! Now we are going to create a Procfile to configure the process we want Heroku to run. ### touch Procfile Once you have this file created, open it up and add the following code inside ### worker: node bot.js Now you just need to commit and push your files up to the Heroku server. Run this last bit of code in your terminal

```
git add .
git commit -m "add all files"
git push heroku master
```

Time to test out our bot now that it's on Heroku. In your terminal, run the following: ### heroku run worker You should see your terminal output 'Retweet Successful' and 'Favorite Successful'. If you're getting some type of error message, be sure to double-check your code and your deployment. # 5. Time To Automate All that's left is getting our bot to run on a schedule. I really like the Heroku Scheduler add-on to handle this. Go back to your overview page on Heroku and select configure add-ons ![Heroku Configure Add-on](https://github.com/agyin3/images/blob/master/Twitter%20Bot%20Blog/addon-config.png?raw=true) Do a search for <strong>Heroku Scheduler</strong> and add it to your app.
![Heroku Scheduler](https://github.com/agyin3/images/blob/master/Twitter%20Bot%20Blog/heroku-scheduler.png?raw=true) Now click on Heroku Scheduler to open up the settings in a new window. For this example, I'm going to configure mine to run every 10 minutes. You can change this to run every hour or less if you'd prefer. ![Heroku Scheduler Settings](https://github.com/agyin3/images/blob/master/Twitter%20Bot%20Blog/heroku-scheduler-settings.png?raw=true) You'll notice that I added <strong>node bot.js</strong> under the Run Command section. You'll want to do the same so Heroku knows which command to run for your bot. There you have it!!! You have now successfully created your own automated twitter bot. If you'd like to check mine out you can at [@coolnatureshots](https://twitter.com/coolnatureshots). You can also find the GitHub repo for it [here](https://github.com/agyin3/tweetbot-photo)
developer_buddy
254,623
What are you going to do if/when your position gets automated?
Thinking out loud.
0
2020-02-04T14:53:20
https://dev.to/jenc/discuss-what-are-you-going-to-do-if-when-your-position-gets-automated-13j0
discuss, devdiscuss, career
--- title: What are you going to do if/when your position gets automated? published: true description: Thinking out loud. tags: discuss, devdiscuss, career cover_image: https://dev-to-uploads.s3.amazonaws.com/i/in9z5qf1to2s4l3rv4c3.jpeg --- Let's take it real far: what are you going to do for work because your job got automated? (Not a speculation on whether it can be automated, but it actually just *is*) Would you continue to design and code?
jenc
254,745
Ripping Out Node.js - Building SaaS #30
In this episode, we removed Node.js from deployment. We had to finish off an issue with permissi...
2,058
2020-03-05T18:05:24
https://www.mattlayman.com/building-saas/ripping-out-nodejs/
python, django, saas, node
{% youtube PyZDK-D0eWE %} In this episode, we removed Node.js from deployment. We had to finish off an issue with permissions first, but the deployment got simpler. Then we continued on the steps to make deployment do even less. Last episode, we got the static assets to the staging environment, but we ended the session with a permissions problem. The files extracted from the tarball had the wrong user and group permissions. I fixed the permissions by running an Ansible task that ran `chown` to use the `www-data` user and group. To make sure that the directories had proper permissions, I used `755` to ensure they were executable. Then we wrote another task to set the permission of non-directory files to `644`. This change removes the executable bit from regular files and reduces their security risk. We ran some tests to confirm the behavior of all the files, even running the test that destroyed all existing static files and started from scratch. With the permissions task complete, we could move on to the fun stuff of ripping out code. Since all the static files are now created in Continuous Integration, there is no need for [Node.js](https://nodejs.org/en/) on the actual server. We removed the [Ansible](https://www.ansible.com/) galaxy role and any task that used Node.js to run JavaScript. Once Node was out of the way, I moved on to other issues. I had to convert tasks that used `manage.py` from the Git clone to use the manage command that I bundled into the [Shiv](https://shiv.readthedocs.io/en/latest/) app. That work turned out to be very minimal. The next thing that can be removed is the Python virtual environment that was generated on the server. The virtual environment isn't needed because all of the packages are baked into the Shiv app. That means that we must remove anything that still depends on the virtual environment and move it into the Shiv app. There are two main tools that still depend on the virtual environment: 1.
[Celery](http://www.celeryproject.org/) 2. [wal-e](https://github.com/wal-e/wal-e) for [Postgres](https://www.postgresql.org/) backups For the remainder of the stream, I worked on the `main.py` file, which is the entry point for Shiv, to make the file able to handle subcommands. This will pave the way for next time when we call Celery from a Python script instead of its stand-alone executable. Show notes for this stream are at [Episode 30 Show Notes](https://www.mattlayman.com/building-saas/ripping-out-nodejs/). To learn more about the stream, please check out [Building SaaS with Python and Django](https://www.mattlayman.com/building-saas/).
mblayman
254,775
How to Intercept the HTTP Requests in Angular(Part 1)
How to Intercept the HTTP Requests in Angular (Part 1) Ud...
0
2020-02-04T04:14:33
https://dev.to/udithgayan/how-to-intercept-the-http-requests-in-angular-part-1-1am6
angular, javascript, developers, interceptor
{% medium https://medium.com/javascript-in-plain-english/how-to-intercept-the-http-requests-in-angular-2a67df423020 %}
udithgayan
254,820
KineMaster - The Top Video Editor For Android
KineMaster is a video editing application that brings a full suite of editing tools to iPhone, iPad,...
0
2020-02-04T06:16:26
https://dev.to/steve_smith/kinemaster-the-top-video-editor-for-android-3le
KineMaster is a video editing application that brings a full suite of editing tools to iPhone, iPad, iPod Touch, and Android devices. Designed for productivity on the go, KineMaster delivers the ability to create professional video content without requiring a laptop or desktop computer (https://bigtechbyte.com/kinemaster-pro/) BEST editing ever! I've used KineMaster for the past year or so. I must say I'm not tech-savvy whatsoever and this app is extremely easy to use. I use it for my YouTube channel (LifeWithLowee). KineMaster has a YouTube channel (KineMaster) that shows you the new features and how to use the app. Music and Sound Effect Clips Music and Sound Effect Clips can be added to projects by tapping Audio on the Media Wheel to open the Audio Browser, and then selecting the desired music or sound effect track and tapping Add to place it on the Timeline starting at the time of the playhead’s current position. Selecting a Music or Sound Effect Clip will bring up the Options Panel, which contains tools used to edit audio, which will be discussed in more detail below. Please note that while on Android, KineMaster will display almost any available music or sound effect file on your device, on iOS devices, music must either be in iTunes on your device or in the KineMaster Internal folder. The KineMaster Internal folder is the same folder that is seen in folder sharing in iTunes on your computer. Files with DRM (Digital Rights Management), also known as copyright protection, cannot be imported into KineMaster, as it breaks the terms of service for most music services.
steve_smith
254,851
PostCSS: how to shrink CSS stylesheets by removing unused selectors (video)
Although every self-respecting Web development professional should master HTML and CSS, the reality is that...
0
2020-10-07T10:30:03
https://www.campusmvp.es/recursos/post/postcss-como-reducir-hojas-de-estilo-css-eliminando-los-selectores-que-sobran.aspx
webdev, spanish, video
--- title: PostCSS: how to shrink CSS stylesheets by removing unused selectors (video) published: true date: 2020-02-04 08:00:00 UTC tags: WebDev, Spanish, Video cover_image: https://www.campusmvp.es/recursos/image.axd?picture=/2020/1T/purgecss-portada.png canonical_url: https://www.campusmvp.es/recursos/post/postcss-como-reducir-hojas-de-estilo-css-eliminando-los-selectores-que-sobran.aspx --- Although every self-respecting Web development professional should [master HTML and CSS,](https://www.campusmvp.es/recursos/catalogo/Product-HTML5-y-CSS3-a-fondo-para-desarrolladores_185.aspx) the reality is that in most projects we normally rely on **some CSS library or _framework_**, such as [Bootstrap](https://www.campusmvp.es/recursos/catalogo/Product-Dise%C3%B1o-Web-Responsive-con-HTML5,-Flexbox,-CSS-Grid-y-Bootstrap_212.aspx) (the most widely used) or similar tools. Using a CSS _framework_ lets us **lay out pages very quickly**, give applications an **attractive default look**, and **[have many complicated things already done for us](https://dev.to/campusmvp/bootstrap-42---spinners-notificaciones-toast-interruptores-y-otras-novedades-l0f)**. But, on the other hand, using a _framework_ means that **we are adding a great deal of material to the application that we will never use**. For example, if you build a simple application with Bootstrap using only its grid for layout (although nowadays [with CSS Grid and Flexbox](https://www.campusmvp.es/recursos/catalogo/Product-Dise%C3%B1o-Web-Responsive-con-HTML5,-Flexbox,-CSS-Grid-y-Bootstrap_212.aspx) you wouldn't need it) and keeping the default look of buttons and text boxes, you will be adding about 156KB to the application with the minified version of Bootstrap's CSS.
However, **if you analyze the actual use you are making** of the included rules, you will see that it is minimal and **a small percentage** of that file's content **would be enough for you**. You can see it in this mini-video, in which I have a very simple page with a few sections laying out text and images; I open the page in Chrome and use its code-coverage analysis tool: {% youtube hTdEbVaLtoE %} As you can see, **almost 96% of the code in the Bootstrap stylesheet is not used** at all. For this particular page, we don't need the rest. But of course, doing this by hand is an error-prone task and, unfortunately, Chrome's coverage tools don't let us export to a file only what is identified as being in use. **In the following video** I explain step by step how you can **take advantage of the excellent [PurgeCSS](https://purgecss.com/) tool** to automate the analysis and cleanup of the CSS files your Front-End web application uses, and end up with lighter, faster applications. For this it uses a small website with some fairly complex pages, which relies on several libraries and a lot of CSS. Let's see it: {% youtube xN3w49LSMYo %} Everything explained can be automated with **npm**, **Gulp**, or **Webpack** and incorporated into your application's development process. You can learn to master all these Front-End development tools and many more, to become a better professional, with our course [Herramientas modernas para desarrollo Web Front-End empresarial](https://www.campusmvp.es/recursos/catalogo/Product-Herramientas-modernas-para-desarrollo-Web-Front-End-empresarial_243.aspx). Talk to your company about it or invest on your own in improving your professional profile 😉 I hope you find it useful!
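As a sketch of what the automated cleanup shown in the video boils down to, a minimal PurgeCSS configuration file might look like this (the file names and paths here are hypothetical; adjust them to your project):

```javascript
// purgecss.config.js — minimal sketch of a PurgeCSS config (hypothetical paths)
module.exports = {
  // Files to scan for the selectors that are actually used
  content: ['./**/*.html'],
  // Stylesheets to purge
  css: ['./css/bootstrap.min.css'],
  // Folder where the cleaned CSS files are written
  output: './css/purged/',
};
```

Running `npx purgecss --config ./purgecss.config.js` would then write cleaned copies of the stylesheets, keeping only the rules referenced by the scanned content files.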
campusmvp_es
254,852
A use case for the Object.entries() method
Split up objects with Object.entries() and Array.filter
0
2020-02-05T07:15:03
https://dev.to/keevcodes/a-use-case-for-the-object-entries-method-5dcj
javascript, beginners, webdev
--- title: A use case for the Object.entries() method published: true description: Split up objects with Object.entries() and Array.filter tags: #javascript #beginner #webdev cover_image: https://user-images.githubusercontent.com/17259420/73816918-134db300-47ea-11ea-97eb-1109473ab723.jpg --- *Perhaps you already know about Object.keys() and Object.values() to create an array of an object's keys and values respectively. However, there's another method `Object.entries()` that will return a nested array of the object's keys and values. This can be very helpful if you'd like to return only one of these pairs based on the other's value.* # A clean way to return keys in an Object Often times in form data there will be a list of choices presented to users that are selectable with radio buttons. The object's data returned from this will look something like this... ```javascript const myListValues = { 'selectionTitle': true, 'anotherSelectionTitle': false } ``` We could store these objects with their keys and values in our database as they are, however just adding the `key` name for any truthy value would be sufficient. By passing our `myListValues` object into Object.entries() we can filter out any falsey values from our newly created array and then return the keys as a string. ### Execution We'll make use of not only Object.entries(), but also the very handy array methods `filter()` and `map()`. The output from `Object.entries(myListValues)` will be... ```javascript const separatedList = [ ['selectionTitle', true ], ['anotherSelectionTitle', false ] ]; ``` We now have an array on which we can use `.filter()` and `.map()` to return our desired result. So let's clean up our `separatedList` array a bit. ```javascript const separatedFilteredList = Object.entries(myListValues).filter(([key, value]) => value); const selectedItems = separatedFilteredList.map(item => item[0]); ``` There we have it.
Our selectedItems array is now just a list of the key names from our object whose value was truthy. This is just one of many use cases for a perhaps lesser-known object method. I'd love to see some more interesting use cases you may have come up with.
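The two steps can also be written as a single chain; this sketch wraps them in a small helper (the `selectedKeys` name is my own):

```javascript
// Return the keys of an object whose values are truthy
const selectedKeys = (obj) =>
  Object.entries(obj)
    .filter(([, value]) => value) // keep only entries with a truthy value
    .map(([key]) => key);         // keep only the key names

const myListValues = {
  selectionTitle: true,
  anotherSelectionTitle: false,
};

console.log(selectedKeys(myListValues)); // → [ 'selectionTitle' ]
```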
keevcodes
254,856
You Should Know These Tips To Properly Maintain And Extend The Durability Of Your NY Signs
Getting custom NY Signs finally made by the professionals is not the last step toward a successful bu...
0
2020-02-04T07:26:57
http://www.vidasigns.com/
signs
Getting custom<strong><b> </b></strong><a href="http://www.vidasigns.com/">NY Signs</a><strong><b> </b></strong>made by professionals is not the last step toward a successful business. Neon signs are the most popular these days due to their benefit of standing out in the crowd. They are a perfect marketing tool that many business people use for their companies. An entrepreneur usually makes sure to maintain them day after day so that they can provide lifelong service to the company. Many believe that since neon signs are mostly durable, they do not need any maintenance effort. <img class="alignnone wp-image-7182 size-full" src="https://s3-ap-southeast-2.amazonaws.com/www.cryptoknowmics.com/blog/wp-content/uploads/2020/02/04072541/neon-sings.png" alt="" width="1920" height="1080" /> That does not mean we should allow them to get damaged through lack of care. Neon signs last longer only with proper maintenance. This short guide can save us from having to invest money in neon signs time and again because we failed to care for them from the start. <h3><strong><b>Choosing and placement</b></strong></h3> Most of us assume that the signs are meant to be durable with or without proper care. In reality, two factors matter in this respect: the option we opt for and its placement. Experts suggest that if the neon sign is placed in a crowded area, where things are carried or transported often, you can consider getting a cover for it. Since a neon sign is made primarily of glass, it can be broken into pieces without a clear cover. By making sure of such things, we can ensure that the neon sign lasts for a decade or two. It also shows why there is a need to select a better place for our neon sign so that it doesn't fall off and break.
<h3><strong><b>Taking good care</b></strong></h3> The use of neon signs can be flashy in a way that ensures the business can attain higher development. Indeed, neon signs need to be held and maintained correctly because of the tubes. The first aspect of taking good care of the signs is to check whether the tubes are broken or cracked. If the damage is visible, we might have to approach professionals who are experienced enough to fix it appropriately. <h3><strong><b>Safe from bugs</b></strong></h3> One thing the light attracts is flying insects, which makes it difficult for anyone around. We might even see the dead bodies of these bugs stuck to the tubes every morning. At such a time, cleaning becomes a gross chore that needs to be done almost every day. The first thing you can do is use traps and bug zappers to capture the flying bugs easily. This can help in quickly lowering the insect population. <h3><strong><b>Conclusion</b></strong></h3> If one desires to buy neon signs in NYC or LED signs for business, it pays to gain more information about how to maintain them properly. Proper maintenance can ensure higher durability and better quality of light.
nelliemarteen
254,879
How to add gitignored files to Heroku (and how not to)
Sometimes, you want to add extra files to Heroku or Git, such as built files, or secrets; but it is a...
0
2020-02-04T08:37:51
https://dev.to/patarapolw/how-to-add-gitignored-files-to-heroku-and-how-not-to-3fbe
webdev, javascript
Sometimes, you want to add extra files to Heroku or Git, such as built files, or secrets; but they are already in `.gitignore`, so you have to build on the server. You have options, as this command is available. ```sh git push heroku new-branch:master ``` But how do I create such a `new-branch`? A naive solution would be to use `git switch`, but this endangers gitignored files as well. (They might disappear when you switch branches.) That's where `git worktree` comes in. I can use [a real shell script](https://github.com/patarapolw/aloud/blob/77133bb2950af19819afd99b17026eabdb16fd4c/deploy.sh), but I feel like using Node.js is much easier (and safer due to [pour-console](https://github.com/patarapolw/pour-console)). So, it is basically like this. ```js async function deploy ( callback, deployFolder = 'dist', deployBranch = 'heroku', deployMessage = 'Deploy to Heroku' ) { // Ensure that the deploy folder doesn't exist in the first place await pour(`rm -rf ${deployFolder}`) try { await pour(`git branch ${deployBranch} master`) } catch (e) { console.error(e) } await pour(`git worktree add -f ${deployFolder} ${deployBranch}`) await callback(deployFolder, deployBranch) await pour('git add .', { cwd: deployFolder }) await pour([ 'git', 'commit', '-m', deployMessage ], { cwd: deployFolder }) await pour(`git push -f heroku ${deployBranch}:master`, { cwd: deployFolder }) await pour(`git worktree remove ${deployFolder}`) await pour(`git branch -D ${deployBranch}`) } deploy(async (deployFolder) => { fs.writeFileSync( `${deployFolder}/.gitignore`, fs.readFileSync('.gitignore', 'utf8').replace(ADDED_FILE, '') ) fs.copyFileSync( ADDED_FILE, `${deployFolder}/${ADDED_FILE}` ) }).catch(console.error) ``` ## How not to commit Apparently, this problem is easily solved on Heroku with ```js pour(`heroku config:set SECRET_FILE=${fs.readFileSync(secretFile, 'utf8')}`) ``` Just make sure the file is deserializable.
You might even write a custom serializing function, with ```js JSON.stringify(obj[, replacer]) JSON.parse(str[, reviver]) ``` Don't forget that the `JSON` object is customizable.
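As an illustrative sketch of such a custom round-trip (the field names here are hypothetical), a `replacer` can flatten a `Date` to a timestamp on the way out, and a `reviver` can rebuild it on the way in:

```javascript
// Serialize a secret object to an env-var-safe string and back again
const secret = { apiKey: 'abc123', issuedAt: new Date('2020-02-04T00:00:00Z') };

// Dates are already ISO strings by the time the replacer sees them
// (via Date.prototype.toJSON), so convert them to numeric timestamps
const serialized = JSON.stringify(secret, (key, value) =>
  key === 'issuedAt' ? Date.parse(value) : value
);

// Rebuild the Date from the timestamp when parsing
const revived = JSON.parse(serialized, (key, value) =>
  key === 'issuedAt' ? new Date(value) : value
);

console.log(revived.issuedAt instanceof Date); // → true
```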
patarapolw
254,940
Add your project located on PC to GitHub, A detailed How To Article
This tutorial will walk you through the step-by-step approach to adding your local code base to your GitHub account.
0
2020-02-04T10:54:26
https://dev.to/windson/add-your-project-located-on-pc-to-github-a-detailed-how-to-article-2p95
github, addexisitingproject, versioncontrol, sourcecontrol
--- title: Add your project located on PC to GitHub, A detailed How To Article published: true description: This tutorial will walk you through the step-by-step approach to adding your local code base to your GitHub account. tags: github, add existing project, version control, source control --- This detailed tutorial http://bit.ly/2UoEXLe will walk you through adding an existing project to GitHub. The following Quick Cheat Sheet for adding an existing repository to Git gives a step-by-step approach to adding and pushing an existing code base to a new GitHub repository. `echo "# my-first-repo-on-github" >> README.md` `git init` `# Create New repository on GitHub` `# git remote add origin url` `git remote add origin https://github.com/your-awesome-username/name-of-your-repository.git` `git remote -v` `git pull origin master` `git add .` `git commit -m 'init'` `git push origin master`
windson
254,987
Near real-time Campaign Reporting Part 2 - Aggregation/Reduction
This is the second in a series of articles describing a simplified example of near real-time Ad Cam...
0
2020-02-04T12:09:47
https://dev.to/aerospike/near-real-time-campaign-reporting-part-2-aggregation-reduction-4af6
![](https://raw.githubusercontent.com/helipilot50/real-time-reporting-aerospike-kafka/master/architecture/aerospike-logo-long.png) This is the second in a series of articles describing a simplified example of near real-time Ad Campaign reporting on a fixed set of campaign dimensions usually displayed for analysis in a user interface. The solution presented in this series relies on [Kafka](https://en.wikipedia.org/wiki/Apache_Kafka), [Aerospike’s edge-to-core](https://www.aerospike.com/blog/edge-computing-what-why-and-how-to-best-do/) data pipeline technology, and [Apollo GraphQL](https://www.apollographql.com/) * [Part 1](https://dev.to/aerospike/near-real-time-campaign-reporting-part-1-event-collection-2kal): real-time capture of Ad events via Aerospike edge datastore and Kafka messaging. * Part 2: aggregation and reduction of Ad events into actionable Ad Campaign Key Performance Indicators (KPIs), leveraging Aerospike Complex Data Types (CDTs). * [Part 3](https://dev.to/aerospike/near-real-time-campaign-reporting-part-3-campaign-service-and-campaign-ui-812): describes how an Ad Campaign user interface displays those KPIs using GraphQL to retrieve data stored in an Aerospike Cluster. ![Data flow](http://www.plantuml.com/plantuml/proxy?src=https://raw.githubusercontent.com/helipilot50/real-time-reporting-aerospike-kafka/master/architecture/data-flow.puml) *Data flow* ### Summary of Part 1 In part 1 of this series, we - used an ad event simulator for data creation - captured that data in the Aerospike “edge” database - pushed the results to a Kafka cluster via Aerospike’s Kafka Connector Part 1 is the base used to implement Part 2 ## The use case — Part 2 The simplified use case for Part 2 consists of reading Ad events from a Kafka topic and aggregating/reducing the events into KPI values. In this case the KPIs are simple counters, but in the real world these would be more complex metrics like averages, gauges, histograms, etc.
The values are stored in a data cube implemented as a Document or Complex Data Type ([CDT](https://www.aerospike.com/docs/guide/cdt.html)) in Aerospike. Aerospike provides fine-grained operations to read or write one or more parts of a [CDT](https://www.aerospike.com/docs/guide/cdt.html) in a single, atomic, database transaction. The Aerospike record: | Bin | Type | Example value | | --- | ---- | ------------- | | c-id | long | 6 | | c-date | long | 1579373062016 | | c-name | string | Acme campaign 6 | | stats | map | {"visits":6, "impressions":78, "clicks":12, "conversions":3}| The Core Aerospike cluster is configured to prioritise consistency over availability to ensure that numbers are accurate and consistent for use with payments and billing. Or in other words: **Money** In addition to aggregating data, the new value of the KPI is sent via another Kafka topic (and possibly a separate Kafka cluster) to be consumed by the Campaign Service as a GraphQL subscription, providing a live update in the UI. Part 3 covers the Campaign Service, Campaign UI and GraphQL in detail. ![Impression sequence](http://www.plantuml.com/plantuml/proxy?src=https://raw.githubusercontent.com/helipilot50/real-time-reporting-aerospike-kafka/master/architecture/event-sequence-part-2.puml&fmt=svg) *Aggregation/Reduction sequence* ## Companion code The companion code is in [GitHub](https://github.com/helipilot50/real-time-reporting-aerospike-kafka). The complete solution is in the `master` branch. The code for this article is in the `part-2` branch. Javascript and Node.js are used in each service although the same solution is possible in any language. The solution consists of: * All of the services and containers in [Part 1](https://dev.to/aerospike/near-real-time-campaign-reporting-part-1-event-collection-2kal). * Aggregator/Reducer service - Node.js Docker and Docker Compose simplify the setup to allow you to focus on the Aerospike specific code and configuration.
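The `stats` map in the record above is just a set of counters. As a plain-JavaScript illustration of what the reduction boils down to (the real service performs this as an atomic Aerospike `maps.increment`, shown later; the `aggregate` helper here is hypothetical):

```javascript
// Minimal in-memory sketch of the KPI counter aggregation
const stats = { visits: 0, impressions: 0, clicks: 0, conversions: 0 };

// Each event increments its KPI by 1; the KPI name is the event type + 's'
function aggregate(stats, eventType) {
  const kpiKey = eventType + 's'; // e.g. 'impression' -> 'impressions'
  stats[kpiKey] = (stats[kpiKey] || 0) + 1;
  return stats;
}

aggregate(stats, 'impression');
aggregate(stats, 'click');
console.log(stats); // → { visits: 0, impressions: 1, clicks: 1, conversions: 0 }
```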
### What you need for the setup All the prerequisites are described in [Part 1](https://dev.to/aerospike/near-real-time-campaign-reporting-part-1-event-collection-2kal). ### Setup steps To set up the solution, follow these steps. Because executable images are built by downloading resources, be aware that the time to download and build the software depends on your internet bandwidth and your computer. Follow the setup steps in [Part 1](https://dev.to/aerospike/near-real-time-campaign-reporting-part-1-event-collection-2kal). Then **Step 1.** Check out the `part-2` branch ```bash $ git checkout part-2 ``` **Step 2.** Then run ```bash $ docker-compose up ``` Once up and running, after the services have stabilised, you will see output in the console similar to this: ![Sample console output](https://raw.githubusercontent.com/helipilot50/real-time-reporting-aerospike-kafka/master/architecture/kpi-event-output.png) *Sample console output* ## How do the components interact? ![Component Interaction](http://www.plantuml.com/plantuml/proxy?src=https://raw.githubusercontent.com/helipilot50/real-time-reporting-aerospike-kafka/master/architecture/part-2-component.puml&fmt=svg) *Component Interaction* **Docker Compose** orchestrates the creation of several services in separate containers: All of the services and containers in [Part 1](https://dev.to/aerospike/near-real-time-campaign-reporting-part-1-event-collection-2kal) with the addition of: **Aggregator/Reducer** `aggregator-reducer` - A node.js service to consume Ad event messages from the Kafka topic `edge-to-core` and aggregate each event with the existing data cube. The data cube is a document stored in an Aerospike CDT. A CDT document can be a list, map, geospatial, or nested list-map in any combination. One or more portions of a CDT can be mutated and read in a single atomic operation.
See [CDT Sub-Context Evaluation](https://www.aerospike.com/docs/guide/cdt-context.html) Here we use a simple map where multiple discrete counters are incremented. In a real-world scenario, the datacube would be a complex document denormalized for read optimization. Like the Event Collector and the Publisher Simulator, the Aggregator/Reducer uses the Aerospike Node.js client. On the first build, all the service containers that use Aerospike will download and compile the supporting C library. The `Dockerfile` for each container uses multi-stage builds to minimise the number of times the C library is compiled. **Kafka Cli** `kafkacli` - Displays the KPI events used by GraphQL in [Part 3](https://dev.to/aerospike/near-real-time-campaign-reporting-part-3-campaign-service-and-campaign-ui-812). ### How is the solution deployed? Each container is deployed using `docker-compose` on your local machine. *Note:* The `aggregator-reducer` container is deployed along with **all** the containers from [Part 1](https://dev.to/aerospike/near-real-time-campaign-reporting-part-1-event-collection-2kal). ![Deployment](http://www.plantuml.com/plantuml/proxy?src=https://raw.githubusercontent.com/helipilot50/real-time-reporting-aerospike-kafka/master/architecture/docker-compose-deployment-part-2.puml&fmt=svg) *Deployment* ## How does the solution work? The `aggregator-reducer` is a headless service that reads a message from the Kafka topic `edge-to-core`. The message is the whole Aerospike record written to `edge-aerospikedb` and exported by `edge-exporter`. The event data is extracted from the message and written to `core-aerospikedb` using multiple CDT operations in one atomic database operation. 
![Event processing](http://www.plantuml.com/plantuml/proxy?src=https://raw.githubusercontent.com/helipilot50/real-time-reporting-aerospike-kafka/master/architecture/aggregator-reducer-activity.puml&fmt=svg) *aggregation flow* ### Connecting to Kafka To read from a Kafka topic you need a `Consumer`, and this is configured to read from one or more topics and partitions. In this example, we are reading a message from one topic `edge-to-core` and this topic has only 1 partition. ```javascript this.topic = { topic: eventTopic, partition: 0 }; this.consumer = new Consumer( kafkaClient, [], { autoCommit: true, fromOffset: false } ); let subscriptionPublisher = new SubscriptionEventPublisher(kafkaClient); addTopic(this.consumer, this.topic); this.consumer.on('message', async function (eventMessage) { ... }); this.consumer.on('error', function (err) { ... }); this.consumer.on('offsetOutOfRange', function (err) { ... }); ``` Note that `addTopic()` is called after the `Consumer` creation. This function attempts to add a topic to the consumer; if unsuccessful, it waits 5 seconds and tries again. Why do this? The `Consumer` will throw an error if the topic is empty and this code overcomes that problem. ```javascript const addTopic = function (consumer, topic) { consumer.addTopics([topic], function (error, thing) { if (error) { console.error('Add topic error - retry in 5 sec', error.message); setTimeout( addTopic, 5000, consumer, topic); } }); }; ``` ### Extract the event data The payload of the message is a complete Aerospike record serialised as JSON.
```json { "msg": "write", "key": [ "test", "events", "AvYAdzrOmm1xwvWZFyGwEvgjwnk=", null ], "gen": 0, "exp": 0, "lut": 0, "bins": [ { "name": "event-id", "type": "str", "value": "0d33124c-beca-4b4f-a833-8c9646167e8c" }, { "name": "event-data", "type": "map", "value": { "geo": [ 52.366521, 4.894981 ], "tag": "0a3ca7c5-b845-49ed-ab3a-129f9eca23d6", "publisher": "e5b08db3-07b5-456b-aaac-1e59f76c4dd6", "event": "impression", "userAgent": "Mozilla/5.0 (X11; CrOS x86_64 8172.45.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.64 Safari/537.36" } }, { "name": "event-tag-id", "type": "str", "value": "0a3ca7c5-b845-49ed-ab3a-129f9eca23d6" }, { "name": "event-source", "type": "str", "value": "e5b08db3-07b5-456b-aaac-1e59f76c4dd6" }, { "name": "event-type", "type": "str", "value": "impression" } ] } ``` These items are extracted: 1. Event value 2. Tag id 3. Event source These values are used in the aggregation step. ```javascript let payload = JSON.parse(eventMessage.value); // Morph the array of bins to and object let bins = payload.bins.reduce( (acc, item) => { acc[item.name] = item; return acc; }, {} ); // extract the event data value let eventValue = bins['event-data'].value; // extract the Tag id let tagId = eventValue.tag; // extract source e.g. publisher, vendor, advertiser let source = bins['event-source'].value; ``` ### Lookup Campaign Id using Tag The Tag Id is used to locate the matching Campaign. During campaign creation, a mapping between Tags and Campaign is created, this example uses an Aerospike record where the key is the Tag id and the value is the Campaign Id, and in this case, Aerospike is used a Dictionary/Map/Associative Array. 
```javascript //lookup the Tag id in Aerospike to obtain the Campaign id let tagKey = new Aerospike.Key(config.namespace, config.tagSet, tagId); let tagRecord = await aerospikeClient.select(tagKey, [config.campaignIdBin]); // get the campaign id let campaignId = tagRecord.bins[config.campaignIdBin]; ``` ### Aggregating the Event The Ad event is specific to a Tag and therefore a Campaign. In our model, a Tag is directly related to a Campaign and KPIs are collected at the Campaign level. In the real world, KPIs are more sophisticated and campaigns have many execution plans (line items). Each event for a KPI increments the value by 1. Our example stores the KPIs in a document structure ([CDT](https://www.aerospike.com/docs/guide/cdt.html)) in a [bin](https://www.aerospike.com/docs/architecture/data-model.html#bins) in the Campaign [record](https://www.aerospike.com/docs/architecture/data-model.html#records). Aerospike provides operations to atomically [access and/or mutate sub-contexts](https://www.aerospike.com/docs/guide/cdt-context.html) of this structure to ensure the operation latency is ~1ms. In a real-world scenario, events would be aggregated with sophisticated algorithms and patterns such as time-series, time windows, histograms, etc.
Our code simply increments the KPI value by 1, using the KPI name as the 'path' to the value:

```javascript
const accumulateInCampaign = async (campaignId, eventSource, eventData, asClient) => {
  try {
    // Aerospike CDT operation returning the new DataCube
    let campaignKey = new Aerospike.Key(config.namespace, config.campaignSet, campaignId);
    const kvops = Aerospike.operations;
    const maps = Aerospike.maps;
    const kpiKey = eventData.event + 's';
    const ops = [
      kvops.read(config.statsBin),
      maps.increment(config.statsBin, kpiKey, 1),
    ];
    let record = await asClient.operate(campaignKey, ops);
    let kpis = record.bins[config.statsBin];
    console.log(`Campaign ${campaignId} KPI ${kpiKey} processed with result:`, JSON.stringify(record.bins, null, 2));
    return { key: kpiKey, value: kpis };
  } catch (err) {
    console.error('accumulateInCampaign Error:', err);
    throw err;
  }
};
```

The KPI is incremented and the new value is returned. The magic of Aerospike ensures that the operation is Atomic and Consistent across the cluster with a latency of about 1 ms.

### Publishing the new KPI

We could stop here and allow the Campaign UI and Service (Part 3) to poll the Campaign store `core-aerospikedb` to obtain the latest campaign KPIs - this is a typical pattern. A more advanced approach is to stimulate the UI whenever a value has changed or at a specified frequency. While introducing new technology and challenges, this approach offers a very responsive UI presenting up-to-the-second KPI values to the user.

The `SubscriptionEventPublisher` uses Kafka as Pub-Sub to publish the new KPI value for a specific campaign on the topic `subscription-events`.
In Part 3 the `campaign-service` receives this event and publishes it as a [GraphQL Subscription](https://www.apollographql.com/docs/apollo-server/data/subscriptions/).

```javascript
class SubscriptionEventPublisher {
  constructor(kafkaClient) {
    this.producer = new HighLevelProducer(kafkaClient);
  };

  publishKPI(campaignId, accumulatedKpi) {
    const subscriptionMessage = {
      campaignId: campaignId,
      kpi: accumulatedKpi.key,
      value: accumulatedKpi.value
    };
    const producerRequest = {
      topic: subscriptionTopic,
      messages: JSON.stringify(subscriptionMessage),
      timestamp: Date.now()
    };
    this.producer.send([producerRequest], function (err, data) {
      if (err)
        console.error('publishKPI error', err);
      // else
      //   console.log('Campaign KPI published:', subscriptionMessage);
    });
  };
}
```

## Review

[Part 1](https://dev.to/aerospike/near-real-time-campaign-reporting-part-1-event-collection-2kal) of this series describes:

* creating mock Campaign data
* a publisher simulator
* an event receiver
* an edge database
* an edge exporter

This article (Part 2) describes the aggregation and reduction of Ad events into Campaign KPIs using Kafka as the messaging system and Aerospike as the consistent data store.

[Part 3](https://dev.to/aerospike/near-real-time-campaign-reporting-part-3-campaign-service-and-campaign-ui-812) describes the Campaign service and Campaign UI for a user to view the Campaign KPIs in near real-time.

## Disclaimer

This article, the code samples, and the example solution are entirely my own work and not endorsed by Aerospike or Confluent. The code is PoC quality only, not production strength, and is available to anyone under the MIT License.
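For readers who want to experiment with the aggregation logic without running Kafka or Aerospike, the extract-and-increment step can be sketched as a pure function. The payload shape mirrors the exporter message shown earlier; the plain object `kpiStore` is only a stand-in for the Campaign record's stats bin:

```javascript
// In-memory sketch of the aggregation step; no Kafka or Aerospike needed.
function aggregateEvent(message, kpiStore) {
  const payload = JSON.parse(message.value);

  // Morph the array of bins into an object, as in the consumer code earlier.
  const bins = payload.bins.reduce((acc, item) => {
    acc[item.name] = item;
    return acc;
  }, {});

  // 'impression' -> 'impressions', mirroring the kpiKey naming above.
  const kpiKey = bins['event-data'].value.event + 's';

  // Increment-by-1, standing in for Aerospike's maps.increment operation.
  kpiStore[kpiKey] = (kpiStore[kpiKey] || 0) + 1;
  return kpiStore;
}
```

Feeding the example message from the top of the article through this function twice yields `{ impressions: 2 }`.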
helipilot50
255,085
Build Your Own Personal Data Repository With Nostalgia
The companies that we entrust our personal data to are using that information to gain extensive insig...
0
2020-02-04T15:26:12
https://www.pythonpodcast.com/nostalgia-personal-data-repository-episode-248/
<p>The companies that we entrust our personal data to are using that information to gain extensive insights into our lives and habits while not always making those findings accessible to us. Pascal van Kooten decided that he wanted to have the same capabilities to mine his personal data, so he created the Nostalgia project to integrate his various data sources and query across them. In this episode he shares his motivation for creating the project, how he is using it in his day-to-day, and how he is planning to evolve it in the future. If you're interested in learning more about yourself and your habits using the personal data that you share with the various services you use then listen now to learn more.</p><p><a href='https://www.pythonpodcast.com/nostalgia-personal-data-repository-episode-248/'>Listen Now!</a></p>
blarghmatey
255,121
A New Package for the CLI
TL;DR, we're re-releasing the CLI package under a new name, @ionic/cli! To update, first you will nee...
0
2020-02-18T18:00:03
https://ionicframework.com/blog/a-new-package-for-the-cli/
ionic, cli
---
title: A New Package for the CLI
published: true
date: 2020-02-04 15:40:02 UTC
tags: Ionic, CLI
canonical_url: https://ionicframework.com/blog/a-new-package-for-the-cli/
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/9wyq5ol8gp7vyr6dlb1i.png
---

TL;DR, we're re-releasing the CLI package under a new name, `@ionic/cli`! To update, first you will need to uninstall the old CLI package.

```shell
$ npm uninstall -g ionic
$ npm install -g @ionic/cli
```

You will still interact with the CLI via the `ionic` command; only how the CLI is installed has changed. And now, on with the blog post!

<!--more-->

## Everything has a beginning

Many years ago, when Ionic was still in its pre-1.0 days, we saw a great opportunity to help devs build amazing apps without having to guess how that would be done. While the V1 days of Ionic included things like bower, script tags, and gulp, it was our first attempt to make a tool that did everything for you. After building out all the initial functionality, we had one last task...what do we call this tool?

We called it...Ionic! Our initial logic was that we would have one tool and one framework that covered everything. Want to build apps with Ionic? Just install Ionic. This worked great, and for a long time we were set on just keeping things as they were. That is, until we started noticing a common point of confusion with our users.

## What version of Ionic are you using?

When debugging a user issue or working with community members, one of the first things we ask people is "What version of Ionic are you using?" This has led to some confusion in the community, as people would assume that `ionic -v` would give them the version of both the framework and the CLI. This, however, is not the case. One or two instances would be enough to ignore this, but given how common this is, we thought it was finally time to solve this issue.

As the number of packages under the Ionic organization has grown, we have shipped them under scoped package names.
Our release of Ionic for Angular? `@ionic/angular`. React? `@ionic/react`. This is a pretty clear message that when you install one of these packages, you know exactly what you are getting. There is no confusion about this package's purpose or what context this package should be used in.

## Moving towards a scoped package

To help with this confusion, we're re-releasing the CLI package under a new name, `@ionic/cli`. This unifies how we ship tools across Ionic and makes sure that people are aware of what tool they are installing when setting up their environment. I mentioned this last week in an Ionic newsletter, and so far the feedback from the community has been incredibly supportive.

In the past this has been a suggestion that many community members have made, and as time has gone on, it seems to be the move many other CLI tools have made. Angular in particular rebranded the `angular-cli` package in favor of `@angular/cli`, and the Vue CLI has done the same thing (`vue-cli` to `@vue/cli`). With this in mind, we finally decided it was time.

To update to the new CLI, you must first uninstall the old CLI package:

```shell
$ npm uninstall -g ionic

# Then install the new CLI package
$ npm install -g @ionic/cli
```

This new package name coincides with the release of the CLI’s 6.0. This includes some new features which can all be reviewed in the [Changelog](https://github.com/ionic-team/ionic-cli/blob/develop/packages/%40ionic/cli/CHANGELOG.md#600-2020-01-25). The old CLI package will not be updated to the newer 6.0 releases and has an official deprecation warning now. While this should work for some time, we encourage everyone to update to the new CLI package to receive all the latest updates.

Well, that’s all for now folks! We are glad the feedback so far has been super supportive of this change and can’t wait for you all to upgrade….Seriously, we can’t wait. Upgrade your CLI 😄. Cheers!
mhartington
255,131
Using Fetch in JavaScript
Sometimes we need to get information from an API. Since the 2015 updates to JavaScript, fetch() is th...
0
2020-02-04T16:45:14
https://dev.to/eliastooloee/using-fetch-in-javascript-3i7c
Sometimes we need to get information from an API. Since the 2015 updates to JavaScript, fetch() is the best way to go about this. Below I will explain the ES6 syntax for using fetch().

```javascript
function getBooks() {
  fetch('http://localhost:3000/books')
    .then((response) => {
      return response.json();
    })
    .then((books) => {
      renderBooks(books);
    });
}
```

Here we can see a function getBooks, which will communicate with an API. The first step is to use fetch, followed by the URL of the API inside the parentheses. This gets us the promise from the API, but the information is not yet usable. We then pass the response from fetch() into the next function to get the JSON data from the promise, and finally we name the JSON data 'books' and pass it to the function renderBooks.

```javascript
function renderBooks(books) {
  books.forEach(book => {
    renderBook(book)
  });
}
```

renderBooks simply calls another function, renderBook, that will display each book. We pass book into this function.

We then look at the HTML and find the element "list". We target "list" and create a new element called li. We set the content of this new element to be book.title, then append it to the list. We also add an event listener to li that will call the function showBookCard when li is clicked. We must wrap that call in an anonymous arrow function for this to work. If we pass showBookCard(book) directly, it will run on page load, which we do not want.

```javascript
function renderBook(book) {
  const unorderedList = document.getElementById("list");
  const li = document.createElement("li");
  li.textContent = book.title
  li.addEventListener("click", () => showBookCard(book))
  unorderedList.appendChild(li)
}
```

Next, we have the function showBookCard, which will create a display that is triggered by the event listener attached to li.
```javascript
function showBookCard(book) {
  const showPanel = document.getElementById("show-panel")
  showPanel.innerHTML = "";
  const coverImage = document.createElement("img")
  coverImage.src = book.img_url
  const description = document.createElement("h1")
  description.textContent = book.description
  const button = document.createElement("button")
  button.textContent = "Like"
  button.addEventListener("click", () => likeBook(book))
  const usersList = document.createElement("ul")
  book.users.forEach(user => {
    const userLi = document.createElement("li");
    userLi.textContent = user.username;
    usersList.appendChild(userLi);
  })
  showPanel.appendChild(coverImage)
  showPanel.appendChild(description)
  showPanel.appendChild(button)
  showPanel.appendChild(usersList)
}
```

This function looks enormous, but is actually quite simple. We first look at the HTML to find the element "show-panel" and set it to a constant showPanel. We set its innerHTML equal to an empty string to make sure any content disappears when a user clicks away. We then create a constant called coverImage and set its source to book.img_url, then append coverImage to showPanel. We can follow the same steps for the book description, substituting textContent for source. We then create a button called "Like" that will allow a user to like a book, and attach an event listener to it that will call the function likeBook when the button is clicked.

We then create an element called usersList with a "ul" tag. We use forEach to iterate through the book's users, creating an "li" for each one and setting its textContent equal to their username. We then append each user to the list. All of these elements must then be appended to showPanel.

We then come to the function likeBook, which is called when a user clicks the like button on a book card.
```javascript
function likeBook(book) {
  book.users.push(userName)
  fetch(`http://localhost:3000/books/${book.id}`, {
    method: "PATCH",
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json"
    },
    body: JSON.stringify({users: book.users})
  })
    .then(res => res.json())
    .then(likedBook => showBookCard(likedBook));
}
```

This function accepts book as an argument. It then pushes the current user into the book's array of users. It then uses fetch to send a PATCH request to our API. A PATCH request updates only part of an object, rather than replacing everything like a PUT request. We use the headers and body to tell the API what kind of data it will be receiving and sending. We then convert the response to JSON and pass it to likedBook and showBookCard, which will update the DOM to reflect the changes to the API.
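As a closing aside, the same promise chains can also be written with async/await syntax. This is only a sketch against the same hypothetical localhost endpoint used above (named fetchBooks here to avoid clashing with getBooks):

```javascript
// Sketch: async/await version of the getBooks fetch chain above.
// Assumes the same hypothetical http://localhost:3000/books endpoint.
async function fetchBooks() {
  const response = await fetch('http://localhost:3000/books');
  const books = await response.json();
  return books; // the caller can now pass these to renderBooks
}
```

Each await pauses the function until the promise resolves, so the code reads top to bottom like the .then() chain it replaces.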
eliastooloee
255,140
How Do I find my Liked Posts??
Hi, I'm new here. I've read some articles, liked some, unicorned others. How do I go back and find al...
0
2020-02-04T16:58:06
https://dev.to/emag3m/how-do-i-find-my-liked-posts-626
help
Hi, I'm new here. I've read some articles, liked some, unicorned others. How do I go back and find all the articles I've enjoyed?? Thanks!
emag3m
255,149
How to Turn a String into an Array of Characters: 3 Ways to do so.
Here are 3 different ways to convert a string into an array of its characters in javascript. const...
0
2020-02-04T17:19:35
https://dev.to/islam/how-to-turn-a-string-into-an-array-of-characters-3-ways-to-do-so-16a9
javascript, tutorial, array, string
Here are 3 different ways to convert a string into an array of its characters in javascript. ```javascript const str = 'Hello'; //1. Using split() method const arr1 = str.split(''); //2. Using spread operator const arr2 = [...str]; //3. Using Array.from() method const arr3 = Array.from(str); ``` Follow me on: 👉Instagram: https://www.instagram.com/islamcodehood 👉Twitter: https://twitter.com/islam_sayed8 👉Facebook page: https://www.facebook.com/Codehood-111... 👉Dev: https://dev.to/islam
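One difference worth knowing between the three: `split('')` splits on UTF-16 code units, so it breaks apart characters outside the Basic Multilingual Plane (such as emoji), while the spread operator and `Array.from` iterate by code points and keep them intact:

```javascript
const emoji = '😀'; // one code point, stored as two UTF-16 code units

const bySplit = emoji.split('');  // two broken surrogate halves
const bySpread = [...emoji];      // ['😀']
const byFrom = Array.from(emoji); // ['😀']
```

So for user-facing text, prefer the spread operator or `Array.from`.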
islam
255,178
Modern <s>JavaScript</s> TypeScript
How far we’ve come! Back in the day, JavaScript was the nightmare language that no one wanted to use...
0
2020-02-04T17:54:46
https://dev.to/stoutsystems/modern-s-javascript-s-typescript-1gja
javascript, typescript, tutorial, codenewbie
<p>How far we’ve come!</p> <p>Back in the day, JavaScript was the nightmare language that no one wanted to use—partly due to its quirks and mostly due to terrible competing browser ecosystems.</p> <p>It got better with JQuery, which fixed the latter problem by making it easier to access the browser's DOM in a (mostly) uniform way. JQuery also made a nice platform for adding UI components and 3rd party plugins.</p> <p>Then in 2009 and 2015 new versions of the JavaScript standard were released that improved some of the quirks and added new language features.</p> <p>Fast forward to today. Some developers choose JavaScript for full stack—that is, both server and client development.</p> <p>I’m not there yet. I use JS a lot, but still prefer something statically typed on the back-end.</p> <p>For similar reasons, I actually prefer TypeScript over JavaScript on the front-end. TypeScript gives you two benefits:</p> <table border="0" cellpadding="0" cellspacing="0" style="font-size:14px; line-height:21px; font-family:verdana, arial, sans-serif; color:#1A1A1A; "><tr><td style="font-size:14px; line-height:21px; font-family:verdana, arial, sans-serif; color:#1A1A1A; " valign="top" width="15">1.</td><td style="" valign="top"><strong>Types</strong>. As you can guess from the name, TypeScript lets you annotate types to get some static compile-time type checking. It’s just an annotation/hint though (since JavaScript itself is still dynamically typed), but I find it more helpful than I do annoying (most of the time; sometimes it gets in your way, and you want to bail out by casting to “any”).<br /><br /></td></tr> <tr><td style="font-size:14px; line-height:21px; font-family:verdana, arial, sans-serif; color:#1A1A1A; " valign="top" width="15">2.</td><td style="" valign="top"><strong>Language features</strong>. TypeScript is on the vanguard of adding new language features, sometimes getting them before they are added to JavaScript itself.
Since Typescript requires a transpiler (see below), it has more freedom to add features than JavaScript does.</td></tr></table> <p>If you aren’t doing modern JavaScript or TypeScript, here’s a whirlwind primer of concepts and features you need to know.</p> <h2>Transpiling</h2> <p>Most of my JS work targets the browser, which means I need to target old JavaScript standards (though for most clients I no longer support Internet Explorer!). This isn’t a limitation, but it does mean that you need an extra build step to convert your new JavaScript/TypeScript to something the browser can understand. Enter the transpiler, which is similar to a compiler except it converts one programming language to another programming language (instead of to machine language). Babel is the most popular option for JavaScript, but for TypeScript you just need TypeScript itself. (It <i>is</i> a transpiler.)</p> <h2>Polyfill</h2> <p>Polyfills are essentially code or libraries that "patch" older browsers to provide language features that are part of newer JavaScript. Modern browsers provide these features out of the box, in which case the polyfill does nothing.</p> <p>Many helpful functions have been added, even to basic things like Arrays and Strings. I love using Promises for all of my development. Promises are features for doing asynchronous programming. Basically they encapsulate a task, like make a web request, and allow you to add callbacks that will be notified when the task completes in the future. <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript">Mozilla’s Developer Network</a> is still the best reference for what’s available and on what browser versions (and it usually has polyfills too).</p> <h2>Libraries</h2> <p>I'm not sure how you'd do modern JS development without 3rd party libraries, and there are a number of ways to get them and manage them. Some popular options are NPM, YARN, and Bower. 
They work similarly to NuGet in the .Net world; they provide a huge repository of versioned libraries and ways to install them and track them (so other developers on the team get the same versions). If you don't have a package manager, I'd default to NPM. It is popular and well supported.</p> <p>One thing to be aware of is the need to update packages regularly. This isn't unique to JavaScript or NPM, but it is a bigger concern here because of the sheer number of dependencies. (Many libraries use other libraries that use other libraries.) Remember that the Equifax data breach was caused because they failed to update a 3rd party library! (Though it was Java in their case, not JavaScript.)</p> <h2>Language features:</h2> <p>Here are some of my favorite every-day-can’t-live-with-out-them language features. Note that most language features I’m talking about are not TypeScript specific but are actually features from newer versions of JavaScript (or ECMAScript as it’s officially called by no one). Since I mainly use TypeScript, I’m not usually aware of what features are coming from TS or JS. </p> <p>My list targets TypeScript, and <i>may</i> also apply to JavaScript:</p> <p>Classes & Constructors: Yes, they just paper over JavaScript's confusing prototypical inheritance model, but still they are great to use, even for readability alone. TypeScript has support for inheritance (“extends”) as well as public/protected/private accessibility modifiers that do what you would expect.</p> <p>Interfaces: TypeScript only, since they are only used for typing, but they help make API function calls easier to use, while still supporting JavaScript’s dynamic duck-typing.</p> <p>Arrow functions: AKA delegates, functors, and inline functions.
Being able to write inline functions with</p> <code>(incrementMe) => incrementMe + 1</code> <p>is a tremendous improvement over JavaScript’s wordier functions, especially when using a more functional style of programming (like Array.filter, Array.find and Array.map). Code is much more concise and readable!</p> <p>Improved "this": JavaScript is notorious for its confusing and bug-inducing use of “this.” (Why it is confusing would take an entire article. Fortunately the Internet is full of them.) Arrow functions capture “this” and generally do what you would expect. You still have to be aware of the “this” issue, but it crops up way less often.</p> <p>Variable scoping: JavaScript is also notorious for confusing variable scoping. If you switch to “let” instead of “var” to define variables, then suddenly JavaScript works like every other language. It takes some retraining to form the new habit, but it’s painless and free.</p> <p>Const variables: Instead of “let” you can use “const” to define things that don’t change. Note that it is the <i>variable</i> that doesn’t change, not the thing that the variable points to (which you can still mutate). Not as powerful as a full C++ style const implementation, but still useful, and enforced by the (TypeScript) transpiler or runtime.</p> <p>Destructuring: Frequently, when passing an object around you want to pluck out and use just a few properties of that object. TypeScript makes that super convenient:</p> <code>let { a, b, c } = someObject;</code> <p>This is equivalent to the following:</p> <code>let a = someObject.a;</code><br/> <code>let b = someObject.b;</code><br/> <code>let c = someObject.c;</code><br/> <p>You can even use destructuring for function parameters so <code>({value}) => alert(value);</code> takes an object with a member named value and automatically pulls it out into a variable of the same name.
This is great for event handlers!</p> <p>Object construction: There’s also a similar syntax for creating objects. The output from</p> <code>const a = "hello"; const other = "world";</code><br/> <code>let output = {a, b: other};</code><br/> <p>is an object with a field named “a” that has the value “hello” and a field named “b” that has the value “world.” This syntax is confusing when you are first introduced to it, but it’s easy to read after you understand it.</p> <p>Spread operator: JavaScript supports a new <code>...</code> operator that spreads out an object or an array. This can be used to spread out an array of arguments to call a function rather than using .apply(), but I love it best for cloning arrays and objects.</p> <code>const theClone = {...Source, age: 10}</code> <p>This creates a new object (theClone) that contains a shallow copy of the members from Source, with an age field that has a value of 10. If Source has its own age property, it will be overridden by the new value. This is equivalent to manually setting all of the fields of Source into a new object, but so much easier and more versatile. (I don't need to know the fields of Source ahead of time.) It also handles Source being null/undefined. The same syntax works with arrays, and both are a great help for working with immutable data (which is a very helpful paradigm for simplifying reactive data updates).</p> <p>Import/export: JavaScript now supports proper import/exports for sharing types and functions between code files. This change alone cleans up your codebase by fixing collision issues and allowing you to "hide" internal implementation details, by only exporting things that form the API you want to support.</p> <p>Generics: TypeScript has full support for generics in type definitions, and they work exactly as you'd expect.</p> <p>Enums: TypeScript supports full-fledged enumerations based on either numeric values or strings!
Much nicer than hardcoding strings or even using exported const variables.</p> <p>Async/await: I love Promises for asynchronous programming. I've recently started using async/await in TypeScript, which are easy to use and work exactly the same as the C# equivalents. (It is great to have such a nice parallel when working on the .Net tech stack.)</p> <h2>Summary</h2> <p>There are many more great features of TypeScript (and new JavaScript), and new ones are added regularly. The nice thing, though, is you can learn them as you need them. You can start off writing plain JavaScript in .ts files, and just improve it and add new features as needed.</p> <p>TypeScript works well with React, Vue.JS, and is mandatory with Angular. It's easy to integrate into existing projects alongside legacy code (though it's easier to call JS code from TS than the reverse depending on your transpiling setup). TypeScript works with all existing JavaScript libraries, and many have type definitions available specifically for TypeScript, so there's very little reason not to use it. The only real downsides are the extra cognitive load of learning it (just learn it as you go), and the extra build process (vastly paid back by developer productivity).</p> <br /><br /><br /> <p><i>Stout Systems is the software consulting and staffing company Fueled by the Most Powerful Technology Available: Human Intelligence®. Stout was founded in 1993 and is based in Ann Arbor, Michigan. Stout has clients across the U.S. in domains including engineering, scientific, manufacturing, education, marketing, entertainment, small business and, yes, robotics. Stout provides expert level software, Web and embedded systems development consulting and staffing services along with direct-hire technical recruiting and placements. If you're looking for a job in the tech industry, visit our <a href="https://www.stoutsystems.com/jobs/">job board</a> to see if you qualify for some of our positions. Best of luck to you! 
If you're looking to hire technical talent for your company, please <a href="https://www.stoutsystems.com/connect/">contact us</a>. This is a technical article catered to developers, technical project managers, and other technical staff looking to improve their skills.</i></p>
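<p>To see several of these features working together, here is a small TypeScript sketch. The Campaign type and its values are invented purely for illustration:</p>

```typescript
// Illustrative only: a made-up Campaign type exercising enums, interfaces,
// spread, destructuring, and arrow functions.
enum Status {
  Active = "active",
  Paused = "paused",
}

interface Campaign {
  id: number;
  status: Status;
  clicks: number;
}

const base: Campaign = { id: 1, status: Status.Active, clicks: 0 };

// Spread: shallow-clone with an override; base itself is untouched.
const updated: Campaign = { ...base, clicks: base.clicks + 1 };

// Destructuring: pluck out just the fields we need.
const { id, clicks } = updated;

// Arrow function with a typed array helper.
const totalClicks = (campaigns: Campaign[]): number =>
  campaigns.reduce((sum, c) => sum + c.clicks, 0);
```

<p>Everything here transpiles to plain JavaScript; the types and the enum exist only at compile time (the enum becomes a small lookup object).</p>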
stoutsystems
265,146
Creating Apps for Wearable Tech via Apple Watch Mockup
There is a rapid growth of wearable technology so it is the best time to create apps for it. Showcase...
0
2020-02-20T07:35:56
https://dev.to/vittoriotsolinni/creating-apps-for-wearable-tech-via-apple-watch-mockup-o67
<em>Wearable technology is growing rapidly, so now is the best time to create apps for it. Showcase your idea through the Apple Watch mockup.</em> Wearable technology used to be something only spies used: the pair of eyeglasses with a hidden camera, or the necklace with a pendant that doubles as one. But within the last few years, wearable technology has become an important part of everyday life. Younger customers seem fed up with wires and cables. They don’t want them. That’s why earphones no longer have wires, and why headphones are no longer tethered to the source of the music. Then, of course, there is the smartwatch. Decades ago, the best thing a watch could do aside from telling time was point you in the right direction, because it doubled as a compass. Now, a watch can do almost everything, from making and taking calls to sending text messages. If there is a mother of all wearable technology, it must be the Apple Watch. It is not the first wearable tech, but it seems to be the most important nowadays. <h2>History of wearable technology</h2> It’s hard to trace the beginnings of wearable technology, but the first device that truly fits the bill seems to be the wristwatch, which records suggest was invented in the 16th century. There were other portable watches around that time, too: aside from the very common pocket watch, some watches were worn as necklaces. Progress was slow after that, until a revolutionary technology arrived: the Sony Walkman. Today’s youth may not know what it is, but the Walkman, released in 1979, was a portable cassette tape player. It was music that you could take with you wherever you went! It was the best thing that ever happened to music at the time, and for at least two decades more. The technology varied over the years as music distribution evolved from cassette tapes to compact discs.
Then Apple entered into the picture with the iPod, which doesn’t need a tape or compact disc to play music. The Bluetooth headset, again, changed the game making music even more convenient. It came out in 2002. Years later, other wearable technology that had nothing to do with music emerged. There was the Nike+, Fitbit and Google Glass. <h3>Apple Watch</h3> But there is really nothing more amazing than the Apple Watch. So for those planning on creating an app, the Apple Watch is the best backdrop for that, which you can showcase through a mockup. If you want free mockups, the best source for those is <a href="https://uxplanet.org/free-apple-watch-mockup-386a7d231ac9">UX Planet</a>. The website is your one-stop resource for everything related to user experience. PSD mockups are definitely vital tools in showcasing the user experience for your digital product. The selection of mockups from the website is available in Sketch formats and PSD file. So if you are thinking of making a proposal for iOS apps, the <a href="https://store.ramotion.com/apple-watch-mockup">Apple Watch mockup</a> is the best instrument to show off your design and creativity. <h3>Apple Watch mockup</h3> The Apple Watch is definitely the most in-demand wearable technology in the world. People actually line up at the Apple Stores during every Apple Watch launch. Just like the iPhone and other Apple devices, the tech giant always ensures that amazing new features will be introduced in every new Apple Watch lineup release. It takes more time to improve the Apple Watch, though, so new models are not released every year unlike with the iPhone. According to a new report from <a href="https://www.forbes.com/sites/davidphelan/2020/02/13/apple-reveals-dazzlingly-different-apple-watch-design--features/" rel="nofollow">Forbes</a>, there may be radical changes in the next release of the Apple Watch. This just proves that the Apple Watch mockup is the best way to present your ideas to other professionals. 
First, it is relatable considering how many people wear it. In fact, you could even point out the people wearing Apple’s smartwatch during your presentation. Second, there is always that air of sophistication linked to Apple’s products. So when you use the Apple Watch in your presentation, there is already an impression of something great associated with the product, as with most of Apple’s mobile devices. <img src="https://dev-to-uploads.s3.amazonaws.com/i/gm0br0tdu2nem3pl55fo.jpeg"> <h3>How to create the mockup</h3> This is actually very easy, which just adds to the many advantages of using the mockup. As long as you have Photoshop and Sketch apps on your computer, the mockup can be done in less than 30 minutes. The hard part is just the creation of your content. Your app design will be integrated into the mockup through the Smart Object, which renders Smart layers on the mockup PSD. This way, it is easy for you to customize the frame by simply dragging and dropping your app design onto the screen of the Apple Watch. Once that is done, you just need to save it and you are ready to present your digital product. See how easy it is? There is no going back after using the mockup. You won’t want to do PowerPoint presentations anymore. <h3>Kinds of mockups</h3> The internet is a great source of mockups and you will surely see hundreds of them, each different from the next. It is nice to have choices, but you have to really use your head when it comes to choosing the right mockup for your product. Sometimes, the way you create your presentation can greatly affect your chance to seal the deal. Even if your product is for wearable technology, that doesn’t mean you should just use a mockup that only showcases the Apple Watch. The iOS mockups and Apple mockups, for example, usually showcase more than just the Apple Watch. In most cases, the other Apple devices make an appearance in these kinds of mockups.
In the iOS mockup, for instance, the emphasis is on the operating system, so it may be more appropriate to showcase screenshots of the product details. In that case, it is not wise to use a mockup that only shows off the Apple Watch, because the product details may be rendered unreadable. The Apple mockup is perfect for demonstrating the versatility of your design. The mockup features almost all the Apple products: the iMac, MacBook, iPad, iPhone and Apple Watch. You have the smallest screen in the Apple Watch and the largest in the iMac. There will be variations in how the app designs are displayed, for sure, but the most important thing is that you will be able to show that, whatever the screen, the digital product remains as good as ever!
vittoriotsolinni
255,260
Gitlab: Create merge requests from cli
My co-workers and I are working on a single project. Each one of us creates a branch for a specific...
0
2020-02-04T20:22:54
https://farnabaz.ir/@farnabaz/gitlab-create-merge-requests-from-cli
gitlab, git, javascript
My co-workers and I are working on a single project. Each one of us creates a branch for a specific task, and after doing some magic we have to create a merge request to the project's main branch. The merge request is merged after another teammate approves its changes.

One thing that bothers me is that I have to open GitLab and create a new merge request to the main branch every time. I had the idea of creating merge requests from the CLI, without having to visit the GitLab website. And thanks to the GitLab team, it is really easy to create a merge request from the CLI.

As [the documentation](https://docs.gitlab.com/ce/user/project/push_options.html) says:
>GitLab supports using Git push options to perform various actions at the same time as pushing changes.
>Currently, there are push options available for:
> - Skipping CI jobs
> - Merge requests

**NOTICE**: You need Git 2.10 or newer to use push options.

Using GitLab's push options, we can create a merge request just by pushing our new branch to the remote repository. All we have to do is add the `-o merge_request.create` option to the `git push` command:
```bash
git push -o merge_request.create origin my-branch
```
Executing this command will push `my-branch` to the remote repository and create a new merge request from our branch to the main branch of the project.

There is an option to specify the target branch of the merge request; `-o merge_request.target=my-target-branch` will do the magic.
```bash
git push \
  -o merge_request.create \
  -o merge_request.target=my-target-branch \
  origin my-branch
```
Also, we can change the merge request's title:
```bash
git push -o merge_request.title="<title>"
```
Set the description of the merge request:
```bash
git push -o merge_request.description="The description I want"
```
And set the merge request to remove the source branch when it’s merged:
```bash
git push -o merge_request.remove_source_branch
```
GitLab push options are awesome and solved my problem. 
However, I'm too lazy to write all these options every time, so I needed a script to do it with ease. I created a little js file to execute this command; let's call it `.create-merge-request.js`:
```js
var exec = require('child_process').exec;

// the target branch can be passed as the first CLI argument; defaults to "develop"
var targetBranch = process.argv[2] || "develop"

exec("git push origin HEAD \
    -o merge_request.create \
    -o merge_request.remove_source_branch \
    -o merge_request.target=" + targetBranch,
  (error, stdout, stderr) => {
    stdout && console.log(`[stdout]\n${stdout}`);
    stderr && console.log(`[stderr]\n${stderr}`);
    if (error !== null) {
      console.log(`exec error: ${error}`);
    }
  }
);
```
After this, I updated the project's `package.json` file and added a new script:
```json
{
  "scripts": {
    "merge": "node .create-merge-request.js"
  }
}
```
Finally, I can create a merge request using this simple command:
```bash
yarn merge my-target-branch
```
**NOTICE**: Do not push your branch before running this command. If your branch is already pushed, Git will respond with `Everything up-to-date` and the merge request will not be created.
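If you also want to set the merge request title or description from the script, the command string can be assembled before calling `exec`. Here is a minimal sketch; the `buildPushCommand` helper is hypothetical (not part of git or GitLab), and only the push-option names come from the GitLab docs:

```javascript
// Hypothetical helper: assembles a `git push` command string with GitLab
// merge-request push options. The option names are from the GitLab docs;
// the helper itself is illustrative.
function buildPushCommand({ target = 'develop', title, description } = {}) {
  const options = [
    'merge_request.create',
    'merge_request.remove_source_branch',
    `merge_request.target=${target}`,
  ];
  if (title) options.push(`merge_request.title="${title}"`);
  if (description) options.push(`merge_request.description="${description}"`);
  return 'git push origin HEAD ' + options.map(o => `-o ${o}`).join(' ');
}

console.log(buildPushCommand({ target: 'main', title: 'My feature' }));
```

The resulting string could then be handed to `exec` exactly like in the script above.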
farnabaz
255,283
Inheritance In ES6
Inheritance In ES6 Classes In ES6 Implementing a Class to Implement a class w...
0
2020-02-04T19:13:32
https://dev.to/salaheddin12/inheritance-in-es6-2amp
es6, javascript, inheritance, oop
# Inheritance In ES6

## Classes In ES6

**Implementing a Class**

- To implement a class we use the `class` keyword.
- A class usually defines a constructor function (if you leave it out, JavaScript supplies an empty default constructor).
- For attributes we use getters and setters.
- Getters and setters are special kinds of methods that allow us to set the value of an attribute and get the value of an attribute.

```javascript
class Person {
  //getter
  //here we use the get keyword and then the name of the getter
  get Name() {
    return this.name;
    //here we simply return the value of that attribute, but with the this keyword
    //so that we get the attribute of the selected instance of that class
  }
  //setter
  set Name(name_) {
    //and here we set the value of the attribute
    this.name = name_;
  }
  //getter
  get Age() {
    return this.age;
  }
  //setter
  set Age(age_) {
    this.age = age_;
  }
  //constructor
  constructor(name_, age_) {
    this.name = name_;
    this.age = age_;
    this.canWalk = function() {
      console.log("YAY !! I'm Walking");
    };
  }
}
```

```javascript
//this is an instance of a Person
let me = new Person("salah", 25);
console.log(me); //Person {name:'salah',age:25}
console.log(me.Age); //getter
me.Age = 22; //setter
console.log(me.Name); //getter
me.Name = "SALAH EDDINE"; //setter
console.log(me); //Person {name:'SALAH EDDINE',age:22}
```

## Inheritance

- In modern JavaScript we have the `extends` keyword, which makes it easier to implement inheritance between our classes.
- `Male extends Person` means that the `Male` class has all the properties and methods of `Person` and also has its own. 
```javascript
class Male extends Person {
  //getter
  get Gender() {
    return this.gender;
  }

  constructor(name_, age_ /*, here we can add other attributes as well*/) {
    //in the child class (Male) we must call the parent constructor first
    //we do that with the super() call, passing it the parent properties name_ and age_
    super(name_, age_);
    //and now we assign the child property gender, which in this case has a fixed value
    this.gender = "male";
  }
}
```

```javascript
//this is an instance of a Male
let MalePerson = new Male("mohammed", 15);
console.log(MalePerson); //Male {name:'mohammed',age:15,gender:'male'}
```
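A child class can also override a parent method and still reach the parent's version through `super.method()`. Here is a minimal, self-contained sketch of the same idea (the `introduce` method and the `Female` class are illustrative additions, not part of the example above):

```javascript
// Illustrative classes: Female overrides introduce() but reuses the
// parent's version via super.introduce().
class Person {
  constructor(name_) {
    this.name = name_;
  }
  introduce() {
    return `I am ${this.name}`;
  }
}

class Female extends Person {
  constructor(name_) {
    super(name_); // the parent constructor must run first
    this.gender = "female";
  }
  //overrides Person.introduce, extending it rather than replacing it
  introduce() {
    return `${super.introduce()} and my gender is ${this.gender}`;
  }
}

console.log(new Female("sara").introduce()); // "I am sara and my gender is female"
```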
salaheddin12
255,284
Pitfalls of overusing React Context
Written by Ibrahima Ndaw✏️ For the most part, React and state go hand-in-hand. As your React app...
0
2020-02-07T18:14:22
https://blog.logrocket.com/pitfalls-of-overusing-react-context/
react, javascript, tutorial
--- title: Pitfalls of overusing React Context published: true date: 2020-02-04 18:30:57 UTC tags: react,javascript,tutorial canonical_url: https://blog.logrocket.com/pitfalls-of-overusing-react-context/ cover_image: https://dev-to-uploads.s3.amazonaws.com/i/tf5ymw7v42phe112yv0x.png --- **Written by [Ibrahima Ndaw](https://blog.logrocket.com/author/ibrahimandaw/)**✏️ For the most part, React and state go hand-in-hand. As your React app grows, it becomes more and more crucial to manage the state. With [React 16.8 and the introduction of hooks](https://dev.to/bnevilleoneill/introducing-react-16-8-featuring-official-support-for-hooks-lbd), the React Context API has improved markedly. Now we can combine it with hooks to mimic `react-redux`; some folks even use it to manage their entire application state. However, React Context has some pitfalls and overusing it can lead to performance issues. In this tutorial, we’ll review the potential consequences of overusing React Context and discuss how to use it effectively in your next React project. ### What is React Context? React Context provides a way to share data (state) in your app without passing down props on every component. It enables you to consume the data held in the context through providers and consumers without prop drilling. 
```jsx const CounterContext = React.createContext(); const CounterProvider = ({ children }) => { const [count, setCount] = React.useState(0); const increment = () => setCount(counter => counter + 1); const decrement = () => setCount(counter => counter - 1); return ( <CounterContext.Provider value={{ count, increment, decrement }}> {children} </CounterContext.Provider> ); }; const IncrementCounter = () => { const { increment } = React.useContext(CounterContext); return <button onClick={increment}> Increment</button>; }; const DecrementCounter = () => { const { decrement } = React.useContext(CounterContext); return <button onClick={decrement}> Decrement</button>; }; const ShowResult = () => { const { count } = React.useContext(CounterContext); return <h1>{count}</h1>; }; const App = () => ( <CounterProvider> <ShowResult /> <IncrementCounter /> <DecrementCounter /> </CounterProvider> ); ``` Note that I intentionally split `IncrementCounter` and `DecrementCounter` into two components. This will help me more clearly demonstrate the issues associated with React Context. As you can see, we have a very simple context. It contains two functions, `increment` and `decrement`, which handle the calculation and the result of the counter. Then, we pull data from each component and display it on the `App` component. Nothing fancy, just your typical React app. ![Gordon Ramsay Asking, "What Is the Problem?"](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/gordon-ramsay-what-is-the-problem.gif?resize=500%2C338&ssl=1) From this perspective, you may be wondering what’s the problem with using React Context? For such a simple app, managing the state is easy. However, as your app grows more complex, React Context can quickly become a developer’s nightmare. 
[![LogRocket Free Trial Banner](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/f760c-1gpjapknnuyhu8esa3z0jga.png?resize=1200%2C280&ssl=1)](https://logrocket.com/signup/) ### Pros and cons of using React Context Although React Context is simple to implement and great for certain types of apps, it’s built in such a way that every time the value of the context changes, the component consumer rerenders. So far, this hasn’t been a problem for our app because if the component doesn’t rerender whenever the value of the context changes, it will never get the updated value. However, the rerendering will not be limited to the component consumer; all components related to the context will rerender. To see it in action, let’s update our example. ```jsx const CounterContext = React.createContext(); const CounterProvider = ({ children }) => { const [count, setCount] = React.useState(0); const [hello, setHello] = React.useState("Hello world"); const increment = () => setCount(counter => counter + 1); const decrement = () => setCount(counter => counter - 1); const value = { count, increment, decrement, hello }; return ( <CounterContext.Provider value={value}>{children}</CounterContext.Provider> ); }; const SayHello = () => { const { hello } = React.useContext(CounterContext); console.log("[SayHello] is running"); return <h1>{hello}</h1>; }; const IncrementCounter = () => { const { increment } = React.useContext(CounterContext); console.log("[IncrementCounter] is running"); return <button onClick={increment}> Increment</button>; }; const DecrementCounter = () => { console.log("[DecrementCounter] is running"); const { decrement } = React.useContext(CounterContext); return <button onClick={decrement}> Decrement</button>; }; const ShowResult = () => { console.log("[ShowResult] is running"); const { count } = React.useContext(CounterContext); return <h1>{count}</h1>; }; const App = () => ( <CounterProvider> <SayHello /> <ShowResult /> <IncrementCounter /> 
<DecrementCounter /> </CounterProvider> ); ``` I added a new component, `SayHello`, which displays a message from the context. We’ll also log a message whenever these components render or rerender. That way, we can see whether the change affects all components. ```jsx // Result of the console [SayHello] is running [ShowResult] is running [IncrementCounter] is running [DecrementCounter] is running ``` When the page finishes loading, all messages will appear on the console. Still nothing to worry about so far. Let’s click on the `increment` button to see what happens. ```jsx // Result of the console [SayHello] is running [ShowResult] is running [IncrementCounter] is running [DecrementCounter] is running ``` As you can see, all the components rerender. Clicking on the `decrement` button has the same effect. Every time the value of the context changes, all components’ consumers will rerender. You may still be wondering, who cares? Isn’t that just how React Context works? ![Joey From "Friends" Shrugging](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/friends-who-cares.gif?resize=192%2C231&ssl=1) For such a tiny app, we don’t have to worry about the negative effects of using React Context. But in a larger project with frequent state changes, the tool creates more problems than it helps solve. A simple change would cause countless rerenders, which would eventually lead to significant performance issues. So how can we avoid this performance-degrading rerendering? ### Prevent rerendering with `useMemo()` Maybe memoization is the solution to our problem. Let’s update our code with `useMemo` to see if memoizing our value can help us avoid rerendering. 
```jsx const CounterContext = React.createContext(); const CounterProvider = ({ children }) => { const [count, setCount] = React.useState(0); const [hello, sayHello] = React.useState("Hello world"); const increment = () => setCount(counter => counter + 1); const decrement = () => setCount(counter => counter - 1); const value = React.useMemo( () => ({ count, increment, decrement, hello }), [count, hello] ); return ( <CounterContext.Provider value={value}>{children}</CounterContext.Provider> ); }; const SayHello = () => { const { hello } = React.useContext(CounterContext); console.log("[SayHello] is running"); return <h1>{hello}</h1>; }; const IncrementCounter = () => { const { increment } = React.useContext(CounterContext); console.log("[IncrementCounter] is running"); return <button onClick={increment}> Increment</button>; }; const DecrementCounter = () => { console.log("[DecrementCounter] is running"); const { decrement } = React.useContext(CounterContext); return <button onClick={decrement}> Decrement</button>; }; const ShowResult = () => { console.log("[ShowResult] is running"); const { count } = React.useContext(CounterContext); return <h1>{count}</h1>; }; const App = () => ( <CounterProvider> <SayHello /> <ShowResult /> <IncrementCounter /> <DecrementCounter /> </CounterProvider> ); ``` Now let’s click on the `increment` button again to see if it works. ```jsx // Result of the console [SayHello] is running [ShowResult] is running [IncrementCounter] is running [DecrementCounter] is running ``` Unfortunately, we still encounter the same problem. All components’ consumers are rerendered whenever the value of our context changes. ![Michael Scott Making a Sad Face](https://i2.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/michael-scott-sad.gif?resize=220%2C220&ssl=1) If memoization doesn’t solve the problem, should we stop managing our state with React Context altogether? ### Should you use React Context? 
Before you begin your project, you should determine how you want to manage your state. There are myriad solutions available, only one of which is React Context. To determine which tool is best for your app, ask yourself two questions: 1. When should you use it? 2. How do you plan to use it? If your state is frequently updated, React Context may not be as effective or efficient as a tool like [React Redux](https://blog.logrocket.com/when-and-when-not-to-use-redux-41807f29a7fb/). But if you have static data that undergoes lower-frequency updates such as preferred language, time changes, location changes, and user authentication, sharing data through React Context may be the best option. If you do choose to use React Context, try to split your large context into multiple contexts as much as possible and keep your state close to its component consumer. This will help you maximize the features and capabilities of React Context, which can be quite powerful in certain scenarios for simple apps. So, [should you use React Context](https://dev.to/bnevilleoneill/use-hooks-context-not-react-redux-1j08)? The answer depends on when and how. ### Final thoughts React Context is an excellent API for simple apps with infrequent state changes, but it can quickly devolve into a developer’s nightmare if you overuse it for more complex projects. Knowing how the tool works when building highly performant apps can help you determine whether it can be useful for managing state in your project. Despite its limitations when dealing with a high frequency of state changes, React Context is a very powerful state management solution when used correctly. * * * ## Full visibility into production React apps Debugging React applications can be difficult, especially when users experience issues that are difficult to reproduce. 
If you’re interested in monitoring and tracking Redux state, automatically surfacing JavaScript errors, and tracking slow network requests and component load time, [try LogRocket.](https://www2.logrocket.com/react-performance-monitoring) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/eq752g8qhbffxt3hp9t4.png) [LogRocket](https://www2.logrocket.com/react-performance-monitoring) is like a DVR for web apps, recording literally everything that happens on your React app. Instead of guessing why problems happen, you can aggregate and report on what state your application was in when an issue occurred. LogRocket also monitors your app's performance, reporting with metrics like client CPU load, client memory usage, and more. The LogRocket Redux middleware package adds an extra layer of visibility into your user sessions. LogRocket logs all actions and state from your Redux stores. Modernize how you debug your React apps — [start monitoring for free.](https://www2.logrocket.com/react-performance-monitoring) * * * The post [Pitfalls of overusing React Context](https://blog.logrocket.com/pitfalls-of-overusing-react-context/) appeared first on [LogRocket Blog](https://blog.logrocket.com).
bnevilleoneill
255,286
NGROK on Rails
Handling Webhooks in a Rails App
0
2020-02-07T18:37:44
https://dev.to/ianvaughan/ngrok-on-rails-315m
webhooks, rails, ngrok, ruby
---
title: NGROK on Rails
published: true
description: Handling Webhooks in a Rails App
tags: webhooks,rails,ngrok,ruby
---
When writing code to handle incoming webhooks, it's a good idea to actually get the service to send some real ones in: the best API docs in the world won't beat a real integration!

_A webhook is just a POST request to your server, sent when something happens on a third-party service._

The problem comes when you want to test those webhooks locally, to actually see a real request and payload, see how your controller handles it, maybe debug some payload param, etc. The issue is that the service cannot call your local computer, as it's not publicly on the internet.

This is where [ngrok](https://ngrok.com/) comes in. It gives you a public hostname and tunnels any traffic sent to it through to a chosen localhost port. E.g., to expose your default Rails app:

```bash
$ ngrok http 3000
```

That command starts ngrok, which opens its status screen, which looks like:

![ngrok console](https://dev-to-uploads.s3.amazonaws.com/i/rgfqf3s1drtt5tye9d2p.png)

It even has a nice web UI which can be used to view, edit and replay requests, very cool:

![ngrok webui](https://dev-to-uploads.s3.amazonaws.com/i/icmam9a456l7slilau9k.png)

After starting ngrok as above, the Rails app on my machine is open to the world on the hostname `9fdb5bc4.ngrok.io` (you can assume I've closed that by the time you read this!)

Your service provider will have a webhooks config page, where you can put the hostname of the service for it to post to, e.g.

![Go Cardless webhook setup](https://dev-to-uploads.s3.amazonaws.com/i/a6qg2za8u75nmgx6y3mx.png)

So all is great: you get your webhook fired off, which routes to your Rails app, which reports this:

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/fliaxbk4a4qa44papa1l.png)

(You will only see the log output version of that on the `rails s` console!)

So it says to add the hostname to the config, but we can do one better than that. 
If we add the following to `./config/environments/development.rb`:

```ruby
config.hosts << ENV['NGROK_HOST'] if ENV['NGROK_HOST'].present?
```

Then we don't need to change our code when we restart ngrok (each time it starts it gives a random hostname, unless you pay!)

With that config line in place, all we need to do before starting Rails is export a var, e.g.:

```bash
export NGROK_HOST=a234c154.ngrok.io
```

Then it will be added to Rails' `config.hosts` so it accepts requests.
ianvaughan
255,466
Music Player UI Using Bootstrap 4
We have lots of collection of the bootstrap component with own styling and design. So here we designe...
0
2020-02-05T03:54:16
https://dev.to/w3hubs/music-player-ui-using-bootstrap-4-4p68
css, todayilearned, todayisearched, html
We have a large collection of Bootstrap components, each with its own styling and design. Here we designed a simple music player user interface using Bootstrap 4. The music player manages all your songs in a single application with an attractive interface. We built a simple, responsive player with Font Awesome icons.  In this element, we used the button group and Bootstrap color classes to make the element more beautiful.  Make it yours now by using it and downloading it, and please share it. We will design more elements for you. GET SOURCE CODE:- https://w3hubs.com/music-player-ui-using-bootstrap-4/
w3hubs
256,314
Day 79: Drive
liner notes: Professional : Got up early to sit in on a Talk Prep session where we discussed Demos....
0
2020-02-06T03:29:20
https://dev.to/dwane/day-79-drive-2mm1
hiphop, code, coding, lifelongdev
_liner notes_: - Professional : Got up early to sit in on a Talk Prep session where we discussed demos. Had a really good discussion and great takeaways. After the session, I got ready to hit the road. I made it all the way to West Palm Beach before taking the second Talk Prep session of the day. Got some more great discussion and thoughts from more team members. - Personal : After the meeting, I continued the drive and made it to Miami and got a chance to hang out with my sister, nephew and brother-in-law. Had a really great time and ate some great food. ![Volcano El Teide, Spain at night with a grassy plain and mountains in the distance](https://dev-to-uploads.s3.amazonaws.com/i/wjd9j6bsxvs491teay95.png) It's late and I have to get up early to start my day at Sunshine PHP! As usual, I'll be posting from the event on my travel blog thing. https://dwane.in Yup. That's it. I'm tired. haha Have a great night! peace piece Dwane / conshus https://dwane.io / https://HIPHOPandCODE.com {% youtube EpSlPgLanVQ %}
dwane
255,549
[tip] How to use modular SQL in your XML manifest
Hi extreme Joomlers! Here is a crunchy new tip that comes straight from the heart of the Joomla code...
0
2020-02-05T17:12:15
https://dev.to/mralexandrelise/astuce-comment-utiliser-du-sql-modulaire-dans-votre-manifest-xml-a8h
webdev, tutorial, joomla, blog
--- title: [tip] How to use modular SQL in your XML manifest published: true date: 2020-02-05 02:40:21 UTC tags: webdev,tutorial,joomla,blog canonical_url: --- Hi extreme Joomlers! Here is a crunchy new tip that comes straight from the heart of the Joomla! code. Did you know that you can use several SQL files in the installation process of your Joomla extension? These files are read sequentially (one after the other) in the order in which they appear in your extension's manifest XML file. Here is an example. Pay particular attention to the install part. <script src="https://gist.github.com/alexandreelise/958989a3495bfb60ad0a0e8c681044b6.js" data-inline></script>
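If the embedded gist does not load, a manifest fragment of this shape shows the idea. This is an illustrative sketch (the file names are made up), based on the `<sql>` / `<file>` elements of Joomla extension manifests; each `<file>` entry is executed in the order it appears:

```xml
<install>
  <sql>
    <!-- executed first -->
    <file driver="mysql" charset="utf8">sql/install.step1.sql</file>
    <!-- then this one -->
    <file driver="mysql" charset="utf8">sql/install.step2.sql</file>
  </sql>
</install>
```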
mralexandrelise
255,596
Demonstrating BDD (Behavior-driven development) in Go
In Demonstrating TDD (Test-driven development) in Go I've written about TDD and this time I want to...
4,984
2020-02-05T08:55:11
https://www.linkedin.com/pulse/demonstrating-bdd-behavior-driven-development-go-artur-neumann/
testing, bdd, go, qa
In [Demonstrating TDD (Test-driven development) in Go](https://dev.to/jankaritech/demonstrating-tdd-test-driven-development-in-go-27b0) I wrote about TDD, and this time I want to demonstrate BDD (Behavior-driven development) with Go. I will not explain all the principles of BDD upfront, but will explain some of them as I use them in the example. You can read more about them here: - ["Introducing BDD", by Dan North (2006)](http://blog.dannorth.net/introducing-bdd) - [Wikipedia](https://en.wikipedia.org/wiki/Behavior-driven_development) - ["The beginner's guide to BDD (behaviour-driven development)", By Konstantin Kudryashov, Alistair Stead, Dan North](https://inviqa.com/blog/bdd-guide) - [Behaviour-Driven Development](https://cucumber.io/docs/bdd/) If you have more good resources, please post them in the comment section. ## The basic idea I'm a fan of explaining things with real examples; that's why in [Demonstrating TDD (Test-driven development) in Go](https://dev.to/jankaritech/demonstrating-tdd-test-driven-development-in-go-27b0) I created a small library to convert Bikram Sambat (BS) (also called Vikram Samvat) dates to Gregorian dates and vice-versa. Now I want to use that library to create an API-driven service to do the conversion. (The project can be found on [github](https://github.com/JankariTech/bsDateServer)) One could now give that "requirement" to a developer and see what happens. With that kind of small project, chances are something good will come out, but bad things might also happen: - the API will be super-complex and over-engineered - the API does the conversion, but does not handle errors correctly - etc. So there is a lot of potential for wasted resources, conflicts, misunderstandings etc. It would be better to write down the requirements in more detail, because: 1. As a customer, you want your application to behave correctly (sometimes without knowing exactly what that means). 2. 
As a developer, you want to develop exactly what is requested and needed (to save time) and get paid afterwards. 3. As a QA person, you want to know what you have to test, and you want to know what is a bug and what is a feature. So basically the goal is to get all the stakeholders (there might be more than the listed 3) to communicate and agree on what should be the acceptable behavior of the application. And that is, in a nutshell, the idea of BDD: improve the communication between stakeholders so that everybody knows what is being talked about. But how to do that? The customer might think that the one-line explanation "API to convert dates from BS to AD and vice-versa" is enough, the manager wants to write a contract, and the developer says: "code is documentation enough". A good way to bring everybody onto the same page is to describe the features of an application using the Gherkin language. It's a semi-structured language that is so simple even a cucumber could understand it. ## Who wants to achieve what and why? In the project folder we create a new file called `bs-to-ad-conversion.feature`. Here we want to describe the feature to convert dates in one direction. The description of every feature of the app is supposed to go into a separate file. _Side note: there is always a discussion about what a "feature" is. In our example: is the conversion in both directions one feature or two? Is the error-handling a separate feature or a part of the conversion feature? If you are not sure, be practical and simply make sure the file does not get too long._ We start the feature file with a very general description of the feature: ``` Feature: convert dates from BS to AD using an API As an app-developer in Nepal I want to be able to send BS dates to an API endpoint and receive the corresponding AD dates So that I have a simple way to convert BS to AD dates, that can be used in different apps ``` These lines are very important. 
They answer the question of WHO wants to achieve WHAT with that feature and WHY. If you don't know who will use that feature, why do you implement it? If there is nothing to achieve with that feature, you don't actually have a feature. And if there is no reason to use that feature, it doesn't have a business value. So if the stakeholders (developer, customer, manager, QA, etc.) cannot answer these 3 questions, nobody really should spend time and money to implement it. ## Scenarios Every feature has different scenarios. An "add item to shopping basket" feature in an online shop could have scenarios like: - adding an item to the basket while the user is logged in - adding an item to the basket while the user is not logged in - adding an item to the basket when the basket is empty - adding an item to the basket when there is already the same item in the basket - adding multiple items to the basket at once - etc. In every scenario your app might behave differently. If the specific behavior in a scenario matters for one or more stakeholders, better describe it. In Gherkin we have to start the scenario description with the `Scenario:` keyword and a short free-text sentence: ``` Scenario: converting a valid BS date Scenario: converting an invalid BS date ``` ## Given, When, Then Now we want to describe the specific behavior of the app in each scenario. For that, Gherkin provides 3 different keywords: - **Given** - prerequisites for the scenario - **When** - the action to be tested - **Then** - the desired observable outcome Additionally there is **And**: if you have multiple of one of the above, you don't need to write ``` When doing A When doing B ``` but you can use `And` (it just sounds and reads nicer) ``` When doing A And doing B ``` For a complex application there will most likely be some steps to bring the application into the state that you want to test (e.g. create users, navigate to a specific page, etc.); for those prerequisites you should use the `Given` keyword. 
For our app, I cannot really think of any prerequisites, so I skip over to the `When` keyword. The `When` keyword is for the action (or actions) you really want to test. ``` Scenario: converting a valid BS date When a "GET" request is sent to the endpoint "/ad-from-bs/2060-04-01" Scenario: converting an invalid BS date When a "GET" request is sent to the endpoint "/ad-from-bs/2060-13-01" ``` Now, what should happen in those specific scenarios? What is the observable outcome? Use the `Then` keyword to describe that (if there are different outcomes, connect multiple `Then`s with `And`s): ``` Scenario: converting a valid BS date When a "GET" request is sent to the endpoint "/ad-from-bs/2060-04-01" Then the HTTP-response code should be "200" And the response content should be "2003-07-17" Scenario: converting an invalid BS date When a "GET" request is sent to the endpoint "/ad-from-bs/60-13-01" Then the HTTP-response code should be "400" And the response content should be "not a valid date" ``` So as pieces of our description we have: 1. features - one feature per file 2. scenarios - different ways that the feature should behave 3. steps - detailed description of every scenario. Every step starts with `Given`, `When` or `Then`. All these pieces have to be written in a natural language that all stakeholders can understand. What that means in detail would be a whole post of its own. In our case, the "customer" requested an API, so IMO using technical terms like "HTTP-response code" should be OK. If you describe a GUI, the descriptions should probably be even less technical. The bottom line is: use words that everyone understands. Remember: BDD is all about improving communication! For more information about how to phrase the step definitions see: https://cucumber.io/docs/gherkin/reference/ After specifying one feature (or even one scenario), the developer could start developing. In SCRUM terms: one feature is one user story, so you do all your agile development cycle with it. 
Create one or multiple, put them in sprints, work on them, test them, etc. The description is not only the to-do list for the developer, but also the test procedure for QA and the documentation.

## Test it automatically

We could stop there, but there is a great bonus point: let's use these descriptions to run automatic tests. For that we need software that interprets the Gherkin language and runs code that executes the tests. For Go there is the [godog package](https://github.com/cucumber/godog).

To install godog we first have to create a simple `go.mod` file with the content

```
module github.com/JankariTech/bsDateServer

go 1.19
```

and then run `go get github.com/cucumber/godog@v0.12.6`. If you are running within `$GOPATH`, you will need to set `GO111MODULE=on`. (The version number `@v0.12.6` is optional; if it's not given, the latest version will be installed. I set the version here to make sure this blog post stays valid even when something changes in godog.)

We also need the godog CLI command to run our tests.
Run the following command to add the godog cli to `$GOPATH/bin` ```shell go install github.com/cucumber/godog/cmd/godog@v0.12.6 ``` Now you should be able to run godog with `$GOPATH/bin/godog *.feature` and the output would be something like: ``` Feature: convert dates from BS to AD using an API As an app-developer in Nepal I want to be able to send BS dates to an API endpoint and receive the corresponding AD dates So that I have a simple way to convert BS to AD dates, that can be used in different apps Scenario: converting a valid BS date # bs-to-ad-conversion.feature:6 When a "GET" request is sent to the endpoint "/ad-from-bs/2060-04-01" Then the HTTP-response code should be "200" And the response content should be "2003-07-17" Scenario: converting an invalid BS date # bs-to-ad-conversion.feature:11 When a "GET" request is sent to the endpoint "/ad-from-bs/60-13-01" Then the HTTP-response code should be "400" And the response content should be "not a valid date" 2 scenarios (2 undefined) 6 steps (6 undefined) 281.282µs You can implement step definitions for undefined steps with these snippets: func aRequestIsSentToTheEndpoint(arg1, arg2 string) error { return godog.ErrPending } func theHTTPresponseCodeShouldBe(arg1 string) error { return godog.ErrPending } func theResponseContentShouldBe(arg1 string) error { return godog.ErrPending } func InitializeScenario(ctx *godog.ScenarioContext) { ctx.Step(`^a "([^"]*)" request is sent to the endpoint "([^"]*)"$`, aRequestIsSentToTheEndpoint) ctx.Step(`^the HTTP-response code should be "([^"]*)"$`, theHTTPresponseCodeShouldBe) ctx.Step(`^the response content should be "([^"]*)"$`, theResponseContentShouldBe) } ``` Godog lists all the scenarios we want to run and tells us that it has no idea what to do with them, because we haven't implemented any of the steps. Now we actually need to write code to tell godog how to execute our scenarios. 
For that create a file with the name `bsdateServer_test.go` and the content:

```
package main

import (
	"github.com/cucumber/godog"
)

func aRequestIsSentToTheEndpoint(arg1, arg2 string) error {
	return godog.ErrPending
}

func theHTTPresponseCodeShouldBe(arg1 string) error {
	return godog.ErrPending
}

func theResponseContentShouldBe(arg1 string) error {
	return godog.ErrPending
}

func InitializeScenario(ctx *godog.ScenarioContext) {
	ctx.Step(`^a "([^"]*)" request is sent to the endpoint "([^"]*)"$`, aRequestIsSentToTheEndpoint)
	ctx.Step(`^the HTTP-response code should be "([^"]*)"$`, theHTTPresponseCodeShouldBe)
	ctx.Step(`^the response content should be "([^"]*)"$`, theResponseContentShouldBe)
}
```

In the `InitializeScenario` function we have the link between the human-readable Gherkin and the function that the computer has to execute for that step. The output of `$GOPATH/bin/godog` now looks a bit different:

```
Feature: convert dates from BS to AD using an API
  As an app-developer in Nepal
  I want to be able to send BS dates to an API endpoint and receive the corresponding AD dates
  So that I have a simple way to convert BS to AD dates, that can be used in different apps

  Scenario: converting a valid BS date # bs-to-ad-conversion.feature:6
    When a "GET" request is sent to the endpoint "/ad-from-bs/2060-04-01" # bsdateServer_test.go:8 -> aRequestIsSentToTheEndpoint
      TODO: write pending definition
    Then the HTTP-response code should be "200" # bsdateServer_test.go:12 -> theHTTPresponseCodeShouldBe
    And the response content should be "2003-07-17" # bsdateServer_test.go:16 -> theResponseContentShouldBe

  Scenario: converting an invalid BS date # bs-to-ad-conversion.feature:11
    When a "GET" request is sent to the endpoint "/ad-from-bs/60-13-01" # bsdateServer_test.go:8 -> aRequestIsSentToTheEndpoint
      TODO: write pending definition
    Then the HTTP-response code should be "400" # bsdateServer_test.go:12 -> theHTTPresponseCodeShouldBe
    And the response content should be "not a valid date" # bsdateServer_test.go:16 -> theResponseContentShouldBe

2 scenarios (2 pending)
6 steps (2 pending, 4 skipped)
576.1µs
```

Godog found the functions that correspond to every step, but those don't do anything yet; they just return an error. Let's implement the first function to send the request:

```diff
index c8b0144..f7ee56d 100644
--- a/bsdateServer_test.go
+++ b/bsdateServer_test.go
@@ -1,11 +1,26 @@
 package main
 
 import (
+	"fmt"
 	"github.com/cucumber/godog"
+	"net/http"
+	"strings"
 )
 
-func aRequestIsSentToTheEndpoint(arg1, arg2 string) error {
-	return godog.ErrPending
+var host = "http://localhost:10000"
+var res *http.Response
+
+func aRequestIsSentToTheEndpoint(method, endpoint string) error {
+	var reader = strings.NewReader("")
+	var request, err = http.NewRequest(method, host+endpoint, reader)
+	if err != nil {
+		return fmt.Errorf("could not create request %s", err.Error())
+	}
+	res, err = http.DefaultClient.Do(request)
+	if err != nil {
+		return fmt.Errorf("could not send request %s", err.Error())
+	}
+	return nil
 }
 
 func theHTTPresponseCodeShouldBe(arg1 string) error {
```

Here we create a request and send it using the `net/http` package. The trick in godog is to return `nil` if everything goes well; that will make the step pass. If a step function returns something that implements the `error` interface, the step will fail. BTW: the `res` variable is defined outside the function because we need to access it from other steps as well.

Running godog now gives us this result

```
...
  Scenario: converting a valid BS date # bs-to-ad-conversion.feature:6
    When a "GET" request is sent to the endpoint "/ad-from-bs/2060-04-01" # bsdateServer_test.go:13 -> aRequestIsSentToTheEndpoint
      could not send request Get "http://localhost:10000/ad-from-bs/2060-04-01": dial tcp 127.0.0.1:10000: connect: connection refused
    Then the HTTP-response code should be "200" # bsdateServer_test.go:27 -> theHTTPresponseCodeShouldBe
    And the response content should be "2003-07-17" # bsdateServer_test.go:31 -> theResponseContentShouldBe
...
```

It cannot connect to the server, because nothing is listening on that port. Let's change that. For a minimal implementation of a server waiting on the port, put this code into `main.go` and run it with `go run main.go` (if the `gorilla/mux` dependency is not yet in your module, fetch it first with `go get github.com/gorilla/mux`):

```
package main

import (
	"fmt"
	"github.com/gorilla/mux"
	"log"
	"net/http"
)

func homePage(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Bikram Sambat Server")
}

func handleRequests() {
	myRouter := mux.NewRouter().StrictSlash(true)
	myRouter.HandleFunc("/", homePage)
	log.Fatal(http.ListenAndServe(":10000", myRouter))
}

func main() {
	handleRequests()
}
```

Now we are a step further:

```
  Scenario: converting a valid BS date # bs-to-ad-conversion.feature:6
    When a "GET" request is sent to the endpoint "/ad-from-bs/2060-04-01" # bsdateServer_test.go:13 -> aRequestIsSentToTheEndpoint
    Then the HTTP-response code should be "200" # bsdateServer_test.go:27 -> theHTTPresponseCodeShouldBe
      TODO: write pending definition
    And the response content should be "2003-07-17" # bsdateServer_test.go:31 -> theResponseContentShouldBe

  Scenario: converting an invalid BS date # bs-to-ad-conversion.feature:11
    When a "GET" request is sent to the endpoint "/ad-from-bs/60-13-01" # bsdateServer_test.go:13 -> aRequestIsSentToTheEndpoint
    Then the HTTP-response code should be "400" # bsdateServer_test.go:27 -> theHTTPresponseCodeShouldBe
      TODO: write pending definition
    And the response content should be "not a valid date" # bsdateServer_test.go:31 -> theResponseContentShouldBe

2 scenarios (2 pending)
6 steps (2 passed, 2 pending, 2 skipped)
1.849695ms
```

The `When` step passed: it sent the request. But the first `Then` step is still pending, because it's not implemented yet. Let's do that:

```diff
--- a/bsdateServer_test.go
+++ b/bsdateServer_test.go
@@ -3,6 +3,7 @@ package main
 import (
 	"fmt"
 	"github.com/cucumber/godog"
+	"io/ioutil"
 	"net/http"
 	"strings"
 )
@@ -23,16 +24,23 @@ func aRequestIsSentToTheEndpoint(method, endpoint string) error {
 	return nil
 }
 
-func theHTTPresponseCodeShouldBe(arg1 string) error {
-	return godog.ErrPending
+func theHTTPresponseCodeShouldBe(expectedCode int) error {
+	if expectedCode != res.StatusCode {
+		return fmt.Errorf("status code not as expected! Expected '%d', got '%d'", expectedCode, res.StatusCode)
+	}
+	return nil
 }
 
-func theResponseContentShouldBe(arg1 string) error {
-	return godog.ErrPending
+func theResponseContentShouldBe(expectedContent string) error {
+	body, _ := ioutil.ReadAll(res.Body)
+	if expectedContent != string(body) {
+		return fmt.Errorf("status code not as expected! Expected '%s', got '%s'", expectedContent, string(body))
+	}
+	return nil
 }
 
 func InitializeScenario(ctx *godog.ScenarioContext) {
 	ctx.Step(`^a "([^"]*)" request is sent to the endpoint "([^"]*)"$`, aRequestIsSentToTheEndpoint)
-	ctx.Step(`^the HTTP-response code should be "([^"]*)"$`, theHTTPresponseCodeShouldBe)
+	ctx.Step(`^the HTTP-response code should be "(\d+)"$`, theHTTPresponseCodeShouldBe)
 	ctx.Step(`^the response content should be "([^"]*)"$`, theResponseContentShouldBe)
 }
```

Here we simply get the status code and the response body and compare them with the expectation. If they do not match, we return an error. Make sure you write good error messages; the goal is to point the developer as directly as possible to the problem. The clearer the message, the quicker the developer will be able to fix the issue.
Remember: these tests will not only be used during the initial development but also in the future, to prevent regressions.

The regular-expression change in `InitializeScenario` just makes sure that we only accept decimal numbers in that step.

Now the tests fail with:

```
...
  Scenario: converting a valid BS date # bs-to-ad-conversion.feature:6
    Then the HTTP-response code should be "200" # bs-to-ad-conversion.feature:8
      Error: status code not as expected! Expected '200', got '404'

  Scenario: converting an invalid BS date # bs-to-ad-conversion.feature:11
    Then the HTTP-response code should be "400" # bs-to-ad-conversion.feature:13
      Error: status code not as expected! Expected '400', got '404'

2 scenarios (2 failed)
6 steps (2 passed, 2 failed, 2 skipped)
1.766438ms
```

Why? Because the endpoint does not exist! The server returns 404. It's time to write the software itself! Here are the changes in `main.go` to do a simple conversion:

```diff
index ae01ed0..06299b0 100644
--- a/main.go
+++ b/main.go
@@ -2,18 +2,34 @@ package main
 
 import (
 	"fmt"
+	"github.com/JankariTech/GoBikramSambat"
 	"github.com/gorilla/mux"
 	"log"
 	"net/http"
+	"strconv"
+	"strings"
 )
 
+func getAdFromBs(w http.ResponseWriter, r *http.Request) {
+	vars := mux.Vars(r)
+	dateString := vars["date"]
+	var splitedDate = strings.Split(dateString, "-")
+	day, _ := strconv.Atoi(splitedDate[2])
+	month, _ := strconv.Atoi(splitedDate[1])
+	year, _ := strconv.Atoi(splitedDate[0])
+	date, _ := bsdate.New(day, month, year)
+	gregorianDate, _ := date.GetGregorianDate()
+	fmt.Fprintf(w, gregorianDate.Format("2006-01-02"))
+}
+
 func handleRequests() {
 	myRouter := mux.NewRouter().StrictSlash(true)
 	myRouter.HandleFunc("/", homePage)
+	myRouter.HandleFunc("/ad-from-bs/{date}", getAdFromBs)
 	log.Fatal(http.ListenAndServe(":10000", myRouter))
 }
```

Basically: split the incoming string, send it to the `GoBikramSambat` lib and return the formatted result. And with that the first scenario passes:

```
...
Scenario: converting a valid BS date # bs-to-ad-conversion.feature:6 When a "GET" request is sent to the endpoint "/ad-from-bs/2060-04-01" # bsdateServer_test.go:14 -> aRequestIsSentToTheEndpoint Then the HTTP-response code should be "200" # bsdateServer_test.go:27 -> theHTTPresponseCodeShouldBe And the response content should be "2003-07-17" # bsdateServer_test.go:34 -> theResponseContentShouldBe Scenario: converting an invalid BS date # bs-to-ad-conversion.feature:11 When a "GET" request is sent to the endpoint "/ad-from-bs/60-13-01" # bsdateServer_test.go:14 -> aRequestIsSentToTheEndpoint could not send request Get "http://localhost:10000/ad-from-bs/60-13-01": EOF Then the HTTP-response code should be "400" # bsdateServer_test.go:27 -> theHTTPresponseCodeShouldBe And the response content should be "not a valid date" # bsdateServer_test.go:34 -> theResponseContentShouldBe --- Failed steps: Scenario: converting an invalid BS date # bs-to-ad-conversion.feature:11 When a "GET" request is sent to the endpoint "/ad-from-bs/60-13-01" # bs-to-ad-conversion.feature:12 Error: could not send request Get "http://localhost:10000/ad-from-bs/60-13-01": EOF 2 scenarios (1 passed, 1 failed) 6 steps (3 passed, 1 failed, 2 skipped) 2.035601ms ``` With a bit of error-handling we should be able to make the other one pass also. 
```diff index 06299b0..a62eaf6 100644 --- a/main.go +++ b/main.go @@ -21,7 +21,11 @@ func getAdFromBs(w http.ResponseWriter, r *http.Request) { day, _ := strconv.Atoi(splitedDate[2]) month, _ := strconv.Atoi(splitedDate[1]) year, _ := strconv.Atoi(splitedDate[0]) - date, _ := bsdate.New(day, month, year) + date, err := bsdate.New(day, month, year) + if err != nil { + http.Error(w, err.Error(), http.StatusBadRequest) + return + } gregorianDate, _ := date.GetGregorianDate() fmt.Fprintf(w, gregorianDate.Format("2006-01-02")) } index 3156498..16c48ab 100644 --- a/bsdateServer_test.go +++ b/bsdateServer_test.go @@ -35,7 +35,7 @@ func theHTTPresponseCodeShouldBe(expectedCode int) error { func theResponseContentShouldBe(expectedContent string) error { body, _ := ioutil.ReadAll(res.Body) - if expectedContent != string(body) { + if expectedContent != strings.TrimSpace(string(body)) { return fmt.Errorf("status code not as expected! Expected '%s', got '%s'", expectedContent, string(body)) } return nil ``` In `main.go` we now spit out an Error if the conversion does not work and in the tests we trim the body, because `http.Error` likes to send an `\n` at the end of the body. 
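Even after this fix the handler stays a little fragile: `strings.Split` plus `strconv.Atoi` silently turns a malformed segment like `abc-def-ghi` into zeros, and a segment without two dashes would even panic with an index out of range. A defensive parsing helper could look like the sketch below (the `parseBsDate` function and its regex are my own illustration, not part of the article's code):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// bsDatePattern only accepts input shaped like "YYYY-M-D" / "YYYY-MM-DD",
// so the strconv.Atoi calls below can never fail and no slice index
// can be out of range.
var bsDatePattern = regexp.MustCompile(`^(\d{4})-(\d{1,2})-(\d{1,2})$`)

// parseBsDate validates the raw URL segment before converting
// its parts to integers.
func parseBsDate(s string) (year, month, day int, err error) {
	parts := bsDatePattern.FindStringSubmatch(s)
	if parts == nil {
		return 0, 0, 0, fmt.Errorf("not a valid date")
	}
	year, _ = strconv.Atoi(parts[1])
	month, _ = strconv.Atoi(parts[2])
	day, _ = strconv.Atoi(parts[3])
	return year, month, day, nil
}

func main() {
	y, m, d, err := parseBsDate("2060-04-01")
	fmt.Println(y, m, d, err)

	_, _, _, err = parseBsDate("abc-def-ghi")
	fmt.Println(err)
}
```

Note that this pattern would also reject the two-digit year in `60-13-01` before it ever reaches `bsdate.New`, which still produces the "not a valid date" / 400 response that the second scenario expects.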
Finally, the scenarios pass:

```
Feature: convert dates from BS to AD using an API
  As an app-developer in Nepal
  I want to be able to send BS dates to an API endpoint and receive the corresponding AD dates
  So that I have a simple way to convert BS to AD dates, that can be used in different apps

  Scenario: converting a valid BS date # bs-to-ad-conversion.feature:6
    When a "GET" request is sent to the endpoint "/ad-from-bs/2060-04-01" # bsdateServer_test.go:14 -> aRequestIsSentToTheEndpoint
    Then the HTTP-response code should be "200" # bsdateServer_test.go:27 -> theHTTPresponseCodeShouldBe
    And the response content should be "2003-07-17" # bsdateServer_test.go:34 -> theResponseContentShouldBe

  Scenario: converting an invalid BS date # bs-to-ad-conversion.feature:11
    When a "GET" request is sent to the endpoint "/ad-from-bs/60-13-01" # bsdateServer_test.go:14 -> aRequestIsSentToTheEndpoint
    Then the HTTP-response code should be "400" # bsdateServer_test.go:27 -> theHTTPresponseCodeShouldBe
    And the response content should be "not a valid date" # bsdateServer_test.go:34 -> theResponseContentShouldBe

2 scenarios (2 passed)
6 steps (6 passed)
1.343415ms
```

## Examples

The scenarios we have written down are pretty limited; there are probably more requirements for the software, especially ones that have not been spoken about yet. To reduce the size of the feature file, Gherkin has the `Examples:` keyword.
```diff index 5a00814..18db1ed 100644 --- a/features/bs-to-ad-convertion.feature +++ b/features/bs-to-ad-convertion.feature @@ -3,12 +3,25 @@ Feature: convert dates from BS to AD using an API I want to be able to send BS dates to an API endpoint and receive the corresponding AD dates So that I have a simple way to convert BS to AD dates, that can be used in other apps - Scenario: converting a valid BS date - When a "GET" request is sent to the endpoint "/ad-from-bs/2060-04-01" + Scenario Outline: converting a valid BS date + When a "GET" request is sent to the endpoint "/ad-from-bs/<bs-date>" Then the HTTP-response code should be "200" - And the response content should be "2003-07-17" + And the response content should be "<ad-date>" + Examples: + | bs-date | ad-date | + | 2060-04-01 | 2003-07-17 | + | 2040-01-01 | 1983-04-14 | + | 2040-12-30 | 1984-04-12 | ``` Instead of `Scenario` we have to use `Scenario Outline` and at the bottom of the Outline we add a table. The headings of the table are used as "variables" and the table rows are substituted into the steps e.g. `<bs-date>` becomes `2060-04-01`. Godog will run a single scenario for every line in the examples table. That way you can very easily multiply out the test cases. To learn more about Scenario Outlines and Example-tables read this blog post: [Scenario Outline In Gherkin Feature File ](https://dev.to/jankaritech/scenario-outline-in-gherkin-feature-file-16jh) by {% user jasson99 %} ## Conclusion 1. Writing down the expected behaviors using the Gherkin language can improve the communication between the different stakeholders and with that increase customer satisfaction, productivity and the chances to make the project a success. 2. The feature descriptions become the requirement documentation. 3. Additionally, the same feature descriptions can be used to run automatic tests. 
If you need help with setting up BDD or you want to outsource your test-development, please contact us: - https://www.jankaritech.com/ - https://www.linkedin.com/company/jankaritech/
individualit
255,618
Open Source Contributions: A catalyst for growth.
Sometimes learning from other people’s experiences builds a certain level of certainty as to what we...
0
2020-02-05T09:53:50
https://dev.to/didicodes/open-source-contributions-a-catalyst-for-growth-38il
opensource, education, beginners
Sometimes learning from other people’s experiences builds a certain level of certainty as to what we are doing right and what we may have done wrong. This is because it is very easy to learn from their experiences and build on them. In this article, I will use my personal experience (my Open Source story) to explain why I believe making open source contributions is a catalyst for growth and I hope it gives you the push to create your first [pull request.](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests) ##How did I get started? Well, my first attempt to make an Open Source contribution was sometime in March 2018. At that time, I just found out about the [Google Summer of Code (GSOC) program](https://summerofcode.withgoogle.com/) from Samson Goddy and I told myself, “oh well, I think it’s time to apply what I learned on Android development during my industrial training late 2017 at Start Innovation Hub to a real life project”. I went to the GSOC site and glanced through all the organizations and their respective projects and the [Open Data Kit](https://opendatakit.org/) organization caught my attention. To participate in the Google Summer of Code program, you have to be a student in the university and also make at least one contribution to the organization. I was already in the university, so all I had to do was to make at least one contribution to an organization before submitting my proposal. When I first joined the Open Data Kit slack channel, I was intimidated and overwhelmed but most open source organizations label issues on their repositories so it was easy to find the right issue to solve. As a beginner at the time, I jejely glanced through the beginner-friendly issues and decided to work on an issue titled [Escape markdown characters using backslashes](https://github.com/opendatakit/collect/pull/2045). 
This task entailed writing code to enable the ODK Collect app to support escaping markdown characters using a backslash, and this could be achieved using regular expressions (Regex).

##My First Pull Request

Honestly, I never knew about regular expressions before I decided to work on that issue; I didn't even know they existed, which meant I had to do a lot of research. You probably wouldn't believe me if I told you it took over 3 weeks for the pull request to be merged, because the mentor ensured that I understood what I was trying to do, the implications of making those changes and, most importantly, that the code I wrote adhered to the coding standards of the organization.

Working on the issue took me from not understanding anything about regex to getting my first pull request merged by the Open Data Kit open source organization and *mehn*, this was literally the best feeling. The thought that you've contributed to an application that is used by thousands of people across the world and that **someone somewhere** is currently benefiting from that feature you improved is so fulfilling. It was truly an overwhelming but interesting first Pull Request.

After I had successfully made a contribution to the organization, all I had to do was submit a proposal describing the project I wanted to work on and why I felt I was the best candidate for the project; this was another good thing to learn because I had never written a proposal in my life. I eventually created a proposal and submitted it for review. Unfortunately, when the accepted candidates were announced, I wasn't accepted. This wasn't a sad moment for me because I suspected that someone with more experience would be selected for the project, but I was super grateful that I learned a lot about writing proposals, contributing to open source, writing a good commit message and raising a good pull request. So I saw it as a win-win situation.
##My second Open Source project, well I thought it was…

After I graduated from school in August 2018, I decided to give Open Source contributions another try by applying for the Outreachy internship to an organization called [Wikimedia](https://www.wikimedia.org/), and my role there was to improve the MediaWiki Action API and also write sample code in Python, which was a great opportunity to learn not just as a developer but as a technical writer. Making contributions to the Mediawiki organization was not as difficult as my first pull request because I had already submitted a pull request before. If you ask me, I think I was *killing it* during this application phase.

A few days before the deadline for submitting my proposal, I had to travel to Taraba state via Enugu for [NYSC](https://www.nysc.gov.ng/); however, I missed my bus that morning because I was trying to fix an issue with the pull request I submitted. When this happened, I told myself that I was definitely going to be accepted, after all, I missed my bus because of it. But I wasn't selected for the internship. This time around, I was deeply sad because I thought I did better than other applicants. The rejection broke me to the point where I said I wasn't going to make open source contributions again. Well, thinking about it now, I guess I should have started with unpaid Open Source contributions first to build myself up instead of jumping directly into applying for paid Open Source internship opportunities.

##My failed applications start to pay off

In January 2019, [Outreachy](https://www.outreachy.org/) announced that they were accepting proposals for the internship. Lo and behold, the same project I had applied for in the previous round in 2018 was among the projects and I felt I had a better chance of getting accepted since I had worked on this project before. As usual, I made contributions to the organization and submitted a proposal during the application phase.
At this time, I was also looking for a company to serve during my NYSC in Lagos, so I applied to a couple of companies and went to different interviews. One interesting thing I would love to point out in this article is my interview with a company called Interswitch. During the second phase of my interview with Interswitch, I met with the CIO and during our interaction, I spoke about my experience with making open source contributions and how I was currently writing sample code and improving API documentation.

The CIO asked me about the last time I made a contribution to the organization and luckily for me, I had actually made a contribution to Mediawiki that morning before I went for the interview, **so with a smile on my face I told him "today"**. He went further to ask which API I worked on, and as I explained the workings of the API to him, I noticed he was impressed. It turned out that Interswitch was looking for someone with that skill at that time, so there was a match. I got hired primarily because of my experience with writing sample code, improving the API docs for MediaWiki and, of course, the ability to code.

Sadly, when the results for accepted interns were announced by Outreachy, I wasn't selected. It was a bittersweet experience because I had gained a lot of experience from making those contributions and also got accepted as a Software Engineer at an amazing company for my NYSC.

##Finally the big W - Google Season of Docs

I refused to be fazed by the disappointments I had with my previous applications, so I decided to give it a shot again, this time applying for Google's Season of Docs with the [VideoLAN organization](https://www.videolan.org/). The VideoLAN organization owns the [VLC media player](https://www.videolan.org/vlc/) which has been downloaded over 3 billion times.
When you think of the fact that there are over 7 billion people in the world, that means almost half of the people in the entire world must have downloaded VLC onto either their phones or desktops. Blown!

**Again**, I researched the project and what it entailed, wrote a proposal and submitted it to Google and guess what? My proposal was accepted. I think I literally almost threw my laptop away when I got the email from Google. It was a surreal moment for me. I have literally been using the VLC media player since I was in secondary school and the fact that I was going to work for this organization was mind blowing.

##TL;DR

Contributing to Open Source organizations opens you up to a lot of opportunities and connects you with people doing amazing things all over the world. Who would have thought that I would get an opportunity to work as a technical writer for an organization whose product I have been using since secondary school and get paid $3,000 to do it, or get to have meetings and conversations with the president of the VideoLAN organization, or get to do my NYSC at Interswitch, or even get to speak at events about the beauty of Open Source?

To sum it up, I successfully completed the [Season of Docs program](https://developers.google.com/season-of-docs/docs/participants) by Google in November and also got a full-time job in Developer Relations at Interswitch in December, and without any doubt in my heart, I can boldly say that **OPEN SOURCE CONTRIBUTIONS** strongly contributed to my growth as a Software Engineer and Technical Writer.
Yes, it is a **catalyst for growth**, and you absolutely do not have to jump right into applying for paid remote open source internships like I did if you are not ready for that. You can start by submitting an issue, fixing an already existing issue, correcting a typo, making a pull request to an organization, creating an Open Source project, joining the [Open Source Community in Africa](https://oscafrica.zulipchat.com/), attending the [Open Source Festival](https://festival.oscafrica.org/) and so much more. Trust me, the things you will learn in a short time will amaze you.

This article was inspired by my talk at the Hacktoberfest hackathon in Kampala last year and my colleague **Damola Adekoya**, who told me it would be great to write an article about my open source journey.

Happy to answer any and all questions regarding Open Source contributions; [feel free to send me a DM on Twitter](https://twitter.com/Didicodes).
didicodes
255,624
What is the 'new' keyword in JavaScript?
answer re: What is the 'new' keyword...
0
2020-02-05T10:17:14
https://dev.to/geschwatz/what-is-the-new-keyword-in-javascript-8jn
{% stackoverflow 3658673 %}
geschwatz
255,633
Listen to Compiler!
Many programmers, especially juniors (or while yet studying) tend to ignore compiler warnings :) I'd...
0
2020-02-05T10:42:32
https://dev.to/rodiongork/listen-to-compiler-3529
java, cpp, errors, types
Many programmers, especially juniors (or those still studying), tend to ignore compiler warnings :) I'd like to share two examples - the first is a "real-life" one from one of my past jobs. The other is recent, asked at the forum on my hobby website.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/datygchojkbunzv23c4g.png)

---

## #1 Losing daily income reports :)

Some years ago I was working on a project which used POS terminals to collect payments for bus rides from contactless cards. One of my colleagues was adapting a new version of the terminal to the project and, as it seemed, did this successfully. Then he moved to another project (banking).

Suddenly, when this new terminal was tried in real conditions, a horrible problem was found. It was sending a money report at the end of the day (over a built-in cell-phone-like modem) - but often this report was never seen at the receiving server! And the terminal, after sending it, was deleting the report from internal storage. **Our client was losing money reports!** In the worst case some of the client's employees could **end up in prison for fraud** despite being really innocent!

I spent about a day finding out what was wrong. It was comically simple:

```cpp
// in the terminal API all modem functions
// return a negative value on error, like this

/** returns number of bytes sent or negative error code */
int MODEM_SendData(unsigned char* buffer, int count);

// in the code my colleague did the following

unsigned int result = MODEM_SendData(fileData, fileLength);
if (result < 0) {
    reportError("Sending report failed");
}
```

So he was storing the operation result in a variable which could only hold **non-negative values** - and of course the negative error values were lost. I found that **he had deliberately turned off the compiler warning shouting that `result < 0` will never be true** - when I asked him, he said "Ha-ha, I didn't understand what it was about, I thought it was some silly useless message breaking compilation".
(we used a "treat warnings as errors" policy)

_So this was about careless treatment of types; regretfully that is easy in C++, while languages like Java or Go maniacally try to prevent such errors._

## #2 Java Generics and Raw types

The second example is in Java and it came from [this forum thread](https://www.codeabbey.com/index/forum_topic/5b40ca193117555df3d24d976658fa26) (look somewhere below for the post by "Radovan Markus"). A colleague complains:

_I recieved this kind of error when I clicked compile with "java" "Note: PrimeRanges.java uses unchecked or unsafe operations.Note: Recompile with -Xlint:unchecked for details."_

The code contains things like this:

```java
import java.util.ArrayList;
import javafx.util.Pair;

// ...
ArrayList<Pair<Integer,Integer>> primes = new ArrayList<>();
// ...
inputNumbers.add(new Pair(2, 1));
```

Here the situation is more subtle. The details (and line numbers) can be seen if the compiler is executed with the `-Xlint:unchecked` flag, as the message suggests.

The matter is that `new Pair(2, 1)` creates a `Pair` object of a "raw type", with the types of its components erased. This is allowed for compatibility with older code from when Java had no type parameters (before 1.5). To correct this we should indicate that we want a Pair with specific types:

```java
new Pair<Integer, Integer>(2, 1);
// or, since Java 1.7, indicate it is typed, but implicitly:
new Pair<>(2, 1);
```

In some of the later lines such a correction will lead to an error:

```java
long number;
int index;
// ...
inputNumbers.add(new Pair<>(number, index));
```

That is because the typed `Pair<Integer, Integer>` doesn't want to hold a `long`, unless it is explicitly converted:

```java
inputNumbers.add(new Pair<>((int)number, index));
```

**This may seem capricious** because the code works anyway. But it works only by luck (as the operations used won't break on finding a `Long` instead of an `Integer` inside the pair). However, with the types erased we could put a completely different type inside, e.g.
`String`:

```java
inputNumbers.add(new Pair("2", 1));
```

This will compile nicely (just with a warning like the one above) but will break at runtime if a value from this pair is used in some calculation (it won't break if it is only used for printing).

**In enterprise projects such errors may remain unnoticed for a long time** if the flawed code is not covered by tests, or if the error condition is rarely met.

**That is why strictly typed languages are sometimes preferred for enterprise projects - for the sake of reliability** - the compiler can find and prevent some kinds of mistakes, which are inevitable when many people work on the same large codebase over months and years.

_Sorry for that many letters and thanks for reading that far :)_
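As a footnote, the raw-type pitfall above can be reproduced in a self-contained way. The `Pair` class below is a minimal stand-in for `javafx.util.Pair` (so the snippet runs without JavaFX); the rest mirrors the forum code:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for javafx.util.Pair, defined here so the example is self-contained.
class Pair<K, V> {
    private final K key;
    private final V value;
    Pair(K key, V value) { this.key = key; this.value = value; }
    K getKey() { return key; }
}

public class RawTypeDemo {
    @SuppressWarnings({"unchecked", "rawtypes"})
    public static String run() {
        List<Pair<Integer, Integer>> pairs = new ArrayList<>();
        pairs.add(new Pair("2", 1)); // raw type: compiles with only a warning

        try {
            int n = pairs.get(0).getKey(); // implicit checkcast to Integer fails here
            return "got " + n;
        } catch (ClassCastException e) {
            return "ClassCastException"; // the String sneaked in and blew up at runtime
        }
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints "ClassCastException"
    }
}
```

The compile-time warning is the only hint you get; the failure surfaces much later, at the first place a typed value is actually read.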
rodiongork
255,643
WP Snippet #004 Custom option pages with acf.
A tutorial on how to add custom admin option pages for your Wordpress theme with the Advanced Custom Fields plugin and Php
0
2020-02-05T18:30:56
https://since1979.dev/snippet-002-adding-option-pages-with-acf/
wordpress, webdev, php
---
title: WP Snippet #004 Custom option pages with acf.
published: true
description: A tutorial on how to add custom admin option pages for your Wordpress theme with the Advanced Custom Fields plugin and Php
canonical_url: https://since1979.dev/snippet-002-adding-option-pages-with-acf/
cover_image: https://since1979.dev/wp-content/uploads/2020/01/snippet-004-custom-option-pages-with-acf.jpg
tags: wordpress, webdev, php
---

[Originally posted on my website on February 5th 2020](https://since1979.dev/snippet-002-adding-option-pages-with-acf/)

How to add custom option pages with Advanced Custom Fields.
-----------------------------------------------------------

Of course WordPress offers a built-in API to create custom admin/option pages using *[add_options_page()](https://developer.wordpress.org/reference/functions/add_options_page/)* and related functions. But to be honest, this can be quite tedious. Fortunately for us, we can use the amazing [Acf plugin](https://www.advancedcustomfields.com/) to make this task a lot easier.

In the snippet below we register a custom option page called "Theme options" with two sub-pages called "Theme logos" and "Seo options". The menu items for these pages will show just below the "Appearance" item.

{% gist https://gist.github.com/vanaf1979/74681488ba155dd7a9313324bbece146 %}

On the last line we hook into the *[acf/init](https://www.advancedcustomfields.com/resources/acf-init/)* hook and register a callback function called *add_acf_menu_pages*.

Inside the *add_acf_menu_pages* function we first use the *[acf_add_options_page](https://www.advancedcustomfields.com/resources/acf_add_options_page/)* function to register the "Theme options" page. To do so we pass it an array containing the following parameters:

- **page_title:** The title that will be shown at the top of the page.
- **menu_title:** The title that will be shown in the admin menu.
- **menu_slug:** The name/slug to be used within the page's url.
- **capability:** The [capability](https://wordpress.org/support/article/roles-and-capabilities/) a user must have for the menu item to show.
- **position:** The position inside the menu. Here we use 61.1, which is just below the Appearance menu item.
- **redirect:** Whether this main page should redirect to its first child page.
- **icon_url:** The icon to show in the menu. Should be a [dashicon](https://developer.wordpress.org/resource/dashicons/#admin-site) class.
- **update_button:** The text to show on the submit button.
- **updated_message:** The text to show at the top of the page when the options are saved.

Next we add two sub-pages using the *[acf_add_options_sub_page](https://www.advancedcustomfields.com/resources/acf_add_options_sub_page/)* function. This function also takes an array of key/value pairs and accepts the same parameters as the *acf_add_options_page* function. In this example we just set the *page_title* and *menu_title*, but we also set the *parent_slug* to the same value as the *menu_slug* of the "Theme options" page, making these pages sub-pages of that page.

**Note:** Because we set the redirect parameter of the Theme options page to true, when we click its menu item in the admin we will be redirected to its first child page. In this case we will be redirected to the "Theme logos" page.

**Tip:** You can also register just a sub-page and set the *parent_slug* to an existing/core menu page slug to add the sub-page to that menu group. For instance, if you set the *parent_slug* to *"themes.php"* the sub-page will show under the "Appearance" menu group.

You can now create a new field group and set its location to any of the registered option pages, as shown in the image below.

![Set an acf field group to a option page location](https://since1979.dev/wp-content/uploads/2020/02/acf-fields-to-options-page-1024x181.png)

#### Follow

Found this post helpful?
Follow me on twitter [@Vanaf1979](https://twitter.com/Vanaf1979) or here on Dev.to [@Vanaf1979](https://dev.to/vanaf1979) to be notified about new articles, and other WordPress development related resources. **Thanks for reading**
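As an appendix for readers who can't open the gist: the registration described above comes down to roughly the sketch below. It is a reconstruction from the parameters listed in this post - the slug, capability, and icon values are my own assumptions, and the linked gist holds the author's exact code:

```php
<?php
// Sketch: register the option pages on ACF's init hook (assumes ACF Pro is active).
add_action('acf/init', 'add_acf_menu_pages');

function add_acf_menu_pages() {
    // Parent "Theme options" page, placed just below Appearance (position 61.1).
    acf_add_options_page(array(
        'page_title'      => 'Theme options',
        'menu_title'      => 'Theme options',
        'menu_slug'       => 'theme-options',           // assumed slug
        'capability'      => 'edit_posts',              // assumed capability
        'position'        => '61.1',
        'redirect'        => true,                      // jump to the first child page
        'icon_url'        => 'dashicons-admin-customizer', // assumed dashicon
        'update_button'   => 'Save options',
        'updated_message' => 'Options saved',
    ));

    // Sub-pages attached to the parent via parent_slug.
    acf_add_options_sub_page(array(
        'page_title'  => 'Theme logos',
        'menu_title'  => 'Theme logos',
        'parent_slug' => 'theme-options',
    ));

    acf_add_options_sub_page(array(
        'page_title'  => 'Seo options',
        'menu_title'  => 'Seo options',
        'parent_slug' => 'theme-options',
    ));
}
```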
vanaf1979
255,667
Is your React app RTL language ready?
This step-by-step guide will help you to implement RTL in react application.
0
2020-02-07T22:21:36
https://dev.to/redraushan/is-your-react-app-rtl-language-ready-1009
rtl, react, web, arabic
---
title: Is your React app RTL language ready?
published: true
description: This step-by-step guide will help you to implement RTL in react application.
tags: RTL, react, web, arabic
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/ym6sb3rz18m92kybq3r9.jpg
---

[1. What is RTL?](#chapter-1)
[2. Why your application should have support for RTL?](#chapter-2)
[3. How to make your React apps RTL ready?](#chapter-3)
[4. Demo source code with all the steps done.](#chapter-4)
[5. What's next?](#chapter-5)

<a name="chapter-1"></a>
###What is RTL?

In a right-to-left (commonly abbreviated RTL) script, the flow of writing starts from the right side of the page and continues to the left. For example, the [Arabic script](https://en.wikipedia.org/wiki/Arabic_script) is the most widespread RTL writing system in modern times.

![Screenshot image of a paragraph written in Arabic](https://dev-to-uploads.s3.amazonaws.com/i/j76vl6xy47ds24iuj716.png)

<a name="chapter-2"></a>
###Why should your app have support for RTL?

In simple words, if you want your application to reach a wider audience, supporting RTL is one of the features your application must have - especially if your application is served in a region where the writing format is RTL.

Below are screenshots of [tajawal](https://www.tajawal.ae/en) - UAE's online travel platform for flights and hotels - in both RTL (Arabic) and LTR (English). You can see the differences: it is just like a mirror, where everything gets flipped.

![screenshot image of tajawal.com in Arabic and English](https://dev-to-uploads.s3.amazonaws.com/i/8skaekht2187er67mo3k.png)
*Image source: https://www.tajawal.ae/*

<a name="chapter-3"></a>
###How to make your React apps RTL ready?

The steps below are for apps you have created with [Create React App](https://create-react-app.dev/).
Also keep in mind that this process will require you to eject your application and add a lightweight, development-only dependency: [webpack-rtl-plugin](https://www.npmjs.com/package/webpack-rtl-plugin).

The good thing about customizing your build process with [webpack-rtl-plugin](https://www.npmjs.com/package/webpack-rtl-plugin) is that it generates a separate CSS file on the fly - both as soon as you make any CSS change while running the application in dev mode, and while creating a deployment build of the application. Instead of authoring two sets of CSS files, one for each language direction, you can now author just the LTR version and this plugin will automatically create the RTL counterpart for you!

```css
.example {
  display:inline-block;
  padding:5px 10px 15px 20px;
  margin:5px 10px 15px 20px;
  border-width:1px 2px 3px 4px;
  border-style:dotted dashed double solid;
  border-color:red green blue black;
  box-shadow: -1em 0 0.4em gray, 3px 3px 30px black;
}
```

Will be transformed into:

```css
.example {
  display:inline-block;
  padding:5px 20px 15px 10px;
  margin:5px 20px 15px 10px;
  border-width:1px 4px 3px 2px;
  border-style:dotted solid double dashed;
  border-color:red black blue green;
  box-shadow: 1em 0 0.4em gray, -3px 3px 30px black;
}
```

In the following screenshot you can see that the build has generated a file `main.9e32ea2d.chunk.rtl.css` containing all the CSS that needs to be applied for RTL languages.

![screenshot of generated rtl.css file](https://dev-to-uploads.s3.amazonaws.com/i/rs91yvwg5lbe7rfa8jiy.png)

Ejecting lets you customize the build process, but from that point on you have to maintain the configuration and scripts yourself. You can learn more about [ejecting your React application](https://create-react-app.dev/docs/available-scripts/#npm-run-eject) in the CRA docs.

##Let's do it!

1. Create a new React app with [Create React App](https://create-react-app.dev/) if you are starting from scratch, or skip to the second step.
```
npx create-react-app react-rtl
```

2. `cd` into your newly created app directory, or into your own [CRA](https://create-react-app.dev/) app if you already have one.

```
cd react-rtl
```

3. Now it's time to [eject your app](https://create-react-app.dev/docs/available-scripts/#npm-run-eject).

> Note: this is a one-way operation. Once you eject, you can’t go back!

```
npm run eject
```

4. Add webpack-rtl-plugin and @babel/plugin-transform-react-jsx-source as development-only dependencies.

```
npm install webpack-rtl-plugin @babel/plugin-transform-react-jsx-source --save-dev
```

5. Go to `config/webpack.config.js` and make the following configuration changes:

```javascript
// First you need to import the plugin; make sure the plugin is already installed.
const WebpackRTLPlugin = require('webpack-rtl-plugin')

// ...
module: { ... }
plugins: [
  // ...,
  // Put this inside the plugins array to use the plugin
  new WebpackRTLPlugin({ diffOnly: true, minify: true })
].filter(Boolean),
// ...
```

At this stage, if you run `yarn build` and look in the `build/static/css` folder, you should see an additional `.rtl.css` file that contains your RTL styles.

Now we need to tell webpack to use `MiniCssExtractPlugin.loader` for development as well, so it will serve styles through link tags instead of inline styles:

```javascript
// common function to get style loaders
const getStyleLoaders = (cssOptions, preProcessor) => {
  const loaders = [
    // isEnvDevelopment && require.resolve('style-loader'),
    // comment-out the line above and add the one below
    // to enable the MiniCssExtractPlugin loader for dev mode.
    isEnvDevelopment && {
      loader: MiniCssExtractPlugin.loader,
      options: { publicPath: '/', reloadAll: true }
    },
```

Also you need to adjust the `MiniCssExtractPlugin` config in `config/webpack.config.js`:

```javascript
module: { ... }
plugins: [
  // ...,
  // isEnvProduction && comment-out this line so that MiniCssExtractPlugin
  // could be enabled for dev mode.
  new MiniCssExtractPlugin({
    // Options similar to the same options in webpackOptions.output
    // both options are optional
    filename: 'static/css/[name].[contenthash:8].css',
    chunkFilename: 'static/css/[name].[contenthash:8].chunk.css',
  }),
  // ...
].filter(Boolean),
```

That's it - our build process is now customized so that an RTL-styled CSS file is automatically generated as soon as we make any CSS changes.

To switch the layout to RTL, we just need to programmatically include the `.rtl.css` file inside the `body` of the HTML so that it can override the styles of our main CSS file. Below you can see how it should look once included within the `body` of the HTML.

![screenshot of React app in element inspect mode in chrome browser](https://dev-to-uploads.s3.amazonaws.com/i/g3eklablo5h7j5h4of2a.png)

The following is a screenshot of the demo [Create React App](https://create-react-app.dev/) where you can see how the layout flips in RTL mode.

<a name="chapter-4"></a>
###Do you think you are missing some steps?

You can follow the source code of the demo application, which I have made available with all the steps - it should help you.

{% github redraushan/reactjs-rtl-support no-readme %}

###Screenshot of the running demo application

![screenshot of the demo react app configured with RTL](https://dev-to-uploads.s3.amazonaws.com/i/o576crv2lhu4w54xxzen.gif)

If you find the article informative, please don't forget to follow 🙈​

Happy coding 😍​

<a name="chapter-5"></a>
###What's next?

If you talk optimization and walk optimization, memoization is one of the basic concepts you must add to your skill set. You will also learn how you can easily memoize your React components.

{% post https://dev.to/redraushan/memoizing-react-components-2km7 %}

Learn some of the best practices in web accessibility that will help your web content or web application reach a broader audience.
{% post https://dev.to/redraushan/accessibility-mantras-for-web-developers-2m76 %} Learn how code splitting is a thing that you have been missing. {% post https://dev.to/redraushan/route-based-code-splitting-in-reactjs-28ah %} Find out how Optional chaining and Nullish coalescing can make your code look more clean and readable. {% post https://dev.to/redraushan/javascript-optional-chaining-nullish-coalescing-48o5 %}
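As a closing sketch of the runtime side: once the build emits the extra `.rtl.css` files, switching direction comes down to deriving the RTL stylesheet path from the LTR one and appending it to `<body>` so it wins the cascade. The helper names below are hypothetical - they are not part of the demo repo:

```javascript
// Sketch: derive the RTL stylesheet path from the LTR one and append it to <body>
// so it overrides the main stylesheet. Helper names are my own, not the demo's.
function toRtlHref(href) {
  return href.replace(/\.css$/, '.rtl.css');
}

function enableRtlStyles() {
  document.documentElement.dir = 'rtl';
  document.querySelectorAll('link[rel="stylesheet"]').forEach((link) => {
    const rtlLink = document.createElement('link');
    rtlLink.rel = 'stylesheet';
    rtlLink.href = toRtlHref(link.getAttribute('href'));
    document.body.appendChild(rtlLink); // body placement loads it after the LTR rules
  });
}
```

Calling `enableRtlStyles()` (for example when the user picks Arabic) reproduces the `<body>`-level include shown in the inspector screenshot above.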
redraushan
255,683
Debugging System.OutOfMemoryException using .NET tools
The complete guide to debugging System.OutOfMemoryException. In this post, I will show you how to use Perfmon and a profiler to track down slow methods.
0
2020-02-05T11:57:47
https://blog.elmah.io/debugging-system-outofmemoryexception-using-net-tools/
csharp, dotnet, performance, debugging
---
title: Debugging System.OutOfMemoryException using .NET tools
published: true
description: The complete guide to debugging System.OutOfMemoryException. In this post, I will show you how to use Perfmon and a profiler to track down slow methods.
tags: csharp, dotnet, performance, debugging
cover_image: https://blog.elmah.io/content/images/2019/02/debugging-system-outofmemoryexception-using-net-tools-1.png
canonical_url: https://blog.elmah.io/debugging-system-outofmemoryexception-using-net-tools/
---

Welcome to the second part in the series about debugging common .NET exceptions. The series is my attempt to demystify common exceptions as well as to provide actual help fixing each of them. In this post, I take a look at one of the trickier exceptions to fix: `System.OutOfMemoryException`. As the name suggests, the exception is thrown when a .NET application runs out of memory.

There are a lot of blog posts out there trying to explain why this exception occurs, but most of them are simply rewrites of the documentation on MSDN: <a href="https://docs.microsoft.com/en-us/dotnet/api/system.outofmemoryexception" target="_blank" rel="noopener noreferrer">System.OutOfMemoryException Class</a>. In the MSDN article, two different causes of the `OutOfMemoryException` are presented:

1. Attempting to expand a StringBuilder object beyond the length defined by its StringBuilder.MaxCapacity property. This type of error typically has this message attached: "Insufficient memory to continue the execution of the program."
2. The common language runtime (CLR) cannot allocate enough contiguous memory.

In my past 13 years as a .NET developer, I haven't experienced the first problem, which is why I won't spend too much time on it. In short, doing something like this will cause a `System.OutOfMemoryException`:

```csharp
StringBuilder sb = new StringBuilder(1, 1);
sb.Insert(0, "x", 2);
```

Why?
Well, we define a new `StringBuilder` with a max capacity of one character and then try to insert two characters. With that out of the way, let's talk about why you are probably experiencing the exception: because the CLR cannot allocate the memory that your program is requesting. To translate this into something that your Mom would understand, your application is using more resources than available. .NET programs often use a lot of memory. The memory management in .NET is based on garbage collection, which means that you don't need to tell the framework when to clean up. When .NET detects that an object is no longer needed, it is marked for deletion and deleted next time the garbage collector is running. This also means that an `OutOfMemoryException` doesn't always equal a problem. 32-bit processes have 2 GB of virtual memory available, and 64-bit processes have up to 8 TB. Always make sure to compile your app to 64-bit if running on a 64-bit OS (it probably does that already). If you're interested in more details about this subject, I recommend this article from Eric Lippert, a former Microsoft employee working on the C# compiler: <a href="https://blogs.msdn.microsoft.com/ericlippert/2009/06/08/out-of-memory-does-not-refer-to-physical-memory/" target="_blank" rel="noopener noreferrer">“Out Of Memory” Does Not Refer to Physical Memory</a>. It's important to distinguish between heavy memory usage and a memory leak. The first scenario can be acceptable, while the second always requires debugging. To start debugging the `OutOfMemoryException`, I recommend you to look at your application either through the Task Manager or using `perfmon.msc`. Both tools can track the current memory consumption, but to get a better overview over time, perfmon is the best. When launched, right-click the graph area and click *Add Counters...* Expand the *.NET CLR Memory* node and click *# Total committed Bytes*. 
Finally, select the process you want to monitor in the *Instances of selected object* list and click the <kbd>OK</kbd> button.

For the rest of this post, I will use and modify a sample program that adds strings to a list:

```csharp
class Program
{
    static void Main(string[] args)
    {
        try
        {
            var list = new List<string>();
            int counter = 0;
            while (true)
            {
                list.Add(Guid.NewGuid().ToString());
                counter++;
                if (counter % 10000000 == 0)
                {
                    list.Clear();
                }
            }
        }
        catch (OutOfMemoryException e)
        {
            Environment.FailFast($"Out of Memory: {e.Message}");
        }
    }
}
```

In its current state, the program keeps adding strings to a list and clears the list every 10,000,000 iterations. When looking at the current memory usage in Perfmon, you'll see the following picture:

![](https://blog.elmah.io/content/images/2019/02/garbage-collection-at-its-finest.png)

Garbage collection at its finest. Here, I've removed the call to `list.Clear()`:

```csharp
class Program
{
    static void Main(string[] args)
    {
        try
        {
            var list = new List<string>();
            while (true)
            {
                list.Add(Guid.NewGuid().ToString());
            }
        }
        catch (OutOfMemoryException e)
        {
            Environment.FailFast($"Out of Memory: {e.Message}");
        }
    }
}
```

We now get a completely different picture:

![](https://blog.elmah.io/content/images/2019/02/perfmon.png)

The program keeps allocating memory until a `System.OutOfMemoryException` is thrown. The example illustrates how you can utilize Perfmon to monitor the state of your application. Like the chefs on TV, I cheated and made up an example for this post. In your case, you probably have no clue as to what causes the extensive use of memory. Memory profilers to the rescue! Unlike Task Manager and Perfmon, memory profilers are tools that help you find the root cause of a memory problem or memory leak.
There are a lot of useful tools out there, like <a href="https://www.jetbrains.com/dotmemory/" target="_blank" rel="noopener noreferrer">JetBrains dotMemory</a> and <a href="https://www.red-gate.com/products/dotnet-development/ants-memory-profiler/index" target="_blank" rel="noopener noreferrer">ANTS Memory Profiler</a>. For this post, I'll use <a href="https://memprofiler.com/" target="_blank" rel="noopener noreferrer">.NET Memory Profiler</a>, which I have used heavily in the past. BTW, as an elmah.io customer, you will get a [20% discount on .NET Memory Profiler](https://elmah.io/goodiebag/).

.NET Memory Profiler integrates nicely into Visual Studio, which means profiling your application is available by clicking the new *Profiler > Start Memory Profiler* menu item. Running our sample from before, we see a picture similar to that of Perfmon:

![](https://blog.elmah.io/content/images/2019/02/dotnet-memory-profiler.png)

The picture looks pretty much like before. The process allocates more and more memory (the orange and red lines), and then throws an exception. At the bottom, all objects allocated during the profiling session are shown, ordered by allocations. Looking at the top rows is a good indicator of what is causing a leak. In this simple example, it's obvious that the strings added to the list are the problem.

But most programs are more complex than just adding random strings to a list. This is where the snapshot feature available in .NET Memory Profiler (and other tools as well) shows its benefits. Snapshots are like restore points in Windows: a complete picture of the current memory usage. By clicking the *Collect snapshot* button while the process is running, you get a diff:

![](https://blog.elmah.io/content/images/2019/02/memory-profiler-snapshots.png)

Looking at the *Live Instances* > *New* column, it's clear that someone is creating a lot of strings.
I don't want this to be an ad for .NET Memory Profiler, so check out their documentation for the full picture of how to profile memory in your .NET programs. Also, make sure to check out the alternative products mentioned above. All of them have free trials, so try them out and pick your favorite. I hope that this post has provided you with "a very particular set of skills" (sorry) to help you debug memory issues. Unfortunately, locating memory leaks can be extremely hard and requires some training and experience. Also make sure to read the other posts in this series: [Debugging common .NET exception](https://blog.elmah.io/tag/common-exceptions/). ## Would your users appreciate fewer errors? elmah.io is the easy error logging and uptime monitoring service for .NET. Take back control of your errors with support for all .NET web and logging frameworks. ➡️ [Error Monitoring for .NET Web Applications](https://elmah.io/?utm_source=devto&utm_medium=social&utm_campaign=devtoposts) ⬅️ This article first appeared on the elmah.io blog at https://blog.elmah.io/debugging-system-outofmemoryexception-using-net-tools/
thomasardal
255,748
Python enumerate
In Python, the enumerate() function takes a collection (e.g. a list) and returns it as an enumerate o...
0
2020-02-05T14:19:54
https://dev.to/bluepaperbirds/python-enumerate-57pj
python, beginners
In <a href="https://python.org">Python</a>, the enumerate() function takes a collection (e.g. a <a href="https://pythonbasics.org/list/">list</a>) and returns it as an enumerate object.

Consider a list, like this grocery list:

```python
grocery = [ 'milk', 'butter', 'bread' ]
```

How would you get the index of an element? If you have used C, C++, C# or Java before, you would use a for loop with an index variable and use that index to get the value at each position.

## Enumerate List and Tuples

In Python, you use enumerate() for that. As a parameter you pass the iterable, in this case the grocery list.

```python
>>> grocery = [ 'milk', 'butter', 'bread' ]
>>> for idx, val in enumerate(grocery):
...     print("index %d has value %s" % (idx,val))
...
index 0 has value milk
index 1 has value butter
index 2 has value bread
>>>
```

This works for other data too. You can do it for tuples:

```python
>>> grocery = ( 'milk', 'butter', 'bread' )
>>> for idx, val in enumerate(grocery):
...     print("index %d has value %s" % (idx,val))
...
```

## Enumerate Strings

You can do it for <a href="https://pythonbasics.org/strings/">strings</a> as well. Pass the string you want to enumerate as the parameter of the enumerate function. It then outputs the characters together with their indexes.

```python
>>> for idx, val in enumerate("hello"):
...     print("index %d has value %s" % (idx,val))
...
```

This outputs, as expected:

```python
index 0 has value h
index 1 has value e
index 2 has value l
index 3 has value l
index 4 has value o
>>>
```

**Related links:**

* <a href="https://www.python.org/dev/peps/pep-0279/">Python PEP on enumerate</a>
* <a href="https://gumroad.com/l/dcsp">Python course</a>
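One extra tip: enumerate() also accepts an optional start argument, which is handy when you want numbering to begin at 1 instead of 0:

```python
grocery = [ 'milk', 'butter', 'bread' ]

# enumerate(iterable, start) lets you choose the value of the first index
for num, item in enumerate(grocery, start=1):
    print("item %d is %s" % (num, item))

# item 1 is milk
# item 2 is butter
# item 3 is bread
```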
bluepaperbirds
255,766
Ruby on Rails Testing Resources
When taking the plunge into Ruby on Rails it’s really easy to get carried away with learning all abou...
0
2020-02-05T15:00:28
https://dev.to/rbazinet/ruby-on-rails-testing-resources-2gmc
rails, bdd, minitest, rspec
--- title: Ruby on Rails Testing Resources published: true date: 2020-02-05 14:59:24 UTC tags: Ruby on Rails,bdd,minitest,rspec canonical_url: --- When taking the plunge into Ruby on Rails it’s really easy to get carried away with learning all about the framework. It’s easy to learn the fundamentals and later realize the Rails community is a community of testers. It’s a strange world when you set out to learn about testing, TDD (test-driven development), BDD (behavior-driven development) and other acronyms and phrases relating to testing Ruby on Rails applications. I decided to put together a short list of helpful resources to get started. If you have suggestions that would be useful to be added to this list, please add a comment or email me directly and I’ll update this post. ## Books - [Everyday Rails Testing with RSpec](https://leanpub.com/everydayrailsrspec) – this is a great, hands-on, roll-up your sleeves and get-to-work book. If you want to use RSpec on a daily basis, this book gives great advice on how to use RSpec in your day-to-day routine. It’s kept up-to-date with latest RSpec too. - [Rails 5 Test Prescriptions](https://pragprog.com/book/nrtest3/rails-5-test-prescriptions) – I use this book as a reference I often go to. It’s been updated to from previous versions to now Rails 5 and is a great tool to have on the shelf. - [Effective Testing with RSpec 3](https://pragprog.com/book/rspec3/effective-testing-with-rspec-3) – if you decide you’d rather start without worrying about all the details around Rails you can start with learning RSpec with plain Ruby and help yourself. I’ve been through this one cover-to-cover and it’s a great tutorial. - [The Minitest Cookbook](https://chriskottom.com/minitestcookbook/) – if you decide RSpec isn’t for you, this is probably the ultimate resource for Minitest. Well-written and kept up-to-date. ## Podcasts You can’t really learn testing from a podcast but you can learn how others approach the craft. 
The first is a podcast dedicated to testing Ruby applications. The rest is a list of a few episodes of podcasts that discussed testing. - [The Ruby Testing Podcast](http://www.rubytestingpodcast.com/) - [Ruby Rogues 385: “Ruby/Rails Testing” with Jason Swett](https://devchat.tv/ruby-rogues/rr-385-ruby-rails-testing-with-jason-swett/) - [Ruby Rogues 269 Testing](https://devchat.tv/ruby-rogues/269-rr-testing/) - [Full Stack Radio 46: Joe Ferris – Test Driven Rails](http://www.fullstackradio.com/46) I’ve been listening to The Ruby Testing Podcast and picked up some nice tidbits so far. ## Training I love [Pluralsight](https://www.pluralsight.com/). - [RSpec the Right Way](https://www.pluralsight.com/courses/rspec-the-right-way) - [Test-driven Rails with RSpec, Capybara, and Cucumber](https://www.pluralsight.com/courses/test-driven-rails-rspec-capybara-cucumber) Xavier Shay has long been involved in the Ruby community and well-known for discussions around testing. One of his best blog posts [explains his approach to testing](https://rhnh.net/2012/12/20/how-i-test-rails-applications/). - [Testing Ruby Applications with RSpec](https://www.pluralsight.com/courses/rspec-ruby-application-testing) I’ve taken several courses on [Udemy](https://www.udemy.com/) and they are one of my favorite places for training. The prices are low and there are many courses, so you have to do a bit of work to see which course is right for you but well worth the effort. - [The Complete TDD Course: Master Ruby Development with RSpec](https://www.udemy.com/complete-tdd-course-ruby-rspec/) - [Ruby on Rails 5 – BDD, RSpec and Capybara](https://www.udemy.com/ruby-rails-5-bdd-rspec-capybara/) The post [Ruby on Rails Testing Resources](https://accidentaltechnologist.com/ruby-on-rails/ruby-on-rails-testing-resources/) appeared first on [Accidental Technologist](https://accidentaltechnologist.com).
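If you have not yet seen what a test in this ecosystem looks like, here is roughly the smallest possible Minitest example in plain Ruby - no Rails required - just to show the shape before you dive into the resources above (the `Calculator` class is made up for illustration):

```ruby
require 'minitest/autorun'

# A made-up class under test, just to have something to assert on.
class Calculator
  def add(a, b)
    a + b
  end
end

# Minitest discovers subclasses of Minitest::Test and runs every test_* method.
class CalculatorTest < Minitest::Test
  def test_add_returns_the_sum
    assert_equal 4, Calculator.new.add(2, 2)
  end
end
```

Running the file with `ruby calculator_test.rb` executes the test and prints a pass/fail summary; RSpec expresses the same idea with `describe`/`it` blocks instead.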
rbazinet
255,805
Feedback for a project
I'd like to ask the community for feedback on a little project I made a while ago. It's a small we...
0
2020-02-05T16:13:20
https://dev.to/nikitahl/feedback-for-a-project-36o8
help, feedback, review
I'd like to ask the community for feedback on a little project I made a while ago. It's a small web app to generate a CSS theme: [https://nikitahl.github.io/css-base/](https://nikitahl.github.io/css-base/). Does it make sense? Can someone actually benefit from it or get some sort of value from it, or is it just a useless tool? Please be honest. Thanks.
nikitahl
256,142
Basic concepts of the session in an Alexa Skill
These last few weeks I have been spending time creating a couple of Alexa skills that are helping me...
0
2020-02-19T14:51:52
https://www.kinisoftware.com/alexa-skill-session/
alexa, alexaskills, skillsession, aws
---
title: Basic concepts of the session in an Alexa Skill
published: true
date: 2020-02-05 18:54:06 UTC
tags: alexa,alexaskills,skill session,aws
canonical_url: https://www.kinisoftware.com/alexa-skill-session/
---

These last few weeks I have been spending time creating a couple of Alexa skills, which is helping me dig deeper and learn new things. From what I learn I put together a list of possible posts and prioritize them. Doing this, I realized that so far I had not talked about something essential in an Alexa skill, and something that comes up a lot in certification errors: the `skill session`.

Although there is nothing complicated about it, it is useful to explain it here so I can link other topics to it later: context management, persistence, retrieving user/device information, the contents of a request/response, etc.

## Skill Session

### Request

When a user invokes a skill, a `session` is created that travels with the `request` to the back end. The structure of that `request` deserves a post of its own, but let's look here at the part that corresponds to the `session`:

{% gist https://gist.github.com/kinisoftware/2cc07be64d6a553272889f595db98251.js %}

The information we have in the `session` is:

- A `new` flag that tells us whether the session is being created with this request (value `true`) or was already open (value `false`).
- A `sessionId`, the unique identifier of a user's active session. Its value stays consistent across every subsequent request while the session is open.
- An `application` object that contains information about the skill that created the session. This lets the back end verify that the request comes from the expected skill. If you host the back end in AWS Lambda and connect the function to an Alexa `trigger` for your skill, that check is done automatically.
- A `user` object that holds information about the account using the skill. I will write another post about the use of `user` and another object in the request, `persona`.
For now, as the most important point, this is where we get the `userId`, which uniquely represents the Amazon account for which the skill is enabled. We will look at the usage and other details of this `userId` in other posts.

The session info is sent with every request to the back end while the session is open, but with the `new` flag set to `false`.

### Response

Once the request is processed in the back end, a `response` is sent to Alexa. One of the parameters of that response is the `shouldEndSession` flag, and what happens next depends on its value:

`true` > The session ends, so the skill stops running. From that point on, any request is handled by Alexa. Be careful with closing the session in skills with APL: the skill closes, and if the user still had something to read on screen they won't be able to. This is a common defect that also gets flagged in certification.

`false` > Alexa waits for a reply, keeping the session and the microphone open. Several things can happen from there:

- If the user says something that matches an `intent` of the skill, the corresponding `request` is generated towards the back end. In that request the session will already carry the `new` flag set to `false`.
- If the user says nothing within a certain time, the microphone closes. If the `response` carried a `reprompt`, the microphone reopens for a few more seconds.
- Once the user no longer answers and the microphone closes, the session may stay open for a short period of time. This is the case on devices with a screen, where the user can still interact with the skill because the session stays open for a while with the microphone closed. To do so, they would have to wake Alexa first; that request goes straight to the skill without having to invoke it. On other devices the session would simply end.

`undefined/null` > Here the behavior depends on the device type or on the response.
The [official documentation](https://developer.amazon.com/en-US/docs/alexa/echo-button-skills/keep-session-open.html) describes the variants.

## The session and the certification process

One of the most common problems reported during certification is incorrect handling of the skill session — specifically, the developer's decision to keep the session open or close it in the response to an `Intent`. When you build responses for the user, you should think about whether you expect another interaction after that one. For example, in my [Estrenos de Cine](https://www.amazon.es/Kinisoftware-Estrenos-de-cine/dp/B07MKCLZ62) skill I have both situations:

- The user asks for this week's releases and the API returns some. I tell the user about the movies and, from the back end, indicate that I want the session closed afterwards. I don't expect another interaction once I've given the user what they asked for.
- The user asks for this week's releases but the API returns none. In that case, since I don't have the releases, I ask the user a question: whether they want to hear what's currently in theaters. And, even if I don't suggest it, they could also ask about another date; it's not a linear flow. In this case, in the back-end response, I indicate that the session should stay open.

In this code we can see both cases sending a `shouldEndSession` in the response:

{% gist https://gist.github.com/kinisoftware/eb3ddeeecf1cbcba9862cb2e0d28e785.js %}

If, for example, we had closed the session in the second scenario, it would have been flagged as an error during certification. We're asking the user a question but then closing the session, so the answer would be handled by Alexa, not by the skill. This also tends to happen when handling `Amazon.HelpIntent`, where we usually expect an interaction from the user afterwards.
Likewise, knowing when to leave the session open or not greatly affects the final user experience. We could end up with a skill that becomes tedious by always waiting for replies to interactions that could have been completed in one go.

For skills with APL, as I said in the first part of the post, be careful about closing the session from the back end. In my Estrenos de Cine skill I was closing the session with the response, which meant a user couldn't read the list of releases on screen. The skill showed the list, but closed as soon as the audio output finished.

### SessionEndedRequest

There's one thing I hadn't seen documented during my first certification processes and that I discovered when it was flagged as a failure: I wasn't handling a `SessionEndedRequest` in my back end. A `SessionEndedRequest` is a request that Alexa sends to a skill's back end to indicate that a session has ended. This request occurs if:

- The user says utterances like "salir" or "quitar" (exit/quit).
- The user doesn't respond, or says something that doesn't match any `Intent` defined in the model.
- An error occurs.

As a curious detail, the response to this request must be empty:

{% gist https://gist.github.com/kinisoftware/bfacf122ee738065fade26bc085c3b66.js %}

There are other ways to handle situations that send this request, such as the `FallbackIntent` or registering an `ErrorHandler`, and we'll see them in another post.

_Note_: if it's the skill, as indicated by the back end, that decides to end the session with the `shouldEndSession` flag set to `true`, there will be no `SessionEndedRequest`.

* * *

One thing I don't cover here, because it doesn't appear in the example session above, is session attributes. That will be my next post, and the original reason for writing this one :)
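To make the `shouldEndSession` behaviour described above concrete, here is a small illustrative sketch in Python. It is not tied to any Alexa SDK — the helper name and structure are my own — but the shape of the `response` object, the flag and the optional `reprompt` follow the Alexa response JSON format:

```python
def build_response(speech_text, should_end_session, reprompt_text=None):
    """Build a minimal Alexa-style response body.

    should_end_session=True closes the skill session; False keeps the
    microphone open waiting for the user, optionally re-opening it once
    more if a reprompt is provided.
    """
    response = {
        "outputSpeech": {"type": "PlainText", "text": speech_text},
        "shouldEndSession": should_end_session,
    }
    if reprompt_text is not None:
        response["reprompt"] = {
            "outputSpeech": {"type": "PlainText", "text": reprompt_text}
        }
    return {"version": "1.0", "response": response}


# A question to the user: keep the session open and add a reprompt.
ask = build_response(
    "No releases found. Do you want to hear what's in theaters?",
    should_end_session=False,
    reprompt_text="Shall I read what's currently in theaters?",
)

# A final answer: close the session afterwards.
tell = build_response("These are this week's releases...",
                      should_end_session=True)
```

Real skills built with the ASK SDKs don't usually assemble this JSON by hand, but the meaning of the flag in the resulting response is exactly the same.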
kini
256,146
Intro to Sublime Text
The editor that adapts to your needs
0
2020-02-05T20:44:24
https://medium.com/capua-dev/intro-a-sublime-text-f79da511ce8b
webdev, beginners, tooling, editor
---
title: Intro to Sublime Text
published: true
description: The editor that adapts to your needs
tags: webdev, beginners, tooling, editor
canonical_url: https://medium.com/capua-dev/intro-a-sublime-text-f79da511ce8b
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/a0jvel2m7qqhkc8iq93j.jpeg
---

> *This post was originally published in April 2015.*

As good gladiators, besides being well prepared it's important to get familiar with our weapons. The more comfortable and at home we are in our battle environment, the better our performance will be. One of the most important tools for a programmer is their editor. Personally, I'm one of those who prefer a text editor over an IDE for its simplicity and lightness; for my purposes I don't need more. [Sublime Text](http://www.sublimetext.com/) in its version 3, despite being in beta, is quite stable and works really well. When you open it for the first time it feels almost "bare", and that is one of its advantages, since you can customize practically everything.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/3dmcq5d3a012y7u6l2mb.png) <figcaption>Sublime Text freshly opened.</figcaption>

Some of its strong points are:

- Lightness.
- Customizability.
- Multiple selections and cursors.
- Command palette.
- Moving between files, searching within them, jumping to a given function or line…
- Switching projects instantly, remembering the state of the workspace.
- Access to countless packages created by the community.

---

## To arms!

Below I'm going to show you some of my preferences. One of the first things we should do after installing Sublime is install [Package Control](https://packagecontrol.io/), following the [steps](https://packagecontrol.io/installation) on their website.
To install a package, open the command palette `(⌘+⇧+P)` or `(⌃+⇧+P)`, type *install package* and search for the name of the package you want. On the Package Control website you can see trending, new and popular packages, along with a lot of extra information.

## Packages

### [Emmet](https://packagecontrol.io/packages/Emmet)

Improves the workflow when working with HTML and CSS. Take a look at the example on its [website](https://emmet.io/) and spend some time with its [cheat sheet](https://docs.emmet.io/cheat-sheet/); it's well worth it.

### [Sublime Linter](https://packagecontrol.io/packages/SublimeLinter)

Using a linter is essential for writing better, cleaner code with fewer bugs, following good programming practices. The linters themselves aren't included in the package, so we also have to install the ones we want. The ones I use most are [JSHint](https://packagecontrol.io/packages/SublimeLinter-jshint), [CSSLint](https://packagecontrol.io/packages/SublimeLinter-csslint) and [SCSSLint](https://packagecontrol.io/packages/SublimeLinter-contrib-scss-lint). To get information about a specific error, just place the cursor on the line with the error and you'll see a brief description in the status bar *(at the bottom of the window)*, where you can also see the total number of errors at any time.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/dgz3sylxs81jn4m3ich7.png) <figcaption>SublimeLinter-csslint</figcaption>

### [SideBarEnhancements](https://packagecontrol.io/packages/SideBarEnhancements)

Improves on the operations that Sublime's sidebar offers by default. For example: open with, copy, cut, move, copy paths…

### [DocBlockr](https://packagecontrol.io/packages/DocBlockr)

Helps you comment your code properly, completing as much as possible from the element being commented.
It supports languages such as JavaScript, TypeScript and Objective-C, among others.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/qfog2r1kahsk4hj3gaqz.png)

### [GitGutter](https://packagecontrol.io/packages/GitGutter)

As its name suggests, it shows in the gutter the state of each line — inserted, modified or deleted — by comparing the differences against a specific commit/branch/tag.

### [BracketHighlighter](https://packagecontrol.io/packages/BracketHighlighter)

Highlights the opening and closing of tags, quotes, parentheses, braces, brackets… Very useful, especially when you have to touch some *spaghetti code*. You can show the indicators in the gutter, in the code, or both at once, and apply different styles to them.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/4lcadvycthlswnqhd7zq.png) <figcaption>BracketHighlighter with the default configuration.</figcaption>

### [Project Manager](https://packagecontrol.io/packages/ProjectManager)

Very handy for organizing and managing projects without worrying about where the files are. It improves on the functionality Sublime ships with by default.

### Other packages I use: jQuery, Underscorejs snippets, Handlebars, SCSS, Meteor Snippets, Color Highlighter, ColorPicker, EditorConfig...

What are your favorites?

## Configuration

Sublime Text's configuration is based on **JSON** files; you can see it in *Preferences → Settings — Default* and the keyboard shortcuts in *Preferences → Keybindings — Default*.
To add or override values, edit the files ending in *— User*. Let's look at some examples:

Settings — User:

```jsonc
{
  // Highlight the current line
  "highlight_line": true,
  // Add a blank line at the end of the document
  "ensure_newline_at_eof_on_save": true,
  // Remove trailing whitespace on save
  "trim_trailing_white_space_on_save": true,
  // Change the word separators, removing '-' for CSS
  "word_separators": "./\\()\"':,.;<>~!@#$%^&*|+=[]{}`~?",
  // Allow scrolling past the end of the document
  "scroll_past_end": true
}
```

Keybindings — User:

```jsonc
[
  // Reindent the whole document
  { "keys": ["super+shift+r"], "command": "reindent", "args": { "single_line": false } },
  // Swap the keybindings for "paste" and "paste and indent"
  { "keys": ["super+v"], "command": "paste_and_indent" },
  { "keys": ["super+shift+v"], "command": "paste" },
  // Highlight the current file in the sidebar
  { "keys": ["alt+shift+r"], "command": "reveal_in_side_bar" }
]
```

---

## Syncing

If you use Dropbox or a similar service you can sync all your preferences and save yourself a lot of work getting comfortable every time you switch machines or operating systems. If your Dropbox isn't in the default location, replace *~/Dropbox* with your own path. The following instructions are for *OS X*; for other operating systems you can find them [here](https://packagecontrol.io/docs/syncing#dropbox).

On your first machine:

```shell
$ cd ~/Library/Application\ Support/Sublime\ Text\ 3/Packages/
$ mkdir ~/Dropbox/Sublime
$ mv User ~/Dropbox/Sublime/
$ ln -s ~/Dropbox/Sublime/User
```

On your other machines:

```shell
$ cd ~/Library/Application\ Support/Sublime\ Text\ 3/Packages/
$ rm -r User
$ ln -s ~/Dropbox/Sublime/User
```
marioblas
256,169
A modern front-end workflow for Wordpress
For all intents and purposes, this is a first look at the Wordpress starter theme Sage. I think I did...
0
2020-05-25T07:53:16
https://smth.uk/a-modern-front-end-workflow-for-wordpress/
webdev, wordpress, productivity, tutorial
---
title: A modern front-end workflow for Wordpress
published: true
date: 2020-02-05 00:00:00 UTC
tags: webdev, wordpress, productivity, tutorial
cover_image: https://smth.uk/img/feature-image-wordpress.jpg
canonical_url: https://smth.uk/a-modern-front-end-workflow-for-wordpress/
---

For all intents and purposes, this is a first look at the Wordpress starter theme [Sage](https://roots.io/sage/). I think I did, a long time ago, try an earlier incarnation of this theme, when it was called Roots, but I can't remember much about it (I must be going back the best part of a decade here). [Roots](https://roots.io/) is now a suite of Wordpress development tools, one of which being the starter theme, now known as Sage.

Shortly after trying the original Roots, I for whatever reason switched to [Underscores](https://underscores.me/), which I continued to use as a starter theme for many years. Underscores has served me well, and I've not really had any major gripes with it; but when something recently drew my attention to Sage it made me (re)consider what I / Underscores was potentially missing.

In short, the missing pieces are a bunch of developer experience stuff that we've become used to, especially working with other stacks. Specifically that means a templating engine, and a configured build script with concatenation, minification and live reloading. If these are things you tend to add to your Wordpress workflow, or, like me, you'd quite like to have but aren't sure they're worth the setup time, then read on.

With my interest piqued I needed something Wordpressy to build. I decided to make a little shop; then set about designing a couple of pages, to give myself something to work towards.

![Web page showing mug for sale. It is branded Hate Capitalism](https://smth.uk/img/hc-home.jpg)

If you want to skip to the end result, it can be found at [hatecapitalism.com](https://hatecapitalism.com/). Setting up your project is pretty straightforward.
Recently I've been using [Local by Flywheel](https://localbyflywheel.com/) as my local Wordpress environment, and I continued to do so here. New Sage projects are created with [Composer](https://getcomposer.org/) (which I ran outside of Local).

```
# wp-content/themes/
$ composer create-project roots/sage my-theme-name
```

During the setup you can choose to include one of a selection of CSS frameworks, but I opted for a blank slate. In your new project you run `yarn` to install dependencies, then when you've got things configured, `yarn start` to run the development server.

When running the dev server, you'll want to switch to the BrowserSync address (localhost:3000 by default), not only because this is where the live reloading happens, but because, while it is running, your updated assets do not exist in their build location. To build your assets, so they are available at my-new-site.local, you need to run `yarn build`. When you're ready to go live, `yarn build:production`.

I suggest reading the [docs](https://roots.io/sage/docs/) before getting started. I found them to be pretty good, if maybe a little thin, given how much there is to cover. The [forum](https://discourse.roots.io/) does a good job of bolstering the docs, should you be left wanting.

Once up and running I had a couple of little niggles with dependencies being out of date. I quickly found that I couldn't disable the Webpack-specific BrowserSync (in browser) notifications. There wasn't much recourse for this as the [plugin being used](https://github.com/qwp6t/browsersync-webpack-plugin) is no longer maintained. If this is the only problem that this plugin causes, then it's not too big of a deal (I can hide the notifications with CSS, I guess), but a bit of a concern. Less of a concern was Stylelint flagging stuff that it shouldn't be, like the `system-ui` font-family keyword. Updating Stylelint resolved that problem.
On to the good stuff - as I thought would be the case, having a templating language in place was nice. I've never used a templating language in Wordpress before, frankly because I've never really felt the need enough to warrant looking into it. Sage uses [Laravel's Blade templates](https://laravel.com/docs/5.8/blade) out of the box, so you get templating for free.

The main benefit of using a templating language, such as Blade, is the ability to define, inherit and extend layouts. Many of us are used to working in this way, via various templating languages, in other environments, whether that be Liquid templates in Jekyll or Jinja templates in a Python project. A secondary advantage is less opening and closing of `PHP` tags in your templates, specifically when echoing variables:

```
<h1>Hello <?php echo $name; ?>.</h1>
```

becomes

```
<h1>Hello {{ $name }}.</h1>
```

Blade also gives you "shortcuts" for common PHP control structures; for example you can put if statements and loops right in your templates.

```
@if($products)
  <ul>
    @foreach($products as $i=>$product)
      <li>{{ $product->name }}</li>
    @endforeach
  </ul>
@endif
```

When you need to, breaking into regular PHP is easy; you just wrap it like so:

```
@php
  // php here
@endphp
```

I found myself doing this out of habit - I wrote a little switch statement (to handle the words that come before the price on the home page) before realising that Blade has its own syntax for this. If you're going to put such things in your templates though, I don't think using one syntax over the other will make a huge difference to the cleanliness of those templates.

```
@php
  switch ($i) {
    case 1:
      // First case...
      break;
    case 2:
      // Second case...
      break;
    default:
      // Default case...
      break;
  }
@endphp

@switch($i)
  @case(1)
    First case...
    @break
  @case(2)
    Second case...
    @break
  @default
    Default case...
@endswitch
```

When it came to writing styles I felt at home working with CSS in Sage.
First of all, this is probably the point when you appreciate that Sage has BrowserSync ready to go, out of the box. In addition to that, Sass and PostCSS are wired up in a Webpack configuration. If I were constructing this build process from scratch I would probably forgo Sass in favour of PostCSS plugins, and opt for something less gnarly than Webpack, but I think the choices here are reasonable, and given that they are configured for us, I'm perfectly happy to use them.

The Sage build process really started to shine when it came to adding Javascript. Sage provides DOM-based routing, giving you an easy way to specify which scripts run on what pages. For example, if I wanted the Javascript for the slider on my homepage to only run on the homepage, that can be achieved pretty much just by putting it in a file named `home.js` (this is based on body classes, of which `home` is one). Global Javascript, such as the slide-out navigation for example, lives in `common.js`.

From a developer experience point of view, I found this to be a really nice answer to the question of what to do with page-specific Javascript. From a user experience point of view, the appropriateness of this technique will vary with each project. You will need to consider whether putting all the Javascript in a single file is the best approach.

Ironically, given that this was an exercise in exploring a Wordpress starter theme, I felt like I didn't spend much time engaging with Wordpress while working on this little [project](https://hatecapitalism.com/). That is perhaps the biggest compliment I could give Sage: it puts a layer of modern tooling between you and Wordpress. I still had the odd Wordpress moment, like writing a PHP function in order to change the contents of the `title` tag, but for the most part I felt free to focus on the sort of front-end stuff that I wanted to be focusing on. That was a breath of fresh air.
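The DOM-based routing idea above is easy to picture outside of Javascript too. As a rough, language-agnostic illustration (sketched in Python with made-up names — Sage's actual router is Javascript), the dispatcher simply runs the global handler first, then any handler whose name matches one of the page's body classes:

```python
# Record which handlers fire, standing in for the code that would live
# in common.js, home.js, etc. under Sage's DOM-based routing.
fired = []

routes = {
    "common": lambda: fired.append("common"),  # global behaviour, always runs
    "home": lambda: fired.append("home"),      # homepage-only behaviour
    "single_product": lambda: fired.append("single_product"),
}


def route(body_classes):
    """Run 'common' first, then any handler matching a body class."""
    routes["common"]()
    for cls in body_classes:
        if cls != "common" and cls in routes:
            routes[cls]()


# On the homepage, Wordpress includes "home" among the body classes:
route(["home", "page"])
```

Classes with no matching handler (like `page` here) are simply ignored, which is why dropping a `home.js` file into the theme is all it takes to get homepage-only behaviour.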
smth
256,240
Validations in Spring Boot
Hello! In this article, I'm gonna cover validations in a Spring Boot app. The only requirement for yo...
0
2020-02-06T00:26:01
http://kamerelciyar.com/validations-in-spring-boot/
java, spring, springboot
--- title: Validations in Spring Boot cover_image: "https://dev-to-uploads.s3.amazonaws.com/i/qrdw0iy2qrufcd2c2rfk.png" published: true date: 2019-09-30 19:27:12 UTC tags: java,spring, springboot canonical_url: http://kamerelciyar.com/validations-in-spring-boot/ --- Hello! In this article, I'm gonna cover validations in a Spring Boot app. The only requirement for you to understand this topic is to be able to create controllers in Spring Boot and of course, be comfortable with Java. You can find source code for examples here: [https://github.com/kamer/validations-in-spring-boot](https://github.com/kamer/validations-in-spring-boot) ## Why do we need both client-side and server-side validation? In web applications, we generally use both client-side and server-side validations for form data or any other data that goes to server-side. Why do we bother with both of them? Because we use client-side validation to validate and respond quickly. For instance, we have a field that accepts phone number. So, first of all, we should prevent user to type any character other than numbers. Also, we have a pattern that validates phone number. If we control and reject any incorrect input value on the client-side we eliminate the time for this request to go server-side and get rejected. So we can say that client-side validations are mostly used to give user fast feedback and validate syntactical things. (e.g. pattern, length, characters) But client-side validations can be considered useless since they can be easily manipulated or disabled. Here’s a great representation what it is like to trust any kind of client-side validation. ![Client Side Validation](https://dev-to-uploads.s3.amazonaws.com/i/e092mfd68a20l9p2ywzw.jpg) If we go on with the above image, we should create a validation mechanism that rejects any other number than 911 as input. Here’s where server-side validation comes into play. Briefly, server-side validation is our last chance to reject incorrect inputs properly. 
Also, we validate constraints that need more logical operations on the server-side. For instance, rejecting creation of an employee if the manager of the department is not assigned yet. Enough for introduction, let's get our hands dirty.

### Javax Validation Constraints

_javax.validation_ is the top-level package for the Bean Validation API and it has some predefined annotation-based constraints in the _constraints_ package for us to use. Here are some examples.

- If we want to check whether a field is null, we use `@NotNull`.

```java
@NotNull(message = "Name cannot be null.")
private String name;
```

In the above example the _name_ field cannot be null, but it can be empty. If we use `@NotBlank`, _name_ cannot be null and must contain at least one non-whitespace character; if we use the `@NotEmpty` annotation, _name_ cannot be null and `name.length() > 0`, so it can accept a String that contains only whitespace characters.

- If we want to limit a number input we use the `@Max` and `@Min` annotations.

```java
@Min(value = 3, message = "Experience must be at least 3 years.")
private Integer experienceInYears;
```

- `@Positive`, `@Negative`, `@PositiveOrZero` and `@NegativeOrZero` annotations do what their names suggest.

```java
@PositiveOrZero(message = "You cannot have negative numbers of children.")
private Integer numberOfChildren;
```

- The `@Size` annotation gives minimum and maximum values for the size of anything. (_CharSequence_, _Collection_, _Map_, _Array_)

```java
@Size(min = 2, max = 35, message = "Surname must be 2-35 characters long.")
private String surname;
```

- `@Past`, `@Future`, `@PastOrPresent`, `@FutureOrPresent` annotations validate date types according to their names.

```java
@Past(message = "Date input is invalid for a birth date.")
private LocalDate dateOfBirth;
```

- You can validate any regex pattern with the `@Pattern` annotation.

```java
@Pattern(regexp = "^4[0-9]{12}(?:[0-9]{3})?$", message = "Only Visa cards are accepted.")
private String cardNumber;
```

- No need to explain the `@Email` annotation.
```java
@Email(message = "Enter a valid email address.")
private String email;
```

I explained the most important ones above. If you want to see the others you can find them [here.](https://docs.oracle.com/javaee/7/api/javax/validation/package-summary.html) I created a dummy example to try these constraints and show validation messages on form inputs with Thymeleaf. We should annotate the controller input with `@Valid` to activate these constraints.

```java
@PostMapping("/javax-constraints")
String postJavaxConstraints(@Valid JavaxValidationConstraints javaxValidationConstraints, BindingResult bindingResult) {
    ...
}
```

Then show error messages with Thymeleaf.

![Error Messages](https://dev-to-uploads.s3.amazonaws.com/i/1d6sngjwfq9694c8jc4j.png)

### Creating Your Own Validation Annotations

If you followed the previous chapter closely, you've seen that you can achieve almost anything with Javax Validation Constraints. But sometimes defining your own annotations can be a much better option. Here's one example. We want to validate the _creditCard_ field with `@Pattern`.

```java
@NotEmpty(message = "You must enter a credit card number.")
@Pattern(regexp = "^(?:4[0-9]{12}(?:[0-9]{3})?|[25][1-7]" +
        "[0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}" +
        "|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])" +
        "[0-9]{11}|(?:2131|1800|35\\d{3})\\d{11})$",
        message = "Invalid card number.")
private String creditCard;
```

Do you see any problem here? It seems too ugly considering we will have at least 5 more fields with at least 2 validation annotations each, and so on. In this type of situation we can choose to define our own annotation. First of all, create an annotation as below.

```java
@Documented
@Constraint(validatedBy = CreditCardValidator.class)
@Target({ ElementType.FIELD })
@Retention(RetentionPolicy.RUNTIME)
public @interface CreditCard {

    String message() default "Invalid card number";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};
}
```

We want our annotation to be retained at runtime, to be used with field types and to be validated by the CreditCardValidator class. So here's our validator class.

```java
public class CreditCardValidator implements ConstraintValidator<CreditCard, String> {

    private static final String CREDIT_CARD_REGEX = "^(?:4[0-9]{12}(?:[0-9]{3})?|[25][1-7][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\\d{3})\\d{11})$";
    private static final Pattern CREDIT_CARD_PATTERN = Pattern.compile(CREDIT_CARD_REGEX);

    @Override
    public void initialize(CreditCard constraintAnnotation) {
    }

    @Override
    public boolean isValid(String creditCardNumber, ConstraintValidatorContext context) {
        Matcher matcher = CREDIT_CARD_PATTERN.matcher(creditCardNumber);
        return matcher.matches();
    }
}
```

We implement ConstraintValidator<[AnnotationsName], [TargetType]> and must override the `initialize()` and `isValid()` methods. The _initialize_ method is guaranteed to run before any use of this validation, and the _isValid_ method is where we reject or accept any value. Our annotation is ready. Let's use it like the ones above.

```java
@PostMapping("/custom-constraint-annotation")
String postCustomConstraint(@Valid CustomConstraintAnnotation customConstraintAnnotation, BindingResult bindingResult) {
    if (bindingResult.hasErrors()) {
        return "custom-constraint-annotation";
    }
    ...
}
```

All validation errors are saved in the BindingResult object and we can show error messages with Thymeleaf.
```html
<form role="form" th:object="${customConstraintAnnotation}" th:action="@{/custom-constraint-annotation}" th:method="post">
    <div style="color: red;" th:if="${#fields.hasErrors('*')}">
        <p><strong>Errors</strong></p>
        <ul>
            <li th:each="err : ${#fields.errors('*')}" th:text="${err}"></li>
        </ul>
    </div>
    <label>Credit Card Number</label>
    <br>
    <input type="text" id="creditCard" name="creditCard" th:field="*{creditCard}">
    <br>
    <input type="submit">
</form>
```

![Custom Annotation Error Message](https://dev-to-uploads.s3.amazonaws.com/i/1sfdowmttcc0ilmg18l3.png)

### Combining Multiple Annotations

Another way of implementing good validations is combining multiple validation annotations. The Hibernate documentation calls it [Constraint composition](https://docs.jboss.org/hibernate/stable/validator/reference/en-US/html_single/?v=5.4#section-constraint-composition). It's quite simple. First of all, create an annotation type and fill it in as below.

```java
@NotEmpty
@Size(min = 8)
@Pattern(regexp = "^(?=.*[a-z])(?=.*[A-Z])(?=.*\\d)[a-zA-Z\\d]+$")
@Target({ METHOD, FIELD, ANNOTATION_TYPE })
@Retention(RUNTIME)
@Constraint(validatedBy = {})
@Documented
public @interface CombinedPasswordConstraint {

    String message() default "Invalid password.";

    Class<?>[] groups() default {};

    Class<? extends Payload>[] payload() default {};
}
```

You can add different messages for each annotation. Then use it like the other constraints.

![Constraint Composition Error Message](https://dev-to-uploads.s3.amazonaws.com/i/jbudcxioo0nlbqbu88g2.png)

### Creating Custom Validator Class

We've been through constraints so far. It's time to create a more complicated validation with a custom validator class. In previous examples we've validated syntactical things. But generally we need more complicated things that possibly need a database query. I'm repeating the prior example. You have created a department in your imaginary app. Then you try to add a new employee to this department.
But you want to add a constraint that requires assigning a manager before assigning an employee. So you should add a validator that checks whether this department has a manager or not. This is possible with a custom validator. But you can also validate simple things like regex patterns. Let's create one. I'm gonna show you a simplified example. First of all, create a dummy class and fill it as below.

```java
public class CustomValidationEntity {

    private String email;
    private Long departmentId;

    public Boolean existManagerByDepartmentId(Long departmentId) {
        return false;
    }

    public Boolean existEmployeeWithMail(String email) {
        return true;
    }
}
```

It will always say 'department has no manager' and 'this email already exists' whatever we enter. Then create the validator class as below. I'm gonna explain the details.

```java
@Component
public class CustomValidator implements Validator {

    @Override
    public boolean supports(Class<?> clazz) {
        return CustomValidationEntity.class.isAssignableFrom(clazz);
    }

    @Override
    public void validate(Object target, Errors errors) {
        CustomValidationEntity customValidationEntity = (CustomValidationEntity) target;

        if (customValidationEntity.existEmployeeWithMail(customValidationEntity.getEmail())) {
            errors.rejectValue("email", null, "An employee with this email already exists.");
        }

        if (!customValidationEntity.existManagerByDepartmentId(customValidationEntity.getDepartmentId())) {
            errors.reject(null, "Department does not have a manager.");
        }
    }
}
```

The Validator interface that we implement here is `org.springframework.validation.Validator`, not the javax one. This interface gives us two methods. The `supports()` method checks whether the target object is what we intended to validate, and the `validate()` method is where we check and reject things. You can reject the whole object and add a global error message with `reject()`, or reject a single value and add an error message for that value with `rejectValue()`. Then you should annotate this class with `@Component`.
Let's use our validator. But we will do something different than using constraints. After annotating the object parameter in the controller with `@Valid`, we will add an InitBinder method in that controller.

```java
@InitBinder
private void bindValidator(WebDataBinder webDataBinder) {
    webDataBinder.addValidators(customValidator);
}
```

This `@InitBinder` annotated method will initialize the _WebDataBinder_. Since _WebDataBinder_ prepares objects that come from requests for controllers, it can validate before the request reaches the controller. That's it. Let's try our example.

![Custom Validator Error Message](https://dev-to-uploads.s3.amazonaws.com/i/vui1fnq6r98uvlnwrqp6.png)

In this article we've gone through validations in a Spring application. For questions, suggestions or corrections feel free to reach me on:

**Email:** [kamer@kamerelciyar.com](mailto:kamer@kamerelciyar.com)

**Twitter:** [https://twitter.com/kamer\_ee](https://twitter.com/kamer_ee)
kamer
256,327
Developers don’t talk about this enough …
I feel like developers don’t talk about this enough … The dreaded first impression that fuels our im...
0
2020-02-06T03:58:03
https://dev.to/moyarich/developers-don-t-talk-about-this-enough-2khb
career, interview, impostersyndrome, confidence
I feel like developers don’t talk about this enough … the dreaded first impression from a bad interview that fuels our imposter syndrome. You show up to your interview … you are way too nervous, and you perform badly on the test. Before you know it, you are out. The first impression you left behind is that you are stupid. After that, you spend the rest of your days/weeks/even months trying to become smarter. How did you overcome that feeling? What words did you tell yourself to remind yourself that you are amazing?
moyarich
256,333
Building Great User Experience with React Suspense
Since its launch, ReactJS is known for a great developer experience when working on large scale front...
0
2020-02-06T04:35:12
https://dev.to/jakelumetta/building-great-user-experience-with-react-suspense-3emk
Since its launch, [ReactJS](https://buttercms.com/blog/react-vs-react-native) has been known for a great developer experience when working on large-scale frontend apps. While there has been a lot of focus on producing error-free and scalable frontends, there has been less emphasis on user experience in large-scale applications. However, with the release of Suspense, the React core team is more focused on developing the best user experience, and doing so requires rethinking how we approach loading code and data for our apps.

**What is Suspense**

In React 16.6, Suspense was initially introduced with the React.lazy() API, allowing developers to do component-based code splitting, which was previously done via libraries like react-loadable. Here is how it looks:

```
const BlogContent = React.lazy(() => import('./BlogContent'));

<Suspense fallback={<Loader />}>
  <BlogContent />
</Suspense>
```

In this example, the BlogContent component’s code will be fetched when BlogContent is about to be rendered. The Suspense boundary around it waits for the code to load. While the component’s chunk of code is downloaded, the `fallback` component is rendered. This is Suspense in its simplest form.

Moving forward, Suspense is much more than just wrapping a component for code splitting. In a broader perspective, Suspense provides developers with a declarative API to handle the UI while code or data is being loaded. It doesn’t say how the data is fetched, but it lets you closely control the visual loading sequence of your app. Some might feel like it is some sort of data fetching solution provided by React, but that's not what Suspense is. Rather, Suspense allows developers to display loading states in the UI without tying the network logic to React components.
**What is Suspense & Rendering**

In the React community, multiple approaches have evolved to handle data fetching and rendering:

- Fetch on Render
- Fetch then Render
- Render as you Fetch (a new approach with Suspense)

**Fetch on Render**

This is the most common technique amongst React developers today. In this approach, we mount our component and then start fetching data from our server. While data is being fetched, the component shows a loading state, and as soon as the data is available, the component re-renders with real content. One of the major pitfalls of this technique is the waterfall, which becomes unavoidable at times as the application grows into a beast.

**Fetch then Render**

To avoid the waterfall, we tend to centralize data fetching and render once all the data is available. Though this avoids the waterfall, the user has to wait until all of that data is available. This wait might be longer than the one in the previous approach, as we fetch more data in parallel.

**Render-as-You-Fetch - Suspense!**

Suspense empowers developers to start fetching and rendering simultaneously by taking away all the heavy lifting of managing intermediate states. With the earlier approaches, we had to wait for fetching to finish before we could start rendering. **Here, rendering begins as soon as we start fetching.** One of the key features of Suspense for data fetching is that it treats code as data. With the React.lazy API, it is easy to fetch our code along with our data and render a fallback during transitional states. It eliminates the if (...) “is loading” checks from our components. This doesn’t only remove boilerplate code, it also simplifies making quick design changes.

**Suspense for Data Fetching**

The basic tool in Suspense for data fetching is a wrapper around the fetch promise that communicates its status. Before we dive into the code, we need to understand a little about how Suspense looks at its dear children.
As we wrap our fetch promise, the wrapper should expose the promise's state of pending, rejected, or resolved. Here is how these states should be returned from the wrapper function:

- Pending state: by throwing the promise
- Rejected state: by throwing a JS Error
- Resolved state: by returning the result

Here is our Suspense-based fetching component:

```
function suspenseFetch(promise) {
  let status = "loading";
  let output;
  promise.then(
    response => {
      status = "success";
      output = response;
    },
    error => {
      status = "error";
      output = error;
    }
  );
  return {
    load() {
      if (status === "loading") throw promise;
      if (status === "error") throw output;
      if (status === "success") return output;
    }
  };
}

let blog = suspenseFetch(fetch(BLOG_URL).then(res => res.json()));

const MarkdownRenderer = React.lazy(() => import('./MarkdownRenderer'));

export default function BlogContent() {
  const content = blog.load().content;
  return (
    <Suspense fallback={<Loader />}>
      <MarkdownRenderer content={content} />
    </Suspense>
  );
}
```

As seen, our `suspenseFetch` function returns a `load()` method that returns our data if it's available, or else throws a Promise or an Error to signal the loading and error states respectively. With both data fetching and code fetching wrapped in Suspense, we can download both in parallel and render as soon as data and code are available.

**Suspense As Boundaries**

In version 16, React introduced a great way to handle errors in components: `componentDidCatch` and, later, `getDerivedStateFromError` let developers keep an error in any single component from crashing the whole app. Functional components don't have any such API yet. Now, Suspense can be of great help to enable error boundaries around components.
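As an aside, the wrapper's pending behavior above can be checked in plain Node, outside React. The sketch below uses hypothetical names (`wrapPromise`, `resource`) and shows that reading while the promise is unsettled throws the promise itself, which is exactly what a Suspense boundary catches:

```javascript
// Framework-free sketch of the same three-state wrapper pattern.
function wrapPromise(promise) {
  let status = "loading";
  let output;
  promise.then(
    response => { status = "success"; output = response; },
    error => { status = "error"; output = error; }
  );
  return {
    load() {
      if (status === "loading") throw promise; // Suspense catches this
      if (status === "error") throw output;
      return output;
    }
  };
}

const resource = wrapPromise(Promise.resolve("post body"));

// Immediately after creation the promise has not settled yet
// (its .then callbacks run as microtasks), so load() throws the promise.
let thrown;
try {
  resource.load();
} catch (e) {
  thrown = e;
}
console.log(thrown instanceof Promise); // true
```

Inside React, the Suspense boundary catches that thrown promise, waits for it to settle, and re-renders the child, this time hitting the "success" branch.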
Here is a small example. An error boundary component to use across our app:

```
class ErrorBoundary extends React.Component {
  state = { hasError: false };

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  render() {
    if (this.state.hasError) {
      return this.props.fallback;
    }
    return this.props.children;
  }
}
```

This component can now wrap any component in our app to manage error boundaries. Example:

```
<ErrorBoundary fallback={<h2>Could not fetch posts.</h2>}>
  <Suspense fallback={<Loader />}>
    <BlogContent />
  </Suspense>
</ErrorBoundary>
```

This boundary will now catch both rendering errors and Suspense data fetching errors. As Suspense surfaces the rejection from the underlying promise, the error boundary above will catch it and display the declared fallback UI.

**Is it Production Ready?**

Though Facebook.com’s rewrite is completely based on Suspense and Relay, as said by the React team, it is an experimental feature and the APIs might change drastically before it gets stable. Nevertheless, it is something that developers who like to move fast and break things must try. Suspense and Concurrent Mode are surely turning the frontend world around to a large degree. This shift will bring in a lot of new ideas that will make it easier for frontend developers to keep up with the demands of great experiences.

**What lies Ahead**

Suspense is still very new to the frontend world and best practices are still in the process of development. As we move forward and the community adopts Suspense more, many patterns will evolve to enable great user experiences.
jakelumetta
256,404
Things to consider before choosing a framework or technology for a project
Hello guys, here I am going to list some important points to consider before you decide go for a fram...
0
2020-02-06T05:56:33
https://dev.to/ezhurik/things-to-consider-before-choosing-a-framework-or-technology-for-a-project-158n
discuss, beginners, framework, basics
Hello guys, here I am going to list some important points to consider before you decide to go for a framework/technology when starting a project. First things first.

### **1) Does it have detailed documentation?**

The documentation helps a lot in understanding the basic stuff. It should use simple and understandable language. Having proper examples is a bonus.

### **2) How big is the community, and can I find people to help me when I am stuck?**

This is a big factor while choosing a framework or technology. The documentation provides us the structure of the framework and its functionality. But as we know, not all projects have similar requirements, and we developers are no strangers to encountering strange, abnormal issues 😉. So before choosing a framework, you should make sure that it has strong community support and a large developer base.

### **3) How expert is my team and how wide is their knowledge base?**

Another important factor to consider before committing to a framework is the expertise and the knowledge base of your team. Any project involves as much effort after the completion of development as before and during the process. Besides, the product developed must be easily testable.

### **4) Are there any learning resources?**

Check whether there is something new that you can learn in the process of development. Always look to learn something new, so don't be afraid to add something new or different to the project.

### **5) How scalable is the proposed application?**

Scalability is not a part of the development process itself, though it needs to be considered while developing the application. Any app is generally scalable in two ways:

**a) Vertical scalability:** It means the app has the flexibility of adding new components to a web application without affecting its performance.

**b) Horizontal scalability:** It means the app must be capable of handling an increasing number of users.
### **6) Which is the best shot financially?**

Even though most web development tools and techniques are free and open source, sometimes they require you to pay an additional fee to enjoy the advanced benefits. Depending on the technology, you may sometimes be required to purchase a license as well. Thus, before committing to a technology you must understand the financial implications (including maintenance cost) it brings along and proceed accordingly.

### **7) Time to develop and market**

The technology stack may affect your time to market through its support for reusable components and third-party integrations. If tools and frameworks allow easy integration, this speeds up the development process, resulting in faster delivery.

### **8) What do my instincts say?**

Lastly, developers often tend to choose what appeals to them the most. A technology may be the best in the market, but if it doesn’t fall within the skill set of a particular developer, he/she won't choose it. The personal preferences of the developer also play a significant role in the choice. It's up to you whether you consider the various suggestions given to you or ignore them all and choose something entirely different.

That's it, guys. These are all purely my opinions. Every developer is unique, thus each has different strategies. Any suggestions to improve this post are much appreciated 😊.
ezhurik
256,413
Authenticate users with firebase and react.
In this article, we are going to make a basic user auth with firebase. If you have experience with an...
0
2020-02-26T16:57:09
https://dev.to/itnext/user-auth-with-firebase-and-react-1725
react, firebase, javascript
In this article, we are going to make a basic user auth with Firebase. If you have experience with any other type of user auth, you have probably gotten frustrated. Firebase does have a learning curve, but I have found it small compared to other alternatives. Firebase is going to handle a lot of the heavy backend functionality. If you would like to see what this app does, you can check out the "finished" product __[here](https://firebaseauthtutorial.surge.sh/login)__

__Why is this tutorial useful?__

This is how to leverage Firebase so that you don't have to create your own backend, encrypt your users' passwords, or go through the hassle of deploying a backend application.

__Prerequisites:__

1. Understanding of JavaScript, including how to pass arguments to functions and asynchronous code.
2. Understanding of React: context, hooks, and [create-react-app](https://www.npmjs.com/package/create-react-app).
3. Text editor of your choice. (I will use [vscode](https://code.visualstudio.com/download))
4. [A Firebase account](https://firebase.com)
5. Basic understanding of the command line.
6. Knowledge of git.

Optional: bash command line/Mac OS. You can do this without it, but I’ll be using it for this tutorial.

First, make a new Firebase project by visiting https://firebase.com.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/kccguino7xi9mu8o2q3m.png)

Click on a new project.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/f9up8812lv7qdhzsrb3x.png)

Click "my first project" and then you can name your project whatever you want. Click continue.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ykbwzi4v5zkx4y6gsc7m.png)

You can choose not to have Google Analytics and it should not interfere with this tutorial; I left it on, so you will see parts of my code where it’s enabled. Click continue.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ljczadrd2rh935c9trxp.png)

You will be prompted to select an account.
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/52316prga7fm8dqf0kx4.png)

Select the default account, then click create project.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/52316prga7fm8dqf0kx4.png)

You should now see this.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/zzw1izku3jy2wjfztlwj.png)

You should be in your Firebase console for this project.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/upb8h5qgkpsnyof9mxwf.png)

Click on authentication in the left side navigation.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/4ett85773gm95rkpw8zp.png)

Click set up sign-in method.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/qlnmluar4eoguo8932zb.png)

Here is a wide variety of ways to set up users signing into our apps. We are going to do the easiest way for this tutorial. Click email and password.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/4iefop3njd08yyafiqrr.png)

Click enable.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/yevb49yckz39pnr2rb3q.png)

Save. Make sure that it actually got enabled.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/tmno7e1au6wzejzhu7pc.png)

Now go to the project overview.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/vw8rfi9zyr27g383j4cn.png)

We need to get info about how our app can send and receive Firebase data, so we have to get API keys and other sensitive information, given to us in the form of an SDK. Click on the brackets to begin.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/zzpeg5dqj7qmlewy87h9.png)

We will be creating a React app and adding everything inside the script tag to the React project.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/xfv1yz2eyav0s3o5z64g.png)

Since we don't have a firebaseIndex.js we can't add it yet. This is everything we have to do in the Firebase console for our project. Make a new React app.
```bash
create-react-app firebaseauthtutorial
```

cd into the app:

```bash
cd firebaseauthtutorial
```

This is a good moment to plan out what kind of packages are wanted. These will all be installed via npm.

1. [firebase](https://www.npmjs.com/package/firebase). If this were ordinary JavaScript, we would use the whole script tag and the SDK.
2. [react-router-dom](https://www.npmjs.com/package/react-router-dom). This is so that when a user logs in we display components only accessible by users.
3. [dotenv](https://www.npmjs.com/package/dotenv). The best habit you can have when making apps that contain user data or leverage APIs (like this app will) is to ensure that hackers can't get access to your API keys, encryption techniques, or other users' sensitive info. dotenv allows you to save sensitive information as environment-wide variables, in a way that you can't publish to a remote repo but can still use in your app.

Run an npm install on the command line for all the packages.

_Pro tip: make sure that you are in the root directory of the project before you run npm install._

```bash
npm install firebase dotenv react-router-dom
```

Now open the project. I'm using vscode, so this is how to do it from the command line:

```bash
code .
```

Look at the package.json file and you should see the packages that you installed.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/nc5cd5ppsxomfyy86yby.png)

_package.json_

__Moving the firebase SDK into the app.__

Before you copy and paste the SDK into our file, it's best practice to add the .env file to the .gitignore so that you don't publish your environment variables to GitHub. It is very easy to forget. Then add the API keys to the .env and reference them from the firebaseIndex.js we are about to create. This way, you are never in danger of publishing your keys while following this tutorial.
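A quick aside on the .env file you are about to create: dotenv-style files are read line by line as KEY=VALUE pairs, with optional quotes stripped from the value. This is a simplified, hypothetical sketch of that parsing (the real dotenv package handles many more edge cases):

```javascript
// Hypothetical mini-parser illustrating the KEY=VALUE format of a .env file.
function parseEnvLine(line) {
  const match = line.match(/^([A-Za-z_][A-Za-z0-9_]*)\s*=\s*"?([^"]*)"?\s*$/);
  if (!match) return null;
  return { key: match[1], value: match[2] };
}

const parsed = parseEnvLine('REACT_APP_API_KEY="your secret api key"');
console.log(parsed.key);   // "REACT_APP_API_KEY"
console.log(parsed.value); // "your secret api key"
```

This is why the quotations matter in the next step: they let values contain spaces while still parsing as a single line.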
Click on your .gitignore.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/c26xrt0d2bml7sy5zse3.png)

Write .env anywhere in the file.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/7q55hb6xy424rev2wn5z.png)

Then right-click a blank spot in the root directory. (If you don't have one, you can minimize the outline to reveal space.)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/1anf4xm5orqyj55ucc2k.png)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/cji3uwv30gt3p43oxo7z.png)

Copy and paste the following variables into the .env file:

```
REACT_APP_API_KEY=
REACT_APP_AUTHDOMAIN=
REACT_APP_BASEURL=
REACT_APP_PROJECT_ID=
REACT_APP_STORAGEBUCKET=
REACT_APP_MESSAGING_SENDER_ID=
REACT_APP_APP_ID=
REACT_APP_MEASUREMENT_ID=
```

__Including the quotations__, copy and paste the info from the SDK one by one: API key, auth domain, baseurl, etc. You should have something like this.

_Your info from firebase:_

```
REACT_APP_API_KEY="your secret api key"
REACT_APP_AUTHDOMAIN="your secret authdomain"
REACT_APP_BASEURL="your secret baseurl"
REACT_APP_PROJECT_ID="your secret projectid"
REACT_APP_STORAGEBUCKET="your secret storagebucket"
REACT_APP_MESSAGING_SENDER_ID="your secret messaging sender id"
REACT_APP_APP_ID="your secret app id"
REACT_APP_MEASUREMENT_ID="your secret measurement id"
```

Now the easy part. Begin by making the folder to keep the firebase SDK and the helper methods for the auth. Try to do this from your text editor by right-clicking the src folder and clicking new folder.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/958j9vlkp1hpjlq1ddv1.png)

Name the folder firebase.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/obi8sbzkke7idguy7a3p.png)

Now right-click the firebase folder and add a firebaseIndex.js.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/6snxqyc019fow83yakhg.png)

_firebaseIndex.js_
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/h2c2ae0g9qthgrz964j3.png)

Import firebase at the top of the firebaseIndex.js file along with the features you want from it.

```javascript
import firebase from 'firebase'
import 'firebase/auth'
import 'firebase/app'
```

Now that your environment variables are set up app-wide, you can copy and paste this SDK to reference your sensitive data inside the firebaseIndex file with the code I provide.

```javascript
var firebaseConfig = {
  apiKey: process.env.REACT_APP_API_KEY,
  authDomain: process.env.REACT_APP_AUTHDOMAIN,
  databaseURL: process.env.REACT_APP_BASEURL,
  projectId: process.env.REACT_APP_PROJECT_ID,
  storageBucket: process.env.REACT_APP_STORAGEBUCKET,
  messagingSenderId: process.env.REACT_APP_MESSAGING_SENDER_ID,
  appId: process.env.REACT_APP_APP_ID,
  measurementId: process.env.REACT_APP_MEASUREMENT_ID
};
// Initialize Firebase
firebase.initializeApp(firebaseConfig);
firebase.analytics();
```

Add the firebase.auth() helper method underneath the analytics() method.

```javascript
firebase.auth()
```

We are going to need the firebaseConfig object in another file, so it needs to be exported.

```javascript
export default {
  firebaseConfig,
}
```

The whole file should look like this.

```javascript
import firebase from 'firebase'
import 'firebase/auth'
import 'firebase/app'

var firebaseConfig = {
  apiKey: process.env.REACT_APP_API_KEY,
  authDomain: process.env.REACT_APP_AUTHDOMAIN,
  databaseURL: process.env.REACT_APP_BASEURL,
  projectId: process.env.REACT_APP_PROJECT_ID,
  storageBucket: process.env.REACT_APP_STORAGEBUCKET,
  messagingSenderId: process.env.REACT_APP_MESSAGING_SENDER_ID,
  appId: process.env.REACT_APP_APP_ID,
  measurementId: process.env.REACT_APP_MEASUREMENT_ID
};
// Initialize Firebase
firebase.initializeApp(firebaseConfig);
firebase.analytics();
firebase.auth()

export default {
  firebaseConfig,
}
```

If you followed these steps, you could have pushed to GitHub at any time without publishing your keys.
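One gotcha worth knowing: if a REACT_APP_ variable is misspelled, or the dev server was not restarted after editing .env, the corresponding process.env entry is just undefined, and Firebase fails later with a confusing error. A small, hypothetical helper can catch that early:

```javascript
// Hypothetical sanity check: list which required env vars are missing.
// You could run this before calling firebase.initializeApp(firebaseConfig).
const REQUIRED_VARS = [
  "REACT_APP_API_KEY",
  "REACT_APP_AUTHDOMAIN",
  "REACT_APP_PROJECT_ID",
  "REACT_APP_APP_ID",
];

function missingEnvVars(env) {
  return REQUIRED_VARS.filter(name => !env[name]);
}

// Example with a fake env object standing in for process.env:
const missing = missingEnvVars({ REACT_APP_API_KEY: "abc123" });
console.log(missing);
// ["REACT_APP_AUTHDOMAIN", "REACT_APP_PROJECT_ID", "REACT_APP_APP_ID"]
```

Logging the result during development makes a typo in the .env file obvious instead of mysterious.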
__Adding the auth methods.__

Inside your firebase folder make a file called authmethods.js; this is where we keep an object that contains the signup, signin, and signout functions.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/rxpr18w0acpwk9113fvr.png)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/kz2e5jt0qxov08jtusg6.png)

At the top import two things: the firebaseConfig object and firebase from firebase, like so.

```javascript
import firebaseconfig from './firebaseIndex'
import firebase from 'firebase'
```

Now make an export and make an authMethods object.

```javascript
export const authMethods = {
  // firebase helper methods go here...
}
```

We are going to send this to context, where it will be the top of a chain of methods that link all the way to the form for signin.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/qu78701foby0ns496yoj.png)

These are going to be key/value pairs whose values are anonymous functions for signing in.

```javascript
export const authMethods = {
  // firebase helper methods go here...
  signup: (email, password) => {

  },
  signin: (email, password) => {

  },
  signout: (email, password) => {

  },
}
```

This looked really unusual the first time I saw it. It will make a lot more sense after we start calling on it from context. The following is from the [firebase documentation for user auth](https://firebase.google.com/docs/auth/web/start?authuser=0).

```javascript
signup: (email, password) => {
  firebase.auth().createUserWithEmailAndPassword(email, password)
    .then(res => {
      console.log(res)
    })
    .catch(err => {
      console.error(err)
    })
},
```

I want to test if this code works before I start adding the other methods. To do that, build the context and signup form and see if Firebase will respond.

__Creating context for our application.__

Right-click on the src folder and make a new folder called provider.
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/iolz0m0lmjz29p5uohtp.png)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/15vizl132e9456izwxp5.png)

Right-click on provider and make a file called AuthProvider.js.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/zbltch6w4kmwkssrwgwz.png)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/33ixjwpnk0k79t6c21vo.png)

Make a functional component and add props.

```javascript
import React from 'react';

const AuthProvider = (props) => {
  return (
    <div>

    </div>
  );
};

export default AuthProvider;
```

Outside of the function, make a firebaseAuth variable and set it equal to React.createContext().

```javascript
export const firebaseAuth = React.createContext()
```

We have to export it so that we can access it with the useContext hook. Erase the div tags and make the provider inside of the return for the AuthProvider. I'm not going to explain everything that is happening here, but if you want to know more about context, this is an article where I explain [context and the useContext hook](https://dev.to/tallangroberg/context-and-the-usecontext-hook-1g8l).

```javascript
const AuthProvider = (props) => {
  return (
    <firebaseAuth.Provider
      value={{
        test: "context is working"
      }}>
      {props.children}
    </firebaseAuth.Provider>
  );
};
```

_AuthProvider.js_

Now we need to wrap our App.js in the AuthProvider component in the index.js file. We also need to import our ability to route components dynamically; since we are already in this file, add BrowserRouter from react-router-dom. Start by importing the AuthProvider and BrowserRouter at the top.

```javascript
import AuthProvider from './provider/AuthProvider'
import {BrowserRouter} from 'react-router-dom'
```

Then make an App sandwich with BrowserRouter and AuthProvider.
```javascript
ReactDOM.render(
  <BrowserRouter>
    <AuthProvider>
      <App />
    </AuthProvider>
  </BrowserRouter>
, document.getElementById('root'));
```

Two things: go to the App.js and, at the top, change how React is imported to include useContext. Also import {firebaseAuth} so that we can destructure the test key/value pair out of it, like this.

```javascript
import React, {useContext} from 'react';
import {firebaseAuth} from './provider/AuthProvider'
```

Inside the function, destructure test from the firebaseAuth variable and console.log it.

```javascript
const {test} = useContext(firebaseAuth)
console.log(test)
```

Go back to the terminal and start the server.

```bash
npm start
```

Inspect with the dev tools and you should see this.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ueli4qgew7jhx1q7dk5t.png)

__Connecting to authMethods__

Now that we have context app-wide, go back to the AuthProvider.js and import the authMethods.

```javascript
import {authMethods} from '../firebase/authmethods'
```

This file will be the middle man between firebase and the Signup component we are about to make; that means all the stateful logic will be housed here. Make a function called handleSignup inside the AuthProvider.

```javascript
const handleSignup = () => {
  // middle man between firebase and signup
}
```

Pass it as a value in the firebaseAuth.Provider.

```javascript
<firebaseAuth.Provider
  value={{
    //replaced test with handleSignup
    handleSignup
  }}>
  {props.children}
</firebaseAuth.Provider>
```

Now replace test with handleSignup in the App.js.

```javascript
const {handleSignup} = useContext(firebaseAuth)
console.log(handleSignup)
```

_App.js_

You should see

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/w3l7715rhxsrc393nzce.png)

In the AuthProvider, add the authMethods.signup() to the handleSignup.
```javascript
const handleSignup = () => {
  // middle man between firebase and signup
  console.log('handleSignup')
  // calling signup from firebase server
  return authMethods.signup()
}
```

Make a components folder and a Signup.js component. Recreate the same functionality where we want it to end up so that we can define our routing in the App.js.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/lydehlps203vywn800mb.png)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/nb2rbldo7oj4hyx1ycye.png)

Make the Signup.js.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/s9tstuimehix3itu8wki.png)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/j359wjpfn4bah9v16317.png)

Make a basic component.

```javascript
// add useContext
import React, {useContext} from 'react';

const Signup = () => {
  return (
    <div>
      Signup
    </div>
  );
};

export default Signup;
```

Destructure the handleSignup function out of context, just like in the App.js.

```javascript
const {handleSignup} = useContext(firebaseAuth)
console.log(handleSignup)
```

In the App.js, add the beginnings of react-router-dom by removing the boilerplate and adding Switch and Route, setting the Signup to be rendered by the Route.

```javascript
import {Route, Switch} from 'react-router-dom'
import Signup from './component/Signup'
```

_App.js_

```javascript
return (
  <>
    {/* switch allows switching which components render. */}
    <Switch>
      {/* route allows you to render by url path */}
      <Route exact path='/' component={Signup} />
    </Switch>
  </>
);
```

If everything worked, you should see a white screen with signup.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/l59b8ore82j1kxw5zjw0.png)

Make a signup form.

```javascript
return (
  <form>
    {/* replace the div tags with a form tag */}
    Signup
    {/* make inputs */}
    <input />
    <button>signup</button>
  </form>
);
```

At this point, it might be tempting to make state here.
But we want the context to be the single source of truth, so that if a user toggles between login and signup, whatever they typed in will persist. Go back to the AuthProvider and start setting up state. We need a piece of state for a token from firebase and for user data. Import useState next to React.

```javascript
import React, {useState} from 'react';
```

_AuthProvider.js_

The pieces of state that we want will be:

1. token as null (then a string once we get a token from firebase); more about [json web tokens](https://jwt.io/introduction/).
2. inputs as an object with email and password, both strings.
3. errors as an array, so that error messages can be displayed to the users.

Add those states to the AuthProvider.js.

```javascript
const [inputs, setInputs] = useState({email: '', password: ''})
const [errors, setErrors] = useState([])
const [token, setToken] = useState(null)
```

Add inputs to the value object of the provider.

```javascript
<firebaseAuth.Provider
  value={{
    //replaced test with handleSignup
    handleSignup,
    inputs,
    setInputs,
  }}>
```

In the Signup.js, get them from the authContext with the useContext hook like this.

```javascript
const {handleSignup, inputs, setInputs} = useContext(firebaseAuth)
```

Make basic handleChange and handleSubmit form functions.

```javascript
const handleSubmit = (e) => {
  e.preventDefault()
  console.log('handleSubmit')
}

const handleChange = e => {
  const {name, value} = e.target
  console.log(inputs)
  setInputs(prev => ({...prev, [name]: value}))
}
```

Change the form and input fields to work with the form functions.

```javascript
<form onSubmit={handleSubmit}>
  {/* replace the div tags with a form tag */}
  Signup
  {/* make inputs */}
  <input
    onChange={handleChange}
    name="email"
    placeholder='email'
    value={inputs.email}
  />
  <input
    onChange={handleChange}
    name="password"
    placeholder='password'
    value={inputs.password}
  />
  <button>signup</button>
</form>
```

If you did everything correctly and ran a test that looks like this...
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/y5z999l46yg5uvr25l3y.png)

here is the error message you would have gotten.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/mrnqv8iqkdi69eiwflfa.png)

The reason we got this error is that we didn't pass authMethods.signup the email and password arguments that it was expecting. Pass inputs.email and inputs.password into authMethods.signup.

```javascript
authMethods.signup(inputs.email, inputs.password)
```

When you do a test like this...

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/knab9m0aox1f7z2n29k0.png)

you should get a response like this.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/978cuy3g3vkgdoi2ghm3.png)

But if you try to do it twice you will get an error.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/fl75zxhqgrhnuhmi9plr.png)

This is because you can't sign up the same email twice; all of the emails have to be unique. To make it so that the error message displays to the user, we have to do the following.

1. In the AuthProvider.js, pass setErrors as an argument along with email and password; this is the only way I could figure out how to do this. Whenever you do have to pass more than one extra argument to a function you should have a good justification.
2. In the authMethods.js signup(), add the third argument at the top, and in the .catch, have the error messages saved to state in the errors array.
3. Have the error display on the screen by passing it to the Signup.js and mapping through the array.

1.

```javascript
//sending setErrors
authMethods.signup(inputs.email, inputs.password, setErrors)
console.log(errors)
```

Now add the setErrors function along with email and password.

_AuthProvider.js_

2.
```javascript
//catching setErrors
signup: (email, password, setErrors) => {
```
_authMethods.js_

change the catch to use setErrors; include prev in case there is more than one error

```javascript
.catch(err => {
  //saving error messages here
  setErrors(prev => ([...prev, err.message]))
})
```

if it worked __and you console logged it__, you should see this error.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/ruyfaqgxpexodf72bo4d.png)

3. add errors to the value object of the Provider

```javascript
<firebaseAuth.Provider
value={{
  //replaced test with handleSignup
  handleSignup,
  inputs,
  setInputs,
  //added errors to send to Signup.js
  errors,
}}>
  {props.children}
</firebaseAuth.Provider>
```
_AuthProvider.js_

destructure it with useContext in the Signup.js

```javascript
const {handleSignup, inputs, setInputs, errors} = useContext(firebaseAuth)
```
_Signup.js_

now add a ternary that will only show up if an error occurs.

```javascript
<button>signup</button>
{errors.length > 0 ?
  errors.map(error =>
    <p style={{color: 'red'}}>{error}</p>
  ) : null}
</form>
```

if everything worked you will get your error on the screen.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/iol3gmynbouk2oa5ycg7.png)

if you want to [filter duplicates](https://medium.com/dailyjs/how-to-remove-array-duplicates-in-es6-5daa8789641c) you can see how I did it on the repo, but this tutorial is getting long and there are a couple more things to do.

to allow multiple accounts per email: go to firebase inside of this project and click on authentication. click on the signin method tab and scroll to the bottom, where it says advanced in small black letters and one account per email in bold. Click the blue change button

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/kvzrr83o0i8mmcsgsr1s.png)

click allow multiple accounts with the same email. this will help us move faster with testing, but don't forget to switch it back later.

1.
The same way that we set an error, we are going to save the token to localStorage and to the token's state in the AuthProvider.
2. make it so that we can only see some components if we have a token.
3. redirect to that page if the token in local storage matches the token in state.
4. repeat the process for signin.
5. erase the token and push the user out of the authenticated parts of our app with the signout method.

1. go to the AuthProvider.js and add setToken as another argument after setErrors.

```javascript
//sending setToken function to authMethods.js
authMethods.signup(inputs.email, inputs.password, setErrors, setToken)
console.log(errors, token)
```
_AuthProvider.js_

add this as a 4th argument at the top.

```javascript
// added the 4th argument
signup: (email, password, setErrors, setToken) => {
```

inside the .then, underneath the console.log(res)... I am about to save you so much time you would have spent digging through the res object to find the token. this is also about to be a little messy with the async code.

```javascript
signup: (email, password, setErrors, setToken) => {
  firebase.auth().createUserWithEmailAndPassword(email,password)
  //make res asynchronous so that we can grab the token before saving it.
  .then( async res => {
    const token = await Object.entries(res.user)[5][1].b
    //set token to localStorage
    await localStorage.setItem('token', token)
    //grab token from local storage and set to state.
    setToken(window.localStorage.token)
    console.log(res)
  })
  .catch(err => {
    setErrors(prev => ([...prev, err.message]))
  })
},
```
_authMethods.js_

now if you make yet another account and go to the browser's dev tools

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/vgqi9g2gp329vxwgexk7.png)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/mfv7w276twxvza28dzwr.png)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/3iomrkvi86j2udpp6n5a.png)

__2.
signing in __

we are going to copy and paste a lot of what we have for signup and easily configure it for login. we will start from the bottom of the component tree by making a Signin component, then change each file slightly until it works in the authMethods.

start by making a new file called Signin.js

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/f2lozjeagtutrnoxm101.png)

copy and paste everything from the Signup.js to the Signin.js, then highlight everywhere it says signup and change that to signin. Click on the name of the react component and Command + d if you are using a mac. Otherwise, you can use ctrl + f and type it in at the top. I only had 3 matches, but remember to change handleSignup to handleSignin using the same method. change the button as well.

Now go to the App.js and import the file.

```javascript
import Signin from './component/Signin'
```

make sure that the component folder on the import is singular.

add a new route for the Signin

```javascript
<Route exact path='/' component={Signup} />
<Route exact path='/signin' component={Signin} />
```

your signin component will render now if you go to http://localhost:3000/signin, but as soon as you click the button it will crash because there is no handleSignin function. to fix that we can go to the AuthProvider.js and copy and paste, changing the wording just like we did for signup. then add the handleSignin function to the value object.
```javascript
const handleSignin = () => {
  //changed to handleSignin
  console.log('handleSignin!!!!')
  // changed signup to signin
  authMethods.signin(inputs.email, inputs.password, setErrors, setToken)
  console.log(errors, token)
}
```

now add that function to the firebaseAuth.Provider

```javascript
<firebaseAuth.Provider
value={{
  //replaced test with handleSignup
  handleSignup,
  handleSignin,
  inputs,
  setInputs,
  errors,
}}>
  {props.children}
</firebaseAuth.Provider>
```
_AuthProvider.js_

now go to authMethods.js and do something similar: instead of createUserWithEmailAndPassword, change to signInWithEmailAndPassword()

```javascript
signin: (email, password, setErrors, setToken) => {
  //change from create users to...
  firebase.auth().signInWithEmailAndPassword(email,password)
  //everything is almost exactly the same as the function above
  .then( async res => {
    const token = await Object.entries(res.user)[5][1].b
    //set token to localStorage
    await localStorage.setItem('token', token)
    setToken(window.localStorage.token)
    console.log(res)
  })
  .catch(err => {
    setErrors(prev => ([...prev, err.message]))
  })
},
```

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/jockcc4l0u82dzk7xden.png)

if you didn't delete your token from local storage then a token will still be there.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/o4pao73qfq592fixdc04.png)

almost there!!

1. make a home component and only allow users with tokens to get there.
2. make a signout button that deletes the token and pushes the user away from the page with react-router-dom.

since you should already be in the authMethods.js, we will start from the top and go to the bottom this time. this method is really simple compared to the other two because we aren't using firebase to keep the user's status there.
```javascript
//no need for email and password
signout: (setErrors, setToken) => {
  // signOut is a no argument function
  firebase.auth().signOut().then( res => {
    //remove the token
    localStorage.removeItem('token')
    //set the token back to original state
    setToken(null)
  })
  .catch(err => {
    //there shouldn't ever be an error from firebase but just in case
    setErrors(prev => ([...prev, err.message]))
    //whether firebase does the trick or not, I want my user signed out.
    localStorage.removeItem('token')
    setToken(null)
    console.error(err.message)
  })
},
}
```

go to AuthProvider.js and make a signout function. note that authMethods.signout expects setErrors and setToken, so pass them in.

```javascript
const handleSignout = () => {
  authMethods.signout(setErrors, setToken)
}
```

add the method to the Provider

```javascript
setInputs,
errors,
handleSignout,
```

now we need a component for this to be useful, which we haven't made yet. make a Home.js, and a basic React component inside it.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/9q86kunn9kytdkv73co4.png)

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/j0qzpp9s1f8mi7dz2mo2.png)

```javascript
import React from 'react';

const Home = (props) => {
  return (
    <div>
      Home
    </div>
  );
};

export default Home;
```

import useContext and firebaseAuth

```javascript
import React, {useContext} from 'react';
import {firebaseAuth} from '../provider/AuthProvider'
```

inside the component, before the return statement, destructure handleSignout from useContext

```javascript
const {handleSignout} = useContext(firebaseAuth)
```

in the return statement, add login successful, then a button that calls handleSignout.

```javascript
return (
  <div>
    Home, login successful!!!!!!
    <button onClick={handleSignout}>sign out </button>
  </div>
);
```

before we can test it, we need to go back up our component tree and change how strict the access to each component is. in the App.js we are going to use a ternary statement so that users can't get to the home component without a token saved to state. import the Home component in the App.js.
```javascript
import Home from './component/Home'
```

destructure the token out of firebaseAuth with useContext

```javascript
const { token } = useContext(firebaseAuth)
console.log(token)
```

when you use Route to render the Home component, add a ternary statement that checks the token. this means setting up the "/" (root) URL differently: change your Home component's route to use the render prop instead of the component prop, and designate the URL paths more strictly.

```javascript
<Route exact path='/' render={rProps => token === null ? <Signin /> : <Home />} />
<Route exact path='/signin' component={Signin} />
<Route exact path='/signup' component={Signup} />
```

in the AuthProvider.js, add the token to the value object.

```javascript
<firebaseAuth.Provider
value={{
  //replaced test with handleSignup
  handleSignup,
  handleSignin,
  token,
  inputs,
  setInputs,
  errors,
  handleSignout,
}}>
  {props.children}
</firebaseAuth.Provider>
```

now users can signin and signout. One final touch: make it so that when a user signs up, react-router-dom will send them to the home page.

go to the Signup.js and import withRouter from react-router-dom

```javascript
import {withRouter} from 'react-router-dom'
```

pass the default export to the withRouter higher-order component

```javascript
export default withRouter(Signup);
```

add props to the Signup component

```javascript
const Signup = (props) => {
```

now we have access to props.history.push("/goAnyWhereInApp")

now make handleSubmit an async function, await the handleSignup, and then push to the root URL.

```javascript
const handleSubmit = async (e) => {
  e.preventDefault()
  console.log('handleSubmit')
  //wait to signup
  await handleSignup()
  //push home
  props.history.push('/')
}
```

you might have a delay, but once you get your credentials it will work.
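one fragile spot worth flagging: `Object.entries(res.user)[5][1].b` depends on the internal (minified) shape of the firebase user object, which can change between SDK releases. the user object also exposes a `getIdToken()` method that returns a promise for the ID token, which would be sturdier. here is a hedged sketch; the `extractToken` name and the `fakeRes` stub are mine, for illustration only:

```javascript
// Possible alternative to digging through Object.entries(res.user):
// ask the user object for its token directly.
async function extractToken(res) {
  // getIdToken() returns a Promise<string> on the firebase User object
  const token = await res.user.getIdToken();
  return token;
}

// stub standing in for a real firebase response (illustration only)
const fakeRes = { user: { getIdToken: async () => 'fake-jwt' } };

extractToken(fakeRes).then(token => console.log(token)); // logs 'fake-jwt'
```

in the real signup/signin methods you would await this instead of indexing into the object, then save the result to localStorage as before.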
if you want to publish this site, [here is how with surge.](https://surge.sh/help/getting-started-with-surge) I am a big fan and am doing these firebase tutorials because of a dev who has suffered much at the hands of heroku

this is the [finished product](https://firebaseauthtutorial.surge.sh/login)

this is the [github](https://github.com/TallanGroberg/firebase-auth-tutorial)

give it a star if you can.

__Finally, that is it__

you now have a static site with powerful backend functionality. I will be doing a lot more tutorials on firebase. please like and share if you found this tutorial beneficial.

the [firebase docs](https://firebase.google.com/docs/auth/web/password-auth?authuser=0) are helpful, but I have a few things in here that make it so much easier to transpose to a react project.

if you have anything to say please add it to the comments below.
tallangroberg
256,438
GitHub deprecates access token in URL!
Just recently I posted brief comparison about using authentication with Facebook / Gmail / Github on...
0
2020-02-06T07:13:58
https://dev.to/rodiongork/github-deprecates-access-token-in-url-53ph
github, php, tutorial
Just recently I [posted a brief comparison](https://dev.to/rodiongork/facebook-google-or-github-which-oauth-for-your-site-20ni) about using authentication with Facebook / Gmail / Github on web-sites. I told that FB annoys me with often-breaking updates to its API, while the others do not.

**Well, recently GitHub made some update too!**

Luckily it is not a complicated change, so I'll post an example of a quick fix and a couple of links to instructions - just in case someone uses this and missed the notifications. The example is in PHP but you can easily understand and convert it to anything, including curl :)

## How it worked before

For "Login via GitHub" we use a link or button on our web-page, which points to the following url (with some parameters)

> https://github.com/login/oauth/authorize

This leads the user to the GitHub login form (if he/she isn't already signed in) and then returns back to our web-site, providing us with an **authentication TOKEN**.

This token is then used to fetch various information from the API. In the simplest form we call the `/api/user` endpoint to get the public github ID of the user who signed in (and sign him/her into our site with this ID).

**And here is the change** The old style passed the TOKEN as a query parameter, e.g. `/api/user?access_token=...` - but now this is deprecated.

## How it is now? Get to code!

Nowadays Github API methods want us to provide the TOKEN in the request headers instead. Not a big difference, luckily:

_Old code looked like this_

```php
function github_fetch_user_data($token) {
    // old style: url includes the token as a query parameter
    $url = 'https://api.github.com/user?access_token=' . $token;
    // suppose we already have some additional options to the API request
    $options = array('http' => array('some_option' => 'Some_Value'));
    // structure to pass additional features to HTTP GET request
    $context = stream_context_create($options);
    // do HTTP GET request at last
    return file_get_contents($url, false, $context);
}
```

Now it is changed to provide an `Authorization: token ...` header line and remove the query parameter:

```php
function github_fetch_user_data($token) {
    $url = 'https://api.github.com/user';
    // include 'header' field in the options
    $options = array(
        'http' => array('some_option' => 'Some_Value',
            'header' => "Authorization: token $token"));
    $context = stream_context_create($options);
    return file_get_contents($url, false, $context);
}
```

_As one may see from the links below, a similar deprecation exists for other API methods which passed the authentication token in query parameters or as part of the url "path"._

## Links

[GitHub deprecation of passwords and tokens in urls, since November 2019](https://developer.github.com/changes/2019-11-05-deprecated-passwords-and-authorizations-api/#authenticating-using-query-parameters)

[GitHub "Webflow" authorization API](https://developer.github.com/apps/building-oauth-apps/authorizing-oauth-apps/#web-application-flow)
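Since the article mentions converting this to curl: the same header-based call can be sketched as a small shell function. The function name and the `User-Agent` value are placeholders of mine, not from GitHub's docs:

```shell
# Hedged curl equivalent of the PHP above; call it with a real OAuth token.
github_fetch_user_data() {
  local token="$1"
  # the token now travels in the Authorization header, not the query string
  curl -s -H "Authorization: token $token" \
       -H "User-Agent: my-oauth-app" \
       https://api.github.com/user
}

# usage (with a real token):
# github_fetch_user_data "your-token-here"
```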
rodiongork
256,612
Best Digital Marketing Service Provider
No.1 Website, Online Guest Blogging Platform, A complete SEO groundwork and much more. Of course, ent...
0
2020-02-06T11:32:48
https://dev.to/arunseo/best-digital-marketing-service-provider-2ag7
digitalmarketing, seoservice, marketing, seo
No.1 website and online guest blogging platform, complete SEO groundwork, and much more. Of course, entering a profession and being successful in it is not easy.
arunseo
256,628
Web Components 101
Web Components are a set of technologies that allow you to create reusable custom elements with...
0
2020-02-06T19:18:56
https://blog.jws.app/2020/web-components-101/
coding, customelements, htmlelement, registerelement
--- title: Web Components 101 published: true date: 2020-02-06 11:54:31 UTC tags: Coding,customElements,HTMLElement,registerElement canonical_url: https://blog.jws.app/2020/web-components-101/ --- Web Components are a set of technologies that allow you to create reusable custom elements with functionality encapsulated away from the rest of your code. This allows you to define something like `<js-modal>Content</js-modal>` and then attach behavior and functionality to it. In this post, I want to explore how web components do what they do. {% codepen https://codepen.io/steinbring/pen/bGGpaLo %} In the above [pen](https://codepen.io/steinbring/pen/bGGpaLo), there are two examples. The first one (`<js-description>Content</js-description>`) uses a custom element (defined in JavaScript using `customElements.define()`). It is definitely useful but if you look at the second example (`<js-gravatar>Content</js-gravatar>`), there is now a `<template>` element that allows you to define what is within the custom element. I plan on building on some of these concepts in a later post. Have a question, comment, etc? Feel free to drop a comment, below.
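To make the `customElements.define()` part concrete, here is a minimal sketch. The tag name `js-hello` is mine (not from the pen above), and the two stub lines at the top exist only so the snippet also runs outside a browser, where `HTMLElement` and `customElements` would otherwise be missing:

```javascript
// Stubs for non-browser environments; a real page provides these natively.
if (typeof HTMLElement === 'undefined') globalThis.HTMLElement = class {};
if (typeof customElements === 'undefined') {
  const registry = new Map();
  globalThis.customElements = {
    define: (name, cls) => registry.set(name, cls),
    get: (name) => registry.get(name),
  };
}

// A minimal custom element: every <js-hello> tag gets this behavior.
class JsHello extends HTMLElement {
  // called by the browser when the element is inserted into the document
  connectedCallback() {
    this.textContent = 'Hello from a web component!';
  }
}

customElements.define('js-hello', JsHello);
```

In a browser, dropping `<js-hello></js-hello>` into the page would then render the greeting.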
steinbring
256,640
Reddit Live Feed
What is it? Reddit posts, live, as they happen, right now, in your face! A React based si...
0
2020-02-06T12:43:59
https://dev.to/entozoon/reddit-live-feed-15dp
## What is it?

Reddit posts, live, as they happen, right now, in your face!

A React based site that loads all Reddit posts as soon as they're made.

[Reddit Live Feed](https://entozoon.github.io/reddit-live-feed/)

[Source](https://github.com/entozoon/reddit-live-feed/)

### Why?

Building Super Simple™ (at first) projects like this is **always a good way to improve React knowledge**, plus I love the concept of seeing the entirety of Reddit sluicing in.

### Ingredients

* React JS

### Source

[Source](https://github.com/entozoon/reddit-live-feed/)

All Reddit posts are actually available as `.json` files such as this, which I check every now and then in order to update the list of posts.

It takes advantage of React by having the posts data stored in the components as state variables, which allows everything to automatically update on the page - for example the timestamp information (2 seconds ago, 3 seconds ago, etc).
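The relative timestamps mentioned above ("2 seconds ago, 3 seconds ago") boil down to a tiny formatter that gets re-run whenever the component state updates. A minimal sketch; the function name is mine, not necessarily what the repo uses:

```javascript
// Convert a post's age in seconds into a human-readable relative label.
function timeAgo(seconds) {
  if (seconds < 60) return `${seconds} seconds ago`;
  const minutes = Math.floor(seconds / 60);
  if (minutes < 60) return `${minutes} minutes ago`;
  return `${Math.floor(minutes / 60)} hours ago`;
}

console.log(timeAgo(3));   // "3 seconds ago"
console.log(timeAgo(180)); // "3 minutes ago"
```

Because the label is derived from state on every render, each state update refreshes all the "n seconds ago" strings for free.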
entozoon
256,645
Ready to explore your NPM scripts?
Explore the NPM scripts of your project and see how they depend on one another - You can try this op...
0
2020-02-06T12:53:13
https://dev.to/mbarzeev/ready-to-explore-your-npm-scripts-1j81
node, javascript, npm, opensource
Explore the NPM scripts of your project and see how they depend on one another - You can try this open-source CLI tool I've made, called **npmapper** Simply run `npx npmapper` on any project directory which has package.json in it and browse the results I would love to hear your feedback :) For more details, check this one here: https://dev.to/mbarzeev/mapping-your-npm-scripts-with-npmapper-1ih0
mbarzeev
256,678
Needed a Web Server, quick and easy..
Today I was asked to create a simple static html page that reads a json file with some information to...
0
2020-02-06T13:47:24
https://dev.to/liviasilvasantos/needed-a-web-server-quick-and-easy-3anc
npm, html, todayilearned
Today I was asked to create a simple static html page that reads a json file with some information to show it on a grid.

I created the structure, added some bootstrap css files, opened the index.html in a browser, and got an error...

Reason: CORS request not HTTP

The solution was to run a web server to access the static html page. I needed it quick and easy. Found out I could install one via npm and run it on the base directory.

So, to install the server:

$ npm install -g http-server

And to run it:

$ cd my_project_base
$ http-server

And voilà!

Starting up http-server, serving ./
Available on:
http://<ip>:8081
liviasilvasantos
256,735
Answering 5 common questions about AMP for WordPress [pt-BR]
Last year I started running an AMP, or Accelerated Mobile Pages, course on my YouTube channel; the...
0
2020-02-09T13:40:11
https://dev.to/fellyph/respondendo-5-perguntas-comum-sobre-amp-para-wordpress-2nga
amp, webdev, mobile, wordpress
Last year I started running an AMP, or Accelerated Mobile Pages, course on my YouTube channel. The decision to run this course came after revisiting the platform after 3 years.

![Comparing AMP versions](https://dev-to-uploads.s3.amazonaws.com/i/2ppa3qgfbjmlg7wf9qr8.png)

A lot has changed in those 3 years. Before, we only had reader mode, which limited the user experience considerably; now we have a vast library of components for different solutions. So I started presenting those components, with a focus on front-end development.

Browsing YouTube to look at other material and understand what the audience is interested in, it was easy to see that the largest AMP-related audience on YouTube concerns the WordPress plugin. Watching videos, I found some points and comments that I would like to respond to.

###1 - Is AMP exclusively for SEO?

AMP is not exclusively for SEO. Its focus is to improve the user experience, that is, to deliver content fast while following development best practices for performance, accessibility, and responsive design. As Google has publicly stated, pages that deliver content fast get priority in search results, and this can be achieved with any framework or with vanilla JavaScript. AMP's focus is to hit those metrics with the lowest possible development complexity.

###2 - Is AMP mobile-only?

It used to be; not anymore. So much so that its name was Accelerated Mobile Pages, but the library is now just called AMP. The improvements achieved on mobile are amplified on desktop thanks to the greater processing power. As the platform evolved, layout-focused components enabled responsive layouts, which made it possible to build applications that adapt to different screens.

###3 - How do I get custom cards in search results?
With AMP we can display custom results in search, but that depends on factors beyond including the library in our pages:

* Including valid structured data
* Your page needs to be a valid AMP page; check it with the [AMP Validator](https://validator.ampproject.org/)

###4 - Can AMP hurt the user experience?

When we talk about the WordPress platform, the plugin alone often does not solve the problem. SEO, for example, is not just a matter of installing an SEO plugin; we need to carry out a series of actions to improve performance in search engines. And poorly developed plugins can often harm your application: low-quality solutions are the problem, not the technology itself.

![Available AMP plugins](https://dev-to-uploads.s3.amazonaws.com/i/5lc1ylimhbc4ngee05w9.png)

This is not limited to AMP. Today we have a number of AMP plugins with specific purposes; each plugin requires configuration, and that configuration is crucial to your application's success. Forgetting a Google Tag Manager tag can compromise your reports, and shipping a generic template can indeed affect the user experience. So whenever you choose a plugin:

* Pay attention to the latest reviews
* Test the plugin before putting it in production
* Invest time in the layout of your AMP version
* Check that the plugin is generating valid AMP content
* Track your site's metrics in Search Console

###5 - Is AMP a Google plan to control the web?

This is a personal view. Imagine the following scenario: a delivery company, to help the process, starts providing boxes for packing shipments that speed up delivery, but users can keep using any other kind of box. Its core business is not making boxes, but delivery. Boxes that speed up the process, regardless of supplier, make the service faster, and more people use it.
Of course, the fast-delivery boxes will be prioritized.

Over the last 4 years AMP has often been a controversial subject. When AMP first launched in 2016, the technology's focus was delivering content as fast as possible, and that included removing third-party scripts that blocked page rendering; this caused friction with marketing and tracking tools. Later, as the platform evolved, many of those tools were brought back as components following development best practices.

The second point came when AMP content started to be preloaded in search. Many argue that the technology was given a privileged position, but this was tied to one thing: AMP limits how content is loaded, so AMP pages are lightweight, and that is attractive to search engines. Not only Google but also Bing caches AMP pages that are shown on the first page of search results.

The restriction on JavaScript was another controversial point. JavaScript is crucial to DOM rendering, and the restriction was initially applied as a matter of priority; today it is possible to add JavaScript using the amp-script tag.

We often form opinions based on our own background. JavaScript developers were not happy with this limitation, but we have to admit that web performance is not common knowledge today; ordinary users and small businesses have no access to it, and it is a fundamental factor for success on the internet. AMP is an inclusive tool that tries to solve web performance problems with the lowest possible level of complexity. With AMP we can also build more complex applications, with state management, template reuse, and REST API integration.

AMP is currently an open-source project, originally created by Google and now maintained by the OpenJS Foundation.
Most contributors are still from Google, but the project has more than 380 contributors around the world. If you want to know how to contribute, just visit the page: [Contribution guide](https://amp.dev/pt_br/documentation/guides-and-tutorials/contribute/?format=websites)
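Question 3 above mentions valid structured data. As a reference, here is a minimal sketch of what that markup can look like; every value below is a placeholder, `NewsArticle` is just one of the schema.org types that rich results accept, and a well-configured WordPress AMP plugin will usually generate something like this for you:

```html
<!-- Minimal JSON-LD structured-data sketch; all values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "NewsArticle",
  "headline": "Your post title",
  "image": ["https://example.com/cover.jpg"],
  "datePublished": "2020-02-09T13:40:11Z",
  "author": {"@type": "Person", "name": "Author name"}
}
</script>
```

This block goes in the page's `<head>`, alongside the rest of the AMP boilerplate.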
fellyph
256,739
A little bit about Linux
What is Linux? Linux is an operating system family of open-source Unix-like operating...
0
2020-02-06T14:58:16
https://dev.to/slimcoder/a-little-bit-about-linux-53km
linux, slimcoder
---
firstPublishedAt: 1575117524019
latestPublishedAt: 1575204764523
slug: a-little-bit-about-linux
title: A little bit about Linux
published: true
tags: linux , slimcoder
cover_image: https://cdn-images-1.medium.com/max/1280/1*P29IoCDYKw7Tu4UZMJjUZg.png
---

# What is Linux?

Linux is a family of open-source **Unix-like** operating systems based on the **Linux kernel**, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged in a **distribution** (often abbreviated as distro).

From the above definition, keep in mind these points; I will explain them to you one by one.

1. Unix-Like
1. Kernel
1. Distribution (often abbreviated as distro)

# Linux ❤ Unix

Linux is a clone of Unix. This does not mean that Linux uses the codebase of UNIX, but that they are the same in terms of functionality and they both use bash commands.

If you want to use Linux, learning Unix is not compulsory; but if you are using Linux you are definitely using Unix conventions without knowing it. So stick with Linux: you don't need to learn Unix separately, because Linux bash commands are nearly the same as Unix's.

Unix is an operating system as a whole (it contains utilities, shell, kernel), while Linux is a family of operating systems with different types of bases; each base has different distributions, and each distribution works on top of the Linux kernel. got it?

# What is Kernel?

### It is a computer program that controls system calls, the CPU, memory, device drivers, etc. in your operating system.

**BONUS # 1:** The Linux kernel codebase is one of the largest open-source codebases on this planet and can be found at [https://git.kernel.org/](https://git.kernel.org/). They don't use github/gitlab/bitbucket or any other version-control hosting platform because those are very slow at merging huge PR's.
Linus Torvalds is also the creator of git; you can also find the git codebase on [https://git.kernel.org/](https://git.kernel.org/).

**BONUS # 2:** If you want to learn more about the Linux kernel, here is the official documentation [https://www.kernel.org/doc/html/latest/](https://www.kernel.org/doc/html/latest/); this documentation is maintained in [https://git.kernel.org/](https://git.kernel.org/).

# What is a Linux Distribution?

### A **Linux distribution** (often abbreviated as **distro**) is an operating system made from a software collection, which is based upon the Linux kernel and, often, a **package management system**.

This means that each Linux distro is mostly the same as the others; the differences are in the **package management system**, utility software and UI.

> **Package Management System:** It is a collection of software tools that automates the process of installing, upgrading, configuring, and removing computer programs for a computer's operating system in a consistent manner, e.g. yum, Pacman, APT, APK, dpkg, etc. Each package manager installs packages from its own repositories.

According to the above definition, we understand that all Linux distros are much the same; the differences are in the package management system and the use-case of the distro. Suppose you want to use Linux on your personal computer: you would choose (Linux Mint, Ubuntu); if you want to use Linux for ethical hacking and penetration testing, you would choose (Kali Linux). But the thing is that Mint, Ubuntu and Kali are all Debian-based Linux distros; they use the same package management system, i.e. APT front-ends (front-ends simplify the installation process), for installing packages.

So the Linux community divides distros by their base. Here is a list of the different Linux bases; each base has a number of distros that use the same package management system as the base from which they are extended.
Don't worry, they all use the same bash commands; the differences are in the UI, utility software, and package management commands.

#### - Different Linux Distro Bases

**RPM-based:** The default command and package manager is **rpm**, but several front-ends to RPM ease the process of obtaining and installing RPMs from repositories and help in resolving their dependencies. These include:

- **yum** used in Fedora, CentOS 5 and above, Red Hat Enterprise Linux 5 and above, Scientific Linux, Yellow Dog Linux and Oracle Linux
- **DNF**, introduced in Fedora 18 (default since 22), and Red Hat Enterprise Linux 8.
- **up2date** used in Red Hat Enterprise Linux, CentOS 3 and 4, and Oracle Linux
- **Zypper** used in Mer (and thus Sailfish OS), MeeGo, openSUSE and SUSE Linux Enterprise
- **urpmi** used in Mandriva Linux, ROSA Linux, and Mageia
- **apt-rpm**, a port of Debian's Advanced Packaging Tool (APT) used in Ark Linux, PCLinuxOS and ALT Linux
- **Smart Package Manager**, used in Unity Linux, available for many distributions including Fedora.
- **rpmquery**, a command-line utility available in Red Hat Enterprise Linux

**Debian-based:** The default command and package manager is **dpkg**, but you can use other front-ends on top of it; I mostly use **APT** for installing packages.

**Pacman-based:** The default command and package manager is **pacman**, but you can use other front-ends as well.

**Gentoo-based:** It is a distribution designed to have highly optimized and frequently updated software. Distributions based on Gentoo use the Portage package management system with **emerge** or one of the alternative package managers.
**Slackware-based:** The default command and package manager is **slackpkg**.

**Independent:** These are independent, not in any Linux base; every distro in this category uses its own package manager. For example, Alpine Linux (widely used inside Docker containers) is in this category and uses the **APK** package manager; Android is also in this category and is **APK-based** too.

#### Which Linux Distro Should You Choose?

Now, after learning this stuff, your mind may be stuck in a dilemma of Linux distro selection. See below for what you should do before working with a Linux distro.

### Select a Linux distro according to your needs and start working with it.

Suppose you want to do programming on your personal computer: select Ubuntu or Mint. If you want to do hacking/penetration testing, select Kali. If you care about security, select Alpine. If you are doing cloud computing, then Ubuntu is for you.

**BONUS # 3:** If you want to choose a distro according to your needs and you are unable to find it online, check this [website](https://distrochooser.de/en): fill in the form and it will guide you to a distro suitable for your needs.

Follow me AKA #slimcoder on [medium](https://medium.com/@vvkofficial) and [github](https://github.com/viveksharmaui) for more articles ❤
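To make the package-manager differences above concrete, here is a hedged shell sketch; the helper function and the choice of `git` as the example package are mine:

```shell
# Print the install command for one example package (git) per distro base.
pick_install() {
  case "$1" in
    debian)  echo "apt install git" ;;     # Ubuntu, Mint, Kali
    rpm)     echo "dnf install git" ;;     # Fedora, RHEL 8
    pacman)  echo "pacman -S git" ;;       # Arch-family distros
    alpine)  echo "apk add git" ;;         # Alpine (independent)
    *)       echo "unknown base" ;;
  esac
}

pick_install debian   # prints: apt install git
```

Same package, same bash everywhere; only the package-manager command changes between bases.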
slimcoder
256,770
Goroutines demystified
In programming languages, code is often split up into functions. Functions help to make code reusable...
0
2020-02-06T16:03:53
https://dev.to/bluepaperbirds/goroutines-demystified-49eb
go, beginners
In programming languages, code is often split up into functions. Functions help to make code reusable, extensible etc. In <a href="https://golang.org/">Go</a> there's a special case: goroutines. So is a goroutine a function? Not exactly. A goroutine is a lightweight thread managed by the Go runtime. If you call a function f like this: `f(x)` it's a normal function. But if you call it like this: `go f(x)` it's a goroutine. It is then started <a href="https://golangr.com/concurrency/">concurrently</a>. If you are new to Go, you can use the <a href="https://play.golang.org/">Go playground</a>.

## Goroutine examples

Try this simple program below:

```go
package main

import (
	"fmt"
	"time"
)

func say(s string) {
	for i := 0; i < 3; i++ {
		time.Sleep(100 * time.Millisecond)
		fmt.Println(s)
	}
}

func main() {
	go say("thread")
	say("hello")
}
```

Execute it with:

```
go run example.go
```

So while `say("hello")` is executed synchronously, `go say("thread")` runs asynchronously.

Consider this program:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	go fmt.Println("Hi from goroutine")
	fmt.Println("function Hello")
	time.Sleep(time.Second) // goroutine needs time to finish
}
```

Then when I ran it:

```
function Hello
Hi from goroutine
Program exited.
```

As expected, the goroutine (thread) started later.

**Related links:**

* <a href="https://golangr.com/goroutines/">More on Goroutines</a>
* <a href="https://golangr.com/">Learn Go</a>
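One caveat about the `time.Sleep` trick in the examples above: sleeping only *hopes* the goroutine has finished. A more robust pattern from the standard library is `sync.WaitGroup`, which blocks until every goroutine reports done. A small sketch (the helper name `runWorkers` is my own, not from the article):

```go
package main

import (
	"fmt"
	"sync"
)

// runWorkers starts n goroutines and blocks until all of them have
// finished, collecting one result per worker.
func runWorkers(n int) []string {
	var wg sync.WaitGroup
	results := make([]string, n)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// each goroutine writes to its own slice index,
			// so no extra locking is needed here
			results[id] = fmt.Sprintf("worker %d done", id)
		}(i)
	}
	wg.Wait() // block until every goroutine has called Done
	return results
}

func main() {
	for _, r := range runWorkers(3) {
		fmt.Println(r)
	}
}
```

`wg.Wait()` replaces the arbitrary sleep, so the program exits exactly when the work is done.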
bluepaperbirds
256,837
Article recommendation: Goodbye, Clean Code
Today I read this article, written by one of the devs who work on React.js, Dan Abramov. "Goodbye, C...
0
2020-02-06T17:55:33
https://dev.to/sarahraqueld/recomendacao-de-artigo-goodbye-clean-code-e0k
Today I read this article, written by one of the devs who build React.js, Dan Abramov. "Goodbye, Clean Code" is its title. In it, Dan argues, with a very practical example, that clean code is not a goal in itself.

"Writing software is a journey. Think how far you have come from your first line of code to where you are now. I reckon it was a joy to see for the first time how extracting a function or refactoring a class can simplify convoluted code. If you take pride in your craft, it is tempting to pursue cleanliness in code."

The article is quite short, a 5-minute read (for those who read English easily), and I recommend it especially to anyone who has already written a few lines of code before.

https://overreacted.io/goodbye-clean-code/
sarahraqueld
256,972
Successful status codes
listing of successful status codes and their descriptions
4,713
2020-02-06T21:01:28
https://dev.to/hexangel616/successful-status-codes-3jm
webdev, programming, productivity, softwareengineering
---
title: Successful status codes
published: true
description: listing of successful status codes and their descriptions
tags: webdev, programming, productivity, softwareengineering
series: HTTP Status Code Cheatsheet
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/lx29f99tcim0f26igooz.jpeg
---

*Preface:* An HTTP status code is issued by the server in response to a client's request made to the server. The five status code response classes are **informational**, **successful**, **redirection**, **client error** and **server error**.

---
---
---

## Successful status codes

Successful status codes are returned when the browser request was successfully received, understood and processed by the server.

### Table Of Contents

* [200 OK](#ok)
* [201 Created](#created)
* [202 Accepted](#accepted)
* [203 Non-Authoritative Information](#non-authoritative-information)
* [204 No Content](#no-content)
* [205 Reset Content](#reset-content)
* [206 Partial Content](#partial-content)
* [207 Multi-Status (WebDAV)](#multi-status)
* [208 Already Reported (WebDAV)](#already-reported)
* [226 IM Used](#im-used)

---

#### `200 OK` <a name="ok"></a>

*Definition:* The request has succeeded. Its response is cacheable.

The meaning of this response depends on the HTTP request method the server received: For `GET` requests, the resource has been fetched and is transmitted in the message body; for `HEAD` requests, the entity headers are in the message body; for `POST` requests, the resource describing the result of the action is transmitted in the message body; for `TRACE` requests, the message body contains the request message as received by the server.

For `PUT` and `DELETE` requests, a successful request is often not a `200`, but a `204 No Content`.

`200 OK` is defined in [RFC 7231](https://tools.ietf.org/html/rfc7231#section-6.3.1).

---

#### `201 Created` <a name="created"></a>

*Definition:* The request has been fulfilled and resulted in a new resource being created.
Successful creation occurred (via either `POST` or `PUT`). The new resource is returned in the body of the message, its location being either the URL of the request, or the content of the Location header.

`201 Created` is defined in [RFC 7231](https://tools.ietf.org/html/rfc7231#section-6.3.2).

---

#### `202 Accepted` <a name="accepted"></a>

*Definition:* The request has been accepted for processing, but the processing has not been completed.

It is non-committal, meaning that there is no way for HTTP to later send an asynchronous response indicating the outcome of processing the request. It is intended for cases where another process or server handles the request, or for batch processing.

`202 Accepted` is defined in [RFC 7231](https://tools.ietf.org/html/rfc7231#section-6.3.3).

---

#### `203 Non-Authoritative Information` <a name="non-authoritative-information"></a>

*Definition:* The request was successful but is returning a modified version of the origin's response resulting in an enclosed payload that has been modified by a transforming proxy from that of the origin server's `200 OK` response.

The `203 Non-Authoritative Information` response is similar to the value `214 Transformation Applied` of the `Warning` header code, which has the additional advantage of being applicable to responses with any status code.

`203 Non-Authoritative Information` is defined in [RFC 7231](https://tools.ietf.org/html/rfc7231#section-6.3.4).

---

#### `204 No Content` <a name="no-content"></a>

*Definition:* The request has succeeded, but the client doesn't need to go away from its current page.

This response is cacheable by default and includes an `ETag` header.

The common use case is to return `204 No Content` as a result of a `PUT` request, updating a resource, without changing the current content of the page displayed to the user. If the resource is created, `201 Created` is returned instead.
If the page should be changed to the newly updated page, the `200 OK` should be used instead.

`204 No Content` is defined in [RFC 7231](https://tools.ietf.org/html/rfc7231#section-6.3.5).

---

#### `205 Reset Content` <a name="reset-content"></a>

*Definition:* The request was successfully processed by the server, but is not returning any content.

Similar to `204 No Content`, but lets the client know that it should reset the document view.

`205 Reset Content` is defined in [RFC 7231](https://tools.ietf.org/html/rfc7231#section-6.3.6).

---

#### `206 Partial Content` <a name="partial-content"></a>

*Definition:* The request has succeeded, but the server is delivering only part of the resource due to a `Range` header sent by the client.

If there is only one range, the `Content-Type` of the whole response is set to the type of the document, and a `Content-Range` is provided. If several ranges are sent back, the `Content-Type` is set to `multipart/byteranges` and each fragment covers one range, with `Content-Range` and `Content-Type` describing it.

`206 Partial Content` is defined in [RFC 7233](https://tools.ietf.org/html/rfc7233#section-4.1).

---

#### `207 Multi-Status (WebDAV)` <a name="multi-status"></a>

*Definition:* The request has succeeded and the body of the server's response is by default an XML message which can contain a number of separate response codes, depending on how many sub-requests were made.

Useful for situations where multiple status codes might be appropriate.

`207 Multi-Status (WebDAV)` is defined in [RFC 4918](https://www.iana.org/go/rfc4918).

---

#### `208 Already Reported (WebDAV)` <a name="already-reported"></a>

*Definition:* The request has succeeded and contains a collection: Only one element will be reported with a `200 OK` status. The others will be reported with a `208` status inside the response element `<dav:propstat>` to avoid repeatedly enumerating the internal members of multiple bindings to the same collection.
`208 Already Reported (WebDAV)` is defined in [RFC 5842](https://www.iana.org/go/rfc5842).

---

#### `226 IM Used` <a name="im-used"></a>

The request has succeeded and the response sent by the server is a representation of the result of one or more instance-manipulations applied to the current instance.

`226 IM Used` is defined in [RFC 3229](https://www.iana.org/go/rfc3229).

---
---
---

Unofficial and customized non-standard responses defined by server software are not included in the list above.

Resources:

- [Hypertext Transfer Protocol (HTTP) Status Code Registry](https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml)
- [Mozilla Developer Network](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status)
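As a practical footnote to the five response classes named in the preface, here is a small sketch that maps a numeric code to its class. The function name `statusClass` is my own illustrative choice, not part of any library:

```javascript
// Sketch: classify an HTTP status code into one of the five response
// classes described in the preface.
function statusClass(code) {
  if (code >= 100 && code < 200) return "informational";
  if (code >= 200 && code < 300) return "successful";
  if (code >= 300 && code < 400) return "redirection";
  if (code >= 400 && code < 500) return "client error";
  if (code >= 500 && code < 600) return "server error";
  return "unknown";
}

console.log(statusClass(204)); // "successful"
```

Every code in this cheatsheet (200 through 226) lands in the "successful" class.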
hexangel616
257,002
Globally Style the Gatsby Default Starter with styled-components v5
Photo by Jeremy Bishop on Unsplash I'm going to go over globally styling the Gatsby...
0
2020-02-07T13:50:00
https://scottspence.com/2020/02/06/globally-style-gatsby-styled-components/
javascript, webdev, tutorial, gatsby
---
title: Globally Style the Gatsby Default Starter with styled-components v5
published: true
tags: javascript, webdev, tutorial, gatsby
canonical_url: https://scottspence.com/2020/02/06/globally-style-gatsby-styled-components/
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/6me62w9r1ku6mah9sfb5.jpg
---

###### Photo by Jeremy Bishop on Unsplash

I'm going to go over globally styling the Gatsby Default Starter with styled-components v5. I've done this in the [past with styled-components v4] but I've changed my approach and want to document it.

I'll be swapping out the styles included with a CSS reset and adding in global style with the styled-components `createGlobalStyle` helper function and also adding in the styled-components theme provider.

To start I'll make a new Gatsby project using npx:

```bash
npx gatsby new gatsby-starter-styled-components
```

## Install styled-components dependencies

I'm using yarn to install my dependencies, the backslash is to have the packages on multiple lines instead of one long line:

```bash
yarn add gatsby-plugin-styled-components \
  styled-components \
  babel-plugin-styled-components \
  styled-reset
```

## Configure `gatsby-plugin-styled-components` and `createGlobalStyle`

Pop `gatsby-plugin-styled-components` into the `gatsby-config.js` file `plugins` array:

```js
plugins: [
  `gatsby-plugin-styled-components`,
  `gatsby-plugin-react-helmet`,
  {
```

Now I'm going to create a `global-style.js` file in a new directory `src/theme` then import the styled-components helper function `createGlobalStyle` into that, this is where the styles for the site are going to live now. Create the dir and file with the terminal command:

```bash
mkdir src/theme && touch src/theme/global-style.js
```

The base styles go in here, along with the `styled-reset`. To start with I'll create the `GlobalStyle` object and add in the reset.
```jsx
import { createGlobalStyle } from 'styled-components';
import reset from 'styled-reset';

export const GlobalStyle = createGlobalStyle`
  ${reset}
`;
```

## Remove current styling

Remove the current styling that is used in the `<Layout>` component, it's the `import './layout.css'` line. I'll also delete the `layout.css` file as I'm going to be adding in my styles.

```jsx
import { graphql, useStaticQuery } from 'gatsby';
import PropTypes from 'prop-types';
import React from 'react';
import Header from './header';
import './layout.css';
```

Now the site has the base browser default styles, time to add in my own styles. Before that I'm going to confirm the reset is doing its thing.

## Confirm CSS reset

Now I have the base browser styles, I'm going to confirm the CSS reset in the `<Layout>` component. This is where I've removed the previous global styles (`layout.css`) from.

```jsx
import { graphql, useStaticQuery } from "gatsby"
import PropTypes from "prop-types"
import React from "react"
import { GlobalStyle } from "../theme/global-style"
import Header from "./header"

const Layout = ({ children }) => {
  // static query for the data here
  return (
    <>
      <Header siteTitle={data.site.siteMetadata.title} />
      <div
        style={{
          margin: `0 auto`,
          maxWidth: 960,
          padding: `0 1.0875rem 1.45rem`,
        }}
      >
        <GlobalStyle />
        <main>{children}</main>
        <footer>
```

In the code example here 👆 I've removed the `useStaticQuery` hook for readability.

![reset page](https://dev-to-uploads.s3.amazonaws.com/i/3jtkgg77lhbnwp0bne2e.png)

Ok, cool, looks pretty reset to me!

## Create the new browser base styles

Time to add in some more styles to the global style. First, the `box-sizing` reset, take a look at the CSS Tricks post on [Box Sizing] for a great explanation of why we do this.
```jsx
import { createGlobalStyle } from 'styled-components';
import reset from 'styled-reset';

export const GlobalStyle = createGlobalStyle`
  ${reset}
  *, *:before, *:after {
    box-sizing: border-box;
  }
  html {
    box-sizing: border-box;
  }
`;
```

Then I'm adding in the smooth scroll html property and some additional styles for base font size and colour along with base line height, letter spacing and background colour.

```jsx
import { createGlobalStyle } from 'styled-components';
import reset from 'styled-reset';

export const GlobalStyle = createGlobalStyle`
  ${reset}
  *, *:before, *:after {
    box-sizing: border-box;
  }
  html {
    box-sizing: border-box;
    scroll-behavior: smooth;
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
    font-size: 16px;
    color: #1a202c;
  }
  body {
    line-height: 1.5;
    letter-spacing: 0;
    background-color: #f7fafc;
  }
`;
```

## Place `GlobalStyle` at the top of the React tree 🌳

I'm adding this as high up the component tree as possible so that the global styles will affect everything that is 'underneath' it. In the case of the Gatsby Default Starter you'll notice that the `<Layout>` component wraps the `index.js` page, `page-2.js` and the `404.js` page so adding the `<GlobalStyle />` component here is a sound option.

There is an alternative to adding it to the layout and that is to use the Gatsby Browser and Gatsby SSR API [wrapRootElement].

If I add in the following code to `gatsby-browser.js` the styles are applied.

```js
import React from 'react';
import Layout from './src/components/layout';
import { GlobalStyle } from './src/theme/global-style';

export const wrapRootElement = ({ element }) => (
  <>
    <GlobalStyle />
    <Layout>{element}</Layout>
  </>
);
```

I also have a double header, that's because the layout component is still wrapping the index page, page 2 and the 404 page. I'll remove the layout component from those locations so I have it in one place to manage.
## Make a Root Wrapper to keep things DRY 🌵

I also need to add the same code into `gatsby-ssr.js` so that the styles are rendered on the server when the site is built. Rather than have the code duplicated in the two files I'll create a `root-wrapper.js` file _(you can call it what you like!)_ and add it to the root of the project. I'll import that into both the `gatsby-browser.js` and `gatsby-ssr.js` files:

```js
import { wrapRootElement as wrap } from './root-wrapper';

export const wrapRootElement = wrap;
```

## Global fonts with `gatsby-plugin-google-fonts`

Onto the main reason for this post: with the [v5 release] of styled-components the use of `@imports` in `createGlobalStyle` isn't working (that approach is [detailed here]), and it's recommended that you embed these into your HTML index file, etc.

> NOTE: At this time we recommend not using @import inside of `createGlobalStyle`. We're working on better behavior for this functionality but it just doesn't really work at the moment and it's better if you just embed these imports in your HTML index file, etc.

But! As I'm using Gatsby, of course, _**"There's a Plugin For That™️"**_ so I'm going to use `gatsby-plugin-google-fonts` for this. I'm using this in place of `gatsby-plugin-web-font-loader` because it uses `display=swap`.

```bash
yarn add gatsby-plugin-google-fonts
```

To configure it, I'll add three fonts (a sans serif, a serif and a monospace) to the Gatsby plugin array in `gatsby-config.js`:

```js
{
  resolve: `gatsby-plugin-google-fonts`,
  options: {
    fonts: [
      `cambay\:400,700`,
      `arvo\:400,700`,
      `ubuntu mono\:400,700`,
    ],
    display: 'swap',
  },
},
```

I can now use these fonts throughout my site.

## styled-components Theme provider

The [styled-components ThemeProvider] is a great solution for managing your styles throughout a project.
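To make the mechanics concrete: a `${({ theme }) => …}` interpolation inside a styled-components template is just a function of the component's props, and the `ThemeProvider` injects `theme` into those props. A tiny standalone sketch of that resolution (no styled-components required; the names here are illustrative, not from the library):

```javascript
// Illustrative sketch of how a styled-components interpolation resolves:
// an interpolation is a function of props, and ThemeProvider supplies
// the `theme` key on those props.
const theme = {
  font: { sans: 'Cambay, sans-serif' },
  fontSize: { base: '1rem', lg: '1.125rem' },
};

// Same shape as the interpolations used in a GlobalStyle template:
const fontSizeRule = ({ theme }) => theme.fontSize.lg;

// styled-components would call it with the themed props:
console.log(`font-size: ${fontSizeRule({ theme })};`); // font-size: 1.125rem;
```

That's all there is to it: the theme object is plain data, and each CSS rule pulls the value it needs from it.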
Part of the inspiration for my approach came from Sid's talk at React Advanced which [I wrote about] and part from watching the Tailwind CSS courses from Adam Wathan on Egghead.io, check out the playlist here: [Introduction to Tailwind and the Utility first workflow]

With the ThemeProvider I can have things like colours, sizes, font weights in one place so that there is a consistent set of presets to choose from when styling.

In the `global-style.js` file I'm creating a theme object to hold all the values for the theme. For the font I'll add in the types I defined in the Gatsby config, for serif, sans serif and monospace.

```jsx
import { createGlobalStyle } from 'styled-components';
import reset from 'styled-reset';

export const theme = {
  font: {
    sans: 'Cambay, sans-serif',
    serif: 'Arvo, serif',
    monospace: '"Ubuntu Mono", monospace',
  },
};

export const GlobalStyle = createGlobalStyle`
  ${reset}
  *, *:before, *:after {
    box-sizing: border-box;
  }
  html {
    box-sizing: border-box;
    scroll-behavior: smooth;
    font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
    font-size: 16px;
    color: #1a202c;
  }
  body {
    line-height: 1.5;
    letter-spacing: 0;
    background-color: #f7fafc;
  }
`;
```

Now I'll need to add the `<ThemeProvider>` high up in the React render tree, same as with the global style, so I'll add it to the `root-wrapper.js` file.

```jsx
import React from 'react';
import { ThemeProvider } from 'styled-components';
import Layout from './src/components/layout';
import { GlobalStyle, theme } from './src/theme/global-style';

export const wrapRootElement = ({ element }) => (
  <ThemeProvider theme={theme}>
    <GlobalStyle />
    <Layout>{element}</Layout>
  </ThemeProvider>
);
```

When I want to pick a font type to use in the project I can use the `theme` object and pick out the desired type.
Like setting the HTML font-family to sans serif:

```jsx
export const GlobalStyle = createGlobalStyle`
  ${reset}
  *, *:before, *:after {
    box-sizing: border-box;
  }
  html {
    box-sizing: border-box;
    scroll-behavior: smooth;
    font-family: ${({ theme }) => theme.font.sans};
    font-size: 16px;
    color: #1a202c;
  }
  body {
    line-height: 1.5;
    letter-spacing: 0;
    background-color: #f7fafc;
  }
`;
```

The base font is now set to Cambay. Why stop there though, I'll bring in some font sizes and font weights from the [Tailwind full config] and add them to the `theme` object.

```jsx
import { createGlobalStyle } from 'styled-components';
import reset from 'styled-reset';

export const theme = {
  font: {
    sans: 'Cambay, sans-serif',
    serif: 'Arvo, serif',
    monospace: '"Ubuntu Mono", monospace',
  },
  fontSize: {
    xs: '0.75rem',
    sm: '0.875rem',
    base: '1rem',
    lg: '1.125rem',
    xl: '1.25rem',
    '2xl': '1.5rem',
    '3xl': '1.875rem',
    '4xl': '2.25rem',
    '5xl': '3rem',
    '6xl': '4rem',
  },
  fontWeight: {
    hairline: '100',
    thin: '200',
    light: '300',
    normal: '400',
    medium: '500',
    semibold: '600',
    bold: '700',
    extrabold: '800',
    black: '900',
  },
};

export const GlobalStyle = createGlobalStyle`
  ${reset}
  *, *:before, *:after {
    box-sizing: border-box;
  }
  html {
    box-sizing: border-box;
    scroll-behavior: smooth;
    font-family: ${({ theme }) => theme.font.sans};
    font-size: ${({ theme }) => theme.fontSize.lg};
    color: #1a202c;
  }
  body {
    line-height: 1.5;
    letter-spacing: 0;
    background-color: #f7fafc;
  }
`;
```

I'll add the base font at `.lg` (`1.125rem`). I'll also add in line height and letter spacing defaults, but I'll save adding the whole config here to save you a codewall, you get the idea though, right? Here's the rest of the GlobalStyle with defaults applied.
```jsx
export const GlobalStyle = createGlobalStyle`
  ${reset}
  *, *:before, *:after {
    box-sizing: border-box;
  }
  html {
    box-sizing: border-box;
    scroll-behavior: smooth;
    font-family: ${({ theme }) => theme.font.sans};
    font-size: ${({ theme }) => theme.fontSize.lg};
    color: ${({ theme }) => theme.colours.grey[900]};
  }
  body {
    line-height: ${({ theme }) => theme.lineHeight.relaxed};
    letter-spacing: ${({ theme }) => theme.letterSpacing.wide};
    background-color: ${({ theme }) => theme.colours.white};
  }
`;
```

## Shared Page Elements

The current page is still missing basic styles for `h1` and `p` so I'm going to create those in a new directory `src/components/page-elements`

```bash
mkdir src/components/page-elements
touch src/components/page-elements/h1.js
touch src/components/page-elements/p.js
```

And add some base styles to those for `h1`:

```jsx
import styled from 'styled-components';

export const H1 = styled.h1`
  font-size: ${({ theme }) => theme.fontSize['4xl']};
  font-family: ${({ theme }) => theme.font.serif};
  margin-top: ${({ theme }) => theme.spacing[8]};
  line-height: ${({ theme }) => theme.lineHeight.none};
`;
```

And the same sort of thing for the `p`:

```jsx
import styled from 'styled-components';

export const P = styled.p`
  font-size: ${({ theme }) => theme.fontSize.base};
  margin-top: ${({ theme }) => theme.spacing[3]};
  strong {
    font-weight: bold;
  }
  em {
    font-style: italic;
  }
`;
```

Then it's a case of replacing the `h1`'s and `p`'s in the project to use the styled components.
Here's the `index.js` file as an example:

```jsx
import { Link } from 'gatsby';
import React from 'react';
import Image from '../components/image';
import { H1 } from '../components/page-elements/h1';
import { P } from '../components/page-elements/p';
import SEO from '../components/seo';

const IndexPage = () => (
  <>
    <SEO title="Home" />
    <H1>Hi people</H1>
    <P>Welcome to your new Gatsby site.</P>
    <P>Now go build something great.</P>
    <div style={{ maxWidth: `300px`, marginBottom: `1.45rem` }}>
      <Image />
    </div>
    <Link to="/page-2/">Go to page 2</Link>
  </>
);

export default IndexPage;
```

## Export all your styled-components from an index file

As the number of page elements grows you may want to consider using an `index.js` file: instead of having an import for each individual component you can import from one file. Let's take a quick look at that. Say I import the `h1` and `p` into a file, it'll look something like this:

```jsx
import { H1 } from '../components/page-elements/h1';
import { P } from '../components/page-elements/p';
```

If you have several elements you're using in the file the imports could get a bit cluttered. I've taken to creating an `index.js` file that will export all the components, like this:

```jsx
export * from './h1';
export * from './p';
```

Then when importing the components it will look like this:

```jsx
import { H1, P } from '../components/page-elements';
```

That's it for this one!

## 📺 Here’s a video detailing the process

{% youtube 8jlh_FyuM8c %}

## Thanks for reading 🙏

Please take a look at my other content if you enjoyed this.

Follow me on [Twitter] or [Ask Me Anything] on GitHub.
## Resources

- [Design Systems Design System - Siddharth Kshetrapal]
- [Tailwind full config]
- [Introduction to Tailwind and the Utility first workflow]
- [Design and Implement Common Tailwind Components]
- [Build a Responsive Navbar with Tailwind]
- [Build and Style a Dropdown in Tailwind]

<!-- Links -->

[past with styled-components v4]: https://thelocalhost.blog/2018/11/29/gatsby-starter-to-styled-components/
[wraprootelement]: https://www.gatsbyjs.org/docs/browser-apis/#wrapRootElement
[v5 release]: https://styled-components.com/releases#v5.0.0
[detailed here]: https://thelocalhost.blog/2018/11/29/gatsby-starter-to-styled-components/#3-global-style
[box sizing]: https://css-tricks.com/box-sizing/#article-header-id-3
[styled-components themeprovider]: https://styled-components.com/docs/api#themeprovider
[introduction to tailwind and the utility first workflow]: https://egghead.io/playlists/introduction-to-tailwind-and-the-utility-first-workflow-0b697b10
[design and implement common tailwind components]: https://egghead.io/playlists/design-and-implement-common-tailwind-components-8fbb9b19
[build a responsive navbar with tailwind]: https://egghead.io/playlists/build-a-responsive-navbar-with-tailwind-4d328a35
[build and style a dropdown in tailwind]: https://egghead.io/playlists/build-and-style-a-dropdown-in-tailwind-7f34fead
[i wrote about]: https://thelocalhost.blog/2019/11/06/react-advanced-london-2019/#siddharth-kshetrapal---design-systems-design-system
[design systems design system - siddharth kshetrapal]: https://www.youtube.com/watch?v=Dd-Y9K7IKmk&feature=emb_title
[tailwind full config]: https://github.com/tailwindcss/designing-with-tailwindcss/blob/master/01-getting-up-and-running/07-customizing-your-design-system/tailwind-full.config.js
[twitter]: https://twitter.com/spences10
[ask me anything]: https://github.com/spences10/ama
spences10
257,128
What can you do in one year ?
Where do you see in one year? I asked myself this question last year, because at the time I felt I wa...
0
2020-02-07T06:40:51
https://dev.to/hectorcaac/what-can-you-do-in-one-year-djc
webdev, portafolio, oneyear, herewego
Where do you see yourself in one year? I asked myself this question last year because, at the time, I felt lost. Let me explain why.

In December of 2018 I graduated from college, but at the time I didn't know what kind of software developer I wanted to be. I knew how to code, but to be honest I didn't have any in-demand skills (no React, Node, Android, Swift, Spring, or anything similar). I knew basic JavaScript, HTML and CSS, plus some Python and Java. I was really good at algorithms and data structures, I could create my own merge sort algorithm (hopefully I still can). I also played with some AI, some image processing and some data mining as well, but I didn't know what to do, until I was able to decide.

Last year, around February/April, I decided that I wanted to be a web developer. But then I had to choose between being a front-end developer or a back-end developer. After some time I found out that I really enjoyed both.

The back end is challenging. You have to deal with databases, sessions, user authentication, user authorization, third-party APIs, asynchronous tasks and more. It feels like being the architect of a building that only a few are lucky to see.

On the other hand, the front end is more creative. (Don't get me wrong, you still need to be a really logical person.) You can choose the visual structure of the website, like what colors to use, what effects to apply, and how the page will look on different devices. In other words, working on the front end can make you feel like a Picasso (except when you try to arrange something in CSS; in that case I feel like Alice in Wonderland, at least that is my case. Praise the gods for grid and flexbox.)

When I decided I wanted to become a web developer I started to learn as much as I could about it. After 2 weeks I thought I was ready to create my own website (I was wrong); anyhow, I somehow made it look presentable.
Here is a link if you want to see it: http://first.hacasoftware.com/

## Current situation

After 8 months I thought it was time to update my website, but this time I would use some of the tools I use on a daily basis, and it also seemed like a good idea to try to learn a new technology.

For the past month I have been playing with Gatsby. At the beginning I didn't see what the big deal was, but after playing with it for a while I started to see why it is a good tool to learn. Once I was pretty decent with it, I decided to set myself a little challenge: create a new website in less than half the time it took me the first time. After a week and a half I was able to create a presentable website. It still needs a lot of work and I will keep updating it. But if I compare the old website with the new one, I prefer the new one. It has a better look, the user interface is much better, and the images load a lot faster (thanks, Gatsby).

## Conclusion

One year may look like a lot of time, but in reality it is not. You will be surprised how fast it goes. But if you keep learning something new every day, you will surprise yourself: last year I never thought I was going to become a web developer (I still need to learn a lot, and I'm okay knowing that it will take some time before I become an expert), but for sure I am excited about what the future can bring.
hectorcaac
257,163
Answer: Angular-Chart not rendering anything
answer re: Angular-Chart not renderin...
0
2020-02-07T08:00:27
https://dev.to/basantkumarpogeyan/answer-angular-chart-not-rendering-anything-889
{% stackoverflow 53361017 %}
basantkumarpogeyan
263,782
The Road Map: 2020
Image by Hello I'm Nik 🇬🇧 Like I talked about in my last post I decided not to set resolutions for t...
0
2020-02-19T04:36:23
https://www.thatamazingprogrammer.com/blog/the-road-map-2020/
2020, career
---
title: "The Road Map: 2020"
published: true
date: 2020-02-18 13:00:00 UTC
tags: 2020, career
cover_image: https://res.cloudinary.com/programazing/image/upload/v1578105431/blogposts/2020/The%20Roadmap%202020/The_Roadmap_2020.jpg
canonical_url: https://www.thatamazingprogrammer.com/blog/the-road-map-2020/
---

_Image by [Hello I'm Nik 🇬🇧](https://unsplash.com/photos/YiRQIglwYig)_

Like I talked about in my [last post](https://www.thatamazingprogrammer.com/blog/looking-to-the-future-my-2020/) I decided not to set resolutions for this year but to instead lay out a roadmap. In the past, I would normally set out some [S.M.A.R.T. Goals](https://www.mindtools.com/pages/article/smart-goals.htm) and then write quarterly update posts to let people know how I was doing. I still plan on doing quarterly reviews as a way to keep me on track and to let you know what's going on.

This year I'm only focusing on two overarching goals; my health and my career. In this post, I'm going to lay out the overall technical topics I plan to focus on but I'm not going to go super in-depth. Later this month I'll put out another blog post detailing how I plan to use the roadmap and how I plan to keep myself on task. I actually put this post off for a while because in January I was bombarded by idea after idea on how to implement a learning year and how to keep track of everything. I eventually decided that I was trying to do too much with this post and split them up.

* C# Fundamentals
* API Design
* Linq
* Entity Framework
* SQL
* TDD
* JavaScript
* Design Patterns
* Azure
* Unit Testing

## C#

I plan on focusing on the basics this year which to me means things like:

* Dependency Injection
* When to use an Interface
* How to design an Interface
* How to Refactor
* How the Build Pipeline works
* Extension Methods vs Statics
* Statics vs Instance Methods
* Etc.

## API Design

I'm sure I could improve on how I lay out my APIs and how they perform.
I've come to realize I really like working with APIs and I just want to be better at it. I plan on figuring out some more specifics soon but that's all I have for now.

## Linq

I've used Linq for years but I plan on doing a deep dive into how certain features work. Notably, I'll be doing a blog post about how __every__ [enumerable method](https://docs.microsoft.com/en-us/dotnet/api/system.linq.enumerable?view=netframework-4.8) works.

## Entity Framework

I've come to the conclusion that if you're going to use Entity Framework you should really have a better idea about how things work under the hood and how E.F. decides to build queries and perform optimizations.

## SQL

While interviewing recently I've come to realize that I rely on E.F. way too much in my career. That's because it's in almost every major application I've ever worked on, whether I've designed the application or it was pre-existing. This year I plan on reviewing the basics of SQL that I already know and build on that foundation by using Dapper in my smaller applications.

## TDD

The more I use TDD the more I realize how amazing it can be, for me anyway. Even after I've designed the basic layout of a class on paper, TDD helps me catch flaws in my logic. I've completely redesigned entire classes because it just didn't make sense when I was building the tests. This year I want to get more serious in my use of TDD by using it to make some public libraries I have in the works.

## JavaScript

I tend to live in the back-end of a project, usually working on the business logic of designing an API, so I don't tend to use JavaScript that often. At most I'll use TypeScript in Angular and ASP.NET web apps but that's not the same thing. I want to deepen my understanding of JavaScript, and to a lesser extent TypeScript, so I don't let that knowledge just rust away.

## Design Patterns

Throughout my career, I've used Design Patterns and not even realized it.
I can recall using a decorator pattern on another dev's code because they refused to work with the rest of the team. I was the lead developer on the project and we couldn't afford to be down a developer. The issue was that they were in Germany, the other dev was in California, and I was in Kentucky. I decided that since the owner didn't want to hear about our "bickering" we'd write wrappers around his code just to make the deadline. Now I'd love to learn more about Design Patterns so I can point them out more easily, use them in the planning stages of a project, and optimize my refactorings. ## Azure I've used Azure in the past but not for much. More and more these days I see jobs calling for AWS or Azure experience and I simply don't want to be left behind. I decided to go with Azure just because it's in the .Net stack and for no other reason. I'm told if you know one then you can pick up the other pretty easily. ## Unit Testing Though I Unit Test as part of my TDD practice I want to do a deep dive and learn more about best practices and see if I can improve how I work.
programazing
264,661
React Navigation Fix header height in iOS
How to fix header height in iOS for Nested Navigators
0
2020-02-20T13:54:11
https://dev.to/chakrihacker/react-navigation-fix-header-height-in-ios-278b
reactnative
--- title: React Navigation Fix header height in iOS published: true description: How to fix header height in iOS for Nested Navigators tags: react-native --- Have you ever faced an oversized header height on iPhones? Like this one ![large-header](https://i.imgur.com/e8O7w3h.png) Well, you can fix this easily with one line of code. In your navigator's default navigation options, add this: `headerForceInset: { top: "never", bottom: "never" }` So it will be something like

```js
const awesomeNavigator = createStackNavigator({
  // all screens
}, {
  defaultNavigationOptions: {
    headerForceInset: { top: "never", bottom: "never" }
  }
})
```

Now it changes to ![normal-header](https://i.imgur.com/LRP8O6r.png)
chakrihacker
264,662
Simplifying the GraphQL data model
Written by Leonardo Losoviz✏️ Don't think in graphs, think in components This article is...
0
2020-02-21T15:02:44
https://blog.logrocket.com/simplifying-the-graphql-data-model/
graphql, webdev
--- title: Simplifying the GraphQL data model published: true date: 2020-02-19 14:00:32 UTC tags: graphql,webdev canonical_url: https://blog.logrocket.com/simplifying-the-graphql-data-model/ cover_image: https://dev-to-uploads.s3.amazonaws.com/i/1gz35yr5028s56e7geom.jpeg --- **Written by [Leonardo Losoviz](https://blog.logrocket.com/author/leonardolosoviz/)**✏️ ## Don't think in graphs, think in components _This article is part of an ongoing series on conceptualizing, designing, and implementing a GraphQL server. The first article in this series is “[Designing a GraphQL server for optimal performance](https://dev.to/bnevilleoneill/designing-a-graphql-server-for-optimal-performance-gc5).”_ * * * Simplicity and performance are possibly the two most important features of our application. These two must be balanced; either optimizing for performance at the expense of simplicity, or for simplicity at the expense of performance, would render our application useless. No developer would want to use software that is extremely fast but so complex that you need to be a genius to use it, or very simple to use but too slow. Hence, designing for simplicity cannot be an afterthought; it must be engineered into the software right from the beginning. In my previous article, “[Designing a GraphQL server for optimal performance](https://dev.to/bnevilleoneill/designing-a-graphql-server-for-optimal-performance-gc5),” I showed how a GraphQL server can completely avoid the N+1 problem by having the resolvers return the ID of the objects (instead of the objects themselves) when dealing with relationships. Doing so made the code for the resolvers become very simple because it doesn’t need to implement the “deferred” mechanism anymore, which is instead embedded within the server itself, hidden from view. For example, take the following (PHP) code for a resolver. 
Field `author`, which is a field of type `User` that must be resolved through one or more additional queries to the database, should be more difficult to resolve than field `title`, which is a scalar field that can be immediately resolved. Yet the only difference is two lines of code in function `resolveFieldTypeResolverClass`, which itself just returns a classname (to indicate the type of object that `author` must resolve to):

```php
class PostFieldResolver implements FieldResolverInterface
{
    public function resolveValue($object, string $field, array $args = [])
    {
        $post = $object;
        switch ($field) {
            case 'title':
                return $post->title;
            case 'author':
                return $post->authorID; // This is an ID, not an object!
        }

        return null;
    }

    public function resolveFieldTypeResolverClass(string $field, array $args = []): ?string
    {
        switch ($field) {
            case 'author':
                return UserTypeResolver::class;
        }

        return null;
    }
}
```

This step is half of the solution to load data in a very simple manner. It transfers the responsibility of implementing the complex code in the resolvers away from the developer and into the server’s data loading engine, to hopefully be coded only once and used forever. However, doing this alone doesn’t make the overall application become simpler, it just moves its complexity around. So now let’s delve into the second half of the solution: making the code in the server’s data loading engine as simple as it can ever be. For this, we need to understand graphs, the data model over which GraphQL stands. Or is it? [![LogRocket Free Trial Banner](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2017/03/f760c-1gpjapknnuyhu8esa3z0jga.png?resize=1200%2C280&ssl=1)](https://logrocket.com/signup/) ## Graphs all the way down?
On its page titled [Thinking in Graphs](https://graphql.org/learn/thinking-in-graphs/), the GraphQL project states (emphasis mine): _Graphs are powerful tools for modeling many real-world phenomena because they resemble our natural mental models and verbal descriptions of the underlying process. With GraphQL, you **model your business domain as a graph** by defining a schema; within your schema, you define different types of nodes and how they connect/relate to one another. On the client, this creates a pattern similar to Object-Oriented Programming: types that reference other types. **On the server, since GraphQL only defines the interface, you have the freedom to use it with any backend (new or legacy!)**._ The takeaway from this definition is the following: even though the response has the shape of a graph, this doesn’t mean that data is actually represented as a graph when dealing with it on the server side. The graph is only a _mental model_, not an actual implementation. This realization is shared by others: > _GraphQL itself has the name “graph” in it, even though GraphQL isn’t actually a graph query language! [Caleb Meredith](https://blog.apollographql.com/explaining-graphql-connections-c48b7c3d6976) (ex-Apollo GraphQL)_ > _[GraphQL is] neither a query language, nor particularly graph-oriented. … If your data is a graph, it’s on you to expose that structure. But your requests are, if anything, trees. [Alan Johnson](https://artsy.github.io/blog/2018/05/08/is-graphql-the-future/) (ex-Artsy)_ This is good news because [dealing with graphs](https://medium.com/basecs/a-gentle-introduction-to-graph-theory-77969829ead8) is not trivial, and we can then attempt to use a simpler data structure. What comes to mind first is a tree, which is [simpler than a graph](https://leapgraph.com/tree-vs-graph-data-structures) (a tree is actually a subset of a graph). Indeed, as mentioned in the quote above, the shape of the GraphQL request is a tree. 
However, using a tree structure to represent and process the data in the server is not trivial either and may require hacks to [support modeling recursions](https://stackoverflow.com/questions/44746923/how-to-model-recursive-data-structures-in-graphql). Is there anything simpler? ## Components all the way down! What I found to be a most suitable structure for storing and manipulating object data in the server side is… components! Using components to represent our data structure on the server side is optimal for simplicity because it allows us to consolidate the different models for our data into a single structure. Instead of having a flow like this: `build query to feed components (client) => process data as graph/tree (server) => feed data to components (client)` …our flow will be like this: `components (client) => components (server) => components (client)` This is achievable because the GraphQL request can be thought of as having a “component hierarchy” data structure, in which every object type represents a component and every relationship field from an object type to another object type represents a component wrapping another component. Huh? Can you explain that in English, please? ## Visual guide for modeling data using components Let me make my previous explanation clearer using an example. Let’s say that we want to build the following “Featured director” widget: ![Our Featured Director Widget](https://i2.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/featured-director-widget.jpg?resize=600%2C490&ssl=1)<figcaption id="caption-attachment-14143">Featured director widget.</figcaption> Using Vue or React (or any other component-based library), we would first identify the components. 
In this case, we would have an outer component `<FeaturedDirector>` (in red), which wraps a component `<Film>` (in blue), which itself wraps a component `<Actor>` (in green): ![Identifying Components Within The Featured Director Widget](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/featured-director-widget-components.jpg?resize=600%2C491&ssl=1)<figcaption id="caption-attachment-14144">Identifying components within the widget.</figcaption> The pseudo-code will look like this:

```html
<!-- Component: <FeaturedDirector> -->
<div>
  Country: {country}
  {foreach films as film}
    <Film film={film} />
  {/foreach}
</div>

<!-- Component: <Film> -->
<div>
  Title: {title}
  Pic: {thumbnail}
  {foreach actors as actor}
    <Actor actor={actor} />
  {/foreach}
</div>

<!-- Component: <Actor> -->
<div>
  Name: {name}
  Photo: {avatar}
</div>
```

Then we identify what data is needed for each component. For `<FeaturedDirector>`, we need the `name`, `avatar`, and `country`. For `<Film>`, we need `thumbnail` and `title`. And for `<Actor>`, we need `name` and `avatar`: ![Identifying Data Properties For Each Component In The Featured Director Widget](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/featured-director-widget-data.jpg?resize=600%2C491&ssl=1)<figcaption id="caption-attachment-14146">Identifying data properties for each component.</figcaption> And we build our GraphQL query to fetch the required data:

```graphql
query {
  featuredDirector {
    name
    country
    avatar
    films {
      title
      thumbnail
      actors {
        name
        avatar
      }
    }
  }
}
```

As you can see, there is a direct relationship between the component hierarchy above and this GraphQL query. Let’s now move to the server side to process the request. Instead of dealing with the query as a tree, we continue using the same component hierarchy to represent the information.
In order to process the data, we must flatten the components into types (`<FeaturedDirector>` => `Director`; `<Film>` => `Film`; `<Actor>` => `Actor`), order them as they appeared in the component hierarchy (`Director`, then `Film`, then `Actor`), and deal with them in “iterations,” retrieving the object data for each type on its own iteration, like this: ![Featured Director Type Iterations](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/featured-director-type-iterations.png?resize=730%2C121&ssl=1)<figcaption id="caption-attachment-14148">Dealing with types in iterations.</figcaption> The server’s data loading engine must implement the following (pseudo-)algorithm to load the data: _Preparation:_ 1. Prepare an empty [queue](https://en.wikipedia.org/wiki/Queue_(abstract_data_type)) to store the list of IDs from the objects that must be fetched from the database, organized by type (each entry will be: `[type => list of IDs]`) 2. Retrieve the ID of the featured director object and place it on the queue under type `Director` _Loop until there are no more entries on the queue:_ 1. Get the first entry from the queue: the type and list of IDs (e.g., `Director` and `[2]`), and remove this entry off the queue 2. Using the type’s `TypeDataLoader` object (explained in [my previous article](https://dev.to/bnevilleoneill/designing-a-graphql-server-for-optimal-performance-gc5)), execute a single query against the database to retrieve all objects for that type with those IDs 3. If the type has relational fields (eg: type `Director` has relational field `films` of type `Film`), then collect all the IDs from these fields from all the objects retrieved in the current iteration (eg: all IDs in field `films` from all objects of type `Director`), and place these IDs on the queue under the corresponding type (eg: IDs `[3, 8]` under type `Film`). 
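The loop above is straightforward to sketch in code. The following is a self-contained JavaScript illustration (the actual engine described later is written in PHP); the in-memory `db` tables, the `relations` map, and `queryByIds` are stand-ins for a real database layer, loaded with the IDs from the featured director example:

```javascript
// In-memory stand-in for the database, keyed by type and then by ID.
const db = {
  Director: { 2: { id: 2, name: 'George Lucas', films: [3, 8] } },
  Film: {
    3: { id: 3, title: 'The Phantom Menace', actors: [4, 6] },
    8: { id: 8, title: 'Attack of the Clones', actors: [6, 7] },
  },
  Actor: {
    4: { id: 4, name: 'Ewan McGregor' },
    6: { id: 6, name: 'Natalie Portman' },
    7: { id: 7, name: 'Hayden Christensen' },
  },
};

// Which relational fields each type has, and the type they point to.
const relations = {
  Director: { films: 'Film' },
  Film: { actors: 'Actor' },
  Actor: {},
};

let queriesExecuted = 0;

// One "database query" per call: fetch all objects of `type` with those IDs.
function queryByIds(type, ids) {
  queriesExecuted++;
  return ids.map((id) => db[type][id]);
}

function loadData(rootType, rootIds) {
  const loaded = {};                    // type => { id => object }
  const queue = [[rootType, rootIds]];  // entries: [type, list of IDs]
  while (queue.length > 0) {
    const [type, ids] = queue.shift();
    const objects = queryByIds(type, ids);
    loaded[type] = loaded[type] || {};
    const nextIds = {};                 // targetType => Set of IDs
    for (const obj of objects) {
      loaded[type][obj.id] = obj;
      // Collect IDs from relational fields for a later iteration.
      for (const [field, targetType] of Object.entries(relations[type])) {
        nextIds[targetType] = nextIds[targetType] || new Set();
        obj[field].forEach((id) => nextIds[targetType].add(id));
      }
    }
    for (const [targetType, idSet] of Object.entries(nextIds)) {
      queue.push([targetType, [...idSet]]);
    }
  }
  return loaded;
}

const result = loadData('Director', [2]);
// Three types were involved, so exactly three queries ran: one per type.
console.log(queriesExecuted); // 3
```

Because all the IDs for a type are collected before that type is dequeued, each iteration resolves a whole batch with a single query, which is exactly what keeps the query count linear in the number of types.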
By the end of the iterations, we will have loaded all the object data for all types, like this: ![Featured Director Loading Data In Iterations](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/featured-director-loading-data-in-iterations.png?resize=730%2C390&ssl=1)<figcaption id="caption-attachment-14151">Dealing with types in iterations.</figcaption> Notice how all IDs for a type are collected until the type is processed in the queue. If, for instance, we add a relational field `preferredActors` to type `Director`, these IDs would be added to the queue under type `Actor`, and it would be processed together with the IDs from field `actors` from type `Film`: ![Featured Director Loading Data Extension](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/featured-director-loading-data-extension.png?resize=730%2C481&ssl=1) However, if a type has been processed and then we need to load more data from that type, then it’s a new iteration on that type. For instance, adding a relational field `preferredDirector` to the `Author` type will force the type `Director` to be added to the queue once again: ![Loading Data Iterating Over A Repeated Type](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/featured-director-loading-data-repeated.png?resize=730%2C481&ssl=1)<figcaption id="caption-attachment-14154">Iterating over a repeated type.</figcaption> Also note that here we can use the caching mechanism as implemented in [dataloader](https://github.com/graphql/dataloader): on the second iteration for type `Director`, the object with ID 2 is not retrieved again since it was already retrieved on the first iteration. Thus, it can be taken from the cache. Now that we have fetched all the object data, we need to shape it into the expected response, mirroring the GraphQL query. However, as you can see, the data does not have the required tree structure. 
Instead, relational fields contain the IDs to the nested object, emulating how data is represented in a relational database. Hence, following this comparison, the data retrieved for each type can be represented as a table, like this:

_Table for type `Director`:_

| ID | name | country | avatar | films |
| --- | --- | --- | --- | --- |
| 2 | George Lucas | USA | george-lucas.jpg | [3, 8] |

_Table for type `Film`:_

| ID | title | thumbnail | actors |
| --- | --- | --- | --- |
| 3 | The Phantom Menace | episode-1.jpg | [4, 6] |
| 8 | Attack of the Clones | episode-2.jpg | [6, 7] |

_Table for type `Actor`:_

| ID | name | avatar |
| --- | --- | --- |
| 4 | Ewan McGregor | mcgregor.jpg |
| 6 | Natalie Portman | portman.jpg |
| 7 | Hayden Christensen | christensen.jpg |

Having all the data organized as tables, and knowing how every type relates to each other (i.e., `Director` references `Film` through field `films`; `Film` references `Actor` through field `actors`), the GraphQL server can easily convert the data into the expected tree shape: ![Featured Director Widget Graph](https://i0.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/featured-director-graph.png?resize=730%2C525&ssl=1)<figcaption id="caption-attachment-14155">Tree-shaped response.</figcaption> Finally, the GraphQL server outputs the tree, which has the shape of the expected response:

```js
{
  data: {
    featuredDirector: {
      name: "George Lucas",
      country: "USA",
      avatar: "george-lucas.jpg",
      films: [
        {
          title: "Star Wars: Episode I",
          thumbnail: "episode-1.jpg",
          actors: [
            { name: "Ewan McGregor", avatar: "mcgregor.jpg" },
            { name: "Natalie Portman", avatar: "portman.jpg" }
          ]
        },
        {
          title: "Star Wars: Episode II",
          thumbnail: "episode-2.jpg",
          actors: [
            { name: "Natalie Portman", avatar: "portman.jpg" },
            { name: "Hayden Christensen", avatar: "christensen.jpg" }
          ]
        }
      ]
    }
  }
}
```

## Analyzing the time complexity of the solution Let’s analyze the [big O notation](https://en.wikipedia.org/wiki/Big_O_notation) of the
data loading algorithm to understand how the number of queries executed against the database grows with the number of inputs to make sure this solution is performant. The GraphQL server’s data loading engine loads data in iterations corresponding to each type. By the time it starts an iteration, it will already have the list of all the IDs for all the objects to fetch, hence it can execute one single query to fetch all the data for the corresponding objects. It then follows that the number of queries to the database will grow linearly with the number of types involved in the query. In other words, the time complexity is `O(n)`, where `n` is the number of types in the query (however, if a type is iterated more than once, then it must be added more than once to `n`). This solution is very performant, much better than the exponential complexity expected from dealing with graphs or logarithmic complexity expected from dealing with trees. ## Implemented (PHP) code The solution outlined in this article is used by the GraphQL server in PHP that I’ve implemented, [GraphQL by PoP](https://graphql-by-pop.com). The code for its data loading engine can be found [here](https://github.com/getpop/component-model/blob/7e1588286ce2eb67bffebe9fd6ab2e5274238777/src/Engine/Engine.php#L759). Since this piece of code is very long, and I have already explained how the algorithm works, there’s no need to reproduce it here. ## Conclusion ![Cartoon Depicting Frustrated Software Engineer](https://i1.wp.com/blog.logrocket.com/wp-content/uploads/2020/02/shouldnt-be-hard.png?resize=710%2C259&ssl=1)<figcaption id="caption-attachment-14158">Shouldn’t be hard! 
Image courtesy of xkcd.</figcaption> In [my previous article](https://dev.to/bnevilleoneill/designing-a-graphql-server-for-optimal-performance-gc5), I started describing how we can build a GraphQL server that is performant, and in this article I completed it by describing how it can be made simple by using components to represent the data model in the server instead of using graphs or trees. As a consequence, implementing resolvers is very performant (linear complexity time based on the number of types), and very easy to do (the “deferred” mechanism is not implemented by the developer anymore). * * * ## 200's only ‎✅: Monitor failed and show GraphQL requests in production While GraphQL has some features for debugging requests and responses, making sure GraphQL reliably serves resources to your production app is where things get tougher. If you’re interested in ensuring network requests to the backend or third party services are successful, [try LogRocket.](https://www2.logrocket.com/signup-lr) ![Alt Text](https://thepracticaldev.s3.amazonaws.com/i/jd2amkhuiynfjf3i33qk.png) [LogRocket](https://www2.logrocket.com/signup-lr) is like a DVR for web apps, recording literally everything that happens on your site. Instead of guessing why problems happen, you can aggregate and report on problematic GraphQL requests to quickly understand the root cause. In addition, you can track Apollo client state and inspect GraphQL queries' key-value pairs. LogRocket instruments your app to record baseline performance timings such as page load time, time to first byte, slow network requests, and also logs Redux, NgRx, and Vuex actions/state. [Start monitoring for free.](https://www2.logrocket.com/signup-lr) * * * The post [Simplifying the GraphQL data model](https://blog.logrocket.com/simplifying-the-graphql-data-model/) appeared first on [LogRocket Blog](https://blog.logrocket.com).
bnevilleoneill
264,713
Quick Apps On Firefox Nightly
So I was trying out Firefox Nightly shortly after creating a new react app using Glitch create-react...
0
2020-02-19T15:53:54
https://dev.to/mindsgaming/quick-apps-on-firefox-nightly-2hgo
firefox, glitch, apps, nightly
So I was trying out [Firefox Nightly](https://blog.nightly.mozilla.org/) shortly after creating a new react app using [Glitch create-react-app](https://dev.to/glitch/create-react-app-and-express-together-on-glitch-28gi) for my site and ran into something pretty cool. When I opened my site on Firefox Nightly, the settings tab at the bottom right turned blue: ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/wmb4qvrv3fzprc0jrfv4.jpeg) At first I thought I had done something wrong, of course. What warning is going to pop up, I wondered as I opened the tab. Behold, it wasn't an issue but a feature to install the page as an app ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/wd5lbuktz84aogdbmq7c.jpeg) Unlike saving a web page on a mobile browser (or to your phone's home page), this nifty little feature actually wraps the react app into its own box so no browser features are present, and you can run your site like you would any other app. ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/yjj1xdw3acw2i9saeemz.jpeg) Anyway, I thought this was pretty cool and worth the time to post; maybe someone else will enjoy this neat little feature.
mindsgaming
264,744
Django & DRF 101, setting up the environment: virtualenv & docker
Setting up an environment for django / drf
4,485
2020-02-19T17:13:01
https://dev.to/zorky/django-drf-101-initialisation-1e39
django, drf, djangorestframework, installation
--- title: Django & DRF 101, setting up the environment: virtualenv & docker published: true description: Setting up an environment for django / drf tags: django, drf, djangorestframework, installation series: Django DRF --- # Nota Bene In the rest of this article, the django web application server will run at one of the following addresses depending on the type of environment: * http://localhost:8000 in the virtualenv case * http://plateform in the Docker containers case # Django environment on a development machine * [with virtualenv](#virtualenv) * [with docker](#docker) In this article, I will focus on installing Django and some modules on a local development machine, using virtualenv and docker. We will build on django version 3.0.6, DRF 3.11.x and a few useful modules. > <mark>**[EDIT]**</mark> This post was originally based on django 2.2.11; to move to django 3.0, some modules had to be replaced because they are no longer maintained: django-rest-swagger => [drf-yasg](https://drf-yasg.readthedocs.io/en/stable/) (the __staticfiles__ used by django-rest-swagger is no longer valid in django 3.0, see [the modules removed in 3.0](https://docs.djangoproject.com/en/dev/releases/3.0/#features-removed-in-3-0) or the answer {% stackoverflow 55929473 %}) djangorestframework-jwt => [drf-jwt](https://styria-digital.github.io/django-rest-framework-jwt/) (there is also [simplejwt](https://django-rest-framework-simplejwt.readthedocs.io/en/latest/), but drf-jwt supports [__impersonation__](https://styria-digital.github.io/django-rest-framework-jwt/#impersonation-token), which we will cover in a later article) The working directory is **projet** the <a name="requirements">requirements.txt</a> used in both methods

```
django==3.0.4
djangorestframework==3.11.0
drf-jwt==1.15.1
django-filter==2.2.0
django-cors-headers==3.2.1
coreapi==2.3.3
coreapi-cli==1.0.9
requests==2.20.0
django-extensions==2.2.9
django-debug-toolbar==2.2
drf-yasg==1.17.1
```

## virtualenv <a name="virtualenv"></a> Using [virtualenv](https://virtualenv.pypa.io/en/latest/). In a new directory:

    $ mkdir projet && cd projet

The name `env`, after virtualenv, is arbitrary: it is the name of the virtual environment and could just as well be `myenv`, `projet_truc`, etc.

    $ virtualenv env
    $ source env/bin/activate

or on Windows:

    $ source env/Scripts/activate

to deactivate the virtual environment:

    $ deactivate

Installing the packages by hand:

    $ pip install django==3.0.4
    $ pip install djangorestframework==3.11.0
    $ pip install django-filter==2.1.0
    $ ...

or through a **requirements.txt** file listing the packages we want to install in order to use them later

```
$ pip install -r requirements.txt
```

The environment is ready; a

```
$ python manage.py runserver
```

starts the server on port 8000 (http://127.0.0.1:8000/) ## docker <a name="docker"></a> In my view, this is the best fit for a team: the goal is to build images that can be shared across workstations, so the environment is the same for everyone: - a front end (nginx) - the application server (django and some modules), with the sources "mounted" inside in order to run them - if a DBMS is used (Postgresql for example), a postgresql container An HTTP request will follow this path: browser --> nginx --> django

```
$ mkdir projet && cd projet
$ mkdir docker && cd docker
```

From the docker directory, we will have the following files, to be created

```
$ find .
.
./build.sh
./docker-compose.yml
./Dockerfile
./nginx
./nginx/conf.d
./nginx/conf.d/plateform.conf
./requirements.txt
```

- create the image: runs the **docker build** command with the Dockerfile as its source, which contains the instructions for building the django image.
Source of the **Dockerfile**, based on a python 3.7 image; the port exposed from the container will be 8110

```dockerfile
FROM python:3.7-slim-buster
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y libsasl2-dev python-dev libldap2-dev libssl-dev \
    libfontconfig1 libxrender1 unzip \
    libfreetype6 libc6 zlib1g libpng-dev \
    libxtst6 libtiff5-dev libjpeg62-turbo-dev zlib1g-dev libfreetype6-dev fontconfig libxml2-dev libxslt1-dev gettext \
    libaio1 libaio-dev build-essential \
    && mkdir /code && mkdir /static
RUN apt-get clean
ENV PYTHONIOENCODING=UTF-8
WORKDIR /code
COPY ./requirements.txt requirements.txt
RUN pip install --upgrade pip && pip install -r requirements.txt
RUN apt-get install -y python3-lxml python3-cffi libcairo2 libpango1.0-0 libgdk-pixbuf2.0-0 libffi-dev shared-mime-info libcairo2 && apt-get -y clean
ADD . /code
ENV TZ=Europe/Paris
RUN apt-get update && apt-get install -y cron
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
```

the **build.sh** file: builds an api-plateform:latest image from the Dockerfile

```bash
#!/usr/bin/env bash
docker build -f Dockerfile -t api-plateform:latest .
```

```
$ ./build.sh
```

The image is created

```
$ docker image ls -a |grep api-pla
api-plateform latest bda692c67a29 27 hours ago 782MB
```

- add a __plateform__ alias to your **hosts** file

```
127.0.0.1 localhost plateform
```

- start the containers thanks to the **docker-compose.yml**, which describes the services (containers) to start

The **docker-compose.yml** used to start the various containers, nginx and the django application server:

```yaml
version: '3.1'
services:
  api-plateform:
    image: api-plateform:latest
    container_name: api-plateform
    command: python3 manage.py runserver 0.0.0.0:8110
    volumes:
      - ../backend:/code
    networks:
      - api-plateform
  nginx-plateform:
    image: nginx
    container_name: nginx-plateform
    restart: "no"
    depends_on:
      - api-plateform
    ports:
      - 80:80
    networks:
      - api-plateform
    volumes:
      - ./nginx/conf.d/plateform.conf:/etc/nginx/conf.d/plateform.conf
      - ../log/:/var/log/nginx/
networks:
  api-plateform:
```

the NGINX configuration file, which forwards the container's port 8110 to port 80 for the outside world, file nginx/conf.d/plateform.conf

```
server {
    listen 80;
    server_name plateform;

    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;

    client_body_buffer_size 10m;
    client_max_body_size 10m;
    client_body_timeout 120s;

    location / {
        proxy_pass http://api-plateform:8110/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol "http";
    }
}
```

```
$ docker-compose up
Creating network "docker_api-plateform" with the default driver
Creating api-plateform ... done
Creating nginx-plateform ... done
Attaching to api-plateform, nginx-plateform
api-plateform | Watching for file changes with StatReloader
```

The site is reachable at http://plateform
zorky
264,824
Chaskiq: The open source alternative to Intercom releases version 0.1.3
Chaskiq releases version 0.1.3 with lots of fixes, redesign and a plugin system for conversations, editor blocks and dashboard items
0
2020-02-19T19:27:16
https://dev.to/michelson/chaskiq-the-open-source-alternative-to-intercom-releases-version-0-1-3-23e1
webhooks, api, rails, showdev
--- title: Chaskiq: The open source alternative to Intercom releases version 0.1.3 published: true description: Chaskiq releases version 0.1.3 with lots of fixes, a redesign and a plugin system for conversations, editor blocks and dashboard items tags: webhooks, api, rails, showdev cover_image: https://user-images.githubusercontent.com/11976/74865420-d926fa00-532f-11ea-9c1f-1e1dcf3931af.png --- Since our launch we have been working hard to bring many improvements and new features to our platform! We've just released [Chaskiq version 0.1.3](https://github.com/chaskiq/chaskiq) ---- > We expect to deliver more awesomeness in the next weeks, so stay tuned to [Chaskiq's github repo](https://github.com/chaskiq/chaskiq) ---- # Panel Redesign: First things first, we have a panel redesign, so it's cooler than before :) ![alt](https://pbs.twimg.com/media/EQcOpcmW4A0NAx-?format=jpg&name=small) # Plugin system: We have architected a plugin system that handles third-party API integrations. Some categories of plugins are worth mentioning, and we have already implemented some third-party services on top of it. ## Plugin Categories + **Events**: This is the base and most generic kind of plugin. Its configuration can react to specific platform events, like when a conversation is created, a message is added, an email is changed, and so on. The integrations we have made here are conversation sync for **Slack** and **Twitter** (Direct Messages) + **CRM**: in this category, the integrated plugins are able to receive and send contact information. The very first CRM we've integrated is **Pipedrive**. The platform will also handle external profiles that can be synced from the panel. + **Dashboard**: This kind of plugin displays information in the main dashboard; you can have as many blocks with different information as you want.
The first Dashboard integration we have chosen was [Dailytics](https://dailytics.com) ![](https://user-images.githubusercontent.com/11976/74865010-3a9a9900-532f-11ea-962d-79a9f2d44a68.png) + **Editor**: This kind of plugin can extend the text editor and send customized blocks or external blocks as a chat message. Our first integration was **Calendly**. Editor plugins can be found in the editor menu ![image](https://user-images.githubusercontent.com/11976/74865763-71bd7a00-5330-11ea-89e2-8ea936318a93.png) ### Calendly example: ![image](http://g.recordit.co/YSH1neKDUE.gif) # Webhooks Support: Webhooks are an important part of platforms like Chaskiq if you need to keep track of or listen to events in other applications. Chaskiq provides a way to deliver specific events to a URL provided by you. ![image](https://user-images.githubusercontent.com/11976/74866772-299f5700-5332-11ea-8530-0d7988785b4c.png) We expect to deliver more awesomeness in the next weeks, so stay tuned ---- Check out the [github repo of Chaskiq](https://github.com/chaskiq/chaskiq); we will be happy to help and answer your questions.
michelson
264,947
Be comfortable with being uncomfortable
One piece of advice I often give to new developers or people who find it difficult to accept they don...
0
2020-02-19T22:52:15
https://dev.to/phalt/be-comfortable-with-being-uncomfortable-j4b
beginners, career
One piece of advice I often give to new developers or people who find it difficult to accept they don't know everything:

# be comfortable with being uncomfortable

I have been programming for over eight years now, I've given tech talks and built open source projects, and I've been a tech lead for two years. I regularly, almost daily, do not know how to immediately solve the tasks I am given. Over the years I have learned tricks to help me discover a path to a solution, but I rarely have a solution jump up straight away.

This is a common situation for programmers: _we are problem solvers_. We don't just write lines of code. Problems are problems because they don't have easy solutions.

So when you're faced with a problem and you have no idea where to start, remember: **that's okay, it's part of your role. You will figure it out.**

Get comfortable with being uncomfortable.

Every time you learn a new technique to help you problem solve: remember it, and use it when you encounter a new problem.
phalt
270,349
Playing with MediaStream API
I recently purchased a new headset for use on video conference calls at work. I wanted to test out th...
0
2020-02-27T23:10:13
https://www.geekytidbits.com/playing-with-mediastream-api/
---
title: Playing with MediaStream API
published: true
date: 2020-02-26 00:00:00 UTC
tags:
canonical_url: https://www.geekytidbits.com/playing-with-mediastream-api/
---

I recently purchased a new headset for use on video conference calls at work. I wanted to test out the microphone to see how it sounded compared to my internal MacBook microphone. I started fiddling around with macOS Voice Memos and it worked, but it wasn't long until I was looking on the web for a "test microphone" site. Sadly, the sites I found were full of ads and bloated with things I didn't want.

Since I'm a developer and can't help myself, I set out to build something to my own liking. It was time to play with the [MediaStream API](https://developer.mozilla.org/en-US/docs/Web/API/Media_Streams_API), something I really haven't developed with before.

The way I like to learn something new is to get the big picture first. Then, dive into the details. In this case, that meant getting a super simple script working that would start recording when the page is opened, stop after 5 seconds, and automatically play it back.
It ended up like this:

```
// This script starts recording when the page is loaded, stops after 5 seconds
// and then automatically plays back what was recorded.
navigator.mediaDevices.getUserMedia({ audio: true }).then(function(stream) {
  let chunks = [];
  const mediaRecorder = new MediaRecorder(stream);

  mediaRecorder.ondataavailable = function(e) {
    // Store data stream chunks for future playback
    chunks.push(e.data);
  };

  mediaRecorder.onstop = function(e) {
    // Playback recording
    const blob = new Blob(chunks, { type: "audio/ogg; codecs=opus" });
    const audio = document.createElement("audio");
    audio.src = window.URL.createObjectURL(blob);
    document.body.appendChild(audio);
    audio.play();

    // Clear recording
    chunks = [];
  };

  // Start recording!
  mediaRecorder.start();

  // Record for 5 seconds then stop and playback
  setTimeout(() => {
    mediaRecorder.stop();
  }, 5000);
});
```

If you drop the above code in a `<script>` tag you have a working recorder / playback. Easy and simple!

For the record, [this MDN article](https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API/Using_the_MediaStream_Recording_API) and corresponding [demo](https://mdn.github.io/web-dictaphone/) were really helpful in understanding how the MediaStream API works.

Once I had that working, I iterated on the details and eventually landed on something I like and find useful: an app where you click 'Record', make some sound, click 'Stop' and then it automatically plays it back. Simple, right? Also, it shows a nice little realtime graph of the audio input. It looks like this:

![Test Microphone Demo](https://www.geekytidbits.com/playing-with-mediastream-api/test-microphone-demo.gif)

You can see the working app here: [https://bradymholt.github.io/test-microphone/](https://bradymholt.github.io/test-microphone/) and view the source here: [https://github.com/bradymholt/test-microphone](https://github.com/bradymholt/test-microphone).
bradymholt
264,991
What are some good web dev podcasts to follow?
I am a fan of: JS Party React Podcast What are your favourites? I love jotting down my thought...
0
2020-02-20T00:26:40
https://dev.to/sleeplessyogi/what-are-some-good-web-dev-podcasts-to-follow-57em
webdev, podcast, javascript, react
I am a fan of: [JS Party](https://changelog.com/jsparty) [React Podcast](https://reactpodcast.simplecast.fm/) What are your favourites? --- I love jotting down my thoughts on my blog [Ninja Academy](http://ngninja.com/).
sleeplessyogi
265,004
Looking for feedback on a search app for NPM and Github
Whenever I look for an NPM package, I always search it in Google which would lead me to NPM where I'd...
0
2020-02-20T17:29:40
https://dev.to/monikkgandhi/looking-for-feedback-on-a-search-app-for-npm-and-github-34lj
webdev, startup, discuss
Whenever I look for an NPM package, I always search for it in Google, which would lead me to [NPM](https://www.npmjs.com/) where I'd find its documentation and downloads per week to know how popular it is, but that doesn't suffice. Usually I'd also check out its Github repo to find the number of times it's been forked and the number of stars and watchers. I'd also like to check out how many issues it has to judge its stability, and how frequently commits happen to figure out that it's alive and not in limbo.

That comes down to 5 steps, and still I'll have to install it and try it out before I can confirm I can really use it. What if you could cut out all these steps and do it with just 1 click?

Introducing [LibStack](https://libstack.dev). LibStack searches for packages across the NPMJS registry and Github and provides results with consolidated info from both platforms.

Here's the page you land on when you follow the above link:

![LibStack home page](https://dev-to-uploads.s3.amazonaws.com/i/27d3l5hokfdl9r9gz4e8.png)

Once you type a package name and submit, the app takes you to the search results page where for every result the app displays all the stats -- number of stars on Github, downloads per week, watchers, number of forks and bugs reported in Github.

![LibStack search page](https://dev-to-uploads.s3.amazonaws.com/i/bryxib67lzzbpho5615w.png)

As you click on one of the results, the app navigates to a detailed view which, along with stats, displays the documentation of the package. In addition to that, it also opens up a coding playground with a snippet for the user to play with before downloading and installing the package. This code snippet is grabbed from the Github repo of the package.

![LibStack repo page](https://dev-to-uploads.s3.amazonaws.com/i/yfjtyoo2fzwxrygrg63e.png)

This summarizes all the features I've developed so far in LibStack.
I'd like to keep working on LibStack and make it more usable, but before I do that I'd like to get feedback from the people this product is all about. Let me know your thoughts and suggestions, and I'll make sure they get heard 😄.
monikkgandhi
265,053
Developer Wisdom
Many things change every year in the world of software development. But there are certainly things th...
0
2020-02-20T03:41:52
https://dev.to/bhumi/developer-wisdom-2i2h
discuss, computerscience, career, productivity
Many things change every year in the world of software development. But there are certainly things that were true 10 years ago and will be true 10 years from now.

What are some truths you've learned as they apply to the actual act of designing code, writing code, testing, debugging, refactoring and generally working hands on with code day to day?

Here are some of mine:

- There is no such thing as perfect bug-free code (there are just bugs that you haven't found yet or ones that don't really matter to the end user)
- If some code 'works' but you don't understand *why* it works, there will come a time when you'll need to :)
- When debugging, start with the observable truth; the machine doesn't lie. If you can't believe what you're observing because all of your mental models point to 'this is impossible', believe it! Start with the facts and question your assumptions to find out which one is flawed. The machine doesn't lie, there is always a reason :)

I would love to hear wisdom from others!
bhumi
265,067
Webstorm Plugins for React Developers
Programming React apps on Webstorm can be quite enjoyable. Webstorm, out of the box has over 50 plug...
0
2020-02-20T04:24:26
https://dev.to/react_school/webstorm-plugins-for-react-developers-51bk
react, webdev, javascript, webstorm
Programming React apps on Webstorm can be quite enjoyable. Webstorm, out of the box, has over 50 plugins pre-installed, so hunting for new plugins wasn't entirely necessary, given I still wrestle with Prettier from time to time. I created this video discussing 6 plugins for Webstorm.

The thumbnail photo is a splice of 2 different themes from the Material UI Theme plugin: Palenite and Monokai. Material UI Theme is great so far, offering some interesting UI updates like icons and an overall full system UI redesign, including loading menus.

I did not know there was a Styled-components plugin, offering syntax highlighting for CSS inside templates!

Rainbow Brackets I installed recently, but I'm unsure if it will stay around. What this plugin does is color matching parentheses, brackets or <> pairs differently, so if you have a bunch of nested HTML elements or arrow functions you should be able to read them more easily.

{% youtube 481A_U-0xZA %}

Next I'm interested in checking out VSCode's plugins. What are your favorite IDE plugins?
react_school
265,124
Adventure Game Sentence Parsing with Compromise
The post Adventure Game Sentence Parsing with Compromise appeared first on Kill All Defects. In...
4,634
2020-02-20T06:00:37
https://killalldefects.com/2020/02/20/adventure-game-sentence-parsing-with-compromise/#utm_source=rss&utm_medium=rss&utm_campaign=adventure-game-sentence-parsing-with-compromise
angular, javascript, typescript, gamedev
--- title: Adventure Game Sentence Parsing with Compromise published: true date: 2020-02-20 05:49:34 UTC series: Doggo Quest tags: Angular, JavaScript, TypeScript, GameDev cover_image: https://i0.wp.com/killalldefects.com/wp-content/uploads/2020/02/image-43.png?fit=768%2C342&ssl=1 canonical_url: https://killalldefects.com/2020/02/20/adventure-game-sentence-parsing-with-compromise/#utm_source=rss&utm_medium=rss&utm_campaign=adventure-game-sentence-parsing-with-compromise --- The post [Adventure Game Sentence Parsing with Compromise](https://killalldefects.com/2020/02/20/adventure-game-sentence-parsing-with-compromise/) appeared first on [Kill All Defects](https://killalldefects.com). --- In this article I’ll show you how to use the [Compromise](https://nlp-compromise.github.io/) JavaScript library to interpret user input and translate it to a hierarchical sentence graph. I’ll be using Compromise to interpret player input in an Angular interactive fiction game, but you can use Compromise for many different things including: - Analyzing text for places, names, and companies - Building a context-sensitive help system - Transforming sentences based on tenses and other language rules **Learning Objectives** In this article we’ll cover: - What compromise is - How you can use compromise to analyze sentences - Making inferences about sentence structure based on compromise _Note: this article is an updated and more narrowly scoped version of [an older article](https://killalldefects.com/2019/09/24/building-text-based-games-with-compromise-nlp/) I wrote on Compromise. This information works with modern versions of Angular as well as modern versions of Compromise._ ## What is Compromise? Compromise is a JavaScript library aiming to be a _compromise_ between speed and accuracy. The aim is to have a client-side parsing library so fast that it can run as you’re typing while still providing relevant results. 
In this article I’ll be using Compromise to analyze the command the player typed into a text-based game and build out a `Sentence` object representing the overall structure of the sentence they entered. This sentence can then be used in other parts of my code to handle various verbs and make the application behave like a game. ![](https://i0.wp.com/killalldefects.com/wp-content/uploads/2020/02/image-43.png?fit=770%2C343&ssl=1) ## Installing and Importing Compromise To start with compromise, you first need to install it as a dependency. In my project I run `npm i --save compromise` to save the dependency as a run-time dependency. Next, in a relevant Angular service I import Compromise with this line: `import nlp from 'compromise';` Thankfully, Compromise includes TypeScript type definitions, so we have strong typing information available, should we choose to use it. ## String Parsing with Compromise Next let’s look at how Compromise can be used to parse text and manipulate it. Take a look at my `parse` method defined below: ![](https://i2.wp.com/killalldefects.com/wp-content/uploads/2020/02/ParseCommand.png?w=770&ssl=1) Here I use `nlp(text)` to have Compromise load and parse the inputted text value. From there I could use any one of a number of methods Compromise offers, but the most useful thing for my specific scenario is to call `.termList()` on the result and see what Compromise has inferred about each word in my input. 
_Note: the input text doesn't have to be a single sentence; it could be several paragraphs, and Compromise is designed to function at larger scales should you need to analyze a large quantity of text._

When I log the results of Compromise's parse operation, I see something like the following:

![](https://i2.wp.com/killalldefects.com/wp-content/uploads/2020/02/image-41.png?w=770&ssl=1)

Note here that the `Term` array contains information on a few different things, including:

- **text** – the raw text that the user typed
- **clean** – normalized lower-case versions of the user's input. This is useful for string comparison
- **tags** – an object containing various attributes that may be present on the term, based on Compromise's internal parsing rules. This tags collection is the main benefit to Compromise that I'll be exploring in this article (aside from its ability to take a sentence and break it down into individual terms as we've just seen).

Here we see that the `tags` property of the `Open` term contains `{Adjective: true, Verb: true}`. This is because English is a complex language and open can refer to the verb of opening something or an object's state, such as an _open door_. We'll talk a bit more about this disambiguation later on, but for now focus on Compromise's ability to recognize English words it knows and make inferences on words it doesn't know based on patterns in their spelling and adjacent terms.

Compromise's intelligence in this regard is its main selling point for me on this type of application. Compromise gets me most of the way there on figuring out how the user was trying to structure a sentence. This lets me filter out words I don't care about and avoid trying to codify the entire English language in a simple game project.

## Adding an Abstraction Layer

If you scroll back up to my `parse` method, you'll note it has a `: Sentence` return type specified.
This is because I believe in adding abstraction layers around third party code whenever possible. This has a number of benefits:

- If third party behavior or signatures change significantly, you only need to adapt signatures in a few places since everything else relies on your own object's signature
- If you need to change out an external dependency with another, you just need to re-implement the bits that lead up to the abstraction layer
- Wrapping other objects in my own makes it easier for me to define new methods and properties that make working with that code easier

For Compromise, I chose to implement two main classes, a Word class and a Sentence class:

{% gist https://gist.github.com/IntegerMan/a35508cc5a0bcf66bb772442b2cbcce9 %}

I won't stress any of the details of either of these implementations except to state that they wrap around Compromise's `Term` class while allowing me to do integrated validation and structural analysis of the entire sentence.

## Validating Sentences

Once I have a `Sentence` composed of a series of `Word` objects, I can make some inferences on word relationships based on how _imperative_ (command-based) sentences are structured in English. Note that for the purposes of my application I treat all input as a single sentence regardless of punctuation. My validation rules catch cases with multiple sentences fairly easily so I don't see a need to distinguish on sentence boundaries.

Specifically, I validate that the first word in a sentence is a verb. This makes sense only for imperative sentences such as `Eat the Fish` or `Walk North`, but those are the types of sentences we expect in a game like this.

Next I validate that a sentence only contains a single verb (a Term with a `Verb` tag). Anything with two or more is too complex for the parser to be able to handle.

Once these checks are done, I can start to analyze words in relation to each other.
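The two validation checks described above, first word must be a verb and only one verb allowed, are easy to sketch without Compromise itself. Here is a minimal plain-JavaScript illustration, where each word mimics the `tags` shape of a Compromise term; `validateImperative` and the sample data are hypothetical names of mine, not from the article's codebase:

```javascript
// Validate an imperative sentence: the first word must be a verb,
// and the sentence may contain only one verb in total.
// Each word mimics a Compromise term: { clean, tags: { Verb: true, ... } }
function validateImperative(words) {
  if (words.length === 0) {
    return { valid: false, reason: "Empty sentence" };
  }
  if (!words[0].tags.Verb) {
    return { valid: false, reason: "Sentence must start with a verb" };
  }
  const verbCount = words.filter(w => w.tags.Verb).length;
  if (verbCount > 1) {
    return { valid: false, reason: "Only one verb is allowed" };
  }
  return { valid: true };
}

// "Eat the fish" passes both checks
const eatTheFish = [
  { clean: "eat", tags: { Verb: true } },
  { clean: "the", tags: { Determiner: true } },
  { clean: "fish", tags: { Noun: true } },
];
console.log(validateImperative(eatTheFish)); // { valid: true }
```

The real implementation works against the wrapped `Word` objects rather than bare literals, but the rules are the same.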
## Making Inferences about Sentences I operate under the assumption that the sentence is mainly oriented around one verb and zero or more nouns. I then loop over each word in the sentence from the right to the left and apply the following rules: 1. If the word is an adverb, I associate it with the verb 2. If the word is not a noun, verb, or adverb, I associate it with the last encountered noun, if any. The full method can be seen here: ![](https://i0.wp.com/killalldefects.com/wp-content/uploads/2020/02/LinkSentence.png?fit=770%2C768&ssl=1) Once that’s done, I have a hierarchical model of a sentence. For ease of illustration, here is a debug view of a sample sentence: ![](https://i2.wp.com/killalldefects.com/wp-content/uploads/2020/02/image-42.png?w=770&ssl=1) ## Next Steps With parsing in place the sentence contains a fairly rich picture of the structure of the sentence. This doesn’t mean that the player’s sentence makes logical or even grammatical sense, or even refers to something present in the game world. The sentence can, however, be passed off to a specific verb handler for the command entered, which in turn can try to make sense of it and come up with an appropriate reply, though this is out of the scope of this article, so stay tuned for a future article on game state management.
integerman
265,212
Svelte, Sapper, and Squidex Headless CMS, Part 1
I have experienced many different web platforms since I pivoted to the web around 2003. First ASP.Ne...
4,988
2020-02-20T09:50:35
https://cleverdev.codes/blog/svelte-sapper-and-squidex-headless-cms-part-1
svelte, sapper, squidex, blog
I have experienced many different web platforms since I pivoted to the web around 2003. First ASP.Net, then ASP.Net MVC, then AngularJS, then React. Imagine how many framework boom and bust cycles I also managed to skip! I remember how excited I was for each new renaissance to lift us out of the past dark ages. Gone was the complexity that weighed us down, now that the new hotness was here! In truth, each of these did provide a productivity boost, but then a plateau would inevitably come.

Now there is Svelte. The feeling of playing with Svelte is both very similar to this prior experience and altogether different at the same time. It just seems... simple. Straightforward. The learning curve, at least for me, is way lower. And it generates impossibly small code, so everything is super fast. It must be too good to be true.

<q>Like all magnificent things, it's very simple.</q> <cite>Natalie Babbitt, Tuck Everlasting.</cite>

Well, let's explore that and other topics here, in my brand new space, with *Yet Another How I Bootstrapped this Blog* post. To give it a fresh spin, let's wire it up to a headless CMS. I shopped around and ultimately landed on Squidex because I liked the feature set, but mainly because the API was REST-y, and I prefer that over GraphQL. Let's give it a spin.

I created a Squidex account (there is a free account option at https://squidex.io/) and clicked the New Blog Sample. Now let's get a feel for their API. Head over to settings in the project you created above and grab a clientid and secret. I generated a new one, as the default one has editor permissions, and to export our website we only need read permissions.

Another one of *Cleve's new favorite things* <sup>TM</sup> is the [REST client extension](https://github.com/Huachao/vscode-restclient) for VS Code. The REST client extension feels like an even more developer-friendly Postman.
I like UIs, but this has way less clicking and window management and can be checked into source control to share with others.

Here is a `.http` file for getting posts from Squidex. `project`, `squidex_clientid` and `squidex_secret` are set up as [REST extension environment variables](https://github.com/Huachao/vscode-restclient#environment-variables). X-Flatten removes the extra-depth `iv` fields that Squidex adds by default.

```
# @name authorization
POST https://cloud.squidex.io/identity-server/connect/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id={{squidex_clientid}}&client_secret={{squidex_secret}}&scope=squidex-api

@accessToken = {{authorization.response.body.$.access_token}}

###

GET https://cloud.squidex.io/api/content/{{project}}/posts/
Authorization: Bearer {{accessToken}}
X-Flatten: true
```

What is `X-Flatten`? At least for Squidex, it removes the `iv` field, which feels superfluous. Invariant? Meh.

Now let's create our Svelte/Sapper project. Assuming you have npm/npx installed:

`npx degit "sveltejs/sapper-template#rollup" my-blog`

Create a `src/routes/_squidex.js` file (borrowed config from `@querc/squidex-client`). Unfortunately, that library didn't work for me because of auth issues and was more complicated than I needed.
```javascript
import fetch from "node-fetch";

function defineProperty(obj, key, value) {
  if (key in obj) {
    Object.defineProperty(obj, key, {
      value: value,
      enumerable: true,
      configurable: true,
      writable: true
    });
  } else {
    obj[key] = value;
  }
  return obj;
}

function convertFromJson(json) {
  const result = Object.assign({}, json);
  result.created = new Date(json.created);
  result.lastModified = new Date(json.lastModified);
  return result;
}

class SquidexClientConfiguration {
  constructor() {
    defineProperty(this, "url", "https://cloud.squidex.io");
    defineProperty(this, "clientId", void 0);
    defineProperty(this, "clientSecret", void 0);
    defineProperty(this, "project", "");
  }
}

class ConfigurationManager {
  static buildConfiguration(options, ...extraOptions) {
    if (options === undefined) {
      throw new Error("Configuration options are required");
    }
    if (options.clientId === undefined) {
      throw new Error("`clientId` is required");
    }
    if (options.clientSecret === undefined) {
      throw new Error("`clientSecret` is required");
    }
    if (options.project === undefined) {
      throw new Error("`project` is required");
    }
    return Object.assign(
      {},
      new SquidexClientConfiguration(),
      options,
      ...extraOptions
    );
  }
}

export class SquidexClient {
  constructor(options) {
    defineProperty(this, "config", void 0);
    defineProperty(this, "token", void 0);
    this.config = ConfigurationManager.buildConfiguration(options);
  }

  async getAuthenticationToken() {
    if (!this.token) {
      await this.initializeToken();
    }
    return this.token;
  }

  async initializeToken() {
    const authorizationResponse = await fetch(
      `${this.config.url}/identity-server/connect/token`,
      {
        headers: {
          "Content-Type": "application/x-www-form-urlencoded"
        },
        method: "POST",
        body: `grant_type=client_credentials&client_id=${this.config.clientId}&client_secret=${this.config.clientSecret}&scope=squidex-api`
      }
    );
    if (!authorizationResponse.ok) {
      const errorText = await authorizationResponse.text();
      throw new Error(`Could not obtain Squidex token.
${errorText}`);
    }
    const json = await authorizationResponse.json();
    this.token = `Bearer ${json["access_token"]}`;
  }

  async query(schema) {
    const token = await this.getAuthenticationToken();
    const response = await fetch(
      `${this.config.url}/api/content/${this.config.project}/${schema}`,
      {
        headers: {
          Authorization: token,
          "X-Flatten": "true"
        }
      }
    );
    if (!response.ok) {
      const errorText = await response.text();
      throw new Error(errorText);
    }
    const data = await response.json();
    return data.items.map(x => {
      return convertFromJson(x);
    });
  }
}
```

Let's go ahead and run `yarn add node-fetch` since we depend on it here.

Now we have to modify how the starter template fetches data. To be honest, I do not come from the NodeJS/Nuxt-inspired/JAMstack world, so I found how Sapper mixes server and client together confusing at first. But after reading and playing with the code a few times, I got the hang of it, starting by reading https://www.codingwithjesse.com/blog/statically-generating-a-blog-with-svelte-sapper/.

Basically, inside `src/routes/blog/index.svelte` there is a preload call to fetch blog.json. The preload returns an object with a `posts` variable. Since there is a local variable called `posts`, Sapper wires them together. This part also seemed a little weird and like hidden magic to me, but c'est la vie, let's move on. This calls `src/routes/blog/index.json.js`, which is a server side route, which gets data from `src/routes/blog/_posts.js`. Similarly, `src/routes/blog/[slug].svelte` calls `src/routes/blog/[slug].json.js`, which gets data from `src/routes/blog/_posts.js`.

Now let's modify `src/routes/blog/_posts.js` to fetch from Squidex. What we are doing is caching our results, so we only fetch the data once. In a future post, we will add some more features like calculating reading time and processing markdown.
```javascript
import { SquidexClient } from "../_squidex.js";

let posts;
let lookup;

export async function getPostLookup() {
  if (!lookup) {
    const items = await getPosts();
    lookup = new Map();
    items.forEach(item => {
      lookup.set(item.slug, JSON.stringify(item));
    });
  }
  return lookup;
}

export async function getPosts() {
  if (posts) {
    return posts;
  }

  var client = new SquidexClient({
    clientId: process.env.SQUIDEX_CLIENT_ID,
    clientSecret: process.env.SQUIDEX_SECRET,
    project: process.env.SQUIDEX_PROJECT
  });

  const items = await client.query("posts");

  posts = items.map(item => {
    const post = item.data;
    const id = item.id;
    const title = post.title;
    const slug = post.slug;
    const html = post.text;
    return { id, title, slug, html };
  });

  return posts;
}
```

We need to change the two endpoint files to use this data. Change `src/routes/blog/index.json.js`:

```javascript
import { getPosts } from "./_posts.js";

export async function get(request, response) {
  const posts = await getPosts();
  response.writeHead(200, { 'Content-Type': 'application/json' });
  response.end(JSON.stringify(posts));
}
```

and change `src/routes/blog/[slug].json.js` to use our cached lookup:

```javascript
import { getPostLookup } from './_posts.js';

export async function get(req, res, next) {
  const lookup = await getPostLookup();
  const { slug } = req.params;

  if (lookup.has(slug)) {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(lookup.get(slug));
  } else {
    res.writeHead(404, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ message: `Not found` }));
  }
}
```

Now run `yarn dev`, and you should see the sample blog post from Squidex.

Stay tuned, in future posts in this series we will:

* Render markdown, process image assets, and add reading time
* Add the RSS/Atom feed
* Create a 404 page
* Implement a tag cloud and recent posts
* Play with styling and layout
* Implement SEO and OG markup
* Create a static about me page
cleverguy25
265,235
Deep Dive with RxJS in Angular
Before we deep dive in RxJS or Reactive Extension For Javascript in Angular, we should know what exac...
0
2020-03-22T11:19:58
https://dev.to/mquanit/deep-dive-with-rxjs-in-angular-4e6o
angular, rxjs, reactive, asynchronous
Before we deep dive into [RxJS](https://rxjs.dev/), or [Reactive Extensions For JavaScript](https://rxjs.dev/), in Angular, we should know what exactly RxJS is. RxJS is a powerful JavaScript library for reactive programming using the concept of [Observables](https://rxjs.dev/guide/observable). It is one of the hottest libraries in web development, offering a powerful, functional approach for dealing with events, with integration points into a growing number of frameworks, libraries, and utilities; the case for learning Rx has never been more appealing.

### According to its Documentation

> Think of RxJS as Lodash for events.

ReactiveX, or RxJS, works internally with the [Observer pattern](https://en.wikipedia.org/wiki/Observer_pattern), in which an object, which we call a <b>Subject</b>, maintains its dependents and notifies them when any of its state changes.

# Why RxJS

As RxJS follows functional programming fundamentals, it provides [pure functions](https://www.freecodecamp.org/news/what-is-a-pure-function-in-javascript-acb887375dfe/) for every kind of event. This simply means your code is less prone to errors. Normally we create impure functions that could possibly mess up your code as it grows.

# Streams

RxJS works with [Streams](https://developer.mozilla.org/en-US/docs/Web/API/Streams_API) for any event in your app. Streams are basically the definition of Observables, which we cover right after this. The Streams API allows you to get a sequence of data in the form of chunks, where we usually receive large data from an API in little pieces. RxJS Streams themselves contain many sub-APIs which make everyday tasks related to web APIs easier, like mouse events, keyboard events, or any kind of data coming from backend services.

Now let's move on to some basic concepts which RxJS is based on for async event management.
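The Observer pattern mentioned above, a Subject keeping a list of dependents and notifying each of them on every state change, can be boiled down to a few lines. This is a minimal sketch of the pattern itself, not RxJS's actual implementation:

```javascript
// Minimal Observer pattern: a Subject keeps a list of observers
// and notifies every one of them whenever new state arrives.
class Subject {
  constructor() {
    this.observers = [];
  }
  subscribe(observerFn) {
    this.observers.push(observerFn);
  }
  next(value) {
    // state changed: notify all dependents
    this.observers.forEach(fn => fn(value));
  }
}

const temperature = new Subject();
temperature.subscribe(t => console.log(`display: ${t}`));
temperature.subscribe(t => { if (t > 30) console.log("alert: too hot"); });

temperature.next(25); // both observers run; only the first prints
temperature.next(35); // both observers print
```

The Subject doesn't know or care what its observers do with the value; it just pushes it out, which is exactly the push-based model RxJS builds on.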
# Observables

As we discussed above, Observables are a definition or declaration of Streams; that is, a collection of future events or values which we receive continuously over time. You can create an observable from nearly anything, but the most common use case in RxJS is from events.

The easiest way to create <b>Observables</b> is by using the built-in functions provided by <b>RxJS</b>. Angular ships this cool library by default so you don't need to install it explicitly. Let's see a code snippet:

<b>Note:</b> Try code snippets online in <b>[ng-run.com](https://ng-run.com/)</b> so you don't have to create an Angular project just for these snippets.

```javascript
import { Component, VERSION, OnInit } from '@angular/core';
import { interval, fromEvent } from "rxjs"; // <----------- importing rxjs lib

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  ngOnInit() {
    const interval$ = interval(2000); // <-- interval func. same as setInterval in vanilla javascript
    interval$.subscribe(val => console.log(val)); // subscribed to listen to our stream of numbers
  }
}
```

After running this code, open the Chrome debugging tools by pressing the `F-12` key and check the console tab. You will see numbers logged every 2 seconds.

You may have noticed that I created a constant variable `interval$`, and you may be wondering why I added `$` to the variable name. It's just a naming convention for <b>Observables</b>, meaning that this variable is an <b>Observable</b>.
Let's see another simple code example:

```javascript
import { Component, VERSION, OnInit } from '@angular/core';
import { interval, fromEvent } from "rxjs"; // <----------- importing rxjs lib

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  ngOnInit() {
    const clickEvt$ = fromEvent(document, 'click');
    clickEvt$.subscribe(evt => console.log(evt));
  }
}
```

After executing this code, when you click anywhere on the browser document you'll see a `mouse click event` in the console, as it creates a stream of click events and listens to every click.

# Subscription

A Subscription is what sets everything in motion. We could say that it's the execution of the Observable, where you get to subscribe to events and map or transform data as you want. To create a subscription, you call the subscribe method, supplying a function (or object), also known as an observer. A Subscription has one important method, `unsubscribe()`, which takes no arguments and is responsible for disposing of the subscription. In previous versions of RxJS, a Subscription was called a "Disposable".

```javascript
import { Component, OnInit } from '@angular/core';
import { fromEvent } from "rxjs";

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})
export class AppComponent implements OnInit {
  name = 'Angular';

  ngOnInit() {
    const clickEvt$ = fromEvent(document, 'click');
    clickEvt$.subscribe(evt => console.log(evt));
  }
}
```

In the above code snippet, we set up a click event listener anywhere on the document; each click of the document logs the event, and the <b>subscribe</b> call returns an object with an <b>unsubscribe</b> method, which contains cleanup logic, like removing events.
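The subscribe/unsubscribe contract described above can be illustrated without RxJS. A minimal sketch of the idea (the names here are mine, not the library's source): subscribing starts an execution, and the returned subscription knows how to clean it up.

```javascript
// A bare-bones observable: each subscribe() starts its own interval
// (its own execution context), and the returned subscription's
// unsubscribe() contains the cleanup logic for that execution.
function intervalObservable(ms) {
  return {
    subscribe(observer) {
      let count = 0;
      const id = setInterval(() => observer(count++), ms);
      return { unsubscribe: () => clearInterval(id) };
    }
  };
}

const ticks$ = intervalObservable(1000);
const sub = ticks$.subscribe(n => console.log("tick", n));

// later, dispose of the execution and its timer:
sub.unsubscribe();
```

Forgetting to call `unsubscribe()` here would leave the timer running forever, which is exactly the kind of leak RxJS subscriptions exist to prevent.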
It's important to note that each subscription creates its own execution context, which means calling the `subscribe` method a second time will create a new event listener.

```javascript
import { Component, OnInit } from '@angular/core';
import { fromEvent } from "rxjs";

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})
export class AppComponent implements OnInit {
  name = 'Angular';

  ngOnInit() {
    const clickEvt$ = fromEvent(document, 'click');
    const keyUpEvt$ = fromEvent(document, 'keyup');
    clickEvt$.subscribe(evt => console.log(evt));
    keyUpEvt$.subscribe(evt => console.log(evt));
  }
}
```

Subscriptions create a one-on-one, one-sided conversation between the <b>Observable</b> and the <b>Observer</b>, which is also known as <b>unicasting</b>. It's worth noting that when we discuss an Observable source emitting data to observers, this is a push-based model. The source doesn't know or care what subscribers do with the data; it simply pushes it down the line.

# Operators

RxJS is incomplete without its <b>operators</b>, even though <b>Observables</b> are the foundation. Operators are pure functions in RxJS that are responsible for manipulating data from a source, returning an Observable of the transformed values. Many of the RxJS operators are similar to vanilla JavaScript functions, like `map` for arrays.
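As a point of comparison, here is the plain-array `map` that the RxJS operator mirrors: the same transformation idea, applied to an in-memory array instead of a stream of values over time.

```javascript
// Array.prototype.map: the synchronous cousin of the RxJS map operator.
// Each element is transformed, and a new array of the same length is returned.
const values = [1, 2, 3, 4, 5];

const multiplied = values.map(val => val * 5);

console.log(multiplied); // [ 5, 10, 15, 20, 25 ]
```
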
Here's what it looks like in RxJS code:

```javascript
import { Component, OnInit } from '@angular/core';
import { fromEvent, of } from "rxjs";
import { map } from "rxjs/operators";

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})
export class AppComponent implements OnInit {
  name = 'Angular';

  ngOnInit() {
    const transformedData = of(1, 2, 3, 4, 5, 6)
      .pipe(map((val: any) => val * 5))
      .subscribe(data => console.log(data));
  }
}
```

You'll see all these numbers multiplied by `5` in the subscription, and if you log `transformedData`, it will show that specific Observable. There is a sheer number of operators, which can be overwhelming when you first start learning RxJS. We obviously won't cover all of these operators, but we will detail the most commonly used ones, which you will probably use in your applications.

<i>Let's start with the most common one,</i>

## Pipe

The <b>pipe</b> function is the assembly line from your observable data source through your operators. It's for using multiple operators within an observable chain, contained within the pipe function. We can implement multiple operators in the `pipe` function for better readability.

```javascript
import { Component, OnInit } from '@angular/core';
import { fromEvent, of } from "rxjs";
import { map } from "rxjs/operators";

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})
export class AppComponent implements OnInit {
  name = 'Angular';

  ngOnInit() {
    const transformedData = of(1, 2, 3, 4, 5, 6)
      .pipe(map((val: any) => val * 5))
      .subscribe(data => console.log(data));
  }
}
```

## Of

Another very common and simple RxJS operator is the `of` function. It simply emits each value in a sequence from a source of data and then emits a complete notification.
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/h7716xir3zryebtsxyd2.png)
<i>official marble diagram from the RxJS site</i>

Code snippet for the `of` operator:

```javascript
import { Component, OnInit } from '@angular/core';
import { of } from "rxjs";

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: [ './app.component.css' ]
})
export class AppComponent implements OnInit {
  name = 'Angular';

  ngOnInit() {
    const person = { name: 'John Doe', age: 22 }; // <-- simple object
    const personObs = of(person); // <-- convert the object into a stream
    personObs.subscribe(data => console.log(data)); // <-- execute the observable
  }
}
```

There are 6 types of operators that RxJS is based upon:

1) Creation Operators
2) Combination Operators
3) Error Handling Operators
4) Filtering Operators
5) Multicasting Operators
6) Transformation Operators

# Creation Operators

Creation operators are functions that can be used to create an Observable from any other data type, or convert that data into an Observable, like we did in the example above. From generic to specific use cases you are free, and encouraged, to turn [everything into a stream](http://slides.com/robwormald/everything-is-a-stream#/). There are many [other operators](https://www.learnrxjs.io/learn-rxjs/operators/creation) included in the creation operators.
Here's an example of a simple creation operator using the RxJS ajax module:

```javascript
import { Component, VERSION, OnInit } from '@angular/core';
import { ajax } from 'rxjs/ajax';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  name = 'Angular ' + VERSION.full;
  githubUsers = `https://api.github.com/users`;
  users = ajax({
    url: this.githubUsers,
    method: "GET"
  })

  ngOnInit() {
    const subscribe = this.users.subscribe(
      res => console.log(res.response),
      err => console.error(err)
    );
  }
}
```

# Combination Operators

Combination operators, also known as <b>join operators</b>, allow joining data from multiple observables. The primary variation among these operators is in the values they emit. There are many [other operators](https://www.learnrxjs.io/learn-rxjs/operators/combination) included in the combination operators.

Here's an example of the most common combination operator:

```javascript
import { Component, VERSION, OnInit } from '@angular/core';
import { fromEvent, interval } from 'rxjs';
import { map, combineAll, take } from 'rxjs/operators';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  name = 'Angular ' + VERSION.full;

  ngOnInit() {
    const clicks = fromEvent(document, 'click');
    const higherOrder = clicks.pipe(
      map(ev => interval(Math.random() * 2000).pipe(take(3))),
      take(2)
    );
    const result = higherOrder.pipe(combineAll());
    result.subscribe(data => console.log(data));
  }
}
```

In this example, we combined the results of the `clicks` and `higherOrder` observables and show them in the console by subscribing to the `result` observable.

# Error Handling Operators

Errors are an unfortunate side-effect of development. These operators provide effective ways to gracefully handle errors and retry logic, should they occur.
Here are some of the [other operators](https://www.learnrxjs.io/learn-rxjs/operators/error_handling) included in the error handling operators.

Here's an example of the `catchError` operator, which catches errors on the observable so they can be handled by returning a new observable or throwing an error.

```javascript
import { Component, VERSION, OnInit } from '@angular/core';
import { of } from 'rxjs';
import { map, catchError } from 'rxjs/operators';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  name = 'Angular ' + VERSION.full;

  ngOnInit() {
    of(1, 2, 3, 4, 5).pipe(
      map(num => {
        if (num == 4) throw 'Four!'
        return num
      }),
      catchError(err => of('I', 'II', 'III', 'IV', 'V')),
    )
    .subscribe(data => console.log(data))
  }
}
```

# Filtering Operators

The filtering operators provide techniques for accepting, or declining, values from an observable source and dealing with the build-up of values within a stream. The `filter` operator is similar to `Array.prototype.filter`: it keeps the emitted values for which the predicate yields true.

Here's the simplest `filter` operator example from RxJS:

```javascript
import { Component, VERSION, OnInit } from '@angular/core';
import { from } from 'rxjs';
import { filter } from 'rxjs/operators';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  name = 'Angular ' + VERSION.full;

  ngOnInit() {
    const source = from([
      { name: 'Joe', age: 31 },
      { name: 'Bob', age: 25 }
    ]);
    // filter out people with age under 30
    const example = source.pipe(filter(person => person.age >= 30));
    // output: "Over 30: Joe"
    const subscribe = example.subscribe(val => console.log(`Over 30: ${val.name}`))
  }
}
```

# Multicasting Operators

In RxJS, observables are cold, or unicast (one source per subscriber), by default.
These operators can make an observable hot, or multicast, allowing side-effects to be shared among multiple subscribers.

Example of the `multicast` operator with a standard Subject:

```javascript
import { Component, VERSION, OnInit } from '@angular/core';
import { Subject, interval, ConnectableObservable } from 'rxjs';
import { take, tap, multicast, mapTo } from 'rxjs/operators';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  name = 'Angular ' + VERSION.full;

  ngOnInit() {
    // emit every 2 seconds, take 5
    const source = interval(2000).pipe(take(5));
    const example = source.pipe(
      // since we are multicasting below, side effects will be executed once
      tap(() => console.log('Side Effect #1')),
      mapTo('Result!')
    );
    // subscribe subject to source upon connect()
    const multi = example.pipe(multicast(() => new Subject())) as ConnectableObservable<number>;
    /*
      subscribers will share source
      output:
      "Side Effect #1"
      "Result!"
      "Result!"
      ...
    */
    const subscriberOne = multi.subscribe(val => console.log(val));
    const subscriberTwo = multi.subscribe(val => console.log(val));
    // subscribe subject to source
    multi.connect()
  }
}
```

In the example above we use `ConnectableObservable<number>` as the type for our `pipe` result, because the `pipe` function only returns an `Observable`, while the `multicast` operator returns a `ConnectableObservable`. That's how we get the `connect` function on the observable named `multi`. You can learn more about [ConnectableObservable](https://rxjs.dev/api/index/class/ConnectableObservable) here.

# Transformation Operators

Transforming values as they pass through the operator chain is a common task. These operators provide transformation techniques for nearly any use case you will encounter. In some of our examples above we already used transformation operators like `mapTo`, `map`, `scan` & `mergeMap`.
Here are all the [operators in the transformation operators](https://www.learnrxjs.io/learn-rxjs/operators/transformation). Let's see an example of the most common transformation operator:

```javascript
import { Component, VERSION, OnInit } from '@angular/core';
import { fromEvent } from 'rxjs';
import { ajax } from 'rxjs/ajax';
import { mergeMap } from 'rxjs/operators';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  name = 'Angular ' + VERSION.full;

  ngOnInit() {
    // free api url
    const API_URL = 'https://jsonplaceholder.typicode.com/todos/1';
    // streams
    const click$ = fromEvent(document, 'click');
    click$
      .pipe(
        /*
         * Using mergeMap for example, but generally for GET requests
         * you will prefer switchMap.
         * Also, if you do not need the parameter like
         * below you could use mergeMapTo instead.
         * ex. mergeMapTo(ajax.getJSON(API_URL))
         */
        mergeMap(() => ajax.getJSON(API_URL))
      )
      // { userId: 1, id: 1, ...}
      .subscribe(console.log);
  }
}
```

In the example above, we merge our `click$` observable with the response we get from `ajax.getJSON()`. When we click anywhere on the document, we get a response from the API in the console.

Those are all the main operators described in this article, and I hope you learned something new about RxJS.

Here are some more RxJS resources:

[https://www.learnrxjs.io/](https://www.learnrxjs.io/)
[https://rxjs.dev/](https://rxjs.dev/api)
[https://www.learnrxjs.io/learn-rxjs/recipes](https://www.learnrxjs.io/learn-rxjs/recipes)
[https://www.youtube.com/playlist?list=PL55RiY5tL51pHpagYcrN9ubNLVXF8rGVi](https://www.youtube.com/playlist?list=PL55RiY5tL51pHpagYcrN9ubNLVXF8rGVi)

If you like it, please share it in your circle and follow me for more brief articles like this.

Peace ✌️✌️✌️
mquanit
265,655
DYNAMIC USER INTERFACES WITH GRAPHQL (React/GraphQL Conference Talk + Tutorial)
This video is a talk from Byteconf GraphQL 2020, a free, live-streamed GraphQL conference that aired...
4,720
2020-02-20T20:39:09
https://dev.to/bytesizedcode/dynamic-user-interfaces-with-graphql-p62
graphql, techtalks, tutorial, react
> This video is a talk from *Byteconf GraphQL 2020*, a free, live-streamed GraphQL conference that aired on January 31st, 2020. [Visit our website](https://www.bytesized.xyz/graphql-2020) to sign up for the Byteconf GraphQL "swag bag", a free downloadable resource with everything you need to get started with GraphQL, as well as info about future Byteconf events! Reduce development time and create consistent and dynamic front-end interfaces which are always up-to-date with your back-end GraphQL API by utilizing the introspection query to reveal an API's schema, automatically generate queries and mutations, and, ultimately, hydrate typed React components. Thanks to [Greg Brimble](https://twitter.com/gregbrimble) who presented this talk at [Byteconf GraphQL 2020](https://www.bytesized.xyz/graphql-2020)! {% youtube I31QyCll80w %} If you enjoyed the video, give it a thumbs-up, and [subscribe to our channel](https://www.youtube.com/channel/UC046lFvJZhiwSRWsoH8SFjg?sub_confirmation=1) for more web dev content every week. We also have a newsletter where we send out what's new and cool in the web development world, every Tuesday – [join here](https://www.bytesized.xyz/newsletter)!
signalnerve
266,967
Considering a Lua Translation to D
JesseKPhillips / lua If this ev...
0
2020-02-29T04:51:54
https://dev.to/jessekphillips/considering-a-lua-translation-to-d-3j5p
lua, dlang
{% github JesseKPhillips/lua no-readme %} I want to play with the idea of converting Lua from C to D. The thought here is that [LuaD](https://code.dlang.org/packages/luad) is a nice wrapper to Lua. I kind of wonder if Lua being written in D would open new doors of integration, compile time Lua? I don't actually expect to make it very far in this project. I will likely not get much further than having D as part of the compilation. # Old School Development It is a little weird coming at these projects. Best I can tell the official source code is only available through zip packages on the website. Even though the code is open source, development is not in the open. I found a repository on github.com which provides updates through version control. # Being More Modern Ultimately I want the transition to be completed incrementally, so that throughout the whole process there is a working build of Lua. To help with this I added github actions. In this case I don't have a good conceptual understanding of what is happening, likely hindered by my knowledge with gitlab pipeline. * it appears only the first job has access to the source code (automatically) * yaml steps seem overly hierarchical It appears actions are very powerful, but the yaml format is not specified (each action gets its own properties). I had originally tried to use a different job for testing than I did for building. It did not end well. I think the online action editor is going to be critical to navigating actions. But I now have a CI system running tests which I can use as part of my conversion.
jessekphillips
268,752
PrestaShop vs Shopify: Which is the Best eCommerce Platform
Every business owner wants an appealing online presence to extend their business reach. This helps th...
0
2020-02-25T12:50:01
https://dev.to/garrysmith010/prestashop-vs-shopify-which-is-the-best-ecommerce-platform-24n0
Every business owner wants an appealing online presence to extend their business reach. This helps them do product promotions on an international scale. Do you want to start an eCommerce online store? If yes, then there is a wide variety of tools available that can build websites faster with minimal effort, and the two leading options are PrestaShop and Shopify, both of which can create high-performance websites in less time. If you are not sure which platform is best for eCommerce web development, this write-up will help you decide. Let's break down the differences to find out which one is better.

<a href="https://www.csschopper.com/blog/prestashop-vs-shopify/">PrestaShop vs Shopify</a>: An Overview

PrestaShop is self-hosted, and it is easy to install on any server, so you will not be bound to a specific web hosting provider. Not only this, it provides numerous features that are essential for setting up an effective eCommerce store. Shopify, on the other hand, is a hosted platform, which means you have to buy a subscription to use it. It is also not possible to set up your online store on any server you want with this platform.

Which platform offers cost-effective pricing?

Since PrestaShop is an open-source eCommerce platform, you can install and use it for free. It is highly customizable, and you can add any feature to your website without any trouble. But you have to purchase its hosting service. Shopify offers a 14-day free trial period to users. After this, monthly charges apply, along with other charges for email, transaction fees, and design if you hire a Shopify developer.

Which platform is easy to use?

Creating a first eCommerce store is a difficult task for many of us. That's why we give preference to platforms that are easy to use. Both PrestaShop and Shopify are intuitive platforms.
Shopify has an interactive dashboard and a drag-and-drop builder to get you started with the platform. PrestaShop is a bit more complicated compared to Shopify, but you get a lot of options once you log in to the dashboard. You have to spend more time setting up PrestaShop, as most tasks are performed manually.

Which platform has better theme options?

PrestaShop offers 4,000 themes (paid and free versions) for users to design their websites. You can choose whatever you need in accordance with the platform, style, and shop category. When it comes to theme options, Shopify is far behind PrestaShop: it offers 10 free starter templates and 50 premium themes for creating a functional website. These themes are responsive and provide a wonderful user experience across different devices.

Ending Thoughts

Choosing the right platform is the prerequisite for <a href="https://www.csschopper.com/ecommerce-website-development.shtml">eCommerce website development</a>. Shopify is an ideal option for small businesses, while PrestaShop is recommended for mid-sized and large businesses. You can choose an eCommerce platform based on the size and requirements of the project.
garrysmith010
269,433
Get to Know Ansible
A post by KodeKloud
0
2020-02-26T13:34:47
https://dev.to/kodekloud/get-to-know-ansible-5509
devops, beginners, career
{% youtube tRUpaNbr_iU %}
kodekloud
269,768
Recursive React component in Typescript
The title says it all. A colleague wanted to know how to write a Recursive React component in Typescri...
0
2020-02-26T22:16:12
https://dev.to/martinl83/recursive-react-component-in-typescript-5ae5
react, typescript, recursive
The title says it all. A colleague wanted to know how to write a recursive React component in TypeScript, so I wrote a simple one with infinite depth.

{% gist https://gist.github.com/MartinL83/35fdf71c6c23504687b602c800550662 %}

Might be useful for someone.
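The gist itself isn't inlined above. As a rough illustration of the pattern such a component relies on (a base case plus a recursive render of children), here's a framework-free sketch in plain JavaScript; the `renderTree` function and data shape are my own invention for the example, not the gist's code.

```javascript
// The recursion pattern behind a recursive component, without React:
// render a node, then recurse into its children (base case: no children).
function renderTree(node, depth = 0) {
  const line = ' '.repeat(depth * 2) + node.name;
  const children = (node.children || [])
    .map(child => renderTree(child, depth + 1));
  return [line, ...children].join('\n');
}

const data = {
  name: 'root',
  children: [
    { name: 'a', children: [{ name: 'a1' }] },
    { name: 'b' }
  ]
};

console.log(renderTree(data));
// root
//   a
//     a1
//   b
```

In a React component the same shape applies: the component renders its own node, then renders itself once per child.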
martinl83
269,825
Array functions in JavaScript
Introduction Over last few years JavaScript has come a long way. Probably starting with V8...
0
2020-02-27T09:13:00
https://dev.to/hi_iam_chris/array-functions-in-javascript-2noh
javascript, codenewbie, tutorial, functional
## Introduction

Over the last few years, JavaScript has come a long way. Probably starting with V8, we got NodeJS, the language syntax improved a lot, and it got into almost all parts of IT. It stopped being just a toy web language. Today, we use it on the backend, in analytics, and even in satellites. But even before that, in version 5, we got some improvements that I personally love using: [array functions](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array). In this article, I will be documenting some of my favorite ones.

![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/vrra665lnb0y13xzlkxo.jpg)

## What are array functions?

Just like in other languages, JavaScript arrays have different properties and methods built in. In version 5, sometime in 2009, there was an extension in this area. Many useful methods were added: methods that enable us to write code in a functional way. This means we can skip for loops and the creation of temporary variables. So, let's start with the first one: filter.

## .filter

Just like the name implies, the filter function filters out elements. Or, to say it a bit more technically, running filter on an array returns a new array with all elements satisfying our filter rule. This new array will be the same size as or smaller than the array we are running it on.

Function signature

```javascript
arr.filter((element, index, originalArray) => Boolean);
```

The filter function takes one parameter: a function that validates whether an element satisfies our defined rule. This function is executed on each element of the array and receives three parameters, the first being the currently observed element, the second the index of that element, and the third the original array. The return value of this function is a boolean: if you want to keep the element you return true, otherwise false.
Example 1: Get only even numbers from an array

```javascript
const numbers = [1, 2, 3, 4, 5, 6, 7];
const evenNumbers = numbers.filter(element => element % 2 === 0);
console.log(evenNumbers); // [ 2, 4, 6 ]
```

Example 2: Filter out duplicates

One interesting and very nice example of filter use is removing duplicated elements from an array, because this one uses all three of the function's parameters.

```javascript
const arrayWithDuplicates = [1, 1, 2, 5, 3, 4, 4, 4, 5, 6, 7];
const arrayWithoutDuplicates = arrayWithDuplicates.filter(
    (element, index, originalArray) => originalArray.indexOf(element) === index);
console.log(arrayWithoutDuplicates); // [ 1, 2, 5, 3, 4, 6, 7 ]
```

## .map

Map is a function that takes array elements and converts them into a different form. This can be extending an element with some property, returning just one property's value, or something else. But the returned array is always of the same length.

Function signature

```javascript
arr.map((element, index, originalArray) => NEW_VALUE);
```

We write the map function the same way as filter, with a difference in the return: the returned value is the one we keep in the new array.

Example 1: Return an array of prices from an array of objects

In this example we have an array of objects containing the property price, but we might want to get the average price, the minimum, the maximum, or something else. For this it would be easier if we had just an array of numbers. This is something we can use map for.

```javascript
const priceObjects = [
    { price: 11.11 },
    { price: 42.42 },
    { price: 99.99 },
    { price: 29.99 }
];
const prices = priceObjects.map(element => element.price);
console.log(prices); // [ 11.11, 42.42, 99.99, 29.99 ]
```

## .reduce

The reduce method is a bit different and is usually used to reduce an array into a single value. That value can be a number, a string, an object, or anything else. It is an aggregate function. There are different use cases where reduce can be applied, but getting a sum is the most common use case I have seen.
Function signature

```javascript
arr.reduce((currentValue, element, index, originalArray) => NEW_VALUE, DEFAULT_VALUE);
```

The function signature for reduce is a bit different than for filter and map. The first difference is that reduce takes two arguments: the first one is still a function, but the second one is a default value. If we are summing all the numbers, the default sum would be zero. This is shown in example 1 below. The second difference is in the function given as the first parameter. This function receives four parameters, not three like map and filter. The first parameter is the current result of the reduce. In the first run that is the default value, and in later iterations it changes. The return of the last iteration is the final result of the reduce. The rest of the parameters are the same three parameters we receive in filter and map.

Example 1: Get the sum of all numbers

```javascript
const numbers = [1, 4, 2, 5, 6, 3, 5, 5];
const sum = numbers.reduce((currentSum, element) => currentSum + element, 0);
console.log(sum); // 31
```

Example 2: Get the frequency of names

This example takes a number of names and returns an object saying how many times each occurred.

```javascript
const names = ['John', 'Jane', 'Joe', 'John','Jenny', 'Joe', 'Joe'];
const namesFrequency = names.reduce((current, name) => {
    if(!current[name]) current[name] = 0;
    current[name]++;
    return current;
}, {});
console.log(namesFrequency); // { John: 2, Jane: 1, Joe: 3, Jenny: 1 }
```

## .forEach

This method is more like map and filter than reduce, but I decided to leave it for last for one important reason: it does not return a value. All the functions before returned an array or some reduced value. This one does not. So why would we want to use this function? If we just want to execute some work on each array element, maybe just print out each element.

Function signature

```javascript
arr.forEach((element, index, originalArray) => { });
```

As said before, the function has the same signature as filter and map; it just doesn't return any value.
Example 1: Print all elements

```javascript
const names = ["John", "Joe"];
names.forEach(name => {
    console.log(name);
});
// John
// Joe
```

## Conclusion

These are just some of the array functions, but the ones I personally use most. While there are more advanced ways of using them, I do hope this post explained the basics. Because they give us a more functional style of coding, there are many other benefits of using them, like function chaining. Maybe more importantly, if the underlying architecture supported it, they could be optimized for parallelism, which would give a huge performance improvement.

All code examples used for this post can be found in my [Github repository](https://github.com/kristijan-pajtasev/javascript-array-functions-demo).
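The function chaining mentioned in the conclusion can be sketched in a few lines: each of these methods returns a new array, so calls compose left to right without temporary variables.

```javascript
// Chaining filter, map and reduce: keep even numbers,
// double them, then sum the result.
const numbers = [1, 2, 3, 4, 5, 6];

const total = numbers
  .filter(n => n % 2 === 0)        // [2, 4, 6]
  .map(n => n * 2)                 // [4, 8, 12]
  .reduce((sum, n) => sum + n, 0); // 4 + 8 + 12

console.log(total); // 24
```
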
hi_iam_chris
269,853
Adding Server Side Rendering to a Relay Production App
adding server side rendering to a Relay production App can be a little bit tricky
0
2020-02-27T13:21:38
https://dev.to/sibelius/adding-server-side-rendering-to-a-relay-production-app-30oc
relaygraphqlssr
---
title: Adding Server Side Rendering to a Relay Production App
published: true
description: adding server side rendering to a Relay production App can be a little bit tricky
tags: relay;graphql;ssr;
---

# Reality

Reality has a surprising amount of detail (https://johnsalvatier.org/blog/2017/reality-has-a-surprising-amount-of-detail). Reading all the blog posts about Server Side Rendering (SSR) is not enough to properly implement it in a production app. The same is valid for most development tasks, like setting up a React Native project, fixing a weird webpack bug, and so on. You need to get your hands dirty with "reality" to understand the tiny details that matter.

---

# Our Frontend Stack

A bit of context for this task: we work on Feedback House (https://feedback.house/), a platform to manage teams. One of our modules is a hiring platform, where candidates can apply to a job posting and manage their applications (https://entria.contrata.vc/). We decided to move this module/frontend to SSR to improve social media sharing and head meta tags. This frontend uses the following stack: react, styled-components, material-ui, styled-system, loadable-components, react-router, and relay.

---

# SSR "framework"

We checked a pure webpack solution, razzle, nextjs, and afterjs. Nextjs wouldn't work well for us, as we have nested routes, and managing persisted layout patterns would be a big change for us (https://adamwathan.me/2019/10/17/persistent-layout-patterns-in-nextjs/). Afterjs was too high level, and pure webpack was too low level. Razzle was in the right spot: razzle is just 2 webpacks combined.
---

# Basic Razzle knowledge

*Commands*

- razzle start: starts your development application (compiles both server and client)
- razzle build: builds your app for production usage

*Files/Structure*

- razzle.config.js: lets you bring in plugins and modify webpack/babel and other configs
- index.js/ts: server entry point - basic HMR
- server.tsx: the server itself (express/koa) that will render the React app on the server
- client.tsx: client entry point that will hydrate the SSR render

---

# Facing Reality

We started following razzle examples and tutorials, and kept bumping into "issues" that I will describe here.

*Fixing Typescript*

Razzle has a razzle.config.js config file that lets you configure any part of its setup (webpack, babel and so on).

My razzle.config.js looks like this:

{% gist https://gist.github.com/sibelius/67f473bd76e47c58a9221343ae27e24a %}

Inside webRazzlePlugin we have a modify function to return a custom webpack config.

To make webpack transpile TypeScript .ts and .tsx files, we added the new extensions like this:

{% gist https://gist.github.com/sibelius/45b76ff9ab9693184be784868b667459 %}

We also had to remove the `strictExportPresence` webpack config (https://webpack.js.org/configuration/module/#module-contexts), so type imports won't cause compilation errors, just warnings.

*Fixing Monorepo*

We modified the babel loader to transpile all monorepo packages, to let us modify any package and reload our main frontend:

{% gist https://gist.github.com/sibelius/232986fd944620d94b6c34e963c62fa4 %}

*Fixing .env*

We keep our environment variables inside .env files to make it easy to build on CI.
We added dotenv-webpack to make this possible:

{% gist https://gist.github.com/sibelius/a85b79e0720ab6795fe01e9ef7a2c3ec %}

# First SSR Render

The first SSR render was just a loading component in an HTML file \o/

{% gist https://gist.github.com/sibelius/d0c29d4fdccb3ef7beaab0917973df11 %}

Just calling renderToString is not enough to properly render a complex app.

*Fixing React-Router*

We use a static route config (https://github.com/ReactTraining/react-router/tree/master/packages/react-router-config), and we use nested routes to handle persistent layouts, routed tabs, routed modals and more. You need to use StaticRouter on the server and BrowserRouter on the client to make them work well:

{% gist https://gist.github.com/sibelius/fa7868dd65a1543fdeff7786eb6dd3f5 %}

client:

{% gist https://gist.github.com/sibelius/1a92efba433e42bcd468d2decb03b2df %}

StaticRouter will set context.url if there are redirects while rendering.

*You're gonna need Extractors*

styled-components, material-ui and loadable-components all have "extractors". They collect styles and chunks to be added to your SSR render:

{% gist https://gist.github.com/sibelius/efb0d817c58567c07209c54acbaf4c5c %}

Without this your first render won't work well: your components won't be styled correctly, and code splitting won't work out of the box.

---

## Checkpoint

After all this work, we expected to have at least a better first-render experience. However, SSR is harder than it sounds. After all this, we still have a simple <Loading /> component \o/

*Fixing Relay (Data Fetching)*

We need to fetch and store all GraphQL queries before rendering our components.
I've followed these 2 examples to make this possible (https://github.com/jaredpalmer/react-router-nextjs-like-data-fetching and https://github.com/relayjs/relay-examples/tree/master/issue-tracker).

The IssueTracker example uses preloadQuery + usePreloadQuery, which require React and Relay experimental builds, and our production code still can't move to them, as we still need to fix some StrictMode issues. So we have a mixed approach.

The first trick is to colocate the query and variables on each route, like this:

{% gist https://gist.github.com/sibelius/cf29c147a1d25d0194386a6be39d0583 %}

requiring __generated__ is the same as using the graphql`` tag

{% gist https://gist.github.com/sibelius/d31b86b5e14615cafff6546de90e16cc %}

queriesParams gets the variables based on the match params. query lets us fetch the route's data dependencies before rendering the component; this also lets us prefetch code and data, following the render-as-you-fetch React pattern.

*Prefetching Relay queries per route*

We use matchRoutes from react-router-config to find which routes have matched; it can be more than one, as we have nested routes. After that we fetch all queries using Relay's fetchQuery (https://relay.dev/docs/en/fetch-query):

{% gist https://gist.github.com/sibelius/f55ac3493ea52d9dd821f9d8042bff34 %}

## Rendering with Relay store data

{% gist https://gist.github.com/sibelius/850b3c4a78fbc212409a67b0aad77e62 %}

All queries made using fetchQuery store their data in a Relay Environment, and we are going to use it in our RelayEnvironmentProvider. The trick here is to always use the correct fetchPolicy, store-and-network (https://relay.dev/docs/en/query-renderer#props), on the QueryRenderer, so it reuses the Environment data instead of sending another request.

*Making RelayEnvironment work on both client and server*

{% gist https://gist.github.com/sibelius/02f50e9bb4fdb6f79b33d9dad2193819 %}

On the server, we create a new Relay Environment per request, so we don't leak one user's data to another user.
On the client we reuse the Environment and seed the store with records if they are available. We store all Relay Store records inside window.__RELAY_PAYLOADS__, so we can hydrate the Relay store on the client and avoid requesting the same data again.

{% gist https://gist.github.com/sibelius/d2af99af6d4bee9789498a8a8f72c669 %}

On the client we create the Relay Store like this:

{% gist https://gist.github.com/sibelius/b91471eed37071a9bdac82542136a9b9 %}

After using __RELAY_PAYLOADS__ we remove it from the window.

---

## Where are we?

After all this work, we have a nice first render without a loading state. However, this is not enough, as we have some private routes that also need to be rendered properly.

*Fixing authentication (localStorage)*

Most client-side apps use localStorage to manage authentication, as session storage looks like a bunch of work, and cookies look like an outdated and "complex" solution. However, when you want to SSR authed routes you can't rely on localStorage, as it does not exist on the server.

The first thing to do is to make your GraphQL server set httpOnly authentication cookies after a login/signup process:

{% gist https://gist.github.com/sibelius/8aa078412d1bfa4094d1f96acb63d7e1 %}

After this, you need to modify your Relay Network Layer (https://medium.com/entria/relay-modern-network-deep-dive-ec187629dfd3) fetch call to use credentials: 'include'. This will send cookies to your server automatically (magic), but it will break if your frontend and server are on different domains (dammit CORS). You can fix CORS by using a proxy on your SSR server to "fake" that your GraphQL server is on the same domain as your frontend. In production, you can use nginx to fix this.

## ExecuteEnvironment

ExecuteEnvironment will help you check whether your code is running on the server or the client:

{% gist https://gist.github.com/sibelius/09a86816326b630e222c84fb8d7291ed %}

This is the same as (or similar to) the ExecuteEnvironment in the Relay codebase.
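A minimal version of such a check could look roughly like this (a sketch in the spirit of Relay's internal helper; the actual code lives in the gist above):

```javascript
// Hypothetical sketch of an isomorphic environment check. On the server
// `window` is undefined, so `canUseDOM` tells us whether it is safe to
// touch browser-only APIs (document, localStorage, etc.).
const canUseDOM = !!(
  typeof window !== 'undefined' &&
  window.document &&
  window.document.createElement
);

const ExecuteEnvironment = {
  canUseDOM,
  isServer: !canUseDOM,
};
```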
*Fixing hostname*

To fix some isomorphic problems, like checking what the hostname is, I've come up with a global.ssr object that contains some helpers:

{% gist https://gist.github.com/sibelius/2f1e58a2b362012889187c284750fb5e %}

After that we can have an isomorphic getDomainName like this:

{% gist https://gist.github.com/sibelius/5cb125cc24f16eaa51c41b8206318d25 %}

---

# After Thoughts

Is that all, folks? I don't think so. There are still some issues that need to be solved to improve this SSR approach:

- deciding whether to render a mobile or a desktop version on SSR
- fixing head tags and react-helmet (https://twitter.com/sseraphini/status/1232726960494780416); it looks like this is not so easy in React \o/
- a useIsClient hook to defer rendering to the client side only

{% gist https://gist.github.com/sibelius/65693a8e3fda3e5bd39a3ec0c4343a08 %}

- using @defer to avoid fetching too much data on the server
- checking the new React streaming API

---

This write-up doesn't cover all the details; ping me on Twitter to discuss it further (https://twitter.com/sseraphini)

You can also learn more about Relay using my open-sourced course (https://github.com/sibelius/relay-modern-course)

You can watch me demo some cool Relay features at React Europe here https://twitter.com/ReactEurope/status/1226951417002446849, and you can play with the demo here https://react-europe-relay-workshop.now.sh/

If you want more hands-on Relay, check out the React Europe Relay Workshop (https://twitter.com/reacteurope/status/1194908997452795904?s=21), where I'll show all this and more advanced Relay patterns.

medium version: https://medium.com/@sibelius/adding-server-side-rendering-to-a-relay-production-app-8df64495aebf?postPublishedType=repub
sibelius
269,923
22 Dart Interview Questions (ANSWERED) Flutter Dev Must Know in 2020
22 Dart Interview Questions (ANSWERED) Flutter Dev Must Know in 2020
0
2020-02-27T07:44:49
https://www.fullstack.cafe/blog/dart-interview-questions
flutter, dart, career, interview
---
title: 22 Dart Interview Questions (ANSWERED) Flutter Dev Must Know in 2020
published: true
description: 22 Dart Interview Questions (ANSWERED) Flutter Dev Must Know in 2020
tags: #flutter #dart #career #interview
canonical_url: https://www.fullstack.cafe/blog/dart-interview-questions
cover_image: https://images.pexels.com/photos/262438/pexels-photo-262438.jpeg?auto=compress&cs=tinysrgb&dpr=2&w=500
---

Dart is an open-source, purely object-oriented, optionally typed, class-based language with excellent support for functional as well as reactive programming. Dart was the fastest-growing language between 2018 and 2019, with usage up a massive 532%. Follow along and check 22 common Dart interview questions Flutter and mobile developers should be prepared for in 2020.

> Originally published on [FullStack.Cafe - Don't F*ck Up Your Next Tech Interview](https://www.fullstack.cafe/blog/redis-interview-questions)

### Q1: What is Dart and why does Flutter use it?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐

**Dart** is an *object-oriented*, *garbage-collected* programming language that you use to develop Flutter apps. It was also created by Google, but is open source, and has a community inside and outside of Google.

Dart was chosen as the language of **Flutter** for the following reasons:

- Dart is **AOT** (Ahead Of Time) compiled to fast, predictable, native code, which allows almost all of Flutter to be written in Dart. This not only makes Flutter fast, virtually everything (including all the widgets) can be customized.
- Dart can also be **JIT** (Just In Time) compiled for exceptionally fast development cycles and a game-changing workflow (including Flutter's popular sub-second stateful hot reload).
- Dart allows Flutter to avoid the need for a separate declarative layout language like *JSX* or *XML*, or separate visual interface builders, because Dart's declarative, programmatic layout is easy to read and visualize.
And with all the layout in one language and in one place, it is easy for Flutter to provide advanced tooling that makes layout a snap.

🔗 **Source:** [hackernoon.com](https://hackernoon.com/why-flutter-uses-dart-dd635a054ebf)

### Q2: What is Fat Arrow Notation in Dart and when do you use it?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐

The fat arrow syntax is simply shorthand for returning an expression, similar to `(){ return expression; }`. The fat arrow is for returning a single line; braces are for returning a code block. Only an expression—not a statement—can appear between the arrow (`=>`) and the semicolon (`;`). For example, you can't put an *if* statement there, but you can use a *conditional* expression:

```dart
// Normal function
void function1(int a) {
  if (a == 3) {
    print('arg was 3');
  } else {
    print('arg was not 3');
  }
}

// Arrow Function
void function2(int a) => print('arg was ${a == 3 ? '' : 'not '}3');
```

🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/questions/51868395/flutter-dart-difference-between-and/51869508)

### Q3: Differentiate between required and optional parameters in Dart

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐

**Required Parameters**

Dart required parameters are the arguments passed to a function that the function or method needs in full to complete its code block.

```dart
findVolume(int length, int breath, int height) {
  print('length = $length, breath = $breath, height = $height');
}

findVolume(10,20,30);
```

**Optional Parameters**

- Optional parameters are defined at the end of the parameter list, after any required parameters.
- In Flutter/Dart, there are 3 types of optional parameters:
  - Named - Parameters wrapped by `{ }` - eg. `getUrl(int color, {int favNum})`
  - Positional - Parameters wrapped by `[ ]` - eg. `getUrl(int color, [int favNum])`
  - Default - Assigning a default value to a parameter. - eg.
`getUrl(int color, [int favNum = 6])`

🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/questions/13264230/what-is-the-difference-between-named-and-positional-parameters-in-dart)

### Q4: Differentiate between named parameters and positional parameters in Dart?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐

Both named and positional parameters are kinds of optional parameters:

**Optional Positional Parameters:**

- A parameter wrapped by `[ ]` is a **positional** optional parameter.

```dart
getHttpUrl(String server, String path, [int port=80]) {
  // ...
}
```

- You can specify multiple positional parameters for a function:

```dart
getHttpUrl(String server, String path, [int port=80, int numRetries=3]) {
  // ...
}
```

In the above code, `port` and `numRetries` are optional and have default values of 80 and 3 respectively. You can call `getHttpUrl` with or without the third parameter. The optional parameters are _positional_ in that you can't omit `port` if you want to specify `numRetries`.

**Optional Named Parameters:**

- A parameter wrapped by `{ }` is a **named** optional parameter.

```dart
getHttpUrl(String server, String path, {int port = 80}) {
  // ...
}
```

- You can specify multiple named parameters for a function:

```dart
getHttpUrl(String server, String path, {int port = 80, int numRetries = 3}) {
  // ...
}
```

You can call `getHttpUrl` with or without the third parameter. You **must** use the parameter name when calling the function.

- Because named parameters are referenced by name, they can be used in an order different from their declaration.

```dart
getHttpUrl('example.com', '/index.html', numRetries: 5, port: 8080);
getHttpUrl('example.com', '/index.html', numRetries: 5);
```

🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/questions/13264230/what-is-the-difference-between-named-and-positional-parameters-in-dart)

### Q5: What are Streams in Flutter/Dart?
> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐

- Asynchronous programming in Dart is characterized by the `Future` and `Stream` classes.
- A **stream** is a sequence of *asynchronous* events. It is like an *asynchronous Iterable*—where, instead of getting the next event when you ask for it, the stream tells you that there is an event when it is ready.
- *Streams* can be created in many ways but they are all used in the same way: with the *asynchronous for loop* (**await for**). E.g.:

```dart
Future<int> sumStream(Stream<int> stream) async {
  var sum = 0;
  await for (var value in stream) {
    sum += value;
  }
  return sum;
}
```

- Streams provide an *asynchronous* sequence of data.
- Data sequences include user-generated events and data read from files.
- You can process a stream using either **await for** or `listen()` from the *Stream API*.
- Streams provide a way to respond to errors.
- There are two kinds of streams: **single subscription** or **broadcast**.

🔗 **Source:** [dart.dev](https://dart.dev/tutorials/language/streams)

### Q6: Explain the different types of Streams?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐

There are two kinds of streams:

1. **Single subscription streams**
  - The most common kind of stream.
  - It contains a *sequence of events* that are parts of a larger whole. Events need to be delivered in the correct order and without missing any of them.
  - This is the kind of stream you get when you read a file or receive a web request.
  - Such a stream can only be listened to once. Listening again later could mean missing out on initial events, and then the rest of the stream makes no sense.
  - When you start listening, the data will be fetched and provided in chunks.
2. **Broadcast streams**
  - It is intended for individual messages that can be handled one at a time. This kind of stream can be used for mouse events in a browser, for example.
  - You can start listening to such a stream at any time, and you get the events that are fired while you listen.
- More than one listener can listen at the same time, and you can listen again later after canceling a previous subscription. 🔗 **Source:** [dart.dev](https://dart.dev/tutorials/language/streams) ### Q7: What are Null-aware operators? > Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐ - Dart offers some handy operators for dealing with values that might be null. - One is the **??=** assignment operator, which assigns a value to a variable only if that variable is currently null: ```dart int a; // The initial value of a is null. a ??= 3; print(a); // <-- Prints 3. a ??= 5; print(a); // <-- Still prints 3. ``` - Another null-aware operator is ??, which returns the expression on its left unless that expression’s value is null, in which case it evaluates and returns the expression on its right: ```dart print(1 ?? 3); // <-- Prints 1. print(null ?? 12); // <-- Prints 12. ``` 🔗 **Source:** [dart.dev](https://dart.dev/codelabs/dart-cheatsheet) ### Q8: How do you check if an async void method is completed in Dart? > Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐ Changing the return type to `Future<void>`. ```dart Future<void> save(Folder folder) async { ..... } ``` Then you can do `await save(...);` or `save().then(...);` 🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/a/59864791) ### Q9: How to declare async function as a variable in Dart? > Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐ Async functions are normal functions with some sugar on top. The function variable type just needs to specify that it returns a Future: ```dart class Example { Future<void> Function() asyncFuncVar; Future<void> asyncFunc() async => print('Do async stuff...'); Example() { asyncFuncVar = asyncFunc; asyncFuncVar().then((_) => print('Hello')); } } void main() => Example(); ``` 🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/a/59798754) ### Q10: How to duplicate repeating items inside a Dart list? 
> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐

Consider the code:

```dart
final List<Ball> _ballList = [Ball(), Ball(), Ball(), Ball(), Ball()];
```

What can be done in order not to repeat `Ball()` multiple times?

---

Use `collection-for` if we need different instances of `Ball()`:

```dart
final List<Ball> _ballList = [
  for (var i = 0; i < 5; i += 1) Ball(),
];
```

If we need the same instance of `Ball()`, `List.filled` should be used:

```dart
final List<Ball> _ballList = List<Ball>.filled(5, Ball());
```

🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/a/59739135)

### Q11: How is whenComplete() different from then() in Future?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐

* `.whenComplete` will fire a function whether the *Future* completes with an error or not, while `.then` returns a new *Future* which is completed with the result of the call to `onValue` (if this future completes with a value) or to `onError` (if this future completes with an error).
* `.whenComplete` is the asynchronous equivalent of a "**finally**" block.

🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/questions/55381236/difference-between-then-and-whencompleted-methods-when-working-with-future/55382389)

### Q12: How to get the difference of two lists in Flutter/Dart?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐

Consider you have two lists, `[1,2,3,4,5,6,7]` and `[3,5,6,7,9,10]`. How would you get the difference as output, e.g. `[1, 2, 4]`?

---

You can do something like this:

```dart
List<double> first = [1,2,3,4,5,6,7];
List<double> second = [3,5,6,7,9,10];
List<double> output = first.where((element) => !second.contains(element)).toList();
```

Alternative answer:

```dart
List<double> first = [1,2,3,4,5,6,7];
List<double> second = [3,5,6,7,9,10];
List<double> output = [];
first.forEach((element) {
  if(!second.contains(element)){
    output.add(element);
  }
});
// at this point, the output list should hold the answer
```

Note that in both cases you need to loop over the **larger list**.
🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/questions/59563964/flutter-difference-between-timer-and-animationcontroller-uses/59564169#59564169)

### Q13: Explain async, await in Flutter/Dart?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐⭐

*Asynchronous* operations let your program complete work while waiting for another operation to finish. Here are some common asynchronous operations:

- Fetching data over a network.
- Writing to a database.
- Reading data from a file.

To perform asynchronous operations in Dart, you can use the `Future` class and the `async` and `await` keywords. The `async` and `await` keywords provide a declarative way to define asynchronous functions and use their results. Remember these two basic guidelines when using `async` and `await`:

- To define an async function, add `async` before the function body.
- The `await` keyword works only in `async` functions.

An `async` function runs synchronously until the first `await` keyword. This means that within an `async` function body, all synchronous code before the first `await` keyword executes immediately. Consider an example:

```dart
import 'dart:async';

class Employee {
  int id;
  String firstName;
  String lastName;

  Employee(this.id, this.firstName, this.lastName);
}

void main() async {
  print("getting employee...");
  var x = await getEmployee(33);
  print("Got back ${x.firstName} ${x.lastName} with id# ${x.id}");
}

Future<Employee> getEmployee(int id) async {
  // Simulate what a real service call delay may look like by delaying 2 seconds
  await Future<Employee>.delayed(const Duration(seconds: 2));
  // and then return an employee - let's pretend we grabbed this out of a database
  var e = new Employee(id, "Joe", "Coder");
  return e;
}
```

🔗 **Source:** [dart.dev](https://dart.dev/codelabs/async-await)

### Q14: What is Future in Flutter/Dart?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐⭐

- A **Future** is used to represent a potential value, or error, that will be available at some time in the future.
Receivers of a *Future* can register callbacks that handle the value or error once it is available. For example:

```dart
Future<int> future = getFuture();
future.then((value) => handleValue(value))
      .catchError((error) => handleError(error));
```

- If a future doesn't produce a usable value, then the future's type is `Future<void>`.
- A future represents the result of an *asynchronous* operation, and can have two states:
  1. **Uncompleted**: When you call an *asynchronous* function, it returns an uncompleted future. That future is waiting for the function's *asynchronous* operation to finish or to throw an error.
  2. **Completed**: If the *asynchronous* operation succeeds, the future completes with a value. Otherwise it completes with an error.

🔗 **Source:** [api.dart.dev](https://api.dart.dev/stable/2.7.0/dart-async/Future-class.html)

### Q15: What are the similarities and differences of Future and Stream?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐⭐

**Similarities:**

- `Future` and `Stream` both work *asynchronously*.
- Both have some potential value.

**Differences:**

- A `Stream` is a combination of **Futures**.
- A `Future` has only one response, but a `Stream` can have any number of **responses**.

🔗 **Source:** [medium.com](https://medium.com/flutter-community/understanding-streams-in-flutter-dart-827340437da6)

### Q16: How does Dart AOT work?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐⭐

- Dart source code is translated into assembly files, and the assembly files are then compiled into binary code for different architectures by the assembler.
- For mobile applications the source code is compiled for multiple processors (ARM, ARM64, x64) and for both platforms - Android and iOS. This means there are multiple resulting binary files, one for each supported processor and platform combination.

🔗 **Source:** [flutterbyexample.com](https://flutterbyexample.com/stateful-widget-lifecycle/)

### Q17: What is the difference between the operators "??" and "?."?
> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐⭐

**??**

- It is a **null-aware operator** which returns the expression on its left unless that expression's value is null, in which case it evaluates and returns the expression on its right:

```dart
print(1 ?? 3); // <-- Prints 1.
print(null ?? 12); // <-- Prints 12.
```

**?.**

- It is a **conditional property access** which is used to guard access to a property or method of an object that might be null; put a question mark (?) before the dot (.).
- You can chain multiple uses of `?.` together in a single expression:

```dart
myObject?.someProperty?.someMethod()
```

The preceding code returns null (and never calls `someMethod()`) if either `myObject` or `myObject.someProperty` is null.

🔗 **Source:** [flutter.dev](https://flutter.dev/docs/testing/build-modes)

### Q18: What's the difference between async and async* in Dart?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐⭐

- `async` gives you a `Future` while `async*` gives you a `Stream`.
- You add the `async` keyword to a function that does some work that might take a long time. It returns the result wrapped in a `Future`.
- You add the `async*` keyword to make a function that returns a bunch of future values one at a time. The results are wrapped in a Stream.
- `async*` always returns a `Stream` and offers some syntactic sugar to emit a value through the `yield` keyword.

🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/a/55397133)

### Q19: How to compare two dates that are constructed differently in Dart?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐⭐

You can do that by converting the other date into `utc` and then comparing them with the `isAtSameMomentAs` method.
```dart
void main(){
  String dateTime = '2020-02-03T08:30:00.000Z';
  int year = 2020;
  int month = 2;
  int day = 3;
  int hour = 8;
  int minute = 30;
  var dt = DateTime.utc(year, month, day, hour, minute);
  print(DateTime.parse(dateTime).isAtSameMomentAs(dt));
}
```

🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/a/60033249)

### Q20: What does "non-nullable by default" mean in Dart?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐⭐

- **Non-nullable by default** means that any variable that is declared normally **cannot** be `null`.
- Any operation accessing the variable before it has been assigned is **illegal**.
- Additionally, assigning `null` to a non-nullable variable is also not allowed.

```dart
void main() {
  String word;
  print(word); // illegal

  word = 'Hello, ';
  print(word); // legal
}
```

```dart
void main() {
  String word;
  word = null; // forbidden
  word = 'World!'; // allowed
}
```

🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/questions/60068435/what-is-nullability-in-dart-non-nullable-by-default/60068436#60068436)

### Q21: What does a class with a method named ._() mean in Dart/Flutter?

> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐⭐

- In Dart, if the leading character of a function/constructor name is an underscore, then it is private to the library.
- `Class._();` is a named constructor (another example might be the copy constructor on some object in *Flutter*: `AnotherClass.copy(...);`).
- The `Class._();` isn't necessary unless you don't want `Class` to ever be accidentally instantiated using the implicit default constructor.

🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/questions/57878112/a-class-method-named-function-in-dart)

### Q22: How do you convert a List into a Map in Dart?
> Topic: **Flutter**<br/>Difficulty: ⭐⭐⭐⭐ You can use [Map.fromIterable](https://api.dartlang.org/stable/dart-core/Map/Map.fromIterable.html): ```dart var result = Map.fromIterable(l, key: (v) => v[0], value: (v) => v[1]); ``` or _collection-for_ (starting from Dart 2.3): ```dart var result = { for (var v in l) v[0]: v[1] }; ``` 🔗 **Source:** [stackoverflow.com](https://stackoverflow.com/questions/59563964/flutter-difference-between-timer-and-animationcontroller-uses/59564169#59564169) > *Thanks 🙌 for reading and good luck on your interview!* <br/>*Please share this article with your fellow devs if you like it!* <br/>*Check more FullStack Interview Questions & Answers on 👉 [www.fullstack.cafe](https://www.fullstack.cafe)*
aershov24
269,963
Cross-Platform Apps vs Native Apps
Native apps are the ones built for a specific operating system, like Android, iOS, or Windows. Cros...
0
2020-02-27T08:36:11
https://dev.to/pagepro_agency/cross-platform-apps-vs-native-apps-gn1
reactnative, mobile, apps
<p><strong>Native apps</strong> are the ones built for a specific operating system, like <a href="https://www.android.com/" target="_blank" rel="noreferrer noopener" aria-label=" (opens in a new tab)">Android</a>, <a href="https://www.apple.com/ios/">iOS</a>, or <a href="https://www.microsoft.com/windows/" target="_blank" rel="noreferrer noopener" aria-label=" (opens in a new tab)">Windows</a>. </p> <p><strong>Cross-platform apps</strong> are the ones built in web languages (like <a href="https://www.javascript.com/" target="_blank" rel="noreferrer noopener" aria-label=" (opens in a new tab)">JavaScript</a>) that can later be compiled (e.g. through <a href="https://pagepro.co/react-native-development.html" target="_blank" rel="noreferrer noopener" aria-label=" (opens in a new tab)">React Native</a>) into native apps able to work on any operating system and device.</p> <p><a href="https://whatwebcando.today/" target="_blank" rel="noreferrer noopener" aria-label=" (opens in a new tab)">The web can do more each and every day</a>, which means that for a typical user there is almost no difference between using a native app and a cross-platform one. Yet, both still have pros and cons that can make one a better fit than the other.</p> <p><a rel="noreferrer noopener" aria-label=" (opens in a new tab)" href="https://pagepro.co/blog/2020/02/10/how-to-build-a-mobile-app-part-1-before-you-even-pay-for-anything/" target="_blank">The ultimate choice, however, depends mostly on your business type</a> and the result you want to achieve.
Let’s take a look a bit deeper.</p> <hr class="separator"/> <h2><b> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Native development pros</b></h2> <img src="https://pagepro.co/blog/wp-content/uploads/2020/02/Screen-Shot-2020-02-20-at-09.46.12.png" alt="native apps logos"/> <hr class="separator"/> <h3>Native apps are better performance-wise</h3> <img src="https://pagepro.co/blog/wp-content/uploads/2019/11/sebastiaan-stam-4lqMd_XBlfU-unsplash-1024x539.jpg" alt="car speedometer"/> <p>Even if the web is able to do more each day, cross-platform apps are still not able to run at the same performance level as native apps in their natural environment.</p> <p>Native apps are made entirely to <strong>run on a specific operating system</strong>, which means they are able to use all the blessings and potential of that system to <strong>maximize app functionality</strong> and deliver the <strong>ultimate user experience</strong> as a result.</p> <h3>Native apps are better for designs and interactions</h3> <figure class="wp-block-image size-large"><img src="https://pagepro.co/blog/wp-content/uploads/2020/02/designs.png" alt="people playing with light colors"/></figure> <p>The previous point is strictly related to this one.</p> <p>If you consider <strong>complicated designs or advanced interactions</strong> as a crucial part of your business advantage, you should definitely go for native development.</p> <p>Native apps are like fishes in the water. They cannot live anywhere else, but they live like no other animal in the water.
And this is why native development makes it possible to deliver a truly outstanding user experience.</p> <hr class="separator"/> <h2><b> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Cross-platform development pros</b></h2> <img src="https://pagepro.co/blog/wp-content/uploads/2020/02/Screen-Shot-2020-02-20-at-09.46.35.png" alt="cross-platform frameworks logos"/> <hr class="separator"/> <h3>Cross-platform apps work everywhere</h3> <img src="https://pagepro.co/blog/wp-content/uploads/2020/02/everywhere.png" alt="smartwatch on a laptop next to phone"/> <p>If native apps are like fishes, cross-platform apps are like ducks. They can swim, walk, and fly.</p> <p>This is a typical scenario where the gift is a curse as well.</p> <p>Although cross-platform apps cannot use the full potential of specific operating systems, they are built to run regardless of them.</p> <p>That means <strong>you don’t have to develop two apps</strong> (one Android, and one iOS). Instead, you can <strong>use one technology</strong> (for example <a href="https://pagepro.co/blog/2020/02/12/when-does-it-make-sense-to-use-react-native/" target="_blank" rel="noreferrer noopener" aria-label=" (opens in a new tab)">React Native</a>) to build an app ready to work on both Android and iOS and, more than that, <strong>on every device</strong>.</p> <h3>Cross-platform apps are better price-wise</h3> <img src="https://pagepro.co/blog/wp-content/uploads/2020/02/price.png" alt="price tags"/> <p>Don’t get me wrong. <strong>It’s not a typical price-performance relation.</strong></p> <p>To deliver a great user experience you don’t have to take native development as a no-brainer. Cross-platform apps are doing great in that matter too, and if you put a price tag next to that, you may just fall in love with cross-platform apps.</p> <p>There are quite a few things that make cross-platform development cheaper.
My friend wrote an article that you can use as an example of <a href="https://pagepro.co/blog/2020/01/16/how-react-native-can-cut-your-development-costs/" target="_blank" rel="noreferrer noopener" aria-label=" (opens in a new tab)">How React Native Can Cut Your Development Costs</a>. </p> <p>In a more general view, you need to be aware that the <strong>popularity of web languages</strong> is constantly growing, which means it is easier to get a cross-platform developer, <strong>it's easier to fix the code quickly</strong>, as many bugs have already been fixed by the community, and demand for these developers has been growing for years.</p> <p>More than that, think of the <strong>learning curve</strong>. It is much easier for Android/iOS developers to learn, for example, React Native than for a React Native developer to learn Android/iOS development.</p> <p>Also, in most business cases, native <strong>hyper-performance is simply not needed</strong>.</p> <p>As long as extreme designs or interactions are not crucial to your business (which is probably more than 80% of business cases worldwide), you don’t need extreme native performance. Cross-platform performance still delivers a <strong>great user experience for a much better price</strong>.</p> <h3>Cross-platform apps have a better time to market</h3> <img src="https://pagepro.co/blog/wp-content/uploads/2020/02/peter-dawn-mM-L0yx5LcQ-unsplash.jpg" alt="kitchen chef presenting a dish in his hands"/> <p>Native development is time-consuming. Or at least in comparison with cross-platform development.
</p> <p>You may not be able to have an outstanding user experience from the beginning, but <strong>you can arrive on the market much faster to test your MVP</strong>, get feedback and adapt changes accordingly without the need for a big investment.</p> <p>That way, you protect yourself from building and investing in something that nobody will use in the real world.</p> <h3>Cross-platform apps reach more people</h3> <img src="https://pagepro.co/blog/wp-content/uploads/2020/01/mauro-mora-31-pOduwZGE-unsplash-1024x682.jpg" alt="people walking down the road"/> <p>I prefer not to repeat myself, but this is an <strong>important business advantage</strong> related to multi-platform support.</p> <p>Native apps can run on only one operating system, which means you have to build two apps to cover the mass market. In other words, building a native app makes more sense if you want to reach only people using, for example, iOS.</p> <p>With cross-platform development, <strong>you are able to reach the users of all operating systems</strong> with one app.</p> <hr class="separator"/> <h2><b> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Tools to use</b></h2> <table class=""><tbody><tr><td class="has-text-align-center" data-align="center"><strong>Native development</strong></td><td class="has-text-align-center" data-align="center"><strong>Cross-platform development</strong></td></tr><tr><td class="has-text-align-center" data-align="center"><a rel="noreferrer noopener" href="https://www.jetbrains.com/objc/" target="_blank">AppCode</a><br><a rel="noreferrer noopener" href="https://developer.android.com/studio" target="_blank">Android Studio</a><br><a rel="noreferrer noopener" href="https://developer.apple.com/xcode/" target="_blank">Xcode</a></td><td class="has-text-align-center" data-align="center"><a rel="noreferrer noopener" href="https://react-native.org/" target="_blank">React Native</a><br><a rel="noreferrer
noopener" href="https://flutter.dev/" target="_blank">Flutter</a><br><a rel="noreferrer noopener" href="https://dotnet.microsoft.com/apps/xamarin" target="_blank">Xamarin</a></td></tr></tbody></table> <hr class="separator"/> <h2><b> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Examples of usage</b></h2> <table class=""><tbody><tr><td class="has-text-align-center" data-align="center"><strong>Native development</strong></td><td class="has-text-align-center" data-align="center"><strong>Cross-platform development</strong></td></tr><tr><td class="has-text-align-center" data-align="center"><a rel="noreferrer noopener" href="https://www.angrybirds.com/" target="_blank">Angry Birds</a><br><a rel="noreferrer noopener" href="https://www.waze.com/" target="_blank">Waze</a><br><a rel="noreferrer noopener" href="https://www.uber.com/" target="_blank">Uber</a><br>Pretty much most of the games</td><td class="has-text-align-center" data-align="center"><a rel="noreferrer noopener" href="https://www.instagram.com/" target="_blank">Instagram</a><br><a rel="noreferrer noopener" href="https://www.pinterest.ca/" target="_blank">Pinterest</a> <br><a rel="noreferrer noopener" href="https://aliexpress.com/" target="_blank">AliExpress</a><br><a rel="noreferrer noopener" href="https://www.skype.com/" target="_blank">Skype</a><br><a rel="noreferrer noopener" href="https://about.ubereats.com/" target="_blank">Uber Eats</a></td></tr></tbody></table> <hr class="separator"/> <h2><b> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Summary and a final advice</b></h2> <p>As final advice and a summary: </p> <ul><li><strong>If you are a big company, or a game</strong>, that has a wish to use complicated and demanding designs, or interactions, that can afford two different developers and a designer at the same time – <strong>go native</strong>.</li><li><strong>Other – go for a cross-platform.</strong> Your business will be more 
than fine.</li></ul>
maniekm
269,970
Solving Project Euler Problem 1: Multiples of 3 and 5, Part 1/4
A post by Samed Alhajajla
0
2020-02-27T10:02:46
https://dev.to/samedhaa/solving-project-euler-problem-1-multiples-of-3-and-5-part-1-4-4c36
samedhaa
270,010
Skip the IIFE brackets
Introduction This article is about how you can skip the brackets while creating an IIFE (I...
0
2020-02-27T10:45:21
https://dev.to/miteshkamat27/skip-the-iife-brackets-mop
javascript, todayilearned
### Introduction This article is about how you can skip the brackets while creating an IIFE (Immediately Invoked Function Expression). As we know, this is how we usually write an IIFE: ```javascript (function(){ console.log('calling iife') })() //Output: calling iife ``` Those extra brackets tell the JavaScript parser that the upcoming code is a function expression, not a function declaration. So, we can skip those extra brackets and still make a valid IIFE, as long as something else puts the parser in expression position. Let's give it a try: ```javascript void function(){ console.log('calling iife') }() //Output: calling iife // undefined + function(){ console.log('calling iife') }() //Output: calling iife // NaN - function(){ console.log('calling iife') }() //Output: calling iife // NaN ! function(){ console.log('calling iife') }() //Output: calling iife // true ``` As you can see, I have skipped the brackets and added unary operators to tell the parser that it is a function expression. Cheers !!!
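As a sanity check, the return value of each operator variant can be verified in a short, self-contained sketch (the function bodies here return `undefined`, so each operator ends up applied to `undefined`):

```javascript
// Each unary operator puts the parser in expression position, so the
// function is invoked immediately; the operator is then applied to the
// function's return value (undefined here).
const results = [
  void function () {}(), // void undefined -> undefined
  +function () {}(),     // +undefined     -> NaN
  -function () {}(),     // -undefined     -> NaN
  !function () {}(),     // !undefined     -> true
];
console.log(results); // [ undefined, NaN, NaN, true ]
```

Note that the call `()` binds tighter than the unary operator, so the function runs first and the operator only transforms its result.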
miteshkamat27
270,011
So... what database does Gatsby use?
Recently I started migrating an older Drupal based blog to a combination of Gatsby and Netlify CMS (s...
5,163
2020-02-27T21:42:26
https://dev.to/patricksevat/so-what-database-does-gatsby-use-545f
gatsby
Recently I started migrating an older Drupal-based blog to a combination of Gatsby and Netlify CMS ([source](https://github.com/patricksevat/de-europeanen-jamstack)), both of which I had no prior experience with. In this blog series I'll talk you through the experiences, hurdles and solutions. In part 5 I'll answer one of the questions that kept nagging me during development: what database does Gatsby use to query from GraphQL? ## GraphQL queries from databases right? In my previous understanding, one of the defining features of GraphQL is being able to query efficiently from multiple datasources. I always understood these datasources could be REST APIs, other GraphQL APIs, or databases such as SQL and MongoDB. Thus arose my question: what database does GatsbyJS use? ## No database, but Redux Gatsby actually does not use a database at all. Under the hood everything that is processed by the plugins gets [transformed into `Nodes`](https://www.gatsbyjs.org/docs/data-storage-redux/) and added to a nodes collection in [Redux](https://redux.js.org/). Other collections that exist in Redux are: `pages`, `components`, `schema`, `jobs`, `webpack` and `metadata`. The combination of plugins and actions (such as those defined in `gatsby-node.js` like `createPage()`) will populate these collections. Redux stores these collections *in-memory* during build / dev time, and GraphQL queries run against these collections based on an [automatically generated GraphQL Schema](https://www.gatsbyjs.org/docs/schema-generation/). The in-memory approach also means that data is *not persisted*! A regular database would write its data to a database file on a system. In-memory databases recreate the whole database every time. Gatsby does provide a [caching system](https://www.gatsbyjs.org/docs/build-caching/) to speed up subsequent builds though. 
The schema itself is also stored in Redux [and used to execute GraphQL queries that query other Redux collections](https://github.com/gatsbyjs/gatsby/blob/561d33e2e491d3971cb2a404eec9705a5a493602/packages/gatsby/src/query/query-runner.js#L28-L36). That's quite some Redux-ception! ![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/wu4l05im0gdw8rhk0pw6.gif) There is still one problem to be solved. Redux collections store plain JavaScript objects, but GraphQL queries are written using the GraphQL syntax. The solution is using [sift.js](https://github.com/crcn/sift.js/tree/master). Further reading can be found [here](https://www.gatsbyjs.org/docs/schema-sift/#summary). The in-memory approach does have some [drawbacks at scale (100k+ documents)](https://www.gatsbyjs.org/docs/scaling-issues/)... ## Next up, LokiJS To address the scaling issues, the Gatsby team is investigating replacing Redux with the [LokiJS library](http://techfort.github.io/LokiJS/). This storage / retrieval mechanism is [still hidden behind a feature flag](https://www.gatsbyjs.org/docs/scaling-issues/#gatsby_db_nodes). LokiJS is still an in-memory database, but promises better performance. If you'd like to know more, take a peek at the [source code](https://github.com/gatsbyjs/gatsby/blob/master/packages/gatsby/src/db/nodes.js) and follow the code 🔎. That's it! Hopefully you understand now where the data comes from that is being queried by GraphQL! --- This blog is part of a series on how I migrated away from a self-hosted Drupal blog to a modern JAM stack with Gatsby and Netlify CMS.
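To make the sift.js step concrete, here is a minimal, hypothetical sketch of the idea (not sift.js's actual implementation, which supports far more operators): a Mongo-style query object is turned into a plain predicate that can filter node-like objects held in memory.

```javascript
// Illustration only: compile a tiny subset of Mongo-style queries
// (the kind sift.js handles) into a predicate for Array.prototype.filter.
function toPredicate(query) {
  return (node) =>
    Object.entries(query).every(([field, cond]) => {
      if (cond && typeof cond === "object") {
        // Operator form: { field: { $gt: 100 } }
        return Object.entries(cond).every(([op, value]) => {
          if (op === "$eq") return node[field] === value;
          if (op === "$gt") return node[field] > value;
          if (op === "$in") return value.includes(node[field]);
          throw new Error(`unsupported operator: ${op}`);
        });
      }
      // Shorthand equality form: { field: value }
      return node[field] === cond;
    });
}

// Hypothetical node objects, loosely shaped like Gatsby nodes:
const nodes = [
  { internal: { type: "MarkdownRemark" }, wordCount: 1200 },
  { internal: { type: "ImageSharp" }, wordCount: 0 },
];
const longPosts = nodes.filter(toPredicate({ wordCount: { $gt: 100 } }));
console.log(longPosts.length); // 1
```

The real pipeline additionally translates the GraphQL filter arguments into this Mongo-style shape before handing them to sift.js.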
patricksevat
270,057
Altering a column: null to not null
A post by masoud
0
2020-02-27T12:08:58
https://dba.stackexchange.com/a/152390
masoudzayyani
270,076
Yet another way to change the page title in Blazor, and more.
This article will show you another way to change the page title in a Blazor app.
0
2020-02-27T12:36:49
https://dev.to/j_sakamoto/yet-another-way-to-changing-the-page-title-in-blazor-and-more-43k
blazor, aspnetcore, dotnet
--- title: Yet another way to change the page title in Blazor, and more. published: true description: This article will show you another way to change the page title in a Blazor app. tags: blazor, aspnetcore, dotnet --- # Changing the page title in Blazor There is already a good article about this topic on the dev.to site. _**["Changing the page Title in Blazor (Client-side)"](https://dev.to/supachris28/changing-the-page-title-in-blazor-client-side-880)** by [Chris Key](https://dev.to/supachris28)_ However, in this article I'll show you another way to change the page title in a Blazor app. # "Blazor Head Element Helper" NuGet package I wrote a Blazor component in C# that changes the page title with the same technique as Chris's article, and I packaged it as a NuGet package so that anybody can reuse it without writing chore JavaScript code. That NuGet package - "Blazor Head Element Helper" - is available from nuget.org. **["Blazor Head Element Helper" NuGet package](https://www.nuget.org/packages/Toolbelt.Blazor.HeadElement/)** To change the page title, add this package to your project, register the service in the DI container, and add the `Title` component markup to your component. ```html <!-- This is your .razor file --> <Title>Hello World!</Title> ``` (For more detailed instructions, please see the [README on the GitHub project page](https://github.com/jsakamoto/Toolbelt.Blazor.HeadElement/#blazor-head-element-helper-).) Whenever your component renders, the `Title` component rewrites the page title to "Hello World!". Of course, any client-side SPA framework, including Blazor, needs to call a JavaScript code snippet like `document.title = 'foo';` to change the title of the page document. The "Blazor Head Element Helper" NuGet package is no exception. A JavaScript code snippet like `document.title = 'foo';` exists deep inside this library. 
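Conceptually, the JavaScript side of such a library boils down to a one-line assignment invoked via JS interop. Here is a hypothetical sketch (not the package's actual source) that uses a plain object to stand in for the browser's `document`:

```javascript
// Hypothetical helper: the .NET side would invoke a function like this
// via JS interop. In the browser, `doc` would be the global `document`.
function setDocumentTitle(doc, title) {
  doc.title = title; // same effect as `document.title = title;`
  return doc.title;
}

// Simulate outside the browser with a stand-in object:
const fakeDocument = { title: "" };
console.log(setDocumentTitle(fakeDocument, "Hello World!")); // Hello World!
```

Everything else the package does (component lifecycle hooks, pre-rendering support) exists to decide *when* to make this call safely.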
# Data binding As Chris's article shows us, "Blazor Head Element Helper" can also change the page title dynamically. For example, to show the current counter value in the title of the "Counter" page, you can do it as below. ```html <Title>Counter(@currentCount) - WebAssembly App</Title> ``` Yes, you can do it with the simple, usual Razor syntax markup that starts with `@`. ![fig.1](https://raw.githubusercontent.com/jsakamoto/Toolbelt.Blazor.HeadElement/master/.assets/fig1.png) # Blazor Server app support To change the page title in a Blazor Server app (not a Blazor WebAssembly app, a.k.a. "Server-side Blazor"), the basic concept is the same as in the Blazor WebAssembly case. That is, you need to call a JavaScript code snippet like `document.title = 'foo';` via the JavaScript interop feature of Blazor. However, there are some points to consider, such as the fact that calling JavaScript code is prohibited in `OnInitialized` during the server-side pre-rendering process. ```csharp protected override void OnInitialized() { // This code will cause an exception in the server-side pre-rendering process :( JSRuntime.InvokeVoidAsync(...); } ``` When implementing "Blazor Head Element Helper", I considered and resolved these points for running on a Blazor Server app. Therefore, "Blazor Head Element Helper" can be used safely in both Blazor WebAssembly apps (client-side Blazor) and Blazor Server apps (server-side Blazor). # Server-side pre-rendering support Changing the page title via JavaScript works only in a web browser. This means that calling JavaScript code like `document.title = 'foo';` has no effect on crawlers such as Twitter's, Facebook's, or others. Those crawlers just parse the plain HTML text that the web server returned. Fortunately, "Blazor Head Element Helper" supports the server-side pre-rendering scenario; it can be integrated with the server-side pre-rendering process. 
To do this, add the required NuGet package "Toolbelt.Blazor.HeadElement.ServerPrerendering" to your server-side project, and hook into the server-side pre-rendering process like the code below. ```csharp using Toolbelt.Blazor.Extensions.DependencyInjection; // <- Add this, and... ... public class Startup { ... public void ConfigureServices(IServiceCollection services) { services.AddHeadElementHelper(); // <- Don't miss this line, and... ... public void Configure(IApplicationBuilder app) { app.UseHeadElementServerPrerendering(); // <- Add this. ... app.UseStaticFiles() ... ``` After wiring this up, the web server's response includes the page title that the `Title` component rendered. ![fig.2](https://raw.githubusercontent.com/jsakamoto/Toolbelt.Blazor.HeadElement/master/.assets/fig2.png) You can use this technique for a Blazor WebAssembly app, too; you will need an ASP.NET Core hosted server implementation in that case. If you want to do this, please also see the following article to learn more. _**["Prerendering a Client-side Blazor Application"](https://dev.to/chrissainty/prerendering-a-client-side-blazor-application-bf6)** by [Chris Sainty](https://dev.to/chrissainty)_ # Changing the "meta" and "link" elements in Blazor The page title is just one of the elements that can live in the `head` element. Therefore, I finally supported not only the `title` element but also the `meta` and `link` elements. This means, for example, that you can change the favicon dynamically, or you can support OGP. And these behaviors, too, can be implemented with simple, usual Razor syntax markup, like this. ```html <Link Rel="icon" Href="@($"/favicons/{GetFaviconName()}")" /> ``` ![fig.3](https://raw.githubusercontent.com/jsakamoto/Toolbelt.Blazor.HeadElement/master/.assets/fig3.gif) # Conclusion The "Blazor Head Element Helper" NuGet package gives your Blazor app the features below, without writing chore code from the ground up. - Change the page title. - Change `meta` and `link` elements, such as for favicons or OGP. 
- Support both Blazor WebAssembly and Blazor Server. - Support server-side pre-rendering. I hope this package improves your Blazor programming life. Happy coding :)
j_sakamoto
270,643
VS Code extensions for programmers
VS Code is a code editor that has been gaining more and more ground among programmers, regardless...
0
2020-02-28T12:11:39
https://dev.to/jonathanlamim/extensoes-do-vs-code-para-programadores-2l1j
vscode, dev, webdev, programming
[VS Code](https://code.visualstudio.com/) is a code editor that has been gaining more and more ground among programmers, regardless of the language they use, because it is versatile, lightweight, and packed with useful built-in features that speed up application development; on top of all that, it is open source and has a community deeply involved in its development and evolution. I have used PHPStorm, NetBeans, Sublime, and a few others whose names I don't remember, but with VS Code I reached a whole new level of productivity. With it I was able to streamline my work through shortcuts (I barely use the mouse while programming) and extensions. So I decided to share with you some of the extensions I use that make my work more productive and agile. Here is the list: ## Bracket Pair Colorizer 2 This extension helps you identify matching pairs of braces ({...}) throughout the code by coloring each pair with its own color. Besides working with default colors, it lets you configure the set of colors to be used. ![Bracket Pair Colorizer 2 by CoenraadS](https://raw.githubusercontent.com/CoenraadS/Bracket-Pair-Colorizer-2/master/images/example.png) **Marketplace link:** https://marketplace.visualstudio.com/items?itemName=CoenraadS.bracket-pair-colorizer-2 ## Dracula Official Dracula is a dark theme for VS Code with a color combination that makes the code lighter and smoother to read (at least for me), and it already has more than 1 million users. ![Dracula Official by Dracula Theme](https://raw.githubusercontent.com/dracula/visual-studio-code/master/screenshot.png) **Marketplace link:** https://marketplace.visualstudio.com/items?itemName=dracula-theme.theme-dracula ## ESLint The ESLint extension will help you improve the consistency of your JavaScript code, since it can flag, throughout the code, everything that deviates from the configured style. 
It offers many configuration options so you can adapt it to the way you work. **Marketplace link:** https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint ## PHP Debug The PHP Debug extension is amazing because it lets you debug PHP code using XDebug inside VS Code itself. Its integration with XDebug and VS Code is excellent and makes writing and debugging code more productive. **Marketplace link:** https://marketplace.visualstudio.com/items?itemName=felixfbecker.php-debug ## PHP IntelliSense This is an extension that, in my view, no PHP programmer should go without in VS Code, since it helps with many things, such as building code with autocomplete and references, searching for code within the application based on namespaces and class names, and showing information about classes, methods, and variables when you hover over the code. ![PHP IntelliSense by Felix Becker](https://raw.githubusercontent.com/felixfbecker/vscode-php-intellisense/master/images/signatureHelp.gif) **Marketplace link:** https://marketplace.visualstudio.com/items?itemName=felixfbecker.php-intellisense ## ApiDoc Snippets For those who need to document their code, this extension is indispensable, as it provides ApiDoc snippets that speed up writing documentation and keep everything standardized. ![ApiDoc Snippets by Miguel Yax](https://raw.githubusercontent.com/Krazeus/ApiDocSnippets/master/images/basic.gif) **Marketplace link:** https://marketplace.visualstudio.com/items?itemName=myax.appidocsnippets ## Bookmarks Just as readers mark up their books, whether with post-its or highlighters, programmers often need to create markers in their code but don't want to use too many comments, so as not to clutter it. With this extension, you can mark the desired spots in the code and return to them later with just one or two clicks. 
![Bookmarks by Alessandro Fragnani](https://raw.githubusercontent.com/alefragnani/vscode-bookmarks/master/images/printscreen-select-lines.gif) **Marketplace link:** https://marketplace.visualstudio.com/items?itemName=alefragnani.Bookmarks ## JavaScript (ES6) Code Snippets This extension provides code snippets for JavaScript and TypeScript that are used through shortcuts and autocomplete, streamlining the code-writing process, making programmers ever more productive, and reducing problems caused by typos. **Marketplace link:** https://marketplace.visualstudio.com/items?itemName=xabikos.JavaScriptSnippets ## Bootstrap 4, Font awesome 4, Font Awesome 5 Free & Pro snippets This is an indispensable extension for frontend programmers who use Bootstrap and Font Awesome in their projects. It brings a series of snippets that help with writing code and even with inserting the CSS and JavaScript files these libraries require. **Marketplace link:** https://marketplace.visualstudio.com/items?itemName=thekalinga.bootstrap4-vscode If you use any other extension that I haven't listed here and it makes your work more productive, share it in the comments; I'd love to learn about it, and it will help readers even more.
jonathanlamim
270,086
What’s New in Angular 9
Angular is one of the most widely used front-end frameworks, and it has recently launched a major rel...
0
2020-03-11T09:57:44
https://www.syncfusion.com/blogs/post/whats-new-in-angular-9.aspx
angular, webdev, javascript, web
--- title: What’s New in Angular 9 published: true date: 2020-02-27 12:35:29 UTC tags: angular, webdev, javascript, web canonical_url: https://www.syncfusion.com/blogs/post/whats-new-in-angular-9.aspx cover_image: https://dev-to-uploads.s3.amazonaws.com/i/3y5g65t9g72szdng4tpd.png --- [Angular](https://angular.io/) is one of the most widely used front-end frameworks, and it has recently launched a major release, version 9.0. This version of Angular uses [Ivy](https://angular.io/guide/ivy) as the compiler, which was previously in preview. Syncfusion always keeps up with the latest releases, and we are very happy to announce that [Syncfusion Angular components](https://www.syncfusion.com/angular-ui-components) are compatible with Angular 9. Syncfusion’s release version 17.4.51 supports Angular 9 with the Ivy compiler. Get started with Angular 9 by installing the Angular 9 packages and the Syncfusion 17.4.51 Angular packages. For instance, the Syncfusion Angular Grid package (with Angular 9 support) can be installed using the following command. ``` npm install @syncfusion/ej2-angular-grids@17.4.51 ``` Let’s take a look at the updates available with Angular 9. ## Ivy Angular 9 uses Ivy as the default compiler. It has undergone several bug fixes and improvements, discussed in the following sections. ### Size of the bundles reduced With the Ivy compiler, code that is not used by the project is excluded via tree shaking, so it is not bundled, resulting in smaller files. Smaller files mean faster-loading applications. The following images show the substantial difference between the bundles a production build generates for an Angular 8 app and an Angular 9 app from the same Angular base source. 
Angular 8 ![Bundle generation in Angular 8](https://www.syncfusion.com/blogs/wp-content/uploads/2020/02/Bundle-generation-in-Angular-8.png) Angular 9 ![Bundle generation in Angular 9](https://www.syncfusion.com/blogs/wp-content/uploads/2020/02/Bundle-generation-in-Angular-9.png) ### Test runs optimized The Angular testbed used to recompile all components regardless of whether the test had changed. With Ivy, that burden has been eliminated: components are not recompiled unless something changes. This considerably improves the time taken to run a test. ### Global object and debugging Angular 9 provides better debugging through the ng global object available from @angular/core. The ng object is made available when an app runs in development mode. Component and directive instances and other information can be accessed, and their state can be updated, through the **applyChanges** function. The functions **getComponent**, **getContext**, **getDirectives**, **getHostElement**, and **getInjector** are all available on the ng global object. ![Global Object And Debugging](https://www.syncfusion.com/blogs/wp-content/uploads/2020/02/Global-Object-And-Debugging.gif) ### Better type checking Type checks are better handled with the Ivy compiler in Angular 9. Apart from the existing **_basic_** and **_fullTemplateTypeCheck_** modes, Angular 9 provides one more, **_strictTemplates_**. This mode applies stricter type checks; for example, if you try to use a property that is not part of the object iterated by ngFor, it throws an error. ![Type Checking in Angular 9](https://www.syncfusion.com/blogs/wp-content/uploads/2020/02/Type-Checking-in-Angular-9.gif) ### Clearer build errors Apart from the stricter type checking, Ivy also shows more detailed and readable error messages than earlier versions. 
![Clearer build errors - Angular 9](https://www.syncfusion.com/blogs/wp-content/uploads/2020/02/Clearer-build-errors-Angular-9.png) ### providedIn injector with new options @Injectable now has two additional options apart from root. When we register a service, we typically use **providedIn: 'root'**. Apart from root, Angular 9 offers two more options: **providedIn: 'platform'**: This makes the service available through the singleton platform injector, shared across all applications on the page. **providedIn: 'any'**: This makes the service a single instance per module. ## Intro of new components Two new components have been introduced that can be installed in an application. ### youtube-player YouTube videos can now be rendered within an Angular application through the youtube-player component. ``` npm install @angular/youtube-player ``` ### google-maps Google Maps can now be easily integrated with Angular applications. ``` npm install @angular/google-maps ``` ## Angular forms changes The **_ngForm_** element selector, which was used with forms, is no longer available. It has been changed to **_ng-form_**. ## TypeScript 3.7 Angular has been updated to support TypeScript versions 3.6 and 3.7, which bring several improvements. ## How to update to Angular 9 According to the Angular documentation, if you have an Angular version older than Angular 8, you need to first update to Angular 8, and then to 9. Update to 8. ``` ng update @angular/cli@8 @angular/core@8 ``` And then to 9. ``` ng update @angular/cli @angular/core ``` More detailed information on the update is available on the [Angular](https://update.angular.io/) website. ## Conclusion I hope you now have a clear idea of the updates available with Angular 9. Once again, we are glad to announce that Syncfusion Angular components (17.4.51) are compatible with Angular 9. Try using our Angular components in your application development to reduce your development time. 
You can check out our sample from this [GitHub](https://github.com/syncfusion/ej2-angular-ui-components) location and ask any questions in the [issues](https://github.com/syncfusion/ej2-angular-ui-components/issues) section. If you have any questions about these features, please let us know in the comments below. You can also contact us through our [support forum](https://www.syncfusion.com/forums), [Direct-Trac](https://www.syncfusion.com/support/directtrac/), or [feedback portal](https://www.syncfusion.com/feedback/). We are happy to assist you! The post [What’s New in Angular 9](https://www.syncfusion.com/blogs/post/whats-new-in-angular-9.aspx) appeared first on [Syncfusion Blogs](https://www.syncfusion.com/blogs).
sureshmohan
270,166
Engineering Manager Reading Guide
This is a list of engineering management books, articles, and videos that I’ve found useful in my tim...
0
2020-02-27T15:52:32
https://medium.com/@djglasser/engineering-manager-reading-guide-fa5d36b4f59a
books, learning, career
--- title: Engineering Manager Reading Guide published: true date: 2020-02-27 15:26:33 UTC tags: books,learning,discuss,careers canonical_url: https://medium.com/@djglasser/engineering-manager-reading-guide-fa5d36b4f59a --- This is a list of engineering management books, articles, and videos that I’ve found useful in my time as an Engineering Manager. I don’t intend to make a “definitive guide”, rather I hope it’s a useful reference for me and others when encountering some of the common challenges in management. ![](https://cdn-images-1.medium.com/max/1024/1*_U5jhxfGlIZw3iUAkU98tg.jpeg)<figcaption>Image Source: Billion Photos | Shutterstock</figcaption> ### The 1:1 [Questions for our first 1:1](http://larahogan.me/blog/first-one-on-one-questions/) by [Lara Hogan](https://twitter.com/lara_hogan) [101 Questions to Ask in One on Ones](https://jasonevanish.com/2014/05/29/101-questions-to-ask-in-1-on-1s/) by [Jason Evanish](https://twitter.com/evanish) [The Update, The Vent, and The Disaster](http://randsinrepose.com/archives/the-update-the-vent-and-the-disaster/) by [Michael Lopp (rands)](https://twitter.com/rands/) [Managing more experienced people](https://medium.com/the-year-of-the-looking-glass/managing-more-experienced-people-9893f9903649) by [Julie Zhuo](https://twitter.com/joulee) [The Coaching Habit](https://smile.amazon.com/Coaching-Habit-Less-Change-Forever/dp/0978440749/ref=asc_df_0978440749/?tag=hyprod-20&linkCode=df0&hvadid=312065696873&hvpos=1o1&hvnetw=g&hvrand=10363138855951883313&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=1021696&hvtargid=pla-464534314684&psc=1&tag=&ref=&adgrpid=61316180399&hvpone=&hvptwo=&hvadid=312065696873&hvpos=1o1&hvnetw=g&hvrand=10363138855951883313&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=1021696&hvtargid=pla-464534314684) by [Michael Bungay Stanier](https://twitter.com/boxofcrayons) [Three Tools for Better 1 on 1 meetings](https://www.youtube.com/watch?v=ErWS-s2_XQI) by [Greg 
Dick](https://hudl.slack.com/team/U03NJ5QFT) ### Effective Communication [Radical Candor](https://www.amazon.com/Radical-Candor-Kickass-Without-Humanity/dp/1250103509) by [Kim Scott](https://twitter.com/kimballscott) [Power Up Your Team with Nonviolent Communication Principles](http://firstround.com/review/power-up-your-team-with-nonviolent-communication-principles/) by [Ann Mehl](https://twitter.com/annmehl) ### Performance [Necessary Endings](https://www.amazon.com/Necessary-Endings-Employees-Businesses-Relationships/dp/B004JLU0ZQ/ref=sr_1_1?ie=UTF8&qid=1507904395&sr=8-1&keywords=necessary+endings) by [Dr. Henry Cloud](https://twitter.com/DrHenryCloud) [Crucial Conversations](https://www.amazon.com/Crucial-Conversations-Talking-Stakes-Second/dp/B009S8GO14/ref=sr_1_1?s=books&ie=UTF8&qid=1507904488&sr=1-1&keywords=Crucial+conversations) by Kerry Patterson, Joseph Grenny, Ron McMillan, Al Switzler [The Five Conditions for Improvement](https://medium.com/@royrapoport/the-five-conditions-for-improvement-20909f856dab) by [Roy Rappaport](https://twitter.com/royrapoport) ### Career Paths [This 90-Day Plan Turns Engineers into Remarkable Managers](http://firstround.com/review/this-90-day-plan-turns-engineers-into-remarkable-managers/) by [David Loftesness](https://twitter.com/dloft) [Work at different management levels](http://larahogan.me/blog/manager-levels/) by [Lara Hogan](https://twitter.com/lara_hogan) [The Manager’s Path](https://www.amazon.com/Managers-Path-Leaders-Navigating-Growth/dp/1491973897) by [Camille Fournier](https://twitter.com/skamille) [On Being a Senior Engineer](https://www.kitchensoap.com/2012/10/25/on-being-a-senior-engineer/) by [John Allspaw](https://twitter.com/allspaw) [Engineering Management: The Pendulum or the Ladder](https://charity.wtf/2019/01/04/engineering-management-the-pendulum-or-the-ladder/) by [Charity Majors](https://twitter.com/mipsytipsy) ### Engagement & Retention [Shields Down](http://randsinrepose.com/archives/shields-down/) 
by [Michael Lopp (rands)](https://twitter.com/rands/) [Work Rules!](https://www.amazon.com/Work-Rules-Insights-Inside-Transform/dp/1455554790) by [Laszlo Bock](https://twitter.com/LaszloBock2718) [How Google Sold Its Engineers on Management](https://hbr.org/2013/12/how-google-sold-its-engineers-on-management) from Harvard Business Review [Three Powerful Conversations Managers Must Have To Develop Their People](http://firstround.com/review/three-powerful-conversations-managers-must-have-to-develop-their-people/) by [Russ Laraway](https://twitter.com/ral1?lang=en) ### Leadership [Dare to Lead](https://www.amazon.com/Dare-Lead-Work-Conversations-Hearts-ebook/dp/B07CWGFPS7) by [Brené Brown](https://twitter.com/BreneBrown) [Good Strategy Bad Strategy](https://www.amazon.com/Good-Strategy-Bad-Difference-Matters-ebook/dp/B004J4WKEC/ref=sr_1_1) by [Richard Rumelt](https://www.anderson.ucla.edu/faculty-and-research/strategy/faculty/rumelt) [How to be Strategic](https://medium.com/@joulee/how-to-be-strategic-f6630a44f86b) by [Julie Zhuo](https://twitter.com/joulee) ### Management Craft/Other [An Elegant Puzzle](https://www.amazon.com/dp/1732265186/) by [Will Larson](https://twitter.com/Lethain) [Resilient Management](https://resilient-management.com/) by [Lara Hogan](https://twitter.com/lara_hogan) [44 engineering management lessons](http://www.defmacro.org/2014/10/03/engman.html) by [Slava Akhmechet](https://twitter.com/spakhm) [Average Manager vs. 
Great Manager](https://medium.com/the-year-of-the-looking-glass/average-manager-vs-great-manager-cf8a2e30907d) by [Julie Zhuo](https://twitter.com/joulee) [Awesome Leading and Managing](https://github.com/LappleApple/awesome-leading-and-managing) by Various Authors [Engineering Management](https://leadership-library.dev/Engineering-Management-Books-c151ee522eba4bf8a092b11001b25767) from [Leadership Library](https://leadership-library.dev/The-Leadership-Library-for-Engineers-c3a6bf9482a74fffa5b8c0e85ea5014a) [8 Keys to Happiness at Work](https://dev.to/djglasser/8-keys-to-happiness-at-work-3pja) by [Dan Glasser](https://twitter.com/djglasser)
djglasser