id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
270,203 | Curveball - A typescript microframework | Since mid-2018 we've been working on a new micro-framework, written in typescript. The framework... | 0 | 2020-02-27T17:03:27 | https://evertpot.com/curveball-typescript-framework-update/ | rest, typescript, node, javascript | ---
title: "Curveball - A typescript microframework"
tags:
- rest
- typescript
- nodejs
- javascript
published: true
canonical_url: https://evertpot.com/curveball-typescript-framework-update/
---
Since mid-2018 we've been working on a new micro-framework, written in TypeScript. The framework competes with [Express][1] and takes heavy inspiration from [Koa][2]. It's called [Curveball][8].
If you've only ever worked with Express, I feel that for most people this project will feel like a drastic step up. Express was written in an earlier era of Node.js, before Promises and async/await were commonplace, so first and foremost the biggest change is the use of async/await middlewares throughout.
If you came from Koa, that will already be familiar. Compared to Koa, these are the major differences:
* Curveball is written in Typescript
* It has strong built-in support for HTTP/2 push.
* Native support for running servers on AWS Lambda, without the use of
[strange hacks][3].
* Curveball's request/response objects are decoupled from the Node.js `http`
library.
At [Bad Gateway][11] we've been using this in a variety of (mostly API)
projects for the past few years, and it's been working really well for us.
We're also finding that it tends to be a pretty 'sticky' product. People exposed to it tend to want to use it for their next project too.
Curious? Here are a bunch of examples of common tasks:
Examples
--------
### Hello world
```typescript
import { Application } from '@curveball/core';
const app = new Application();
app.use( async ctx => {
ctx.response.type = 'text/plain';
ctx.response.body = 'hello world';
});
app.listen(80);
```
Everything is a middleware, and middlewares may or may not be `async`.
### Hello world on AWS Lambda
```typescript
import { Application } from '@curveball/core';
import { handler } from '@curveball/aws-lambda';
const app = new Application();
app.use( ctx => {
ctx.response.type = 'text/plain';
ctx.response.body = 'hello world';
});
exports.handler = handler(app);
```
### HTTP/2 Push
```typescript
const app = new Application();
app.use( ctx => {
ctx.response.type = 'text/plain';
ctx.response.body = 'hello world';
ctx.push( pushCtx => {
pushCtx.path = '/sub-item';
pushCtx.response.type = 'text/html';
pushCtx.response.body = '<h1>Automatically pushed!</h1>';
});
});
```
The callback to `ctx.push` will only be called if the client supports Push, and because it creates a new 'context', any middleware can be attached to it, or even *all* the middleware by doing a 'sub-request'.
### Resource-based controllers
Controllers are optional and opinionated. A single controller should only ever manage one type of resource, or one route.
```typescript
import { Application, Context } from '@curveball/core';
import { Controller } from '@curveball/controller';
const app = new Application();
class MyController extends Controller {
get(ctx: Context) {
// This is automatically triggered for GET requests
}
put(ctx: Context) {
// This is automatically triggered for PUT requests
}
}
app.use(new MyController());
```
### Routing
The recommended pattern is to use exactly one controller per route.
```typescript
import { Application } from '@curveball/core';
import router from '@curveball/router';
const app = new Application();
app.use(router('/articles', new MyCollectionController()));
app.use(router('/articles/:id', new MyItemController()));
```
### Content-negotiation in controllers
```typescript
import { Context } from '@curveball/core';
import { Controller, method, accept } from '@curveball/controller';
class MyController extends Controller {
@accept('html')
@method('GET')
async getHTML(ctx: Context) {
// This is automatically triggered for GET requests with
// Accept: text/html
}
@accept('json')
@method('GET')
async getJSON(ctx: Context) {
// This is automatically triggered for GET requests with
// Accept: application/json
}
}
```
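Under the hood, content negotiation boils down to matching the request's `Accept` header against the media types each handler declares. The following is a hand-rolled sketch of that decision, not Curveball's actual implementation (real negotiation also honors wildcards and `q` values, which are ignored here, and the handler names are only illustrative):

```javascript
// Simplified content negotiation: pick the first handler whose media type
// appears verbatim in the Accept header.
function negotiate(acceptHeader, handlers) {
  for (const [type, handler] of handlers) {
    if (acceptHeader.includes(type)) return handler;
  }
  return null; // a real server would respond with 406 Not Acceptable
}

const handlers = [
  ['text/html', 'getHTML'],
  ['application/json', 'getJSON'],
];

console.log(negotiate('text/html,application/xhtml+xml', handlers)); // 'getHTML'
console.log(negotiate('application/json', handlers)); // 'getJSON'
```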
### Emitting errors
To emit an HTTP error, it's possible to set `ctx.status`, but it's easier to just throw a related exception.
```typescript
async function myMiddleware(ctx: Context, next: Middleware) {
if (ctx.method !== 'GET') {
throw new MethodNotAllowed('Only GET is allowed here');
}
await next();
}
```
The project also ships with a [middleware][5] to automatically generate [RFC7807][4] `application/problem+json` responses.
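For reference, an RFC 7807 body is just a JSON object with a handful of well-known members. Here is a hand-assembled sketch of that shape, independent of what the Curveball middleware actually emits:

```javascript
// Build a minimal application/problem+json body using the members
// defined by RFC 7807: type, status, title and detail.
function problemBody(status, title, detail) {
  return JSON.stringify({
    type: 'about:blank', // the RFC's default when no problem-type URI applies
    status,
    title,
    detail,
  });
}

console.log(problemBody(405, 'Method Not Allowed', 'Only GET is allowed here'));
```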
### Transforming HTTP responses in middlewares
With Express middlewares it's easy to do something *before* a request is handled, but if you ever want to transform a response in a middleware, this can only be achieved through a complicated hack.
This is because responses are immediately written to the TCP socket, and once written to the socket, the data is effectively gone.
So to do things like gzipping responses, Express middleware authors need to mock the response stream and intercept any bytes sent to it. This can be clearly seen in the express-compression source:
<https://github.com/expressjs/compression/blob/master/index.js>.
Curveball does not do this. Response bodies are buffered and available to middlewares.
For example, the following middleware looks for an HTTP `Accept` header of `text/html` and automatically transforms JSON responses into simple HTML output:
```typescript
app.use( async (ctx, next) => {
// Let the entire middleware stack run
await next();
// HTML encode JSON responses if the client was a browser.
if (ctx.accepts('text/html') && ctx.response.type === 'application/json') {
ctx.response.type = 'text/html';
ctx.response.body = '<h1>JSON source</h1><pre>' + JSON.stringify(ctx.response.body) + '</pre>';
}
});
```
To achieve the same thing in express would be quite complicated.
You might wonder whether this is bad for performance with large files. You would be completely right; this is not solved yet.
However, the intent is to eventually let users set a callback on the `body` property instead of writing directly to the output stream, so writing the body will not be buffered, just deferred. The complexity of implementing these middlewares will not change.
### HTML API browser
Curveball also ships with an [API browser][6] that automatically transforms JSON into traversable HTML, and automatically parses HAL links and HTTP Link headers.
Every navigation element is completely generated based on links found in the
response.
To use it:
```typescript
import { halBrowser } from 'hal-browser';
import { Application } from '@curveball/core';
const app = new Application();
app.use(halBrowser());
```
Once set up, your API will start rendering HTML when accessed by a browser.
<figure>
<img src="https://evertpot.com/assets/posts/hal-browser.png" alt="HAL browser example" />
</figure>
### Sending informational responses
```typescript
ctx.response.sendInformational(103, {
link: '</foo>; rel="preload"'
});
```
### Parsing Prefer headers
```typescript
const foo = ctx.request.prefer('return');
// Could be 'representation', 'minimal' or false
console.log(foo);
```
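For illustration, reading a single preference out of an RFC 7240 `Prefer` header can be sketched by hand. This is not Curveball's implementation (the function name merely mirrors the API above), and it skips edge cases like quoted parameters containing commas:

```javascript
// Return the value of one preference from a Prefer header,
// true for a valueless preference, or false when it is absent.
function prefer(headerValue, name) {
  if (!headerValue) return false;
  for (const part of headerValue.split(',')) {
    const [token, value] = part.trim().split('=');
    if (token.toLowerCase() === name) {
      return value !== undefined ? value.replace(/^"|"$/g, '') : true;
    }
  }
  return false;
}

console.log(prefer('return=minimal', 'return')); // 'minimal'
console.log(prefer('respond-async, wait=10', 'wait')); // '10'
console.log(prefer('', 'return')); // false
```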
Installation and links
----------------------
Installation:
```
npm i @curveball/core
```
Documentation can be found on [Github][10]. A list of middlewares can be seen in the [organization page][8].
Stable release
--------------
We're currently on the 11th beta and closing in on a stable release. At this point, changes will be minor.
If you have thoughts or feedback on this project, it would be really helpful to hear. Don't hesitate to leave comments, questions or suggestions as a [Github Issue][7].
A big thing that's still to be done is the completion of the [website][9]. We've got a great design; it just needs to be pushed over the finish line.
One more thing?
---------------
Apologies for the cliché header. We're also working on an authentication server, written in Curveball. It handles the following for you:
* Login
* Registration
* Lost password
* OAuth2:
  * `client_credentials`, `password` and `authorization_code` grant types.
  * revoke and introspect support
* TOTP (Google authenticator style)
* User management, privilege management.
The project needs some love in the user-experience department, but if you're sick of creating yet another authentication system and don't want to break the bank, [a12n-server][12] might be for you.
The ultimate goal here is to create a great headless authentication server and compete with Auth0 and Okta, but we could use some more people power here!
[1]: https://expressjs.com/
[2]: https://koajs.com/
[3]: https://github.com/awslabs/aws-serverless-express/blob/master/src/index.js
[4]: https://tools.ietf.org/html/rfc7807
[5]: https://github.com/curveball/problem/
[6]: https://github.com/curveball/hal-browser
[7]: https://github.com/curveball/core/issues
[8]: https://github.com/curveball
[9]: https://curveballjs.org/
[10]: https://github.com/curveball/core
[11]: https://badgateway.net/
[12]: https://github.com/curveball/a12n-server | evert |
270,304 | How to back up only the original WordPress images with WP-CLI | In this tutorial you will learn how to back up only the original images (and not the thumbnails) from your WordPress media library using WP-CLI. | 0 | 2020-02-27T21:06:17 | https://dev.to/portugues/como-fazer-backup-apenas-das-imagens-originais-do-wordpress-com-wp-cli-1mkd | wordpress, wpcli, productivity | ---
title: How to back up only the original WordPress images with WP-CLI
published: true
description: In this tutorial you will learn how to back up only the original images (and not the thumbnails) from your WordPress media library using WP-CLI.
tags: wordpress, wp-cli, productivity
---
I'm a member of a WordPress community on WhatsApp. Today, a member asked the following question:
> "I need to back up all the images of a site, but only the originals. I don't need the thumbnails. Does anyone know how to do this?"
## TL;DR
If you're in a hurry and don't have time to read the whole article, just run the following commands in your terminal.
```
# Using WP-CLI, export the URLs of all your site's original images to a txt file.
$ wp post list --post_type=attachment --field=guid > wp-content/uploads/imagens.txt
```
```
# Using wget, download all the images at once to your computer (or server):
$ wget -i https://seusite.com/wp-content/uploads/imagens.txt
```
If you google "how to back up the original WordPress images" or something similar, you will find several articles on how to back up images, but *not only* the original ones.
Every time you upload an image to a WordPress site, it normally stores the original file, plus versions of that file in different sizes.
```
-rw-r--r-- 1 user group 5845 Feb 25 18:07 foto-01-150x150.jpg
-rw-r--r-- 1 user group 20045 Feb 25 18:07 foto-01-300x300.jpg
-rw-r--r-- 1 user group 115432 Feb 25 18:07 foto-01-768x768.jpg
-rw-r--r-- 1 user group 164387 Feb 25 18:07 foto-01.jpg
```
In the list above, the original file is `foto-01.jpg`. The other 3 files are thumbnails of that image at 150x150, 300x300 and 768x768 pixels, respectively.
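As a quick illustration, telling originals apart from generated thumbnails is mostly a matter of filename shape. The hypothetical helper below (not part of WP-CLI) drops every name ending in the `-{width}x{height}` suffix WordPress appends to thumbnails. Note that it would misclassify an original whose name happens to end that way, which is why querying the database with WP-CLI is more reliable:

```javascript
// Keep only original uploads by filtering out WordPress thumbnail names
// such as foto-01-150x150.jpg.
const files = [
  'foto-01-150x150.jpg',
  'foto-01-300x300.jpg',
  'foto-01-768x768.jpg',
  'foto-01.jpg',
];

const originals = files.filter((name) => !/-\d+x\d+\.\w+$/.test(name));
console.log(originals); // ['foto-01.jpg']
```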
I took the example above from a site that uses the [Twenty Twenty](https://wordpress.org/themes/twentytwenty/) theme. The thumbnails that get generated depend on the theme you are using.
But identifying the original images is only one step of the process, because another big problem is that WordPress normally stores the files in subfolders following the *YYYY/MM* format, where YYYY is the 4-digit year and MM is the 2-digit month.
So if you upload an image on February 20, 2020, WordPress should store it in the `wp-content/uploads/2020/02/` folder.
If your site was created in January 2019 and you uploaded at least one file per month, your uploads folder should have a 2019 folder with 12 subfolders, one for each month of the year.
```
wp-content/uploads/2019/01/
wp-content/uploads/2019/02/
wp-content/uploads/2019/03/
...
wp-content/uploads/2019/10/
wp-content/uploads/2019/11/
wp-content/uploads/2019/12/
```
So, depending on how many images your site has, backing them up could take you hours or even days!
But don't worry, because the solution is simpler than you might imagine!
## How to download multiple files at once?
If you use macOS, Linux, or WSL on Windows, you can download multiple files at once using the command `wget -i arquivo.txt`, where **arquivo.txt** is a file containing a list of URLs, one per line.
```
https://seusite.com/wp-content/uploads/imagem-01.jpg
https://seusite.com/wp-content/uploads/imagem-02.jpg
https://seusite.com/wp-content/uploads/imagem-03.jpg
https://seusite.com/wp-content/uploads/imagem-04.jpg
https://seusite.com/wp-content/uploads/imagem-05.jpg
```
Now that we know how to download everything at once, we need to figure out how to get the URLs of all the original images of a WordPress site.
## WP-CLI is the solution!
> [WP-CLI](https://wp-cli.org/) is the command-line interface for WordPress. You can update plugins, configure multisite installations and much more, without using a web browser.
WP-CLI is a super powerful tool for managing a WordPress site from the command line.
To export the image URLs to a file, just run the following command in your terminal:
```
$ wp post list --post_type=attachment --field=guid > wp-content/uploads/imagens.txt
```
Now let me explain each part of that command:
1. `wp post list` is a command that lists all the posts in WordPress
2. `--post_type=attachment` is an argument to return only attachments (uploaded files)
3. `--field=guid` is an argument to return the URL of each of those files
4. ` > wp-content/uploads/imagens.txt` redirects the output of the `wp post list --post_type=attachment --field=guid` command into a file named `imagens.txt` stored in the `wp-content/uploads/` folder.
## How to download the images at once (and make the backup)
After running the command above, you should have a file located at `https://seusite.com/wp-content/uploads/imagens.txt`.
Now, open your terminal again, `cd` into the folder where you want to download the images, and run the following command:
```
$ wget -i https://seusite.com/wp-content/uploads/imagens.txt
```
As soon as you press Enter/Return, `wget` will download all the images listed in *imagens.txt* to the folder on the computer (or server) where you ran the command.
You can download the images into your Dropbox, OneDrive, Google Drive, iCloud, or any other file-storage service folder.
And that's it! Your backup is done! | castroalves
270,328 | What Breaking My Ankle Taught Me About DevOps | Yikes. Unfortunately it’s true. I broke my ankle doing something I took for granted; walking. As I si... | 0 | 2020-03-20T21:09:04 | https://dev.to/dealeron/what-breaking-my-ankle-taught-me-about-devops-bon | devops, productivity | Yikes. Unfortunately it’s true. I broke my ankle doing something I took for granted; walking. As I sit here and write this I am still in recovery, but it’s given me a few interesting perspectives on DevOps. PSA - please do not go breaking your ankles, or anyone elses. I am not advocating for this. Please.
## A broken process needs time to heal
When you get extremely well-practiced at a task, things become as routine as walking down the hall, until you trip holding a ping pong paddle while batting at stressballs. No? Just me?... Let’s talk about CI/CD as an example of this. Last year the entire routine for the developers and DevOps got flipped on its head. Devs went from pressing all the buttons to none of the buttons. What used to be routine suddenly became verboten as I and the rest of DevOps took on roles and tried to implement procedures with Dev management to make up for the void that had been created by the walls that were constructed to delineate the distinction of each team’s priorities.
In our case we knew exactly what we had broken: _the process_. This was very straightforward to us; from coding, to QA, to releases, everything was broken. We limped along like Batman post-Bane because of our indomitable wills. The builds moved in mysterious ways, and thus releases were fragmented. You couldn't easily know what was in production without logging in to Bitbucket, looking up the commit SHA and then reviewing the commits (a major undertaking for any dev who was trying to release anything). Tickets were QA'd individually, so it wasn't always clear where or how they interacted with each other until later, introducing bugs and frequently resulting in hot deploys. It had been like this since I joined the company two and a half years ago, and we hadn't had a reason to change. _Why would we? Everything worked, didn't it?_
For the sake of brevity I won’t go further, but you get the idea. Like me, this whole CI/CD pipeline needed surgery. Although we have surgically performed the painful part and that is behind us, recovery will take weeks. Sometimes though, it takes longer. Just like with an injury, if you try to do too much at once with it before it’s completely healed you injure yourself again and hurt yourself worse, thus you can drag a 3 week recovery into 2 months, or in our case a 1 business quarter task into 2.
Of course, we had had a DevOps department for a couple of years but in my opinion we never really did DevOps as a company until recently. Since the foundation of the department in 2017, we had been in a constant state of firefighting mode that made it impossible to get ahead of the curve. It took 3.5 months to straighten out the CI/CD pipeline to a point that you could trust the setup across the board. I remember the times where I'd have to rebuild a single branch 3 times to get it between QA, UAT(formerly Staging), and finally Production. Worse yet, the environment and configuration settings weren't identical so every environment was setup differently. It took another 6 months to migrate all of the critical systems' deploy pipelines into Octopus Deploy so that the overhead of managing it would be lessened for all parties involved. We did all of this while accepting infrastructure requests and performing migrations to newer hardware in an effort to keep up with the evolving technologies at DealerOn. **As I look back I realize that if we had attempted this DevOps transformation back then, that we would never have completed as much of the work that we have in the timeframes that I listed above.** We simply were not ready then, but that is not true today.
The moral of my story is that DevOps isn't magic. Although there are times that I feel like an arch-wizard with a mastery of the arcane, don't be fooled. DevOps won't instantly transform your business, and it certainly isn't a switch where it is suddenly **ON** and everything works better. DevOps is a mindset and a set of practices, and they take time to implement and reiterate as you come up with cleaner and better ways to do the tasks. Anyone who tells you otherwise is selling something. Though I am fairly new to the DevOps scene, I had never truly considered the scale of how a DevOps transformation would affect the department and company. After all, it was not until we had broken our metaphorical ankles that we had really started to heal and improve...
At the time of publishing this article, I'm pleased to announce that my leg is almost completely healed, and I'll be walking normally again very soon. | gimanval |
270,345 | Automatic cross-platform testing: part 1: Linux | Testing code on multiple platforms, starting with Linux | 0 | 2020-02-28T23:34:52 | https://dev.to/drhyde/automatic-cross-platform-testing-part-1-linux-40ih | testing, travis, ci, linux | ---
title: Automatic cross-platform testing: part 1: Linux
published: true
description: Testing code on multiple platforms, starting with Linux
tags: Testing, Travis, CI, Linux
---
# OBSOLETION NOTICE
Since I wrote this, Travis CI has stopped offering their free service to open source projects and so I have switched to using Github Actions. This article is left here for historical interest only.
I'm not going to re-write it because there are already a bazillion other tutorials on how to do basic Github Actions stuff and it would be a waste of my time and yours for me to copy them. I have [ranted](https://dev.to/drhyde/comment/1gmjp) about the amount of low-effort copy-cat material on this platform before. Also, I'm lazy.
# Introduction
If you're anything like me, and I know I am, then your code will be used by people using all sorts of different OSes, most of which you don't have access to. So how can you make sure that it works for all those people? Obviously you write tests, you pay attention to test coverage metrics to make sure that you're testing all your code and all the conditions and all the branches it takes, and you hope that the users run the tests before installing the software.
And your users will, if those tests fail, carefully figure out what the problem is, write test cases, and a patch, and send those to you, and will diligently re-run all the tests on their bizarre platform whenever you make a change, just to make sure that you haven't re-broken it.
Yeah right. If you believe that I've got a bridge to sell you.
I mostly develop on a Mac, but most of my users are on Linux, FreeBSD, or Windows, so what I really want is a way of testing my code (automatically, so I don't forget) on those other three platforms. Actually, I want to automatically test it on Mac too, because my Mac has, over the years, had all kinds of things installed on it, and it's all too easy for me to write something that depends on something I installed four years ago and which won't work for anyone else because I forgot about the dependency.
And because I'm not getting paid for writing any of my open source code I want to do this for free. I realise that that sounds like a big ask, but it can be done.
# Assumptions
I assume that your code can be run from the command line, that you use git for version control, and that you store your code on Github in a public repository. If any of those don't apply to you then you can probably work around them but I can't help you.
# My project
For the purposes of this article I've written a [really simple script](https://github.com/DrHyde-347624/example-project/blob/master/makelink.sh) which creates a file and a symlink to that file, and some [thoroughly incomplete tests](https://github.com/DrHyde-347624/example-project/blob/master/test.sh) for it. Note that when you run the test script it will signal success or failure by exiting with code 0 for success or 1 for failure.
# Testing on Linux
This was the easiest to find a provider for, presumably because it's the most common OS out there. I went with [Travis CI](https://travis-ci.org/), mostly because they were the first I found. You'll need to sign in to their site using Github, and to authorize it to access your account.
It will then take you to a page listing all your repositories. Confusingly, that list is ... empty.

You need to add your repositories individually, using that little + sign. Once you've done that go back to the home page and you'll see your repositories listed, and also that your most recently updated repository has no builds yet.

Travis looks for a [configuration file](https://github.com/DrHyde-347624/example-project/blob/master/.travis.yml) in your repository telling it how to build the project. Moments after you push that to Github it will notice, and start building your project ...

... and if all goes well, everything will go green. Congratulations, your tests passed. Note that your repository's builds [are visible to the public](https://travis-ci.org/DrHyde-347624/example-project).

You can see all the output from your tests, which will be useful if they failed. Your tests do fail with good diagnostics, right? Travis reports the results of your tests back to Github and the test pass above shows up as a pleasing green tick next to the commit.

Or you might get an angry red cross if there was a failure. You should also get notified by email about failures, but I have found deliverability of those to be a bit dodgy - unsurprising really, auto-generated emails, all looking fairly similar, are quite likely to be mistaken for spam. If you click on the tick (or the cross) you'll see a little pop-up listing all the various automated checks that have happened, with their statuses. Right now there's only one, of course.
The config file linked above is the absolute minimum bare bones needed to work. Travis have lots of helpful stuff pre-configured for many popular languages, including tools to automatically install dependencies, run against multiple versions of an interpreter, and so on. Their [online documentation](https://docs.travis-ci.com/user/for-beginners/) is good, and if the doco leaves you confused, Travis is popular enough that you should be able to easily [find examples](https://github.com/search?q=.travis.yml+python&type=Commits) to crib from.
# Next
In [part 2](https://dev.to/drhyde/automatic-cross-platform-testing-part-2-freebsd-2394) we'll see how to add support for automatic testing on FreeBSD. | drhyde |
270,437 | What do you do while waiting? | In times like slow npm installs, long-running Gradle builds and the likes, what do you do to pass time? Share with the Dev community | 0 | 2020-02-28T03:21:03 | https://dev.to/orimdominic/share-what-you-do-while-waiting-with-the-dev-community-18im | discuss, fun, share | ---
title: What do you do while waiting?
published: true
description: In times like slow npm installs, long-running Gradle builds and the likes, what do you do to pass time? Share with the Dev community
tags: discuss, fun, share
---
We've all been there, and we'll never stop being there: times like when `npm install` takes forever, long-running test suites we hope all pass, and the infamous Gradle builds that take longer than The Shawshank Redemption with commercials.
What do you do during these times?
- Play swords with a colleague?
- Surf dev.to?
- Watch YouTube videos?
- Pet your pet?
- Snapchat?
- Take a walk?
Share what you do in these wait times to keep yourself from exploding with anxiety! Let's all learn and have fun. Someone might just adopt yours! 😉 | orimdominic
270,461 | Write Angular code 10x faster | In today's age speed is a thing. Speed is a necessity. In this blog i will tell you a secret to write... | 0 | 2020-02-28T04:31:03 | https://hariharan-subramanian.netlify.com/posts/2020/02/write-angular-code-10x-faster/ | angular, vscode, webdev, productivity | In today's age speed is a thing. Speed is a necessity. In this blog i will tell you a secret to write angular code 10x faster :fast_forward: :fast_forward:.
## Prerequisites :grey_question:
* Visual Studio code
* You should be working in Angular :stuck_out_tongue_closed_eyes:
If you don't have VS Code yet, you can download it [here](https://code.visualstudio.com/) for free.
## Angular & Component sharing :ballot_box_with_check:
In Angular, we can have multiple reusable components. E.g., you can create the below list of components, which are commonly used across the application; this enables sharing and faster development.
:arrow_down: Some of the commonly used components :arrow_down:
* Blade
* Modal
* Any common filters used across the application.
* Shared components that generate Charts/ Graphs etc.
## :pray: How VS Code can help :checkered_flag:
When starting out on a new project or application, the initial focus is on getting the common components built first. Once we have developed the common components, we can easily keep re-using them across the entire application.
Let's say we need a `blade` in multiple areas of the application. During development, instead of typing the entire snippet, we can make VS Code automatically insert the whole component's HTML code for us.
## How to create snippets? :information_source:
1. Open Visual Studio Code.
2. Open the desired project or workspace. `[Optional]`
The second step is optional because some people prefer to create snippets which apply to a particular workspace or specific project.
3. Type `F1` on your keyboard and type `User Snippets`

4. Press `Enter` and VS Code will prompt you to select a language. Since we are developing snippets for Angular templates, choose `HTML`.

5. Once you have selected `html.json` it will open a json file, in which we are going to make some changes.
6. The syntax for the `snippet.json` will be something like this
```json
{
"snippetName":{
"prefix":"your-shortcut-name",
"body":[
// Your full HTML content to be inserted
]
}
}
```
7. With the help of this syntax you can insert whatever you want into your HTML in an efficient and fast way.
**NOTE: Each line inside the `body[]` should be enclosed within `""` string notation.**
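Snippet bodies are not limited to static text: `$1`, `$2` and so on are tabstops the cursor jumps between each time you press Tab, and `${1:placeholder}` supplies default text for a tabstop. For example (the blade markup here is only illustrative):

```json
{
  "app-blade-with-stops": {
    "prefix": "blade-stops",
    "body": [
      "<app-blade>",
      "  <div bladeHeader>${1:<!-- header -->}</div>",
      "  <div bladeContent>$2</div>",
      "  <div bladeFooter>$3</div>",
      "</app-blade>"
    ],
    "description": "app-blade skeleton with tabstops"
  }
}
```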
## My snippet shortcuts
Here are my top snippets for creating something very quickly.
### :zap: Blade :zap:
```json
"app-blade": {
"prefix": "blade",
"body": [
"<app-blade>",
" <div bladeHeader>",
" </div>",
" <div bladeContent>",
" </div>",
" <div bladeFooter>",
" </div>",
"</app-blade>"
]
}
```
### Kendo Grid
```json
{
"KendoGrid": {
"prefix": "k-grid",
"body": [
"<kendo-grid [data]=\"data\"",
" [filterable]=\"true\"",
" [pageSize]=\"10\"",
" [skip]=\"0\"",
" [kendoGridSelectBy]=\"'id'\"",
" [selectedKeys]=\"selectedKeysIndexes\"",
" [resizable]=\"true\"",
" [sortable]=\"true\">",
"",
"</kendo-grid>"
],
"description": "KendoGrid"
}
}
```
I have a much bigger list, since I am working on an enterprise application and we have a lot of shareable components that we keep re-using.
I found this highly useful; it improves our workflow and the way we write code. My team found it very useful.
If you are reading this, I hope it will help you as well.
Happy coding :collision::collision:
Thanks for reading. :pray: :pray:
Stay tuned for more interesting stuffs :fire::fire::fire::fire: | get_hariharan |
270,470 | Build A Simple State Machine in React | My goal for the day is to get you started with XState library. XState will help us build finite state... | 0 | 2020-02-29T02:56:46 | https://dev.to/ajinkabeer/build-a-simple-state-machine-in-react-2oa4 | xstate, react | My goal for the day is to get you started with [XState](https://xstate.js.org/) library. XState will help us build finite state machines. First, we will build a state machine and then integrate it into a react app.
Excited already? let's get started.
We will start with a very simple state machine called ``toggleStateMachine`` machine which will toggle between two states ``active`` and ``inactive``.
Here's a cool visualizer for the state machine and how it transitions from one state to another.
[XState Visualizer](https://xstate.js.org/viz/)
Once you're in the visualizer page, empty the ``definitions`` tab because we are going to build it from scratch.
* Define a variable. This variable will be an instance of ``Machine``.
```javascript
const toggleStateMachine = new Machine({})
```
* Now lets give an ``id`` to this variable. We can even use the variable name for this.
```javascript
const toggleStateMachine = new Machine({
id:'toggleStateMachine'
})
```

* Now we need to give an initial value to the state machine, as the name suggests it's the initial state of the machine when we spin it up. Since we are building a toggle machine, there will be two states ``active`` and ``inactive``. So naturally, the initial state will be in ``inactive`` state.
```javascript
const toggleStateMachine = new Machine({
id:'toggleStateMachine',
initial:'inactive'
})
```
* Next, we will define all of the states this machine has. ``states`` is an object; we can add properties to it, one for each of the different states this machine can have.
```javascript
const toggleStateMachine = new Machine({
id: "toggleStateMachine",
initial: "inactive",
states: {
inactive: {},
active: {}
}
});
```
* Click the ``update`` button. Voila!

* As you can see, when the machine starts it will be in the ``inactive`` state. So when an event happens, the ``inactive`` state should change into the ``active`` state. This is how you do it.
```javascript
const toggleStateMachine = new Machine({
id: "toggleStateMachine",
initial: "inactive",
states: {
inactive: {
on: {
TOGGLE: "active"
}
},
active: {}
}
});
```
The ``on`` property tells a state which events it should listen for. Here, the ``on`` property tells the ``inactive`` state that it should listen for a `TOGGLE` event.
Similarly, the ``active`` state should listen for the ``TOGGLE`` event, so that when the toggle is triggered while in the ``active`` state, the machine switches back to the ``inactive`` state.
```javascript
const toggleStateMachine = new Machine({
id: "toggleStateMachine",
initial: "inactive",
states: {
inactive: {
on: {
TOGGLE: "active"
}
},
active: {
on: {
TOGGLE: "inactive"
}
}
}
});
```

That's it, folks! Our state machine is ready to be integrated into a React application.
* Simple implementation using React Hooks.
```javascript
import { useMachine } from '@xstate/react';
const toggleStateMachine = new Machine({
id: "toggleStateMachine",
initial: "inactive",
states: {
inactive: {
on: {
TOGGLE: "active"
}
},
active: {
on: {
TOGGLE: "inactive"
}
}
}
});
function Toggle() {
const [current, send] = useMachine(toggleStateMachine);
return (
<button onClick={() => send('TOGGLE')}>
{current.matches('inactive') ? 'Off' : 'On'}
</button>
);
}
```
## Read More
Well, that's it, folks! Thanks for reading. I encourage you to read more in the official [XState](https://xstate.js.org/docs/recipes/react.html#hooks) documentation.
| ajinkabeer |
270,479 | Serverless Architecture: Hype or Godsend? | Serverless architecture has been a buzzword in the industry for quite a while now and it's there for... | 0 | 2020-02-28T06:08:46 | https://blog.solutelabs.com/serverless-architecture-hype-or-godsend-a3cd8ccb93c6 | serverless, aws, lambda, backend | Serverless architecture has been a buzzword in the industry for quite a while now and it's there for the right reasons. Serverless is revolutionizing the way software is developed for many products and those who have taken the leap have seen enormous benefits. Serverless is beneficial especially for those who want to prepare themselves for unforeseen workloads and not focus on managing infrastructure themselves.
Since 2015, mobile apps have earned more revenue than the entire Hollywood film industry. That is serious money spent on making games, apps, and such. If your app freezes or crashes, only 16% of your customers will think of giving your app a second chance.
Thus, it becomes important for businesses (especially mobile apps) to make a great first impression and delight the customer at every step of the way (especially if your app goes viral or has to scale up quickly).
Here are a few recent news items on serverless architecture and mobile apps that you might be interested in:
- By 2024, the [Serverless Architecture Market](https://www.zionmarketresearch.com/market-analysis/serverless-architecture-market) is expected to reach around 18.04 billion globally
- AWS Lambda [dominates](https://www.informationweek.com/cloud/report-aws-lambda-dominates-as-serverless-adoption-grows/a/d-id/1337002) as the adoption of serverless increases
- By the end of 2020, 20% of the global enterprises will focus on [developing support and management capabilities](https://www.gartner.com/en/newsroom/press-releases/2018-12-04-gartner-identifies-the-top-10-trends-impacting-infras) within IT Infrastructure and Operations (I&O) teams using serverless computing, which was less than 5% in 2018
With companies facing challenges in maintaining and updating servers, serverless architecture has been getting more attention by the day.
**What is serverless architecture?**
Serverless architecture is an ecosystem wherein the entire backend logic is split into multiple individual components called functions. These functions run in a predefined environment, consume some input and produce output as if each were a separate API.
**What are the benefits of serverless architecture?**
The major benefit of a serverless architecture is that it is deployed 100% over the cloud with high flexibility over compute size and choice of programming language. Thus you can easily scale without limit, and code parts of your application in JavaScript and others in, say, Ruby.
The serverless architecture consists of three major components:
1) Router
2) Function as a service (FaaS)
3) Database for persistent storage
In this article, we will primarily explain how businesses are reducing their TCO (Total Cost of Ownership) for mobile applications built by leveraging serverless backends.
#**Understanding Backends for Mobile Applications**
Numerous companies around the world are running their business today solely using mobile apps. Some popular names include:
- Instagram
- TikTok
- Whatsapp
- Uber
- OLA
These companies basically have a backend application powering up their mobile apps. The backend of such mobile applications focuses on providing data to the frontend after required processing.
Traditionally, the backend of a mobile application is developed using popular programming languages like Java, Python, Go, NodeJS or Ruby. These backend applications run on virtual machines, or shared servers explicitly configured to run the corresponding application. The backend app connects with the desired database and exchanges data for the frontend. The entire execution is stateless, where the authorization is taken care of using an authentication token.
#**Conventional backend architecture for mobile application**
Conventionally, the backend applications continuously run on a server machine of defined configuration. The companies either opt for shared servers pre-configured for a specific service or they choose to utilize the dedicated device for better performance.
The shared servers (such as Hostgator and Bluehost) restrict the companies to utilize a specific programming language (such as PHP) and configuration (such as WordPress/Magento). The possibility of scaling up is limited.
On the other hand, dedicated machines normally come without any pre-installed application servers. We need to configure the machine, perform installations and then run the application. This process is not only time-consuming but also costly, since someone needs to invest time in it (usually a DevOps engineer).
Considering the [current state of mobile applications](https://blog.solutelabs.com/the-state-of-mobile-app-development-in-2020-19b1c1e274e0), the backend applications normally require very little processing or compute power but greater bandwidth i.e. I/O (Input Output Operations). The sole purpose of the backend in most cases is to provide the means to exchange data. The usage of the backend highly depends on the number of active users, the amount of data to be processed and the bandwidth utilized. However, the costs of owning and managing a server remain the same irrespective of the usage. This leads to recurring costs for companies.
In addition to the recurring costs, if the application progresses well and the user base rises exponentially, scaling up would proportionately increase the expenses and involve additional installation efforts.
#**Serverless Architecture Benefits - The power of Serverless Backend**
After understanding the problems with the dedicated compute, let us now understand how serverless architecture stands against it. With serverless architecture, you utilize an ecosystem over the cloud where you are free to run code in a variety of popular programming languages. This ecosystem has a flexible compute size for every Function, which means you can specifically optimize the computational power as per the function's need.
These ecosystems, also known as Function as a Service, are billed at per-millisecond execution granularity, proportionate to the computational power used. This means a function costs us ZERO when not in use. Moreover, every function execution happens in a dedicated environment, which ensures that the function never runs out of juice.
Let us understand the benefits of serverless architecture for the backend:
- **Network Security**
One of the top benefits of serverless architecture is that these functions run within the Cloud ecosystem that is built to withstand the majority of the attacks and security vulnerabilities. This means your execution is far more secure than the regular dedicated servers. The functions can be executed within a private VPC where no external system is allowed to connect. This will ensure that your function is never misused.
- **Secure Authorization Gateway**
The functions are executed based on the request routing programmed at the gateway of the serverless ecosystem. Based on the routes, you can configure the authorization for each function. The intermediate routing gateway ensures that your function is never executed for an unauthorized request. Moreover, it also sanitizes the input data to protect your function from any unexpected request data. Hence, it becomes one of the unmissable advantages of serverless architecture.
- **Unlimited compute**
Another benefit of serverless architecture is that the compute power is on-demand. This means that, irrespective of the number of users, you ideally do not need to worry about running out of RAM or bandwidth. Every function execution instance has its own dedicated memory and processing power. The available RAM ranges from 128MB to 3GB in various cloud environments. Moreover, even the costs for function execution are proportionate to the amount of memory utilized. Thus, despite the high scalability, your costs remain optimum.
- **Optimized costs**
As discussed before, the majority of the time, the backend servers of mobile applications stay idle. It would not make sense to pay for a server when it is not really being used. With serverless architecture, you only pay for the amount of time you utilize the cloud. The entire ecosystem is free to use for storage of your functions and mapping your routes (negligible costs).
- **Freedom of programming language selection**
This is the most crucial benefit of serverless architecture. Every function has its own ecosystem to run on. This means you can choose a different programming language for every function. For instance, if you need Python for some faster data processing but Node.js is simpler for data exchange, you are free to mix them. The cloud environments offer almost every popular backend programming language.
![The serverless layer resides within the applications and base cloud platform to simplify cloud programming.](https://cdn-images-1.medium.com/max/800/0*NhuSzQURQCADeXeD)
#**Serverless Architecture AWS- Going serverless with AWS**
One of the most commonly asked questions is: what is AWS serverless architecture? Here is the answer.
[Amazon Web Services](https://aws.amazon.com) (AWS) is the most popular platform today for going serverless. It is preferred because the components have been tried and tested exhaustively by numerous organizations. Let us understand the AWS ecosystem of the serverless architecture and map our backend components to go serverless.
The below image shows a high-level overview of the serverless architecture. Let us discuss each of the components to understand how it would fit in for a mobile backend.

#**AWS Cloudfront & WAF**
AWS Cloudfront and Web application firewall together form the entry point for the API calls. Every API call that is made from the mobile application needs to pass through these layers. These layers are responsible for filtering out spam requests, caching the data for the requests and optimizing performance by routing the request to the right region.
#**AWS API Gateway**
This is an important component of the serverless architecture. AWS API gateway takes care of the routing of requests to the right AWS Lambda functions. AWS API gateway is responsible for the below items:
- Authorizing the request
- Mapping and validating headers and the request body
- Routing the request
- Mapping the response body
- Returning the response
#**AWS Lambda**
AWS Lambda is the core compute part of the serverless architecture. With AWS Lambda, the developers get the facility of running function as a service. AWS Lambda today supports the majority of the backend programming languages like Python, Java, NodeJS, Go, Ruby, PHP, .NET and others. Moreover, AWS also has an SDK for most of the programming languages to make access to AWS services easier.
#**AWS DynamoDB & AWS RDS**
Finally, the last component of the serverless architecture is storage. AWS DynamoDB and AWS RDS are two database services by AWS that provide NoSQL & SQL databases, respectively. Both the database services provide an auto-scalable infrastructure where you are charged based on your usage. Additionally, they also have automated backups to ensure that your desired data is never lost.
These components collectively make up the serverless architecture on AWS. Companies around the world have been shifting their backend workloads to AWS serverless to reduce their recurring cost commitments, and leverage scalable cloud computes to keep up the performance.
#**Long Live Serverless Architecture**
Serverless architecture is the future of mobile backends, owing to the need for cost optimization. It not only allows developers to have flexible compute at the optimum cost but also frees them from any installation and configuration efforts.
With security laws getting stricter day by day, handing off the security to the expert in the field is always the right choice. With serverless architecture, you precisely do this. You hand over the majority of the security aspect to the Cloud service provider. Thus, going serverless is definitely a move to consider in your next technology upgrade plans.
{% youtube VzF2dCqp7Q0 %} | ajaybab04952218 |
270,483 | Cloud-Computing: an odyssey of immense possibilities? | In the modern arena, Cloud-Computing is growing to be an increasingly familiar word, but if you are s... | 5,167 | 2020-02-28T06:24:42 | https://nihalpotdar.home.blog/2019/04/14/cloud-computing-an-odyssey-of-immense-possibilities/ | beginners, computerscience, career, architecture | In the modern arena, Cloud-Computing is growing to be an increasingly familiar word, but if you are struggling to find out what it truly is – then, read this blog to find out more! By the end of this blog, you will be able to answer questions such as what is Cloud-Computing, what are some of its very basic principles, its implications on our lives, and how to learn more.
What is Cloud Computing?
Microsoft Azure (Microsoft's cloud service) defines cloud computing as "the delivery of computing services (servers, storage, databases, networking, software, analytics, intelligence and more) over the Internet ('the cloud') to offer faster innovation, flexible resources, and economies of scale."
Essentially, Cloud Computing is the act of storing, retrieving or manipulating your data that does not exist on localized files on your computer, but rather on large data centers, and systems that exist elsewhere across the world – which you can access via the internet. In fact, this article is a by-product of Cloud Computing as it is developed using a mechanism which consistently stores and saves what I write, and is made available using a service which allows me to upload my data (on the internet) without having to store any of the data myself. On a macro-scale, the internet, itself, can be considered to be a means of cloud-storage as it allows us to access the information that we require without having to store any of this information, ourselves. However, this is only an analogy – primarily, since not all the data on the internet is ours.
A better example would be Gmail, through which we store our data (past emails) not on our own hard drives but on those of the managing enterprise. If all our data on Gmail were stored in files on our computer, not only would we be unable to send any emails, we would not be able to access our files from other devices such as our phones. Considering this from the perspective of businesses, this is very important, as their employees must be able to access their files regardless of their location, and at their convenience. About Gmail itself: every time we send an email, it is transmitted to Google servers through fiber optic lines (the internet). The servers then duplicate the content of our email to ensure there is a backup, and check for spam and viruses; finally, the email is on its way to its recipient.
As established, it is important for businesses to use cloud computing as a means to avoid locally storing their data. Hence, there are different types of support that a business may require and the different types of cloud computing.
Types of Cloud Computing
IaaS (Infrastructure as a Service)
This is the most rudimentary form of a cloud-computing service, where businesses are provided with the capability to store files on, and access them from, a remote server. Amazon Web Services defines IaaS as "the basic building blocks for cloud IT … provides access to networking features, computers (virtual or on dedicated hardware), and data storage space." In essence, IaaS refers to a form of cloud computing where the user (most often a business) can delegate all the tasks of maintaining a traditional database of information, such as servers and storage technology, to the provider. In this way, the user does not have to expend as many resources on their storage requirements and still maintains a degree of control over their needs. This type of model is most often used by medium enterprises which wish to avoid the hassles of maintaining their own database, but require a degree of control over their own analytics and systems.
PaaS (Platform as a Service)
Platform as a Service is a cloud-computing service that provides infrastructural support for businesses as well as support with operating systems and analytics. The primary difference between IaaS and PaaS lies in their names: while IaaS offers support at the level of infrastructure, PaaS offers extended support with the platform itself. A good example would be sites which allow you to run code: they let you build your own applications, but you don't have to worry about the runtime yourself (the Google Play Store is a great analogy).
SaaS (Software as a Service)
Software as a Service is arguably, the most prevalent form of cloud computing that we see. The principle of SaaS is that the cloud provider manages all aspects related to the application (data, computation) with only the data being yours. A good example of this, would be web-based hosting, Gmail, Google Drive, and pretty much any service where you can upload, store, and manipulate your data (without worrying about how it will be computed).
Why would a business use Cloud Computing?
1. Access Data Anywhere
For a business with researchers working across the world, it is important for the researchers to be able to access their data at any location, and at any time. For a business as a whole, it is very important to ensure that its data is not relevant to the location.
2. Cost (no need of those bulky data servers)
For small and medium-sized businesses, it is often far cheaper to store their data on a platform where they pay as per their usage than to manage a database themselves. Managing a local database (in a physical location) requires a lot of physical space, manpower, and security, all of which is taken care of by cloud providers.
3. Security (no need to manage your own database)
Quite simply, for smaller and medium-enterprises, Cloud solutions can be far more secure than those that require a physical presence – simply owing to the facet, that cloud providers often have a lot more resources than these enterprises, leading to a higher degree of virtual and physical security (Cloud providers also have a lot more data that they have to ensure the security of).
4. Different Levels of Privacy
Often, enterprises have varying needs in terms of their requirement for privacy – with regards, to the extent to which they must rely on Cloud Providers to manage their data. These different layers of privacy/ interaction are denoted by the following terms: Public Cloud (services are managed by the cloud provider), Private Cloud (services are managed on local clouds, exclusively for organizations), and Hybrid (some services are managed by the cloud provider, and some locally).
Concerns over Cloud Computing
1. Privacy (you are handing over your data to someone else)
A concern with cloud services is that the data is being stored by a company which is not your own, leading to concerns over privacy and the extent to which your data will remain exclusive to your needs.
2. Can the data servers be hacked?
One of the main concerns with Cloud Computing is that this data is being stored on large servers, possibly across the world – which the user has no control over – leading to the possibility of theft, and virtual hacking.
The future of Cloud Computing
With cloud computing being such a vibrant field with ever-expanding possibilities, some have suggested that in the future we may be looking at virtual desktops, where all our data is stored at cloud scale, allowing for complete integration between computers and cloud services. This would ensure not only that we are able to access our data anywhere, but also from any device, in such a manner as to produce the maximum efficiency at the minimum expense. For example: you could simply use any device to access your data without being tied to one particular piece of hardware, allowing for the implementation of such technology into driverless car systems, autonomous robots and much more! | nihalpotdar |
270,667 | Mockup Editor with Scene | This is a good start for your mockup editor. Inspired by https://mockupeditor.com/ | 0 | 2020-02-28T13:21:32 | https://dev.to/tudorsfatosu/mockup-editor-with-scene-5co3 | codepen | <p>This is a good start for your mockup editor.
Inspired by <a href="https://mockupeditor.com/" target="_blank">https://mockupeditor.com/</a></p>
{% codepen https://codepen.io/chris_tudor/pen/JjdWZLO %} | tudorsfatosu |
270,714 | Making TimNash.co.uk – Part 2, The developer strikes back | In part 1 of my “how did I make this site” series, which, in case you are using an RSS reader or are... | 0 | 2020-05-31T08:21:31 | https://timnash.co.uk/making-timnash-co-uk-part-2/ | wordpress | ---
title: Making TimNash.co.uk – Part 2, The developer strikes back
published: true
date: 2020-02-20 12:00:00 UTC
tags: WordPress
canonical_url: https://timnash.co.uk/making-timnash-co-uk-part-2/
---
In part 1 of my “how did I make this site” series, which, in case you are using an RSS reader or are on a site that scraped mine, is [https://timnash.co.uk](https://timnash.co.uk), I went through my choice to focus on a writing experience of the site being a developer playground. The article focused on my choice of plugins and theme as well as the idea that I wanted a site that made it easy for me to write.
Those who know me well may well have stared at that article with suspicion; Tim using off-the-shelf components and nothing else, I simply don’t believe it!
So ok, you got me, I couldn’t entirely leave the developer side alone so in this article I’m going to look at the custom side of the site. I do recommend reading [Part 1, Making TimNash.co.uk – Plugins and Theme](https://dev.to/tnash/part-1-making-timnash-co-uk-plugins-and-theme-1bob-temp-slug-2293721) first, if you haven’t done so, we have plenty of time and can wait.
Onwards!
## Code Management
Before we get to the custom code let’s roll it back a bit and talk about how I manage the site.
My site sits on Managed WordPress hosting that comes with built-in Git integration. The way we set up Git at [34SP.com Managed Hosting](https://www.34sp.com/wordpress-hosting) is that the host sets up a Git repository with a master and staging branch. If you push to master it copies the contents to your live site, if you push to staging it copies to your staging site.
So you can set-up the Git repo as an origin, clone it and off you go, or you can add it as a remote and push to it. Both ways work. The big advantage of doing it this way, over adding the “server” as a user to your existing repo, is that the server controls what it will accept in push requests. For example, by default it lints PHP code, it will also run things in your tests folder, allowing you to specify tests for deployment. If things fail, then it rejects the push and lets you know directly in the Git client feedback.
There is nothing wrong with just using this Git repo, and for lots of projects this is what I do, however for my own site I also keep my custom code within a separate Git repository on CodebaseHQ.
### CodebaseHQ
[CodebaseHQ](https://www.codebasehq.com/) is a code hosting service similar to GitHub/GitLab/BitBucket. Built by aTech media I have used Codebase on and off for over 10 years as my code storage solution alongside other aTech products.
There are many reasons to use Codebase over the larger competitors. It has some fantastic features, some of which the bigger companies have never replicated, but the main reason is they are a local UK company and, unless they are going to DM me, have literally no evil clients.
A few reasons you might want to check them out:
- Really good issuing tracking, that effectively allows you to run projects directly from within Codebase
- Wiki features in the form of notebooks
- Time tracking
- Exception and error handling (see below)
- Really simple yet flexible API and Webhook system
I sound like a shill for them, but I really do think for a lot of smaller companies they are the ideal choice, especially over GitHub private repositories.
If I was smart I would totally have an affiliate link… I don’t, carry on.
I keep timnash.co.uk in a single project, and I organise most of the site tasks within Codebase. For example, right now I have a pair of Milestones:
- Project Speed – A general milestone for exploring and experimenting with speed/performance improvements
- V2 2020 Overhaul – A milestone to collect any big-ticket items I’m looking to change in the second half of the year
Outside of these two big milestones I have my general tickets, these are either raised within CodebaseHQ itself or more likely me sending an email.
Whenever a ticket is created it also adds a Todo within [Todoist](https://todoist.com/) via some horrifying spaghetti code which will never see the light of day. Likewise, if the issue is closed, either through email/the site or through a Git commit, the Todo is removed. In theory, closing the Todo in Todoist should also close the issue but the reality is that’s never worked reliably.
So my usual workflow for a bug I don't have time to fix immediately is: send an email to Codebase, which generates a ticket and a Todo; work on the ticket, close it, and the Todo is removed. Prompting to work on the ticket/escalation is all done in Todoist.
### Pushing to Multiple origins
Pushing to two places at once in Git turns out to be remarkably, for Git, easy. To get it set up:

- Clone your original repo, in my case that's the 34SP.com one
- Add the second as a git remote:

  `git remote add codebase git@codebasehq.com:/pathtorepo.git`

- Set the two repos to both listen to origin for push and add:

  `git remote set-url --add --push origin git@codebasehq.com:/pathtorepo.git`

  `git remote set-url --add --push origin git@timnash.co.uk:/pathtorepo.git`
If you then run `git remote show origin`, both are now listed, and when you push, both are pushed to.
**IMPORTANT – The second repo you add as a remote needs to be blank or matching otherwise this is going to become fun.**
It’s worth noting that only “add” and “push” are mirroring so other commands that call origin, for example pull, will still only be occurring from your primary repo, which in my case is the one I cloned from.
This way I can keep my code neatly in two separate repositories. Now I’m aware I have basically broken Git and this only works realistically if one person is working on a project. I also have turned a decentralised system into a centralised system in two places.
One of the bigger issues is making sure they keep in sync; the main culprit, at least for me, is testing. The 34SP.com Managed Hosting Git integration provides a way to run tests against your deployed code and rejects the deployment if, for example, it fails a code lint.
This is great and stops code that will kill your site from being deployed; however, it can mean that one of the origin servers has rejected the commit. Normally this isn't an issue, as you just fix and push again and they resync.
I tend to work with three branches; the 34SP.com Git integration specifies two of them:
Master and staging: new commits pushed to the origin on either of these branches will be deployed. So I tend to work on a dev branch and merge into master or staging depending on what I'm doing.
I have kept saying custom code, I don’t commit any code that is available via the auto-updater system so this is code from wp.org and sources which hook into the WordPress auto-update system.
So my git repo looks like:
```
wp-content/
plugins/
theme-fiddles
security-headers
my-config.php
composer.json
```
So there is a composer file in the root of my site but no vendor folder.
#### Where is the vendor folder…?
Not in Git, I’m not here to start a flamewar but the point of package managers is that they manage your packages. The moment you hunt something in Git, the manager is no longer in control, you are. Which might be ok in certain scenarios, but for me, I would rather let Composer do its thing.
Also that Composer file, it has nothing to do with WordPress…
Well, that’s not exactly true, it’s for handling code that is currently outside of the WordPress plugin/theme eco-system. Specifically, I use Composer to manage a pair of packages I use for environment variables and exception tracking. Both of these need setting up prior to somewhere in the WordPress loading sequence where there is a suitable hook.
Consequently, they are set up in the my-config.php file, which is a file provided on the 34SP.com hosting that is “required” in the non-writable wp-config.php file for additional directives. Normally it’s used for adding things like WP\_DEBUG defines or similar, but I use it as an early-stage location to add some code.
So why don’t I put all the plugins and theme into Composer and let Composer manage everything?
It’s a good question, and the simple answer is that would require me to write a package management update system or use a third-party, and the hosting already has a plugin update system that works well.
If you have read my [Back to Basics – Updating WordPress Strategies](https://dev.to/tnash/back-to-basics-updating-wordpress-strategies-53g4-temp-slug-3412307) you will hopefully get the impression I am very pro full automation as much as possible, when it comes to keeping things up to date. Therefore one of the criteria for the site is that it should manage itself if I stop looking after it for prolonged periods.
At the moment, all my plugins on the live/staging site auto-update daily. On my dev site, plugins update when I open the project files in my IDE (Atom). In addition, I have a pre-commit hook in Git that runs the auto-update and fails the commit if a plugin has been updated, allowing me to retest if need be, or simply commit again.
Yes, in theory, I could do all of this with packagist and wp-packagist and some cron jobs but the current setup has thousands of sites using it, is robust and has decent feedback systems. Why reinvent the wheel?
## So what are the two packages?
### Keeping things organised with .env
The first package in my Composer is [https://github.com/vlucas/phpdotenv](https://github.com/vlucas/phpdotenv) though if I was to recommend one I would lean more towards [https://github.com/josegonzalez/php-dotenv](https://github.com/josegonzalez/php-dotenv)
This reads a bunch of variables in from a .env file and allows me to quickly use them wherever I like within my code. Why might I do this?
I have, in effect, 3 environments (my local machine, staging and live) and each of these at times needs different variables, so each has its own .env file. Within the my-config.php file I load the env at the start:
```
use Dotenv\Dotenv;
$dotenv = Dotenv::createImmutable( __DIR__.'/../');
$dotenv->load();
```
And then I can use them at any time for calling:
```
getenv('SECRET');
```
This means any of my custom code can make use of the .env file, but I also have a simple mu\_plugin that looks for variables called OPTION\_\* and then applies a `pre_option_{$option}` filter, allowing me to serve any option normally held in the wp\_options table from the env file instead. This lets me set separate API keys etc. for plugins on local/dev/live.
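The mu\_plugin itself isn’t shown; the idea boils down to a loop over the environment, roughly like this illustrative sketch (simplified, not the exact code running on the site):

```
<?php
/*
 * Illustrative sketch only: bridge OPTION_* environment variables into
 * get_option() lookups via the pre_option_{$option} filter.
 */
foreach ( $_ENV as $key => $value ) {
    if ( 0 !== strpos( $key, 'OPTION_' ) ) {
        continue;
    }
    $option = strtolower( substr( $key, strlen( 'OPTION_' ) ) );
    // Returning anything other than false short-circuits the DB lookup
    add_filter( "pre_option_{$option}", function () use ( $value ) {
        return $value;
    } );
}
```

Because `pre_option_{$option}` fires before WordPress touches the database, any plugin calling get_option() transparently receives the environment value instead.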
### Exception Tracking
The second package is Airbrake. Airbrake is a language-agnostic exception tracking service and open-source standard. The idea: instead of writing your error or thrown exception to your local logs, you send it to an exception tracking service. There are lots of these services, and most have their own API for handling the data sent to them. Airbrake opened up their API, which means multiple services can act as Airbrake endpoints, including Codebase.
What does this mean? Well, with Airbrake set up and configured on the server and in Codebase, whenever my site throws an exception or triggers a warning/fatal error, it sends an HTTP notification to Codebase, which generates an exception report. This contains a stack trace and other useful information. Codebase then groups reports together, so if you hit the same error over and over it just adds them to the same report.
This means you can go into Codebase, open the Exceptions tab, see all the errors, raise tickets and notes, and ultimately close/delete them directly from the interface.
To get it going I just install the ‘phpbrake’ package from Airbrake then:
```
/*
 * Setup Airbrake for exception tracking to Codebase
 */
if ( getenv('AIRBRAKE_ID') ) {
    // Create new Notifier instance, pointing to Codebase
    $notifier = new Airbrake\Notifier(array(
        'projectId'  => getenv('AIRBRAKE_ID'),
        'projectKey' => getenv('AIRBRAKE_KEY'),
        'host'       => 'https://exceptions.codebasehq.com',
    ));

    // Set global notifier instance.
    Airbrake\Instance::set($notifier);

    // Register error and exception handlers.
    $handler = new Airbrake\ErrorHandler($notifier);
    $handler->register();
}
```
Within the my-config.php file.
That’s it, I can now explicitly throw an exception and it will appear, or any errors will show up, naturally. This makes debugging quicker and easier and because of notifications in CodebaseHQ I get notified about errors quickly, not when I happen to look in my PHP error log.
## Custom Plugins
I have a few custom plugins and mu-plugins. I have already talked about my options\_to\_env plugin above. In addition, the two of most interest to people will be my security headers and theme\_fiddles plugin, and both will disappoint.
I really try to keep things small and single-purpose. Indeed the entire code in my tn-security-headers plugin is:
```
function tn_security_headers() {
    header( 'strict-transport-security: max-age=31536000; includeSubDomains; preload' );
    header( 'X-Frame-Options: SAMEORIGIN' );
    header( 'X-Xss-Protection: 1; mode=block' );
    header( 'X-Content-Type-Options: nosniff' );
    header( 'Referrer-Policy: strict-origin-when-cross-origin' );
}
add_action( 'send_headers', 'tn_security_headers' );
```
Likewise, my tn-theme-fiddles is similarly lightweight:
```
remove_action( 'wp_head', 'wp_generator' );
remove_action( 'wp_head', 'wlwmanifest_link' );
remove_action( 'wp_head', 'rsd_link' );
remove_action( 'wp_head', 'wp_shortlink_wp_head' );
remove_action( 'wp_head', 'adjacent_posts_rel_link_wp_head', 10 );
add_filter( 'the_generator', '__return_false' );
remove_action( 'wp_head', 'print_emoji_detection_script', 7 );
remove_action( 'wp_print_styles', 'print_emoji_styles' );
remove_action( 'wp_head', 'rest_output_link_wp_head' );
remove_action( 'wp_head', 'wp_resource_hints', 2 );
add_filter( 'the_seo_framework_indicator', '__return_false', 10 );
//Make sure all images come from cdn
add_filter(
    'wp_calculate_image_srcset',
    function( $sources ) {
        $return = array();
        foreach ( $sources as $source ) {
            $source['url'] = str_replace( 'https://timnash.co.uk/', 'https://cdn.timnash.co.uk/', $source['url'] );
            $return[] = $source;
        }
        return $return;
    }
);
add_action(
    'wp_head',
    function() {
        ob_start(
            function( $o ) {
                return preg_replace( '/^\n?<!--.*?[S]tream.*?-->\n?$/mi', '', $o );
            }
        );
    },
    ~PHP_INT_MAX
);
```
The security headers plugin just applies the same headers on every page. Really this could be done in Nginx, but at one point some headers changed depending on the page; it has been simplified over time.
The theme fiddles plugin is very much the “functions.php” file of the site, but in plugin form, so I can keep some control over it. It’s mostly removing things in the head I am uninterested in, though it does include a fix (for posts using the classic editor block) where the ‘subscr’ would point to the wrong URL.
That’s it: how dull and boring, but that’s the point. I’m trying to be dull and boring; a few very small custom plugins, with everything else using existing plugins, was the goal. Many of the plugins I use I could write myself, especially if I were chasing total performance and willing to sacrifice settings pages for configs.
## Other code that is running?
### Backups
The hosting takes backups daily in the morning, just after any updates are done, and stores them for 28 days. I also use a free service called CodeGuard that backs up all the code on the site daily as well, excluding the .env file and the wp/ folder itself, as I don’t manage that; the hosting does.
In addition, I have a small script that logs on and runs ‘wp db export’ and stores it on my home NAS. When I start developing locally (by opening the project in Atom and on my home network, or connected to the VPN) it will grab the latest backup and apply it to my local environment. This way I am always testing my backup.
So my backups are:
- The hosts own daily backup
- All custom code is in 2 repos
- I back up the code to CodeGuard
- I back up the MySQL DB to a local NAS as well as use it as the basis for local development.
I’m totally not paranoid, and have lost years of posts in the past and had to rely on archive.org to get them back, oh no.
### WP-CLI commands
I have a few maintenance commands that I hold in a separate package and just require as and when I need them. Mostly these are old test commands, option exporters and a few quick commands for clearing the cache. These are the dirty bash scripts of the WordPress admin: you wouldn’t share them, but they quickly allow you to test things. My test for finding Gutenberg blocks is an example of the sort of script that lives in my wp-cli commands folder.
While typing this I realised how unproud I am of this collection so I have raised a ticket in Codebase to clean them out and organise things a little more logically.
## So is part 3 the return of the writer?
So there you have it: in part 1 we looked at how I was trying to simplify, and make sure my over-engineering self wouldn’t dominate my site, so I could get on with writing. In this 2nd part, we can see that’s not exactly how it’s gone, and I clearly have more work to do.
So what of part 3? This is an ongoing project, after all. Since I wrote part 1 things have changed: plugins have gone, and a new plugin has arrived. But I think I will save the things that have changed since I started writing these posts for part 4. Instead, part 3 will be how I write posts and my Gutenberg workflow!
Want to learn more?
This post is from a series called Making TimNash.co.uk, here is the series so far:
- [Part 1 – Plugins and Themes](https://dev.to/tnash/part-1-making-timnash-co-uk-plugins-and-theme-1bob-temp-slug-2293721)
- [Part 2 – Developer Strikes Back](https://timnash.co.uk/making-timnash-co-uk-part-2/)
- Part 3 – Gutenberg (Coming Soon)
This post was written by me, Tim Nash. I write and talk about WordPress, Security & Performance.
[Source](https://timnash.co.uk/making-timnash-co-uk-part-2/) | tnash |
270,765 | What was a tech you recently tried and didn’t fully enjoy? Why? | Taking into account all of the tech you have recently used, from frameworks to editors, what was one you realized you don’t enjoy? | 0 | 2020-03-03T15:14:07 | https://dev.to/heroku/what-was-a-tech-you-recently-tried-and-didn-t-fully-enjoy-why-1ggf | discuss, webdev | ---
title: What was a tech you recently tried and didn’t fully enjoy? Why?
published: true
description: Taking into account all of the tech you have recently used, from frameworks to editors, what was one you realized you don’t enjoy?
tags: discuss, web-dev, meta
cover_image: https://upload.wikimedia.org/wikipedia/commons/thumb/0/0d/Cape_Disappointment_and_Cape_Disappointment_Light.jpg/1600px-Cape_Disappointment_and_Cape_Disappointment_Light.jpg
---
_image of Cape Disappointment by [Adbar](https://commons.wikimedia.org/w/index.php?curid=27188708)_
Taking into account all of the tech you have recently used, from frameworks to editors, what was one you realized you don’t enjoy? Were your expectations different?
| nocnica |
270,901 | Leveraging affiliate programs to sell more as an indie hacker. | Leveraging affiliate programs to sell more as an indie hacker. | 0 | 2020-02-28T19:43:55 | https://dev.to/eddymens/leveraging-affiliate-programs-to-sell-more-as-an-indie-hacker-5bao | affiliatemarketing, sales, indiehacker, solodeveloper | ---
title: Leveraging affiliate programs to sell more as an indie hacker.
published: true
description: Leveraging affiliate programs to sell more as an indie hacker.
tags: affiliate marketing, sales, indie hacker, solo developer
---

**Who is an indie hacker?**
This I believe was coined by Courtland Allen who runs [Indiehackers.com](https://www.indiehackers.com/). So I won’t stress myself to come up with a definition, I will just quote the about section of the site.
> You’re an **indie hacker** if you’ve set out to make money independently. That means you’re generating revenue directly from your customers, not indirectly through an employer. Other than that, there are no requirements!
**What is an affiliate program?**

This is basically a program set up to reward people known as **affiliates** who bring in new customers or sales. These individuals are usually rewarded only when a sale is made. Think of it as a salesperson on commission only.
**Why might this benefit an Indie Hacker?**

As an Indie hacker, you already have a lot to do, i.e. product development, marketing, legal, sales, etc.
Also, sales might not be your strongest skill, more so for the many developers who decide to go the Indie hacker route. By having an affiliate program in place, you might only have to pitch to get affiliates to join the program and sell through them.
This is a good strategy to boost your sales without having to be the only salesperson for your product.
With that being said I believe doing the initial sales yourself and continuing to do some sales even with affiliates, is important if your product is to be loved by your users. Not only do you get to understand what your users want, but you also get to become better at selling.
| eddymens |
270,941 | Double Pipe Equals | If the left-hand side of the comparison is true, there's no need to check the right-hand side. Well,... | 0 | 2020-02-28T22:07:24 | https://dev.to/christianotieno/double-pipe-equals-2mk3 | doublepipeequals, operators | If the left-hand side of the comparison is true, there's no need to check the right-hand side. Well, the principle works well in my programming world. I am not so sure of its practicality on the real world though. | christianotieno |
270,984 | How to create a basic search form and how to use Elasticsearch as an alternative.(Ruby on Rails) | In this tutorial, we will review how to build a search form and then we'll look at how to impleme... | 0 | 2020-03-05T15:22:23 | https://dev.to/aweysahmed/how-to-create-a-basic-search-form-and-how-to-use-elasticsearch-as-an-alternative-3o99 | ruby, rails, tutorial | In this tutorial, we will review how to build a search form and then we'll look at how to implement a search form using ElasticSearch. I'll be using a book review app that I built recently.
I want to implement a search function that will allow users to search the database for a book that has been reviewed.
# Basic Rails Search
In the app, a user can have more than one book review and a book review belongs to a user.
Let's look at what the respective models look like.
This is the book review model.
```ruby
# frozen_string_literal: true

class BookReview < ApplicationRecord
  belongs_to :user
end
```
This is the user model.
```ruby
# frozen_string_literal: true

class User < ApplicationRecord
  has_many :book_reviews
end
```
We'll have to work with the model, view and controller in this app.
The first thing we'll do is add the search function to our view. We'll be using the index view to add the search function.
## Modifying the view
```ruby
<%= form_with url: book_reviews_path, method: :get, local: true do |form| %>
  <%= form.label :search, "Search Books:" %>
  <%= form.text_field :search, id: :search, class: "form-control" %>
  <br />
  <%= form.submit "Submit", class: 'btn btn-primary' %>
<% end %>
```
This will create our search form. The controller will receive the query as params[:search], the name we set on form.text_field.
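For completeness, the controller side of this basic search is a minimal sketch of the index action (my illustration, assuming a conventional BookReviewsController and the search class method we add to the model in the next section):

```ruby
class BookReviewsController < ApplicationController
  def index
    # params[:search] comes from the form above;
    # the model falls back to all records when it is missing
    @book_reviews = BookReview.search(params[:search])
  end
end
```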
## Modifying the Book Review Model
The next thing we want to do is add the logic for our search to the model. The reason we are adding it to our model and not our controller is that it is good practice to keep your controllers lean and add logic to your models.
```ruby
# frozen_string_literal: true

# BookReview Model with validations
class BookReview < ApplicationRecord
  belongs_to :user

  def self.search(search)
    if search
      book_reviews = BookReview.where('title LIKE ?', "%#{search}%")
      if book_reviews.present?
        where(id: book_reviews)
      else
        all
      end
    else
      all
    end
  end
end
```
(I'm using Rails 5.1.7 and with this comes the ability to use all instead of typing BookReview.all)
Our search function will render on the index page. If there isn't a search parameter entered it will render all the books. It will also render all the books if the search doesn't find a match.
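One caveat with this approach: % and _ are wildcard characters inside a SQL LIKE pattern, so a user searching for a literal % would match every title. Rails ships a sanitize_sql_like helper for this; the core idea is just escaping those characters before interpolating them, as in this standalone sketch (illustrative, not code from the app above):

```ruby
# Escape the SQL LIKE wildcards (% and _) and the escape character itself,
# so user input is matched literally inside the pattern
def escape_like(term)
  term.gsub(/[\\%_]/) { |char| "\\#{char}" }
end

escape_like("50%_off") # => "50\\%\\_off"
```

Depending on your database, you may also need an ESCAPE clause in the query so the backslash is treated as the escape character.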
Now that we have seen how a basic search form is added to a rails project, let's look at what we can use that will allow us to scale and have a more efficient and robust search.
The current search we are using works well for a small project but large scale applications will need something else.
## ElasticSearch
Elasticsearch is an open-source search and analytics engine. It uses a REST API that makes it easy to use.
It has multiple uses, but for this blog and tutorial, we are going to use it as a search engine.
If you are using Homebrew, you can issue this command to install Elasticsearch:
```brew install elasticsearch```
To use elasticsearch in our rails app, we'll need to add this gem and run bundle install.
```gem 'searchkick'```
Then we need to add searchkick to the model that we would like to search. In our example, we would add it to our BookReview model.
```ruby
# frozen_string_literal: true
class BookReview < ApplicationRecord
  searchkick
end
```
Then you'll want to add data to the search index.
```rails searchkick:reindex CLASS=BookReview```
Type elasticsearch in the command line and then you can type
```curl http://localhost:9200```
in a separate window to make sure elasticsearch is up and running.
We don't need to make any changes to our index view; however, we will need to make changes to our model and controller.
Let's add the search logic to our model so that we are adhering to the design principle of fat models and skinny controllers.
```ruby
def self.advanced_search(search_results)
  if search_results
    BookReview.search(search_results, fields: [:title])
  else
    all
  end
end
```
Let's add this method to our controller
```ruby
def index
  @book_reviews = BookReview.advanced_search(params[:search])
end
```
This is all we need to do to add elasticsearch. ElasticSearch will yield results out-of-the-box even if the user hasn't spelled the word correctly.
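That tolerance for typos comes from Elasticsearch matching terms within a small Levenshtein edit distance of the query. As a concept illustration only (this is not Elasticsearch's actual implementation), here is a compact edit-distance function:

```ruby
# Classic dynamic-programming Levenshtein distance:
# the number of single-character edits between two strings
def edit_distance(a, b)
  rows = Array.new(a.length + 1) { |i| [i] + Array.new(b.length, 0) }
  (0..b.length).each { |j| rows[0][j] = j }
  (1..a.length).each do |i|
    (1..b.length).each do |j|
      cost = a[i - 1] == b[j - 1] ? 0 : 1
      rows[i][j] = [rows[i - 1][j] + 1,        # deletion
                    rows[i][j - 1] + 1,        # insertion
                    rows[i - 1][j - 1] + cost  # substitution
                   ].min
    end
  end
  rows[a.length][b.length]
end

edit_distance("gatsby", "gastby") # => 2 (a transposition counts as two substitutions here)
```

Searchkick exposes this through its misspellings option (for example, misspellings: { edit_distance: 2 }); see the Searchkick README for the current details.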
ElasticSearch can do so much more, but I hope this tutorial has given you the knowledge to add it to your Rails application.
| aweysahmed |
270,985 | Generating Reports in Figma with a Custom Figma Plugin | Build your own Figma Plugin for generating reports in Figma from various data sources! | 0 | 2020-02-29T02:30:00 | https://dev.to/bobheadxi/generating-reports-in-figma-with-a-custom-figma-plugin-2615 | typescript, figma, plugin, tutorial | ---
title: Generating Reports in Figma with a Custom Figma Plugin
published: true
description: Build your own Figma Plugin for generating reports in Figma from various data sources!
cover_image: ''
tags:
- typescript
- figma
- plugin
- tutorial
---
Recently, I took on a project that involved building a Figma plugin to generate a property pitch report from data about a property collected by employees, mostly aggregated on a central Wordpress instance. For the unfamiliar, [Figma](https://www.figma.com/) is a neat web-based tool for collaborative design, featuring a very robust set of APIs, which made choosing it for automating the property pitch report process a pretty obvious one.
In this post I'll write about approaching the Figma plugin API and leveraging it to automate aggregating data from various sources to generate a baseline report that can easily be customized further! :electric_plug:
- [Requirements](#requirements)
- [Implementation](#implementation)
- [Figma Plugins Rundown](#figma-plugins-rundown)
- [Collecting Input and using React as our iframe](#collecting-input-and-using-react-as-our-iframe)
- [Manipulating Figma Nodes and Working With the FigmaSandbox](#manipulating-figma-nodes-and-working-with-the-figmasandbox)
- [Final Thoughts](#final-thoughts)

## Requirements
This plugin would have to be able to:
* retrieve basic data collected by employees from our Wordpress instance
* download images, generate maps, and retrieve miscellaneous assets from various sources to augment the report
* splat all this data onto a Figma document in an attractive, organized manner
As far as implementation goes, this posed a few problems when using Figma Plugins - read on for more details!
## Implementation
### Figma Plugins Rundown
To start off I am going to give a quick overview of how Figma Plugins work. This is also covered in ["How Plugins Run"](https://www.figma.com/plugin-docs/how-plugins-run/) from the official documentation,
but for some reason it still took me quite a while to figure things out, so I'll explain it slightly differently here:

Sometimes the `FigmaSandbox` is referred to as the "main thread", and the `iframe` is also called a "worker".
The "why" of this setup is explained in the official documentation:
> For performance, we have decided to go with an execution model where plugin code runs on the main thread in a sandbox. The sandbox is a minimal JavaScript environment and does not expose browser APIs.
That means that you'll have two components to your plugin: a user interface that has code that runs in the `iframe`, which *also* handles any browser API usage (any network requests, the DOM, and so on), while any code that does the actual work of handling interactions with Figma (reading layers, manipulating nodes, setting views, and so on) runs in an entirely separate `FigmaSandbox`. The only way these two can communicate is through *message passing* via the Figma plugin framework, as described in the above diagram.
This means that to build this thing, we'd either have to:
* front-load everything in the `iframe` before passing everything onto the `FigmaSandbox` - this would require knowing all such dependencies beforehand, and passing a lot of information around
* do some ping-ponging between the `iframe` and `FigmaSandbox`, where each page we generate can declare its own dependencies and fetch them appropriately
#### Structuring Our Plugin
We ended up going with the second option, which lended itself to a more compartmentalized approach, as outlined below:

The following sections cover [the `iframe`](#collecting-input-and-using-react-as-our-iframe) and [the `FigmaSandbox` and the outside world](#manipulating-figma-nodes-and-working-with-the-figmasandbox) in more detail, complete with examples!
### Collecting Input and using React as our iframe
I started off using a template I found, [`nirsky/figma-plugin-react-template`](https://github.com/nirsky/figma-plugin-react-template), which I quickly forked and made a variant of for my own preferences (mostly related to tooling and styles), which you can find at [`bobheadlabs/figma-plugin-react-template`](https://github.com/bobheadlabs/figma-plugin-react-template).
The template sets up the `iframe` part of our plugin as a Typescript-based React application, which is handy because [Figma provides an API typings file](https://www.figma.com/plugin-docs/api/typings/) that saved me a lot of time. Inputs in the template are collected in standard React fashion with some [hooks](https://reactjs.org/docs/hooks-intro.html):
```ts
const projectIDInput = React.useRef<HTMLInputElement>(undefined);
const projectIDRef = React.useCallback((element: HTMLInputElement) => {
  projectIDInput.current = element;
}, []);
```
And later, in your JSX:
```tsx
<p>Project ID: <input ref={projectIDRef} /></p>
```
There are a number of ways to do this, so just use whatever you prefer! Once you've collected your parameters, as denoted in the diagrams above, when the user kicks off the report generation process we can fire a message to the `FigmaSandbox` via `parent.postMessage`:
```ts
const onSubmit = React.useCallback(() => {
  parent.postMessage({ pluginMessage: {
    type: actions.LoadProject,
    // ... other params
  } }, '*');
}, []);
```
To enable bidirectional message passing (in our case, to enable asset loading on the fly), we'll need to hook onto incoming messages to the `iframe` via `window.onmessage`:
```ts
React.useEffect(() => {
  window.onmessage = async (event) => {
    const { pluginMessage } = event.data;
    const { type } = pluginMessage;
    try {
      switch (type) {
        /**
         * handle your plugin's various action types - these handlers can also use
         * `parent.postMessage` to pass results back to the FigmaSandbox!
         */
      }
    } catch (err) {
      console.error(type, { err });
    }
  };
}, []);
```
Remember that browser API access requires being in the `iframe`, so any network requests and so on should probably happen in one of these `window.onmessage` handlers.
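Because everything that crosses the bridge arrives as untyped postMessage data, it can help to pin the protocol down with a discriminated union and a small runtime guard. Here's a self-contained sketch (the message shapes and type strings here are placeholders I made up, not Figma APIs):

```ts
// A discriminated union pins down every message shape the plugin uses
type PluginMessage =
  | { type: 'load-project'; projectID: string }
  | { type: 'assets-loaded'; assets: { [name: string]: Uint8Array } };

// Payloads arrive from postMessage as `any`, so narrow them at runtime
function isPluginMessage(data: unknown): data is PluginMessage {
  if (typeof data !== 'object' || data === null) return false;
  const { type } = data as { type?: unknown };
  return type === 'load-project' || type === 'assets-loaded';
}
```

Both the `window.onmessage` and `figma.ui.onmessage` handlers can then bail out early on unexpected payloads before hitting their `switch` statements.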
### Manipulating Figma Nodes and Working With the FigmaSandbox
Similar to the `iframe`, the first step is to listen for messages, this time via `figma.ui.onmessage`:
```ts
figma.showUI(__html__, { height: 300 }); // https://www.figma.com/plugin-docs/manifest/
figma.ui.onmessage = async (msg) => {
  const { type } = msg;
  try {
    switch (type) {
      /**
       * handle your plugin's various action types - remember to make use of the figma API with the
       * `figma` object, and use `figma.ui.postMessage` to send messages back to the iframe!
       */
    }
  } catch (err) {
    console.error(err);
    figma.closePlugin(`Error occurred: ${err}`);
  }
};
```
Since we have no UI component in the `FigmaSandbox`, this code can just be plain Typescript.
Unlike the `iframe`, the actual Figma API is accessible to the plugin here via the `figma` object. For example, to set up an A4-sized frame for each page and set your view onto them:
```ts
for (let i = 0; i < pages.length; i += 1) {
  const { name, template } = pages[i];
  // set up frame for this page
  const frame = figma.createFrame();
  frame.name = `Page ${i}: ${name}`;
  frame.resize(PAGE_A4.x, PAGE_A4.y);
  frame.y = i * (PAGE_A4.y + 100);
  // add frame to page
  figma.currentPage.appendChild(frame);
  newNodes.push(frame);
  // ...
}
// select new nodes and zoom to fit
figma.currentPage.selection = newNodes;
figma.viewport.scrollAndZoomIntoView(newNodes);
```
Working with nodes is pretty straightforward: like any other graph-like structure, each node represents an element (in this case, an element on your Figma document), and attaching nodes to other nodes gives your elements a bit of structure. Each node exposes a variety of configuration options. For example, see the `TextNode` definition from the [Figma API typings file](https://www.figma.com/plugin-docs/api/typings/):
```ts
interface TextNode extends DefaultShapeMixin, ConstraintMixin {
  readonly type: "TEXT"
  clone(): TextNode
  readonly hasMissingFont: boolean
  textAlignHorizontal: "LEFT" | "CENTER" | "RIGHT" | "JUSTIFIED"
  textAlignVertical: "TOP" | "CENTER" | "BOTTOM"
  textAutoResize: "NONE" | "WIDTH_AND_HEIGHT" | "HEIGHT"
  paragraphIndent: number
  paragraphSpacing: number
  autoRename: boolean
  // ...
}
```
Refer to the typings and [official node types documentation](https://www.figma.com/plugin-docs/api/nodes/) to find out what you need!
#### Example: Declaring, Downloading, and Using an Image Asset
Recall from the [earlier diagram](#structuring-our-plugin) that for our templates to declare asset dependencies, a bit of ping-ponging needed to happen between our `iframe` and `FigmaSandbox` components to make the appropriate network requests.
Here is an example of a simple template function (which is our way of organizing page generation - feel free to use whatever system suits you best!) that allows the declaration of some dependencies. Our interface is described as follows:
```ts
export type Asset = ImageAsset | MapAsset;
export type Assets = {[name: string]: Asset}

export interface Template {
  /**
   * Optional assets function allows templates to declare image dependencies. This callback should
   * return a set of URLs keyed by a useful identifier. If provided, hashes of loaded images will be
   * provided to Template.
   */
  assets?: (data: ProjectData) => Assets;

  /**
   * Calling a template should populate the given frame with this template. The `assets` argument is
   * only provided if `Template::assets` is declared.
   */
  (frame: FrameNode, data: ProjectData, assets?: {[name: string]: string}): Promise<void>;
}
```
Note that in the `template` function, `assets` is a simple key to string value dictionary, while in the `assets` property, the function returns `Assets`. This is because Figma images are referenced by a hash - once we have image data, it is easier to simply store the images in Figma first before passing the identifying hashes as values in the `asset` dictionary to the template function. For declaring assets, a bit more detail is needed - hence the `Assets` type.
Here's an example function:
```ts
async function template(frame: FrameNode, _: ProjectData, assets?: {[name: string]: string}) {
  // set up a simple coloured background via a helper function we made
  Background(frame, LIGHT_BEIGE);

  // create a rectangle to populate with our logo
  const imgN = figma.createRectangle();
  imgN.name = 'logo';
  imgN.fills = [{
    type: 'IMAGE',
    scaleMode: 'FILL',
    imageHash: assets.logo, // image hash for Figma, as described above
  }];

  // center our logo
  imgN.x = frame.width / 2 - imgN.width / 2;
  imgN.y = frame.height / 2 - imgN.height / 2;

  // add the logo to the frame
  frame.appendChild(imgN);
}

// declare a dependency on our logo
template.assets = (): Assets => ({
  logo: {
    type: AssetType.IMAGE,
    uri: `${env.wordpressInstance}/wp-content/uploads/2020/02/Dark-Full-LogoLarge.png`,
  },
});
```
In our [`FigmaSandbox` message receiver](#manipulating-figma-nodes-and-working-with-the-figmasandbox), we'll have to check for asset dependencies before calling the template function:
```ts
if (template.assets) {
  // load assets if provided and let main runtime dispatch template load
  figma.ui.postMessage({
    type: actions.LoadAssets,
    assets: template.assets(projectData),
    // used to re-establish context once we've loaded our assets
    frameID: frame.id,
    pageID: i,
    projectData,
  });
} else {
  // if no assets are provided, just load the template directly
  await template(frame, projectData);
}
```
Then, back over in the [`iframe` message receiver](#collecting-input-and-using-react-as-our-iframe), we'll want to retrieve these assets when requested:
```ts
export async function getImage(imageURI: string): Promise<Uint8Array> {
  const resp = await fetch(imageURI);
  // convert image data into Uint8Array to send back to FigmaSandbox
  const blob = await resp.blob();
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(new Uint8Array(reader.result as ArrayBuffer));
    reader.onerror = () => reject(new Error('Could not read from blob'));
    reader.readAsArrayBuffer(blob);
  });
}
```
```ts
switch (type) {
  case actions.LoadAssets: {
    const assets = pluginMessage.assets as Assets;
    const names = Object.keys(assets);

    // prepare to load all declared assets
    const loads = [];
    const loaded = {};
    for (let i = 0; i < names.length; i += 1) {
      const n = names[i];
      const ast = assets[n];
      const assetLoader = async () => {
        switch (ast.type) {
          case AssetType.IMAGE: {
            loaded[n] = await getImage(ast.uri);
            break;
          }
          // ...
        }
      };
      loads.push(assetLoader());
    }

    // await for all promises to resolve
    await Promise.all(loads);

    // send results back to the FigmaSandbox
    parent.postMessage({ pluginMessage: {
      ...pluginMessage, // retain context for the FigmaSandbox
      assets: loaded, // note that at this point, the values in assets is raw image data
      type: actions.AssetsLoaded,
    } }, '*');
  }
}
```
Now, back to the `FigmaSandbox` again (woo! ping pong!), we'll want to put all this image data straight into Figma to avoid moving all this raw data around, and generate our template!
```ts
// load up context
const frameID = msg.frameID as string;
const pageID = msg.pageID as number;
const projectData = msg.projectData as ProjectData;
const { name, template } = pages[pageID];

// add assets to figma and set up references
const assets = {};
Object.keys(msg.assets).forEach((n) => {
  const data = msg.assets[n] as Uint8Array;
  const imageN = figma.createImage(data);
  assets[n] = imageN.hash;
});

// execute template with loaded assets!
await template(figma.getNodeById(frameID) as FrameNode, projectData, assets);
```
For the example template, this back-and-forth results in a page that looks like this:

If you find this example hard to follow, try and read through it again and match it up against the [plugin sequence diagram](#structuring-our-plugin) illustrated previously - each action should more or less align with the arrows on the diagram. It definitely took me a little while to get the hang of as well :sweat_smile:
## Final Thoughts
The Figma plugin documentation is quite robust (except for loading images and a good way to do the back-and-forth described in the above example, which I had to spend a good amount of time figuring out for myself), and feature-wise it's pretty comprehensive as well. If your team has any sort of design needs, I would highly recommend looking into automating some of your processes with Figma!
That said, it can be a bit of work to do seemingly trivial things (at least at first), but when you get the hang of things you can do some pretty cool tricks with it - I'll probably follow up this post with another one about the map generation work I am about to do!
As an aside: the charts in this post were created with [Mermaid](https://github.com/mermaid-js/mermaid), which I recently (re)discovered - very nifty.
| bobheadxi |
270,995 | Learn with opensource - Appwish status update | Hello everyone! After a long time of silence, I've got an update of Appwish status for you. If somebo... | 4,089 | 2020-02-29T13:26:03 | https://dev.to/pjeziorowski/learn-with-opensource-appwish-status-update-307e | opensource, webdev, showdev, learning | Hello everyone! After a long time of silence, I've got an update of [Appwish](https://github.com/orgs/appwish/projects/1) status for you. If somebody is not familiar with Appwish yet, **let me first describe the idea behind it**:
> A lot of people have amazing ideas that could spark a lot of changes in our lives, but because they lack the knowledge to make such projects, they just let it go.
> Appwish isn't an app or a project. It's a community full of people from diverse industries across the globe united to make the world a better place.
> Appwish lets you express your great ideas and make them reality.
This is our vision and mission. Appwish was started in a post on dev.to and is **built with open-source at heart** - everything we do is transparent, and anyone can join and contribute. Most of the things we do are documented in a series of posts on dev.to and on our [Github](https://github.com/appwish). If you are willing to do more, grow your software engineering skills and build something great - Appwish is a place for you.
### Backend Status
At the moment we are at the stage of adding the first APIs to our GraphQL server. The backend is still in its early stages, but the hardest part is done: doing the research, setting up the GraphQL server, and connecting it to our backend gRPC microservices stack. The progress of building our APIs should be much faster now.
We have also successfully deployed the Appwish microservices and UI to a new, developer-friendly CaaS platform - [Qovery](https://www.qovery.com) - which has very kindly agreed to host Appwish. If you need a way to deploy your apps with ease, I strongly recommend [taking a look](https://www.qovery.com).
### Frontend Status
We have created the first mockups of the landing page and web application. They're not perfect, but hopefully good enough to let us get started with the UI.
As an example, you can see the [mockup](https://xd.adobe.com/view/2004c923-3980-49d0-6769-e6fcc730a42a-5635/) of our landing page. If you wish to contribute to building it, feel free to assign yourself to the [task](https://github.com/appwish/landingpage/issues/1) on Github or ask me a question on [Slack](https://join.slack.com/t/appwish/shared_invite/enQtOTc3NDQ1MjkyMTM0LTU3MjA0Njk0YzI0ZTRiNzVlZWY3YTEyMTZmMjg5ZDNmODVjM2E0NTgyOTI4MjJmZmU1MmVkNWIxNzA5MmMwODk).
We are also creating a skeleton of our UI web application with React and Next.js. We're adding the most essential logic, like authentication (with Google OpenID Connect) and navigation, to enable working on different parts of the UI in parallel.
### Contribute & Grow as a Software Engineer with us
Appwish is a great place to develop your software engineering skills.
Seize this opportunity and be a part of our amazing community. There's definitely something for you to work on, irrespective of your background. We believe in diversity, and that's why Appwish is built using modern technologies like gRPC, [Vert.x](https://vertx.io/) and GraphQL, just to mention a few.
Your contribution could go a long way, impacting many people out there and helping turn their dreams into working solutions.
Join me and dozens of other developers on [Slack](https://join.slack.com/t/appwish/shared_invite/enQtOTc3NDQ1MjkyMTM0LTU3MjA0Njk0YzI0ZTRiNzVlZWY3YTEyMTZmMjg5ZDNmODVjM2E0NTgyOTI4MjJmZmU1MmVkNWIxNzA5MmMwODk) and [Github](https://github.com/appwish) and let's get those ideas out of dreams into reality. | pjeziorowski |
271,011 | The Imposter Syndrome is Holding You Back from Your Machine Learning Objectives | You spent a lot of time trying to learn Machine Learning. You read a lot of books, you watched variou... | 0 | 2020-02-29T04:23:03 | https://reinforcementlearning4.fun/2020/02/29/imposter-syndrome-holding-back-machine-learning-objectives/ | machinelearning, datascience, career | <!-- wp:paragraph -->
<p>You spent a lot of time trying to learn Machine Learning. You read a lot of books, you watched various MOOCs, and you earned a lot of certifications. But despite all this effort, you still haven't applied to your dream job, nor started building your portfolio. You feel you haven't learned anything, and you think the solution is to read another book or earn another certification.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>If you identify yourself with this situation, you may be suffering from the <strong>impostor syndrome</strong>. The impostor syndrome is a psychological phenomenon in which we have a distorted view of ourselves. People who suffer from this syndrome tend to diminish their accomplishments and their merits, leading them to an irrational fear of being discovered as an impostor.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>The impostor syndrome is very harmful to experienced developers who are learning Machine Learning because it causes complete paralysis. Because we are afraid of being discovered as frauds, we stop sharing our knowledge, we stop applying to new jobs, or we even stop trying to build projects. </p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>And when we are paralyzed by the imposter syndrome, the biggest mistake we can make is to think we need more knowledge. We believe we need another book, another MOOC, or even a new degree, trapping ourselves in this infinite loop of fear and paralysis.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>This paralysis is very harmful to us. Because we are afraid to expose our work, nobody will ever know about our existence. And our objectives with Machine Learning depend on other people knowing us. We need recruiters to find us and invite us to interviews, we need users for our products, and we even need the review of our peers to succeed in our careers. </p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>So, if you don't want to be stuck in your career, you cannot succumb to the impostor syndrome. You must be conscious of your fear and take small actions that can help you recover your confidence. Keep reading to learn how. </p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>How to fight the impostor syndrome</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>The best way to stop feeling like an impostor is to stop thinking like an impostor. You have to be comfortable with the gaps in your knowledge, and you cannot be afraid of exposing these gaps. But to be comfortable with your weaknesses, you must be confident in your strengths. In other words, you must be satisfied with what you already know.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>That's why it is essential to work on projects and show your results. To gain confidence in what you already know. But if you try to start a big and complex project, you will be terrified once again by the gaps in your knowledge, and you will drop the project. Instead, begin with small and targeted projects, so you can complete them and show your results. In this way, you will help people that know less than you, and you will build the confidence you need to make yourself comfortable with your knowledge gaps. </p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>Conclusion</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>If you are struggling to build your Machine Learning project, you may be suffering from impostor syndrome. You may have already read books, taken courses, or even earned a master's degree, but you never practiced with real tools or deployed a model. In either case, a lack of knowledge is not your problem. Your problem is a lack of action towards your project.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>So, instead of reading another book or taking another course, I invite you to take your first step. Click on the link below and spend only five minutes building your first Machine Learning model with real tools:</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":1} -->
<h1><a target="_blank" href="http://bit.ly/mlgc5min" rel="noreferrer noopener">Create Your First Machine Learning Model in 5 Minutes With Google Colab</a></h1>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>It's essential to start with a small project or example, so you can complete it quickly and gain confidence. Then you can gradually add complexity to your projects until you begin building real-life applications. </p>
<!-- /wp:paragraph --> | rodolfomendes |
271,062 | 5 React Native listing app that helps your build Yelp clone | React Native is a JavaScript framework that can be used to create truly native applications for platf... | 0 | 2020-03-01T04:25:01 | http://kriss.io/5-react-native-listing-app-that-helps-your-build-yelp-clone/ | reactnativelisting | ---
title: 5 React Native listing app that helps your build Yelp clone
published: true
date: 2020-02-29 00:01:53 UTC
tags: react-native-listing
canonical_url: http://kriss.io/5-react-native-listing-app-that-helps-your-build-yelp-clone/
---
React Native is a JavaScript framework that can be used to create truly native applications for platforms like iOS and Android. It is based on a JavaScript library created by Facebook called React and thus brings its performance to the native development of mobile applications.
It corresponds with today's mobile app development market needs: with two operating systems dominating the landscape, enterprises creating mobile apps often face a decision - build apps that give a better user experience, or apps that are faster to develop and run on more platforms and devices?
The concept of creating apps using only one paradigm for all platforms sounds a bit unbelievable. However, React Native, despite its immaturity, allows accelerating the process of building apps across different platforms, thanks to the possibility of reusing most of the code between them.
For the convenience of my readers, I have compiled a list of 5 React Native listing apps that help you build a Yelp clone.
#### 1.React Native Store Locator App Template
<figcaption>Store Locator App Template</figcaption>
The Store Locator app template will certainly ease your life and save you a lot of development money and time, since you don't need to hire someone to develop it for you or code it yourself from scratch. It is already developed and well documented. Every class and method is well documented, so even if you are lost, you can read the comments and get back on track to continue customizing the project to your own liking.
Heard about white labeling? This is the way to do it: this template allows you to create 10 apps (or even a hundred) easily, without changing a lot of stuff or moving loads of views around. Simply change the colors, change the content on your server if you moved it there, and you are good to release your new app.
The app is written in a way that is easy to customize; all the colors that are used and all the font declarations are in a separate class, so you don't have to go through all of the classes to find and edit them.
#### **React Native Store Locator App** Features
The following features are fully working, end-to-end:
- User Management with Firebase Auth
- Everything is integrated with Firebase backend: Categories, Filters, Stores, Reviews, Ratings, Saved Stores
- Server Integration with Firebase Firestore
- Advanced Filtering
- Map View
- List View
- Dynamic Filters, managed from Firebase
- Search Functionality
- Interactive Map Location Picker
- Reviews & Rating System
- Favorite Stores
- Facebook Login
- Adding New Stores
- Photo Upload to Firebase Storage
- Camera & Photo Library Integration
- Unlimited Photo Galleries
- Save Login Info
Download this beautiful React Native Store Locator app template to create your own Store Finder app in minutes. By using this fully coded starter kit for React Native, you save weeks of design and development and can focus on other important tasks such as customer retention and marketing.
[Get React Native Store Locator App Template](https://www.instamobile.io/app-templates/react-native-store-locator-app-template/)
#### 2.Listar — mobile React Native directory listing app template
<figcaption>Listar — directory listing app</figcaption>
Listar is a mobile React Native app template for the classified directory listing industry. The design follows the latest UI/UX design standards and supports multiple devices such as iPhone X, tablet devices, Android smartphones, etc. Besides 20+ screens, this React Native app template includes 20+ reusable components.
#### **Why you should buy Listar — React Native directory listing app template**?
- Easy to customize the application
- Save more 200 hours of development
- Hight performance and stable product
- Suitable for any Directory Listing applications
- iOS and Android cross-platform applications
The design is quite simple, and these components were made with pure React Native without using extra libraries, so you can easily customize them for your business.
[Get Listar — mobile React Native directory listing app](https://codecanyon.net/item/listar-react-native-directory-listing-app-template/24829545)
#### 3.DWT Listing — Directory & Listing React Native App
<figcaption>DWT Listing — Directory & Listing React Native App</figcaption>
Dwt Listing is a React Native app that gives you complete freedom to create any type of directory or listing website. No Paid Plugins Needed, Everything Inside.
#### What did you get?
- App plugin for REST API.
- Dwt Listing React-native code for building your own apps.
#### Required Assets for Android
1. **For Debug APK**
WordPress and FTP or Cpanel Login details
App name & background splash image 1340 h x 750 w
Logo for Splash 160×40
App icon 48×48 , 72×72, 96×96, 144×144, 192×192
App icon circle 48×48 , 72×72, 96×96, 144×144, 192×192
Google Developer Account Credentials
Facebook Developer Account Credentials
2. **For PlayStore**
App title (Maximum 50 Characters)
Short Description (Maximum 80Characters)
Full Description (Maximum 4000 Characters)
App Banner icon 512w x 512h
Background image 1024w x 500h
App images maximum 6
#### Required Assets for IOS
1. App Icon 1024\*1024
2. App Logo
3. App Name
4. Facebook & Google Developer Accounts
5. Firebase Credentials
6. App Store
7. WP Login Details
8. App Description for App Store
**NOTE:** Setting up the application is not included in the product price. You can set it up yourself or hire our technical person to do it for you, but that will be a paid task.
[Get DWT Listing -Directory & Listing React Native App](https://codecanyon.net/item/dwt-listing-directory-listing-react-native-app/23428441)
#### 4.ListPro — Listing Directory React Native template
<figcaption>ListPro — Listing Directory React Native template</figcaption>
ListPro is the newborn from ListApp, with exclusive support for the ListingPro template (v2.0+). It helps you to create, manage, organize and monetize any listing business you want quickly, seamlessly and beautifully.
Having your listing business in an app makes a big difference for you and your end-users.
They get smooth and easy navigation on their smartphone, the convenience of finding real-time information like reviews, ratings and offers, and better control over their personal search and booking history.
[Get ListPro — Listing Directory React Native template](https://codecanyon.net/item/listpro-listing-directory-react-native-template/22645013)
#### 5.ListApp — Listing Directory mobile app by React Native (Expo version)
<figcaption>Listing Directory mobile app by React Native</figcaption>
Listing Directory mobile app by React Native (Expo version) — CodeCanyon. It is best for List App, Listable, listing, listing app, listing directory, mobile, mobile animation, mobile menu, mobile template, mobile UX, native app, parallax, react, react native and WordPress API.
Having your listing business in an app makes a big difference for you and your end-users.
Main advantages that ListApp can provide you with:
**Maximize your website potential:**
You don’t need to waste a high amount of money to build new mobile applications besides your existing website to enhance your customers’ booking or ordering experiences when they are on their phones. ListApp takes your website to the next level by releasing your business in Application using contents on your current website.
**Better personalization:**
Each end-user is different from the others. To profit from them, you need to give them the information they are looking for, within their interests, and in real time.
**Able to utilize mobile device features**:
**_Maps:_** ListApp integrates map features into your listing app to allow your end-users to be able to find their nearest interesting locations right from their mobile phones.
**_Offline access:_** Users can save their location wish-lists, images or booking events offline for later use, without the need for an internet connection.
**_Calendar Event:_** Booking a new appointment or making reservations can be done in just a few minutes through the Calendar Event Setting feature of ListApp. After that, your users will be notified through their calendars.
Developing your listing business as an app with ListApp empowers the end-user experience, which will bring profit to your business.
**For you:**
- Higher sale conversion opportunities
- Better order and booking controls
- Beautiful and clean presence for trustable business
- Consumer relationship management
**For your end-users:**
- Smooth and easy navigation on their smartphone.
- Convenience to find and get real-time information like reviews, ratings, offers, etc.
- Better controls over personal search and booking history.
Passion can not wait, start now with ListApp!
[Get ListApp — Listing Directory mobile app by React Native (Expo version)](https://codecanyon.net/item/listapp-listing-directory-mobile-app-by-react-native/21456447)
#### Conclusion
Using, studying, and applying these React Native applications and strategies is one of the best ways to become a React pro. Digging into completed code, examining UI and UX choices, and using existing building blocks provides insight that’s difficult to come across when starting from scratch.
Pairing something you want to build with a good template and resource material is a recipe for React application success.
* * * | kris |
271,068 | Question about Node.js and scraping websites | Hello everyone, I am trying to learn Node.js through Udemy courses and wanted to attempt a project.... | 0 | 2020-02-29T08:13:01 | https://dev.to/hardus1/question-about-node-js-and-scraping-websites-6ig | javascript | Hello everyone,
I am trying to learn Node.js through Udemy courses and wanted to attempt a project.
My end goal would be to scrape historical sports scores from 2 different websites, store it into a database, and be able to view the data all in one app. Each website will require their own login.
Can someone outline or guide me on how you would approach this?
If the website does not have an API for me to work with, can I still scrape the website?
Are there specific libraries you would recommend I look into?
| hardus1 |
271,169 | WordPress Monthly Dev Digest #015 | A curated list of WordPress development related news articles, tutorials and other resources written or updated in February 2020. | 0 | 2020-02-29T19:39:17 | https://since1979.dev/wordpress-monthly-dev-digest-015/ | wordpress, webdev, php, tutorial | ---
title: WordPress Monthly Dev Digest #015
published: true
description: A curated list of WordPress development related news articles, tutorials and other resources written or updated in February 2020.
canonical_url: https://since1979.dev/wordpress-monthly-dev-digest-015/
cover_image: https://since1979.dev/wp-content/uploads/2020/02/wordpress-monthly-dev-digest-015.jpg
tags: wordpress, webdev, php, tutorial
---
[Originally posted on my website on February 29th 2020](https://since1979.dev/wordpress-monthly-dev-digest-015/)
WordPress February in short.
----------------------------
With WordPress 5.4 around the corner (planned for [release on March 31st](https://make.wordpress.org/core/5-4/)), there have been a lot of Dev Notes published on the [Make Core Blog](https://make.wordpress.org/core/) for us developers to read. I have listed them below for you to cherry-pick.
- [Formal deprecation of some unused Customizer classes in WordPress 5.4](https://make.wordpress.org/core/2020/02/12/formal-deprecation-of-some-unused-customizer-classes-in-wordpress-5-4/).
- [Changes related to Calendar Widget markup in WordPress 5.4](https://make.wordpress.org/core/2020/02/12/changes-related-to-calendar-widget-markup-in-wordpress-5-4/).
- [An updated Button component in WordPress 5.4.](https://make.wordpress.org/core/2020/02/12/changes-related-to-calendar-widget-markup-in-wordpress-5-4/)
- [WordPress 5.4 introduces apply_shortcodes() as an alias for do_shortcode()](https://make.wordpress.org/core/2020/02/13/wordpress-5-4-introduces-apply-shortcodes-as-an-alias-for-do-shortcode/).
- [Enhancements to favicon handling in WordPress 5.4](https://make.wordpress.org/core/2020/02/19/enhancements-to-favicon-handling-in-wordpress-5-4/).
- [Block editor keyboard shortcuts in WordPress 5.4](https://make.wordpress.org/core/2020/02/19/block-editor-keyboard-shortcuts-in-wordpress-5-4/).
- [New hooks let you add custom fields to menu items](https://make.wordpress.org/core/2020/02/25/wordpress-5-4-introduces-new-hooks-to-add-custom-fields-to-menu-items/).
- [New Blocks in WordPress 5.4](https://make.wordpress.org/core/2020/02/27/new-or-updated-blocks-in-wordpress-5-4/).
- [Introduce Block variations API](https://make.wordpress.org/core/2020/02/27/introduce-block-variations-api/).
- [Block collections](https://make.wordpress.org/core/2020/02/27/block-collections/).
- [New @wordpress/create-block package for Block scaffolding](https://make.wordpress.org/core/2020/02/27/block-collections/).
- [REST API Changes in 5.4](https://make.wordpress.org/core/2020/02/29/rest-api-changes-in-5-4/).
- And [Miscellaneous developer focused changes in WordPress 5.4.](https://make.wordpress.org/core/2020/02/26/miscellaneous-developer-focused-changes-in-wordpress-5-4/)
Besides all these Dev Notes, there were announcements that both the [Navigation block](https://make.wordpress.org/core/2020/02/07/navigation-block-exclusion-from-wp-5-4/) and [Lazy Loading](https://make.wordpress.org/core/2020/02/25/lazy-loading-update/) will not be included in the 5.4 release. If you want, you can still test out the [Lazy-Loading feature plugin](https://github.com/WordPress/wp-lazy-loading) and the [WordPress 5.4 Beta 3 release](https://wordpress.org/news/2020/02/wordpress-5-4-beta-3/).
### WordPress:
- Video: [3 methods for Ajax in WordPress. Which method is the best?](https://www.youtube.com/watch?v=OwBBxwmG49w) by [Alex Young](https://twitter.com/_WPCasts_tv_).
- [Enterprise headless WordPress](https://dev.to/ssinno28/enterprise-headless-wordpress-2kg3) by [Sammi Sinno](https://dev.to/ssinno28).
- [Seo for headless WordPress themes](https://dev.to/frontity/seo-for-headless-wordpress-themes-1567) by [Reyes Martínez](https://twitter.com/r_martinezduque).
- [5 Ways to create a WordPress plugin settings page](https://deliciousbrains.com/create-wordpress-plugin-settings-page/) by [Iain Poulson](https://twitter.com/polevaultweb).
- [Bootstrapping WordPress projects with Composer and WP-CLI](https://www.designbombs.com/bootstrapping-wordpress-projects-with-composer-and-wp-cli/) by Leo.
- [Close unclosed HTML tags in PHP/WordPress](https://btb.works/close-unclosed-html-tags-in-php/) by [Kuba Mikita](https://twitter.com/kubamikita).
- [Working on Trac Tickets Using GitHub Pull Requests](https://make.wordpress.org/core/2020/02/21/working-on-trac-tickets-using-github-pull-requests/)
- [WordPress Hook Autocompletion for VS Code](https://github.com/johnbillion/vscode-wordpress-hooks) by [John Blackbourn](https://twitter.com/johnbillion).
- WordPress snippet [#004](https://since1979.dev/snippet-002-adding-option-pages-with-acf/), [#005](https://since1979.dev/snippet-005-simple-custom-rest-api-route/), [#006](https://since1979.dev/snippet-006-conditionally-loading-a-custom-template/) and [#007](https://since1979.dev/snippet-007-get-and-post-to-remote-api-with-php/) by [Stephan Nijman](https://twitter.com/Vanaf1979).
### Php:
- [Switching between multiple Php versions on macOS](https://dev.to/markhesketh/switching-between-multiple-php-versions-on-macos-566g) by [Mark Hesketh](https://twitter.com/markahesketh).
- [PHP 7 can do that?](https://dev.to/jmau111/php-7-can-do-that-57ol) by [Julien Maury](https://dev.to/jmau111).
- Video: [Tips for cleaner code: Cleaning up IF statements](https://www.youtube.com/watch?v=ldqDpmMkXgw&feature=emb_logo) by [Codecourse](https://www.youtube.com/channel/UCpOIUW62tnJTtpWFABxWZ8g).
- [The snobby demonization of Php](https://dev.to/bytebodger/the-snobby-demonization-of-php-4fae) by [Adam Nathaniel Davis](https://dev.to/bytebodger).
- Video: [How to refactor complex if statements](https://freek.dev/1578-how-to-refactor-complex-if-statements) by [Freek Van der Herten](https://twitter.com/freekmurze).
### Javascript:
- [ResizeObserver: A new powerful tool for the responsive web](https://medium.com/@barvysta/resizeobserver-a-new-powerful-tool-for-responsive-web-f9a53ed71952) by [Khrystyna Skvarok](https://twitter.com/Barvysta).
- [Create a reading scroll progress bar for your blog in JavaScript and Css](https://dev.to/xtrp/create-a-reading-scroll-progress-bar-for-your-blog-in-javascript-and-css-1jmc) by [Fred Adams](https://dev.to/xtrp).
- [How JavaScript Implements Object Oriented Programming](https://estevanmaito.github.io/sharect/) by [Dillion Megida](https://twitter.com/iamdillion).
- [Sharect](https://estevanmaito.github.io/sharect/): Let users share their text selections to social networks by [Estevan Maito](https://github.com/estevanmaito).
- [Popper](https://popper.js.org/): Tooltip & popover positioning engine.
### Css:
- [How to create a more readable Css](https://medium.com/@elad/how-to-create-a-more-readable-css-3e67ea4812ee) by [Elad Shechter](https://twitter.com/eladsc).
- [Animating Css width and height without the squish effect](https://pqina.nl/blog/animating-width-and-height-without-the-squish-effect/?ref=heydesigner) by [Rik Schennink](https://twitter.com/rikschennink).
- [Flexing your Html footer to the screen bottom](https://dev.to/kriswep/flexing-your-html-footer-the-the-screen-bottom-3gp5) by [C.B.W](https://twitter.com/kriswep).
- [Fixed headers and jump links? The solution is scroll-margin-top](https://css-tricks.com/fixed-headers-and-jump-links-the-solution-is-scroll-margin-top/) by [Chris Coyier](https://twitter.com/chriscoyier).
- Video: [Full tutorial on how to use Sass to improve your Css](https://www.youtube.com/watch?v=itEFprr8soo) by [Adrian Twarog](https://twitter.com/twarogadrian).
- [Sass for Css: Advance your frontend skills with Css preprocessor](https://dev.to/educative/sass-for-css-advance-your-frontend-skills-with-css-preprocessor-5e3) by [Amanda Fawcett](https://dev.to/amandaeducative).
- [Applying CSS :focus-within](https://dev.to/lauragift21/applying-css-focus-within-35ae) by [Gift Egwuenu](https://twitter.com/lauragift_).
- [Firefox DevTools will show you which element is the one with scrollable overflow](https://twitter.com/sulco/status/1229393229398888448) by [Tomek Sułkowski](https://twitter.com/sulco).
- [CSS object-fit and object-position properties: Crop images embedded in HTML](https://medium.com/css-mine/css-object-fit-and-object-position-properties-crop-images-embedded-in-html-a52aae7bf73a) by [Martin Michálek](https://twitter.com/machal).
- [A complete guide to data attributes](https://css-tricks.com/a-complete-guide-to-data-attributes/) by [Chris Coyier](https://twitter.com/chriscoyier).
- [When Css blocks](https://timkadlec.com/remembers/2020-02-13-when-css-blocks/?ref=dailydevlinks.com) by [Tim Kadlec](https://twitter.com/tkadlec).
### Misc:
- GitHub needs your feedback on [their new (beta) Cli](https://github.com/cli/cli).
- [MassCode](https://masscode.io/?ref=dailydevlinks.com): A free and open source code snippets manager.
- [Ionicons](https://ionicons.com/): Beautifully crafted open source icons.
- [Firefox add-on for Google Lighthouse](https://addons.mozilla.org/en-US/firefox/addon/google-lighthouse/)!
- [How to make your first pull request on GitHub](https://www.freecodecamp.org/news/how-to-make-your-first-pull-request-on-github-3/) by [MV Thanoshan](https://twitter.com/ThanoshanMV).
- [How to keep yourself motivated as a software developer](https://www.freecodecamp.org/news/how-to-keep-yourself-motivated-as-a-software-developer/) by [Carol Pelu](https://twitter.com/pelu_carol).
- [Accessibility dont's, Learn from basic mistakes in web design](https://pineco.de/accessibility-donts-learn-from-basic-mistakes-in-web-design/) by [Adam Laki](https://twitter.com/_iamadam).
### Suggestions?
If you have any suggestions for good articles, please do let me know! Follow me on twitter [@Vanaf1979](https://twitter.com/Vanaf1979) or on Dev.to [@Vanaf1979](https://dev.to/vanaf1979) to be notified about the next article in this series, and other WordPress development related stuff.
Have a great March. | vanaf1979 |
271,243 | Inside Node.Js I/O | Node.Js introduces iteslf as a asynchronous event driven javascript runtime. To acheive this asynchro... | 5,190 | 2020-02-29T16:39:07 | https://dev.to/rwik/inside-node-js-i-o-13jc | node, javascript, asynchronous | Node.Js introduces iteslf as a asynchronous event driven javascript runtime. To acheive this asynchronous nature , node uses a open source library called [libuv](libuv.org).
libuv even though built for Node , it is not exclusive to Node.js. There are plenty of other [projects](https://github.com/libuv/libuv/wiki/Projects-that-use-libuv) which uses libuv.
While network i/o is asynchronous , same can't be said for disk i/o .
libuv uses single threaded operation to use event loop and produce asynchronus network i/o . Mostly lib pool apis are not thread safe as it is designed to work in single threaded mode only.
Disk I/O has some platform-specific differences. Every major API for disk I/O has its own limitations, which also increases complexity. So libuv performs blocking disk operations on a thread pool. | rwik |
271,250 | Trivia#5: The first word sent over the internet | Welcome to my little CS trivia series. Every week I post a new trivia piece. Today's question is...... | 4,665 | 2020-03-02T14:49:31 | https://dev.to/sylwiavargas/cs-trivia-5-the-first-word-sent-over-the-internet-43n4 | watercooler, todayilearned, codenewbie, computerscience | Welcome to my little CS trivia series. Every week I post a new trivia piece.
Today's question is...
---
## What was the first word sent over the internet?
The historic exchange took place on October 29, 1969, on the ARPANET, the first version of the Internet. The programmers attempted to type in and transmit the word "login" from UCLA to the Stanford Research Institute.

However, the system crashed right after they typed in the "o." Therefore, the first word sent over the internet was "lo". The full word was transmitted an hour later.

While *lo* does not mean anything in English (except maybe in the phrase *lo and behold*), it does mean *it* in Spanish and *no* in Hebrew. This, of course, brings to mind the show "Little Britain" and their famous "computer says no".
| sylwiavargas |
271,251 | [Git Tutorial] git commit: Sending files to the Git repository | Let's suppose that we have the following situation in our repository: We have two new files and we are going to ma... | 5,484 | 2020-02-29T16:18:38 | https://dev.to/womakerscode/tutorial-git-enviando-arquivos-para-o-repositorio-git-1k91 | git, commit, github, braziliandevs | Let's suppose that we have the following situation in our repository:
We have two new files and we are going to make our first **commit**.

Meanwhile, in the git **flow**,

When placing the files into the repository, we need to tell **Git** the reason for these files, with the command:
```
$ git commit -m 'write your message here'
```
- **$** indicates that you should use a **regular user** to perform this operation.
- you can use either single or double quotes to write the message.
In our example, we have:
```
$ git commit -m 'commit inicial'
```

Checking the **state** of **Git**, we have:

And in the flow,

This set of characters and numbers that appears next to the word **commit** is the **key (or hash)** that identifies the commit itself.
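The whole flow can be reproduced in a throwaway repository (the repository name, file and user details below are placeholders):

```shell
# Create a test repository and make the first commit
git init demo
cd demo
git config user.email "you@example.com"   # an identity is required before committing
git config user.name "Your Name"
echo "<h1>hello</h1>" > index.html
git add .
git commit -m 'commit inicial'
# The short hash shown next to the message is the key that identifies the commit
git log --oneline
```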
| danielle8farias |
271,312 | Best 4 React native Fitness Template 2020 | React Native is one of the most popular frameworks for mobile apps, and not without reason. Ease of u... | 0 | 2020-03-01T04:44:02 | http://kriss.io/best-4-react-native-fitness-template-2020/ | bestlist | ---
title: Best 4 React native Fitness Template 2020
published: true
date: 2020-02-29 11:01:57 UTC
tags: best-list
canonical_url: http://kriss.io/best-4-react-native-fitness-template-2020/
---
React Native is one of the most popular frameworks for mobile apps, and not without reason. Ease of use, modularity and a single code base are a blessing for mobile app developers. With React Native, your team no longer has to create a separate app for iOS and Android. [React Native](https://kriss.io) has simplified the process, but that doesn’t mean it’s easy. This is where the React Native templates come in.
For the convenience of my readers, I have compiled a list of the 4 best templates for a **React Native fitness app**.
#### 1.React native Fitness Template
<figcaption>fitness app template</figcaption>
You no longer need a gym near you or expensive personal trainers to lose weight. Just find the right app that fits your needs, install it on your iOS or Android smartphone and start pumping. Distance and money can no longer prevent people from doing sports. This template covers everything you need for a fitness app. If you are concerned about your diet, there is no shortage of apps that will give you tips on healthy eating options, and this package includes a special diet section as well.
Whether you are looking to build your own fitness mobile app for workouts, diets, or health tracking, or you're simply looking to learn React Native, this starter kit is the best way to get your hands dirty quickly. Avoid reinventing the wheel, and take advantage of what we've already pre-built for you. This application helps those who want to improve their fitness.
All the exercises give you step-by-step instructions on how to do all types of fitness exercises and build steel bodies. Simply choose your favorite sport, type/length of training and playlist.
Runs on any device (iOS or Android).
[Get React native Fitness Template](https://www.instamobile.io/app-templates/react-native-fitness-app/)
#### 2.SixPack — Complete React Native Fitness App + Backend

SixPack — Complete React Native Fitness App + Backend. This item was published on codecanyon.net and sold by the author Wicombit. This is a complete, beautiful fitness app built with the idea of giving developers an easy and practical way to make their apps work with a PHP backend. With our SixPack app, you don't need to spend so much time and money on your fitness application; it is easy to manage and configure for both iOS and Android. We have developed this application using React Native (Expo.io).
**Note** : You need basic knowledge of HTML, CSS, and AngularJS. You also need a basic understanding of the Ionic Framework.
[Get SixPack — Complete React Native Fitness App + Backend](https://codecanyon.net/item/sixpack-complete-react-native-fitness-app-backend/25324139)
#### 3.GoFit — Complete React Native Fitness App + Backend

The developers made this complete, beautiful GoFit application with the idea of giving developers an easy and practical way to make their apps work with a PHP backend. With our GoFit application, you don't need to spend so much time and money on your fitness application; it is easy to manage and configure for both iOS and Android. We have developed this application using the most popular framework, React Native.
GoFit — Complete React Native Fitness App + Backend — CodeCanyon. It is best for bodybuilding, Crossfit, diets, exercises, fitness, gym, native, React Native and workout apps.
[Get-GoFit — Complete React Native Fitness App + Backend](https://codecanyon.net/item/gofit-complete-react-native-fitness-app-admin-panel/22286497)
#### 4.Athlete — Fitness & Workout Mobile App for iOS and Android

Athlete — Fitness & Workout Mobile App is a modern fitness and workout mobile application available for both Android and iOS. It is the perfect solution for anyone who wants to publish their own mobile app and create their own brand.
Athlete — Fitness & Workout Mobile App has the following features:
**Cross-Platform:**
The app works well with both iOS and Android.
**Multi-Language:**
It comes with English, French, Spanish, Italian and Russian. Or you can also add your own language.
**Easy Setup:**
We provide a setup wizard that will automatically configure your app with your own database, your own title language, your own theme, and your own admin accounts.
**Admin Panel:**
You will have your own admin dashboard so you can use it to control your app.
**Multi-Theme:**
Choose a light or a dark theme when you set up the app. (more themes will be provided in future updates)
**Video Tutorials:**
We provide video tutorials to help you set up and run your application.
**Support Forum:**
You will have access to our support forum where you can get help from our team members.
**Extension Packs:**
We offer extension packs that you can buy to modify your application, such as creating a custom theme for you, building and publishing the app for you and more.
[Get-Athlete — Fitness & Workout Mobile App for iOS and Android](https://codecanyon.net/item/athlete-fitness-workout-mobile-app-for-ios-and-android-with-admin-panel-languages-themes/23260301)
#### Conclusions
That's it: a list of the best React Native fitness app templates for 2020. We hope you've found the right template for your needs here, but remember that these are just a few of the terrific app templates available at [CodeCanyon](https://codecanyon.net/search/react%20native?sort=sales&_ga=2.239765507.1401873482.1582879734-1929862458.1582879734). So if none of them are quite what you were looking for, there are plenty of other great options there to choose from.
And if you want to improve your skills building React Native applications, check out the ever-so-useful courses we have on offer.
* * * | kris |
271,346 | Writing an Azure DevOps Pipeline extension | blogged: 2020.02.20 How I ended up having a Saturday night date with the Azure DevOps... | 0 | 2020-02-29T19:56:03 | https://oscarvantol.nl/blogs/azure-piplines | azure, devops, github, marketplace | <sub>blogged: 2020.02.20</sub>
*How I ended up having a Saturday night date with the Azure DevOps marketplace.*
**'At some moment we should really fix this!'** Is this something you and your team sometimes say? We had one of those. We are building really cool software at Virtual Vaults using Dotnet Core, Angular and all the Azure toys. We use Azure DevOps and have a lot of common NuGet packages that we share internally using the Artifacts feed.
Everyone who updates a package will at some point forget to bump the version before pushing, and we will have a discussion about how we should branch and version the packages. But in the end we will complete the task at hand and move on. At least until the next time.
## Enough, let's do this
At one point I was sick of this: "Let's settle this once and for all". After some quick conversations with strongly opinionated team members, we decided on what would be a good strategy. We wanted the automated builds to push out packages without bumping the version by hand. Also, if the CI build was triggered from any branch other than master, the version should be marked as pre-release. This sounded pretty straightforward, but I could not make it work with any of the standard options I had in the pipeline. I found a lot of extensions in the [marketplace](https://marketplace.visualstudio.com) that helped me a bit, but the yaml pipeline file quickly became a large, messy script. Being a developer, I decided to write my own extension for a custom build task.
## Building an extension
Because I had created a simple build task before, I knew it would be pretty straightforward. This was both true and untrue: the [documentation](https://docs.microsoft.com/en-us/azure/devops/extend/get-started/node?view=azure-devops) is really good and I had my basic logic running very quickly using some typescript (which is awesome). But getting the thing properly packaged, including what was needed from the node_modules folder, had me googling for a while. Lots of people writing extensions have their code on GitHub, and this helped me figure out what I was doing wrong.
## Using the extension
The next step was uploading the extension to the marketplace and sharing it privately with my organisation. I was ready to start testing it in the context of a build. I struggled a bit with setting up the permissions correctly, because I needed to use the DevOps api inside the task, but it worked as designed and I had an example pipeline running.
Finally I added the custom task to the pipeline of the package I was actually working on during 'normal hours' and told everyone to have fun with it.
## Done, or maybe not?
Noooooo! Not done. I needed to write some documentation on our wiki, or at least some comments in the yaml file, because otherwise this will definitely get lost at some point. And where should the code for the extension be stored? I thought to myself **I am doing this anyway, let's do it right!** So I pushed the [source code](https://github.com/oscarvantol/azure-pipelines-version-increment) to GitHub and wrote some minimal documentation. Then I wanted to make the [extension](https://marketplace.visualstudio.com/items?itemName=ovantol.version-increment) publicly available on the marketplace. This meant some more reading and adding information and metadata to my profile and the package. Making this public also made me add some extra documentation, a yaml code example showing how to use the extension and, last but not least, an icon.
**What was the benefit of adding these extra steps?**
* I forced myself to write some documentation and clean up.
* This might be of help for someone else writing an extension.
* I might help my future self in need of something similar.
* Someone else might actually use my extension😱

[oscarvantol.nl](https://oscarvantol.nl)
| oscarvantol |
271,425 | Responsive Web Design: Why We Need It and How It’s Implemented | In the early days of working with HTML, most end users accessed the websites through their desktop co... | 0 | 2020-02-29T23:18:57 | https://dev.to/kabir4691/responsive-web-design-why-we-need-it-and-how-it-s-implemented-98j | html, css, design | In the early days of working with HTML, most end users accessed the websites through their desktop computers or laptops only. Web developers used to develop for the large screen real estate and style their websites accordingly. However, in recent times, the number of users accessing the web from their mobile devices has increased exponentially. User adoption of mobile devices has seen massive growth in the last decade, along with the availability of cheap data plans. For example, back in [2012](https://www.gpmd.co.uk/blog/2012-mobile-internet-statistics/), there were more mobile phones than people in the UK.
As a result, web developers need to take into account the smaller screen size of such devices and style their websites so that they are optimized for all screen sizes. For the same purpose, responsive web design had been introduced and is being adopted by most as a defacto standard for designing web pages.
## Responsive Web Design
Responsive web design is simply defined as the practice of building websites for every device and screen size, no matter the size. It involves providing a uniform user experience across multiple platforms. The term was coined and further developed by [Ethan Marcotte](https://alistapart.com/article/responsive-web-design/) in 2010.
While researching about responsive design, it’s common for one to also come across the terms *adaptive* or *mobile*. Although these terms are used interchangeably, there’s a slight difference between them. Responsive design refers to that which reacts quickly and effectively to any environment, while adaptive design involves having a base design that can be easily modified for any special purposes. Mobile design, on the other hand, refers specifically to designing a separate website only for mobile users.
Responsive design is beneficial in that it fulfills the requirements of all three designs and hence is widely preferred. Responsive design consists of mainly three components, namely flexible layouts, flexible media, and media queries. Let us now see in detail how one can go about implementing each of them on their websites.
## Flexible Layouts
The use of flexible layouts involves encapsulating the HTML elements in a flexible container or grid, which resizes dynamically according to the screen width. The dimensions of these layouts are specified in percentage or em units, rather than as static values. This ensures that the layout (and the elements inside it) adjusts itself automatically according to the screen size.
Let us see this with an example:
```css
* {
box-sizing: border-box;
}
.container:before,
.container:after {
content: "";
display: table;
}
.container:after {
clear: both;
}
.container {
*zoom: 1;
}
span {
padding: 20px;
text-align: center;
float: left;
}
.span--1 {
width: 200px;
background: #ff6347;
}
.span--2 {
width: 300px;
background: #0eb36d;
}
```

As you can see above, we have a container that holds two elements of width 200px and 300px respectively. In a desktop browser, the elements appear normally without any hiccups. Let us see how it appears on a mobile device with dimensions of 240x320.

We can see that the second span is running onto the next line. This is because the width of the mobile device was 240px, which is less than the combined width of the two elements, 200+300=500px. Moreover, the second span also seems to be running off-screen, as its width of 300px is greater than the screen width. Now let's try to fix this by making the layout flexible, specifying percentages for the spans instead.
```css
* {
box-sizing: border-box;
}
.container:before,
.container:after {
content: "";
display: table;
}
.container:after {
clear: both;
}
.container {
width: 100%;
*zoom: 1;
}
span {
padding: 20px;
text-align: center;
float: left;
}
.span--1 {
width: 40%;
background: #ff6347;
}
.span--2 {
width: 60%;
background: #0eb36d;
}
```

We can see that the above layout works well for both mobile and desktop screens. Using the same method of applying percentages, we can create a completely dynamic website that adapts itself to all screen sizes. For more control over your layouts, you can also try using the min-width and max-width properties.
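For instance (a sketch with made-up values), a fluid container can be clamped so it never becomes unreadably narrow or wide:

```css
.container {
  width: 90%;        /* fluid: scales with the viewport */
  max-width: 1200px; /* but never wider than this on large screens */
  min-width: 320px;  /* and never narrower than this on small screens */
  margin: 0 auto;    /* keep the container centered */
}
```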
## Flexible Media
Media elements need to be resized according to the screen size and dimensions in order to obtain a smooth and undistorted viewing experience. A quick way to ensure that media elements are scalable is to set the max-width property to 100%. This works for media elements like `img`, `video`, and `canvas`.
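As a minimal sketch, that rule can target all three element types at once (the `height: auto` declaration is an addition here that keeps the aspect ratio intact as the width scales down):

```css
img,
video,
canvas {
  max-width: 100%; /* never wider than the parent container */
  height: auto;    /* scale the height along with the width */
}
```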
However, this method does not work on all forms of media, especially for `iframe` and embedded media. In order to make them scalable too, they must be placed within a parent element and their position set as absolute. The parent element then needs to have a width of 100%, so that it may scale according to the viewport. You also need to set the height of the parent element to 0 in order to enable the `haslayout` mechanism of Internet Explorer.
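A sketch of that wrapper technique might look as follows; the class name is made up, and the `padding-bottom` percentage (not mentioned above) is the usual companion to the zero height, reserving a 16:9 aspect ratio for the embed:

```css
.embed-container {
  position: relative;
  width: 100%;            /* scale with the viewport */
  height: 0;              /* the height comes from the padding below */
  padding-bottom: 56.25%; /* 9 / 16 = 0.5625, i.e. a 16:9 ratio */
}
.embed-container iframe {
  position: absolute;     /* fill the reserved box */
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}
```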
## Media Queries
Media queries are used to specify different CSS properties based on the device properties, such as the viewport width or the screen orientation. There are different ways to embed media queries in CSS, using the `@media` rule inside of an existing style sheet, importing a new style sheet using the `@import` rule, or by linking to a separate style sheet from within the HTML document. However, the preferred method is to use the `@media` rule inside of an existing style sheet to avoid any additional HTTP requests.
Media queries are written as rules with specifications. Each media query begins with the type of media it's targeting, for example `all`, `screen`, `print`, `tv`, `braille`, etc. The rule is then followed by logical operators and certain CSS properties with values. If the value holds true for the current scenario, then the rule is applied. Let's see an example:
```css
/* For mobile devices */
.container {
width: 800px
}
/* For desktop devices */
@media all and (min-width: 1024px) {
.container {
width: 1200px
}
}
```
In the above CSS, the elements with the class `container` would have a width of 800px. However, the media rule we have applied specifies that for all devices with a minimum viewport width of 1024px, the width should be set to 1200px instead. By following this approach, one can style pages for both mobile and desktop devices without having to create separate HTML files for different layouts.
> This post was originally published [here](https://levelup.gitconnected.com/responsive-web-design-why-we-need-it-and-how-to-implement-it-657f716bef3) on Medium. | kabir4691 |
271,457 | Gitlab + Netlify | Part I | How to deploy HTML from Gitlab to Netlify | 0 | 2020-03-01T02:44:47 | https://dev.to/mefhigoseth/gitlab-netlify-parte-1-431n | devops, spanish, gitlab, netlify | ---
title: Gitlab + Netlify | Part I
published: true
description: How to deploy HTML from Gitlab to Netlify
tags: devops, spanish, gitlab, netlify
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/uq2uyjiro35mf8cjorju.jpg
---
Welcome to a new installment of Dev/Sec/Ops!
This time, I show you how to deploy static HTML code hosted on Gitlab to Netlify's smart CDN infrastructure.
Additionally, we take a brief look at the Netlify dashboard, its most important options, and how to customize our subdomain. We also verify the HTTPS encryption and see how the CDN responds depending on the geographic location the requests come from.
Enjoy.
{% youtube pKKAt6V9a4w %} | mefhigoseth |
271,469 | Kotlin and retrofit network calls | Kotlin and retrofit network call tutorial What is retrofit This is a rest client library for java an... | 0 | 2020-03-01T04:06:35 | https://dev.to/paulodhiambo/kotlin-and-retrofit-network-calls-2353 | kotlin, retrofit, gson, glide | **Kotlin and retrofit network call tutorial**<br>
**What is retrofit**
This is a REST client library for Java and Android. This means that we
use Retrofit to make network calls from our applications.<br>
In Android applications, Retrofit makes use of OkHttp, which is a low-level
HTTP client for Android created by the same company that created Retrofit,
[Square](https://squareup.com).<br>
We are also going to use Gson. This is a library used to convert Java objects
into JSON format. But in our case, we will be converting JSON to Java objects.
We will be getting data from the network in JSON format, and through the Gson
converter we will convert it to an object we can use in our application.
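For instance (a standalone sketch of Gson on its own, separate from the app we build below; it assumes the Gson dependency listed above is on the classpath), converting a JSON string into a Kotlin object looks like this:

````kotlin
import com.google.gson.Gson

// A throwaway data class matching the shape of the JSON below
data class Movie(val title: String, val vote_average: Double)

fun main() {
    val json = """{"title": "Inception", "vote_average": 8.3}"""
    // Gson parses the JSON string and fills in a Movie instance
    val movie = Gson().fromJson(json, Movie::class.java)
    println(movie.title) // Inception
}
````

Retrofit's Gson converter does exactly this for us behind the scenes when a response comes back from the network.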
**Dependencies**
````groovy
implementation 'com.squareup.retrofit2:retrofit:2.7.0'
implementation 'com.squareup.retrofit2:converter-gson:2.7.0'
implementation 'com.squareup.okhttp3:okhttp:4.3.1'
implementation 'com.google.code.gson:gson:2.8.5'
implementation 'androidx.cardview:cardview:1.0.0'
implementation 'androidx.recyclerview:recyclerview:1.1.0'
implementation 'com.github.bumptech.glide:glide:4.10.0' //Glide
annotationProcessor 'com.github.bumptech.glide:compiler:4.10.0'
````
**TMDB Api**<br>
We are going to use a service called [TMDB](https://www.themoviedb.org), which
provides details about movies. This will provide the data for our app.<br>
In order to use the service, you need an account there and an api key. This is a
key that allows you to access their service. Go ahead, follow the link
above and follow their instructions on how to get your api key.
**Project Setup**<br>
Go ahead and create a project in Android Studio with an empty activity as the
first activity.<br>
Set the language to Kotlin.<br>
You can give your app any name; in my case I named it Movies.
**activity_main.xml**<br>
In the activity_main.xml, add a title and a RecyclerView to display
our data.
````xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".MainActivity">
<TextView
android:layout_centerHorizontal="true"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Popular Movies"
android:layout_margin="20dp"
android:id="@+id/title"
android:textSize="20sp"/>
<androidx.recyclerview.widget.RecyclerView
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_margin="10dp"
android:layout_below="@+id/title"
android:id="@+id/recyclerView"/>
<ProgressBar
android:id="@+id/progress_bar"
style="@style/Widget.AppCompat.ProgressBar.Horizontal"
android:layout_width="match_parent"
android:layout_height="5dp"
android:scaleY="4"
android:indeterminateTint="@color/colorPrimary"
android:indeterminateBehavior="repeat"
android:indeterminate="true" />
</RelativeLayout>
````
**Data Class**<br>
Now we have to create a data class to define what our data from the network
will look like and to hold it temporarily. Let's create a Kotlin file and name
it TmdbData.
````kotlin
package com.odhiambopaul.movies
data class PopularMovies(
val results: List<Result>
)
data class Result(
val id: Int, val overview: String,
val poster_path: String,
val release_date: String,
val title: String,
val vote_average: Double,
val vote_count: Int
)
````
**The interface**<br>
Now we are going to create an interface to define the various endpoints
of the TMDB api. This interface will contain methods which, when called,
will hit the api using the specified endpoints. Just create an interface and name it
TmdbEndpoints, then add the code as shown below. The @GET annotation tells Retrofit that
this call is of the type GET, which gets data back as a response from the server.
Inside its brackets is the endpoint we will use to make the call.
This is the path that is defined in the TMDB api for getting popular movies.
After that we create a function called *getMovies* that will return a Call
object containing data in the form of a *PopularMovies* object of the PopularMovies
data class we created. Here is where we pass our api_key as a query parameter, using
the @Query annotation. Inside the parentheses, we pass in the name of the query,
and the value is passed in after the query annotation. When we call this method
we will then pass in the parameter at that point.
````kotlin
package com.odhiambopaul.movies
import retrofit2.Call
import retrofit2.http.GET
import retrofit2.http.Query
interface TmdbEndpoints {
@GET("/3/movie/popular")
fun getMovies(@Query("api_key") key: String): Call<PopularMovies>
}
````
**The Service**<br>
Next, we create a Kotlin file named *ServiceBuilder* containing an object
with the same name. This will be used to prepare Retrofit to make the call.
First we make an OkHttp client for Retrofit to use when making a call.
Next we create a retrofit builder object that contains the base url of the api,
the *Gson* converter and the client. Then we create a function called
buildService that will be used to connect the retrofit builder object with our
interface and produce one complete service. The interface is passed in as
a parameter to this method. This method will return a full service object on which
we can call the various methods *that we defined in the interface*.
````kotlin
package com.odhiambopaul.movies
import okhttp3.OkHttpClient
import retrofit2.Retrofit
import retrofit2.converter.gson.GsonConverterFactory
object ServiceBuilder {
private val client = OkHttpClient.Builder().build()
private val retrofit = Retrofit.Builder()
.baseUrl("https://api.themoviedb.org/")
.addConverterFactory(GsonConverterFactory.create())
.client(client)
.build()
fun<T> buildService(service: Class<T>): T{
return retrofit.create(service)
}
}
````
**Making the call**<br>
First we make a variable called request that holds our whole
service object. We get the *ServiceBuilder* object, call the buildService
function and pass in our interface. Then we create a call by using the service
object and calling one of the methods in the interface, which in our case is *getMovies*.
We then pass in the required api_key as we defined it in our interface function.
After that we send the request to the network using the Call.enqueue method of Retrofit.
Inside it we pass a callback that gives us a response in the form of a *PopularMovies* object. Then
we implement the two methods: *onResponse* and *onFailure*.
**onResponse**<br>
In onResponse, the call has returned a response, and there you can get
the data if the request was successful. Then you can add the response data to the
RecyclerView. You only need to pass the list of movies to the RecyclerView adapter
and then populate the data accordingly.
````kotlin
package com.odhiambopaul.movies
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.util.Log
import android.view.View
import android.widget.Toast
import androidx.recyclerview.widget.LinearLayoutManager
import androidx.recyclerview.widget.RecyclerView
import kotlinx.android.synthetic.main.activity_main.*
import retrofit2.Call
import retrofit2.Callback
import retrofit2.Response
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
val request = ServiceBuilder.buildService(TmdbEndpoints::class.java)
val call = request.getMovies(getString(R.string.api_key))
call.enqueue(object : Callback<PopularMovies>{
override fun onResponse(call: Call<PopularMovies>, response: Response<PopularMovies>) {
if (response.isSuccessful){
progress_bar.visibility = View.GONE
recyclerView.apply {
setHasFixedSize(true)
layoutManager = LinearLayoutManager(this@MainActivity)
adapter = MoviesAdapter(response.body()!!.results)
}
}
}
override fun onFailure(call: Call<PopularMovies>, t: Throwable) {
Toast.makeText(this@MainActivity, "${t.message}", Toast.LENGTH_SHORT).show()
}
})
}
}
````
**RecyclerView adapter**<br>
````kotlin
package com.odhiambopaul.movies
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.ImageView
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView
import com.bumptech.glide.Glide
class MoviesAdapter(val movies: List<Result>): RecyclerView.Adapter<MoviesViewHolder>() {
override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): MoviesViewHolder {
val view = LayoutInflater.from(parent.context).inflate(R.layout.movie_item, parent, false)
return MoviesViewHolder(view)
}
override fun getItemCount(): Int {
return movies.size
}
override fun onBindViewHolder(holder: MoviesViewHolder, position: Int) {
return holder.bind(movies[position])
}
}
class MoviesViewHolder(itemView : View): RecyclerView.ViewHolder(itemView){
private val photo:ImageView = itemView.findViewById(R.id.movie_photo)
private val title:TextView = itemView.findViewById(R.id.movie_title)
private val overview:TextView = itemView.findViewById(R.id.movie_overview)
private val rating:TextView = itemView.findViewById(R.id.movie_rating)
fun bind(movie: Result) {
Glide.with(itemView.context).load("http://image.tmdb.org/t/p/w500${movie.poster_path}").into(photo)
title.text = "Title: "+movie.title
overview.text = movie.overview
rating.text = "Rating : "+movie.vote_average.toString()
}
}
````
**Recyclerview item layout.xml**
````xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="match_parent"
android:layout_height="wrap_content">
<androidx.cardview.widget.CardView
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_margin="10dp">
<RelativeLayout
android:layout_width="match_parent"
android:layout_height="wrap_content">
<ImageView
android:layout_width="match_parent"
android:layout_height="200dp"
android:layout_centerHorizontal="true"
android:id="@+id/movie_photo"
android:contentDescription="Movie Image" />
<TextView
android:layout_below="@+id/movie_photo"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/movie_title"
android:layout_margin="5dp"
android:textSize="15sp"
android:text="Title : "/>
<TextView
android:layout_below="@+id/movie_title"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/movie_overview"
android:layout_margin="5dp"
android:text="OverView : "/>
<TextView
android:layout_below="@+id/movie_overview"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:id="@+id/movie_rating"
android:textSize="15sp"
android:layout_margin="5dp"
android:text="Rating : "/>
</RelativeLayout>
</androidx.cardview.widget.CardView>
</RelativeLayout>
````
**String resources**<br>
````xml
<resources>
<string name="app_name">Movies</string>
<string name="api_key">YOUR API KEY HERE</string>
</resources>
```` | paulodhiambo |
271,554 | Get items based on publish date with Kentico Kontent & GatsbyJS / GraphQL | When developing a blog- or newslike website/component with Kentico Kontent, you'll probably run into... | 0 | 2020-03-01T11:37:12 | https://dev.to/mvhoute/get-items-based-on-publish-date-with-kentico-kontent-gatsbyjs-graphql-7aj | gatsby, graphql, tutorial | When developing a blog- or news-like website/component with Kentico Kontent, you'll probably run into the problem that you can't sort your posts by publish date, since this data is not available in the API.
Luckily, there is a date & time element available in Kontent's model builder, and we have GraphQL built into GatsbyJS. So we can use this to sort the data ourselves!
### Step 1: Add the date & time element to your content model

### Step 2: Add content items
When there is no data available in the API, there's no way you can retrieve it. So it's wise to add a couple of content items where the Publish date is populated.
If you're planning to publish once every couple of days, adding the date will suffice. If the plan is to publish multiple articles per day, it's wise to also fill out the time.

### Step 3: Add sorting to your GraphQL query
Now that we have the publish date available in the API, we can sort the items with GraphQL's sorting mechanism.
```graphql
query ArticleListing {
  allKontentItemArticle(sort: { order: DESC, fields: elements___publish_date___value }) {
    edges {
      node {
        id
        elements {
          title {
            value
          }
        }
      }
    }
  }
}
```
That's it! When you display the items you'll see that they are sorted on publish date.
For reference, here's how I'm using it on one of my pages in my project:
```javascript
import React from 'react'
import { graphql } from 'gatsby'

import Layout from '../components/Layout/layout'
import ArticleCard from '../components/Card/article-card'

const articlePage = ({ data }) => {
  const articles = data.allKontentItemArticle.edges

  return (
    <Layout>
      {articles.map((article, i) => {
        const articleData = article.node.elements
        const title = articleData.title.value
        const url = articleData.slug.value
        const image = articleData.header_image.value[0].url
        const alt = articleData.header_image.value[0].description

        return <ArticleCard title={title} url={url} image={image} alt={alt} key={i} />
      })}
    </Layout>
  )
}

export const query = graphql`
  query ArticleListing {
    allKontentItemArticle(
      sort: { order: DESC, fields: elements___publish_date___value }
    ) {
      edges {
        node {
          id
          elements {
            title {
              value
            }
            header_image {
              value {
                url
                description
              }
            }
            slug {
              value
            }
          }
        }
      }
    }
  }
`

export default articlePage
```
_Photo by Icons8 Team on Unsplash_

— mvhoute
---
title: Sunday Summary #2
published: true
date: 2020-03-01 12:00:00 UTC
tags: sundaysummary
canonical_url: https://www.mskog.com/posts/sunday-summary-2
---
### [Motion](https://www.inmotion.app/)

This is a free Chrome app for monitoring and fixing distractions while working. There are many apps which do this like [Blocksite](https://chrome.google.com/webstore/detail/block-site-website-blocke/eiimnmioipafcokbfikbljfdeojpcgbh?hl=en) for example, but Motion does it a bit differently. Whenever you visit a site that is marked as distracting you get a popup that tells you this. You then have the option to leave the site or tell Motion that you need a minute or more. This is then displayed in a countdown timer and when the time is up you get the popup again.
This may seem like a trivial difference but I've been using it all week and it made a huge difference for me. The constant nagging and having to actually click a button to get another minute on Twitter is a great thing.
## Links
[Pull Panda](https://pullpanda.com/)
Recently acquired by [Github](https://github.com/). This is a great tool for teams that do pull requests on Github. You can for example get pull request reminders on Slack and there are graphs for pull request velocity and turnaround time.
_(image) My team's current turnaround time goals_
[How I learned French in 12 months](http://www.runwes.com/2020/02/11/howilearnedfrench.html)
Makes me want to learn a new language, but I don't have 3 hours a day to do it unfortunately.
[Best practices for designing apps people actually use](https://thoughtbot.com/blog/best-practices-for-designing-apps-people-actually-use)
[Github discussions beta](https://github.com/zeit/next.js/discussions)
You will be able to discuss projects without cluttering up the issues page. Seems like a great thing to me.
[Over 150.000 free wildlife illustrations](https://www.smithsonianmag.com/smart-news/over-150000-illustrations-wildlife-are-available-online-free-180974167/)
[100 little ideas](https://www.collaborativefund.com/blog/100-little-ideas/)
100 concepts, ideas and principles that explain the world. Very interesting stuff.
[Free AWS tier for $900 a month](https://blog.andrewray.me/my-first-aws-free-tier-hosting-bill-was-900/)
Be careful!
[Clouding.io](https://clouding.io/en/)
Reasonably priced VPS provider. The unique thing is that you can customise your server to your liking without paying through the nose.
— mskog
---

## Jobs In 2020 In India – Employment Overview

*271,568 · published 2020-03-01 · https://dev.to/developers4u/jobs-in-2020-in-india-employment-overview-2i4j · tags: jobsinindia*

Jobs In 2020 In India: It is a decent time to be in the innovation space going into 2020, with a large number of the quickest developing professions expected to rise up out of the field. Old occupations clear a street to new future employments. The ascent of machine and computerization has assumed control over the world. Having said that, the acting world isn't getting dazzled with the machines as it were. Machines still not imagine, connect, and comprehend human feelings which are the premise of numerous occupations that are rising and will be vocations that will consistently be sought after. Let us take a gander at the 20 top future occupations in India that will outrank different employments. Keep up to date <a href="https://www.youreducationportal.com/category/general-knowledge/"> General Knowledge </a> Skills to easily achieve your dream in the current generation.
### Jobs In 2020 In India - Employment Overview

Check the list from this section:
### Back End Developer
Behind each incredible site and versatile application is a fruitful "back-end engineer" – LinkedIn. A back-end designer is the center computational rationale behind the product or the site. Back-end engineers significantly work in C++, C#, or Java. Each IT organization needs Back-end engineers and can be considered as an evergreen activity.
### Digital Marketing Specialist
Nowadays, marketing is the foundation of any effective business. With the outlook shifting from offline businesses to online ones, the requirement for computerized advertisers has been rising. Moreover, each little and fair-size business is currently wandering into the online space, and this is going to expand step by step, considering that only under 40% of the Indian populace has web access today. Consequently, the need for digital marketing will keep on rising, making it a future occupation in India. Computerized advertising helps private companies make development choices, and unlike offline marketing techniques, advanced showcasing endeavors are quantifiable.
### Specialized Sales Professionals
With disturbances brought about by innovation, there will be a developing requirement for specific salesmen who can disclose the organization's contributions to a wide scope of customers, including organizations, governments, purchasers, just as to new customers that organizations have not worked with previously.
### Event Manager
A few reasons like above and the prime being absent of time to oversee things all alone, individuals turn towards occasion supervisors to occasions as basic as a birthday. This profession is one of the snappiest developing vocations of ongoing occasions. Regardless of whether it's a birthday, marriage, commemoration, or a corporate occasion, everybody needs to make it a fantastic and essential occasion.
### Interior Designer
Alright, this is by all accounts a call of a rich, be that as it may, on the off chance that you look carefully this is presently a need of everybody. Inside fashioners comprehend the individual needs of the clients and specialty a practical arrangement which is time and cost-successfully in light of toughness and solace. With the ascent in the land business, the requirement for inside creators has developed enormously.
### Recruiter
Enrollment will be one of the top professions all things considered. A profession in selecting is remunerating just as fulfilling. The government endeavors to make employments, however, someone needs to work in finding the correct ability to fill those occupations. What's more, that is the place spotter assumes an essential job in the accomplishment of an association. Finding the correct ability, keeping the potential up-and-comers intrigued by your association and arranging the pay rates with the potential contracts are not many of the significant errands a spotter does all the time.
Hence, the above are the top <a href="https://www.youreducationportal.com/jobs-in-india/" alt="job in india 2020">Jobs In India</a> for 2020.
— developers4u
---
title: What is your secret weakness as a developer?
published: true
description: What is your secret weakness as a developer?
tags: discuss, dev
---
{% twitter 1232412585397571584 %}
Everyone has an untold weakness, what's yours?

— cyril
---
title: Snake Eyes: Extension Methods
published: true
description: Implementing Extension-Methods in Python
tags: python, kotlin, misguided
series: Snake Eyes
canonical_url: https://tamir.dev/posts/snake-eyes-extension-methods/
---
Today we set out to implement a feature I saw and liked in Kotlin - [Extension Methods].
You can follow along with working code samples [here][repl], or get the code [here][github]
Extension methods are a nice piece of syntactic-sugar that allow you to define free-functions and call them like instance methods. In Kotlin, it looks something like this:
```kotlin
fun Square.draw() {
drawSquare(this)
}
// ...
val square = getSquare()
square.draw()
```
Now, since they are free, static functions, they follow the same rules: they are not part of the class, nor do they have access to private members. And they can only be called in a scope where they are visible. Adding them in your code does not affect other code. Additionally, true member functions, if they exist, take precedence over extension methods (this is especially important with generic extension methods).
In our code today, we'll try to mimic the features of extension methods as closely as possible. We'll use the following syntax:
```python
@extend(Square)
def draw(square):
    draw_square(square)
```
For extension methods, and the following implementation of `Square` in our code throughout:
```python
from dataclasses import dataclass


@dataclass
class Square:
    length: int
```
## Monkey Patching 🙈
Python is a very dynamic language. Among other things, it allows us to change the attributes of (non-builtin) types at run-time. This means that we can extend our `Square` class by adding a `draw` method to it at run-time.
```python
Square.draw = draw_square
```
We're now free to call `square.draw()`. Before we discuss the drawbacks, let's implement it with the syntax we defined:
```python
def monkey_extend(cls):
    def _decorator(f):
        setattr(cls, f.__name__, f)
        return f
    return _decorator


@monkey_extend(Square)
def draw(square):
    draw_square(square)
```
Let's go over this. `monkey_extend` is a decorator with arguments. This is a common pattern where we use a decorator factory (`monkey_extend`) to create a new decorator (`_decorator`) as a closure, giving it access to the parameters passed to the factory (`cls`). Then, in the core of the decorator, we use `setattr` to do our monkey-patching.
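To see the patching in action, here's a self-contained sketch of the same decorator (the `area` method is just an illustrative example of mine, not part of the article's `draw` API):

```python
from dataclasses import dataclass


def monkey_extend(cls):
    def _decorator(f):
        # Attach the free function to the class under its own name.
        setattr(cls, f.__name__, f)
        return f
    return _decorator


@dataclass
class Square:
    length: int


@monkey_extend(Square)
def area(square):
    return square.length ** 2


# The patched method is now visible on every Square, in every scope:
print(Square(3).area())  # 9
```

Note that after patching, `area` behaves like an ordinary method: accessing it on an instance binds the instance as the first argument.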
While this works, it has several issues:
1. Scope - once set, it can be used with any `Square` in any scope
2. Precedence - it will override any existing `Square.draw`
Dealing with precedence is easy (using `hasattr` to check for existing `.draw`) so we'll focus on the scoping first.
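For completeness, a hedged sketch of what that `hasattr` guard could look like (the `careful_extend` name is mine, not part of the article's API):

```python
def careful_extend(cls):
    def _decorator(f):
        # Refuse to shadow a real member: true methods take precedence.
        if hasattr(cls, f.__name__):
            raise AttributeError(
                f"{cls.__name__} already defines {f.__name__!r}"
            )
        setattr(cls, f.__name__, f)
        return f
    return _decorator
```

Decorating a function whose name collides with an existing attribute now raises immediately, instead of silently replacing the real method.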
## Dynamic Attribute Lookup ✨
The first thing we know is that we need our new attribute to be there in some cases, and be gone in others - we need dynamic resolution. To do that, we'll use [`__getattr__`]. In Python classes, `__getattr__` is used in attribute lookup as a last resort, called when the other ways of looking up attributes came up empty. We'll write our `__getattr__` along the following lines:
```python
def my_getattr(obj, name):
    if not has_extension(obj, name):
        raise AttributeError()
    if not is_in_scope(name):
        raise AttributeError()
    return our_extension
```
The first check, `has_extension`, is basically checking whether the name we got matches the name of our extension method. Nothing to elaborate yet. Scoping, once again, remains the trickier part.
```python
import functools
import inspect
from collections import ChainMap


def scoped_extend(cls):
    def _decorator(f):
        def _getattr(obj, name):
            # (2)
            if name != f.__name__:
                raise AttributeError()

            # (3)
            frame = inspect.stack()[1].frame
            scope = ChainMap(frame.f_locals, frame.f_globals)
            if scope.get(f.__name__) != f:
                raise AttributeError()

            # (4)
            return functools.partial(f, obj)

        # (1)
        cls.__getattr__ = _getattr
        return f

    return _decorator
```
This is a bit much, so we'll go over it in detail.
As a basis, we used the same decorator-with-parameters pattern here. We have `scoped_extend` take the class we want to extend, then return `_decorator` to get the job done. But instead of setting the attribute we want to extend, we monkey-patch `cls`'s `__getattr__` to our implementation (See **(1)**). This will override any existing implementation of `__getattr__`, but we'll get to that later. For now, we'll focus on our implementation of `__getattr__`.
In **(2)** we implemented `has_extension` - we simply compare the name we got to the name of our extension method. Then, in **(3)**, comes some Python magic. Python allows us to inspect the running program, to see where we were called from and what variables were in scope in that code. To do that, we use the [`inspect`] module. We use `inspect.stack()` to get the call-stack for the current execution, then access the second frame (`[1]`) to get our caller. This will be where `getattr(obj, name)` is invoked or `obj.name` is used. We use `.frame` to get the execution frame, and `.f_locals` and `.f_globals` to get the local and global variables available in that scope. They are equivalent to calling `globals()` or `locals()` in the relevant frame.
With the scope at hand, we perform a lookup to see whether the extension method we defined is in that scope. To make sure we have our extension method, we get it by name, then ensure that it is truly our method.
Finally, in **(4)**, when we know our method should be active, we bind it to the instance of the extended class and return it.
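The frame-inspection trick from **(3)** can be exercised on its own. This sketch (with made-up names `caller_has`, `with_secret`, `without_secret`) shows how a `ChainMap` over `f_locals` and `f_globals` answers "is this name visible to my caller?":

```python
import inspect
from collections import ChainMap


def caller_has(name, value):
    # Look one frame up the stack and search the caller's
    # locals first, then its globals.
    frame = inspect.stack()[1].frame
    scope = ChainMap(frame.f_locals, frame.f_globals)
    return scope.get(name) == value


def with_secret():
    secret = 42
    return caller_has("secret", 42)


def without_secret():
    return caller_has("secret", 42)


print(with_secret(), without_secret())  # True False
```

`ChainMap` performs the lookup in order, so a local variable shadows a global one, just like regular name resolution.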
### Better Scoping
While our scope retrieval code works, it's better to put it in a function rather than use it inline:
```python
def _is_in_scope(name, value):
    frame = inspect.stack()[2].frame
    return ChainMap(frame.f_locals, frame.f_globals).get(name) == value
```
But, oh, we have to increment the stack index to `2` since we're deeper in the callstack. This is risky. Instead, we'll use the following trick to get the frame:
```python
def _get_first_external_stack_frame():
    for frameinfo in inspect.stack():
        if frameinfo.filename == __file__:
            continue
        return frameinfo.frame


def _is_in_scope(name, value):
    frame = _get_first_external_stack_frame()
    return ChainMap(frame.f_locals, frame.f_globals).get(name) == value
```
Instead of counting the frames in our code, changing them with every change - we'll use the module system. We know that all of our scaffolding is in the same module, but the usage is not. This allows us to easily traverse the stack until we find code that does not belong in our module. _That_ is our calling code.
Since you're probably wondering - yes. You need to change `_get_first_external_stack_frame()` if you want to put it in a different module. Implementing it is left as an exercise to the reader.
## Preserving `__getattr__`
As mentioned before, our current implementation overrides any existing `__getattr__` function for the class. Lucky for us, fixing it is easy:
```python
from contextlib import suppress


def no_override_extend(cls):
    def _decorator(f):
        def _default(_obj, _name):
            raise AttributeError()

        # (1)
        original_getattr = getattr(cls, '__getattr__', _default)

        def _getattr(obj, name):
            # (2)
            with suppress(AttributeError):
                return original_getattr(obj, name)

            if name != f.__name__:
                raise AttributeError()
            if not _is_in_scope(f.__name__, f):
                raise AttributeError()
            return functools.partial(f, obj)

        cls.__getattr__ = _getattr
        return f

    return _decorator
```
In **(1)** we get the original `__getattr__` method, to be stored for later usage. We use the `_default` function to avoid an extra `if` later. In **(2)** we use the saved `__getattr__`, making sure that we only proceed to our code if it raised an `AttributeError` exception.
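If `contextlib.suppress` is new to you, here's a tiny standalone sketch of the "try the original lookup first, fall through on `AttributeError`" pattern (the names `first_lookup` and `Point` are illustrative, not from the article):

```python
from contextlib import suppress


def first_lookup(obj, name, fallback):
    # Try the primary lookup; if it raises AttributeError,
    # suppress() swallows it and execution falls through.
    with suppress(AttributeError):
        return getattr(obj, name)
    return fallback


class Point:
    x = 1


print(first_lookup(Point(), "x", "missing"))  # 1
print(first_lookup(Point(), "y", "missing"))  # missing
```

The `return` inside the `with` block only runs when the lookup succeeds; any `AttributeError` is silently absorbed and the fallback path takes over.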
## Interlude 🐍
With `no_override_extend` we have our first "to-spec" implementation of extension methods. We have both scoping and precedence down. It is time to celebrate and rest. But our quest is not done yet.
While our code works well for a proof-of-concept, there are still significant usability issues with it. Since the extension methods we create have nice and clean names, it is likely that we'll want to use those names for other things. Unfortunately, once we do that, we'll override the existing extension methods and they will no longer work:
```python
@extend(Square)
def draw(square):
    draw_square(square)


def draw():
    print("Drawing is awesome!")

# ...
square.draw()  # This will fail, as `draw` has been replaced in this scope.
```
## Indirection 🔀
The [Fundamental Theorem of Software Engineering (FTSE)](https://en.wikipedia.org/wiki/Fundamental_theorem_of_software_engineering) says that any problem can be solved by adding another level of indirection. Let's see how this applies to our problem.
As mentioned in the interlude, our main issue is that of naming. Our extension method is bound to a name, and that name can be overriden in the scope that defines it. If that happens, we lose our extension method. To solve that, we'll add another level of indirection - a scope that can safely hold our extension methods and protect them from being overriden. If you read our [previous post] you might recall that classes are wonderful for scopes. So we'll use a class.
Our new syntax will look like this:
```python
@extension
class ExtensionMethods(Square):
    def draw(self):
        draw_square(self)
```
While we're still using a decorator, you may notice that it takes no parameters. Instead, we use the extended type as the base type for our extension class. This allows us to write the extensions like any other subclass, with standard Python syntax, and then use the decorator to install the extensions in it.
Since we've already gone over the principles behind the construction of the decorator, let's jump straight to the code and focus on the differences from the previous version:
```python
def extension(scope_cls):
    def _default(_obj, _name):
        raise AttributeError()

    # (1)
    cls = scope_cls.__base__
    original_getattr = getattr(cls, '__getattr__', _default)

    def _getattr(obj, name):
        with suppress(AttributeError):
            return original_getattr(obj, name)

        # (2)
        if not hasattr(scope_cls, name):
            raise AttributeError()

        # (3)
        if not _is_in_scope(scope_cls.__name__, scope_cls):
            raise AttributeError()

        # (4)
        f = getattr(scope_cls, name)
        return functools.partial(f, obj)

    cls.__getattr__ = _getattr
    return scope_cls
```
First, you can see that there is no nested decorator - only the main one. And, as we mentioned before, we use inheritance to indicate which type we're extending. So in **(1)** we access the base-class of our extension class to get the class we're extending. Then, in **(2)** we check whether the requested attribute exists in our extension class. As you can see, the changes are pretty simple and straight-forward. In **(3)** we make the most important change - we check for the extension class in the scope, not the extension methods. This is the core of this change! And lastly, in **(4)**, we get the required attribute from our extension class.
And with that, we're done.
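To tie it all together, here's a condensed, self-contained variant of the final decorator. It inlines the scope check and drops the `__getattr__`-preservation plumbing for brevity, and it assumes the call site can see `SquareExtensions` in its globals (the `SquareExtensions`/`area` names are illustrative):

```python
import functools
import inspect
from collections import ChainMap
from dataclasses import dataclass


@dataclass
class Square:
    length: int


def extension(scope_cls):
    cls = scope_cls.__base__

    def _getattr(obj, name):
        if not hasattr(scope_cls, name):
            raise AttributeError(name)
        # Only resolve when the extension class is visible
        # in the caller's scope.
        frame = inspect.stack()[1].frame
        scope = ChainMap(frame.f_locals, frame.f_globals)
        if scope.get(scope_cls.__name__) is not scope_cls:
            raise AttributeError(name)
        return functools.partial(getattr(scope_cls, name), obj)

    cls.__getattr__ = _getattr
    return scope_cls


@extension
class SquareExtensions(Square):
    def area(self):
        return self.length ** 2


print(Square(3).area())  # 9
```

Because `Square` itself has no `area`, normal lookup fails, `__getattr__` fires, and the method is resolved only because `SquareExtensions` is visible where the call happens.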
## Final Words
I hope you enjoyed this article. Regardless of that, I hope you never use it in production code.
[Extension Methods]: https://kotlinlang.org/docs/reference/extensions.html
[`__getattr__`]: https://docs.python.org/3/reference/datamodel.html#object.__getattr__
[`inspect`]: https://docs.python.org/3/library/inspect.html
[Fundemental Theorem of Software Engineering (FTSE)]: https://en.wikipedia.org/wiki/Fundamental_theorem_of_software_engineering
[previous post]: https://dev.to/tmr232/snake-eyes-scopes-and-iife-50h2
[repl]: https://repl.it/@TamirBahar/python-extension-methods
[github]: https://github.com/tmr232/python-extension-methods

— tmr232
---
title: Increasing your productivity with Telegram and Node.js
published: true
description:
tags: node, productivity, webdev, showdev
cover_image: https://i.ibb.co/f2pv9mQ/fernando-hernandez-efzwc-MRM6j4-unsplash.jpg
---
This article was originally published on [Medium] (https://medium.com/datadriveninvestor/improving-your-productivity-with-telegram-and-node-js-20f8be11e58c).
Some time ago I searched for an easy way to establish a communication channel between a mobile device and a Node.js webserver. My goal was to exchange messages over this channel and receive information about the weather, public transportation and more.
For example, I send the message `/train` and receive a response with realtime details about train departure times for preconfigured routes. The Node.js server receives the incoming message, processes it and sends a response back to the client.
After doing some researches I finally came up with Telegram bots since they are very easy to setup and fit perfect to my needs. Besides sending text messages, you can also share data like images or audio recordings.
---
First of all, what exactly is a Telegram bot? [Source](https://core.telegram.org/bots)
> Bots are third-party applications that run inside Telegram. Users can interact with bots by sending them messages, commands and inline requests. You control your bots using HTTPS requests to our bot API.
So you simply send a message from your phone via Telegram and your webserver receives it over Telegram’s API.
Here are some of the things you can use your own bot for, whether just for yourself or for your friends as well:
- Gathering weather information
- Fetching arrival / departure times of public transportation
- Receiving tweets, news, status updates
- Sending automated messages
- IoT
and so much more.
---
One big benefit of Telegram bots is that you don’t need a public server which is accessible over an IP-address from outside the network. In my case I use a Raspberry Pi to run the Node application for example.
Since the communication takes place over the Telegram API, there’s just an internet connection required.
For interacting with it you can use a runtime environment like Node.js as I did in the example app below or any other programming languages.
[Here](https://core.telegram.org/bots/api) you can find an introduction about how to interact with the API.
---
As I mentioned above I recently created an example app for a Telegram bot server based on Node.js. Feel free to use it for your own bot and customize it according to your wishes or contribute to it.
Let me know what you use your bot for and share your experience!
{% github larswaechter/telegram-bot-server %}
— larswaechter
---
title: Creating a Markdown Editor in React.js & TypeScript with Deployment through Github Actions
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/bskfkz7hrrnsjvm1jlyo.png
published: true
description: This contains the article about creating markdown editor with the use of React.js and TypeScript with combining continuous deployments using Github actions workflow
tags: reactjs, typescript, webdev, beginner
---
I remember that when I was doing freeCodeCamp, one of the projects was to build a Markdown editor. So I decided to go with a Markdown editor this time, combined with React.js and TypeScript.
## What you'll learn
- Setting up React.js project with TypeScript
- Creating a markdown editor by compiling down it to html
- Using React hooks to create theming for the application
- Continuous deployments through Github Actions
I am a lazy person, and I think most of you are, too. So here are the code and demo links, if you want to jump straight to them.
### Project Source Code:
{% github https://github.com/ashwamegh/react-typescript-markdown-editor no-readme %}
### Project Demo: [ashwamegh/react-typescript-markdown-editor](https://ashwamegh.github.io/react-typescript-markdown-editor/)
Let's start with setting up our project.
## 1. Setting up our project with React.js & TypeScript
We all know the capabilities of TypeScript and how it can save you from silly mistakes. Combined with React, the two become a great combination to power any application.

I will be using `create-react-app`, since it gives you TypeScript support out of the box. Go to the root directory where you want to create the project and run this command:
```shell
npx create-react-app markdown-editor --template typescript
```
this `--template typescript` flag will do all the hard work for you, setting up React.js project with TypeScript.
Later, you'll need to remove some of the bootstrapped code to start creating your application.
For reference you can check this initial commit to see what has been removed:
`https://github.com/ashwamegh/react-typescript-markdown-editor/commit/7cc379ec0d01f3f1a07396ff2ac6c170785df57b`
After you've completed initial steps, finally we'll be moving on to creating our Markdown Editor.
## 2. Creating Markdown Editor
Before diving into the code, let's see the folder structure for our project, which we will be developing.
```shell
├── README.md
├── package.json
├── public
| ├── favicon.ico
| ├── index.html
| ├── logo192.png
| ├── logo512.png
| ├── manifest.json
| └── robots.txt
├── src
| ├── App.test.tsx
| ├── App.tsx
| ├── components
| | ├── Editor.tsx
| | ├── Footer.tsx
| | ├── Header.tsx
| | ├── Main.tsx
| | ├── Preview.tsx
| | └── shared
| | └── index.tsx
| ├── index.css
| ├── index.tsx
| ├── react-app-env.d.ts
| ├── serviceWorker.ts
| ├── setupTests.ts
| └── userDarkMode.js
├── tsconfig.json
└── yarn.lock
```
I will be using `emotion` to create styles for my components and `react-icons` for the icons used in the project. So you'll need to install `emotion` and `react-icons` by running this command:
```shell
npm i -S @emotion/core @emotion/styled react-icons
```
or if you're using `yarn` like me, you can run
```shell
yarn add @emotion/core @emotion/styled react-icons
```
Moving forward, first of we will create a `shared` components folder to create components we will be reusing.
```tsx
/* src/components/shared/index.tsx */
import React from 'react'
import styled from '@emotion/styled'

export const ColumnFlex = styled.div`
  display: flex;
  flex-direction: column;
`

export const RowFlex = styled.div`
  display: flex;
  flex-direction: row;
`
```
> In this file, we have declared two styled components for `flex-column` and `flex-row` styled `divs` which we'll be using later.
To know more about `styled-components` with `emotion` library, head on to this [link](https://emotion.sh/docs/styled).
## 3. Using React hooks to create a custom theme hook
We'll use react hooks to create our custom hook to implement basic theming capabilities, using which we can toggle our theme from light to dark colors.
```js
/* useDarkMode.js */
import { useEffect, useState } from 'react'

export default () => {
  const [theme, setTheme] = useState('light')

  const toggleTheme = () => {
    // Persist the choice so the effect below can restore it on reload.
    if (theme === 'dark') {
      setTheme('light')
      localStorage.setItem('theme', 'light')
    } else {
      setTheme('dark')
      localStorage.setItem('theme', 'dark')
    }
  }

  useEffect(() => {
    const localTheme = localStorage.getItem('theme')
    if (localTheme) {
      setTheme(localTheme)
    }
  }, [])

  return {
    theme,
    toggleTheme,
  }
}
```
> In our hooks file, we are setting the initial state of the theme to be `light` using `useState` hook. And using `useEffect` to check whether any theme item exists in our browser's local storage, and if there is one, pick the theme from there and set it for our application.
Since, we have defined our shared components and custom react hook for theming, let's dive into our app components.
So, I have divided our app structure into 5 components and those are: Header, Main (contains main section of the app with Editor & Preview component) and Footer component.
1. Header // contains normal header code and a switch to toggle theme
2. Main // container for Editor and Preview components
   1. Editor // contains code for Editor
   2. Preview // contains code for previewing markdown code compiled into HTML
3. Footer // contains normal footer code
```tsx
/* src/components/Header.tsx */
import React from 'react'
import { FiSun } from 'react-icons/fi'
import { FaMoon } from 'react-icons/fa'
// this comment tells babel to convert jsx to calls to a function called jsx instead of React.createElement
/** @jsx jsx */
import { css, jsx } from '@emotion/core'

// Prop check in typescript
interface Props {
  toggleTheme: () => void,
  theme: string
}

const Header: React.FC<Props> = ({ theme, toggleTheme }) => {
  return (
    <header
      css={
        theme === 'dark'
          ? css`
              display: flex;
              flex-direction: row;
              justify-content: space-between;
              background-color: #f89541;
              padding: 24px 32px;
              font-size: 16px;
            `
          : css`
              display: flex;
              flex-direction: row;
              justify-content: space-between;
              background-color: #f8f541;
              padding: 24px 32px;
              box-shadow: 0px -2px 8px #000;
              font-size: 16px;
            `
      }
    >
      <div className="header-title">
        Markdown Editor
      </div>
      <div
        css={css`
          cursor: pointer;
        `}
        onClick={toggleTheme}
      >
        {theme === 'dark' ? <FaMoon /> : <FiSun />}
      </div>
    </header>
  )
}

export default Header;
```
> In this component, we are using TypeScript for prop checks, and you may wonder why we're mentioning `React.FC` here. It's just that, by typing our component as an FC (FunctionComponent), the React TypeScript types allow us to handle children and defaultProps correctly.
For styling our components we're using `css` prop with string styles from `emotion` library, you can learn more about this by following the docs [here](https://emotion.sh/docs/css-prop#string-styles)
After creating the Header component, we'll create our Footer component, and then we'll move on to the Main component.
Let's see the code for Footer component
```tsx
import React from 'react'
// this comment tells babel to convert jsx to calls to a function called jsx instead of React.createElement
/** @jsx jsx */
import { css, jsx } from '@emotion/core'

const Footer: React.FC = () => {
  return (
    <footer>
      <div
        className="footer-description"
        css={css`
          padding: 16px 0px;
          overflow: hidden;
          position: absolute;
          width: 100%;
          text-align: center;
          bottom: 0px;
          color: #f89541;
          background: #000;
        `}
      >
        <span>{`</>`}</span><span> with <a href="https://reactjs.org" target="_blank">React.js</a> & <a href="https://www.typescriptlang.org/" target="_blank">TypeScript</a></span>
      </div>
    </footer>
  )
}

export default Footer;
```
The Footer component contains simple markup to render the usual credits.
```tsx
/* src/components/Main.tsx */
import React, { useState } from 'react'
// this comment tells babel to convert jsx to calls to a function called jsx instead of React.createElement
/** @jsx jsx */
import { css, jsx } from '@emotion/core'
import { RowFlex } from './shared'
import Editor from './Editor';
import Preview from './Preview';

interface Props {
  theme: string
}

const Main: React.FC<Props> = ({ theme }) => {
  const [markdownContent, setMarkdownContent] = useState<string>(`
# H1
## H2
### H3
#### H4
##### H5
__bold__
**bold**
_italic_
`);

  return (
    <RowFlex
      css={css`
        padding: 32px;
        padding-top: 0px;
        height: calc(100vh - 170px);
      `}
    >
      <Editor theme={theme} markdownContent={markdownContent} setMarkdownContent={setMarkdownContent} />
      <Preview theme={theme} markdownContent={markdownContent} />
    </RowFlex>
  )
}

export default Main;
```
Some of this code will already look familiar from the previous components. Beyond that, we have used the `useState` hook to create a state that holds our markdown content, along with a handler to set it, called `setMarkdownContent`.
> We need to pass these down to our `Editor` and `Preview` components so that they can give the user a way to edit and preview their markdown content. We have also set an initial state for the content with some basic markdown text.
Let's see the code for the Editor component:
```tsx
/* src/components/Editor.tsx */
import React, { ChangeEvent } from 'react'
import PropTypes from 'prop-types';
// this comment tells babel to convert jsx to calls to a function called jsx instead of React.createElement
/** @jsx jsx */
import { css, jsx } from '@emotion/core'
import { ColumnFlex } from './shared'
interface Props {
markdownContent: string;
setMarkdownContent: (value: string) => void,
theme: string
}
const Editor: React.FC<Props> = ({ markdownContent, setMarkdownContent, theme }) => {
return (
<ColumnFlex
id="editor"
css={css`
flex: 1;
padding: 16px;
`}>
<h2>
Editor
</h2>
<textarea
onChange={(e: ChangeEvent<HTMLTextAreaElement>) => setMarkdownContent(e.target.value)}
css={theme === 'dark'?
css`
height: 100%;
border-radius: 4px;
border: none;
box-shadow: 0 -2px 10px rgba(0, 0, 0, 1);
background: #000;
color: #fff;
font-size: 100%;
line-height: inherit;
padding: 8px 16px;
resize: none;
overflow: auto;
&:focus {
outline: none;
}
`
: css`
height: 100%;
border-radius: 4px;
border: none;
box-shadow: 2px 2px 10px #999;
font-size: 100%;
line-height: inherit;
padding: 8px 16px;
resize: none;
overflow: auto;
&:focus {
outline: none;
}
`}
rows={9}
value={markdownContent}
/>
</ColumnFlex>
)
}
Editor.propTypes = {
markdownContent: PropTypes.string.isRequired,
setMarkdownContent: PropTypes.func.isRequired,
}
export default Editor;
```
> This is a straightforward component which uses `<textarea/>` to give the user a way to enter their input, which then has to be compiled down and rendered as HTML content in the Preview component.
Now, we have created almost all the components except the Preview component.
We'll need something to compile the user's markdown content down to plain HTML, and we don't want to write all the compiler code ourselves, because we have plenty of options to choose from.
In this application, we'll be using the **`marked`** library to compile our markdown content down to HTML, so you will need to install it by running this command:
```shell
npm i -S marked
```
or with yarn
```shell
yarn add marked
```
> If you want to know more about this library, you can see it [here](https://github.com/markedjs/marked)
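Under the hood, a markdown compiler like `marked` simply turns markdown syntax into HTML strings. As an illustration only (this is not `marked`'s actual code, and the real library covers the entire markdown spec), a toy compiler for headings and bold text might look like this:

```typescript
// Toy illustration of what a markdown compiler does -- NOT marked's
// real implementation, which covers the full markdown spec.
function tinyMarkdown(src: string): string {
  return src
    .split('\n')
    .map((line) => {
      // '# Heading' through '###### Heading' become <h1>..<h6>
      const heading = line.match(/^(#{1,6})\s+(.*)$/);
      if (heading) {
        const level = heading[1].length;
        return `<h${level}>${heading[2]}</h${level}>`;
      }
      // '**bold**' becomes <strong>bold</strong>
      return line.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>');
    })
    .join('\n');
}
```

`marked(markdownContent)` performs the same kind of transformation, just for the whole markdown syntax.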
Let's see the code for our Preview component:
```tsx
/* src/components/Preview.tsx */
import React from 'react'
import PropTypes from 'prop-types'
import marked from 'marked'
// this comment tells babel to convert jsx to calls to a function called jsx instead of React.createElement
/** @jsx jsx */
import { css, jsx } from '@emotion/core'
import { ColumnFlex } from './shared'
interface Props {
markdownContent: string,
theme: string
}
const Preview: React.FC<Props> = ({ markdownContent, theme }) => {
const mardownFormattedContent = ( marked(markdownContent));
return (
<ColumnFlex
id="preview"
css={css`
flex: 1;
padding: 16px;
`}
>
<h2>Preview</h2>
<div
css={theme === 'dark'
? css`
height: 100%;
border-radius: 4px;
border: none;
box-shadow: 0 -2px 10px rgba(0, 0, 0, 1);
font-size: 100%;
line-height: inherit;
overflow: auto;
background: #000;
padding: 8px 16px;
color: #fff;
`
: css`
height: 100%;
border-radius: 4px;
border: none;
box-shadow: 2px 2px 10px #999;
font-size: 100%;
line-height: inherit;
overflow: auto;
background: #fff;
padding: 8px 16px;
color: #000;
`}
dangerouslySetInnerHTML={{__html: mardownFormattedContent}}
>
</div>
</ColumnFlex>
)
}
Preview.propTypes = {
markdownContent: PropTypes.string.isRequired
}
export default Preview;
```
> In this component, we compile the markdown content and store it in the `mardownFormattedContent` variable. To show a preview of the content as HTML, we have to use the `dangerouslySetInnerHTML` prop to inject the HTML directly into the DOM, which we do by adding the `dangerouslySetInnerHTML={{__html: mardownFormattedContent}}` prop to the div element.
Finally, we're ready with all the components needed to create our Markdown editor application. Let's bring them all together in our `App.tsx` file.
```tsx
/* src/App.tsx */
import React from 'react'
import { css, jsx } from '@emotion/core'
// Components
import Header from './components/Header'
import Main from './components/Main'
import Footer from './components/Footer';
import useDarkMode from './userDarkMode';
function App() {
const { theme, toggleTheme } = useDarkMode();
const themeStyles = theme === 'light'? {
backgroundColor: '#eee',
color: '#000'
}: {
backgroundColor: '#171616',
color: '#fff'
}
return (
<div
className="App"
style={themeStyles}
>
<Header theme={theme} toggleTheme={toggleTheme}/>
<Main theme={theme}/>
<Footer />
</div>
);
}
export default App;
```
In our App component, we are importing the child components and passing down the theme props.
Now, if you have followed all the steps above, you'll have a running markdown editor application. For the styles I have used, you can see my source code via the link I mentioned.
> Now, it's time to create GitHub Actions for our project to set up a continuous deployment workflow on every push to master.
## 4 Setting up continuous deployments through GitHub Actions
We'll be using a GitHub Actions workflow to build and deploy our web application on every push to master.
> Since this is not an enterprise application that holds separate branches for production and development, I will set up my workflow for the master branch. But if at any time in the future you need to set up a GitHub Actions workflow for an enterprise application, just be careful with the branches.
To do so, we’ll follow some steps:
1. Create a folder in our project root directory, `.github/workflows/`; this will hold all the workflow configs.
2. We’ll be using [`JamesIves/github-pages-deploy-action`](https://github.com/JamesIves/github-pages-deploy-action) action to deploy our application.
3. Next we'll create our `.yml` file here, which will be responsible for the action that builds and deploys our application to GitHub Pages. Let's name it `build-and-deploy-to-gh-pages.yml`.
Let's see what goes inside this `build-and-deploy-to-gh-pages.yml`
```yml
# build-and-deploy-to-gh-pages.yml
name: Build & deploy to GitHub Pages
on:
push:
branches:
- master
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v1
- name: Set up Node
uses: actions/setup-node@v1
with:
node-version: 10.x
- name: Set email
run: git config --global user.email "${{ secrets.adminemail }}"
- name: Set username
run: git config --global user.name "${{ secrets.adminname }}"
- name: npm install command
run: npm install
- name: Run build command
run: npm run build
- name: Deploy
uses: JamesIves/github-pages-deploy-action@releases/v3
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BASE_BRANCH: master
BRANCH: gh-pages # The branch the action should deploy to.
FOLDER: build # The folder the action should deploy.
```
This workflow will run every time we push something to master, and will deploy the application through the `gh-pages` branch.
Let's break down the workflow file.
```yml
name: Build & deploy to GitHub Pages
on:
push:
branches:
- master
```
This defines our workflow name and the trigger for running the jobs inside it. Here we are setting the trigger to listen for any `push` events on the `master` branch.
```yml
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v1
- name: Set up Node
uses: actions/setup-node@v1
with:
node-version: 10.x
- name: Set email
run: git config --global user.email "${{ secrets.adminemail }}"
- name: Set username
run: git config --global user.name "${{ secrets.adminname }}"
- name: npm install command
run: npm install
- name: Run build command
run: npm run build
- name: Deploy
uses: JamesIves/github-pages-deploy-action@releases/v3
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BASE_BRANCH: master
BRANCH: gh-pages # The branch the action should deploy to.
FOLDER: build # The folder the action should deploy.
```
This is the most important part of our workflow; it declares the `jobs` to be done. Some of the lines in the config are self-explanatory: `runs-on: ubuntu-latest` defines the system the job will run on.
```yml
- name: Checkout
uses: actions/checkout@v1
```
This is an action for checking out the repo. In the later steps we set up our development environment by installing Node and setting our git profile config. Then we run `npm install` to pull in all the dependencies, and finally run the `build` command.
```yml
- name: Deploy
uses: JamesIves/github-pages-deploy-action@releases/v3
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
BASE_BRANCH: master
BRANCH: gh-pages # The branch the action should deploy to.
FOLDER: build # The folder the action should deploy.
```
> After the build command has completed, we use the `JamesIves/github-pages-deploy-action@releases/v3` action to deploy our build folder to `gh-pages`.
Whenever you push something to your master branch, this workflow will run and deploy your static build folder to the `gh-pages` branch.

Now, when the deployment is complete, you'll have your app running at your GitHub link [https://yourusername.github.io/markdown-editor/](https://yourusername.github.io/markdown-editor/).
> Don't forget to add `"homepage": "https://yourusername.github.io/markdown-editor/"` to your `package.json` file; otherwise, serving the static content may cause problems.
If you liked my article, you can follow me on Twitter for my daily paper `The JavaSc®ipt Showcase`, and you can also follow my personal projects on GitHub. Please post in the comments how you liked this article. Thanks!!
{% twitter 1233694229232324608 %}
{% github https://github.com/ashwamegh/react-typescript-markdown-editor no-readme %} | ashwamegh |
271,641 | Permission Error When Trying to Use GitHub Actions with GitHub Organizations | How to resolve the Only actions in "hoge-organization" are allowed for this repository error that occurs when trying to use GitHub Actions with GitHub Organizations. | 0 | 2020-03-01T15:39:48 | https://shootacean.com/entry/2020/02/26/github-actions-error-actions-permissions | github, japanese | ---
description: "How to resolve the Only actions in \"hoge-organization\" are allowed for this repository error that occurs when trying to use GitHub Actions with GitHub Organizations."
---
## The error
`Only actions in "hoge-organization" are allowed for this repository`
## Cause
On GitHub, go to the repository's page > Settings > Actions > Actions Permissions and check the setting.
If `Enable local Actions only for this repository` is selected,
third-party GitHub Actions cannot be used.
## Fix
Either of the following will resolve it:
- Remove the third-party Action definitions from your GitHub Actions workflow
  - With the default GitHub Actions template, this is the `actions/checkout@v2` part
- Change Actions Permissions to `Enable local and third party Actions for this repository`
| shootacean |
271,643 | Flat Multi-Environment Config for Craft CMS 3 | Flat Multi-Environment Config for Craft CMS 3 Multi-environment configs for C... | 0 | 2020-03-23T21:11:16 | https://nystudio107.com/blog/multi-environment-configuration-for-craft-cms-3 | craftcms, config, env | ---
title: Flat Multi-Environment Config for Craft CMS 3
published: true
date: 2020-02-29 15:29:00 UTC
tags: craftcms,config,env
canonical_url: https://nystudio107.com/blog/multi-environment-configuration-for-craft-cms-3
---
# Flat Multi-Environment Config for Craft CMS 3
### Multi-environment configs for Craft CMS are a mix of aliases, environment variables, and config files. This article sorts it all out, and presents a flat config file approach
Andrew Welch / [nystudio107](https://nystudio107.com)

Multi-environment configuration is a way to have your website or webapp do different things depending on where it is being served from. For instance, a typical setup might have the following environments:
- <kbd>dev</kbd> — your local development environment
- <kbd>staging</kbd> — a staging or User Acceptance Testing (UAT) server allowing stakeholders to test
- <kbd>production</kbd> — the live production server
In each environment, you might want your project working differently. For example:
- **Debugging** — in local <kbd>dev</kbd> you might want debugging tools enabled, but not in live <kbd>production</kbd>
- **Credentials** — things like database credentials, API keys, etc. may be different per environment
- **Tracking** — you probably don’t want Google Analytics data in local <kbd>dev</kbd>, but you probably do in live <kbd>production</kbd>
There are many other behaviors of settings that you might need or want to be different depending on where your project is being served from.
Additionally, you may have “secrets” that you [don’t want stored in version control](https://www.freecodecamp.org/news/how-to-securely-store-api-keys-4ff3ea19ebda/), and you also don’t want stored in your database.
Multi-environment configuration is for all of these things.
## Enter the .ENV file
Craft CMS and a number of other systems have adopted the concept of a <kbd>.env</kbd> file for storing environment variables and secrets.

This <kbd>.env</kbd> file is:
- Never checked into source code control such as Git
- Created manually in each environment where the project will run
- Stores both environment variables and “secrets”
It's a simple key/value format that looks something like this:
```
# Craft database settings
DB_DRIVER=pgsql
DB_SERVER=localhost
DB_USER=project
DB_PASSWORD=XXXX
DB_DATABASE=project
DB_SCHEMA=public
DB_TABLE_PREFIX=
DB_PORT=5432
```
The values can be quoted or not (and indeed need to be quoted if they contain spaces), but keep in mind that if you use Docker, it [doesn't allow quoted values](https://github.com/docker/compose/issues/3702).
You can also add comments to your <kbd>.env</kbd> files by preceding a line with a <kbd>#</kbd> character.
<aside>
Adding comments to your .env file is being nice to future-you
</aside>
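To make the key/value format concrete, here is a rough sketch of how such a file is parsed (illustrated in TypeScript; real loaders like `vlucas/phpdotenv` handle many more edge cases, such as escaping and variable interpolation):

```typescript
// Rough sketch of .env parsing: skip blank lines and # comments,
// split on the first '=', and strip surrounding quotes.
// Illustrative only -- not the actual phpdotenv implementation.
function parseDotEnv(contents: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const rawLine of contents.split('\n')) {
    const line = rawLine.trim();
    // Blank lines and comments are ignored
    if (line === '' || line.startsWith('#')) continue;
    const eq = line.indexOf('=');
    if (eq === -1) continue;
    const key = line.slice(0, eq).trim();
    let value = line.slice(eq + 1).trim();
    // Quotes are needed when a value contains spaces; strip them here
    if (
      value.length >= 2 &&
      ((value.startsWith('"') && value.endsWith('"')) ||
        (value.startsWith("'") && value.endsWith("'")))
    ) {
      value = value.slice(1, -1);
    }
    vars[key] = value;
  }
  return vars;
}
```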
While there is some [debate over the efficacy](https://blog.fortrabbit.com/how-to-keep-a-secret) of storing secrets in this way, it’s become a commonly accepted practice that is “good enough” for non-critical purposes.
Additionally, this separation of environment variables & secrets from code — and from the database — allows for the natural use of more sophisticated measures should they be needed.
[Heroku](https://www.heroku.com/), [Docker](https://www.docker.com/), [Buddy.works](https://buddy.works/), [Forge](https://forge.laravel.com/), and many other tools work directly with <kbd>.env</kbd> files.
Environment variables can also be injected directly into the environment via the webserver and other tools, check out [Dotenvy](https://github.com/nystudio107/dotenvy) for details on automating that.
It's a good practice to provide an <kbd>example.env</kbd> file with each of your projects that contains the boilerplate for the environment variables your project uses, as well as default values:
```
# Craft database settings
DB_DRIVER=pgsql
DB_SERVER=localhost
DB_USER=project
DB_PASSWORD=REPLACE_ME
DB_DATABASE=project
DB_SCHEMA=public
DB_TABLE_PREFIX=
DB_PORT=5432
```
The <kbd>example.env</kbd> file can and _should_ be checked into Git, just make sure it has nothing sensitive in it such as passwords.
This gives you a nice starting point that you can rename to <kbd>.env</kbd> when configuring the project for a new environment. I use the [screaming snake case](https://dev.to/fission/screaming-snake-case-43kj) constant <kbd>REPLACE_ME</kbd> to indicate non-default values that need to be filled in on a per-environment basis.
You’ll thank yourself the next time you go to set up the project, and so will others on your team.
## Environment Variables in Craft CMS
In the context of Craft CMS, Pixel & Tonic has the canonical configuration information in their [Environmental Configuration](https://docs.craftcms.com/v3/config/environments.html) guide. However, we’re going to go into it in-depth, and provide a flexible reference implementation.
Craft CMS uses the [vlucas/phpdotenv](https://github.com/vlucas/phpdotenv) library for <kbd>.env</kbd> file handling. In fact, in the <kbd>web/index.php</kbd> we can see it being used thusly:
```
// Load dotenv?
if (class_exists('Dotenv\Dotenv') && file_exists(CRAFT_BASE_PATH.'/.env')) {
Dotenv\Dotenv::create(CRAFT_BASE_PATH)->load();
}
```
If the <kbd>Dotenv</kbd> class exists, Craft will look for a <kbd>.env</kbd> file in the project directory (set by the constant <kbd>CRAFT_BASE_PATH</kbd>) and try to load it.
What this actually does is call the PHP function [putenv()](https://www.php.net/manual/en/function.putenv.php) for each key/value pair in your <kbd>.env</kbd> file, which sets those variables in PHP's <kbd>$_ENV</kbd> superglobal.
The <kbd>$_ENV</kbd> superglobal contains variables from the PHP runtime environment, and the <kbd>$_SERVER</kbd> superglobal contains variables from the server environment. The PHP function [getenv()](https://www.php.net/manual/en/function.getenv.php) reads variables from both of these superglobals, and is how you can access your <kbd>.env</kbd> environment variables.
<aside>
<span>“</span>Superglobal” just means it’s a global variable defined by <span>PHP</span>, and available in every script. It isn’t faster than a speeding bullet or anything.
</aside>
So if our <kbd>.env</kbd> file looked like this:
```
# Craft database settings
DB_DRIVER=pgsql
DB_SERVER=localhost
DB_USER=project
DB_PASSWORD=XXXX
DB_DATABASE=project
DB_SCHEMA=public
DB_TABLE_PREFIX=
DB_PORT=5432
```
Here's what the auto-complete dropdown looks like in the Craft CMS CP for the environment variables:

We could get a value from PHP like this:
```
$database = getenv('DB_DATABASE');
```
And we could get the same value from Twig like this:
```
{% set database = getenv('DB_DATABASE') %}
```
## Aliases in Craft CMS
Craft CMS also has the concept of aliases, which are actually inherited from [Yii2 aliases](https://www.yiiframework.com/doc/guide/2.0/en/concept-aliases).
<aside>
Yii<span>2</span> is the webapp framework that Craft <span>CMS</span> is built on
</aside>
Aliases can sometimes be confused with environment variables, but they really serve a different purpose. You’ll use an alias when:
- The setting in question is a path
- The setting in question is a URL
That’s it.
Could you use environment variables in these cases? Sure. But with aliases you can do things like have it resolve a path or URL that has a partial path in it (see below).
You define aliases in your <kbd>config/general.php</kbd> file in the <kbd>aliases</kbd> key, e.g.:
```
<?php
/**
* General Configuration
*
* All of your system's general configuration settings go in here. You can see a
* list of the available settings in vendor/craftcms/cms/src/config/GeneralConfig.php.
*
* @see craft\config\GeneralConfig
*/
return [
// Craft config settings from .env variables
'aliases' => [
'@cloudfrontUrl' => getenv('CLOUDFRONT_URL'),
'@web' => getenv('SITE_URL'),
'@webroot' => getenv('WEB_ROOT_PATH'),
],
];
```
Note that we're actually setting aliases from environment variables! They actually complement each other.
Both <kbd>@web</kbd> and <kbd>@webroot</kbd> are aliases that Yii2 tries to set automatically for you. However, you should always set them explicitly (as shown above) to avoid [potential cache poisoning](https://docs.craftcms.com/v3/sites.html#creating-a-site).
Here’s how we can resolve an alias in PHP:
```
$path = Craft::getAlias('@webroot/assets');
```
To resolve an alias from Twig:
```
{% set path = alias('@webroot/assets') %}
```
This demonstrates what you can do with aliases that you cannot do with environment variables, which is pass in a partial path and have the alias resolve with that path added to it.
You **cannot** do this with environment variables:
```
{% set path = getenv('WEB_ROOT_PATH/assets') %}
```
Similarly, you **cannot** put this in a CP setting in Craft:
```
$WEB_ROOT_PATH/assets
```
Here's what the auto-complete dropdown looks like in the Craft CMS CP for aliases:

## parseEnv() does both
Since it’s commonplace that settings could be either aliases or environment variables (especially in CP settings), Craft CMS 3.1.0 introduced the convenience function [parseEnv()](https://docs.craftcms.com/v3/dev/functions.html#parseenv) that:
- Fetches any environment variables in the passed string
- Resolves any aliases in the passed string
So you can happily use it as a universal way to resolve both aliases and environment variables.
Here’s what it looks like in Twig:
```
{% set path = parseEnv(someVariable) %}
{# This is equivalent to #}
{% set path = alias(getenv(someVariable)) %}
```
Here’s what it looks like using parseEnv() via PHP:
```
$path = Craft::parseEnv($someVariable);
// This is equivalent to:
$path = Craft::getAlias(getenv($someVariable));
```
The <kbd>parseEnv()</kbd> function is a nice shortcut when you’re dealing with CP settings that could be aliases, environment variables, or both.
## Config files in Craft CMS
Craft CMS also has the concept of [config files](https://docs.craftcms.com/v3/config/environments.html#config-files), stored in the `config/` directory. These can either be "flat" config files that always return the same values regardless of environment:
```
// -- config/general.php --
return [
'omitScriptNameInUrls' => true,
'devMode' => true,
'cpTrigger' => 'secret-word',
];
```
Or config files can be multi-environment:
```
// -- config/general.php --
return [
// Global settings
'*' => [
'omitScriptNameInUrls' => true,
],
// Dev environment settings
'dev' => [
'devMode' => true,
],
// Production environment settings
'production' => [
'cpTrigger' => 'secret-word',
],
];
```
The <kbd>*</kbd> key is **required** for a config file to be parsed as a multi-environment config file. If the <kbd>*</kbd> key is present, any settings in that sub-array are considered global settings.
Other keys in the array correspond with the <kbd>CRAFT_ENVIRONMENT</kbd> constant, which is set by:
- The <kbd>ENVIRONMENT</kbd> variable in your <kbd>.env</kbd>, if present
- The incoming URL’s hostname otherwise
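Conceptually, resolving a multi-environment config means merging the `*` settings with the settings for the active environment, with the environment-specific values winning. A rough sketch (illustrated in TypeScript; Craft's actual merge logic is more sophisticated, e.g. it merges nested arrays recursively):

```typescript
// Sketch of multi-environment config resolution: the '*' (global)
// settings are combined with the active environment's settings,
// and the environment-specific values take precedence.
type Config = Record<string, unknown>;

function resolveConfig(multi: Record<string, Config>, env: string): Config {
  return { ...(multi['*'] ?? {}), ...(multi[env] ?? {}) };
}
```

So with the config file above, the `dev` environment resolves to both `omitScriptNameInUrls` and `devMode`, while `production` gets `omitScriptNameInUrls` and `cpTrigger`.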
Multi-environment config files are a carry-over from Craft 2, and continue to be quite useful.
<aside>
Flat is beautiful
</aside>
However, we’ve moved towards flat config files combined with <kbd>.env</kbd> files. Let’s have a look.
## A real-world example
For a real-world example of using flat config files combined with environment variables and aliases, we’ll use the [OSS’d devMode.fm website](https://github.com/nystudio107/devmode).

The reason we’ve moved away from using multi-environment config files is simplicity. It takes less mental space to know that **any** environment-specific settings or secrets are always coming from one place: the <kbd>.env</kbd> file.
<aside>
Using flat config files with environment variables keeps all the per-environment settings in one place
</aside>
This will save you time having to try to track down where a particular config setting is stored in each environment. It’s all in one place.
Here’s what the <kbd>example.env</kbd> file looks like for devMode.fm:
```
# Craft general settings
ALLOW_UPDATES=1
ALLOW_ADMIN_CHANGES=1
BACKUP_ON_UPDATE=0
DEV_MODE=1
ENABLE_TEMPLATE_CACHING=0
ENVIRONMENT=local
IS_SYSTEM_LIVE=1
RUN_QUEUE_AUTOMATICALLY=1
SECURITY_KEY=FnKtqveecwgMavLwQnX2I-dqYjpwZMR6
# Craft database settings
DB_DRIVER=pgsql
DB_SERVER=postgres
DB_USER=project
DB_PASSWORD=REPLACE_ME
DB_DATABASE=project
DB_SCHEMA=public
DB_TABLE_PREFIX=
DB_PORT=5432
# URL & path settings
ASSETS_URL=http://localhost:8000/
SITE_URL=http://localhost:8000/
WEB_ROOT_PATH=/var/www/project/cms/web
# Craft & Plugin Licenses
LICENSE_KEY=
PLUGIN_IMAGEOPTIMIZE_LICENSE=
PLUGIN_RETOUR_LICENSE=
PLUGIN_SEOMATIC_LICENSE=
PLUGIN_TRANSCODER_LICENSE=
PLUGIN_WEBPERF_LICENSE=
# S3 settings
S3_KEY_ID=REPLACE_ME
S3_SECRET=REPLACE_ME
S3_BUCKET=devmode-bucket
S3_REGION=us-east-2
S3_SUBFOLDER=
# CloudFront settings
CLOUDFRONT_URL=https://dnzwsrj1eic0g.cloudfront.net
CLOUDFRONT_DISTRIBUTION_ID=E17SKV1U1OTZKW
CLOUDFRONT_PATH_PREFIX=
# Redis settings
REDIS_HOSTNAME=redis
REDIS_PORT=6379
REDIS_DEFAULT_DB=0
REDIS_CRAFT_DB=3
# webpack settings
PUBLIC_PATH=/dist/
DEVSERVER_PUBLIC=http://localhost:8080
DEVSERVER_HOST=0.0.0.0
DEVSERVER_POLL=0
DEVSERVER_PORT=8080
DEVSERVER_HTTPS=0
# Twigpack settings
TWIGPACK_DEV_SERVER_MANIFEST_PATH=http://webpack:8080/
TWIGPACK_DEV_SERVER_PUBLIC_PATH=http://webpack:8080/
# Disqus settings
DISQUS_PUBLIC_KEY=
DISQUS_SECRET_KEY=
# Google Analytics settings
GA_TRACKING_ID=UA-69117511-5
# FastCGI Cache Bust settings
FAST_CGI_CACHE_PATH=
```
Because we're using [Project Config](https://docs.craftcms.com/v3/project-config.html) to allow us to easily deploy site changes across environments, we have to be mindful to put things like our Craft license key, plugin license keys, and other secrets into our <kbd>.env</kbd> file.
Otherwise we'd end up with secrets checked into our git repo, which is not ideal from a security point of view.
<aside>
While this .env file may look long, remember that it’s consolidating all of the environment variables in one place
</aside>
Note also that the <kbd>.env</kbd> settings are logically grouped, with comments.
Let’s have a look at how we utilize these environment variables in our <kbd>config/general.php</kbd> file:
```
<?php
/**
* General Configuration
*
* All of your system's general configuration settings go in here. You can see a
* list of the available settings in vendor/craftcms/cms/src/config/GeneralConfig.php.
*
* @see craft\config\GeneralConfig
*/
return [
// Craft config settings from .env variables
'aliases' => [
'@assetsUrl' => getenv('ASSETS_URL'),
'@cloudfrontUrl' => getenv('CLOUDFRONT_URL'),
'@web' => getenv('SITE_URL'),
'@webroot' => getenv('WEB_ROOT_PATH'),
],
'allowUpdates' => (bool)getenv('ALLOW_UPDATES'),
'allowAdminChanges' => (bool)getenv('ALLOW_ADMIN_CHANGES'),
'backupOnUpdate' => (bool)getenv('BACKUP_ON_UPDATE'),
'devMode' => (bool)getenv('DEV_MODE'),
'enableTemplateCaching' => (bool)getenv('ENABLE_TEMPLATE_CACHING'),
'isSystemLive' => (bool)getenv('IS_SYSTEM_LIVE'),
'resourceBasePath' => getenv('WEB_ROOT_PATH').'/cpresources',
'runQueueAutomatically' => (bool)getenv('RUN_QUEUE_AUTOMATICALLY'),
'securityKey' => getenv('SECURITY_KEY'),
'siteUrl' => getenv('SITE_URL'),
// Craft config settings from constants
'cacheDuration' => false,
'defaultSearchTermOptions' => [
'subLeft' => true,
'subRight' => true,
],
'defaultTokenDuration' => 'P2W',
'enableCsrfProtection' => true,
'errorTemplatePrefix' => 'errors/',
'generateTransformsBeforePageLoad' => true,
'maxCachedCloudImageSize' => 3000,
'maxUploadFileSize' => '100M',
'omitScriptNameInUrls' => true,
'useEmailAsUsername' => true,
'usePathInfo' => true,
'useProjectConfigFile' => true,
];
```
**// Craft config settings from .env variables**
The settings under this comment, including the <kbd>aliases</kbd>, are all set from <kbd>.env</kbd> environment variables via <kbd>getenv()</kbd>.
Note that we're explicitly typecasting the boolean values with <kbd>(bool)</kbd> because they are set as either <kbd>0</kbd> (false) or <kbd>1</kbd> (true) in the <kbd>.env</kbd> file, and <kbd>getenv()</kbd> always returns them as strings. Normally this isn't a problem, but there can be edge cases with weakly typed languages like PHP.
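The underlying gotcha is not PHP-specific: environment variables always arrive as strings, so booleans need an explicit conversion rather than a plain truthiness check. A sketch of the same idea in TypeScript (the `envBool` helper name is just for this example):

```typescript
// Environment variables are always strings, so '0' and '1' must be
// converted explicitly. A plain truthiness check is not enough: in
// JavaScript, for example, the string '0' is truthy.
function envBool(value: string | undefined): boolean {
  return value === '1' || value === 'true';
}
```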
**// Craft config settings from constants**
The settings under this comment are settings that we typically want to adjust from their default, but we don’t need them to be different on a per-environment basis.
You can look up what the various config settings are on the Craft CMS [General Config Settings](https://docs.craftcms.com/v3/config/config-settings.html) page.
Let’s have a look at the <kbd>config/db.php</kbd> file:
```
<?php
/**
* Database Configuration
*
* All of your system's database connection settings go in here. You can see a
* list of the available settings in vendor/craftcms/cms/src/config/DbConfig.php.
*
* @see craft\config\DbConfig
*/
return [
'driver' => getenv('DB_DRIVER'),
'server' => getenv('DB_SERVER'),
'user' => getenv('DB_USER'),
'password' => getenv('DB_PASSWORD'),
'database' => getenv('DB_DATABASE'),
'schema' => getenv('DB_SCHEMA'),
'tablePrefix' => getenv('DB_TABLE_PREFIX'),
'port' => getenv('DB_PORT')
];
```
These settings are all pretty straightforward, we’re just reading in secrets or settings that may be different per environment from <kbd>.env</kbd> environment variables via <kbd>getenv()</kbd>.
Finally, let’s have a look at the <kbd>config/app.php</kbd> file that lets you configure just about any aspect of the [Craft CMS webapp](https://www.yiiframework.com/doc/guide/2.0/en/structure-applications):
```
<?php
/**
* Yii Application Config
*
* Edit this file at your own risk!
*
* The array returned by this file will get merged with
* vendor/craftcms/cms/src/config/app/main.php and [web|console].php, when
* Craft's bootstrap script is defining the configuration for the entire
* application.
*
* You can define custom modules and system components, and even override the
* built-in system components.
*/
return [
'modules' => [
'site-module' => [
'class' => \modules\sitemodule\SiteModule::class,
],
],
'bootstrap' => ['site-module'],
'components' => [
'deprecator' => [
'throwExceptions' => YII_DEBUG,
],
'redis' => [
'class' => yii\redis\Connection::class,
'hostname' => getenv('REDIS_HOSTNAME'),
'port' => getenv('REDIS_PORT'),
'database' => getenv('REDIS_DEFAULT_DB'),
],
'cache' => [
'class' => yii\redis\Cache::class,
'redis' => [
'hostname' => getenv('REDIS_HOSTNAME'),
'port' => getenv('REDIS_PORT'),
'database' => getenv('REDIS_CRAFT_DB'),
],
],
'session' => [
'class' => \yii\redis\Session::class,
'redis' => [
'hostname' => getenv('REDIS_HOSTNAME'),
'port' => getenv('REDIS_PORT'),
'database' => getenv('REDIS_CRAFT_DB'),
],
'as session' => [
'class' => \craft\behaviors\SessionBehavior::class,
],
],
],
];
```
Here we’re bootstrapping our Site Module as per the [Enhancing a Craft CMS 3 Website with a Custom Module](https://dev.to/gaijinity/enhancing-a-craft-cms-3-website-with-a-custom-module-7k7) article.
Then we’re configuring the <kbd>deprecator</kbd> component so that if <kbd>devMode</kbd> is enabled, deprecation errors that would normally be logged instead cause an exception to be thrown.
<aside>
This is playing Craft in <span></span><span>“</span>hard” mode
</aside>
This can be really useful for tracking down and fixing deprecation errors as they happen.
Finally, we configure [Redis](https://redis.io/), and use it as the Yii2 caching method, and more importantly for PHP sessions. You can read more about setting up Redis in Matt Gray’s excellent [Adding Redis to Craft CMS](https://servd.host/blog/adding-redis-to-craft-cms) article.
## Multi-site Multi-Environment in Craft CMS
Craft CMS has powerful [multi-site](https://docs.craftcms.com/v3/sites.html) baked in that allows you to create localizations of existing sites, or sister-sites all managed under one umbrella.
In the context of a multi-environment config the <kbd>siteUrl</kbd> in your <kbd>config/general.php</kbd> changes from a string to an array:
```
'siteUrl' => [
'en' => getenv('EN_SITE_URL'),
'fr' => getenv('FR_SITE_URL'),
],
```
The key in the array is the language handle, and the value is the <kbd>siteUrl</kbd> for that site.
And your <kbd>.env</kbd> would have the corresponding URLs in it:
```
# Site URLs
EN_SITE_URL=https://english-example.com/
FR_SITE_URL=https://french-example.com/
```
You can have a separate <kbd>.env</kbd> environment variable for each site as shown above, or if your sites will all have the same base URL, you can define an alias:
```
'aliases' => [
'@baseSiteUrl' => getenv('SITE_URL'),
],
```
And then your siteUrl array would just look like this:
```
'siteUrl' => [
'en' => '@baseSiteUrl/en',
'fr' => '@baseSiteUrl/fr',
],
```
This makes it a little cleaner to set up and maintain, and it means fewer environment variables you need to change.
## Winding Down
That about wraps up our spelunking into the world of multi-environment configs in Craft CMS 3.

Hopefully this in-depth exploration of how environment variables work, combined with real-world examples, has helped to give you a better understanding of how you can create a solid multi-environment configuration for Craft CMS 3.
If you adopt some of the methodologies discussed here, you will reap the benefits of a proven setup.
The approach presented here is also used in the [nystudio107 Craft 3 CMS scaffolding project](https://github.com/nystudio107/craft). Enjoy!
## Further Reading
_If you want to be notified about new articles, follow [nystudio107](https://twitter.com/nystudio107) on Twitter._
<small>Copyright ©2020 nystudio107. Designed by nystudio107</small> | gaijinity |
271,647 | Poetry & Coding | Any ideas on creating poems using code? | 0 | 2020-03-01T15:47:37 | https://dev.to/israadata/poetry-coding-57fj | html, php, javascript | Any ideas on creating poems using code? | israadata |
271,710 | GLSL Shaders - 50 fork 6 | A post by PALASH PAL | 0 | 2020-03-01T18:10:45 | https://dev.to/palashpal123/glsl-shaders-50-fork-6-37bb | codepen | {% codepen https://codepen.io/mikepro4/pen/yLNbdxo %} | palashpal123 |
271,724 | How I learned to Learn in Public | I've been learning in public for most of my career. It has fundamentally changed the way I learn and... | 0 | 2020-03-01T18:44:10 | https://dev.to/jbranchaud/how-i-learned-to-learn-in-public-2f4m | learninpublic, writing, todayilearned | I've been learning in public for most of my career. It has fundamentally changed the way I learn and adds a whole other layer to my credibility. It required some patience, a lot of tiny spurts of effort, and a fair bit of vulnerability. It wasn't easy, but it wasn't hard. This is how I learned to learn in public.
---
Back in early 2015 I was applying for a job at [Hashrocket](https://hashrocket.com/). During the audition I got to pair program with nearly every person in the Chicago office. I would sit next to these veteran developers as they brought me up to speed on their client project and then I'd try to contribute to these real-world software problems.
There is no doubt it was a bit intimidating, but that isn't what stuck with me. Instead I remember my head was swimming with all these tiny new things I was learning just by working alongside someone passionate about their craft. I had all these moments that were something like,
> "So, you can hit `Ctrl-p` and `Ctrl-n` to go backward and forward through your terminal history? Wow, I've got to write that down somewhere."
It was then that I created [my TIL repository](https://github.com/jbranchaud/til) on GitHub.
Fast forward a little. I got the job and the "Oh, that's cool" learning-moments continued. I wrote tiny TIL entries nearly every day. It wasn't easy to do so, though. It felt vulnerable to write and publicly share these daily learnings.
> "Should I already know this?"
> "Is this too insignificant to write about?"
> "What if I get one of the details wrong?"
Despite these thoughts, I kept doing it. I didn't want to lose track of the things I was learning. I wanted to keep having the small conversations that these posts were inspiring within the office. Soon others in the office were maintaining their own TIL repository. Not long after that [til.hashrocket.com](https://til.hashrocket.com/) was born. I unknowingly had thrown myself [into the momentum of a flywheel](https://dev.to/jbranchaud/into-the-flywheel-29h0). I was [learning in public](https://www.swyx.io/writing/learn-in-public/), though I don't know that anyone was calling it that yet.
I was [writing small things](https://dev.to/jbranchaud/write-more-write-small-5c45) with only myself in mind as the audience. This practice fast-tracked a lot of learning that helped me develop competencies I take for granted today. It gave me a chance to ask lots of questions. It taught me how to say, "I don't know."
Whether it is [#100DaysOfCode](https://www.100daysofcode.com/), building a [digital garden](https://dev.to/jbranchaud/the-digital-garden-l10), writing daily TILs, or myriad other things, give learning in public a try. After a while, you'll look back and be surprised at how far you've come.
---
If you start learning in public, [let me know](https://twitter.com/jbrancha) what you get up to.
References:
- [My original, and still active, TIL repo](https://github.com/jbranchaud/til)
- [Learn in Public](https://gist.github.com/sw-yx/9720bd4a30606ca3ffb8d407113c0fe5#file-1-md) - original gist, Shawn Wang
- [Learn in Public](https://www.swyx.io/writing/learn-in-public/) - updated blog post, Shawn Wang | jbranchaud |
271,728 | Why I joined the Flatiron Data Science Program | My name is Tim Hugele, and I just started the Data Science program at the Flatiron School in Housto... | 0 | 2020-03-02T15:07:04 | https://dev.to/timhugele/why-i-joined-the-flatiron-data-science-program-4fho | 
My name is Tim Hugele, and I just started the Data Science program at the Flatiron School in Houston, TX.
I originally studied Economics at Texas A&M and Petroleum Engineering at the University of Houston. However, after failing to find a good job I decided to try something different.
I chose to study Data Science because:
1) I wanted to gain skills that would be in demand in the labor market.
2) I felt like my comfort with calculus and statistics from my background in Engineering and Economics would make Data Science a good fit.
3) As a kid I was a basketball fanatic and would buy books on basketball statistics. I loved the idea of being able to go beyond the simple statistics that were in those books and possibly finding unique trends in the data.

4) Data Science seems to be a versatile skill set that can be packaged with either of my previously acquired degrees. | timhugele | |
271,729 | Using Pipfile for fun and profit | Managing dependencies is deceptively hard. Need proof? Talk to anyone who has to manage a package.js... | 0 | 2020-04-29T18:03:48 | https://www.mattlayman.com/blog/2017/using-pipfile-for-fun-and-profit/ | python | **Managing dependencies is deceptively hard.** Need proof? Talk to anyone who has to manage a `package.json` in JavaScript. I'm sure they'll have stories.
Python is not immune to this hard problem. For years, the community rallied around the `requirements.txt` file to manage dependencies, but there are some subtle flaws that make dependency handling more confusing than necessary. To fix these issues, the [Python Packaging Authority](https://github.com/pypa), which is the group responsible for many things including `pip` and [PyPI](https://pypi.python.org/pypi), proposed a replacement for `requirements.txt` called a `Pipfile`. *We're going to look at the two file formats to see why a `Pipfile` is a better fit for the community in the future and how you can get started using one.*
## requirements.txt
Let's look at `requirements.txt` to see where the flaws are.
A `requirements.txt` file has a very primitive structure. Here's a sample file from the [handroll](https://github.com/handroll/handroll) project that I work on.
```txt
Jinja2==2.8
Markdown==2.4
MarkupSafe==0.23
PyYAML==3.11
Pygments==2.1.3
Werkzeug==0.11.4
argh==0.26.1
argparse==1.2.1
blinker==1.4
docutils==0.12
mock==1.0.1
pathtools==0.1.2
textile==2.2.2
watchdog==0.8.3
```
The core requirement is that each line in the file specifies one dependency.
The example adds a version specifier for each package even though that is not required. The file could have said `Jinja2` instead of `Jinja2==2.8`. In that small detail, we can begin to see weaknesses in the structure. Which is more correct, to specify versions or not? *It depends.*
Specifying the version of a package is called *pinning*. Files that pin versions for *every* dependency make it possible to reproduce the environment. This quality is very valuable for operating in a production scenario.
**What's the downside?** It's very hard to determine which packages are the direct dependencies. For instance, handroll directly uses `Jinja2`, but `MarkupSafe` is only listed because it is a dependency of a dependency. `Jinja2` depends on `MarkupSafe`. Thus, `MarkupSafe` is a *transitive dependency* of handroll.
The reason to include the transitive dependency comes back to reproducing the environment. If we only listed `Jinja2`, it's possible for an updated version of `MarkupSafe` to be installed that could break handroll. That leads to a bad user experience.
We've reached the core problem of the older format: *`requirements.txt` is attempting to be two views of dependencies.*
1. A pinned `requirements.txt` acts as a manifest to reproduce the operating environment.
2. An unpinned `requirements.txt` acts as the logical list of dependencies that a package depends on.
There is also a secondary problem related to the audience. If I'm a user of handroll, I only care about the dependencies that make the tool work. If I'm a developer for handroll, I *also* would like the tools needed for development (e.g., a linter, translation tools, upload tools for PyPI).
At this stage, conventions begin to break down in the community. Some projects use a `requirements-dev.txt` file for developer-only dependencies. Others opt for a `requirements` directory that contain many different files of dependencies. Both are imperfect solutions.
We're now positioned to consider what a `Pipfile` brings to the problem.
## Pipfile
A `Pipfile` handles the problems that `requirements.txt` does not. It is important to note that a `Pipfile` is *not* a novel creation. Pipfile is a Python implementation of a system that appears in Ruby, Rust, PHP, and JavaScript. [Bundler](http://bundler.io/), [Cargo](https://crates.io/), [Composer](https://getcomposer.org/), and [Yarn](https://yarnpkg.com/en/) are tools from each of those languages that follow a similar pattern. *What traits do these systems have in common?*
1. Split logical dependencies and a dependency manifest into separate files.
2. Separate the sections for user and developer dependencies.
### `Pipfile` and `Pipfile.lock`
The `Pipfile` manages the logical dependencies of a project. When I write "logical," I'm referring to the dependencies that a project directly depends on in its code. One way to think about the logical dependencies is as the set of dependencies **excluding** the transitive dependencies.
Conversely, a `Pipfile.lock` is the set of dependencies **including** the transitive dependencies. This file acts as the dependency manifest to use when building an environment for a production setting.
> The `Pipfile` is for people. The `Pipfile.lock` is for computers.
Having a clear distinction between files offers a couple of benefits.
1. People can read and reason about the `Pipfile`. There is no need to guess if a dependency is a direct dependency of a project.
2. Extra metadata can be stored in the `Pipfile.lock`. The metadata can include things like `sha256` checksums that help verify the integrity of a package's content.
### Users and developers
The other trait of a `Pipfile` is the split between user and developer dependencies. Let's look at the `Pipfile` for [pytest-tap](https://github.com/python-tap/pytest-tap), a project that I converted recently to the `Pipfile` format.
```toml
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
[dev-packages]
babel = "*"
flake8 = "*"
mock = "*"
requests = "*"
tox = "*"
twine = "*"
[packages]
pytest = "*"
"tap.py" = "*"
```
Because `Pipfile` uses [TOML](https://github.com/toml-lang/toml), it can include sections when a `requirements.txt` file could not. The sections give a clear delineation between user packages and developer packages.
pytest-tap is a [pytest](https://docs.pytest.org/en/latest/) plugin that produces [Test Anything Protocol (TAP)](http://testanything.org) output. It is a natural fit to depend on `pytest` and `tap.py`, a TAP library.
The other dependencies do developer specific things. `tox` and `mock` help with test execution, `twine` is for uploading the package to PyPI, and so on.
I hope that you could have an intuition about pytest-tap dependencies even without my prose descriptions. Additionally, splitting things out permits regular users to skip installing extra packages. That's the power of a `Pipfile`.
### pipenv
Now that we've covered the benefits, *how do you create a `Pipfile` for your own project?* Enter pipenv.
Kenneth Reitz, of `requests` fame, created [pipenv](http://docs.pipenv.org/en/latest/), a tool to manage a `Pipfile`. pipenv helps users add and remove packages from their `Pipfile` (and `Pipfile.lock`) in conjunction with a virtual environment.
Rather than manipulating a virtual environment and pip directly, you use the `pipenv` command, and it will do the work for you. If you come from the Ruby world, this is very similar to `bundle`.
Suppose you have a project that depends on Django. You could prepare your Django project with these commands:
```bash
$ pipenv --three
$ pipenv install Django
$ pipenv lock
```
Those steps would:
* create a Python 3 virtual environment
* install Django and add it to a `Pipfile`
* generate a `Pipfile.lock`
Once the files are created, you can share your work, and others should be able to recreate your environment.
### Summary
`Pipfile` is still an emerging standard. In spite of that, it is very promising and solves some problems that arise when working with packages. We saw how `Pipfile` beats out the venerable `requirements.txt` file, and we're equipped with pipenv to make `Pipfile`s for our projects.
I hope you learned something about Python dependencies and the brighter future that is accessible today.
This article first appeared on [mattlayman.com](https://www.mattlayman.com/blog/2017/using-pipfile-for-fun-and-profit/). | mblayman |
271,737 | Pipfile and pipenv | Last month, I wrote in detail about the new Pipfile format and the pipenv tool for managing Pytho... | 3,265 | 2020-05-20T18:03:49 | https://www.mattlayman.com/blog/2017/pipfile-pipenv/ | python, pipile, pipenv | {% youtube rR8F_Uaf9_I %}
Last month, I wrote in detail about [the new Pipfile format and the pipenv tool](https://www.mattlayman.com/2017/using-pipfile-for-fun-and-profit.html) for managing Python packages. I presented about Pipfiles in depth at Python Frederick this month.
If audio and video is more your speed, we recorded a talk for the first time at Python Frederick, and I posted it to YouTube for your enjoyment.
The presentation material is available in the [Python Frederick talks repository](https://github.com/python-frederick/talks/tree/master/2017-08-pipfile).
This article first appeared on [mattlayman.com](https://www.mattlayman.com/blog/2017/pipfile-pipenv/). | mblayman |
271,738 | Go basic syntax | You will learn about the basic grammar of the Go programming language (sometimes named Golang). The... | 0 | 2020-03-01T19:22:40 | https://dev.to/toebes618/go-basic-syntax-113o | go, beginners | You will learn about the basic grammar of the <a href="https://golang.org/">Go programming language</a> (sometimes named Golang).
The syntax of Go is similar to C, but with memory safety, garbage collection, structural typing, and CSP-style concurrency.
## Go statements
Go program statements may contain keywords, identifiers, constants, a string, a symbol etc. An example Go statement:
```go
fmt.Println("Hello, World!")
```
If we split it, you'd see
```go
1. fmt
2. .
3. Println
4. (
5. "Hello, World!"
6.)
```
The package `fmt` (1), the `Println` function (3), and the string parameter (5).
You can run the <a href="https://golangr.com/hello-world/">hello world</a> app using the Go compiler.
## Line separator
In Go, a statement ends automatically at the end of a line. Unlike the C/C++ family of languages, you do not need to end each statement with a semicolon `;`, because the Go compiler inserts them automatically.
If you want to write multiple statements on the same line, they must be separated by semicolons; but in actual development, we do not encourage this practice.
The following are two statements:
```go
fmt.Println("Hello, World!")
fmt.Println("dev.to")
```
## Comments
Comments are not compiled; each meaningful piece of code should have a relevant comment.
Single-line comments are the most common form of comment. They begin with `//` and can be used anywhere.
Multi-line comments, also called block comments, begin with `/*` and end with `*/`. For example:
```go
// a single line comment
/*
Author toebes
I am a multi-line comment
*/
```
## Identifiers
An identifier is used to name program entities such as <a href="https://golangr.com/variables/">variables</a> and types.
An identifier is a sequence of one or more letters (A–Z and a–z), digits (0–9), and underscores `_`, but the first character must be a letter or an underscore.
The following are valid identifiers:
* move_name
* a_123
* myname50
* _temp
* j
* a23b9
* retVal
The following are invalid identifiers:
* 1ab (starts with a number)
* case (a Go keyword)
* a+b (operators are not allowed)
## Keywords
Go has 25 keywords, or reserved words, that you cannot use as identifiers:
break | default | func | interface | select
--- | --- | --- | --- | ---
case | defer | go | map | struct
chan | else | goto | package | switch
const | fallthrough | if | range | type
continue | for | import | return | var
In addition to these keywords, Go also has 36 predefined identifiers:
append | bool | byte | cap | close | complex | complex64 | complex128 | uint16
--- | --- | --- | --- | --- | --- | --- | --- | ---
copy | false | float32 | float64 | imag | int | int8 | int16 | uint32
int32 | int64 | iota | len | make | new | nil | panic | uint64
print | println | real | recover | string | true | uint | uint8 | uintptr
A Go program typically consists of keywords, constants, variables, operators, types, and functions.
The following delimiters can be used: parentheses `()`, brackets `[]`, and braces `{}`.
The program may use these punctuation marks: `,;:` and `.`
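To tie these building blocks together, here is a small, self-contained sketch of a complete Go program; the identifiers (`apples`, `oranges`, `fruit`) are just example names, not anything special:

```go
package main // package, import, func, and var are keywords

import "fmt"

func main() {
	// apples, oranges, and fruit are identifiers
	var apples, oranges int = 3, 4
	fruit := apples + oranges // no semicolon needed at the end of the statement
	fmt.Println("Total fruit:", fruit) // prints: Total fruit: 7
}
```

Notice how keywords, identifiers, delimiters, and statements all appear in just a few lines.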
## Go Language spaces
In a Go variable declaration, the keyword and the variable name must be separated by a space, such as:
```go
var age int
```
Making appropriate use of spaces in statements makes the program easier to read.
No spaces:
```go
fruit=apples+oranges
```
Including spaces between variables and operators makes the program look cleaner and more readable, such as:
```go
fruit = apples + oranges
``` | toebes618 |
271,739 | Panda Login | Funny material transition login form | 0 | 2020-03-01T19:11:18 | https://dev.to/palashpal123/panda-login-1e53 | codepen | <p>Funny material transition login form </p>
{% codepen https://codepen.io/vineethtr/pen/NxqKoY %} | palashpal123 |
271,782 | Flutter REST API Crash Course Launch: Build a Coronavirus Tracking App | Master the basics of REST APIs and the Dart http package. Build a Coronavirus tracking application following best practices. | 0 | 2020-03-02T09:03:55 | https://codewithandrea.com/videos/2020-03-02-flutter-rest-api-crash-course-launch/ | flutter, dart, rest, http | ---
published: true
title: "Flutter REST API Crash Course Launch: Build a Coronavirus Tracking App"
description: "Master the basics of REST APIs and the Dart http package. Build a Coronavirus tracking application following best practices."
tags: flutter, dart, rest, http
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/qi4f66aef1y2elu8fgom.png
canonical_url: https://codewithandrea.com/videos/2020-03-02-flutter-rest-api-crash-course-launch/
---
[This article was originally published on my website.](https://codewithandrea.com/videos/flutter-rest-api-crash-course-launch/)
[Watch Video Tutorial on YouTube.](https://youtu.be/-5AgEisRQ5Y)
Today I'm launching a [new crash course](https://courses.codewithandrea.com/), where you will learn how to use REST APIs with Dart and Flutter.
I created this course because REST APIs are used everywhere in today's web.
And if you master the basics of the Dart [http library](https://pub.dev/packages/http), you can write Flutter apps that can tap into thousands of web APIs.
So in this crash course we will build a simple but completely functional Coronavirus Tracker Application in Flutter.
This application is just a simple dashboard that shows all the cases of Coronavirus worldwide.
We will use the [nCoV 2019 REST API](https://apimarket.nubentos.com/store/apis/info?name=API-nCoV2019&version=1.0.0&tenant=nubentos.com) to fetch the data. This is a free API that is relatively simple to use. But from a technical point of view, it will give us plenty of things to learn about.
## What's in this course
Here's a list of topics that are covered in this crash course:
- **Short introduction to REST**: what it is and how it works
- **Overview of the nCoV 2019 health API**. This is used to fetch statistics about the Coronavirus outbreak around the world.
- **Api keys & access tokens**: what they are and how to use them
- **REST Client**: a VSCode extension that you can use to send HTTP requests and view the response directly in VS Code
- **Design a REST API service in Dart**: we will use the `http` package and learn how to make requests
- **Parsing JSON**, as needed when handling responses from the REST API
- **Build a single-page UI**, using the API service that we created
- **Combine multiple API requests** into a single response by using futures
- **How to use a RefreshIndicator** to get updated data from the API
- **Error handling**: a very important topic when designing robust applications
- **Shared preferences**: we will use this to cache API responses, so that the data is saved on device for offline use
**This course is all about mastering the basics**. While we will only build a single page application, you will get all the knowledge you need to work with REST APIs in Dart, and use them to build slick-looking UIs in Flutter.
Not only that, but we will talk about app architecture as well. Architecture is a very important topic, so we will build this app following best practices.
And I'm confident that by the end of this course, you will be able to build Flutter apps that connect with any other REST API that you want to use.
## Course organization
This course will be divided into separate videos, and follow the topics in the order that I have described.
You will have access to the full source code for each lesson, [right here on GitHub](https://github.com/bizz84/coronavirus_rest_api_flutter_course).
## Prerequisites
- Familiarity with the Dart Language. If you're new to Dart, check out my [Dart Introduction](https://www.youtube.com/playlist?list=PLNnAcB93JKV_R1aZc7ZbQRsiEyeDLUpE-) on YouTube.
- Have Flutter installed on your system.
- Visual Studio Code or Android Studio, configured for Flutter development.
- Some prior knowledge of the most common Flutter widgets, and the difference between stateful and stateless widgets.
Basics aside, this course is for you if you want to learn how to use REST APIs and build maintainable Flutter apps.
## Where to find this crash course
Part of this course will be available for free on YouTube.
To get the full content, you have to buy the course on Teachable.
**Included for free**
- Short introduction to REST
- Overview of the nCoV 2019 health API
- Api keys & access tokens
- REST Client
- Design a REST API service in Dart
- Parsing JSON
**Included in the paid course**
All of the above, plus:
- Build a single-page UI
- Combine multiple API requests with Futures
- How to use a RefreshIndicator
- Error handling
- Shared preferences and caching
The paid course comes with **premium support**. I will aim to answer all questions within 24 hours.
I will also keep it up to date with the latest Flutter and Dart packages.
So to recap, you'll get a few lessons for free here on YouTube. But the full course will be available on Teachable.
## What about Udemy?
This is not planned for now. I think I will be able to offer a better teaching experience on Teachable.
I will offer this **and future courses** on Teachable. And you'll be able to purchase multiple courses for a discounted price in a single bundle. This is not something that is possible with Udemy.
## Course length
The final course will likely have between 2 and 3 hours of content.
As a crash course, it aims to get you up to speed with REST APIs in a short amount of time.
## Pricing
The course is [already on sale](https://courses.codewithandrea.com/) with an introductory price of $12.
This includes the first chapter of the course, with an introduction to REST APIs.
I'll try to add new content on a weekly basis. The course should be complete within the next couple of months.
### Enroll today to get the course and all future updates for $12.
The price will go up as I add more content.
Now is the best time to purchase my course. You can enroll at this link:
- [courses.codewithandrea.com](https://courses.codewithandrea.com/)
Happy coding! | biz84 |
271,792 | What else would I change in a C# rewrite? | Just reiterating: I love C#. I will use it forever. These are just thoughts about how I would approac... | 5,211 | 2020-06-06T22:38:57 | https://dev.to/thebuzzsaw/what-else-would-i-change-in-a-c-rewrite-3h64 | csharp, dotnet | Just reiterating: I love C#. I will use it forever. These are just thoughts about how I would approach a spinoff language.
## Immutable Arrays
Increasingly, I wish `T[]` was immutable. It makes perfect sense that strings are immutable. As a result, there is cognitive dissonance in arrays being mutable. I wish I could concatenate two arrays the way I concatenate two strings.
[.NET Core 2.1 introduced](https://docs.microsoft.com/en-us/dotnet/api/system.string.create) `string.Create`, which gives you temporary mutable access to the string's storage (`Span<char>`).
```csharp
public static string Create<TState>(
int length,
TState state,
SpanAction<char, TState> action)
```
A similar solution would work for arrays to avoid unnecessary copies. There could also be an overload that handles one element at a time.
```csharp
public static T[] Create<T, TState>(
int length,
TState state,
Func<TState, int, T> valueGenerator)
```
## Eliminate Multicast Delegate
I can't remember the original reason for this decision. I believe it was done to support events in GUI frameworks.
It is easy to compose a custom multicast delegate from monocast delegates, but the reverse is simply not possible. We all pay for the cost of multicast delegates even though up to 100% of delegates in a project are monocast. That's an unfortunate fee for using delegates at all.
## Fix `switch`
The `switch` statement was lifted straight from C/C++.
```C#
switch (input)
{
case 1:
DoTheThing();
break;
case 2:
case 3:
DoTheOtherThing();
DoAnotherThing();
break;
default:
DoDefaultThing();
break;
}
```
Let's just adopt the if-statement scope rules.
```C#
switch (input)
{
case (1)
DoTheThing();
case (2, 3)
{
DoTheOtherThing();
DoAnotherThing();
}
default
DoDefaultThing();
}
```
## Sealed by Default
I don't think classes should be open to extension by default. Inheritance is a very particular superpower in the OOP landscape. I feel that a given class should have to _invite_ inheritance rather than _restrict_ inheritance. In other words, switch to opt-in rather than opt-out.
A class could invite inheritance by marking itself with either `abstract` (which requires inheritance anyway) or some other keyword like `base`.
```C#
public base class Widget
```
## Drop Multidimensional Arrays
Treating an array as multidimensional data belongs in abstractions. Force developers to explicitly decide whether the grid is row-major, column-major, etc.
## Drop `Equals`
The object virtual method `Equals` is a relic of the pre-generic era. If you want to know if two references are the same, call `ReferenceEquals`. Otherwise, it is up to the type itself to decide whether it has a "value" that can be equated to another. In such cases, I'd much rather work with `IEquatable<T>` anyway.
## Drop `GetHashCode`
Following the section above, if two values are not meaningfully equatable, they're probably not meaningfully hashable either. Can two objects be hashable without being equatable? Maybe there should be an interface `IKey<T>` that extends `IEquatable<T>` and adds `GetHashCode`. Otherwise, an `IHashable` interface would do the trick. Maybe this could also serve as an opportunity to support hashes of different lengths: `IHashable32`, `IHashable64`, or others. | thebuzzsaw |
271,856 | Why I decided to learn data science | Data science is broad and interesting field with many different applications. Especially, I find fasc... | 0 | 2020-03-02T15:36:18 | https://dev.to/xsabzal/why-i-decided-to-learn-data-science-31n |
Data science is a broad and interesting field with many different applications. I find it especially fascinating that data science uses tools from many disciplines and sub-disciplines, like AI, machine learning, math, statistics, deep learning, and computer science, and how they all connect together harmoniously. From what we can see now, data-driven decisions are used in almost all industries. Potentially, we can expect that more and more ML/AI-driven solutions will be used by experts and users in different domains. Of course, there is a whole other topic of how these technologies are developed and how they can be used to help solve challenges in various industries.
### **How can I add value by applying Data Science?**
My background is in mechanical engineering, and after working for quite some time as an engineer, I realized that I could add more value to a company if I combined my experience with my passion for technology. I started by learning Python and solving code challenges on online learning platforms. I also started to learn about supervised and unsupervised learning algorithms, and I liked the applied concepts and the elegance of the math behind them.
### **Problem solving is fun!**

Another thing that I liked about data science is that it involves problem solving. For me personally, problem solving is fun. I think I understood that I truly love solving analytical problems when I was in high school. I still remember participating in my first physics olympiad; I had no experience solving tasks at that level and was struggling with one of the problems. After thinking for a while, I got an ‘aha moment’ and understood how to model and describe the phenomenon using the law of energy conservation. I think from that point onwards, I was able to participate quite successfully in physics olympiads in high school. So, the lesson I took away for myself is that we can also learn by challenging ourselves in a positive way. Now, back to learning data science: I just started my journey exploring it, and I want to start with the basics, as I believe that experimenting with and applying the foundations, and finding your own insights, will give me the confidence and motivation to continue learning in the long run.
I would like to end this post with a question for the reader.
So, what motivates or motivated you to learn data science, ML/AI or any other computer science discipline?
| xsabzal | |
271,858 | Blocking HTML5 Ping Requests using ColdFusion | Major browsers are disabling the ability to disable HTML5 ping click tracking. As a result, you’ll p... | 0 | 2020-03-02T01:11:16 | https://dev.to/gamesover/blocking-html5-ping-requests-using-coldfusion-4ei8 | coldfusion, cfml | Major browsers are disabling the ability to [disable HTML5 ping click tracking](https://www.bleepingcomputer.com/news/software/major-browsers-to-prevent-disabling-of-click-tracking-privacy-risk/).
As a result, you’ll probably start encountering empty form posts with a content-type of "text/ping". If you are not expecting or do not need to receive ping requests to your web server, you can block them without wasting
any resources processing the request further. This is important because this feature has already been used to perform [DDoS attacks](https://www.imperva.com/blog/the-ping-is-the-thing-popular-html5-feature-used-to-trick-chinese-mobile-users-into-joining-latest-ddos-attack/):
Here's a basic ColdFusion script that will identify and block HTML5 Ping requests.
{% gist https://gist.github.com/JamoCA/916dbb2d0ca0fe30ca63120bcaccc20f %}
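The gist above is ColdFusion-specific, but the same early rejection can be sketched in other stacks as well. Here is a hypothetical Python WSGI middleware doing the equivalent check; the function names and response text are illustrative, not part of the original script:

```python
# Hypothetical WSGI middleware that rejects HTML5 ping requests early,
# before the wrapped application does any work.
def block_ping_middleware(app):
    def middleware(environ, start_response):
        content_type = environ.get("CONTENT_TYPE", "")
        if content_type.lower().startswith("text/ping"):
            # Answer immediately without invoking the wrapped application.
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"ping requests are not accepted"]
        return app(environ, start_response)
    return middleware
```

Because the check runs before the wrapped application is invoked, a blocked ping request costs almost no resources.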
| gamesover |
271,873 | Configuring Alibaba Cloud OSS as the Backend Storage for GitLab's git-lfs Subsystem | Modifying gitlab-workhorse (no longer necessary). In the past, GitLab's gitlab-workhorse module needed a one-line code change so that it... | 0 | 2020-03-02T02:27:55 | https://dev.to/etiv/gitlab-git-lfs-oss-1joc |  | ## Modifying gitlab-workhorse (no longer necessary)
In the past, GitLab's gitlab-workhorse module needed a one-line code change to support case-insensitive ETag comparison. As of a later version, however, this behavior is supported upstream.
The change was:
```golang
// The source is near the end of the internal/objectstore/object.go file in gitlab-workhorse.
// Change the if statement that compares the etag with the md5sum to use strings.ToLower:
if strings.ToLower(o.etag) != strings.ToLower(o.md5Sum()) {
    // This is the only change needed; after saving the file, run make gitlab-workhorse
    // to build the gitlab-workhorse executable into the source root directory
```
------
## The complete LFS section of gitlab.rb
```ruby
gitlab_rails['lfs_enabled'] = true
gitlab_rails['lfs_storage_path'] = "/var/opt/gitlab/gitlab-rails/shared/lfs-objects" # Local storage path.
gitlab_rails['lfs_object_store_enabled'] = true # Enable object storage
gitlab_rails['lfs_object_store_direct_upload'] = true # Enable direct upload: gitlab-workhorse uploads received files straight to object storage
gitlab_rails['lfs_object_store_background_upload'] = false # Disable background upload: by default files are first stored locally, then uploaded by a queued job (requires multi-part upload support)
gitlab_rails['lfs_object_store_proxy_download'] = true # [Important] Be sure to enable proxied download: LFS files are relayed through gitlab-workhorse; without this, clients cannot obtain the LFS file URLs
gitlab_rails['lfs_object_store_remote_directory'] = "<bucket-name>/<some-prefix>/<lfs-directory>" # [Important] Use the <bucket_name>/<lfs_store_prefix> path format here so that Alibaba Cloud OSS works
gitlab_rails['lfs_object_store_connection'] = {
  'provider' => 'AWS', # We are using Alibaba Cloud OSS, but it is compatible with AWS S3
  'region' => 'cn-shanghai-internal', # Region of the OSS bucket (an internal-network endpoint is used here)
  'aws_access_key_id' => '<ACCESS_KEY_ID>', # API access key id
  'aws_secret_access_key' => '<ACCESS_SECRET>', # API access secret
  'host' => 'oss-cn-shanghai-internal.aliyuncs.com', # [Important] Use the endpoint hostname without the bucket_name
  'aws_signature_version' => 4, # OSS supports the v4 signature format
  'endpoint' => 'http://oss-cn-shanghai-internal.aliyuncs.com', # [Important] Use the endpoint URL without the bucket_name
  # path_style: true = 'host/bucket_name/object' ; false = 'bucket_name.host/object'
  'path_style' => true # [Important] Must be set to true
}
```
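The `path_style` comment above can be illustrated with a small helper. This is only a sketch of the two URL shapes, not code used by GitLab or OSS:

```python
# Illustrative helper showing the two S3-style addressing formats.
def object_url(host, bucket, key, path_style=True):
    if path_style:
        # path_style: true  -> host/bucket_name/object
        return f"http://{host}/{bucket}/{key}"
    # path_style: false -> bucket_name.host/object
    return f"http://{bucket}.{host}/{key}"
```

With `path_style => true`, the bucket name becomes part of the URL path instead of the hostname, which is why the endpoint above must be configured without the bucket name.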
----
#### References
- [GitLab LFS administration documentation](https://docs.gitlab.com/ee/workflow/lfs/lfs_administration.html)
- [Alibaba Cloud OSS docs: switching seamlessly from AWS S3 to OSS](https://github.com/AlibabaCloudDocs/oss/blob/master/cn.zh-CN/%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5/%E6%95%B0%E6%8D%AE%E8%BF%81%E7%A7%BB/%20%E4%BB%8EAWS%20S3%E4%B8%8A%E7%9A%84%E5%BA%94%E7%94%A8%E6%97%A0%E7%BC%9D%E5%88%87%E6%8D%A2%E8%87%B3OSS.md) | etiv | |
272,062 | Navigating Dev.to's Code on GitHub | Dev.to is an open source Rails application on GitHub. All Rails applications follow the same conventi... | 0 | 2020-04-27T07:19:06 | https://dev.to/andyrewlee/navigating-dev-to-s-code-on-github-4km4 | rails, opensource, webdev, beginners | Dev.to is an open source Rails application on GitHub. All Rails applications follow the same conventions which makes jumping from different Rails applications a seamless experience.
We will investigate a specific route, `/dashboard`, but we can repeat this process for other routes to learn about dev.to incrementally. Let's take a trip down MVC lane.
## Routes
It's helpful to start from the routes file to see all the routes that we can access in dev.to. We can access the routes file in [config/routes.rb](https://github.com/thepracticaldev/dev.to/blob/master/config/routes.rb).
Inside of the file we can see the following:
```ruby
get "/dashboard" => "dashboards#show"
```
This means that when a user makes a get request to `/dashboard`, Rails will use the `DashboardsController` and run the `show` action.
## Controller
We can find the `DashboardsController` here [app/controllers/dashboards_controller.rb](https://github.com/thepracticaldev/dev.to/blob/master/app/controllers/dashboards_controller.rb). The controller is in charge of gathering data from models and making them available to the view.
In the show action of the controller, we see that we are gathering articles from users.
```ruby
@articles = target.articles
```
Let's take a peek at the `User` and `Article` models.
## Model
A model maps directly to a table in Rails. For example the `Article` model maps to the `articles` table in the database. We can define relationships in the model.
### User
For example, in the user model, we can see that a single user has many articles.
[app/models/user.rb](https://github.com/thepracticaldev/dev.to/blob/master/app/models/user.rb)
```ruby
has_many :articles, dependent: :destroy
```
### Article
Inside of the article model, we see that a single article belongs to a user.
[app/models/article.rb](https://github.com/thepracticaldev/dev.to/blob/master/app/models/article.rb)
```ruby
belongs_to :user
```
## View
We can complete our MVC cycle in the view file where the articles are displayed to the user.
[app/views/dashboards/show.html.erb](https://github.com/thepracticaldev/dev.to/blob/master/app/views/dashboards/show.html.erb)
```ruby
<% @articles.each do |article| %>
<%= render "dashboard_article", article: article, organization: article.organization, org_admin: true, manage_view: false %>
<% end %>
```
## Conclusion
It's awesome that dev.to is an open source project! We can continue this process to incrementally learn more about the codebase.
1. Investigate a specific route in the `config/routes.rb` file.
2. Find the controller and action that the route is paired with
3. Investigate the models that the controllers are orchestrating for the view file.
4. Look at the view file to see what is actually rendered to the user.
| andyrewlee |
271,896 | How to create a 🌈 pattern with a CSS radial-gradient | The following Codepen demo includes a Sass mixin that you can use to create your own rainbow pattern!... | 0 | 2020-03-02T04:14:13 | https://dev.to/5t3ph/how-to-create-a-pattern-with-a-css-radial-gradient-1c1k | todayilearned, css, sass | The following Codepen demo includes a Sass mixin that you can use to create your own rainbow pattern! You may need to adjust the background size to your liking, or tweak the band sizing if you include more or less colors.
Colors based on a shirt my four year old loves to wear :)
{% codepen https://codepen.io/5t3ph/pen/jOPwxBG %}
[Check out this link](https://cssgradient.io/blog/gradient-patterns/) to get a jump start on creating patterns with CSS gradients. This pattern was forked from the "waves" pattern from that resource.
Bonus thing I learned today was that emoji are valid in Sass variable names (or at least with the compiler Codepen uses).
 | 5t3ph |
271,912 | Flutter issue: Android license status unknown on Windows | I keep getting Android license status unknown when installing flutter on Windows PS D:\Workplace\fl... | 0 | 2020-03-02T04:20:12 | https://robbinespu.gitlab.io/blog/2020/03/02/flutter-issue-android-license-status-unknown-on-windows/ | flutter, android, bugs, license | ---
title: "Flutter issue: Android license status unknown on Windows"
published: true
date: 2020-03-02 03:41:00 UTC
tags: Flutter,Android,Bugs,License
canonical_url: https://robbinespu.gitlab.io/blog/2020/03/02/flutter-issue-android-license-status-unknown-on-windows/
---
I keep getting `Android license status unknown` when installing flutter on Windows
```
PS D:\Workplace\flutter_projects> flutter doctor -v
[√] Flutter (Channel stable, v1.12.13+hotfix.8, on Microsoft Windows [Version 10.0.18363.657], locale en-MY)
• Flutter version 1.12.13+hotfix.8 at D:\Workplace\flutter
• Framework revision 0b8abb4724 (3 weeks ago), 2020-02-11 11:44:36 -0800
• Engine revision e1e6ced81d
• Dart version 2.7.0
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
• Android SDK at C:\Users\Robbi\AppData\Local\Android\sdk
• Android NDK location not configured (optional; useful for native profiling support)
• Platform android-29, build-tools 29.0.3
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b04)
X Android license status unknown.
Try re-installing or updating your Android SDK Manager.
See https://developer.android.com/studio/#downloads or visit https://flutter.dev/setup/#android-setup for detailed
instructions.
[√] Android Studio (version 3.6)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin version 44.0.2
• Dart plugin version 192.7761
• Java version OpenJDK Runtime Environment (build 1.8.0_212-release-1586-b04)
[√] VS Code (version 1.42.1)
• VS Code at C:\Users\Robbi\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.8.1
[√] Connected device (1 available)
• SM A505F • XXXXXYYYZHW • android-arm64 • Android 9 (API 28)
! Doctor found issues in 1 category.
```
I am using Android Studio 3.6.1 on my Windows 10 machine, and there is no `C:\Users%user%\AppData\Local\Android\Sdk\tools\bin` directory created… this is so weird

The step I took was to install `Android SDK command line tool (latest)` from the SDK tools.
After `Android SDK command line tool (latest)` was installed, I opened a terminal (I am using Git Bash), navigated to `C:\Users\<YOUR_USERNAME_HERE>\AppData\Local\Android\Sdk\cmdline-tools\latest\bin`, executed `./sdkmanager.bat --licenses` and accepted all SDK package licences, expecting this to solve the problem…
Unfortunately this still doesn't work with `flutter doctor`… dang! Do you know how to solve it? This issue has already been reported here [#51712](https://github.com/flutter/flutter/issues/51712) | robbinespu
271,966 | [Starter Kit No.2] Exposing general-purpose Web scraping tool with text analysis function (automatic tagging / visualization) | Easy Customizable Scraper Concept General-purpose Web scraping tool with text... | 0 | 2020-03-02T07:29:42 | https://dev.to/makotunes/starter-kit-no-2-exposing-general-purpose-web-scraping-tool-with-text-analysis-function-automatic-tagging-visualization-5d1 | python, scrapy, gensim, scraping | # Easy Customizable Scraper
## Concept

General-purpose Web scraping tool with text analysis function.
The following features help users start development.
- Easy settings
- Customizability
- Text analysis function (tagging / visualization)
Click here for the source code:
https://github.com/makotunes/easy-customizable-scraper
## Application example
- Collection of curated media articles and automatic tagging
- Recommendation engine
This algorithm is used in features of my personally developed product:
https://mockers.io/scanner
## Elemental technology
- Web scraping
- Automatic language detection
- Morphological analysis
- Feature tagging algorithm (original)
- 2D map visualization technology (original)
## Dependencies
- Docker
## Installation
It takes about 1-2 hours.
```Shell
docker build -t scanner .
```
or
```Shell
./build.sh
```
## Run
```Shell
docker run --rm -it -v "$PWD":/usr/src/app \
--name scanner --force scanner \
-e 'ENTRY_URL=http://recipe.hacarus.com/' \
-e 'ALLOW_RULE=/recipe/' \
-e 'IMAGE_XPATH=//*[@id="root"]/div/div/section/div/div/div[1]/figure/img' \
-e 'DOCUMENT_XPATH=//td/text()|//p/text()' \
-e 'PAGE_LIMIT=2000' \
-e 'EXCLUDE_REG=\d(年|月|日|時|分|秒|g|\u4eba|\u672c|cm|ml|g|\u5206\u679a\u5ea6)|hacarusinc|allrightsreserved' \
scanner:latest /usr/src/app/entrypoint.sh
```
or
```Shell
./run.sh
```
## Parameters
Set environment variables on the Docker container.
As long as at least ENTRY_URL is set, the tool will automatically scan the pages and pull out the text.
If no other options are specified, the defaults are optimized for curated media, so tasks such as extracting article text can be fully automated.
| Environment Variable | Description |
|----------------------|----------------------------------------------------------------------------------------------|
| ENTRY_URL            | (Required) Top URL of the site to start scanning from. All pages are scanned automatically.    |
| ALLOW_RULE           | Allow-filter rule for target URLs.                                                             |
| DENY_RULE            | Deny-filter rule for target URLs; takes precedence over ALLOW_RULE.                            |
| IMAGE_XPATH          | XPath of the image you want to capture on each page.                                           |
| DOCUMENT_XPATH       | XPath of the top node in the page from which text is to be extracted.                          |
| PAGE_LIMIT           | Limit on the number of scanned pages. -1 means unlimited.                                      |
| EXCLUDE_REG          | Regular expression for words that should not be extracted by morphological analysis.           |
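As a sketch of how the container might consume these variables, a hypothetical loader could read them with permissive defaults, ENTRY_URL being the only required one. This mirrors the table above and is illustrative, not the tool's actual code:

```python
import os

# Hypothetical configuration loader mirroring the parameter table above.
def load_config(env=os.environ):
    entry_url = env.get("ENTRY_URL")
    if not entry_url:
        raise ValueError("ENTRY_URL is required")
    return {
        "entry_url": entry_url,
        "allow_rule": env.get("ALLOW_RULE", ""),
        "deny_rule": env.get("DENY_RULE", ""),
        "page_limit": int(env.get("PAGE_LIMIT", "-1")),  # -1 = unlimited
        "exclude_reg": env.get("EXCLUDE_REG", ""),
    }
```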
## Result
result/res.json
## Project structure
| File | Description |
|----------------------|--------------------------------------------------------|
| src/scraper.py | Main scraping logic |
| src/categorizer.py | Main algorithm to tag and visualize passages. |
| src/tokenizer.py | Main algorithm for morphological analysis |
## Customize
#### custom/_formatter.py
Edit the XPath for the required HTML nodes, as below.
```Python
def formatter(sel):
res = {}
n_howtomake = int(len(sel.xpath('//*[@id="root"]/div/div/section/div/div/div[2]/div[1]/table[2]/tbody/tr/td/text()').extract()) / 2)
res["n_howtomake"] = n_howtomake
return res
```
#### custom/_finalizer.py
Edit the post-processing step to generate your expected output, as below.
```Python
import pandas as pd
def finalizer(res):
pages = res["scatter"]
pages = list(map(lambda x: x["user_meta"], pages))
df = pd.DataFrame(pages)
corr_df = df.loc[:, ["time", "n_howtomake", "n_components"]].corr()
res["analyzed"] = {}
res["analyzed"]["correlation"] = {}
res["analyzed"]["correlation"]["time-n_howtomake"] = corr_df.loc["time", "n_howtomake"]
return res
```
## Example
Here is an example of using this tool for scraping and analysis.
Let's analyze [this free cooking recipe site](http://recipe.hacarus.com/).
**If you can't access it, try opening it in incognito mode.**
All these results are stored in the "result" directory.
### Visualize the distributions of cooking time, number of ingredients and number of steps
```Python
import matplotlib.pyplot as plt
import pandas as pd
def finalizer(res):
pages = res["scatter"]
pages = list(map(lambda x: x["user_meta"], pages))
df = pd.DataFrame(pages)
fig = plt.figure()
x = df["time"]
y = df["n_howtomake"]
plt.scatter(x, y)
plt.savefig('./result/time-n_howtomake.png')
fig = plt.figure()
x = df["time"]
y = df["n_components"]
plt.scatter(x, y)
plt.savefig('./result/time-n_components.png')
fig = plt.figure()
x = df["n_howtomake"]
y = df["n_components"]
plt.scatter(x, y)
plt.savefig('./result/n_howtomake-n_components.png')
return res
```
#### Result
##### Cooking time vs. number of steps

##### Cooking time vs. number of ingredients

##### Number of steps vs. number of ingredients

### Examine the correlations between cooking time, number of ingredients and number of steps
```Python
import matplotlib.pyplot as plt
import pandas as pd
def finalizer(res):
pages = res["scatter"]
pages = list(map(lambda x: x["user_meta"], pages))
df = pd.DataFrame(pages)
corr_df = df.loc[:, ["time", "n_howtomake", "n_components"]].corr()
res["analyzed"] = {}
res["analyzed"]["correlation"] = {}
res["analyzed"]["correlation"]["time-n_howtomake"] = corr_df.loc["time", "n_howtomake"]
res["analyzed"]["correlation"]["time-n_components"] = corr_df.loc["time", "n_components"]
res["analyzed"]["correlation"]["n_howtomake-n_components"] = corr_df.loc["n_howtomake", "n_components"]
fig = plt.figure()
return res
```
#### Result
##### Cooking time vs. number of steps
`0.30457219729662316`
##### Cooking time vs. number of ingredients
`0.3949520467754227`
##### Number of steps vs. number of ingredients
`0.6869899620517819`
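The Pearson coefficients that `df.corr()` reports above can be reproduced by hand. Here is a small pure-Python version, shown on toy data rather than on the scraped recipes:

```python
import math

# Pearson correlation coefficient, computed from scratch:
# covariance of the two series divided by the product of their
# standard deviations (up to a common factor of n that cancels).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A coefficient near 0.69, like the steps/ingredients pair above, indicates a fairly strong positive relationship, while values around 0.3 to 0.4 are only weakly positive.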
### Select about three keywords that can be used to characterize each recipe
result/tagged.csv
|title |tag1 |tag2 |tag3 |
|--------------------------------|------|---------|--------|
|なすとトマトの中華和え(15分) |なす |トマト |大葉 |
|ぶりの照り焼き(45分) |両面 |照り焼き |水気 |
|おでん風煮(2時間) |大根 |こんにゃく |竹輪 |
|大根とツナのサラダ(15分) |ツナ |大根 |わかめ |
|鶏の照り焼き丼(20分) |片栗粉 |にんにく |れんこん |
|筑前煮(60分) |れんこん |ごぼう |こんにゃく |
|白菜とわかめの酢の物(15分) |白菜 |わかめ |しめじ |
|鮭のホイル焼き(25分) |玉ねぎ |しめじ |ピーマン |
|キャベツとハムの粒マスタード和え(15分) |キャベツ |しめじ |ハム |
|えのきとワカメの和え物(15分) |えのき |わかめ |きゅうり |
|里芋のおやき(30分) |里芋 |片栗粉 |桜えび |
|鶏肉と里芋の煮物(60分) |里芋 |鶏肉 |相性 |
|焼き万願寺唐辛子(10分) |万願寺唐辛子|かつお節 |作り方 |
|肉じゃが(45分) |玉ねぎ |牛肉 |じゃがいも |
|鰆の幽庵焼き(15分(漬け込む時間は省く)) |冷蔵庫 |鰆 |ゆず |
|オクラの煮びたし(10分(冷やす時間は除く)) |オクラ |しょうが |オクラ |
|春菊と油揚げの煮びたし(15分) |春菊 |油揚げ |春菊 |
|しいたけのツナマヨ焼き(10分) |マヨネーズ |塩コショウ |しいたけ |
|コールスロー(15分) |キャベツ |マヨネーズ |きゅうり |
|じゃがいもとタコのガーリック炒め(20分) |にんにく |じゃがいも |タコ |
|棒棒鶏(30分) |しょうが |トマト |きゅうり |
|アボカドのチーズ焼き(15分) |アボカド |ハム |チーズ |
|キャベツと大葉のさっぱり和え(10分) |大葉 |キャベツ |大葉 |
|カニカマサラダ(10分) |レタス |きゅうり |サラダ |
|ズッキーニともやしのナムル(15分) |ズッキーニ |もやし |粗熱 |
|春雨サラダ(15分) |春雨 |きゅうり |ハム |
|白菜と油揚げのみぞれ煮(30分) |大根 |白菜 |油揚げ |
|牛肉とれんこんの甘辛炒め(30分) |れんこん |にんにく |牛肉 |
|豚丼(30分) |しょうが |しめじ |レタス |
|紅白なます(30分) |ゆず皮 |大根 |部分 |
|里芋のガーリック焼き(30分) |里芋 |にんにく |香り |
|ブロッコリーのごまみそ和え(10分) |ブロッコリー|和風 |みそ |
|ブロッコリーのゴマ和え(15分) |ブロッコリー|出汁醤油 |粗熱 |
|かぶの甘酢漬け(1時間) |ゆず |甘酢 |昆布 |
|さんまのしょうが煮(30分) |さんま |しょうが |圧力 |
|スパゲティーサラダ(20分) |大根 |きゅうり |スパゲッティ |
|切り干し大根の煮物(25分) |切り干し大根|油揚げ |短冊 |
|なすとオクラの和え物(10分) |なす |オクラ |出汁醤油 |
|なすと厚揚げのおろしあん(30分) |厚揚げ |なす |片栗粉 |
|ごぼうのごまマヨサラダ(15分) |ごぼう |好み |一味 |
|水菜と長いものわさび和え(15分) |水菜 |長いも |わさび |
|ピーマンのじゃこ炒め(15分) |ピーマン |雑魚 |顆粒和風だし |
|なすと豚肉のごまみそ丼(20分) |なす |丼 |ピーマン |
|ピリ辛豆腐ステーキ(30分) |豆腐 |しょうが |白ネギ |
|白菜とハムの青じそサラダ(20分) |白菜 |大葉 |ハム |
|鮭のシャリアピンソースがけ(30分) |鮭 |にんにく |ソース |
|白菜のさっぱりサラダ(15分) |白菜 |きゅうり |サラダ |
|かぶと肉団子の煮物(30分) |しょうが |鶏ミンチ |片栗粉 |
|ごぼうの梅おかか煮(45分) |ごぼう |かつお節 |圧力鍋 |
|切り干し大根とほうれん草の和え物(20分(水に戻す時間は除く))|ほうれん草 |切り干し大根 |熱湯 |
|かぼちゃと揚げの煮物(20分) |かぼちゃ |油揚げ |揚げ |
|さつまいものレモン煮(30分) |さつまいも |レモン汁 |レモン |
|菜の花の辛子和え(15分) |菜の花 |練りからし |長さ |
|かぼちゃとこんにゃくの煮物(30分) |こんにゃく |かぼちゃ |熱湯 |
|ゆず入り湯豆腐(1時間) |豆腐 |春菊 |好み |
|大根と厚揚げの煮物(60分) |厚揚げ |大根 |彩り |
|小松菜ぎょうざ(45分) |小松菜 |にんにく |しょうが |
|水菜と油揚げの煮びたし(15分) |油揚げ |水菜 |食感 |
|ふろふき大根(30分以上) |大根 |味噌 |いりごま |
|春菊の白和え(15分) |春菊 |豆腐 |白和え |
|なすのホイル焼き(15分) |なす |生姜 |ホイル |
|えびとニラの中華風卵炒め(30分) |玉ねぎ |ニラ |えび |
|ししゃもの南蛮漬け(30分) |ししゃも |南蛮漬け |ピーマン |
|八宝菜(30分) |豚肉 |白菜 |玉ねぎ |
|ブロッコリーのわさマヨ和え(15分) |ブロッコリー|食感 |わさび |
|鶏のすき煮(30分) |鶏肉 |鶏もも肉 |しいたけ |
|里芋の梅おかか和え(35分) |里芋 |梅干し |かつお節 |
|ブロッコリーの磯和え(15分) |ブロッコリー|出汁醤油 |焼き海苔 |
|焼き鳥丼(20分) |鶏肉 |鶏もも肉 |白ネギ |
|ほうれん草のお浸し(10分) |かつお節 |ほうれん草 |10分 |
|さんまの梅しそロール(45分) |片栗粉 |さんま |大葉 |
|きゅうりとトマトの土佐酢和え(30分) |トマト |きゅうり |かつお節 |
|さばの味噌煮(30分) |さば |味噌 |しょうが |
|エリンギのバター炒め(15分) |エリンギ |バター |エリンギ |
|さつまいもとクリームチーズのサラダ(20分) |さつまいも |マヨネーズ |塩コショウ |
|薄揚げの納豆キムチ詰め(15分) |キムチ |納豆 |長ネギ |
|鮭の味噌ヨーグルト漬け(15分(漬け込む時間は除く)) |味噌 |鮭 |ヨーグルト |
|五目豆(45分) |ごぼう |れんこん |こんにゃく |
|新生姜と水菜の肉巻き(30分) |新生姜 |水菜 |肉 |
|小松菜とツナの和え物(15分) |小松菜 |ツナ |水気 |
|手羽中と大根の煮物(45分) |大根 |手羽中 |弱火 |
|手羽先の照り焼き(60分) |表面 |手羽先 |にんにく |
|ほうれん草の梅和え(15分) |ほうれん草 |梅 |梅干し |
|なめこおろし(15分) |なめこ |大葉 |かつお節 |
|簡単タンドリーチキン(1時間以上) |1時間 |塩コショウ |タンドリーチキン|
|あさりの酒蒸し(15分) |あさり |にんにく |みじん切り |
|ピーマンの肉詰め(30分) |玉ねぎ |ピーマン |肉 |
|ちくわの磯辺揚げ(15分) |竹輪 |青のり |衣 |
|長いもの梅和え(10分) |長いも |かつお節 |梅干し |
|水菜とアボカドのサラダ(15分) |アボカド |水菜 |豆腐 |
|酢鶏(20分) |鶏肉 |鶏がらスープの素 |一口 |
|鯛の西京焼き(15分(漬け込む時間は除く)) |鯛 |冷蔵庫 |魚 |
|きゅうりの塩昆布和え(10分) |きゅうり |塩昆布 |乱切り |
|なすの煮びたし(20分) |なす |しょうが |作り方 |
|ごぼうと人参の肉巻き(30分) |ごぼう |肉 |にんにく |
|大根・里芋・イカの煮物(40分) |イカ |大根 |里芋 |
|回鍋肉(30分) |豚肉 |にんにく |片栗粉 |
|ほうれん草と干しえびのゴマ和え(15分) |ほうれん草 |干しエビ |干しえび |
|里芋のそぼろ煮(30分) |里芋 |片栗粉 |鶏ミンチ |
|三度豆と人参のおかか和え(10分) |三度豆 |湯 |出汁醤油 |
|中華丼(30分) |チンゲン菜 |しめじ |豚肉 |
|きゅうりとたこの酢の物(20分) |きゅうり |たこ |わかめ |
|新玉ねぎのコンソメ煮込み(45分) |片栗粉 |鶏ミンチ |新玉ねぎ |
|れんこんのきんぴら(15分) |れんこん |いりごま |中火 |
|野菜たっぷり牛丼(20分) |玉ねぎ |ニラ |しめじ |
|ホタテとチンゲン菜のクリーム煮(20分) |片栗粉 |にんにく |チンゲン菜 |
|ほうれん草とごぼうの白和え(60分) |ごぼう |豆腐 |ほうれん草 |
|さんまの蒲焼き(30分) |さんま |ごま |大葉 |
|ひじきの炒め煮(60分) |ひじき |油揚げ |大豆 |
|オクラの納豆和え(10分) |かつお節 |オクラ |納豆 |
|じゃこのサラダ(10分) |縮緬雑魚 |貝割れ大根 |水菜 |
|里芋のホットサラダ(45分) |里芋 |ほうれん草 |ベーコン |
|かぼちゃのサラダ(15分) |かぼちゃ |ヨーグルト |きゅうり |
|豆腐のきのこあんかけ(10分) |豆腐 |しめじ |えのき |
|春巻き(60分) |しょうが |春雨 |ニラ |
|れんこんのはさみ焼き(30分) |れんこん |しょうが |鶏ミンチ |
|高野豆腐の含め煮(30分) |高野豆腐 |竹串 |水気 |
|豚ミンチと白菜の炒め物(15分) |春雨 |白菜 |にんにく |
|いわしのさっぱり煮(45分) |いわし |しょうが |長ネギ |
|れんこんのカレー炒め(15分) |れんこん |OLIVE OIL|カレー粉 |
|ささみの中華風サラダ(25分) |ささみ |もやし |きゅうり |
|茄子と豚肉のピリ辛味噌炒め(30分) |豚肉 |茄子 |豆板醤 |
|ささみのから揚げ(30分) |ささみ |にんにく |しょうが |
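One common way to pick a few characteristic keywords per document, as in the table above, is TF-IDF scoring over the morphological-analysis tokens. The sketch below is illustrative and not necessarily this project's actual tagging algorithm:

```python
import math
from collections import Counter

# For each document (a list of tokens), score every token by
# term frequency times inverse document frequency, and keep the top k.
def top_keywords(docs, k=3):
    n = len(docs)
    df = Counter()                      # document frequency per token
    for tokens in docs:
        df.update(set(tokens))
    result = []
    for tokens in docs:
        tf = Counter(tokens)
        scored = {
            w: (c / len(tokens)) * math.log((1 + n) / (1 + df[w]))
            for w, c in tf.items()
        }
        ranked = sorted(scored, key=lambda w: -scored[w])
        result.append(ranked[:k])
    return result
```

Tokens that appear often in one recipe but rarely across the corpus (specific ingredients) rank high, while ubiquitous words (such as measurement units) score near zero, which matches the EXCLUDE_REG filtering idea above.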
## Call for ideas
What do you want to do with this tool?
If there is a need, it may be addressed in a future update.
We look forward to your comments.
| makotunes |
271,976 | A Little Rails Magic | Still new to the game here and loving how rails really makes your life just so much easier. I feel li... | 0 | 2020-03-02T08:13:19 | https://dev.to/agandaurii/a-little-rails-magic-46lb | ruby, codenewbie, rails | Still new to the game here and loving how rails really makes your life just so much easier. I feel like I run into a lot of those moments while learning rails and find myself feeling like a wizard, as whatever rails is doing must be magic! Although it feels that way (and objectively would be much cooler if that was actually happening. I mean, how cool would it be to have your job title be "Ruby Wizard"!), rails is doing some pretty neat stuff in the background. I ran into the example below the other day and thought it would be beneficial to break down the magic so I can really understand whats going on so I can better utilize it in the future.
Let's start off with our schema:
create_table "pokemonabilities", force: :cascade do |t|
t.integer "pokemon_id"
t.integer "ablity_id"
end
create_table "pokemons", force: :cascade do |t|
t.string "name"
t.string "type"
end
create_table "abilities", force: :cascade do |t|
t.string "name"
t.string "description"
end
We've got a many-to-many relationship with Pokemon and abilities, so we have our join table to track those relationships. Once you have your models set up, the ActiveRecord relationships would look like this:
```ruby
class Pokemon < ApplicationRecord
  has_many :pokemonabilities
  has_many :abilities, through: :pokemonabilities
end

class Ability < ApplicationRecord
  has_many :pokemonabilities
  has_many :pokemons, through: :pokemonabilities
end

class Pokemonability < ApplicationRecord
  belongs_to :pokemon
  belongs_to :ability
end
```
All smooth sailing so far, but a small magic shoutout to rails being able to automatically change "ability" to "abilities" when necessary. Great, so we have our associations set up, so we can start making stuff. Better yet, let's have our users start making things. We can set up a basic controller that looks like this:
```ruby
def new
  @pokemon = Pokemon.new
end

def create
  @pokemon = Pokemon.create(pokemon_params)
  redirect_to @pokemon
end

private

def pokemon_params
  params.require(:pokemon).permit(:name, :type)
end
```
Looking good. Now, let's create a form where our Pokemon-loving users can create their very own Pokemon! We could start with something like this:
```erb
<%= form_for @pokemon do |f| %>
  Name: <%= f.text_field :name %>
  Type: <%= f.text_field :type %>
  <%= f.submit %>
<% end %>
```
Great! Now many fun new Pokemon can be born into the world. But now we have to go make something else to make sure these Pokemon have abilities. Why not just do it in the same form? We'll use the collection select function to give an easy list for our user to pick from. It's a pretty nifty option, which you can read more about here: https://guides.rubyonrails.org/form_helpers.html. And here's our new form:
```erb
<%= form_for @pokemon do |f| %>
  Name: <%= f.text_field :name %>
  Type: <%= f.text_field :type %>
  Ability: <%= f.collection_select :ability_ids, Ability.all, :id, :name %>
  <%= f.submit %>
<% end %>
```
However, if we try to submit our form at this point, something odd happens. It will successfully create our new Pokemon, but it won't have any ability associated with it. What gives? Here is where the magic comes in! It comes down to updating our params. That is what the collection_select is giving us, so we just have to use it. Here's what the updated pokemon_params method will look like:
```ruby
def pokemon_params
  params.require(:pokemon).permit(:name, :type, :ability_ids)
end
```
But if you are like me and got confused at this point, how on earth does just adding `:ability_ids` allow a Pokemon to be associated with those abilities? After all, we are passing our pokemon_params through our model to create a new instance, which looks essentially like this: `Pokemon.create(:name, :type, :ability_ids)`. Name and type are present to make a Pokemon because that is how we set it up in our migration, but `ability_ids` aren't even on our Pokemon table. In fact, `ability_ids` doesn't show up on any of our tables, only `ability_id`. What's going on here?
Rails and ActiveRecord are doing something pretty cool here. When you create the new instance of a Pokemon, and passing in `ability_ids`, things are started to get checked in the background. When `ability_ids` is read, ActiveRecord is going to start checking "does this exist somewhere else?", since it recognizes it is not present on the Pokemon class. It will then check your associations to see if the value is present in a different class, and in this case, it will find that `ability_id` is associated with the Pokemonabilities class. Since it now has the Pokemon id and ability id, and we took the create action earlier, it creates a new instance of the Pokemonabilities class, which is now automatically associated with both our Pokemon and our chosen abilities!
Note that, with the way these associations were set up, it is important to update your strong params to use the plural of the id you want to pass through. Even if you are only adding one ability at a time, since a Pokemon has the potential to have many abilities, ActiveRecord will be looking for a plural `ids` when creating things in this way.
So that's the magic! Although it is hopefully more clear how it's working in the background now, it's still fun to see how much work our app is doing for us so we can get on to more interesting things.
| agandaurii |
272,011 | React Hooks cheat sheet | Not so long ago I started working with functional components instead of class based ones. The main go... | 0 | 2020-03-02T09:20:12 | https://dev.to/bornfightcompany/react-hooks-cheat-sheet-3kl9 | engineeringmonday, javascript, react, tutorial | Not so long ago I started working with functional components instead of class based ones. The main goal was to learn how to implement React Hooks inside them. This way we can write less code and make it more reusable.
The benefits of using hooks and functional components are reusability, simpler & shorter code and the simplicity of testing those components.
The usual class approach of things is now a thing of the past. And here I will share short and understandable react hooks cheat sheet. This is not a tutorial for hooks as there are many articles online and the docs are [really good](https://reactjs.org/docs/hooks-intro.html). This serves as a quick reference for people already somewhat familiar with writing hooks. If you are new to hooks, you can still take a look. With that said, let's begin.
**UseState - similar to React state and setState**
- with primitive value
```javascript
const App = () => {
const [carSpeed, updateCarSpeed] = useState(10);
return (
<div>
<p>Car is going {carSpeed} km/h</p>
<button onClick={() => updateCarSpeed(carSpeed + 5)}>
Speed up
</button>
</div>
);
};
```
- with object
```javascript
export const App = () => {
const [carForm, updateForm] = useState({});
const updateField = (e) => {
updateForm({ ...carForm, [e.target.name]: e.target.value });
};
const handleSubmit = (e) => {
e.preventDefault();
console.log(carForm);
};
return (
<form onSubmit={handleSubmit}>
<label>
Car Owner:
<input
value={carForm.owner}
name="owner"
onChange={updateField}
/>
</label>
<br />
<label>
Car model:
<input
value={carForm.model}
name="model"
onChange={updateField}
/>
</label>
<button>Submit</button>
</form>
);
};
```
**UseEffect - similar to componentDidMount and componentDidUpdate**
- only triggers once (because of empty array param)
```javascript
export const App = () => {
const [carsData, updateCars] = useState({});
useEffect(() => {
fetch("http://example.com/cars.json")
.then((resp) => resp.json())
.then((data) => {
updateCars(data);
});
}, []);
  const renderCars = () => {
    // Guard until the fetch resolves; the initial state is an empty object.
    if (!carsData.cars) return null;
    return carsData.cars.map((car) => (
      <p key={car.id}>{car.name}</p>
    ));
  };
return <div>{renderCars()}</div>;
};
```
- trigger on carName variable change
```javascript
export const App = () => {
const [carName, updateCarName] = useState("");
useEffect(() => {
console.log("changed");
}, [carName]);
return (
<div>
<input
value={carName}
onChange={(e) => updateCarName(e.target.value)}
/>
</div>
);
};
```
**UseReducer with React.memo HOC and useCallback**
- This example makes use of the [useReducer](https://reactjs.org/docs/hooks-reference.html#usereducer) hook, which acts similarly to Redux. It has a reducer and actions that change the state held by the reducer. We also make use of [React.memo](https://reactjs.org/docs/react-api.html#reactmemo) and useCallback so that the existing "Car" components are not re-rendered when a single car is checked as sold.
- [UseCallback](https://reactjs.org/docs/hooks-reference.html#usecallback) - this hook is used when you have a component with a frequently re-rendering child to which you pass a callback. Without it, the addCar function would be re-instantiated each time a new car is added to the list.
```javascript
// initial cars state
const initialState = [
{
id: id(),
name: "Audi A4",
description: 'Black tint with red wheels, 100kw',
sold: false
},
{
id: id(),
name: "Porsche 911",
description: 'Cherry red tint with dark golden wheels, 300kw',
sold: false
},
{
id: id(),
name: "Lamborghini Gallardo",
description: 'Lamborghini green with black wheels, 500kw',
sold: false
},
];
// action names
const CAR_ADD = 'CAR_ADD';
const CAR_SELL = 'CAR_SELL';
// the reducer
const reducer = (state, action) => {
if (action.type === CAR_ADD) {
return [action.payload, ...state];
}
if (action.type === CAR_SELL) {
return state.map(car => {
if (car.id !== action.payload.id) {
return car;
}
return { ...car, sold: !car.sold };
});
}
return state;
};
const App = () => {
const [cars, dispatch] = useReducer(reducer, initialState);
  const addCar = useCallback(
    ({ name, description }) => {
      dispatch({
        type: CAR_ADD,
        payload: {
          name,
          description,
          sold: false,
          id: id()
        }
      });
    },
    [dispatch]
  );
const toggleSold = useCallback(
id => {
dispatch({
type: CAR_SELL,
payload: {
id
}
});
},
[dispatch]
);
return (
<div style={{ maxWidth: 400, margin: '0 auto' }}>
<NewCarForm onSubmit={addCar} />
<Cars cars={cars} onSell={toggleSold} />
</div>
);
};
const Cars = ({ cars = [], onSell }) => {
return (
<div>
<h2>Cars ({cars.length})</h2>
{cars.map(car => (
<Car key={car.id} car={car} onSell={onSell} />
))}
</div>
);
};
const Car = React.memo(({ car, onSell }) => {
return (
<div style={{border:"1px solid", margin: 10, padding: 10}}>
<h3>{car.name}</h3>
<p>{car.description}</p>
<div>
<label>
<input
type="checkbox"
checked={car.sold}
onChange={() => onSell(car.id)}
/>
Sold
</label>
</div>
</div>
);
});
const NewCarForm = React.memo(({ onSubmit }) => {
const [name, setCarName] = useState('');
const [description, setCarDescription] = useState('');
const handleChange = e => {
e.preventDefault();
onSubmit({ name, description });
};
return (
<form onSubmit={handleChange}>
<input
placeholder="Car name"
type="text"
value={name}
onChange={event => setCarName(event.target.value)}
/>
<input
placeholder="Car description"
type="text"
value={description}
onChange={event => setCarDescription(event.target.value)}
/>
<input type="submit" />
</form>
);
});
```
That would be all. Thank you for reading, kind stranger. Do you have something of your own to add to the list? Let me know.
| jurajuki |
272,082 | How to Add Dark Mode to React with Context and Hooks | More and more, we are seeing the dark mode feature in the apps that we are using every day. From mobi... | 0 | 2020-03-02T10:17:31 | https://www.alterclass.io/blog/how-to-add-dark-mode-to-react-with-context-and-hooksnp7gp08cid9s1kcn5kyp9 | react, javascript, webdev | More and more, we are seeing the dark mode feature in the apps that we are using every day. From mobile to web apps, the dark mode has become necessary for companies that want to take care of their user's eyes. Indeed, having a bright screen at night is really painful for our eyes. By turning (automatically) the dark mode helps reduce this pain and keep our users engage with our apps all night long (or not).
In this post, we are going to see how we can easily implement a dark mode feature in a ReactJS app. In order to do so, we'll leverage some React features like context, function components, and hooks.
Too busy to read the whole post? Have a look at the [CodeSandbox demo](https://codesandbox.io/s/alterclass-darkmode-s9uru) to see this feature in action along with the source code.
[
](http://alterclass.io/)
# What Will You Learn?
By the end of this post, you will be able to:
- Combine React `Context` and the `useReducer` hook to share a global state throughout the app.
- Use the `ThemeProvider` from the `styled-components` library to provide a theme to all React components within our app.
- Build a dark mode feature into your React app in an easy and non-intrusive way.
# What Will You Build?
In order to add the dark mode feature into our app, we will build the following features:
- A `Switch` component to be able to enable or disable the dark mode.
- A dark and light theme for our styled-components to consume.
- A global `Context` and `reducer` to manage the application state.
# Theme Definition
The first thing that we need for our dark mode feature is to define the light and dark themes of our app. In other words, we need to define the colors (text, background, ...) for each theme.
Thanks to the `styled-components` library we are going to use, we can easily define our themes in a distinct file as plain JavaScript objects and provide them to the `ThemeProvider` later.
Below is the definition of the light and dark themes for our app:
```js
const black = "#363537";
const lightGrey = "#E2E2E2";
const white = "#FAFAFA";
export const light = {
text: black,
background: lightGrey
};
export const dark = {
text: white,
background: black
};
```
As you can notice, this is a really simplistic theme definition. It's up to you to define more theme parameters to style the app according to your visual identity.
Now that we have both our dark and light themes, we can focus on how we’re going to provide them to our app.
# Theme Provider
By leveraging the React Context API, the `styled-components` library provides us with a `ThemeProvider` wrapper component. Thanks to this component, we can add full theming support to our app: it provides a theme to all React components underneath itself.
Let's add this wrapper component at the top of our React components' tree:
```js
import React from "react";
import { ThemeProvider } from "styled-components";
export default function App() {
return (
<ThemeProvider theme={...}>
...
</ThemeProvider>
);
};
```
You may have noticed that the `ThemeProvider` component accepts a theme property. This is an object representing the theme we want to use throughout our app. It will be either the light or dark theme depending on the application state. For now, let's leave it as is as we still need to implement the logic for handling the global app state.
But before implementing this logic, we can add global styles to our app.
# Global Styles
Once again, we are going to use the `styled-components` library to do so. Indeed, it has a helper function named `createGlobalStyle` that generates a styled React component that handles global styles.
```js
import React from "react";
import { ThemeProvider, createGlobalStyle } from "styled-components";
export const GlobalStyles = createGlobalStyle`...`;
```
By placing it at the top of our React tree, the styles will be injected into our app when rendered. In addition to that, we'll place it underneath our `ThemeProvider` wrapper. Hence, we will be able to apply specific theme styles to it. Let's see how to do it.
```js
export const GlobalStyles = createGlobalStyle`
body, #root {
background: ${({ theme }) => theme.background};
color: ${({ theme }) => theme.text};
display: flex;
flex-direction: row;
justify-content: center;
align-items: center;
font-family: BlinkMacSystemFont, -apple-system, 'Segoe UI', Roboto, Helvetica, Arial, sans-serif;
}
`;
export default function App() {
return (
<ThemeProvider theme={...}>
<>
<GlobalStyles />
...
</>
</ThemeProvider>
);
};
```
As you can see, the global text and background color are provided by the loaded theme of our app.
It's now time to see how to implement the global state.
[
](http://alterclass.io/)
# Global State
In order to share a global state that will be consumed by our components down the React tree, we will use the `useReducer` hook and the React `Context` API.
As stated by the ReactJS documentation, `Context` is the perfect fit to share the application state of our app between components.
> Context provides a way to pass data through the component tree without having to pass props down manually at every level.
And the `useReducer` hook is a great choice to handle our application state that will hold the current theme (light or dark) to use throughout our app.
This hook accepts a `reducer` and returns the current state paired with a `dispatch` method. The reducer is a function of type `(state, action) => newState` that manages our state. It is responsible for updating the state depending on the type of action that has been triggered. In our example, we will define only one type of action, called `TOGGLE_DARK_MODE`, which will enable or disable the dark mode.
Let's create this reducer function in a separate file, `reducer.js`:
```js
const reducer = (state = {}, action) => {
switch (action.type) {
case "TOGGLE_DARK_MODE":
return {
isDark: !state.isDark
};
default:
return state;
}
};
export default reducer;
```
As you may have noticed, our state holds a single boolean variable, `isDark`. When the `TOGGLE_DARK_MODE` action is triggered, the reducer updates the `isDark` state variable by toggling its value.
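Because the reducer is a plain function, the toggle behaviour is easy to check in isolation — for example (the reducer is repeated here so the snippet stands alone):

```javascript
const reducer = (state = {}, action) => {
  switch (action.type) {
    case "TOGGLE_DARK_MODE":
      return { isDark: !state.isDark };
    default:
      return state;
  }
};

// Each TOGGLE_DARK_MODE action flips the flag...
const toggled = reducer({ isDark: false }, { type: "TOGGLE_DARK_MODE" });
console.log(toggled.isDark); // true

// ...while unknown action types leave the state untouched
console.log(reducer(toggled, { type: "SOMETHING_ELSE" }).isDark); // true
```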
Now that we have our `reducer` implemented we can create our `useReducer` state and initialize it. By default, we will disable the dark mode.
```js
import React, { useReducer } from "react";
import reducer from "./reducer";
export default function App() {
const [state, dispatch] = useReducer(reducer, {
isDark: false
});
...
};
```
The only missing piece in our global state implementation is the Context. We'll also define it in a distinct file and export it, `context.js`:
```js
import React from "react";
export default React.createContext(null);
```
Let's now combine everything together into our app and use our global state to provide the current theme to the `ThemeProvider` component.
```js
import React, { useReducer } from "react";
import { ThemeProvider, createGlobalStyle } from "styled-components";
import { light, dark } from "./themes";
import Context from "./context";
import reducer from "./reducer";
...
export default function App() {
const [state, dispatch] = useReducer(reducer, {
isDark: false
});
return (
<Context.Provider value={{ state, dispatch }}>
<ThemeProvider theme={state.isDark ? dark : light}>
<>
<GlobalStyles />
...
</>
</ThemeProvider>
</Context.Provider>
);
};
```
As you can see, the `Context` provides, through its `Provider`, the current application state and the `dispatch` method that other components will use to trigger the `TOGGLE_DARK_MODE` action.
# The Switch Component
Well done 👏👏 on completing all the steps so far. We are almost done. We’ve implemented all the logic and components needed for enabling the dark mode feature. Now it’s time to trigger it in our app.
To do so, we'll build a `Switch` component to allow users to enable/disable dark mode. Here's the component itself:
```js
import React, { useContext } from "react";
import Context from "./context";
import styled from "styled-components";
const Container = styled.label`
position: relative;
display: inline-block;
width: 60px;
height: 34px;
margin-right: 15px;
`;
const Slider = styled.span`
position: absolute;
top: 0;
display: block;
cursor: pointer;
width: 100%;
height: 100%;
background-color: #ccc;
border-radius: 34px;
-webkit-transition: 0.4s;
transition: 0.4s;
&::before {
position: absolute;
content: "";
height: 26px;
width: 26px;
margin: 4px;
background-color: white;
border-radius: 50%;
-webkit-transition: 0.4s;
transition: 0.4s;
}
`;
const Input = styled.input`
opacity: 0;
width: 0;
height: 0;
margin: 0;
&:checked + ${Slider} {
background-color: #2196f3;
}
&:checked + ${Slider}::before {
-webkit-transform: translateX(26px);
-ms-transform: translateX(26px);
transform: translateX(26px);
}
&:focus + ${Slider} {
box-shadow: 0 0 1px #2196f3;
}
`;
const Switch = () => {
const { dispatch } = useContext(Context);
const handleOnClick = () => {
// Dispatch action
dispatch({ type: "TOGGLE_DARK_MODE" });
};
return (
<Container>
<Input type="checkbox" onClick={handleOnClick} />
<Slider />
</Container>
);
};
export default Switch;
```
Inside the `Switch` component, we are using the `dispatch` method from the `Context` to toggle the dark mode theme.
Finally, let's add it to the app.
```js
export default function App() {
...
return (
<Context.Provider value={{ state, dispatch }}>
<ThemeProvider theme={state.isDark ? dark : light}>
<>
<GlobalStyles />
<Switch />
</>
</ThemeProvider>
</Context.Provider>
);
};
```
# Conclusion
Dark mode has been a highly requested feature, and we successfully added support for it in our React application by using some of the latest React features. I hope this post will help you add dark mode capability to your app and save the eyes of your users.
[
](http://alterclass.io/) | gdangelo |
272,085 | Inspiring Stories: Rayta van Rijswijk | As a society we tend to focus on titles and roles, and we forget that behind each title there is a pe... | 0 | 2020-03-03T12:28:00 | https://dev.to/azure_heroes/inspiring-stories-rayta-van-rijswijk-242g | iwd2020, womenintech, azureheroes | As a society we tend to focus on titles and roles, and we forget that behind each title there is a person who has a story to tell. And truly every person’s story is unique.
In honor of International Women's day, we interview inspiring women from the community on the story of how they got into Tech, and where they are today.
In this post, I interview [Rayta van Rijswijk](https://twitter.com/raytalks) who is based in Amsterdam, the Netherlands.
# Meet Rayta
Yes, there’s one Rayta here and no, it’s not Raita (the famous Indian yogurt, cucumber side dish), thanks mom and dad, now people cannot stop giggling when they read my name.

Anyway, you can call me Ray or Rei :) and as a dark skinned woman in tech I’ve had some ‘interesting’ challenges. Women, LGBTQ+ and PoC are still treated differently or are a scarcity in tech and I’d like to change that. All the meet-ups and conferences I organise with my team have a Code of Conduct and are backed by a CoC team. To motivate more women to enter the tech industry, we make sure there’s a separate room at the venue for breastfeeding mothers and there’s a space for small kids to stay (overviewed by babysitters ofc).
**When did you first become interested in technology and what sparked this interest?**
My father was a mechanic. When I was a little girl, you could usually find me outside, next to a car my father was fixing. Barely able to look into the hood, I was handing him tools for the broken engine he was fixing. I was always very curious about how engines and machines worked, but was never motivated nor encouraged to engage more.
**Describe your way towards your first job in tech; how did you land your current job?**
I enrolled for a Web Development Bootcamp years ago because I wanted to learn how to code. I loved the bootcamp so much that I decided to learn more. I started to work for a SaaS company where I got a mentor who taught me how to write Ruby outside working hours. But I wanted to go faster and learn more, so I enrolled into a Coding Academy. After an immersive training I landed my first developer job via a community member of the Amsterdam Ruby meetup (which I co-organize).
**Tell us more about your current job – e.g. what do you like most about your role?**
I’m part of Team Platform at YoungCapital. I enjoy the DevOps-y part of the job a lot. So, it’s not just writing code and making the “car drive”, but actually building the engine. What happens under the hood when you deploy your app. I love this part. We use multiple platforms and each platform is structured differently. Also, I’m part of an amazing team that takes time to teach, mentor and guide me. So it’s not just the role that makes me love the job :)
My “second” job within the team is Scrum Master; making sure processes are guarded and the team is aligned.
**What does your typical day look like?**
Coffee first, obviously. Then we have standup with the team. Since our stakeholders are our own developers, we have to make sure that they can do their jobs without any deployment issues. So if we get ’tickets’ (“Hey Ops, I got a 500, please help”) we solve them asap. I also have my own tickets to work on, so if there are no meetings scheduled, I work on those as well.
**What do you do in your free time?**
I’m a sport fanatic, I’ve been kickboxing since I was 15 and I’m still going strong. I hit the gym hard because my eventual goal is: to be able to wear high heels even when I’m 80 years old (if I get there) and do my own grocery shopping and carry my own bags then. I also enjoy gardening, reading, dancing…if only my week consisted of 10 days… Guilty pleasure: Manga and Anime.

**What advice would you give to women and girls who dream about a career in tech?**
First, you can do this. You don’t have to be a whiz kid, you just need to be motivated. Be resilient. You got this.
Find a mentor! I had a mentor when I started to learn code and I still go to them when I’m stuck or need help. They can tell you about challenges, do’s and don’ts and also what to learn and what to skip (for now).
There are many many senior tech people who would love to help you! Don’t know where to start? Go to tech meet-ups, connect with people and ask questions. Read, tinker, try things out. Trial and error is how you learn.
Need some motivation? I'm an organizer of events like Rails Girls for example, which is an event for womxn and girls. It’s a taste of code. No experience needed. Hit me up when you go, let’s connect! | floord |
272,102 | Introducing minicli: a microframework for CLI-centric PHP applications | In the previous posts of the Building Minicli series, we've seen how to bootstrap a dependency-free microframework on top of vanilla PHP. This post will give you an overview of how to use the most updated version of Minicli to create a CLI application in PHP. | 2,278 | 2020-03-02T11:59:05 | https://dev.to/erikaheidi/introducing-minicli-a-microframework-for-cli-centric-php-applications-44ik | showdev, php, cli | ---
title: Introducing minicli: a microframework for CLI-centric PHP applications
published: true
description: In the previous posts of the Building Minicli series, we've seen how to bootstrap a dependency-free microframework on top of vanilla PHP. This post will give you an overview of how to use the most updated version of Minicli to create a CLI application in PHP.
series: building-minicli
tags: showdev, php, cli
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/cww20brm0vqazrbam2yv.png
---
In the previous posts of the [Building Minicli](https://dev.to/erikaheidi/bootstrapping-a-cli-php-application-in-vanilla-php-4ee) series, we demonstrated the steps to bootstrap a dependency-free microframework for CLI-only applications in PHP.
Minicli was created as an educational experiment and also as a lightweight base that I could reuse for my personal projects. I like to think of it as the most basic unit I was able to put together, on top of which I could build my toys.
It's been months since I shared my latest post on this series, and I was reluctant to share what I've been working on because it always feels like an incomplete work. But will it ever be complete (or feel like so)? Probably not. Minicli is open source since day 0, and although I never intended to turn it into a mainstream project, I also think it can help people who are interested in building simple things in the command line without the overkill of dozens of external requirements.
So I'd like to officially introduce you to [Minicli](https://github.com/minicli/minicli), a *highly experimental* dependency-free microframework for CLI-centric PHP apps.
While I don't advocate for reinventing all the wheels in an application, I believe there should be a starting point that doesn't require 10+ different libraries for basic parsing and routing of commands. From there, you should be able to consciously choose what you'll be depending on, in terms of external libraries. Minicli is what I came up with in order to solve this situation.
### What I've built with Minicli so far:
[Dolphin](https://github.com/do-community/dolphin), a command-line tool for managing DigitalOcean droplets from the command line.

[My website](https://eheidi.dev), which is a static-content CMS [pulling from my DEV posts](https://dev.to/erikaheidi/how-to-create-a-dev-to-api-wrapper-in-php-to-fetch-your-latest-posts-1fei). I'm open sourcing this as a separate project called [Librarian](https://github.com/minicli/librarian) (WIP).

In this post, you will learn how to create a simple CLI application in PHP using Minicli.
## Creating a Project
You'll need `php-cli` and [Composer](https://getcomposer.org/) to get started.
Create a new project with:
```shell
composer create-project --prefer-dist minicli/application myapp
```
Once the installation is finished, you can run `minicli` with:
```shell
cd myapp
./minicli
```
This will show you the default app signature.
The `help` command that comes with minicli, defined in `app/Command/Help/DefaultController.php`, auto-generates a tree of available commands:
```shell
./minicli help
```
```
Available Commands
help
└──test
```
The `help test` command, defined in `app/Command/Help/TestController.php`, shows an echo test of parameters:
```
./minicli help test user=erika name=value
```
```
Hello, erika!
Array
(
[user] => erika
[name] => value
)
```
## Creating your First Command
The simplest way to create a command is to edit the `minicli` script and define a new command as an anonymous function within the Application via `registerCommand`:
```php
#!/usr/bin/php
<?php
if (php_sapi_name() !== 'cli') {
exit;
}
require __DIR__ . '/vendor/autoload.php';
use Minicli\App;
use Minicli\Command\CommandCall;
$app = new App();
$app->setSignature('./minicli mycommand');
$app->registerCommand('mycommand', function(CommandCall $input) {
echo "My Command!";
var_dump($input);
});
$app->runCommand($argv);
```
You could then execute the new command with:
```shell
./minicli mycommand
```
## Using Command Controllers
To organize your commands into controllers, you'll need to use [Command Namespaces](https://dev.to/erikaheidi/building-minicli-autoloading-command-namespaces-3ljm).
Let's say you want to create a command named `hello`. You should start by creating a new directory under the `app/Commands` folder:
```shell
mkdir app/Commands/Hello
```
Now `Hello` is your Command Namespace. Inside that directory, you'll need to create at least one Command Controller. You can start with the `DefaultController`, which will be called by default when no subcommand is provided.
This is how this `DefaultController` class could look like:
```php
<?php
namespace App\Command\Hello;
use Minicli\Command\CommandController;
class DefaultController extends CommandController
{
public function handle()
{
$this->getPrinter()->display("Hello World!");
}
}
```
This command would be available as:
```shell
./minicli hello
```
Because a subcommand was not provided, it is inferred that you want to execute the **default** command. This command can also be invoked as:
```shell
./minicli hello default
```
Any other Command Controller placed inside the `Hello` namespace will be available in a similar way. For instance, let's say you want to create a new subcommand like `hello caps`.
You would then create a new Command Controller named `CapsController`:
```php
<?php
namespace App\Command\Hello;
use Minicli\Command\CommandController;
class CapsController extends CommandController
{
public function handle()
{
$this->getPrinter()->display("HELLO WORLD!");
}
}
```
And this new command would be available as:
```shell
./minicli hello caps
```
## Working with Parameters
Minicli uses a few conventions for command call arguments:
* Args / Arguments: Parsed arguments - anything that comes from $argv that is not a `key=value` and not a `--flag`.
* Params / Parameters: Key-value pairs such as `user=erika`
* Flags: single arguments prefixed with `--` such as `--update`
The parent `CommandController` class exposes a few handy methods to work with the command call parameters.
For instance, let's say you want to update the previous `hello` command to use an optional parameter to tell the name of the person that will be greeted.
```php
<?php
namespace App\Command\Hello;
use Minicli\Command\CommandController;
class DefaultController extends CommandController
{
public function handle()
{
$name = $this->hasParam('user') ? $this->getParam('user') : 'World';
$this->getPrinter()->display(sprintf("Hello, %s!", $name));
}
}
```
Now, to use the custom version of the command, you'll need to run:
```shell
./minicli hello user=erika
```
And you'll get the output:
```shell
Hello, erika!
```
### `CommandCall` Class Methods
* `hasParam(string $key) : bool` - Returns true if a parameter exists.
* `getParam(string $key) : string` - Returns a parameter, or null if it does not exist.
* `hasFlag(string $key) : bool` - Returns whether or not a flag was passed along in the command call.
## Printing Output
The `CliPrinter` class has shortcut methods to print messages with various colors and styles.
It comes with two bundled *themes*: `regular` and `unicorn`. This is set up within the App bootstrap config array, and by default it's configured to use the `regular` theme.
```php
public function handle()
{
$this->getPrinter()->info("Starting Minicli...");
if (!$this->hasParam('message')) {
$this->getPrinter()->error("Error: you must provide a message.");
exit;
}
$this->getPrinter()->success($this->getParam('message'));
}
```
### `CliPrinter` Class Methods
* `display(string $message) : void` - Displays a message wrapped in new lines.
* `error(string $message) : void` - Displays an error message wrapped in new lines, using the current theme colors.
* `success(string $message) : void` - Displays a success message wrapped in new lines, using the current theme colors.
* `info(string $message) : void` - Displays an info message wrapped in new lines, using the current theme colors.
* `newline() : void` - Prints a new line.
* `format(string $message, string $style="default") : string` - Returns a formatted string with the desired style.
* `out(string $message) : void` - Prints a message.
## Wrapping Up
Minicli is a work in progress, but you can already use it as a minimalist base on top of which you can build fun toy projects and/or helpful command line tools, like [Dolphin](https://github.com/do-community/dolphin).
Here are a few ideas I'd like to build with Minicli but haven't had the time for so far (and I definitely wouldn't mind if anyone built these):
* a text-based rpg game
* a Twitter bot
* a tool for finding your Twitter mutuals
* a cli-based quiz game
If you'd like to give Minicli a try, check the [documentation](https://minicliphp.readthedocs.io/en/latest/) for more details and don't hesitate to leave a comment if you have any questions :) | erikaheidi |
272,121 | Measure time with a higher order utility function | I consider closures and higher order functions to be one of the most powerful language features, if n... | 0 | 2020-03-02T11:29:44 | https://dev.to/apisurfer/measure-time-with-a-higher-order-utility-function-1eoo | javascript | I consider closures and higher order functions to be one of the most powerful language features, if not the most powerful. Here is a 2-liner function that uses both of them. Comes in handy for testing, debugging and measuring performance of some chunks of code.
```javascript
/*
* startTimer creates a function that returns time difference in milliseconds
*/
function startTimer() {
const startTime = new Date()
return () => new Date() - startTime
}
```
Example of usage:
```javascript
const getTimeDifference = startTimer()
// Should output a number around 3000 after 3 seconds have passed
setTimeout(() => {
console.log(`${getTimeDifference()} milliseconds have passed!`)
}, 3000)
```
This allows you to start tracking multiple events at any given time and retrieve the time difference whenever it's required.
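The same closure idea extends naturally to timing a whole function call. Here is a sketch (the `timeIt` name and its return shape are my own invention, not part of the snippet above) that wraps any function and hands back both its result and the elapsed milliseconds:

```javascript
function startTimer() {
  const startTime = new Date()
  return () => new Date() - startTime
}

/*
 * timeIt wraps fn and returns a new function that reports
 * both fn's result and how long the call took.
 */
function timeIt(fn) {
  return (...args) => {
    const getTimeDifference = startTimer()
    const result = fn(...args)
    return { result, elapsed: getTimeDifference() }
  }
}

const timedSum = timeIt(n => {
  let total = 0
  for (let i = 0; i < n; i++) total += i
  return total
})

const { result, elapsed } = timedSum(1e6)
console.log(`Got ${result} in ${elapsed} ms`)
```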
Cheers! | apisurfer |
272,259 | Why You Need to Explore Your Data & How You Can Start | We live in the world where millions of data are generated every single day, from smartphones we use e... | 0 | 2020-03-02T17:05:09 | https://dev.to/davisy/why-you-need-to-explore-your-data-how-you-can-start-1dil | python | We live in the world where millions of data are generated every single day, from smartphones we use every day, what we search on google or bing, what we post, like, comment or share in different social media platforms, what we buy in e-commerce sites, data generated by machines and other sources. We are in the Data Age and data is a new oil.

**Quick Fact:** The article in Forbes states that ‘The amount of data we produce every day is truly mind-boggling. There are 2.5 quintillion bytes of data created each day at our current pace’.
Data has a lot of potential if you can find insights in it, allowing you to make data-driven decisions in whatever business you are doing instead of relying on experience alone. Companies big and small have started to use data to better understand their customers, sales, and marketing behaviors, and to make accurate decisions for their business.
The question is: how can you start finding insights from your data in order to make data-driven decisions?
It all starts by exploring your data to find and understand the hidden patterns, knowledge, and facts that can help you to make a better decision.
In this article, you will learn
- Exploratory data analysis.
- Importances of exploratory data analysis.
- Python packages you can use to explore your data.
- Practical example with a real-world dataset.
## What is Exploratory Data Analysis?
Exploratory Data Analysis refers to the critical process of performing initial investigations on data so as to discover patterns, to spot anomalies, to test hypotheses and to check assumptions with the help of summary statistics and graphical representations.
It is a good practice to understand the data first and try to gather as many insights from it as possible.
## Why Exploratory Data Analysis is important?
By exploring your data you can benefit in different ways like:-
- Identifying the most important variables/features in your dataset.
- Testing a hypothesis or checking assumptions related to the dataset.
- To check the quality of data for further processing and cleaning.
- Deliver data-driven insights to business stakeholders.
- Verify expected relationships actually exist in the data.
- To find unexpected structures or patterns in the data.
## Python packages for Exploratory Data Analysis
The following python packages will help you to start exploring your dataset.
- Pandas- is a Python package focus on data analysis.
- NumPy- is a general-purpose array-processing package.
- Matplotlib- is a Python 2D plotting library which produces publication quality figures in a variety of formats.
- Seaborn- is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.
Now you know what is EDA and its benefits let move on by starting to explore the Financial Inclusion in Africa dataset from [zindi Africa](http://zindi.africa/) so that you can understand important steps to follow when you analyze your own dataset.
## Exploratory Data Analysis For Financial inclusion in Africa Dataset
The first important step is to understand the problem statement about the dataset you are going to analyze. This will help you to generate Hypotheses or assumptions about the dataset.
### 1.Understand The Problem Statement
Financial Inclusion remains one of the main obstacles to economic and human development in Africa. For example, across Kenya, Rwanda, Tanzania, and Uganda only 9.1 million adults (or 13.9% of the adult population) have access to or use commercial bank accounts.
Traditionally, access to bank accounts has been regarded as an indicator of financial inclusion. Despite the proliferation of mobile money in Africa and the growth of innovative fintech solutions, banks still play a pivotal role in facilitating access to financial services. Access to bank accounts enables households to save and facilitate payments while also helping businesses build up their credit-worthiness and improve their access to other financial services. Therefore, access to bank accounts is an essential contributor to long-term economic growth.
To know more about the problem statement visit [Zindi Africa Competition](https://zindi.africa/competitions/financial-inclusion-in-africa) on Financial inclusion in Africa.
### 2.Type of the Problem
After going through the problem statement, we can see that this is a classification problem, where we have to predict whether an individual is likely to have or use a bank account. However, we will not apply any machine learning techniques in this article.
### 3.Hypothesis Generation
This is a very important stage during data exploration. It involves understanding the problem in detail by brainstorming as many factors as possible which can impact the outcome. It is done by understanding the problem statement thoroughly and before looking at the data.
Below are some of the factors which I think can affect the chance for a person to have a bank account:-
- People who have mobile phones have a lower chance of using bank accounts because of mobile money services.
- People who are employed have a higher chance of having bank accounts than people who are unemployed.
- People with low education levels have a lower chance of having bank accounts.
- People in rural areas have a lower chance of having bank accounts.
- Females have a lower chance of having bank accounts.
Now let’s load and analyze our dataset to see whether the assumptions we generated are valid. You can download the dataset and notebook [here](https://github.com/Davisy/Exploratory-Data-Analysis-).
Load Python Packages
We import all important python packages to start analyzing our dataset.
```python
# import important modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams["axes.labelsize"] = 18
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
```
Load Financial Inclusion Dataset.
```python
# Import data
data = pd.read_csv('../data/financial_inclusion.csv')
```
Let’s see the shape of our data.
```python
# print shape
print('train data shape :', data.shape)
```
```
train data shape : (23524, 13)
```
In our dataset, we have 13 columns and 23,524 rows.
We can observe the first five rows from our data set by using the head() method from the pandas library.
```python
# Inspect Data by showing the first five rows
data.head()
```

It is important to understand the meaning of each feature so you can really understand the dataset. Click [here](https://github.com/Davisy/Exploratory-Data-Analysis-/blob/master/data/VariableDefinitions.csv) to get the definition of each feature presented in the dataset.
We can get more information about the features presented by using the **info()** method from pandas.
```python
# show Some information about the dataset
print(data.info())
```

The output shows the list of variables/features, their sizes, whether they contain missing values, and the data type of each variable. Our dataset has no missing values, and it has 3 features of integer data type and 10 features of object data type.
If you want to learn how to handle missing data in your dataset, I recommend you read this article [“How to handle missing data with python”](https://machinelearningmastery.com/handle-missing-data-python/) by Jason Brownlee.
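Even though this dataset has no missing values, it is worth knowing the standard check. Here is a quick sketch using a small made-up DataFrame (the column names and values below are illustrative, not from the financial inclusion dataset):

```python
import pandas as pd

# A made-up frame with one missing value, to show the standard check
df = pd.DataFrame({'a': [1, None, 3], 'b': ['x', 'y', 'z']})

# Count missing values per column
missing_per_column = df.isnull().sum()
print(missing_per_column)  # a: 1, b: 0
```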
### 4.Univariate Analysis
In this section, we will do the univariate analysis. It is the simplest form of analyzing data where we examine each **variable individually**. For categorical features, we can use frequency tables or bar plots which will calculate the number of each category in a particular variable. For numerical features, probability density plots can be used to look at the distribution of the variable.
The following codes show unique values in the **bank_account** variable where Yes means the person has a bank account and No means the person doesn't have a bank account.
```python
# Frequency table of a variable will give us the count of each category in that Target variable.
data['bank_account'].value_counts()
```

```python
# Explore Target distribution
sns.catplot(x="bank_account", kind="count", data= data)
```

The data shows that we have a much larger number of **no** than **yes** observations in our target variable, which means the majority of people don't have bank accounts.
```python
# Explore Country distribution
sns.catplot(x="country", kind="count", data=data)
```

The country feature in the above graph shows that most of the data were collected in Rwanda, and the least in Uganda.
```python
# Explore Location distribution
sns.catplot(x="location_type", kind="count", data=data)
```

In the location_type feature, more people live in rural areas than in urban areas.
```python
# Explore Years distribution
sns.catplot(x="year", kind="count", data=data)
```

In the year's feature, most of the data were collected in 2016.
```python
# Explore cellphone_access distribution
sns.catplot(x="cellphone_access", kind="count", data=data)
```

In the cellphone_access feature, most of the participants have access to a cellphone.
```python
# Explore gender_of_respondents distribution
sns.catplot(x="gender_of_respondent", kind="count", data=data)
```

In the gender_of_respondent feature, we have more Females than Males.
```python
# Explore relationship_with_head distribution
sns.catplot(x="relationship_with_head", kind="count", data=data);
plt.xticks(
rotation=45,
horizontalalignment='right',
fontweight='light',
fontsize='x-large'
)
```

In the relationship_with_head feature, most participants are Heads of Household, and very few are Other non-relatives.
```python
# Explore marital_status distribution
sns.catplot(x="marital_status", kind="count", data=data);
plt.xticks(
rotation=45,
horizontalalignment='right',
fontweight='light',
fontsize='x-large'
)
```

In the marital_status feature, most of the participants are married/living together.
```python
# Explore education_level distribution
sns.catplot(x="education_level", kind="count", data=data);
plt.xticks(
rotation=45,
horizontalalignment='right',
fontweight='light',
fontsize='x-large'
)
```

In the education_level feature, most of the participants have a primary level of education.
```python
# Explore job_type distribution
sns.catplot(x="job_type", kind="count", data=data);
plt.xticks(
rotation=45,
horizontalalignment='right',
fontweight='light',
fontsize='x-large'
)
```

In the job_type feature, most of the participants are self-employed.
```python
# Explore household_size distribution
plt.figure(figsize=(16, 6))
data.household_size.hist()
plt.xlabel('Household size')
```

Household_size is not normally distributed and the most common number of people living in the house is 2.
```python
# Explore age_of_respondent distribution
plt.figure(figsize=(16, 6))
data.age_of_respondent.hist()
plt.xlabel('Age of Respondent')
```

In our last feature, age_of_respondent, most participants are between 25 and 35 years old.
### 5.Bivariate Analysis
Bivariate analysis is the simultaneous analysis of two variables (attributes). It explores the concept of the relationship between two variables, whether there exists an association and the strength of this association, or whether there are differences between two variables and the significance of these differences.
After looking at every variable individually in Univariate analysis, we will now explore them again with respect to the target variable.
```python
#Explore location type vs bank account
plt.figure(figsize=(16, 6))
sns.countplot('location_type', hue= 'bank_account', data=data)
plt.xticks(
fontweight='light',
fontsize='x-large'
)
```

From the above plot, you can see that the majority of people living in rural areas don't have bank accounts. Therefore the assumption we made during hypothesis generation is valid: people living in rural areas have a low chance of having bank accounts.
```python
#Explore gender_of_respondent vs bank account
plt.figure(figsize=(16, 6))
sns.countplot('gender_of_respondent', hue= 'bank_account', data=data)
plt.xticks(
fontweight='light',
fontsize='x-large'
)
```

In the above plot, we compare the target variable (bank_account) against gender_of_respondent. The plot shows a small difference between males and females who have bank accounts (the number of males with accounts is slightly greater than the number of females). This supports our assumption that females have a lower chance of having bank accounts.
```python
#Explore cellphone_accesst vs bank account
plt.figure(figsize=(16, 6))
sns.countplot('cellphone_access', hue= 'bank_account', data=data)
plt.xticks(
fontweight='light',
fontsize='x-large'
)
```

The cellphone_access plot shows that the majority of people who have cellphone access don't have bank accounts. This supports our assumption that people who have access to a cellphone have a lower chance of using bank accounts. One of the reasons is the availability of mobile money services, which are more accessible and affordable, especially for people living in rural areas.
```python
#Explore 'education_level vs bank account
plt.figure(figsize=(16, 6))
sns.countplot('education_level', hue= 'bank_account', data=data)
plt.xticks(
rotation=45,
horizontalalignment='right',
fontweight='light',
fontsize='x-large'
)
```

The education_level plot shows that the majority of people have primary education, and most of them don't have bank accounts. This also supports our assumption that people with lower education levels have a lower chance of having bank accounts.
```python
#Explore job_type vs bank account
plt.figure(figsize=(16, 6))
sns.countplot('job_type', hue= 'bank_account', data=data)
plt.xticks(
rotation=45,
horizontalalignment='right',
fontweight='light',
fontsize='x-large'
)
```

The job_type plot shows that the majority of people who are self-employed don't have bank accounts, followed by the informally employed and those in farming and fishing.
Now you understand the important steps you can take while exploring your dataset to find insights and hidden patterns. You can go further by comparing the relationships among the independent features presented in the dataset.
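For instance, a quick way to compare two independent categorical features is a frequency crosstab. Here is a sketch using a tiny stand-in DataFrame with the same column names as this dataset (the values are made up for illustration):

```python
import pandas as pd

# Stand-in rows with the same column names as the dataset;
# the values below are made up for illustration.
sample = pd.DataFrame({
    'location_type': ['Rural', 'Urban', 'Rural', 'Rural'],
    'cellphone_access': ['Yes', 'Yes', 'No', 'Yes'],
})

# Count how often each pair of categories occurs together
table = pd.crosstab(sample['location_type'], sample['cellphone_access'])
print(table)
```

On the full dataset, the same two lines would show you whether cellphone access is concentrated in rural or urban areas.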
But what if you have a dataset with more than 100 features (columns)? Do you think analyzing each feature one by one is the best way? Having many features in your dataset means it will take a lot of time to analyze and find insights.
A better way to solve this problem is by using a Python package called [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling). This package will speed up the Exploratory Data Analysis steps.
## DATA PROFILING PACKAGE
Profiling is a process that helps you understand your data, and pandas-profiling is a Python package that does exactly that. It is a simple and fast way to perform exploratory data analysis of a pandas DataFrame.
The pandas **df.describe()** and **df.info()** functions are normally used as a first step in the EDA process. However, they only give a very basic overview of the data and don't help much in the case of large datasets. The pandas profiling function, on the other hand, extends the pandas DataFrame with **df.profile_report()** for quick data analysis.
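As a quick baseline, this is roughly what that first step looks like with pandas alone (a sketch with a made-up numeric column):

```python
import pandas as pd

# A tiny made-up numeric column, just to show the baseline summary step
df = pd.DataFrame({'age': [23, 35, 41, 29]})

summary = df.describe()  # count, mean, std, min, quartiles, max
print(summary.loc['mean', 'age'])  # 32.0
```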
Pandas profiling generates a complete report for your dataset, which includes:
- Basic data type information.
- Descriptive statistics (mean, median etc.).
- Common and Extreme Values.
- Quantile statistics (tells you about how your data is distributed).
- Histograms for your data (for visualizing distributions).
- Correlations (Show features that are related).
### How to install the package
There are three ways you can install pandas-profiling on your computer.
You can install it using the pip package manager by running:
```bash
pip install pandas-profiling
```
Alternatively, you could install directly from Github:
```bash
pip install https://github.com/pandas-profiling/pandas-profiling/archive/master.zip
```
You can also install it using the conda package manager by running:
```bash
conda install -c conda-forge pandas-profiling
```
After installing the package, you need to import it with the following code.
```python
#import the package
import pandas_profiling
```
Now let’s do the EDA using the package that we have just imported. We can either print the output in the notebook environment or save it to an HTML file that can be downloaded and shared with anyone.
```python
# generate report
eda_report = pandas_profiling.ProfileReport(data)
eda_report
```
In the above code, we pass our data object to the ProfileReport method, which generates the report.
If you want to generate an HTML report file, save the ProfileReport to an object and use the **to_file()** function:
```python
#save the generated report to the html file
eda_report.to_file("eda_report.html")
```
Now you can open the eda_report.html file in your browser and observe the output generated by the package.

The above image shows the first output in the generated report. You can access the entire report [here](https://github.com/Davisy/Exploratory-Data-Analysis-).
# Conclusion
You can follow the steps provided in this article to perform Exploratory Data Analysis on your dataset and start to discover insights and hidden patterns. Keep in mind that datasets come from different sources and with different data types, which means you will need different exploration approaches for, say, time series or text datasets.
If you learned something new or enjoyed reading this article, please share it so that others can see it. Feel free to leave a comment too. Till then, see you in the next post!
This article first appeared on [Medium](https://medium.com/analytics-vidhya/why-you-need-to-explore-your-data-how-you-can-start-13de6f29c8c1). | davisy |
272,263 | SRE in layman’s terms (4 core concepts) | Site reliability engineer role description in simple terms | 0 | 2020-03-02T18:50:03 | https://dev.to/chen/sre-in-layman-s-terms-4-core-concepts-hoe | sre, devops, engineering, beginners | ---
title: SRE in layman’s terms (4 core concepts)
published: true
description: Site reliability engineer role description in simple terms
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/ujbq4op93h86t87ln1ds.jpg
tags: sre, devops, engineering, beginner
---
There are job titles in the industry that require prior knowledge to understand what their responsibilities are.
I oftentimes find myself trying to explain what I do for a living to people foreign to the tech industry.
How do you explain SRE then? In this post, I’ll try to describe it in simple terms.
>SREs are developers with operations responsibilities. They are in charge of the production environment, keeping it up and running.
## Service
A business, by definition, sells a product or a service.
Many of them these days have their business online. You can order almost anything through the internet. Very popular world-scale services are Google and Amazon. They are available for you (almost) no matter where you are.
You, the client, consume a **service**. You use Google to search for interesting stuff or things you need (a nice restaurant). You read the news at your favourite news site or shop online at Amazon.
These companies **serve** you through the Internet. They seem to be online 24/7, 365 days a year. Pause for a second and think about it. Isn’t that magical? It’s like a store that is always open, but easier to access - you don’t need to leave your house to enter it.
Now that we have defined what an (online) service is, we can cover the 4 core concepts SREs are usually accountable for. I say usually, because responsibilities may differ between companies.
## 1. Design a reliable and resilient system
What does this mean anyway?
As SREs, we design the infrastructure for the product. We decide which hardware to use, do the capacity planning with room to grow, and so on.
One requirement is to make it reliable and resilient so service downtime is minimized as much as possible.
We try to eliminate any single points of failure along the road (from hardware to application). Always have redundancy in your infrastructure, so if something fails - be it hardware, network or software - the system can quickly recover.
SREs know things break down. They are the ones who get called when something critical is not working.
It is our job to recognize possible failures along the way and mitigate them ahead of time when that's possible.
## 2. Monitor and alert
**Online services** are composed of multiple applications or features. Today, many applications run on [distributed systems](https://en.wikipedia.org/wiki/Distributed_computing). We need visibility into what’s going on.
In order to meet this, we use monitoring systems that expose the service’s *health*. These usually look like the dashboards from a control room in the movies.

Image by <a href="https://pixabay.com/users/SpaceX-Imagery-885857/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=693251">SpaceX-Imagery</a> from <a href="https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=693251">Pixabay</a>
Using these systems we define *alerts* - for example, we know how to recognize **unhealthy** patterns in the application, or hardware failures. The alert system sends us notifications when things break (by email, SMS, phone, etc.), instead of having someone watch the dashboards all day long and yell :)
## 3. Pursue product features velocity
Once in a while, these services release new features and security updates. Like a change to the UI or the addition of new buttons. These changes require a **software update** that happens behind the scenes, most of the time without user interruption.
Changes to the system introduce some risk, but they also introduce new features and bug fixes clients are waiting for. This leads us to -> *deployment strategy*.
We define a software deployment strategy so that software updates (installing new software) succeed, and when they do not, we can recover quickly.
Of every change we make to the system, we always keep in mind *“how do we recover from this if something goes wrong?”*
We then combine the two procedures (software deployment and recovery) into a “playbook”, which can be thought of as a task list to execute. Last, we *automate this* to ease the process.
## 4. Automate everything
This concept is, in my opinion, the most important one. Once we formalize a procedure in our daily work, if we repeat it we want to automate it. This allows us to spend our time on more important domains (research, development) rather than doing the task repeatedly. Let’s abstract that.
When we have a “problem” or a “task” on our desk, we prefer to solve it one time only. This is made possible by coding it. So, as SREs we always prefer to code things rather than perform them manually, even though this requires more of our time the first time we solve the problem.
For example, using our monitoring and alert systems, we get notified when things do not work as expected. We can use these to trigger some code that handles the issue. A simple example: if the application crashes (becomes unavailable), automatically restart it. This brings the service back up for customers, and we can debug what happened later.
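A minimal sketch of that idea, assuming a hypothetical health-check URL and restart command (both are placeholders, not from any real setup):

```python
import subprocess
import urllib.request

def is_healthy(url: str) -> bool:
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def ensure_running(url: str, restart_cmd: list) -> bool:
    """If the health check fails, run the restart command, then re-check."""
    if is_healthy(url):
        return True
    subprocess.run(restart_cmd, check=False)
    return is_healthy(url)
```

In practice an alert from the monitoring system would trigger something like `ensure_running('http://localhost:8080/health', ['systemctl', 'restart', 'myapp'])` - the URL and command depend entirely on your stack.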
## Sum up - in layman’s terms
* System reliability - simply making sure the service is online and serving customers. Grow the infrastructure as needed, while keeping things on budget. A lot of things happen behind the scenes to make it so.
* Monitoring - we design, develop and integrate tools that give us visibility into what’s going on in the system. Multiple graphs and counters help us know the status.
* Automate everything - this goes without saying. If you have automated a task, you would only spend 'thinking' time on the 'problem' once.
If I needed to describe the SRE role in a short paragraph, it would probably be this:
> SRE is responsible for keeping the service up and provide the ability to release software faster while reducing the risk involved with it using tools and deployment strategies. In order to achieve that, we write code 🧞
I hope that the next time you meet someone with an SRE title, you will know a little better what their role is all about.
Cover Image by <a href="https://pixabay.com/users/GregoryButler-331410/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=389274">GregoryButler</a> from <a href="https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=389274">Pixabay</a> | chen |
280,711 | Stateful Styles With XState and Styled System | Writing styles with styled-system that map to state machines crafted with xstate | 0 | 2020-03-13T15:17:31 | https://blog.robertbroersma.com/stateful-styles-with-xstate-and-styled-system | react, xstate, cssinjs | ---
title: Stateful Styles With XState and Styled System
published: true
description: Writing styles with styled-system that map to state machines crafted with xstate
tags: [react, xstate, cssinjs]
canonical_url: https://blog.robertbroersma.com/stateful-styles-with-xstate-and-styled-system
---
You've probably seen a button like this one before:
```javascript
<Button>Cool Button</Button>
```
One that has options:
```javascript
<Button secondary>Secondary Cool Button</Button>
```
Maybe even more options:
```javascript
<Button tertiary>Tertiary Cool Button</Button>
```
But what if I did this?
```javascript
<Button secondary tertiary>Secondary? Cool Button</Button>
```
That's probably not allowed. I guess we'll change the API to avoid that:
```javascript
<Button variant="secondary">Secondary Cool Button</Button>
```
This is kind of a state machine! Your `Button` can only be in one `variant` (state) at a time.
Here's what a parallel state machine (basically multiple independent state machines) would look like:
```javascript
<Button variant="secondary" mode="dark">Dark Secondary Cool Button</Button>
```
I've found that these kinds of style props work very well with logical state machines. Check out the following example of a... thing:

It's a parallel state machine with 3 sub machines:
- One machine that lets you change the shape:
- From Circle to Square
- From Square to Diamond
- From Square to Circle
- From Diamond to Square
- One machine that lets you change the color:
- From Red to Blue
- From Blue to Green
- From Green to Red
- One machine that lets you change the size:
- From Small to Big
- From Big to Small
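In xstate terms, the parallel machine described above could be sketched as a plain config object, the kind you would hand to `Machine`/`createMachine`. The event names (`NEXT_SHAPE`, `PREV_SHAPE`, `NEXT_COLOR`, `TOGGLE_SIZE`) are my own invention, not taken from the actual demo:

```javascript
// A sketch of the machine config - event names are assumptions.
const shapeMachineConfig = {
  id: 'thing',
  type: 'parallel', // the three sub-machines run independently
  states: {
    shape: {
      initial: 'circle',
      states: {
        circle: { on: { NEXT_SHAPE: 'square' } },
        square: { on: { NEXT_SHAPE: 'diamond', PREV_SHAPE: 'circle' } },
        diamond: { on: { PREV_SHAPE: 'square' } },
      },
    },
    color: {
      initial: 'red',
      states: {
        red: { on: { NEXT_COLOR: 'blue' } },
        blue: { on: { NEXT_COLOR: 'green' } },
        green: { on: { NEXT_COLOR: 'red' } },
      },
    },
    size: {
      initial: 'small',
      states: {
        small: { on: { TOGGLE_SIZE: 'big' } },
        big: { on: { TOGGLE_SIZE: 'small' } },
      },
    },
  },
};
```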
If we want to craft some stateful styles for this thing, we'd need a component with an API like this:
```javascript
<Thing shape="circle|square|diamond" color="red|blue|green" size="small|big" />
```
You can implement it however you like, but what I like to do is use [`styled-system`'s `variant` API](https://styled-system.com/variants), because it maps nicely to the state machines we defined:
```javascript
import styled from 'styled-components'
import { variant } from 'styled-system'
const Thing = styled.div(
variant({
prop: 'shape',
variants: {
square: {
/** Make it square */
},
circle: {
/** Make it circular */
},
diamond: {
/** Make it a diamond */
},
},
}),
variant({
prop: 'color',
// ...
}),
variant({
prop: 'size',
// ...
})
)
```
(You can use it with either Emotion or Styled Components)
Now to wire it up to our state machine using `xstate` and `@xstate/react`
```javascript
import { useMachine } from '@xstate/react'

function App() {
  const [state, send] = useMachine(shapeMachine);

  return <Thing {...state.value} />
}
```
Ta-da! A little explanation:
In case of a hierarchical or parallel state machine, ours being the latter, `state.value` contains an object representation of our current state (check [the docs](https://xstate.js.org/docs/guides/states.html) for more info). Our state could look something like this:
```json
// state.value
{
shape: "circle",
color: "red",
size: "small"
}
```
Which happens to look exactly like our component's prop interface! Of course you can also do _this_ if you want your code to be a bit more explicit and readable:
```javascript
function App() {
const [state, send] = useMachine(shapeMachine);
const { shape, size, color } = state.value
  return <Thing shape={shape} size={size} color={color} />
}
```
Here's a [CodeSandbox](https://codesandbox.io/s/stateful-cssinjs-6mpxe) with a fully working example. | robertbroersma |
281,039 | A new React project with Typescript, Eslint, and Prettier | In almost every new project I start with React I always ask myself if I should use create-react-app.... | 0 | 2020-03-14T02:44:17 | https://dev.to/elisealcala/a-new-react-project-with-typescript-eslint-and-prettier-d55 | react, typescript, eslint, prettier | ---
title: A new React project with Typescript, Eslint, and Prettier
published: true
description:
tags: react, typescript, eslint, prettier
---
In almost every new project I start with React I ask myself if I should use `create-react-app`. For small apps this is a pretty good option, but if you want to configure your app a little more, and maybe change the babel and webpack configuration, you should start a project from scratch.
Let's create a new directory and initialize a default npm app.
```bash
# Make a new directory and move into it
mkdir new-react-app && cd new-react-app
# Initialise a new npm project with defaults
npm init -y
```
Now our application has a `package.json` file.
##### Let's start with webpack and babel setup.
```bash
# Install webpack
npm install --save-dev webpack webpack-cli webpack-dev-server
# Install the html webpack plugin
npm install --save-dev html-webpack-plugin
```
```bash
# Install babel
npm i --save-dev @babel/core babel-loader @babel/preset-env @babel/preset-react @babel/preset-typescript
```
A babel preset is a tool to add support for a certain language.
**@babel/preset-env, @babel/preset-react and @babel/preset-typescript:** These allow us to add support for the latest features of javascript, react and typescript.
Let's create a `webpack.config.js` file on the root of our app.
```javascript
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');
module.exports = {
entry: './src/app.tsx',
resolve: {
extensions: ['.ts', '.tsx', '.js'],
},
module: {
rules: [
{
test: /\.(ts|tsx)$/,
exclude: /node_modules/,
use: {
loader: 'babel-loader',
},
},
],
},
devServer: {
contentBase: path.join(__dirname, 'build'),
historyApiFallback: true,
host: '0.0.0.0',
compress: true,
hot: true,
port: 3000,
publicPath: '/',
},
devtool: 'source-map',
output: {
filename: '[name].bundle.js',
publicPath: '/',
path: path.resolve(__dirname, 'build'),
},
plugins: [
new HtmlWebpackPlugin({
template: path.join(__dirname, 'index.html'),
}),
],
};
```
This webpack configuration is basic, but it does the job.
Let's create an `index.html` file on the root.
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<title>My app with Typescript and React</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<div id="root"></div>
</body>
</html>
```
Now let's create a `babel.config.js` file.
```javascript
module.exports = {
presets: [
'@babel/preset-env',
'@babel/preset-react',
'@babel/preset-typescript',
],
};
```
In our `package.json` file, we have to add some scripts to run our app and to compile it into a build folder.
```javascript
// package.json
{
"scripts": {
"start": "webpack-dev-server --mode development",
"build": "webpack --mode production",
},
}
```
##### Typescript and react.
Typescript is a programming language developed by Microsoft. It's a superset of javascript, which means it has some additional features, like static typing and support for object-oriented programming. Today it is one of the most popular languages.
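As a tiny illustration of what static typing buys us (a sketch, not part of the project setup):

```typescript
// With the annotations below, passing a number instead of a string
// is a compile-time error rather than a runtime surprise.
const greet = (name: string): string => `Hello, ${name}`;

console.log(greet('world')); // "Hello, world"
// greet(42); // <- would fail to compile: number is not assignable to string
```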
```bash
# Install typescript
npm install typescript
#Install the react dependencies
npm install react react-dom @types/react @types/react-dom
```
**@types/react and @types/react-dom:** These packages add the types for react and react-dom.
Let's create a `src` folder on the root, and inside an `app.tsx` file.
```bash
mkdir src
cd src
touch app.tsx
```
Our `app.tsx` can be like this for now.
```jsx
import React from 'react';
import ReactDom from 'react-dom';
const App = () => <p>hello world</p>;
ReactDom.render(<App />, document.getElementById('root') as HTMLElement);
```
Now let's create a `tsconfig.json` file. This file has all the rules for typescript to work on our app. You can change it according to what you need. See the full list of options here, https://www.typescriptlang.org/docs/handbook/tsconfig-json.html.
```json
{
"compilerOptions": {
"allowSyntheticDefaultImports": true,
"noImplicitAny": true,
"moduleResolution": "node",
"baseUrl": "./",
"sourceMap": true,
"module": "esnext",
"target": "esnext",
"jsx": "react",
"allowJs": true,
"noEmit": true,
"noImplicitThis": true,
"strictNullChecks": true,
    "lib": ["es6", "dom"]
  }
}
```
##### Better development experience with Eslint and Prettier.
Eslint is a linting tool for javascript. It analyzes our code, looking for syntax errors, saving us a lot of development time.
Prettier is a code formatter. It enforces a consistent style across our app.
```bash
# Install eslint and prettier
npm install --save-dev eslint prettier
# Install plugin and presets needed for our app
npm install --save-dev eslint-config-prettier eslint-plugin-prettier eslint-plugin-react @typescript-eslint/eslint-plugin @typescript-eslint/parser
```
**eslint-config-prettier:** It's important to use this package to avoid conflicts between eslint and prettier rules.
**@typescript-eslint/eslint-plugin and @typescript-eslint/parser:** These plugins add support for typescript.
Let's create a configuration file for Eslint called `.eslintrc.js` on the root of our project. You can change these rules according to your needs.
Here is the list of supported rules for `eslint-plugin-react`: https://github.com/yannickcr/eslint-plugin-react
```javascript
module.exports = {
parser: '@typescript-eslint/parser',
extends: [
'plugin:react/recommended',
'plugin:@typescript-eslint/recommended',
'plugin:prettier/recommended',
],
parserOptions: {
ecmaVersion: 2018,
sourceType: 'module',
},
plugins: ["prettier"],
rules: {
"prettier/prettier": [
"error",
{
singleQuote: true,
trailingComma: 'all',
}
],
'react/prop-types': [
1,
{
ignore: ['context', 'tracking'],
},
],
},
settings: {
"react": {
"version": "detect",
},
},
overrides: [
{
files: ['*.ts', '*.tsx'],
rules: {
'react/prop-types': 'off',
},
},
],
};
```
Now if we are using VS Code, we can enable the option to format our code on save.
Let's create a `.vscode` folder on the root, and create inside a `settings.json` file with this content.
```json
{
"eslint.validate": [
"javascript",
"javascriptreact",
"Babel Javascript",
"typescript",
"typescriptreact",
],
"eslint.alwaysShowStatus": true,
"editor.codeActionsOnSave": {
"source.fixAll.eslint": true
  }
}
```
Now when we run `npm start` we should see our application run on `localhost:3000`. | elisealcala |
281,107 | My Currently Favorite Terminal Hyper | Trying out the video upload on DEV. In this video we are looking at: A terminal built on web techno... | 0 | 2020-03-14T11:40:30 | https://dev.to/alexanderalemayhu/my-currently-favorite-terminal-hyper-1m8h | terminal, video | Trying out the video upload on DEV. In this video we are looking at:
> A terminal built on web technologies https://hyper.is | alexanderalemayhu |
281,157 | Running My First Online Meetup | What I managed to pull together in one day, with little prior knowledge of live-streaming. | 0 | 2020-03-14T12:39:08 | https://dev.to/_phzn/running-my-first-online-meetup-o1h | events, online, beginners | ---
title: Running My First Online Meetup
published: true
description: What I managed to pull together in one day, with little prior knowledge of live-streaming.
tags: events, online, beginners
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/w3160djc35afgrwa2q16.png
---
A month ago I'd never considered running online events. With recent news around COVID-19, many event organizers are being forced to find quick solutions to still bring people together and provide valuable content to their communities. [London CSS](https://twitter.com/londoncss) was one of them, making the decision to go online-only just a few days before our event.
⚠️ Disclaimer time - I'm not an expert, I've done this once, and I only learned what [Open Broadcaster Software (OBS) Studio](https://obsproject.com) was three weeks ago when recording an event in the London Vonage office. But this is what I now know, and I hope it can be of some use to those also having to figure out what they're doing on the fly.
# Speakers
Speakers were the group who found the adjustment to going online the weirdest - speaking 'to the room' when at home is a unique experience if you haven't done so before.
Speakers participated using a Google Hangout, but any video conferencing tool would work. Thirty minutes before the event we asked them to join the call and test their audio, video and screen sharing setup. We explained, in more detail than we would in person, who would introduce them, what we would broadcast and when (their camera vs slides for example), and that we wouldn't interrupt unless there was an issue.
# Attendee Experience
One unintended side effect of running a meetup online is that the event is immediately more accessible to your audience. Not only can they participate if sick or self-isolating, but there's no cost involved with travel.
Twitch (our chosen streaming platform) has an option in settings that automatically save broadcasts to your account (for 14 days - so make sure to grab them before they go), so folks can also watch the content on-demand. This means people who must work can also get value from the content too.
We worked with the wonderful White Coat Captioning to make sure that there was a human captioner ensuring that we were getting good quality live transcripts out with the stream. Andrew joined the speakers' video call to get live audio, and the output was updated on a webpage we were given access to.
The only other consideration is how much speaker <> attendee interaction we wanted to facilitate. Having not done this before, we decided not to invite Q&A, but if there were good questions in the Twitch chat we would surface them to the speakers at the end. Ana and Oliver (my London CSS co-organizers) were monitoring the chat throughout so we wouldn't miss anything.
One of our speakers, Glen, later said that he found having the live chat open while they spoke super distracting. This means it's our job as community organizers to find a non-distracting way of making sure that this doesn't just feel like a recorded video with no interaction opportunity.
# The Set-Up

I think you need at least two screens to do this. On the left is a fullscreen conference call (we used Google Hangouts). On the right is:
1. [top-left] The output of the live captioning. I sized the window to the aspect ratio I wanted in the output.
2. [bottom-left] The Twitch Stream Manager - so I could both see what was going out (with about a 10-second delay given my 'okay' home internet) and the chat which Oliver and Ana were moderating.
3. [bottom-right] Team Twitter message thread so we could keep on top of the event.
4. [top-right] Open Broadcaster Software (OBS) Studio. This was the action station. Let's talk more about it...
## Open Broadcaster Software Studio
OBS is for recording and live streaming. You create a set of __scenes__ that can be toggled between. A scene has one or more __sources__, which can be a screen capture, an audio input, or a static graphic/text. Scenes also describe the size, position and (in the case of audio) levels of sources.
Sources for the main scene:
* A graphic for the bottom of the screen - this includes event logo and sponsor logo.
* A long blue bar graphic for the top of the screen.
* A series of text elements - one for each speaker with their name, twitter handle and topic, and one for when there was no speaker promoting the event's Twitter handle. These were all positioned in the center of the blue bar.
* A display capture of the whole screen with the hangout (I cropped some Google Hangout chrome from the top-right).
* A display capture of the window with the captions, also cropped to remove all browser chrome.
### Capturing Call Audio in OBS
The only sources I didn't mention are audio-related.
If you want to capture your own mic audio, OBS provides that by default in the Audio Mixer under the name 'Mic/Aux'. If you don't want to publish this, hit the loudspeaker icon 📢 to mute yourself.
To capture the call audio and include it in OBS required an additional piece of (free) software. [iShowU Audio Capture](https://support.shinywhitebox.com/hc/en-us/articles/204161459-Installing-iShowU-Audio-Capture-Mojave-and-earlier-) can be downloaded and installed, which will provide a new audio 'input' which will be whatever should be coming out of your computer's speakers.
Create a new Audio Input Capture in OBS, and pick iShowU Audio Capture. The input should now be added to the Audio Mixer. Right-click in the Audio Mixer pane and pick 'Advanced Audio Properties'. Make sure the item with iShowU has 'Monitor and Output' selected in the monitoring settings. This means you can still hear the output while OBS has access to it.
### Other scenes
* "We'll be starting soon" graphic
* "Standby - we're experiencing some technical difficulties" graphic
* "Thanks for stopping by. Follow @LondonCSS for future events" graphic
And while our event didn't require it, it might also be a good idea to create a scene to flip to if you're handling data you don't want your audience to see (like secret keys).
Create all of these ahead of time and you can toggle scenes/sources on or off during the stream.
### How to stream from OBS to Twitch
This was the easiest part, surprisingly to me. In the Twitch Stream Manager is a 'stream key'. Copy it and paste it into OBS' Stream settings and leave the other options as default. Now the 'Start Streaming' button in OBS will put you live on Twitch.
## What I Would Change
There are four things I would love to change in the future:
1. Facilitating a formal Q&A period and making this known to attendees.
2. Making sure that there is an opportunity for watercooler chat between attendees for a little while after the event.
3. Our team comms were good enough but not amazing. Subtly asking questions such as "are we on time still" is easy in person, but not so much online unless you are glued to a private chat thread. The importance of this was much higher than I expected.
4. I've just found out that [Stream Decks](https://www.elgato.com/en/gaming/stream-deck) are a thing, which can allow you to change scenes using an external device. I plan on getting one and trying it out to see if it makes managing the stream any easier.
I hope this is useful. I must say thanks to my colleagues at Vonage (where we have a whole Slack channel dedicated to streaming) for their advice and support, especially [Lorna](https://twitter.com/lornajane) who spent 20 minutes in a call helping me test the setup. Many of my colleagues have got this whole setup much more polished than I have, but with about 6 hours to work it out I'm happy enough with how it came together.
Given that I won't be at any events for the next while I am happy to help community members take their meetups online if you lack time/people. Just send me a message on [Twitter](https://twitter.com/_phzn). | _phzn |
285,849 | Introducing Terminal | An elegant wrapper around Symfony's Process component. | 0 | 2020-03-22T12:25:29 | https://dev.to/titasgailius/introducing-terminal-2kme | laravel, symfony, bash, php | ---
title: Introducing Terminal
published: true
description: An elegant wrapper around Symfony's Process component.
tags: Laravel, Symfony, bash, php
---
---
# Terminal

An elegant wrapper around Symfony's Process component.
---
### Why?
I've been developing hosting-related services for almost 4 years now.
To complete various tasks, I have to write PHP code that executes bash scripts almost daily. Let's say you want to set up automatic updates for WordPress or put your site into maintenance mode.
That's where Terminal comes in handy.
---
### Example
To activate "maintenance mode" in WordPress, you might need to execute this wp-cli command:
```
$ wp maintenance-mode activate
Enabling Maintenance mode...
Success: Activated Maintenance mode.
```
Now, with the Terminal, you can do this straight from your PHP script:
```php
Terminal::run('wp maintenance-mode activate');
```
---
### More
You can even set up a timeout, retries, change the current working directory, and much more:
```php
Terminal::in(storage_path('sites/123'))
    ->timeout(120)
    ->retries(3)
    ->run('wp maintenance-mode activate');
```
---
### Extending
Another cool feature of the Terminal is that you can easily define your custom commands.
```php
Terminal::extend('maintenanceOn', function ($terminal) {
    return $terminal->run('wp maintenance-mode activate');
});

Terminal::maintenanceOn();
```
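Putting the pieces together, a custom command can reuse the same fluent calls shown above (the path, timeout, and command here are only illustrative):

```php
Terminal::extend('maintenanceOff', function ($terminal) {
    // same fluent API as before, wrapped behind a readable name
    return $terminal->in(storage_path('sites/123'))
        ->timeout(120)
        ->run('wp maintenance-mode deactivate');
});

Terminal::maintenanceOff();
```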
---
### Testing
It gets even better.
Terminal comes with a bunch of beautiful testing utilities that help you write simple and expressive tests for your application.
```php
Terminal::fake();
Terminal::run($command = 'wp maintenance-mode activate');
Terminal::assertExecuted($command);
```
---
Feel free to check out the full documentation at https://github.com/TitasGailius/terminal
Feedback is more than welcome! | titasgailius |
281,264 | Use applesauce to code faster | You are coding. You start getting into a flow. Then you get derailed trying to think of a variable na... | 0 | 2020-03-14T16:20:18 | https://dev.to/newyorkanthonyng/use-applesauce-to-code-faster-47n7 | javascript, productivity, codequality, refactorit | You are coding. You start getting into a flow. Then you get derailed trying to think of a variable name.
Most variable names are simple enough. Is this variable keeping track of for-loop counters? Name it `i` (or `j` or `k`).
```js
const array = ['Hello', 'World'];
for (let i = 0; i < array.length; i++) {
for (let j = 0; j < array.length; j++) {
for (let k = 0; k < array.length; k++) {
}
}
}
```
Is this variable an array that holds user objects? Name it `usersArray`.
```js
const usersArray = [
{ name: 'Alice' },
{ name: 'Bob' }
]
```
But now you have a variable that holds the first 10 vegan users who live in the Northeastern United States. What do you call it?
```js
const ??? = [
{ name: 'Alice', dietaryRestrictions: 'vegan' },
{ name: 'Bob', dietaryRestrictions: 'vegan' },
// ...
]
```
You scan through your code to see what naming conventions you used. If you're paranoid, you start thinking about all your future, unwritten code. What will all that code look like?
You've broken out of your [flow](https://en.wikipedia.org/wiki/Flow_(psychology)). After 2 minutes of meditation, you found a variable name.
```js
// rolls right off the tongue
const topVeganUsersInUnitedStates = [
{ name: 'Alice', dietaryRestrictions: 'vegan' },
{ name: 'Bob', dietaryRestrictions: 'vegan' },
// ...
]
```
Great!
Now you're most likely married to the variable name. You spent so much time naming it in the first place, so why change it?
And this is even if you remember to change the variable name. You have an approaching deadline as you create your pull request. The last thing you'll want to do is look at your code, line-by-line, and update variable names.
This assumes that your variable even makes it to your pull request. You may have refactored your code during development and deleted the variable. What a waste!
```js
function getTargetUsers() {
// 💀 topVeganUsersInUnitedStates
return [
{ name: 'Alice', dietaryRestrictions: 'vegan' },
{ name: 'Bob', dietaryRestrictions: 'vegan' },
// ...
];
}
```
In general, you want to delay decisions until you have the most information possible. We should do the same with naming variables.
---
I attended a refactoring workshop. The instructor used the name `applesauce` when he ran into a difficult variable. Why?
Having a default variable name speeds up development. Imagine if you had to think up new variable names whenever you created for-loops. That would take up a lot of time. Similarly to `i` for for-loops, we want to use `applesauce` for ambiguous variable names. This speeds up development.
> If applesauce is already taken, feel free to experiment with other sauces.
Also, it's so outlandish of a variable name that it stands out among the rest of your code. If the variable name survives all refactoring and makes it to the pull request, you will need to rename it.
Pull-request time is when you have the most information to decide on the best variable name.
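In practice, a drafting session might look like this (the filtering criteria are just an illustration):

```js
const users = [
  { name: 'Alice', dietaryRestrictions: 'vegan' },
  { name: 'Bob', dietaryRestrictions: 'omnivore' },
];

// placeholder name keeps you in the flow; rename it at PR time
const applesauce = users.filter(
  (user) => user.dietaryRestrictions === 'vegan'
);

console.log(applesauce); // only Alice's record remains
```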
---
Do you have any variable naming tips & tricks? Let me know about it.
| newyorkanthonyng |
281,277 | LeetCode problems for Beginners | When you begin to practice algorithms and data structures with LeetCode problems. Remember to build... | 0 | 2020-03-19T15:26:26 | https://coderscat.com/leetcode-problems-for-beginners | leetcode, algorithms | ---
title: LeetCode problems for Beginners
published: true
date: 2020-03-14 15:33:00 UTC
tags: LeetCode,Algorithms
canonical_url: https://coderscat.com/leetcode-problems-for-beginners
---
When you begin to practice algorithms and data structures with LeetCode problems, remember to build your confidence and find the [fun of algorithms](https://coderscat.com/lets-leetcode-with-fun) in your first steps.
You should start with easy problems.
This is a list of categories with classic and easy problems for you.
## Array
[Remove Element](https://leetcode.com/problems/remove-element/)
[Remove Duplicates from Sorted Array](https://leetcode.com/problems/remove-duplicates-from-sorted-array/description/)
[Remove Duplicates from Sorted Array II](https://leetcode.com/problems/remove-duplicates-from-sorted-array-ii/description/)
[Find the Celebrity](https://leetcode.com/problems/find-the-celebrity/description/)
[Rotate Array](https://leetcode.com/problems/rotate-array/description/)
## String
[Implement strStr()](https://leetcode.com/problems/implement-strstr/description/)
[Longest Common Prefix](https://leetcode.com/problems/longest-common-prefix/description/)
[Length of Last Word](https://leetcode.com/problems/length-of-last-word/description/)
[First Unique Character in a String](https://leetcode.com/problems/first-unique-character-in-a-string/description/)
[Ransom Note](https://leetcode.com/problems/ransom-note/description/)
[Reverse String](https://leetcode.com/problems/reverse-string/description/)
[Reverse Words in a String](https://leetcode.com/problems/reverse-words-in-a-string/description/)
## Tree
[Binary Tree Preorder Traversal](https://leetcode.com/problems/binary-tree-preorder-traversal/description/)
[Binary Tree Inorder Traversal](https://leetcode.com/problems/binary-tree-inorder-traversal/description/)
[Binary Tree Postorder Traversal](https://leetcode.com/problems/binary-tree-postorder-traversal/description/)
[Binary Tree Level Order Traversal](https://leetcode.com/problems/binary-tree-level-order-traversal/description/)
[Same Tree](https://leetcode.com/problems/same-tree/description/)
[Sum Root to Leaf Numbers](https://leetcode.com/problems/sum-root-to-leaf-numbers/description/)
## LinkedList
[Reverse Linked List](https://leetcode.com/problems/reverse-linked-list/description/)
[Linked List Cycle](https://leetcode.com/problems/linked-list-cycle/description/)
[Add Two Numbers](https://leetcode.com/problems/add-two-numbers/description/)
[Intersection of Two Linked Lists](https://leetcode.com/problems/intersection-of-two-linked-lists/description/)
## DFS & BFS
[Number of Islands](https://leetcode.com/problems/number-of-islands/)
[Walls and Gates](https://leetcode.com/problems/walls-and-gates/description/)
[Surrounded Regions](https://leetcode.com/problems/surrounded-regions/description/)
## Backtracking
[Subsets](https://leetcode.com/problems/subsets/description/)
[Permutations](https://leetcode.com/problems/permutations/description/)
[Combination Sum](https://leetcode.com/problems/combination-sum/description/)
## Other references
[How To Learn Data Structures And Algorithms](https://coderscat.com/how-to-learn-data-structures-and-algorithms) is a roadmap for the newbie.
[Preparing for an interview? Check out this!](https://amzn.to/2BeAeB9)
The post [LeetCode problems for Beginners](https://coderscat.com/leetcode-problems-for-beginners) appeared first on [CodersCat](https://coderscat.com). | snj |
281,294 | Drawing with canvas: paths | Paths on the canvas. The application we built previously can draw a pixel of... | 0 | 2020-03-14T16:52:57 | https://dev.to/unjavascripter/dibujando-con-canvas-trazados-5d8h | html, canvas, javascript, draw | ## Paths on the canvas
[The application we built previously](https://dev.to/unjavascripter/dibujando-pixeles-en-el-navegador-con-canvas-y-cosas-extra-3b43) can draw a pixel of the configured size when you click anywhere on the canvas. But it still doesn't feel all that natural. Let's make it possible for the user to draw strokes.
## Goal
When _the user (left-)clicks on the canvas_, a pixel should be drawn.
If _the user moves the cursor_ to an adjacent empty pixel
And _the user keeps the mouse button pressed_
A pixel should be _drawn_.
We now have the rules for what should happen.
## Mouse events vs. Pointer events
The days of thinking of the _mouse_ as the only interaction device are behind us. As we well know, we now also have to consider touch devices and their behavior. Fortunately we have [_pointer events_](https://developer.mozilla.org/en-US/docs/Web/API/Pointer_events), which capture interactions from both the classic _mouse_ and _touch_ events, so we can safely switch from `mousedown` or `mouseover` events to `pointerdown` or `pointerover`.
### The code
In the first iteration of the application we added an _event listener_ to the canvas to capture the click event. Let's replace that event with the `pointerdown` _pointer event_:
```ts
this.canvasElem.addEventListener('pointerdown', (event: PointerEvent) => {
// ...
```
It's important to note that the _callback_ no longer receives a click event but a pointer event; that's why we changed the event's type to `PointerEvent`.
We also need to rename the function that runs in the callback: it's called `handleClick`, and well... these are no longer just clicks:
```ts
handlePointerDown(event: PointerEvent){
// ...
}
```
Now we can create the handler for the _drag_ event, right after the one that captures the _pointer down_:
```ts
this.canvasElem.addEventListener('pointermove', (event: PointerEvent) => {
this.handleDrag(event);
});
```
The `handleDrag` function checks whether the **primary (left) button equivalent** is pressed while the _pointer_ moves over the canvas; if it is, the pixel-drawing function is called:
```ts
handleDrag(event: PointerEvent) {
if(event.buttons === 1) {
this.drawPixel(event.x, event.y);
}
}
```
> [Here you can find more information](https://developer.mozilla.org/en-US/docs/Web/API/Pointer_events#Determining_button_states) about using `event.buttons` to determine which button is pressed.
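As a plain-JS sketch of how that bitmask works (the constant names are mine; the values come from the Pointer Events spec):

```js
// PointerEvent.buttons is a bitmask; bit 0 is the primary button
const PRIMARY = 1;   // left click, touch contact, or pen tip
const SECONDARY = 2; // right click / pen barrel button
const AUXILIARY = 4; // middle (wheel) button

const isPrimaryPressed = (buttons) => (buttons & PRIMARY) === PRIMARY;

console.log(isPrimaryPressed(1)); // true
console.log(isPrimaryPressed(2)); // false
console.log(isPrimaryPressed(PRIMARY | SECONDARY)); // true: both held down
```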
Finally, we add a couple of styles to `index.html` so it looks a bit better (margin) and so we can interact smoothly with [the canvas on touch devices](https://developer.mozilla.org/en-US/docs/Web/CSS/touch-action):
```html
<style>
body {
margin: 0;
}
canvas {
touch-action: none;
}
</style>
```
And that's it!
Now we can draw away, but since we're not perfect yet, we'll surely need to _undo_ some step of our artistic expression at some point. We'll work on that in the next post.
### What about the repo?
It's here: https://github.com/UnJavaScripter/pixel-paint
### What about the demo?
Here: https://unjavascripter.github.io/pixel-paint/ | unjavascripter |
281,348 | Invisible Barriers That Prevent You From Learning Web Development and How to Break Through Them | The Great Filter of web development! and how to break through it? The hidden cost of fru... | 0 | 2020-03-14T18:30:46 | https://dev.to/nazeh/invisible-barriers-that-prevent-you-from-learning-web-development-and-how-to-break-through-them-2f6i | css, grid, webdev, frontend | 
### The **Great Filter** of web development! And how to break through it?
**The hidden cost of frustrating, demotivating web development MOOCs!**
For years, I’ve tried studying web development through various online courses.
And every time I ended up **frustrated and demotivated for months** before
trying again! So I know the struggle and the ubiquitous
[meme](https://media1.tenor.com/images/614c9b4639a2588383f47e138177da81/tenor.gif).
Then, I started building fully responsive clones of great web pages like [Youtube](https://nazeh.github.io/youtube_video_player_page/), [TNW](https://nazeh.github.io/the_next_web_responsive_clone/), [Behance](https://nazeh.github.io/grid-framework-behance-clone/), [Newsweek](https://nazeh.github.io/newsweek_clone_bootstrap/), and more, in less than 4 weeks!
**So what happened?** Out of the necessity of having a [coding partner](https://microverse.org/), I researched the state of CSS in 2019. It's then that I realized that MOOCs, as great as they are, can be gatekeepers!

*4+ fully responsive projects in 4 weeks*
So in this article, I will discuss that and answer the following questions:
1. **WHY** are discouraging MOOCs dangerous?
1. **HOW** are MOOCs discouraging?
1. **WHAT** can you do about it?
### Why
#### **The world needs more software developers.**
Even with the supply of programmers doubling every 5 years, demand is growing even faster!
> “Software is eating the world” — Marc Andreessen, co-founder of Netscape
Every industry is integrating software development at every step. From hiring management to supply chain to advertising and sales, software is essential. Not to mention the design, optimization, and manufacturing of their products. It's a necessity to stay competitive.
You may argue that there are rather too many programmers already, and only good ones are scarce. So how would more non-CS graduates help solve that problem? Well, I am well aware of the difference between a coder and a software developer, and I strive to be the latter.
I am not arguing that we don’t need improvement in quality. but quantity has
quality on its own and you can’t get more of the later without starting with
more of the former!
But don’t take my word for it, take it from those “[Leaders and
trendsetters”](https://code.org/quotes)!
And I hope that more [financial
independence](https://en.wikipedia.org/wiki/Financial_independence) can help us
face [dangerous
stagnation](https://etherealvalue.wordpress.com/2017/03/21/secular-stagnation/).
#### Front-end is the gate!
What does any of that have to do with CSS, for example? It's not even a programming language! OK, it [can encode](https://stackoverflow.com/questions/2497146/is-css-turing-complete/5239256#5239256) [Rule 110](http://en.wikipedia.org/wiki/Rule_110), but come on, it's a style sheet.
Indeed, front-end development is more about “**UX Engineering**”, as [some suggest](https://css-tricks.com/the-great-divide/) naming it. And it is possible to have a career where you stay in that lane and never become a programmer.
Nonetheless, Front-end development is the most approachable for anyone with no
CS background. It’s also the most accessible market.
HTML & CSS will always be the first things anyone tries once the idea of a career shift crosses their mind. That also explains why JavaScript is the first language most people learn!
And it is actually a good first step: it is visual rather than logical or mathematical. You will often hear people joke about how bad at math they are, but they take pride in their taste and style.
So it's rather disappointing when they feel discouraged and repelled by a style sheet. And it is very easy to **mistake confusing for complex**.
In most cases, all around the world, a career shift is hard and risky. Getting a job or starting to make money through freelancing can be crucial. Deterring someone at this early stage can mean the end of their tech pursuit.
### How
> Remember: Reddit is only an Alt-Tab away! -Quincy Larson
While preparing to write this article, I watched a [talk](https://www.youtube.com/watch?time_continue=127&v=Ef07Hhoc5KE) by Quincy Larson. In it, he talked about the fight for readers' attention while writing a technical article. But it applies to self-learning platforms as well.
> Readers are looking for an excuse to close your article! -Quincy Larson
Yes, self-taught coders should have more grit and discipline. But still, a career shift is scary, and their defense systems are looking for reasons to give up. So it baffles me how FreeCodeCamp's curriculum looks:

*Not even the full list!*
Days and days of learning small snippets of very basic HTML and CSS! And for some reason, you still haven't learned about any recent developments like Flexbox or CSS Grid! Not to mention that you have nothing to show for it to ease your self-doubt.
#### **Here you can notice several tendencies:**
* **Mistaking old as fundamental, and new as advanced!**
I can't see an upside in delaying learning **Flexbox**, **CSS Grid**, and (as far as I can remember) even **box-sizing**!
Why force students to struggle through outdated tools, praying day in and day out that everything will make sense somehow?
I would say the same about dismissing pre-processors as tools for advanced users. In fact, they have a very shallow learning curve and can only make the student feel smarter and more in control. [Sass flow control](https://sass-lang.com/documentation/at-rules/control) is also a very welcome boost to a student's confidence.
* **Under-appreciating the importance of early achievements!**
The longer a student spends going through infinite baby steps, the worse. Building real-world projects boosts their confidence, enhances learning, and cements concepts.
There are only so many abstract concepts you can force your brain to memorize before it rebels. **Jumping into coding and researching on a need-to-know basis is more efficient in the long run.**
Those projects have to be real websites. **Students need to build something they can take pride in** and show off.
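To make that concrete: the Sass flow control mentioned above lets a beginner generate a whole set of utility classes in a few lines (the class names here are just an example):

```scss
// an @each loop generating margin-top utilities
@each $size in 8, 16, 24 {
  .mt-#{$size} {
    margin-top: #{$size}px;
  }
}
```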
### What
Enough with the rant; on a more practical note, here is a simple guide to what worked for me:
#### Goal
This guide aims to be an alternative approach to learning responsive web design, one that takes the student's urgent need for a sense of progression into account.
#### Method
**- Learn on a need-to-know basis.**
* **Project-based** learning helps you to know *what you don't know* and *why you need to know it*.
* As counter-intuitive as it sounds: **resist the temptation to learn more** tricks. Master the few you've got until you have to learn new ones to get the job done.
#### **- Maintain the Flow**
Your momentum is a priority: be vigilant about the time you spend on a task, and omit anything that slows you down.
* **Skim read.** When you need to learn a new tool or review a resource, learn to spot the keywords and reach conclusions. Take note of the resource and review it later if you need an in-depth study.
* **Write faster.** Learning to type faster with fewer mistakes is invaluable; take time to improve that. It allows you to try things faster and push through your choking points.
* **Customize and automate.** As you go, notice all the repetitive tasks and see if there is a way to save time on them. Search for cheat sheets, hotkeys, custom settings, or automation opportunities.
#### **- Always Cheat**!
> Win if you can, lose if you must, but always cheat! *-Jesse Ventura*
Good engineers are lazy. Cut corners to achieve much more than you could on your
own, as long as it’s outside the scope of what you are trying to learn.
* **Clone pretty websites!** I chose to clone [Behance](https://nazeh.github.io/grid-framework-behance-clone/) because I knew it already looked great on its own.
* **Use great art to populate your clones.** In my [Youtube clone](https://nazeh.github.io/youtube_video_player_page/), I used Kurzgesagt's videos because of their amazing visuals.
* **Don't write content.** For a content-heavy web page, copy all content elements to your HTML file. You can then use RegExp to improve the semantics and apply your own classes.
* **Go the extra steps with ready code snippets.** It was not possible to replicate every behavior of [Newsweek](https://nazeh.github.io/newsweek_clone_bootstrap/) without JS. I wasn't trying to learn JS at that point, so I excused myself and copied code from Stack Overflow, changing it for my needs.
#### - Pixel Perfect
Yes, you can cheat and cut corners and still be a perfectionist. The goal is to make an exact replica, indistinguishable from the original. Your job is to make a vertical slice of a full website, so responsiveness is not optional.
#### Getting started
I recommend [VScode](https://code.visualstudio.com/) as your text editor and any
[Chromium browser](https://www.google.com/chrome/) for best developer tools.
Then you will need to install a few extensions for VScode to make your life
easier:
* [Beautify](https://marketplace.visualstudio.com/items?itemName=HookyQR.beautify)
will help you format your HTML, CSS, Sass and JS files.
* [Stylelint](https://marketplace.visualstudio.com/items?itemName=shinnn.stylelint)
will help you write better CSS.
* [Live
server](https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer)
will enable you to examine your changes whenever you save it.
* [Live Sass Compiler](https://marketplace.visualstudio.com/items?itemName=ritwickdey.live-sass) will recompile your Sass files to CSS whenever you save a Sass file.
You can also install the [Web Development Essentials Extension Pack](https://marketplace.visualstudio.com/items?itemName=jamesqquick.web-development-essentials-extension-pack). I would rather install extensions as the need arises, but take a look nonetheless.
#### Start Building
Once you choose a web page to clone, this general process should be a good start:
* **Set up your project directory.** [Set up Git](https://rogerdudler.github.io/git-guide/) and organize your folder structure:
```
|- index.html
|- icon.png
|- main.css
|- main.scss
```
* **Start your index.html.** Start writing 'doc' and VScode should suggest the following:
```html
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Document</title>
</head>
<body>

</body>
</html>
```
As you can see, [Emmet abbreviations](https://code.visualstudio.com/docs/editor/emmet) are super useful, and you should learn more about them. Now link your stylesheet and add your title and icon:
```html
...
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>My Clone</title>
  <link rel="icon" href="icon.png">
  <link rel="stylesheet" href="main.css">
</head>
...
```
Start using Live Server and Live Sass Compiler to examine your changes going forward.
* **Inspect and analyze.** [Toggle the device toolbar](https://developers.google.com/web/tools/chrome-devtools/device-mode) on the original page to inspect its layout in the mobile version. A mobile-first approach means less positioning needed at first, and incremental changes later. You might need to show media queries as well. Analyze the structure of the page; a schematic diagram would help too.

*[https://www.ecurtisdesigns.com/web-layout-design/](https://www.ecurtisdesigns.com/web-layout-design/)*
* **Build the layout with HTML5 semantics.** Start laying out the general structure using [HTML5 elements](https://codepen.io/mi-lee/post/an-overview-of-html5-semantics).

*[https://codepen.io/mi-lee/post/an-overview-of-html5-semantics](https://codepen.io/mi-lee/post/an-overview-of-html5-semantics)*
* **Start populating elements.** For this stage, [CSS Flexbox](https://css-tricks.com/snippets/css/a-guide-to-flexbox/) should be all you need. It will be most handy for justifying and aligning each element's content.
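For example (the selector name is arbitrary), spacing out and vertically centering a header's children takes only a few lines:

```css
/* space out and vertically center a header's children */
.site-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
}
```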
* **Expand and position.** Start expanding the width of your page, and set up a new [media query](https://www.w3schools.com/cssref/css3_pr_mediaquery.asp) for each breakpoint. You will also need more control over your layout, so start using [CSS Grid](https://css-tricks.com/snippets/css/complete-guide-grid/). Also, check out [this article](https://medium.com/@js_tut/css-grid-the-swiss-army-knife-for-cutting-website-and-application-layouts-c1bd7a6b4e56).
```css
@media only screen and (min-width: 576px) {
  ...
}
```
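A minimal CSS Grid shell for the wider layout might look like this (the selectors are just examples):

```css
/* a 12-column grid with an 8/4 content-to-sidebar split */
.page {
  display: grid;
  grid-template-columns: repeat(12, 1fr);
  gap: 16px;
}

.main { grid-column: 1 / 9; }
.sidebar { grid-column: 9 / 13; }
```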
* **Start populating content.** Copy images, paragraphs, and other content from the original HTML. Use VScode's [RegExp](https://regexr.com/) support in [find and replace](https://code.visualstudio.com/docs/editor/codebasics#_advanced-find-and-replace-options) to improve the semantics and apply your own classes to the copied HTML. This might seem complicated, but it is a powerful tool that saves you from typing dozens of content blocks by hand. You can experiment with the example below [here](http://regexr.com/4lrea).
Original element:

```html
<div class="old-classes" data-event-categ...>
  <Some content>
  </Some content>
  <Some content>
  </Some content>
</div>
```

Find: `<div class=".+".+>\n((.+\n)+)<\/div>`

Replace with: `<article class="new-classes">\n$1</article>`

Result:

```html
<article class="new-classes">
  <Some content>
  </Some content>
  <Some content>
  </Some content>
</article>
```
* **Kill the snakes.** Congratulations, you've opened Pandora's box: many problems will be evident at once. Don't panic; this was the plan all along, to discover what you need to learn. Now start searching for answers, one problem at a time. You've already noticed that [CSS-Tricks](https://css-tricks.com/) will be your best friend.
Rinse and repeat until you have a perfect page!
### Conclusion
Front-end development is, and will remain for a long time, every newbie's first contact with IT, and it's currently a gatekeeper rather than an enabler. I can't guarantee that the guide I offered will work for everyone, but it is worth trying if you tried once and failed. Jump into it: no more hello world. Find a page that looks great and clone it.
Let me know what you think! | nazeh |
281,380 | tsParticles v1.10.1 Released | New tsParticles version released, 1.10.1. Release notes Fixed rotation for... | 13,803 | 2020-03-14T19:46:00 | https://dev.to/tsparticles/tsparticles-v1-10-1-released-52lf | javascript, typescript, showdev, opensource | # New tsParticles version released, 1.10.1.
## Release notes
- Fixed rotation for line shape
- Canvas context now has its own classes for drawing
- Canvas context now is private
- Methods `play` and `pause` implemented, so you can now easily control the animations without changing the config at runtime
Options added:
- `particles.move.trail`: an object to add trails to particles
- `pauseOnCanvas`: a boolean to enable/disable the pause on window blur
- `lineLinked` shadow options
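To make the notes concrete, here is a rough sketch of a config enabling these options. The option names come from the release notes above, but the nested sub-option shapes (`enable`, `length`, `color`) are assumptions for illustration, not the library's documented schema:

```javascript
// Hedged sketch of a tsParticles config using the options named in these
// release notes. Nested sub-option shapes are assumptions, not the
// documented schema.
const options = {
  // pause the animations on window blur (per the release notes)
  pauseOnCanvas: true,
  particles: {
    move: {
      // new: particle trails
      trail: { enable: true, length: 10 },
    },
    lineLinked: {
      // new: shadow options for linked lines
      shadow: { enable: true, color: "#000000" },
    },
  },
};

// The new play/pause methods would then let you control animations at
// runtime, e.g. (container retrieval omitted): container.pause(); container.play();
console.log(JSON.stringify(options));
```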
---
Check out the demo here: <https://particles.matteobruni.it>
Do you want to replace the old, outdated and abandoned particles.js?
You're in the right place!
---
Here are some demos!
## Characters as particles
[](https://particles.matteobruni.it/#chars)
---
## Mouse hover connections
[](https://particles.matteobruni.it/#connect)
---
## Polygon mask
[](https://particles.matteobruni.it/#mask)
---
#### Background Mask particles
[](https://particles.matteobruni.it/#background)
---
#### COVID-19 SARS-CoV-2 particles
[](https://particles.matteobruni.it/#virus)
**Don't click! DON'T CLICK! OH NO IT'S SPREADING!!!!**
*COVID-19 is not funny. It's a serious world problem and we should prevent its spreading. If you are in a risky area please STAY AT HOME*
---
## GitHub repo
<https://github.com/matteobruni/tsparticles>
## npm
<https://www.npmjs.com/package/tsparticles>
## yarn
<https://yarnpkg.com/package/tsparticles>
## jsDelivr
<https://www.jsdelivr.com/package/npm/tsparticles>
---
Feel free to contribute to the project! | matteobruni |
281,384 | OxidemusiC and the somewhat modern tech stack it uses. | About one month ago my personal project OxidemusiC officially launched. This post won't discuss what... | 0 | 2020-03-14T20:13:47 | https://medium.com/@tristanfarkas/oxidemusic-and-the-somewhat-modern-tech-stack-it-uses-8e31adeb4b2 | About one month ago my personal project OxidemusiC officially launched.
This post won't discuss what OxidemusiC is and what it does, but if you're interested in learning more about the application I recommend you [visit the website](https://oxidemusic.com/).
This was one of my most time-consuming personal projects ever, and I did not exactly make it easy for myself. The project makes use of JavaScript, Python and Rust for the backend services. These languages all have their upsides and drawbacks. The initial version of OxidemusiC relied solely on Python. And that was fine when it was just an application I used with my group of friends. But when I actually released it, I noticed a big problem: what do I do when I create a breaking change and the app can't run reliably on the new version of the API? In comes [Cloudflare Workers](https://workers.cloudflare.com/) to save the day. Since this version endpoint would be the most requested endpoint, I wanted to run it on something that could handle unpredictable amounts of traffic.

###### This picture shows two things: the technical stack which OxidemusiC uses, and that I'm very much not a graphical designer.
Let's start from the top, all the way to the bottom.
So! Cloudflare Workers. I honestly had a great experience using Workers. It was easy getting used to the interface, and even though I have little to no JavaScript knowledge, it was super easy to build with all the documentation provided by Cloudflare. If you have not used Cloudflare Workers, I highly recommend you check them out [here](https://workers.cloudflare.com/). Right now, Workers does not have that big of a role within OxidemusiC. All it's used for currently is checking that the server is okay and that the user is on a compatible application version. Pretty basic.
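As a rough idea, a version-check Worker like the one described above could look something like this. The minimum version, endpoint shape and response format here are assumptions for illustration, not OxidemusiC's actual API:

```javascript
// Hypothetical sketch of a version-check Worker (the minimum version,
// endpoint shape and response format are assumptions, not OxidemusiC's API).
const MIN_SUPPORTED_VERSION = "2.0.0";

// Compare dotted "major.minor.patch" versions numerically.
function isSupported(clientVersion, minVersion = MIN_SUPPORTED_VERSION) {
  const a = clientVersion.split(".").map(Number);
  const b = minVersion.split(".").map(Number);
  for (let i = 0; i < 3; i += 1) {
    if (a[i] !== b[i]) return a[i] > b[i];
  }
  return true; // equal versions are supported
}

// Cloudflare Workers (service-worker syntax) respond to fetch events.
// The guard keeps this file runnable outside the Workers runtime too.
if (typeof addEventListener === "function") {
  addEventListener("fetch", (event) => {
    const url = new URL(event.request.url);
    const version = url.searchParams.get("version") || "0.0.0";
    const body = JSON.stringify({ ok: true, supported: isSupported(version) });
    event.respondWith(
      new Response(body, { headers: { "content-type": "application/json" } })
    );
  });
}
```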
Do you want to know what is not basic, though? [Rust](https://www.rust-lang.org/). Do you want headaches, emotional stress and questioning of your life choices? Then Rust is the programming language for you! I spent hours making this Rust application; let me give you a rundown of what it does.

###### You see that message where it shows how many users are listening along? That's all it does.
I could have probably made that feature in like twenty minutes using Python, but the problem is [Python is quite slow](https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/python3-gcc.html). So I looked at alternatives to use for implementing it; in the end it was either [Elixir](https://elixir-lang.org/) or Rust, and I ended up using Rust thanks to its amazing portability: once I have a compiled executable, that's all I need to run the application. While Rust has amazing portability, it is however a pretty different language. Sure, it outperforms most languages out there, and does not let you make mistakes that easily. But its fucking stupid, no-good, shit-ass ownership system put me on the edge of a mental breakdown. To quote the Rust docs:
## _"Because ownership is a new concept for many programmers, it does take some time to get used to."_
And indeed it does. Coming from Python, I'm just used to being able to do what the fuck I want to, when the fuck I want to, with variables. But it is also this ownership system that makes Rust safe and efficient. So I guess you're just forced to adapt sometimes. I spent about two hours figuring this shit out and screaming at the Rust compiler. But eventually I got greeted by this:

###### You can not imagine my relief knowing I was done with the Rust part.
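For readers who have not met the ownership system yet, here is a tiny sketch of the rule in action (an assumed example, not OxidemusiC's code): passing a `String` to a function moves it, and the original binding can no longer be used.

```rust
// Tiny illustration (assumed example, not OxidemusiC's code) of Rust's
// ownership rule: a String passed by value is moved, not copied.

// `s` is moved into this function; the caller's original binding is
// invalidated by the move.
fn take_ownership(s: String) -> String {
    s
}

fn main() {
    let song = String::from("some track");
    let queued = take_ownership(song);

    // Uncommenting the next line fails to compile:
    // "borrow of moved value: `song`"
    // println!("{}", song);

    println!("{}", queued);
}
```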
So that is deployed and has been running quite smoothly so far, at least there have not been any runtime issues _yet_ 🤞. That's enough talking about things that make no sense whatsoever, so let's move on to everyone's favorite programming language, Python!
Oh Python, so simple and so flexible. While I definitely spent the most time writing this code, the code I wrote just made sense, and I can access a variable without getting twenty different errors complaining that I can't move it (_cough Rust cough_). Python does all of the heavy lifting: the actual music playback, managing what parties you can join, party creation, account management, and that's about it. All pretty straightforward things. I would talk about the issues I ran into when making the Python application, if there were any. Sure, I ran into some minor bugs when making the Python side of things, but nothing notable really.
I just figured I'd give the public an understanding of how OxidemusiC is structured, since I've been quite quiet about the technical details of the application. On a final note, since I know most of you are interested in security stuff, there's some somewhat exciting news coming soon, hopefully. | farkas
281,432 | series: Inspiring Stories: Zoey Zou | In honor of International Women's day, we are interviewing inspiring women from our community who are going to share the story of how they got into Tech and how they got to where they are today. In this post, I am interviewing one of Hack Your Future graduates: Zoey Zou | 0 | 2020-03-17T10:15:03 | https://dev.to/azure_heroes/series-inspiring-stories-zoey-zou-32kj | iwd2020, womenintech, devlive, azureheroes | ---
title: series: Inspiring Stories: Zoey Zou
published: true
description: In honor of International Women's day, we are interviewing inspiring women from our community who are going to share the story of how they got into Tech and how they got to where they are today. In this post, I am interviewing one of Hack Your Future graduates: Zoey Zou
cover_image: https://dev-to-uploads.s3.amazonaws.com/i/ax60i9m8nsmqyx4zboex.jpg
tags: iwd2020, womenintech, devlive, azureheroes
---
As a society we tend to focus on titles and roles, we tend to forget that behind each title there is a person, a person who has a story to tell and every person’s story is unique.
In honor of International Women's day, we are interviewing inspiring women from our community who are going to share the story of how they got into Tech and how they got to where they are today.
In this post, I am interviewing [Zoey Zou](https://twitter.com/zoeyzou0117), who's based in Denmark.
# Meet Zoey Zou

I’m Zoey, a Chinese girl in Denmark or any possible location. I’m currently a front-end engineer in a Fintech scale-up called [Pleo](https://www.pleo.io/da/), I code in Typescript with React-Redux plus any new and shiny tools available. I’m a community enthusiast: I co-organize the local JavaScript meetup [CopenhagenJS](https://copenhagenjs.dk/), as well as volunteer in an NGO coding school, called [HackYourFuture](https://www.hackyourfuture.dk/). I like MCing events, hosting workshops, passing on knowledge about any topic that could help people pursue their dreams. I love inspiring people by sharing my own journey especially my struggles. I like public speaking, although I have a bizarre and even mildly awkward but refreshing style.
Twitter: [@zoeyzou0117](https://twitter.com/zoeyzou0117)
LinkedIn: [Zoey Zou](https://www.linkedin.com/in/zoeyzou2018/)
**When did you first become interested in technology and what sparked this interest?**
I’ve been a gamer (a terrible but passionate one) since I was a teen. At that time, it was mainly because I like new things and using my brain for something other than school work. I’ve never had a chance to get too deep into tech, but I’ve always been a quick learner when it comes to this exotic field. I attended Rails Girls 2-day coding workshop, and at that time it didn’t make me push all the way till the end; however, it did plant a seed. That’s why later on, when I needed to make decisions about my career change, I chose coding in the end.
**What education do you have?**
This is actually a very good question, because I do have an unconventional educational background. At the age of 15, I made the bold decision to leave high school because I was a "*genius*" and a "*cool kid*". Back in China, it is a dead end not to finish your higher education, because there's no way back. At some point, I decided to join the continuing education for adults — it's better than nothing. So, studying and working side by side for 6 years, I first got an associate degree in Business English and then my B.A.
**Describe your way towards your first job in tech; how did you land this job?**
As a high school dropout, I’ve worked many unskilled jobs until I targeted myself on being in international sales. After that, I was lucky enough to work in the sourcing field for a Venezuelan company for some years. In 2014, the political issues in Venezuela broke out and took away my job. Then I started thinking about what I needed to do to have a better future.
I learned English and it made my first jump in life, and I believe technology is gonna make the second. I just didn’t know how. I found a job opportunity from a startup which aims to provide a full set of VR business solutions. I saw my opportunity, so I did everything I could to get it: writing them my whole life story, asking if I can just buy my tickets and go visit them, showing how much I wanted it. I got the job, and I started my almost three years of being a tech startup octopus.
{% twitter 1177590986559623174%}
**Do you have any role models that influenced you?**
I came across many successful women, which also includes you. I think by watching their success, I just told myself that I could do the same. In the beginning, as I was extremely active in different communities, I was constantly surprised that so many people made it so far. In a way, I took everyone I knew as a role model and I tried to understand how they made it.
**Who were/are the biggest supporters in your career?**
At the moment I’d say, people I helped, especially those who also made it. I feel very proud to see the kindness being passed forward. Being able to help people also proves my worth to myself, and this is a very effective drug for an incurable disease called imposter syndrome. People that come from the same community, especially [HackYourFuture](https://www.hackyourfuture.dk/), also appear to be part of my mental support, as I know they would help me no matter what.
**Tell us more about your current job – e.g. what do you like most about your role?**
I currently work as a frontend engineer in a Fintech scale-up called [Pleo](https://www.pleo.io/da/). Our target clients are businesses and we solve their spending problems with technology. My team builds the most user-facing part of the product — we build both the web and mobile experiences for company employees. In my eyes, my job is a combination of a service provider as well as a sustainable creator. When building the customer interface, we need to balance both "*the most intuitive experience for the user*" and "*the most scalable architecture*". I think it is very challenging and it requires a lot of experience.
As a front-end developer in a JavaScript world, the most challenging and rewarding part is that you get to work within a fast-moving industry so you always have new things to learn. **The limit is your imagination**.
{% twitter 1177227838782672896%}
**What does your typical day look like?**
My team is running a bi-week sprint, and because many engineers are remote, our stand-up time is in the middle of the afternoon. Usually, after a sprint planning, everyone has tasks at hand and they create their own tickets to track. I start my day with a cup of coffee, reading about new stuff that’s going on in the community, new tools, new hypes, etc. It is very important to work in an office that provides good coffee, or at least it has to be close to a good cafe — a genuine tip.
If there’s PR (pull request) that has been reviewed, I’ll solve the comments first. If there's PR assigned to me, I’ll also review it before I start working on it. During the day of feature development, one can encounter different issues. I get people to do pair programming with me when that happens. I always keep my tickets updated so my team knows where I am at — this is especially important when at some point you are trying to build an asynchronous work style.
After lunch, I will still have some time for PR reviewing, and some short focus time for building features. Then here comes the standup, where we talk about blocking issues or the progress we are at. After standup, again some focus time.
I don’t really have a fixed ritual to finish my day. Then I’ll have some leisure time afterward. It is tempting to code or do related stuff all the time, but a healthy balance can allow us to travel further on this path. So I tend to watch some movies or read some unrelated books in the evenings.
{% twitter 1197557359389921280%}
**What do you do in your free time?**
*I am a community freak*. I try to join different communities and be a part of them. I mentor at an NGO coding school called [HackYourFuture](https://www.hackyourfuture.dk/), of which I was also a student. I like to inspire people with my stories, I also like to see people pursue things they thought were impossible. On the side, I help co-host a local JavaScript meetup [CopenhagenJS](https://copenhagenjs.dk/). I contribute my ideas and effort to make it an awesome meetup alongside with other awesome people.
Other than that, I've been playing board games since I was a kid — never was a great one though.
I also experiment with the different concepts I learned from different people and sources. For example, I tried to be a minimalist, vegetarian, pseudo-Buddhist, handicrafter for a while.
**What advice will you give to women and girls who dream about a career in tech?**
The first thing to know is that it's normal to feel vulnerable and frustrated at times when we are pursuing something that's outside of our comfort zone. When you start out, try to find people of your kind — they don't necessarily help you achieve your goal in a direct way, but more in an indirect way. They share your emotional ups and downs, so you know that you are not alone. You can show each other every little achievement, so you encourage each other to go on. I believe we, especially women, are hordes of lionesses: when we hunt alone, we fail quite often, but when we work together, we have great power.
In addition, find a mentor who can point you in the right direction, or even help you within the industry. Be grateful to people around you; they can feel it and they will support you even further.
In the end, don't give up. Take a break, breathe fresh air, eat some ice cream, watch some funny movies, but don't give up. Always come back to where you were, as not giving up is a sort of success. | sherrrylst
281,434 | Discovering the terminal | Originally posted on my blog. Table of Contents What is a terminal? Your first steps Ma... | 5,639 | 2020-03-14T22:25:03 | https://dev.to/brouberol/discovering-the-terminal-5ddf | bash, tutorial, computerscience, beginners |
Originally posted on my <a href="https://blog.balthazar-rouberol.com/discovering-the-terminal">blog</a>.
## Table of Contents
<!-- MarkdownTOC autolink="true" levels="2" autoanchor="true" -->
- [What is a terminal?](#what-is-a-terminal)
- [Your first steps](#your-first-steps)
- [Managing files](#managing-files)
- [Learning new options](#learning-new-options)
- [Command Input/Output streams](#command-inputoutput-streams)
- [Composing commands](#composing-commands)
- [Escaping from bad situations](#escaping-from-bad-situations)
- [Summary](#summary)
- [Going further](#going-further)
<!-- /MarkdownTOC -->
# Discovering the terminal
When people picture a programmer, it’s not uncommon for them to imagine
someone sitting in front of a computer screen displaying undecipherable
streams of text going by really fast, like in The Matrix. Let’s set the
record straight. This is not true, at least for the most part. The
Matrix however got some things right. A programmer works with *code*,
which, as its name indicates, has to be learned before it can be
understood. Anyone not versed in the trade of reading and writing code
would only see gibberish. Another thing these movies usually get right
is the fact that a programmer types commands in a *terminal*.
<a id="what-is-a-terminal"></a>
## What is a terminal?
Most of the applications people use everyday have a <span class="gls" key="gui">Graphical User Interface (GUI)</span>. Think about Photoshop,
Firefox, or your smartphone apps. These application have immense
capabilities, but the user is mostly bound by the features implemented
in them in the first place. What if you suddenly wanted to have a new
feature in Photoshop that just wasn’t available? You would possibly end
up either waiting for the newest version to be released, or have to
install another application altogether.
One of the most important tools in a programmer's toolbox is of a
different kind though. It's called the *<span class="gls" key="terminal">terminal</span>*, which is a *<span class="gls" key="cli">command-line</span> application*. That is to say, you
enter a command, your computer executes that command, and displays the
output in the terminal.
In other words, this is an application in which you give your computer
orders. If you know how to ask, your computer will be happy to comply.
However, if you order it to do something stupid, it will obey.
> — You: “Computer, create that folder.”
>
> — Computer: “Sure.”
>
> — You: “Now put all the files on my Desktop in that new folder.”
>
> — Computer: “No problem.”
>
> — You: “Now delete that folder forever with everything inside.”
>
> — Computer: “Done.”
>
> — You: “Wait, no, my mistake, I want it back.”
>
> — Computer: “Sorry, it’s all gone, as you requested.”
>
> — You: “…”
>
> — Computer: “I’m not even sorry.”
Never has this famous quote been more true:
> With great power comes great responsibility
Learning your way around a terminal really is a fundamental shift in how
you usually interact with computers. Instead of working inside the
boundaries of an application, a terminal gives you free and unlimited
access to every part of the computer. The training wheels are off, and
you are only limited by the number of commands you know. Consequently,
learning how to use the terminal will give you insights about how your
computer works. Let’s see what we can do. We’ll start small, but trust
me, it gets better.
<a id="your-first-steps"></a>
## Your first steps
First off, let’s define a couple of words.
A terminal is an application you can open on your computer, in which
you’ll be able to type commands in a command line interface (CLI). When
you hit the <kbd>Enter</kbd> key, the command will be executed by a
shell, and the result is displayed back in the terminal.
In the early days of computing, video terminals were actual physical
devices, used to execute commands onto a remote computer that could take
a whole room.

<span class=imgcaption>The DEC VT100, a physical video terminal dating back 1978</span>
Nowadays, terminals are programs run into a graphical window, emulating
the behavior of the video terminals of old.

<span class=imgcaption>This is what a terminal looks like nowadays.</span>
Different operating systems come with different terminals and different
shells pre-installed, but the most common shell out there is certainly `bash`.
Before we go any deeper, let’s open a terminal! The way you do this however depends on your operating system.
### Opening a terminal
#### On MacOS
Open the `Finder` app, click on `Applications` on the left pane, then
enter the `Utilities` directory, then execute the `Terminal` app. You
can also use the Spotlight search by clicking on the magnifying glass
icon on the top right corner of your screen (or use the <kbd>Cmd</kbd>
<kbd>Space</kbd> keyboard shortcut), and type `Terminal`.
#### On Linux
Depending on the Linux distribution you use, it might come with XTerm,
Gnome-Terminal or Konsole pre-installed. Look for any of these in your
applications menu. A lot of Linux installations use the
<kbd>Ctrl</kbd> - <kbd>Alt</kbd> - <kbd>T</kbd> keyboard shortcut to
open a terminal window.
#### On Windows
Windows is a special case: Linux and MacOS come with bash pre-installed, whereas Windows does not. It comes with 2 built-in shells: `cmd` and `PowerShell`. The rest of this tutorial and its following chapters however assume you are running bash. The reason for that is that `bash` is pretty much ubiquitous, whether on personal workstations or on servers. On top of that, bash comes with a myriad of tools and commands that will be detailed in the next chapter.
Fortunately, Windows 10 can now natively run bash since 2019 by using the _Windows Subsystem for Linux_ (WSL). We suggest you follow the instructions from this [tutorial](https://itsfoss.com/install-bash-on-windows/).

<span class=imgcaption>Running bash on Windows is now possible</span>
### Running our first command
When you open your terminal, the first thing you will see is a *<span class="gls" key="prompt">prompt</span>*. It is what is displayed every
time the shell is ready for its next order. It is common for the prompt
to display information useful for the user. In my case, `br` is my
username, and `morenika` is my computer’s name (its *<span class="gls" key="hostname">hostname</span>*).

<span class=imgcaption>`br@morenika:~$` is my prompt</span>
The black rectangle is called a *cursor*. It represents your current
typing position.
<div class="Note" markdown="1">
What *your* prompt actually looks like depends on your operating system
and your shell. Don’t worry if it does not look exactly the same as the
one in the following examples.
</div>
The first command we will run is `ls` (which stands for *list
directory*). By default, that command lists all directories and files
present in the directory we currently are located into. To run that
command, we need to type `ls` after the prompt, and then hit
<kbd>Enter</kbd>
The text that is displayed after our command and before the next prompt
is the command’s *output*.
```bash
br@morenika:~$ ls
Android code Downloads Music
AndroidStudioProjects Desktop Dropbox Pictures
bin Documents Firefox_wallpaper.png Videos
```
These are all the files and directories located in my personal directory
(also called *<span class="gls" key="homedir">home directory</span>*).
Let’s open a graphical file explorer and check, just to be sure.

<span class=imgcaption>As expected, we weren’t lied to</span>
The shell is sensitive to casing: a lower-case command is not the same
thing as its upper-case equivalent.
```bash
br@morenika:~$ LS
bash: LS: command not found
```
As of now, we will ignore the `br@morenika:~$` prompt prefix and will
only use `$`, to keep our examples short.
### Commands arguments
In our last example, we listed all files and directories located in my
home directory. What if I wanted to list all files located in the `bin`
directory that we can see in the output? In that case, I could pass
`bin` as an *argument* to the `ls` command.
```bash
$ ls bin
bat fix-vlc-size lf terraform vpnconnect
clean-desktop itresize nightlight tv-mode
```
By passing the `bin` argument to the `ls` command, we told it where to
look, and we thus changed its behavior. Note that it is possible to pass
more than one argument to a command.
```bash
$ ls Android bin
Android:
Sdk
bin:
bat clean-desktop fix-vlc-size itresize lf nightlight terraform tv-mode vpnconnect
```
In that example, we passed two arguments to `ls`: `bin` and `Android`.
`ls` then proceeded to list the content of each of these 2 directories.
Think about how you would have done that in a File explorer GUI. You
probably would have gone into the first directory, then gone back to the
parent directory and finally proceeded with the next directory. The
terminal allows you to be more efficient.
### Command options
Now, let’s say I’d also like to see how big files located under `bin`
are. No problem! The `ls` command has *options* we can use to adjust its
behavior. The `-s` option causes `ls` to display each file size, in
kilobytes.
```bash
$ ls -s bin
total 52336
4772 bat 4 itresize 44296 terraform
4 clean-desktop 3244 lf 4 tv-mode
4 fix-vlc-size 4 nightlight 4 vpnconnect
```
While this is nice, I’d prefer to see the file size in a human-readable
unit. I can add the `-h` option to further specify what `ls` has to do.
```bash
$ ls -s -h bin
total 52M
4.7M bat 4.0K itresize 44M terraform
4.0K clean-desktop 3.2M lf 4.0K tv-mode
4.0K fix-vlc-size 4.0K nightlight 4.0K vpnconnect
```
I can separate both options with a space, or also group them as one
option.
```bash
$ ls -sh bin
total 52M
4.7M bat 4.0K itresize 44M terraform
4.0K clean-desktop 3.2M lf 4.0K tv-mode
4.0K fix-vlc-size 4.0K nightlight 4.0K vpnconnect
```
I’d finally like each file and its associated size to be displayed on
its own line. Enter the `-1` option!
```bash
$ ls -s -h -1 bin
total 52M
4.7M bat
4.0K clean-desktop
4.0K fix-vlc-size
4.0K itresize
3.2M lf
4.0K nightlight
44M terraform
4.0K tv-mode
4.0K vpnconnect
```
Short options make it easy to type a command quickly, but the result can
be hard to decipher after a certain number of options, and you might
find yourself wondering what the command is doing in the first place.
Luckily, options can have a *long form* and a *short form*. For example,
`-s` can be replaced by its long form `--size`, and `-h` by
`--human-readable`.
```bash
$ ls --size --human-readable -1 bin
total 52M
4.7M bat
4.0K clean-desktop
4.0K fix-vlc-size
4.0K itresize
3.2M lf
4.0K nightlight
44M terraform
4.0K tv-mode
4.0K vpnconnect
```
The command feels way more self-explanatory this way! You’ll notice that
we still used the short form for the `-1` option. The reason for that is
that this option simply does not have a long form.
### Takeaways
- A terminal is an application through which you interact with a shell
- You can execute commands by typing them in the shell’s command-line
and hitting <kbd>Enter</kbd>
- A command can take 0, 1 or more arguments
- A command’s behavior can be changed by passing options
- By convention, options can have multiple forms: a short and/or
a long one.

<span class=imgcaption>Here is a summary of the different parts of a command</span>
<a id="managing-files"></a>
## Managing files
So far, we’ve seen how to run a command, how to change its behavior by
passing command-line arguments and options, and that `ls` is used to
list the content of a directory. It’s now time to learn how to manage
your files: creating files and directories, copying and moving them
around, creating links, etc. The goal of this section is to
teach you how to do everything you usually do in your file explorer, but
in your terminal.
### `pwd`, `cd`: navigating between directories
Up to now, every command we’ve run were executed from our *<span class="gls" key="homedir">home directory</span>* (the directory in which
you have all your documents, downloads, etc). The same way you can
navigate directories in a graphical file editor, you can do it in a
terminal as well.
Before going anywhere, we first need to know where we are. Enter
`pwd`, which stands for *print working directory*. This command displays
your current working directory (*a.k.a.* where you are).
```bash
$ pwd
/home/br
```
Now that we found our bearings, we can finally move around. We
can do that with the `cd` command, standing for (you might have guessed
it) *change directory*.
```bash
$ cd Documents
$ pwd
/home/br/Documents
$ cd ./invoices
$ pwd
/home/br/Documents/invoices
$ cd 2020
$ pwd
/home/br/Documents/invoices/2020
```
As `2020` is empty, we can’t go any further. However, we can also
go back to the *parent directory* (the directory containing the one we
are currently into) using `cd ..`.
```bash
$ pwd
/home/br/Documents/invoices/2020
$ cd ..
$ pwd
/home/br/Documents/invoices
```
We don’t always have to change directory one level at a time. We can
go up multiple directories at once.
```bash
$ pwd
/home/br/Documents/invoices
$ cd ../..
$ pwd
/home/br
```
We can also go several directories down at the same time.
```bash
$ pwd
/home/br
$ cd Documents/invoices/2020
```
Running `cd` without arguments takes you back to your home directory.
```bash
$ pwd
/home/br/Documents/invoices/2020
$ cd
$ pwd
/home/br
```
Running `cd -` takes you back to your previous location.
```bash
$ pwd
/home/br/Documents/invoices/2020
$ cd /home/br
$ cd -
$ pwd
/home/br/Documents/invoices/2020
```
You might wonder why `cd ..` takes you back to the parent directory.
What does `..` mean? To understand this, we need to explore how *paths*
work.
### Paths: root, absolute and relative
If you have never used a terminal before, and have only navigated
between directories using a graphical file explorer, the notion of
*path* might be a bit foreign. A path is a unique location of a file or
a folder on your file system. The easiest way to explain it is by
describing how files and directories are organized on your disk.
The base directory (also called <span class="gls" key="rootdir">root
directory</span>, and referred to as `/`) is the highest directory in the
hierarchy: it contains every single other file and directory in your
system, each of these directories possibly containing others, to form a
structure looking like a tree.

<span class=imgcaption>Your disk is organized like a tree</span>
Let’s look a what that `/` root directory contains.
```bash
$ ls /
bin boot dev etc home lib lib64 lost+found media
mnt opt proc root run sbin srv sys tmp usr var
```
Ok so, there are a couple of things in there. We have talked about home
directories before, remember? It turns out that all the users’ home
directories are located under the `home` directory. As `home` is located
under `/`, we can refer to it via its *<span class="gls" key="abspath">absolute path</span>*, that is to say the full path to a
given directory, starting from the root directory. In the case of
`home`, its absolute path is `/home`, as it is directly located under
`/`.
Any path starting with `/` is an absolute path.
We can then use that path to inspect the content of the `home` directory
with the `ls` command.
```bash
$ ls /home
br
```
The absolute path of `br` is `/home/br`. Each directory is separated
from its parent by a `/`. This is why the root directory is called `/`:
it is the only directory without a parent.
Any path that does not start with `/` will be a *<span class="gls" key="relpath">relative path</span>*, meaning that it will be relative to
the current directory. When we executed the `ls bin` command, `bin` was
actually a relative path. Indeed, we executed that command while we were
located in `/home/br`, meaning that the absolute path of `bin` was
`/home/br/bin`.
Each folder on disk has a link to itself called `.`, and a link to its
parent folder called `..`.

<span class=imgcaption>The `.` link points to the folder itself and the `..` link points to the folder’s parent</span>.
We can use these `.` and `..` links when constructing relative paths.
For example, were you currently located in `/home/br`, you could refer to
the `Android` folder as `./Android`, meaning “the `Android` folder
located under `.` (the current directory)”.
```bash
$ ls ./Android
Sdk
```
Were you located under `/home/br/Android`, you could also refer to
`/home/br/Downloads` as `../Downloads`.

<span class=imgcaption>Following `Android`’s `..` link takes you back to the `home` directory</span>
`ls -a` allows you to see *hidden files*, that is, all files whose name
starts with a dot. We can indeed see the `.` and `..` links!
```bash
$ ls -a
. .. Sdk
```
### `mkdir`: creating directories
<div class="Note" markdown="1">
In order to make sure that we don’t mess with your personal files when
testing out the commands from this chapter, we will start by creating a
new directory to experiment in, called `experiments`.
</div>
You can create a new directory using the `mkdir` command, which stands
for *make directories*. By executing the command `mkdir experiments`,
you will create the `experiments` directory in your current directory.
Let’s test this out.
```bash
$ ls
Android code Downloads Music
AndroidStudioProjects Desktop Dropbox Pictures
bin Documents Firefox_wallpaper.png Videos
$ mkdir experiments
$
```
Notice that the `mkdir` command did not display anything. It might feel
unusual at first, but this is the philosophy of these commands: only
display something if something went wrong. In other terms, no news is
good news.
We can now check that the directory has been created.
```bash
$ ls
Android bin Desktop Downloads experiments Music Videos
AndroidStudioProjects code Documents Dropbox Firefox_wallpaper.png Pictures
```
We can also see that directory by opening our file explorer.

<span class=imgcaption>The directory we have just created in the terminal can be seen in our file explorer. The terminal displays the information as text, and the file explorer displays it in a graphical form.</span>
Running `mkdir` on a pre-existing directory causes it to fail and
display an error message.
```bash
$ mkdir experiments
mkdir: experiments: File exists
```
What if we wanted to create a directory in `experiments` called `art`,
and another directory called `paintings`, itself located inside `art`?
```bash
$ mkdir experiments/art/paintings
mkdir: experiments/art: No such file or directory
```
Something clearly went wrong here. `mkdir` is complaining that it cannot
create `paintings` within `experiments/art`, as the latter does not
exist. We could create `art` and then `paintings` in two separate
commands, but fortunately, `mkdir` provides us with a `-p` option that
makes it succeed even if the directories already exist, and create each
missing parent directory along the way.
```bash
-p, --parents: no error if existing, make parent directories as needed
```
This looks like exactly what we need in that case! Let’s see if it works
as expected.
```bash
$ mkdir -p experiments/art/paintings
$ ls experiments
art
$ ls experiments/art
paintings
$ ls experiments/art/paintings
$
```
### `cp`, `mv`: moving files around
`cp` (standing for `copy`) allows you to copy a file or a directory to
another location.
```bash
$ cp Documents/readme experiments/art
$ ls experiments/art
paintings readme
$ ls Documents
readme
```
You can also move the file from a location to another by using `mv`.
```bash
$ mv experiments/art/readme experiments
$ ls experiments
art readme
$ ls experiments/art
paintings
```
That does not seem to work on directories however.
```bash
$ cp experiments/art experiments/art-copy
cp: experiments/art is a directory (not copied).
```
By default, `cp` only works on files, not on directories. We need to
use the `-r` option to tell `cp` to recursively copy `experiments/art`
to `experiments/art-copy`, meaning `cp` will copy the directory and
every file and directory it contains.
```bash
$ cp -r experiments/art experiments/art-copy
$ ls experiments
art-copy art readme
$ ls experiments/art
paintings
$ ls experiments/art-copy
paintings
```
Finally, you can use `mv` to rename a file or a directory. It might
sound surprising that there is no `rn` or `rename` command, but
renaming a file is actually just moving it to another location in the
same directory.
```bash
$ mv experiments/readme experiments/README
$ ls experiments
README art-copy art
```
### `rm`: removing files and directories
The `rm` command allows you to delete files and directories.
<div class="Warning" markdown="1">
**Be careful with `rm`**: when a file is deleted, it is not moved to the
trash; it is gone.
```bash
$ rm experiments/README
$ ls experiments
art-copy art
```
</div>
`rm` behaves like `cp`: it only allows you to remove directories by
using the `-r` option.
```bash
$ rm experiments/art
rm: experiments/art: is a directory
$ rm -r experiments/art
$ ls experiments
art-copy
$ rm -r experiments/art-copy
$ ls experiments
$
```
### `ln`: creating links
Have you ever created a shortcut to a file on your desktop? Behind the
scenes, this works using a *symbolic link*. A link points to the
original file, and allows you to access that file from multiple places,
without actually having to store multiple copies on disk.
We can create such a link by using the `ln -s` command (`-s` stands for
*symbolic*).
```bash
$ pwd
/home/br
$ ln -s Documents/readme Desktop/my-readme
```
Using the `-l` option of `ls`, we can see where a link points to.
```bash
$ ls -l Desktop
total 0
lrwxr-xr-x 1 br br 21 Jan 17 16:48 my-readme -> /home/br/Documents/readme
```
<div class="Note" markdown="1">
My personal mnemonic to remember the order of arguments is by
remembering *s for source*: the source file goes after the `-s` option.
`ln -s <source> <destination>`
</div>
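If you only want to know a link's target, without the rest of the `ls -l` output, the `readlink` command prints it on its own. A quick sketch, using made-up `/tmp` paths:

```bash
touch /tmp/original                 # the real file
ln -sf /tmp/original /tmp/shortcut  # -f replaces the link if it already exists
readlink /tmp/shortcut              # prints: /tmp/original
```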
### `tree`: visualizing files and subfolders
`tree` displays the content of the current directory (or of the
directory passed as argument) and its subfolders in a tree-like
representation. It is very useful to have a quick look at the current
content of a directory.
```bash
$ tree experiments
experiments
|__ art
|__ paintings
2 directories, 0 files
```
<div class="Note" markdown="1">
`tree` might not be installed by default, depending on your system. We
mention it here as we will re-use it throughout the chapters.
</div>
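If `tree` is missing and you cannot install it, `find` can serve as a rough substitute: it prints every file and directory under a path, one per line, without the pretty tree drawing.

```bash
mkdir -p experiments/art/paintings
find experiments
# experiments
# experiments/art
# experiments/art/paintings
```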
<a id="learning-new-options"></a>
## Learning new options
### Getting help
If you are wondering how you will be able to remember all these options,
don’t worry. Nobody expects you to know all of the options of all the
commands by heart. You can rely on the commands’ documentation instead
of having to memorize them all.
Most of the commands out there take a `-h` (or `--help`) option that
will display the list of options the command itself can take, and what
they do.
```bash
$ ls --help
Usage: ls [OPTION]... [FILE]...
List information about the FILEs (the current directory by default).
Sort entries alphabetically if none of -cftuvSUX nor --sort is specified.
Mandatory arguments to long options are mandatory for short options too.
-a, --all do not ignore entries starting with .
-A, --almost-all do not list implied . and ..
--author with -l, print the author of each file
-b, --escape print C-style escapes for nongraphic characters
--block-size=SIZE with -l, scale sizes by SIZE when printing them;
e.g., '--block-size=M'; see SIZE format below
-B, --ignore-backups do not list implied entries ending with ~
-c with -lt: sort by, and show, ctime (time of last
modification of file status information);
with -l: show ctime and sort by name;
otherwise: sort by ctime, newest first
[cut for brevity]
```
It’s interesting to note that some options accept both short and long
forms, like `-a/--all`, while some others only accept a short form
(`-c`) or a long form (`--author`). There’s no real rule there, only
conventions. A command might not even accept a `--help` option, but most
if not all the common ones do.
**Note**: `-h` is not always the short option for `--help`. Indeed, we’ve seen
that `ls --help` prints an overview of all available options, whereas
`ls -h` displays file sizes in a human-readable format!
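To see the difference for yourself, compare a long listing with and without `-h`. The option only changes how sizes are printed, so it is most useful combined with `-l` (the file below is just an example; any file will do):

```bash
ls -l /usr/bin/env     # size shown in raw bytes, e.g. 48536
ls -lh /usr/bin/env    # same size, shown human-readable, e.g. 48K
```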
### Reading the manual
Sometimes, there’s no `--help` option available, or its output isn’t
clear or verbose enough for your taste, or it is too long to navigate
easily. In those cases, it’s often a good idea to read the command’s
*`man`* page (*man* stands for *manual*).
Let’s give it a go, by typing the following command.
```bash
$ man ls
```

<span class=imgcaption>`man ls` displays the manual of the `ls` command: everything you need to know about what `ls` can be used for.</span>
#### Reading the synopsis
`man` provides you with a *synopsis*, describing a specific usage of the
command on each line, along with the associated options and arguments.
The `ls` synopsis is
```
SYNOPSIS
       ls [OPTION]... [FILE]...
```
`[OPTION]` and `[FILE]` mean that both options and files are
*optional*. As we’ve seen at the beginning of this chapter, just running
`ls` on its own prints the content of the current working directory.
The `...` following `[OPTION]` and `[FILE]` means that several option
and file arguments can be passed to `ls`, as illustrated by the
following example.
```bash
$ ls -sh Android bin
Android:
total 4.0K
4.0K Sdk
bin:
total 52M
4.7M bat 4.0K fix-vlc-size 3.2M lf 44M terraform 4.0K vpnconnect
4.0K clean-desktop 4.0K itresize 4.0K nightlight 4.0K tv-mode
```
If we look at the `mkdir` synopsis, we see that options are, well,
optional, but we must provide it with one or more directories to create,
because `DIRECTORY` is not between square brackets.
```
SYNOPSIS
       mkdir [OPTION]... DIRECTORY...
```
The `DESCRIPTION` section will list all possible options (short and long
forms), along with their effect.
#### Navigating the manual
When you run `man`, the manual of the command will be displayed in a
*<span class="gls" key="pager"> pager</span>*, a piece of software that
helps the user get the output one page at a time. One of the most common
pager commands is `less` (which is incidentally the more featureful
successor of `more`, because *less is more*). Being dropped into a pager
for the first time is confusing, as you might not know how to
navigate.
The most useful commands you can type within `less` are:
- `h`: display the `less` help
- `q`: exit `less`
- `/pattern`: look for the input text located after the cursor’s
current position
- `n`: go to the next pattern occurrence
- `?pattern`: look for the input text located before the cursor’s
current position
- `N`: go to the previous pattern occurrence
- up or down arrow: navigate up or down a line
- PageUp and PageDown keys: navigate up or down a page
- `g`: go to the beginning of the file
- `G`: go to the end of the file
For example, if you’re not sure what the `-s` `ls` option is doing, you
can type `man ls` and then `/-s` when you are in `less`. Type `n` until
you find the documentation for `-s, --size` (or `N` to go back if you
went too far). Once you’re done, you can exit `less` by typing `q`.
While `man` uses `less` under the hood to help you read documentation,
you can also use `less` directly to page through any file on your disk.
For example, I can use this command on my computer.
```bash
$ less Documents/readme
```

You can look into the `less` help itself, by typing `h` when reading a
man page, by typing `less --help` in a terminal, or even `man less`!
Exactly like `ls`, `man` itself is a command, and like most commands,
it has a manual! You can read more about `man` itself by typing
```bash
$ man man
```

<span class="imgcaption">Lo and behold, the manual’s manual.</span>
<a id="command-inputoutput-streams"></a>
## Command Input/Output streams
Before we can fully explain what makes the shell so powerful, we need to
explain what an *<span class="gls" key="iostream">Input Output
stream</span>* is. Every time we run a command, the shell executes a
*process*, which is in charge of running the command and
communicating its output back to the terminal. Input/Output streams are
the way the shell sends input to a process and dispatches output from
it.
Each process has 3 streams by default:
- `stdin` (or *standard input*): provides input to the command
- `stdout` (or *standard output*): displays the command’s output
- `stderr` (or *standard error*): displays the command’s error
Each of these streams has an associated *<span class="gls" key="fd">file descriptor</span>*, a number used by the shell to
reference that stream. `stdin` has the file descriptor 0, `stdout` has
1, and `stderr` has 2.

<span class=imgcaption>`stdin` (file descriptor 0) is the process input stream, `stdout` (file descriptor 1) is the process output stream and `stderr` (file descriptor 2) is the process error stream.</span>
### Redirecting output to a file
It can be convenient to “save” the output of a command to a file, to
further process it at a later time, or to send it to someone else. You
can use the `>` operator to redirect the `stdout` of a command to a
file.
```bash
$ ls /home/br > ls-home.txt
```
We can then display the content of the `ls-home.txt` file using the
`cat` command.
```bash
$ cat ls-home.txt
Android code Downloads Music
AndroidStudioProjects Desktop Dropbox Pictures
bin Documents Firefox_wallpaper.png Videos
```
If the file doesn’t already exist, it will be created by the shell at
the moment of the redirection. If the file does exist at redirection
time, however, it will be overwritten, meaning that anything that file
used to contain will be replaced by the output of the redirected
command.
In the following example, we use the `echo` command, which simply sends
its argument text to its `stdout`.
```bash
$ cat ls-home.txt
Android code Downloads Music
AndroidStudioProjects Desktop Dropbox Pictures
bin
$ echo "Hello world!" > ls-home.txt
$ cat ls-home.txt
Hello world!
```
If you want to append the output of a command to a file without
overwriting its content, you can use the `>>` operator instead of `>`.
```bash
$ cat echoes
cat: echoes: No such file or directory
$ echo "Hey, I just met you, and this is crazy" >> echoes
$ echo "so here's my echo, so cat it maybe" >> echoes
$ cat echoes
Hey, I just met you, and this is crazy
so here's my echo, so cat it maybe
```
### Redirecting a file to a command’s input
The same way you can redirect a command’s `stdout` to a file, you can
redirect a file to a command’s `stdin`.
In the following example, we’ll redirect the content of the `echoes`
file to the input of the `wc -l` command, which counts the number of
lines of its input stream (or of the file(s) passed as arguments).
```bash
$ cat echoes
Hey, I just met you, and this is crazy
so here's my echo, so cat it maybe
$ wc -l < echoes
2
```
You can of course combine the `<`, `>` and `>>` operators in a single
command. In the following example, we will redirect the content of the
`echoes` file to the `wc -l` command, and redirect the output of that
command to the `echoes-lines` file.
```bash
$ wc -l < echoes > echoes-lines
$ cat echoes-lines
2
$ cat echoes
Hey, I just met you, and this is crazy
so here's my echo, so cat it maybe
```
### Redirecting multiple lines to a command’s input
You might find yourself in a situation where you want to pass multiple
lines of input to a command, and the `<` operator fails you in that
case, as it only deals with files. Luckily, your shell provides you with
the *heredoc* (here document) `<<` operator to accomplish this.
A heredoc redirection has the following syntax:
```
command << DELIMITER
a multi-line
string
DELIMITER
```
The `DELIMITER` can be any string of your choosing, although `EOF` (“end
of file”) is pretty commonly used.
Let’s consider the following example:
```bash
$ cat <<EOF
My username is br
I'm living at /home/br
EOF
```
This command will output the following block of text:
```
My username is br
I'm living at /home/br
```
You can redirect that block into a file by combining both the `<<` and
`>` operators.
```bash
$ cat > aboutme <<EOF
My username is br
I'm living at /home/br
EOF
$ cat aboutme
My username is br
I'm living at /home/br
```
### Redirecting `stderr`
Let’s consider the following example.
```bash
$ cat -n notthere > notthere-with-line-numbers
cat: notthere: No such file or directory
$ cat notthere-with-line-numbers
```
How come the `notthere-with-line-numbers` file is empty even after we
redirected the `cat -n notthere` command’s output to it? The reason for
that is, we didn’t really redirect the command’s output to that file, we
redirected the command’s `stdout`. As the file `notthere` does not
exist, the `cat` command fails, and displays an error message on its
`stderr` stream, which wasn’t redirected.
You can redirect a process stream by using its file descriptor.
Remember? 0 for `stdin`, 1 for `stdout` and 2 for `stderr`.
```bash
$ cat -n notthere 2>errors.txt
$ cat errors.txt
cat: notthere: No such file or directory
```
This `stderr` redirection can be illustrated by the following diagram.

<span class=imgcaption>Any errors displayed by `cat` will be redirected into the `errors.txt` file</span>
You can also redirect the command’s `stdout` to a file, and its `stderr`
to another file.
```bash
$ cat -n notthere >output.txt 2>errors.txt
$ cat output.txt
$ cat errors.txt
cat: notthere: No such file or directory
```

<span class=imgcaption>Normal output will be redirected into `output.txt` whereas errors are redirected into `errors.txt`</span>
It is also possible to redirect the command’s `stderr` into its `stdout`
using `2>&1`. This will effectively merge both streams into a single
one.
```bash
$ cat notthere > output.txt 2>&1
$ cat output.txt
cat: notthere: No such file or directory
```

<span class=imgcaption>`cat`’s stdout and stderr are merged together into a single stream</span>
The order of redirections has always felt a little bit weird to me.
You’d expect the following syntax to work, as it feels (at least to me)
more logical, by saying “redirect all errors to stdout, and redirect the
whole thing to a file”. It does not work though: redirections are
processed from left to right, so `2>&1` points `stderr` at whatever
`stdout` is connected to *at that moment* (still the terminal), and only
afterwards is `stdout` itself sent to the file.
```bash
$ cat notthere 2>&1 > output.txt
cat: notthere: No such file or directory
$ cat output.txt
$
```
<a id="composing-commands"></a>
## Composing commands
Being able to use a myriad of commands, each one with its own purpose,
is powerful. However, the true power of the shell comes from the fact
that these commands can be **combined**. This is where the terminal
takes a radical shift from the philosophy of graphical applications.
Where a <span class="gls" key="gui">GUI</span> allows you to use a set
of predefined tools, the shell allows you to assemble commands into your
own specialized tools.
This is done via the *<span class="gls" key="pipe">pipe</span>*: `|`,
allowing the redirection of a command’s output stream to another
command’s input stream.
```bash
$ command1 | command2
```
A pipe simply works by connecting the `stdout` stream of a command to
the `stdin` stream of the next command. In other words, the output of a
command becomes the input of the next.

<span class=imgcaption>`ls` is *piped* into `wc` by redirecting its output into `wc`’s input. A pipe allows to compose and assemble commands into pipelines, which makes the terminal so powerful.</span>
You can of course chain as many commands as you need and create command
pipelines.
```bash
$ command1 | command2 | command3 | ... | commandN
```
<div class="Note" markdown="1">
When you execute `command1 | command2`, your shell starts *all* commands
at the same time, and a command’s output is streamed into the next one
as the commands run.
</div>
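You can observe this streaming behaviour with `yes`, a command that prints `y` forever. The pipeline below still finishes instantly, because `head` exits after printing two lines and the rest of the pipeline is shut down with it.

```bash
yes | head -n 2
# y
# y
```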
For example, let’s imagine I’d like to count the number of files in my
`Downloads` folder. To that effect, I can combine `ls` and the `wc` (for
*word count*) commands. `wc`, when used with the `-l` option, counts
the number of lines in its input.
```bash
$ ls -1 ~/Downloads | wc -l
34
```
Now, let’s say I only want to count the number of pdf files in my
`Downloads` folder, not all of them. No problem, `grep` to the
rescue! `grep` allows you to filter its input on a given pattern (more on
`grep` in the next chapter). By using `grep pdf`, we filter the output
of `ls -1` down to the filenames containing “pdf”, and then count how
many filenames remain using `wc -l`.
```bash
$ ls -1 ~/Downloads | grep pdf | wc -l
22
```
### Going further: redirecting output to both the console and a file
The `tee` command allows you to write a command’s `stdout` to a file
while still displaying it in the console. This can be very useful if
you want to store the output of a command in a file, but still be able
to see what it’s doing in real time.
```bash
$ ls -1 | head -n 2 | tee output.txt
Android
code
$ cat output.txt
Android
code
```

<span class=imgcaption>`tee` is named after the T-splitter used in plumbing.</span>
<a id="escaping-from-bad-situations"></a>
## Escaping from bad situations
### Mistyped command, missing arguments
If you mistype a command, or forget to add arguments, you can find
yourself in a situation where your shell hangs, and nothing happens. For
example, type any of the following commands.
```bash
$ cat
```
```bash
$ echo 'hello world
```
The first command hangs because it is waiting for input on its `stdin`
stream, as no argument file was provided. In the case of the second
command, it is missing a matching single quote. In both cases, you can
get out of this situation by hitting <kbd>Ctrl</kbd> - <kbd>C</kbd>,
which kills the command by sending it an interrupt signal.
**Note**: if your shell is stuck on receiving input (like in the `cat` example),
you can also cleanly exit it by hitting <kbd>Ctrl</kbd> - <kbd>D</kbd>
which will send a special EOF (“end of file”) character, indicating to
the command that its input is now closed.
```bash
$ cat
hello
hello
world
world
# Ctrl-D
$
```
### Escaping characters
Imagine for a second that you had a file on disk named `my file`, and
you wanted to display its content using `cat`.
```bash
$ cat my file
cat: my: No such file or directory
cat: file: No such file or directory
```
In the previous example, the `cat` command was given 2 arguments, `my`
and `file`, neither of which corresponded to an existing file. We have 2
solutions to make this work: quoting the file name, or using an <span class="gls" key="escapechar">escape character</span>.
```bash
$ cat 'my file'
That file has spaces in it...
$ cat "my file"
That file has spaces in it...
```
By putting quotes around the file name, you are telling your shell that
whatever is between the quotes is a single argument.
As previously mentioned, we could also use the backslash escape
character, which indicates that the following character doesn’t
have any special meaning.
```bash
$ cat my\ file
That file has spaces in it...
```
By using `\ ` (a backslash character followed by a space), we indicate
to the shell that the space is simply a space, and should not be
interpreted as a separator between two arguments.
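The same escaping logic applies to other characters the shell treats specially, such as `$`, which normally triggers variable expansion. Note that double and single quotes behave differently here:

```bash
echo "$HOME"     # double quotes: the variable is expanded, e.g. /home/br
echo '$HOME'     # single quotes: printed literally: $HOME
echo \$HOME      # an escaped $ is also printed literally: $HOME
```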
<a id="summary"></a>
## Summary
In this chapter, we’ve discovered what a terminal is: an application in
which you can type text commands to have them executed by a program
called a shell.
Facing the terminal can be intimidating at first because you might not
always know what command to type. Learning your way around the terminal
is however part of the journey of becoming a software engineer. Like any
other powerful tool, it can be hard to learn but will also make you
immensely more productive once you get more accustomed to it.
The fundamental philosophy of working in a terminal is being free to
compose different tools in a way that might not have been initially
foreseen by the tools’ developers, by using pipes and stream
redirections. Instead of using a single tool that was only designed to
perform a finite set of tasks, you are free to assemble a patchwork of
unrelated commands, that can all work together by joining their input
and output streams.
In the <a href="https://blog.balthazar-rouberol.com/text-processing-in-the-shell">next chapter</a>, we will dig into text processing commands, which
can be immensely powerful when chained together with pipes.
<a id="going-further"></a>
## Going further
**1.1**: Look into the `ls` manual and research what the `-a` option is
doing. Run `ls -a ~/`. What are the `.` and `..` directories? What are
the files starting with a `.` ?
**1.2**: Run a command and redirect its output into a file, but display
any errors in the terminal.
**1.3**: Run a command and redirect its output into a file, and any
errors into a different file.
**1.4**: Run a command and redirect both its output and errors into the
same file, while also displaying them all on screen at the same time.
**1.5**: Use a heredoc redirection to create a new file with text in it.
**1.6**: Given an `echoes` file, what is the difference between
`wc -l echoes`, `cat echoes | wc -l` and `wc -l < echoes` ?
---
<footer>
<p>
<em>Essential Tools and Practices for the Aspiring Software Developer</em> is a self-published book project by Balthazar Rouberol and <a href=https://etnbrd.com>Etienne Brodu</a>, ex-roommates, friends and colleagues, aiming at empowering the up and coming generation of developers. We currently are hard at work on it!
</p>
<p>The book will help you set up a productive development environment and get acquainted with tools and practices that, along with your programming languages of choice, will go a long way in helping you grow as a software developer.
It will cover subjects such as mastering the terminal, configuring and getting productive in a shell, the basics of code versioning with <code>git</code>, SQL basics, tools such as <code>Make</code>, <code>jq</code> and regular expressions, networking basics as well as software engineering and collaboration best practices.
</p>
<p>
If you are interested in the project, we invite you to join the <a href=https://balthazar-rouberol.us4.list-manage.com/subscribe?u=1f6080d496af07a836270ff1d&id=81ebd36adb>mailing list</a>!
</p>
</footer>
| brouberol |
281,439 | shhh-cli, a Go CLI client to interact with Shhh from the terminal | Not that long ago I shared a post about Shhh, a web-app I wrote to share encrypted secrets using secu... | 0 | 2020-03-14T23:31:11 | https://dev.to/smallwat3r/shhh-cli-a-go-cli-client-to-interact-with-shhh-from-the-terminal-f47 | opensource, go, cli, showdev | Not that long ago I shared a [post](https://dev.to/smallwat3r/shhh-a-web-app-to-share-encrypted-secrets-using-secured-links-with-passphrases-and-expiration-dates-1c32) about [Shhh](https://github.com/smallwat3r/shhh), a web-app I wrote to share encrypted secrets using secured links with passphrase and expiration dates, avoiding spreading sensitive data like passwords across time in emails.
I encourage people to host their own instance of Shhh, so they can be sure their data is even more secure.
Python is my go-to language for development, but I always wanted to deep dive into Go. From my point of view, to learn a new language there is nothing better than learning while developing something useful.
So as my first Go program I've decided to write a CLI client for Shhh, allowing users to create and read secrets directly from their terminal. Handy :)
The source code is available here: https://github.com/smallwat3r/shhh-cli | smallwat3r |
281,448 | From supermarket shelves to your doorstep in 90 minutes | The challenge Why go to the supermarket when iGooods can bring the supermarket to you? iGooods is tr... | 0 | 2020-03-14T23:59:30 | https://dev.to/evtauri/from-supermarket-shelves-to-your-doorstep-in-90-minutes-772 | ruby | The challenge
Why go to the supermarket when iGooods can bring the supermarket to you? iGooods is transforming grocery shopping in St. Petersburg: bringing together customers, in-store staff and couriers to deliver your regular shopping right to your door, in as little as 90 minutes.
The solution
Across desktop and mobile, at home, in the office, in-store, and on the road, we’ve developed a complete solution that keeps each part of iGooods operating smoothly. Here’s how it works:
Customers select their favourite stores and begin shopping
After filling their carts, they check out and receive an estimated delivery time
Picking & packing staff prepare the orders in-store
Couriers collect the shopping and deliver them directly to the customers
Our responsive web app provides bespoke interfaces for customers, store operators, picking, and delivery staff, giving the information and tools they need, when they need them.
https://evrone.com/igooods | evtauri |
281,506 | Star Wars Recursion | Following my exploration of the maze recursion proved to be a powerful tool. While exploring the a S... | 0 | 2020-03-15T04:59:22 | https://dev.to/adamlarosa/star-wars-recursion-4j84 | Following my exploration of the maze recursion proved to be a powerful tool. While exploring the a Star Wars API (also known as the "swapi") I was able to use recursion to search a portion of the API in its entirety.
The Star Wars API (which can be found at http://www.swapi.co) is a neat little tool I've found useful to play around with. What I found odd at first, though, is that when asking for a particular resource, "people" for example, not all results are returned at once. For example...
```
JSON.parse(RestClient.get("http://www.swapi.co/api/people/"))
```
...returns a hash with four keys.
```
["count", "next", "previous", "results"]
```
A quick look at the size of "results" shows me I've received ten characters from the Star Wars movies. But as "count" is an integer of 87, I know I'm missing 77 characters to dream about. The "next" key points to another web address with what I assume will be ten more results.
How exactly can I get them all at once? Recursion!
I figure I can write a method that will pull down the first results, check to see if there's a valid address in the "next" key, and if so use the same method to keep fetching characters. In this pursuit I ended up with the following class...
```
require 'rest-client'
require 'json'

class Swapi
  def initialize
    @people = []
  end

  def get_people(path)
    people = JSON.parse(RestClient.get(path))
    people["results"].each { |person| @people << person }
    return "complete" if people["next"] == nil
    get_people(people["next"])
  end

  def people
    @people
  end
end
```
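As an aside, the self-calling shape of that method can be exercised without touching the network, by stubbing the API out with a couple of invented pages. This sketch is not part of the actual program; all of its data is made up:

```
# a fake, two-page "API": each page has "results" and a "next" address
PAGES = {
  "/api/people/?page=1" => { "results" => ["Luke", "Leia"], "next" => "/api/people/?page=2" },
  "/api/people/?page=2" => { "results" => ["Han"], "next" => nil }
}

# same recursion: collect a page, then follow "next" until it is nil
def fetch_all(path, collected = [])
  page = PAGES[path]
  collected.concat(page["results"])
  return collected if page["next"].nil?
  fetch_all(page["next"], collected)
end

fetch_all("/api/people/?page=1") # => ["Luke", "Leia", "Han"]
```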
With the instance variable I've got a spot to put all the results. Combining that with the main "get_people" method that calls itself and a getter to view the results, I execute it with the first people address...
```
p = Swapi.new
p.get_people("http://www.swapi.co/api/people/")
```
With that all 87 of the results are loaded into a single array. | adamlarosa | |
287,484 | Make a NFC tag catalyzed Telegram bot | If you love automation like me, I think that you will find this very interesting. In this step-by-st... | 0 | 2020-03-24T21:53:34 | https://dev.to/username_pepe/make-a-nfc-tag-catalyzed-telegram-bot-4fgm | nfc, bot, tutorial, node | If you love automation like me, I think that you will find this very interesting.
In this step-by-step guide, we are going to make a bot that sends a specific message to a certain Telegram ID every time the phone is put next to a NFC tag.
## Intro
The system flow is quite simple:
The phone is put next to the NFC tag and a task runner app makes a GET request to a specific URL provided by Heroku where the Telegram bot lives. Once this URL is reached, the bot sends a specific message, that is passed as a query string, to a predetermined Telegram ID.
And that’s it. So, now you understand it, let's make that system!
This tutorial has two parts, the first is the fun one, the creation of the server; and the second one is the phone's set up. But if you don’t feel like going through the tutorial, I will drop [here](https://github.com/jmmzzei/nfc-telegram) the link to the repo.
Okay, let's start with the fun part.
## Bot Creation
I made the bot with Javascript, the runtime Node, the libraries Express, Helmet, Dotenv and Telegraf (a framework to make Telegram bots in Node.js) and Heroku to host it and make it publicly available.
I will start with the obvious, initialize the project in a repository.
```bash
mkdir Telegram-NFC && cd $_ && npm init -y
```
Install the dependencies.
```bash
npm i express telegraf helmet dotenv
```
Initialize git.
```bash
git init
```
Create a .gitignore file with these two lines in it.
```bash
echo "node_modules
.env" > .gitignore
```
Create a `.env` file to store all the environment variables during development.
```bash
echo "BOT_TOKEN=
ID=
PORT=3000" > .env
```
We are going to leave it like that and return to it later.
### Create `config.js`
At the root of the repo, create `config.js`
```bash
touch config.js
```
and insert in it:
```javascript
require('dotenv').config()
module.exports = {
  bot_token: process.env.BOT_TOKEN,
  id: process.env.ID,
  port: process.env.PORT,
}
```
Here we take the environment variables and assign their values to keys inside an object that is exported through the CommonJS module system.
### Create index.js
At the root of the repo, create `index.js`
```bash
touch index.js
```
and inside it, type:
```javascript
const express = require('express')
const Telegraf = require('telegraf')
const { bot_token, port, id } = require('./config') // import the variables from config

const app = express() // create an Express app
app.set('port', process.env.PORT || port) // set a port to listen on
app.use(require('helmet')()) // make use of the helmet library

const bot = new Telegraf(bot_token) // create an instance of Telegraf
bot.start((ctx) => ctx.reply('Welcome human!')) // reply to the /start command

// define an endpoint
app.get(`/${bot_token}`, async (req, res) => {
  if (req.query.message) {
    try {
      await bot.telegram.sendMessage(id, req.query.message)
      res.send('Message sent!')
    } catch (error) {
      res.send(error)
    }
  } else {
    res.send('No message to send.')
  }
})
```
What I did in the last block of code was setting the endpoint for a GET HTTP method. Every time a request reaches this endpoint and if it presents a query string with a key equal to ‘message’, the server will try to send this message to the passed Telegram ID (this is the addressee) and will reply, on success or on error, with a message.
> Note: You could set your endpoint to the root of the URL but, for the sake of security, I recommend you use your private bot token in your path.
Finally, we will set our server to listen on a specific port.
```javascript
app.listen(app.get('port'), () => {
  console.log(`Server listening on port ${app.get('port')}`)
})
```
Bonus: If you need to perform a specific function in response to an addressee reply, you probably should opt for setting a webhook. Telegraf provides a couple of methods to perform this.
```javascript
app.use(bot.webhookCallback(`/${bot_token}`))
bot.telegram.setWebhook(`<your-heroku-url>/${bot_token}`)
```
### Populate the `.env` file.
To fill the BOT_TOKEN key you need to get your bot token. In the first place, you must send some messages to the BotFather. In Telegram, look it up and type /start => /newbot => your-bot-name => a-username-for-your-bot (must end with ‘bot’). And this should reveal to you the token of your bot.
To fill the ID key of the `.env` file, you need to get the Telegram ID of your addressee. You could find a bot that delivers to you this data but, unfortunately, I can’t tell you any because I can’t really know which one is secure and which one is not.
Nevertheless, I can tell you how I got the ID. I created a little bot, with the same tech stack used in this tutorial, that replies with the Telegram ID of the person who sends the message. Then I sent a message from the addressee's phone to this bot to get the info.
I know it's a bit tricky. If you have issues with this step, feel free to reach out to me and I will write another post explaining this part, or I can send you the script I used.
### Upload to Heroku
Heroku offers us different ways to deploy our app. I’m going to use the Heroku CLI because it's astonishingly simple.
- Create an account.
- Create a new app in the Heroku Dashboard.
- Download the Heroku CLI [here](https://devcenter.heroku.com/articles/heroku-cli).
Then you need to login through the Heroku CLI to create a new SSH public key.
```bash
heroku login
```
Add the Heroku repo as your remote.
```bash
heroku git:remote -a <your-heroku-app-name>
```
And finally, in the terminal...
```bash
git add .
git commit
git push heroku master
```
One final step, remember that we didn't commit the `.env` file so we need to set the environment variables somewhere. To do that, in Heroku, we need to go to Dashboard/your-app/Settings and Config Vars. Once there, adding the necessary variables should be a piece of cake. The variables that you should add are the same that you have in your `.env`, except for the port.
> Note: Since we are making use of the free version, Heroku will put your bot to sleep after 30 minutes of inactivity. So if the bot takes from 5 to 10 seconds to send the message and the response, that's okay, Heroku needs approximately that time to wake up your bot.
And that’s it. The bot you have created is now deployed and running in the cloud.
## Setting up the phone
In this part you need to install and set up a couple of apps on your phone. I will explain the steps for Android, which is the OS I use, but I don't think there are many differences if you have an iPhone.
Let’s begin.
Once you have installed the NFC Tools and NFC Tasks apps on your phone, you need to open NFC Tools, go to write/registry/URL, and type `https://your-app-name.herokuapp.com/your-bot-token/?message=your-message`. Then place the NFC tag next to your phone and write the data to your tag.
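One detail worth noting: the message travels in a query string, so it should be URL-encoded before being written to the tag. A small helper sketch (the app name, token, and message below are placeholders, and `buildTagUrl` is an illustrative name, not part of any library):

```javascript
// Build the URL to write onto the NFC tag. encodeURIComponent takes care
// of spaces and other special characters in the message.
function buildTagUrl(appName, botToken, message) {
  return `https://${appName}.herokuapp.com/${botToken}/?message=${encodeURIComponent(message)}`;
}

console.log(buildTagUrl('my-app', 'TOKEN', 'I am home'));
// https://my-app.herokuapp.com/TOKEN/?message=I%20am%20home
```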
And that's pretty much all you need to do. The next time you put your phone next to the tag, NFC Tasks will open your predefined browser with the specified URL in the address bar. Then, the browser will make a GET request to that URL and trigger a series of steps that will end with the bot sending the message, passed as a query string in the URL, to the addressee.
> Note: In order to receive messages from the bot, the addressee has to explicitly start it.
That's all! As you may see, the server took us only a few lines to create, and setting up the phone didn't take too long. Nevertheless, simplicity doesn't mean futility. In fact, you could add different endpoints and different tags to expand its functionality. I would love to know the variations you make to the project.
I hope you liked this guide and had a good time creating the bot.
For any recommendation or comment, feel free to reach out to me or leave a comment below. You could also report an issue on the [Github repo](https://github.com/jmmzzei/nfc-telegram).
Thanks! | username_pepe |
281,538 | A stable alternative to SQLite for offline desktop app? | I prefer to use JavaScript/TypeScript if that matters. Also, it would be better if it is ACID-guarant... | 0 | 2020-03-15T07:19:45 | https://dev.to/patarapolw/a-stable-alternative-to-sqlite-for-offline-desktop-app-5gbg | help, desktop, database | I prefer to use JavaScript/TypeScript if that matters. Also, it would be better if it is ACID-guaranteed. (I have seen LokiDB breaks, and all is lost.)
I wouldn't want to use Docker. (otherwise I can easily use MongoDB, with pseudo-file based.) It should be easy to install for end-users.
If I use Kotlin, I might try h2 database; but Kotlin is harder to code than TypeScript in general...
Some other criteria are
- User input queries (Actually, I converted it to JSON, before inputting to [ORM](https://github.com/patarapolw/liteorm).)
- Joins. I am trying to develop an app compatible with Anki ([schema](https://github.com/patarapolw/ankisync.js/blob/4e30a1ac289536c236f3bdde1d859f67bb9138e8/src/index.ts#L93)), and it involves a lot of joining.
- Async.
- Maintenance. NeDB and LokiDB seem not to be actively maintained anymore. Have tried [PouchDB](https://github.com/pouchdb/pouchdb), but not sure about stability; although CouchDB should be well maintained.
- JSON support, in order to evaluate `Record<string, string>` -- can even be something like [JSON1 extension](https://sqlite.org/json1.html) | patarapolw |
281,651 | Daily Developer Jokes - Sunday, Mar 15, 2020 | Check out today's daily developer joke! (a project by Fred Adams at xtrp.io) | 4,070 | 2020-03-15T13:00:05 | https://dev.to/dailydeveloperjokes/daily-developer-jokes-sunday-mar-15-2020-1f0p | jokes, dailydeveloperjokes | ---
title: "Daily Developer Jokes - Sunday, Mar 15, 2020"
description: "Check out today's daily developer joke! (a project by Fred Adams at xtrp.io)"
series: "Daily Developer Jokes"
cover_image: "https://private.xtrp.io/projects/DailyDeveloperJokes/thumbnail_generator/?date=Sunday%2C%20Mar%2015%2C%202020"
published: true
tags: #jokes, #dailydeveloperjokes
---
Generated by Daily Developer Jokes, a project by [Fred Adams](https://xtrp.io/) ([@xtrp](https://dev.to/xtrp) on DEV)
___Read about Daily Developer Jokes on [this blog post](https://xtrp.io/blog/2020/01/12/daily-jokes-bot-release/), and check out the [Daily Developer Jokes Website](https://dailydeveloperjokes.github.io/).___
### Today's Joke is...

---
*Have a joke idea for a future post? Email ___[xtrp@xtrp.io](mailto:xtrp@xtrp.io)___ with your suggestions!*
*This joke comes from [Dad-Jokes GitHub Repo by Wes Bos](https://github.com/wesbos/dad-jokes) (thank you!), whose owner has given me permission to use this joke with credit.*
<!--
Joke text:
___Q:___ Why couldn't the React component understand the joke?
___A:___ Because it didn't get the context.
-->
| dailydeveloperjokes |
283,627 | How to Create HTTP Server With Node.js | http-Server Creating a simple proxy server in node.js Installation: Globally via npm brew install... | 0 | 2020-03-18T15:36:29 | https://dev.to/hasib787/how-to-creat-http-server-with-node-js-3lha | node, javascript, codenewbie, webdev | http-Server
Creating a simple proxy server in node.js
Installation:

Globally via npm: `npm install --global http-server` (or via Homebrew: `brew install http-server`)

Running on-demand: `npx http-server [path] [options]`

As a dependency in your npm package: `npm install http-server`

Usage: `http-server [path] [options]`

`[path]` defaults to `./public` if the folder exists, and `./` otherwise.

Now you can visit http://localhost:8080 to view your server.

Note: Caching is on by default. Add `-c-1` as an option to disable caching.

Code:
```javascript
let http = require('http');

http.createServer(onRequest).listen(3000);

function onRequest(client_request, client_res) {
  console.log('serve: ' + client_request.url);

  let options = {
    hostname: 'www.google.com',
    port: 80,
    path: client_request.url,
    method: client_request.method,
    headers: client_request.headers
  };

  let proxy = http.request(options, function (res) {
    client_res.writeHead(res.statusCode, res.headers);
    res.pipe(client_res, {
      end: true
    });
  });

  client_request.pipe(proxy, {
    end: true
  });
}
```
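The heart of the proxy is translating the incoming client request into the options object for the outgoing upstream request. Factoring that mapping out makes it easy to inspect in isolation; a sketch (the `toProxyOptions` name and the fake request object are illustrative, with the hostname hard-coded to the target from the example above):

```javascript
// Translate an incoming client request into the options object that
// gets handed to http.request() for the upstream call.
function toProxyOptions(clientRequest, targetHost, targetPort) {
  return {
    hostname: targetHost,
    port: targetPort,
    path: clientRequest.url,
    method: clientRequest.method,
    headers: clientRequest.headers
  };
}

// A stand-in for the request object Node's http server would hand us:
const fakeRequest = { url: '/search?q=cats', method: 'GET', headers: { host: 'localhost:3000' } };
console.log(toProxyOptions(fakeRequest, 'www.google.com', 80).path); // /search?q=cats
```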
More details: https://github.com/Hasib787/http-Server | hasib787 |
284,236 | Tips on how to become best software developer | If you are beginner in IT, probably that's not that easy to cover all the skills, knowledge at one da... | 0 | 2020-03-19T12:22:37 | https://inoxoft.com/9-tips-on-how-to-become-the-best-software-developer/ | learning, teamwork, skills | If you are beginner in IT, probably that's not that easy to cover all the skills, knowledge at one day. Success in engineering is a matter of tremendous work and self-education. How not to loose your path and become the best software developer?
Referring to 9 practical tips on how to become the best software developer, here are some of them.
Start working on your teamwork skills.
Teamwork is everything in IT. The most effective way to upgrade teamwork skills is to give useful, honest feedback within your team on your own achievements, as well as on the results of others. Let people understand your workflow and the complexity of your tasks. It's good to establish friendly and open relations with your manager: talk about tasks, share experience, and let your manager know about the challenges you face. Then, try to understand the company's policies and values. If everything suits you, you'll feel pretty comfortable in the team. Read up on how to control emotions and how to cope with difficult people. It gives a fine understanding of why people behave one way or another and how not to feel pressured by the many different characters among your colleagues.
Pay attention to the IT courses.
The possibility to grow is in your hands. Programming languages update constantly, so don't hesitate to enroll in online courses, find courses in your city, or find out whether your company provides courses for employees. Don't forget that there are plenty of related domains such as Design, Design Thinking, Business Analysis, Marketing, and more.
Improve your English.
It is a must-have rule for non-English-speaking countries. HR specialists look for Upper-Intermediate or at least B2 English speakers nowadays. To help you search for English courses online, here is a list of interesting and easy-to-learn resources: Duolingo, Memrise, Babbel for Business, Busuu, HelloTalk, Lingvist, Beelinguapp, Mango Languages, Italki, Open English.
Think outside the box. Don't be afraid of mistakes; they are experience you earn.
Quite often, beginners are not sure when they are qualified enough to start applying for a job. There is one secret to getting the job: you will be hired when you convince the employer that you are exactly the person the company needs. When a recruiter likes you and that goes together with good technical skills, you get a winning combination and become really sought after among employers.
The good thing is that it is possible to develop self-confidence by leaving your comfort zone, meeting new people, traveling, going to conferences, and so on. Everything is in your hands!
Learn other fields. Additional knowledge is the secret to making things right.
Especially if you work on a fintech project, learn how the trading system works, what financial strategies exist, what Japanese candlestick patterns are, and why the London, Tokyo, and New York stock exchanges are the biggest ones. There is so much to discover. Understanding how an industry works is essential.
Look for a position relevant to your skills.
It may not seem like anything hard to do, but some special skills are required in order to get the job. Just imagine two guys who start looking for a job at the same time. Developer A has perfect technical skills, having learned and practiced programming for 10 years. Developer B is just a beginner in the IT world. So A sends his CV to all the companies in the city and waits for a response. B does the same, and additionally tells all his friends in IT about his job search. Who do you think is going to get the job faster? I bet it's going to be B, as in addition to skills it is important to get people talking about you, so that HR specialists keep you in mind when there is an offer for you. Additionally, B's friends are going to share advice on how to perform in a job interview.
Originally published at https://inoxoft.com/9-tips-on-how-to-become-the-best-software-developer/
| yaryna16998501 |
284,682 | Getting More Out Of (And Into) Storage With JavaScript | [NOTE: Since writing this article, I've put this code into 4 different NPM packages. You can find th... | 7,424 | 2020-03-21T00:43:38 | https://dev.to/bytebodger/getting-more-out-of-and-into-storage-with-javascript-41li | javascript, localstorage, sessionstorage, tutorial | [**NOTE:** Since writing this article, I've put this code into 4 different NPM packages. You can find them here:
https://www.npmjs.com/package/@toolz/local-storage
https://www.npmjs.com/package/@toolz/session-storage
https://www.npmjs.com/package/@toolz/session-storage-is-available
https://www.npmjs.com/package/@toolz/local-storage-is-available]
I feel as though two of the most overlooked tools in modern browser-based development are `localStorage` and `sessionStorage`. If those tools had come around 10 years earlier, they'd probably be ubiquitous in web apps. But I rarely see them used in the projects to which I'm exposed.
I'm going to share a little library I built for `localStorage` (which can easily be repurposed for `sessionStorage`, if you're so inclined). It's just a wrapper class that makes `localStorage` (or `sessionStorage`) far more powerful. If you want to check it out for yourself, you can pull it off GitHub here:
https://github.com/bytebodger/local-storage
## A Little History
Feel free to skip this if you're well-versed in current session/local storage capabilities. But I think it's worth noting how we got here and why everyone seems to ignore session/local storage now.
**Cookies**
Everyone knows about cookies. They're the O.G. of browser-based storage. They're incredibly limited in terms of space. They're incredibly insecure. And in the last 15-or-so years, they've been branded with a Scarlet "M" for marketing. Most casual web users have a limited (or nonexistent) understanding of cookies - but most of them have become convinced that cookies are just... _bad_.
Of course, devs and other internet professionals know that cookies have never gone away. They probably won't be going away anytime soon. And they're critical to internet infrastructure. Nevertheless, the public shaming of cookies has also, to some extent, influenced programming practices. We constantly search for new-and-better ways to store discrete bits of data - and to avoid cookies.
**Sessions**
There are many ways to avoid cookies almost entirely. Probably the most common (in the frontend development world) is the JSON Web Token (JWT). In fact, JWTs are _so_ effective, and cookies are _so_ universally scorned, that many devs simply rely on them for any-and-all temporary storage.
Interestingly, our web overlords were devising other viable solutions, even before devs started deploying more robust tools like JWTs. For quite some time now, cross-browser support has been available for `localStorage` and `sessionStorage`. But it seems (to me) like these nifty little utilities have been left in the dust by those who seek to store _any-and-all_ data on the server.
## Use Cases
The obvious advantage of browser-based storage is speed and ease of access. JWTs are great - but it's just a token that basically says to the server, "I am who I say I am." The server still has to return all that data via a service. That all represents a round-trip HTTP cycle. But session/local storage is _right there_. In the browser. You don't have to code-up API calls. You don't have to manage asynchronous processing.
As a React dev, I've found `localStorage` to be particularly useful while building Single Page Applications. Even the most elegantly designed SPA can start to feel painful for the end user if they accidentally navigate away from the page - or if they feel compelled to refresh the page. This is why I use `localStorage` to save all sorts of things that should theoretically persist, even if the page were to be rebuilt from scratch.
Of course, sometimes `sessionStorage` is a better solution. But I tend to lean more toward `localStorage` than `sessionStorage`, because a lot of things that may logically reside in `sessionStorage` can get... _personal_. And you never want personal data stored in the browser.
`localStorage` is a great place to dump a bunch of minor data that can greatly improve the user experience over time. For example, have you ever run into this?
1. You perform a search.
2. The search results are paginated, by default, with 20 results per page.
3. You want to see more results on each page, so you change the results-per-page setting to 50.
4. Some time later during the session (or during subsequent sessions), you perform another search, and the results are again displayed, by default, with 20 results per page.
In this example, the app never bothers to _remember_ that you wanted to see results displayed 50-per-page. And if you have to perform a lot of searches, it can be damn annoying to have to constantly, manually change the page size to 50.
You _could_ send the user's page-size setting back to the server. But that feels like a lot of unnecessary overhead for something as innocuous as page-size. That's why I prefer to store it in `localStorage`.
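A sketch of that page-size pattern with the raw API (`getPageSize` and the in-memory stand-in are illustrative; in the browser you would pass `window.localStorage` as the `storage` argument):

```javascript
// Read the user's preferred page size, falling back to a default when the
// key has never been saved. `storage` is anything with getItem/setItem,
// e.g. window.localStorage in the browser.
function getPageSize(storage, fallback = 20) {
  const saved = storage.getItem('resultsPerPage');
  return saved === null ? fallback : parseInt(saved, 10);
}

// A tiny in-memory stand-in so the logic also runs outside a browser:
const memory = new Map();
const storage = {
  getItem: (key) => (memory.has(key) ? memory.get(key) : null),
  setItem: (key, value) => memory.set(key, String(value)),
};

console.log(getPageSize(storage)); // 20 (nothing saved yet)
storage.setItem('resultsPerPage', 50);
console.log(getPageSize(storage)); // 50
```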
## Caveats
**Data Sensitivity**
Just like with cookies, nothing personal or sensitive should _ever_ be stored in the browser. I would hope that for all-but-the-greenest of devs, that goes without saying. But it still bears repeating.
**Storage Limits**
This can vary by browser, but the typical "safe" bet is that you have 5MB of local storage and 5MB of session storage. That's _a lot_ more data than you could ever store in cookies. But it's still far-from-infinite. So you don't want to go insane with the local storage. But you do have significantly more freedom than you ever had with cookies.
**Data Types**
Admittedly, I've buried the lede in this post. The whole point of this article, and my little GitHub library, isn't to get you to use session/local storage. Nor is it to simply provide _another way_ to use session/local storage. The core tools for session/local storage are already included in base JS and are easy to use. Instead, my intention is to show how to get _more_ out of (and into) local storage.
If there's any "issue" with `localStorage`, it's that you can only store _strings_. This is just fine when you only want to save something like a username. It's not even _too much_ of a problem when you want to store a number (like the user's preferred page size) because most of us can easily handle `"50"` just as well as we can handle `50`. But what about arrays? Or objects? Or `null`?
Let's see how local storage handles non-string values:
```javascript
localStorage.setItem('aNumber', 3.14);
const aNumber = localStorage.getItem('aNumber');
console.log(aNumber); // "3.14"
localStorage.setItem('anArray', [0,1,2]);
const anArray = localStorage.getItem('anArray');
console.log(anArray); // "0,1,2"
localStorage.setItem('aBoolean', false);
const aBoolean = localStorage.getItem('aBoolean');
console.log(aBoolean); // "false"
localStorage.setItem('anObject', {one: 1, two: 2, three: 3});
const anObject = localStorage.getItem('anObject');
console.log(anObject); // "[object Object]"
localStorage.setItem('aNull', null);
const aNull = localStorage.getItem('aNull');
console.log(aNull); // "null"
```
So we have some suboptimal results... and some results that are just plain _bad_. The good news is that `localStorage` doesn't "break" or throw an error when you try to save a non-string item. The bad news is that it simply takes the non-string values and slaps them with a `.toString()` method. This results in some values that are... "workable". And others that are much more problematic.
I suppose the value for `aNumber` isn't all _that_ bad, because we could always use `parseFloat()` to get it back to being a real number. And the value for `anArray` is perhaps _somewhat_ workable, because we could use `.split()` to get it back into an array.
But the value returned for `aBoolean` is prone to some nasty bugs. Because the string value of `"false"` most certainly does **_not_** evaluate as `false`. The value returned for `aNull` is similarly problematic. Because the string value of `"null"` certainly does **_not_** evaluate as `null`.
Perhaps the most damaging value is `anObject`. By slapping it with `.toString()`, `localStorage` has essentially destroyed any data that was previously stored in that object, returning nothing but a useless `"[object Object]"` string.
## JSON.parse/stringify **_ALL THE THINGS!!!_**
`.toString()` is borderline useless when we're trying to serialize non-scalar values (especially, _objects_). Luckily, JSON parsing provides a shorthand way to get these values into a string - and to extract them _in their native format_.
So, if we revisit our examples with JSON parse/stringify in hand, we could do the following:
```javascript
localStorage.setItem('aNumber', JSON.stringify(3.14));
const aNumber = JSON.parse(localStorage.getItem('aNumber'));
console.log(aNumber); // 3.14
localStorage.setItem('anArray', JSON.stringify([0,1,2]));
const anArray = JSON.parse(localStorage.getItem('anArray'));
console.log(anArray); // [0,1,2]
localStorage.setItem('aBoolean', JSON.stringify(false));
const aBoolean = JSON.parse(localStorage.getItem('aBoolean'));
console.log(aBoolean); // false
localStorage.setItem('anObject', JSON.stringify({one: 1, two: 2, three: 3}));
const anObject = JSON.parse(localStorage.getItem('anObject'));
console.log(anObject); // {one: 1, two: 2, three: 3}
localStorage.setItem('aNull', JSON.stringify(null));
const aNull = JSON.parse(localStorage.getItem('aNull'));
console.log(aNull); // null
```
This works - from the perspective that we've managed to preserve the native data types when we extracted the information from `localStorage`. But you'd be forgiven for thinking that this code is far-from-elegant. All those `JSON.stringify()`s and `JSON.parse()`s make for a pretty dense read - especially when we consider that this code isn't really _doing_ much.
And while `JSON.stringify()`/`JSON.parse()` are fabulous tools, they can also be inherently _brittle_. You don't want your app to be dependent upon a programmer _remembering_ to stringify the value before it's saved, or _remembering_ to parse the value after it's retrieved.
Ideally, we'd have something that looks cleaner and "just works" behind the scenes. So that's why I wrote my little wrapper class.
## localStorage() Isn't Always _Available_
There's another problem with the approach shown above. In the comments below, Isaac Hagoel alerted me to the fact that `localStorage` _isn't always available_. He linked to an article from Michal Zalecki that highlights the issue. A frequent cause of this problem stems from _private_ browser sessions, which don't allow storing data locally in `localStorage` _or_ `sessionStorage`.
This would seem to make any use of `localStorage` quite brittle. Because it would be poor design to expect your users to never be using a private browsing session. But if you look through the (updated) code in my library, I've accounted for that now by first checking to see if `localStorage` is available. If it's _not_, then the utility falls back to using a persistent temporary object. That object will at least hold the values until the end of the app/page cycle, so you will essentially get _temp_ storage in place of _local_ storage.
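A feature check like this typically boils down to attempting a throwaway write inside a try/catch, since private-mode browsers may expose the storage API but throw on `setItem`. A sketch (the probe key and the stand-in stores below are illustrative):

```javascript
// Returns true when the given Web-Storage-like object actually accepts
// writes; private browsing modes may expose the API but throw on setItem.
function isStorageAvailable(storage) {
  try {
    const probe = '__storage_probe__';
    storage.setItem(probe, probe);
    storage.removeItem(probe);
    return true;
  } catch (e) {
    return false;
  }
}

// Stand-ins for a working store and a private-mode store that throws:
const working = {
  data: {},
  setItem(key, value) { this.data[key] = value; },
  removeItem(key) { delete this.data[key]; },
};
const privateMode = {
  setItem() { throw new Error('QuotaExceededError'); },
  removeItem() {},
};

console.log(isStorageAvailable(working));     // true
console.log(isStorageAvailable(privateMode)); // false
```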
## The `local` Wrapper For `localStorage()`
Here's how I use my wrapper class:
```javascript
import local from './local';
// set the values
local.setItem('aNumber', 3.14);
local.setItem('anArray', [0,1,2]);
local.setItem('aBoolean', false);
local.setItem('anObject', {one: 1, two: 2, three: 3});
local.setItem('aNull', null);
// retrieve the values
let aNumber = local.getItem('aNumber');
let anArray = local.getItem('anArray');
let aBoolean = local.getItem('aBoolean');
let anObject = local.getItem('anObject');
let aNull = local.getItem('aNull');
console.log(aNumber); // 3.14
console.log(anArray); // [0,1,2]
console.log(aBoolean); // false
console.log(anObject); // {one: 1, two: 2, three: 3}
console.log(aNull); // null
// remove some values
local.removeItem('aNumber');
local.removeItem('anArray');
aNumber = local.getItem('aNumber');
anArray = local.getItem('anArray');
console.log(aNumber); // null
console.log(anArray); // null
// get an existing item, but if it doesn't exist, create
// that item and set it to the supplied default value
let workHistory = local.setDefault('workHistory', 'None');
anObject = local.setDefault('anObject', {});
console.log(workHistory); // 'None'
console.log(anObject); // {one: 1, two: 2, three: 3}
// clear localStorage
local.clear();
```
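Under the hood, such a wrapper needs little more than JSON round-tripping on every read and write. A minimal sketch of the idea, not the actual library code (here `storage` is injected so the sketch also runs outside a browser; in the real thing it would be `window.localStorage`, plus the availability fallback):

```javascript
// Minimal version of the wrapper: JSON.stringify on the way in,
// JSON.parse on the way out.
function makeLocal(storage) {
  return {
    setItem: (key, value) => storage.setItem(key, JSON.stringify(value)),
    getItem: (key) => {
      const raw = storage.getItem(key);
      return raw === null ? null : JSON.parse(raw);
    },
    removeItem: (key) => storage.removeItem(key),
    setDefault(key, defaultValue) {
      const existing = this.getItem(key);
      if (existing !== null) return existing;
      this.setItem(key, defaultValue);
      return defaultValue;
    },
  };
}

// In-memory stand-in for localStorage:
const memoryStore = (() => {
  const data = new Map();
  return {
    getItem: (key) => (data.has(key) ? data.get(key) : null),
    setItem: (key, value) => data.set(key, String(value)),
    removeItem: (key) => data.delete(key),
  };
})();

const local = makeLocal(memoryStore);
local.setItem('anObject', { one: 1 });
console.log(local.getItem('anObject'));               // { one: 1 }
console.log(local.setDefault('workHistory', 'None')); // None
```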
## Limitations
As previously stated, this is just a wrapper for `localStorage`, which means all these values are saved _in the browser_. This means that you can't store gargantuan amounts of data (e.g. more than 5MB) and you should never store personal/sensitive information.
This method also leans on JSON parsing. So you can use it to safely handle all of the data types that survive that process. Strings, integers, decimals, nulls, arrays, and objects are fine. Even complex data structures that have nested arrays/objects are fine. But you can't stringify-then-parse a function or a class definition and expect to use it after it's been retrieved. So this is not a universal solution for storing classes or functions in their raw formats. This is just a way to preserve raw _data_.
| bytebodger |
285,060 | How I studied for the AWS Solutions Architect Associate certification exam | In this blog post, I'm going to share how I studied, step-by-step, for the AWS Solutions Architect... | 0 | 2020-03-24T08:20:26 | https://meirg.co.il/2020/03/24/how-i-studied-for-the-aws-solutions-architect-associate-certification-exam/ | aws, certification, exam, guide | In this blog post, I'm going to share how I studied, step-by-step, for the [AWS Solutions Architect Associate](https://aws.amazon.com/certification/certified-solutions-architect-associate/) certification exam.
And of course, due to COVID19, AWS is taking care of its learners - [AWS Certification FAQs](https://aws.amazon.com/certification/faqs/)
# DISCLAIMER
Make sure you cover all the topics that are in the official [AWS Certified Solutions Architect Associate Exam Guide](https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS_Certified_Solutions_Architect_Associate-Exam_Guide_1.8.pdf), read more about it [here](https://aws.amazon.com/certification/certified-solutions-architect-associate/). I passed the exam in Sep-2019, and it might have changed a little bit since then.
# Intro
1. It took me __about 2-3 weeks__ to study
1. I studied for about __4-5 hours a day__
1. I had minor experience in AWS with EC2 and S3 before I started studying
1. __Skim through__ all the topics __before you start studying__. It took me time to realize the best way for me to study, but it doesn't mean that it's the best way for you
1. Content switching - I didn't want to learn one topic, and then learn a whole new topic, which will probably make me forget the previous topic. So I learned all of the topics bit by bit, which also helped me realize how different services can work together
1. Learning for this type of exam is difficult, so if you're having a hard time, don't worry about it, in your 2nd week of learning, it will get much easier
1. Learn with your mobile phone - I found it best to read most of the FAQs and official docs using my mobile phone in my spare time
1. Here's my [Social Badge](https://www.youracclaim.com/badges/4deaade1-14b1-4019-a9a3-0a8e675ccc6a)
1. If you wonder how the badge looks like in a Linkedin profile, you can check mine - [linkedin.com/in/meirg](https://www.linkedin.com/in/meirg/)
# Getting Started
1. [Create an AWS account](https://portal.aws.amazon.com/billing/signup?redirect_url=https%3A%2F%2Faws.amazon.com%2Fregistration-confirmation#/start)
1. [Create an IAM admin role](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html), from now on, use only this role to perform any future actions in your AWS account
1. Register for [AWS Certified Solutions Architect - Associate 2020](https://www.udemy.com/course/aws-certified-solutions-architect-associate/) by [Ryan Kroonenburg](https://www.linkedin.com/in/acloudguru/?originalSubdomain=uk) and [Faye Ellis](https://www.linkedin.com/in/fayeellis/?originalSubdomain=uk). The original price is 179 USD, so wait for it to be on sale; you can get it for 14-18 USD
1. Complete the above course, including the quizzes
- The course duration is 14.5 hours (excluding quizzes and exams), so to save time, set the videos speed to
- x1.25 - 11.6 hours
- x1.5 - 9.7 hours
1. (Optional) If you have prior knowledge, take the Practice Exam 1 to understand your knowledge gaps, it's under Good Luck & What's Next
## This Guide's Structure
1. Even though you've already completed the Udemy course, I'm still going to put references to specific lectures that you should watch again to strengthen your knowledge
1. Legend
- 📚 - Official AWS docs/tutorials
- 📘 - Non-official docs/tutorials
- 📺 - Watch lecture(s) in the Udemy course
## VPC - Basics
1. Create a custom VPC - go through the following scenarios
1. 📚 [Scenario 1: VPC with a Single Public Subnet](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario1.html)
1. 📚 [Scenario 2: VPC with Public and Private Subnets (NAT)](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html)
- Don’t create a NAT Instance or a NAT Gateway, but make sure you understand how they work
- 📚 [Differences between NAT Instance and NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html)
## Route53
1. 📺 Route53 > all lectures
1. Understand what are DNS records, notably: A, SOA, NS, CNAME, and MX
1. Get familiar with all available Routing policies
1. (Optional) Register your domain directly from the AWS Route53 console
## VPC - Subnets
1. 📘 [Understanding IP Addresses, Subnets, and CIDR Notation for Networking](https://www.digitalocean.com/community/tutorials/understanding-ip-addresses-subnets-and-cidr-notation-for-networking)
1. Practice on Subnets by using the CIDR calculator at [www.cidr.xyz](https://cidr.xyz) and create Subnets in your custom VPC
1. How many IP addresses are reserved by AWS?
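As a quick sanity check while practicing: AWS reserves 5 addresses in every subnet (the network address, the VPC router, DNS, one reserved for future use, and the broadcast address), so the usable host count is easy to compute. A tiny, illustrative snippet:

```javascript
// Usable IPs in an AWS subnet: total addresses for the prefix length,
// minus the 5 addresses that AWS reserves in every subnet.
function usableAwsIps(prefixLength) {
  const totalAddresses = 2 ** (32 - prefixLength);
  return totalAddresses - 5;
}

console.log(usableAwsIps(24)); // 251
console.log(usableAwsIps(28)); // 11 (/28 is the smallest subnet AWS allows)
```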
## VPC - EC2
1. 📺 EC2 > all lectures
1. 📚 Read more about [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) and [EC2 instance roles](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
#### Practice
1. Create an IAM role and attach it to your EC2 instance
1. SSH to an EC2 instance and install AWS CLI on your instance
1. Run aws s3 ls on the instance and make sure that it works
1. Use the instance’s metadata to figure out, from within the instance, which security groups it belongs to, hint: curl 169. …
## By now you are familiar with
1. Launch and configure EC2 instances
1. Elastic Block Store (EBS)
1. Subnets, IP Addresses, CIDR and Subnet Mask
1. Route Tables
1. Internet Gateway (igw)
1. Elastic IP (eip)
1. 📚 [NAT Instances](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html) (bastion) (💬 AMI name contains amzn-ami-vpc-nat)
1. 📚 [NAT Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) (ngw)
1. 📚 [Security Group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) (sg) - this topic is very important, so make sure you do a lot of practice
1. 📚 [Network Access Control List](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html) (NACL)
1. 📚 [Identity Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) (IAM) - Users, Groups, Roles, and Policies
1. Route53 and DNS records
## VPC - Peering
1. Go over the following scenarios
1. 📚 [Example: Sharing Public Subnets and Private Subnets](https://docs.aws.amazon.com/vpc/latest/userguide/example-vpc-share.html)
1. 📚 [Example: Services Using AWS PrivateLink and VPC Peering](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-peer-region-example.html)
__IMPORTANT__! Don't skip the above topic; it may appear in the exam
## VPC - ENI
1. 📚 [Elastic Network Interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) (ENI)
_Note_: No need to practice on adding a secondary ENI to your instance, if you do, make sure you take a snapshot before doing it
## VPC - Flow Logs
1. 📚 [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html)
#### Practice
1. 📚 [Publish Flow Logs to CloudWatch](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs-cwl.html), and keep in mind that it takes up to 10 minutes to get the initial Log Stream, so be patient
1. (Optional) 📚 [Install the Agent on a Running EC2 Linux Instance](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html)
1. 📚 [SSH to your EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html), install and configure AWS Logs
1. View the Log stream in CloudWatch Logs, what’s the name of the FlowGroup?
1. Stop the awslogs service and remove the FlowGroup from the file /var/awslogs/etc/awslogs.conf
## Storage - S3
1. 📚 [Simple Storage Service](https://docs.aws.amazon.com/AmazonS3/latest/gsg/GetStartedWithS3.html) (S3)
1. 📺 S3 > Identity Access Management > from IAM 101 to Transfer Acceleration
#### Practice
1. Create a bucket in S3 and Publish Flow Logs
1. SSH to your EC2 instance
1. Copy one of the logs from your S3 bucket to the EC2 instance
1. Extract the log from gz and read it, cool, huh? :)
1. 📚 [Creating a Trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-a-trail-using-the-console-first-time.html)
1. 📚 [Logging Amazon CloudWatch API Calls with AWS CloudTrail](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/logging_cw_api_calls.html)
1. Turn off the Logging in the trail
1. Disable AWS Cloudwatch alarm and delete VPC Flow Log - do it without removing the alarm, hint: possible only with aws-cli
## VPC - Nat Gateway
1. 📚 [Nat Gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html)
#### Practice
1. 📚 [Implement Scenario 2 and apply the NATSG: Recommended Rules](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html#VPC_Scenario2_Security)
1. SSH to private instance and run: curl http://ifconfig.co, the returned IP should be the NAT Gateway Elastic IP (EIP)
1. Delete the Nat Gateway and release the Nat Gateway's EIP
## VPC - Direct Connect and VPC End Points
1. 📺 VPCs > Direct Connect
1. 📺 VPCs > VPC End Points
## Storage - Storage Gateway
1. 📺 Identity Access Management & S3 > Storage Gateway
1. File Gateway
1. Stored Volumes and Cached Volumes
1. Tape Gateway
## Storage - Snowball
1. 📺 Snowball Overview and Snowball Lab
1. Snowball
1. Snowball Edge
1. Know the answer to - when should I use it?
## EC2 - Placement Groups
1. 📺 EC2 > EC2 Placement Groups
1. 📚 [Placement Groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#concepts-placement-groups)
1. Clustered Placement Group
1. Partition Placement Group
1. Spread Placement Group
## EC2 - Bootstrap Scripts and instance Meta Data
1. 📺 EC2 > Using Boot Strap Scripts
1. 📺 EC2 > EC2 Instance Meta Data
## Databases
1. 📺 Databases On AWS > all lectures
1. Understand the difference between Multi-AZ vs. Read Replicas
1. Get a deeper understanding of the following types of databases
1. DynamoDB
1. Redshift and Redshift Spectrum
1. Aurora
1. Elasticache
1. Understand how to increase the performance of each DB
1. Understand the basics of high availability architecture of DBs
## VPC - Load Balancers
1. 📺 HA Architecture > from Load Balancers Theory to Advanced Load Balancer Theory
1. Understand the differences between Classic/App/Net Load Balancers
1. Make sure you know the answer to - what are Health checks?
## Theoretical - High Availability Architecture
1. 📺 HA Architecture > from Autoscaling Groups Lab to HA Architecture Quiz
1. 📚 [Autoscaling Groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html)
## Theoretical - Other Services
Get familiar with the following applications and services.
1. 📺 Watch the lectures in Udemy
1. CloudFormation
1. Elastic Beanstalk - get familiar with
1. Lightsail
1. SQS - Super important, especially the short/long polling
1. MQ
1. SWF
1. SNS - Make sure you know the 📚 [limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html)
1. Elastic Transcoder
1. API Gateway - Super important
1. Kinesis - What are the differences between
1. Kinesis Streams
1. Kinesis Firehose
1. Kinesis Analytics
1. Web Identity Federation and Cognito
1. User pools
1. Identity pool
1. CloudFront and Edge Locations - Super important
1. Macie
1. ElasticSearch - 📘 [Use Case 1](https://dzone.com/articles/what-is-elasticsearch-and-how-it-can-be-useful), 📘 [Use Case 2](https://marutitech.com/elasticsearch-can-helpful-business/)
1. And any other services that appear in the Udemy course
## Theoretical - Serverless
1. 📺 Serverless > all lectures
1. Make sure you fully understand how the following services work
1. S3
1. Lambda Functions
1. DynamoDB
1. Aurora Serverless
#### Practice
1. Create a Lambda Function and invoke functions with HTTP requests by using API Gateway
1. Which triggers are available for Lambda Functions?
1. 📚 [Create an API Gateway and a Lambda](https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-create-api-as-simple-proxy-for-lambda.html)
## Storage - S3, EBS and EFS
Even though you read about S3, go over it again; it's a huge topic, and there are lots of exam questions about it
1. 📺 S3 > from S3 101 to Transfer Acceleration
1. Different classes of S3 - Standard, IA, IA-Zone, Intelligent tiering
1. Glacier and Glacier Deep Archive
1. Security and encryption
1. SSE - S3
1. SSE - KMS
1. SSE - C
1. Client-side encryption and upload to S3
1. Version control + MFA Delete
1. Lifecycle management
1. Cross-Region Replication
1. Transfer Acceleration - Uses CloudFront
1. 📚 [S3 FAQ](https://aws.amazon.com/s3/faqs/)
1. 📺 EC2 > from EBS 101 to AMI Types (EBS vs Instance Store) and Elastic File System
#### Practice
This exercise only covers KMS, since it's a difficult topic, but feel free to also practice the other topics
1. Create another IAM user, call it developer, grant this user full admin access (don’t switch to this user)
1. Create a key in KMS and allow only to your current admin user to use this key (developer can’t use it)
1. Create a Lambda Function from scratch and add random environment variables
1. Encrypt the environment variables with the key you’ve created earlier
1. Login with your developer user and view the Lambda Function, can you see the environment variables?
## Theoretical - Well-Architected Framework
It's best to go over all the official [AWS Docs](https://aws.amazon.com/architecture/well-architected/), but since it's time-consuming, skim through the [Well-Architected Framework whitepaper](https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
## Exams - Practice
1. Make sure you completed all the quizzes in the Udemy course
2. Take the practice exams Practice 1 and Practice 2 in the Udemy course
1. Register for [AWS Certified Solutions Architect Associate Practice Exams](https://www.udemy.com/course/aws-certified-solutions-architect-associate-amazon-practice-exams-saa-c02/). The original price is about 40 USD, but you can get it on sale for 14-18 USD
1. Take as many exams as you can (the more, the merrier)
1. Make sure you review the answers and explanations for each question, even if you answered correctly
## Useful Resources
Skim through the following resources
1. 📚 [AWS Certification Preparation](https://aws.amazon.com/certification/certification-prep/)
1. 📘 [aws-cheat-sheet](https://tutorialsdojo.com/aws-cheat-sheets/)
1. 📘 [A Complete Guide to AWS Certified Solutions Architect Associate Exam](https://www.whizlabs.com/blog/aws-certified-solutions-architect-associate-guide)
1. 📘 [Do Your Homework: 7 AWS Certified Solutions Architect Exam Tips](https://www.toptal.com/aws-cloud-engineers/aws-certified-solutions-architect-exam-tips)
## AWS Solutions Architect Associate certification exam
By now you should be ready to take the exam!
1. 💪 Take the [AWS official practice exam](https://www.aws.training/Certification) (20 USD)
1. ❓ If there's any topic that you're still not comfortable with, read the docs and FAQs. Feel free to comment on this blog post with questions!
1. 🎉 Take the official [AWS certification exam](https://www.aws.training/Certification) (150 USD)
## Final words
Once you get the hang of it, it's fun to learn about AWS and use its services. I hope that this blog post helped you to design your learning path for this exam, and if it did, then 👏/💟/🐴 and share!
| unfor19 |
285,725 | Best way to improve your React code | Maintaining code quality is the biggest challenge for beginners. In today post we are going to talk... | 0 | 2020-03-22T03:36:41 | https://dev.to/narendersaini32/best-way-to-improve-your-react-code-fki | react, javascript, css, html | 
Maintaining code quality is the biggest challenge for beginners. In today's post, we are going to talk about the two best ways to improve your React code. By using these two methods, you will automatically learn the best practices for your React code.
### Ways to improve your React Code
I personally use these two methods to improve my code in my day-to-day work. The two methods are mentioned below.
### [1. Using Eslint](https://blogreact.com/setup-reactjs-with-parceljs-eslint-babel-and-less-in-5-min-2020/)

ESLint is a lifesaver for React developers. It finds and fixes problems in your JavaScript code. ESLint has the following features:
- ESLint statically analyzes your code to quickly find problems. ESLint is built into most text editors and you can run ESLint as part of your continuous integration pipeline.
- Many problems ESLint finds can be automatically fixed. ESLint fixes are syntax-aware so you won’t experience errors introduced by traditional find-and-replace algorithms.
- Preprocess code, use custom parsers, and write your own rules that work alongside ESLint’s built-in rules. You can customize ESLint to work exactly the way you need it for your project.
**Installation**
```
npm install eslint --save-dev
```
**Setup Config**
```
npx eslint --init
```
After that you need to follow steps shown on your terminal. You can check this [article](https://blogreact.com/setup-reactjs-with-parceljs-eslint-babel-and-less-in-5-min-2020/) for full details.
**Vscode Extension**
I strongly recommend using the ESLint extension with VS Code to see problems in your editor instead of the terminal.
### [2. Using Deepscan](https://deepscan.io/)

DeepScan focuses on finding runtime errors and quality issues rather than coding conventions. It is ideal for you if you are serious about JavaScript code quality.
DeepScan follows the execution and data flow of a program in greater depth. This enables it to find issues that syntax-based linters can’t.
Simply put: DeepScan is a cutting-edge static analysis tool for JavaScript code.
DeepScan classifies issues by 2 categories and 3 levels of impact, so you can focus on **major issues first** and work down gradually.
Also, noisy issues are aggressively suppressed, and **detailed guides** are provided so you know exactly where the problem is.
- Check your code for errors and quality issues that lint tools can’t detect
- Minimize code reviews by automated inspection
- Learn best practices for JavaScript
- Catch mistakes before committing
- Ensure code quality for the whole project
- Keep up with project status and issues
- Get the measure on the project
- Increase confidence before code ships
Checkout My Website [https://blogreact.com/](https://blogreact.com/) | narendersaini32 |
285,733 | Knowing How To Read Your Data | With all the craziness of this worldwide pandemic I couldn't help but to spend a little time playing... | 0 | 2020-03-22T04:17:34 | https://dev.to/adamlarosa/knowing-how-to-read-your-data-1bkd | With all the craziness of this worldwide pandemic I couldn't help but to spend a little time playing with a COVID-19 api (located at https://covid19api.com/). It's got a few different datasets, the root of which will return all available types, i.e....
```
0: {Name: "Get All Data", Description: "Returns all data in the system. Warning: this request returns 8MB+ and takes 5+ seconds", Path: "/all", Params: null}
1: {Name: "Get List Of Countries", Description: "Returns all countries and associated provinces. Th…y_slug variable is used for country specific data", Path: "/countries", Params: null}
2: {Name: "Get List Of Cases Per Country Per Province By Case Type", Description: "Returns all cases by case type for a country. Coun…ases must be one of: confirmed, recovered, deaths", Path: "/country/{country}/status/{status}", Params: Array(2)}
3: {Name: "Get List Of Cases Per Country By Case Type", Description: "Returns all cases by case type for a country. Coun…ases must be one of: confirmed, recovered, deaths", Path: "/total/country/{country}/status/{status}", Params: Array(2)}
4: {Name: "Get List Of Cases Per Country Per Province By Case Type From The First Recorded Case", Description: "Returns all cases by case type for a country from …ases must be one of: confirmed, recovered, deaths", Path: "/dayone/country/{country}/status/{status}", Params: Array(2)}
5: {Name: "Get List Of Cases Per Country By Case Type From The First Recorded Case", Description: "Returns all cases by case type for a country from …ases must be one of: confirmed, recovered, deaths", Path: "/total/dayone/country/{country}/status/{status}", Params: Array(2)}
6: {Name: "Add a webhook to be notified when new data becomes available", Description: "POST Request must be in JSON format with key URL a…sponse data is the same as returned from /summary", Path: "/webhook", Params: Array(2)}
7: {Name: "Summary of new and total cases per country", Description: "A summary of new and total cases per country", Path: "/summary", Params: null}
```
Right off the bat "Get List Of Cases Per Country Per Province By Case Type From The First Recorded Case" caught my eye. "How many people in the United States have died from this thing??!?!" was my first thought. Let's see what this one gives us...
```
0:
Country: "US"
Province: "King County, WA"
Lat: 47.6062
Lon: -122.332
Date: "2020-02-29T00:00:00Z"
Cases: 1
Status: "deaths"
__proto__: Object
1: {Country: "US", Province: "King County, WA", Lat: 47.6062, Lon: -122.332, Date: "2020-03-01T00:00:00Z", …}
2: {Country: "US", Province: "King County, WA", Lat: 47.6062, Lon: -122.332, Date: "2020-03-02T00:00:00Z", …}
3: {Country: "US", Province: "Snohomish County, WA", Lat: 48.033, Lon: -121.834, Date: "2020-03-02T00:00:00Z", …}
4: {Country: "US", Province: "King County, WA", Lat: 47.6062, Lon: -122.332, Date: "2020-03-03T00:00:00Z", …}
5: {Country: "US", Province: "Snohomish County, WA", Lat: 48.033, Lon: -121.834, Date: "2020-03-03T00:00:00Z", …}
...etc...
```
Ok, the object has a key of "Cases" which gives me the number of deaths. Perfect! I'll just add those up & have my total! Luckily for me the object is numbered, so I'll just take the number of keys and iterate through them, grabbing my data along the way. Having put the data in an object named "results", I put this together...
```
let total = 0
const size = Object.keys(results).length
for (let i=0; i < size; i++){
total = total + results[i]["Cases"]
}
```
This gave me a number of 1135, which was incredibly alarming, as the news was reporting that the United States had only suffered 244 fatalities. So my first response, of course, was to panic.
WE'RE BEING LIED TO! THEY'RE SUPPRESSING THE REAL DATA!!!
Thankfully this only lasted a moment, and the cold realization that once again my logic was to blame splashed me with its icy truth. Time to take a deeper look at the data being presented to me.
```
0:
Country: "US"
Province: "King County, WA"
Lat: 47.6062
Lon: -122.332
Date: "2020-02-29T00:00:00Z"
Cases: 1
Status: "deaths"
__proto__: Object
1:
Country: "US"
Province: "King County, WA"
Lat: 47.6062
Lon: -122.332
Date: "2020-03-01T00:00:00Z"
Cases: 1
Status: "deaths"
__proto__: Object
2:
Country: "US"
Province: "King County, WA"
Lat: 47.6062
Lon: -122.332
Date: "2020-03-02T00:00:00Z"
Cases: 5
Status: "deaths"
__proto__: Object
3: {Country: "US", Province: "Snohomish County, WA", Lat: 48.033, Lon: -121.834, Date: "2020-03-02T00:00:00Z", …}
4:
Country: "US"
Province: "King County, WA"
Lat: 47.6062
Lon: -122.332
Date: "2020-03-03T00:00:00Z"
Cases: 6
Status: "deaths"
__proto__: Object
5: {Country: "US", Province: "Snohomish County, WA", Lat: 48.033, Lon: -121.834, Date: "2020-03-03T00:00:00Z", …}
6: {Country: "US", Province: "King County, WA", Lat: 47.6062, Lon: -122.332, Date: "2020-03-04T00:00:00Z", …}
7: {Country: "US", Province: "Placer County, CA", Lat: 39.0916, Lon: -120.804, Date: "2020-03-04T00:00:00Z", …}
...etc...
```
Here we can see that the first entry is from King County, WA, reporting one "case" of "deaths" on February 29th. Then on March 1st there is again only one "case" of "deaths". This is where I made my mistake. "Cases" isn't individual deaths as I assumed, but a running total of the recorded deaths as of that date, i.e., there was one on 2/29, still only one on 3/1, then a total of five on 3/2.
Ok, I can work with this. All one would have to do is find the last entry for that particular state and grab the total, then add all of THOSE totals up. This presented a couple new challenges.
First, all the states are mixed up due to the fact they were all reporting cases simultaneously. Not too complex, just separate them all by state during an iteration.
Second, and more exciting, was the fact that sometime around March 10th they changed the way the locations were formatted. E.g...
```
26: {Country: "US", Province: "Placer County, CA", Lat: 39.0916, Lon: -120.804, Date: "2020-03-09T00:00:00Z", …}
27: {Country: "US", Province: "Santa Rosa County, FL", Lat: 30.769, Lon: -86.9824, Date: "2020-03-09T00:00:00Z", …}
28: {Country: "US", Province: "Snohomish County, WA", Lat: 48.033, Lon: -121.834, Date: "2020-03-09T00:00:00Z", …}
29: {Country: "US", Province: "Florida", Lat: 27.7663, Lon: -81.6868, Date: "2020-03-10T00:00:00Z", …}
30: {Country: "US", Province: "New Jersey", Lat: 40.2989, Lon: -74.521, Date: "2020-03-10T00:00:00Z", …}
31: {Country: "US", Province: "Washington", Lat: 47.4009, Lon: -121.491, Date: "2020-03-10T00:00:00Z", …}
```
Switching from "County, State" to just "State". So during the iteration I'd have to check to see what the formatting is, then sort the data appropriately. Which inevitably led me to this solution...
```
const stateTable = {
AL: "Alabama", AK: "Alaska", AZ: "Arizona", AR: "Arkansas",
CA: "California", CO: "Colorado", CT: "Connecticut", DE: "Delaware",
FL: "Florida", GA: "Georgia", HI: "Hawaii", ID: "Idaho",
IL: "Illinois", IN: "Indiana", IA: "Iowa", KS: "Kansas",
KY: "Kentucky", LA: "Louisiana", ME: "Maine", MD: "Maryland",
MA: "Massachusetts", MI: "Michigan", MN: "Minnesota",
MS: "Mississippi", MO: "Missouri", MT: "Montana", NE: "Nebraska",
NV: "Nevada", NH: "New Hampshire", NJ: "New Jersey",
NM: "New Mexico", NY: "New York", NC: "North Carolina",
ND: "North Dakota", OH: "Ohio", OK: "Oklahoma", OR: "Oregon",
PA: "Pennsylvania", RI: "Rhode Island", SC: "South Carolina",
SD: "South Dakota", TN: "Tennessee", TX: "Texas", UT: "Utah",
VT: "Vermont", VA: "Virginia", WA: "Washington",
WV: "West Virginia", WI: "Wisconsin", WY: "Wyoming"
}
const resultsSize = Object.keys(results).length
let states = {}
for (let i=0; i < resultsSize; i++){
// Province is split between state & county.
if (results[i].Province.split(", ")[1]) {
let state = results[i].Province.split(", ")[1]
if (Object.keys(states).includes(stateTable[state])) {
states[stateTable[state]].push({
location: results[i].Province.split(", ")[0],
date: results[i].Date,
deaths: results[i].Cases
})
} else {
states[stateTable[state]] = []
states[stateTable[state]].push({
location: results[i].Province.split(", ")[0],
date: results[i].Date,
deaths: results[i].Cases
})
}
} else {
// Only state name is specified.
let state = results[i].Province
if (Object.keys(states).includes(state)) {
states[state].push({
date: results[i].Date,
deaths: results[i].Cases
})
} else {
states[state] = []
states[state].push({
date: results[i].Date,
deaths: results[i].Cases
})
}
}
}
```
I now have a new object called "states" with only the date and number of deaths. Let's try adding THIS one up.
```
let totalDeaths = 0
for (const state in states) {
  const stateSize = states[state].length
  const theDead = states[state][stateSize - 1].deaths
  const theTime = states[state][stateSize - 1].date
  totalDeaths = totalDeaths + theDead
  console.log(state, ":", theDead, theTime)
}
```
...which gave a number of 244, just a bit lower than the official numbers; I presume this is due to the API only updating once each night.
MUCH better. Less panic.
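As an aside, the same "keep only the latest total per state, then sum" logic can be written more compactly with `reduce`. The sample data and the tiny abbreviation table below are made up for illustration:

```javascript
// Group by state (handling both "County, ST" and plain "State" formats),
// keep the most recent entry per state, then sum those totals.
const stateTable = { WA: "Washington", FL: "Florida" }; // abbreviated for the example

const results = [
  { Province: "King County, WA", Date: "2020-03-01T00:00:00Z", Cases: 1 },
  { Province: "King County, WA", Date: "2020-03-02T00:00:00Z", Cases: 5 },
  { Province: "Washington", Date: "2020-03-10T00:00:00Z", Cases: 24 },
  { Province: "Florida", Date: "2020-03-10T00:00:00Z", Cases: 3 },
];

const latestByState = results.reduce((acc, entry) => {
  const parts = entry.Province.split(", ");
  const state = parts[1] ? stateTable[parts[1]] : entry.Province;
  // ISO 8601 dates compare correctly as plain strings.
  if (!acc[state] || acc[state].date < entry.Date) {
    acc[state] = { date: entry.Date, deaths: entry.Cases };
  }
  return acc;
}, {});

const totalDeaths = Object.values(latestByState)
  .reduce((sum, s) => sum + s.deaths, 0);

console.log(totalDeaths); // 27 (24 for Washington + 3 for Florida)
```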
What did we learn? Understand how your data is sent before jumping to conclusions. :) | adamlarosa | |
285,809 | let vs const - Let's discuss. | We all are familiar with the difference between const, let and var. If not, please read this.... | 0 | 2020-03-22T10:35:15 | https://dev.to/kumareth/const-vs-let-let-s-discuss-34m9 | javascript, node, discuss | We all are familiar with the difference between const, let and var. If not, please read this.
{% post https://dev.to/sarah_chima/var-let-and-const--whats-the-difference-69e %}
---
For those who are familiar, should know that in the modern-day JavaScript, **YOU SHOULD NEVER USE `var`.**
So now, what we are left with, is the `let` and `const`.
## 🔥 The two scenarios
People believe their ways of using them both. Strongly.
Here are the two types of people.
1) Those who use `const` for Constants (Like for `const PI = 3.14`)
2) Those who use `const` for everything that won't be let
### 📯 const for Constants
Some people believe that `const` should only be used for strictly constant values like the Action Type Reducer Strings, Math values and constants like PI, etc.
If you are that person, you are from **team CONSTANT SPARINGLY**.
### 📯 const for everything that won't be let
If you always use `const`, no matter what, and only use `let` when you change a variable, you are from **team CONSTANT FOR ALL**.
---
There has been a lot of talk around it on Twitter due to this tweet by Dan Abramov.
{% twitter 1208369896880558080 %}
The tweet pretty much sums up that he is from the team CONSTANT SPARINGLY.
If you have been seeing WesBos' tutorials, he seems like he is from the team CONSTANT FOR ALL.
Dan has provided [a beautiful explanation](https://overreacted.io/on-let-vs-const/) for why he thinks const shouldn't be used.
Also, [this writeup here](https://jamie.build/const) focuses on easily concluding this discussion. But still, what's your opinion on it?
---
What do you prefer? Let's Discuss! | kumareth |
285,817 | Understanding JavaScript the weird parts: `this` context | The this keyword in JavaScript has confused a lot of developers. Whether you are just starting your... | 0 | 2020-03-22T11:22:17 | https://bitoverflow.net/Understanding-JavaScript-the-weird-parts:-this-context | javascript, webdev, node |
<br>

The `this` keyword in JavaScript has confused a lot of developers. Whether you are just starting your career in programming or you are an experienced developer, it confuses everyone alike.
*****
Now, before starting, let’s get into the fundamentals of how `this` works in
JavaScript. `this` always refers to the calling context of a function, which will usually be the object with which the function is associated. Since we have so many libraries at our disposal in the JavaScript ecosystem, we often grab a library and start building something without actually understanding what is going on. You will still be able to build amazing applications, but when it comes to debugging them, that is when an understanding of the weird parts of JavaScript comes into the picture. JavaScript is still evolving even after so many years, but the fundamentals of the language will always remain the same.
*****
> `this` does not refer to the function in which it is used, but always refers
> to the object in which it is executed. Let’s understand this by an example.
```javascript
const obj = {
  myFunction: function () {
    console.log(this === window)
  }
}

obj.myFunction()
```

Now, in the above example, we expect this behavior because `this` refers to the calling context of the function, which here is `obj`.

This behavior holds in most other object-oriented languages, which is why it is the default assumption.

Now, in this example object declaration is the same but here we assign it
another variable and the call it afterward instead of calling it right away. Now if we call the newVariable, suddenly the value of `this` changes from `obj` to the `global` or `window`. Now, this tends to trip up a lot of developers. Now in order to understand what value `this` will hold we need to look where it is being called not where it is written. In the above example, it is being called in the global object and not the `obj` object.
*****
Let's look at some complex examples.
```javascript
const obj = {
  myFunction: function () {
    console.log(this === obj)
    setTimeout(function () {
      console.log(this === obj)
      console.log(this === window)
    })
  }
}

obj.myFunction()
```
Now, this example is similar to the one above, but here we use `setTimeout`, which is an asynchronous task. If we run this, we get something different.

We see that inside `setTimeout` the value of `this` changes back to `window` or `global`, depending on the environment (Node.js or the browser). Even though it is the same block of code, the value of `this` changes to `window`. Going back to the first rule: `this` does not depend on where the function is written but on where it is called, and in the case of asynchronous calls the callback is later invoked as a plain function call, so it falls back to the `window` object. Okay, now let's take a look at the same example written a little differently, using an ES6 arrow function.
```javascript
const obj = {
  myFunction: function () {
    console.log(this === obj)
    setTimeout(() => {
      console.log(this === obj)
      console.log(this === window)
    })
  }
}

obj.myFunction()
```

Interestingly, now the value of `this` changes back to `obj` instead of `window`. An important thing to note is that the binding of `this` happens in 3 ways: default binding, implicit binding, and explicit binding. Whenever we have a standalone function call, default binding applies, and `this` binds to the `window` object.

Now, we have to keep that default binding will always be our fallback binding.
*****
Let’s get to know a little bit about Explicit and Implicit binding and
understand how that works.
**Implicit Binding**
Implicit binding happens whenever we have a function call through an object: whatever is on the left side of the dot is what `this` is going to refer to.

In this example, we have `obj` on the left side of the dot, so `this` refers to `obj`.
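Since that example is a screenshot, here is the same rule as a runnable sketch (the counter object is just an illustration):

```javascript
// Implicit binding: `this` is whatever is on the left of the dot at the call site.
const counter = {
  count: 41,
  increment: function () {
    this.count += 1; // `this` is `counter`, because the call is `counter.increment()`
    return this.count;
  },
};

console.log(counter.increment()); // 42
```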
<br>
**Explicit Binding**
Explicit binding of `this` occurs when [.call()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/call),[.apply()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/apply), or [.bind()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/bind) are used on a function.
We call these explicit because you are explicitly passing in a `this` context to call() or apply(). Let’s take a look at how explicit binding looks like in the following example.
```javascript
const obj = {
  myFunction: function () {
    console.log(this === obj)
  }
}

const newFunctionVariable = obj.myFunction
newFunctionVariable.apply(obj)
```

Now, even though we are assigning `myFunction` to a new variable, we can still specify which `this` context the function call will be bound to. We can see this by looking at another example below, where we bind it to a completely different object.
```js
const obj1 = {
  firstName: 'Sachin',
  lastName: 'Thakur',
  myName: function () {
    console.log(this.firstName + ' ' + this.lastName)
  }
}

const obj = {
  myFunction: function () {
    console.log(this)
    console.log(this === obj1)
  }
}

const newFunctionVariable = obj.myFunction
newFunctionVariable.apply(obj1)
```

Now, if we pass `obj1` as the first parameter to `apply`, the function takes the `this` reference of `obj1` even though it is defined on `obj`. And this is how explicit binding works.
*****
Now with the introduction of ES6 arrow functions, the JavaScript engine
introduced a new behavior. Before arrow functions, every new function defined its own `this` value based on how the function was called:
* The global object (`window`) in the case of a plain function call (Default Binding)
* `undefined` in [strict mode](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode) function calls.
* The base object if the function was called as an “object method”.(Implicit Binding)
* You could also Explicitly define what `this` will refer to like we saw in the last example. (Explicit Binding)
An arrow function does not have its own `this`. The `this` value comes from the lexical scope. Arrow functions follow the normal variable lookup rule: if the value is not found in the current scope, go one level up and look for it in the enclosing scope. That's why we don't need to bind the `this` value to the object explicitly, as long as it is available in the enclosing scope.
Thus, in the following code, the `this` within the function that is passed to `setTimeout` has the same value as the `this` in the lexically enclosing
function:
<br>
```js
const obj = {
  myFunction: function () {
    console.log(this === obj)
    setTimeout(() => {
      console.log(this === obj)
    }, 0)
  }
}

obj.myFunction()
```

**Conclusion**
`this` can be a little tricky sometimes, but if we know the basic fundamentals of how scoping works and how JavaScript treats objects, we can easily understand how these core concepts fit together. `this` is especially tricky in callbacks and async functions, where its value changes. Always remember: the value of `this` is determined by how and where the function is invoked.
| thakursachin467 |
285,836 | Fiwit - A service to manage internal IT | Wanted to share with your community a new web app I've been working on for a few months: www.fiwit.io... | 0 | 2020-03-22T11:46:12 | https://dev.to/aclarembeau/fiwit-a-service-to-manage-internal-it-e6b | showdev, rails, saas, startup | Wanted to share with your community a new web app I've been working on for a few months: www.fiwit.io
It's a tool to manage everything in your internal IT:
- IT Asset management
- IT Helpdesk
- Internal IT documentation
**What problem does it solve?**
From what I've observed, many IT people, especially those working in small companies, often struggle to manage their IT.
Although we have some great tools for programming, IT, on the other hand, is often left to archaic solutions: a mix of old software, Excel sheets, or in-house software. So, I decided to build a web app to tackle this problem in an easy way, accessible to anyone.
**The technical stack**
The app is built using Rails 6.
It's not a single-page app. I've made it using Turbolinks, after a blog post showed me that turbolinks-powered applications can still bring a good user experience in 2020.
The app is hosted on Heroku, after trying different projects on AWS, as this was much simpler for deployment.
Interesting fact: we implemented a client-side barcode scanner using a very good library, Quagga.js.
**Request for feedback**
As this project is very new, I'd like to ask your community for some feedback.
Many of you programmers have probably also had to work with an IT department. So I'm wondering: is this something you would use? If you have time to look at it, could you tell me what you think about the website?
Of course, other questions are welcome as well, about any aspect (such as if you would like to get more details about the tech stack, or, other things). | aclarembeau |
285,881 | How to use Verdaccio with GitHub registry | Show a brief description of how to use Verdaccio having GitHub registry as uplink | 0 | 2020-03-22T14:50:54 | https://dev.to/verdaccio/how-to-use-verdaccio-with-github-registry-2ejj | javascript, github, node, npm | ---
title: How to use Verdaccio with GitHub registry
published: true
description: Show a brief description of how to use Verdaccio having GitHub registry as uplink
tags: javascript, github, nodejs, npm
---
I've been asked about this a couple of times, so I want to share how you can achieve a seamless integration of GitHub with [Verdaccio](https://verdaccio.org/). Node.js package managers only allow using one registry when you run e.g. `npm install`, unless you modify the `.npmrc` and add some specific configuration. But frankly, we can do better using a **proxy**.
### Generating the Token at GitHub
First of all, we need to understand that the GitHub registry is not a conventional registry; it does not support all the `npm` commands you are used to (eg: `npm token`).
I'd recommend you first read the [official documentation](https://help.github.com/en/packages/using-github-packages-with-your-projects-ecosystem/configuring-npm-for-use-with-github-packages) at GitHub on how to use packages.
Once you have created a **personal token** in their user interface (remember you cannot use `npm adduser`), copy the token from the website and proceed to log in from your terminal.
```
$ npm login --registry=https://npm.pkg.github.com
> Username: USERNAME
> Password: TOKEN
```
The last step is recovering the token generated by the GitHub registry from the `~/.npmrc` file; find the line below to verify that you can use `npm` commands against the GitHub registry.
```
//npm.pkg.github.com/:_authToken=TOKEN
```
One optional step is to publish a package; I have already published one for my example below.
> This step is required if you have not published packages, otherwise, you don't need to log in, just copy the token.
Great, you have a **token** and that's all you need for *Verdaccio*.
### Installing Verdaccio
Let's imagine you don't know anything about [Verdaccio](https://verdaccio.org/). So here is what it does.
**Verdaccio is a lightweight private proxy registry built in Node.js**
with a straightforward installation and no dependencies aside from Node.js itself.
```
npm install --global verdaccio
```
to run *Verdaccio*, just run this in your terminal:
```
➜ verdaccio
warn --- config file - /Users/user/.config/verdaccio/config.yaml
warn --- Verdaccio started
warn --- http address - http://localhost:4873/ - verdaccio/4.5.0
```
for further information I'd recommend reading our [documentation](https://verdaccio.org/docs/en/installation).
For this article, we will focus on the **proxy**, which is the most powerful and popular feature by far.
### Hooking the GitHub registry
First of all, you need a published package in the registry. Here is mine, and as you can see, **GitHub only supports scoped packages**.

This example shows how to fetch packages from the **npmjs** and **GitHub** registries at the same time, without modifying the `.npmrc` file.
#### Uplinks
Open the verdaccio configuration file (eg: `/Users/user/.config/verdaccio/config.yaml`) and update the `uplinks` section adding a new registry.
```
uplinks:
npmjs:
url: https://registry.npmjs.org/
github:
url: https://npm.pkg.github.com
auth:
type: bearer
token: xxxx
```
For demonstration purposes let's copy in the token from the example above. Note that populating the config file with a literal `token` is not the best approach; I recommend using *environment variables* with the **auth** property, read more about it [here](https://verdaccio.org/docs/en/uplinks#auth-property).
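As a sketch of that approach, something like the following should let the token come from an environment variable instead of living in the file (this assumes the `token_env` option described in the uplinks documentation; verify it against your Verdaccio version):

```yaml
uplinks:
  github:
    url: https://npm.pkg.github.com
    auth:
      type: bearer
      # Read the token from the GITHUB_TOKEN environment variable
      # instead of hard-coding it in the config file.
      token_env: GITHUB_TOKEN
```

Then start the registry with the variable set, eg: `GITHUB_TOKEN=xxxx verdaccio`.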
#### Package Access
To install packages, we need the list of dependencies in your `package.json` file. Here is my example:
```
"dependencies": {
"@types/babel__parser": "7.1.1",
"@juanpicado/registry_test": "*",
"lodash": "*"
}
```
If you recall, I've published a package in my GitHub profile named `registry_test`, but GitHub requires my public package to be accessed scoped with my user name, so that would be `@juanpicado/registry_test`.

To make it more interesting, I also added a random public package published by another user, named `@types/babel__parser`.
The next step is setting up the **package access** section:
```
packages:
'@juanpicado/*':
access: $all
publish: $authenticated
unpublish: $authenticated
proxy: github
'@types/babel__parser':
access: $all
publish: $authenticated
unpublish: $authenticated
proxy: github
'@*/*':
access: $all
publish: $authenticated
unpublish: $authenticated
proxy: npmjs
'**':
access: $all
publish: $authenticated
proxy: npmjs
```
As we describe in the packages [documentation](https://verdaccio.org/docs/en/packages#usage), the **order is important**. Define the scoped packages you want to match on top of `'@*/*'` and define the `proxy` properties to the name used in the uplink section, for our example would be `proxy: github`.
With such configuration, *Verdaccio* will be able to route the request to the right remote.
```
http --> 200, req: 'GET https://registry.npmjs.org/lodash' (streaming)
http --> 200, req: 'GET https://registry.npmjs.org/lodash', bytes: 0/194928
http <-- 200, user: null(127.0.0.1), req: 'GET /lodash', bytes: 0/17599
http <-- 200, user: null(127.0.0.1), req: 'GET /lodash', bytes: 0/17599
http --> 200, req: 'GET https://npm.pkg.github.com/@types%2Fbabel__parser' (streaming)
http --> 200, req: 'GET https://npm.pkg.github.com/@types%2Fbabel__parser', bytes: 0/1113
http --> 200, req: 'GET https://npm.pkg.github.com/@juanpicado%2Fregistry_test' (streaming)
http --> 200, req: 'GET https://npm.pkg.github.com/@juanpicado%2Fregistry_test', bytes: 0/2140
http <-- 200, user: null(127.0.0.1), req: 'GET /@types%2fbabel__parser', bytes: 0/708
http <-- 200, user: null(127.0.0.1), req: 'GET /@types%2fbabel__parser', bytes: 0/708
http <-- 200, user: null(127.0.0.1), req: 'GET /@juanpicado%2fregistry_test', bytes: 0/911
http <-- 200, user: null(127.0.0.1), req: 'GET /@juanpicado%2fregistry_test', bytes: 0/911
```
As we can observe if we have a close look at the server output.
* `lodash` is routed through -> `https://registry.npmjs.org/` .
* `"@types/babel__parser": "7.1.1"` is routed through -> `https://npm.pkg.github.com/@types%2Fbabel__parser`.
* `"@juanpicado/registry_test": "*"` is routed through `https://npm.pkg.github.com/@juanpicado%2Fregistry_test'.`.
Verdaccio is able to handle as many remotes as you need; furthermore, you can add two *proxy* values as a fallback in case the package is not found in the first option.
```
packages:
'@juanpicado/*':
access: $all
publish: $authenticated
unpublish: $authenticated
proxy: npmjs github
```
Verdaccio will try to fetch from *npmjs*, and if the package fails for any reason it will retry on *github*. This scenario is useful if you are not 100% sure whether the package is available in a specific registry. As a downside, adding multiple proxies will slow down installations due to the multiple lookups that have to be performed.
```
http --> 404, req: 'GET https://registry.npmjs.org/@juanpicado%2Fregistry_test' (streaming)
http --> 404, req: 'GET https://registry.npmjs.org/@juanpicado%2Fregistry_test', bytes: 0/21
http --> 200, req: 'GET https://npm.pkg.github.com/@juanpicado%2Fregistry_test' (streaming)
http --> 200, req: 'GET https://npm.pkg.github.com/@juanpicado%2Fregistry_test', bytes: 0/2140
http <-- 200, user: null(127.0.0.1), req: 'GET /@juanpicado%2fregistry_test', bytes: 0/908
http <-- 200, user: null(127.0.0.1), req: 'GET /@juanpicado%2fregistry_test', bytes: 0/908
```
#### One more thing
While writing this blog post, I've noticed that files retrieved from the GitHub registry are not named like the tarballs that come from other registries, which always end with the suffix `*.tgz`.

## Wrapping up
**Verdaccio** is a powerful lightweight registry that can be used in multiple ways; you can find out more about it on our [website](https://verdaccio.org). This project is run by volunteers and [you can also be part of it](https://github.com/verdaccio/verdaccio/issues/1461).
If you would like to donate, it can be done through [OpenCollective](https://opencollective.com/verdaccio), help us to reach more developers to have a sustainable Node.js registry.
Thanks for using Verdaccio and please, **keep safe, stay at home and wash your hands regularly.**
| jotadeveloper |
285,935 | Introduction to Fluture - A Functional Alternative to Promises | fluture-js / Fluture 🦋 Fantasy... | 0 | 2020-04-06T14:50:53 | https://dev.to/avaq/fluture-a-functional-alternative-to-promises-21b | functional, javascript, promises, monad | {% github fluture-js/Fluture %}
In this piece we'll be going over how to use Futures, assuming the *why* has been covered sufficiently by [Broken Promises][].
{% medium https://medium.com/@avaq/broken-promises-2ae92780f33 %}
----
We'll be going over Fluture's five major concepts:
1. [Functional Programming](#a-functional-api): How functional programming patterns determine the Fluture API.
2. [Future Instances](#creating-future-instances): What a Future instance represents, and the ways to create one.
3. [Future Consumption](#consuming-futures): What consumption of a Future is, and when and how we apply it.
4. [Future Transformation](#not-consuming-futures): What we can do with a Future before we've consumed it, and why that's important.
5. [Branching and Error Handling](#branching-and-error-handling): Introduction to Fluture's "rejection branch", and how it differs from rejected Promises.
## A Functional API
The Fluture API was designed to play well with the functional programming paradigm, and libraries within this ecosystem (such as [Ramda][] and [Sanctuary][]). Because of this you'll find that there are almost no methods, and that all functions provided by the library use [Function Currying][].
So where a piece of Promises-based code might look like this:
```js
promiseInstance
.then(promiseReturningFunction1)
.then(promiseReturningFunction2)
```
A naive translation to Fluture-based code (using [`chain`][]) makes that:
```js
chain (futureReturningFunction2)
(chain (futureReturningFunction1)
(futureInstance))
```
And although I'm using [Functional Style Indentation][] to make this code a little more readable, I have to admit that the Promise-based code reads better.
But there's a method to the madness: The API was carefully designed to work well with [Function Composition][]. For example, we can use [`flow` from Lodash][]\* to make the same program look much more like the Promise-based code:
```js
_.flow ([
chain (futureReturningFunction1),
chain (futureReturningFunction2),
]) (futureInstance)
```
<sup>\* There's also [`pipe` from Sanctuary][], [`pipe` from Ramda][], and many more.</sup>
Better yet, function composition is going to be included as the [Pipeline Operator][] in a future version of JavaScript. Once this is in the language, the code we can write looks identical to the Promise-based code.
```js
futureInstance
|> chain (futureReturningFunction1)
|> chain (futureReturningFunction2)
```
And whilst looking identical, this function-based code is more decoupled and easier to refactor. For example, I can just grab a piece of that pipeline and extract it to a function:
```diff
+const myFunction = chain (futureReturningFunction1)
+
futureInstance
-|> chain (futureReturningFunction1)
+|> myFunction
|> chain (futureReturningFunction2)
```
Doing that to a fluent method chain is not as straightforward:
```diff
+const myFunction = promise => promise.then(promiseReturningFunction1)
+
+(
promiseInstance
-.then(promiseReturningFunction1)
+|> myFunction
+)
.then(promiseReturningFunction2)
```
----
Since the [Pipeline Operator][] is still a language proposal, we might be working in an environment where it's not available. Fluture ships with a [`pipe`][] method to simulate what working with the pipeline operator would be like. It has all the mechanical advantages of the pipeline operator, but it's a little more verbose.
```js
futureInstance
.pipe (chain (futureReturningFunction1))
.pipe (chain (futureReturningFunction2))
```
## Creating Future Instances
Future instances are slightly different from Promise instances, in that they represent an *asynchronous computation* as opposed to an *asynchronously acquired value*. Creating a Future instance is very similar to creating a Promise, however. The simplest way is by using the [`resolve`][] or [`reject`][] functions, which create resolved or rejected Futures respectively. For now though, we'll focus on the general constructor function: [`Future`][], and how it compares to Promise construction.
```js
const promiseInstance = new Promise ((res, rej) => {
setTimeout (res, 1000, 42)
})
```
```js
const futureInstance = Future ((rej, res) => {
const job = setTimeout (res, 1000, 42)
return function cancel(){
clearTimeout (job)
}
})
```
Some notable differences:
1. The `new` keyword is not required. In functional programming, we make no distinction between functions that return objects, and functions that return any other kind of data.
2. The `rej` and `res` arguments are flipped, this has to do with some conventions in the functional programming world, where the "more important" generic type is usually placed on the rightmost side.
3. We return a cancellation function (`cancel`) into the Future constructor. This allows Fluture to clean up when a running computation is no longer needed. More on that in the section about [Consuming Futures](#consuming-futures).
----
The [`Future`][] constructor used above is the most flexible way to create a new Future, but there's also more specific ways of [Creating Futures][]. For example, to create a Future from a node-style callback function, we can use Fluture's [`node`][] function:
```js
const readText = path => node (done => {
fs.readFile (path, 'utf8', done)
})
```
Here we've created a function `readText`, which given a file path returns a Future which might reject with an Error, or resolve with the contents of the corresponding file decoded from utf8.
Doing the same using the flexible Future constructor is more work:
```js
const readText = path => Future ((rej, res) => {
fs.readFile (path, 'utf8', (err, val) => err ? rej (err) : res (val))
return () => {}
})
```
As we can see, [`node`][] took care of the empty cancellation function, and juggling with the callback arguments. There's also Future constructors that reduce the boilerplate when working with underlying Promise functions, or functions that throw exceptions. Feel free to explore. All of them are listed under the [Creating Futures][] section of the Fluture docs.
*In day-to-day use, you should find that the [`Future`][] constructor is needed only for the most specific of cases and you can get very far using the more specialized ones.*
## Consuming Futures
In contrast to a Promise, a Future will have to be eventually "consumed". This is because - as I mentioned earlier - Futures represent a computation as opposed to a value. And as such, there has to be a moment where we tell the computation to run. "Telling the Future to run" is what we refer to as consumption of a Future.
The go-to way to consume a Future is through the use of [`fork`][]. This function takes two continuations (or callbacks), one for when the Future rejects, and one for when it resolves.
```js
const answer = resolve (42)
const consume = fork (reason => {
console.error ('The Future rejected with reason:', reason)
}) (value => {
console.log ('The Future resolved with value:', value)
})
consume (answer)
```
When we instantiated the `answer` Future, nothing happened. This holds true for any Future we instantiate through any means. The Futures remain "cold" *until they are consumed*. This contrasts with Promises, which eagerly evaluate their computation as soon as they are created. So only the last line in the example above actually kicked off the computation represented by the `answer` Future.
In this case, if we would run this code, we would see the answer immediately. That's because `resolve (42)` knew the answer up-front. But many Futures could take some time before they get to an answer - maybe they're downloading it over a slow connection, or spawning a botnet to compute the answer. This also means that it might take *too long*, for example if the user got bored, or another satisfactory answer has come in from another source. For those cases, we can *unsubscribe* from the consumption of a Future:
```js
const slowAnswer = after (2366820000000000000) (42)
const consume = value (console.log)
const unsubscribe = consume (slowAnswer)
setTimeout (unsubscribe, 3000)
```
In this example, we use [`after`][] to create a Future which takes approximately seven and a half million years to compute the answer. And we're using [`value`][] to consume the Future, assigning its output to `unsubscribe`.
Then we got bored waiting for the answer after three seconds, and unsubscribed. We were able to do so because most consumption functions return their own unsubscription function. When we unsubscribe, Fluture uses the cancellation functions defined inside the underlying constructors (in our example, that would be the cancellation function created by `after`) to stop any running computations. More about this in the [Cancellation][] section of the Fluture README.
*Consumption of a Future can be thought of as turning the asynchronous computation into the eventual value that it'll hold. There's also other ways besides [`fork`][] to consume a Future. For example, the [`promise`][] function consumes the Future and returns a Promise of its eventual result.*
## Not Consuming Futures
Unlike with a Promise, we can choose *not to* consume a Future (just yet). As long as a Future hasn't been consumed yet, we can extend, compose, combine, pass-around, and otherwise transform it as much as we like. This means we're *treating our asynchronous computations as regular values* to be manipulated in all the same ways we're used to manipulate values.
Manipulating Futures (as the Time-Lords we are) is what the Fluture library is all about - I'll list some of the possibilities here. You don't have to read too much into these: they're just to give you an idea of the sort of things you can do. We'll also be using these functions in some of the examples further down.
* [`chain`][] transforms the value inside a Future using a function that returns another Future.
* [`map`][] transforms the value inside a Future using a function to determine the new value it should hold.
* [`both`][] takes two Futures and returns a new Future which runs the two in parallel, resolving with a pair containing their values.
* [`and`][] takes two Futures and returns a new Future which runs them in sequence, resolving with the value from the second Future run.
* [`lastly`][] takes two Futures and returns a new Future which runs them in sequence, resolving with the value from the first Future run.
* [`parallel`][] takes a list of Futures, and returns a new Future which runs them all in parallel, with a user-chosen limit, and finally resolves with a list of each of their resolution values.
And many more. The purpose of all of these functions is to give us ultimate control over our asynchronous computations. To sequence or to parallelize, to run or not to run, to [recover from failure](#branching-and-error-handling). As long as the Future has not yet been consumed, we can modify it in any way we want.
Representing asynchronous computations as regular values - or "first-class citizens", if you will - gives us a level of flexibility and control that is difficult to convey, but I will try. I'll demonstrate a problem similar to one I faced some time ago, and show that the solution I came up with was only made possible by first-class asynchronous computations. Suppose we have an async program like the one below:
```js
//This is our readText function from before, reading the utf8 from a file.
const readText = path => node (done => fs.readFile (path, 'utf8', done))
//Here we read the index file, and split out its lines into an Array.
const eventualLines = readText ('index.txt')
.pipe (map (x => x.split ('\n')))
//Here we take each line in eventualLines, and use the line as the path to
//additional files to read. Then, using parallel, we run up to 10 of those
//file-reads in parallel, obtaining a list of all of their texts.
const eventualTexts = eventualLines
.pipe (map (xs => xs.map (readText)))
.pipe (chain (parallel (10)))
//And at the end we consume the eventualTexts by logging them to the console.
eventualTexts .pipe (value (console.log))
```
<sup>The problem solved in this example is based on the [Async Problem][].</sup>
Now, what if it's taking a really long time, and we want to find out which part of the program is taking the longest? Traditionally, we would have to go in and modify the transformation functions, adding in calls to [`console.time`][]. With Futures, I could define a function that does this automatically:
```js
const time = tag => future => (
encase (console.time) (tag)
.pipe (and (future))
.pipe (lastly (encase (console.timeEnd) (tag)))
)
```
Let's go over the function line by line to see how it uses *async computation as first-class citizens* to achieve what it does.
1. We're taking two arguments, `tag` and `future`. The one to pay attention to is `future`. This function demonstrates something we rarely do with Promises and that is to pass them around as function arguments.
2. We use [`encase`][] to wrap the `console.time` call in a Future. This prevents it from running right away, and makes it so we can combine it with other Futures. This is a common pattern when using Futures. Wrapping any code that has a side-effect in a Future will make it easier to manage the side-effect and control where, when, and if it will happen.
3. We use [`and`][] to combine the future which came in as an argument with the Future that starts the timer.
4. We use [`lastly`][] to combine the computation (which now consists of starting a timer, followed by an arbitrary task) with a final step for writing the timing result to the console using `console.timeEnd`.
Effectively what we've created is a function that takes in *any* Future, and returns a new Future which has the same type, but is wrapped in two side-effects: the initialization and finalization of a timer.
With it, we can sprinkle our code with timers freely, without having to worry that the side-effects (represented by the return values of the `time` function) will happen at the wrong moments:
```js
//Simply pipe every file-read Future through 'time'.
const readText = path => node (done => fs.readFile (path, 'utf8', done))
.pipe (time (`reading ${path}`))
//Measure reading and processing the index as a whole.
const eventualLines = readText ('index.txt')
.pipe (map (s => s.split ('\n')))
.pipe (time ('getting the lines'))
const eventualTexts = eventualLines
.pipe (map (ss => ss.map (readText)))
.pipe (chain (parallel (10)))
//And finally we insert an "everything" timer just before consumption.
eventualTexts .pipe (time ('everything')) .pipe (value (console.log))
```
The `time` function just transforms a computation from one "list of instructions" to another, and the new computation will always have the timing instructions inserted exactly before and after the instruction we want to measure.
The purpose of all of this was to illustrate the benefit of "first-class asynchronous computations"; A utility like this `time` function would not have been possible without them. For example with Promises, by the time a Promise would be passed into the `time` function, it would already be running, and so the timing would be off.
----
The header of this section was "Not Consuming Futures", and it highlights an idea that I really want to drive home: *in order to modify computations, they should not be running yet*. And so we should refrain from consuming our computation for as long as possible.
*In general, and as a rule-of-thumb, every program only has a single place where a Future is consumed, near the entry-point of the program.*
## Branching and Error Handling
Until this point in the article we've only covered the "happy paths" of asynchronous computation. But as we know, asynchronous computations occasionally fail; that's because "asynchronous" in JavaScript usually means I/O, and I/O can go wrong. This is why Fluture comes with a "rejection branch", enabling its use for a style of programming sometimes referred to as [Railway Oriented Programming][].
When transforming a Future using transformation functions such as the aforementioned [`map`][] or [`chain`][], we'll affect one of the branches without affecting the other. For example `map (f) (reject (42))` equals `reject (42)`: the transformation had no effect, because the value of the Future was in the rejection branch.
There's also functions that affect only the rejection branch, such as [`mapRej`][] and [`chainRej`][]. The following program prints the answer 42, because we start with a *rejected* Future, and apply transformations to the rejection branch. In the last transformation using `chainRej`, we switch it back to the resolution branch by returning a *resolved* Future.
```js
const future = reject (20)
.pipe (mapRej (x => x + 1))
.pipe (chainRej (x => resolve (x + x)))
future .pipe (value (console.log))
```
Finally, there's also some functions that affect *both* branches, like [`bimap`][] and [`coalesce`][]. They definitely have their uses, but you'll need them less often.
----
I sometimes think of the two branches of a Future as two railway tracks parallel to each other, with the various transformation functions represented by junctions affecting the tracks and the payload of the train. I'll draw it. Imagine both lines being railway tracks, with the train driving from top to bottom on one of either track.
```text
reject (x) resolve (y)
\ /
: | | :
map (f) : | f y : The 'map' function affects the value in
: | | : the resolution track, but if the train
: | | : would've been on the rejection track,
: | | : nothing would've happened.
: | | :
: | | :
chain (f) : | f y : The 'chain' function affects the value in
: | /| : the resolution track, and allowed the
: | / | : train to change tracks, unless it was
: | / | : already on the rejection track.
: |/ | :
: | | :
coalesce (f) (g) : f x g y : The 'coalesce' function affects both
: \ | : tracks, but forces the train to switch
: \ | : from the rejection track back to the
: _ \ | : resolution track.
: | \| :
: | | :
and (m) : | m : The 'and' function replaces a train on
: | /| : the resolution track with another one,
: | / | : allowing it to switch tracks.
: | / | :
: |/ | :
: | | :
chainRej (f) : f y | : The 'chainRej' function is the opposite
: |\ | : of the 'chain' function, affecting the
: | \ | : rejection branch and allowing a change
: | \ | : back to the resolution track.
: | \| :
: | | :
V V
```
This model of programming is somewhat similar to pipelines in Bash scripting, with stderr and stdout being analogous to the rejection and resolution branches respectively. It lets us program for the happy path, without having to worry about the unhappy path getting in the way.
Promises have this too, in a way, but Fluture takes a slightly different stance on what the rejection branch should be used for. This difference is most obvious in the way *thrown exceptions* are treated. With Promises, if we throw an exception, it ends up in the rejection branch, mixing it in with whatever other thing we might have had there. This means that fundamentally, the rejection branch of a Promise has no strict *type*. This makes the Promise rejection branch a place in our code that could produce any surprise value, and as such, not the ideal place for "railway oriented" control flow.
Fluture's rejection branch was designed to facilitate control flow, and as such, does not mix in thrown exceptions. This also means the rejection branch of a Future can be strictly typed and produces values of the type we expect.
When using Fluture - and functional programming methodologies in general - exceptions don't really have a place as constructs for control flow. Instead, the only good reason to throw an exception is if a developer did something wrong, usually a type error. Fluture, being functionally minded, will happily let those exceptions propagate.
The philosophy is that an exception means a bug, and a bug should affect the behaviour of our code as little as possible. In compiled languages, this classification of failure paths is much more obvious, with one happening during compile time, and the other at runtime.
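To make the distinction concrete, here is a tiny railway sketch in plain JavaScript. This is only an illustration of the "typed rejection track" idea, not Fluture's API or internals; `resolve`, `reject`, `chain`, and `chainRej` below are toy stand-ins for the real functions.

```javascript
// Toy railway (illustration only -- not Fluture's API or internals).
// A result is either { ok: true, value } or { ok: false, reason }.
const resolve = value => ({ ok: true, value });
const reject = reason => ({ ok: false, reason });

// chain: transforms the resolution track; rejections pass through untouched.
const chain = f => result => (result.ok ? f(result.value) : result);

// chainRej: transforms the rejection track; resolutions pass through untouched.
const chainRej = f => result => (result.ok ? result : f(result.reason));

// The rejection reason is always a string here -- a strictly typed branch.
// Nothing catches thrown exceptions: a throw inside f is a bug and propagates,
// instead of being folded into the rejection track like a Promise would do.
const parseAge = s => {
  const n = Number(s);
  return Number.isNaN(n) ? reject('not a number: ' + s) : resolve(n);
};

const happy = chain(parseAge)(resolve('42'));          // { ok: true, value: 42 }
const unhappy = chain(parseAge)(resolve('oops'));      // typed rejection
const recovered = chainRej(() => resolve(0))(unhappy); // back on the resolution track
```

Because the rejection branch only ever carries values we put there on purpose, recovery steps like `chainRej` can rely on the reason's type.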
## In Summary
1. The Fluture API design is based in the functional programming paradigm. It heavily favours **function composition** over fluent method chains and plays well with other functional libraries.
2. Fluture provides several **specific functions**, and a **general constructor**, to create Futures. Futures represent **asynchronous computations** as opposed to **eventual values**. Because of this, they are **cancellable** and can be used to **encase side-effects**.
3. The asynchronous computations represented by Futures can be **turned into their eventual values** by means of **consumption** of the Future.
4. But it's much more interesting **not to consume a Future**, because as long as we have unconsumed Future instances we can **transform**, **combine**, and otherwise manipulate them in interesting and useful ways.
5. Futures have a **type-safe failure branch** to describe, handle, and recover from runtime I/O failures. TypeErrors and bugs don't belong there, and can only be handled during consumption of the Future.
And that's all there really is to know about Fluture. Enjoy!
[Async Problem]: https://github.com/plaid/async-problem
[Broken Promises]: https://medium.com/@avaq/broken-promises-2ae92780f33
[Cancellation]: https://github.com/fluture-js/Fluture#cancellation
[Creating Futures]: https://github.com/fluture-js/Fluture#creating-futures
[Function Composition]: https://drboolean.gitbooks.io/mostly-adequate-guide-old/content/ch5.html
[Function Currying]: https://drboolean.gitbooks.io/mostly-adequate-guide-old/content/ch4.html
[Functional Style Indentation]: https://github.com/sanctuary-js/sanctuary/issues/438
[Pipeline Operator]: https://github.com/tc39/proposal-pipeline-operator
[Railway Oriented Programming]: https://fsharpforfunandprofit.com/rop/
[Ramda]: https://ramdajs.com/
[Sanctuary]: https://sanctuary.js.org/
[`console.time`]: https://nodejs.org/api/console.html#console_console_time_label
[`flow` from Lodash]: https://lodash.com/docs/4.17.15#flow
[`pipe` from Ramda]: https://ramdajs.com/docs/#pipe
[`pipe` from Sanctuary]: https://sanctuary.js.org/#pipe
[`Future`]: https://github.com/fluture-js/Fluture#future
[`after`]: https://github.com/fluture-js/Fluture#after
[`and`]: https://github.com/fluture-js/Fluture#and
[`bimap`]: https://github.com/fluture-js/Fluture#bimap
[`both`]: https://github.com/fluture-js/Fluture#both
[`chain`]: https://github.com/fluture-js/Fluture#chain
[`chainRej`]: https://github.com/fluture-js/Fluture#chainrej
[`coalesce`]: https://github.com/fluture-js/Fluture#coalesce
[`encase`]: https://github.com/fluture-js/Fluture#encase
[`fork`]: https://github.com/fluture-js/Fluture#fork
[`lastly`]: https://github.com/fluture-js/Fluture#lastly
[`map`]: https://github.com/fluture-js/Fluture#map
[`mapRej`]: https://github.com/fluture-js/Fluture#maprej
[`node`]: https://github.com/fluture-js/Fluture#node
[`parallel`]: https://github.com/fluture-js/Fluture#parallel
[`pipe`]: https://github.com/fluture-js/Fluture#pipe
[`promise`]: https://github.com/fluture-js/Fluture#promise
[`reject`]: https://github.com/fluture-js/Fluture#reject
[`resolve`]: https://github.com/fluture-js/Fluture#resolve
[`value`]: https://github.com/fluture-js/Fluture#value
| avaq |
285,990 | NestJS - Adding a frontend to the monorepo | In the last two blog posts, we created a Monorepo and integrated Redis. You can find them here: Mon... | 5,366 | 2020-03-22T17:32:14 | https://dev.to/lampewebdev/nestjs-adding-a-frontend-to-the-monorepo-11g6 | typescript, javascript, beginners, vue | In the last two blog posts, we created a Monorepo and integrated Redis. You can find them here:
- [Monorepo and Microservice setup in Nest.js](https://dev.to/lampewebdev/monorepo-and-microservice-setup-in-nest-js-41n4)
- [NestJS - Microservices with Redis](https://dev.to/lampewebdev/nestjs-microservices-with-redis-996)
In this blog post, we will add Vue as our frontend and make it work within our Monorepo.
### Installing the dependencies
First, let's install our dependencies:
```bash
yarn add vue
```
And now our development dependencies:
```bash
yarn add -D babel-loader css-loader file-loader html-webpack-plugin node-sass sass-loader url-loader vue-loader vue-template-compiler webpack webpack-bundle-analyzer webpack-cli webpack-dev-server vue-eslint-parser
```
As you can see, we need to install way more dependencies for development. Most of them are dependencies to make Webpack build and serve our frontend.
Webpack will handle HTML, Vue, CSS, Sass, and other asset files.
### Creating the frontend
First, we need to create a folder named 'frontend'
```bash
mkdir frontend
```
In that folder, we will have all of our 'frontends'. For this example, we want to create our frontend for our 'blog' backend.
```bash
cd frontend
mkdir blog
```
Now we need to create an `index.html` file. This will be the entry file to the blog frontend.
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1.0" />
<title>My Vue app with webpack 4</title>
</head>
<body>
<div id="app"></div>
</body>
</html>
```
The most important line here is the `div` with the `id="app"`. VueJS needs this `div` as an entry point.
The next file we need is a `webpack.config.js`
```js
/* eslint-disable @typescript-eslint/no-var-requires */
const path = require('path');
const VueLoaderPlugin = require('vue-loader/lib/plugin');
const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;
const HtmlPlugin = require('html-webpack-plugin');
const config = {
context: __dirname,
entry: './src/index.ts',
output: {
path: path.resolve(process.cwd(), 'dist/frontend'),
filename: '[name].[contenthash].js'
},
target: 'web',
module: {
rules: [
{
test: /\.vue$/,
loader: 'vue-loader'
},
{
test: /\.css$/,
use: [
'vue-style-loader',
'css-loader'
]
},
{
test: /\.ts$/,
loader: "ts-loader",
options: { appendTsSuffixTo: [/\.vue$/] },
exclude: /node_modules/
},
{
test: /\.scss$/,
use: [
'vue-style-loader',
'css-loader',
'sass-loader'
]
},
{
test: /\.svg$/,
use: 'file-loader'
},
{
test: /\.png$/,
use: [
{
loader: 'url-loader',
options: {
mimetype: 'image/png'
}
}
]
}
]
},
resolve: {
extensions: [
'.js',
'.vue',
'.tsx',
'.ts'
]
},
plugins: [
new HtmlPlugin({
template: 'index.html',
chunksSortMode: 'dependency'
}),
new BundleAnalyzerPlugin({
analyzerMode: 'static',
openAnalyzer: false,
}),
new VueLoaderPlugin(),
],
optimization: {
runtimeChunk: 'single',
splitChunks: {
cacheGroups: {
vendor: {
test: /[\\/]node_modules[\\/]/,
name: 'vendors',
chunks: 'all'
}
}
}
},
devServer: {
contentBase: path.join(__dirname, 'public'),
compress: true,
port: 9000
}
};
module.exports = config;
```
Webpack configs are fun! Let's start from the bottom. The `devServer` will run on port `9000` and will look for files in the `public` folder. For that to work, we need to set the `context` option to `__dirname`, which resolves to the directory the config file is in (in our case, the blog frontend folder). `entry` is the file that bootstraps the application; we will create it next. In the `output` section we need to specify the path: `process.cwd()` resolves to the main project folder, and we append `dist/frontend`, so that is where our built frontend files will end up. The rest is configuration to get Vue running with TypeScript and to load CSS, SCSS, SVG, and PNG files.
TypeScript also needs a config.
```json
{
"compilerOptions": {
"outDir": "./dist/",
"sourceMap": true,
"strict": true,
"noImplicitReturns": true,
"noImplicitAny": true,
"module": "es6",
"moduleResolution": "node",
"target": "es5",
"allowJs": true
},
"include": [
"./blog/src/**/*"
]
}
```
This is a pretty standard TS config. We need to include our `blog/src` folder; without this, you will get a TypeScript error.
Now let's create our `src/index.ts`, `src/App.vue`, and `src/vue-shim.d.ts` files.
`index.ts`:
```ts
import Vue from 'vue';
import App from './App.vue';
new Vue({
el: '#app',
render: h => h(App),
});
```
This is the default VueJS setup.
`App.vue`
```vue
<template>
<h1>lampeweb dev blog</h1>
</template>
<script lang="ts">
import Vue from 'vue';
export default Vue.extend({
data: function() {
return {
name: 'Hello World!',
};
},
});
</script>
```
Thanks to our Webpack config we can already use TypeScript in our Vue components. This file is a simple Vue component that will simply display a header with the text `lampeweb dev blog`.
`vue-shim.d.ts`:
```ts
declare module '*.vue' {
import Vue from 'vue';
export default Vue;
}
```
This will make TypeScript and your editor happy :). Do you want to know more about how `declare module` works? Leave a comment!
Next, we need to define our npm scripts.
```json
{
"scripts": {
"f:blog:dev:watch": "webpack-dev-server -d --mode development --config ./frontend/blog/webpack.config.js",
"f:blog:build": "webpack -p --mode production --config ./frontend/blog/webpack.config.js"
}
}
```
We can now test if everything worked with:
```bash
yarn run f:blog:dev:watch
```
After Webpack has built our frontend, you should see the following:

I hope you liked this post! If you want a follow-up, please comment, like, and share, so I know that you are interested in content like this!
**👋Say Hello!** [Instagram](https://www.instagram.com/lampewebdev/) | [Twitter](https://twitter.com/lampewebdev) | [LinkedIn](https://www.linkedin.com/in/michael-lazarski-25725a87) | [Medium](https://medium.com/@lampewebdevelopment) | [Twitch](https://dev.to/twitch_live_streams/lampewebdev) | [YouTube](https://www.youtube.com/channel/UCYCe4Cnracnq91J0CgoyKAQ) | lampewebdev |
286,003 | How to host a static website with AWS S3 and SSL using CloudFront | This article was first published on razcodes.dev Prerequisites To follow this article,... | 0 | 2020-03-22T17:40:01 | https://dev.to/razcodes/how-to-host-a-static-website-with-aws-s3-and-ssl-using-cloudfront-3e37 | aws, s3, hosting, ssl | 
This article was first published on [razcodes.dev](https://razcodes.dev/ "razcodes.dev")
## Prerequisites
To follow this article, you will need to have an AWS account. You can create one [here](https://aws.amazon.com/free "AWS").
Note that while creating an account with AWS you are eligible for the free tier for your first year, some of the setup will not be free. It is always a good idea to setup a billing alarm first thing after you create your account, and you can do so by following [this article](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html "AWS Billing alarm").
You will also need to have a domain name. If you don't, you can register one using a registrar like [Google Domains](https://domains.google.com "Google Domains"), or another one of your choice. You can also register your domain directly with AWS using Route53 once you log in. I like having the domain name in a different place, just because I am not 100% sure I will stay with this hosting solution forever, so I feel this gives me the flexibility to move around and experiment more easily.
## Setting up S3
S3 is the AWS object storage service where the files for the website will be uploaded. We will be creating 2 buckets here, one for the root domain (getjambalaya.com) and one for the www subdomain (www.getjambalaya.com). The root domain bucket will just forward to the main www one, where we will be uploading all the files.
Go to your AWS console and then go to S3. Start by creating the first bucket. This bucket will have to be the name of your domain.

Next create the www domain bucket, this time deselecting _Block all public access_ and acknowledging the choice. We are doing this, since this bucket will be made public.

Still in S3, click on the initial bucket, go under _Properties_ and _Static website hosting_, and set it up so it redirects to your www domain.

Go to your www bucket, and under _Properties_ -> _Static website hosting_, select _Use this bucket to host a website_. Also set index.html as the index document.

For this same bucket, under _Permissions_ -> _Bucket Policy_, you can add the following policy, making sure you replace my domain name with yours. This will make all the objects in the bucket accessible to the outside world.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AddPerm",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::www.getjambalaya.com/*"
}
]
}
```
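If you prefer generating the policy instead of hand-editing it, a small script like the one below produces the same document (illustrative only; `www.example.com` is a placeholder for your own www bucket name):

```javascript
// Build the public-read bucket policy for a given bucket name.
// The bucket name passed in is a placeholder -- use your own www bucket.
function publicReadPolicy(bucketName) {
  return {
    Version: '2012-10-17',
    Statement: [
      {
        Sid: 'AddPerm',
        Effect: 'Allow',
        Principal: '*',
        Action: 's3:GetObject',
        Resource: `arn:aws:s3:::${bucketName}/*`,
      },
    ],
  };
}

// Print the policy JSON, ready to paste into the Bucket Policy editor.
console.log(JSON.stringify(publicReadPolicy('www.example.com'), null, 2));
```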
You can now go under Overview and upload all the files of your website into your bucket. For this example I will just upload an index.html file that I have prepared.
## Setting up the domain
In AWS go to _Route 53_ and under _Hosted zones_ click **Create Hosted Zone**.

Once the zone has been created, take all 4 of the NS records provided and make sure you update your DNS settings with your registrar. For Google Domains I went under my domain -> DNS -> Name Servers -> Use custom name servers. It should be similar with other registrars.

These might take a while to update as such is the way with DNS settings.
## Getting the certificate
In AWS, _Services_ -> _Certificate Manager_ -> **Request a certificate** -> _Request a public certificate_. Put in your domain name (www.getjambalaya.com). Select DNS validation. Add Tags if you wish. **Confirm and Request**.
Because we already added the domain to Route53, on the confirmation screen, you can expand by clicking on your domain name and click on _Create record in Route53_ and then **Create**.
Now you will just have to wait until AWS validates your certificate based on the DNS settings. It only took 2 minutes for me, but might take longer.
## Creating the CloudFront distribution
In AWS, _Services_ -> _CloudFront_ -> **Create Distribution** -> _Web_ -> **Get Started**. There will be a lot of options on this screen, but only a few will be changed for this exercise.
Under _Origin Domain Name_ select your www S3 bucket. Under _Viewer Protocol Policy_ choose _Redirect HTTP to HTTPS_.

Under _Alternate Domain Names_ put in your www domain name (www.getjambalaya.com).
Under _SSL Certificate_ choose _Custom SSL Certificate_ and select the one you just created above.

Last thing, under _Default Root Object_ put in index.html.
Click **Create**. It will take a while for the distribution to be created so be patient.
## Finishing the domain setup
Now back in _Services_ -> _Route 53_ -> _Hosted Zones_, select your domain name. We will create 2 more record sets, one for each bucket we created above.
Click _Create Record Set_, select Yes for _Alias_, choose the first S3 bucket you created as the _Alias Target_, then click **Create**.

For the second one, we will point it to the CloudFront distribution we created above, so make sure you wait until that finishes.
_Create Record Set_, add www to the name, select Yes for _Alias_ and in the _Alias Target_, select your CloudFront distribution, then **Create**.
As with other DNS changes this might take a bit to update so if it does not work right away just be patient. It will.
Congratulations, you can now visit your new website. | razcodes |
286,169 | Inspecting Web Traffic with Burp Suite Proxy | If you're doing any type of security testing involving web applications, becoming familiar with Burp... | 5,555 | 2020-03-23T04:36:57 | https://dev.to/leading-edje/inspecting-web-traffic-with-burp-suite-proxy-4opg | security | If you're doing any type of security testing involving web applications, becoming familiar with Burp Suite is essential. Today we're going to take a look at how you can inspect, both incoming and outgoing web traffic using Burp Suite Community Edition. If you don't have Burp Suite installed and configured yet, take a look at the previous article in this series, [Getting Started with Burp Suite](https://dev.to/leading-edje/getting-started-with-burp-suite-31hd).
Alright, let's get started. If you've followed the instructions in the previous article, you should have Burp Suite set up to proxy all web traffic in or out of your browser. Basically, Burp will function as a man-in-the-middle, stopping any request that your browser makes, before allowing it to continue out to the Internet. Burp has lots of tools that can help with manipulating your requests, but we'll save those for later in the series. Today, we're going to keep it simple, and focus on inspecting the web traffic moving through your proxy.
We'll start out simple. We're going to be using the [Try2Hack website](http://www.try2hack.nl/). This is an intentionally vulnerable website that's been set up to help new penetration testers practice their skills. It's important to note at this point that, while it's perfectly fine to inspect the HTTP traffic from any website you visit, modifying the requests in an attempt to "hack" a website is most likely illegal unless you get permission from the site owner. There are numerous projects available that are designed specifically for practicing your skills. Some projects, like [WebGoat](https://owasp.org/www-project-webgoat/), [bWAPP](http://www.itsecgames.com/) and [Mutillidae](https://wiki.owasp.org/index.php/Category:OWASP_Mutillidae) are projects that you can download and run locally, while others, like [Try2Hack](http://www.try2hack.nl/), [Defend The Web](https://defendtheweb.net/) and [HackTheBox](https://www.hackthebox.eu/) are hosted for you. These sites all give you explicit permission to practice hacking. Just make sure that you read the rules, and obey any scope that they've defined.

With Burp Suite running and the Interceptor turned on, let's go to the first challenge on the Try2Hack site: http://try2hack.nl/levels/. Your browser won't navigate yet. In fact it may look like it's not doing anything, but jump over to Burp and take a look at the Intercept tab. You'll see that Burp has stopped the outgoing request, and is waiting for your instructions. Looking at the Raw tab, you can see the exact request that's being sent out. The Headers tab allows you to inspect all of the header values that are included in the HTTP request. There is also a Hex tab that allows you to view and directly modify the bytes of the request. That's a little too advanced for this article, so we'll just focus on the first two.

The Raw tab holds tons of information. Let's break it down. The first line is the HTTP request line. This line shows the method (GET) along with the path to the requested resource and the HTTP version to use. The second line shows the host (try2hack.nl). The remaining lines show the request headers. If our request included a body, there would be a blank line below the headers followed by the body content.

The Headers tab shows the same information, but broken down into key/value pairs. This makes it a little easier to inspect and manipulate the header information. In this case, we have eight headers in addition to the request and host information. Let's take a look at what each of them does.

- **User-Agent** is used to tell the server who is making the request. It generally identifies the browser, the operating system, and the JavaScript engine being used by the requesting client.
- **Accept** tells the server which type of data the client is expecting to be returned in the response.
- **Accept-Language** identifies the language expected by the client.
- **Accept-Encoding** informs the server which types of encoding the browser is able to handle.
- **Referer** tells the server where the request originated from. Yes, it's spelled incorrectly, but the misspelling slipped through in the original specification, so now we're stuck with it. It's an interesting bit of computer history that you can learn more about [here](https://en.wikipedia.org/wiki/HTTP_referer).
- **Connection** tells the server whether the connection should be kept alive or terminated once the response is sent.
- **Upgrade-Insecure-Requests** is a header that is automatically added by most current browsers. It indicates to the server that the browser prefers secure (https) requests. If the server supports secure requests for the resource, it should redirect the client to the secure version.
- **Cache-Control** provides instructions for caching the request. In this case, the `max-age=0` directive indicates that the cached response needs to be revalidated by the server.
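To see how little magic there is in that raw view, here is a toy parser (an illustration only, not anything Burp uses internally) that splits a raw request into its request line and header pairs:

```javascript
// Split a raw HTTP request into { method, path, version, headers }.
// Toy code for illustration: real parsers must also handle bodies,
// folded headers, and malformed input.
function parseRequest(raw) {
  const lines = raw.split('\r\n');
  const [method, path, version] = lines[0].split(' ');
  const headers = {};
  for (const line of lines.slice(1)) {
    if (line === '') break; // blank line: headers end, body (if any) begins
    const i = line.indexOf(': ');
    headers[line.slice(0, i)] = line.slice(i + 2);
  }
  return { method, path, version, headers };
}

const raw =
  'GET /levels/ HTTP/1.1\r\n' +
  'Host: try2hack.nl\r\n' +
  'Connection: keep-alive\r\n' +
  '\r\n';
const req = parseRequest(raw);
// req.method === 'GET', req.headers.Host === 'try2hack.nl'
```

Everything Burp shows you on the Raw and Headers tabs is recoverable from exactly this kind of plain-text structure.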
Okay, that's a lot of information for one simple request. Let's go ahead and see what the response is. Click the Forward button to allow the request to proceed to the Internet. Wait, something's not right... Instead of seeing the response from our request, we immediately intercept another request for inspection. Depending on the browser you're using, your browser configuration, and the site you're visiting, you may find that there are a lot of HTTP requests being made that you aren't even aware of. Firefox, for example, sends telemetry data back to Mozilla quite frequently, if you don't have it disabled. Additionally, some websites make numerous background requests for tracking user behavior. Each time one of these requests is made, Burp will intercept it and wait for instructions. The longer you spend inspecting one request, the more requests you're likely to have lined up in the queue. While it's interesting to see all of the requests going out, when you're conducting a penetration test, you're probably only interested in the requests for the site you're testing. All of the other requests are just noise, slowing down your testing. Luckily, Burp has a great feature called Target Scope that allows you to specify exactly which requests will be intercepted.
To set the scope to only intercept traffic to our target site, switch over to the HTTP History tab. This tab lists all of the requests that have been made since your Burp session started. Find the request to http://try2hack.nl, right-click on the entry, and click the "Add to scope" option. You'll see a pop-up asking if you want to prevent Burp Proxy from sending out-of-scope items to the history and to other Burp tools. Click Yes.


To view your scope, navigate to the Target tab and the Scope sub tab. In the first section, you'll see a list of sites included in your Target Scope. You can see there's one entry here, but we need to make a minor adjustment. When you select a request to add to scope, it will add the exact request. In this case, I clicked on the http://try2hack.nl/levels/ request.

We actually want to intercept all traffic to this host, so click the Edit button next to the list, and remove "/levels/". Click OK.

There's just one more thing we need to do to filter our intercepts. In the Proxy tab, go to the Options sub tab. Below the Proxy Listeners section, you'll see two other sections: Intercept Client Requests and Intercept Server Responses. In both sections, make sure the checkbox next to the "And URL Is in target scope" rule is checked.

Now when we navigate around the web, we should only intercept requests to the try2hack.nl host. You can give it a try by navigating to a different site in your browser.
Alright, now that we've got our scope set up, let's go back and take a look at the HTTP response for our request. Switch to the HTTP history tab, and click on the request for http://try2hack.nl/levels. You'll see the same request that we saw before, but you should also see a new Response tab. Click on that tab to have a look at what the server sent back to us.

You'll notice that the Response section has the same Raw, Headers and Hex tabs that the request had, but it also includes two new tabs: HTML and Render. The HTML tab shows the response body if it's an HTML file that was returned by the server. The Render tab will show what the site looks like rendered in a browser.

Now that you understand the basics of intercepting and inspecting web traffic, let's take a closer look at this response, and see if we can figure out the first challenge. If you double click on the request in the HTTP History, you'll open the request in a new window. This gives us a little more room to work. Switch over to the HTML tab, and let's see if there's anything interesting in there.

Sure enough, at line 96 we've got a script tag with some very helpful information in it! The password from the text box is being evaluated right in the browser. Let's type that password in and see what we get.

And we're in!
Well, that wasn't too tough was it? I'm sure the challenges get tougher from here, but now that you're familiar with how to inspect web traffic using Burp Suite, there's nothing stopping you from continuing on with the Try2Hack challenges, or one of the other resources mentioned above. These sites are a great way to practice your skills and to continue learning!
<a href="https://dev.to/leading-edje">

</a>
288,421 | Connecting Arduino with UDUINO in Unity 2017 | I'm using Arduino with Unity, but UNIDUINO apparently doesn't work well on Unity 2017, so I tried "UDUINO", which has high ratings on the Asset Store! (ΦωΦ) Unity 2017 3.0p1... | 0 | 2020-03-26T03:40:32 | https://dev.to/mizuki_izuna/unity2017-uduino-arduino-4ob8 | unity3d, arduino | I'm using Arduino with Unity, but UNIDUINO apparently doesn't work well on Unity 2017,
so I tried "UDUINO", which has high ratings on the Asset Store! (ΦωΦ)
Unity 2017.3.0p1
"UDUINO": $15
https://assetstore.unity.com/packages/tools/input-management/uduino-arduino-and-unity-made-simple-78402
Cheaper than UniDuino!! (ΦωΦ)
Basically you can figure it out just by watching the explanation video,
but since there were no explanations or hands-on write-ups in Japanese,
I decided to write this article~ (ΦωΦ)
Alright then, let's start with the import

Open the demo scene

Select the UDUINO object
and take a look at the UduinoManager component

Click "Select path" and select your Arduino Libraries folder

Click "Add Uduino Library to Arduino"
This adds the Uduino library to the Arduino IDE.
I'll explain this part later (ΦωΦ)

If there are no problems: "Step done!"

Click "FixNow", and if there are no problems, "Step Now"
Note that Uduino only works with .NET 2.0 (not subset) (ΦωΦ)

Now open the Arduino IDE and upload the Uduino sketch we just added




Once the upload is complete,

let's connect it to Unity.
In the Arduino section, add pin 11 with "Add a pin" and set its Mode to PWM


<blockquote class="twitter-tweet" data-lang="ja"><p lang="ja" dir="ltr">Blinking! (φωφ) <a href="https://t.co/1hp9NAKCzA">pic.twitter.com/1hp9NAKCzA</a></p>— IZUN∀@不屈のVRクリエイター (@mizuki_izuna) <a href="https://twitter.com/mizuki_izuna/status/957128348274061312?ref_src=twsrc%5Etfw">January 27, 2018</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
To drive a servo motor,
turn pin 9 on, then control the PWM on pins 11 and 8 to spin it and adjust the rotation direction
<blockquote class="twitter-tweet" data-lang="ja"><p lang="ja" dir="ltr">Motor spinning round and round (φωφ)! <a href="https://t.co/ZmOyCQJOkB">pic.twitter.com/ZmOyCQJOkB</a></p>— IZUN∀@不屈のVRクリエイター (@mizuki_izuna) <a href="https://twitter.com/mizuki_izuna/status/957157658858946560?ref_src=twsrc%5Etfw">January 27, 2018</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
If I find the time, I'd also like to try connecting to Arduino with the free ARDUnity (ΦωΦ)
| mizuki_izuna |
286,173 | Tribute Page | freeCodeCamp | DevnGraphics | A post by محمد حذیفہ | 0 | 2020-03-22T19:27:09 | https://dev.to/asyncdesire/tribute-page-freecodecamp-devngraphics-18hd | codepen | {% codepen https://codepen.io/devngraphics/pen/BaNPzXz %} | asyncdesire |
286,182 | Setting Up CI and CD for Tauri | Setting Up CI/CD for Tauri Background When Tauri hit my radar, it was pushing t... | 0 | 2020-03-23T13:49:59 | https://dev.to/tauri/setting-up-ci-and-cd-for-tauri-48pp | githubactions, ci, cd | # Setting Up CI/CD for Tauri
## Background
When Tauri hit my radar, it was pushing towards releasing an alpha. The repo did not have continuous integration or continuous delivery set up. I had seen the value that it can have in creating focus in a project. I knew I would be able to provide immediate value to the project with CI/CD.
Thus began my involvement. We decided to use the shiny new GitHub Actions to deliver this. It was still in beta, but it had gone through iterations over months and months. We felt it was stable enough to use (and nearly out of beta anyway). The deep integration with GitHub will be useful for us.
We didn't have much in the way of tests to run, as much of the codebase had only recently been pulled out of other projects. The quickest win was to create some example projects. We could then create what we're calling our smoke tests on top of the examples, taking the manual process of testing in projects and dumping it into CI. They are "integration" tests where we take any HTML, CSS, and JS and build it with Tauri.
## Next Phase of Testing
This served us well and caught a few bugs in the process. We had the focus to get to and meet our release goal of launching the first alpha build. As the confidence in our code base has risen, the value has flipped. The time these smoke tests take to run has grown faster than to the value delivered. The time that it takes to run the smoke tests has become painful. We had these tests run on every push to a PR, as we wanted a tight feedback loop in fixing any issues we had. Now that we started to add more unit tests. We can back off and not run on every push while still getting the needed feedback. The next phase in our setup will be running our unit tests on every push, and dialing back our smoke test runs.
Github Actions has two triggers of which we make heavy use: `push` and `pull_request`. Every commit that made to the repo is a `push`. When you open a pull request from a branch (call it `great_feature`) to another branch (our working branch, `dev`), each commit to `great_feature` would possibly trigger both of these events. We can use a filter to focus on the events we care about though. In our workflows, we only PR (pull request) the `dev` and `master` branches. This means that if we filter to only the `dev` and `master` branches on commit, we will only run that workflow when we _merge_ a PR. A merged PR typically only occurs once a day or less so this will be a good fit for the longer running tests, e.g. the smoke tests in our case. Below is how that might look.
Unit tests:
```yml
# these run fast so we can have them run on any commit
name: unit tests
on:
pull_request:
push:
branches:
- dev
- master
```
Smoke tests:
```yml
# these run slower so we run only on merges to dev or master branch
name: smoke tests
on:
push:
branches:
- dev
- master
```
Tauri operates off the `dev` branch as default, and merges to `master` for release. With these Github Actions set up, we will run the unit tests on every commit to an open PR (see `pull_request`). When that PR is merged into `dev`, we will run both the unit tests and the smoke tests.
## Our Examples (aka Smoke Tests)
Let us touch on this for a moment. The smoke tests are a handful of examples created using the major frameworks (Vue, create-react-app, Quasar, Next.js, Gatsby, ...). Originally these resided in a separate repository. They were then moved into the main Tauri repo for what seemed like worthwhile benefits. We implemented the Renovate bot, which opens pull requests to upgrade dependencies, including those of our examples. Remember those GitHub Actions? Each of these pull requests would trigger our tests to run. It was nice to see the examples tested every time they were updated. Having the examples close by and wired up also made testing locally much easier. The downside is that these pull requests created a lot of noise. The examples were flooding our CI and commit history. To help reduce the noise, we grouped updates into logical chunks and ran them on specific days. That helped, but not quite enough.
Our GitHub Actions workflows use a standard action called `actions/checkout`, which pulls in our repo. This was recently updated to `v2`, and with it came a feature which makes it much easier to check out multiple repos. This feature gave us enough incentive to shift the examples back into their own repository. We can still implement the update process and test using the examples, but the noise is removed. This changes the local dev workflow, but we can adapt to it.
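With `actions/checkout@v2`, pulling in a second repository is only a couple of lines of workflow config. A sketch of what that could look like (the `repository` slug and `path` below are placeholders, not our actual setup):

```yml
steps:
  - uses: actions/checkout@v2 # the main repo
  - uses: actions/checkout@v2 # a second repo, e.g. the examples
    with:
      repository: tauri-apps/examples # placeholder slug
      path: examples # checked out into ./examples
```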
## Next Level Publishing
The next big value-add will be a programmatic release process. We can set up a system to release our packages to either crates.io or npm. While this concept is not new, the difficulty is that none of the existing systems or command-line programs can deal with a monorepo that spans multiple languages. It gets even more involved if we want to release packages independently of one another. We looked at a CLI utility called `bumped` that takes a version command and runs specified scripts to bump the version and publish. This would publish our packages in lockstep. There is potential value, though, in having some packages increment versions together and stay on the same version, while others do not: if we patch one package, it doesn't necessarily make sense to publish a new patch version of everything in the repo. How shall we best deal with this?
Since we haven't been able to find anything completely built out, we are going to have to roll our own. There is a package called `changesets` that is primarily built for use with JavaScript and Yarn. As mentioned before, we have Rust code in our repo, so we can't use `changesets` out of the box, but the bulk of the operations are done through a CLI and managed with markdown files. Once initialized, the CLI can mostly run in our CI and doesn't need to be handled by the user. For a user to propose a package change, they need only create a markdown file in the `.changeset` folder. While having the CLI create the file for you is nice, it isn't a necessity.
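For illustration, a changeset file is just markdown with a small front matter block naming the affected packages and their bump types; the package names below are hypothetical:

```md
---
"tauri": patch
"tauri-api": minor
---

Fix the window resize event; add a new clipboard API.
```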
The version-bump and publishing sequences are tightly coupled to JavaScript, though. However, the implementation of `changesets` is split nicely into packages, each with its own scope. We could use the packages that parse the markdown files into version changes and build a lifecycle around them, running version bumps and publishes via scripts that we pass in. Nothing is published yet, but experimentation has shown promising results. Regardless of whether the changing code is Rust or JavaScript, we can describe the change in markdown. The library parses the markdown for us, and we can then issue commands based on the described version bump. After the version is bumped, we need only issue a publish command for crates.io or npm. Since we don't need dynamic use of this library, we can manually set each package to publish using a specified script.
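The core idea can be sketched in a few lines of shell; this is a toy illustration of reading changeset front matter to decide a package's bump level, not the real `changesets` internals:

```shell
#!/bin/sh
# Sketch: scan changeset markdown files in a directory and report the
# highest bump level ("major" > "minor" > "patch") requested for a package.
highest_bump() {
  pkg="$1"; dir="$2"; level=none
  for f in "$dir"/*.md; do
    [ -e "$f" ] || continue
    # grab the bump type declared for this package, e.g.: "tauri": minor
    bump=$(grep -o "\"$pkg\": *[a-z]*" "$f" | awk '{print $2}')
    case "$bump" in
      major) level=major ;;
      minor) [ "$level" != major ] && level=minor ;;
      patch) [ "$level" = none ] && level=patch ;;
    esac
  done
  echo "$level"
}
```

A real implementation would also map the bump level onto `cargo publish` or `npm publish` per package, which is exactly the lifecycle described above.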
The only sticking point is changelog creation. It is rather coupled to the code around the publishing sequence, so we will need to rip out pieces of that to keep the functionality.
## Leveling Up Our Releases
In the later phases of our GitHub Actions use, we will look to go above and beyond the typical publish workflow. This project has a deep consideration for security that is worth bringing to our publishing workflow. A workflow could involve publishing the package to multiple package managers, including a private one hosted by Tauri. We can also do some interesting things like signing our releases, including a hash in the release, and even publishing this information on a blockchain so it can be easily verified. Publishing on the blockchain is another avenue to increase confidence that what is seen on GitHub matches what you have downloaded. The IOTA Foundation created a GitHub Action that publishes a release to their blockchain. It has shown promise, but there are still a couple of errors to tackle.
## Publish and Release Checklist
Let's wrap all of this into a nice little bullet-pointed list. (That's basically `.yml`, right? You will soon be an expert at reading it if you aren't already. :D) This is the ideal we are working towards. As it stands now, we have #3 through #6 implemented. We manually do #2, which then feeds into #3 and kicks off the rest of the automatic workflow.
1. a human pushes to dev through a pull request (can happen any number of times)
- pull request includes a changeset file describing the change and required version bump
2. a pull request is created (or updated) to include the change and version bump
- this pull request stays open and will be force pushed until it gets merged (and published)
- increase the version number based on changesets
- delete all changeset files
3. a codeowner merges the publish PR to dev (no direct push permissible for anyone)
- all tests (unit, e2e, smoke tests) are run on the PR
- failures prevent the publish so they must pass before merge
4. merge to dev triggers release sequence
- changes are squashed and a PR is opened against master
5. when PR to master is merged...
- vulnerability audit (crates and yarn) and output saved
- checksums and metadata and output saved
- packages are published on npm/cargo, tarball/zip created
- release is created for each package that had updates (if version isn't changed, build skips the publish steps)
- output from audit/checksums is piped into the release body
- tarball / zip attached to release
- async process to publish to IOTA tangle (feeless) via release tag [note: still have things to resolve here]
6. release is complete
- master has updated code and tagged
- GitHub release has tarballs, checksums, and changelog (may have multiple releases if more than one package published) [note: is part of step 2 and is not yet implemented]
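The checksum step listed above can be sketched in shell; the file names are assumptions, and a real workflow would also pipe this output into the release body:

```shell
#!/bin/sh
# Sketch: compute a sha256 checksum for each release artifact and
# append the results to a file so users can verify their downloads.
write_checksums() {
  out="$1"; shift
  for artifact in "$@"; do
    sha256sum "$artifact" >> "$out"
  done
}
```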
Hopefully this inspires you to implement this into your own workflows. If you have any questions, feel free to reach out and I would be happy to answer. Cheers! | jacobbolda |
286,206 | Getting error while calling the function of ContractManager in HeaderMenu | CodeSandBox:https://codesandbox.io/s/funny-moon-krrsj HeaderMenu.js import {Component} from 'react'... | 0 | 2020-03-22T20:54:48 | https://dev.to/samcracker/getting-error-while-calling-the-function-of-contractmanager-in-headermenu-1d97 | react, ethereum, blockchain, javascript | **CodeSandBox:**https://codesandbox.io/s/funny-moon-krrsj
**HeaderMenu.js**
```jsx
import {Component} from 'react';
import {
Menu,
Container,
Button,
Label,
Loader,
List,
Image,
Icon,
Dropdown
} from 'semantic-ui-react';
import Head from 'next/head';
import web3 from '../ethereum/web3';
import Constant from '../support/Constant';
import Config from '../support/Config';
import appDispatcher from '../core/AppDispatcher';
import contractManager from '../core/ContractManager';
class HeaderMenu extends Component {
constructor(props) {
super(props);
this.account = props.account;
this.contractManager = contractManager;
// console.log(contractManager);
this.transactionDispatcher = props.transactionDispatcher;
this.state = {address: "", balance: "", name: "",
avatarUrl: "", isLoading: true, isJoinButtonLoading: false,
isJoined: false, numPendingTx: 0};
this.reloadCount = 0;
}
clearAllData = () => {
window.localStorage.clear();
}
componentDidMount() {
if (this.account) {
this.getAccountInfo();
appDispatcher.register((payload) => {
if (payload.action == Constant.EVENT.ACCOUNT_BALANCE_UPDATED) {
this.setState({balance: this.account.balance});
} else if (payload.action == Constant.EVENT.ACCOUNT_INFO_UPDATED) {
this.setState({name: payload.profile.name, avatarUrl: payload.profile.avatarUrl, isJoined: payload.profile.isJoined});
}
});
this.transactionDispatcher.register((payload) => {
if (payload.action == Constant.EVENT.PENDING_TRANSACTION_UPDATED) {
this.setState({numPendingTx: payload.numPendingTx});
}
});
}
}
getAccountInfo = () => {
var address = this.account.getAddress();
if (address) {
this.setState({address: address, balance: this.account.balance, isLoading: false, isJoined: this.account.isJoined});
} else {
if (this.reloadCount == 1) {
this.setState({isLoading: false});
} else {
this.reloadCount++;
setTimeout(this.getAccountInfo, 800);
}
}
}
handleDropdownClicked = (event, data) => {
if (data.name == 'updateProfile') {
appDispatcher.dispatch({
action: Constant.ACTION.OPEN_UPDATE_PROFILE
});
} else if (data.name == 'logOutItem') {
this.clearAllData();
window.location.reload();
} else if (data.name == 'settingsItem') {
appDispatcher.dispatch({
action: Constant.ACTION.OPEN_SETTINGS_MODAL
})
}
else if (data.name == 'changeEthNetwork') {
if (data.networkid != Config.ENV.EthNetworkId) {
Config.ENV.EthNetworkId = data.networkid;
this.removeNetworkDependentData();
window.location.reload();
}
}
}
removeNetworkDependentData = () => {
this.account.storageManager.removeNetworkDependentData();
}
handleJoinClicked = () => {
var publicKeyBuffer = this.account.getPublicKeyBuffer();
this.contractManager.joinContract(publicKeyBuffer, (resultEvent) => {
if (resultEvent == Constant.EVENT.ON_REJECTED || resultEvent == Constant.EVENT.ON_ERROR) {
this.setState({isJoinButtonLoading: false});
} else if (resultEvent == Constant.EVENT.ON_RECEIPT) {
window.location.reload();
}
});
this.setState({isJoinButtonLoading: true});
}
handleImportPrivateKeyClicked = () => {
appDispatcher.dispatch({
action: Constant.ACTION.OPEN_PRIVATE_KEY_MODAL
});
}
render() {
var accountInfo = (<div></div>);
if (this.account) {
if (this.state.isLoading == false) {
if (this.state.address) {
var addressExplorerUrl = Config.ENV.ExplorerUrl + 'address/' + this.state.address;
var dropdownTrigger;
if (this.state.avatarUrl) {
dropdownTrigger = (
<span><Image src={this.state.avatarUrl} avatar/>{ this.state.name ? this.state.name : this.state.address.substr(0,10)}</span>
);
} else {
dropdownTrigger = (
<span><Icon name='user' size='large'/>{ this.state.name ? this.state.name : this.state.address.substr(0,10)}</span>
);
}
var networkItems = [];
for (var i=0;i<Config.NETWORK_LIST.length;i++) {
networkItems.push(
<Dropdown.Item key={'networkItem' + i} networkid={Config.NETWORK_LIST[i].id} name='changeEthNetwork' onClick={this.handleDropdownClicked}>
{Config.NETWORK_LIST[i].name}
</Dropdown.Item>
);
}
var memberInfo;
if (this.account.isJoined) {
memberInfo = (
<Dropdown item trigger={dropdownTrigger}>
<Dropdown.Menu>
<Dropdown.Item name='updateProfile' onClick={this.handleDropdownClicked}>
<Icon name='write'/>Update profile
</Dropdown.Item>
<Dropdown.Item name='settingsItem' onClick={this.handleDropdownClicked}>
<Icon name='settings'/>Settings
</Dropdown.Item>
<Dropdown.Item name='logOutItem' onClick={this.handleDropdownClicked}>
<Icon name='log out'/>Log out
</Dropdown.Item>
</Dropdown.Menu>
</Dropdown>
);
} else {
memberInfo = (
<Button color='orange' onClick={this.handleJoinClicked}
loading={this.state.isJoinButtonLoading}
disabled={this.state.isJoinButtonLoading}>Join {Constant.APP_NAME}</Button>
);
}
var pendingTxItem;
if (this.state.numPendingTx > 0) {
pendingTxItem = (
<Label as='a' color='yellow' href={addressExplorerUrl} target='_blank'>
<Icon name='spinner' loading/>
{this.state.numPendingTx} pending tx
</Label>
);
}
accountInfo = (
<Menu.Menu position='right'>
<Menu.Item>
<Dropdown item text={Config.ENV.NetworkName}>
<Dropdown.Menu>
{networkItems}
</Dropdown.Menu>
</Dropdown>
</Menu.Item>
<Menu.Item>
<List>
<List.Item>
<a href={addressExplorerUrl} target='_blank'>
{this.state.address}
</a>
</List.Item>
<List.Item>
Balance: <Label as='a' href={addressExplorerUrl} target='_blank' color='orange'>{parseFloat(web3.utils.fromWei("" +this.state.balance, 'ether')).toFixed(8) + ' ETH' }</Label>
{pendingTxItem}
</List.Item>
</List>
</Menu.Item>
<Menu.Item>
{memberInfo}
</Menu.Item>
</Menu.Menu>
);
} else {
accountInfo = (
<Menu.Menu position='right'>
<Menu.Item>
<Button onClick={this.handleImportPrivateKeyClicked} color='blue'>Import private key</Button>
</Menu.Item>
</Menu.Menu>
);
}
} else {
accountInfo = (<Loader inverted active />);
}
}
return (
<Menu fixed='top' color='grey' inverted>
<Head>
<link rel="stylesheet" href="//cdnjs.cloudflare.com/ajax/libs/semantic-ui/2.2.12/semantic.min.css"></link>
</Head>
<Container>
<Menu.Item>
<a href='/'><Image src='static/images/blockchat.png' height={55} /></a>
</Menu.Item>
{this.account ? accountInfo: (<div></div>)}
</Container>
</Menu>
);
}
}
export default HeaderMenu;
```
**ContractManager.js**
```jsx
import web3 from '../ethereum/web3';
import compiledContract from '../ethereum/build/EtherChat.json';
import TransactionsManager from './TransactionManager';
import appDispatcher from './AppDispatcher';
import Config from '../support/Config';
import Constant from '../support/Constant';
import utils from '../support/Utils';
import crypto from 'crypto';
/**
* Responsible for interacting with the Ethereum smart contract
*/
export class ContractManager {
constructor(accountManager, storageManager) {
this.getContract();
this.accountManager = accountManager;
this.storageManager = storageManager;
this.transactionManager = new TransactionsManager(accountManager);
}
// Create a web3 contract object that represent the ethereum smart contract
getContract = async () => {
this.contract = await new web3.eth.Contract(JSON.parse(compiledContract.interface),
Config.ENV.ContractAddress);
appDispatcher.dispatch({
action: Constant.EVENT.CONTRACT_READY
})
}
// Get current account profile from EtherChat contract's storage
getProfile = async (address) => {
var result = await this.contract.methods.members(this.accountManager.getAddress()).call();
var profile = {};
if (result.isMember == 1) {
profile.isJoined = true;
profile.avatarUrl = utils.hexStringToAsciiString(result.avatarUrl);
profile.name = utils.hexStringToAsciiString(result.name);
this.storageManager.setJoinedStatus(true);
this.storageManager.setName(this.name);
this.storageManager.setAvatarUrl(this.avatarUrl);
appDispatcher.dispatch({
action: Constant.EVENT.ACCOUNT_INFO_UPDATED,
profile: profile
})
}
return profile;
}
getMemberInfo = async (address, relationship) => {
var memberInfo = await this.contract.methods.members(address).call();
if (memberInfo.isMember) {
var publicKey = '04' + memberInfo.publicKeyLeft.substr(2) + memberInfo.publicKeyRight.substr(2);
var name = utils.hexStringToAsciiString(memberInfo.name);
var avatarUrl = utils.hexStringToAsciiString(memberInfo.avatarUrl);
this.storageManager.updateContact(address, publicKey, name, avatarUrl, relationship);
}
}
getPastEvents = async (eventName, filters) => {
return await this.contract.getPastEvents(eventName, filters);
}
joinContract = async(publicKeyBuffer, callback) => {
var publicKeyLeft = '0x' + publicKeyBuffer.toString('hex', 0, 32);
var publicKeyRight = '0x' + publicKeyBuffer.toString('hex', 32, 64);
this.transactionManager.executeMethod(this.contract.methods.join(publicKeyLeft, publicKeyRight))
.on(Constant.EVENT.ON_APPROVED, (txHash) => {
if (callback) callback(Constant.EVENT.ON_APPROVED);
})
.on(Constant.EVENT.ON_REJECTED, (txHash) => {
if (callback) callback(Constant.EVENT.ON_REJECTED);
})
.on(Constant.EVENT.ON_RECEIPT, (receipt) => {
if (callback) callback(Constant.EVENT.ON_RECEIPT);
})
.on(Constant.EVENT.ON_ERROR, (error, txHash) => {
appDispatcher.dispatch({
action: Constant.EVENT.ENCOUNTERED_ERROR,
message: error.message,
title: "Error"
});
if (callback) callback(Constant.EVENT.ON_ERROR);
});
}
// joinContract = async (publicKeyBuffer, callback) => {
addContact = async (address, callback) => {
console.log(address);
var method = this.contract.methods.addContact(address);
this.transactionManager.executeMethod(method)
.on(Constant.EVENT.ON_APPROVED, (txHash) => {
if (callback) callback(Constant.EVENT.ON_APPROVED);
})
.on(Constant.EVENT.ON_RECEIPT, (receipt) => {
if (callback) callback(Constant.EVENT.ON_RECEIPT);
})
.on(Constant.EVENT.ON_ERROR, (error, txHash) => {
appDispatcher.dispatch({
action: Constant.EVENT.ENCOUNTERED_ERROR,
message: error.message,
title: "Error"
});
if (callback) callback(Constant.EVENT.ON_ERROR);
});
}
acceptContactRequest = async (address, callback) => {
var method = this.contract.methods.acceptContactRequest(address);
this.transactionManager.executeMethod(method)
.on(Constant.EVENT.ON_APPROVED, (txHash) => {
if (callback) callback(Constant.EVENT.ON_APPROVED);
})
.on(Constant.EVENT.ON_RECEIPT, (receipt) => {
if (callback) callback(Constant.EVENT.ON_RECEIPT);
})
.on(Constant.EVENT.ON_ERROR, (error, txHash) => {
appDispatcher.dispatch({
action: Constant.EVENT.ENCOUNTERED_ERROR,
message: error.message,
title: "Error"
});
if (callback) callback(Constant.EVENT.ON_ERROR);
});
}
updateProfile = async (name, avatarUrl, callback) => {
var nameHex = '0x' + Buffer.from(name, 'ascii').toString('hex');
var avatarUrlHex = '0x' + Buffer.from(avatarUrl, 'ascii').toString('hex');
var method = this.contract.methods.updateProfile(nameHex, avatarUrlHex);
this.transactionManager.executeMethod(method)
.on(Constant.EVENT.ON_APPROVED, (txHash) => {
if (callback) callback(Constant.EVENT.ON_APPROVED);
})
.on(Constant.EVENT.ON_RECEIPT, (receipt) => {
if (callback) callback(Constant.EVENT.ON_RECEIPT);
})
.on(Constant.EVENT.ON_ERROR, (error, txHash) => {
appDispatcher.dispatch({
action: Constant.EVENT.ENCOUNTERED_ERROR,
message: error.message,
title: "Error"
});
if (callback) callback(Constant.EVENT.ON_ERROR);
});
}
// A message will be encrypted locally before sending to the smart contract
sendMessage = async (toAddress, publicKey, message) => {
var publicKeyBuffer = Buffer.from(publicKey, 'hex');
var encryptedRaw = utils.encrypt(message, this.accountManager.computeSecret(publicKeyBuffer));
var encryptedMessage = '0x' + encryptedRaw.toString('hex');
var method = this.contract.methods.sendMessage(toAddress, encryptedMessage, utils.getEncryptAlgorithmInHex());
this.transactionManager.executeMethod(method)
.on(Constant.EVENT.ON_APPROVED, (txHash) => {
this.storageManager.addMyLocalMessage(encryptedMessage, toAddress, utils.getEncryptAlgorithm(), txHash);
appDispatcher.dispatch({
action: Constant.EVENT.MESSAGES_UPDATED,
data: toAddress
});
})
.on(Constant.EVENT.ON_REJECTED, (data) => {
// do nothing
})
.on(Constant.EVENT.ON_RECEIPT, (receipt, ) => {
this.storageManager.updateLocalMessage(toAddress, receipt.transactionHash, Constant.SENT_STATUS.SUCCESS);
appDispatcher.dispatch({
action: Constant.EVENT.MESSAGES_UPDATED,
data: toAddress
});
})
.on(Constant.EVENT.ON_ERROR, (error, txHash) => {
this.storageManager.updateLocalMessage(toAddress, txHash, Constant.SENT_STATUS.FAILED);
appDispatcher.dispatch({
action: Constant.EVENT.MESSAGES_UPDATED,
data: toAddress
});
});
}
}
export default ContractManager;
```
Here, I'm trying to call the `joinContract` function (a method of the `ContractManager` class) in HeaderMenu.js using the `handleJoinClicked()` function. And, boom... the code crashes after clicking the join button, showing the error: `contractManager.joinContract is not a function`. Please help.
| samcracker |