id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
235,819 | Remote network over WiFi | I wanted to be able to connect to a remote network by just connecting to a different wifi network on... | 0 | 2020-01-11T00:04:40 | https://dev.to/milolav/remote-network-over-wifi-1ehc | softethervpn, networking | I wanted to be able to connect to a remote network by just connecting to a different WiFi network on my local router. Locally I have FreshTomato on a Netgear R8000, and remotely I have OpenWrt 18.06 running on a TP-LINK Archer C7 v4. I used SoftEther VPN for the connection between them.
## Software installation
Installing SoftEther VPN on **OpenWrt** was easy: I just installed the `softethervpn` package from `System > Software`.

On **Tomato** it was a bit more complicated because I had to install Entware, and then install SoftEther VPN from the shell.
Entware installation guide: https://github.com/Entware/Entware/wiki/Install-on-the-TomatoUSB
To make a LAN-to-LAN bridge, it is enough to install the `softethervpn5-bridge` package:
```sh
opkg install softethervpn5-bridge
```
There is no package from the stable branch (4.x), but the development branch (5.x) works just fine.
## Router and SoftEther configuration
Following SoftEther's guide, OpenWrt is the "headquarter location", and Tomato is a "branch":
https://www.softether.org/4-docs/2-howto/1.VPN_for_On-premise/3.LAN_to_LAN_Bridge_VPN
https://www.softether.org/4-docs/1-manual/A._Examples_of_Building_VPN_Networks/10.5_Build_a_LAN-to-LAN_VPN_(Using_L2_Bridge)
### OpenWrt
On OpenWrt I created a new hub named bridge42 with one user "tomato" that will be used for cascade connection.



Under `Local Bridge Setting` on the main window, I created a bridge using a new tap device, `bridge42`, which creates a new adapter named `tap_bridge42`.


I'm not sure if this adapter is actually needed, but it makes it easier to manage in LuCI. I also had some issues when using SoftEther VPN on a Raspberry Pi, and using a tap adapter with a Linux bridge sorted those out, so I did the same here and created a tap device.
On OpenWrt device, in LuCI under `Interfaces > LAN > Physical Settings`, I added that new adapter `tap_bridge42` to the list so that every device that gets connected on the other side of the bridge becomes a member of this LAN network.

Under `Network > Firewall > Traffic Rules` I added a new rule that allows inbound traffic for SoftEther. It can be any port that SoftEther is listening on. The list of ports is manageable from the main screen in SoftEther VPN Server Manager.

OK, that's all for the "headquarters"; now for the "branch".
### FreshTomato
The base tutorial for setting up a guest WiFi network is here: https://learntomato.flashrouters.com/setup-guest-network-guest-wifi-tomato-vlan/
There are two differences, though.
1. When creating a new LAN (`Basic > Network`), I used an IP address that belongs to OpenWrt's LAN and disabled DHCP, since this is only a bridge to the main network on OpenWrt.

2. When creating a new VLAN (`Advanced > VLAN`), I added Port 1 to the new VLAN so that I can use a wired connection as well.

Since only the bridge module is installed, it shows only one virtual hub, called "BRIDGE".

Under `Local Bridge Setting` I just bridged the "BRIDGE" virtual hub with the `br2` adapter created in the previous step, without creating an additional tap adapter. I tried it and it worked, with no need for a tap device or scripts to add it into the bridge.

Under `Manage Virtual Hub > Manage Cascade Connection` I added a new connection to my OpenWrt router: I entered its hostname, port, and virtual hub name, along with the username and password.

After clicking "Online", the connection was established. All good.

And that's it. Connecting to the new "guest" WiFi or to Port 1 on the Netgear router gets me connected to the remote network as if I were there.
## Final thoughts
The speed I'm getting through the VPN is around 25/25, which isn't great, but the C7 is among the cheaper routers, so it is good enough. I'm not an expert in networking, so this can probably be done in a better or more secure way. But it works, so it's worth sharing. | milolav |
235,841 | Modular Multi Step Form with NgRx in less than 20 minutes | Modular Multi Step Form with NgRx in less than 20 minutes This is the third part of the... | 0 | 2020-01-10T18:39:51 | https://labs.thisdot.co/modular-multi-step-form-with-ngrx-in-less-than-20-minutes | angular, a11y, ngrx, reactiveforms | ---
title: Modular Multi Step Form with NgRx in less than 20 minutes
published: true
date: 2020-01-10 17:57:50 UTC
tags: angular, a11y, ngrx, reactiveforms
canonical_url: https://labs.thisdot.co/modular-multi-step-form-with-ngrx-in-less-than-20-minutes
---
# Modular Multi Step Form with NgRx in less than 20 minutes
This is the third part of the ReactiveForm series. Now that you know how to use ReactiveForms and techniques to make it accessible, it's time to do the real thing. We are gonna work on a multi step form, with validation that has to be accessible. If that wasn't enough, we are going to use NgRx to keep the multiple steps in sync.
## Problem
At This Dot, we are continuously growing and evolving. Hiring is key in our process, and we empower developers through mentoring. While this is great, it does mean that we receive a lot of applications. We needed to create a multi step form that developers looking to join This Dot could fill out.
Since we are an inclusive company, we need to make sure everybody is able to use the form. So accessibility is a first-class citizen here. We'll use the techniques discussed in Part II of this series to do so. But that's not all. Because we don't know who is going to apply, we need to make sure all the applications are valid.
There's a lot of information we require to start the process: personal information, address details, and experience. Because of this, if we make it a single-page form, it will be really hard to use or, worse, will bore people so badly that they just give up trying.
Now that you know the reasoning behind the design, let's get started.
## Solution
I believe that, when you are motivated, you work better, and crappy looking apps are super boring to work with. An app can be buggy, but if it looks good, it will probably motivate you to fix it or improve it. (Or at least that is my case being a visual person.)
Since I'm leading this development, and I want us to be motivated, let's start by making the multi-step form look good. Once we feel comfortable with how it looks, we'll continue with its functionality.
The application will be built using Angular. Instead of creating all the folders, files, and configuration files, we'll rely on the Angular CLI. To do that, follow the next steps:
- Open your favorite command line tool
- Install the Angular CLI globally by using the command `npm install -g @angular/cli`
- Go to where you want to create the app, and run the command `ng new embrace-power`
At this point, you have a freshly generated application. One thing I like to do is create a `variables.scss` file storing all the variables I want to use. In this case, I have only one, `$base-color: #444`, so I save it in `src/assets/styles/`. Then, inside any scss file that needs to access it, you can use `@import '~src/assets/styles/variables.scss';`.
> If you are like me, you must be wondering why there is a _~_ at the beginning.
> _That way you tell webpack to use the base source_.
Replace the content of the app template with this:
```html
<!-- src/app/app.component.html -->
<main>
  <router-outlet></router-outlet>
</main>
```
Now, set the base styles for the app
```scss
// src/styles.scss
body,
html {
  margin: 0;
  background: #333;
  font-family: 'Roboto';
}
```
Finally, add the Roboto font family, and the material icons in the head tag
```html
<!-- /src/index.html -->
<link
  href="https://fonts.googleapis.com/css?family=Roboto:300,400,500&display=swap"
  rel="stylesheet"
/>
<link
  href="https://fonts.googleapis.com/icon?family=Material+Icons"
  rel="stylesheet"
/>
```
Web apps that look like native apps are awesome. So I'm gonna try to give it a mobile/desktop app look'n'feel.
### Steps Header
If you are building a multi step form, you'll need a way to navigate through the steps. Sometimes, it's useful to allow users to quickly jump to any step. In this case, we'll need a header component that has a link to each step.
Since our application will only have a single instance of the header, we could argue that it's part of the core of the application. Let's first create the core module with the Angular CLI, using the command `ng generate module core` inside the application folder. By now, you have the core module; you'll now need the header component.
Angular CLI to the rescue yet again!
Just run `ng generate component core/header`. This will create a new component in the core module.
In order to use this new component, you'll need to add the component to the exports array of the core module declaration. Alternatively, you can use the `--export=true` flag to tell the CLI to add the component to the exports array.
Now it's time to write the actual template.
```html
<!-- src/app/core/header/header.component.html -->
<header>
  <nav>
    <ul>
      <li><a href="#">Personal</a></li>
      <li><a href="#">Address</a></li>
      <li><a href="#">Experience</a></li>
    </ul>
  </nav>
</header>
```
If you generated the component following the above instructions, the header component is already part of the declarations and exports arrays of the `CoreModule`. Sadly, you won't be able to use it in the app component until you import the `CoreModule` in the `AppModule`. You will only import this module once. If you need to import it somewhere else for some reason, you should probably think about following the `SharedModule` strategy.
Import the CoreModule in the AppModule
```typescript
// src/app/app.module.ts
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { CoreModule } from './core/core.module';
@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, AppRoutingModule, CoreModule],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule {}
```
Now, you can use it like this in the app component template
```html
<app-header></app-header>
<main>
  <router-outlet></router-outlet>
</main>
```
But that looks awful right? Let's make it a little better by adding some CSS.
```scss
// src/app/core/header/header.component.scss
@import '~src/assets/styles/variables.scss';
:host {
  display: block;
}
header {
  background-color: darken($base-color, 20);
  min-height: 10vh;
  nav {
    height: 100%;
    ul {
      display: flex;
      flex-direction: column;
      margin: 0;
      padding: 0;
      height: 100%;
      li {
        display: flex;
        margin: 0.5rem;
        list-style-type: none;
        justify-content: center;
        & > * {
          color: darken(white, 20);
          padding: 0.5rem;
          font-size: 1.5rem;
          letter-spacing: 0.1rem;
          line-height: 1.5;
        }
        a {
          text-decoration: none;
          &:visited {
            color: darken(white, 30);
          }
          &.active,
          &:active,
          &:hover,
          &:focus {
            text-decoration: underline;
            color: white;
          }
          &:hover,
          &:focus {
            outline: 1px white solid;
          }
        }
      }
    }
  }
}
@media all and (min-width: 768px) {
  header {
    nav {
      ul {
        flex-direction: row;
        justify-content: space-around;
      }
    }
  }
}
```
If you wonder why we are using the media query, it is because, in my experience, Mobile First Design will always take the prize. Here's how it goes: start by defining the mobile styles, and when the viewport is greater than or equal to 768px, we just slightly adjust the styles. We take advantage of the cascading nature of CSS.
### Step Component
Every step will be different. But they share some layout logic. They all have a title, a previous, and next button. Before continuing, we'll create a new component that will be used to wrap the step specific logic. That way we can ensure we have a consistent interface.
All the steps will be separated into modules. We'll discuss this later on. For now, let's focus on creating this reusable component. I like to store all the _reusable_ components in a shared module. Then, I can import the shared module, and use those reusable components as I require.
We'll fall back on the Angular CLI again:
- Open your favorite command line tool
- Change directory to the location of the project
- Run the command `ng generate module shared`
- Run the command `ng generate component shared/wizard-step --export=true`
Let's start by writing the template content.
```html
<!-- src/app/shared/wizard-step/wizard-step.component.html -->
<section>
  <header>
    <h1>{{ title }}</h1>
  </header>
  <div>
    <button id="previous-button" (click)="goToPreviousStep()">
      <i class="material-icons">navigate_before</i> <span>Previous</span>
    </button>
    <ng-content></ng-content>
    <button id="next-button" (click)="goToNextStep()">
      <span>Next</span> <i class="material-icons">navigate_next</i>
    </button>
  </div>
</section>
```
By using content projection with `<ng-content>`, we can use this new module to share all the markup logic of the step. As you can see, there's a title property, and two methods being executed through event binding in the buttons. Let's see how those look in the `.ts` file.
```typescript
// src/app/shared/wizard-step/wizard-step.component.ts
import { Component, OnInit, Input, Output, EventEmitter } from '@angular/core';
@Component({
  selector: 'app-wizard-step',
  templateUrl: './wizard-step.component.html',
  styleUrls: ['./wizard-step.component.scss']
})
export class WizardStepComponent implements OnInit {
  @Input() title: string;
  @Output() previousStepClicked = new EventEmitter();
  @Output() nextStepClicked = new EventEmitter();
  constructor() {}
  ngOnInit() {}
  goToPreviousStep() {
    this.previousStepClicked.emit();
  }
  goToNextStep() {
    this.nextStepClicked.emit();
  }
}
```
And the styles, don't forget them.
```scss
// src/app/shared/wizard-step/wizard-step.component.scss
@import '~src/assets/styles/variables.scss';
header {
  background-color: darken($base-color, 15);
  height: 10vh;
  display: flex;
  align-items: center;
  justify-content: center;
  h1 {
    color: white;
    margin: 0;
    padding: 1rem;
    text-align: center;
    font-size: 2.8rem;
  }
}
section {
  height: 80vh;
  div {
    display: flex;
    justify-content: space-around;
    height: 100%;
    button {
      border: none;
      background: darken($base-color, 10);
      height: max-content;
      align-self: center;
      display: flex;
      align-items: center;
      justify-content: center;
      color: white;
      font-size: 2rem;
      padding: 0.5rem 0;
      outline: 0.1rem darken(white, 30) solid;
      cursor: pointer;
      &#previous-button {
        padding-right: 1rem;
      }
      &#next-button {
        padding-left: 1rem;
      }
      span {
        display: none;
      }
      &:focus,
      &:hover {
        outline: 0.2rem white solid;
        background: darken($base-color, 20);
      }
    }
  }
}
@media all and (min-width: 768px) {
  section {
    div {
      button {
        span {
          display: block;
        }
      }
    }
  }
}
```
### Step Content
Now, it's time to put everything together. We'll start by creating a new module dedicated to the _personal information_ step. This can be achieved by using the command you already know for generating a module: `ng generate module personal`. But that's not enough, right? Now, we need a component to hold the actual form, which can be done by using the command `ng generate component personal`.
Redirect the user to the personal page when the app boots.
```typescript
// src/app/app.component.ts
import { Component, OnInit } from '@angular/core';
import { Router } from '@angular/router';
@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.scss']
})
export class AppComponent implements OnInit {
  title = 'embrace-power';
  constructor(private router: Router) {}
  ngOnInit() {
    this.router.navigate(['personal']);
  }
}
```
Import the SharedModule, and the RouterModule, to set the default route.
```typescript
// src/app/personal/personal.module.ts
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { ReactiveFormsModule } from '@angular/forms';
import { PersonalComponent } from './personal.component';
import { SharedModule } from '../shared/shared.module';
import { RouterModule } from '@angular/router';
@NgModule({
  declarations: [PersonalComponent],
  imports: [
    CommonModule,
    SharedModule,
    RouterModule.forChild([{ path: '', component: PersonalComponent }]),
    ReactiveFormsModule
  ]
})
export class PersonalModule {}
```
> NOTE: Remember to import ReactiveFormsModule and SharedModule in your new module.
We still have a few more things to do. We'll need to wire the new module into the route structure in order to properly navigate.
```typescript
// src/app/app-routing.module.ts
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
const routes: Routes = [
  {
    path: 'personal',
    loadChildren: () =>
      import('./personal/personal.module').then(m => m.PersonalModule)
  }
];
@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule {}
```
Now, update the link in the header, but first import the `RouterModule` in the `CoreModule`.
```typescript
// src/app/core/core.module.ts
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { HeaderComponent } from './header/header.component';
import { RouterModule } from '@angular/router';
@NgModule({
  declarations: [HeaderComponent],
  imports: [CommonModule, RouterModule.forChild([])],
  exports: [HeaderComponent]
})
export class CoreModule {}
```
In the HeaderComponent template, update the links to use `routerLink` and `routerLinkActive` from the `RouterModule`.
```html
<header>
  <nav>
    <ul>
      <li>
        <a [routerLink]="['/personal']" routerLinkActive="active">Personal</a>
      </li>
      <li><a href="#">Address</a></li>
      <li><a href="#">Experience</a></li>
    </ul>
  </nav>
</header>
```
Now it's time to work on the component class declaration.
```typescript
// src/app/personal/personal.component.ts
import { Component, OnInit } from '@angular/core';
import { FormBuilder, Validators } from '@angular/forms';
@Component({
  selector: 'app-personal',
  templateUrl: './personal.component.html',
  styleUrls: ['./personal.component.scss']
})
export class PersonalComponent implements OnInit {
  title = 'Personal';
  personalForm = this.fb.group(
    {
      firstName: [null, [Validators.required]],
      lastName: [null, [Validators.required]],
      age: [
        null,
        [Validators.required, Validators.min(18), Validators.max(120)]
      ],
      about: [null, [Validators.required]]
    },
    {
      updateOn: 'blur'
    }
  );
  firstNameCtrl = this.personalForm.get('firstName');
  lastNameCtrl = this.personalForm.get('lastName');
  ageCtrl = this.personalForm.get('age');
  aboutCtrl = this.personalForm.get('about');
  submitted = false;
  constructor(private fb: FormBuilder) {}
  goToNextStep() {
    this.submitted = true;
  }
  ngOnInit() {
    // this method comes from OnInit interface
  }
}
```
So what's going on here? We are declaring the title that will be passed as input to the WizardStepComponent, and the ReactiveForm that handles the form data in the PersonalComponent. If you have read the previous posts of this series, you'll notice something new: the usage of Validators and the `{ updateOn: 'blur' }` configuration object.
The `updateOn` option is pretty self-explanatory. It just makes the ReactiveForm register the changes only when the user leaves the input. Validators are a bit more involved. You can pass an array of validators, which are functions that receive a control and return either a map of validation errors or `null` when the value is valid. All the validators used in this example are built into the library, but you could write your own.
Now that we have validators in the form, whenever an error is found, it will be added to the control's `errors` property. That way, using the `ngIf` directive, we can conditionally show the errors. One last trick that is really cool is to have a `submitted` property that defaults to false. It is set to true after submitting the form; that way, the errors are displayed only once the form has been submitted.
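To make the validator contract concrete, here is a framework-free sketch. The types below are minimal stand-ins for the ones `@angular/forms` exports, and the `minWords` validator is a hypothetical custom example, not something used by the article's form:

```typescript
// Minimal stand-ins for Angular's types, just for illustration.
type ValidationErrors = { [key: string]: unknown };
interface ControlLike {
  value: unknown;
}
type ValidatorFn = (control: ControlLike) => ValidationErrors | null;

// A hypothetical custom validator: the value must contain at least `min` words.
function minWords(min: number): ValidatorFn {
  return (control: ControlLike): ValidationErrors | null => {
    const value = typeof control.value === 'string' ? control.value.trim() : '';
    const words = value.length === 0 ? 0 : value.split(/\s+/).length;
    // Error map when invalid, null when valid: the same contract Angular uses.
    return words >= min
      ? null
      : { minWords: { requiredWords: min, actualWords: words } };
  };
}

const aboutValidator = minWords(3);
console.log(aboutValidator({ value: 'too short' })); // { minWords: { requiredWords: 3, actualWords: 2 } }
console.log(aboutValidator({ value: 'this is enough' })); // null
```

In a real form you would pass such a function alongside the built-in ones, e.g. `about: [null, [Validators.required, minWords(3)]]` (an illustration, not a change the article makes). `Validators.required`, `Validators.min`, and `Validators.max` follow the same contract: an error map when invalid, `null` when valid.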
This is how the template will look now:
```html
<!-- src/app/personal/personal.component.html -->
<app-wizard-step [title]="title" (nextStepClicked)="goToNextStep()">
  <form [formGroup]="personalForm" [attr.aria-label]="title">
    <label>
      <span>First name *</span>
      <input
        class="form-control"
        type="text"
        formControlName="firstName"
        required
      />
    </label>
    <span
      class="form-error"
      *ngIf="submitted && firstNameCtrl?.errors?.required"
    >
      First name is required
    </span>
    <label>
      <span>Last name *</span>
      <input
        class="form-control"
        type="text"
        formControlName="lastName"
        required
      />
    </label>
    <span
      class="form-error"
      *ngIf="submitted && lastNameCtrl?.errors?.required"
    >
      Last name is required
    </span>
    <label>
      <span>Age *</span>
      <input
        class="form-control"
        type="number"
        formControlName="age"
        required
      />
    </label>
    <span class="form-error" *ngIf="submitted && ageCtrl?.errors?.required">
      Age is required
    </span>
    <span class="form-error" *ngIf="submitted && ageCtrl?.errors?.min">
      Age has to be greater than or equal to 18
    </span>
    <span class="form-error" *ngIf="submitted && ageCtrl?.errors?.max">
      Age has to be less than or equal to 120
    </span>
    <label>
      <span>About *</span>
      <textarea
        class="form-control"
        rows="4"
        formControlName="about"
        required
      ></textarea>
    </label>
    <span class="form-error" *ngIf="submitted && aboutCtrl?.errors?.required">
      About is required
    </span>
  </form>
</app-wizard-step>
```
Don't forget about the styling. Remember that I told you I hate working with something I don't visually like? After some improvements, I ended up with the following styles.
```scss
// src/app/personal/personal.component.scss
@import '~src/assets/styles/variables.scss';
form {
  width: 100%;
  max-width: 700px;
  padding: 2rem;
  background: darken($base-color, 10);
  overflow-y: auto;
}
label {
  display: flex;
  justify-content: space-around;
  min-height: 2rem;
  padding: 1rem;
  flex-direction: column;
  span {
    color: white;
    font-size: 1.2rem;
    width: 100%;
  }
  &:not(:last-child) {
    margin-bottom: 0.5rem;
  }
  &:hover,
  &:focus-within {
    outline: 1px white solid;
  }
}
.form-control {
  width: 100%;
  background: transparent;
  border: none;
  border-bottom: 1px solid white;
  color: white;
  font-size: 1.3rem;
  padding-bottom: 0.3rem;
  margin: 1rem 0;
  &:focus {
    outline: none;
  }
}
.form-error {
  display: block;
  color: red;
  margin: 0.5rem 0;
}
@media all and (min-width: 768px) {
  label {
    flex-direction: row;
    span {
      font-size: 1.5rem;
      width: 30%;
    }
  }
  .form-control {
    width: 60%;
    margin: 0;
  }
}
```
You'll find some good ole mobile-first design also in this stylesheet. Feel free to take a look. I'll leave it as optional homework.
### The other steps
Now that the first step is done, we can easily reuse all that logic for the others. We can have as many steps as we want. Just remember to connect it through the Router, and to add it to the header as one of the steps. I'm sure you can do that by yourself, so I'll just skip ahead.
If you don't want to do everything we've done by yourself, but do want to skip to the state management, [here's a ready-to-customize version](https://github.com/danmt/embrace-power-solution/tree/steps-layout).
### The State
If you created the new steps, you're maybe wondering <em>what now?</em> All these modules are separated, and now it's hard to keep track of the state of the whole form. You may have even noticed that, if you jump between steps, you lose the values you entered. None of those are problems for us, because we know that NgRx is here to help. What you'll need to do now is:
- Create reducers for the steps in the form.
- Create selectors for each step.
- All the steps will hydrate the form with the selectors.
- Create a set of actions for each step.
- Every time a value is changed in a form, it will be patched into the store.
First of all, we'll need to install NgRx Store, which can be easily done by running the command `npm install --save @ngrx/store` in the application directory.
> NOTE: I recommend you install the StoreDevtools for testing by executing `npm install --save @ngrx/store-devtools`
Now let's create our reducers (I'll focus on the _personal_ step, but it's the same strategy with all the others).
Create a folder called `state` under `src/app/core`, and put the `personal.reducer.ts` file there, with the following content:
```typescript
import { createReducer, on } from '@ngrx/store';
import { PersonalPageActions } from '../../personal/actions';
import { Personal } from '../interfaces/personal.interface';
import { PersonalGroup } from '../models/personal.model';
export interface State {
  data: Personal;
  isValid: boolean;
}
const initialState = new PersonalGroup();
const personalReducer = createReducer(
  initialState,
  on(
    PersonalPageActions.patch,
    (state: State, action: ReturnType<typeof PersonalPageActions.patch>) => ({
      ...state,
      data: { ...state.data, ...action.payload }
    })
  ),
  on(
    PersonalPageActions.changeValidationStatus,
    (
      state: State,
      { isValid }: ReturnType<typeof PersonalPageActions.changeValidationStatus>
    ) => ({
      ...state,
      isValid
    })
  )
);
export function reducer(state: State, action: PersonalPageActions.Union) {
  return personalReducer(state, action);
}
export const selectPersonalGroupData = (state: State) => state.data;
export const selectPersonalGroupIsValid = (state: State) => state.isValid;
```
There's an interface (`src/app/core/interfaces/personal.interface.ts`) and a model (`src/app/core/models/personal.model.ts`) that looks like this:
```typescript
export interface Personal {
  firstName: string;
  lastName: string;
  age: number;
  about: string;
}
```
```typescript
import { Personal } from '../interfaces/personal.interface';
export class PersonalGroup {
  data = {
    firstName: '',
    lastName: '',
    age: 18,
    about: ''
  } as Personal;
  isValid = false;
}
```
I'll start by using a barrel import in the reducer, which will help you later with the other reducers (`src/app/core/state/index.ts`).
```typescript
import { ActionReducerMap, createSelector, MetaReducer } from '@ngrx/store';
import * as fromPersonal from './personal.reducer';
import { PersonalGroup } from '../models/personal.model';
export interface State {
  personal: PersonalGroup;
}
export const reducers: ActionReducerMap<State> = {
  personal: fromPersonal.reducer
};
export const metaReducers: MetaReducer<State>[] = [];
export const selectPersonalGroup = (state: State) => state.personal;
export const selectPersonalGroupData = createSelector(
  selectPersonalGroup,
  fromPersonal.selectPersonalGroupData
);
export const selectPersonalGroupIsValid = createSelector(
  selectPersonalGroup,
  fromPersonal.selectPersonalGroupIsValid
);
```
There's also some action-related stuff. I simply created an actions file for each page. That way, actions are specific to a context, and easier to reason about in the future. The actions are stored directly in the module that can dispatch them. For example, the "personal" actions are stored at `src/app/personal/actions/personal-page.actions.ts`.
```typescript
import { createAction, props } from '@ngrx/store';
import { Personal } from '../../core/interfaces/personal.interface';
export const patch = createAction(
  '[Personal Page] Patch Value',
  props<{ payload: Partial<Personal> }>()
);
export const changeValidationStatus = createAction(
  '[Personal Page] Change Validation Status',
  props<{ isValid: boolean }>()
);
export type Union = ReturnType<typeof patch | typeof changeValidationStatus>;
```
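The `Union` type above leans on `ReturnType` to derive the union of every action shape straight from the creator functions. Stripped of NgRx, the same pattern looks like this (a framework-free sketch; `@ngrx/store`'s `createAction` adds the type tagging and registration machinery for you):

```typescript
// Plain action creators: each returns an object with a literal `type` tag.
const patch = (payload: Partial<{ firstName: string }>) =>
  ({ type: '[Personal Page] Patch Value' as const, payload });

const changeValidationStatus = (isValid: boolean) =>
  ({ type: '[Personal Page] Change Validation Status' as const, isValid });

// The union of everything the creators can produce, derived automatically.
type Union = ReturnType<typeof patch | typeof changeValidationStatus>;

// A reducer-like function can now exhaustively switch on `action.type`.
function describe(action: Union): string {
  switch (action.type) {
    case '[Personal Page] Patch Value':
      return `patching ${Object.keys(action.payload).join(', ')}`;
    case '[Personal Page] Change Validation Status':
      return `form is now ${action.isValid ? 'valid' : 'invalid'}`;
  }
}

console.log(describe(patch({ firstName: 'Ada' }))); // patching firstName
console.log(describe(changeValidationStatus(true))); // form is now valid
```

Because `ReturnType` distributes over a union of function types, adding a new creator to the `Union` automatically flows its shape into every reducer that switches on `action.type`.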
Also, don't forget to create an index file for the actions (`src/app/personal/actions/index.ts`):
```typescript
import * as PersonalPageActions from './personal-page.actions';
export { PersonalPageActions };
```
The only thing missing now is to use all these new super powers. First, we'll add the reducers to the AppModule, and then we'll hook everything up in the respective component.
```typescript
import { StoreDevtoolsModule } from '@ngrx/store-devtools';
import { StoreModule } from '@ngrx/store';
import { reducers, metaReducers } from './core/state';
@NgModule({
  imports: [
    // ...
    StoreModule.forRoot(reducers, { metaReducers }),
    StoreDevtoolsModule.instrument({
      maxAge: 25
    })
    // ...
  ]
  // ...
})
export class AppModule {}
```
> NOTE: I'm also instrumenting the StoreDevtools to enable the Redux DevTools extension in Chrome.
Cool. We are almost there. It is just a matter of hooking up the step's component, which can be found in `src/app/personal/personal.component.ts`.
```typescript
// 1) New imports
import { Router } from '@angular/router';
import { Store } from '@ngrx/store';
import * as fromRoot from '../core/state';
import { PersonalPageActions } from './actions';
import { map, take, distinctUntilChanged } from 'rxjs/operators';
import { merge } from 'rxjs';
import { Personal } from '../core/interfaces/personal.interface';
// ...
export class PersonalComponent implements OnInit {
  // ...
  // 2) Inject the router and the store
  constructor(
    private router: Router,
    private fb: FormBuilder,
    private store: Store<fromRoot.State>
  ) {}
  ngOnInit() {
    // 3) Get the last state of the personal data and patch the form with it
    this.store
      .select(fromRoot.selectPersonalGroupData)
      .pipe(take(1))
      .subscribe((personal: Personal) =>
        this.personalForm.patchValue(personal, { emitEvent: false })
      );
    // 4) For each field create an observable that maps the change as a key value
    const firstName$ = this.firstNameCtrl.valueChanges.pipe(
      map((firstName: string) => ({ firstName } as Partial<Personal>))
    );
    const lastName$ = this.lastNameCtrl.valueChanges.pipe(
      map((lastName: string) => ({ lastName } as Partial<Personal>))
    );
    const age$ = this.ageCtrl.valueChanges.pipe(
      map((age: number) => ({ age } as Partial<Personal>))
    );
    const about$ = this.aboutCtrl.valueChanges.pipe(
      map((about: string) => ({ about } as Partial<Personal>))
    );
    // 5) For each change trigger an action to update the store
    merge(firstName$, lastName$, age$, about$).subscribe(
      (payload: Partial<Personal>) => {
        this.store.dispatch(PersonalPageActions.patch({ payload }));
      }
    );
    // 6) If the validity status of the form changes, dispatch an action to the store
    this.personalForm.valueChanges
      .pipe(
        map(() => this.personalForm.valid),
        distinctUntilChanged()
      )
      .subscribe((isValid: boolean) =>
        this.store.dispatch(
          PersonalPageActions.changeValidationStatus({ isValid })
        )
      );
  }
  // 7) Add a method to go to the next step through navigation if the form is valid
  goToNextStep() {
    if (this.personalForm.invalid) {
      this.submitted = true;
      return;
    }
    this.router.navigate(['address']);
  }
}
```
What we are doing here is simply getting the latest state from the store, and patching it in the form. Then, we are creating a stream that emits an action every time an input has changed.
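Stripped of Angular, RxJS, and NgRx, the flow in steps 4 to 6 reduces to this: turn each field change into a partial patch action, and dispatch validity changes only when the flag actually flips, which is what `distinctUntilChanged` guarantees. Here is a minimal framework-free sketch of that logic (all names are illustrative, not the article's actual API):

```typescript
interface Personal { firstName: string; lastName: string; }

type Action =
  | { type: 'patch'; payload: Partial<Personal> }
  | { type: 'changeValidationStatus'; isValid: boolean };

const dispatched: Action[] = [];
const dispatch = (action: Action) => dispatched.push(action);

// Step 5: every field change becomes a patch action.
function onFieldChange(payload: Partial<Personal>): void {
  dispatch({ type: 'patch', payload });
}

// Step 6: only dispatch when validity actually changes (distinctUntilChanged).
let lastValid: boolean | undefined;
function onValidityChange(isValid: boolean): void {
  if (isValid === lastValid) return; // suppress duplicate emissions
  lastValid = isValid;
  dispatch({ type: 'changeValidationStatus', isValid });
}

onFieldChange({ firstName: 'Ada' });
onValidityChange(false);
onValidityChange(false); // suppressed
onValidityChange(true);
console.log(dispatched.length); // 3
```

The `isValid === lastValid` guard is the essence of `distinctUntilChanged`: without it, every keystroke that re-runs validation would dispatch a redundant action.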
If you repeat this for each step, you'll have a full multi step form with validation that is also accessible. If you want to skip ahead, [here's a fully working version of the app](https://github.com/danmt/embrace-power-solution).
## Conclusion
ReactiveForms are incredibly powerful. In previous parts, we talked about some of the core concepts, but now we got to unleash their real power. If you use all the concepts mentioned here, you will probably be able to build any complex form. In case you're wondering about testing, this article is long enough by itself, so I'm planning to write one specifically about testing ReactiveForms.
| danmt |
235,917 | What are your tech survival skills for 2020? | Do you know with every new tech we accept some old and current tech jobs reduce in number. What are s... | 0 | 2020-01-10T21:08:44 | https://dev.to/baskarmib/what-are-your-survival-skills-for-2020-3km0 | discuss, career | Do you know that with every new tech we adopt, some old and current tech jobs reduce in number? What are some of your experiences?
For example, Cloud Computing reduced in-house datacenter teams, and SaaS products reduced in-house development jobs.
So how are you preparing yourself for the rise of #serverless #machinelearning #artificialintelligence?
"Be prepared to disrupt or get disrupted." | baskarmib |
235,922 | [Heroku] No app specified. 😅 how to omit -a, --a option | 🤔 Situation $ heroku apps:info ▸ No app specified. ▸ USAGE: heroku info my-app... | 0 | 2020-01-10T21:37:44 | https://dev.to/n350071/heroku-no-app-specified-how-to-omit-a-a-option-2j94 | heroku | ## 🤔 Situation
```sh
$ heroku apps:info
▸ No app specified.
▸ USAGE: heroku info my-app
```
## 👍 Solution
### 1. Check your git remote
```sh
$ git remote -v
origin git@github.com:n350071/my-app.git (fetch)
origin git@github.com:n350071/my-app.git (push)
```
### 2. Set the git remote to heroku
Then Heroku will automatically detect your app whenever you run commands from inside the Git-managed directory.
```sh
$ heroku git:remote --app my-app-prototype
set git remote heroku to https://git.heroku.com/my-app-prototype.git
$ git remote -v
heroku https://git.heroku.com/my-app-prototype.git (fetch)
heroku https://git.heroku.com/my-app-prototype.git (push)
origin git@github.com:n350071/my-app.git (fetch)
origin git@github.com:n350071/my-app.git (push)
```
## 🦄 Solved
```sh
$ heroku apps:info
=== my-app-prototype
Addons: cleardb:ignite
sendgrid:starter
Dynos: web: 1
Git URL: https://git.heroku.com/my-app-prototype.git
Region: us
Repo Size: 0 B
Slug Size: 71 MB
Stack: heroku-18
``` | n350071 |
235,927 | How to migrate all your Git repositores to a new computer? | This helps you create a super clone script from the old computer and run in the new, keeping all the folder structure. | 0 | 2020-01-15T02:11:31 | https://dev.to/douglaslise/how-to-migrate-all-your-git-repositores-to-a-new-computer-2c72 | git, bash | ---
title: How to migrate all your Git repositores to a new computer?
published: true
description: This helps you create a super clone script from the old computer and run in the new, keeping all the folder structure.
tags: git, bash
---
Have you ever moved from an old computer to a new one and needed to manually clone all your Git repositories?
The script below keeps your folder structure and migrates all your repositories to the new computer.
```bash
dirs=$(find . -name '.git' -type d | sed 's/\/\.git//')
for dir in $dirs; do
GIT_DIR=$dir/.git
# Creates the folder structure
echo mkdir -p $dir;
# Clones the repository in the same folder
echo git clone $(git --git-dir=$GIT_DIR remote get-url origin) $dir;
# Re-adds other remotes
for r in $(git --git-dir=$GIT_DIR remote | grep -v origin); do
echo git --git-dir=$GIT_DIR remote add $r $(git --git-dir=$GIT_DIR remote get-url $r);
done
echo
done > clone-all.sh
```
You just need to paste it into a shell session at the base folder where all your repositories are stored. The script creates a file named `clone-all.sh`. Next, you just need to copy and run this file on the new computer.
## Script explanation
The script first finds all subfolders that contain a `.git` folder, which marks them as Git repositories. From this list, it outputs the commands that will be executed on the new computer.
The output commands (for each folder) are these:
* A command to create the folder, keeping the same structure;
* A command to clone the repository, to the same folder name, using the `origin` remote;
* A command to add the other remotes (different than `origin`) to the newly cloned repository.
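If you want to sanity-check the discovery step on its own before trusting the generated file, you can try it against a throwaway tree (the folder names below are invented for the demo):

```shell
# Build a throwaway tree containing two fake repositories.
sandbox=$(mktemp -d)
mkdir -p "$sandbox/other/demo-repo/.git" "$sandbox/private/another/.git"
cd "$sandbox"

# Same discovery pipeline as the script above: find every .git
# folder, then strip the trailing /.git to get the repo paths.
dirs=$(find . -name '.git' -type d | sed 's/\/\.git//')
echo "$dirs"
# Prints ./other/demo-repo and ./private/another (order may vary)
```

The rest of the script then just loops over those paths.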
## Example
On the old computer, before running the script, the folder structure was this:
```bash
➜ pwd
/home/douglas/code
➜ tree -L 2
.
├── create-clone-script.sh
├── other
│ ├── audited_views
│ ├── bitmovin-ruby
│ ├── cucumber-ruby
│ ├── devise-i18n
│ ├── douglaslise.github.io
│ ├── gitlab-ci-monitor
│ ├── rubocop
│ ├── sendgrid-ruby
│ └── summernote
└── private
├── ping-monitor
└── qd
```
So I pasted the script in the shell and it generated this file `clone-all.sh`:
```
➜ cat clone-all.sh
mkdir -p ./other/bitmovin-ruby
git clone git@github.com:bitmovin/bitmovin-ruby.git ./other/bitmovin-ruby
git --git-dir=./other/bitmovin-ruby/.git remote add fork git@github.com:douglaslise/bitmovin-ruby.git
mkdir -p ./other/audited_views
git clone git@github.com:douglaslise/audited_views.git ./other/audited_views
mkdir -p ./other/cucumber-ruby
git clone git@github.com:cucumber/cucumber-ruby.git ./other/cucumber-ruby
mkdir -p ./other/rubocop
git clone git@github.com:rubocop-hq/rubocop ./other/rubocop
mkdir -p ./other/gitlab-ci-monitor
git clone git@github.com:globocom/gitlab-ci-monitor.git ./other/gitlab-ci-monitor
git --git-dir=./other/gitlab-ci-monitor/.git remote add fork git@github.com:douglaslise/gitlab-ci-monitor.git
mkdir -p ./other/summernote
git clone git@github.com:summernote/summernote.git ./other/summernote
mkdir -p ./other/devise-i18n
git clone git@github.com:tigrish/devise-i18n.git ./other/devise-i18n
git --git-dir=./other/devise-i18n/.git remote add fork git@github.com:douglaslise/devise-i18n.git
mkdir -p ./other/douglaslise.github.io
git clone git@github.com:douglaslise/douglaslise.github.io.git ./other/douglaslise.github.io
mkdir -p ./other/sendgrid-ruby
git clone git@github.com:sendgrid/sendgrid-ruby.git ./other/sendgrid-ruby
git --git-dir=./other/sendgrid-ruby/.git remote add fork git@github.com:douglaslise/sendgrid-ruby.git
mkdir -p ./private/ping-monitor
git clone ssh://hg@bitbucket.org/douglaslise/ping-monitor ./private/ping-monitor
git --git-dir=./private/ping-monitor/.git remote add heroku https://git.heroku.com/pingmonitor.git
mkdir -p ./private/qd
git clone ssh://hg@bitbucket.org/douglaslise/qd ./private/qd
➜
```
Next, I copied the generated file to the new computer and executed it in the base folder where I wanted to clone all the repositories:
```bash
➜ pwd
/home/douglas-new/code
➜ ls
clone-all.sh
➜ sh clone-all.sh
Cloning into './other/bitmovin-ruby'...
remote: Enumerating objects: 56, done.
remote: Counting objects: 100% (56/56), done.
remote: Compressing objects: 100% (43/43), done.
remote: Total 1707 (delta 21), reused 33 (delta 10), pack-reused 1651
Receiving objects: 100% (1707/1707), 245.62 KiB | 1.00 MiB/s, done.
Resolving deltas: 100% (1051/1051), done.
Cloning into './other/audited_views'...
remote: Enumerating objects: 314, done.
remote: Total 314 (delta 0), reused 0 (delta 0), pack-reused 314
Receiving objects: 100% (314/314), 48.93 KiB | 331.00 KiB/s, done.
Resolving deltas: 100% (81/81), done.
Cloning into './other/cucumber-ruby'...
remote: Enumerating objects: 371, done.
remote: Counting objects: 100% (371/371), done.
remote: Compressing objects: 100% (203/203), done.
remote: Total 58832 (delta 227), reused 264 (delta 163), pack-reused 58461
Receiving objects: 100% (58832/58832), 11.76 MiB | 4.73 MiB/s, done.
Resolving deltas: 100% (40075/40075), done.
Cloning into './other/rubocop'...
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (11/11), done.
remote: Total 90142 (delta 0), reused 2 (delta 0), pack-reused 90131
Receiving objects: 100% (90142/90142), 29.81 MiB | 4.98 MiB/s, done.
Resolving deltas: 100% (68283/68283), done.
Cloning into './other/gitlab-ci-monitor'...
remote: Enumerating objects: 14, done.
remote: Counting objects: 100% (14/14), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 390 (delta 3), reused 3 (delta 0), pack-reused 376
Receiving objects: 100% (390/390), 471.38 KiB | 1.36 MiB/s, done.
Resolving deltas: 100% (210/210), done.
Cloning into './other/summernote'...
remote: Enumerating objects: 41, done.
remote: Counting objects: 100% (41/41), done.
remote: Compressing objects: 100% (36/36), done.
remote: Total 21101 (delta 21), reused 11 (delta 5), pack-reused 21060
Receiving objects: 100% (21101/21101), 13.13 MiB | 4.58 MiB/s, done.
Resolving deltas: 100% (13576/13576), done.
Cloning into './other/devise-i18n'...
remote: Enumerating objects: 114, done.
remote: Counting objects: 100% (114/114), done.
remote: Compressing objects: 100% (79/79), done.
remote: Total 4852 (delta 45), reused 56 (delta 19), pack-reused 4738
Receiving objects: 100% (4852/4852), 1.33 MiB | 2.81 MiB/s, done.
Resolving deltas: 100% (2928/2928), done.
Cloning into './other/douglaslise.github.io'...
remote: Enumerating objects: 463, done.
remote: Total 463 (delta 0), reused 0 (delta 0), pack-reused 463
Receiving objects: 100% (463/463), 4.46 MiB | 3.07 MiB/s, done.
Resolving deltas: 100% (92/92), done.
Cloning into './other/sendgrid-ruby'...
remote: Enumerating objects: 23, done.
remote: Counting objects: 100% (23/23), done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 2861 (delta 7), reused 14 (delta 4), pack-reused 2838
Receiving objects: 100% (2861/2861), 680.79 KiB | 1.60 MiB/s, done.
Resolving deltas: 100% (1313/1313), done.
Cloning into './private/ping-monitor'...
remote: Counting objects: 490, done.
remote: Compressing objects: 100% (436/436), done.
remote: Total 490 (delta 256), reused 75 (delta 25)
Receiving objects: 100% (490/490), 72.60 KiB | 352.00 KiB/s, done.
Resolving deltas: 100% (256/256), done.
Cloning into './private/qd'...
remote: Counting objects: 632, done.
remote: Compressing objects: 100% (418/418), done.
remote: Total 632 (delta 288), reused 332 (delta 128)
Receiving objects: 100% (632/632), 151.00 KiB | 322.00 KiB/s, done.
Resolving deltas: 100% (288/288), done.
```
Now we can see the newly created structure:
```
➜ tree -L 2
.
├── clone-all.sh
├── other
│ ├── audited_views
│ ├── bitmovin-ruby
│ ├── cucumber-ruby
│ ├── devise-i18n
│ ├── douglaslise.github.io
│ ├── gitlab-ci-monitor
│ ├── rubocop
│ ├── sendgrid-ruby
│ └── summernote
└── private
├── ping-monitor
└── qd
```
## Conclusion
This lets you clone all your projects just by copying a single, small file, without needing to copy all the files and folders.
If you have suggestions, please let me know in the comments.
Thanks. | douglaslise |
235,948 | Go Notes: Omitting empty structs | How to ignore structs when marshaling them | 0 | 2020-01-10T23:30:29 | https://dev.to/uris77/go-notes-omitting-empty-structs-19d7 | go | ---
title: Go Notes: Omitting empty structs
published: true
description: How to ignore structs when marshaling them
tags: golang,go
---
Go provides a convenient way of marshaling and unmarshaling structs by using struct field tags. This gives us the benefit of not letting the Go naming convention leak into the JSON structure, and it also doesn't force us to use a non-idiomatic naming convention for our APIs that will be consumed outside our team.
Let's draft an example. We are modeling a census application that will capture basic information about people.
```go
type Person struct {
Name string `json:"name"`
Address string `json:"address"`
DateOfBirth string `json:"dob"`
Occupation string `json:"occupation"`
}
```
With this struct, we can encode and decode the following JSON:
```json
{
"name": "Rob Pike",
"address": "On a Street somewhere, in some city",
"dob": "1970-01-01",
"occupation": "engineer"
}
```
If we didn't know the address, then that field would keep its zero value when marshaled. In this case, it would be an empty string. Hence, this struct:
```go
Person{
Name: "Rob Pike",
Address: "",
DateOfBirth: "1970-01-01",
Occupation: "engineer",
}
```
would be marshaled to:
```json
{
"name": "Rob Pike",
"address": "",
"dob": "1970-01-01",
"occupation": "engineer"
}
```
There are times when we don't want zero-valued fields to be written out when marshaling. This can be configured by using `omitempty`:
```go
type Person struct {
Name string `json:"name"`
Address string `json:"address,omitempty"`
DateOfBirth string `json:"dob"`
Occupation string `json:"occupation"`
}
```
Now this person:
```go
Person{
Name: "Rob Pike",
Address: "",
DateOfBirth: "1970-01-01",
Occupation: "engineer",
}
```
would be marshaled to:
```json
{
"name": "Rob Pike",
"dob": "1970-01-01",
"occupation": "engineer"
}
```
We also want to record how many children a person has. We can modify `Person` like this:
```go
type Person struct {
Name string `json:"name"`
Address string `json:"address"`
DateOfBirth string `json:"dob"`
Occupation string `json:"occupation"`
Dependent Children `json:"dependent,omitempty"`
}
type Children struct { Name string `json:"name,omitempty"` }
```
I know that someone can have more than one child, but I'm struggling to come up with didactic examples.
If we want to unmarshal a JSON struct with no dependent:
```json
{
"name": "Rob Pike",
"dob": "1970-01-01",
"occupation": "engineer"
}
```
Our struct will fill in the zero value for `Children`; in this case, it will be a struct with empty fields:
```go
Person{
Name: "Rob Pike",
Address: "",
DateOfBirth: "1970-01-01",
Occupation: "engineer",
Dependent: Children{Name: ""},
}
```
There are instances where we would actually want this to be `nil`, for example, if we are pushing this to an Elasticsearch index whose mappings do not allow empty strings for `name`. For this use case, we have to use pointers.
```go
type Person struct {
Name string `json:"name"`
Address string `json:"address"`
DateOfBirth string `json:"dob"`
Occupation string `json:"occupation"`
Dependent *Children `json:"dependent"`
}
type Children struct { Name string `json:"name,omitempty"` }
```
Our unmarshaled person with no dependent now looks like this:
```go
Person{
Name: "Rob Pike",
Address: "",
DateOfBirth: "1970-01-01",
Occupation: "engineer",
Dependent: nil,
}
```
## Summary
Use `omitempty` to keep zero-valued fields out of the marshaled JSON. However, if a field is a struct, we should use a pointer so that a missing value can be `nil`.
| uris77 |
235,994 | 12 Things every Software Developer should be doing in 2020. | Have you set any goals for 2020 professionally? While you should focus on personal goals (such as going to the gym more, eating healthy, etc.), you should plan to grow yourself professionally. | 0 | 2020-01-13T16:27:37 | https://dev.to/mbcrump/12-things-every-software-developer-should-be-doing-in-2020-5hbp | productivity, developer | ---
title: 12 Things every Software Developer should be doing in 2020.
published: true
description: Have you set any goals for 2020 professionally? While you should focus on personal goals (such as going to the gym more, eating healthy, etc.), you should plan to grow yourself professionally.
tags: productivity, developer
cover_image: https://thepracticaldev.s3.amazonaws.com/i/omdb1ai3676z80zsboxh.png
---
#### Introduction
Have you set any goals for 2020 professionally? While you should focus on personal goals (such as going to the gym more, eating healthy, etc.), you should plan to grow yourself professionally. Even if you **love** your current job, it is up to you to keep your skill-set relevant for future opportunities and to explore other areas that interest you. If you wait for someone else to manage your career then you'll be waiting for a while or maybe forever. Below are my top 12 things that I believe you should be doing in 2020 if you are in the software development space in no particular order.
### 12 Things every Software Developer should be doing in 2020.
1. **Create an account on** [**Twitter**](https://www.twitter.com/) – Yes. This one sounds simple and you probably even used your Twitter account to create a dev.to account, but in my conversations with attendees at conferences, there are still a lot of folks who claim that they don't want an account due to a) they won't have followers or b) they don't want "noise" such as political tweets or c) they don't want to waste time. HINT: You decide who to follow and you can even mute someone. :) Anyways, I have a couple of reasons why you should still have an account.
 1. Followers don't matter. They don't. Regardless of whether you have 0 followers, 1 follower or 10,000 followers, an account gives you the ability to share your thoughts, bookmark other great dev tweets, and search.
2. Get software developer news straight from the source by following other developers. This has been one of my top benefits since I joined Twitter. I know there are so many smart people out there and I love being able to not only follow them but interact with them. Don't be shy.
3. Monitor your favorite technology hashtags – To monitor topics important to you. For example, I use it to monitor #azure, #nodejs, #dotnet amongst others.
4. To stay engaged in a conversation with other developers and to see what projects they are working on. Again, you don't need to have tons of followers to be engaged in a conversation or have a GitHub repo with thousands of stars.
5. Direct Message - Many devs [like myself](http://twitter.com/mbcrump) have our DMs open. If you feel uncomfortable creating a public tweet, then DM them. On a personal note, this is typically one of the best ways to reach me.
2. **Read** [**StackOverflow**](http://www.stackoverflow.com/) – StackOverflow is the number one forum for asking and answering a coding question. If you use the site already, then you are probably aware of "site-rot" where the best answer could be the third or fourth comment. Even with this hurdle, I've found it's worth it just for browsing questions and learning how different devs solve the problem. I think it is a wise investment of your time to spend at least 10 minutes a day reading StackOverflow.
1. Take advantage of [tags](https://stackoverflow.com/questions/tagged/azure) to quickly skim the recent questions. Again, I monitor similar tags as the hashtags mentioned earlier for Twitter.
2. Try to solve issues to your product that you work on (if it is listed as a tag) by adjusting the filters to see the most commonly asked question, upvotes, etc. and provide an updated answer if need be.
 3. Volume - It probably has more questions and answers than your favorite programming language's official site, and you can share your single login.
4. Cross-discipline audience - Having the ability as a dev to ask a data question and not only have developers answer but database admins, etc.
3. **Start a Blog** – Every Developer should have a blog. But Why?
1. It is a footprint that we leave for other developers studying our craft.
2. It allows you to become engaged in the community.
3. It helps you market yourself as a professional.
4. It shows your technical ability and passion.
5. It allows you to challenge yourself and help educate others.
6. On the flip side, DON'T start a blog for revenue. Be yourself and the money will come naturally at some point.
4. **Get out there** - Try your very best to get out of your comfort zone and start talking to other developers at local events, meetups, conferences, etc. You have something amazing to contribute!
1. In my many years of attending conferences – most everyone is shy at the beginning. If you start a conversation with someone, then it usually takes off very fast because you should have at least one thing in common (such as technology in general).
2. Networking with other developers is key to your professional career. You start building connections in the industry and if you ever need help then you have someone to go to. It also works the other way around.
3. Present on a topic for your co-workers, meetup, conference or even your mom. You may even like it.
5. **Start watching live streamers in the development space or create your live stream** - While you may be thinking "Who would watch someone code?", the answer is thousands of developers do it every day and for the following reasons:
1. Online streaming is a safe place for all, but especially for those with social anxiety - you can join a live stream and stay quiet or participate in the conversation. While I'd suggest participating in the conversation, do what is safe for you.
2. You can help others and others can help you learn to code regardless of your skill level. My daughter who is 10 years old was working with makecode.com on [my stream](https://twitch.tv/mbcrump) and a viewer suggested another way to solve the problem and she learned something she (or even myself) didn't have insight into before.
 3. If you are trying something, you don't have the pressure to be as perfect as a professional video tutorial. You open your IDE or editor and start coding; you get to learn from your mistakes and so does your audience.
4. Don't stress having followers or concurrent viewers at the start as your video can be posted to YouTube and the notes on dev.to further create your brand for your current or future employer.
5. If you don't know who to start following, then you should create an account on Twitch and follow the entire [Live Coders group](https://www.twitch.tv/team/livecoders) that is led by [Jeff Fritz](https://www.twitch.tv/csharpfritz).
6. **Spend money on solid hardware -** I don't care if you are in the Mac or PC camp - don't try to save money here in 2020. In my early 20s, I tried to save money here every time but always spent more money to fix broken or slow hardware.
 1. This applies to mobile too - If you're a developer and carrying around a 3 or 4-year-old phone then it's time to upgrade. I remember way back someone who told me they couldn't download my app because it required iOS 9 and I remember thinking "This person is in software development?" While I don't suggest getting a new model every year, I would stay current enough to run the latest generation of apps regardless of iOS or Android hardware.
 2. Just like in some industries you need to drive a fancy car, wear nice clothes or the latest jewelry, in technology we need to stay as current as we can afford with our computers, software and mobile phones, as our customers may be using the latest technology.
7. **Think more clearly about mobile –** While I'm sure this will be the most controversial topic in this post, I'll say it anyway. Be wary of all the offerings of cross-platform development - "write once, deploy everywhere" messages.
 1. What app are you creating? - I have been involved in iOS development in some form since the iPhone 3GS was released. Yup, I watched the keynote and started learning Objective-C. While I eventually needed revenue from other platforms like Android and eventually Windows Phone, I always took a step back from anyone promising to "write once and deploy everywhere" because it depends on the app you are creating. Do you want to wait until your cross-platform tools support the latest mobile OS version? What about performance? What about documentation and help for a problem that occurs?
 2. Keep it simple - Do you need your app on every device on the current market? If so, can you do it with a web site or \*gasp\* PWA? Do you need blazing-fast speed and performance? Then I'd probably suggest native. Do you have a simple LOB app that needs to work on Android or iOS? Then a cross-platform tool may be the answer. This is something a modern developer cannot ignore in 2020.
8. **Learn at least one programming design pattern** - I am not going to tell you which one you should learn or focus as it depends on what technology area you focus on but you need at least one.
1. If you are familiar with at least one design pattern then not only would your code be structured better, but it would make your future employer feel better about hiring you.
2. Since I typically work with OO programming, I started with this book: [Gang of Four – Design Patterns: Elements of Reusable OOS](http://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612/ref=sr_1_1?ie=UTF8&qid=1324612498&sr=8-1) (_not an affiliate link_) and eventually worked towards others.
9. **Set** **reachable** **goals every year –** Create a short-list of goals that you are looking to accomplish in the next year.
1. Track progress with numbers - I typically start with 10 items that I am proud of in the last year and 10 goals for the current year (2020). For example, if you have 200 followers on a social media platform, then you might set a goal to hit 500 followers.
 2. Not only should you be challenging yourself with a reachable goal, you should also create a monthly or quarterly email reminder or use a reminder app, browser plugin, etc. to evaluate your progress.
3. Hold yourself accountable by sharing it with close friends, the whole world or just creating a private imgur image post to look back on in a year.
10. **Learn a different programming language –** Simply put it broadens your perspective and permits a deeper understanding of how a computer and programming languages work.
1. Keep in mind that while the goal is to learn (maybe 1) programming language, you might find yourself with a new language that you can use to solve problems differently.
2. Wise words - If the only tool you have is a hammer you'll treat every problem as a nail.
11. **Believe in yourself –** It amazes me when I hear other developers telling me about their low self-confidence. Why? Because I look at them as way smarter than I am. Here I am copying and pasting code from StackOverflow and they think I know what I'm doing? Hah!
1. If you struggle with this as I do, then one of the ways to soften this anxiety is to spend time with a bit of self-reflection. While I know I'll never be known as a superstar developer (if this is a thing), I have found that my knowledge in a form of a blog post, twitter, etc has helped many folks.
 2. Teaching - Teaching others has many benefits, but the one that I find the most valuable is the incentive it gives you to learn the material so you don't look like a fool. This could be for a live stream, speaking session or just to 5 co-workers. It helps, so try it.
3. Learning - In my army dad's voice - There aren't any excuses for staying ignorant in this industry. Even if you don't have access to the latest books, hardware, development tools, etc., there are just too many FREE resources on the web.
12. **Read written content such as blogs and books –** Do you read blogs or books consistently?
 1. I believe a good developer would read or skim at least 3-5 blog posts per day and have at least 1-2 books on a backlog. Why? Well, to at least have a high-level knowledge of a topic regardless of whether you plan to use it. It gives you options.
 2. How can you get better if you're not constantly reading?
P.S. If you want to stay in touch then I can be found live streaming on [Twitch](http://twitch.tv/mbcrump), or short-form software development news on [Twitter](http://twitter.com/mbcrump).
### What would your tip for software developers for 2020 be? Leave it in the comments below and thanks for reading. | mbcrump |
236,114 | Cross-site scripting Attack Tutorial | A post by Mayur Kadam | 0 | 2020-01-11T10:09:02 | https://dev.to/mayurkadampro/cross-site-scripting-attack-tutorial-di1 | xss, scripting, vulnerability, javascript | {% youtube l4pVpsV7aQw %} | mayurkadampro |
820,729 | Alhamdulillah | God Is The Greatest | 0 | 2021-09-11T12:25:28 | https://dev.to/billyzyxx/alhamdulillah-5784 | God Is The Greatest | billyzyxx | |
236,123 | Reading Snippets [32 => CSS] | Custom properties contain specific values to be reused in a document. Custom property notation (e.g... | 0 | 2020-01-11T10:38:30 | https://dev.to/calvinoea/reading-snippets-32-css-2o48 | css, beginners | Custom properties contain specific values to be reused in a document.
Custom property notation (e.g., <code>--main-color:black;</code>) is used to set custom properties.
Declaring a custom property on the <code>:root</code> pseudo-class allows it to be used wherever needed throughout a document.
The <code>var()</code> function is used to access custom properties.
<kbd><b>Example:</b></kbd>
<small><i>Declaring a custom property:</i></small>
<code>
element {
--main-bg-color:brown;
}
</code>
<small><i>Using the custom property:</i></small>
<code>
element {
background-color:var(--main-bg-color);
}
</code>
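`var()` also accepts an optional second argument, a fallback value that is used when the custom property is not set:

```css
element {
  /* falls back to brown if --main-bg-color is undefined */
  background-color: var(--main-bg-color, brown);
}
```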
<kbd>Source:<small><a href="https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties">Developer.Mozilla</a></small></kbd>
| calvinoea |
236,145 | Makers - Week 2 | I used this time to eat, catch up with family, and code! I found a fantastic course on udemy that wen... | 0 | 2020-01-11T11:48:18 | https://dev.to/davidpaps/makers-week-2-1jo8 | ruby, beginners, bootcamp |
I used this time to eat, catch up with family, and code! I found a fantastic course on Udemy that went through the specifics of RSpec (shout out to the coach Boris!) and spent some time reading the fantastic book 'Practical Object-Oriented Design in Ruby: An Agile Primer' by Sandi Metz.
I really wish I could go back in time and read this before the on-site course started, but better late than never. It really helped with basic OO principles such as SRP, dependency injection and how to build effective classes.
I took some time away from the screen, and started the course with a newfound enthusiasm and motivation to be a better developer. Week 3 here we come!
As always wish me luck! | davidpaps |
236,207 | How to Deploy with ZEIT NOW | ZEIT Now is a deployment tool by ZEIT. ZEIT Now is a cloud platform for static websites and server... | 0 | 2020-01-11T14:13:13 | https://dev.to/akshay2742/zeit-now-5f09 | zeit, deploy, web, hackcbs | **ZEIT Now** is a deployment tool by ZEIT. **ZEIT Now** is a cloud platform for static websites and serverless functions. It launches web applications, docker containers or even static websites to the cloud platform. Getting started with Now is easy and takes just a few steps to get up and running with new projects in less than a minute.
**ZEIT Now** is free and easy to use, even for a newcomer, and it allows developers to publish their projects to a custom domain.
## Deploying with ZEIT Now
**ZEIT Now** can certainly be described as the easiest way to deploy websites, regardless of the requirements. The people at ZEIT aim to make cloud computing accessible to everyone. It is a cloud platform for static sites and Serverless Functions which enables developers to host websites and web services that deploy instantly, scale automatically, and require no supervision, all with no configuration.
### Following steps will present you how to get initiated quickly:
1) **Use Quickstarts:** They let you get started quickly. Various quickstarts, paired with guides and a deploy button, are available on their website to help you get going with your project as quickly as possible.
2) **Deploy Project Locally:** If doing things manually is your way, then you can deploy with **ZEIT Now** over your terminal with any local project.
 i) **Install Now CLI:** To deploy with **ZEIT Now** from your terminal, you'll need to install Now CLI, a frequently updated, open-source command-line interface. You can get it from either npm or Yarn.
 ii) **Create a project & deploy:** Alternatively, create a new Next.js application, move into its directory, and then deploy your app with a single command from your terminal. Once deployed, you will get a preview URL, assigned on each deployment, to share the latest changes under the same address.
Any technology that can be served over HTTP and distributed through their CDN network can be deployed to **ZEIT Now**.
### What it includes:
● Static websites and static generators (React, Vue, Angular, etc)
● Code that renders HTML on the server-side
● API endpoints that query databases or web APIs and return dynamic data
**ZEIT Now** makes serverless useful to you and to your team directly with tools and workflows, that make the underlying cloud infrastructure useful, productive and configuration-free.
### Why ZEIT Now?
**ZEIT Now** provides the framework to avoid re-writing and re-learning everything from scratch to take advantage of serverless today. You can deploy popular client-side frameworks (like Next.js, create-react-app, Vue), Node.js or Go APIs as a monorepo with nearly zero configuration. Moreover, they integrate directly with GitHub to deploy upon every push. All these features and facilities make using **ZEIT Now** a seamless experience that proves why it’s said to be the easiest way to deploy websites currently anywhere on the internet.
Using **ZEIT Now**, developers can deploy their website quickly, without having to manually configure DNS, SSL, CDN or hosting. Developers can integrate with their favorite tools, and bring the entire team of developers and designers closer together.
It is a push-to-deploy platform that works with the developer's web framework and integrates with GitHub and GitLab, with free automatic SSL to avoid tedious certificate renewals and DNS configuration.
### How to deploy with ZEIT Now:
To deploy with **ZEIT Now**, a developer only needs to install the Now CLI, a frequently updated, open-source command-line interface, through npm, the JavaScript package manager. When a web application is ready to deploy, the only thing to do is run the "now" command, which instantly deploys the web app and returns a preview URL, assigned on each deployment to share the latest changes under the same address. Once deployed, the project can be assigned to a custom domain or a specified name of one's choice to give it a primary place to see the latest version of the application.
Hence, **ZEIT Now** makes it a lot easier for developers to deploy their static web apps with zero configuration and full trust and security. | akshay2742 |
236,232 | Some problems the software developer faces | Not understanding the user Debugging Keeping up with Technology Communication Time Estimation Sittin... | 0 | 2020-01-11T15:07:06 | https://dev.to/michong/some-problems-the-software-developer-faces-2pc3 | 1. Not understanding the user
2. Debugging
3. Keeping up with Technology
4. Communication
5. Time Estimation
6. Sitting for hours
7. Security threats
8. Working with another person's code | michong | |
236,300 | How to Securely Set Laravel 5 Files Folders Permission and Ownership Setup | Files Folders Permission and Ownership: Deploying a Laravel application to a production en... | 0 | 2020-01-11T19:11:07 | https://anansewaa.com/how-to-securely-set-laravel-5-files-folders-permission-and-ownership-setup/ | laravel, security | ## Files Folders Permission and Ownership:
Deploying a Laravel application to a production environment can be challenging at times. So, in this post, we will discuss how to deal with file and folder permission and ownership problems.
Every developer wants his/her application to be set up in a secure environment when deploying it to a production server. One key area you should look at is folder permissions, because that could be a single point of failure, making the application vulnerable to hackers.
## Security Issue: Files and Folders Permission
The first error you might get is due to improper file and folder permissions. Because of that, most people quickly set their file and folder permissions to 777 on the production server.
However, it is bad practice to set your file and folder permissions to 777, because that leaves your server open to the world.
That approach makes it possible for anyone using the application to have read, write and execute permission on your production server.
Simply, anyone can read, write and execute files using your application. So, hackers can upload malicious files that damage your project.
Therefore, always avoid setting 777 Permission for your files and folders.
## Setting Files Folders Permission and Ownership for Laravel 5:
Firstly, find the webserver user, for apache it is www-data. But use the following command to check:
```
ps aux | egrep '(apache|httpd)'
```
The output should be similar to this:
```
root@new:/var/www/html# ps aux | egrep '(apache|httpd)'
root 1759 0.0 0.1 503792 49164 ? Ss Aug10 0:09 /usr/sbin/apache2 -k start
root 17344 0.0 0.0 13136 988 pts/0 S+ 15:46 0:00 grep -E --color=auto (apache|httpd)
www-data 41090 0.0 0.2 522396 75612 ? S 14:09 0:01 /usr/sbin/apache2 -k start
www-data 41185 0.0 0.2 516140 68920 ? S 14:11 0:01 /usr/sbin/apache2 -k start
www-data 41805 0.0 0.2 522584 80692 ? S 14:27 0:01 /usr/sbin/apache2 -k start
www-data 42333 0.0 0.2 524240 65612 ? S 14:39 0:00 /usr/sbin/apache2 -k start
www-data 42890 0.0 0.2 524296 80380 ? S 14:54 0:00 /usr/sbin/apache2 -k start
www-data 43618 0.0 0.2 526288 69388 ? S 15:09 0:00 /usr/sbin/apache2 -k start
www-data 43619 0.0 0.2 522304 76708 ? S 15:09 0:00 /usr/sbin/apache2 -k start
www-data 43620 0.0 0.2 522492 73348 ? S 15:09 0:00 /usr/sbin/apache2 -k start
www-data 43891 0.0 0.1 513680 59224 ? S 15:17 0:00 /usr/sbin/apache2 -k start
www-data 43941 0.0 0.1 513644 54344 ? S 15:18 0:00 /usr/sbin/apache2 -k start
```
From the output, the apache user is www-data.
Now, change the owner of the project directory to www-data using the following command:
```
sudo chown -R www-data:www-data /var/www/path/to/your/project/
```
**Note**: you must specify the path to the project directory
Set folder permissions to 755 and file permissions to 644:
## Files and Folders/Directories Permission:
### Setting Folders/Directories Permission
The appropriate permission for your folders or directories is 755. The command below sets the folder permissions for your project to 755:
```
sudo find /var/www/path/to/your/project/ -type d -exec chmod 755 {} \;
```
### Setting Files Permissions:
The appropriate permissions for your files should be 644. Use the following command to set the file permissions for your project to 644:
```
sudo find /var/www/path/to/your/project/ -type f -exec chmod 644 {} \;
```
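If you ever need to double-check what these octal modes actually grant, Python's standard `stat` module can render them in the familiar `ls -l` notation (this is just an illustration of the permission bits, not part of the Laravel setup):

```python
import stat

# 0o040000 marks a directory, 0o100000 a regular file;
# the low nine bits are the rwx permissions shown by `ls -l`.
print(stat.filemode(0o040755))  # drwxr-xr-x -> folders: owner rwx, group/other read + traverse
print(stat.filemode(0o100644))  # -rw-r--r-- -> files: owner read-write, everyone else read-only
```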
Giving the appropriate permissions and ownership to your project folders and files limits end users to read-only access, so they cannot write or execute malicious files on the production server.
Even though we have secured the files and folders, Laravel still needs read-write permission on the storage and bootstrap cache folders.
Use the following commands to fix the read-write permission:
```
sudo chgrp -R www-data /var/www/path/to/your/project/storage /var/www/path/to/your/project/bootstrap/cache
sudo chmod -R ug+rwx /var/www/path/to/your/project/storage /var/www/path/to/your/project/bootstrap/cache
```
You can assign the same permission to your file upload directory as well.
### Setting Permissions for SFTP/FTP Upload Files:
Add your user to the group using the following command:
```
sudo usermod -a -G www-data yourusername
```
Now change the ownership using the following command:
```
sudo chown -R root:www-data /var/www/path/to/your/project/
```
Finally, assign files and folder permissions using the following command:
```
sudo find /var/www/path/to/your/project/ -type f -exec chmod 664 {} \;
sudo find /var/www/path/to/your/project/ -type d -exec chmod 775 {} \;
```
We can be sure that our production environment is relatively secure for the Laravel project. | ayekpleclemence |
236,352 | Remove duplicates rows with SQL | Last week I made a small update error on my application and I ended up with duplicates values in a ta... | 0 | 2020-01-14T11:23:41 | https://blog.pagesd.info/2020/01/14/remove-duplicate-rows-sql/ | sql, snippet | Last week I made a small update error on my application and I ended up with duplicate values in a table. Of course, this would not have happened if I had a unique key, but as I checked before inserting, I thought I was safe.
Unfortunately, as I couldn't delete everything and just start updating the data again, I had to figure out how to delete the duplicate rows.
As a first step, I ran a simple query to find out how much trouble I was in.
```sql
SELECT Place_ID, Event_ID, StartDate, COUNT(*)
FROM Showings
GROUP BY Place_ID, Event_ID, StartDate
HAVING COUNT(*) > 1
```
Good news first: there are no triplets :)
Less good news: I have more than a thousand rows to delete. So there is no way to do this by running one query after the other...
Good thing: since my table has a primary key, I can identify duplicate data:
```sql
SELECT Place_ID, Event_ID, StartDate, MAX(Showing_ID) AS ID
FROM Showings
GROUP BY Place_ID, Event_ID, StartDate
HAVING COUNT(*) > 1
```
This way, I find the IDs of all the rows added when there was already a record with the same data (Place_ID, Event_ID and StartDate). I only have to delete these useless values (since the others were there first) :
```sql
DELETE
FROM Showings
WHERE Showing_ID IN (
SELECT MAX(Showing_ID)
FROM Showings
GROUP BY Place_ID, Event_ID, StartDate
HAVING COUNT(*) > 1
)
```
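If you want to try this logic safely before running the `DELETE` against real data, the whole scenario can be reproduced in a few lines with SQLite from Python (table and column names as in the article; the sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE Showings (
    Showing_ID INTEGER PRIMARY KEY,
    Place_ID INTEGER, Event_ID INTEGER, StartDate TEXT)""")
# IDs 1 and 2 are duplicates of each other; ID 3 is unique.
con.executemany("INSERT INTO Showings VALUES (?, ?, ?, ?)", [
    (1, 1, 10, "2020-01-01"),
    (2, 1, 10, "2020-01-01"),
    (3, 2, 20, "2020-01-02"),
])

# Same DELETE as above: drop the newest row of each duplicate pair.
con.execute("""DELETE FROM Showings WHERE Showing_ID IN (
    SELECT MAX(Showing_ID) FROM Showings
    GROUP BY Place_ID, Event_ID, StartDate
    HAVING COUNT(*) > 1)""")

print([r[0] for r in con.execute(
    "SELECT Showing_ID FROM Showings ORDER BY Showing_ID")])  # [1, 3]
```

Note that `MAX(Showing_ID)` removes only one row per group, which is enough here since there were no triplets; with more than two copies of a row you would have to run the `DELETE` again.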
Sometimes, IT is not that complicated.
---
This post was originally published on my [blog](https://blog.pagesd.info/2020/01/14/remove-duplicate-rows-sql/).
Cover image : [The Lady from Shanghai - Rita Hayworth](https://en.wikipedia.org/wiki/The_Lady_from_Shanghai) | michelc |
236,354 | my first webpage | A post by Mita | 0 | 2020-01-11T19:29:42 | https://dev.to/thecoder203/my-first-webpage-k47 | codepen | {% codepen https://codepen.io/thecoder203/pen/eYmrzaE %} | thecoder203 |
236,360 | Adeus Medium. Olá DEV Community | Minha decisão sobre a liberdade e a gratuidade do meu conteúdo para a comunidade | 0 | 2020-01-11T19:36:02 | https://dev.to/rcrd/adeus-medium-ola-dev-community-4n4p | liberdade, conteúdo, compartilhamento, dev | ---
title: Goodbye Medium. Hello DEV Community
published: true
description: My decision about the freedom and free availability of my content for the community
cover_image: https://images.unsplash.com/photo-1578759009600-e6c4212e0887?ixlib=rb-1.2.1&ixid=eyJhcHBfaWQiOjEyMDd9&auto=format&fit=crop&w=900&q=60
tags: liberdade, conteúdo, compartilhamento, dev
---
This text is a continuation of my last post on the Medium platform.
Over the last few months I went through the experience of developing an application with a technology and in an environment I am not fully used to, and without access to the external internet. In short: it was horrible.
I didn't need to go through that to already be aware of how important it is to share knowledge in the developer community, but it certainly highlighted how much we miss having answers "a Google search away".
It's no longer such recent news that Medium is applying a paywall policy for non-subscribers. I personally haven't read in detail which specific measures the platform is taking, but I saw several people on Twitter posting screenshots of warnings about a "monthly reading quota", something very similar to news and magazine sites nowadays.
The company is free to decide how it wants to profit from its service, which is by no means a bad one, and that's why my content was there for years. But the decision to charge for content has, in my opinion, put the company out of step with the needs of our community.
We have a community that has already broken some patterns, so to speak, which has made us better people and professionals. Abandoning the notion that helping someone is helping the competition is what makes me proudest of this group. And even though that notion still exists in the profession, the more people engage in meetups, events, groups and discussions wherever we are, the more people will stop believing in that idea, and we will evolve together.
Contrary to the opinion of some that the devs' break with Medium is the perfect moment to open a blog with its own posting system, I believe the DEV community has the potential to be the perfect replacement for Medium, especially with the interactivity features that isolated blogs cannot provide.
It's important to say that I never received any notice from Medium that my content was behind a paywall, nor heard from anyone saying they couldn't access something I had produced and made available there. On the contrary, as an author, my posts appear there, to me, with a notice that they are not under the paywall policy.
Even so, I don't think it's worth investigating whether that really means the policy doesn't affect my content; after all, the company can change it whenever it wants. Using a platform that believes in content sharing the same way we do is much more advantageous.
As always, I'm open to comments on this discussion. And I recommend reading @ben's post (in English) about how [Medium was never meant to reach our community](https://dev.to/devteam/medium-was-never-meant-to-be-a-part-of-the-developer-ecosystem-25a0) in the way we produce and share, and why, because of that, it actually isn't doing anything wrong.
236,374 | Best Brainstorming Tool? | What are the best brainstorming or diagraming tool for use cases and flow diagrams? | 0 | 2020-01-11T20:13:55 | https://dev.to/hashimlokasher/best-brainstorming-tool-29kg | diagram, brainstorm | ---
title: Best Brainstorming Tool?
published: true
description: What are the best brainstorming or diagraming tool for use cases and flow diagrams?
tags: #diagram #brainstorm
---
| hashimlokasher |
236,414 | 5 Things That Will Make Your Dream Come True | We are so lucky we live in a time of opportunity. We have a multitude of options available to us. The question is, why so many of us are stressed and live unfulfilled in life? | 0 | 2020-01-11T20:55:18 | https://foundsiders.com/blog/5-things-that-will-make-your-dream-come-true/ | productivity, motivation, psychology | ---
title: 5 Things That Will Make Your Dream Come True
published: true
canonical_url: https://foundsiders.com/blog/5-things-that-will-make-your-dream-come-true/
cover_image: https://thepracticaldev.s3.amazonaws.com/i/8vucjyp568iqlugh252h.jpg
description: We are so lucky we live in a time of opportunity. We have a multitude of options available to us. The question is, why so many of us are stressed and live unfulfilled in life?
tags: productivity, motivation, psychology
---
We are so lucky we live in a time of opportunity.
We have a multitude of options available to us. The question is, why so many of us are stressed and live unfulfilled in life?
We supposed to have everything we want.
Unfortunately, we are trying to do everything at once, because there are many open options what to do, but in this way, we are burning ourselves out. We fail, give up, and do not try over and over again.
We’re unable to persevere.
## 🎯 Focus and routine
The secret to your dream fulfillment is, concentrate on pursuing your dream—focus on one thing!
Your dream is the situation you want to happen in your life.
For that, you should be ready to sacrifice your comfort zone, even painfully transform your habits if necessary, and do whatever you have to do for it without compromise.
Beyond that, focus your mind on your dream and remove all the other things that take over your mind.
Start adding things to your daily schedule that can help you get closer to your dream.
And here are the useful points which in the long run will help you to stay focused on your dream:
## 💭 Think of the big picture
If we want to get anywhere in life, we will have to struggle. Fulfilling your dream will require hard work and plenty of perseverance.
There will be setbacks and hardships on the way, but you should keep thinking of the big picture.
No matter what your dream is, there going to be some part of it that you’ll find mundane and struggle to focus on.
In these moments, you should remember that you are doing this all to contribute to that big picture.
Enjoy the process. You cannot fulfill your dream by skipping the parts you don’t like. You need to bear down and get on with it.
## 🌱 Get into the right daily routine
It's incredibly challenging to get out of something that once becomes a habit for you.
And a daily routine is a powerful weapon in determining your focus level. Because it begins to revolve around your long-term goal.
The correctly organized daily routine includes everything you need to do to stay focused on your dream and keep progressing every day.
Without this consistency, the progression may become very difficult if you will not be putting in enough practice in what's needed.
Write down your daily routine, and slowly incorporate what is important, one thing at a time. Because starting with doing all at once can make things harder for you to stick to, months down the line.
## 📆 Plan first what is important
Identify which big task is the most important and will yield the most significant result and progression towards your dreams.
Next, divide it into smaller tasks that will be the easiest to do, because they are quick and straightforward. Moreover, they will give you that same satisfaction of completion that a big task does, but quicker and more often.
Subconsciously, it will make you think that you are progressing more quickly, because of completing more tasks within a specific time frame.
I call it the lazy way of doing things, and it is something that many people fall down on.
Plan out your workload, work on what is essential first.
## 🙅♀️ Learn when to use "No"
Observing successful people or people who are fulfilling their dream on the way, the word "No" is something that they know when and how to use.
They have the discipline to say "No," when people ask them to hang out, for example.
They value their time first. They focus on what they need to get done first. They don't let it become a setback in the quest to live their dream. They teach themselves the discipline of telling themselves and others 'No' in order to progress quickly towards their dream and not get sidetracked by distractions.
## Conclusion
I know you have probably been working so hard for such a long time toward your dream.
Possibly with not many results coming your way, but there are some, I am sure.
You may have forgotten entirely the exact reason that made you start in the first place. Because when you first set out to do something, your dream and reasons behind it are fresh in your memory. They are so clear to you.
Whenever you struggle to focus on the things that have to get done at hand, go back to your dream. Dream it and dream it again.
It is your motivation, your purpose, and it is why you are doing what you are doing right now.
Finishing is hard, but you can learn how to do it.
Make your best life,
Ilona
___
_Photo by Luis Quintero from Pexels_
| ilonacodes |
236,441 | Javascript: Printing Object in Console | If you're trying to print object in console and you're getting [object Object], then most probably yo... | 0 | 2020-01-11T21:30:19 | https://dev.to/victoromondi1997/javascript-printing-object-in-console-352n | If you're trying to print an object to the console and you're getting `[object Object]`, then most probably you're concatenating the object into a string, which coerces it via its `toString()` method. Pass the object as a separate argument with a comma (`,`) instead of the plus operator (`+`):
Ex:
```javascript
console.log('The value of object is: ' + obj)
```
❌❌
```javascript
console.log('The value of object is: ' , obj)
```
✔️✔️ | victoromondi1997 | |
236,541 | Thoughts, Tips, and Observations about speaking at conferences: a thread from Corey Qinn | I just finished reading this 100+ long, entertaining, and amazing thread: ... | 0 | 2020-01-12T03:24:26 | https://dev.to/piannaf/thoughts-tips-and-observations-about-speaking-at-conferences-a-thread-from-corey-qinn-1dd0 | speaking, motivation, beginners, techtalks | ---
title: Thoughts, Tips, and Observations about speaking at conferences: a thread from Corey Qinn
published: true
description:
tags: speaking, motivation, beginners, techtalks
---
I just finished reading this 100+-tweet-long, entertaining, and **amazing** thread:
{% twitter 1215710451343904768 %}
Posting here in case you haven't seen it. Highly recommended. If you've never thought about speaking, it may inspire you. If you have just started ([like me](https://dev.to/touchlab/a-first-time-speaker-s-journey-from-cfp-to-stage-5pk)), you'll learn a lot. If you are a veteran, I suspect there's interesting tidbits in there for you, too. | piannaf |
236,546 | Implementing View Types in Python | A pattern to implement view types that truly hide implementation details in Python | 0 | 2020-01-12T03:54:50 | https://dev.to/rvprasad/implementing-view-types-in-python-5ba2 | python, coding, programming, patterns | ---
title: Implementing View Types in Python
published: true
description: A pattern to implement view types that truly hide implementation details in Python
tags: Python, Coding, Programming, Patterns
cover_image: https://miro.medium.com/max/1920/0*fNBo-jVoO-eSl23r.jpg
---
In object-oriented languages like Java, C#, or Kotlin, given a type `T`, an associated *view type* `TView` is used to expose a specific view (parts) of an object of type `T`. This helps hide implementation details.
For example, in the following [Kotlin](https://kotlinlang.org/) example, `Ledger` interface is used to provide access to a ledger while hiding the underlying implementation details, i.e., `LedgerImpl` provides the functionalities of a ledger and it has a `process` and `container` members.
```kotlin
interface Ledger {
fun getValue(i: Int): Int?
}
class LedgerImpl: Ledger {
val container = HashMap<Int, Int>()
override fun getValue(i: Int) = container.get(i)
fun process() {
// processing
}
}
fun getLedger(): Ledger {
val c = LedgerImpl()
c.process()
return c as Ledger
}
```
# Can we achieve the same in Python?
Yes, we can mimic the above code structure in Python as follows.
```python
from abc import ABC
from collections import defaultdict


class Ledger(ABC):
    def get_value(self, i: int) -> int:
        pass


class _LedgerImpl(Ledger):
    def __init__(self):
        self._container = defaultdict(int)

    def get_value(self, i: int) -> int:
        return self._container[i]

    def process(self) -> None:
        ...


def facade() -> Ledger:
    l = _LedgerImpl()
    l.process()
    return l
```
While `_container` is marked as private by convention (i.e., the name is prefixed with an underscore), callers of `facade` can still access `_container` in the returned value as Python does not enforce access restrictions at runtime. So, the implementation details are not truly hidden.
# Can we do better?
(Accidentally) Yes, we can do better. We can use `namedtuple` support in Python to realize the view type.
```python
from abc import ABC
from collections import defaultdict
from typing import Callable, NamedTuple
class Ledger(NamedTuple):
get_value: Callable[[int], int]
class _LedgerImpl():
def __init__(self):
self._container = defaultdict(int)
def process(self) -> None:
...
def get_view(self) -> Ledger:
return Ledger(lambda x: self._container[x])
def facade() -> Ledger:
l = _LedgerImpl()
l.process()
return l.get_view()
```
With this implementation, unless we thread our way through the lambda function created in `get_view`, the implementation details stay truly hidden compared to the previous Python implementation.
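To see the difference concretely, here is a small self-contained check (condensed from the snippet above) showing that the `NamedTuple` view exposes no `_container` attribute to callers:

```python
from collections import defaultdict
from typing import Callable, NamedTuple


class Ledger(NamedTuple):
    get_value: Callable[[int], int]


class _LedgerImpl:
    def __init__(self):
        self._container = defaultdict(int)

    def get_view(self) -> Ledger:
        # The view only captures a closure; the object itself stays hidden.
        return Ledger(lambda x: self._container[x])


view = _LedgerImpl().get_view()
print(view.get_value(7))            # 0 (defaultdict's default value)
print(hasattr(view, "_container"))  # False: internals are not reachable
```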
Also, this implementation pattern relies on composition instead of inheritance. While the earlier implementation pattern can be changed to use composition, it still does not truly hide implementation details.
# When should we use this pattern?
This pattern is ideal to use when implementation details need to be truly hidden.
And, here’s my yardstick for when should implementation details be truly hidden.
If the client programs of a library/program respect the access restrictions communicated via conventions, then this pattern is not helpful/required. This is most likely the case when modules within a library or program act as clients of other modules in that library or program. In these situations, simpler realizations of view types (e.g., the first Python example) will suffice.
On the other hand, if client programs may take dependence on the implementation of a library/program (e.g., for performance reasons) when the current version of the library/program does not support the capabilities needed by the client programs, then this pattern can be helpful to truly hide the implementation.
# Note
I stumbled on this pattern during my coding sessions. Since I found it to be interesting and useful, I blogged about it. That said, as with all patterns, use them only when they are required.
(Originally posted [here.](https://medium.com/@rvprasad/implementing-view-types-in-python-7bed2d2c2a1c))
| rvprasad |
236,550 | deciduously | I Can't Trace Time This is what you get when you try "deciduously" at Dictionary.com: de... | 0 | 2020-01-12T13:19:25 | https://dev.to/deciduously/deciduously-159g | watercooler, mentalhealth | # I Can't Trace Time
This is what you get when you try [`"deciduously"`](https://www.dictionary.com/browse/deciduously) at [Dictionary.com](https://www.dictionary.com/):
> deciduous [ dih-sij-oo-uh s ] / dɪˈsɪdʒ u əs /
1. shedding the leaves annually, as certain trees and shrubs.
1. falling off or shed at a particular season, stage of growth, etc., as leaves, horns, or teeth.
1. not permanent; transitory.
That's never been an accident. There's some of my past I want to carry into my future and some I emphatically don't, and I'm glad my story is always in flux.
I mean, right? Who's with me?
I've been a little extra-prolific this January. I don't think I'm alone, though, in feeling arbitrarily empowered by not only a new year, but a new *decade*.
It doesn't make any sense. It doesn't actually apply to *my life* at all. It's just a month since `x`, three months until `y`. But it is how history partitions time, and we're in a new one now, like it or not.
I've got depression and anxiety problems, but who doesn't? There's so much to LEARN out there. It's exciting. Let's define the decade on our terms, based on what's really important.
Everything is always changing and the mere fact that the [Internet](https://en.wikipedia.org/wiki/Internet) and [FOSS software](https://en.wikipedia.org/wiki/Free_and_open-source_software) exist empowers everyone everywhere. Let's optimize that empowerment, and learn more about our craft.
*Photo by K. Mitch Hodge on Unsplash* | deciduously |
236,573 | 5 Free Python Courses & Tutorials to Start Today | 1. Simple Blogging Analytics Dashboard in Python Build a small data pipeline in Python by... | 0 | 2020-01-12T10:38:08 | https://dev.to/coursesity/5-free-python-courses-tutorials-to-start-today-4dli | ### 1. [Simple Blogging Analytics Dashboard in Python](https://click.linksynergy.com/deeplink?id=Fh5UMknfYAU&mid=39197&u1=quickcode&murl=https://www.udemy.com/course/simple-blogging-analytics-dashboard-in-python/)
Build a small data pipeline in Python by scraping a blog. - Free Course
***Course Rating: 4.0 out of 5.0 (7040 total enrollment)***
In this course, you will :
- Understand the basics of web scraping.
- Understand how to setup a manual data pipeline.
- Learn how to modularize code into functions.
- See how to setup a basic dashboard in Flask.
### 2. [Free Python Tutorial - Python For Beginners: Learn Python For Free](https://click.linksynergy.com/deeplink?id=Fh5UMknfYAU&mid=39197&u1=quickcode&murl=https://www.udemy.com/course/python-for-beginners-free/)
Learn Python through variable & data types, building games, calculators, quizzes and MUCH MORE - Free Course
***Course Rating: 4.3 out of 5.0 (11106 total enrollment)***
In this course, you will :
- Understand and Implement basic Python Code.
- Build Python Projects.
- Automate tasks on the computer by writing simple Python Programs.
- Acquire all the skills to demonstrate an expertise with Python Programming in Your Job Interviews.
- Learn the basics of Object Oriented Programming - Inheritance, Abstract Class and Constructors.
- Acquire all the Python Skills needed to transition into Analytics, Machine Learning and Data Science Roles.
### 3. [Python for Everyone](https://click.linksynergy.com/deeplink?id=Fh5UMknfYAU&mid=39197&u1=quickcode&murl=https://www.udemy.com/python-for-every1/)
This course created for Data Science, AI , ML, DL , Automation Testers, Big Data , Web Developer Aspirants etc.
***Course Rating: 4.2 out of 5.0 (21241 total enrollment)***
In this course, you will :
- Acquire the prerequisite Python skills to move into specific branches - Data Science(Machine Learning/Deep Learning) , Big Data , Automation Testing, Web development etc...
- Have the skills and understanding of Python to confidently apply for Python programming jobs.
### 4. [Python Programming: A Concise Introduction](https://coursera.pxf.io/c/1137078/1213622/14726?u=https%3A%2F%2Fwww.coursera.org%2Flearn%2Fpython-programming-introduction&subId1=devTo)
Python Programming: A Concise Introduction from Wesleyan University. The goal of the course is to introduce students to Python Version 3.x programming using hands on instruction
***Course Rating: 4.6 out of 5.0 (84535 total enrollment)***
In this course, you will learn :
- Python Syntax And Semantics
- Python Libraries
- Computer Programming
- Python Programming
### 5. [An Introduction to Python Programming](https://click.linksynergy.com/deeplink?id=Fh5UMknfYAU&mid=39197&u1=quickcode&murl=https://www.udemy.com/an-introduction-to-python-programming/)
Learn the Fundamentals of Procedural, Object-Oriented, and Functional Programming in Python. - Free Course
***Course Rating: 4.0 out of 5.0 (14866 total enrollment)***
In this course, you will :
- Learn about the most important paradigms of computer programming, including object-oriented and functional programming.
- Learn introductory Python programming constructs. You will be exposed to all of the fundamental constructs of programming such as loops, data structures, and operators.
- Learn procedural programming first to develop a strong basis of computational logic.
- Learn Object-Oriented Programming (OOP) and functional programming. Altogether, this course will unlock the doors to learn GUI development, conduct computer science research, and begin website development in Python.
**We have a collection of the [best Python courses](https://coursesity.com/best-tutorials-learn/python) to learn Python.**
| tushar16992 | |
236,685 | Who am I following? | Hello Dev team! Is there a way to check the people that I am following? The reason that I am asking i... | 0 | 2020-01-12T14:06:37 | https://dev.to/pristakos/who-am-i-following-46na | help | Hello Dev team!
Is there a way to check the people that I am following?
The reason that I am asking is that I receive notifications for posts from people that I have not selected to follow but who appear to be followed by me! Maybe there is a specific setting that I can toggle to stop that from happening?
Thank you very much!
Giannis | pristakos |
236,708 | Study Log: Reading Vue's RFCs | In the last post of "My Dev Journal of 2020" series, I decided to research and summarize RFCs of Vue.... | 4,014 | 2020-01-12T14:41:51 | https://dev.to/nozomuikuta/study-log-reading-vue-s-rfcs-793 | motivation, devjournal | In [the last post](https://dev.to/nozomuikuta/change-on-study-plan-5b8o) of "My Dev Journal of 2020" series, I decided to research and summarize [RFCs of Vue.js](https://github.com/vuejs/rfcs).
I've read 9 of 18 [active RFCs](https://github.com/vuejs/rfcs/tree/master/active-rfcs) and am reading 10th, which is about [new design of `v-model` API](https://github.com/vuejs/rfcs/blob/master/active-rfcs/0011-v-model-api-change.md).
I should be able to finish reading all the RFCs tomorrow.
236,729 | Black Hat 2020 (20th anniversary edition) | Black Hat 2020 (20th anniversary edition) and this year you can go three times to learn the latest ab... | 0 | 2020-01-12T15:46:49 | https://dev.to/osde8info/black-hat-2020-20th-anniversary-edition-3km8 | blackhat, whitehat, hacking, security | Black Hat 2020 (20th anniversary edition) and this year you can go three times to learn the latest about information security
- in Asia
- in the US
- in the UK
https://www.blackhat.com/
```
#devops #devsecops #security #sysadmin
``` | osde8info |
236,745 | Swimming in code | This post offers a little bit of background into my journey as a developer and how slow its been unti... | 0 | 2020-01-12T16:27:33 | https://dev.to/calliernie/swimming-in-code-4bc0 | java, html, css, android | This post offers a little bit of background into my journey as a developer and how slow it's been until I found refuge in Amalitech, a training academy for young developers in the disciplines of Software Development, Software Testing and Verification and Data Science.
As you may know already from the introductory threads, I'm a Ghanaian, 27 years old. My developer journey began 3 years ago after I completed university with a degree in Integrated Rural Art and Industry. I was aiming for a degree in Communication Design, but some things happened and I ended up with that.
After acquiring my degree, I knew very well it wasn't that exciting to me to enter into a profession along those lines, so when my dad asked that I go for my masters in the same field, I never told him no, but I decided to forge my own path with his help because I couldn't contest his decisions. I opted to enter NIIT with the hope of acquiring as many industry-related skills as possible to make me ready for software development. Their training modules were very good, but there wasn't enough time to really delve deeper into projects. We actually completed web development (html and css only) and also java without any projects to show.
Fast forward: after NIIT in August last year, there was no mentor to guide us through anything, so a group of three from our class decided to create a store management app. But because we had no collaboration techniques, we decided that each person would work on his own app and that we would later share ideas and decide what to integrate into the others. The app never got built, but I'm currently working to see if I can whip something up.
A few months later, here I am in the Amalitech training academy receiving international-standard training in Software Testing and Verification: a lot of teamwork here and there, a culture of constructive criticism, and a well-structured course with hands-on projects to make us full-fledged developers in our various fields.
I'm very happy to have found a place with resources to thrive in this industry. I've had my fair share of experience in Java programming, and I'm trying my hand at Android development right now, and I believe at the end of my training I will be a superman in Software Testing.
You will be hearing more about my passion projects in subsequent posts. I look forward to probably working with you on a project or two.
System.out.println("Have a nice day" + 😉); | calliernie |
236,799 |
Can content authors be happy on the JAMstack? | A common JAMstack setup has the content of a site living in a git repository alongside code and templa... | 0 | 2020-01-12T17:53:54 | https://dev.to/shortdiv/can-content-authors-be-happy-on-the-jamstack-38mb | jamstack, jamuary | A common JAMstack setup has the content of a site living in a git repository alongside code and templates. This setup makes working with JAMstack sites easy since setting up, running and deploying sites is a matter of simply pulling a git repository and running a build command. Generally, a separate content database is not a requirement on the JAMstack, though it is strongly encouraged. It is rare that every contributor on a JAMstack project is technical. Expecting a non-technical content author to write in markdown and commit directly to Git is unrealistic, especially since many have found proficiency in feature-rich CMSes like WordPress that provide tools and plugins to customize how content looks.
Fortunately, there are numerous headless CMS options that integrate well with the JAMstack approach to keep both developers and content authors productive and happy. [Contentful](https://www.contentful.com/), [WordPress REST API](https://developer.wordpress.org/rest-api/), and [Netlify CMS](https://www.netlifycms.org/) are some examples of headless CMS solutions available today. Through an easy-to-use WYSIWYG UI inspired by traditional CMSes like WordPress, headless CMSes provide a rich content authoring experience that content authors crave. Features like [open authoring in Netlify CMS](https://www.netlifycms.org/docs/open-authoring/?utm_source=devto&utm_medium=netlifycms-sad&utm_campaign=devex), moreover, offer a seamless way to support guest content contributors so content can be easily submitted without the overhead of setting up new user accounts in a CMS and with the backing of version control through Git. All this gives content authors the freedom they need to make changes and confidently deploy directly to production, thanks to the inner workings of version control to back them up. | shortdiv |
236,820 | Ideas for portfolio site | I am trying to create a portfolio site to increase my online presence; I am hoping this site would help me secure a job too. | 0 | 2020-01-12T19:19:09 | https://dev.to/ilozuluchris/ideas-for-portfolio-site-107l | discuss, career, hiring, help | ---
title: Ideas for portfolio site
published: true
description: I am trying to create a portfolio site to increase my online presence; I am hoping this site would help me secure a job too.
tags: discuss, career, hiring, help
---
Hi, guys.
I have been working as a software engineer for three years now. I am trying to create a portfolio site to increase my online presence and help my current job search, but I am having challenges with content generation for the site: though I can do frontend work quite well, my more complex work has been 'backend stuff', and a lot of it was for my previous company (I signed an NDA, so I don't know how much I can reveal about my work there).
Please help, what do non-frontend people put on their portfolio site?
| ilozuluchris |
236,848 | Hello World with OCaml and ReasonML | ReasonML Hello World without Bucklescript | 0 | 2020-01-12T19:53:57 | https://dev.to/idkjs/hello-world-with-ocaml-and-reasonml-406b | reason, ocaml | ---
title: Hello World with OCaml and ReasonML
published: true
description: ReasonML Hello World without Bucklescript
tags: reasonml, ocaml
---
A note to self on using `OCaml` with the `ReasonML` syntax without `Bucklescript`.
## Setup
1. Install [opam](https://opam.ocaml.org/)
2. Install `reason` language options with `opam install reason`
## HelloWorld
Create a `hello.re` file:
```ocaml
// hello.re
print_string("Hello world!\n");
```
Compile `hello.re` by running `ocamlc -o hello -pp "refmt -p ml" -impl hello.re`.
Open your terminal and run `./hello`.
Your output is:
```sh
➜ helloworld ocamlc -o hello -pp "refmt -p ml" -impl hello.re
➜ helloworld ./hello
Hello world!
```
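For clarity, the single `ocamlc -o hello -pp "refmt -p ml" -impl hello.re` invocation can also be split into two explicit steps. This is only a sketch, assuming `refmt` and `ocamlc` are on your path via opam:

```sh
# step 1: translate Reason syntax into OCaml syntax
refmt -p ml hello.re > hello.ml
# step 2: compile the generated OCaml file as usual
ocamlc -o hello hello.ml
./hello   # prints: Hello world!
```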
## Sources
[https://riptutorial.com/ocaml/example/7096/hello-world](https://riptutorial.com/ocaml/example/7096/hello-world)
[repo](https://github.com/idkjs/reason-native-hello-world)
### [Notes from Discord Channel](https://discordapp.com/channels/235176658175262720/235199057398726660/666030807089152040):
> @idkjs The OCaml toolchain reads only OCaml syntax and the binary AST (which is shared between Reason/OCaml). Happily, the Reason package comes with a tool that is able to translate both syntaxes: `refmt`.
The `-p` option of `refmt` is print. So `refmt -p ml hello.re > hello.re.ml` takes the Reason syntax in the `.re` file and outputs it to the `hello.re.ml` file. I use `-p ml` for debugging, but `-p binary` is used by dune because it is quicker. Then you can use the OCaml binary: `ocamlc -o my_target hello.re.ml` (the `-o` is for output; the default name is machine dependent). Because the file ends with `.ml`, the compiler knows it must build `.cmo`/`.exe`; if passed a `.mli` it will build `.cmi`, etc. The problem with a `.re` file is that it is neither `.ml` nor `.mli`, so we need to say whether it will be an implementation (`-impl`) or an interface (`-intf`). And finally `-pp` means: run this command on this file to get a valid OCaml file (here we use `refmt` because it outputs valid OCaml).
```sh
-impl <file> Compile <file> as a .ml file
-intf <file> Compile <file> as a .mli file
-intf-suffix <string> Suffix for interface files (default: .mli)
-pp <command> Pipe sources through preprocessor <command>
``` | idkjs |
236,963 | How I Explained Commit and Pull Request | It was somewhere in November last year, I had a friend with little Git experience sitting beside me,... | 0 | 2020-01-13T12:40:09 | https://dev.to/marcelcruz/how-i-explained-commit-and-pull-request-2i5l | git, webdev | It was somewhere in November last year, I had a friend with little Git experience sitting beside me, and as the day was passing by he was accumulating files with changes and not being able to make a decision of when and how to commit them, and later on, to PR.
It felt hard to give him any advice not knowing what he was working on (we were working on different projects), so I used a non-IT example. Now I want to share it with you.
*Note: if you want to understand the command-wise "how", well, that's not quite the post for you. But if you want to understand what should belong to the same commit, what shouldn't or when to send a PR, read on.*
I fired: imagine you're revising a book, finding and correcting punctuation, misspellings and changing words wherever needed. We'll work from here.
##### Correcting punctuation
We're going from the first to the last line of chapter one, finding missing or misplaced commas, periods and whatnot. We make a few fixes. Are we done here? Yes! So let's commit it.
Commit message: `fix punctuation on chapter one`
##### Correcting misspellings
We go back to the beginning of chapter one, but now we have a new task: fixing all words that are misspelled. We find and fix a few here and there and again we reach the end of this chapter. Guess what? Let's commit again!
Commit message: `fix misspellings on chapter one`
##### Improving understanding
While working on punctuation and misspellings we noticed that some sentences were too complex to be understood, so what if we **refactor** it with a few simpler synonyms? We change a few words, improve readability, and voilà, all makes sense now. Yes, we commit once more.
Commit message: `refactor chapter one with simpler synonyms`
I guess we're done with this chapter.
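Though this post intentionally skips the command-wise "how", the chapter-one flow above might look roughly like this in a hypothetical scratch repository (all file names and text are made up):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "editor@example.com" && git config user.name "Editor"
# initial draft of the chapter
printf 'it was the best of times it was the worst of times\n' > chapter-one.txt
git add chapter-one.txt
git commit -qm 'add chapter one draft'
# first pass: punctuation only, committed on its own
printf 'it was the best of times, it was the worst of times.\n' > chapter-one.txt
git commit -qam 'fix punctuation on chapter one'
git log --oneline
```

Each pass over the chapter becomes its own commit, so any single change can later be reverted without touching the others.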
##### It's time for a pull request
PR title: `Fix punctuation and misspellings and improve readability of chapter one`
PR description: `This PR contains fixes to missing and misused punctuation, fixes for misspellings using en-GB and improvements of understanding and readability by using simpler synonyms that replace uncommon words on chapter one.`
Let's review what we just did:
- The commits were **atomic**. What does that mean? It means each commit contains only related changes. Go back and read our commit messages again. Do you see it? Each commit deals with one and only one type of fix and/or change. If later on the author of the book says that the book should indeed be hard to understand, fine, we just need to revert the third commit, the one that adds synonyms to facilitate understanding. Easy peasy.
- The commit messages were straightforward, short and described what happened in the code that's being sent. Messages like `fix bug` or `add feature` are not descriptive and won't allow you to easily remember what was done at a later point in time. Also, good practice says that the first word should indicate the type of changes the commit contains: `fix`, `refactor`, `style`, `docs`, etc. You can find more about it <a href="https://seesparkbox.com/foundry/semantic_commit_messages" target="_blank">here</a>.
- We grouped all commits that belong to the current changes on chapter one in the same PR. Nothing else. This makes the life of the person that is going to review it easier because the scope of the changes is easily identified, plus it does not interfere with chapter two, which in turn might already be under revision by the colleague next to you.
- The PR has a succinct title, like our commits, but its description goes deeper into how the changes were applied.
It's likely that your case isn't revising chapter one of some book, but instead adding a new feature to your app, a new section on your website, or anything that belongs to the same "box", but the logic should remain the same.
I'd be happy to know if this analogy makes sense to you, or how else you would tackle such a situation.
Happy coding! | marcelcruz |
236,970 | PHP is still awesome (even though it's Awful!) | My thoughts on why PHP still has a prominent place in web development. | 0 | 2020-01-13T03:54:01 | https://dev.to/the3rdc/php-is-still-awesome-even-though-it-s-awful-5ga8 | php | ---
title: PHP is still awesome (even though it's Awful!)
published: true
description: My thoughts on why PHP still has a prominent place in web development.
tags: php
---
When I got into web development PHP was the default. It's what the web was built on. (To be fair - it seemed like half the web was WordPress). LAMP was the de facto stack - and you were expected to know it.
Since then PHP seems to have fallen out of style hard. I still work on a SaaS product with a mainly PHP back-end, and often have to answer "why PHP?" The truth there is that it's been around since that was the standard and there's no meaningful reason to change it (at least it's not Ruby!) But it got me thinking - are there scenarios where I'd still reach for PHP first today? Yes.
Now I'll still be the first to roll my eyes and say PHP is "awful". It's got no shortage of annoyances. Two of my favorites are:
- The type juggling! Being a loosely typed language is enough to turn a lot of people off on its own - but PHP seems to make some exceptionally unintuitive decisions. And undefined variables? Meh, let's just throw a warning and give it a try.
- The most inconsistent function names and signatures I've ever seen!
*(Go ahead and share your favorite gripes in the comments, we'll all have a good chuckle)*
But here's my humble list of things I think are really great about PHP.
**Web First**
PHP was designed for websites. It has the request/response lifecycle built into its DNA. Query params and request payloads are available in superglobals. Stdout is the response by default.
In any other language I've used to build a web site/app I start with importing the right package/library to run a webserver. I don't have to do that in PHP.
Yes, that means I need Apache, nginx, lighttpd or similar. But it also means that I can write code pretty oblivious as to which one of those is in use. It also generally means that each request will be served in its own thread.
In my experience there's a lot of boilerplate to serve content over http in other languages that's just "already there" in PHP.
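To illustrate, a complete "endpoint" can be a single file. This is only an illustrative sketch (the file name and query parameter are made up, not from any particular project):

```php
<?php
// hello.php - request it as /hello.php?name=World behind Apache/nginx
$name = $_GET['name'] ?? 'stranger';      // query params arrive in a superglobal
header('Content-Type: application/json'); // shape the response headers directly
echo json_encode(['greeting' => "Hello, {$name}!"]); // stdout *is* the response
```

No server bootstrap, no routing library - the web server hands the request straight to the script.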
**Easy to work with APIs/Databases**
I do a lot of work in the data integration space, and PHP makes it pretty easy to talk to other services.
Using curl is simple and nicely structured (if you don't mind looking up a lot of flags) and Oauth is a breeze, so REST APIs are pretty easy.
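As a rough illustration (the endpoint and header here are hypothetical), a bare-bones GET with the curl extension might look like:

```php
<?php
// minimal sketch of a JSON GET request with PHP's curl extension
$ch = curl_init('https://api.example.com/items'); // hypothetical endpoint
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // return the body instead of echoing it
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Accept: application/json']);
$body = curl_exec($ch);
if ($body === false) {
    error_log(curl_error($ch)); // surface transport errors
}
curl_close($ch);
$data = json_decode($body, true); // decode into an associative array
```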
PHP's built-in SOAP client is surprisingly good too (considering how much I hate working with SOAP APIs) - just point it at a valid WSDL and every method is defined for you.
Connectivity to most of the popular relational databases comes out of the box - with more available with right drivers.
Also, PHP has been ubiquitous for long enough that major platforms offering an SDK for their API will include a PHP variant.
**Easy to Hire/Train**
Again, PHP has been so popular for so long we have no trouble finding developers who are comfortable with it (even if they say "well, I prefer Python, but...").
But I've also had several people who are right out of college - which seems to mean they know either Java or C++ and for some reason CSS (weird mix... I'd like to see these curriculums). I've never had much trouble getting them spun up on PHP.
Some folks will point out that the ease of jumping in can be dangerous (a lot of people trying to walk before they run - and with the aforementioned loose types). That's true, and it's a way JS and PHP seem similar. But in a setting with good mentorship it's been an asset to us.
**So when should I use PHP?**
Here's a few examples of times I would reach for PHP first.
- Writing a RESTful API.
- Writing a back-end that needs to interface with a lot of different platforms and APIs.
- Any web app that needs to serve dynamic content from a DB but does *not* require any real-time interaction between users (like a blog or a forum).
- An app that I want to be easily "moddable" or have a "plugin ecosystem". Though NodeJS is a good contender here too.
So... Gimme your thoughts! What did I miss that you love or hate about PHP. Is it dead? Let me know. ; ) | the3rdc |
236,999 | Fork Me! FCC: Test Suite Template | A post by sammysmart95 | 0 | 2020-01-12T23:00:31 | https://dev.to/sammysmart95/fork-me-fcc-test-suite-template-ked | codepen | {% codepen https://codepen.io/SamSmart/pen/rNaKVNq %} | sammysmart95 |
237,017 | Angular `ng serve`: importing SCSS files in the global styles.scss vs including them in angular.json | With this article I'd like to share a tip I've discovered to quicken the SCSS rebuild time when using... | 0 | 2020-01-19T18:29:47 | https://dev.to/lucanardelli/angular-ng-serve-importing-scss-files-in-the-global-styles-scss-vs-including-them-in-angular-json-1f5n | angular, webpack, webdev | With this article I'd like to share a tip I've discovered to quicken the SCSS rebuild time when using `ng serve` and several 3rd party SCSS files. By rebuild time, I mean the time it takes for webpack to parse the modifications in the project's SCSS files and update the web app whenever there's a modification in said SCSS files.
While working on Angular projects, whenever I had to include external SCSS files (e.g. Bootstrap, FontAwesome) I always `@import`'ed them in my global `styles.scss` file.
Example from a recent project:
```scss
@import "global-styles/fontawesome.scss";
@import "global-styles/bootstrap.scss";
@import "global-styles/nebular.scss"; // This file calls nb-install() to init Nebular's theme system, after setting the relevant variables and overrides for my theme
// [My own SCSS rules below]
```
This worked perfectly, until I had to edit one of my global SCSS rules. Every modification triggered a rebuild of the whole styles.scss, and that took somewhere around 15 seconds every time.
By selectively commenting out some of the imports, I saw that the biggest time-consumer was `nebular.scss`, I guess because of their theme system. Without it, my rebuild time was ~3 seconds.
I did not want to wait 15+ seconds for every small SCSS modification, especially because 99% of my modifications had nothing to do with the SCSS files I was importing! I couldn't find much on the internet apart from this StackOverflow question (https://stackoverflow.com/questions/55309150/importing-styles-in-angular-json-vs-importing-in-styles-css). However, it made me think: since having multiple entries in the `styles` section of `angular.json` is basically the same as adding multiple independent CSS files to the app, maybe I could split my SCSS files in such a way that Webpack could be smart about it and only recompile the parts where a change was detected.
In my case, the change was simple. Since the 3 files were independent one from the other (i.e. no file was referencing the other) I simply removed the 3 import lines and I moved them to my `angular.json` styles section:
```
"styles": [
"src/global-styles/fontawesome.scss",
"src/global-styles/bootstrap.scss",
"src/global-styles/nebular.scss",
"src/styles.scss"
],
```
By doing this, whenever I had to change something, Webpack only rebuilt the relevant SCSS file where the change happened. If I had to change something in the `nebular.scss` file I would have to wait ~13 seconds, but if the change was, for example, in the `styles.scss` file, the rebuild time went down to **800ms**! I did not observe any changes in the app, so I'd say that, CSS-wise, the output should be the same as the `@import` approach.
Of course, this approach only works if we have several independent SCSS files, but this should be the case whenever we import external SCSS dependencies in our project. This could also be used in case the project's own SCSS files start to become heavy in terms of build time: whenever files can be isolated with their own dependencies, they can be moved to `angular.json` instead of being imported in the main `styles.scss` file. | lucanardelli |
237,036 | Cloud: Multi-Tenant Architecture and its Issues | What is multi-tenant architecture? Virtualization + Resource Sharing = Multi-Tenancy Multitenancy... | 0 | 2020-01-13T01:21:08 | https://dev.to/sciencebae/multi-tenant-architecture-and-it-s-issues-h06 | cloud, azure, cloudcomputing, newbie |
**What is multi-tenant architecture?**
`Virtualization + Resource Sharing = Multi-Tenancy`
Multitenancy is a type of computer architecture in which one or more software instances are created and executed on top of primary software. Multitenancy allows multiple users (tenants) to work in the same software environments at the same time at their own user interfaces.
Multitenancy in cloud computing is basically resource sharing; it is a “natural result of trying to achieve economic gain in Cloud Computing by utilizing virtualization” (“Multi-Tenancy in Cloud Computing”, p. 345).
Some of the types of cloud computing services that exist are:
• SaaS/AaaS: Software-as-a-Service
• PaaS: Platform-as-a-Service
• IaaS: Infrastructure-as-a-Service [hardware and software available for service]
SaaS uses a highly multi-tenant architecture and the user contexts are separated from one another logically at both runtime and rest. SaaS/AaaS is defined as a software model where both the application and the data are hosted on a cloud by independent developers, which enables a user to access the software when needed from any location. An example of such software would be Microsoft Business Productivity Online Suite, Dropbox, Google Apps, etc. SaaS, in essence, is a software delivery model where a provider or a third party hosts an application and makes it available to customers on a subscription basis where they would not have to commit to long-term contracts and can quit at any given moment when the services are no longer required. In SaaS customers cannot monitor or control the underlying infrastructure.
SaaS also has two models: simple multi-tenancy and fine-grained multi-tenancy. The simple multi-tenancy means that every user has their own resources that are different from other users. In fine-grained multi-tenancy all resources are shared between users except customer-related data.
Some advantages of multi-tenancy are:
• Same software version is available to all customers
• Global accessibility
• Software development and maintenance are done by the provider
• Provider hosted software is centrally located to be made easily accessible through the web
• APIs allow for integration between different pieces of software
------------
SaaS and multi-tenancy, while being a powerful business model with many advantages, also has issues and challenges.
*Security*. Putting your data into someone else’s hands and running your software using someone else’s CPU is a great risk and requires a tremendous amount of trust. Some of the well-known cloud security issues are data loss, hacks, and some others. The multi-tenancy model introduced new security challenges and vulnerabilities that require new techniques to deal with. The examples could be the following: one tenant gaining access to the neighbor’s data, data is accidentally returned to the wrong tenant, or one tenant negatively affecting another in terms of resource sharing. These vulnerabilities can be exploited for personal gain.
*Performance*. Because SaaS applications reside in different locations, the response time in accessing them may vary from time to time. While cloud infrastructure focuses on enhancing the overall system performance as a whole, it is impossible to predict the response time of a specific application, and in general SaaS applications run at slightly lower speeds than server applications.
*Interoperability*. Each cloud provider has its own way of how clients, applications and users interact with the cloud. This undermines the development of cloud ecosystems by forcing clients to be locked in with a particular provider. It prevents users from choosing among alternative vendors and providers in order to optimize performance within their company/organization. Proprietary cloud APIs make it extremely difficult to integrate cloud services with an organization’s own existing system, such as an on-premises data center. The goal of interoperability is to create seamless fluid data across clouds and between cloud and local applications.
| sciencebae |
237,051 | Web APIs in Node.js Core: Past, Present, and Future | Web APIs developed and standardized by the browsers have been serving client-side JavaScript applicat... | 0 | 2020-01-13T02:32:46 | https://dev.to/i_am_adeveloper/web-apis-in-node-js-core-past-present-and-future-459b | node, webdev, javascript, tutorial | Web APIs developed and standardized by the browsers have been serving client-side JavaScript applications with a wide selection of features out of the box, while Node.js has been developing another set of APIs that are today the de facto standards for server-side JavaScript runtimes. There is now a conscious effort to bring the two worlds closer together, in particular by introducing more Web APIs into Node.js core, but it’s not an easy ride - not every Web API, designed for the browsers, makes sense for Node.js.
In this talk, we are going to take a look at the story of Web APIs in Node.js core - what Node.js has implemented, what is being discussed, what is blocking more APIs from being implemented, and what we can do to improve the developer experience of the JavaScript ecosystem.
{% youtube kUylNB6RZ9Q %} | i_am_adeveloper |
237,073 | Building a Pokedex with Next.js | Next.js is a React framework for JavaScript created by Zeit which allows you to build server-side ren... | 0 | 2020-01-13T05:29:53 | https://dev.to/marcdwest32/building-a-pokedex-with-next-js-4jjj | javascript, react, nextjs, codenewbie | Next.js is a React framework for JavaScript created by Zeit which allows you to build server-side rendered single-page web applications. We're going to be using Next.js to create a Pokedex application. All you need to get started making an application with Next.js is npm version 5.2 or higher, and you can simply run the `create-next-app` command in the CLI, passing the name of your app as the second parameter.
`npx create-next-app pokedex`
You should see this message `Installing react, react-dom, and next using npm...`, and then a success message. Your newly created app now has everything necessary to start. Entering `npm run dev` in the console will get your development page up and running on `http://localhost:3000/`, and upon visiting the site you will see this Next.js welcome screen -

Back inside your code editor, you will see a `pages` folder that was automatically generated for you when the application was created. This is where the top-level React components will be. Inside this folder is the `index.js` file, which is currently being rendered on localhost:3000. For our Pokedex application we won't need anything below the closing `</Head>` tag, nor will we need to import the Nav component. It should now look like this -

Now, to get our pokemon we'll be using the pokemon api found here - `https://pokeapi.co/`. Next.js has a unique lifecycle hook, `getInitialProps`, that allows us to access route-related data such as request and response and use that data in our app as props. Like all lifecycle hooks, we just need to tell it what we need it to do; in this case, catch us some pokemon! Start by importing `axios` and then, below the `Home` functional component in `index.js`, craft your `getInitialProps` method to reach out to the pokemon api and give you back all 964 of the critters. You can now pass the retrieved data to the `Home` component as props and, using regular JavaScript, map the captured pokemon to your site.

Resulting in -

--missing section--
Sweet! Next you're going to want to display the individual pokemon and their information on their own page. Inside the `pages` folder, create a new folder named `pokemon`. In your new `pokemon` folder, make a file called `[number].js`. This odd-looking naming convention is unique to Next.js. It signifies to Next.js that you will be creating dynamic routes for each of your pokemon. Craft your `getInitialProps` here with `query` being passed as the parameter. The `query` will contain the number in the URL for the corresponding pokemon you wish to display. You can then use that number to make your axios call to the pokeapi for the specific critter you need and display their data. I've chosen to display the name, default image, and shiny image for each pokemon.

Almost done! Back in `index.js` there's just a few changes to make to tie it all together. Once again import `Link`, this time to link us to the pokemon pages. Add a `<Link href={`/pokemon/${i + 1}`}>` tag to the return statement inside your map function which renders the pokemon list. This will tie the corresponding pokemon to the query in `[number].js`.

That's it! Now head to your browser to catch some pokemon. Clicking on a pokemon will dynamically route you to that pokemon's page and show you their details.

I hope you enjoyed building a Pokedex using Next.js, and if you really liked your app, they also make it very easy to deploy at `zeit.co`. With a few simple steps you can have your Pokedex on the web for free.
tl/dr https://pokedex.marcdwest.now.sh/ | marcdwest32 |
237,132 | Does Expo Support React Native Web? | Short answer: yes, expo supports react NATIVE web since SDK version 33. Expo SDK v33 is the first SD... | 0 | 2020-01-13T07:22:06 | https://dev.to/evenmik/does-expo-support-react-native-web-6po | reactnative | <b>Short answer: yes, <a href="https://codersera.com/blog/does-expo-support-react-native-web/">expo supports react NATIVE web</a> since SDK version 33.</b>
Expo SDK v33 is the first SDK which supports the web. It also comes with TypeScript support and is based on a React Native version that includes hooks. It combines many new features: APIs, workflows, developer tools, and many more upgrades in SDK v33. Developers like to use this version as it is new and up-to-date; it's also the trend to follow the latest versions of technology.
<img src="https://codersera.com/blog/wp-content/uploads/2019/07/SDK-page-art-2x._CB505259190_-1024x488.png">
Four new APIs were added in SDK v33:
The Crypto API and Random API make it possible to generate and work with cryptographically secure strings; they can also be used as primitives to build fully featured crypto libraries in JavaScript. The Sharing API provides a way to share media and other files with different applications on the device. And the VideoThumbnails API allows you to generate an image thumbnail from a video file.
<b>Phases of technology: React to React Native Web!</b>
Phase 1. First, React appeared on the market. It influenced and attracted developers and changed the way apps and websites were built. React was like a relief to all developers, as it became the easiest way of developing.
Phase 2. Second came React Native, which took all the good and updated parts of React. This version helps developers build mobile apps; its attributes and commands make things easier for them. It also holds some key elements like JavaScript, markup with JSX, and Flexbox.
Phase 3. Last but not least, after React Native came React Native Web, which is a mixture of React and React Native. As it is newly introduced to the market, it is a really advanced and upgraded version, in which both websites and applications can run. By using its functions and commands you can make your websites more effective and attractive to clients and users. It is even possible now to translate React Native primitives to the DOM language using HTML tags. This is done only by React Native Web.
You can also read related material on https://github.com/expo/expo-sdk
You can read the official blog here as well. https://blog.expo.io/expo-sdk-v32-0-0-is-now-available-6b78f92a6c52 | evenmik |
237,137 | Time tracking software - SkyTime Review | In this review we'll tell you about Skytime — our own time tracking software. It provides information... | 0 | 2020-01-13T07:55:58 | https://dev.to/artemkobilinskiy/time-tracking-software-skytime-review-m8k | startup, productivity, functional | In this review we'll tell you about Skytime — our own time tracking software. It provides information on the productivity and efficiency of each employee. Also, it allows them to work from any corner of the globe and still contribute. It supports all major operating systems and has integration with platforms such as GitLab, Jira and Asana.
We plan to make this app a part of our upcoming business intelligence ERP system. Stay tuned for that if you want to see Skytime in action.
We were motivated to create such a tool to provide our clients with detailed reports and further increase the transparency of the development process.
Need custom software for your business? Contact us and see it for yourself!
Learn more about us on https://digitalskynet.com | artemkobilinskiy |
237,163 | Debug Kubernetes Operator-sdk locally in Goland | how to setup the debug for the operator-sdk in Goland | 0 | 2020-01-14T08:18:18 | https://dev.to/austincunningham/debug-kubernetes-operator-sdk-locally-in-goland-kl6 | kubernetes, debug, operatorsdk, goland | ---
title: Debug Kubernetes Operator-sdk locally in Goland
published: true
description: how to setup the debug for the operator-sdk in Goland
tags: Kubernetes, Debug, Operator-sdk, Goland
cover_image: https://thepracticaldev.s3.amazonaws.com/i/4n0nt3dyb3k20u8h7d7x.png
---
This is a follow on from this [article](https://austincunningham.ddns.net/2019/operatorvscode) setting up the operator-sdk debug in vscode.
## Setup Goland to debug
The setup for Goland is pretty similar to Vscode.
Delve is a debugger for Go. It can be downloaded from https://github.com/go-delve/delve/tree/master/Documentation/installation or installed with go:
```bash
go get -u github.com/go-delve/delve/cmd/dlv
```
In Goland go to `Run\Edit Configurations...`

Click on the plus symbol `+` and add `Go Remote`. Add a name and click `Apply`; the defaults are fine.

You need to run delve with the command line switch `--enable-delve` on the `up local` command
e.g. The operator I am working on is called `integreatly-operator` so the commands to run it are as follows
```bash
# You need to set the namespace to watch
$ export WATCH_NAMESPACE=integreatly-operator
# You can then run the up local with delve enabled
$ operator-sdk up local --namespace=integreatly-operator --enable-delve
# you will see something like
INFO[0000] Running the operator locally.
INFO[0000] Using namespace integreatly-operator.
INFO[0000] Delve debugger enabled with args [--listen=:2345 --headless=true --api-version=2 exec build/_output/bin/integreatly-operator-local --]
API server listening at: [::]:2345
```
> *NOTE*: command changed with v0.15.0 `operator-sdk run --local --namespace=integreatly-operator`
Click on `Run\Debug 'whatYouCallYourGoRemote'`
Goland will start to debug and stop at your breakpoints.
 | austincunningham |
237,164 | Do you think remote mentoring could work? | I am considering creating an online mentoring program for everybody who would like to start to code o... | 0 | 2020-01-13T08:26:00 | https://dev.to/starbist/do-you-think-remote-mentoring-could-work-2p2e | discuss, beginners | I am considering creating an online mentoring program for everybody who would like to start to code or improve coding skills, especially in the UI field.
I would like to know whether you know of such sites or programs already. What do you think of the idea? | starbist |
237,247 | A Rubyist's guide to Javascript | To start this post of, I feel it fitting to put one popular misconception to rest: Javascript is not,... | 0 | 2020-02-17T15:04:48 | https://dev.to/cpatercodes/a-rubyist-s-guide-to-javascript-5ank | javascript | To start this post off, I feel it fitting to put one popular misconception to rest: Javascript is not, in fact, related to Java. Mine, at least, is beginning to seem like a distant cousin of working script (and sometimes even the kind that does things!). I've come to learn a couple of things about the language along the way, and about its similarities to and differences from Ruby.
**Semi-colons, semi-colons everywhere!**
Unlike in Ruby, the developer needs to put a semi-colon at the end of most lines of code. Exceptions can be made, however, when defining a function (what a Rubyist would call a method) or even some simpler logic.
This is less extreme and consistent than languages such as C++, which outright ignore whitespace and only move on after a semi-colon, but it is nonetheless possible to use a semi-colon in place of a linebreak (as evidenced by some rather unsightly source files... looking at you, jQuery!).
**..Don't forget empty brackets!**
If I've learnt anything from struggling with some particularly nerve-wracking bugs, it's that you need parentheses after the name of any function call more complex than returning a stored value. Your function doesn't take arguments? Empty parentheses it is!
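A minimal sketch of the difference (the function name `currentYear` is just an illustration):

```javascript
function currentYear() {
  return 2020;
}

// Calling with empty parentheses runs the function and returns its result:
console.log(currentYear()); // 2020

// Omitting them hands you the function object itself, not its result;
// a classic source of "why is my value a function?" bugs:
console.log(typeof currentYear); // "function"
```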
**Ce n'est pas 'puts'**
Firstly, as a Rubyist you may be familiar both with *puts* (or sometimes *print*) for outputting text and with *p* for displaying the value of a variable during specs.
When first learning of *console.log* in javascript, many will see parallels to the former but it is in fact in between the two.
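For instance (a small sketch; the `user` object is just a made-up example):

```javascript
const user = { name: "Matz", language: "Ruby" };

console.log("Hello!"); // like Ruby's puts: prints the string
console.log(user);     // like Ruby's p: prints the object's structure
```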
**The actual 'puts' of JS**
If you really, really want to say something to the user, you want to use *document.getElementById(elementId).innerHTML = desiredText* (swap in *getElementsByClassName* or *getElementsByTagName* as desired, noting that those return collections of elements) to manipulate the content inside an HTML element.
Because you see, reader, Javascript is a front end language intended to manipulate HTML (and sometimes CSS).
**Function? Class? Was this ever meant to be?**
While the most recent standard for Javascript (ES6) does have a class syntax of sorts (and has long had a syntax for 'prototypes' of functions), the distinction between classes and methods which exists for many backend languages doesn't translate as cleanly on to JavaScript for the most part as a matter of design. Functions are added to a 'class' by means of *className.prototype.functionName = function(){ code here }*, and instances of said class defined by *var instanceName = new className*.
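As a rough sketch of the prototype pattern described above (the `Song` constructor is a hypothetical example):

```javascript
// A constructor function stands in for the "class":
function Song(title) {
  this.title = title;
}

// Functions are added to the "class" via its prototype:
Song.prototype.play = function () {
  return "Now playing: " + this.title;
};

// Instances are created with `new`:
var track = new Song("Take On Me");
console.log(track.play()); // "Now playing: Take On Me"
```

Every instance created with `new Song(...)` shares the single `play` function through the prototype chain, rather than each carrying its own copy.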
Javascript, ultimately, is a front end tool intended to manipulate HTML and CSS on the fly.
Few could have anticipated the complexity of logic which it has evolved to be able to take on - especially of the kind that traditionally would be relegated to back end logic - but methods exist to create essentially the entirety of a web application's logic in Javascript.
It is for this reason I think it is felicitous to touch on two main approaches that can be taken:
**Front end single page web app:**
Usually the fact that pure JS can only really perform actions within the scope of the rendered page can come across as quite daunting; how on earth do you carry data entered or produced within one part of your app across the app as a whole? But what if we don't move between pages at all, and do all our logic manipulations right there and then? Well then, reader, this curse can be made into a blessing.
The great thing about not moving between different pages in a web app is that you don't have to go to all the trouble of constantly sending out requests to the server.
This can be a lifesaver for an app's users (in figurative terms, but sometimes literal depending on what your app *does*) if it just so happens that their internet is pretty terrible or their provider charges a lot for that precious extra traffic.
**Using Node.js:**
While Node.js is a technology I must still delve further into and learn, its main appeal is that it allows both the frontend and the backend logic to be unified under a single language. From the outset, this makes it far easier to take calculations made by interactive elements on the frontend and update records held on the server-side accordingly, and in turn carry these between pages.
**In conclusion**
JavaScript is a surprisingly versatile - and at times confusing - language which has grown from a controlling medium for dynamic frontend elements to hosting capabilities on the level of a backend language.
It is by understanding its history and the way its scope has grown profoundly from its original intended purpose that we can understand the quirks and conventions that distinguish it from other languages. There are many more I could list, but I wanted to cover what was most striking to me about JS coming from a Ruby background.
| cpatercodes |
237,265 | How I Teach: Version Control *with G Sheets* | Hello and (a very late) happy new year! For the first blog post of the decade I thought I'd talk a l... | 0 | 2020-01-13T12:03:15 | https://dev.to/miameroi/how-i-teach-version-control-with-g-sheets-4135 | javascript, techlead, mentor, training |
Hello and (a very late) happy new year!
For the first blog post of the decade I thought I'd talk a little bit about how I like to introduce students to version control ✨.
I was first taught version control through a GitHub Desktop demo, I got there in the end but let me tell you ... it took months before I finally understood what a commit is! After being taught version control by two courses after that, I believe I have narrowed down the issue that a lot of people encounter when they first teach version control:
### There isn't anything you can compare it to - you could say it's like 'saving' a file but let's be real, it's not really!
A fantastic and visual example of how GitHub works is Google Sheets version history feature. Go ahead and create a sheet, you could even name it repository to introduce the concept!
Once the sheet is created, you can access its version history like so:

The version history has a list of all the changes that have been made by the user over time:

These are commits, you can click on them to view what the user changed and you can even revert back to that commit:

After running through this example, you can go back to GitHub in the browser and show the exact same process:
- Create a repo
- Make a change (do this online first)
- Commit it and show the commit in the commits history
## Top Tips
I would only introduce the idea of local and remote repositories after both these demos. Since you have just created a remote repository, the transition should be quite smooth. Simply clone or download the repository and continue working on it locally to convey the concept of a local repo.
Here are some common mistakes to avoid:
- Demo version control from the command line;
- Even worse getting the students to code along with you in the command line, chances are half of the students will get lost in spelling mistakes and not focus on the wider concepts;
- Talking about branches and merges straight away;
I hope this was helpful and let me know in the comments what topic you want me to cover next!
| miameroi |
237,283 | Hacktoberfest goodies are here! | So, after 3-4 months, my Hacktoberfest goodies arrived! I am very happy since this was my first part... | 0 | 2020-01-13T12:28:37 | https://dev.to/maenad/hacktoberfest-goodies-are-here-4npb | hacktoberfest, hackathon, goodies, merchandise | So, after 3-4 months, my Hacktoberfest goodies arrived!
I am very happy since this was my first participation in this type of event, and even if it seems cliche, it's very motivating and inspiring!
| maenad |
237,352 | while | learning kotlin at hack station from facebook | 0 | 2020-01-13T14:30:17 | https://dev.to/grandpa44997/while-4bal | replit, kotlinbeta | learning kotlin at hack station from facebook
{% replit @Grandpa/aula-01-while %} | grandpa44997 |
237,394 | Blazor Full-Stack Web Dev in ASP .NET Core 3.1 | This is the second of a new series of posts on ASP .NET Core 3.1 for 2020. In this series, we’ll co... | 0 | 2020-01-13T15:40:33 | https://wakeupandcode.com/blazor-full-stack-web-dev-in-asp-net-core-3-1/ | csharp, dotnet, webdev | ---
title: Blazor Full-Stack Web Dev in ASP .NET Core 3.1
published: true
date: 2020-01-13 15:00:00 UTC
tags: csharp, dotnet, webdev
canonical_url: https://wakeupandcode.com/blazor-full-stack-web-dev-in-asp-net-core-3-1/
---

This is the second of a new [series of posts](https://wakeupandcode.com/aspnetcore/#aspnetcore2020) on ASP .NET Core 3.1 for 2020. In this series, we’ll cover 26 topics over a span of 26 weeks from January through June 2020, titled **ASP .NET Core A-Z!** To differentiate from the [2019 series](https://wakeupandcode.com/aspnetcore/#aspnetcore2019), the 2020 series will mostly focus on a growing single codebase ([NetLearner!](https://wakeupandcode.com/netlearner-on-asp-net-core-3-1/)) instead of new unrelated code snippets each week.
Previous post:
- [Authentication & Authorization in ASP .NET Core 3.1](https://wakeupandcode.com/authentication-authorization-in-asp-net-core-3-1/)
**NetLearner on GitHub** :
- Repository: [https://github.com/shahedc/NetLearnerApp](https://github.com/shahedc/NetLearnerApp)
- v0.2-alpha release: [https://github.com/shahedc/NetLearnerApp/releases/tag/v0.2-alpha](https://github.com/shahedc/NetLearnerApp/releases/tag/v0.2-alpha)
# In this Article:
- B is for Blazor Full-Stack Web Dev
- Entry Point and Configuration
- Rendering the Application
- LifeCycle Methods
- Updating the UI
- Next Steps
- References
# B is for Blazor Full-Stack Web Dev
In my 2019 A-Z series, I covered [Blazor for ASP .NET Core](https://wakeupandcode.com/blazor-full-stack-web-dev-in-asp-net-core/) while it was still experimental. As of ASP .NET Core 3.1, server-side Blazor has now been released, while client-side Blazor (currently in preview) is expected to arrive in May 2020. This post will cover server-side Blazor, as seen in [NetLearner](https://wakeupandcode.com/netlearner-on-asp-net-core-3-1/).
To see the code in action, open the solution in Visual Studio 2019 and run the [NetLearner.Blazor](https://github.com/shahedc/NetLearnerApp/tree/master/src/NetLearner.Blazor) project. All modern web browsers should be able to run the project.
<figcaption>NetLearner.Blazor web app in action</figcaption>
# Entry Point and Configuration
Let’s start with [Program.cs](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Program.cs), the entry point of the application. Just like any ASP .NET Core web application, there is a Main method that sets up the entry point. A quick call to **CreateHostBuilder** () in the same file ensures that two things will happen: The Generic Host will call its own **CreateDefaultBuilder** () method (_similar to how it works in a typical ASP .NET Core web application_) and it will also call **UseStartup** () to identify the Startup class where the application is configured.
```
public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}
```
Note that the Startup class doesn’t _have _to be called Startup, but you do have to tell your application what it’s called. In the [Startup.cs](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Startup.cs) file, you will see the familiar **ConfigureServices** () and **Configure** () methods, but you won’t need any of the regular MVC-related lines of code that set up the HTTP pipeline for an MVC (or Razor Pages) application. Instead, you just need a call to **AddServerSideBlazor** () in **ConfigureServices** () and a call to **MapBlazorHub** () in **Configure** () while setting up endpoints with **UseEndPoints** ().
```
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        ...
        services.AddServerSideBlazor();
        ...
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        ...
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapControllers();
            endpoints.MapBlazorHub();
            endpoints.MapFallbackToPage("/_Host");
        });
    }
}
```
Note that the **Configure** () method takes in an app object of type **IApplicationBuilder** , similar to the IApplicationBuilder we see in regular ASP .NET Core web apps. A call to **MapFallBackToPage** () indicates the “/\_Host” root page, which is defined in the [\_Host.cshtml](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Pages/_Host.cshtml) page in the Pages subfolder.
# Rendering the Application
This “app” (formerly defined in a static “index.html” in pre-release versions) is now defined in the aforementioned “\_Host.cshtml” page. This page contains an \<app\> element containing the main App component.
```
<html>
...
<body>
    <app>
        <component type="typeof(App)" render-mode="ServerPrerendered" />
    </app>
    ...
</body>
</html>
```
The HTML in this page has two things worth noting: an `<app>` element within the `<body>`, and a `<component>` element of type **App** with a render-mode attribute set to “ServerPrerendered”. This is one of 3 render modes for a Blazor component: Static, Server and ServerPrerendered. For more information on render modes, check out the official docs at:
- Blazor Hosting Models: [https://docs.microsoft.com/en-us/aspnet/core/blazor/hosting-models?view=aspnetcore-3.1](https://docs.microsoft.com/en-us/aspnet/core/blazor/hosting-models?view=aspnetcore-3.1)
According to the documentation, this setting “_renders the component into static HTML and includes a marker for a Blazor Server app. When the user-agent starts, this marker is used to bootstrap a Blazor app._“
The App component is defined in [App.razor](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/App.razor), at the root of the Blazor web app project. This App component contains nested [authentication-enabled](https://wakeupandcode.com/authentication-authorization-in-asp-net-core-3-1/) components, that make use of the MainLayout to display the single-page web application in a browser. If the user-requested routedata is found, the requested page (or root page) is displayed. If the user-requested routedata is invalid (not found), it displays a “sorry” message to the end user.
```
<CascadingAuthenticationState>
    <Router AppAssembly="@typeof(Program).Assembly">
        <Found Context="routeData">
            <AuthorizeRouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" />
        </Found>
        <NotFound>
            <LayoutView Layout="@typeof(MainLayout)">
                <p>Sorry, there's nothing at this address.</p>
            </LayoutView>
        </NotFound>
    </Router>
</CascadingAuthenticationState>
```
The [MainLayout.razor component](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Shared/MainLayout.razor) (under /Shared) contains the following:
- NavMenu: displays navigation menu in sidebar
- LoginDisplay: displays links to register and log in/out
- \_CookieConsentPartial: displays GDPR-inspired cookie message
- @Body keyword: replaced by content of layout when rendered
The [\_Layout.cshtml layout file](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Pages/Shared/_Layout.cshtml) (under /Pages/Shared) acts as a template and includes a call to @RenderBody to display its content. This content is determined by route info that is requested by the user, e.g. [Index.razor](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Pages/Index.razor) when the root “/” is requested, [LearningResources.razor](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Pages/LearningResources.razor) when the route “/learningresources” is requested.
```
<div class="container">
    <main role="main" class="pb-3">
        @RenderBody()
    </main>
</div>
```
The [NavMenu.razor component](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Shared/NavMenu.razor) contains NavLinks that point to various routes. For more information about routing in Blazor, check out the official docs at:
- Routing in Blazor: [https://docs.microsoft.com/en-us/aspnet/core/blazor/routing?view=aspnetcore-3.1](https://docs.microsoft.com/en-us/aspnet/core/blazor/routing?view=aspnetcore-3.1)
# LifeCycle Methods
A Blazor application goes through several lifecycle methods, including both asynchronous and non-asynchronous versions where applicable. Some important ones are listed below:
- **OnInitializedAsync** () and **OnInitialized** (): invoked after receiving initial params
- **OnParametersSetAsync** () and **OnParametersSet** (): called after receiving params from its parent, after initialization
- **OnAfterRenderAsync** () and **OnAfterRender** (): called after each render
- **ShouldRender** (): used to suppress subsequent rendering of the component
- **StateHasChanged** (): called to indicate that state has changed, can be triggered manually
These methods can be overridden and defined in the @ **code** section (formerly @ **functions** section) of a .razor page, e.g. [LearningResources.razor](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Pages/LearningResources.razor).
```
@code {
    ...
    protected override async Task OnInitializedAsync()
    {
        ...
    }
    ...
}
```
# Updating the UI
The C# code in [LearningResources.razor](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Pages/LearningResources.razor) includes methods to initialize the page, handle user interaction, and keep track of data changes. Every call to an event handler from an HTML element (e.g. OnClick event for an input button) can be bound to a C# method to handle that event. The **StateHasChanged** () method can be called manually to rerender the component, e.g. when an item is added/edited/deleted in the UI.
```
<div>
    <input type="button" class="btn btn-primary" value="All Resources"
           @onclick="(() => DataChanged())" />
</div>
...
private async void DataChanged()
{
    learningResources = await learningResourceService.Get();
    ResourceLists = await resourceListService.Get();
    StateHasChanged();
}
```
Note that the **DataChanged** () method includes some asynchronous calls to **Get** () methods from a service class. These are the service classes in the shared library that are also used by the MVC and Razor Pages web apps in [NetLearner](https://wakeupandcode.com/netlearner-on-asp-net-core-3-1/).
Parameters defined in the C# code can be used similar to HTML attributes when using the component, including RenderFragments can be used like nested HTML elements. These can be defined in sub-components such as [ConfirmDialog.razor](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Shared/ConfirmDialog.razor) and [ResourceDetail.razor](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Pages/ResourceDetail.razor) that are inside the [LearningResources.razor](https://github.com/shahedc/NetLearnerApp/blob/master/src/NetLearner.Blazor/Pages/LearningResources.razor) component.
```
<ResourceDetail LearningResourceObject="learningResourceObject"
                ResourceListValues="ResourceLists"
                DataChanged="@DataChanged">
    <CustomHeader>@customHeader</CustomHeader>
</ResourceDetail>
```
Inside the subcomponent, you would define the parameters as such:
```
@code {
    [Parameter]
    public LearningResource LearningResourceObject { get; set; }

    [Parameter]
    public List ResourceListValues { get; set; }

    [Parameter]
    public Action DataChanged { get; set; }

    [Parameter]
    public RenderFragment CustomHeader { get; set; }
}
```
For more information on the creation and use of Razor components in Blazor, check out the official documentation at:
- Razor Components in Blazor: [https://docs.microsoft.com/en-us/aspnet/core/blazor/components?view=aspnetcore-3.1](https://docs.microsoft.com/en-us/aspnet/core/blazor/components?view=aspnetcore-3.1)
# Next Steps
Run the Blazor web app from the [NetLearner repo](https://github.com/shahedc/NetLearnerApp) and try using the UI to add, edit and delete items. Make sure you remove the restrictions mentioned in a previous post about [NetLearner](https://wakeupandcode.com/netlearner-on-asp-net-core-3-1/), which will allow you to register as a new user, log in and perform CRUD operations.
<figcaption>NetLearner.Blazor: Learning Resources </figcaption>
There is so much more to learn about this exciting new framework. Blazor’s reusable components can take various forms. In addition to server-side Blazor (released in late 2019 with .NET Core 3.1), you can also host Blazor apps on the client-side from within an ASP .NET Core web app. Client-side Blazor is currently in preview and is [expected in a May 2020 release](https://devblogs.microsoft.com/aspnet/blazor-server-in-net-core-3-0-scenarios-and-performance/).
# References
- Official Blazor website: [https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor](https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor)
- Intro to Blazor: [https://docs.microsoft.com/en-us/aspnet/core/blazor](https://docs.microsoft.com/en-us/aspnet/core/blazor)
- Jeff Fritz on Blazor: [https://jeffreyfritz.com/2020/01/whats-old-is-new-again-web-forms-meets-blazor/](https://jeffreyfritz.com/2020/01/whats-old-is-new-again-web-forms-meets-blazor/)
- Michael Washington’s Blazor Tutorials: [https://blazorhelpwebsite.com/](https://blazorhelpwebsite.com/)
- Chris Sainty’s Blog: [https://chrissainty.com/blazor/](https://chrissainty.com/blazor/)
- Edward Charbeneau on YouTube: [https://www.youtube.com/user/Backslider64/videos](https://www.youtube.com/user/Backslider64/videos)
- Blazor on YouTube: [https://www.youtube.com/results?search\_query=blazor](https://www.youtube.com/results?search_query=blazor) | shahedc |
237,455 | Error: Actions Not Showing Up As Buttons On Lightning Pages | Resolving object-specific actions not showing up on Lightning Pages after creation, when Chatter/Feed Tracking is enabled. | 4,258 | 2020-02-03T15:31:36 | https://dev.to/rachelsoderberg/error-actions-not-showing-up-as-buttons-on-lightning-pages-4n1e | salesforce, lightning, quickaction | ---
title: Error: Actions Not Showing Up As Buttons On Lightning Pages
published: true
description: Resolving object-specific actions not showing up on Lightning Pages after creation, when Chatter/Feed Tracking is enabled.
tags: Salesforce, Lightning, Quick Action
series: Salesforce Lightning
---
Imagine a scenario: You've replaced your URL Hacked buttons with new object-specific actions and dragged them into the Salesforce Mobile and Lightning Experience Actions section part of the Page Layout. You click save, check the object and... your action isn't there? What gives?


The issue is that Quick Actions in the Salesforce Mobile and Lightning Experience Actions section simply show in the Chatter tab when Feed Tracking is enabled. This isn't made very clear in the help text, so many people find themselves confused if they haven't thought to check Chatter. The help text on the Page Layout properties is as follows:
<i>Feed tracking is disabled for this object, but you can still customize actions for Lightning Experience and the mobile app action bar. Actions in this section appear only in Lightning Experience and the mobile app, and may appear in third party apps that use this page layout.</i>
Only after you've disabled Feed Tracking for the object will your actions display at the top of the object details as expected. So let's go ahead and disable that so our actions show up where users would expect them.
Disable Feed Tracking:
1. Navigate to Setup and search "Feed Tracking"
2. Select the object you'd like to stop tracking from the list on the left
3. Uncheck the "Enable Feed Tracking" checkbox

Navigate back to a record on your object detail page and confirm that your actions now show as expected. If you don't see them, double check the Page Layout and whether you placed the correct action in Salesforce Mobile and Lightning Experience Actions and that you don't already have too many actions selected for the section.

---
If you'd like to catch up with me on social media, come find me over on [Twitter](https://twitter.com/RachSoderberg) or [LinkedIn](https://www.linkedin.com/in/rachelsoderberg/) and say hello! | rachelsoderberg |
237,546 | Module 2 Discussion | Read Session 1.2 (pp. 22 - 45) and then answer the following questions and post them in "https://dev.... | 0 | 2020-01-13T20:48:18 | https://dev.to/antonioprican/module-2-discussion-33bm | Read Session 1.2 (pp. 22 - 45) and then answer the following questions and post them in "https://dev.to/":
1. Write code to mark the text Gourmet Thai Cooking as a heading with the second level of importance.
<!DOCTYPE html>
<html>
<head>
<title>Gourmet Thai Cooking</title>
</head>
<body>
<h2>Gourmet Thai Cooking</h2>
</body>
</html>
2. What is the div element and why will you often encounter it in pre-HTML5 code?
-The div element marks a generic division of a document; its id attribute is a name that uniquely identifies the division.
-Prior to HTML5, sections were defined as divisions created using a div element such as <div id="main"> ... </div>
3. What element would you use to indicate a change of topic within a section?
-The HTML <hr> element represents a thematic break between paragraph-level elements
-
4. Write the code to mark the text Daily Special as emphasized text.
<em>Daily Special</em>
-
5. Write code to mark the text H2SO4 with subscripts.
H<sub>2</sub>SO<sub>4</sub>
6. Write the code to link the web page to the CSS file mystyles.css.
<link href="mystyles.css" rel="stylesheet" />
7. Write the expression to insert the em dash into a web page using the character code 8212.
&#8212;
8. Write the code to insert an inline image using the source file awlogo.png and the alternate text Art World.
<img src="awlogo.png" alt="Art World" />
| antonioprican | |
237,577 | 5 Myths about WordPress Backups | Everyone agrees that backups are essential. Not only for websites but also for desktops, laptops, pho... | 0 | 2020-01-13T21:35:42 | https://scaledynamix.com/blog/scaling-wordpress-5-myths-about-backups/ | linux, webdev, aws, wordpress | Everyone agrees that backups are essential. Not only for websites but also for desktops, laptops, phones, and even git repositories. Yet, when it comes to protecting mission-critical WordPress sites, the backup solutions in place are often inadequate.
More often than not, these backup shortcomings are a result of myths and assumptions about the underlying infrastructure your site runs on. Here are several myths I come across when onboarding new clients.
Myth 1: My site uses EBS, which is replicated across multiple servers. I don’t need backups.
While it’s true that EBS volumes are replicated across multiple servers, they are not protected against any file changes or deletions done by the user. Failed WordPress updates, user error, or malware can alter files on your server. If you don’t have a backup solution in place, EBS can’t recover file changes or restore older versions on demand.
Myth 2: I store my media on S3. I don’t need backups.
Similar to EBS, storing WordPress uploads on S3 is also not an alternative to backups. Without backups, any files deleted from wp-admin even by mistake are lost forever. One caveat to this is if you enable versioning on your S3 bucket, it protects against file changes and deletions. Remember that this is not enabled by default. If your site offloads static assets to S3, make sure that versioning is enabled and working as expected.
Myth 3: AWS takes snapshots of my server. I don’t need backups.
While snapshots are better than not having any backups, restoring them is time-consuming and costly. Imagine your team asking for a copy of wp-config.php from last week. Backup archives can selectively restore this one file in seconds. Server snapshots, on the other hand, need to be restored entirely on a new volume before any files are accessible. If you restore server snapshots on a new instance, there is a risk of wp-cron altering the database with stale values. Snapshots also make local development difficult compared to backup archives.
Myth 4: I don’t need to backup dev sites.
While chances of data loss on development sites are smaller compared to production, these sites should still have a backup solution in place. Development site backups enable developers to rollback 3rd party libraries to previous versions when things go wrong. Backups also protect against dependencies that are no longer available on public repositories, software bugs that result in malformed artifacts, and more.
Myth 5: I don’t need to backup git repositories
Git repos are distributed by design. Most teams also sync with git hosting services such as Github or Gitlab. So does it make sense to backup git repositories? In a high-availability environment, git hooks are often used for various post-deployment tasks. These hooks are not present on developers’ local repos or Github. Having a backup of the git repository on the server protects these hooks in case of a disaster. Git backups are a lifesaver in the event of cloud and SaaS outages.
A good backup strategy contains the following three elements:
Frequency: Backup frequency should be adjusted depending on how frequently your site and database change.
Destination: Backups should be stored in at least 2 separate regions to protect against any cloud outages.
Verification: Backups should be tested periodically to ensure the integrity of the archives.
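The verification step can be automated with a small script that confirms each archive is readable and matches a checksum recorded at backup time. A minimal sketch using only the Python standard library (file names and the checksum source are illustrative):

```python
import hashlib
import tarfile

def verify_backup(archive_path, expected_sha256=None):
    """Sanity-check one backup archive: readable tar, optional checksum match."""
    # 1) Integrity: the archive must open and its member list must be readable.
    with tarfile.open(archive_path, "r:*") as tar:
        if not tar.getmembers():
            return False
    # 2) Optionally compare against a checksum recorded when the backup was made.
    if expected_sha256 is not None:
        digest = hashlib.sha256()
        with open(archive_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        if digest.hexdigest() != expected_sha256:
            return False
    return True
```

Run something like this against each archive on a schedule; a `False` result means the backup should be regenerated before you actually need it.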
Once you have a backup policy in place, you can start looking into the performance impact and overhead of your strategy. | nginxreload |
237,585 | Does JAMstack mean having to pre-render all the things? | As the JAMstack ecosystem matures, the debate over what counts as “truly JAMstack”™ continues. Given... | 0 | 2020-01-13T22:11:29 | https://dev.to/shortdiv/does-jamstack-mean-having-to-pre-render-all-the-things-4381 | jamstack, jamuary | As the JAMstack ecosystem matures, the debate over what counts as “truly JAMstack”™ continues. Given the elusive nature of the term and its [most recent evolution](https://github.com/jamstack/jamstack.org/commit/41c0b767694c1f8c7e3fabcb1e0d770b154c00d7), it's unsurprising that there remains some confusion over this. The JAMstack stands for JavaScript, APIs and Markup, and emphasizes the intermingling of these pieces to build websites that are scalable, performant and overall efficient. A key component of what makes a site JAMstack is its pre-rendering capabilities, or how much of the site is generated ahead of time. The focus here, of course, is not on the implementation per se, but on the results; so a successful JAMstack site is determined by how fast and delightful the site is to work with, rather than whether it uses a specific technology. To attain the end goal of fast, secure sites, the JAMstack approach encourages pre-rendering assets at build time. Frontloading the build step means serving your UI statically, which has significant benefits when it comes to performance and server costs down the line.
The JAMstack, however, doesn't preclude serving assets dynamically. A SPA, for instance, still counts as JAMstack as long as it pre-renders parts of the UI (like data fetches or an initial app shell) to dynamically load content into. The idea of the JAMstack in a sense follows a similar ethos to the progressive enhancement movement: start from the premise of pre-rendering content statically, and then progressively add runtime elements like API calls and JavaScript. This way sites can serve content effectively while providing delightful and meaningful experiences to users. | shortdiv |
237,615 | bizarre devto bug | just got jammed in a SAVE DRAFT and PUBLISH loop that i couldnt get out of hopefully this post posts... | 0 | 2020-01-13T22:31:09 | https://dev.to/osde8info/bizarre-devto-bug-pop | devto, blog, bug | just got jammed in a SAVE DRAFT and PUBLISH loop that i couldnt get out of
hopefully this post posts with no problem | osde8info |
237,617 | Dynamic view with STI in Rails | 🤔 Situation I have a STI model named Media which has two chirdren models which are Photo a... | 0 | 2020-01-20T22:16:34 | https://dev.to/n350071/dynamic-view-with-sti-in-rails-e94 | rails | ## 🤔 Situation
I have a STI model named `Media` which has two children models, `Photo` and `Video`. I didn't want to change the view logic for each class.
Example
```erb
<% @album.media.each do |medium| %>
<div>
<%= file_content(medium) %>
</div>
<% end %>
```
## 😅 But I didn't want to write like this.
If I ask an object about its class and then switch the logic, it's not OOP anymore. I'd have to write it again whenever I add a new class like 'Text' or 'Sound'.
```ruby
module MediaHelper
def file_content(medium)
return nil unless medium.file.attached?
if medium.class == Photo
return photo_content(medium)
else
return video_content(medium)
end
end
end
```
## 👍 How have I solved it?
I wrote the helper like this.
```ruby
module MediaHelper
def file_content(medium)
return nil unless medium.file.attached?
send("#{medium.class.name.underscore}_content", medium)
end
end
```
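To see why this works without any Rails machinery, here is a plain-Ruby sketch of the same dispatch trick (the `underscore` helper below is a simplified stand-in for ActiveSupport's `String#underscore`, and the `*_content` bodies are placeholders):

```ruby
class Photo; end
class Video; end

# Simplified stand-in for ActiveSupport's String#underscore.
def underscore(class_name)
  class_name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
end

def photo_content(medium)
  "photo markup"
end

def video_content(medium)
  "video markup"
end

# Build the method name from the object's class, then dispatch dynamically.
def file_content(medium)
  send("#{underscore(medium.class.name)}_content", medium)
end

file_content(Photo.new) # => "photo markup"
file_content(Video.new) # => "video markup"
```

Adding a `Text` or `Sound` class then only requires defining `text_content` or `sound_content`; the dispatcher itself never changes.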
```ruby
module PhotosHelper
def photo_content(medium)
# do something
end
end
module VideosHelper
def video_content(medium)
# do something
end
end
```
## 🦄 But, I believe there is a better way.
I don't know if it works yet, but I hope to change my code like this.
### 1. remove the parent helper
```ruby
# module MediaHelper
# def file_content(medium)
# return nil unless medium.file.attached?
# send("#{medium.class.name.underscore}_content", medium)
# end
# end
```
### 2. align the children's helper names.
```ruby
module PhotosHelper
def file_content(medium)
# do something
end
end
module VideosHelper
def file_content(medium)
# do something
end
end
``` | n350071 |
237,633 | Using Markdown for Notes | I recently found myself with four different files opened long term in Notepad++ that all centered... | 4,263 | 2020-01-13T23:38:04 | https://coreydmccarty.dev/posts/2020_01_02_markdown_for_notes/ | productivity, markdown, vscode, css | ---
title: Using Markdown for Notes
published: true
date: 2020-01-02 00:00:00 UTC
tags:
- PRODUCTIVITY
- MARKDOWN
- VSCODE
- CSS
series: productivity with markdown
canonical_url: https://coreydmccarty.dev/posts/2020_01_02_markdown_for_notes/
cover_image: https://thepracticaldev.s3.amazonaws.com/i/p2jnamlo05dpwmgeihnj.png
---
I recently found myself with four different files opened long term in Notepad++ that all centered around one set of changes that I was working on, and that seemed a bit absurd. I was keeping one file open for schema definitions, another for java snippets, another for meeting notes, and another for the actual requirement description. I decided that I should be able to consolidate these notes in a meaningful way, and spent several hours walking through formatting these things together as YAML and then XML before settling on Markdown.
If you aren't familiar with Markdown, [Wikipedia](https://en.wikipedia.org/wiki/Markdown) says this:
> Markdown is a lightweight markup language with plain text formatting syntax. Its design allows it to be converted to many output formats, but the original tool by the same name only supports HTML.[9] Markdown is often used to format readme files, for writing messages in online discussion forums, and to create rich text using a plain text editor.
You are likely familiar with Markdown already as it is used on Github (README.md), Reddit comments, Discord, Slack and others. It has functionality to include code blocks with language specific highlighting, links, bullet lists, numbered lists, six levels of headers, not to mention **different** _text_ ~~decorations~~.
***
# Example #
This code
```md
# Heading 1
information
+ **bold**
+ *italic*
+ ***bold/italic***
+ nested
## Heading 2
stuff and things
1. [ ] Unchecked box
2. [x] Checked box
```
Looks like this:
# Heading 1 #
information
- **bold**
- _italic_
- _ **bold/italic** _
- nested
## Heading 2 #
stuff and things
1. [ ] Unchecked box
2. [x] Checked box
* * *
My primary goal when I set out on this journey was to have support for folding sections that I'm not currently looking at, but what I got with Markdown is so much more. I usually keep notes open in Notepad++, but for some reason I decided to use VSCode. In hindsight, I'm pretty glad that I did, because what I currently have configured works significantly better than anything I've ever gotten in Notepad++.
Through the last week I've wound up with a few customizations that make life a bit better for me. I got a few plugins and wrote some custom CSS, and now I'm really happy with the whole thing.
## Markdown All in One plugin #
It helps a lot, with live preview, list formatting, style toggling, linked table of contents generation, printing to HTML, table formatting, and pretty math symbols.
[The repository can be found here](https://github.com/yzhang-gh/vscode-markdown)
## Custom CSS #
This relates directly to the live preview, which by default does not differentiate the headers from the primary text (although it does color the code blocks). The `markdown.styles` setting allows you to define a CSS file to apply to the preview. I then used the `markdown.extension.print.onFileSave` setting to figure out how to select the bits that I wanted to customize. The parts that I thought to be important were having different colors for the different header levels and code block backgrounds that are visibly distinct from the other text.
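A minimal example of what such a stylesheet might contain (the selectors target the preview's rendered HTML; the colors here are just placeholders, not my actual file):

```css
/* Give each header level its own color in the preview. */
h1 { color: #e06c75; }
h2 { color: #61afef; }
h3 { color: #98c379; }

/* Make code blocks visibly distinct from the surrounding text. */
pre,
code {
  background-color: #2d2d2d;
  border-radius: 3px;
}
```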
## Insert Date String #
This plugin is helpful for quickly inserting a date or dateTime into my notes. I added a keybinding, `Ctrl+Shift+i+d`, to insert the date without time. The formatting is configurable to your needs. [The repository can be found here](https://github.com/jsynowiec/vscode-insertdatestring)
## Snippets #
For my personal usage, I also wanted to include information for the frontmatter/header. This one is specifically for my 11ty blog entries, which are also written in Markdown.
```json
"frontmatter": {
"scope": "markdown",
"prefix": "frontmatter",
"body": [
"--- ",
"title: ${1:title} ",
"description: ${2:description} ",
"date: ${3:Ctrl+Shift+i+d} ",
"tags: ",
" - ${4:first} ",
" - ${5:second} ",
"layout: layouts/post.njk ",
"--- "
],
"description": "front-matter for 11ty blog post"
}
```
which gets pasted in like this, and I can tab through the variables easily.
```yaml
---
title: title
description: description
date: Ctrl+Shift+i+d
tags:
- first
- second
layout: layouts/post.njk
---
```
I'd also love to hear thoughts and experiences that you may have with markdown or other languages for taking your notes. Editor/plugin recommendations, tips, and tricks are all welcome as well.
<a href="https://undraw.co/">Cover image created by undraw.co</a> | xanderyzwich |
237,677 | lib3to6 - python compatibility library | In light of the recent blog article "Mercurial's Journey to and Reflections on Python 3" by Gregory S... | 0 | 2020-01-14T00:50:39 | https://dev.to/mbarkhau/lib3to6-python-compatability-library-2oac | python | In light of the recent blog article ["Mercurial's Journey to and Reflections on Python 3" by Gregory Szorc](https://gregoryszorc.com/blog/2020/01/13/mercurial%27s-journey-to-and-reflections-on-python-3/) I thought I'd try to pimp my work in this area again.
In short, if you have a project that needs to be compatible with Python 2.7 or even just Python 3.4, then you might want to look at [lib3to6](https://pypi.org/project/lib3to6/). The idea is quite similar to Babel for JavaScript: it transforms valid Python 3.7 code into valid Python 2.7 code such that the semantics match as closely as possible.
I still work with a Python 2.7 codebase, but most new work is now done in modules that are written for Python 3.7 and this library translates the code when the package is created. I've used it over the past year on half a dozen projects and I feel it is quite stable.
An example of how it works.
Say you have a `my_module` which is written for Python 3.7. Features used here that are not supported by Python 2.7 are
- type annotations
- f-strings
- print function
- implicit utf-8 file encoding
```python
# my_module/__init__.py
import sys
def hello(who: str) -> None:
    print(f"Hello {who} from {sys.version.split()[0]}!")

print(__file__)
hello("世界")
```
The above code is translated to the following
```python
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import sys
def hello(who):
    print('Hello {0} from {1}!'.format(who, sys.version.split()[0]))

print(__file__)
hello('世界')
```
Some changes that are made:
- Explicit utf-8 file encoding
- Future import boilerplate. This changes the semantics to match those of Python 3, i.e.:
- print is a function
- string literals are unicode literals
- f-string converted to an equivalent `string.format` invocation.
I've had some people pooh-pooh the project, because the gut reaction appears to be that Python 2.7 should die already, but this project is also useful if you are a library author who, for example, wants to use f-strings and yet still have your library be compatible with Python 3.5. If you don't care about Python 2.7, then just don't test for it.
There is much more that could be said, but I think this is enough for now and I hope you find the library useful.
| mbarkhau |
237,809 | 7 Best Use Case of JavaScript Array Method map(), filter() and reduce() | If you are starting in the web development, maybe you haven’t heard about the functional programming.... | 0 | 2020-02-05T07:41:03 | https://codesquery.com/javascript-array-method-map-filter-and-reduce/ | javascript, tutorials | ---
title: 7 Best Use Case of JavaScript Array Method map(), filter() and reduce()
published: true
date: 2019-10-17 17:31:31 UTC
tags: Javascript,javascript,tutorials
canonical_url: https://codesquery.com/javascript-array-method-map-filter-and-reduce/
---
If you are starting out in web development, maybe you haven't heard about functional programming. Functional programming has been gaining prominence in the field of web development. It will help you to write more precise and clean code which is easy to understand, refactor and test. There are some JavaScript array methods available [...]
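As a quick taste of the three methods the title refers to (this example is illustrative, not from the full post):

```javascript
const orders = [
  { id: 1, total: 25, paid: true },
  { id: 2, total: 40, paid: false },
  { id: 3, total: 15, paid: true },
];

// map: derive a new array of the same length
const totals = orders.map((o) => o.total); // [25, 40, 15]

// filter: keep only the elements matching a predicate
const paidOrders = orders.filter((o) => o.paid);

// reduce: fold the array down to a single value
const paidRevenue = paidOrders.reduce((sum, o) => sum + o.total, 0); // 40
```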
The post [7 Best Use Case of JavaScript Array Method map(), filter() and reduce()](https://codesquery.com/javascript-array-method-map-filter-and-reduce/) appeared first on [CodesQuery](https://codesquery.com). | hisachin |
237,812 | Javascript Date Object Explained In Detail | For any web developer, working with the date in the application is always a tricky part. While... | 0 | 2020-02-05T07:43:01 | https://codesquery.com/javascript-date-object/ | javascript, tutorials | ---
title: Javascript Date Object Explained In Detail
published: true
date: 2019-11-05 18:11:52 UTC
tags: Javascript,javascript,tutorials
canonical_url: https://codesquery.com/javascript-date-object/
---
For any web developer, working with dates in an application is always a tricky part. While working with dates, we have to make sure things work properly for every timezone. In JavaScript, you might have to work with calendar events, booking events or any other type of events which needs the relevant [...]
The post [Javascript Date Object Explained In Detail](https://codesquery.com/javascript-date-object/) appeared first on [CodesQuery](https://codesquery.com). | hisachin |
237,813 | Github Login Implementation In Node.js Using Passport.js | In this article, we are going to learn how to implement the Github Login in Node.js Application. We a... | 0 | 2020-02-05T07:42:48 | https://codesquery.com/github-login-implementation-in-node-js-using-passport-js/ | node, sociallogin, tutorials | ---
title: Github Login Implementation In Node.js Using Passport.js
published: true
date: 2019-11-15 09:31:43 UTC
tags: Node.js,node.js,social-login,tutorials
canonical_url: https://codesquery.com/github-login-implementation-in-node-js-using-passport-js/
---
In this article, we are going to learn how to implement GitHub Login in a Node.js application. We are using the Node.js Express framework and the Passport.js library for this tutorial. Create a GitHub App For Your Application Before proceeding further into coding the GitHub login in Node.js, first, we have to create an app [...]
The post [Github Login Implementation In Node.js Using Passport.js](https://codesquery.com/github-login-implementation-in-node-js-using-passport-js/) appeared first on [CodesQuery](https://codesquery.com). | hisachin |
237,842 | The Future of Writing CSS - OOCSS | Object oriented CSS was proposed by web developer Nicole Sullivan in 2008. Her goal was to make dynam... | 0 | 2020-01-14T09:27:08 | https://dev.to/amjadkamboh/the-future-of-writing-css-oocss-2f6k | Object oriented CSS was proposed by web developer Nicole Sullivan in 2008. Her goal was to make dynamic CSS more manageable by applying the principles of object oriented design popularized by programming languages such as Java and Ruby. Using the OOCSS framework results in CSS that is reusable, scalable and easier to manage.
OOCSS draws upon many concepts in object oriented programming including the single responsibility principle and separation of concerns. The goal is to produce components that are flexible, modular and interchangeable.
<h2>CSS Code</h2>
<code>
.button-white {
width: 150px;
height: 50px;
background: #FFF;
border-radius: 5px;
}
</code>
<code>
.button-black {
width: 150px;
height: 50px;
background: #000;
border-radius: 5px;
}
</code>
<h2>OOCSS Code</h2>
<code>
.button-white {
background: #FFF;
}
</code>
<code>
.button-black {
background: #000;
}
</code>
<code>
.button{
width: 150px;
height: 50px;
border-radius: 5px;
}
</code>
<code>class="button button-white"</code> for the white button
<code>class="button button-black"</code> for the black button
<h2>The Benefits of Object Oriented CSS</h2>
1. Speed
2. Efficiency
3. Scalability | amjadkamboh | |
237,848 | Multi Step Form Submit in Laravel with Validation | In this tutorial we will go over Example of Multi Page / Step Form in Laravel with Validation. This t... | 3,690 | 2020-01-14T09:37:20 | https://www.codechief.org/article/multi-step-form-submit-in-laravel-with-validation | laravel, multistep, formsubmit | In this tutorial we will go over an example of a multi-page/step form in Laravel with validation. This tutorial does not use any JavaScript component to create the multi-step form.
Instead we will create multiple form pages and will use Laravel session to save the intermediate data.
https://www.codechief.org/article/multi-step-form-submit-in-laravel-with-validation | techmahedy |
237,861 | Goal setting and the year in review | So it’s officially 2020, and a new year brings with it all kinds of things. Retro... | 0 | 2020-01-17T16:49:31 | https://dev.to/documentednerd/goal-setting-and-the-year-in-review-48do | softskills, discuss, goals, growth | ---
title: Goal setting and the year in review
published: true
date: 2020-01-14 09:00:00 UTC
tags: Soft Skills,discussion,Goals,Growth
canonical_url:
---
So it's officially 2020, and a new year brings with it all kinds of things: retrospection, hope, dreams, and a variety of other feelings. I'm not a big party-er and have never been a massive fan of New Year's Eve, but I do have to say in recent years, I have really come to appreciate two elements of the new year as an important time for me. The first is retrospection: a chance to look back at the year and be honest with ourselves about how things have gone. A chance to look at what worked and what didn't, and have an honest conversation with yourself.
The second part I’ve come to enjoy is planning for the new year, sitting down and looking at my life and finding new ways to grow as a person, and improve things for the better. There’s something very empowering about sitting down and seeing a wealth of possibilities and excitement about the future prospects and opportunities that are ahead.
Now for most people, this is where the most dreaded word comes up, and it's RESOLUTIONS. We've all heard it, and probably had it happen to us: the grand self-lie that is a resolution. Believe me, over the years I've left a path of broken resolutions behind me, and as those who read this blog regularly know, I tend to read a lot on the subject of success, goals, and similar topics. I don't claim to have an answer here, and over the past few years have come to the conclusion that everyone's mileage on any option for trying to grow will vary.
Now I want to be clear about one thing here: I'm going to use the ever-present weight loss example. I consider myself overweight; it is something I have struggled with, and I do not have the answer. I am not "throwing shade" on people who use these systems and find success. This is my experience only.
But what I can do, is call out some of the things I’ve tried, and how they worked out, and tell you what I’ve been finding lately:
### Setting SMART Goals:
We've all heard this one, right? Making sure that your goals are "SMART". They practically drill this into us in grade school; the only "good goals" are SMART goals. And what does SMART mean:
- Simple
- Measurable
- Attainable
- Reasonable
- Time Bound
Now, the idea behind this approach is a good one: make sure you're setting goals that can be reached, and that you can verify that you have hit milestones along the path. Believe me, I do love the mantra "What gets measured, matters", and this is based around it. It's also built around the satisfaction of achieving your goals. If you set something that's measurable and attainable, then you feel pretty great when you hit that goal.
Let's talk about an example. A "bad" goal in this model would be "I'm going to lose weight", to steal the oldest resolution in the book. Why is this a bad goal? Because it's not defined; it's not something that I can measure in a meaningful way. A better goal would be "I'm going to lose 10 lbs by June." I can measure it, it has a deadline, and it's not outlandish by any means. Should be great, right?
For a lot of people, this is a great system, and it helps them, but for me, it caused a lot more damage than it helped. The reason is that a human being can tolerate anything for a time-boxed period. Look at people who have survived unimaginable conditions and then are able to return to their lives. But the problem for me is that by doing this with the new year, you aren't doing anything to make a permanent change in your life.
Let's go back to our weight loss example, and I've got to be honest: this isn't hypothetical, it's what really happened to me (more than once). You set this goal, and in January you go after it. I had a coworker once who used to say, "Let's seize the day with vigor and determination never before seen by mankind." And we've all been there, right? We all hit the gym, get up early, and go after it.
And then a couple of outcomes happen:
You start doing great, and by the end of January you are down 5 lbs. Feeling amazing and saying "I got this", at which point you end up convincing yourself "I can slow down, I don't need to work as hard", and it all falls apart. Before you know it, it's June; you look at the number and say "I'm a failure".
You start doing great, and by the middle of February you hit your goal of 10 lbs down. You're proud of yourself, and SMART goals work. You move on to other things, and before you know it you fall into bad habits; June hits and the scale looks pretty familiar. You look at the number and say "I'm a failure".
You stay on track, do what you set out to do, get to June and are down 10 lbs. You feel great; SMART goals worked. Then you have a fun summer and end up back where you started, or God forbid worse off, look at yourself and say "I'm a failure".
And now you're probably saying, "For loving the positive elements of the new year, this is pretty damn depressing." I'm not trying to be a Debbie Downer, but this is my experience, and as I said above, part of this process is honesty and retrospection. This has been my honest experience.
This is my problem: SMART goals are built to be very short-term, focused on getting a "job" done. But when it comes to personal growth, the job is never "done", so the approach is fundamentally flawed. And at the end of the process, those words and feelings of "I'm a failure" have a damaging and demoralizing effect that is completely counterproductive.
At the end of the day, growth is a journey. And if you continue down this road and you miss your goal you are left with nothing, and feeling like you failed with nothing to show for the effort. I believe there is an old adage about eggs in a single basket for this.
### 10x Goals:
This is one that got a lot of attention. I've read the book The 10X Rule, and I have to say it is insightful, and I found it to be very interesting. For those not familiar, the idea is this: take the idea of SMART goals and turn it around a bit. Keep the same ideas of goals being measurable and time-boxed, but instead of making them attainable, you make them 10x what the attainable goal is.
So take our weight loss example: instead of saying "I'm going to lose 10 lbs by June", I would say "I'm going to lose 50 lbs by June". Now before anyone jumps on me, I can do math. The idea is: what could you do if you put in 10x the effort? So if I put in the work and try to lose 50 lbs by June, one of two outcomes occurs:
- I lose 50 lbs and cheer my success.
- I lose 30 lbs and I’m still better off than the 10lb goal.
In my experience, though, the problem is still the same. I haven't changed behaviors or grown at all; I've hit a very finite, fixed-in-time goal, but the success won't last. At the end you still feel like a failure. And now you feel like a bigger one, because not only did you miss the 10x goal, but likely the 1x goal too.
### Finite Systems / Infinite Problem:
The crux of my problem with the above approaches is that they are systems built around finite objectives being applied to an infinite problem. I don't want to lose weight; I want to be healthier. I don't want to learn one thing, but build a foundation for learning. At the end of the day, we are trying to fit a square peg into a round hole. Personal growth isn't something that can be time-boxed like that.
Simon Sinek covers this in his book, "The Infinite Game", which I admit I am still reading, but here's a video that covers some of the guiding principles.
<iframe type="text/html" width="660" height="372" src="https://www.youtube.com/embed/0bFs6ZiynSU?version=3&rel=1&fs=1&autohide=2&showsearch=0&showinfo=1&iv_load_policy=1&wmode=transparent" allowfullscreen="true" style="border:0;"></iframe>
The other problem I have is that, in my experience, this creates a lot of stress and pressure on yourself, and those words "I'm a failure", whether you say them aloud or not, are devastating. If you become too fixated on goals, they can start to feel like a drug high. I'm speaking from experience here: your life becomes about setting goals, pushing too hard, getting them, feeling that euphoria, and then it's on to the next one.
I was 100% in that boat, for better or worse, and don't get me wrong, I'm proud of any accomplishments I've made, but it really does take a toll on you mentally. While it can be satisfying to reach those goals, it isn't always fulfilling. And if you find yourself questioning where to go next, that can be crippling in a lot of ways.
And now I’ve done it again, we are at the “Kevin, still depressing. Goals are meaningless, growth is meaningless, life is pain…”
Not quite, I’ve been doing a lot of reading and researching and had lots of discussions with people a lot wiser than me, and I’ve found something that in my opinion seems to be working better.
The final problem I have with these systems is that they make one basic assumption: that the pursuit of these goals exists in a vacuum. Take our weight loss example: we say "I'm going to lose 10 lbs by March", but then I get hurt, need surgery, and spend 6 weeks in a cast, followed by physical therapy. I know the goal became unattainable, but I still feel like I failed.
Again, it's not just weight loss. Let's say I said, "I'm going to put my phone away after dinner to spend more time with my family." Then I get a smartwatch that lets me check email without my phone, or I work with customers all over the world who have to call at off hours, and I feel like a failure due to circumstances outside of my control.
### Goals vs Values:
Now I can’t take credit for this, there is a psychological principle called value based living, and the idea being this. Here’s a video that does a way better job than I ever could at summarizing it.
<iframe type="text/html" width="660" height="372" src="https://www.youtube.com/embed/T-lRbuy4XtA?version=3&rel=1&fs=1&autohide=2&showsearch=0&showinfo=1&iv_load_policy=1&wmode=transparent" allowfullscreen="true" style="border:0;"></iframe>
So looking at the above: if we get away from these ideas of goals and look more at what we as a person value, that is what drives us, and that is what matters. And as long as the actions we take align with those values, the journey is part of the reward. If you watched the Simon Sinek video above, this probably sounds familiar, and that should be no surprise. There is a direct through line between his concept of actions being driven by values and value-based living.
So the next question is: how does this work any differently? How do I grow and push myself without goals? Is this just semantics at the end of the day? I don't think so, but let me talk about what this journey has been like for me, and you can judge.
### Step 1 : Change your definition:
One thing that my wife and I are really trying to embrace is a family mission statement, and we are in the process of writing that now. When we are done, I will probably do a blog post on that too. But along with that, we as a family have focused our energy and decisions around this motto, for lack of a better term.
**There are only two outcomes to any action: success, or you learn something.**
That's it, nothing groundbreaking, and truth be told we stole it from the movie Meet the Robinsons, which has a similar sentiment: "From failure you learn, success not so much." But if you stop and think about that statement, it's rather profound if you take away failure as an outcome. Some would say you take away accountability, but I would say you take away blockers. If you can't fail, then what is stopping you from trying?
Thomas Edison had a similar statement: when asked about the 1000 failed attempts to make a light bulb, he said, "I didn't fail, I just found 1000 ways not to do it."
At its core this is very freeing: we can grow and push the limits because there is no outcome that we shouldn't feel positive about. The journey will yield learnings, and those learnings will help us improve for the future.
### Step 2 : Define your values:
This one took a lot of soul searching for me. You need to take a step back and identify what, above all else, matters to you. What ideals and values do you aspire to above all else? That's not an easy question, and it should not be taken lightly. I find that distilling each value down to a single word helped a lot.
My values are the following:
- Family
- Learning
- Impact
- Innovation
- Creativity
What I mean by these is that, at this stage of my life, these are my guiding principles. When I am long gone, I want my kids to know that above all else, family mattered. I want them to see that I had a love of learning. That I focused on having an impact around me, whether in my career or community. I want them to see me as someone who was innovative and creative.
Together, these values sum up the legacy I want to leave behind at this stage of my life.
### Step 3 : Values Define Action:
One common thread you will see everywhere is the idea that we as people have limited resources, whether those be willpower, physical or financial capacity, energy, attention, or the almighty time. We can only put our resources into so much, and we can't do it all. Greg McKeown has a great book on this called "Essentialism", which I really believe is worth reading on the subject of applying your resources.
To that end, if we have values that are important to us and we have limited resources, it isn't a big logical leap to say that we should focus on putting our energy behind the actions that align with our values.
Not really rocket science, although it took me a while to get here if I’m being honest.
What I've found from doing this in practice in recent months is that my stress level has gone down and my commitment to any actions I've taken has gone up. The results have been greater too. At the end of the day, I believe it's easier to be committed to an action if it aligns with something you care deeply about.
Let me go back to our example. As mentioned above, I want to get healthier, and I’d tried SMART goals, 10x goals, etc. I tried a keto diet, joining a gym; nothing seemed to stick. And even when they did, I could never cross the 15 lbs mark. And it was devastating to me. I have had to actively sit on the sidelines at both work and family functions because of my body weight.
This all came to a head for me when I took my son to Hershey, and all he wanted to do was ride a roller coaster. He’s much too small to ride the big coasters, but we saw a roller coaster called the “coco cruiser” (a little kid roller coaster), and he wanted to ride it. We got in line, and when we got to the front, he was too short to ride by himself, and I couldn’t fit into the coaster. He and I stood on the platform while his friends rode, and then he rode with one of their moms. Having to explain to your son that he can’t have what he wants because of your body weight is one of the lowest points in my life. I wanted to curl up and die.
I still could never get past that 15 lbs mark, and life would get in the way. I took a step back and said… forget the numbers. I want to get healthy because it will let me be more for my family. I found a CrossFit gym that I really like, with great people and a great coach. I try to go as much as I can; unfortunately, being sick recently sidelined me. But just out of curiosity I got on the scale today: I’m down 25 lbs from that horrible day. I feel better and have more energy, and even though I fell off the wagon, I am going back when travel slows down. I feel like a success, and I look back on all the victories and the fulfillment I feel with a positive attitude.
**The attention here is on the action, not the outcome. It’s having a lasting impact as it leads to behavioral change.**
But let’s not make this all about weight, even if that is an easy example. Take my professional life: I decided to focus more on impact and now measure all my actions by the impact they have. This has led to greater results in my office, with me feeling better about the work I’ve done, and if you look at the metrics, much greater returns. My stress level has gone down, and I’ve stopped measuring myself against the impact and activities of my colleagues.
### Final Thoughts:
I know this has been a much longer blog post than normal, but thanks for sticking with me through this. The end result of which is this: I’m not going to be setting any resolutions this year. My new plan is to reaffirm and re-evaluate my values, and then make sure that I devote my energy and resources to actions that align. This will allow me the flexibility to enjoy life, while still finding new ways to grow.
This is something my wife and I both feel strongly about and are working with our kids to internalize, and I hope it at least sparks some thought for you about where you are and where you want to go. | documentednerd |
237,868 | Object Destructuring Javascript ES6 | // Example 1 // bind variables to different "car1" object properties const car1 = { name: "fiat",... | 0 | 2020-01-14T10:20:29 | https://dev.to/alhiane/object-destructuring-javascript-es6-4g68 | javascript, object, destructuring, es6 | // Example 1
// bind variables to different "car1" object properties
```javascript
const car1 = {
name: "fiat",
model: 500,
weight: 850,
color: "red"
};
const { name, color, weight } = car1;
```
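// A quick standalone check of what Example 1 binds (`car1` is re-declared here so this snippet runs on its own)

```javascript
const car1 = {
  name: "fiat",
  model: 500,
  weight: 850,
  color: "red"
};

// matching happens by property name, not by position
const { name, color, weight } = car1;

console.log(name);   // "fiat"
console.log(color);  // "red"
console.log(weight); // 850
```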
// Example 2
// destructure a nested object property
// Rename a variable
// set a default value for a variable
```javascript
const car2 = {
brand: "fiat",
model: 500,
weight: 850,
colors: {
red: true,
green: false
}
};
```
// Use the ":" sign to change the name of the variable
// Use the "=" sign to assign a value to a variable
```javascript
const {
colors: { red: redColor, white: whiteColor = false, brown = "true" }
} = car2;
``` | alhiane |
237,885 | Who is Speaking On Your Behalf? | Who is Speaking On Your Behalf? forloop Summit 2019 (L-R), Remy, Mohammed, Mustapha, Pros... | 0 | 2020-01-14T11:00:26 | https://dev.to/unicodeveloper/who-is-speaking-on-your-behalf-2c5e | devrel, opensource, growth |
# Who is Speaking On Your Behalf?

<em>forloop Summit 2019 (L-R), Remy, Mohammed, Mustapha, Prosper, and Funsho</em>
Prowling around Twitter like a roaring lion looking for an article to devour, a technical blog post to consume, a funny video to laugh at, a business post to bookmark, a software engineer to follow, I came across a video where the Vice Chairman of Morgan Stanley, Carla Harris, was interviewed…
She said something that struck a deep nerve. In her words:
> # *I realized that being smart and working hard was not enough. It still wasn’t getting me at the top of the class.*
..and later went on to say:
> I realized that there was somebody who had to be behind closed doors arguing passionately on my behalf. But at the end of the day while performance currency gets your name on the list that’s being discussed behind closed doors, when your name is called, if no one else in that room can speak on your behalf, they just go to the next name and it has nothing to do with your ability to do the job.
Your politicians have cracked this code; that’s why they are up there grabbing juicy opportunities while you’re here arguing baselessly every day about which is the better frontend framework, or trying to show off your technical prowess by telling every Tom, Dick and Harry that you are the best backend developer the world will ever know (*now, this is not bad at all*)… but apart from your work, which can probably speak for itself:
* Who can speak on your behalf?
* Who can send in that letter of recommendation?
* Who can boast and argue passionately for you that you deserve a seat at the table where you can influence a lot of decisions in place of the other engineer that’s equally as good?
* Where the hell is your advocate?
* Who has encountered you in ways that can spread your gospel to their networks?
In the short span of my career (~6 years), I have discovered that the folks (*aside from being born with a silver spoon*) who appear incredibly lucky due to the kind of opportunities they have access to in their career or business have a ridiculous knack for connecting with people.
They don’t have to be extroverts. They simply possess the willpower and drive to observe people, get to know people, appear in gatherings that involve people that are aligned with their goals, and connect people with one another.
One of my close friends looked at me a couple of months ago and said “Prosper, you are very lucky” and I didn’t fail to ask him how. With all honesty and sincerity, he let me know that over the few years I have been neck deep in the software engineering and technology world, I’ve had access to several opportunities that are hard to come by, especially if one is from this region *(Lagos, Nigeria)*
Perhaps he is right, because I know for sure that anyone that has had a fair bit of whatever is classified as “success” achieved it with some doses of luck here and there (..in combination with hard work and book/street smarts).
Perhaps, a few portions of that luck were unconsciously engineered to work in my favor. Perhaps, the thousands of people I have connected with, *and stayed in connection with*, are speaking on my behalf in hundreds of places I’d originally never have access to. Perhaps, I’m not just doing the work (coding every day & speaking to my laptop alone), I’m also actively sharing that work with other people. Perhaps, I spend a huge chunk of my time actively stalking people I want to be like and connecting them with other people I’ve met.
## Engineering Luck & People To Speak For You
As a software engineer, your daily work involves putting lego blocks together in form of 1s and 0s and stringing language APIs together logically to build products.
You are building on Monday, Tuesday, Wednesday, Thursday, Friday. Heck yeah, you’re also building on Saturday, because it’s fun, it’s addictive, you feel incredibly happy and satisfied by your work, the dopamine effects of creating products slap greatly!… but pause and ask yourself these questions:
* What time of the week, or month, or year have I dedicated to connecting with people?
* Who am I talking to about my work?
* What time have I set aside to connect that random designer with that other frontend developer?
* How am I helping that CEO in ways that it will be hard for them to forget that I exist?
* Who am I sharing my work with?
* Who am I helping to become better at their work?
* Which clubs or communities am I affiliated with?
* Have I been so absorbed and locked up in my work that I fail to connect with the 1% of the 1%?
> # The best way to ensure that lucky things happen is to make sure a lot of things happen — Bo Peabody
It’s great to be ***smart***, ***hardworking***, and ***world class*** in the work that you do, but there’s a high probability that if someone doesn’t discover you, or you don’t deliberately do the leg work of connecting with people…you’ll keep hacking away in a rabbit hole while folks with *half your intelligence*, but *rich in people currency* will have a mighty seat at the long table of opportunities, wealth and opulence.
Build powerful alliances and maintain a diverse mix of relationships. We are in a very competitive economy. In fact, in the technology industry, there are tons of smart people, even smarter than you. When 10 people are drafted for an opportunity, and y’all have an amazing body and portfolio of work…***WHO WILL SPEAK ON YOUR BEHALF?***
> # I have observed something else under the sun. The fastest runner doesn’t always win the race, and the strongest warrior doesn’t always win the battle. The wise sometimes go hungry, and the skillful are not necessarily wealthy. And those who are educated don’t always lead successful lives. It is all decided by chance, by being in the right place at the right time. — Ecclesiastes 9:11
I’m reminded heavily of the PayPal Mafia and how they kept connecting each other & speaking on behalf of each other in new circles. Yelp, YouTube, SpaceX, Tesla, LinkedIn, Slide, etc., and a group of modern millionaires & billionaires emerged this way!
I’m reminded of how people with similar ideas and information tend to hang out with one another. You can see clear examples of this in the various elite clubs, groups, political parties, cults and communities that exist in the world. The majority of the time, the only way to break into a circle is for someone within that circle to speak positively on your behalf.
I’m reminded of how I have spoken on behalf of certain people that got them great jobs instantly without rigorous interview processes. I’m reminded of how so many people have spoken on my behalf that got me great gigs, jobs, opportunity to travel the world while speaking at technical conferences, and meeting great decision makers that I’d have never dreamt of sharing the same room with.
I'm reminded that when someone powerful speaks on your behalf, protocols are broken, "due processes" are discarded, the power of network effects start to work for you. "We don't employ people from this region" becomes a fallacy. New roles that have never existed within an organization will be created for you, because someone spoke on your behalf!
## Again, Who Is Your Advocate?
I’m writing this short piece because I have seen that millennials would rather have advocates for their romantic relationships than for their life-long careers and businesses.
* Who’s speaking on behalf of your startup or company?
* When you make mistakes, who will speak on your behalf to afford you a second, third, fourth, fifth and infinite chance?
* When everything comes crashing down (*which always happens at some point*), who will be your advocate?
> And I sought for a man among them, that should make up the hedge, and stand in the gap before me for the land, that I should not destroy it: but I found none.
A new day, another opportunity to invest in yourself, invest heavily in connecting with people and transform your entire life.
| unicodeveloper |
237,897 | Must have command line tools! | It's been a while since my last post, and I thought it would be a nice new year start sharing my favo... | 0 | 2020-01-14T11:36:19 | https://dev.to/flrnd/must-have-command-line-tools-109f | productivity, beginners, linux, macos | It's been a while since my last post, and I thought sharing my favourite command-line tools would be a nice way to start the new year.
Here we go:
* [bat](https://github.com/sharkdp/bat), A cat(1) clone with syntax highlighting and Git integration.

* [ag](https://github.com/ggreer/the_silver_searcher), The silver searcher. A code-searching tool similar to ack, but faster. http://geoff.greer.fm/ag/
* [ripgrep](https://github.com/BurntSushi/ripgrep), another alternative to ack and ag. (Thanks to Michael for reminding me of this great tool).
* [fd](https://github.com/sharkdp/fd), A simple, fast and user-friendly alternative to 'find'.
* [fzf](https://github.com/junegunn/fzf), A command-line fuzzy finder.

* [forgit](https://github.com/wfxr/forgit), Utility tool powered by fzf for using git interactively. Thanks to [Mr F.](https://dev.to/0xdonut/comment/k9gj).
* [ranger](https://github.com/ranger/ranger), A VIM-inspired file manager for the console https://ranger.github.io
* [tig](https://github.com/jonas/tig), Text-mode interface for git https://jonas.github.io/tig/

* [hub](https://github.com/github/hub), A command-line tool that makes git easier to use with GitHub. https://hub.github.com/
```shell
$ hub clone rtomayko/tilt
# expands to:
#=> git clone git://github.com/rtomayko/tilt.git
```
* [httpie](https://github.com/jakubroztocil/httpie), Modern command line HTTP client – user-friendly curl alternative with intuitive UI, JSON support, syntax highlighting, wget-like downloads, extensions, etc. https://httpie.org/

* [jq](https://stedolan.github.io/jq/), A lightweight and flexible command-line JSON processor.
* [exa](https://github.com/ogham/exa), A modern version of ‘ls’. https://the.exa.website/ (Thanks to [Mr F.](https://dev.to/0xdonut/comment/k8oe))
Suggestions from [Michael Kohl](https://dev.to/citizen428/comment/k8pi):
* [lab](https://github.com/lighttiger2505/lab), like hub but for Gitlab (also wraps hub, so can manage both from one tool).
* [broot](https://github.com/Canop/broot) Instead of `tree`.
* [rq](https://github.com/dflemstr/rq) record query, like jq but supporting more data formats.
| flrnd |
237,915 | 4 PHP Tricks to Boost Script Performance | Normally I write code by using the conventional, obvious PHP functions to solve corresponding problem... | 0 | 2020-04-14T12:07:57 | https://dev.to/devmount/4-php-tricks-to-boost-script-performance-ol1 | php, webdev, programming, performance | Normally I write code by using the conventional, obvious PHP functions to solve corresponding problems. But for some of these problems I came across alternative solutions that especially increase performance.
In this article I want to present some of these alternatives. This is useful, if you're searching for possibilities to decrease execution time even more in production. Let's see, which PHP methods might be replaced by a more performant approach and if there is any cost or trade-off.
ℹ *All these methods were tested with PHP 7.4 on a local web server*
## 1. Removing duplicates
You have a large array with duplicates and want to remove them, so that only unique values remain.
### 🐌 Conventional
```php
array_unique($array);
```
### ⚡ Alternative
```php
array_keys(array_flip($array));
```
### ⏲ Performance
I created an array with more than 4 million elements having more than 3 million duplicates. Here is the top result:
| method | execution time |
|--------|---------------:|
| `array_unique` | 787.31 ms |
| `array_keys` `array_flip` | 434.03 ms |
The alternative approach is **1.8x** (44.87%) faster in this measurement. On average, it was ~1.5x (30%) faster. Trade-off: this is only applicable to simple, one-dimensional arrays, since `array_flip` exchanges keys and values (so the values must be valid keys, i.e. integers or strings).
## 2. Get random array element
You have a large array and want to pick a random value from it.
### 🐌 Conventional
```php
array_rand($array);
```
### ⚡ Alternative
```php
$array[mt_rand(0, count($array) - 1)];
```
### ⏲ Performance
I created an array with 5 million elements. Here is the top result:
| method | execution time |
|--------|---------------:|
| `array_rand` | 25.99 μs |
| `mt_rand` | 0.95 μs |
The alternative approach is **27.3x** (96.33%) faster in this measurement. On average, it was ~8x (87%) faster. This result is particularly surprising, as `mt_rand` is the implementation of the Mersenne Twister Random Number Generator and since PHP 7.1, the internal randomization algorithm [has been changed](https://www.php.net/manual/en/migration71.incompatible.php#migration71.incompatible.rand-srand-aliases) to use exactly that same algorithm.
## 3. Test for alphanumeric characters
You have a string and want to test, if it only contains alphanumeric characters.
### 🐌 Conventional
```php
preg_match('/[a-zA-Z0-9]+/', $string);
```
### ⚡ Alternative
```php
ctype_alnum($string);
```
### ⏲ Performance
I created an array with more than 100k alphanumeric and non-alphanumeric strings. Here is the top result:
| method | execution time |
|--------|---------------:|
| `preg_match` | 15.39 ms |
| `ctype_alnum` | 2.06 ms |
The alternative approach is **7.5x** (86.59%) faster in this measurement. On average, it was ~4x (76%) faster.
The same can be applied to `ctype_alpha()` (check for alphabetic characters) and `ctype_digit()` (check for numeric characters).
## 4. Replace substrings
You have a string and want to replace a part of it by another substring.
### 🐌 Conventional
```php
str_replace('a', 'b', $string);
```
### ⚡ Alternative
```php
strtr($string, 'a', 'b');
```
### ⏲ Performance
I created an array with 5 million random strings. Here is the top result:
| method | execution time |
|--------|---------------:|
| `str_replace` | 676.59 ms |
| `strtr` | 305.59 ms |
The alternative approach is **2.2x** (54.83%) faster in this measurement. On average, it was ~2x (51%) faster.
## Additional performance improvements
Here are some additional points I integrated into my coding convention that I found to improve perfomance slightly (if applicable):
- Prefer JSON over XML
- Declare variables before, not in every iteration of the loop
- Avoid function calls in the loop header (in `for ($i=0; $i<count($array); $i++)` the `count()` gets called in every iteration)
- Unset memory consuming variables
- Prefer switch statements over multiple if statements
- Prefer require/include over require_once/include_once (ensure proper opcode caching)
Some final words: I know the discussion about premature optimization. And I agree that production performance depends on bottlenecks like database queries, which should be the focus when dealing with performance. But I think, if there are alternatives that are faster and, e.g. in the case of regex, easier to handle and maintain, why not use them?
## Wrap it up
We've seen, that even with the current PHP 7.4 (which is already a lot faster than previous PHP versions) there are possibilities to boost script performance with alternative approaches even more. If you want to verify the figures presented in this article yourself, I created a repository with all tests:
{% github devmount/faster-php no-readme %}
I used [this great tool](https://github.com/bvanhoekelen/performance) by Bart van Hoekelen to measure execution time.
Please don't hesitate to comment here or [create an issue](https://github.com/devmount/faster-php/issues/new)/PR at the repo above if you know additional ways to improve performance of certain PHP functions.
---
*Published: 14th April 2020* | devmount |
237,939 | Microsoft's Web Template Studio walkthrough 🌐 | A walkthrough of Microsoft's WebTS extension for Visual Studio | 0 | 2020-01-29T14:20:48 | https://dev.to/vaibhavkhulbe/microsoft-s-web-template-studio-walkthrough-1122 | web, react, node, fullstack | ---
title: Microsoft's Web Template Studio walkthrough 🌐
published: true
description: A walkthrough of Microsoft's WebTS extension for Visual Studio
tags: web, react, node, fullstack
cover_image: https://i.imgur.com/ErrjIif.png
---
Okay, so recently I discovered a new extension for VS Code called **[Web Template Studio](https://github.com/Microsoft/WebTemplateStudio/)** (WebTS). It's a wizard-based tool built by Microsoft which basically helps to quickly create a new web-based project (mostly full-stack web application) using a wizard-like experience. It's like installing new software inside VS Code!
> The aim is to generate a boilerplate code by letting the user choose which tech stack they want. Optionally, to deploy it with cloud services.
As it's a Microsoft made extension, they offer you to add their Azure cloud services in your project while creating the new project.
If you're a fan of Microsoft's [Universal Windows Platform](https://docs.microsoft.com/en-us/windows/uwp/get-started/universal-application-platform-guide) (UWP) (like I was years ago) or have used the Visual Studio IDE for it, you must've heard about [Windows Template Studio](https://github.com/Microsoft/WindowsTemplateStudio/) (WTS). WebTS takes the same _template_-like process, but the difference here is in the code project they output. While WTS was aimed at quickly building a UWP app, this time around they made WebTS to generate a boilerplate web app with cloud integration.
As written in its GitHub repo, this was created using TypeScript and React. ⚛
A great thing we can get to know about this extension is that it was initially created by Microsoft Garage interns, kudos to them. 👏
Some of the popular frameworks/libraries can be used to generate a boilerplate project using WebTS. Here are a few examples:
- React
- Express
- Bootstrap
- Angular
- Vue
- Node.js
- Flask
- Moleculer
I found it interesting to use, so here's my walkthrough in simple words on how to use Microsoft's Web Template Studio extension...
---
### ⬇ Download and Install
First things first, we need to download and install the extension. Open up the 'Extensions' tab in VS Code and search for "Web Template Studio" by Microsoft. Else, you can head over to the [extension website](https://marketplace.visualstudio.com/items?itemName=WASTeamAccount.WebTemplateStudio-dev-nightly). Hit "Install" and "Reload" if required.
### 🔃 Start the WebTS
Start the [Command Pallete](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette) in VS Code by hitting <kbd>Ctrl+Shift+P</kbd> (Windows/Linux) or <kbd>Shift ⇧ + Command ⌘ + P</kbd> (Mac). Next, type or select "Web Template Studio: Launch" and press <kbd>Enter</kbd> to launch the extension.
It will start its server and you will be presented with the Web Template Studio wizard. This comprises of 5 steps where you'll add the project details.
Here's the complete process with GIF:

Here's what's happening...
1. **Creating a new project**: in the first step, you just mention the name and save location. I want to make a 'CrazyAppWithTemplate' as the name and will save it to the appropriate location as shown.
2. **Choosing the tech stack**: the exciting part comes in step 2! Here you choose what frontend and the backend framework you need according to the project. _The WebTS extension is made to work with a full-stack project_. I'm comfortable with React as the frontend library and Node/Express as the backend framework to work so I chose those as seen in the GIF above. You can even blend Vue.js with Flask!
3. **Adding web pages**: towards the left, you'll see some options in the form of cards where you can choose what type of page layout you want. You can add up to **20** pages to your app at one time. Some of the options available are _Blank_, _Grid_, _List_ etc. They do as the names suggest. The _Blank_ one will be your choice if you want to build the pages from scratch, the _Grid_ includes some images and other elements organised in a grid form, and the _List_ is similar. As you can see, I added just one _Grid_ page for the demo.
4. **Optional cloud services**: if you think your app needs some cloud support from Microsoft, feel free to configure [Azure Cloud Services](https://azure.microsoft.com/en-in/services/cloud-services/) available in the final step of the wizard. You can use this to host your web app with Azure Cloud Hosting service.
5. **Summary of your project**: at last, you see all the information about the boilerplate app that will be generated. I recommend you to review this page so that if you ever did something wrong you can easily go back a step or two to configure accordingly.
Here's what I've used:
- _App name_: CrazyAppWithTemplate
- _Front-end framework_: React
- _Back-end framework_: Node/Express
- _Page(s)_: a single page with _Grid_ layout
- _Optional cloud services?_: No
All done, time to hit the "Create Project" button! 🤩
After a minute, you'll get the dialog which tells you that the project boilerplate was created and you can now click on "Open Project". This opens your project in a new VS Code window containing the following structure:
```
.
├── src - React front-end
│ ├── components - React components for each page
│ ├── App.jsx - React routing
│ └── index.jsx - React root component
├── server/ - Express server that provides API routes and serves front-end
│ ├── routes/ - Handles API calls for routes
│ ├── app.js - Adds middleware to the express server
│ ├── sampleData.js - Contains all sample text data for generate pages
│ ├── constants.js - Defines the constants for the endpoints and port
│ └── server.js - Configures Port and HTTP Server
└── README.md
```
As stated in the _Readme.md_ file, the front-end is served on `http://localhost:3000/` and the back-end on `http://localhost:3001/`.
Of course, the next step is to install all the dependencies required (or get that massive _node_modules_ folder 🥴). Open up a terminal (or the inbuilt VS Code terminal), run `npm install` or `yarn install` depending on your package manager.
After the dependencies are installed successfully, start the development server with `npm start` or `yarn start`, visit `http://localhost:3000/` in a browser and (drumroll 🥁)... you've created the boilerplate for the full-stack web app of your choice!

---
### What's next? 🤔
The Readme file in the project's root directory gives you all the information about what to do next. You can do the following:
1. **Add your own data**: of course, right now you see some default text and images placed in the app you served; you can change them with your own data stored in the _/server/sampleData.js_ file, and the images are inside _/src/images_.
2. **Create a new page**: add your own React components on the front-end by creating a new folder inside _/src/components_, and then adding its route inside _/src/App.js_.
3. **Use Azure for deployment**: if you plan to add Azure App Service after creating the project then follow the steps as mentioned in the Readme. Or you can head over to the [deployment documentation on GitHub](https://github.com/Microsoft/WebTemplateStudio/blob/dev/docs/deployment.md) for the same.
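For example, a customised entry in _/server/sampleData.js_ could look something like the sketch below. Note that the exact shape and field names depend on what the generator emitted for your chosen pages, so the names used here (`textAssets`, `gridItems`) are assumptions; match them to the generated file.

```javascript
// Hypothetical shape of an entry in /server/sampleData.js;
// check the generated file and mirror its actual field names.
const textAssets = {
  title: "CrazyAppWithTemplate",
  subtitle: "Built from a Web Template Studio boilerplate",
  // one entry per card on the Grid page
  gridItems: [
    { title: "First card", description: "Replace the default copy here." },
    { title: "Second card", description: "Point the image at /src/images." }
  ]
};

module.exports = textAssets;
```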
---
### Additional resources 📚
1. The official GitHub repo of WebTS:
{% github Microsoft/WebTemplateStudio no-readme %}
2. [Microsoft's blog on WebTS announcement](https://blogs.windows.com/windowsdeveloper/2019/05/15/announcing-microsoft-web-template-studio/).
3. Dan Vega's tutorial video
{% youtube fi4ZjqNcSQc %}
---
### Your opinion? 💭
What do you think about Web Template Studio extension by Microsoft? Will you use if for your future projects or not? I'm sure gonna give it a chance for one of my full-stack apps in future. Are there any caveats you feel? Write it down in the comments and let me know.
---
<blockquote class="twitter-tweet" data-partner="tweetdeck"><p lang="en" dir="ltr">Can be all too familiar...<br><br>Source: <a href="https://t.co/JQ31pv8yO4">https://t.co/JQ31pv8yO4</a><a href="https://twitter.com/hashtag/DevHumour?src=hash&ref_src=twsrc%5Etfw">#DevHumour</a> <a href="https://twitter.com/hashtag/Developer?src=hash&ref_src=twsrc%5Etfw">#Developer</a> <a href="https://t.co/Kxc0cNDKtT">pic.twitter.com/Kxc0cNDKtT</a></p>— Microsoft Developer UK (@msdevUK) <a href="https://twitter.com/msdevUK/status/1221845326824398853?ref_src=twsrc%5Etfw">January 27, 2020</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
---
#### [📫 Subscribe to my weekly developer newsletter 📫](https://mailchi.mp/f59beeac6b9b/devupdates)
##### PS: From this year, I've decided to write here on DEV Community. Previously, I wrote on Medium. If anyone wants to take a look at my articles, [here](https://medium.com/@vaibhavkhulbe)'s my Medium profile. | vaibhavkhulbe |
237,952 | Tutorial: Fuzzy Text Search In MongoDB The Easy Way! | if you've ever worked with mongodb, you may already know that mongodb server does not have a built-in... | 0 | 2020-01-14T15:26:02 | https://dev.to/djnitehawk/mongodb-fuzzy-text-search-with-c-the-easy-way-3l8j | mongodb, csharp, dotnet, tutorial | if you've ever worked with mongodb, you may already know that mongodb server does not have a built-in mechanism to search for documents based on fuzzy matching. say for example you have a couple of people saved in your database with the following names:
```
- Katheryne Markus
- Catherine Marcus
- Marcus Katerin Thompson
- Jack Jonas
```
and the requirement is to retrieve all records that **sounds similar** to `Catheryn Marcus`.
we want the resulting record set to only include the first 3 people and the most relevant person to be on top.
let's see how we can achieve this goal step-by-step...
## Getting Started
if you haven't already, please see the introductory article mentioned below in order to get a new project scaffolded and set up before continuing with the rest of this article.
{% link /djnitehawk/tutorial-mongodb-with-c-the-easy-way-1g68 %}
## Define The Entity Class
add a new class file called `Person.cs` and add the following code to it:
```csharp
public class Person : Entity
{
public FuzzyString Name { get; set; }
}
```
in order to make fuzzy matching work with mongodb we need to store text data in a special `FuzzyString` type property. that class/type is provided by the *MongoDB.Entities* library we are using.
## Create A Text Index
fuzzy text searching requires the use of a mongodb text index which can be easily created like this:
```csharp
await DB.Index<Person>()
.Key(p => p.Name, KeyType.Text)
.CreateAsync();
```
the above code should be self explanatory, if not please see the documentation [here](https://mongodb-entities.com/wiki/Indexes.html).
## Store The Entities
```csharp
await new[]
{ new Person { Name = "Jack Jonas" },
new Person { Name = "Marcus Katerin Thompson" },
new Person { Name = "Catherine Marcus" },
new Person { Name = "Katheryne Markus" }
}.SaveAsync();
```
nothing fancy here. just doing a bulk save of multiple records *MongoDB.Entities* style ;-)
## Do The Fuzzy Search
```csharp
var people = await DB.Find<Person>()
                     .Match(Search.Fuzzy, "Catheryn Marcus")
                     .ExecuteAsync();
```
here we're saying find `Person` entities that fuzzily match the words `Catheryn Marcus` from the text index. you can read more about how this works under the hood in the documentation [here](https://mongodb-entities.com/wiki/Indexes-Fuzzy-Text-Search.html).
## Sort By Relevance
now that we have the results from the database, the following utility method can be used to get a list sorted by levenshtein distance.
```csharp
var list = people.SortByRelevance("Catheryn Marcus", p => p.Name);
foreach (var person in list)
{
Console.WriteLine(person.Name);
}
Console.Read();
```
you will now see the following result displayed in the console window:
```
Catherine Marcus
Katheryne Markus
Marcus Katerin Thompson
```
which is exactly the end result we expected.
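for context, the relevance sort is based on levenshtein (edit) distance: the number of single-character insertions, deletions and substitutions needed to turn one string into another. here's a minimal sketch of that idea, shown in JavaScript for brevity (the `SortByRelevance` helper does the equivalent in C#, over the `Name` property of each entity):

```javascript
// classic dynamic-programming levenshtein distance, O(m*n)
function levenshtein(a, b) {
  let prev = Array.from({ length: b.length + 1 }, (_, j) => j);
  for (let i = 1; i <= a.length; i++) {
    const curr = [i];
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      curr[j] = Math.min(
        prev[j] + 1,       // deletion
        curr[j - 1] + 1,   // insertion
        prev[j - 1] + cost // substitution
      );
    }
    prev = curr;
  }
  return prev[b.length];
}

// rank the stored names by similarity to the search phrase
const names = [
  "Jack Jonas",
  "Marcus Katerin Thompson",
  "Catherine Marcus",
  "Katheryne Markus"
];
const query = "Catheryn Marcus";
const ranked = [...names].sort(
  (x, y) => levenshtein(query, x) - levenshtein(query, y)
);

console.log(ranked[0]); // "Catherine Marcus" (distance 2)
```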
### Next Steps...
i've purposefully tried to keep this tutorial as brief as possible to get your feet wet on the concepts of the library. if the above code seems easy and interesting please refer to the [official website](https://mongodb-entities.com) of **MongoDB.Entities**. you can also check out the source code on github:
{% github dj-nitehawk/MongoDB.Entities %} | djnitehawk |
237,975 | Application-Level Rate Limiting in Facebook APIs | Facebook Graph APIs have rate limits. It is the number of Graph API calls that you can make to Facebo... | 0 | 2020-01-14T14:17:24 | https://dev.to/lek890/application-level-rate-limiting-in-facebook-apis-4168 | facebook, facebookapi | Facebook Graph APIs have rate limits. It is the number of Graph API calls that you can make to Facebook in a specific period of time. All API calls from the FB app will fail if you exceed the rate limit. Application-level rate limits apply to calls made using any access token other than a Page access token and ads APIs calls.
Two types:
### Account level
Number of api calls from a user = undocumented-number-which-only-fb-knows
These calls could be from a user using many different apps.
If a specific user account is making too many calls, the user could get rate limited.
### Application level
Number of API calls an app can make = 200 * number of users.
If you have only a test user, your app can make 200 requests. Rate limiting happens in real time on a sliding window over the past hour, and the call counts add up in real time. This can be observed in the Application Rate Limit section of the dashboard.
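The sliding-window accounting described above can be sketched as follows (purely illustrative — Facebook's internal implementation is not public, and the class and numbers here are made-up examples):

```python
from collections import deque
import time

class SlidingWindowLimiter:
    """Allow at most `limit` calls per `window` seconds."""
    def __init__(self, limit: int, window: float = 3600.0):
        self.limit = limit
        self.window = window
        self.calls = deque()  # timestamps of accepted calls

    def allow(self, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # drop timestamps that have fallen out of the window
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False

# 200 calls per hour, as in the "200 * number of users" rule with one test user
limiter = SlidingWindowLimiter(limit=200, window=3600)
print(all(limiter.allow(now=0) for _ in range(200)))  # first 200 pass
print(limiter.allow(now=1))                           # 201st is rejected
print(limiter.allow(now=3601))                        # window slid forward, allowed again
```

Because the window slides rather than resetting on the hour, old calls age out continuously instead of all at once.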
| lek890 |
238,029 | Read data from a database with a data model | Calling a database from an Express API | 0 | 2020-01-14T16:12:30 | https://dev.to/cesareferrari/read-data-from-a-database-with-a-data-model-4enf | node, express, backend, javascript | ---
title: Read data from a database with a data model
published: true
description: Calling a database from an Express API
tags: node, express, backend, javascript
cover_image: https://ferrariwebdevelopment.s3.us-east-2.amazonaws.com/assets/20191118-read-data.jpeg
---
## Calling a database from an Express API
In the [previous article](https://cesare.substack.com/p/working-with-a-data-model) we started creating an `API` that responds with data coming from a data model connected to a database.
We have seen how a data model is an intermediary between the Express server and the database.
The server talks to the data model which in turn talks to the database.
Our data model has a method called `find` that retrieves an array of objects. `find` returns a *Promise* that we have to handle in our server code.
### The `find` method
`find` doesn't take arguments and just returns a `JSON` object that contains a list of all the records in our database table.
In our `API` we need to send these record objects back to the client that made the original request.
First let's see what happens when we call the `find` method and we actually get a `JSON` object back, that is, when everything goes well and we are on the so called *happy path*.
In this case, we handle the operation inside the `then()` method.
We need to do two things inside `then()`.
First, we return a success response status code (`200`).
Technically we don't need to do this; the `200` response code is returned by default by Express on success anyway. The reason we do it is to make it explicit that this is indeed a successful response.
The second thing we need to do is convert our `JSON` object into `text` format.
What comes back from the find method is a `JSON` object, but what we need to send back over `HTTP` is plain text, so we take advantage of another method on the response object, the `json()` method provided by Express.
`json()` is similar to the `send()` method we have already seen, but performs an extra step of converting a `JSON` object into plain text and sending the text back to the client.
```js
server.get('/toys', (req, res) => {
db.find()
.then(toys => {
res.status(200).json(toys)
})
.catch()
})
```
### Handling errors
Sometimes, when we make a request to a database we may not get what we are expecting. We must be ready to handle an unexpected situation.
This is when `catch()` comes in. It takes the `error` that was generated and sends back a response with a status code of `500`, a generic error code which means Internal Server Error.
By the way, you can read all about `HTTP` status codes at the [`HTTP` Status Code Registry](https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml)
```js
server.get('/toys', (req, res) => {
db.find()
.then(toys => {
res.status(200).json(toys)
})
.catch( err => {
res.status(500).json({error: err})
})
})
```
To better display the error, we also call the `json()` method so we can send back a stringified `JSON` object that contains the actual error text, represented by the variable `err`.
### API response
Now we are finally set up to actually respond to the `/toys` endpoint.
If we send a `GET` request to `localhost:4000/toys`, we will actually get something back that looks like a list of toys:
```
id 1
name "Sock Monkey"
created_at "2019-05-09 17:33:19"
updated_at "2019-05-09 17:33:19"
id 2
name "Microscope Set"
created_at "2019-05-09 17:33:19"
updated_at "2019-05-09 17:33:19"
id 3
name "Red Ryder BB Gun"
created_at "2019-05-09 17:33:19"
updated_at "2019-05-09 17:33:19"
(output formatted for clarity)
```
And now that we have fulfilled the `R` part of our `CRUD` operation (`R` as in: *Read from the database*), we will learn how to create a new record by calling an `API` endpoint. We'll see how to do this in the next article.
---
*I write daily about web development. If you like this article, feel free to share it with your friends and colleagues.*
*You can receive articles like this in your inbox by [subscribing to my newsletter](https://cesare.substack.com).*
| cesareferrari |
238,072 | What is KNN Algorithm? | What is KNN Algorithm? Equally known as K-Nearest Neighbour, is one of the most common algorithms in... | 0 | 2020-01-14T17:52:37 | https://dev.to/akuks/what-is-knn-algorithm-1ph7 | machinelearning, python, knn | **What is KNN Algorithm?**
Also known as **K-Nearest Neighbour**, it is one of the most common algorithms in Machine Learning and is broadly used in regression and classification problems.
This article assumes you have some familiarity with **supervised learning,**
if not then please visit [here](https://ashutosh.dev/blog/post/2019/12/what-is-machine-learning).
To be more precise, KNN falls under Instance-based learning. Consequently,
there is one more key question to be asked: "What is **Instance-based learning**"?
Instance-based learning or lazy learning or memory-based learning or by heart learning is one of the most common algorithms used in Machine Learning.
In Instance-based learning, the system memorizes the training examples and,
using similarity patterns, promptly identifies the most likely solution for a
new data set.
Let's get back our focus on KNN.
KNN uses similarity to predict the result of new data points. That is, a new
data point is assigned a value based on how closely it matches the points
in the training set.
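That idea can be expressed in a few lines of plain Python (a from-scratch illustration with made-up toy data, separate from the scikit-learn version used later in this article):

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    dists = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    top_labels = [label for _, label in dists[:k]]
    return Counter(top_labels).most_common(1)[0][0]

# Toy data: two measurements per flower, two classes
X = [(5.1, 3.5), (4.9, 3.0), (6.7, 3.1), (6.3, 3.3), (5.0, 3.6)]
y = ["setosa", "setosa", "virginica", "virginica", "setosa"]
print(knn_predict(X, y, (5.0, 3.4), k=3))  # → setosa
```

The query point is closest to the three "setosa" samples, so the majority vote assigns it that class.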
It's ok if you don't have a complete understanding of KNN yet; we'll understand
it better with the help of the iris dataset. The iris data is available [here](https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data). It is
used frequently by Machine Learning beginners and enthusiasts.
**Implementation of KNN (Python)**
I am using PyCharm to write the code, but you can use Jupyter too.
```python
import numpy as np
import pandas as pd
file_name = '/Users/ashutosh/Downloads/iris_data.csv'
dataset = pd.read_csv(file_name)
print(len(dataset))
# Only shows First 5 Lines
print(dataset.head())
```
Execute the above program and you will get the following output
```
> 150
> Sepal_Length(CM) Sepal_Width(CM) ... Petal_Width (CM) Species
> 0 5.1 3.5 ... 0.2 Iris-setosa
> 1 4.9 3.0 ... 0.2 Iris-setosa
> 2 4.7 3.2 ... 0.2 Iris-setosa
> 3 4.6 3.1 ... 0.2 Iris-setosa
> 4 5.0 3.6 ... 0.2 Iris-setosa
> [5 rows x 5 columns]
```
To print the info
```
# Print the info
print('---------- Info -------------')
print(dataset.info())
print('---------- Info Ends Here -------------')
```
Executing the above code will give the following output:
```
> ---------- Info -------------
> <class 'pandas.core.frame.DataFrame'>
> RangeIndex: **150 entries**, 0 to 149
> Data columns (total 5 columns):
> Sepal_Length(CM) 150 non-null float64
> Sepal_Width(CM) 150 non-null float64
> Petal_Length (CM) 150 non-null float64
> Petal_Width (CM) 150 non-null float64
> Species 150 non-null object
> dtypes: float64(4), object(1)
> memory usage: 6.0+ KB
> None
> --------------- Info Ends Here ----------
```
In the iris database, we have 150 entries and the index starts with 0.
According to the Python documentation, **describe()** function in pandas
generate statistics that summarize the central tendency, dispersion and
shape of a dataset's distribution, excluding ``NaN`` values. Analyzes both
numeric and object series, as well as ``DataFrame`` column sets of mixed
data types. The output will vary depending on what is provided.
```python
# Describe dataset
print("\n----- Describe ------\n")
print(dataset.describe())
print('-------------- Describe Ends Here ----------')
```

Executing the above code will give the following output:

```
> ----- Describe ------
> Sepal_Length(CM) Sepal_Width(CM) Petal_Length (CM) Petal_Width (CM)
> count 150.000000 150.000000 150.000000 150.000000
> mean 5.843333 3.054000 3.758667 1.198667
> std 0.828066 0.433594 1.764420 0.763161
> min 4.300000 2.000000 1.000000 0.100000
> 25% 5.100000 2.800000 1.600000 0.300000
> 50% 5.800000 3.000000 4.350000 1.300000
> 75% 6.400000 3.300000 5.100000 1.800000
> max 7.900000 4.400000 6.900000 2.500000
> -------------- Describe Ends Here ----------
```
In order to check the unique species in the dataset
```python
print(dataset['Species'].unique())
```
After executing the command, if you receive the following output
*['Iris-setosa' 'Iris-versicolor' 'Iris-virginica'],* you are on the right track.
Next step is to import matplotlib and the following functions from the **sklearn** library.
```python
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.neighbors import KNeighborsClassifier
attributes = ['Sepal_Length(CM)', 'Sepal_Width(CM)',
'Petal_Length (CM)', 'Petal_Width (CM)', 'Species'
]
features = ['Sepal_Length(CM)', 'Sepal_Width(CM)',
'Petal_Length (CM)', 'Petal_Width (CM)'
]
```
"attributes' is the python list, consist of all the headers in the CSV file.
If there are **no headers in the CSV file**. Please add it.
"features" is the python list consist of Iris parameters.
```python
from pandas.plotting import parallel_coordinates  # used by plot_parallel_coordinates

def plot_hist_graph(data):
    data.hist(bins=50)
    plt.figure(figsize=(15, 10))
    plt.show()

def plot_parallel_coordinates(data, attr):
    plt.figure(figsize=(15, 10))
    parallel_coordinates(data[attr], "Species")
    plt.title(
        'Iris Parallel Coordinates Plot',
        fontsize=20, fontweight='bold'
    )
    plt.xlabel('Attributes', fontsize=15)
    plt.ylabel('Values', fontsize=15)
    plt.legend(
        loc=1,
        prop={'size': 15},
        frameon=True,
        facecolor="white",
        edgecolor="black")
    plt.show()

data_values = dataset[features].values
plot_hist_graph(dataset)
```
We now want to start implementing the KNN algorithm, but there is one
hindrance: KNN does not accept *string* labels.
Hence we need to convert the strings into integer labels.
Remember, we only have three unique species in the dataset, so we
can easily label them as "0", "1" and "2". To set the labels we have
`LabelEncoder()` from the *sklearn* library. Here is the implementation.
```python
def set_label_encoding(data_species):
    le = LabelEncoder()
    return le.fit_transform(data_species)

feature_values = set_label_encoding(dataset['Species'].values)
Once the data is labelled, it's time to implement the KNN algorithm.
```python
def test_train_data_split(data, data_species, test_ratio, state):
    return train_test_split(
        data, data_species, test_size=test_ratio, random_state=state
    )

def get_knn_classifier(k, x_train, y_train):
    classifier = KNeighborsClassifier(n_neighbors=k)
    return classifier.fit(x_train, y_train)

# Split the dataset into training and test sets
x_train_set, x_test_set, y_train_set, y_test_set = test_train_data_split(
    data_values, feature_values, 0.33, 42
)

# KNN Classification
# K = 3
knn_classifier = get_knn_classifier(3, x_train_set, y_train_set)

# Predicting the test result
prediction = knn_classifier.predict(x_test_set)
print('--- Prediction ---')
print(prediction)
```

To check the model accuracy, we need to build the confusion matrix.

```python
# Confusion Matrix
c_matrix = confusion_matrix(y_test_set, prediction)
print(c_matrix)

accuracy = accuracy_score(y_test_set, prediction) * 100
print(accuracy)
```
Execute the above program. With this implementation, we get an accuracy of about 96.67%.
Important links I followed:
Iris dataset: https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data
KNN Algorithm: https://kevinzakka.github.io/2016/07/13/k-nearest-neighbor/
KNN Algo Introduction: https://www.analyticsvidhya.com/blog/2018/03/introduction-k-neighbours-algorithm-clustering/ | akuks |
246,033 | No BART terminals were hacked in the making of this ad | Originally posted by my colleague Bonnie Pecevich on mux.com/blog In the process of creating a BART... | 0 | 2020-01-21T22:55:36 | https://mux.com/blog/no-bart-terminals-were-hacked-in-the-making-of-this-ad/ | devrel | Originally posted by my colleague Bonnie Pecevich on [mux.com/blog](https://mux.com/blog/no-bart-terminals-were-hacked-in-the-making-of-this-ad)
In the process of creating a BART ad for the first time, we had some learnings that we thought we would share that could hopefully help someone else on their out-of-home ad buying journey. (We’ll also remember to follow our own advice for next time.)
Our learnings:
1. Ask upfront for explicit restrictions on creative.
1. Build in extra time for more than one round of feedback and time to iterate on design.
1. Be realistic about what’s feasible, especially with an aggressive timeline.
1. Submit a draft of the concept and see if they’ll approve it before you spend extra time finalizing the details (and telling everyone about it at the company all hands meeting.)
All of these learnings actually stemmed from one preeminent learning:
**BART doesn’t allow any `code` on their ads. 😲**
## Why put code in an ad?
First, an introduction–Mux is a startup that does video, and one of our aspirational goals is for every developer to know that. With our headquarters located in San Francisco, we’re aware that our city has a great supply of developers so we thought we’d try advertising in some well-traveled, public spaces.
Doing ads in a BART station (the underground transit system in the Bay Area) is generally assumed to be expensive, maybe even beyond the reach of a startup which is what we thought, too. But we learned doing an ad could fit in our budget if we were flexible on timing–we were able to sign up for a single digital display at the Montgomery BART station with a 12/30/19 start date. Even though that only gave us about 2 weeks to create an ad (ignore the wailing coming from our one and only in-house designer), we were excited!
Since the ad is just :15 seconds long with a not-so-captive audience, we wanted to create something that quickly caught the attention of developers. We thought we could achieve this by showing a terminal with a blinking cursor and then typed code to show use of our API. Sure, it crossed our minds that a blank screen with a blinking cursor might look like the screen is broken (which adds to the eye-catching-ness), so we added browsers to frame the terminal and added our logo to the top left corner. Our hope was that someone would take away that Mux is for developers and, if we were lucky, that we do something with video.
## Insert wrench here
The process was to submit a final file at least a week in advance of the live date to include time to get BART’s approval. There weren’t any specific guidelines beforehand on what’s allowed and what’s not but we assumed some common sense restrictions would apply like no explicit/harmful language imagery, etc. We figured getting BART’s approval would be relatively simple, like checking a box.
Wrong. Our ad was rejected! We received feedback that the beginning of the ad that showed the terminal could give the impression that “the screen is malfunctioning or has been hacked into.”
Busted. Turns out they also thought having a terminal on the screen would be eye-catching but not in a good way. We did feel a bit deflated, though, as we were all ready for our BART debut.
We went through the five stages of grief and settled on “Bargaining.” We tried to come up with a creative solution where we could still use the same ad. Hey, what if we could add a persistent banner to the ad that said something like “Don’t worry, no BART terminals were hacked in the making of this ad.”?

Or what if we stylized the terminal so it looked more illustrated and cartoon-y?

Alas, BART held firm and said, in no uncertain terms, **“Nothing involving coding.”** Since we couldn’t come up with a brand new design in 48 hours, our plans for a BART ad had to be put on hold.
## Silver lining
All is not lost! We used the final video for [our homepage](https://mux.com/) and are genuinely excited at how it came out.
Although the BART approval process is still a bit of a black box, we're excited to continue to work with the same ad agency and pursue our out-of-home ad dreams. We’re looking forward to iterating our design and hopefully making a public appearance at CalTrain in the very near future. And if you see our ad, you’ll know the journey it took to get that little video up on those screens.
| dylanjha |
248,255 | What are the Best 404 pages examples you have ever found? | We have surfed the internet and hand-picked a few best 49 most creative 404 page designs: https://h... | 0 | 2020-01-25T05:09:41 | https://dev.to/hostingpill/what-are-the-best-404-pages-examples-you-have-ever-found-2844 | showdev, webdev, design, productivity | We have surfed the internet and hand-picked a few best 49 most creative 404 page designs:
https://hostingpill.com/best-404-page-examples
Please share your findings if you have any.
| hostingpill |
248,274 | Charity Website Design | NGO has always been a helping hand to the needy. However, they have not come under huge limelight bec... | 0 | 2020-01-25T07:01:08 | https://dev.to/helenstevens32/charity-website-design-2hcn | websitedesign | NGO has always been a helping hand to the needy. However, they have not come under huge limelight because of the absence of proper promotion. One of the reasons for this is because they did not have a good website. At a time when people get information Only from the internet, not having a website is a very poor idea. so, getting a website designed is crucial.
Charity Organizations mostly choose DataIT Solutions because we create a charity website design that both impresses and boosts client inquiries on the website, all while working within various budgets. In fact, we offer special pricing for nonprofits and charities. And we take care of all of the details - customer service is what we do best - so that you can focus on your organization - what you do best.
DataIT Solutions have a proven track record of supplying high-quality charity website design services to a broad spectrum of charities, non-profit organizations, and small businesses.
We have a full range of web development services that include: User Experience Design, Search Engine Optimization, Content Management Systems and Responsive, Bespoke Design.
If you have any questions,
contact us: http://bit.ly/309mZgR
| helenstevens32 |
248,320 | Publishing my blog using HTTP upload in PHP | After a hard struggle with travis and my FTP server, I decided to use a HTTP upload | 4,114 | 2020-01-25T10:09:37 | https://dev.to/gabbersepp/publishing-my-blog-using-http-upload-in-php-3aj7 | php, javascript, website, deployment | ---
published: true
title: "Publishing my blog using HTTP upload in PHP"
cover_image: "https://raw.githubusercontent.com/gabbersepp/dev.to-posts/master/blog-posts/private-page/travis-http-php/assets/header.jpg"
description: "After a hard struggle with travis and my FTP server, I decided to use a HTTP upload"
tags: php, javascript, website, deployment
series: creating_private_page
canonical_url:
---
In the last article I wrote about how to publish a website with `travis` and FTP. At first everything seemed fine, but then the nightly build suddenly failed. It took a long time until I realized that this was not because of my code or my `ftp server`, but because of how travis has set up its network layers. Read on here if you are interested: https://blog.travis-ci.com/2018-07-23-the-tale-of-ftp-at-travis-ci
But the fight is not lost! My webspace package includes a PHP instance and thus I am able to write a small HTTP upload tool. A bit oversized, I think, but it enables me to continue using my webspace bundle.
# The PHP fileupload
Shame on me, it's been a long time since I programmed PHP, so I guess the following code is rather quick and dirty.
First I need a function for reading the `HTTP` headers to check a secret that I send along with the request.
```php
// code/upload.php#L3-L19
function getRequestHeaders() {
$headers = array();
foreach($_SERVER as $key => $value) {
if (substr($key, 0, 5) <> 'HTTP_') {
continue;
}
$header = str_replace(' ', '-', ucwords(str_replace('_', ' ', strtolower(substr($key, 5)))));
$headers[$header] = $value;
}
return $headers;
}
$headers = getRequestHeaders();
if ($headers['Secret'] !== "<your secret>") {
die("wrong secret");
}
```
The file can be accessed with `$_FILES`. To store the image somewhere, use `move_uploaded_file`.
```php
// code/upload.php#L21-L22
move_uploaded_file($_FILES['zip-file']['tmp_name'], './'.basename($_FILES['zip-file']['name'])); // basename() guards against path traversal via the supplied file name
```
It is very basic but should be enough to accept files from anywhere. To speed up the upload process I moved the whole `/dist` directory into a ZIP archive. So I need to unzip it with PHP:
```php
// code/upload.php#L23-L30
$zip = new ZipArchive;
if ($zip->open('test.zip') === TRUE) {
$zip->extractTo('./');
$zip->close();
echo 'ok';
} else {
    echo 'error during unzip';
}
```
# Zip & send the files with NodeJS
For zipping the files I use [archiver](https://www.npmjs.com/package/archiver) and for making the upload request [request](https://www.npmjs.com/package/request).
`archiver` is very straightforward and only needs a few lines of code:
```js
// code/zip.js
var fs = require('fs');
var archiver = require('archiver');
var fileName = 'test.zip'
var fileOutput = fs.createWriteStream(fileName);
const archive = archiver('zip');
fileOutput.on('close', function () {
console.log(archive.pointer() + ' total bytes');
console.log('archiver has been finalized and the output file descriptor has closed.');
});
archive.pipe(fileOutput);
archive.directory('dist/', false);
archive.on('error', function(err){
throw err;
});
archive.finalize();
```
Sending the file is also very simple and done quickly:
```js
// code/send.js
const request = require("request");
const fs = require("fs");
const path = require("path");
var options = {
url: 'https://biehler-josef.de/upload.php',
headers: {
secret: process.env.JB_UPLOAD_SECRET
}
}
var r = request.post(options, function optionalCallback (err, httpResponse, body) {
console.log('Server responded with:', body, err);
})
var form = r.form()
form.append('zip-file', fs.createReadStream(path.join(__dirname, "..", 'test.zip')))
```
# Summary
I replaced the FTP deployment with an HTTP upload endpoint. The `/dist` directory is zipped with NodeJS and unzipped with `php`. This was required because FTP upload does not work well with travis.
----
# Found a typo?
As I am not a native English speaker, it is very likely that you will find an error. In this case, feel free to create a pull request here: https://github.com/gabbersepp/dev.to-posts . Also please open a PR for all other kind of errors.
Do not worry about merge conflicts. I will resolve them on my own. | gabbersepp |
248,454 | Can you make a countdown timer in pure CSS? | I must first apologise for the somewhat rhetorical question as the title. About 3 minutes after I... | 0 | 2020-01-26T00:55:35 | https://www.chenhuijing.com/blog/can-you-make-a-countdown-timer-in-pure-css/ | html, css, javascript, webdev | ---
title: Can you make a countdown timer in pure CSS?
published: true
date: 2020-01-25 00:00:00 UTC
tags: html, css, javascript, webdev
canonical_url: https://www.chenhuijing.com/blog/can-you-make-a-countdown-timer-in-pure-css/
cover_image: https://thepracticaldev.s3.amazonaws.com/i/i0fdcpj38qggzt3rf69x.jpg
---
I must first apologise for the somewhat rhetorical question as the title. About 3 minutes after I wrote it, my brain exclaimed: “This is clickbait! Clearly if you wrote an entire blog post, the answer should be yes, right??”
Which led me to my next thought. When people write such titles, do they end with a negative conclusion, where the answer is no? What are the statistics on article titles like this? I have so many questions!
This is also why I don’t have many friends. Oh well.
Warning, blog post grew ridiculously long. TL:DR of things is, yes you can do it in CSS but there’s a much better way. Involves Javascript, [more details here](#raf) if you want to skip through the CSS stuff.
## Why even countdown in CSS?
Okay, I did not think about this topic out of the blue. I have a friend (I hope she thinks I’m her friend). She tweeted her problem:
{% twitter 1217776641998344194 %}
The way my brain works is to wonder if everything can be built with CSS (the correct answer is no, not really, but you can still try because it’s fun). Even though not _everything_ can nor should be built with only CSS, this timer thing seemed narrow enough to be plausible.
I describe this as a brute-force method, because the underlying markup consists of all the digits from 0 to 9. You then have to animate them to mimic a timer. So maybe it is not the most elegant approach. But it can fulfil the requirements from the tweet!
Here's the list of concepts used for this implementation:
- CSS transforms
- CSS animations
- Flexbox
- Demo-only: CSS custom properties
- Demo-only: Selectors
Demo-only just means that it's additional functionality sprinkled on to make the demo slightly more fancy. Feel free to cut it out if, for whatever reason, you want to fork the code and use it somewhere.
{% codepen https://codepen.io/huijing/pen/qBELxJo %}
## The general approach
If you Google “pure CSS countdown”, my approach of listing all the digits in the markup then doing some form of obscuring the irrelevant digits seems to be the most common solution. This is the markup for the 2 digits making up the timer:
```html
<div class="timer">
<div class="digit seconds">
<span>9</span>
<span>8</span>
<span>7</span>
<span>6</span>
<span>5</span>
<span>4</span>
<span>3</span>
<span>2</span>
<span>1</span>
<span>0</span>
</div><div class="digit milliseconds">
<span>9</span>
<span>8</span>
<span>7</span>
<span>6</span>
<span>5</span>
<span>4</span>
<span>3</span>
<span>2</span>
<span>1</span>
<span>0</span>
</div>
</div>
```
The idea is to animate the digits from 9 to 0 by vertically scrolling the block of digits and only showing the required digits at any point in time.

## CSS transforms
The only CSS properties that are “safe” for animation are `transform` and `opacity`. If you’re wondering why that is, allow me to point you to my favourite explanation by [Paul Lewis](https://twitter.com/aerotwist) and [Paul Irish](https://twitter.com/paul_irish) on [High Performance Animations](https://www.html5rocks.com/en/tutorials/speed/high-performance-animations/).
To animate my digits `<div>`s upward, I turned to the trusty `translateY` property. For this use case, my `<div>` is only moving along the y-axis anyway.
```css
.selector {
transform: translateY(0);
}
```
You could do the same with the `translate` property, but then you’d have to state the value for the x-axis as well because a single value in `translate` resolves to the x-coordinate.
```css
.selector {
transform: translate(3em);
}
/* is equivalent to */
.selector {
transform: translate(3em, 0);
}
```
Read more about the transform functions in the [CSS Transforms Module Level 1](https://www.w3.org/TR/css-transforms-1/#transform-functions) specification. The actual math is in there, and even if that’s not your cup of tea, there are numerous examples in there that can help with understanding how the properties work.
## CSS animations
The next step is to animate the transform over time. Cue CSS animations.
The CSS animation properties offer a pretty decent range of functionality to make such an approach feasible. I know them because I researched this when I [tried to animate](https://dev.to/huijing/figuring-out-css-animation-properties-with-a-magic-kittencorn-1b0h) the [SingaporeCSS](https://singaporecss.github.io/) and [React Knowledgeable](https://reactknowledgeable.org/) unofficial official mascots last year.
[Keyframes](https://www.w3.org/TR/css-animations-1/#keyframes) are a critical concept when you do animation. Keyframes are what you use to specify values for the properties being animated at specified points during the entire animation. They are specified with the `@keyframes` at-rule.
```css
@keyframes seconds {
0% { transform: translateY(0) }
10% { transform: translateY(-1em) }
20% { transform: translateY(-2em) }
30% { transform: translateY(-3em) }
40% { transform: translateY(-4em) }
50% { transform: translateY(-5em) }
60% { transform: translateY(-6em) }
70% { transform: translateY(-7em) }
80% { transform: translateY(-8em) }
90% {
transform: translateY(-10em);
width: 0;
}
100% {
transform: translateY(-10em);
width: 0;
}
}
@keyframes milliseconds {
0% {transform: translateY(0) }
10% { transform: translateY(-1em) }
20% { transform: translateY(-2em) }
30% { transform: translateY(-3em) }
40% { transform: translateY(-4em) }
50% { transform: translateY(-5em) }
60% { transform: translateY(-6em) }
70% { transform: translateY(-7em) }
80% { transform: translateY(-8em) }
90% { transform: translateY(-9em) }
100% { transform: translateY(-9em) }
}
```
I’ll explain the values after covering the animation properties needed for the countdown.
In my demo, I’ve gone with the shorthand of `animation` so the code looks like this:
```css
.seconds {
animation: seconds 10s 1 step-end forwards;
}
.milliseconds {
animation: milliseconds 1s 10 step-end forwards;
}
```
If you open DevTools on the demo, and go to the _Computed_ tab (for Firefox or Safari, Chrome displays this list under their box model in _Styles_), you will see the computed values for each of the different CSS properties used on your page.

From there you can see that the `animation` shorthand I used explicitly covers the following properties:
- <h3><code>animation-name</code></h3>
This is used to identify the animation, and you can use any combination of case-sensitive letters `a` to `z`, numerical digits `0` to `9`, underscores, and/or dashes.
The first non-dash character _must_ be a letter though, and you cannot use `--` nor reserved keywords like `none`, `unset`, `initial` or `inherit` to start the name.
- <h3><code>animation-duration</code></h3>
This sets the length of time your animation should take to complete 1 cycle. So for the seconds column of digits, I set it to `10s` while for the milliseconds column of digits, I set it to `1s`.
- <h3><code>animation-iteration-count</code></h3>
This sets the number of times the animation should cycle through before stopping. The seconds column only needs to run once, while the milliseconds column needs to run through its animation cycle 10 times.
- <h3><code>animation-timing-function</code></h3>
This describes how the animation progresses throughout the duration of each cycle. Timing functions can be fairly granular if you are familiar with `cubic-bezier()` functions but I most often see people use keyword values for general use-cases.
I used the `step-end` keyword, which resolves to `steps(1, jump-end)`. The `steps()` function allows us to have stepped animation, where the first argument indicates the number of stops during the transition. Each stop is displayed for an equal amount of time.
`jump-end` allows me to move my `<div>` upward in steps instead of a smooth scroll, and pause at the end value of `translateY`. This is a terrible sentence and even more horrible explanation.
Please refer to [Jumps: The New Steps() in Web Animation](https://danielcwilson.com/blog/2019/02/step-and-jump/) by [Dan Wilson](https://twitter.com/dancwilson) for a much better explanation. Visual demos and code in there!
- <h3><code>animation-fill-mode</code></h3>
This lets you dictate how a CSS animation applies its styles to the target before and after the animation runs. I wanted the position of my `<div>`s to remain at the last keyframe when the animation ends, so I set this value to `forwards`.
For the seconds digit, the last 2 frames don’t need to be shown at all because the timer is not zero-padded. When the countdown hits 9, the seconds digit needs to not show up nor take up space. So those keyframes have an additional `width: 0` property on them.
Also, because I went with `forwards` for the `animation-fill-mode`, to make the 0 stay on screen at the end of the animation, the last frame for milliseconds remains at `-9em`.
Read more about CSS animations in the [CSS Animations Level 1](https://www.w3.org/TR/css-animations-1/) specification. It broadly explains how animations work in the context of CSS, then covers in detail each of the individual animation properties. Also, examples of working code aplenty.
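As an aside, the ten near-identical keyframe stops in these blocks follow a mechanical pattern, so if you ever need more digit columns you could generate them with a small script. This helper is purely hypothetical (it is not part of the original demo, and it deliberately ignores the extra `width: 0` frames the seconds column needs):

```javascript
// Hypothetical helper: generates the repetitive translateY keyframes for a
// column of `steps` digits, clamping the final frame at the last digit so it
// stays on screen (mirroring the hand-written 90%/100% duplication above).
function digitKeyframes(name, steps) {
  const frames = [];
  for (let i = 0; i <= 10; i++) {
    const offset = Math.min(i, steps - 1);
    const value = offset === 0 ? '0' : `-${offset}em`;
    frames.push(`${i * 10}% { transform: translateY(${value}) }`);
  }
  return `@keyframes ${name} {\n  ${frames.join('\n  ')}\n}`;
}

console.log(digitKeyframes('milliseconds', 10));
```

Running it with `('milliseconds', 10)` reproduces the keyframe block written out by hand at the top of this section.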
## Flexbox
This is my favourite part. The requirement is that during the last second, when only the digits 9 to 0 remain on display, the whole timer has to be aligned center.
<a id="raf"></a>
Here’s where it is time to reveal the Javascript solution, which is honestly, much more straightforward. The key here is `Window.requestAnimationFrame()`. Here’s the [MDN entry for it](https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame).
You’re welcome.
```javascript
let end;
const now = Date.now;
const timer = document.getElementById("timer");
const duration = 9900;
function displayCountdown() {
const count = parseInt((end - now()) / 100);
timer.textContent =
count > 0 ? (window.requestAnimationFrame(displayCountdown), count) : 0;
}
function start() {
end = now() + duration;
window.requestAnimationFrame(displayCountdown);
}
```
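The ternary with the comma operator in `displayCountdown` is rather dense. Purely as an illustration (this is a hypothetical refactor, not the demo's code), the count calculation could be pulled out into a pure helper that is trivial to test:

```javascript
// Hypothetical refactor: compute the displayed tenths-of-a-second count as a
// pure function, clamped at zero so the timer never shows a negative number.
function displayedCount(end, nowMs) {
  return Math.max(0, Math.floor((end - nowMs) / 100));
}

console.log(displayedCount(9900, 0)); // → 99
```

The rAF loop would then just assign this helper's result to `timer.textContent` on each frame.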
This implementation is also so much easier to style, because Flexbox.
```html
<div class="timer-container">
<p class="timer" id="timer">99</p>
</div>
```
```css
.timer-container {
display: flex;
height: 100vh; /* height can be anything */
}
.timer {
margin: auto;
}
```
As I already said at the start of this post, just because you can do something with pure CSS doesn't mean you should. This is the prime example. Anyway, here's the Codepen with the same enhanced-for-demo-purposes functionality sprinkled on.
{% codepen https://codepen.io/huijing/pen/YzPgYoG %}
But let us continue with the pure CSS implementation, even if it is just an academic exercise at this point.
```css
.timer-container {
display: flex;
height: 100vh; /* height can be anything */
}
.timer {
overflow: hidden;
margin: auto;
height: 1em;
width: 2ch;
text-align: center;
}
.digit {
display: inline-block;
}
.digit span {
display: block;
width: 100%;
height: 1em;
}
```
If you compare this with the Javascript implementation, you’ll notice a lot of similarities.
Yes, my friends. If you had suspected that I was using the modern-day CSS answer to vertical centring on the web, you are absolutely right. Auto-margins are the mechanism in play here.
To be fair, the `display: flex` and auto-margin on flex child technique centralises the whole timer block. Within the timer itself, the text should be centre-aligned with the `text-align` property.
Read more about Flexbox in the [CSS Flexible Box Layout Module Level 1](https://www.w3.org/TR/css-flexbox-1/) specification. It is the definitive resource for how Flexbox works and even though it is fairly lengthy, there are plenty of code examples in there to help you visualise how things work.
## Fun demo extra #1: Dynamic colour changing
Another requirement was for the font colour and background colour to be customisable. I’m pretty sure she meant in the code and not on the fly, but since we can do this on the fly, why not?
Cue CSS custom properties and the HTML colour input. Before you ask me about support for the colour input, I shall invoke first strike and display the [caniuse](https://caniuse.com/) chart for it.
[<picture>
<source type="image/webp" srcset="https://caniuse.bitsofco.de/image/input-color.webp"></source>
<img src="https://caniuse.bitsofco.de/image/input-color.png" alt="Data on support for the input-color feature across the major browsers from caniuse.com">
</picture>](http://caniuse.com/#feat=input-color)
Come on, this is pretty green here. So anyway, declare your custom properties for font colour and background colour like so:
```css
:root {
--fontColour: #000000;
--bgColour: #ffffff;
}
```
Use them in the requisite elements like so:
```css
.timer {
/* other styles not shown for brevity */
background-color: var(--bgColour, white);
}
.digit {
/* other styles not shown for brevity */
color: var(--fontColour, black);
}
```
That’s the set up for the timer itself. Now, control these colours with the colour input. Toss in 2 colour inputs into the markup and position them where you like. I went with the top-right corner.
```html
<aside>
<label>
<span>Font colour:</span>
<input id="fontColour" type="color" value="#000000" />
</label>
<label>
<span>Background colour:</span>
<input id="bgColour" type="color" value="#ffffff" />
</label>
</aside>
```
Then, you can hook up the colour picker with the custom properties you declared in the stylesheet like so:
```javascript
let root = document.documentElement;
const fontColourInput = document.getElementById('fontColour');
const bgColorInput = document.getElementById('bgColour');
fontColourInput.addEventListener('input', updateFontColour, false);
bgColorInput.addEventListener('input', updateBgColour, false);
function updateFontColour(event) {
root.style.setProperty('--fontColour', event.target.value);
}
function updateBgColour(event) {
root.style.setProperty('--bgColour', event.target.value);
}
```
It’s not that much code, and kind of fun to play with in a demo, IMHO.
## Fun demo extra #2: Checkbox hack toggle
I could have left the demo to start automatically when the page loaded, letting people refresh the page to start the animation again, but I was going all in with the pure CSS thing, so…
Anyway, checkbox hack plus overly-complicated selectors. That's how this was done. If you had just gone with Javascript, which is probably the right thing to do, you could have used a button with an event listener. But you're too deep in this rabbit hole now.
I built this bit such that when unchecked, the label shows _Start_ but when the input is checked, the label shows _Restart_. Because why not make things more complicated?
```css
.toggle span {
font-size: 1.2em;
padding: 0.5em;
background-color: palegreen;
cursor: pointer;
border-radius: 4px;
}
input[type="checkbox"] {
opacity: 0;
position: absolute;
}
input[type="checkbox"]:checked ~ aside .toggle span:first-of-type {
display: none;
}
.toggle span:nth-of-type(2) {
display: none;
}
input[type="checkbox"]:checked ~ aside .toggle span:nth-of-type(2) {
display: inline;
}
```
The actual bit that triggers the animation looks like this:
```css
input[type="checkbox"]:checked ~ .timer .seconds {
animation: seconds 10s 1 step-end forwards;
}
input[type="checkbox"]:checked ~ .timer .milliseconds {
animation: milliseconds 1s 10 step-end forwards;
}
```
With the checkbox hack, the order of the elements on the page does matter because you can only target sibling selectors after an element and not before it. So the checkbox needs to be as near the top (and not nested) as possible.
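For comparison, the JavaScript route mentioned earlier (a button with an event listener) might look like the sketch below. The `#toggle` id, the `.timer` selector and the `running` class (which would carry the animation declarations) are all assumptions for illustration, not part of the demo:

```javascript
// Label logic: "Start" before the first run, "Restart" afterwards.
function nextLabel(hasRun) {
  return hasRun ? 'Restart' : 'Start';
}

// Restart a CSS animation by removing and re-adding its class.
function restartAnimation(el, className) {
  el.classList.remove(className);
  void el.offsetWidth; // force a reflow so re-adding the class restarts the animation
  el.classList.add(className);
}

if (typeof document !== 'undefined') {
  const button = document.getElementById('toggle');
  const timer = document.querySelector('.timer');
  let hasRun = false;
  button.addEventListener('click', () => {
    restartAnimation(timer, 'running');
    hasRun = true;
    button.textContent = nextLabel(hasRun);
  });
}
```

With this approach, element order on the page no longer matters, which is one of the main things you give up with the checkbox hack.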
## Wrapping up
Truth be told, I think I’m a terrible technical writer because most of my posts are so long I reckon only a tiny handful of people ever read through the whole thing.
But this is my blog, and not some official documentation, so I’m kinda going to keep doing whatever and writing these ramble-y posts.
At least I try to organise the content into coherent sections? Okay, to be fair, if I was writing for a proper publication, I’d put on my big girl pants and write concisely (like a professional, LOL).
Unfortunately, this is not a proper publication. ¯\\\_(ツ)_/¯ Anyway, much love if you really made it through the whole thing. Hope at least some of it was useful to you.
_<small>Credits: OG:image from <a href="https://www.instagram.com/p/B7rUvx2hisB/">autistic.shibe’s instagram</a></small>_
---
title: Best approach for filter data, fetch again, array.filter?
published: true
tags: help
---
Hi everyone, I've been thinking about this.
We have an API like https://rickandmortyapi.com/ that returns an array of characters like
```javascript
{
"info": {
"count": 394,
"pages": 20,
"next": "https://rickandmortyapi.com/api/character/?page=20",
"prev": "https://rickandmortyapi.com/api/character/?page=18"
},
"results": [
{
"id": 361,
"name": "Toxic Rick",
"status": "Dead",
"species": "Humanoid",
"type": "Rick's Toxic Side",
"gender": "Male",
"origin": {
"name": "Alien Spa",
"url": "https://rickandmortyapi.com/api/location/64"
},
"location": {
"name": "Earth",
"url": "https://rickandmortyapi.com/api/location/20"
},
"image": "https://rickandmortyapi.com/api/character/avatar/361.jpeg",
"episode": [
"https://rickandmortyapi.com/api/episode/27"
],
"url": "https://rickandmortyapi.com/api/character/361",
"created": "2018-01-10T18:20:41.703Z"
},
// ...
]
}
```
We have an input to filter by name and a select to filter by status.
We want to add pagination to our project and use the `info.next` and `info.prev` fields. We are using React. What is your best approach to do it?
- Call to the API each time that the input name change?
We have the endpoint https://rickandmortyapi.com/api/character/?name=rick
- Filter our array of characters with array.filter?
```javascript
characters.filter(({name}) => name.includes(filterInputValue))
```
- Have a state for allData and another one for filterData?
Right now, I call the API each time, but I think this is too many calls...
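One idea I've been playing with to cut down the calls is debouncing the input, something like this (the 300ms delay is just a guess):

```javascript
// Only fire the API call after the user stops typing for `delay` ms.
function debounce(fn, delay) {
  let timeoutId;
  return (...args) => {
    clearTimeout(timeoutId);
    timeoutId = setTimeout(() => fn(...args), delay);
  };
}

const fetchByName = debounce((name) => {
  fetch(`https://rickandmortyapi.com/api/character/?name=${name}`)
    .then((res) => res.json())
    .then((data) => console.log(data.results));
}, 300);
```

Would something like that be considered good practice here, or is it overkill?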
I'm a junior web developer and I want to learn about good practices to improve my code. Help me!
P.S.: Excuse my English, I'll improve it soon. Feel free to correct me!😅😅
---
title: Step Up Your CSS Game With SASS - SASS for Beginners
published: true
date: 2020-01-25 15:54:27 UTC
tags: SCSS, SASS, CSS
canonical_url: https://www.ayushmanbthakur.com/posts/step-up-css-with-sass
---
So, recently I was experimenting with SASS and it is awesome. Once you get a hang of it, it is really difficult to get back to normal CSS. Now, without further ado, let's jump into the details of SASS. But before starting, a disclaimer: everything done in SASS can be done in CSS; SASS just makes things easier.
## What is SASS?
The acronym SASS stands for Syntactically Awesome Style Sheet. It is basically a CSS pre-processor which will be compiled down to normal CSS. Now, there are two versions of SASS: The actual SASS which works based on indentation, and the SCSS, which is syntactically very similar to normal CSS and will be used by me for this post.
## The Perks of SASS over CSS
So, let's see why we need a CSS pre-processor like SASS to work with:
#### 1. The nesting of element styles:
Suppose you have this HTML layout in your page:
```html
<div class="container">
<h1>Hello World</h1>
</div>
```
Now if you want to style this specific _h1_ in CSS you have to write like this:
```css
.container h1{
/* Your style goes here */
}
```
But in a similar situation with SCSS, you can nest the h1 style directly inside the _.container_ style like so:
```scss
.container{
// container style goes here
h1{
// h1 style goes here
}
}
```
That way if you change any style, for example, the _background-color_ of the container, which may demand a style change in the _h1_ as well, then you don't have to search the whole document for that specific _.container h1_. Instead, you can easily find it nested inside the _.container_ class. This brings me to the next beautiful point about SASS:
#### 2. Organized:
SCSS keeps your code organized. In normal CSS, the element related _:hover_, _:active_, _::before_, _::after_ etc. can be created anywhere. But, the nesting feature of SCSS helps us to keep the code organized. You can easily define _:hover_ like so:
```scss
.container{
// code here
&:hover{
// code here
}
}
```
If you are wondering, the "&" references the parent selector under which the code is written. So, in this case, _&:hover_ translates to _.container:hover_.
#### 3. Browser Compatibility Coverage:
Because SCSS is compiled down to normal CSS, the compiler can emit each property together with the legacy-browser-compatible _-ms_, _-moz_ and _-webkit_ prefixed versions. For example:
_The SASS/SCSS I Wrote:_
```scss
.container {
display: flex;
}
```
_The CSS generated:_
```css
.container {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
}
```
Now you might be asking, but Ayushman, how to get started with this adrenaline injected version of CSS.
#### 4. Multi-file Setup:
You can import multiple files into the main SCSS code as partials. In that case, the partials won't be compiled to separate files; their contents are written into the single output file. That way, your code stays modular and easy to navigate. For example:
_in \_variables.scss:_
```scss
// The leading _ in the name makes the compiler treat this file as a partial, so it is not compiled into a separate file
// The notation below declares a variable in SCSS
$primary-color: #234467;
```
_in style.scss:_
```scss
// this imports the _variables.scss
@import "./_variables.scss";
.container{
background-color: $primary-color;
}
```
So, now if I compile it then the CSS generated will be:
```css
.container {
background-color: #234467;
}
```
That means the value of _primary-color_ is brought from _\_variables.scss_ and put into the compiled version of the code.
## Getting Started with writing SCSS code
There are many ways to set up SCSS in your project. There is a package called _node-sass_ for npm users which convert SCSS to CSS. But, in this post, I will be telling you an easier way to get started with SASS. If you use [Visual Studio Code](https://code.visualstudio.com) there is a nice extension named [Live Sass Compiler](https://marketplace.visualstudio.com/items?itemName=ritwickdey.live-sass) which depends on another extension named [Live Server](https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer). Both these extensions are developed by Ritwick Dey. With the help of this extension, whenever you write SCSS code you will get a _Watch SASS_ button. 
Clicking this button while working on a SASS file will generate a _filename.css_ and _filename.css.map_ file in the working directory. Referencing this CSS file will let you use SCSS to write required style and then have it compiled to normal CSS, which can be understood by the browsers.
So, with that let's have a look at how to use basic SCSS to step up your CSS game.
## Variables:
Variables are now implemented in CSS too, but I think the CSS variables story is still half-baked. The variables in SCSS, on the other hand, are really robust, and they are easy to declare, update and use, as the _$primary-color_ example above shows.
## Extending the style of another element:
How many times have you wanted a style defined on one element to also apply to another? Instead of attaching a lot of classes to your HTML element, you can extend those properties.
Here is a basic example of extending properties:
```scss
.container {
background-color: darkgray;
}
.block {
@extend .container;
color: green;
}
```
This translates to this CSS:
```css
.container, .block {
background-color: darkgray;
}
.block {
color: green;
}
```
## Using mixins
How many times have you had to write `display: flex` and `justify-content: center` just to center items on the page? That's where mixins come in. Using mixins we can reuse a specific piece of code repeatedly, and if we need a change, there is only one place where the code needs to change.
Here is a basic example of mixin:
```scss
@mixin flex-center {
display: flex;
justify-content: center;
}
.container {
@include flex-center();
}
```
This translates to this CSS:
```css
.container {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
}
```
As you can tell from the line _flex-center()_, these mixins can take arguments too. For example, in the previous example, if we wanted a different background for each element implementing the _flex-center_ mixin, we could do this:
```scss
@mixin flex-center($bgcolor: transparent) {
display: flex;
justify-content: center;
background-color: $bgcolor;
}
.container {
@include flex-center();
}
.container_2 {
@include flex-center(yellow);
}
```
The _$bgcolor_ is the argument given to the mixin and the default value is given as transparent.
Now, in _container\_2_ I passed the argument as yellow. So, the compiled code looks like:
```css
.container {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
background-color: transparent;
}
.container_2 {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
background-color: yellow;
}
```
## if-else statement
Did you know that SCSS also supports if-else statements? For example, continuing from before, if we have a dark background we need to set the text color to white. We can extend the previous example by passing another variable that flags whether the background is dark:
```scss
@mixin flex-center($bgcolor: transparent, $dark-bg: false) {
display: flex;
justify-content: center;
background-color: $bgcolor;
@if $dark-bg {
color: #ffffff;
}
@else {
color: #000;
}
}
.container {
@include flex-center(black, true);
}
.container2 {
@include flex-center(yellow, false);
}
```
This translates to this CSS:
```css
.container {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
background-color: black;
color: #ffffff;
}
.container2 {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
background-color: yellow;
color: #000;
}
```
## Functions
There is a lot of confusion about functions versus mixins in SCSS. While mixins let you apply a set of CSS properties to a specific class, functions return values for specific computations. For example, we can set white text for a div with a dark background and black text for a div with a light background using a function like so:
```scss
@function return_text_color($bgcolor) {
@if lightness($bgcolor)<50 {
@return white;
}
@else {
@return black;
}
}
.container {
background-color: black;
color: return_text_color(black);
}
.container2 {
background-color: yellow;
color: return_text_color(yellow);
}
```
The compiled CSS looks like:
```css
.container {
background-color: black;
color: white;
}
.container2 {
background-color: yellow;
color: black;
}
```
Here, I used a built-in SCSS function for detecting the lightness of a color.
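If you are curious what `lightness()` actually computes: it is the L channel of the colour's HSL representation. As a rough sketch (SCSS does this natively; this JavaScript version is just for illustration and only handles 6-digit hex colours):

```javascript
// Rough JS equivalent of SCSS's lightness() for "#rrggbb" colours:
// HSL lightness = (max + min) / 2 of the RGB channels, as a percentage.
function lightness(hex) {
  const n = parseInt(hex.replace('#', ''), 16);
  const r = ((n >> 16) & 0xff) / 255;
  const g = ((n >> 8) & 0xff) / 255;
  const b = (n & 0xff) / 255;
  return ((Math.max(r, g, b) + Math.min(r, g, b)) / 2) * 100;
}

console.log(lightness('#000000')); // → 0 (below 50, so white text)
console.log(lightness('#ffff00')); // → 50 (not below 50, so black text)
```

This matches the outputs above: black scores 0 and gets white text, while yellow scores exactly 50 and gets black text.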
## Bonus: Using Mixins and Functions together
So, it seems fitting to end this post by using the last mixin together with the last function I wrote, making the text centered and properly colored. One disclaimer: I had some trouble with transparent backgrounds, which can be fixed with another if statement. I leave that as an exercise for the reader.
The SCSS:
```scss
@function return_text_color($bgcolor) {
@if lightness($bgcolor)<50 {
@return white;
}
@else {
@return black;
}
}
@mixin flex-center($bgcolor) {
display: flex;
justify-content: center;
background-color: $bgcolor;
color: return_text_color($bgcolor);
}
.container {
@include flex-center(#333);
}
.container2 {
@include flex-center(yellow);
}
```
The CSS:
```css
.container {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
background-color: #333;
color: white;
}
.container2 {
display: -webkit-box;
display: -ms-flexbox;
display: flex;
-webkit-box-pack: center;
-ms-flex-pack: center;
justify-content: center;
background-color: yellow;
color: black;
}
```
## Conclusion
There is a lot more to explore in the world of SCSS. But the features I mentioned are really cool to start with. Once you get the hang of SCSS, it is really hard to go back to plain CSS. Hope you make awesome projects with the superpowers of SCSS. Stay happy, stay coding.
248,481 | Iterating with recursion | Cover photo by Ludde Lorentz on Unsplash How many stairs are in a staircase? You're walki... | 4,356 | 2020-01-25T20:25:37 | https://dev.to/daniel13rady/iterating-with-recursion-3i5e | beginners, thinkdeep, programming | <small>Cover photo by Ludde Lorentz on [Unsplash](https://unsplash.com/photos/YfCVCPMNd38)</small>
# How many stairs are in a staircase?
You're walking along, thinking about dinner. _Should I have [ramen](https://www.instagram.com/tsurumendavis/) or [udon](https://www.instagram.com/yumegaarukara/)?_ you ask yourself.
>:exclamation: A wild staircase appears!
Thoughts of noodles recede from your mind as you consider this new development in staircase behavior, which may be worthy of a _Nat Geo_ special.
The staircase is not too tall; you can't tell exactly how many steps there are, but you're pretty confident you can conquer them.
But...how can you be sure?
You've climbed hundreds, if not thousands, of steps in your lifetime, enough that this new set shouldn't be a problem. The secret, you know, is twofold:
1. Know when to stop.
2. Take the next step.
And you really can't climb a staircase and emerge unscathed without knowing this secret.
You recall a time when you continued to think about noodles while trying to ascend a staircase, and tripped: you'd made a mistake taking the next step.
You also recall, during this very same ascension, how it felt when you reached the top but kept on climbing because you were _still_ weighing your noodle options: your foot came down hard and you pitched forward, nearly falling flat on your face. You'd momentarily forgotten the other secret of climbing stairs: knowing when to stop.
<!--
```scheme
(define (climb-stairs staircase)
(if (top? staircase)
(resume-noodle-thoughts)
(climb-stairs (rest-of staircase))))
```
-->

You eye the stairs before you, and, driven by the need to know just how many there are, you make the only reasonable decision: you clear your mind of noodles, and take the first step.
Confidence in your ability to conquer _any_ flight of stairs, even if you've never done it, seems valuable to you in your future as a human.
But how do you incorporate counting into it? :thinking:
It's simple, you realize: you merely keep a tally of the stairs you've stepped as you climb. The length of a staircase can thus be described as **the value of the first stair, plus the length of the rest of the staircase**.
<!--
```scheme
(+ 1 (count-stairs (rest-of staircase)))
```
-->

If you've reached the top, you stop climbing, and thus stop counting. Otherwise, you can focus on stepping and summing, and wait to evaluate the tally you've kept until you've stopped.
<!--
```scheme
(define (count-stairs staircase)
(if (top? staircase)
0
(+ 1 (count-stairs (rest-of staircase)))))
```
-->

Smiling and slightly winded, you reach the top and look at how far you've come. Full of the glow of victory, you wonder at the power of this simple algorithm. Your smile fades to a frown, however, as you remember a more pressing concern: what bowl of noodles should you have for dinner?
----
This approach to solving problems that can be broken down into a sequence of similar steps, where
- you do something at every step
- you know how to get to the next step, and
- you know how to determine if you should stop stepping
is fundamental in computing science and every-day programming.
**Iteration**, that's what we've landed on. Doing things over and over until a particular goal is reached.
The example functions I used to illustrate the story above implemented iteration in a **recursive** fashion.
**Recursive functions are the simplest form of programatic iteration**: they are functions designed to deconstruct a problem into a linear computation of its pieces, and then evaluate it as a whole.
Thinking about it so formally can get a bit confusing. I find it easiest to grok by reading a simple recursive function aloud. Revisiting our `#count-stairs` example:
<!--
```scheme
(define (count-stairs staircase)
(if (top? staircase)
0
(+ 1 (count-stairs (rest-of staircase)))))
```
-->

This can be read in plain English as:
>When counting the steps of a staircase, if you're at the top, stop counting.
>Otherwise, add 1 to the result of counting the rest of the staircase.
Bit of a strange loop, to be sure; but note how each time we "count the rest" of the staircase, the amount of things we are counting is _getting smaller_ until eventually, there's nothing left to count: we've reached the top.
Let's draw out an example computation to help us see this in action:
<!--
```scheme
(define staircase '("step1", "step2", "step3", "step4"))
(define staircase-length (count-stairs staircase)) ;=> 4
; <-- (+ 1 (count-stairs '("step2", "step3", "step4"))) => (+ 1 (+ 1 (+ 1 (+ 1 0))))
; <-- (+ 1 (count-stairs '("step3", "step4"))) => (+ 1 (+ 1 (+ 1 0)))
; <-- (+ 1 (count-stairs '("step4"))) => (+ 1 (+ 1 0))
; <-- (+ 1 (count-stairs '())) => (+ 1 0)
; -------------------------------------- 0
```
-->

Quite elegant, no?
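To see the same algorithm in runnable form, here is a JavaScript translation of the Scheme version shown in the images, with the staircase modelled as an array of steps:

```javascript
// The length of a staircase is 1 (this step) plus the length of the rest.
function countStairs(staircase) {
  if (staircase.length === 0) {
    return 0; // at the top: nothing left to count
  }
  return 1 + countStairs(staircase.slice(1));
}

console.log(countStairs(['step1', 'step2', 'step3', 'step4'])); // → 4
```

Each recursive call counts a staircase one step shorter, exactly like the shrinking lists in the diagram.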
Readers may notice, especially after seeing the computation diagram above, that we're arriving at the final summation **without tracking the intermediate results**. Good eyes, dear reader. :clap:
As we've seen, this style of recursion approaches a problem _holistically_ and _lazily_: it views the solution to a problem as an aggregation of the solutions to its parts, and thus doesn't compute the whole solution until it has broken the problem down fully.
But what happens if we fall off the stairs? What happens if we get too hungry, and need to abandon our climb in order to go eat noodles, but want to return to finish counting later? What happens if we want to text someone our current location, and tell them what step we're on?
In such situations, we care about **partially computing** results. At any given step, we may want to stop and reflect on how far we've come, beyond just asking "are we done yet?" And in these cases, the approach we've taken today may not be the best one for the job.
For this and other reasons, many programming languages give you access to **loop constructs**, and I'll talk more about those next time. :wave:
---
_Every tool has its use. Some can make your code clearer to humans or clearer to machines, and some can strike a bit of balance between both._
_"Good enough to work" should not be "good enough for you." **Hold yourself to a higher standard**: learn a little about a lot and a lot about a little, so that when the time comes to do something, you've got a fair idea of how to do it well._
> [Original Post](https://blog.yggdrasilts.com/create-an-api-service-using-nestframework/)
In this post, I want to show you how easy it is to create a backend API using the nestframework, and also how to create the [swagger](https://swagger.io/) documentation.
# Table Of Contents
* [Introduction](#introduction)
* [Part 1 - Project Creation](#part-1)
* [API Service](#api-service)
* [Requirements](#requirements)
* [Service Creation](#service-creation)
* [Part 2 - EchartsService Creation](#part-2)
* [Delete Unnecessary Files](#delete-unnecessary-files)
* [Modifying app.controller.ts file](#modifying-app-controller)
* [Modifying app.service.ts file](#modifying-app-service)
* [Part 3 - Logger, Validations and Pipes, Handling Errors and Modules](#part-3)
* [Logger](#logger)
* [Validations and Pipes](#validation-and-pipes)
* [Handling Errors](#handling-errors)
* [Modules](#modules)
* [Part 4 - Swagger](#part-4)
* [To Sum Up](#to-sum-up)
# Introduction <a name="introduction"></a>
I have to say that I am in love with [NestJS](https://nestjs.com/) for building backend applications, services, libraries, etc. using [TypeScript](https://www.typescriptlang.org/). This framework has been the [fastest-growing nodejs framework in 2019](https://risingstars.js.org/2019/en/#section-nodejs-framework) thanks to its awesome features. Besides helping you build your code, if you are planning medium or large services that need to stay maintainable and scalable, it also gives you a way to keep your project well structured.
# Part 1 - Project Creation <a name="part-1"></a>
## API Service <a name="api-service"></a>
The service that I am going to create is a simple API service with 2 endpoints. Each endpoint returns a chart as an image, in a different format:
- */image*: Return an image as attachment.
- */image-base64*: Return an image as base64 string data.
To build the charts, I am going to use the [node-echarts](https://github.com/telco2011/node-echarts) library to render [ECharts](https://echarts.apache.org/en/index.html) images in the backend world. [ECharts](https://echarts.apache.org/en/index.html) is an open-sourced JavaScript visualization tool that, in my opinion, has great options to build tons of different chart types.
> *You can check its [examples](https://echarts.apache.org/examples/en/index.html) if you don't believe me* 😛
Let's begin creating.
## Requirements <a name="requirements"></a>
- [nodejs](https://nodejs.org/en/) ([nvm](https://github.com/nvm-sh/nvm))
- [NestJS CLI](https://github.com/nestjs/nest-cli)
- [node-echarts](https://github.com/telco2011/node-echarts) (*OS package dependencies*)
- [Visual Studio Code](https://code.visualstudio.com/) (*or your preferred Code Editor / IDE*)
## Service Creation <a name="service-creation"></a>
Once all requirements are installed, I am going to use the [NestJS CLI](https://github.com/nestjs/nest-cli) to create the project structure:
```shell
odin@asgard:~/Blog $ nest new echarts-api-service -p npm
```
After the execution, I have opened the `~/Blog/echarts-api-service` folder in my [Visual Studio Code](https://code.visualstudio.com/) and this is the project structure:

> *[Here](https://docs.nestjs.com/first-steps), you can see more about the [NestJS](https://nestjs.com/) project structure. I assume that you are familiar with this structure, so I will continue building the service.*
Now, you can run the service using `npm start` and it will respond with a `Hello World!` string to the following request: [*http://localhost:3000*](http://localhost:3000/), as you can see in the following image (*I'm using the [REST Client extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) for [Visual Studio Code](https://code.visualstudio.com/)*).

Before continuing, I am going to add a *.prettierrc* file for code formatting using [Prettier](https://prettier.io/) with the following options:
```json
{
  "singleQuote": true,
  "tabWidth": 2,
  "useTabs": false,
  "trailingComma": "all",
  "printWidth": 140
}
```
I also like to add the hot reload option to check my code changes faster. Because I am using the [NestJS CLI](https://github.com/nestjs/nest-cli), it is only necessary to change the `start:dev` script inside *package.json* to `nest start --watch --webpack`. You can see more options in the [NestJS Hot Reload documentation](https://docs.nestjs.com/recipes/hot-reload#hot-reload).
Now, I am ready to modify the code to add the above endpoints */image* and */image-base64*.
> *'End of Part 1' You can check the project code in the [part-1](https://github.com/yggdrasilts/echarts-api-service/tree/part-1) tag*
# Part 2 - EchartsService creation <a name="part-2"></a>
I am going to modify or delete the files that are not needed, to adapt the project in a better way.
## Delete unnecessary files <a name="delete-unnecessary-files"></a>
In my case, the *app.controller.spec.ts* file is not needed, so I delete it.
## Modifying *app.controller.ts* file <a name="modifying-app-controller"></a>
This file is responsible for handling incoming **requests** and returning **responses** to the client, and it is where I am going to create the endpoints so that [NestJS](https://nestjs.com/) knows which code to run when receiving the requests.
```typescript
import { Controller, Post } from '@nestjs/common';

import { AppService } from './app.service';

@Controller()
export class AppController {
  constructor(private readonly appService: AppService) {}

  @Post('image')
  async getImage(): Promise<void> {
    console.log('getImage');
  }

  @Post('image-base64')
  async getImageInBase64(): Promise<string> {
    return 'getImageInBase64';
  }
}
```
> *More information about controllers in [NestJS Controllers documentation](https://docs.nestjs.com/controllers).*
Once modified, you can start the service again with `npm run start:dev` and execute both requests:
- *POST http://localhost:3000/image*

- *POST http://localhost:3000/image-base64*

## Modifying *app.service.ts* file <a name="modifying-app-service"></a>
The service is responsible for data storage and retrieval; in my case, the service will be the chart image creator.
Because I use [ECharts](https://echarts.apache.org/en/index.html), through [node-echarts](https://github.com/telco2011/node-echarts), I am going to create a new folder called *echarts* and, inside it, another folder called *entities*. After this, I move *app.service.ts* to the new *echarts* folder and rename it to *echarts.service.ts*.
> *'WARN' If you use Visual Studio Code, you will see that on every change the editor adapts the code. If yours does not, take care to adapt the code yourself so it compiles.*
After the changes, I am going to create the *getImage* method, which contains the code to create the [ECharts](https://echarts.apache.org/en/index.html) image. But first, I am going to install the necessary npm dependencies:
- `npm i --save imagemin imagemin-pngquant imagemin-jpegtran https://github.com/telco2011/node-echarts.git`
- `npm i --save-dev @types/imagemin @types/echarts`
And the method:
```typescript
async getImage(opt: Options): Promise<Buffer> {
  return buffer(
    node_echarts({
      option: opt.echartOptions,
      width: opt.options?.width || DEFAULT_IMAGE_WIDTH,
      height: opt.options?.height || DEFAULT_IMAGE_HEIGHT,
    }),
    {
      plugins: [
        imageminJpegtran(),
        imageminPngquant({
          quality: [0.6, 0.8],
        }),
      ],
    },
  );
}
```
Once all the changes are done, your code won't compile because the compiler cannot find the *Options* name. This will be a new class with two properties: one to store the [ECharts](https://echarts.apache.org/en/index.html) options used to create the chart, and the other to store some properties of the image that will contain the chart.
To do so, I am going to create the following 3 files:
- `src/echarts/entities/options.entity.ts`
```typescript
import { EChartOption } from 'echarts';

import { ImageOptions } from './image-options.entity';

/**
 * Class to configure echarts options.
 */
export class Options {
  echartOptions: EChartOption;
  options?: ImageOptions;
}
```
- `src/echarts/entities/image-options.entity.ts`
```typescript
/**
 * Class to configure image options.
 */
export class ImageOptions {
  // Image width
  width?: number;
  // Image height
  height?: number;
  // Download file name
  filename?: string;
}
```
> *'INFO' The file name structure is important in NestJS, and it will be relevant for Part 4 of this post. It is **IMPORTANT** that the entity file name follows this structure: [NAME].entity.ts*
- `src/echarts/constants.ts`
```typescript
/**
* Application constants.
*/
export const DEFAULT_IMAGE_WIDTH = 600;
export const DEFAULT_IMAGE_HEIGHT = 250;
export const DEFAULT_FILENAME = 'echarts.png';
```
Finally the *echarts.service.ts* code is the following:
```typescript
import { Injectable } from '@nestjs/common';

import * as node_echarts from 'node-echarts';
import { buffer } from 'imagemin';
import imageminPngquant from 'imagemin-pngquant';
import * as imageminJpegtran from 'imagemin-jpegtran';

import { DEFAULT_IMAGE_WIDTH, DEFAULT_IMAGE_HEIGHT } from './constants';
import { Options } from './entities/options.entity';

@Injectable()
export class EchartsService {
  /**
   * Get the echarts as image.
   *
   * @param {Options} opt {@link Options}.
   */
  async getImage(opt: Options): Promise<Buffer> {
    return buffer(
      node_echarts({
        option: opt.echartOptions,
        width: opt.options?.width || DEFAULT_IMAGE_WIDTH,
        height: opt.options?.height || DEFAULT_IMAGE_HEIGHT,
      }),
      {
        // plugins to compress the image to be sent
        plugins: [
          imageminJpegtran(),
          imageminPngquant({
            quality: [0.6, 0.8],
          }),
        ],
      },
    );
  }
}
```
After these updates, the code compiles and you can start the server again with `npm run start:dev`.
> *'HINT' If you want, you can add/modify/delete all of these files without stopping the service and you will see how the [Hot Reload feature](https://docs.nestjs.com/recipes/hot-reload) works.*
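One detail of the *getImage* method worth a closer look is the `opt.options?.width || DEFAULT_IMAGE_WIDTH` pattern: optional chaining short-circuits to `undefined` when *options* is absent, and `||` then applies the default. A quick sketch of how the defaults resolve in isolation (`widthFor` is just an illustrative helper, not part of the project):

```typescript
// How the width default in getImage resolves, in isolation.
const DEFAULT_IMAGE_WIDTH = 600;

const widthFor = (options?: { width?: number }) =>
  options?.width || DEFAULT_IMAGE_WIDTH;

console.log(widthFor(undefined)); // 600: no options object at all
console.log(widthFor({})); // 600: options present, width missing
console.log(widthFor({ width: 800 })); // 800: explicit width wins
```

One caveat: `||` also treats `0` as missing, so a zero width would silently fall back to the default; the nullish coalescing operator `??` would only default on `null`/`undefined`.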
Now, I am going to connect the controller to the service to get all the functionality working. To do this, I will make some modifications in *app.controller.ts* to call the *getImage* method of the service:
```typescript
import { Controller, Post, Header, Body, Res } from '@nestjs/common';
import { Response } from 'express';

import { HttpHeaders, MimeType, BufferUtils } from '@yggdrasilts/volundr';

import { EchartsService } from './echarts/echarts.service';
import { Options } from './echarts/entities/options.entity';
import { DEFAULT_FILENAME } from './echarts/constants';

@Controller()
export class AppController {
  constructor(private readonly echartsService: EchartsService) {}

  @Post('image')
  @Header(HttpHeaders.CONTENT_TYPE, MimeType.IMAGE.PNG)
  async getImage(@Body() opt: Options, @Res() response: Response): Promise<void> {
    const result = await this.echartsService.getImage(opt);
    response.setHeader(HttpHeaders.CONTENT_LENGTH, result.length);
    response.setHeader(HttpHeaders.CONTENT_DISPOSITION, `attachment;filename=${opt.options?.filename || DEFAULT_FILENAME}`);
    response.end(result);
  }

  @Post('image-base64')
  @Header(HttpHeaders.CONTENT_TYPE, MimeType.TEXT.PLAIN)
  async getImageInBase64(@Body() opt: Options): Promise<string> {
    return BufferUtils.toBase64(await this.echartsService.getImage(opt));
  }
}
```
The code does not compile because a dependency is not installed yet. This dependency is *[@yggdrasilts/volundr](https://github.com/yggdrasilts/volundr)*, part of our toolset: a set of utilities for [TypeScript](https://www.typescriptlang.org/) developments. To compile, you only need to install it with `npm i --save @yggdrasilts/volundr`.
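If you are curious what `BufferUtils.toBase64` is doing for us here, it is essentially Node's built-in base64 encoding. A simplified sketch (this is an assumption for illustration, not the library's actual source):

```typescript
// Simplified sketch of the base64 conversion behind the /image-base64 endpoint.
// Node's built-in Buffer already does the heavy lifting.
const toBase64 = (data: Buffer): string => data.toString('base64');

// The first four PNG magic bytes, as a stand-in for a real chart image.
const fakeImage = Buffer.from([0x89, 0x50, 0x4e, 0x47]);
console.log(toBase64(fakeImage)); // "iVBORw=="
```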
> *If you want to know more about it, take a look to its repository [*@yggdratilsts/volundr*](https://github.com/yggdrasilts/volundr)*
If you look at the code, it is very easy to understand because the [NestJS decorators](https://docs.nestjs.com/controllers) are very explicit.
- The *@Post()* decorator indicates that both endpoints listen for POST requests.
- I am using the *@Header()* decorator to set the response headers.
- I am also using the *@Body()* decorator to get the request body to be used inside the service.
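To make the `@Body()` part concrete: a request body matching the *Options* entity could look like the following (the chart data here is a placeholder of my own, not the exact payload from the screenshots):

```typescript
// Hypothetical POST /image body following the Options entity shape:
// `echartOptions` holds the ECharts configuration, `options` the image settings.
const body = {
  echartOptions: {
    xAxis: { type: 'category', data: ['Mon', 'Tue', 'Wed'] },
    yAxis: { type: 'value' },
    series: [{ type: 'line', data: [120, 200, 150] }],
  },
  options: { width: 800, height: 400, filename: 'my-chart.png' },
};

console.log(JSON.stringify(body, null, 2));
```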
At this point, if you start the service with `npm run start:dev`, the endpoints we wanted will be available:
- POST *http://localhost:3000/image*

- POST *http://localhost:3000/image-base64*

> *I'm getting the *echartOptions* data from the [ECharts Example - Stacked Area Chart](https://echarts.apache.org/examples/en/editor.html?c=area-stack)*
> *'End of Part 2' You can check the project code in the [part-2](https://github.com/yggdrasilts/echarts-api-service/tree/part-2) tag*
# Part 3 - Logger, Validations and Pipes, Handling Errors and Modules <a name="part-3"></a>
Now the application is functional, but before finishing I would like to add, and talk about, some other features that [NestJS](https://nestjs.com/) provides to create a service.
## Logger <a name="logger"></a>
In my opinion, every project should have a good logger implementation because it is the principal way to know what is happening in the flow. In this case, [NestJS](https://nestjs.com/) has a built-in text-based [Logger](https://docs.nestjs.com/techniques/logger) that is enough for our service.
To use this logger, it is only necessary to instantiate the class and start using it. For example:
```typescript
import { Controller, Get, Logger } from '@nestjs/common';

@Controller()
export class MyController {
  private readonly logger = new Logger(MyController.name);

  @Get('log-data')
  logData(): void {
    this.logger.debug(`Logging data with NestJS, it's so easy...`);
  }
}
```
The service that I am creating will use this logger in *app.controller.ts*, *body.validation.pipe.ts* and *http-exception.filter.ts*. I will talk about these last two files in the following parts of the post.
> *If you want to know more about how to use the [NestJS](https://nestjs.com/) logger, you can go to its [documentation](https://docs.nestjs.com/techniques/logger). Also, in the following posts, I will talk about it.*
## Validations and Pipes <a name="validations-and-pipes"></a>
As the [NestJS](https://nestjs.com/) documentation says in its [Validation section](https://docs.nestjs.com/techniques/validation), it is best practice to validate the correctness of any data sent into a web application. For this reason, I am going to use [Object Schema Validation](https://docs.nestjs.com/pipes#object-schema-validation) and create a custom [Pipe](https://docs.nestjs.com/pipes) to do so.
First, I am going to adapt the project installing some needed dependencies and creating and modifying some files and folders.
### Installing needed dependencies
- `npm i --save @hapi/joi`
- `npm i --save-dev @types/hapi__joi`
> [@hapi/joi](https://hapi.dev/family/joi/) lets you describe your data using a simple, intuitive, and readable language
### Creating new files and folders
- `src/pipes/body.validation.pipe.ts`
```typescript
import { PipeTransform, Injectable, ArgumentMetadata, BadRequestException, Logger } from '@nestjs/common';

import { Schema } from '@hapi/joi';

/**
 * Pipe to validate request body.
 */
@Injectable()
export class BodyValidationPipe implements PipeTransform {
  private readonly logger = new Logger(BodyValidationPipe.name);

  constructor(private readonly schema: Schema) {}

  transform(value: any, metadata: ArgumentMetadata) {
    this.logger.debug(`Input body: ${JSON.stringify(value)}`);
    const { error } = this.schema.validate(value);
    if (error) {
      this.logger.error(`Error validating body: ${JSON.stringify(error)}`);
      throw new BadRequestException(`Validation failed: ${error.message}`);
    }
    return value;
  }
}
```
#### Modifying *app.controller.ts*
```typescript
import { Controller, Post, Header, Body, Res, UsePipes, Logger } from '@nestjs/common';
import { Response } from 'express';

import { HttpHeaders, MimeType, BufferUtils } from '@yggdrasilts/volundr';

import { EchartsService } from './echarts/echarts.service';
import { Options } from './echarts/entities/options.entity';
import { DEFAULT_FILENAME, IMAGE_BODY_VALIDATION_SCHEMA } from './echarts/constants';
import { BodyValidationPipe } from './pipes/body.validation.pipe';

@Controller()
export class AppController {
  private readonly logger = new Logger(AppController.name);

  constructor(private readonly echartsService: EchartsService) {}

  @Post('image')
  @Header(HttpHeaders.CONTENT_TYPE, MimeType.IMAGE.PNG)
  @UsePipes(new BodyValidationPipe(IMAGE_BODY_VALIDATION_SCHEMA))
  async getImage(@Body() opt: Options, @Res() response: Response): Promise<void> {
    const result = await this.echartsService.getImage(opt);
    response.setHeader(HttpHeaders.CONTENT_LENGTH, result.length);
    response.setHeader(HttpHeaders.CONTENT_DISPOSITION, `attachment;filename=${opt.options?.filename || DEFAULT_FILENAME}`);
    response.end(result);
  }

  @Post('image-base64')
  @Header(HttpHeaders.CONTENT_TYPE, MimeType.TEXT.PLAIN)
  @UsePipes(new BodyValidationPipe(IMAGE_BODY_VALIDATION_SCHEMA))
  async getImageInBase64(@Body() opt: Options): Promise<string> {
    return BufferUtils.toBase64(await this.echartsService.getImage(opt));
  }
}
```
#### Modifying *echarts/constants.ts*
```typescript
/**
 * Application constants.
 */
import * as Joi from '@hapi/joi';

export const DEFAULT_IMAGE_WIDTH = 600;
export const DEFAULT_IMAGE_HEIGHT = 250;
export const DEFAULT_FILENAME = 'echarts.png';

export const IMAGE_BODY_VALIDATION_SCHEMA = Joi.object({
  echartOptions: Joi.object().required(),
  options: Joi.object().optional(),
});
```
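To make explicit what this schema enforces: `echartOptions` must be present and be an object, while `options` may be omitted. A hand-rolled equivalent (purely illustrative — the service delegates this work to [@hapi/joi](https://hapi.dev/family/joi/), not to this code) would be:

```typescript
// Illustrative only: the two rules IMAGE_BODY_VALIDATION_SCHEMA expresses,
// written by hand. The real pipe lets Joi do this.
const isObject = (v: unknown): boolean =>
  typeof v === 'object' && v !== null && !Array.isArray(v);

function validateBody(body: any): { error: string | null } {
  if (!isObject(body) || !isObject(body.echartOptions)) {
    return { error: '"echartOptions" is required and must be an object' };
  }
  if (body.options !== undefined && !isObject(body.options)) {
    return { error: '"options" must be an object when provided' };
  }
  return { error: null };
}

console.log(validateBody({})); // rejected: echartOptions is missing
console.log(validateBody({ echartOptions: {} })); // accepted: error is null
```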
Now, if you start the service with `npm run start:dev` and make a request with an invalid body, you will see an error.
- POST *http://localhost:3000/image*

This is the simple error handling that [NestJS](https://nestjs.com/) provides by default, but I like to customize it and show a more readable error using [NestJS Exception Filters](https://docs.nestjs.com/exception-filters).
> *'End of Part 3.1' You can check the project code in the [part-3.1](https://github.com/yggdrasilts/echarts-api-service/tree/part-3.1) tag*
### Handling Errors <a name="handling-errors"></a>
First, I am going to create my custom Exception Filter and activate it in the global scope to manage all endpoint errors.
For this purpose, I am going to create a new folder and file called `src/exceptions/http-exception.filter.ts`:
```typescript
import { ExceptionFilter, Catch, ArgumentsHost, HttpException, Logger } from '@nestjs/common';
import { Request, Response } from 'express';

import { HttpHeaders, MimeType } from '@yggdrasilts/volundr';

/**
 * Filter to catch HttpException, manipulating the response to return an understandable error.
 */
@Catch(HttpException)
export class HttpExceptionFilter implements ExceptionFilter {
  private readonly logger = new Logger(HttpExceptionFilter.name);

  catch(exception: HttpException, host: ArgumentsHost) {
    const ctx = host.switchToHttp();
    const response = ctx.getResponse<Response>();
    const request = ctx.getRequest<Request>();
    const status = exception.getStatus();
    const message = exception.message;

    const errorData = {
      timestamp: new Date().toISOString(),
      message,
      details: {
        request: {
          method: request.method,
          query: request.query,
          body: request.body,
        },
        path: request.url,
      },
    };

    this.logger.error(`${JSON.stringify(errorData)}`);

    response.setHeader(HttpHeaders.CONTENT_TYPE, MimeType.APPLICATION.JSON);
    response.status(status).json(errorData);
  }
}
```
This filter is similar to the one in [NestJS Exception Filter section](https://docs.nestjs.com/exception-filters#exception-filters-1), but I have made some personal customizations to get more information in the error data.
After creating the file, the filter needs to be activated in the global scope. To do so, I am going to modify the `main.ts` file:
```typescript
import { NestFactory } from '@nestjs/core';

import { AppModule } from './app.module';
import { HttpExceptionFilter } from './exceptions/http-exception.filter';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalFilters(new HttpExceptionFilter());
  await app.listen(3000);
}
bootstrap();
```
Now, you can start the service again with `npm run start:dev` and see how the error has changed.
- POST *http://localhost:3000/image*

> *'End of Part 3.2' You can check the project code in the [part-3.2](https://github.com/yggdrasilts/echarts-api-service/tree/part-3.2) tag*
### Modules <a name="modules"></a>
To finalize this part, I am going to use [modules](https://docs.nestjs.com/modules) to follow the structure that NestJS proposes, because I think this structure is very intuitive: different modules, each with its own functionality.
To do this, I am going to create the `src/echarts/echarts.module.ts` file:
```typescript
import { Module } from '@nestjs/common';

import { EchartsService } from './echarts.service';

@Module({
  providers: [EchartsService],
  exports: [EchartsService],
})
export class EchartsModule {}
```
And modify the *app.module.ts* file to import this new module instead of importing the *EchartsService* directly:
```typescript
import { Module } from '@nestjs/common';

import { EchartsModule } from './echarts/echarts.module';
import { AppController } from './app.controller';

@Module({
  imports: [EchartsModule],
  controllers: [AppController],
})
export class AppModule {}
```
Finally, I check that the service continues working as before by executing `npm run start:dev` and testing the endpoints with the REST Client VS Code extension, as at the end of Part 2.
> *'End of Part 3.3' You can check the project code in the [part-3.3](https://github.com/yggdrasilts/echarts-api-service/tree/part-3.3) tag*
# Part 4 - Swagger <a name="part-4"></a>
Providing good documentation for every API is essential if we want it to be used by other people. There are lots of [alternatives](https://alternativeto.net/software/swagger-io/) to create this documentation, but [NestJS](https://nestjs.com/) has an awesome [module](https://docs.nestjs.com/recipes/swagger) that uses [Swagger](https://swagger.io/) for this purpose.
> *[Kamil Mysliwiec](https://twitter.com/kammysliwiec), [NestJS](https://nestjs.com/) creator, has written a great [article](https://trilon.io/blog/nestjs-swagger-4-whats-new) talking about the new [NestJS](https://nestjs.com/) Swagger module features. I recommend you to read it, it is very interesting.*
Before starting to document, it is necessary to install the [NestJS Swagger module](https://docs.nestjs.com/recipes/swagger):
```shell-session
odin@asgard:~/Blog $ npm install --save @nestjs/swagger swagger-ui-express
```
Once installed, we bootstrap it in the service by modifying the *main.ts* file as the documentation says:
```typescript
import { NestFactory } from '@nestjs/core';
import { SwaggerModule, DocumentBuilder } from '@nestjs/swagger';

import { AppModule } from './app.module';
import { HttpExceptionFilter } from './exceptions/http-exception.filter';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.useGlobalFilters(new HttpExceptionFilter());

  const options = new DocumentBuilder()
    .setTitle('Echarts API')
    .setDescription('API to get charts, using echartsjs, as image file.')
    .setExternalDoc('More about echartsjs', 'https://echarts.apache.org/en/index.html')
    .setVersion('1.0')
    .build();
  const document = SwaggerModule.createDocument(app, options);
  SwaggerModule.setup('api', app, document);

  await app.listen(3000);
}
bootstrap();
```
Now we can start the service with `npm run start:dev` and we will automatically have the Swagger documentation at http://localhost:3000/api:

> *Don't you think this is so easy? 😉*
If you have read [Kamil Mysliwiec](https://twitter.com/kammysliwiec)'s [post](https://trilon.io/blog/nestjs-swagger-4-whats-new) mentioned above, you know that with the new [NestJS Swagger plugin](https://docs.nestjs.com/recipes/swagger#plugin) version, the [NestJS CLI](https://github.com/nestjs/nest-cli) has the ability to scan all of our code and create the [Swagger](https://swagger.io/) documentation automatically. To do so, the only thing we need to modify is the *nest-cli.json* file, adding the following lines:
```json
{
  ...
  "compilerOptions": {
    "plugins": ["@nestjs/swagger/plugin"]
  }
}
```
Once added, we can run the service again with `npm run start:dev`, but this time we are going to see the following error:

This is because the service uses the *EChartOption* type and the [NestJS Swagger plugin](https://docs.nestjs.com/recipes/swagger#plugin) is not able to parse it. This is also easy to solve: I am going to document this particular object myself, adding the following [NestJS Swagger decorators](https://docs.nestjs.com/recipes/swagger#decorators) to the *echartOptions* property inside the `src/echarts/entities/options.entity.ts` file:
```typescript
import { ApiProperty } from '@nestjs/swagger';
import { EChartOption } from 'echarts';

import { ImageOptions } from './image-options.entity';

/**
 * Class to configure echarts options.
 */
export class Options {
  @ApiProperty({
    type: 'EChartOption',
    description: 'Chart configuration.',
    externalDocs: { url: 'https://echarts.apache.org/en/option.html' },
  })
  echartOptions: EChartOption;

  options?: ImageOptions;
}
```
Now, we can run the service again with `npm run start:dev`; the Swagger documentation will have changed and will include the new properties:

> *As I said before, don't you think this is so easy?*
> *End of Part 4. You can check the project code in the [part-4](https://github.com/yggdrasilts/echarts-api-service/tree/part-4) tag*
## To Sum Up <a name="to-sum-up"></a>
As I said at the beginning of this post, I am in love with [NestJS](https://nestjs.com/) because it is an awesome framework that helps you create backend services in an efficient, reliable and scalable way, using [TypeScript](https://www.typescriptlang.org/), with which I am in love as well. This post has shown several parts of the [NestJS](https://nestjs.com/) framework and how it helps you build a backend service easily.
I hope you have enjoyed reading it and that it helped you learn, improve or discover this great framework. At the following link you'll find the GitHub repository with the project code.
{% github yggdrasilts/echarts-api-service %}
Enjoy!! | telco2011 |
248,517 | You Are Being Tracked | I have never been a big fan of online tracking. I believe that online tracking fundamentally breaks m... | 0 | 2020-01-25T17:51:22 | https://timothymiller.dev/posts/2020/you-are-being-tracked/ | tracking, analytics, privacy | I have never been a big fan of online tracking. I believe that online tracking fundamentally breaks many of the precepts the web was built on, _however_ as of today, for the first time ever, I am tracking the traffic on [my own personal website](https://timothymiller.dev/).
To explain how I got here, let me tell you some of the main issues I've seen with online tracking.
## Tracking is slow
Online trackers are breathtakingly _slow_. Using the internet with an ad-blocker has become common specifically for this reason, I think: people got tired of waiting for their webpages to load. The internet is a _noticeably_ nicer place with ad-blockers and privacy blockers on.
I don't think this is the fault of any one tracker or tool. It's generally the combination of _many_ trackers that creates these problems.
This is a _huge_ failing for all of us, and we only have ourselves to blame. The sheer number of trackers on most websites is actively breaking the internet, but we continue to add them to every webpage like they're candy. Like candy, though, they create serious health issues for our websites, and we should treat them with more caution than we do.
## Most trackers dig too deep
Online trackers can be an invasion of privacy, especially when taken to their logical extreme. Simple anonymous traffic info is one thing, but many companies believe that _more data is always better_, and this contributes to many of the other issues with online tracking. The more we track, the more unnecessary problems we create.
In the last couple of years I have worked for dozens of companies, and every single one of them has a Google Analytics account that they barely use. Google Analytics is an _incredible_ tool, but let's just be honest here: the _vast_ majority of people have no idea how to use all of that data. Most people care about which pages get the most traffic, and maybe some simple e-commerce and email signup metrics. That's it. We don't need to be collecting _nearly_ as much information as we do.
## Most trackers “phone home”
Your Google Analytics data is not truly private. Your Facebook tracker is not truly private. Both tools have major marketing companies behind them, and the more they're able to track about a person's online identity, the better they can advertise to them.
There's nothing inherently wrong with them doing this: they provide incredible tools for free, and as payment they harvest whatever information they need. That's just how it works. But most people don't seem to _know_ that's how this works. I've heard many people rail against retargeted ads—ads that take your browsing history into account—and then they slap a Facebook pixel on their site, enabling the same behavior they reportedly dislike.
## Despite the downsides though, data can be useful...
Which is why I now have a [GoatCounter](https://www.goatcounter.com/) on my own website.
Up until today I had no idea if anyone was reading my blog or not, but I've always been curious. I've had my eyes peeled for a simple privacy aware tracker, and GoatCounter seems to fit the bill. A few things I like about it:
* It's open source. Security through transparency, I call this. If nothing is hidden, surely there's nothing to hide, right?
* Privacy aware. The creator of this tool has [written](https://www.arp242.net/dnt.html) [articles](https://www.arp242.net/goatcounter.html) on his views about online tracking, and his judgement seems sound.
* It's fast, light, and asynchronous. No extra bloat.
* It only tracks four metrics: page views, browser, rough screen size, and rough location. Nothing even remotely identifiable, and each metric is easy to use and apply to your project.
* I own my own data. It is fully exportable and deletable at any time. No phoning home, no perpetually saved data. Full control.
This is an experiment, but a good one, I think. I'm excited to see if GoatCounter will be a tool I will continue to use for years to come.
## One more thing about GoatCounter
A fun thing: you can also make your GoatCounter dashboard public. I don't know if I want to do that or not, but it would be a fun feature for specific projects, I think.
**This post was also published at [timothymiller.dev](https://timothymiller.dev/posts/2020/you-are-being-tracked/)** | webinspectinc |
248,528 | Python configs for Humans. Part #2 | Hello, dev.to! Today, I want to tell you about my library for configs (betterconf). I have already a... | 0 | 2020-01-25T18:21:37 | https://dev.to/prostomarkeloff/python-configs-for-humans-part-2-4269 | python, configs | Hello, dev.to!
Today, I want to tell you about my library for configs ([betterconf](https://github.com/prostomarkeloff/betterconf)). I have already written an article about it, but since then I have implemented some features that weren't covered there.
Okay, let's start!
First of all, you can now get values not only from environment variables. That is still the default, but you can change it.
```python
from betterconf import field, Config
from betterconf.config import AbstractProvider


class NameProvider(AbstractProvider):
    def get(self, name: str):
        return name


class Cfg(Config):
    my_var = field("my_var", provider=NameProvider())


cfg = Cfg()

print(cfg.my_var)
# my_var
```
And... you can cast your values with a simple and clear syntax:
```python
from betterconf import field, Config

# out of the box we have `to_bool` and `to_int`
from betterconf.caster import to_bool, to_int, AbstractCaster


class DashToDotCaster(AbstractCaster):
    def cast(self, val: str):
        return val.replace("-", ".")


to_dot = DashToDotCaster()


class Cfg(Config):
    integer = field("integer", caster=to_int)
    boolean = field("boolean", caster=to_bool)
    dots = field("dashes", caster=to_dot)


cfg = Cfg()

print(cfg.integer, cfg.boolean, cfg.dots)
# -500, True, hello.world
```
This library is lightweight and dependency-free. I'm using it in my production environments and recommend it to you too!
You can see more at Github: https://github.com/prostomarkeloff/betterconf | prostomarkeloff |
248,564 | Technology and the environmental challenges in this decade | Recently came to an end the World Economic Forum 2020, in Davos, Switzerland. Its theme was about the... | 0 | 2020-01-25T20:59:59 | https://dev.to/asf89/technology-and-the-environmental-challenges-in-this-decade-5dg6 | technology, environment, engineering, discuss | Recently came to an end the World Economic Forum 2020, in Davos, Switzerland. Its theme was about the climatic change around the globe, with several leaders and activists speaking about the need to take energic action to avoid a catastrophic future. As a human and a scientist, I think it should be interesting to share some of my visions about the information given in Davos and our role as producers of technology.
**It is important to note that this is my personal point of view**. It is my belief that the plurality of perceptions is vital to better understand the reality and form the basis for creative and innovative thinking.
We have only one home at the moment: this planet. The actions that cause environmental change reverberate through all the globe, slowly but surely. Our economic models don't foster sustainable development, and the result is starvation, unemployment, enormous inequality and other sad consequences. Some of the leaders in Davos have spoken about initiatives to recover environments in their countries, but it is, unfortunately, too little for the scale of the challenge we must overcome.
We have the knowledge and technology to address the problems of our time. We are no longer separated by geographical barriers, and because of the Internet we can discuss, plan and develop more solutions than in past centuries. Watching some of the speeches at Davos, I started to realize: *It's time to be decisive, active, and to start liberating the potential of our technology to create a sustainable society, to better care for our world*.
We can go to Mars, create colonies on the Moon, but I think we should first put our humanity ahead of our technology. We still pollute our streets, our rivers, extinguish animals and plants without much thinking about the future consequences. **This reflects in the technology we create today. This reflects in the mindset we adopt today**.
We can do better. We as developers have the tools to create a sustainable future. I would like to see examples of projects you are working on that are linked with environmental issues in your countries.
This may be an unusual post, but I think it is worth discussing topics about the direction we want our technology to go. I invite you, reader, to post your thoughts about this topic. Thanks for reading.
| asf89 |
248,596 | Infinite Jest: toBe or not.toBe | What is Jest? Jest is an open source JavaScript testing framework and is used by lots of d... | 0 | 2020-01-26T02:49:53 | https://dev.to/iris/infinite-jest-tobe-or-not-tobe-1nk9 | testing, javascript, beginners | ## What is Jest?
Jest is an open source JavaScript testing framework and is used by lots of different companies including Facebook, Twitter, Spotify, and more. Jest is fast and intuitive to learn and set up.
To install using npm, navigate to the directory you want to add tests for (`mkdir david-foster-wallace` and then `cd david-foster-wallace`), create a package.json file (`npm init -y`), and enter `npm install --save-dev jest` in your terminal.
## What is Infinite Jest?
Infinite Jest is a book by David Foster Wallace I have never read but have decided to reference numerous times to make this blog vaguely themed.
## Let's write a Jest test
Once you've installed Jest you'll need to make a quick change to your package.json file and then you can start writing your first test.
1) Change the `"test":` value in the `"scripts":` object to "jest"
```javascript
{
"name": "david-foster-wallace",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "jest" // <-- this line!
},
"keywords": [],
"author": "",
"license": "ISC",
"devDependencies": {
"jest": "^25.1.0"
}
}
```
2) Create a new file named pageCount.js
3) Write a function pageCount in this file
```javascript
const pageCount = () => {
return 1079 + ' pages';
// try this with 'pages' and the test in step 7 will fail!
}
module.exports = pageCount;
// export your function to access from your test file
```
4) Create a new file named pageCount.test.js or pageCount.spec.js
*Tests should be written in files ending in .test.js or .spec.js.*
5) Make sure to require your pageCount.js file in your test file
```javascript
const pageCount = require('./pageCount');
```
6) Write the test (we'll cover the terms and syntax in **Anatomy of a test** below)
```javascript
describe('these tests are all about the page count of Infinite Jest', () => {
test('pageCount should return the page count of Infinite Jest', () => {
expect(pageCount()).toBe('1079 pages');
})
})
```
7) Run your tests with `npm run test` in your terminal

*Did `npm run test` get stuck for you? (more like Infinite Test, am I right??) It happened to me too! I was able to solve it by running `brew uninstall watchman` in my terminal. Check out this GitHub issue for more information on [npm run test hangs](https://github.com/facebook/create-react-app/issues/960).*
## Anatomy of a test
We'll briefly cover the following terms from the test we wrote above:
* Describe -- logically group your tests together
* Test -- this will hold your test
* Expect -- this is your assertion that checks to see if your test passes or fails
Describe is used to group tests together. If we wanted to write a few more tests all about the page count of Infinite Jest we could add them under the describe we wrote above. Describe takes 2 arguments, your summary of the tests included in describe and a function that holds the tests.
```javascript
describe('these tests are all about the page count of Infinite Jest', () => {
test('pageCount should return the page count of Infinite Jest', () => {
expect(pageCount()).toBe('1079 pages');
})
test('endnotesPageCount should return the page count of the endnotes in Infinite Jest', () => {
expect(endnotesPageCount()).toBe('too many...');
})
test('tooLong should return a boolean indicating if Infinite Jest is too long', () => {
expect(tooLong()).toBe(true);
})
})
```
Test takes 3 arguments, your summary of conditions to test, a function that holds your "expect", and an optional timeout. For the purposes of this tutorial I won't cover the optional timeout argument. A test does not need to be written inside a describe method.
```javascript
test('timeToReadThisBook returns how long it takes to read I.J. based on reading speed', () => {
expect(timeToReadThisBook('medium speed')).toBe('~50 hours');
})
```
Expect is where you write what should happen when you test for different scenarios. Expect is where you can think about different scenarios and edge cases that could arise for your code and how you want to handle them. For example, for our timeToReadThisBook function you could write an expect for when 'null' is provided as the reading speed.
```javascript
test('timetoReadThisBook...', () => {
  expect(timeToReadThisBook(null)).toBe(
    'You will haunt your local public library. ' +
    'Your unfinished business is to read Infinite Jest. ' +
    'Ghosts do not have ghost library cards. Sad!'
  );
})
```
>The expect function is used every time you want to test a value. You will rarely call expect by itself. Instead, you will use expect along with a "matcher" function to assert something about a value. - [Jest Docs](https://jestjs.io/docs/en/expect#expectvalue)
## Jest matchers
Matchers are used to check the values in your expect methods. I've listed some of the most common matchers below:
* .toBe -- used for checking strict equality
* .toEqual -- used for checking objects and arrays
* .not -- `expect(pageCount()).not.toBe('1 page')`
* .toContain -- used to check if an array contains an item
* .toMatch -- used to check for regex matches
*[Complete list of matchers](https://jestjs.io/docs/en/expect)*
### .toBe vs .toEqual
The distinction between .toBe and .toEqual methods is that .toBe checks for strict equality (works for primitive types like strings and numbers) whereas 'toEqual recursively checks every field of an object or array' (thanks [Jest Docs](https://jestjs.io/docs/en/using-matchers)!).
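To make the distinction concrete, here is a plain-JavaScript sketch of the two kinds of equality (this is an illustration of the idea, not Jest's actual code): `.toBe` behaves like `Object.is`, which is reference equality for objects, while `.toEqual` recursively compares fields.

```javascript
const book = { title: 'Infinite Jest', pages: 1079 };
const sameBook = { title: 'Infinite Jest', pages: 1079 };

// Reference equality (what .toBe uses): two distinct objects with
// identical fields are NOT "the same" object
const isSameReference = Object.is(book, sameBook); // false

// A minimal recursive field comparison, the idea behind .toEqual
function deepEqual(a, b) {
  if (Object.is(a, b)) return true;
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null) {
    return false;
  }
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((key) => deepEqual(a[key], b[key]));
}

const isDeeplyEqual = deepEqual(book, sameBook); // true

console.log(isSameReference, isDeeplyEqual); // false true
```

So `expect(book).toBe(sameBook)` would fail even though `expect(book).toEqual(sameBook)` passes.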

## In conclusion...
The novel Infinite Jest by David Foster Wallace was named for a line from Shakespeare's Hamlet and that is a fact I definitely knew before today.
>Alas, poor Yorick! I knew him, Horatio: a fellow of **infinite jest**, of most excellent fancy: he hath borne me on his back a thousand times; and now, how abhorred in my imagination it is! - *Straight outta Hamlet's mouth*
.toBe and not.toBe are methods for Jest. Coincidence? I think not.
>To be, or not to be, that is the question - *Also straight outta Hamlet's mouth*

I hope my blog has inspired you to learn how to write tests with Jest and maybe even read Infinite Jest or at least read one paragraph of the Infinite Jest Wikipedia page like I did.
## Sources
* [Jest Docs](https://jestjs.io/docs/en/getting-started)
* [Test Automation University - Jest Tutorial (really helpful)](https://testautomationu.applitools.com/jest-testing-tutorial/)
* [npm run test hangs](https://github.com/facebook/create-react-app/issues/960)
* [Infinite Jest on Wikipedia](https://en.wikipedia.org/wiki/Infinite_Jest)
| iris |
248,696 | React/Vue components are just server side template components with worse performance. Change my mind. | To this day, even after going through react tutorials, I still feel like it so much more overhead tha... | 0 | 2020-01-26T06:12:40 | https://dev.to/spacerockmedia/react-vue-components-are-just-server-side-template-components-with-worse-performance-change-my-mind-59ge | discuss, react, vue, jinja | To this day, even after going through React tutorials, I still feel like it is so much more overhead than what is needed compared to using a good templating engine like Jinja. I can easily make components and, in one file, inject CSS and JS that is only used on that component.
Plus, most times the templates are cached. So with very little css/js to load it's really fast. Basically like a static site.
But, data binding. OK, so there's some JS on the page that can change some elements. You can still make an Ajax request to some endpoint to get a JSON response and update the UI. Even faster with a websocket to subscribe to an endpoint. You still don't need React/Vue for that.
Another upside is not having to deal with libraries getting out of date. No need to update your React version when a new one comes out. Less headache, no overhead. Sure, there will be something to update if you're using JS for Ajax requests, but that's likely an easier upgrade anyway.
Am I forgetting any other reason server-side template rendering was just easier?
_**Note 1**: My goal here is to have a conversation about this. I am trying to learn better for both sides of the argument._
_**Note 2**: I do want to be fair in saying that I understand the desire for this for a rich, interactive application interface with a lot more moving parts that don't need to communicate with a back end, such as when you build an offline application._ | autoferrit |
249,166 | Start today with Surface Duo development on preview emulator and SDK | Surface Neo and Surface Duo are new devices by Microsoft, planned to launch for holidays season this... | 0 | 2020-01-27T05:54:12 | https://gunnarpeipman.com/surface-duo-development-preview/ | mobile, surface, android, xamarin | ---
title: Start today with Surface Duo development on preview emulator and SDK
published: true
date: 2020-01-27 05:10:57 UTC
tags: mobile,surface,android,xamarin
canonical_url: https://gunnarpeipman.com/surface-duo-development-preview/
---
[Surface Neo](https://www.microsoft.com/en-us/surface/devices/surface-neo "Surface Neo homepage") and [Surface Duo](https://www.microsoft.com/en-us/surface/devices/surface-duo "Surface Duo homepage") are new devices by Microsoft, planned to launch for the holiday season this year. Surface Neo runs Windows and Surface Duo is based on Android. For Surface Duo there is already preview tooling and an SDK available from Microsoft. Here's an introduction to Surface Duo development, tools, and patterns.
With Surface Neo and Surface Duo, Microsoft enters an era of hybrid devices. Both new devices are positioned between mobile phones and tablets: Surface Neo is more on the tablet side, while Surface Duo is closer to mobile devices.
> **Save the date!** If you are more than interested in new dual-screen experiences and want to see next big announcements when they are made then [Microsoft 365 Developer Day](https://developer.microsoft.com/en-us/microsoft-365/virtual-events "Microsoft 365 Developer Day") on 11th of February is the event for you.
### Unique dual-screen approach
Instead of making the folding line between the two screens as thin as possible, Microsoft intentionally decided on a surprisingly wide line. It seems unexpected and irrational at first, but when we dig deeper it's actually a damn clever approach. Why? On the real device, the folding line between screens is wide and narrow at the same time. It's narrow enough not to be annoying for views extended over two screens, and it's wide enough to serve as a separation line in master-detail views, where the master part of the view opens on one screen and the details part on the second.
The dual nature of the user experience is something that makes Surface Neo and Surface Duo special. It also introduces new concepts in UI design and new UI patterns. Perhaps the most challenging part of these new-style UIs is supporting one- and two-screen modes, as users can decide how many screens they want an application to cover.
### Getting Surface Duo SDK preview
The Surface Duo SDK comes as a zip archive containing an installer file. Make sure you have the latest Visual Studio 2019 installed with Xamarin.
- [Download Surface Duo SDK Preview](https://www.microsoft.com/download/details.aspx?id=100847 "Download Surface Duo SDK Preview")
- [Use the Surface Duo emulator](https://docs.microsoft.com/en-us/dual-screen/android/use-emulator?tabs=windows "Use the Surface Duo emulator") (useful hints to make emulator work)
The installer creates a folder for the Surface Duo system images. The emulator can be started by running the batch files in the installation folder or by using the icon on the desktop.
> **Hyper-V warning!** I ran into different troubles with emulator when Hyper-V was running on my machine. Stopping Hyper-V services was not enough – I had to uninstall Hyper-V and also Windows 10 virtualization features to get Surface Duo emulator running.
### Surface Duo emulator
As the Surface Duo has a foldable screen, it needs a specialized emulator where the foldable, two-part screen is already considered at the operating-system level. The screenshot below shows the Surface Duo emulator running with no applications open.

Regular applications open on one screen (left or right of the black folding line). They can be moved from one screen to another. If an application is moved onto the folding line, both screens are highlighted. At this point, when the user releases the window, the difference between regular and new applications shows: a regular application moves to the left or right screen, leaving the other one as it was before, while a new application enters extended mode and covers both screens.
### Dual screen patterns
Before getting to code, let me introduce the new dual-screen patterns for Surface Neo and Surface Duo. As with all new UI concepts, there are at least some rules to follow to guarantee a great user experience. The dual-screen world is no exception.

**Extended Canvas** – extending the canvas allows users to take advantage of the larger screen real-estate provided by dual-screen devices.

Use with map and drawing canvas apps.
**Master-Detail** – separating navigation or overview from details allows users to drill deeper into content while staying grounded regarding their position in the overall list/aggregate.

Use with apps that have lists or galleries, mail and scheduling apps, photos or image curation apps, music apps with playlists and song details, apps with strong navigation structure.
**Two Page** – leveraging the skeuomorphic metaphor of a book to showcase one page on each screen, so it’s more conducive to reading.

Use with document oriented apps, apps with content that is paginated, apps made for reading, apps with itemized canvas.
**Dual View** – having multiple views of the same app in the same container, allowing comparison of similar-type content side by side.

Use with editing tools that benefit from having before/after states side-by-side, content and context side-by-side, apps that let the user compare similar items, having two canvases with coordinated content but keeping each page separate.
**Companion Pane** – show complementary context to augment users’ tasks, usually with a primary/secondary relationship, by elevating to the surface previously buried level 2 functionalities for quicker access.

Use with productivity apps that have supplemental information that appears next to the main content, creative tools like image drawing app, music or video editor apps, gaming apps.
### Exploring Surface Duo examples
Surface Duo examples are available for Xamarin Native and Xamarin Forms. For native apps there is one project per demo. For Xamarin Forms there's just one project demonstrating all the dual-screen application patterns. Those who want to get a quick idea of how a new application works and how the new UI patterns are implemented should start with the Xamarin Forms application.
> **NB!** Some sample forms need a Google Maps key. You can acquire one from the [Google Developers Console](http://console.developers.google.com "Google Developers Console"). The Maps key is needed in the HTML files located in the assets folders. Once you have a key, just copy and paste it where indicated in the HTML.
Before starting the application, make sure the emulator is running. For Visual Studio, the emulator is like a device it can connect to.

Although it doesn't have a very informative name, it still works, and Microsoft will come up with a better name for the Surface Duo emulator in the future.
The screenshot below shows the Xamarin Forms application opened on the left screen. You can click the buttons to see demos of the different dual-screen application patterns implemented for Surface Duo. I used this very same application to make the screenshots demonstrating the dual-screen patterns.

On the bottom part of the left screen there's a white line on the black border. We can point the mouse in the emulator (or a finger on the real device) at the white line and move the application to the right screen or let it span over both screens. Spanning an application over both screens and moving it between the two screens is supported in the emulator.
Here’s the screen recording of Xamarin demo application.
***<a href="https://static.gunnarpeipman.com/wp-content/uploads/2020/01/surface-duo-demo-fixed.mp4">Watch video</a>***
With these examples we have implementations of all dual-screen patterns available.
### Exploring code in Visual Studio
The Xamarin application uses the Xamarin.DuoSdk NuGet package, which provides access to the hinge sensor and dual-screen features.

The HingeSensor class is for monitoring the hinge sensor. It can tell us if the device has a hinge and notify us when the sensor value changes. ScreenHelper gives us information about hinge size and screen rotation, and it has a few more helpful methods that we can use to react to hinge sensor changes.
While sniffing around the Visual Studio project, I found the IHingeService and ILayoutService interfaces in the "shared" Xamarin project. Their implementations, HingeService and LayoutService, are in the Xamarin Android project. Both of these classes are so free of sample-application specifics that, in my opinion, they should be part of some NuGet package; after all, there is already the Xamarin.DuoSdk NuGet package.
The pages of the sample application are all implemented in the "shared" project. There are components like TwoPaneView and FormsWindow that are used in the pages. Both components are generic enough to ship as a NuGet package.
It was a little surprising to me that there are no base pages or ready-made templates for the new dual-screen pages implementing the corresponding patterns mentioned above. It's possible that we will get something when a preview version for Surface Neo is released.
### Wrapping up
The Surface Duo preview SDK and emulator are the first real tools for developers to get started with dual-screen app development. Although the emulator doesn't look as slick as the real device, it works well enough. As developers, we can really start experimenting now and extend our new applications to Surface Neo as soon as tooling for Surface Neo is available. The documentation available now is not complete yet, but there's still enough information to get an idea of how things work for dual-screen experiences.
### References
- [Create apps for dual-screen devices](https://docs.microsoft.com/en-us/dual-screen/ "Create apps for dual-screen devices")
- [Surface Duo device dimensions](https://docs.microsoft.com/en-us/dual-screen/android/duo-dimensions "Surface Duo device dimensions")
- [Get the Surface Duo SDK](https://docs.microsoft.com/en-us/dual-screen/android/get-duo-sdk?tabs=csharp "Get the Surface Duo SDK")
- [Use the Surface Duo emulator](https://docs.microsoft.com/en-us/dual-screen/android/use-emulator?tabs=windows "Use the Surface Duo emulator")
- [Announcing dual-screen preview SDKs and Microsoft 365 Developer Day](https://blogs.windows.com/windowsdeveloper/2020/01/22/announcing-dual-screen-preview-sdks-and-microsoft-365-developer-day/ "Announcing dual-screen preview SDKs and Microsoft 365 Developer Day")
- [Microsoft 365 Developer Day](https://developer.microsoft.com/en-us/microsoft-365/virtual-events "Microsoft 365 Developer Day")
The post [Start with Surface Duo development on preview emulator and SDK today](https://gunnarpeipman.com/surface-duo-development-preview/) appeared first on [Gunnar Peipman - Programming Blog](https://gunnarpeipman.com). | gpeipman |
250,855 | Benefit of 'key' prop in React | If you are a beginner in a react js you may have encountered the key warning in the console while lis... | 0 | 2020-01-29T04:47:34 | https://dev.to/pratham0182/benefit-of-key-prop-in-react-4l33 | react, javascript | If you are a beginner in React, you may have encountered the key warning in the console while rendering a list of elements in a loop.
The "key" prop is a very important and useful concept in React for improving rendering performance and user experience.
Now the question is: how? Refer to the image below.

In the image above, on the left side we have a list where 2 elements are rendered. Now we add a third element at the end of the list, as shown on the right. React will compare both and know that it only needs to render the third element in the list.

But if we add the element at the first position, as shown in the image above (Diana added to the list), React's position-based match will fail and it will re-render the whole list again.
To avoid this re-rendering, we can use the "key" prop in the list, which lets React keep track of the elements and update only the ones that are new, as shown in the image below. So simple :)
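The idea can be sketched in plain JavaScript (this is only an illustration, not React's actual reconciliation algorithm; the names and helper functions are hypothetical): without keys, items are matched by position, so a prepend invalidates every position, while with keys only the genuinely new item is treated as changed.

```javascript
// Position-based matching (no keys): items are compared index by index,
// so prepending an item makes every position look "changed".
function changedByPosition(oldList, newList) {
  return newList.filter((item, i) => oldList[i] !== item);
}

// Key-based matching: items are looked up by their key,
// so only genuinely new items are treated as "changed".
function changedByKey(oldList, newList) {
  const oldKeys = new Set(oldList);
  return newList.filter((item) => !oldKeys.has(item));
}

const oldList = ['Duncan', 'Kevin'];
const prepended = ['Diana', 'Duncan', 'Kevin']; // Diana added at the front

console.log(changedByPosition(oldList, prepended)); // [ 'Diana', 'Duncan', 'Kevin' ]
console.log(changedByKey(oldList, prepended));      // [ 'Diana' ]
```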
 | pratham0182 |
251,684 | Random number generation in Python | In Python you can generate random numbers with the random module. To load this module use the line... | 0 | 2020-01-30T15:53:41 | https://dev.to/bluepaperbirds/random-number-generation-in-python-f8e | python, beginners | In <a href="https://python.org">Python</a> you can generate random numbers with the random module. To load this module use the line
```python
import random
```
To generate 10 numbers between 1 and 10 you can use:
```python
import random
for x in range(10):
    print(random.randint(1, 10))
```
*These are pseudo-random numbers, but for most purposes they are good.*
The lowest possible number here is 1 and the maximum 10; note that `random.randint(a, b)` includes both endpoints. The for loop is used to repeat this 10 times.
If instead of random integers you want random floats, you can use uniform() as shown in <a href="https://pythonbasics.org/random-numbers/">this example</a>
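For instance, a small sketch showing `uniform()` together with `seed()`, which makes the pseudo-random sequence reproducible (the seed value here is arbitrary):

```python
import random

random.seed(42)  # arbitrary seed: makes the sequence reproducible
value = random.uniform(1, 10)  # random float where 1 <= value <= 10
print(value)

# Re-seeding with the same value replays the same sequence
random.seed(42)
print(random.uniform(1, 10) == value)  # True
```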
## Random examples
To get a larger number of random numbers, you can change the for loop. If
you want 20 numbers use:
```python
import random
for x in range(20):
    print(random.randint(1, 10))
```
For 50 numbers use:
```python
import random
for x in range(50):
    print(random.randint(1, 10))
```
You get the idea. To get numbers between 1 and 100 you can use:
```python
print(random.randint(1, 100))
```
or in a loop
```python
import random
for x in range(50):
    print(random.randint(1, 100))
```
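Two related helpers from the same module are worth knowing (a quick sketch; the example values are arbitrary): `sample()` draws unique values without repeats, and `choice()` picks a single element from a sequence.

```python
import random

# random.sample picks unique values (no repeats): 5 distinct numbers from 1-100
print(random.sample(range(1, 101), 5))

# random.choice picks a single element from a sequence
print(random.choice(['red', 'green', 'blue']))
```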
**Related links:**
* <a href="https://docs.python.org/3.8/library/random.html">Python documentation on random module</a>
* <a href="https://pythonspot.com/random-numbers/">More random data examples</a>
* <a href="https://pythonspot.com/">Learn Python</a>
| bluepaperbirds |
253,252 | Secure AWS Environments by deploying apps in Private/Public Subnets | This is a series of posts to introduce the importance of using public and private subnets to keep you... | 0 | 2020-02-02T06:01:19 | https://dev.to/raphael_jambalos/secure-aws-environments-with-private-public-subnets-2ei9 | aws, security | This is a series of posts to introduce the importance of using public and private subnets to keep your infrastructure secure in AWS.
In this post, we would: (i) create an EC2 instance, (ii) install NGINX there, and (iii) see the simple NGINX homepage in our browser. This would be a straightforward task if we made the EC2 instance publicly accessible. We would just create an EC2 instance in the public subnet, give it a public IP address, install NGINX, and see the NGINX homepage in the browser via the public IP address. _That's it_.
But this would also mean we are increasing the number of entry points into our VPC. This pattern establishes a dangerous precedent. What if we had to deploy 10 _more_ client-facing apps? Then, we would have to create 10 new publicly accessible entry points. _This gives hackers more and more ways to get into our VPC_.
# VPC design patterns to keep our VPC secure
The best practice is to limit the number of entry points to our VPC by using the Application Load Balancer (ALB) for HTTP/HTTPS traffic and the bastion host for SSH traffic. With this, we can deploy hundreds of applications in our VPC yet still keep the entry points to our VPC to just the ALB and the bastion host.
The AWS environment I used for this post is detailed below. I don't discuss how to setup it up in this post but I will do so in another post. I will link it here when it's finished.
- VPC with 4 subnets: 2 private subnets and 2 public subnets. Instances in the private subnets cannot be accessed directly from the internet, but the instances themselves can access the internet (e.g., to get software updates and patches).
- An application load balancer placed on the 2 public subnets. The ALB should be able to serve HTTP/HTTPS traffic from anywhere. By design, an ALB has servers on the public subnets. Traffic goes into these servers and, based on the request's path and host header, the ALB decides where to direct it.
- A bastion host that can serve SSH traffic from anywhere. We will use this as a way to access all of our instances in the private subnets.
For now, let's start to create our EC2 instance.
# 1 | Creating an EC2 instance
**(1.1)** In the services tab, go to EC2

**(1.2)** On the left-hand side menu, choose Instances. On that page, click "Launch Instance"

**(1.3)** Amazon Machine Image
The first step in creating EC2 instances is choosing an Amazon Machine Image (AMI) to create an instance from. Choose the Ubuntu 18.04 AMI.
The most basic AMIs are just plain installations of popular operating systems like Ubuntu, CentOS, Windows, etc. This saves us the pain of installing an OS from scratch (that usually takes hours!!). With AMIs, we get to use our Ubuntu 18.04 EC2 instance in less than a minute.

**(1.4)** EC2 instance type
We now choose our EC2 instance type. The instance type determines the amount of compute (CPU), memory, and networking resources that will be available to our EC2 instance. AWS provides a wide array of EC2 instance types for every possible workload. You can learn more about them [here](https://aws.amazon.com/ec2/instance-types/)
Choosing which EC2 instance type is appropriate for your workload ultimately depends on how many resources your application will use. For this workload, choose t2.micro. Its resources should be enough since we will just be installing an NGINX server.
Then, click next.

**(1.5)** Network Configuration of the EC2 instance
Now, we will configure our instance. Make sure you are in the correct VPC. For the subnet, choose any *private subnet* with an available IP address (you should see how many available IP addresses there are for that subnet below the field). Set _Auto-assign Public IP_ to disabled.
Then, click next.

**(1.6)** Storage
For the storage of our EC2 instance, keep the default. As of the writing of this post, the default for a root volume is an 8GB volume. A root volume is where the operating system will be installed.
You can provision more volumes (or increase the size of your root volume) as your workload demands.

**(1.7)** Tags
Now, we would add tags to our EC2 instance. Tags are key-value pairs. They serve as a way to sort and classify our AWS resources across our account.
Click "add a Name tag". This should show a field where you can add what the instance's Name tag would be. The value should be "nginx-one"
Then, click next.

**(1.8)** Security Groups
If we deploy our EC2 instance now, we would not be able to access it at all. This is because the security group of our EC2 instance isn't set up to accept any connections. Security groups are a set of rules for incoming and outgoing traffic. They govern which resources can communicate with a specific set of resources and in what way (i.e., allow only connections via port 22 "ssh").
By default, a security group's rules for outgoing traffic are a pass-all (all traffic leaving the instance is allowed). For incoming traffic, however, we are left with the discretion of what resources we want to allow to connect to our EC2 instance and what kind of connections with them we would allow. We can specify these resources in 3 ways:
- a range of IP addresses (i.e allow all computers within the IP range of 192.168.0.0/24 to connect to my instance across all ports).
- a specific IP address (i.e allow 192.168.12.1 to connect to my ec2 instance via port 80 [http]),
- a security group (i.e allow instances with the security group "bastion-host-sg" to connect to my instance via port 22).
- Using this option is easier if the resources you are giving access to are within AWS. This is because you can just keep on adding instances into the chosen security group rather than add a new rule in this security group for every new instance we want to give access to
- For example: rather than creating a rule to allow SSH traffic from 192.168.12.1 ("EC2 instance A") and another rule to allow SSH traffic from 192.168.12.2 ("EC2 instance B"), we can create a security group ("bastion-host-sg"), add EC2 instance A and B there, and add this security group to the rules of the security group for this EC2 instance ("ec2-nginx-sg").
- With this, security groups serve 2 purposes. They contain a set of rules to govern incoming and outgoing traffic. It also serves as a grouping of AWS resources. This grouping can be referred to by other security groups in their own rules.
We have to set up the security group of our EC2 instance to be able to accept SSH traffic (so we can connect to it via SSH) and accept traffic from port 80 (http) from the load balancer.
For our setup, we will create a new security group and name it "ec2-nginx-sg". We would allow:
- port 22 (SSH) connections from the security group of the _bastion host_
- port 80 (HTTP) connections from the security group of the _application load balancer_.
Then, click _Review and Launch_.

**(1.9)** Review and choose key pair
After configuring the security group, we would be able to review all the configuration we made. Double check the configuration you made with the instructions in this document. If you're satisfied, click Launch.
Before launching, AWS will ask us to choose a key pair (or create a new one). A key pair is 2 mathematically related keys: one key you keep, and the other AWS keeps. In the next section, we will connect to our EC2 instance using the key that we have. AWS will use a mathematical function to verify that the key we have is "related" to the key they have (this is the basis of "asymmetric cryptography"). If it is, we can access our EC2 instance.
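As an aside, you can see the two halves of such a pair locally with `ssh-keygen` (this is only an illustration; AWS generates and stores its own pair when you create a key pair in the console, and the file name here is made up):

```sh
# Generate a throwaway RSA key pair with an empty passphrase (demo only)
ssh-keygen -t rsa -b 2048 -f demo_key -N "" -q

# demo_key      -> the private half you keep (like the downloaded .pem file)
# demo_key.pub  -> the public half (AWS keeps its copy of this part)
ls demo_key demo_key.pub
```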
For our example, we would create a new key-pair called "ec2_nginx_kp". We would be asked to download the file (keeping one key) and AWS will keep the other.
Then, finally, click "Launch Instances".

You should see this screen:

# 2 | Configure our EC2 instance
_Understanding Bastion Hosts_
Bastion hosts are EC2 instances located on the public subnet. These instances are usually publicly accessible from anywhere (or from a set of IP addresses). The best practice is that anyone who needs access to any of the computers inside the VPC _must_ SSH into the bastion host first before doing another SSH to the instance they want to go to.
In doing this practice, the point of SSH entry into the VPC is reduced to just the bastion host. It is also best practice that the bastion host is hardened and tightly monitored. Measures like access logging (who accesses the bastion host), automated intrusion detection, and tougher security controls are usually in place for the bastion host.
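In practice, hopping through the bastion can be made transparent with OpenSSH's `ProxyJump` option. Here is a sketch of an `~/.ssh/config` on your local machine, with hypothetical host names and IP addresses:

```
# ~/.ssh/config (illustrative host names / IPs)
Host bastion
    HostName 203.0.113.10          # public IP of the bastion host
    User ubuntu
    IdentityFile ~/.ssh/bastion_kp.pem

Host nginx-one
    HostName 10.0.2.15             # private IP, reachable only via the bastion
    User ubuntu
    IdentityFile ~/.ssh/ec2_nginx_kp.pem
    ProxyJump bastion              # hop through the bastion automatically
```

With this in place, `ssh nginx-one` connects through the bastion in one command.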
**(2.1)** Access the bastion host via SSH. Copy the key pair you downloaded in 1.9 to the bastion host. I usually just copy-paste the PEM file via `nano`. You can use `scp` if you want. Then, run `chmod 400 yourkeypairname.pem`.
**(2.2)** Follow steps 1.1 and 1.2. On the Instances page, find the instance you just launched. You can identify it with the name you gave it in step 1.7 (see how tags already made our lives easier~!). Then, click "Connect".

**(2.3)** Once inside the EC2 instance, install NGINX. Then, validate if NGINX is really running inside the EC2 instance.
```sh
# update apt and install nginx
sudo apt update
sudo apt install nginx
# you should see its "active (running)"
systemctl status nginx
# validate if NGINX really is running
# - you should see "Welcome to nginx!" in the console
curl localhost
```
# 3 | Connecting your EC2 instance to the load balancer
_Understanding Application Load Balancers (ALBs)_
Traditional load balancers distribute the traffic of a single application to many servers. Application load balancers also distribute traffic to many servers, but they can support many applications, each having its own set of servers called a target group. To make this possible, we need to configure rules in the ALB so it can discern where to direct a request.
In understanding ALBs, it is useful to think of a request as something like the hash below. The ALB, operating on the application layer of the OSI stack, sees this hash and uses it to determine where to direct the request.

The ALB has several layers of filtering. The first layer is to check which listener the packet belongs to. It is common for ALBs to have two listeners: port 80 for HTTP and port 443 for HTTPS. If the request does not belong to a listener, it is ignored / dropped (i.e traffic bound for port 22).

Each listener has its own set of rules. The given packet is directed to the HTTPS listener. The rules under a listener look like a long `if` block. The condition on each `if` statement in the block dictates what kind of traffic can get routed to a target group. In the image, the `if` statement with the condition `packet[:host_header] == stocks.jambyblogsite.com` will direct traffic to `store_tg(packet)`.
The ordering of the `if` conditions in the block also determines how the request will get served. If a broader `if` condition is above a narrower `if` condition, the narrower condition will never match. For example, if the first `if` condition is `packet[:host_header] == '*.jambyblogsite.com'` and the second `if` condition is `packet[:host_header] == 'finance.jambyblogsite.com'`, then the second `if` condition will never get served.

Once a particular target group has been chosen, an algorithm will choose which instance in the target group the packet will be directed to.

With this functionality, the ALB can truly allow you to serve traffic to thousands of different HTTP/HTTPS-based applications and yet still maintain 2 points of entry in your VPC.
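The listener-and-rules evaluation described above can be sketched in a few lines of plain JavaScript. This is a toy model for illustration only; the ports, host headers, and target-group names are hypothetical, and nothing here calls a real AWS API:

```javascript
// Toy model of ALB routing: a listener filters by port, then its rules
// are evaluated top-to-bottom and the first matching condition wins.
const listeners = {
  443: [
    // Narrow rule first, so the broad wildcard below cannot shadow it.
    { matches: (req) => req.hostHeader === "stocks.jambyblogsite.com", target: "stocks-tg" },
    { matches: (req) => req.hostHeader.endsWith(".jambyblogsite.com"), target: "default-tg" },
  ],
};

function route(req) {
  const rules = listeners[req.port];
  if (!rules) return null; // no listener for this port: dropped (e.g. port 22)
  for (const rule of rules) {
    if (rule.matches(req)) return rule.target; // first match wins
  }
  return null; // no rule matched
}
```

Swapping the two rules would reproduce the shadowing problem described above: the wildcard would match first and the `stocks` rule would never be reached.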
Now, let's dive into how to make this work with our EC2 instance.
**(3.1)** Follow step 1.1, and then on the left-hand side menu, go to Target Groups. On that page, click "Create Target Group"

**(3.2)** In the window, name the target group "nginx-tg" and leave the defaults, just make sure to select the correct VPC

**(3.3)** Select the "nginx-tg" target you just created. Under the targets tab, select "Edit"

**(3.4)** A modal will appear. Select the `nginx-one` server you created in Section 1, then click "Add to registered", and click save.

You will see that the "Target group is not configured to receive traffic from the load balancer". We will work on that next.

**(3.5)** Follow step 1.1, and then on the left-hand side menu, go to Load Balancer. Select the appropriate ALB. In the listeners tab, select the HTTP (port 80) listener and select "View / Edit Rules".

_3.6 If you have a public hosted zone_
This applies if you bought your own domain name and have connected it with Route 53.
**(3.6.1)** On a separate window, open Route 53. Under your Public Hosted Zone, create your own subdomain. For me, my subdomain would be `nginx-web.<mydomainname>.com`

**(3.6.2)** From step 3.5, you should see the screen below. Click the (+) button and then select the uppermost "(+) Insert Rule".

**(3.6.3)** ALB rules consist of conditions and actions. For this rule, set the condition as "Host Header" with a value of `nginx-web.<mydomainname>.com`. For the action, forward it to target group `nginx-tg`, the one you made in step 3.2

Then, click the check button and you should see the screen below. Finally, click save on the upper right.

**(3.10)** After a few minutes, you should be able to see the NGINX web page displayed on your browser with the host value `nginx-web.<mydomainname>.com`

| raphael_jambalos |
253,305 | Creating a custom menu bar in Electron | Tutorial about how to create custom menu bar in Electron apps. | 0 | 2020-02-02T00:00:00 | https://dev.to/saisandeepvaddi/creating-a-custom-menu-bar-in-electron-1pi3 | tutorial, electron, javascript | ---
title: Creating a custom menu bar in Electron
date: "2020-02-02"
description: "Tutorial about how to create custom menu bar in Electron apps."
---
(Originally published at [my blog](https://saisandeepvaddi.com/how-to-create-custom-menu-bar-in-electron/))
Do you want to replace your Electron app's menu bar with something that looks cool? Let's see how to build a custom menu bar by building one similar to Slack's menu bar.
## Pre-requisite
Basics of ElectronJS. Check [this tutorial](https://www.electronjs.org/docs/tutorial/first-app) to get started.
## Resources
Finished code is available at [https://github.com/saisandeepvaddi/electron-custom-menu-bar](https://github.com/saisandeepvaddi/electron-custom-menu-bar)
## What we'll build
Here is what it is going to look like when we finish.
<p align="center">
<img alt="Result image before clicking on menu" src="https://dev-to-uploads.s3.amazonaws.com/i/l82onsspfdhgblq9ylam.jpg" width="500" />
</p>
<p align="center">
<img alt="Result image with menu open" src="https://dev-to-uploads.s3.amazonaws.com/i/ga36gkvnh2idyjpd7xyf.jpg" width="500" />
</p>
<p align="center">
<img alt="Result image with mouse over close" src="https://dev-to-uploads.s3.amazonaws.com/i/vsfupebma5c73eyty9xz.jpg" width="500" />
</p>
## Set up electron project
Set up a minimal electron app from electron's official quick start github repo.
```
# Clone the Quick Start repository
$ git clone https://github.com/electron/electron-quick-start
# Go into the repository
$ cd electron-quick-start
# Install the dependencies and run
$ npm install && npm start
```
## Main process code
When you first run `npm start`, you will see a window with a default menu bar attached to it. To replace it with our own menu bar, we need to do two things in the `main.js` file:
1. Set the `frame: false` in the `options` object for `new BrowserWindow({frame: false, ...other-options})`. This will create a window without toolbars, borders, etc. Check [frameless-window](https://www.electronjs.org/docs/api/frameless-window) for more details.
2. Register an event listener on `ipcMain` which receives a mouse click position when the mouse is clicked on the hamburger icon.
```js
// main.js
mainWindow = new BrowserWindow({
width: 800,
height: 600,
webPreferences: {
preload: path.join(__dirname, "preload.js")
// (NOT RECOMMENDED)
// If true, we can skip attaching functions from ./menu-functions.js to window object in preload.js.
// And, instead, we can use electron APIs directly in renderer.js
// From Electron v5, nodeIntegration is set to false by default. And it is recommended to use preload.js to get access to only required Node.js apis.
// nodeIntegration: true
},
frame: false //Remove frame to hide default menu
});
// ...other stuff
}
// Register an event listener. When ipcRenderer sends mouse click co-ordinates, show menu at that position.
ipcMain.on(`display-app-menu`, function(e, args) {
if (isWindows && mainWindow) {
menu.popup({
window: mainWindow,
x: args.x,
y: args.y
});
}
});
// ... other stuff.
```
Create a file called `menu-functions.js` and define these functions. All the functions here take Electron's `BrowserWindow` object (`mainWindow` in this app) and run the minimize, maximize, close, and open-menu actions which we need to trigger from our custom menu bar.
```js
// menu-functions.js
const { remote, ipcRenderer } = require("electron");
function getCurrentWindow() {
return remote.getCurrentWindow();
}
function openMenu(x, y) {
ipcRenderer.send(`display-app-menu`, { x, y });
}
function minimizeWindow(browserWindow = getCurrentWindow()) {
if (browserWindow.minimizable) {
// browserWindow.isMinimizable() for old electron versions
browserWindow.minimize();
}
}
function maximizeWindow(browserWindow = getCurrentWindow()) {
if (browserWindow.maximizable) {
// browserWindow.isMaximizable() for old electron versions
browserWindow.maximize();
}
}
function unmaximizeWindow(browserWindow = getCurrentWindow()) {
browserWindow.unmaximize();
}
function maxUnmaxWindow(browserWindow = getCurrentWindow()) {
if (browserWindow.isMaximized()) {
browserWindow.unmaximize();
} else {
browserWindow.maximize();
}
}
function closeWindow(browserWindow = getCurrentWindow()) {
browserWindow.close();
}
function isWindowMaximized(browserWindow = getCurrentWindow()) {
return browserWindow.isMaximized();
}
module.exports = {
getCurrentWindow,
openMenu,
minimizeWindow,
maximizeWindow,
unmaximizeWindow,
maxUnmaxWindow,
isWindowMaximized,
closeWindow,
};
```
We need to attach these functions to the `window` object so that we can use them in the renderer process. If you are using an older version (<5.0.0) of Electron, or you set `nodeIntegration: true` in `BrowserWindow`'s options, you can use the above `menu-functions.js` file directly in the renderer process. Newer Electron versions set it to `false` by default for [security reasons](https://www.electronjs.org/docs/tutorial/security#2-do-not-enable-nodejs-integration-for-remote-content).
```js
// preload.js
const { remote } = require("electron");
const {
getCurrentWindow,
openMenu,
minimizeWindow,
unmaximizeWindow,
maxUnmaxWindow,
isWindowMaximized,
closeWindow,
} = require("./menu-functions");
window.addEventListener("DOMContentLoaded", () => {
window.getCurrentWindow = getCurrentWindow;
window.openMenu = openMenu;
window.minimizeWindow = minimizeWindow;
window.unmaximizeWindow = unmaximizeWindow;
window.maxUnmaxWindow = maxUnmaxWindow;
window.isWindowMaximized = isWindowMaximized;
window.closeWindow = closeWindow;
});
```
We need a menu now. Create a simple menu in a new `menu.js` file. You can learn how to add your own options to the menu in the [official docs](https://www.electronjs.org/docs/api/menu). Electron has some easy-to-follow documentation with examples.
```js
// menu.js
const { app, Menu } = require("electron");
const isMac = process.platform === "darwin";
const template = [
{
label: "File",
submenu: [isMac ? { role: "close" } : { role: "quit" }],
},
];
const menu = Menu.buildFromTemplate(template);
Menu.setApplicationMenu(menu);
module.exports = {
menu,
};
```
We are done on the main process side. Now, let's build our custom menu bar. If you look at the menu in the image, you'll see that we have these things on our menu bar:
1. On the left side, a hamburger icon which is where the menu will open.
2. On the right side, we have a minimize button, a maximize-unmaximize button, and a close button.
I used the Font Awesome JS file from [fontawesome.com](https://fontawesome.com/) for icons. Add it to the HTML's `<head>` tag. I removed the `Content-Security-Policy` meta tags to allow the Font Awesome JS file to run for now. In production, make sure you properly control which code is allowed to run. Check [CSP](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP) for more details.
```html
<!-- index.html -->
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8" />
<!-- https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP -->
<title>My Awesome App</title>
<link rel="stylesheet" href="style.css" />
<script src="https://kit.fontawesome.com/1c9144b004.js" crossorigin="anonymous"></script>
</head>
<body>
<div id="menu-bar">
<div class="left" role="menu">
<button class="menubar-btn" id="menu-btn"><i class="fas fa-bars"></i></button>
<h5>My Awesome App</h5>
</div>
<div class="right">
<button class="menubar-btn" id="minimize-btn"><i class="fas fa-window-minimize"></i></button>
<button class="menubar-btn" id="max-unmax-btn"><i class="far fa-square"></i></button>
<button class="menubar-btn" id="close-btn"><i class="fas fa-times"></i></button>
</div>
</div>
<div class="container">
Hello there!
</div>
<!-- You can also require other files to run in this process -->
<script src="./renderer.js"></script>
</body>
</html>
```
```css
/* style.css */
body {
padding: 0;
margin: 0;
font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
color: white;
}
#menu-bar {
display: flex;
justify-content: space-between;
align-items: center;
height: 30px;
background: #34475a;
-webkit-app-region: drag;
}
#menu-bar > div {
height: 100%;
display: flex;
justify-content: space-between;
align-items: center;
}
.menubar-btn {
-webkit-app-region: no-drag;
}
.container {
height: calc(100vh - 30px);
background: #34475ab0;
color: white;
display: flex;
justify-content: center;
align-items: center;
font-size: 2em;
}
button {
height: 100%;
padding: 0 15px;
border: none;
background: transparent;
outline: none;
}
button:hover {
background: rgba(221, 221, 221, 0.2);
}
#close-btn:hover {
background: rgb(255, 0, 0);
}
button i {
color: white;
}
```
Now your window should look like this. Awesome. We are almost there.
<p align="center">
<img alt="Result image before clicking on menu" src="https://dev-to-uploads.s3.amazonaws.com/i/l82onsspfdhgblq9ylam.jpg" width="500" />
</p>
As you may have guessed, none of the buttons in the menu bar work yet, because we didn't add `onclick` event listeners for them. Let's do that. Remember we attached some utility functions to the `window` object in `preload.js`? We'll use them in the button click listeners.
```js
// renderer.js
window.addEventListener("DOMContentLoaded", () => {
const menuButton = document.getElementById("menu-btn");
const minimizeButton = document.getElementById("minimize-btn");
const maxUnmaxButton = document.getElementById("max-unmax-btn");
const closeButton = document.getElementById("close-btn");
menuButton.addEventListener("click", e => {
// Opens menu at (x,y) coordinates of mouse click on the hamburger icon.
window.openMenu(e.x, e.y);
});
minimizeButton.addEventListener("click", e => {
window.minimizeWindow();
});
maxUnmaxButton.addEventListener("click", e => {
const icon = maxUnmaxButton.querySelector("i.far");
window.maxUnmaxWindow();
// Change the middle maximize-unmaximize icons.
if (window.isWindowMaximized()) {
icon.classList.remove("fa-square");
icon.classList.add("fa-clone");
} else {
icon.classList.add("fa-square");
icon.classList.remove("fa-clone");
}
});
closeButton.addEventListener("click", e => {
window.closeWindow();
});
});
```
That's it. Restart your app with `npm run start` and your new menu bar buttons should work.
**NOTE:** Some parts of code are removed in the above scripts for brevity. You can get the full code at [https://github.com/saisandeepvaddi/electron-custom-menu-bar](https://github.com/saisandeepvaddi/electron-custom-menu-bar).
If you want to see a bigger Electron app with a lot more stuff, check the [https://github.com/saisandeepvaddi/ten-hands](https://github.com/saisandeepvaddi/ten-hands) app, which uses a similar menu bar (the custom menu bar is visible only on Windows for now though) but is built with React and TypeScript. I wrote this tutorial after using this menu bar there.
Thank you. 🙏
| saisandeepvaddi |
253,309 | Serverless and the Dreaded CORS | While venturing farther in the Serverless world, I started making a list of things I encountered on m... | 0 | 2020-02-02T06:42:39 | https://dev.to/divyamohan0209/serverless-and-the-dreaded-cors-7l0 | serverless, webdev, cors, design | While venturing farther in the Serverless world, I started making a list of things I encountered on my journey that I would like to delve deeper into; plainly because I figured learning just enough to make something work was never going to satisfy my voracious appetite. Towards cementing my understanding, I started making unorganized one-liner notes as reference material. This article (_and, hopefully, the ones that will follow_) is a logical extension of those notepad entries.
My very first entry on the list, was CORS - more commonly known as *Cross Origin Resource Sharing*.
1. So what EXACTLY is CORS?
Cross Origin Resource Sharing (__CORS__) enables a web-app to access resources OUTSIDE of its domain in a controlled manner and offers flexibility + functionality over the default same-origin security policy.
2. What is Same-Origin Security Policy?
A policy under which a web browser permits web page #1 to access data in web page #2 ONLY IF both web pages have the same origin, i.e. the same scheme, host, and port.
3. How is CORS relevant in Serverless?
Web API backends are a popular use case in serverless, and a web page might need to make calls to a backend API that lives on a different domain, i.e. of a different origin; therefore, the call needs to be CORS-friendly.
4. What is a common error indicating that CORS is NOT enabled?
No 'Access-Control-Allow-Origin' header is present on the requested resource
5. How is CORS implemented?
Two main ways:
- Headers
- Access-Control-Allow-Origin header: included in the response; its value echoes the origin the request came from, granting that origin access to the resource's contents.
- Pre-flight requests (*__GET, POST, HEAD methods excluded__*)
- The browser will send a preflight request to the resource using the OPTIONS method; in response, the resource you're requesting will return the methods that are safe to send and may optionally return the headers that are valid to send across.
6. Ways that CORS can be exploited? (i.e. Poor design practices)
- Allowing access to any domain in the origin field
- Allowing access to all sub-domains (_including currently non-existent sub-domains, that could potentially be malicious_)
- Origin header specification supporting the value **null**
- Trusting origins with vulnerabilities to XSS i.e. Cross Site Scripting
- Whitelisting trusted subdomains using HTTP
- Lower security standards for authentication on intranets/internal websites
7. Ways to prevent CORS-based attacks?
- Allowing trusted websites ONLY
- Avoid whitelisting null
- Avoid use of wildcards in intranets/internal websites
- Judiciously configure cross-domain requests by correctly specifying origin against Access-Control-Allow-Origin header
- Stringent configuration of server-side security policies.
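Two of the ideas above can be made concrete with a short plain-JavaScript sketch: the same-origin comparison a browser performs, and a strict exact-match whitelist a server might consult before echoing an `Access-Control-Allow-Origin` header. The origin names are hypothetical, and this is illustrative only, not a complete CORS implementation:

```javascript
// An origin is the scheme + host + port triple.
function sameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return ua.protocol === ub.protocol && ua.hostname === ub.hostname && ua.port === ub.port;
}

// Exact-match whitelist: no wildcards, no blanket subdomains, no "null".
const trustedOrigins = new Set([
  "https://app.example.com",
  "https://admin.example.com",
]);

// Echo the origin back only if it is explicitly trusted; otherwise send no header.
function allowOriginHeader(requestOrigin) {
  return trustedOrigins.has(requestOrigin) ? requestOrigin : null;
}
```

Note how both the literal value `"null"` and unknown subdomains fail the lookup, which closes off the exploit paths listed above.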
*__Credits: Major shout-out to Alex DeBrie for his amazing article on Serverless and CORS that helped me a lot while writing this post.__*
| divyamohan0209 |
253,380 | Old school developer vs new school developer | Every language has its quirks. PHP has been know to be a language where it is very easy to do things... | 0 | 2020-02-02T10:24:16 | https://dev.to/mafx/old-school-developer-vs-new-school-developer-3m8b | php, growth, learning, agency | Every language has its quirks. PHP has been known to be a language where it is very easy to do things badly. In reality it is not the language's fault but how we as developers treat it. Being a lead developer and seeing new employees (and candidates) in action has shown me that there are two types of PHP developers - new school and old school, each with their own issues.
*Old School Developer* is one that has been building functionality for a long time (obviously). He/she started doing this before the advent of frameworks, most likely even before Composer was a thing. This is a person whose CV you'll look at and consider for a senior-level position. But it is not always as great as it looks.
Some of the main risks of the old-school developer:
* old habits die hard - the longer you do something, the more you get used to it. It becomes hard to adapt to new techniques, new technologies and even new coding styles.
* if the person is not driven to keep improving, the code will be functional but more costly. Features will be built from scratch instead of using a library loaded from Composer. Templating will be built manually (and will likely include a risk of XSS vulnerability) instead of using an existing library like Twig / Blade.
* harder adaptation to new frameworks and languages.
* ego - after doing something for a long time, a developer gains confidence in their approach. Whilst not bad on its own, this can cause issues when a task is misunderstood or the approach/code does not follow the company's overall guidelines. They can also struggle with having respect for co-workers.
Some of the strengths of old-school:
* usually a strong understanding of tasks and how to build it.
* strong time estimation skills.
* understanding of vulnerabilities and how to avoid security issues.
* usually some dev-ops experience.
* good patterns for analysing and debugging. Doesn't go in circles but knows where to look for issues and how to find them.
*New School Developer* is one that has started web development in the "green days" of PHP. These developers will likely use one of the major frameworks/platforms (Laravel/Symfony/Wordpress/Magento etc.) and will be a powerful tool in your company's arsenal as long as they don't have to step out of the comfort of their knowledge.
Some of the main risks of the new-school developer:
* Knowledge is usually tied to one framework/platform. Likes to avoid, and often shows a negative attitude toward, other available tools.
* With frameworks like Laravel handling a large amount of sanitization (through templating & PDO), they are likely not to know how to deal with security risks, as these are always handled by the framework.
* Usually has less experience, which can lead to functionality not matching requirements perfectly or to the required time being underestimated.
Some of the strengths of new-school developer:
* Knows a framework very well and prefers usage of external libraries to build functionality quicker.
* Overall can be quicker, as they don't need to build functionality from scratch, know the tools well, and don't need to reconsider security on every step taken.
* Is more fond of the new techniques like Test Driven Development and package building.
* Can bring a new and fresh perspective into a company.
So with these two contrasting developer types, which is the right one? That largely depends on what kind of developer you need/want to be. From what I've seen in my work, there needs to be a mix of both. Old-school developers can achieve this by embracing new technologies, "keeping a finger on the pulse" to ensure they don't settle into their ways too much. New-school developers need to embrace a diversity of frameworks, and learn not only about what they are skilled in but also branch out into other frameworks/platforms.
*P.S. Where do I consider myself in this? I would probably call myself an old-school developer by heart. But to ensure I don't settle in too deep I spend a lot of time deep-diving into frameworks like Laravel and its underlying Symfony features. I build complex functionality like search filters and shopping baskets in vue.js. I try to ensure I keep learning and growing my skill-set.* | mafx |
253,758 | While Loop – JavaScript Series – Part 13 | Suppose we are going to print 1 to 10 on the console. But how we can do it ? We can do it easily co... | 4,305 | 2020-02-02T17:29:57 | https://blog.nerdjfpb.com/javascript-part-13/ | javascript, codenewbie, tutorial, beginners | Suppose we are going to print 1 to 10 on the console. But how we can do it ? We can do it easily
```
console.log(1)
console.log(2)
...
console.log(9)
console.log(10)
```
But this is not a good way to do it. Currently we can do it because it's only ten times. But suppose we need to print 1 - 100. How would we do that?
This is where we use loops. We're going to use the `while` loop today!
`while` is easy. Just remember what we learned in our last tutorial. The `while` loop syntax is:
```
while (condition) {
// code block to be executed
}
```
Let's write some real code now. If we want to print 1 - 100, then we are going to store the value in a variable, and we are going to start from 1. So `var number = 1`
Now we are going to start our `while` loop. The condition will be:
```
while (number < 101) {
//code block that we want to run
}
```

Now we are going to print the values, starting from the first one. We need to increase the value of `number` so the condition can eventually break; if the condition never breaks, the loop will be stuck forever. So we always make sure the loop condition can break.
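For reference, here is the loop written out in full, following the steps described above:

```javascript
var number = 1; // start from 1
while (number < 101) {
  console.log(number); // print the current value
  number = number + 1; // increase it, so the condition can eventually break
}
```

If we forgot `number = number + 1`, the condition `number < 101` would stay true forever and the loop would never stop.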

See the result in browser

Do you understand the while loop?
You can see the graphical version here
{% instagram B8En8fkgtQf %}
Source Codes - { Check commits }
{% github nerdjfpb/javaScript-Series %}
Originally it published on [nerdjfpbblog](https://blog.nerdjfpb.com/javascript-part-13/). You can connect with me in [twitter](https://twitter.com/nerdjfpb) or [linkedin](https://www.linkedin.com/in/nerdjfpb/)! | nerdjfpb |
253,393 | How would you want the rich text editor for your end users to be? | Should it be WYSIWYG or some kind of markup language, like Markdown? How do I make it powerful witho... | 0 | 2020-02-02T11:08:32 | https://dev.to/patarapolw/how-would-you-want-the-rich-text-editor-for-your-end-users-to-be-23m8 | javascript, typescript, discuss | Should it be WYSIWYG or some kind of markup language, like Markdown?
How do I make it powerful without compromising security?
I have just created [Showdown-Extra](https://github.com/patarapolw/showdown-extra) with [Showdown.js](https://github.com/showdownjs/showdown) and [DOMPurify](https://github.com/cure53/DOMPurify), but recently, I think it would be best if I just use a WYSIWYG editor, like [Quill.js](https://quilljs.com/).
{% github patarapolw/showdown-extra no-readme %}
I have also created a demo. <https://patarapolw.github.io/showdown-extra/>
BTW, my plan is not yet something big like [Discourse](https://www.discourse.org/), but a self-hosted commenting system. I should probably limit features.
{% github patarapolw/aloud no-readme %}
Some other options, I think, are basically markdown language with custom components, like MDX (React)... | patarapolw |
253,482 | Quick tip: NTLM / Windows pass-through authentication with Selenium and ChromeDriver | Challenge I was on a project for a web application that used Windows Active Directory... | 0 | 2020-02-02T15:42:27 | https://seankilleen.com/2020/02/ntlm-pass-through-authentication-with-chromedriver/ | chromedriver, selenium, testing, registry | ---
title: Quick tip: NTLM / Windows pass-through authentication with Selenium and ChromeDriver
published: true
date: 2020-02-02 14:58:00 UTC
tags: chromedriver,selenium,testing,registry
canonical_url: https://seankilleen.com/2020/02/ntlm-pass-through-authentication-with-chromedriver/
---
## Challenge
I was on a project for a web application that used Windows Active Directory authentication for internal users.
We had some automated acceptance tests using Selenium and ChromeDriver. However, these tests would always fail on our build agents, and we couldn’t figure out why. There were errors around authentication.
## Solution
After a hunch and some intense googling, we found that there are registry settings where you can enable Chrome to allow ChromeDriver to accept NTLM authentication negotiation by default.
The key is to add the following to your registry, to ensure you’re enabling the desired auth schemes for the desired domains.
An example `.reg` file that you can modify and use is below. After applying these settings to our build agents, the problems were resolved.
```
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\Software\Policies\Google\Chrome]
"AuthNegotiateDelegateWhitelist"="*.companydomain.org,*.companydomain.coop"
"AuthSchemes"="basic,digest,ntlm,negotiate"
"AuthServerWhitelist"="*.companydomain.org,*.companydomain.coop"
```
Happy testing! | seankilleen |
253,489 | crafting a workday | Originally posted on josephchekanoff.com. Running a development agency is really difficult. I (curre... | 0 | 2020-02-02T15:43:36 | https://josephchekanoff.com/blog/crafting-a-workday/ | productivity | Originally posted on [josephchekanoff.com](https://josephchekanoff.com/blog/crafting-a-workday/).
Running a development agency is really difficult. I (currently) don't have a workday "structure" that invokes creativity **and** growth.
[@dhh](http://twitter.com/dhh) shared a great workday schedule on Twitter:
{% twitter 1202243381490937859 %}
To incentivize creating an environment that promotes productivity _and_ focus on the things that matter, I'm putting it out here as a reminder to myself. Here's my version of "the perfect workday."
| Hour | Focus |
|:--|:--|
| 5am | Walk Dog + Run |
| 6am | Shower + Breakfast |
| 7am | Reflection + Standup |
| 8am-12pm | Build things |
| 12pm | Walk Dog + Lunch (Friday Yoga) |
| 1pm-3pm | Project Management |
| 3pm-5pm | Prospecting + Business Development |
| 5pm | Gym or Bike |
| 6pm | Dinner + Walk Dog |
| 7pm-8pm | Family time or Event |
| 9pm | Walk Dog + Read / TV |
| 10pm | Bed |
Things to note:
- Lots of time outside (running, biking, dog).
- AM focus (when I'm most productive/creative) on building.
- Frequent Breaks
- At least 7 hours of sleep.
- This is a goal.
> "It is your commitment to the _process_ that will determine your _progress_."
> _I'm (still) a fan of [James Clear](http://jamesclear.com)._
If [Chek Creative](http://chekcreative.com) is going to grow at the aggressive pace we have outlined as a team, we'll all need systems for getting things done. For me, that is contributing to our product development, planning for our team's success, and prospecting new business on a daily basis.
How do you design your workday? | jchekanoff |
253,500 | Workout your tasks with WorkManager — Advanced Topics | Workout your tasks with WorkManager — Advanced Topics “WorkManager is a library for managi... | 0 | 2020-06-27T13:57:44 | https://proandroiddev.com/workout-your-tasks-with-workmanager-advanced-topics-c469581c235b | androiddev, androidjetpack, workmanager, androidarchitecture | ---
title: Workout your tasks with WorkManager — Advanced Topics
published: true
date: 2019-09-15 17:09:04 UTC
tags: androiddev,androidjetpack,workmanager,androidarchitecture
canonical_url: https://proandroiddev.com/workout-your-tasks-with-workmanager-advanced-topics-c469581c235b
---
### Workout your tasks with WorkManager — Advanced Topics
_“WorkManager is a library for managing_ **_deferrable_** _and_ **_guaranteed_** _background work.”_

In my previous two posts about [WorkManager](https://developer.android.com/topic/libraries/architecture/workmanager) I covered topics like:
- Android memory model
- Android battery optimizations
- Current background processing solutions
- Where is WorkManager placed in the background work schema
- WorkManager components: Worker, WorkRequest and WorkManager
- Constraints
- Input/Output Data
- [Workout your tasks with WorkManager — Intro](https://dev.to/magdamiu/workout-your-tasks-with-workmanager-intro-pca-temp-slug-2242387)
- [Workout your tasks with WorkManager — Basics](https://dev.to/magdamiu/workout-your-tasks-with-workmanager-basics-466d-temp-slug-82140)
In this blog post I’ll cover some extra features of the WorkManager library like:
- how to identify a task
- how to get the status of a task
- _BackoffPolicy_
- how to combine the tasks and the graphs of tasks (chaining the work)
- how to merge the inputs and outputs
- what are the Threading options in WorkManager
<figcaption>WorkManager mind map diagram</figcaption>
#### 1️⃣ **Identify a task**
After a task (work) has been created, we will be interested in knowing its status; but to achieve this, we need some mechanism to identify the task (work). There are 3 main ways to identify the work:
1. **Unique id (UUID)**: the id associated to the WorkRequest is generated by the library and it is not developer-friendly
2. **Tag** : a task could contain many tags
3. **Unique name** : a task could have only one unique name
{% gist https://gist.github.com/magdamiu/89d0108d4c39c8533ebe1e201aa072e6 %}
{% gist https://gist.github.com/magdamiu/5bb59f4b57520534f5f697620468d5d0 %}
{% gist https://gist.github.com/magdamiu/698371e4037e09501ae66395d548ab5f %}
{% gist https://gist.github.com/magdamiu/83fc10c76c9b78eadee8d12f4831fcce %}
#### 2️⃣ Get the status of a task
<figcaption>WorkInfo contains info about a particular WorkRequest</figcaption>

By having the possibility to identify a task we are able to know more about its status by using LiveData or we also have the possibility to cancel it.
**WorkManager & LiveData = ❤️**
{% gist https://gist.github.com/magdamiu/4d1ea5454cf7f57173255b69efcc45d1 %}
**Cancel a task ** ❌
{% gist https://gist.github.com/magdamiu/4a28bba668a337a8b2505c3de628fcd3 %}
#### **3️⃣ WorkManager Policies**
❗ **Existing Work Policy enums**
- **KEEP** — keeps the existing unfinished WorkRequest. Enqueues it if one does not already exist.
- **REPLACE** — always replaces the WorkRequest. Cancels and deletes the old one, if it exists.
- **APPEND** — appends work to an existing chain or creates a new chain.
KEEP + REPLACE + APPEND = _ExistingWorkPolicy_
KEEP + REPLACE = _ExistingPeriodicWorkPolicy_
❗ **BackoffPolicy enum**
**EXPONENTIAL** — Used to indicate that WorkManager should increase the backoff time exponentially
**LINEAR** — Used to indicate that WorkManager should increase the backoff time linearly
For a **BackoffPolicy** delay of 15 seconds, the retry times will be as follows:
- For linear: **work start time + (15 \* run attempt count)**
- For exponential: **work start time + Math.scalb(15, run attempt count — 1)**
The **work start time** , is when the work was first executed (the 1st run attempt).
**Run attempt count** is how many times the WorkManager has tried to execute an specific Work.
Also note that the maximum delay will be capped at **WorkRequest.MAX\_BACKOFF\_MILLIS** and take into consideration that a retry will only happen if returning **WorkerResult.RETRY**
{% gist https://gist.github.com/magdamiu/bdf3165ba7e9230a9b462b7b6eea7c15 %}
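To make the two formulas above concrete, here is a small plain-Java sketch of the delay computation (an illustrative calculation only, not the library's actual implementation; an initial delay of 15 seconds is assumed, and times are in seconds relative to the first run attempt):

```java
// Illustrative WorkManager-style retry delay computation (not the library code).
public class BackoffDemo {

    // Linear policy: work start time + (initialDelay * run attempt count)
    static long linearDelay(long startTime, long initialDelaySeconds, int runAttemptCount) {
        return startTime + initialDelaySeconds * runAttemptCount;
    }

    // Exponential policy: work start time + Math.scalb(initialDelay, run attempt count - 1)
    // Math.scalb(x, n) computes x * 2^n
    static long exponentialDelay(long startTime, long initialDelaySeconds, int runAttemptCount) {
        return startTime + (long) Math.scalb(initialDelaySeconds, runAttemptCount - 1);
    }

    public static void main(String[] args) {
        long start = 0; // first run attempt happens at t = 0
        for (int attempt = 1; attempt <= 4; attempt++) {
            System.out.println("attempt " + attempt
                    + ": linear=" + linearDelay(start, 15, attempt) + "s"
                    + ", exponential=" + exponentialDelay(start, 15, attempt) + "s");
        }
    }
}
```

Running it shows how quickly the exponential policy outgrows the linear one after a few attempts (remember the real delay is capped at MAX\_BACKOFF\_MILLIS).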
#### 4️⃣ Chaining work

Sometimes it is necessary to run some tasks in parallel, to chain them one after another, or even to create groups of chained or parallel tasks. These features are also available in the WorkManager library.
The code for the previous scheme looks like this:
{% gist https://gist.github.com/magdamiu/edb91686b34541aa2702b6d5432b8c1a %}
<figcaption>Parallel execution for Task 1 and Task 2</figcaption>
{% gist https://gist.github.com/magdamiu/c8f837f40f8ad884a6a7a25be12f7fb9 %}
<figcaption>Chained tasks and parallel chains</figcaption>
{% gist https://gist.github.com/magdamiu/1c284f5371c3911f39840fe61a84e820 %}
#### 5️⃣ Merge inputs and outputs
As we already saw, when we chain work the outputs of one task become the inputs of the next, but they should be merged somehow in order to get the correct data. For the merge we have 2 main strategies in place, which are the provided implementations of the abstract class [_InputMerger_](https://developer.android.com/reference/androidx/work/InputMerger):
- [_ArrayCreatingInputMerger_](https://developer.android.com/reference/androidx/work/ArrayCreatingInputMerger.html)
- [_OverwritingInputMerger_](https://developer.android.com/reference/androidx/work/OverwritingInputMerger.html)



#### 6️⃣ Threading options in WorkManager

1. ListenableWorker
2. Worker
3. CoroutineWorker
4. RxWorker
5. Our own implementation :)

🧵 **_ListenableWorker_**
**Overview**
- A ListenableWorker only signals when the work should start and stop
- The start work signal is invoked on the main thread, so we go to a background thread of our choice manually
- A ListenableFuture is a lightweight interface: it is a Future that provides functionality for attaching listeners and propagating exceptions
**Stop work**
- It is always cancelled when the work is expected to stop. Use a CallbackToFutureAdapter to add a cancellation listener
🧵 **_Worker_**
**Overview**
- Worker.doWork() is called on a background thread, synchronously
- The background thread comes from the Executor specified in WorkManager’s Configuration, but it could also be customised
**Stop work**
- Worker.onStopped() is called. This method could be overridden or we could call Worker.isStopped() to checkpoint the code and free up resources when necessary
🧵 **_CoroutineWorker_**
**Overview**
- For Kotlin users, WorkManager provides first-class support for **coroutines**
- Instead of extending Worker, we should extend CoroutineWorker
- CoroutineWorker.doWork() is a suspending function
- The code runs on Dispatchers.Default, not on Executor (customisation by using CoroutineContext)
**Stop work**
- CoroutineWorkers handle stoppages automatically by cancelling the coroutine and propagating the cancellation signals
🧵 **_RxWorker_**
**Overview**
- For RxJava2 users, WorkManager provides interoperability
- Instead of extending Worker, we should extend RxWorker
- RxWorker.createWork() method returns a Single<Result> indicating the Result of the execution, and it is called on the main thread, but the return value is subscribed on a background thread by default. Override RxWorker.getBackgroundScheduler() to change the subscribing thread.
**Stop work**
- Done by default
#### 🎉WorkManager — Recap
- WorkManager is a wrapper for the existing background processing solutions
- Create one time or periodic work requests
- Identify our tasks by using ids, tags and unique names
- Add constraint, delay and retry policy
- Use input/output data and merge them
- Create chains of tasks
- Use the available threading options or create your own
That’s all folks! 🐰 Enjoy and feel free to leave a comment if something is not clear or if you have questions. And if you like it please 👏 and share!
Thank you for reading! 🙌🙏😍✌
* * *
---
title: Hello Kotlin
published: true
date: 2019-11-10 10:35:58 UTC
tags: learnkotlinfromgde,android,gde,learnkotlin
canonical_url: https://medium.com/@magdamiu/hello-kotlin-774b44cd9df0
---

At this moment, in the world, there are more than 5000 programming languages available. Now, the first question asked by us, the developers, is why do we need another programming language like Kotlin?
**Kotlin** is a general-purpose language that supports both the functional and object-oriented programming paradigms. It’s an open-source (_Apache 2.0_) project developed mainly by _JetBrains_ with the help of the community. Like Java, Kotlin is a statically typed language; however, in Kotlin we can often skip writing the types because the compiler infers them.
The Kotlin philosophy is to create a **modern** and **pragmatic** language for the industry, not an academic one. Pragmatic means getting things done, and in terms of programming represents the capability to easily transform an idea into software.
Kotlin is based on many programming languages like Java, Scala, C#, Groovy, Python and it tries to reuse what works better in those languages.
The name of the language represents the name of an island according to [Wikipedia](https://en.wikipedia.org/wiki/Kotlin_Island): “ **Kotlin** is a Russian island, located near the head of the Gulf of Finland, 32 kilometres (20 mi) west of Saint Petersburg in the Baltic Sea.”

Kotlin is a popular language: in the latest Stack Overflow developer [survey](https://insights.stackoverflow.com/survey/2019#most-loved-dreaded-and-wanted), it ranks as the **4th most loved programming language** and it is used to develop apps like Pinterest, Uber, Slack, Trello, etc.
The Kotlin project was started in **2010** and it took 6 years to validate all the design choices made. The community was involved in this project, because community is not only about validation but also about inspiration.
In **2017** Google officially announced at Google I/O that Kotlin is a first-class citizen in the Android world.
In **2018** JetBrains launched the 1.3 version and this year they launched version **1.3.50**.



#### **✅ Conventions**
- The same conventions as in Java
- Uppercase for types
- Lower camelCase for methods and properties
- Semicolons are optional
- Reverse notation for packages
- A file could contain multiple classes
- The folder names do not have to match the package name
#### 🛠️ **Development & Build tools**
- Kotlin/JVM : JVM 1.6+
- Kotlin/JavaScript: transpiles to JavaScript and offers readable generated JS code
- Kotlin/Native (C, Swift, Objective-C): compiles to native binaries using LLVM and does not require a Virtual Machine
- Editor or IDE: IntelliJ IDEA, Android Studio, NetBeans, Eclipse
- Build tools: On the JVM side, the main build tools include [Gradle](https://kotlinlang.org/docs/reference/using-gradle.html), [Maven](https://kotlinlang.org/docs/reference/using-maven.html), [Ant](https://kotlinlang.org/docs/reference/using-ant.html), and [Kobalt](http://beust.com/kobalt/home/index.html). There are also some build tools available that target client-side JavaScript.


#### 📌Main advantages of Kotlin
**Readability:**
It’s already known that developers spend more time reading existing code than writing new code. And we also know that a code is clean if it is easy to understand.
Imagine you’re a part of a team developing a big project, and you need to add a new feature or fix a bug. What are your first steps? You read a lot of code to find out what you have to do. This code might have been written recently by your colleagues, or by someone who no longer works on the project, or by you, but long ago. Only after understanding the surrounding code can you make the necessary modifications.
**Interoperability**
Regarding interoperability, our first concern probably is, “Can I use my existing libraries?” With Kotlin, the answer is, “Yes, absolutely!”
**Safety**
In general, when we speak about a programming language as being safe, we mean its design prevents some specific types of errors in a program. For example, in Kotlin, the designers of the language worked to eliminate the NPE (NullPointerException).
**Tooling**
Kotlin is a compiled language. This means before we can run Kotlin code, we need to compile it. And in terms of tools, we have a lot of possibilities here.
#### 🔖 Best practices
1. Agree on a set of **conventions** before you start to write code in Kotlin
2. **Don’t treat it as Java** with a different syntax
3. Use a linter (like **ktlint** )
4. **Don’t hide** too much info
5. Choose **readable** over short expressions
#### **📚 Resources to learn Kotlin**
- [Kotlinlang.org](https://kotlinlang.org/): The official Kotlin website. Includes everything from a guide to [basic syntax](https://kotlinlang.org/docs/reference/basic-syntax.html) to the [Kotlin standard library reference](https://kotlinlang.org/api/latest/jvm/stdlib/index.html).
- [Kotlin Koans Online](http://try.kotlinlang.org/): A collection of exercises in an online IDE to help you learn the Kotlin syntax.
- [Udacity course](https://www.udacity.com/course/kotlin-bootcamp-for-programmers--ud9011): “Kotlin Bootcamp for Programmers”. Essentials of the Kotlin programming language from Kotlin experts at Google. For programmers coming from Java or other object- oriented languages.
- [O’Reilly course](http://shop.oreilly.com/product/0636920052982.do): An 8-hour Kotlin course, “Introduction to Kotlin Programming,” by Hadi Hariri, a developer at JetBrains. Requires subscription; 10-day free trial available.
- [Treehouse course](https://teamtreehouse.com/library/kotlin-for-java-developers): “Kotlin for Java Developers” teaches Kotlin with an emphasis on Android. Requires subscription; 7-day free trial available.
- [@kotlin](https://twitter.com/kotlin): The official Kotlin Twitter account.
- [Kotlin Community](https://kotlinlang.org/community/): A list of offline events and groups from kotlinlang.org.
- [Kotlin Slack](http://slack.kotlinlang.org/): A Slack chat community for Kotlin users.
- [Talking Kotlin](http://talkingkotlin.com/): A bi-monthly podcast on Kotlin and more.

Enjoy and feel free to leave a comment if something is not clear or if you have questions. And if you like it please 👏 and share!
Thank you for reading! 🙌🙏😍✌
Follow me on Twitter: [Magda Miu](https://twitter.com/MagdaMiu)
---
title: Kafka Connect: How it let us down?
published: true
description:
tags: kafka,kafkaconnect,etl
---
About a year ago [@minutis](https://github.com/minutis) and I had a chance to try out [Kafka Connect](https://docs.confluent.io/3.0.0/connect/). We used it as the backbone of one of our [ETL](https://www.webopedia.com/TERM/E/ETL.html) processes but eventually we chose a different approach. In this post, I'll try to remember what problems we met and why Kafka Connect didn't fit our needs.
For those of you, who do not know what Kafka Connect is, it is a framework for connecting Apache Kafka to external systems such as databases, search indexes and file systems.
Kafka Connect allows both: writing data from an external source system to a Kafka topic and exporting data from a Kafka topic to an external system.
## Main Kafka Connect concepts
I'm not going in depth on each and every Kafka Connect component; there is plenty of information online on how Kafka Connect is designed and how it works. However, I'll try to describe them in short, to give you an idea of how Kafka Connect works, so that you have more context for what I'm going to write further in this post.
The main Kafka Connect components are:
- Connector - the logical implementation of an integration with an external system. There are two types of connectors: Source Connectors, responsible for reading from external systems into Kafka, and Sink Connectors, responsible for writing data from Kafka to external systems. Confluent Inc., the main contributor to Kafka Connect, has quite detailed [documentation](https://docs.confluent.io/current/connect/devguide.html#) on how to implement your own Source and Sink connectors.
- Task - a unit of work. When you configure a Connector for an external system, you can define the maximum number of tasks. This number defines how many processes should read from your external system (or write to it) in parallel, so the work done by a Connector is parallelized across its tasks.
- Worker - the component responsible for task execution. Kafka Connect can work in two modes: standalone and distributed. In standalone mode, you have one worker process responsible for executing your connector tasks, configured via a properties file. In distributed mode, you can start many worker processes and distribute them across your Kafka cluster. Also, in distributed mode, all connector configuration is done by using the [Kafka Connect Rest API](https://docs.confluent.io/current/connect/references/restapi.html).
- Transform - a transformation applied to a Kafka message after the connector ingests data, but before the data is written to the Kafka topic. There are many [Transforms implementations](https://docs.confluent.io/current/connect/transforms/index.html). It is also very easy to implement custom transforms.
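To make the distributed-mode configuration concrete, a connector is registered by POSTing a JSON payload to the `/connectors` endpoint of the REST API. The sketch below is a hypothetical config for the HDFS sink connector discussed later in this post (the connector name and values are made up; the exact set of available options depends on the connector version):

```json
{
  "name": "hdfs-sink",
  "config": {
    "connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
    "tasks.max": "4",
    "topics": "events",
    "hdfs.url": "hdfs://namenode:8020",
    "flush.size": "1000"
  }
}
```

The `tasks.max` value here is what controls the parallelism described above, and the same connector can later be removed with a DELETE request to `/connectors/hdfs-sink`.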
## Why it failed us?
The first time I found out about Kafka Connect, I was excited. It looked like a really nice, well-thought-out solution for managing different ETL pipelines. It has a Rest API for adding/removing connectors, starting/stopping and scaling tasks, and monitoring task statuses. Extensibility looked really promising too: you can easily add your own connector and transform implementations without forking the Kafka source code, and scale across as many worker processes as you need.
Without further investigations, we decided to try it out. What we experienced wasn't as nice, as we hoped :).
### Too early to use in production
At the time we were experimenting with Kafka Connect, it wasn't stable enough. It had some bugs, the quality of the open-sourced connectors was quite poor, and there were a few architectural flaws which were deal-breakers for us.
#### Bugs
In our use case, we wanted to write data to [HDFS](https://www.ibm.com/analytics/hadoop/hdfs). For that we decided to use open-sourced [kafka-connect-hdfs](https://github.com/confluentinc/kafka-connect-hdfs) connector implementation. At the time we used it, it was pretty much unusable:
- We had corrupted files after Kafka rebalance [#268(open)](https://github.com/confluentinc/kafka-connect-hdfs/issues/268).
- We had limited datatypes available [#49(open)](https://github.com/confluentinc/kafka-connect-hdfs/issues/49).
- We were not able to partition data by multiple fields (we had implemented our own solution for this one) [#commons-53(fixed)](https://github.com/confluentinc/kafka-connect-storage-common/issues/53).
- We had tasks failing to resume after a pause [#53(fixed)](https://github.com/confluentinc/kafka-connect-hdfs/issues/53).
After the experience with the open source connectors we saw that they were not only buggy but also lacked the features we needed. We decided to use our own connector implementations and didn't stop believing in Kafka Connect. We encountered some Kafka bugs too, but most of them were fixed fast enough ([KAFKA-6252](https://issues.apache.org/jira/browse/KAFKA-6252)).
Some minor Kafka Connect bugs are unfixed even today. One of them, worth mentioning, is [KAFKA-4107](https://issues.apache.org/jira/browse/KAFKA-4107). In the process of testing, we had some cases when we needed to delete and recreate some of the connectors. Kafka Connect provides a REST API endpoint for connector deletion; however, when you delete a connector through this API, the old task offsets remain undeleted, so you cannot create a connector with the same name. We found a workaround for this problem: we added connector versioning (appended version numbers to connector names) to avoid conflicts with offsets from deleted tasks.
#### Rebalance all the time
This was a Kafka Connect design flaw. Kafka Connect rebalanced *all* of the tasks on its cluster every time you changed the task set (added or deleted a task or a connector, etc.). That meant all running tasks had to be stopped and restarted. The time needed to rebalance all your tasks grows significantly each time you add a new connector and becomes unacceptable when it comes to ~100 tasks.
This was the biggest roadblock for us, since we had a dynamic environment where the task set was changing rapidly, so rebalancing was happening all the time.
Well, today this **is not a problem anymore**. With Kafka 2.3.0, which came out not so long ago, this flaw was fixed. You can read more on that [here](https://cwiki.apache.org/confluence/display/KAFKA/KIP-415%3A+Incremental+Cooperative+Rebalancing+in+Kafka+Connect).
## Conclusion
We dropped the idea of using Kafka Connect (sometime in 2018). We dropped it because it wasn't production ready yet and it didn't fully cover our use cases. Today, many of the problems we met are fixed (some of them are not, but you can find workarounds). I'm still kind of skeptical about Kafka Connect, however trying and experimenting with it was really fun. I'd say you should consider Kafka Connect only if you are willing to invest time in implementing your own connectors.
---
title: The benefits of the "drink water and pee" routine
published: true
date: 2020-02-02 19:13:17 UTC
tags: productivity, health, career, habit
canonical_url: https://dev.to/arthurmde/the-benefits-of-the-drink-water-and-pee-routine-4m7m
---
[This text is also available in Portuguese.](https://arthurmde.me/pt-br/hacking/productivity/2020/02/02/habit-drink-water.html)
The beginning of the year is a great time to start new habits that can help you to achieve your personal and professional goals. That's why I decided to share one of the most important habits I developed in the last years which considerably impacted my **health** and **productivity** as a Software Engineer. Very straightforward: **Drink Water!**
Although everybody knows that drinking water is healthy, most people don't drink enough, especially programmers. I bet many of us drink more coffee than water. I do not intend to convince you by listing here the obvious and well-publicized benefits of drinking more water. Instead, I want to talk about a valuable consequence of this habit: going to the restroom frequently to pee.
**What are you thinking right now?**
1. "That's weird!"
2. "I hate going to the restroom too often"
3. "That's why I don't drink water"
4. "That's not water, it's beer"
5. "Ok, tell me more..."
I usually drink between 3.5 and 4 liters of water during my working hours. Consequently, I go to the restroom 5~7 times during this period. You may think that this is a major productivity killer. In fact, it is the opposite for several reasons, especially if you know how to use it to your advantage. To this end, all you need is a bottle of water with the proper size to implement the following cycle:
1. Fill the bottle with water
2. Drink the whole bottle gradually while working for a period of time
3. Go to the restroom
4. Repeat
I call this the **"drink water and pee"** routine.
Ideally, whenever you get up to go to the restroom, you will refill your bottle (and vice versa). If you use a bottle that is too big or too small, you will get up more often than necessary. That's why you will need to run some tests to figure out the bottle size that works best for you to perform this cycle (between 500 ml and 1000 ml). Here is mine:
<img src="https://arthurmde.me/assets/img/others/water-bottle.jpeg" alt="Bottle of the Brugse Zot beer filled with water" width="500">
Somehow, that beer bottle triggers something in my brain that makes me want to drink its contents, helping me achieve my personal goal of 4 liters per day. By the way, what a great beer =)
**How does this routine can improve my productivity?**
Good performance is about the ability to focus and get the work done. However, everything around us seems to compete for our attention: social media, chats, emails, and even your coworkers. They are distractions that surround us, and **distractions are killers of productivity**. You can't go 8 hours without any distractions, but you can manage your distractions in such a way that they don't affect your work so much.
In this sense, [Pomodoro](https://medium.com/swlh/how-to-work-40-hours-in-16-7-d9038681e652) is a highly used technique to improve your focus and performance to accomplish tasks. It is based on dividing the workflow into intense concentration blocks separated by distraction periods. If you haven't heard about the Pomodoro technique yet, you can easily find [tons of articles](https://dev.to/search?q=pomodoro%20technique) explaining how to use it, its benefits, challenges, drawbacks, possible adaptations, and so on.
At the end of the day, the best benefit of **Pomodoro** is providing a proper way to manage our time aimed to accomplish our tasks by proposing the periodization of focused work with intervals to handle distractions.
## You don't need a Pomodoro. You need to drink water!
Well, my point is that you can use the **"drink water and pee" routine as your Pomodoro**. Instead of using a timer to mark your focus periods, let your physiological system tell you when you need to take a break. By doing this, you will be improving your health and your productivity at the same time.
In my case, by applying the "drink water and pee" routine I have about 1 hour and 15 minutes of focus for every 10 minutes of distractions. The distraction phase includes going to the restroom to pee, refilling the bottle of water, checking my smartphone, or performing any other activity that is not directly related to the development of the software that I’m working on, such as **taking a walk** around the office.
To enable this routine, you need to manage your environment so that you can really be productive during the focus period by turning-off social media notifications and asking your colleagues to do not interrupt you during this period. If they need to talk to you, ask them to leave a message or an email that you can handle at the beginning of the next cycle.
Moreover, different from Pomodoro, the “drink water and pee” forces you to **stand up from your chair and take a walk**. That’s great for, again, improving your productivity and health.
Developing software requires you to constantly tackle a set of different problems, from designing the system to fixing bugs, which considerably involves creative skills and processes. According to [science](https://www.ncbi.nlm.nih.gov/pubmed/24749966), taking walks during your work period can significantly help with solving problems; especially when you are [stuck on a hard problem](https://news.stanford.edu/2014/04/24/walking-vs-sitting-042414/), taking a walk helps you to look at it from different perspectives.
Also, walking during your work time can positively impact your health since it prevents you from sitting for long periods of time. An impacting text titled ["Sitting is the new smoking"](https://www.startstanding.org/sitting-new-smoking/#extended) explains the main threats to your health caused by spending too much time in your chair, including cancers, diabetes, and cardiovascular disease. While sitting kills, moving heals. The text also exposes that even if you work out several times per week, it cannot overcome the damage done by extended periods of sitting. I strongly recommend the reading of this text.
**By the way, do you know any developer who has to treat back problems constantly?**
Back pain is a major productivity killer since it destroys your focus and forces you to go to the doctor more often than you would like =)
## Conclusion
Obviously, the "drink water and pee" routine works for me on a regular working day and may vary from day to day. That's totally fine. When this routine becomes a habit, the cycle becomes so natural that you run it without even thinking about it, while still getting its benefits.
Adopting new healthy habits such as drinking water can be hard, and the "improving your health" reason may not be enough to convince you to adopt them (otherwise, you would have already adopted them). The same can be said for productivity techniques, such as Pomodoro. However, associating healthy habits with practical productivity gains can be effective in convincing your brain to develop new habits. At least it works for me =)
<hr>
<span>**Let's get moving on!**</span>
---
title: A simple implementation of Circuit Breaker Pattern in Spring Boot
published: true
date: 2020-02-02 20:21:48 UTC
tags: springboot, hystrix, java
canonical_url: https://dev.to/mannik01/a-simple-implementation-of-circuit-breaker-pattern-in-spring-boot-140c
---
This article assumes that you already know the basics of Spring Boot :)
Circuit Breaker pattern is a way of preventing failures in a software system caused due to failed remote calls to another service. In this article, we are going to see how to implement **Spring Cloud Netflix Hystrix** library in Spring Boot to demonstrate this pattern.
First of all, we need to setup a service which is going to be called by a client. We can go to **start.spring.io** to bootstrap a Spring Boot project with Spring Web as a dependency.
Now, let's download the project and create an **InfoController** class that will have the endpoint which our client service will call.

After this, let's create an **InfoService** class, which has a method that returns a string denoting it is a message from the server.

At this point, we are basically done with the server implementation. If we start this service and hit http://localhost:8080/info in the browser, we see "Hi, this message is from a Spring Boot service." as the response.
Okay, now we need to create a client service to consume the endpoint response from the above service. Same as above, let's bootstrap another Spring Boot service, but with an additional dependency, **Spring Cloud Netflix Hystrix**.
Let's create an **InfoController** class here as well, like the following:

Also, let's create an **InfoService** class, like below.

As we can see above, we have added a call to the server in the "**getServerInfo()**" method. This will return the "Hi, this message is from a Spring Boot service." response if the service is working fine. However, in case the service does not respond, due to a variety of reasons, we have defined a fallback method, "**getFallBackInfo()**". This method is referenced in the "**getServerInfo()**" method with the help of the "**@HystrixCommand**" annotation. This annotation is the key that will get triggered if our server application goes down.
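Stripped of Spring and Hystrix, the call-with-fallback behaviour that the annotation wires up can be sketched in plain Java. This is only an illustration of the idea (all names here are made up); a real circuit breaker like Hystrix also counts failures and short-circuits calls while the circuit is open, which this sketch does not show:

```java
import java.util.function.Supplier;

// Minimal sketch of the fallback idea behind @HystrixCommand:
// try the remote call, and on failure return the fallback result instead.
public class FallbackDemo {

    static String callWithFallback(Supplier<String> remoteCall, Supplier<String> fallback) {
        try {
            return remoteCall.get();
        } catch (RuntimeException e) {
            // The remote service failed (e.g. connection refused): use the fallback.
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        Supplier<String> fallback = () -> "Hi, this is the fallback info from the server.";

        // Healthy server: the real response comes through.
        System.out.println(callWithFallback(
                () -> "Hi, this message is from a Spring Boot service.", fallback));

        // Server down: the call throws, so the fallback answers.
        System.out.println(callWithFallback(
                () -> { throw new RuntimeException("connection refused"); }, fallback));
    }
}
```

The value of Hystrix over this naive version is exactly what the try/catch cannot do: tracking error rates and stopping calls to an unhealthy service for a while instead of hammering it.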
Another thing we need to do is change the default port for this client application in the **application.properties** file, like below.

The reason we are doing this is that the server application we built first is already running on port 8080.
Another important thing is to add "**@EnableHystrix**" in the main class of the client application, which will, as the name suggests, enable Hystrix fallback in our application.

To see the normal flow of our client-server system, let's start both of the applications and hit the http://localhost:8081/info endpoint in the browser. In this scenario, we will see "Hi, this message is from a Spring Boot service." as the response. Now, to see the Circuit Breaker in action, we need to stop the server application. Once we do this and call the above endpoint from the client, we will see "Hi, this is the fallback info from the server." as the response, which is defined in our fallback method in the client application.
That is all! This is supposed to be a naive application of the Hystrix library from Netflix, so I have kept the project structure simple. I hope this article is helpful!