id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
110,846 | Is Web-development the Best Career Choice for 2019? | Before becoming a web developer, it is extremely important to know which type of web developer you would want to become. Read here how and why web development is the best career choice for 2019. | 0 | 2019-05-21T06:16:56 | https://dev.to/sadiakhan3/is-web-development-the-best-career-choice-for-2019-3dm2 | webdev, career | ---
title: Is Web-development the Best Career Choice for 2019?
published: true
description: Before becoming a web developer, it is extremely important to know which type of web developer you would want to become. Read here how and why web development is the best career choice for 2019.
tags: #webdev #career
---
Whether you are an experienced web developer or wish to become one, it is important to understand that the IT industry is getting more and more competitive due to continuous advancements and developments. With a rapidly increasing number of IT professionals entering the field, many young aspirants face the dreaded question: is becoming a web developer a sound career choice or not? The selection process for a programmer job has itself become a mammoth task, especially after the drastic increase in the tasks involved in coding interviews.
Before becoming a web developer, it is extremely important to make an informed decision about which type of web developer you want to become. You can become a full stack, front end, or back end developer.
<h1>Major Responsibilities Of Web-Developers</h1>
Usually, full stack developers are experienced professionals who possess all the skills and expertise required to operate a whole website independently; they are capable of working on both front end and back end development. Therefore, in order to become a skilled full stack developer, it is extremely important that you learn the basics of both front end and back end operations to get your skills validated.
Front-end web developers, by contrast, work on the overall appearance and visual aesthetics of a site, which is usually the end product. Basically, front-end web developers deal with the operations concerned most directly with the final product which clients get to view on their screens and engage with.
Lastly, back-end developers are responsible for managing the data servers for a website.
<h1>Prerequisite Skills For Each Type Of Web-Developer</h1>
Usually, front-end web developers are expected to have strong skills in programming languages. Therefore, if you are planning to become a front-end web developer, you need to learn languages such as JavaScript, HTML and CSS. Back-end web development, meanwhile, has been observed to be one of the IT skills in greatest demand and most desired by IT firms. To become a back-end web developer, you are required to cover numerous skills and areas of programming. Presently, back-end programming is done in a number of languages such as JavaScript, Ruby, Java, PHP and Python.
Also, having command of technologies such as Kubernetes, NginX, Varnish and Docker is considered an added bonus for back end developers. Therefore, to stand out and make your mark in the industry as an experienced programmer, it is extremely important that you have command of multiple programming languages as well as technologies. To achieve this goal, experts recommend that professionals opt for an <a href="https://edu.wagner.quickstart.com/">IT bootcamp</a> if they wish to enhance and improve their existing skills.
<h1>A Smart Career Move</h1>
If experts are to be believed, a career in web development is presently a smart choice, as it is a very secure and safe option for IT professionals. Many areas of web development are still in the developing stage and require trained professionals to help them grow.
Instead of working as a web developer for a particular organization, you also have the option of becoming a freelance web developer, provided that you have the necessary skills and knowledge to find new clients. If you are a quick learner and have a knack for staying up to date with <a href="https://www.trendmut.com/top-web-development-trends-2019/">current trends</a>, then freelance web development might be a good option for you. To develop and improve your skills in web development, there are a number of online tutorials that can give you a basic idea of web programming.
If you work as a web developer and want to improve and update your skills in lesser time for getting better job offers, then you can choose <a href="https://edu.wagner.quickstart.com/bootcamps/web-development/">the web developer bootcamp</a> that is specially designed for experienced web developers to update their knowledge in web development. You can select any course according to your needs and budget.
Pursuing a career in the field of web development is not as simple as it seems, as it requires constant effort, hard work and undivided attention in learning different programming languages and staying up to date with recent changes and developments in web development.
<h1>Myths And False Notions</h1>
As a result of recent developments in technologies such as ML, data science, cloud computing and IoT, there is a misconception that these technologies have made web development obsolete. In reality, none of these new technologies can take the place of web development. Rather, they are interlinked with web development skills, complementing them in various uses.
The concept of web development is broad and extremely versatile. It includes the development of various games, web applications and, most importantly, mobile applications. Pursuing a career in the field of web development is an excellent option for 2019 as the field is constantly growing, and the demand for skilled web developers does not seem likely to decline any time soon. Currently, skilled web developers enjoy great demand in the job market, and you can easily make a six-figure income working as a web developer.
It is safe to say that web development is an ideal choice for individuals who have just stepped into the IT industry. There is no doubt that web developers play a key role in the current IT industry, as borne out by the great demand for skilled web developers in each and every field of IT.
<h1>Final Word</h1>
Therefore, if you want to step into the field of web development, then now is the right time to do so. If you are planning to become a web developer, it is advisable to become familiar with different web developer roles. Each position has a different pay scale and requires varied skillsets. Web development is a growing field with a vast array of job opportunities.
However, it is still advisable that you first evaluate your strengths and skills before opting for a career as a front end, back end or full stack developer. Usually, the majority of individuals who wish to become a web developer prefer working as a back end developer first, and then as a front end developer. After gaining several years of experience in back end and front end work, it becomes quite easy to move into full stack development, as it requires knowledge and expertise of both front end and back end web development. | sadiakhan3 |
111,017 | How to deploy a Symfony 4 project in a shared host. | Hi, anybody knows if i can deploy a Symfony 4 project in a shared free host (http... | 0 | 2019-05-21T14:13:38 | https://dev.to/kevinhch/how-to-deploy-a-symfony-4-project-in-a-shared-host-197k | help, apache, symfony4, sharedhost | ---
title: How to deploy a Symfony 4 project in a shared host.
published: true
tags: help, apache, symfony4, sharedhost
---
Hi, does anybody know if I can deploy a Symfony 4 project on a free shared host (https://www.000webhost.com/)?
I saw and read a lot of posts about projects in Symfony 2 or 3, but I'm trying to deploy my personal project in Symfony 4 and I don't know how to configure the .htaccess or ENV_VARS. I'm starting to think it's impossible, because maybe this type of host can't support Symfony 4.
Thanks. | kevinhch |
111,035 | SSL Certificates in Development | Recently I've been working a lot with multiple multi-app servers and their SSL certificates. Once y... | 0 | 2019-05-21T17:49:10 | https://jimbuck.io/ssl-certs-in-dev | security, powershell | ---
title: SSL Certificates in Development
published: true
tags: Security,PowerShell
canonical_url: https://jimbuck.io/ssl-certs-in-dev
---

Recently I've been working a lot with multiple multi-app servers and their SSL certificates. Once you get more than two or three environments up and running, it can get a little tricky finding and wrangling certificates from each instance. Thankfully I've found a few super tips that have been big time savers when it comes to solving issues with certs when you're working in less than ideal system setups.
### Tip 1: Find and Destroy
I find there are certain times when the dev certs on my PC are just out of hand. The first step to cleaning them up is removing the old ones. The [certificates snap-in](https://docs.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-view-certificates-with-the-mmc-snap-in) for the Microsoft Management Console is a decent tool if you want to look at a specific cert. Simply select the folder on the left and you can easily find the certs you need. But when you're looking for one or more possible certs, try the following PowerShell snippet:
```powershell
ls Cert:\ -Recurse | where { $_.Issuer -like "*Jim Buck*" }
```
This snippet will recursively search all installed certificates but only display those that match the criteria (in this case, the `Issuer` must contain `Jim Buck`). You can filter on any of the following fields:
1. `Subject` - text (might be generic or the same as issuer)
2. `Issuer` - text (typically quite reliable to filter on)
3. `Thumbprint` - hash (unique per certificate)
4. `FriendlyName` - string (sometimes empty)
5. `NotBefore` - Datetime
6. `NotAfter` - Datetime
7. `Extensions` - List of `Oid` objects (they have `FriendlyName` and `Value` properties).
Early in the development process you might be adding quite a few certs to your store. Run the same command but pipe it to the `rm` command and you can easily remove all of the pesky old certificates:
```powershell
ls Cert:\ -Recurse | where { $_.Issuer -like "*Jim Buck*" } | rm
```
I don't recommend clearing out certs too often, though. Ideally you only have to do it before installing the "good" certs. Once you have a nice clean cert store, adding new certs that are known to be good is just fine.
### Tip 2: Trust the CA Root certificate
Our application relies on an in-house data service. During the install of this service, it generates a CA Root certificate and an end-user certificate for the server's [FQDN](https://en.wikipedia.org/wiki/Fully_qualified_domain_name) (signed by the generated CA Root). We can't modify the install logic of the service, so we have to make do with the certs it produces. The best approach is to simply download the CA Root cert from each instance and install it in the Trusted Root store of our Local Machine.


<figcaption>I typically select "Local Machine" (just in case) and manually select "Trusted Root Certification Authorities".</figcaption>
Don't be fooled, certs of the same name (but different thumbprint) can be installed side-by-side. Just remember to restart your browser/client apps so they pick up the new certificates.
### Tip 3: Use a shared CA Root certificate
By far the best approach is to simply make your certificate generation use a shared CA Root. This would allow a project or even a whole department to use one common root cert that signs all server-specific development/test certs. No more downloading of certs to trust, no more hacks to ignore cert errors. No more rummaging through each environment trying to update all references of which cert to use. Just one shared cert that keeps developers, testers, managers, and product owners safe and secure.
I am currently finalizing a script to help create (dev-only) CA Root and SSL certs. Once I can test it a bit more and get the usage as simple as possible I will write a special article all about it.
| jimbuck |
1,429,589 | How to Create a Yearly Count Down Timer | Introduction Countdown timers have been around since the early twentieth The development... | 0 | 2023-04-07T18:33:04 | https://dev.to/fortune42/how-to-create-a-yearly-count-down-timer-1d60 | ## Introduction
Countdown timers have been around since the early twentieth century, when they were first used in aviation by pilots to time flights and monitor fuel consumption.
The development of digital electronics in the 1950s resulted in electronic countdown timers. These have been used in a wide range of applications, including scientific experiments, space missions, and sporting events.
Digital timers became more widely available in the 1970s and were used in consumer products such as microwave ovens and alarm clocks.
In recent years, countdown timers have become increasingly popular in the digital realm, appearing in online games, apps, and websites, and they are now widely used in applications ranging from cooking and fitness to business and education. In this article we will create a simple yearly countdown timer with HTML, CSS, and JavaScript.
## What is a Countdown Timer?
A countdown timer is a tool that displays the amount of time remaining until a specific event or deadline. It is commonly used to keep track of time and avoid missing a significant event or deadline. Countdown timers are found in various contexts, such as sporting events, concerts, and online meetings, and are viewable on smartphones, computers, and dedicated digital clocks. Countdown timers can help people stay on track and manage their time more effectively.
## Prerequisites
Before getting started with this tutorial, you need to have basic knowledge of the following:
- Html
- CSS
- Javascript
## Developing the Frontend
We will create our frontend with just HTML and CSS. Copy and paste the following:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<link rel="stylesheet" href="style.css" />
<title>Document</title>
</head>
<body>
<h1>New Year Eve</h1>
<div class="container">
<div class="count-hours">
<p class="big-text" id="days">0</p>
<span>days</span>
</div>
<div class="count-hours">
<p class="big-text" id="hours">0</p>
<span>hours</span>
</div>
<div class="count-hours">
<p class="big-text" id="mins">0</p>
<span>mins</span>
</div>
<div class="count-hours">
<p class="big-text" id="seconds">0</p>
<span>seconds</span>
</div>
</div>
<script src="index.js"></script>
</body>
</html>
```
Now let's add some style to our HTML:
```css
*{
box-sizing: border-box;
}
body{
background-size: cover;
background-position: center, center;
min-height: 100vh;
font-family: "poppins" sans-serif;
margin: 0;
display: flex;
flex-direction: column;
background-image: url(time.jpg);
align-items: center;
justify-content: center;
}
.container{
display: flex;
}
h1{
color:white;
font-size: 5rem;
padding-top:20px ;
}
.count-hours{
}
span{
color: white;
padding-left:40px;
}
.big-text{
color: azure;
font-size: 70px;
padding: 10px;
margin: 0 2rem;
}
```
Now that we are done with our frontend design, let's head straight to adding functionality to our work.
```javascript
const day = document.querySelector("#days");
const hour = document.querySelector("#hours");
const min = document.querySelector("#mins");
const sec = document.querySelector("#seconds");
const newYear = "1 jan 2024";
function countDown() {
const newYearDate = new Date(newYear);
const currentDate = new Date();
const totalSeconds = (newYearDate - currentDate) / 1000;
const days = Math.floor(totalSeconds / 3600 / 24);
const hours = Math.floor(totalSeconds / 3600) % 24;
const mins = Math.floor(totalSeconds / 60) % 60;
const seconds = Math.floor(totalSeconds % 60);
day.innerHTML = days;
hour.innerHTML = hours;
min.innerHTML = mins;
sec.innerHTML = seconds;
// console.log(days, hours, mins, seconds);
}
countDown();
setInterval(countDown, 1000);
```
From the code above: first, we grab each HTML element we need by its id attribute. Then we declare a variable holding the date of the anticipated new year.
The countDown() function performs all the work: it computes the total number of seconds remaining until the new year (1 Jan 2024), derives the days, hours, mins, and seconds from that total, and displays the data in our frontend using the innerHTML property.
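The date arithmetic in countDown() can be checked on its own with a fixed pair of dates. The values below are arbitrary examples chosen for this sketch, not tied to the article's deadline:

```javascript
// Verify the floor-and-modulo breakdown used by countDown()
// with a fixed pair of dates (arbitrary example values).
const target = new Date('2024-01-01T00:00:00Z');
const now = new Date('2023-12-30T21:58:35Z');

const totalSeconds = (target - now) / 1000; // 93685 seconds apart
const days = Math.floor(totalSeconds / 3600 / 24);
const hours = Math.floor(totalSeconds / 3600) % 24;
const mins = Math.floor(totalSeconds / 60) % 60;
const seconds = Math.floor(totalSeconds % 60);

console.log(days, hours, mins, seconds); // 1 2 1 25
```

Since the real code runs these same steps once a second via setInterval, any pair of dates decomposes the same way.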
## Conclusion
We have been able to create a simple countdown timer using HTML, CSS, and JavaScript. If you follow this tutorial, your project should work fine without any errors.
Thanks for reading. Please like and add your comments in the comment section. | fortune42 | |
111,196 | Why I Love useReducer | I didn't realize until recently how much I loved the React Hook useReducer. It's one of those advance... | 0 | 2019-05-22T02:17:18 | https://dev.to/hswolff/why-i-love-usereducer-4mnk | react, hooks, reducer, redux | ---
title: Why I Love useReducer
published: true
description:
tags:
- react
- hooks
- reducer
- redux
cover_image: https://thepracticaldev.s3.amazonaws.com/i/agv1wz8romklodvwmqul.jpg
---
I didn't realize until recently how much I loved the [React Hook useReducer](https://reactjs.org/docs/hooks-reference.html#usereducer). It's one of those advanced hooks, and while I read the documentation about it and already have a good amount of experience with [Redux](https://redux.js.org/), it took a little while for me to fully understand just how powerful `useReducer` can make your components.
# Why do I love useReducer?
The simple answer is that it lets you separate the _What_ from the _How_. To expand upon that, it may be that _What_ a user wants to do is `login`.
With `useState` when a user wants to `login` I create function that handles a lot of the _How_. _How_ my component has to behave when a user wants to `login`:
- Sets `loading` to true
- Clears out old `error` state
- Disables the button.
With `useReducer` all my component has to do is think about _What_ the user wants. Which is:
- `dispatch('login')`
After that all the _How_ is handled inside the `loginReducer` function.
Furthermore, any future _How_ questions become completely centralized inside of that one `loginReducer` function. My component can just keep on thinking about the _What_.
It's a subtle distinction but an extremely powerful one.
To further show the point [you can check out the full source code here](https://github.com/hswolff/youtube/tree/master/videos/why-i-love-usereducer) or see these inline examples.
I'm going to ignore showing the UI, if you want to see that you can check out the repo. For now I just want to focus on the data we're storing and updating.
# Using useState
Here I have 5 calls to useState to manage all the distinct state transitions.
In my `onSubmit` call I have to carefully orchestrate all the state changes that I want.
They're tightly coupled to the onSubmit handler and awkward to extract.
```js
function LoginUseState() {
const [username, setUsername] = useState('');
const [password, setPassword] = useState('');
const [isLoading, showLoader] = useState(false);
const [error, setError] = useState('');
const [isLoggedIn, setIsLoggedIn] = useState(false);
const onSubmit = async e => {
e.preventDefault();
setError('');
showLoader(true);
try {
await login({ username, password });
setIsLoggedIn(true);
} catch (error) {
setError('Incorrect username or password!');
showLoader(false);
setUsername('');
setPassword('');
}
};
return; // remaining UI code here
}
```
# Using useReducer
While it may be overall longer, I would argue that it's much easier to read and track what's going on.
If you jump straight to the `onSubmit` function you can now clearly see the intent of the user. There's only 3 behaviors that can happen, 'login', 'success', and 'error'. What that means is not a concern of my component; it's all handled in the `loginReducer`.
Even better, it becomes easier for me to make wide-ranging changes to state changes because all the state changes are centrally located.
And even more exciting is that all state changes become easy to share by default.
If I want to show my error state from elsewhere in the component I can easily re-use the same `dispatch({ type: 'error' })` and I'm good to go.
```js
function LoginUseReducer() {
const [state, dispatch] = useReducer(loginReducer, initialState);
const { username, password, isLoading, error, isLoggedIn } = state;
const onSubmit = async e => {
e.preventDefault();
dispatch({ type: 'login' });
try {
await login({ username, password });
dispatch({ type: 'success' });
} catch (error) {
dispatch({ type: 'error' });
}
};
return; // UI here
}
function loginReducer(state, action) {
switch (action.type) {
case 'field': {
return {
...state,
[action.fieldName]: action.payload,
};
}
case 'login': {
return {
...state,
error: '',
isLoading: true,
};
}
case 'success': {
return {
...state,
isLoggedIn: true,
isLoading: false,
};
}
case 'error': {
return {
...state,
error: 'Incorrect username or password!',
isLoggedIn: false,
isLoading: false,
username: '',
password: '',
};
}
case 'logOut': {
return {
...state,
isLoggedIn: false,
};
}
default:
return state;
}
}
const initialState = {
username: '',
password: '',
isLoading: false,
error: '',
isLoggedIn: false,
};
```
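A nice side effect of centralizing every _How_ in `loginReducer`: because the reducer is a pure function, state transitions can be unit-tested without rendering anything. Here's a minimal sketch that replays a failed login; it uses a condensed copy of the reducer above (only the two cases exercised) and made-up credential values:

```javascript
// Condensed copy of loginReducer with just the cases this sketch replays.
function loginReducer(state, action) {
  switch (action.type) {
    case 'login':
      return { ...state, error: '', isLoading: true };
    case 'error':
      return {
        ...state,
        error: 'Incorrect username or password!',
        isLoggedIn: false,
        isLoading: false,
        username: '',
        password: '',
      };
    default:
      return state;
  }
}

const initialState = {
  username: 'harry',
  password: 'hunter2',
  isLoading: false,
  error: '',
  isLoggedIn: false,
};

// Replay a failed login: the user submits, then the request errors out.
const finalState = [{ type: 'login' }, { type: 'error' }].reduce(
  loginReducer,
  initialState
);

console.log(finalState.isLoading); // false
console.log(finalState.error); // 'Incorrect username or password!'
console.log(finalState.username); // ''
```

Reducing over a list of actions is all it takes to replay a user's session, which is exactly the _What_-first style this post argues for.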
# Think like the user
`useReducer` gets you to write code the way a user will interact with your component.
You are encouraged to think in the _What_ and centralize all _How_ questions inside the reducer.
I'm so excited `useReducer` is now built-in to React. It's one more reason why I love it.
---
If you enjoyed this article you can find more like this [on my blog](https://hswolff.com/)!
And if you like to see my talk about things you can [check out my YouTube channel](https://www.youtube.com/user/hswolff) for tutorial videos!
| hswolff |
111,300 | 5 Awesome Laravel Nova Packages to Kickstart Your Next Project | A collection of five packages to kickstart your next project with Laravel Nova. These are some of my personal favourites that I use on every project. | 0 | 2019-05-22T09:58:37 | https://dev.to/jackabox/5-awesome-laravel-nova-packages-to-kickstart-your-next-project-1932 | laravel, nova, php, laravelnova | ---
title: 5 Awesome Laravel Nova Packages to Kickstart Your Next Project
published: true
description: A collection of five packages to kickstart your next project with Laravel Nova. These are some of my personal favourites that I use on every project.
tags: laravel, nova, php, laravel nova
---
I've been using Laravel Nova a lot since its release and find myself using the same packages across projects just because they're awesome and work well. In this post I've compiled a list of my five favourite packages to include in your Laravel Nova install.
If you haven't used Nova before, it's an admin package written for Laravel and allows for rapid prototyping of a CRUD admin system quickly. It's also extensible and all written within Vue/Tailwind. I'd suggest checking it out for your projects at [https://nova.laravel.com/](https://nova.laravel.com/).
## Nova Flexible Content
Nova Flexible Content is a package written by [Whitecube](https://github.com/whitecube) and allows for a way to build repeatable fields on your models. It generally allows for any subfield or other package to be passed through and will collate the data into JSON to be saved in your table.
The team have been awesome at fixing a few bugs and providing updates. They've recently included the ability to have nested flexible content, which takes this package to the next level.
GitHub Link: [https://github.com/whitecube/nova-flexible-content](https://github.com/whitecube/nova-flexible-content)
## Advanced Nova Media Library
Advanced Nova Media Library takes the [Spatie's Media Library](https://github.com/spatie/laravel-medialibrary) and extends it to be integrated into Nova. I've been using Spatie's Media Library for several years and have found it to be an invaluable resource - especially with the generation of responsive images.
This package by [Eduard Bess](https://github.com/ebess) integrates all the Spatie goodies into Nova and allows for the defining of gallery, files or single images. It also allows for the clipping of images upon upload which makes package even nicer.
GitHub Link: [https://github.com/ebess/advanced-nova-media-library](https://github.com/ebess/advanced-nova-media-library)
## Nova Duplicate Field
Nova Duplicate Field is the first package I wrote for Nova and allows for the quick duplication of models and their relations. This package takes a few parameters and utilises them at the click of the button. It will then redirect you to the edit view so that you can make any necessary changes that you need.
It's currently at around ~2000 installs.
GitHub Link: [https://github.com/jackabox/nova-duplicate-field](https://github.com/jackabox/nova-duplicate-field)
## Nova Snowball
This one is a theme, but [Stephen Lake](https://github.com/stephenlake) has built a really clean interface on top of Nova which cleans up a few things to my personal preference, such as collapsible field groups and reduced white space around elements. It also provides some responsive elements, which Nova does not do by default.
GitHub Link: [https://github.com/stephenlake/nova-snowball](https://github.com/stephenlake/nova-snowball)
## Nova Backup Tool
This one is direct from [Spatie](https://github.com/spatie) and provides a slick interface for monitoring the back ups of your site. It's crucially important to keep a solid backup and this package will allow you to trigger a manual backup or see the status of your scheduled backups.
GitHub Link: [https://github.com/spatie/nova-backup-tool](https://github.com/spatie/nova-backup-tool)
That's all for this list. If you have any packages you're particularly fond of, drop me a tweet [@jackabox](https://twitter.com/jackabox) and let me know!
Originally written at [https://jackwhiting.co.uk](https://jackwhiting.co.uk) | jackabox |
111,302 | Elixir Metaprogramming 02: Macros | Based on the "Macros" guide, with the permission of the official Elixir site and with additions and corrections, this article explains how to define and use macros in Elixir. | 0 | 2019-12-20T08:53:00 | https://dev.to/gumi/elixir-02-7l | elixir, webdev, tutorial, programming | ---
title: Elixir Metaprogramming 02: Macros
published: true
description: Based on the "Macros" guide, with the permission of the official Elixir site and with additions and corrections, this article explains how to define and use macros in Elixir.
tags: #elixir, #webdev, #tutorial, #programming
---
This article is based on the "[Macros](https://elixir-lang.org/getting-started/meta/macros.html)" guide, with the permission of the official Elixir site, and explains, with additions and corrections, how to define macros in Elixir.
# Introduction
Elixir provides an environment for using macros as safely as possible. Even so, the responsibility for writing clean code with macros lies with the developer. Macros are harder to write than ordinary Elixir functions, so it is best not to use them gratuitously.
Elixir already provides mechanisms, in the form of data structures and functions, for writing clear, readable code. Code should be explicit rather than implicit, and clear rather than merely concise. Please use macros only when they are truly necessary.
# Our first macro
Macros in Elixir are defined with [`defmacro/2`](https://hexdocs.pm/elixir/Kernel.html#defmacro/2). In this article we will mostly write code in `.exs` files and run it with the command `elixir file_name` or `iex file_name`.
Let's write a simple macro and check how it behaves. We define a module in `macros.exs` and add a macro and a function. The bodies of the two are the same: depending on the negated condition, they execute the expression given as an argument.
```elixir
defmodule Unless do
def fun_unless(clause, do: expression) do
if(!clause, do: expression)
end
defmacro macro_unless(clause, do: expression) do
quote do
if(!unquote(clause), do: unquote(expression))
end
end
end
```
The function passes the arguments it receives to [`if/2`](https://hexdocs.pm/elixir/Kernel.html#if/2). The macro, however, receives their internal representation (see "[Elixir Metaprogramming 01: Internal representation, quote/2 and unquote/1](https://dev.to/gumi/elixir---quote2unquote1-f4i-temp-slug-2512891?preview=aa737ae159320789d47880244c4d8f7932cf1d320b50728763c9f568b45c560772df16045eab26a2814ecb951abf68dc9c5df380e95ec4d0273989ea)") and returns another internal representation with it injected.
To try the macro defined above, let's open `iex` with this module:
```
$ iex macros.exs
```
Before using a macro, we first have to require its module with [`require/2`](https://hexdocs.pm/elixir/Kernel.SpecialForms.html#require/2). After that, it can be called just like a function:
```elixir
iex> require Unless
Unless
iex> Unless.macro_unless(true, do: IO.puts "this should never be printed")
nil
iex> Unless.fun_unless(true, do: IO.puts "this should never be printed")
this should never be printed
nil
```
The macro and the function both return the same value (`nil`), but the macro never prints the string passed to [`IO.puts/2`](https://hexdocs.pm/elixir/IO.html#puts/2). The function prints it because its arguments are evaluated before the value is returned. A macro, by contrast, does not evaluate the arguments passed to it: they are received as an internal representation and turned into another internal representation. The `macro_unless` macro we defined becomes the internal representation of `if`.
Calling `macro_unless` as above is the same as passing the following internal representation as its argument:
```elixir
iex> Unless.macro_unless(
...> true,
...> [
...> do: {{:., [], [{:__aliases__, [alias: false], [:IO]}, :puts]}, [],
...> ["this should never be printed"]}
...> ]
...> )
```
Furthermore, when the macro definition is expanded into its internal representation, it looks like this:
```elixir
{:if, [context: Unless, import: Kernel],
[
{:!, [context: Unless, import: Kernel], [true]},
[
do: {{:., [],
[
{:__aliases__, [alias: false, counter: -576460752303422719], [:IO]},
:puts
]}, [], ["this should never be printed"]}
]
]}
```
You can check the internal representation of the arguments with [`quote/2`](https://hexdocs.pm/elixir/Kernel.SpecialForms.html#quote/2), and expand that internal representation with [`Macro.expand_once/2`](https://hexdocs.pm/elixir/Macro.html#expand_once/2):
```elixir
iex> expr = quote do: Unless.macro_unless(true, do: IO.puts "this should never be printed")
{{:., [], [{:__aliases__, [alias: false], [:Unless]}, :macro_unless]}, [],
[
true,
[
do: {{:., [], [{:__aliases__, [alias: false], [:IO]}, :puts]}, [],
["this should never be printed"]}
]
]}
iex> res = Macro.expand_once(expr, __ENV__)
{:if, [context: Unless, import: Kernel],
[
{:!, [context: Unless, import: Kernel], [true]},
[
do: {{:., [],
[
{:__aliases__, [alias: false, counter: -576460752303422719], [:IO]},
:puts
]}, [], ["this should never be printed"]}
]
]}
iex> IO.puts Macro.to_string(res)
if(!true) do
IO.puts("this should never be printed")
end
:ok
iex> IO.puts Macro.to_string(expr)
Unless.macro_unless(true) do
IO.puts("this should never be printed")
end
:ok
```
`Macro.expand_once/2` receives an internal representation and expands it according to the current environment. In the example above, the macro `Unless.macro_unless/2` was expanded, called, and its result returned (`__ENV__` is discussed later). We then printed the returned internal representation as a string with [`IO.puts/2`](https://hexdocs.pm/elixir/IO.html#puts/2) to inspect it.
By the way, when you want to inspect an internal representation as the string form of the code, it is easier to write the following:
```elixir
iex> expr |> Macro.expand_once(__ENV__) |> Macro.to_string |> IO.puts
if(!true) do
IO.puts("this should never be printed")
end
:ok
```
That is the basic workings of a macro: it receives an internal representation and transforms it into another one. In fact, Elixir's [`unless/2`](https://hexdocs.pm/elixir/Kernel.html#unless/2) is implemented like this:
```elixir
defmacro unless(clause, do: expression) do
quote do
if(!unquote(clause), do: unquote(expression))
end
end
```
Constructs such as [`unless/2`](https://hexdocs.pm/elixir/Kernel.html#unless/2), [`defmacro/2`](https://hexdocs.pm/elixir/Kernel.html#defmacro/2), [`def/2`](https://hexdocs.pm/elixir/Kernel.html#def/2), and [`defprotocol/2`](https://hexdocs.pm/elixir/Kernel.html#defprotocol/2), as well as much of the code shown in the official guides, are implemented in pure Elixir, and not a few of them as macros. The constructs that build up the language can also be used by developers to extend the language toward the domains they are working in.
We can define functions and macros as our needs require, and even override definitions built into Elixir. The only exception is Elixir's special forms: they are not implemented in Elixir and therefore cannot be overridden. For the full list of special forms, see [`Kernel.SpecialForms`](https://hexdocs.pm/elixir/Kernel.SpecialForms.html).
# Macro hygiene
Elixir macros are resolved late. This guarantees that a variable defined inside a macro will not conflict with a variable defined in the context where the macro is expanded.
For example, suppose we define a macro and a function in separate modules as follows:
```elixir
defmodule Hygiene do
  defmacro no_interference do
    quote do: a = 1
  end
end

defmodule HygieneTest do
  def go do
    require Hygiene
    a = 13
    Hygiene.no_interference
    a
  end
end
```
Even though the invoked macro assigns a different value to a variable of the same name after the function has bound it, the function's variable is unchanged:
```elixir
iex> HygieneTest.go
13
```
When you deliberately want the macro to affect the context it is invoked in, use [`var!/2`](https://hexdocs.pm/elixir/Kernel.html#var!/2):
```elixir
defmodule Hygiene do
  # defmacro no_interference do
  defmacro interference do
    # quote do: a = 1
    quote do: var!(a) = 1
  end
end

defmodule HygieneTest do
  def go do
    require Hygiene
    a = 13
    # Hygiene.no_interference
    Hygiene.interference
    a
  end
end
```
A variable the macro wraps in `var!/2` is defined in, and overwrites, the context where the macro is invoked:
```elixir
iex> HygieneTest.go
1
```
Since the overwritten original value is never used, the compiler emits a warning to that effect:
```
warning: variable "a" is unused
```
A variable's context is given by the atom in the third element of its quoted representation, and a variable quoted inside a module carries that module as its context. This is what keeps macros hygienic: they never pollute other contexts (see "[Hygienic macro](https://ja.wikipedia.org/wiki/%E5%81%A5%E5%85%A8%E3%81%AA%E3%83%9E%E3%82%AF%E3%83%AD)").
```elixir
defmodule Sample do
  def quoted do
    quote do: x
  end
end
```
```elixir
iex> quote do: x
{:x, [], Elixir}
iex> Sample.quoted
{:x, [], Sample}
```
Elixir provides the same mechanism for imports and aliases: a macro operates in the context of the module that defines it and will not conflict with the site where it is expanded. When you deliberately want to affect the caller's environment, use `var!/2` or [`alias!/1`](https://hexdocs.pm/elixir/Kernel.html#alias!/1), but be careful: they give up hygiene and directly change the environment the macro is used in.
With [`Macro.var/2`](https://hexdocs.pm/elixir/Macro.html#var/2) you can create variables dynamically. The first argument is the variable name and the second is its context.
```elixir
defmodule Sample do
  defmacro initialize_to_char_count(variables) do
    Enum.map variables, fn(name) ->
      var = Macro.var(name, nil)
      length = name |> Atom.to_string |> String.length
      quote do
        unquote(var) = unquote(length)
      end
    end
  end

  def run do
    initialize_to_char_count [:red, :green, :yellow]
    [red, green, yellow]
  end
end
```
```elixir
iex> Sample.run
[3, 5, 6]
```
# Environment
In "Our first macro" above, we passed [`__ENV__`](https://hexdocs.pm/elixir/Kernel.SpecialForms.html#__ENV__/0) as the second argument to `Macro.expand_once/2`. Its value is an instance of the [`Macro.Env`](https://hexdocs.pm/elixir/Macro.Env.html) struct, which holds useful information about the compilation environment: the current module, file, and line, all variables defined in the current scope, and anything brought in via `import/2` and `require/2`.
```elixir
iex> __ENV__.module
nil
iex> __ENV__.file
"iex"
iex> __ENV__.requires
[IEx.Helpers, Kernel, Kernel.Typespec]
iex> require Integer
Integer
iex> __ENV__.requires
[IEx.Helpers, Integer, Kernel, Kernel.Typespec]
```
Many of the functions in the `Macro` module expect an environment argument. See [Macro](https://hexdocs.pm/elixir/Macro.html) for details; the compilation environment itself is documented in [Macro.Env](https://hexdocs.pm/elixir/Macro.Env.html).
# Private macros
Elixir also supports private macros via [`defmacrop/2`](https://hexdocs.pm/elixir/Kernel.html#defmacrop/2). Like private functions, a private macro is only available inside the module that defines it, and only at compile time.
```elixir
defmodule Sample do
  defmacrop two, do: 2
  def four, do: two + two
end
```
```elixir
iex> Sample.four
4
```
A private macro must also be defined before it is used: because macros are expanded during compilation, calling one before its definition raises an error:
```elixir
defmodule Sample do
  def four, do: two + two # ** (CompileError) macros.exs: undefined function two/0
  defmacrop two, do: 2
end
end
```
# Writing macros responsibly
Macros are a powerful construct, and Elixir provides several mechanisms to ensure they are written responsibly:
- **Hygienic**: by default, variables defined inside a macro do not affect the user's code. Furthermore, function calls and aliases available in the macro's context do not leak into the user's context.
- **Lexical**: it is impossible to inject code or macros globally. A macro must be brought into scope explicitly with `require/2` or `import/2` before it can be used.
- **Explicit**: macros never run unless invoked explicitly. Some languages use parsing, reflection, and similar mechanisms that let functions be rewritten entirely behind the developer's back; in Elixir, a macro must be explicitly invoked by the caller at compilation time.
- **Clear**: many languages provide shortcut notations for `quote` and `unquote`. Elixir chose to require them spelled out in full, so that a macro definition and its quoted expressions are clearly distinguishable.
Even with these guarantees, much of the responsibility for writing macros rests with the developer. Whenever you conclude you need a macro's help, keep in mind that macros are not your API.
Keep macro definitions short, including their quoted expressions. The following, for example, is considered bad style:
```elixir
defmodule MyModule do
  defmacro my_macro(a, b, c) do
    quote do
      do_this(unquote(a))
      ...
      do_that(unquote(b))
      ...
      and_that(unquote(c))
    end
  end
end
```
Written as below, the code is clearer and easier to test and maintain, because the function `do_this_that_and_that/3` can be called and tested directly. It also helps with designing an API for developers who prefer not to depend on the macro:
```elixir
defmodule MyModule do
  defmacro my_macro(a, b, c) do
    quote do
      # Keep the macro body minimal;
      # delegate everything else to a function.
      MyModule.do_this_that_and_that(unquote(a), unquote(b), unquote(c))
    end
  end

  def do_this_that_and_that(a, b, c) do
    do_this(a)
    ...
    do_that(b)
    ...
    and_that(c)
  end
end
```
| gumitech |
111,339 | The visual learner’s guide to async JS |
Have you ever watched or read hours’ worth of tutorials but were still left con... | 0 | 2019-07-08T20:06:56 | https://blog.logrocket.com/the-visual-learners-guide-to-async-js-62a0a03d1d57 | javascript, learntocode, frontend, programming | ---
title: The visual learner’s guide to async JS
published: true
tags: javascript,learntocode,frontend,programming
canonical_url: https://blog.logrocket.com/the-visual-learners-guide-to-async-js-62a0a03d1d57
---

Have you ever watched or read hours’ worth of tutorials but were still left confused? That’s how I felt when I first dove into learning asynchronous JavaScript. I struggled to clearly see the differences between promises and async/await, especially because under the hood, they’re the same.
Async JS has evolved a lot over the years. Tutorials are great, but they often give you a snapshot of what is the “right” way to do things at that particular point in time. Not realizing I should pay attention to the content’s date (😅), I found myself mixing different syntaxes together. Even when I tried to only consume the most recent content, something was still missing.
I realized much of the material out there wasn’t speaking to my learning style. I’m a visual learner, so in order to make sense of all the different async methods, I needed to organize it all together in a way that spoke to my visual style. Here I’ll walk you through the questions I had about async and how I differentiated promises and async/await through examples and analogies.
### Why do we need async?
At its core, JavaScript is a synchronous, blocking, single-threaded language. If those words don’t mean much to you, this visual helped me better understand how asynchronous JS can be more time-efficient:
<figcaption><a href="https://eloquentjavascript.net/11_async.html">Thick lines = time the program spends running normally. Thin lines = time spent waiting for the network</a>.</figcaption>
We want to use async methods for things that can happen in the background. You wouldn’t want your entire app to wait while you query something from the database or make an API request. In real life, that would be the equivalent of not being able to do anything — no phone calls, no eating, no going to the bathroom — until the laundry machine is done. This is less than ideal.
Out of the box, JS is synchronous, but we have ways of making it _behave_ asynchronously.
### Evolution of async
When searching online for “async JS,” I came across many different implementations: callbacks, promises, and async/await. It was important for me to be clear about each method and its unique value proposition so I could code with consistent syntax throughout. Here’s a breakdown of each one:
#### **Callbacks**
Before ES6, we’d implement this async behavior using callbacks. I won’t get too deep into it here, but, in short, a callback is a function that you send as a parameter to another function that will be executed once the current function is finished executing. Let’s just say there’s a reason why people refer to it as “callback hell.”
In order to control the sequence of events, using callbacks, you’d have to nest functions within callbacks of other functions to ensure they occur in the order you expect.
<figcaption>“Callback hell.”</figcaption>
Since implementing this gave us all headaches, the JS community came up with the promise object.
#### **Promises**
As humans, it’s easier for us to understand and read synchronous code, so promises were created to _look_ more synchronous but _act_ asynchronously. Ideally, it would look something like this:
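The snippet at this point appeared only as an image in the original post. A minimal sketch of what such a promise chain might look like (the `getUser`/`getOrders` names are illustrative, not from the original):

```javascript
// Each step returns a promise, so the chain reads top-to-bottom,
// almost like synchronous code.
function getUser(id) {
  return Promise.resolve({ id, name: "Ada" });
}

function getOrders(user) {
  return Promise.resolve([`order-for-${user.name}`]);
}

getUser(1)
  .then((user) => getOrders(user))
  .then((orders) => console.log(orders)); // logs [ 'order-for-Ada' ]
```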

This might look nice, but it's missing a few key elements, one of which is error handling. Have you ever gotten an `UnhandledPromiseRejectionWarning` or error? It appears when an error occurs that causes the promise to be rejected instead of resolved.
In the snippet above, we only handle the "success" case, meaning that a rejection is never handled, and the memory the promise is taking up is never freed. If you're not careful, a promise will fail silently unless it is manually handled with catch:
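(Again, the original showed this only as an image.) A hedged sketch of catching the rejection, using an illustrative `fetchData` that always fails:

```javascript
function fetchData() {
  // Simulate a request that fails.
  return Promise.reject(new Error("network down"));
}

fetchData()
  .then((data) => console.log("got", data))
  .catch((err) => console.log("handled:", err.message)); // handled: network down
```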

#### **Async/await**
This is the syntactic sugar on top of promises, which helps the code look more readable. When we add the async keyword in front of the function, it changes its nature.
An async function will return a value inside of a promise. In order to access that value, we need to either .then() the method or await it.
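For example (a small sketch, not from the original post):

```javascript
async function greet() {
  return "hi"; // actually delivered as Promise.resolve("hi")
}

// Option 1: .then() the returned promise.
greet().then((value) => console.log(value)); // hi

// Option 2: await it inside another async function.
(async () => {
  console.log(await greet()); // hi
})();
```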
Style and conventions aside, it is technically OK to use different async methods together in your code since they all implement async behavior. But once you fully understand the differences between each one, you’ll be able to write with consistent syntax without hesitation.
Since async/await utilizes promises, I initially struggled to separate the two methods in terms of syntax and conventions. To clear up the differences between them, I mapped out each method and its syntax for each use case.
### **Comparing promises and async/await**
These comparisons are a visually upgraded version of what I originally mapped out for myself. **Promises are on the left, async/await on the right**.
#### **Consuming**

getJSON() is a function that returns a promise. For promises, in order to resolve the promise, we need to .then() or .catch() it. Another way to resolve the promise is by awaiting it.
N.B., await can only be called inside of an async function. The async function here was omitted to show a more direct comparison of the two methods.
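The comparison images are roughly equivalent to the following sketch, where `getJSON` is a stand-in that resolves with a dummy object:

```javascript
// Stand-in for a function that returns a promise.
function getJSON() {
  return Promise.resolve({ ok: true });
}

// Promises: resolve the value with .then()/.catch().
getJSON()
  .then((data) => console.log(data.ok)) // true
  .catch((err) => console.error(err));

// Async/await: resolve the value by awaiting it.
async function consume() {
  const data = await getJSON();
  console.log(data.ok); // true
}
consume();
```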
#### **Creating**

Both of these will return `Promise {<resolved>: "hi"}`. With `async`, even if you don't explicitly return a promise, it will ensure your return value is wrapped in one.
resolve() is one of the functions passed to the promise executor. When called, it settles the promise as resolved with the given value. In order to directly compare this behavior, the async method is wrapped in an immediately invoked function.
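Side by side, the two creation styles might look like this (a sketch standing in for the original images):

```javascript
// Promise version: resolve is one of the executor's callback functions.
const p1 = new Promise((resolve) => resolve("hi"));

// Async version (wrapped in an IIFE): the return value
// is wrapped in a promise automatically.
const p2 = (async () => "hi")();

p1.then((value) => console.log(value)); // hi
p2.then((value) => console.log(value)); // hi
```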
#### **Error handling**

There are a few ways to catch errors. One is by using then/catch, and the other is by using try/catch. Both ways can be used interchangeably with promises and async/await, but these seem to be the most commonly used conventions for each, respectively.
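A small sketch of the two conventions, using an illustrative `mightFail` that always rejects:

```javascript
function mightFail() {
  return Promise.reject(new Error("boom"));
}

// Promise convention: then/catch.
mightFail()
  .then((value) => console.log(value))
  .catch((err) => console.log("caught:", err.message)); // caught: boom

// Async/await convention: try/catch.
async function run() {
  try {
    await mightFail();
  } catch (err) {
    console.log("caught:", err.message); // caught: boom
  }
}
run();
```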
A major advantage of using async/await is in the error stack trace. With promises, once B resolves, we no longer have the context for A in the stack trace. So, if B or C throw an exception, we no longer know A’s context.
With async/await, however, A is suspended while waiting for B to resolve. So, if B or C throw an exception, we know in the stack trace that the error came from A.
#### **Iterating**

I’m using single letters for names here to help you more clearly see the differences between the syntaxes. Before, I would read through code samples where I felt like I had to whack through the weeds of the function names to understand what was happening. It became very distracting to me, especially as such a visual learner.
N.B., even though each task is async, these both won’t run the tasks concurrently. I’ll touch on this in **Parallel execution** below.
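A sketch of iterating in both styles (the `task` function is illustrative); note that in both versions each task starts only after the previous one resolves:

```javascript
function task(n) {
  return Promise.resolve(n * 2);
}

// Promise style: build the chain with reduce so tasks run one by one.
function runAllThen(ns) {
  return ns.reduce(
    (chain, n) => chain.then((acc) => task(n).then((result) => [...acc, result])),
    Promise.resolve([])
  );
}

// Async/await style: await inside a for...of loop.
async function runAllAwait(ns) {
  const results = [];
  for (const n of ns) {
    results.push(await task(n)); // waits for each task in turn
  }
  return results;
}

runAllThen([1, 2, 3]).then((r) => console.log(r)); // [ 2, 4, 6 ]
runAllAwait([1, 2, 3]).then((r) => console.log(r)); // [ 2, 4, 6 ]
```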
#### **Testing**

There are subtle but important differences here. Remember that async functions return promises, so similarly, if you are using regular promises, you must return them.
Other things to note:
- Not putting await in front of something async results in an unresolved promise, which would make your test result return a false positive
- If you want to stub an async method that returns a promise, you can do something like this:

Now that we’ve covered most of the basic scenarios, let’s touch on some more advanced topics regarding async.
### Parallel vs. sequential async
Since async/await makes the syntax so readable, it can get confusing to tell when things are executed in parallel versus sequentially. Here are the differences:
#### **Parallel execution**
Let’s say you have a long to-do list for the day: pick up the mail, do laundry, and respond to emails. Since none of these things depend on one another, you can use Promise.all() to run each of these tasks. Promise.all() takes an array (or any iterable) of promises and resolves once all of the async methods resolve, or rejects when one of them rejects.
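A sketch of the to-do-list example with stubbed-out tasks:

```javascript
const pickUpMail = () => Promise.resolve("mail");
const doLaundry = () => Promise.resolve("laundry");
const answerEmails = () => Promise.resolve("emails");

// All three tasks start immediately; Promise.all resolves
// once every one of them has resolved.
Promise.all([pickUpMail(), doLaundry(), answerEmails()]).then((done) =>
  console.log(done) // [ 'mail', 'laundry', 'emails' ]
);
```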

#### **Sequential execution**
Alternatively, if you have tasks that are dependent on one another, you can execute them in sequence. For example, let’s say you’re doing laundry. You have to do things in a sequence: wash, dry, fold. You cannot do all three at the same time. Since there’s an order to it, you would do it this way:
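A sketch of the laundry example with stubbed-out steps:

```javascript
const wash = (clothes) => Promise.resolve(`washed ${clothes}`);
const dry = (washed) => Promise.resolve(`dried ${washed}`);
const fold = (dried) => Promise.resolve(`folded ${dried}`);

async function laundryDay() {
  const washed = await wash("shirts"); // must finish before drying
  const dried = await dry(washed);     // must finish before folding
  return fold(dried);
}

laundryDay().then((result) => console.log(result)); // folded dried washed shirts
```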

These functions are executed in sequence because the return values here are used as inputs for the next functions. So the function must wait until the value is returned in order to proceed executing.
### Tip for success
Everyone has a different learning style. No matter how many tutorials I watched or blog posts I read, there were still holes in my async knowledge. Only when I sat down and mapped everything out did I finally put the pieces together.
Don’t get frustrated or discouraged when you come across a concept you struggle with. It’s simply because the information isn’t being presented to you in a way that speaks to your learning style. If the material isn’t out there for you, create it yourself and share it! It might surprise you how many people out there are feeling the same way as you.
Thanks for reading 🙌! Would love to hear your thoughts, feel free to leave a comment.
Connect with me on [Instagram](https://www.instagram.com/edenadler) and check out my [website](https://edenadler.com) 👈.
### Plug: [LogRocket](https://logrocket.com/signup/), a DVR for web apps
[](https://logrocket.com/signup/)<figcaption><a href="https://logrocket.com/signup/">https://logrocket.com/signup/</a></figcaption>
[LogRocket](https://logrocket.com/signup/) is a frontend logging tool that lets you replay problems as if they happened in your own browser. Instead of guessing why errors happen, or asking users for screenshots and log dumps, LogRocket lets you replay the session to quickly understand what went wrong. It works perfectly with any app, regardless of framework, and has plugins to log additional context from Redux, Vuex, and @ngrx/store.
In addition to logging Redux actions and state, LogRocket records console logs, JavaScript errors, stacktraces, network requests/responses with headers + bodies, browser metadata, and custom logs. It also instruments the DOM to record the HTML and CSS on the page, recreating pixel-perfect videos of even the most complex single-page apps.
[Try it for free](https://logrocket.com/signup/).
* * *
The post [The visual learner's guide to async JS](https://blog.logrocket.com/the-visual-learners-guide-to-async-js-62a0a03d1d57) appeared first on [LogRocket Blog](https://blog.logrocket.com).
| bnevilleoneill |
111,380 | On Glitch This Week | Just a few projects that caught our eye this week on Glitch (May 22, 2019). | 741 | 2019-05-22T14:45:31 | https://glitch.com/@glitch/glitch-this-week-may-22-2019 | glitch, glitchthisweek, showdev | ---
title: On Glitch This Week
published: true
canonical_url: https://glitch.com/@glitch/glitch-this-week-may-22-2019
series: Glitch This Week
cover_image: https://cdn.glitch.com/2bdfb3f8-05ef-4035-a06e-2043962a3a13%2FdevBanner4.png?1558536124883
description: Just a few projects that caught our eye this week on Glitch (May 22, 2019).
tags: glitch,glitchthisweek,showdev
---
Glitch This Week is a regular series in which we round-up just some of the amazing projects that fantastic creators have made on [Glitch](https://glitch.com/). Here are a few projects that caught our eye this week.
## [Create Your Own Pac-Man Game](https://glitch.com/~figma-pacman)
As the name suggests, "[figma-pacman](https://glitch.com/~figma-pacman)" by [Gleb Sabirzyanov](https://glitch.com/@zyumbik) is a Pac-Man game that you can customize in Figma. With a copy of the Figma file, you can customize the game's level design and then upload it to the app to play. Gleb created a [pink heart-shaped Pac-Man level](https://twitter.com/zyumbik/status/1129361314504216576) and another one in the shape of [the Figma logo](https://twitter.com/zyumbik/status/1129564817738620929), while Ali Salah made a [Twitter Vs. Social Media version](https://twitter.com/alollou/status/1130491266192617472). What level will you create?
{% twitter 1129031291041669121 %}
{% glitch figma-pacman app %}
## [Make Twitter More Accessible](https://glitch.com/~tweet-a11y-stats)
[Cole Gleason](https://glitch.com/@colegleason) is a Ph.D. student researching accessibility technology for people with vision impairments. He created "[tweet-a11y-stats](https://glitch.com/~tweet-a11y-stats)" - a tool to tell Twitter users how many of their photos are accessible for users who are blind. Currently, only around 0.1% of photo tweets on Twitter have an image description, but image descriptions are critical for people with vision impairments to access visual content on Twitter as the descriptions get read by screen reader or Braille display software. Cole's app provides a useful reminder to build or maintain the habit of making sure content is accessible by all when sharing on Twitter.
{% glitch tweet-a11y-stats app %}
## [Get Started with Local Storage](https://glitch.com/~hello-local-storage)
[Kelly Lougheed](https://glitch.com/@kellylougheed), a Computer Science teacher, created "[hello-local-storage](https://glitch.com/~hello-local-storage)" so her students could get started with storing info in the browser using local storage. You can use it too by remixing the app and checking out the [commented source code](https://glitch.com/edit/#!/hello-local-storage?path=script.js:1:0) to see how it works.
{% user kellylougheed1 %}
{% glitch hello-local-storage app %}
Other apps in [this week's round-up](https://glitch.com/@glitch/glitch-this-week-may-22-2019) include a Game of Thrones soundboard, a sparkly real-time web chat, and a 3D visualization of CSS named colors. [Check them out](https://glitch.com/@glitch/glitch-this-week-may-22-2019)! | _gw |
111,403 | How to Grow Your Business as a Developer | Breaking out on your own can be daunting, but also very rewarding. Here we pulled together 6 top tips to help you get on your way to success. | 0 | 2019-05-22T16:10:35 | https://dev.to/officialsilb/how-to-grow-your-business-as-a-developer-5dkg | freelance, businessasadeveloper, entrepreneurtips | ---
title: How to Grow Your Business as a Developer
published: true
description: Breaking out on your own can be daunting, but also very rewarding. Here we pulled together 6 top tips to help you get on your way to success.
tags: freelance, businessasadeveloper, entrepreneurtips
---
As with most industries, many developers dream of turning their skills into a full-time business. However, going out on your own and finding clients can be a daunting task. There is often a lot to learn to go from skilled dev to skilled business owner. Here, we will look at some great ways to grow as a dev and build a business.
## Build Your Technical Skills
Having a strong and [varied skillset](https://blog.teamtreehouse.com/how-to-build-your-professional-portfolio-as-a-developer) will help place you ahead of the competition when pitching for client work. Not only becoming a full stack developer but also a skilled marketer, a smart business man/woman and a savvy accountant will stand you in great stead for success in the future. If there is any area you feel you are lacking, try to fit time in to train or work alongside someone who can plug your knowledge gaps.
## Network
[Networking](https://www.forbes.com/sites/theyec/2014/07/28/how-to-network-the-right-way-eight-tips/#14bd78fb6d47) is a great way to not only meet likeminded individuals but also to be introduced to potential clients or encourage others to recommend you to potential clients. Networking can also be a great place to meet potential mentors who can help guide you through the process of setting up a business as a developer.
## Know Your Marketing
Having great software with no sales or amazing dev skills and no clients will quickly lead to failure. Know who you want to work with or sell to and write a list of ways you can reach them, perhaps via social media, LinkedIn or at networking events. Create a [marketing strategy](https://medium.com/career-change-coder/what-to-consider-when-marketing-yourself-as-a-developer-784c324c551c) in which you’ve identified your demographic and marketing platform along with your budget to help give your efforts some focus.
## Attend Tech Events
Today there are many events and conferences held for web developers. Tech events are a great way to build awareness of your business as well as to learn loads about the industry from expert speakers. These events also attract lots of potential clients and investors, such as cyber security venture capital firm [C5 Capital](https://www.c5capital.com/Blog/tech-tour-2019-growth-summit/) who attended the Tech Tour 2019 Growth Summit in Geneva.
## Have Contracts in Place
It’s important when working with any clients as a web developer that you have solid contracts in place. [A contract](https://www.crunch.co.uk/knowledge/contracts/protect-yourself-with-these-contract-samples/) will protect you should the client have an issue with your work or refuses to pay on time or even at all. Have important communications with clients in writing and make sure both parties have signed off and are in agreement. The contract doesn’t need to be overly complicated; if you’re unsure, there are many contract templates online you can amend to fit with your business.
## Stay Focused
As your business grows it can be easy to lose focus. It can be tricky to keep clients happy, manage accounts, pitch for more work and have a life outside of your business. It’s for this reason that it’s so important to efficiently manage your time and keep your focus on the areas of business you deem to be the most important. It’s also useful to remember that if you’re struggling you can seek outside help in the form of freelancers or employees.
| officialsilb |
111,424 | Do you use isolated tests? | During my professional career, I've worked on projects where most of the tests are integration... | 0 | 2019-05-22T22:43:27 | https://dev.to/delbetu/do-you-use-isolated-tests-4ehc | discuss, testing, design | ---
title: Do you use isolated tests?
published: true
tags: discuss, testing, design
---
During my professional career, I've worked on projects where most of the tests are integration tests.
After watching this video --> [Integrated Tests Are A Scam](https://vimeo.com/80533536) it seems that we've been doing it all wrong.
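For anyone unfamiliar with the distinction, here is a tiny hypothetical illustration (the names are made up): an isolated test replaces the unit's collaborator with a test double so only that unit's logic is exercised, whereas an integration test would wire up the real dependency.

```javascript
// Unit under test: the gateway is injected so tests can replace it.
function createOrderService(paymentGateway) {
  return {
    checkout(amount) {
      const receipt = paymentGateway.charge(amount);
      return `paid ${receipt.amount}`;
    },
  };
}

// Isolated test: a stub stands in for the real gateway, so there is
// no network and no database; only the service's own logic is checked.
const stubGateway = { charge: (amount) => ({ amount }) };
const service = createOrderService(stubGateway);
console.log(service.checkout(42)); // paid 42
```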
Are you using isolated tests in your projects?
How does it feel to program with these kinds of tests?
Thoughts??
| delbetu |
111,471 | The Uri composition mystery | A quick tale of an idiot wasting hours by not checking the docs earlier. | 0 | 2019-05-22T20:58:01 | https://blog.codingmilitia.com/2019/05/22/the-uri-composition-mystery | dotnet, csharp | ---
title: The Uri composition mystery
published: true
tags: dotnet,csharp
canonical_url: https://blog.codingmilitia.com/2019/05/22/the-uri-composition-mystery
cover_image: https://thepracticaldev.s3.amazonaws.com/i/u5fi889caoii2p4zjc7e.png
description: "A quick tale of an idiot wasting hours by not checking the docs earlier."
---
## Intro
This will be one of those posts of shame, that I'll use to make sure the next time I get into the same problem, I don't waste hours trying to figure it out 😛.
## So, what's the problem?
So I was using an `HttpClient`, passing it an `Uri` for the request to make. To compose the `Uri`, I used the constructor that takes two `Uri`s: the first represents the base address, so it must be absolute, while the second should be a relative address.
With this in mind, looking at the following code, what would you expect to be written to the console?
```csharp
var baseUri = new Uri("https://api.dev/v3");
var routeUri = new Uri("stuff", UriKind.Relative);
var fullUri = new Uri(baseUri, routeUri);
Console.WriteLine(fullUri);
```
My expectation would be `https://api.dev/v3/stuff`, but alas, that's not what we get! The output is `https://api.dev/stuff`, because I didn't add a `/` to the end of the base address. If the base address ends with `/`, then the composition would work as expected.
But wait! There's more...
Even with the trailing slash in the base address, if the relative address starts with a slash, it will replace the relative part of the base address as well.
So, the following code:
```csharp
var baseUri = new Uri("https://api.dev/v3/");
var routeUri = new Uri("/stuff", UriKind.Relative);
var fullUri = new Uri(baseUri, routeUri);
Console.WriteLine(fullUri);
```
Will also output `https://api.dev/stuff`.
## "Adding insult to injury"
What's worse than the time I wasted on this, is that this behavior is described in the [docs](https://docs.microsoft.com/en-us/dotnet/api/system.uri.-ctor?view=netcore-2.2#System_Uri__ctor_System_Uri_System_Uri_).
> Remarks
>
> This constructor creates a new Uri instance by combining an absolute Uri instance, baseUri, with a relative Uri instance, relativeUri. If relativeUri is an absolute Uri instance (containing a scheme, host name, and optionally a port number), the Uri instance is created using only relativeUri.
>
> If the baseUri has relative parts (like /api), then the relative part must be terminated with a slash, (like /api/), if the relative part of baseUri is to be preserved in the constructed Uri.
>
> Additionally, if the relativeUri begins with a slash, then it will replace any relative part of the baseUri
>
> This constructor does not ensure that the Uri refers to an accessible resource.
The behavior just seemed so strange to me (although there's probably a good reason for it), I didn't think about looking at the docs, and kept scouring the code for some other reason to what was happening.
## Outro
Wrapping up, if we want to be sure the `Uri` composition works well, we should end the base address with a `/` and **not** start the relative part with one.
```csharp
var baseUri = new Uri("https://api.dev/v3/");
var routeUri = new Uri("stuff", UriKind.Relative);
var fullUri = new Uri(baseUri, routeUri);
Console.WriteLine(fullUri);
// outputs -> https://api.dev/v3/stuff
```
Hopefully I'll remember this the next time! 🙃 | joaofbantunes |
111,479 | Call for JavaScript library: "navigation history" | Based on the discussion in this post: Is there any way to det... | 0 | 2019-05-22T21:43:46 | https://dev.to/ben/call-for-a-navigation-history-library-2k43 | contributorswanted, opensource, webdev, javascript | Based on the discussion in this post:
{% link https://dev.to/ben/is-there-any-way-to-detect-if-a-user-can-go-back-in-pwas-desktop-pwa-most-specifically-203j %}
We got on to this thread:
{% devcomment b2jh %}
{% devcomment b2ke %}
{% devcomment b2mc %}
I believe all the information about determining `cangoback` would exist in the information about past actions taken on the site and persisted in whichever way was called for.
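A minimal sketch of what such a library's core might look like: a history tracker whose storage backend is injected (the `NavTracker` name and the storage wiring are hypothetical, not part of any existing library):

```javascript
// Records in-app navigations so `canGoBack` can be answered even in a
// PWA, where history.length alone isn't reliable.
class NavTracker {
  constructor(storage) {
    this.storage = storage; // e.g. window.sessionStorage in a browser
    this.stack = JSON.parse(storage.getItem("nav-stack") || "[]");
  }
  visit(url) {
    this.stack.push(url);
    this.storage.setItem("nav-stack", JSON.stringify(this.stack));
  }
  canGoBack() {
    return this.stack.length > 1;
  }
}

// In-memory storage stand-in (pass real sessionStorage in a browser).
const memoryStorage = {
  data: {},
  getItem(k) { return this.data[k] ?? null; },
  setItem(k, v) { this.data[k] = v; },
};

const nav = new NavTracker(memoryStorage);
nav.visit("/home");
console.log(nav.canGoBack()); // false, only one entry so far
nav.visit("/article");
console.log(nav.canGoBack()); // true
```

Persisting the stack is what would let the answer survive reloads, which seems to be the crux of the discussion above.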
If someone wants to take a stab at this, it would be really useful! | ben |
113,649 | Meet HangHub, a new productivity tool for GitHub | What is HangHub HangHub is a team productivity tool that lets you see other users who are... | 0 | 2019-05-23T16:29:02 | https://ckeditor.com/blog/Meet-HangHub-a-new-productivity-tool-for-GitHub/ | github, productivity, opensource, c | # What is HangHub
HangHub is a team productivity tool that lets you see other users who are working on the same GitHub issue or pull request as you. They can be commenting, editing, simply viewing or merging. After you and your teammates install HangHub, your team will never have to worry about wasting time working on the exact same thing at the same time.
<p><a href="https://ckeditor.com/blog/Meet-HangHub-a-new-productivity-tool-for-GitHub/hanghub-screencast.gif" class="article-body-image-wrapper"><img src="https://ckeditor.com/blog/Meet-HangHub-a-new-productivity-tool-for-GitHub/hanghub-screencast.gif" alt=""></a></p>
HangHub is currently available to download as a browser extension for [Chrome](https://chrome.google.com/webstore/detail/hanghub/egnoioofamlapfbecfkjgeobkfmfflfo) and [Firefox](https://addons.mozilla.org/en-US/firefox/addon/hanghub/). You can also [build](https://github.com/ckeditor/hanghub) it (and enhance) by yourself. It is available for free, under a permissive MIT Open Source license.
# Why HangHub
With the arrival of CKEditor 5 and its collaboration features, we have shown our commitment to collaboration. We are still working on ways to make CKEditor 5 your ultimate collaboration framework and during this ongoing effort, we also learned a lot about collaboration itself.
During the annual CKEditor hackathon, our developers wanted to tackle the collaboration and communication troubles we have when using GitHub. Have you ever written a lengthy answer to a GitHub discussion only to discover that your teammate spent the last 30 minutes writing about the same? We most definitely have, and this is why HangHub was born.
# How HangHub works
To communicate with your collaborators, HangHub uses the WebSocket protocol. HangHub is based on Preact and VirtualDOM to make updates of the user state really fast and efficient. Thanks to this you can see exactly when the collaborators — other HangHub users — join in, leave, comment on, and edit a GitHub issue or merge a pull request.
Note that you can disable HangHub for any specific organization or repository.

Install HangHub now from [Chrome Web Store](https://chrome.google.com/webstore/detail/hanghub/egnoioofamlapfbecfkjgeobkfmfflfo) or [Firefox Add-ons](https://addons.mozilla.org/en-US/firefox/addon/hanghub/) and share it with your teammates to make the GitHub management process in your organization more efficient!
We hope that HangHub will make your work with GitHub easier and are curious to hear your thoughts about our new tool. For any questions or feedback you might have, you can join the discussion on Product Hunt or contact us. You can also head straight to the [GitHub repository](https://github.com/ckeditor/hanghub) to report issues and submit feature requests.
And don’t forget to star it on GitHub!
This article was originally posted on [ckeditor.com](https://ckeditor.com/blog/Meet-HangHub-a-new-productivity-tool-for-GitHub/) | gok9ok |
113,923 | The Social Worker Who Codes | The beginnings story of a social worker who learned to love code. | 0 | 2019-05-24T22:21:09 | https://dev.to/taylorpage71/trying-to-find-balance-in-change-3bjn | beginners, codenewbie, webdev | ---
title: The Social Worker Who Codes
published: true
description: The beginnings story of a social worker who learned to love code.
tags: beginners, codenewbie, webdev
---
I didn't really mean for it to happen, but I really became enraptured with the idea that one day I could become a web developer full-time. This has been a tough dream to wake up to, but maybe not for reasons you'd think. You see, I spent a lot of time and money in my earlier days building up a career in social work. I'm still paying on a mountain of student loan debt for a Master's degree, and yet I really want to "jump ship" so to speak for a completely unrelated career field? I really don't want to become another stereotypical US student statistic who gets an advanced degree but ends up working in a totally different career, but here I am.
## How it Started
I got my first taste for web development 5 or 6 years ago. My wife and I were on staff at a church and ran the worship and media teams. I also worked full-time as a social worker and was wrapping up my Masters degree (this was before kids, so I could stay very busy). Without much notice, we found out in a staff meeting that the guy who ran our website just left. *"Does anyone know anything about websites?"* Of course no one did. Without any better options available, I volunteered to do it. Why not? "I'm good with computers and technology," I thought. I knew I was also a strong self-learner and enjoyed learning new things.
### It's Hacked!
I started off very quickly in over my head. Thankfully, the previous web admin had left some sign-in credentials for our WordPress site so I at least was able to learn how to log in and set up my own username and password. *phew* I was quickly confused, though, to find that a lot of pages had random ads and words that didn't make sense at all. Why would the admin insert all this odd stuff about product offers and promotions on a church website? Not only were they unrelated, they were just weird. After some additional research, I learned the site had been hacked and that this wasn't an altogether uncommon occurrence for WordPress sites. I did some Googling and found out that our website didn't implement a lot of common best practices for WordPress to avoid such hacks. So, I started cleaning it all up. And oddly enough, despite the frustration of it all, I found it really enjoyable.
### I Actually Like This
Over time, I found myself spending more free time learning about web development and how we could improve our online presence and implement more features on the website. I was having a lot of fun. I had changed jobs in social work for a more flexible schedule and working with the developmentally disabled population. I really enjoyed my full-time social work job. The flexibility, helping people, and getting to flex some creative and technical muscles at my church gig seemed to be the perfect situation. I even got to do a complete site remodel. I ended up cheating and rather than building my own theme I went with Divi from Elegant Themes. My hope was to train some other folks on content creation for the site so I could focus on maintenance and technical improvements.
### All Good Things Must Come to an End
But unfortunately, my ideal situation didn't last forever. Some changes in the church occurred and my wife and I decided to step down from our leadership positions. I offered to run the website until they could find someone else to take over the maintenance. Several months later I was logging into the site to do some updates and found that the entire site had been replaced with a Wix site. *A Wix site*. I was crushed. Shortly after, we left that church altogether, but I found that I *_really_* missed working on the website. So I searched around and found one of the most frequently recommended web development courses was Colt Steele's Online Bootcamp on Udemy. I signed up and got started learning the building blocks of the web.
## Derailed and Juggling Priorities
Not long after we stepped down from our leadership positions at church, I was offered a supervisor position at my social work job. I had always turned down or avoided advancing previously because our work in the church had taken priority, so I thought the timing was perfect.
I had started the Colt Steele course on Udemy but quickly fell off track with the discipline of online learning when I became a supervisor. I found myself working tons of hours and we now had an infant at home. I was getting mentally drained and exhausted at work and also not getting enough sleep with a baby at home. I found less and less time available for my newfound love of web development.
It would take me another year with this schedule to complete the course. But it was awesome. Learning how to build something as *advanced* as a Yelp-type web app was so cool. It wasn't just building websites I was interested in anymore - now I wanted to build web apps!
## Fast Forward to Today
My journey to figure out how I can code and not give up social work entirely has been a confusing one to say the least. I love helping people, so maybe I can do it with code? I've become really interested in accessibility and have found a real desire as well to work with local businesses/professionals in my area. I'm obviously still learning and have a long ways to go before I can call myself a real web developer.
I've found myself really enjoying working with React and Gatsby is one of my favorite tools to work with. I'm trying to learn Next.js and eventually I think I'd like to learn Vue.js but I know I really need to step up my core JavaScript skills first. Sometimes I'm all over the place with what I want to learn and it's a struggle to stay disciplined on one task before moving on to the next tutorial. I randomly spent 4 hours on a full-stack serverless tutorial before I even realized, *"Why am I doing this? I'm still getting React down - focus on JavaScript first!"* Needless to say, I'm still learning on how to stay focused on my self-learning path. It's amazing that there's so much available from the web development community online, but for a true outsider like me it can be very overwhelming.
### Looking Forward
My wife and I have started a business doing websites, social media, and content creation in hopes to work with small businesses in our area. I think one day, I'd like to get a full-time gig as a developer, but I'm content with social work at the moment. I am in the process of switching my full-time job to a different social work agency and giving up a supervisory role so I can have more time with my family and also with pursuing this passion of web development. I'll be working with a developing agency who is just starting to invest in some online technologies and my hope is to learn where I am so I can advance my skills in web development.
I'm excited for what the future holds and to learn more from this amazing community. Who knows? Maybe some day I'll find the perfect marriage in my love for web development and helping those who can't help themselves. For now, social work is my full-time work thing and web development is my free-time passion.
| taylorpage71 |
113,971 | What's One Thing You Wish You Knew Before Starting A Career In Tech? | If you were asked this what would be your response? | 0 | 2019-05-24T13:43:09 | https://dev.to/lauragift21/what-s-one-thing-you-wish-you-knew-before-starting-a-career-in-tech-423k | discuss | If you were asked this what would be your response? | lauragift21 |
114,054 | Customize shell prompt | Change the appearance of the terminal shell prompt: customize colors, text formatting, and... | 0 | 2019-05-24T17:37:15 | https://dev.to/pldg/customize-shell-prompt-2bpa | linux, bash, tutorial | Change the appearance of the terminal [shell prompt](https://en.wikibooks.org/wiki/Guide_to_Unix/Explanations/Shell_Prompt): customize colors, text formatting, and dynamically display other type of information (including git status).
We're going to use bash on Ubuntu but most concepts can also be applied in other unix-based systems (e.g. MacOS and Windows Subsystem for Linux).
## Prompts variables
Bash has four type of prompts controlled by those variables:
- `PS1` primary prompt.
- `PS2` displayed when you've to type more commands (multi-line commands).
- `PS3` displayed when the select command is waiting for input.
- `PS4` displayed when debugging Bash scripts.
If you `echo $PS1` you'll see a bunch of characters sequences:
```txt
\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$
```
When you enter an interactive shell session, the shell reads `PS1` and outputs something like this:
```txt
username@hostname:directory$
```
The dollar `$` symbol at the end signifies that you're a normal user, for root user it's replaced with the hash `#` symbol.
If you `echo $PS2` it'll display only the greater-than sign `>` symbol.
You can see both `PS1` and `PS2` in the screenshot below:

The `PS3` and `PS4` prompts are not very common. In this guide we'll focus on the primary prompt.
## Change the prompt
To control the output of your primary prompt, edit the `PS1` variable:
```sh
PS1='my_prompt: '
```

Because `PS1` was replaced directly in Bash, the changes will disappear on shell `exit`. Later in this guide we'll learn how to make these changes permanent.
## Embedding commands
You can run commands inside `PS1` variable:
```sh
PS1='$(exit_status=$? ; if test $exit_status -ne 0 ; then echo "(err)" ; fi)my_prompt: '
```
If exit code is not equal to `0` it'll display `(err)`.

## Backslash-escaped characters
There are some backslash-escaped special characters you can use to dynamically display useful information in the prompt.
For example:
```sh
PS1='\t \u \h \w \$ '
```
- `\t` display time.
- `\u` display user name.
- `\h` display hostname.
- `\w` display current working directory.
- `\$` display dollar *$* symbol if you're normal user; display hash *#* symbol if you're root user.

Here is a complete list of [backslash-escaped characters](https://ss64.com/bash/syntax-prompt.html).
## ANSI escape sequences
Bash allows the user to call a series of [ANSI escape sequences](https://en.wikipedia.org/wiki/ANSI_escape_code) to change colors, text formatting, cursor location and other options of the terminal window. Those sequences are a set of non-printable [control characters](https://en.wikipedia.org/wiki/ASCII#Control_characters) which the shell interprets as commands.
An ANSI escape sequence always starts with an escape character and a left-bracket character, followed by one or more control characters:
```txt
ESC[COMMAND
```
- The escape character `ESC` can be written as `\033` or `\e` or `\x1b`.
- `COMMAND` is the control character.
See the [list of ANSI sequences](https://bluesock.org/~willkg/dev/ansi.html#sequences) for all available commands (some terminals may have partial support for ANSI sequences).
## Colors and text formatting
To colorize the output of your text terminal use the following ANSI sequence:
```txt
ESC[CODEm
```
Where `CODE` is a series of one or more semicolon-separated color codes.
For example:
```sh
echo -e "\033[44mHello World\033[0m"
```
- `-e` enable `echo` to parse escape sequences.
- `\033[` mark the start of an ANSI sequence.
- `44` is the code for background color blue.
- `m` mark the end of color codes.
- `0` removes all text attributes (formatting and colors). It's important to reset attributes, otherwise the styles will be applied to all text after *Hello World* (including the prompt and the text we type).

You can also modify a color by setting an "attribute" before its base value, separated by a `;` semi-colon.
So if you want a cyan background (`46`) with underlined text (`4`), the sequence is:
```sh
echo -e "\033[4;46mHello World\033[0m"
```

You can combine multiple sequences together:
```sh
echo -e "\033[41m\033[4m\033[1;33mHello World\033[0m"
```

Here you can find a list of all [color codes](https://stackoverflow.com/a/33206814/).
## Save prompt changes permanently
To make the changes permanent you can modify the default `PS1` variable or add a new one at the end of *~/.bashrc* file:
```sh
PS1='$(exit_status=$? ; if test $exit_status -ne 0 ; then echo -e "(\[\033[31m\]err\[\033[0m\])${debian_chroot:+($debian_chroot)}" ; else echo ${debian_chroot:+($debian_chroot)} ; fi)\[\033[1;33m\]\u@\h\[\033[0m\]:\[\033[1;36m\]\w\[\033[0m\]\$ '
```
Non-printing characters must be surrounded with escaped square brackets `\[` (start of non-printing characters) and `\]` (end of non-printing characters). For example `\[\033[1;33m\]`. Otherwise Bash thinks they're printing characters and uses them to calculate the prompt's size (causing the text to wrap badly before it gets to the edge of the terminal).
The `${debian_chroot:+($debian_chroot)}` part displays in your prompt an indication of which [chroot](https://unix.stackexchange.com/a/3174/) you're in.
You can use `source ~/.bashrc` to refresh the changes instead of exit and re-enter the shell.

## Display git repository status
Git provides a script that lets you see the repository status in your prompt.
Download [git-prompt.sh](https://github.com/git/git/blob/8976500cbbb13270398d3b3e07a17b8cc7bff43f/contrib/completion/git-prompt.sh):
```sh
curl -o ~/.git-prompt.sh https://raw.githubusercontent.com/git/git/master/contrib/completion/git-prompt.sh
```
The script comes with a function `__git_ps1` that can be used in two ways:
- You can call it inside `PS1` (this way you can't see colored hints).
- Or you can call it inside [`PROMPT_COMMAND`](https://www.tldp.org/HOWTO/Bash-Prompt-HOWTO/x264.html) (with this method you can enable colored hints).
By default `__git_ps1` will show only the branch you're in; you can also enable git status information through a series of variables whose names start with `GIT_PS1_SHOW*`.
If you don't want colored hints you can simply use `$(__git_ps1 "(%s)")` inside `PS1` and set the `GIT_PS1_SHOW*` variables as you prefer.
To show colored hints you cannot edit `PS1` directly. Instead, you have to call the `__git_ps1` function inside the `PROMPT_COMMAND` variable. If `PROMPT_COMMAND` is set, its value is interpreted as a command to execute before printing the primary prompt. In this mode you can request colored hints using `GIT_PS1_SHOWCOLORHINTS=true`.
Add the following code at the end of *~/.bashrc* file:
```sh
source ~/.git-prompt.sh
GIT_PS1_SHOWDIRTYSTATE="true"
GIT_PS1_SHOWSTASHSTATE="true"
GIT_PS1_SHOWUNTRACKEDFILES="true"
GIT_PS1_SHOWUPSTREAM="auto"
# Colored hints work only if __git_ps1 is called inside PROMPT_COMMAND
GIT_PS1_SHOWCOLORHINTS=true
PROMPT_COMMAND='__git_ps1 "$(exit_status=$? ; if test $exit_status -ne 0 ; then echo -e "(\[\033[31m\]err\[\033[0m\])${debian_chroot:+($debian_chroot)}" ; else echo ${debian_chroot:+($debian_chroot)} ; fi)\[\033[1;33m\]\u@\h\[\033[0m\]:\[\033[1;36m\]\w\[\033[0m\]" "\$ " "(%s)"'
```
- `source ~/.git-prompt.sh` will load *git-prompt.sh* script.
- `GIT_PS1_SHOW*` variables are used to add additional features.
- If `__git_ps1` is used inside `PROMPT_COMMAND` it must be called with at least two arguments, the first is prepended and the second appended to the state string when assigned to `PS1`. There is an optional argument used as [printf](https://ss64.com/bash/printf.html) format string to further customize the output (`%s` is the output of `__git_ps1` which in this case is wrapped in parenthesis).

For more info read comments inside *~/.git-prompt.sh* file.
## External resources
- https://www.digitalocean.com/community/tutorials/how-to-customize-your-bash-prompt-on-a-linux-vps
- https://misc.flogisoft.com/bash/home
- https://wiki.archlinux.org/index.php/Bash/Prompt_customization
- https://medium.freecodecamp.org/how-you-can-style-your-terminal-like-medium-freecodecamp-or-any-way-you-want-f499234d48bc | pldg |
114,097 | An emoji dictionary in Svelte | As an avid user of the 🅱️ emoji, I get (too much) enjoyment out of the few alphabetical characters in... | 0 | 2019-05-24T20:37:10 | https://dev.to/bryce/an-emoji-dictionary-in-svelte-be9 | emoji, svelte, javascript, tutorial | As an avid user of the [🅱️ emoji](https://knowyourmeme.com/memes/b-button-emoji-%F0%9F%85%B1), I get (too much) enjoyment out of the few alphabetical characters in the emoji alphabet.
But we can do more than just substitute a `b` with a 🅱️; I wanted to know how many words can be written _entirely_ with emoji. Let's find out!
<img src="https://i.giphy.com/media/l2Je66zG6mAAZxgqI/giphy.gif" />
First I found an [(English) dictionary](https://github.com/dwyl/english-words) and wrote a quick & dirty [Rust script](https://github.com/brycedorn/emoji-dict/blob/master/src/main.rs) to generate the words. Just a list of words isn't fun though, it needs interactivity! I chose [Svelte](https://svelte.dev/) for this to get some hands-on with its dev experience (it's pretty good!) and performance.
To start, I made a basic webpack config with [svelte-loader](https://github.com/sveltejs/svelte-loader) and three files:
<figcaption>Note: if you want to skip to the end, the source is <a href="https://github.com/brycedorn/emoji-dict/tree/master/web">here.</a></figcaption>
- `index.html`, with a `<body>` where the Svelte app will be mounted to (just like `ReactDOM`)
- `main.js`, where the app is mounted & passed props
- `App.svelte`, for the component & filtering logic
---
<img src="https://thepracticaldev.s3.amazonaws.com/i/atloi9557ivpk64uvphk.png" width=200/>
# JavaScript
In `main.js`, the words are imported and prepared for the component:
```js
import words from 'output.txt';
// Associate letters & sequences with their
// emoji equivalents
const emojiHash = {
"id": "🆔",
"a": "🅰️",
...
"soon": "🔜"
};
// Replace the letters/sequences in a string as
// their respective emoji
const convertToEmoji = (word) => {
let emojified = String(word);
regexKeys.forEach((key, i) => {
    emojified = emojified.replace(key, emojiHash[sortedKeys[i]]);
  });
return emojified;
};
// Render letters/sequences as emoji by running
// the above function until there are no letters
// remaining
function emojify(word) {
let emojified = String(word);
do {
emojified = convertToEmoji(emojified);
} while (emojified.split('').some(e => /^[a-zA-Z]+$/.test(e)));
return emojified;
};
```
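The snippet above uses `regexKeys` and `sortedKeys` without showing how they are built. A plausible setup (an assumption on my part, not code from the original script) sorts the hash keys longest-first, so multi-letter sequences like `soon` are replaced before single letters, and precompiles a global regex for each key:

```js
// Assumed setup: sortedKeys/regexKeys are not defined in the post itself.
const emojiHash = { id: '🆔', a: '🅰️', b: '🅱️', o: '⭕' }; // trimmed-down hash
const sortedKeys = Object.keys(emojiHash).sort((x, y) => y.length - x.length);
const regexKeys = sortedKeys.map((k) => new RegExp(k, 'g'));

const convertToEmoji = (word) => {
  let emojified = String(word);
  regexKeys.forEach((re, i) => {
    // Replace every occurrence of the key with its emoji
    emojified = emojified.replace(re, emojiHash[sortedKeys[i]]);
  });
  return emojified;
};

console.log(convertToEmoji('idaho')); // "id" → 🆔, "a" → 🅰️, "o" → ⭕
```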
Then the component is mounted to the DOM:
```js
const app = new App({
target: document.body,
props: {
emoji: Object.values(emojiHash),
sort: 'default',
words: words.split('\n').map(emojify)
}
});
```
---
<img src="https://thepracticaldev.s3.amazonaws.com/i/ji7zisis4c0f4ce2cer1.png" width=200/>
# Svelte
Great! Now we have formatted data coming into the component, let's do something with it.
`*.svelte` files are HTML files with some syntactic sugar. The basic structure is as follows:
```html
<script>
// Functions, variables
export let words;
function clicky(e) {
console.log(e.target.innerText);
}
</script>
<!-- Any styles associated with the component -->
<style>
.container {
background: palevioletred;
}
</style>
<!-- The rendered markup -->
<div class="container">
<ul>
{#each words as word}
<li>
<p on:click={clicky}>
{word}
</p>
</li>
{/each}
</ul>
</div>
```
:tada: ta-da! :tada: A list of words rendered with Svelte! Note that since `words` is being passed in as a prop, the `export` keyword is needed.
For the sake of brevity I'll just go through adding filtering (sorting is in the [repo](https://github.com/brycedorn/emoji-dict/blob/master/web/App.svelte#L39) if you want to take a look).

Somewhere in the component, let's render a list of checkboxes for each emoji:
```html
Filter:
{#each emoji as moji}
<label>
<input on:change={handleFilterClick} type="checkbox" checked={!filteredEmoji.includes(moji)} value={moji}>
<span>{moji}</span>
</label>
{/each}
```
Since we're rendering the list via the `words` variable, we'll need to update it to reflect the filter.
```html
<script>
export let words;
// Keep an immutable version of words in memory
export let wordsImm = Array.from(words);
function handleFilterClick(e) {
const { checked, value } = e.target;
// Keep immutable version of original prop & make a copy
// to apply filters to
let wordsMut = Array.from(wordsImm);
// Either add or remove the current emoji from the filters
if (checked) {
filteredEmoji.splice(filteredEmoji.indexOf(value), 1);
} else {
filteredEmoji.push(value);
}
// If there are filters, apply them to list of words
if (filteredEmoji.length > 0) {
filteredEmoji.forEach(emoji => {
wordsMut = wordsMut.filter(word => !word.includes(emoji));
});
}
// Set variable to new list
words = wordsMut;
}
</script>
```
When `words` is updated to the filtered (mutated) version after selecting a filter, it will trigger an update and the DOM will render the new list.
<figcaption>Side-note: this could be refactored to have the filtering in the `{each}`, (preventing the need to update `words`) but I wanted to render the number of words in a different part of the component.</figcaption>
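For reference, the filtering itself boils down to a one-liner. A plain-JavaScript sketch of that core idea (the names here are illustrative, not taken from the component):

```js
// Keep only the words that contain none of the filtered emoji
const filterWords = (words, filteredEmoji) =>
  words.filter((word) => !filteredEmoji.some((moji) => word.includes(moji)));

console.log(filterWords(['🅰️🅱️', '🆔'], ['🅰️'])); // [ '🆔' ]
```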
# Final thoughts
Svelte is nice & fast! I plan to use it again, ideally for something more resource intensive/visually demanding to really push it to its limits (beyond where React would have issues).
I also want to see how it is to work on a larger project using [Sapper](https://github.com/sveltejs/sapper), once the framework is more mature.
Go play with it here! [https://bryce.io/emoji-dict](https://bryce.io/emoji-dict)
View [source on Github](https://github.com/brycedorn/emoji-dict). | bryce |
114,147 | Find a framework Js for Project Laravel | Hi guys, I am a Vietnamese, I am building project Laravel + vuejs, I want to find a framework used to warn an error like toast, give me a name, please. | 0 | 2019-05-25T03:54:33 | https://dev.to/duongricky/find-a-framework-js-for-project-laravel-17f4 | javascript, laravel, vue | ---
title: Find a framework Js for Project Laravel
published: true
description: Hi guys, I am a Vietnamese, I am building project Laravel + vuejs, I want to find a framework used to warn an error like toast, give me a name, please.
tags: javascript, laravel, vuejs
---
| duongricky |
114,152 | Why are developers still using this legacy framework? | The trials and tribulations of working for a company using old tech | 0 | 2019-05-25T04:55:20 | https://dev.to/nprimak/why-are-developers-still-using-this-legacy-framework-4ji7 | discuss, angular, webdev, frontend | ---
title: Why are developers still using this legacy framework?
published: true
description: The trials and tribulations of working for a company using old tech
tags: discuss, angular, webdev, frontend
cover_image: https://i.imgur.com/3mrmBZd.png
---
Today I was browsing through my Medium recommendations and came across an article titled "Why are developers still using Angular?"
Check it out here: https://medium.com/@PurpleGreenLemon/why-are-developers-still-using-angular-b9ef29d1f97f
It struck a chord with me, as a developer who has worked for six different companies in my career so far, all of which used Angular. More specifically, five of the six used AngularJS, with the only exception being a very early stage startup where I was able to choose the framework, and went with Angular 6.
While the article focuses on the most recent version of Angular and how it differs dramatically from the original, the author was also quick to condemn AngularJS, hoping it would "die a silent death."
The problem with this attitude, and also the issue I want to discuss, is that many of us are still in jobs where we are using outdated frameworks and technologies. Certainly, we all want to catch up, learn React, learn ES6, Typescript, RxJS, and all the other fun things, but there would literally be no time left in the day, and burnout is a serious risk.
So for those of us developers who work in companies that still support or actively program with legacy tech, how do you tackle these problems? Do you just try to get out as quickly as possible? Do you use your precious free time at the end of the day to keep learning the new stuff? Do you just not worry about it, knowing that legacy tech will always be there? I want to hear your thoughts.
| nprimak |
154,135 | Ember route's actions during Transitions | When I was working with an EmberJS web app, I got stuck on a case where I was hit by the error Nothi... | 0 | 2019-08-08T12:50:35 | https://dev.to/sarath/ember-route-s-actions-during-transitions-59o4 | ember, route | When I was working with an EmberJS web app, I got stuck on a case where I was hit by the error *Nothing handled the action 'alerthello'.* 😔
Consider,
I have an **alerthello** action in the route named *test.js* and a controller named *test.js* with a function **triggerprinthello**.
The controller part looks like:

The route part looks like:

Is this the entire code? Nope, it’s just a sample (you might steal the code if I showed it all 😂)
You may think, who the hell in the world would write such a script? Well, I am that guy.
Ok, let’s come to the point.
I tried to execute the above code; instead of showing the alert, it threw the error.
Then I dug into this thing, and I found something interesting (maybe 🤔 to me).
## You know what I found? 😲
The thing is that Ember maintains an active route list (based on which the bubbling happens). 🤔
When I transitioned from some **Xroute** to **testroute**, Ember won’t add **testroute** to the active route list until the transition is completed.
In my case, **setupcontroller** will call the function **triggerprinthello**, which in turn will search for the action **alerthello** in the active route list. Since **testroute** (still in a transitioning state) is not yet listed in the *active route list*, the action will bubble up to the **application** route looking for a handler (**Ember:** did you forget to add the action while thinking about your girlfriend?) and then boom 💥 it throws the error.
So, how did I solve this?
In two ways,
1) I handled the action in the controller itself.
2) I handled the action in the **didTransition** hook of the route. 😌
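To make option 2 concrete, here is a rough, plain-JavaScript sketch of the idea (an assumption on my part; the original route code isn't shown, and Ember specifics are stubbed out so the snippet runs anywhere). In a real route you would call `this.send('alerthello')` from inside the `didTransition` action, which fires only after the route has joined the active route list:

```js
// Hypothetical shape, mimicking an Ember route's actions hash.
const testRoute = {
  actions: {
    alerthello() {
      return 'hello'; // the real action would show the alert
    },
    didTransition() {
      // Safe point: the transition has completed, so the route is in
      // the active route list and the action can be found.
      return this.alerthello(); // stands in for this.send('alerthello')
    },
  },
};

console.log(testRoute.actions.didTransition()); // hello
```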
You may ask why this crap is posted here; I thought this post might be somewhat helpful to others.
Also, please share your thoughts if I am wrong about something, so that I can correct myself 😊.
| sarath |
114,257 | I Created A Tool To Download All Images From A Webpage | coding CLI tool in Go | 0 | 2019-05-25T13:09:17 | https://dev.to/abanoubha/i-created-a-tool-to-download-all-images-from-a-webpage-ffh | go, cli, coding | ---
title: I Created A Tool To Download All Images From A Webpage
published: true
description: coding CLI tool in Go
tags: #Go #CLI #coding
---
I thought of a tool to download all images from a specific webpage! Why not code a tool for that? Let's code it.
I chose Go as [it is great for many reasons](https://dev.to/devabanoub/why-i-love-go-h3c). I screen-casted it and uploaded the video to [YouTube](https://youtu.be/qJ5RlAFk5QI). It was fun to code such a simple tool in Go and get some practice with the language. I also published the code on [Github](https://github.com/DevAbanoub/img-dl). I am happy I could write this simple program in Go.
After publishing the video/code, someone suggested I use an **XML parser** instead of **regular expressions**. I searched on _Google_, _Github_, _StackOverflow_, _Dev.to_, _Quora_ and _Medium_ to understand the difference, and it is huge!
Using an **XML parser** is more robust and efficient for this purpose than **regex**. As somebody said, "_if you have an alternative to regular expressions, just use it_".
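To illustrate the parser-based approach, here is a sketch of my own (not the code from the repository): Go's standard `encoding/xml` tokenizer walking well-formed, XHTML-like markup and collecting the `src` attribute of every `<img>` tag. For messy real-world HTML, a tolerant parser like `golang.org/x/net/html` is the safer choice.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"strings"
)

// extractImgSrcs collects src attributes from <img> tags.
// Assumes well-formed (XHTML-like) markup.
func extractImgSrcs(doc string) []string {
	dec := xml.NewDecoder(strings.NewReader(doc))
	var srcs []string
	for {
		tok, err := dec.Token()
		if err == io.EOF {
			break
		}
		if err != nil {
			break // stop early on malformed input
		}
		se, ok := tok.(xml.StartElement)
		if !ok || se.Name.Local != "img" {
			continue
		}
		for _, attr := range se.Attr {
			if attr.Name.Local == "src" {
				srcs = append(srcs, attr.Value)
			}
		}
	}
	return srcs
}

func main() {
	page := `<div><img src="a.png"/><p><img src="b.png"/></p></div>`
	fmt.Println(extractImgSrcs(page)) // [a.png b.png]
}
```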
I gain some more experience from coding, publishing, reading, and discussions on all platforms. It is a great community. Thank you all for the discussions and support.
Happy Coding! | abanoubha |
114,301 | I ported my blog to Gatsby 🎉 | Originally posted on coreyja.com (where you can also see this Gatsby site!) For a couple years now... | 0 | 2019-05-25T16:48:54 | https://coreyja.com/ported-blog-to-gatsby/ | blog, gatsby, rambling, react | ---
title: I ported my blog to Gatsby 🎉
published: true
tags: blog, gatsby, rambling, react
canonical_url: https://coreyja.com/ported-blog-to-gatsby/
---
_Originally posted on [coreyja.com](https://coreyja.com/ported-blog-to-gatsby/) (where you can also see this Gatsby site!)_
For a couple of years now I’ve used the Middleman Ruby framework to build my static blog site. I chose Middleman initially since it was a very similar environment to Rails, which I have more experience in. This definitely helped me get off the ground quickly!
But Middleman has some rough edges that I’d run into. One of the root issues is that Middleman is a slightly aging framework, and the community support is starting to lag a little bit. The other big, related downside is relying on the Sprockets Asset Pipeline for JavaScript. The JavaScript community produces an abundance of awesome packages, and when you are tied to the Asset Pipeline you can lose the ability to use all of this open-source goodness.
## Into these gaps fits Gatsby
Gatsby is a static site creator that is built around React and GraphQL, and so far I’ve been really enjoying it. Gatsby has some great stuff out of the box, but it really shines when you start adding different plugins. I started out using the [Gatsby blog template](https://github.com/gatsbyjs/gatsby-starter-blog) which already comes with a few plugins set up and configured!
## Progressive Web App
One thing that I really wanted for my blog was to be able to use it as a Progressive Web App. There aren’t too many things about a PWA that I want to use for my blog, but one small-ish feature that was pretty important to me is the ability to have a theme color in your web manifest, which allows mobile browsers to theme themselves to match your site’s color scheme.
Gatsby makes it super simple to get set up with a `manifest.json`, which is required for a PWA. Here is the config that I am currently using:
```js
{
resolve: `gatsby-plugin-manifest`,
options: {
name: `Coreyja Blog`,
short_name: `coreyja Blog`,
start_url: `/`,
background_color: `#aa66cc`,
theme_color: `#aa66cc`,
display: `minimal-ui`,
icon: `content/assets/favicon.svg`
}
}
```
## Caching and Offline Support
This is a PWA feature that I didn’t have on my old blog, and I was really excited to find out that I was getting it pretty much for free in Gatsby! Using the blog template, my blog was already set up and ready to use service workers to cache all my assets, as well as add offline support for pages that are already cached.
I was really excited to test this out, and while doing so noticed another feature that was really cool! I was playing around in the Dev Tools, and realized that just by hovering over my blog cards I was triggering network requests for the next page. It turns out Gatsby is preloading the links before the User actually clicks on them, to make the loading experience even faster!

## React Components
It’s a bit surprising that I’ve made it this far into this post without even mentioning React components! React seems like it’s been taking the world by storm, but I hadn’t gotten a chance to really work with it before. Since this was a port of my existing blog, I already had existing styles and HTML structure for the pages, and luckily these were already broken apart into components!
I really enjoyed dipping my toes into the React world! I know I barely scratched the surface, since none of my components are very interactive. But even for my very simple components I enjoyed how React components are written and structured.
It’s kinda an inversion of the patterns I am used to, but React (through JSX) pushes you to intermingle your styles, HTML content and JavaScript functionality. The idea is that these three make up the whole of your component, and as such should live close together! It was strangely liberating to use the HTML `style` attribute to add styles to my site, without always worrying about creating a class to hold the styles I wanted to add.

I didn’t inline the entirety of my styles; I kept some in SCSS files. However, I used the new (to me) idea of CSS Modules to keep them independent. The very basic idea is that you can write your CSS using whatever classes make the most sense for you, without worrying about name collisions or specificity wars. Then when you want to use the styles, you import your CSS file into your JSX component, and some behind-the-scenes magic adds unique identifiers to your CSS classes to keep them from colliding with each other. I enjoyed this approach to CSS, as it stopped me from having to worry about namespacing everything manually; that was handled by the software.
## Wrapping Up
This is kinda a long rambling post, but I wanted to write something about this port and migration while it was still fresh in my mind! There are quite a few topics I didn’t get to cover, like how cool using GraphQL in Gatsby is, but those can be covered in some follow-up posts!
I did lose a small feature or two when I ported over, but that was more because I hadn’t re-implemented them yet, not because of a limitation of the frameworks or anything! Stay tuned for some future posts about that too! | coreyja |
114,344 | Nginx for Front-end Developers | Learn Nginx by deploying a react application | 1,071 | 2019-05-25T21:57:01 | https://cloudnweb.dev/2019/05/nginx-for-front-end-developers/ | showdev, react, tutorial, webdev | ---
title: Nginx for Front-end Developers
published: true
description: Learn Nginx by deploying a react application
tags: showdev,react,tutorial,webdev
canonical_url : https://cloudnweb.dev/2019/05/nginx-for-front-end-developers/
series: nginx
---
Support my work by reading this post from my blog : <a href="https://cloudnweb.dev/2019/05/nginx-for-front-end-developers/">Nginx For Front-end Developers</a>
<!-- wp:paragraph -->
<p>This article explains Nginx for front-end developers in a much simpler way</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Nginx is a powerful and high-efficiency web server that primarily solves the <a href="https://en.wikipedia.org/wiki/C10k_problem">C10k problem</a>. It can serve data with blazing speed, and we can use Nginx for other purposes too, like reverse proxying, load balancing, and caching files.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>You might ask why you need to learn about Nginx as a front-end developer. In the modern front-end development world, everything you build on the front end is compiled into HTML, JS, and CSS files. So it is useful to know how the web server handles your files in production.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Nginx Architecture:</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>The basic nginx architecture consists of a master process and its workers. The master reads the configuration file and maintains the worker processes, while the workers do the actual processing of requests.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":530} -->
<figure class="wp-block-image"><img src="https://cloudnweb.dev/wp-content/uploads/2019/05/Screen-Shot-2019-05-26-at-1.34.56-AM-1024x580.png" alt="" class="wp-image-530"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>Nginx is a <a href="https://en.wikipedia.org/wiki/Master/slave_(technology)">Master- slave</a> , <a href="https://en.wikipedia.org/wiki/Event-driven_architecture">Event-driven </a>and <a href="https://anturis.com/blog/nginx-vs-apache/">Non-Blocking</a> architecture.</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><li><strong>Master</strong> - The master is responsible for maintaining and validating the configuration. It is also responsible for creating and terminating the worker processes.</li><li><strong>Worker</strong> - Worker processes are responsible for handling requests on the shared socket. Every worker process can handle thousands of requests, since the processes are single-threaded and non-blocking.</li><li><strong>Cache Loader</strong> - The cache loader populates the worker instances with the data that exists on disk, according to the request metadata.</li><li><strong>Cache Manager</strong> - The cache manager is responsible for validating and configuring cache expiry.</li></ul>
<!-- /wp:list -->
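The "single-threaded and non-blocking" worker model is easier to grasp in miniature. The sketch below uses Python's standard `selectors` module to multiplex many connections in one thread, which is the same event-loop idea nginx workers use. It is purely illustrative (real nginx is written in C and far more sophisticated), and every name in it is invented.

```python
# Toy event loop: one thread, many sockets, no blocking per connection.
# This only illustrates the idea behind nginx's worker model.
import selectors
import socket

sel = selectors.DefaultSelector()
listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # port 0 = let the OS pick a free port
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

def serve_one_event():
    # One turn of the event loop: handle whichever sockets are ready now,
    # instead of dedicating a thread (or process) to each connection.
    for key, _mask in sel.select(timeout=1):
        sock = key.fileobj
        if sock is listener:                    # a new connection is ready
            conn, _addr = sock.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:                                   # a client sent us data
            data = sock.recv(1024)
            if data:
                sock.sendall(b"echo: " + data)  # reply, then move on
            sel.unregister(sock)
            sock.close()
```

A worker built like this never sits idle waiting on one slow client, which is how a single nginx worker can juggle thousands of concurrent requests.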
<!-- wp:heading {"level":3} -->
<h3>Nginx Installation:</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Let's see how we can install nginx on a development machine. I am using macOS; feel free to install it on Linux or Windows.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>To Install nginx in mac,you need to have <a href="https://brew.sh">Homebrew</a> installed on your machine.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Note: For Ubuntu or Windows, please follow the official <a href="https://www.nginx.com/blog/setting-up-nginx/">installation guide</a></p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>$ brew install nginx
$ nginx -v</code></pre>
<!-- /wp:code -->
<!-- wp:paragraph -->
<p>Now you should be able to see something like this on screen</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":524} -->
<figure class="wp-block-image"><img src="https://cloudnweb.dev/wp-content/uploads/2019/05/Screen-Shot-2019-05-26-at-1.13.48-AM.png" alt="" class="wp-image-524"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>To check that the web server is running, open <strong>http://localhost:8080</strong> in the browser; you should see the nginx default page</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":534} -->
<figure class="wp-block-image"><img src="https://cloudnweb.dev/wp-content/uploads/2019/05/Screen-Shot-2019-05-26-at-1.39.05-AM-1024x461.png" alt="" class="wp-image-534"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>Voila!!! We have successfully installed nginx on the local machine.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Now we will see how to deploy a React application using the nginx server. We will be deploying the <a href="https://github.com/ShanteDenise/React-Random-Quote-Generator">Random Quote Generator</a> application by <a href="https://www.linkedin.com/in/shante-austin/">Shante Austin</a></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Source code : <a href="https://github.com/ShanteDenise/React-Random-Quote-Generator">https://github.com/ShanteDenise/React-Random-Quote-Generator</a></p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Deploy App using nginx server</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>After installing nginx on the machine, we can access the nginx configuration file in the <strong>/usr/local/etc/nginx</strong> location. You can see the files and directories in nginx as follows</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":528} -->
<figure class="wp-block-image"><img src="https://cloudnweb.dev/wp-content/uploads/2019/05/Screen-Shot-2019-05-26-at-1.22.07-AM-1024x278.png" alt="" class="wp-image-528"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>Before configuring the server, we need to build the React application and move the files to the nginx directory.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>On macOS, the default web root for nginx is <strong>/usr/local/var/www</strong>. You need to move the build into that nginx folder.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>$ sudo mv -v <build directory> /usr/local/var/www/demo</code></pre>
<!-- /wp:code -->
<!-- wp:paragraph -->
<p>After that, we need to configure the server in the <strong>nginx.conf</strong> file</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>$ sudo nano nginx.conf</code></pre>
<!-- /wp:code -->
<!-- wp:paragraph -->
<p>Mainly, there are two blocks in the nginx configuration file: the <strong>http</strong> and <strong>server</strong> blocks. An nginx configuration file will have only one http block, and we can create several server blocks inside the http block. We will see the directive and block concepts of nginx in another article.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Add the following code in the configuration file</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>http {
    ...
    server {
        listen 8080;
        server_name localhost;

        location / {
            root /usr/local/var/www/demo;
            index index.html index.htm;
        }
    }
}</code></pre>
<!-- /wp:code -->
<!-- wp:paragraph -->
<p>After that, you need to restart the nginx service.</p>
<!-- /wp:paragraph -->
<!-- wp:code -->
<pre class="wp-block-code"><code>$ sudo brew services restart nginx</code></pre>
<!-- /wp:code -->
<!-- wp:paragraph -->
<p>Finally, open <strong>localhost:8080</strong> in the browser and you will see the application running, something like this.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":539} -->
<figure class="wp-block-image"><img src="https://cloudnweb.dev/wp-content/uploads/2019/05/Screen-Shot-2019-05-26-at-3.08.54-AM-1024x595.png" alt="" class="wp-image-539"/><figcaption>Random Quote Generator</figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>Yayy!! Now the application is running on the nginx web server.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>In the next article, we will see how nginx works and how we can customize the nginx server. Until then, <strong>Happy Coding</strong>!!!</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>To Know more about <a href="https://cloudnweb.dev/category/web-dev/">web development</a></p>
<!-- /wp:paragraph --> | ganeshmani |
114,349 | my project | the logger sample | 0 | 2019-05-25T22:24:14 | https://dev.to/yurieco/my-project-14h2 | replit, java | the logger sample
{% replit @iuriecojocari/reading-file %} | yurieco |
114,529 | Web Development Part 2C: Time for another test! | It's time to do some work! We're going to learn how to make a nav bar look nice! Don't forget to use... | 0 | 2019-05-26T22:06:42 | https://dev.to/tevko/web-development-part-2c-time-for-another-test-51of | ---
title: Web Development Part 2C: Time for another test!
published: True
description:
tags:
cover_image: https://gallery.mailchimp.com/fbd7db24d41cbb15fbb6a072f/images/76567adb-d5bf-4b2b-aa20-ddbbcb8ce243.png
---
It's time to do some work!
We're going to learn how to make a nav bar look nice! Don't forget to use the hashtag #TimTeachesCode on twitter! Remember, you can all contact me whenever you have questions or feedback.
This course is completely free and will be free forever. It takes a bit of time to prepare and support, so if you'd like to contribute, you can do so at https://paypal.me/tevko. Any contribution helps!
This week: Using our new skills to make something.
In the last lesson, we added some CSS to our FAQ page to make it look great!
In this lesson, we're going to take a break from the FAQ page and apply our new skills to a new challenge. We're going to take some raw HTML for a nav item and apply some CSS to make it functional and nice looking.
[Copy (or Fork) the content in this codepen to get started.](https://codepen.io/tevko/pen/joxjZq)
The Rules:
- No Javascript
- You can add HTML, but you can't remove any
- On small screens, a button should appear. Clicking the button should cause the nav to fly out from the left ([like this](https://www.jqueryscript.net/images/Responsive-Off-canvas-Navigation-with-Gooey-Transition-Effect.jpg))
- On large screens, the nav should stick to the top of the screen
- Use the "[checkbox hack](https://css-tricks.com/the-checkbox-hack/)" to enable the fly out nav when the button is clicked.
- Add CSS to not only make it functional, but look nice as well!
A few tips:
- Don't overdesign. Focus on making the nav work on small screens and with the checkbox hack before anything else!
- This exercise is hard. Take your time and make sure you understand the exercise before you get started. Reach out if you have questions!
- While you may be tempted to add two navs and hide one on small / large screen sizes, don't do it! Duplicate navigations are bad for accessibility!
Homework:
Reply to the email with a link to the redesigned nav and a brief summary of what you've learned about CSS design. Also include your plans to continue studying CSS and what else you think is possible with the checkbox hack!
Extra points.
Add transitions and keyframe animations to the nav interactions!
| tevko | |
114,540 | 30 days of AWS (Day 2) | 30 days of AWS (Day 2) | 0 | 2019-05-26T23:51:24 | https://dev.to/gameoverwill/30-days-of-aws-day-2-50dj | aws, challenge, 30daysofaws | ---
title: 30 days of AWS (Day 2)
published: true
description: 30 days of AWS (Day 2)
tags: AWS, Challenge, 30DaysOfAWS
cover_image: https://thepracticaldev.s3.amazonaws.com/i/xk3wcpnc2h8nk8oy9xh6.png
---
Hello guys! Today I was studying something that has confused me a lot: VPCs.
Basically, this topic was very difficult for me because it's related to networking, IPs, and connectivity, and to be honest I'm pretty bad at these things.
Today I didn't finish the topic because I needed to watch the videos twice, take a lot of notes, and read the official docs, but let me share with you what I've learned.
**1. Overview of the network services that AWS provides and the global infrastructure**: Here I read about what an availability zone is, what an AWS region is, and what a data center is.
*AWS region*: A group of AWS resources located in a specific geographical location. The main idea is that every customer or user can select the location closest to them, reducing latency.
*Availability zone (AZ)*: A geographically isolated zone within a region that houses AWS resources. Basically, these are different data centers. Multiple AZs in each region provide redundancy for the AWS resources in that region.
*Data center*: The physical hardware.
**2. VPC Basics**: From the Amazon docs: "Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own datacenter, with the benefits of using the scalable infrastructure of AWS." If you want to read more, click [here](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html).
The course gives one example that clarified a lot for me about the VPC structure. First, let's look at this diagram.

They say that a VPC is like the internet connection we have at home, which is true and something I hadn't seen before; see the image below:

So if you look, our laptops and mobile phones are like EC2 instances. The firewall is like a Network Access Control List (NACL). The router or switch is like an AWS Route Table. And our modem is like the Internet Gateway (IGW). This comparison is pretty good because if you disconnect one part, you won't have internet and you'll be disconnected from the world; the same thing will happen to your app if you switch off one of the pieces described above.
Ok, so today my progress was slow, but I don't mind because I'm learning a lot. Tomorrow I will continue studying the VPC world.
Thanks for reading.
| gameoverwill |
146,652 | How to set up a REST Service or a Web Application in Django | Reading this article in order to learn more about How to set up a REST Service or a Web Application in Django. | 0 | 2019-07-23T06:04:46 | https://dev.to/divyeshaegis/how-to-set-up-a-rest-service-or-a-web-application-in-django-33gp | django, webapplication, python, restapi | ---
title: How to set up a REST Service or a Web Application in Django
published: true
description: Reading this article in order to learn more about How to set up a REST Service or a Web Application in Django.
tags: Django, Web Application, Python, REST API
---
<a href='https://postimages.org/' target='_blank'><img src='https://i.postimg.cc/wxybCkND/How-to-set-up-a-REST-Service-or-a-Web-Application-in-Django.jpg' border='0' alt='How-to-set-up-a-REST-Service-or-a-Web-Application-in-Django'/></a>
<strong>Introduction</strong>:
Django is a very versatile framework, primarily used for developing web applications, but it is also widely used in creating mobile app backends and REST APIs, among other things. Here we will take a quick look at how to make these things possible using Django.
As you probably know, REST stands for "Representational State Transfer". Basically, what happens is that a user (or some software agent on one end) provides some input (or performs some activity), and the result of those inputs (or activities) is sent to the server side using a protocol that allows a request to be sent to the server (in most cases the HTTP protocol is used, but it doesn't need to be HTTP, as long as it can support a request/response scheme). The server, on receiving the request, makes appropriate changes to the state of the system (hence we call it "State Transfer").
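This request/response cycle can be seen end to end with nothing but the Python standard library. The sketch below is not Django; it is a bare HTTP server whose state changes with every POST request it receives. The endpoint path and the `STATE` dictionary are invented purely for illustration.

```python
# A minimal "state transfer" round trip: the client's request changes
# server-side state, and the new state comes back in the response.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {"count": 0}  # server-side state that requests will modify

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        STATE["count"] += 1           # the request changes the system's state
        body = json.dumps(STATE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):     # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/counter" % server.server_port
resp = urllib.request.urlopen(urllib.request.Request(url, method="POST"))
result = json.loads(resp.read())
server.shutdown()
print(result)  # → {'count': 1}
```

Django wraps exactly this cycle for you: it parses the incoming request into an HttpRequest object, routes it to your code, and sends your HttpResponse back.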
<strong>A Brief Discussion on How to Create a Django Project</strong>:
Django runs on the HTTP protocol, and hence it is very handy for creating a REST API or a RESTful application. It allows the user to provide inputs through a web interface (or any interface that is capable of sending an HTTP request object to the Django application). In our examples in this document, we will be considering web application interfaces, but you can easily extend them to a mobile app or any other app that can send a similar HTTP request.
In order to understand how to do these things, we need to know a bit of Django here. This is my first post in a series of 5 posts on the same topic, so in this document, I will acquaint you with certain Django files that are of utmost importance, along with the basic structure of a Django app. (Please note that 'Django' is pronounced as 'jango', NOT D-jango. The first character 'D' is silent.)
A 'Django project' is composed of a number of 'Django applications' that are stitched together with some logic. The first thing you do when creating a Django project is install Django into your Python environment. This is usually done by running the command "pip install Django==<version number>". You may skip the version number part, in which case the latest version of Django that can be handled by your OS and the Python distribution installed on your system will be installed. Fair enough.
Once the installation is successfully complete, you will find a file named "django-admin" on your PATH. (Run "which django-admin" to see where it exists; on my system it is in "/usr/bin/django-admin".) To create a project, you need to execute the following command in a directory of your choice. Preferably, create a new directory, 'cd' into it, and run the following command:
<pre class="highlight python"><code>#> django-admin startproject <your project name></code>
</pre>
You must provide your project name to create the Django project. For example, let us assume I am creating a project named "testyard". In order to create my project, I need to execute the following command:
<pre class="highlight python"><code>
#> django-admin startproject testyard</code></pre>
The above command creates the following directory structure in the directory where you executed the above mentioned command.
<pre class="highlight sql"><code>
testyard/
manage.py
testyard/
__init__.py
settings.py
urls.py
wsgi.py
</code>
</pre>
<strong>A Discussion on the Files Created Above</strong>:
The first file, manage.py, is very important. This is the file that will eventually assist you to create applications, run the Django test web server, run Django management tasks (not covered in this document, but we will take a look at them in a later article), and a host of other activities. The __init__.py file at the next level marks the directory as a Python package. Next come the two most important files: settings.py and urls.py. We will discuss the settings.py file first and urls.py next.
settings.py: This file contains the settings for the Django applications to work. Some of the more important config params in this file are the database connection parameters (username, password, database name, DB host, port, engine, etc.), static files location, template loaders, middleware classes (we will be discussing these in another article, but just keep in mind that these are the programs that interact with the request object before it reaches the "view" functions, so they are capable of changing the request before it is processed), installed apps, root urlconf (we will discuss this later in this article), etc.
You are also free to write your own application specific settings file in the same location. However, you need to put a different name to it and then import settings.py in it. There are a load of other parameters in settings.py and I would request you to go through a settings.py file in the location “https://github.com/supmit13/python-ff-ext/blob/master/urldbapp/settings.py” and figure out the parameters.
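To make those parameter names concrete, here is a pared-down sketch of what a settings.py for the "testyard" example project might contain. The database credentials are placeholders (not real values), and a real settings.py generated by Django has many more entries.

```python
# Hypothetical, pared-down settings.py sketch for the "testyard" project.
DEBUG = True

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",  # DB engine/driver
        "NAME": "testyard_db",                      # database name
        "USER": "testyard_user",                    # placeholder credentials
        "PASSWORD": "change-me",
        "HOST": "127.0.0.1",
        "PORT": "5432",
    }
}

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.staticfiles",
    "savewhatyoulookedat",   # your own application goes here
]

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
]

ROOT_URLCONF = "testyard.urls"   # the "root urlconf" mentioned above
STATIC_URL = "/static/"          # static files location
```

Since settings.py is plain Python, a custom settings module can simply import and override these names, which is exactly the pattern described above.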
urls.py: This file contains the map of the path requested by a client (browser or a bot) to the specific view function that needs to be called when the given path is requested by an HTTP request. You will find a sample of such a urls.py file here: “https://github.com/supmit13/python-ff-ext/blob/master/urldbapp/urls.py”. This file is also known as the “urlconf”.
The file named wsgi.py exists to allow the applications you are creating to run on uwsgi and nginx (the production environment stuff). We will be taking a look at this file later in this article.
Once these files have been created, you start creating applications inside the project. But first, you need to check that the procedure above worked. For that purpose, you need to run the application's development server.
To do this, you need to run the following command on the command prompt. Note that in order to run this, you need to be in the same directory where “manage.py” exists.
<pre class="highlight python"><code>#> python manage.py runserver 0.0.0.0:8080</code></pre>
This command will start the development server. Now you should be able to go to a browser, type in localhost:8080/login/, and see the login screen if urls.py has an entry for that URL path. Alternatively, you may just type localhost:8080/ to see if your Django server is running.
So that makes you capable enough to start creating a REST API using python/Django. In the subsequent sections we will demonstrate how to write code to create the REST API.
The views.py file: Django follows a variation of the MVC design pattern, and it is normally referred to as the MVT pattern. The 'M' stands for the model (the DB schema), V stands for views.py, which in an MVC framework is the controller part, while the 'T' stands for templates, which in MVC framework would be the view component.
Since we will be looking at the REST API first, we will concentrate on views.py. First, have a look at the handlers.py file at the following link: "https://github.com/supmit13/python-ff-ext/blob/master/urldbapp/savewhatyoulookedat/handlers.py". In this program, the code that would normally be in views.py has been put into handlers.py, but for all practical purposes they work in the same manner. You may consider the code in handlers.py to be that of views.py in this example. (Actually, this was some code I wrote long back when I was just starting to dabble in Django. I made it partly to see if Django was flexible enough, and found that it certainly was.)
<strong>The Structure and Behaviour of Django</strong>:
The "views.py" file will contain one or more functions (note that it may also contain classes, and we will take a look at classes in views.py in another post), and each function takes an argument named "request". The "request" is basically an instance of the HttpRequest class (defined by Django). All view functions in a Django app consume an instance of this HttpRequest class and return an instance of the HttpResponse class.
Why do we need multiple functions in the views.py file? This is because you might want to serve multiple pages or responses, each of which has a different URL path. For example, you might want to serve a login page, a registration page, an activity to check the login credentials of the user who is trying to log into your app, and a dashboard page. Each of these pages will have a different URL path, say, http://mywebsite.com/login for the login page, http://mywebsite.com/registration for the registration page, and so on. So each of these URLs will need a separate function to handle the request. Hence, we need one function for each of these actions in the views.py file.
How do we associate each of the activities mentioned above with a specific views.py function? This is where the urls.py file comes into play. The urls.py has a map of each URL path to a specific function in views.py of a particular Django app (Remember we mentioned in the beginning that a Django project is composed of one or more Django apps. We will get to the apps part in a moment). An urls.py looks something like the following:
<pre class="highlight sql"><code>
urlpatterns = patterns('',
(r'^time/$', current_datetime),
(r'^savewhatyoulookedat/login/$', userLogin),
(r'^savewhatyoulookedat/register/$', userRegister),
(r'^savewhatyoulookedat/$', saveURL),
(r'^savewhatyoulookedat/logout/$', userLogout),
(r'^savewhatyoulookedat/downloadPlugin/$', firefoxPluginDownload),
(r'^savewhatyoulookedat/managedata/$', manageData),
(r'^savewhatyoulookedat/commandHandler/$', executeCommand),
(r'^savewhatyoulookedat/showimage/$', showImage),
(r'^savewhatyoulookedat/search/$', searchURL),
# (r'^admin/doc/', include('django.contrib.admindocs.urls')),
# Uncomment the next line to enable the admin:
# (r'^admin/', include(admin.site.urls)),
)
</code>
</pre>
Basically, as you can see, the mapping is actually between a regular expression that should match the URL path and a function in the views.py file. As you create more activity handlers in your views.py file, you keep adding an entry for each of them in the urls.py file. For a Django project with multiple applications, the urls.py file might also look like the following:
<pre class="highlight sql"><code>
urlpatterns += patterns('',
url(r'^%s$'%mysettings.REGISTER_URL, 'skillstest.Auth.views.register', name='newuser'),
url(r'^%s$'%mysettings.DASHBOARD_URL, 'skillstest.views.dashboard', name='dashboard'),
url("%s$"%mysettings.LOGIN_URL, 'skillstest.Auth.views.login', name='login'),
url(r'%s$'%mysettings.MANAGE_TEST_URL, 'skillstest.Tests.views.manage', name='managetests'),
url(r'%s$'%mysettings.CREATE_TEST_URL, 'skillstest.Tests.views.create', name='createtests'),
url(r'%s$'%mysettings.EDIT_TEST_URL, 'skillstest.Tests.views.edit', name='edittests'),
url(r'%s$'%mysettings.ABOUTUS_URL, 'skillstest.views.aboutus', name='aboutus'),
url(r'%s$'%mysettings.HELP_URL, 'skillstest.views.helpndocs', name='helpndocs'), url(r'%s$'%mysettings.CAREER_URL, 'skillstest.views.careers', name='careers'),
... ... ...
</code>
</pre>
'Auth' and 'Tests' are the names of the applications in the "skillstest" project. Don't worry about the variables in uppercase; they are defined elsewhere and are of no consequence to our example here.
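The core idea of the urlconf — try each regular expression against the requested path and call the first matching function — can be mimicked in a few lines of plain Python. This is only an analogy of what Django's URL resolver does; the view names and the dict-shaped "request" are made up for illustration.

```python
import re

# Hypothetical stand-ins for view functions: take a "request", return a "response".
def login(request):
    return "login page for %s" % request["path"]

def dashboard(request):
    return "dashboard page"

# The urlconf: ordered (regex, view) pairs, like Django's urlpatterns.
urlpatterns = [
    (r"^/login/$", login),
    (r"^/dashboard/$", dashboard),
]

def dispatch(request):
    # Try each pattern in order and call the first matching view,
    # which is roughly what Django's URL resolver does for you.
    for pattern, view in urlpatterns:
        if re.match(pattern, request["path"]):
            return view(request)
    return "404 not found"

print(dispatch({"path": "/login/"}))    # → login page for /login/
print(dispatch({"path": "/missing/"}))  # → 404 not found
```

In real Django, the "request" is an HttpRequest object and the return values are HttpResponse objects, but the routing logic follows this same shape.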
So, now let us see how to create an application inside a project. We do that by executing the following command:
<pre class="highlight python">
<code>
python manage.py startapp <our-app-name></code>
</pre>
For example, if our app name is “Letsplay”, then we would run
<pre class="highlight python"><code>python manage.py startapp Letsplay</code></pre>
The above command creates a directory structure like the following:
<pre class="highlight python">
<code>
Letsplay/
__init__.py
admin.py
apps.py
migrations/
__init__.py
models.py
tests.py
views.py
</code>
</pre>
In the above structure, we will focus mostly on the views.py and the models.py files. However, we will also touch upon the others first.
The "admin.py" file is required if you want to customize your admin panel in Django. Normally, if you access the URL "http://localhost:8080/admin", you will see an admin panel. It displays all the models you have (we will discuss models in just a bit), the config settings of your Django project (in read-only mode, of course), etc.
The “apps.py” file allows the creator of the Django app to put in some application specific parameters. Each app in a Django project has its own apps.py file.
The "tests.py" file allows the app creator to write tests for the app. This file needs to conform to a certain structure. It needs to define a class for a specific test case. This class needs to have a method named "setUp", and the tests themselves are defined as methods of that class. Unless you are a Django purist, you won't use this file to define all your tests. Normally, in real-life scenarios, we have an application created using Django, another component created using some other technology, and several other components fitted together to work as a service. In such cases, we need to write tests to check the functionality of the entire scheme of things rather than just the Django part. Hence, it is almost customary to create tests as a separate suite using Python or some other language like Perl or Ruby (or whatever the tester prefers).
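Since Django's TestCase builds on the standard library's unittest module, the shape of a tests.py can be sketched with plain unittest (no Django needed to run it). The class and test names below are invented for illustration:

```python
# The structure of a tests.py, sketched with plain unittest.
# A real Django tests.py would subclass django.test.TestCase instead.
import unittest

class TopicTests(unittest.TestCase):
    def setUp(self):
        # Runs before every test; a Django test would create DB fixtures here.
        self.topics = ["python", "django"]

    def test_topic_count(self):
        self.assertEqual(len(self.topics), 2)

    def test_topic_membership(self):
        self.assertIn("django", self.topics)

# Run the suite programmatically (Django would drive this via "manage.py test").
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TopicTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Whether you keep tests here or in an external suite, the setUp-plus-test-methods shape stays the same.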
By and large, any application you write (in Python using Django or in any other language and framework) eventually ends up interacting with a database somewhere down the line. Django apps are no exception. This is where the models.py file steps in. The "models.py" file basically provides you with an ORM (Object-Relational Mapping) scheme in the Django app. Hence, for every table in your preferred database, you have a class defined for it in the models.py file. It looks something like this:
<pre class="highlight sql"><code>
from django.db import models
from django.contrib.auth.models import User  # referenced by ForeignKey fields below
import os, sys, re, time, datetime
import inspect
"""
'Topic' is basically category or domain.
"""
class Topic(models.Model):
topicname = models.CharField(max_length=150)
user = models.ForeignKey(User, null=False)
createdate = models.DateField(auto_now=True)
isactive = models.BooleanField(default=True)
class Meta:
verbose_name = "Topics Table"
db_table = 'Tests_topic'
def __unicode__(self):
return "%s"%(self.topicname)
class Subtopic(models.Model):
subtopicname = models.CharField(max_length=150)
subtopicshortname = models.CharField(max_length=50)
topic = models.ForeignKey(Topic, blank=False, null=False)
createdate = models.DateField(auto_now=True)
isactive = models.BooleanField(default=True)
class Meta:
verbose_name = "Subtopics Table"
db_table = 'Tests_subtopic' # Name of the table in the database
def __unicode__(self):
return "%s (child of %s)"%(self.subtopicname, self.topic.topicname)
class Session(models.Model):
sessioncode = models.CharField(max_length=50, unique=True)
status = models.BooleanField(default=True) # Will be 'True' as soon as the user logs in, and will be 'False' when user logs out.
# The 'status' will automatically be set to 'False' after a predefined period. So users will need to login again after that period.
# The predefined value will be set in the settings file skills_settings.py. (skills_settings.SESSION_EXPIRY_LIMIT)
user = models.ForeignKey(User, null=False, blank=False, db_column='userid_id')
starttime = models.DateTimeField(auto_now_add=True) # Should be automatically set when the object is created.
endtime = models.DateTimeField(default=None)
sourceip = models.GenericIPAddressField(protocol='both', help_text="IP of the client's/user's host")
istest = models.BooleanField(default=False) # Set it to True during testing the app.
useragent = models.CharField(max_length=255, default="", help_text="Signature of the browser of the client/user") # Signature of the user-agent to guess the device used by the user.
# This info may later be used for analytics.
class Meta:
verbose_name = "Session Information Table"
db_table = 'Auth_session'
def __unicode__(self):
return self.sessioncode
def isauthenticated(self):
if self.status and self.user.active:
return self.user
else:
return None
def save(self, **kwargs):
super(Session, self).save(kwargs)
</code>
</pre>
The attributes in the classes are the fields in the respective tables in the database. The name of the DB table is defined in the “class Meta” of each of the Topic and Subtopic classes with the attribute named “db_table”. The database associated with these tables is defined in the settings.py file (remember when we discussed settings.py file attributes?). For the datatypes used in the models.py file, you need to look up the Django documentation as there are quite a few datatypes and relationships and they cannot be dealt with here. In fact, the documentation for them is quite substantial. However, we have used only a few of them above and they are quite self-explanatory.
Actually, Django is quite popular because of 2 reasons.
1. It provides the developer with all the boilerplate code, so the coder doesn't need to write all the boring stuff.
2. It provides the coder with the ORM, so retrieving or setting a value in a certain row of a specific table in the DB is quite easy. That is the "up side" of it. There is a "down side" too. When you use the ORM, you do not write SQL statements yourself, and hence if the operation is a little complex, the ORM can become quite inefficient. With SQL statements you can do some optimization to make the statement run faster, but with the ORM there is no such possibility. For this reason, Django offers a way out: you can create "raw" SQL statements to query your DB, though this is rarely used by most developers. You should use "raw" SQL statements only when you see that the ORM way of manipulating the DB is distinctly inefficient.
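The trade-off can be made concrete with a self-contained sqlite3 sketch: a tiny hand-rolled class stands in for what an ORM model does (rows in, objects out), next to the "raw SQL" escape hatch where you write — and can hand-optimize — the query yourself. None of this is Django's actual implementation; the table name just echoes the Tests_topic table from the models above.

```python
# Illustration of ORM-style access vs. raw SQL, using only sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Tests_topic (id INTEGER PRIMARY KEY, topicname TEXT)")
conn.executemany("INSERT INTO Tests_topic (topicname) VALUES (?)",
                 [("python",), ("django",), ("nginx",)])

class Topic:
    """Hand-rolled stand-in for an ORM model: rows come back as objects."""
    def __init__(self, id, topicname):
        self.id, self.topicname = id, topicname

    @classmethod
    def objects_filter(cls, **kw):
        # Roughly what Topic.objects.filter(topicname=...) does in Django:
        # build SQL from keyword arguments and wrap each row in an object.
        # (A real ORM also validates the column name; this toy does not.)
        col, val = next(iter(kw.items()))
        rows = conn.execute(
            "SELECT id, topicname FROM Tests_topic WHERE %s = ?" % col, (val,))
        return [cls(*row) for row in rows]

# ORM-style access: no SQL visible at the call site.
hits = Topic.objects_filter(topicname="django")

# "Raw" SQL escape hatch: you control (and can optimize) the exact query.
raw = conn.execute(
    "SELECT topicname FROM Tests_topic WHERE topicname LIKE 'd%'").fetchall()
```

The ORM call is convenient but generates its SQL mechanically; the raw query lets you shape the statement yourself when the generated one is too slow.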
Anyway, let us now move on to the final stages of this document. This happens to be the most important stage in the creation of a REST application. We will now take a look at the views.py file. Please refer to the example code below:
<pre class="highlight python">
<code>
# User login handler
def login(request):
    if request.method == "GET":
        msg = None
        if request.META.has_key('QUERY_STRING'):
            msg = request.META.get('QUERY_STRING', '')
        if msg is not None and msg != '':
            msg_color = 'FF0000'
            msg = skillutils.formatmessage(msg, msg_color)
        else:
            msg = ""
        # Display login form
        curdate = datetime.datetime.now()
        tmpl = get_template("authentication/login.html")
        c = {'curdate' : curdate, 'msg' : msg, 'register_url' : skillutils.gethosturl(request) + "/" + mysettings.REGISTER_URL }
        c.update(csrf(request))
        cxt = Context(c)
        loginhtml = tmpl.render(cxt)
        for htmlkey in mysettings.HTML_ENTITIES_CHAR_MAP.keys():
            loginhtml = loginhtml.replace(htmlkey, mysettings.HTML_ENTITIES_CHAR_MAP[htmlkey])
        return HttpResponse(loginhtml)
    elif request.method == "POST":
        username = request.POST.get('username') or ""
        password = request.POST.get('password') or ""
        keeploggedin = request.POST.get('keepmeloggedin') or 0
        csrfmiddlewaretoken = request.POST.get('csrfmiddlewaretoken', "")
        userobj = authenticate(username, password)
        if not userobj:  # Incorrect password - return user to login screen with an appropriate message.
            message = error_msg('1002')
            return HttpResponseRedirect(skillutils.gethosturl(request) + "/" + mysettings.LOGIN_URL + "?msg=" + message)
        else:  # user will be logged in after checking the 'active' field
            if userobj.active:
                sessobj = Session()
                clientip = request.META['REMOTE_ADDR']
                timestamp = int(time.time())
                # timestamp will be a 10 digit string.
                sesscode = generatesessionid(username, csrfmiddlewaretoken, clientip, timestamp.__str__())
                sessobj.sessioncode = sesscode
                sessobj.user = userobj
                # sessobj.starttime should get populated on its own when we save this session object.
                sessobj.endtime = None
                sessobj.sourceip = clientip
                if userobj.istest:  # This session is being performed by a test user, so this must be a test session.
                    sessobj.istest = True
                elif mysettings.TEST_RUN:  # This is a test run as mysettings.TEST_RUN is set to True
                    sessobj.istest = True
                else:
                    sessobj.istest = False
                sessobj.useragent = request.META['HTTP_USER_AGENT']
                # Now save the session...
                sessobj.save()
                # ... and redirect to landing page (which happens to be the profile page).
                response = HttpResponseRedirect(skillutils.gethosturl(request) + "/" + mysettings.LOGIN_REDIRECT_URL)
                response.set_cookie('sessioncode', sesscode)
                response.set_cookie('usertype', userobj.usertype)
                return response
            else:
                message = error_msg('1003')
                return HttpResponseRedirect(skillutils.gethosturl(request) + "/" + mysettings.LOGIN_URL + "?msg=" + message)
    else:
        message = error_msg('1001')
        return HttpResponseRedirect(skillutils.gethosturl(request) + "/" + mysettings.LOGIN_URL + "?msg=" + message)

# -------------------------------------------------------------------------------

# User registration handler
def register(request):
    privs = Privilege.objects.all()
    privileges = {}
    for p in privs:
        privileges[p.privname] = p.privdesc
    if request.method == "GET":  # display the registration form
        msg = ''
        if request.META.has_key('QUERY_STRING'):
            msg = request.META.get('QUERY_STRING', '')
        if msg is not None and msg != '':
            var, msg = msg.split("=")
            for hexkey in mysettings.HEXCODE_CHAR_MAP.keys():
                msg = msg.replace(hexkey, mysettings.HEXCODE_CHAR_MAP[hexkey])
            msg = "<p style=\"color:#FF0000;font-size:14;font-face:'helvetica neue';font-style:bold;\">%s</p>" % msg
        else:
            msg = ""
        curdate = datetime.datetime.now()
        (username, password, password2, email, firstname, middlename, lastname, mobilenum) = ("", "", "", "", "", "", "", "")
        tmpl = get_template("authentication/newuser.html")
        #c = {'curdate' : curdate, 'msg' : msg, 'login_url' : skillutils.gethosturl(request) + "/" + mysettings.LOGIN_URL, 'register_url' : skillutils.gethosturl(request) + "/" + mysettings.REGISTER_URL, 'privileges' : privileges, 'min_passwd_strength' : mysettings.MIN_ALLOWABLE_PASSWD_STRENGTH, }
        c = {'curdate' : curdate, 'msg' : msg, 'login_url' : skillutils.gethosturl(request) + "/" + mysettings.LOGIN_URL, 'hosturl' : skillutils.gethosturl(request),
             'register_url' : skillutils.gethosturl(request) + "/" + mysettings.REGISTER_URL,
             'min_passwd_strength' : mysettings.MIN_ALLOWABLE_PASSWD_STRENGTH, 'username' : username, 'password' : password, 'password2' : password2,
             'email' : email, 'firstname' : firstname, 'middlename' : middlename, 'lastname' : lastname, 'mobilenum' : mobilenum,
             'availabilityURL' : mysettings.availabilityURL, 'hosturl' : skillutils.gethosturl(request), 'profpicheight' : mysettings.PROFILE_PHOTO_HEIGHT, 'profpicwidth' : mysettings.PROFILE_PHOTO_WIDTH }
        c.update(csrf(request))
        cxt = Context(c)
        registerhtml = tmpl.render(cxt)
        for htmlkey in mysettings.HTML_ENTITIES_CHAR_MAP.keys():
            registerhtml = registerhtml.replace(htmlkey, mysettings.HTML_ENTITIES_CHAR_MAP[htmlkey])
        return HttpResponse(registerhtml)
    elif request.method == "POST":  # Process registration form data
        username = request.POST['username']
        password = request.POST['password']
        password2 = request.POST['password2']
        email = request.POST['email']
        firstname = request.POST['firstname']
        middlename = request.POST['middlename']
        lastname = request.POST['lastname']
        sex = request.POST['sex']
        usertype = request.POST['usertype']
        mobilenum = request.POST['mobilenum']
        profpic = ""
        #userprivilege = request.POST['userprivilege']
        csrftoken = request.POST['csrfmiddlewaretoken']
        message = ""
        # Validate the collected data...
        if password != password2:
            message = error_msg('1011')
        elif mysettings.MULTIPLE_WS_PATTERN.search(username):
            message = error_msg('1012')
        elif not mysettings.EMAIL_PATTERN.search(email):
            message = error_msg('1013')
        elif mobilenum != "" and not mysettings.PHONENUM_PATTERN.search(mobilenum):
            message = error_msg('1014')
        elif sex not in ('m', 'f', 'u'):
            message = error_msg('1015')
        elif usertype not in ('CORP', 'CONS', 'ACAD', 'CERT'):
            message = error_msg('1016')
        elif not mysettings.REALNAME_PATTERN.search(firstname) or not mysettings.REALNAME_PATTERN.search(lastname) or not mysettings.REALNAME_PATTERN.search(middlename):
            message = error_msg('1017')
        ....
        ....
        return HttpResponse(html)
</code>
</pre>
The above code has 2 functions, and we will discuss them shortly. But before that, please take a look at the corresponding urls.py file for these 2 functions:
<pre class="highlight python">
<code>
urlpatterns = patterns('',
    (r'^savewhatyoulookedat/login/$', login),
    (r'^savewhatyoulookedat/register/$', register)
)
</code>
</pre>
As you can see above, the 'login' function will be called when you try to access the following URL from your browser (or any other web client):
http://localhost:8080/savewhatyoulookedat/login/
The 'register' function will be called when you try to access the following URL:
http://localhost:8080/savewhatyoulookedat/register/
Note how the 'request' object has been used along with some other objects that are the product of Django ORM. For example, in the login function, there is an instance of the Session model. The DB table behind the Session model is named “Auth_session” and it is specified in the models.py file above. Thus, whenever a user hits one of the URLs mentioned above, the view runs some DB queries and figures out what response to send to the client. This is how a RESTful application should work, and as you can see, Django really makes it easy to develop one.
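One helper the login view relies on, `generatesessionid`, is not shown in the listing above. A minimal sketch of how such a helper could be written (an assumption on my part; the original implementation is not reproduced here) is to hash the username, CSRF token, client IP, and timestamp together:

```python
import hashlib

def generatesessionid(username, csrftoken, clientip, timestamp):
    # Combine the request-specific values and hash them, so the resulting
    # session code is hard to guess and effectively unique per login.
    raw = "%s:%s:%s:%s" % (username, csrftoken, clientip, timestamp)
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

# The view passes the timestamp as a 10-digit string, e.g. str(int(time.time())).
code = generatesessionid("alice", "f00dcafe", "127.0.0.1", "1565000000")
print(len(code))  # 64 hex characters
```

The same code is then stored in the `sessioncode` column and set as a cookie, which is how subsequent requests are tied back to the session row.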
<strong>Conclusion</strong>:
<a href="https://www.nexsoftsys.com/technologies/python-development-services.html">Python Django development is a very extensive framework</a>, and since we were discussing REST applications, I deliberately left out Django templates. I will be explaining templates in another post, but since REST apps do not always need an HTML interface, I am skipping it for now. Also, we have just touched on some of the concepts of Django, and there is not enough room to discuss all of them in detail here. I would suggest that you go through this post, try and understand as best as you can, and then take a look at the official Django documentation for more details on the topic.
Hope you find this useful.
| divyeshaegis |
146,696 | Need Step by Step Guide for Road to Remote work | A post by jawwad22 | 0 | 2019-07-23T08:45:19 | https://dev.to/jawwad22/need-step-by-step-guide-for-road-to-remote-work-14ln | discuss | ---
title: Need Step by Step Guide for Road to Remote work
published: true
description:
tags: #discuss
---
| jawwad22 |
146,819 | Intro to Qvault | Qvault is a new opensource password manager, with an emphasis on user experience and customization op... | 0 | 2019-07-24T17:57:56 | https://qvault.io/2019/07/05/intro-to-qvault/ | passwords, qvaultcards, uncategorized, crypto | ---
title: Intro to Qvault
published: true
tags: Passwords,Qvault Cards,Uncategorized,crypto
canonical_url: https://qvault.io/2019/07/05/intro-to-qvault/
---
Qvault is a new [opensource](https://github.com/Q-Vault/qvault) password manager, with an emphasis on user experience and customization options.
Many who stumble upon Qvault ask,
> “How is this different from other password managers?”
In this article we explain what sets Qvault apart.
## 1. Open Source
<figcaption>Opensource.org</figcaption>
Many password managers that exist today do not publish their code for the public to review and collaborate with. This is a **huge security and privacy risk** because the user and the community can’t:
1. Ensure owners of the app aren’t **stealing user information**
2. Peer-review the code and check for **vulnerabilities**
We assert that open source is a necessity for any password manager.
## 2. Physical Qvault Cards
No other password manager gives users the option to upgrade security and recoverability via physical cards. Qvault cards (once released) will come in packs of two and are the size of a credit card. There will be a **Key Card** and a **Recovery Card**.
The **Key Card will contain a random and unique 256-bit key** in the form of a **QR code**. The key is used in addition to the user’s master password to encrypt their vault. This provides an additional layer of security because an attacker would need to learn the user’s master password as well as obtain the key card.
The **Recovery Card** contains the same 256-bit key, as well as blank spaces on the back where the user will write a recovery code. The recovery code is a **16 character code generated by the app**. The QR code is used in case the user loses their Key Card, and the recovery code is used to reset the user’s master password in case they forget it.
<figcaption>Prototype Qvault Cards</figcaption>
Qvault cards are still in the prototyping phase, but will soon be listed for sale on [https://qvault.io](https://qvault.io) The sale of the cards will help fund the open source development and ensure future maintenance of the app.
## 3. User Friendly
Open source apps have always struggled with user experience and user interface. Qvault has one of the best software designers in the business with many years of experience, DJ Shott. DJ is leading Qvault to be the easiest and best looking password manager today.
Qvault is also built on top of [Electron](https://electronjs.org/), [Node](https://nodejs.org/), and [Vue](https://vuejs.org/) which makes it incredibly simple for us to develop engaging and beautiful user experiences.
## 4. Customization
Qvault allows the user to customize an experience that best fits their security and usability needs. Some currently supported options are:
- Users can use a master **password OR passphrase**.
- Users can use a custom master password/passphrase or generate a random one.
- A **virtual keyboard** is included that can optionally be used to bypass key logger malware.
- **Offline** use is fully supported, as well as the option to automatically backup encrypted vault files to the **Qvault cloud**.
- Vaults can be encrypted with just the master password/passphrase, or **dual encrypted** with a Qvault Key Card
- A recovery code can optionally be created using a Qvault Recovery Card to restore access in case a password is forgotten.
- All updates are prompted, never automatic. This is important for power users that want to verify the source for each update.
## 5. Active Maintenance
<figcaption> <a href="https://github.com/Q-Vault/qvault">https://github.com/Q-Vault/qvault</a> </figcaption>
We have an ambitious project roadmap. Here are some of the features we would like to implement soon:
- Generate “[cold wallets](https://en.bitcoin.it/wiki/Cold_storage)” within Qvault for various cryptocurrencies like [Bitcoin](https://bitcoin.org).
- Share secrets trustlessly with other Qvault users.
- Give access to sections of vaults to other users.
- Support generation of common secrets like PGP or SSH keys.
- Build a mobile app that integrates with the desktop version
- … And more. Let us know what you would like to see in the app, or feel free to contribute to the project on [github](https://github.com/Q-Vault/qvault)!
Follow us on medium! [https://medium.com/qvault](https://medium.com/qvault)
By Lane Wagner | wagslane |
151,715 | Preparing USB for Windows | NOTE: Please Backup your important data on a USB or an External Hard Drive and in case of your Deskto... | 0 | 2019-08-02T19:37:42 | https://dev.to/th3n00bc0d3r/preparing-usb-for-windows-2p3 | tutorial, beginners | _NOTE: Please Backup your important data on a USB or an External Hard Drive and in case of your Desktop, I highly recommend you disconnect any secondary or other hard drives, that might include important data._
First of all we need a 8GB USB Pen Drive. The Pen drive will be used as a virtual DVD ROM, where we will burn the image of the operating system in this case Windows 10. Now we will be using windows 10 officially available from the Microsoft Website.
Now go to the following URL
[https://www.microsoft.com/en-us/software-download/windows10](https://www.microsoft.com/en-us/software-download/windows10)
If you open this in Chrome or Firefox, we will be unable to get the ISO link, which Microsoft does not share there. To get the link, which stays active for 24 hours for each user, we need to open this in **_Internet Explorer_**. So open the link in Internet Explorer.

Once you're in Internet Explorer, click on the cog in the upper right corner of the screen, which brings up the settings menu, and from the settings menu select F12 Developer Tools.

Once you select the F12 Developer Tools, a window will cascade at the bottom, with the default DOM Explorer tab opened. On that window, the last tab will be Emulation. Click on Emulation.

Once in the Emulation tab, you will find a dropdown by the name of User agent string; click on the dropdown.

From the dropdown, scroll down to find Apple Safari (iPad). Once you select it, the page will refresh.

As the page refreshes, the browser is now making the Microsoft website believe that you are using an Apple iPad and browsing with the Safari browser. This will make the Microsoft website ask you to select an edition. Once you click the dropdown, as of now you will find Windows 10 May 2019 Update and below it Windows 10; select that by clicking it. Next it will ask you for the language. I selected English; you can select as per your preference.

Once selected, it will start loading again, and you will be presented with a dialog with 2 buttons. Select 64-bit Download. Now you will be downloading an ISO file.

Filename: Win10_1903_V1_English_x64.iso
Great, give yourself a thumbs up: you have just spoofed Microsoft and downloaded an ISO image of Windows without any copyright issues.
Now we need another interesting piece of software known as Rufus, which can be downloaded from the following link. I am glad it's a direct link:
[https://github.com/pbatard/rufus/releases/download/v3.6/rufus-3.6.exe](https://github.com/pbatard/rufus/releases/download/v3.6/rufus-3.6.exe)
Now you should have the 2 specific files that are required:
* Win10_1903_V1_English_x64.iso
* Rufus-3.6.exe

Double-click Rufus-3.6.exe so we can run the program. Once the program appears, you should see a screen similar to the one below. At this instant, no USB is connected, as shown in the Device box at the top, and it will show 0 devices at the bottom.

Now is the time to plug in your USB. REMEMBER ALL DATA WILL BE ERASED FROM THE USB, SO DON'T KEEP ANY IMPORTANT FILES ON IT. Once you plug in the USB, it should auto detect and show in the Device TAB. I have plugged in a 16GB USB. Now click on Select.

From the Explorer, browse to the Win10_1903_V1_English_x64.iso file and click Open.

Now this screen would identify the default configuration.
IMPORTANT THING
Operating systems have shifted to something called a UEFI boot. It is like when you buy a new laptop nowadays: it feels as if there was no startup screen and the computer booted directly into Windows. This is called UEFI, and the old style is called Legacy.
Now UEFI is much faster than the old one, and it communicates better through the drivers to the hardware. The same 3D card on the same driver gives a slightly higher FPS gain under UEFI than under Legacy.

Now where do we set this option? In the BIOS.
Now if you don't have a UEFI option in your BIOS, just click on it and select MBR, as shown in the screen.
Once this is settled, click on start and wait for it to finish.

You have a WINDOWS Installation USB ready to go. Let's get ready to Rumble!!!!
For any questions and queries, please use the comments sections.
[Next: Configuring BIOS](https://dev.to/th3n00bc0d3r/configuring-bios-5ep5)
[Noob Index](https://dev.to/th3n00bc0d3r/noob-guides-index-4mne) | th3n00bc0d3r |
153,568 | Sheets-based Computer Aided Software Development (CASD) #1 | I used to use spreadsheets a lot to generate programming assets, particularly using Excel plus macros... | 1,803 | 2019-08-07T07:24:32 | https://dev.to/bugmagnet/sheets-based-computer-aided-software-development-1ib6 | programming, googlesheets, googleappsscript, javascript | I used to use spreadsheets a lot to generate programming assets, particularly using Excel plus macros to generate Interface Description Language files for DLL files. These files were then compiled using Microsoft's [MIDL](https://docs.microsoft.com/en-us/windows/win32/midl/com-dcom-and-type-libraries) compiler leaving a [typelib](http://resources.esri.com/help/9.3/arcgisdesktop/com/COM/ExtendAO/TypeLibrariesAndIDL.htm) file that could then be registered and the symbols used as if the DLL was a real COM DLL.
Using the spreadsheet saved a ton of time. I could specify each function and each parameter and have the macros cook up the relevant IDL code. Then I could copy and paste it into a text editor, clean it up a bit, and then compile to .TLB and so on through the process.
So after a long hiatus I'm doing the same again, this time generating JSON configuration objects from a Google Sheets file. In this case, column A contains the name of the field, column B the data type, and columns C and beyond the configuration data for each of the files/functions that will use the data.
For example, here's a sample of columns A:C: 
Below that, in cell C28, I have the following formula:
```
=GenerateSettings(C2:C26,A2:B26)
```
which generates the following:
```javascript
var settings = {
"activateSourceSheetOnExit": false,
"activeTargetSheetOnExit": false,
"cargo": {},
"checkboxOffset": -1,
"checkboxColumn": "",
"clearData": true,
"clearFormat": true,
"clearHeads": true,
"dataColumns": [],
"dataStartLine": -1,
"finalTasks": [],
"headsRow": -1,
"ignoredColumns": [],
"ignoreHeaderlessColumns": false,
"includeSidebar": false,
"querySelector": {
"sid": "REST_function_name",
"p1": "REST_parameter_value",
"p2": "6,7,8,9,10"
},
"removeCheckedLines": false,
"resetCheckbox": false,
"sourceName": "",
"sourceRange": "",
"targetName": "Main",
"targetRange": "A:J"
}
```
`GenerateSettings` is below. You'll see artifacts indicating `ts2gas`'s conversion of my TypeScript to ES3, the dialect used in Google Apps Script.
You may want to check the references on the extra parameters you can give to `JSON.stringify()` (below as `JSON.stringify(settings, null, ' ')`) as they do improve readability for humans.
```javascript
function GenerateSettings(valueRange, settingsRange) {
var settings = {};
var settingsValues = settingsRange; //.getValues();
var offset = 0;
for (var _i = 0, valueRange_1 = valueRange; _i < valueRange_1.length; _i++) {
var cell = valueRange_1[_i];
var nameType = settingsValues[offset];
var name = nameType[0];
var type = nameType[1];
var cellValue = cell[0];
offset++;
var formattedCellValue = void 0;
switch (type) {
case "object":
formattedCellValue = cellValue === "" ? '{}' : cellValue;
break;
case "object[]":
case "number[]":
formattedCellValue = cellValue === "" ? '[]' : '[' + cellValue.split(/,\s*/g).join(",") + ']';
break;
case "string[]":
formattedCellValue = cellValue === "" ? '[]' : '[' + cellValue.split(/,\s*/g).map(function (elt) { return "'" + elt + "'"; }).join(",") + ']';
break;
case "number":
formattedCellValue = cellValue === "" ? -1 : cellValue;
break;
case "boolean":
formattedCellValue = cellValue === "" ? 'false' : (cellValue ? 'true' : 'false');
break;
case "literal":
formattedCellValue = cellValue === "" ? '""' : cellValue; // was eval
break;
case "string":
formattedCellValue = cellValue === "" ? '""' : '"' + cellValue + '"';
break;
}
name = "['" + name.split(/\./g).join("']['") + "']";
var js = "if (!settings) settings = {}; settings" + name + "=" + formattedCellValue + ";";
eval(js);
}
return "var settings = " + JSON.stringify(settings, null, ' ');
}
```
| bugmagnet |
154,120 | How do you keep your environment variable synchronized among your development team. | env file synchronization | 0 | 2019-08-08T11:01:31 | https://dev.to/espoir/how-do-you-keep-your-environment-variable-synchronized-among-your-development-team-175a | env, productivity, discuss | ---
title: How do you keep your environment variable synchronized among your development team.
published: true
description: env file synchronization
tags: env, productivity, discuss
---
*The problem :*
We are working in a team of 10 developers building an API, and we use a lot of environment variables, which vary from access keys to API keys. As you know, those variables are not *gitable*.
Each of us has a .env file on their local laptop, and we share a global sample.env which is also on our GitHub.
Sometimes a dev may update their .env file and change their environment variables while working on a feature, and forget to update the sample.env file or tell the team about the new environment variable.
This leads to some frustrating and useless bugs when another developer is testing the feature. Sometimes you end up spending an hour on a bug and then discover that it was due to outdated environment variables.
*The question:*
How do you keep the .env file secret but synchronized within the team so that each time we update the environment variable it's updated for every member of the team?
I was thinking about a secret gist file with auto-update when the .env file is updated by any team member.
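Another lightweight idea is a pre-commit script that diffs the key sets of the two files and fails the commit when they drift. Here is a rough sketch (the file names and the temp-file demo are made up; a real hook would point at the project's actual .env and sample.env):

```python
import os
import tempfile

def env_keys(path):
    # Collect variable names from a dotenv-style file, skipping blanks and comments.
    keys = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                keys.add(line.split("=", 1)[0].strip())
    return keys

def env_drift(env_path, sample_path):
    # Return keys present in .env but missing from sample.env, and vice versa.
    env, sample = env_keys(env_path), env_keys(sample_path)
    return sorted(env - sample), sorted(sample - env)

# Demo with throwaway files standing in for .env and sample.env:
tmpdir = tempfile.mkdtemp()
env_path = os.path.join(tmpdir, ".env")
sample_path = os.path.join(tmpdir, "sample.env")
with open(env_path, "w") as f:
    f.write("API_KEY=abc123\nDB_URL=mysql://example\n")
with open(sample_path, "w") as f:
    f.write("API_KEY=\n")

missing_in_sample, missing_in_env = env_drift(env_path, sample_path)
print(missing_in_sample)  # ['DB_URL']: a key the rest of the team would never hear about
```

This only catches key-name drift, not changed values, but that already covers the "forgot to update sample.env" failure mode described above.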
How do you handle this scenario in your team? | espoir |
154,197 | Developer Bliss- Serverless Fullstack React with Prisma 2 and GraphQL | In this post, we will show how you can deploy a totally serverless stack using Prisma 2 and Next.js.... | 0 | 2019-08-08T20:32:11 | https://www.codemochi.com/blog/2019-08-12-prisma-2-now/ | serverless, javascript, react, aws | In this post, we will show how you can deploy a totally serverless stack using Prisma 2 and Next.js. This type of solution has only been recently available and while it is still in beta, it really represents a full stack developer's paradise because you can develop an app, deploy it, forget about worrying about any of the DevOps particulars and be confident that it will work regardless of load.
**Benefits:**
- One command to deploy the entire stack (Now)
- Infinitely scalable, pay for what you use (lambda functions)
- No servers to maintain (lambda functions)
- All the advantages of React (composability, reusability and strong community support)
- Server-side rendering for SEO (Next.js)
- Correctly rendered social media link shares in Facebook and Twitter (Next.js)
- Easy to evolve api (GraphQL)
- One Schema to maintain for the entire stack (Prisma 2)
- Secure secret management (Now)
- Easy to set up development environment with hot code reloading (Docker)
- Strongly typed (GraphQL and Typescript) that is autogenerated when possible (graphql-gen)
Before you start, you should go ahead and set up an RDS instance and [configured like our previous blog post](https://www.codemochi.com/blog/2019-08-07-setup-rds-for-zeit-now-deployments/).
**Videos:**
_I. Install Dependencies_
{% youtube wxUyz1obiAw %}
_II. Add Environmental Parameters_
{% youtube gsMgdMbRc_I %}
_III. Configure the Backend_
{% youtube OzaRwi2_vSA %}
_IV. Configure the Now Service_
{% youtube Kac8RJFtl3A %}
_V. Set up Now Secrets and Deploy!_
{% youtube 5do8C4rJDp0 %}
We will pick up from the example from our multi-part blog series [**[1]**](https://www.codemochi.com/blog/2019-07-08-prisma-2-nextjs-docker/), [**[2]**](https://www.codemochi.com/blog/2019-07-10-prisma-2-nextjs-docker/), [**[3]**](https://www.codemochi.com/blog/2019-07-10-prisma-2-nextjs-docker/). If you aren't interested in following along from the start, you can start by checking out the repo from the `now-serverless-start` tag:
```bash
git clone https://github.com/CaptainChemist/blog-prisma2
git fetch && git fetch --tags
git checkout now-serverless-start
```
**I. Install and clean up dependencies**
1. Upgrade to Next v9
In the `frontend/package.json` make sure that next has a version of "^9.02" or greater. Previously we were using a canary version of 8.1.1 for typescript support, but since the post version 9 of next was released so we want to make sure we can take advantage of all the latest goodies.
2. Install webpack to the frontend
As a precaution, you should install webpack to the frontend folder. I've seen inconsistent behavior with `now` where if webpack is not installed, sometimes the deploy will fail saying that it needs webpack. When I read online it sounds like it shouldn't be required so this is likely a bug, but it can't hurt to add it:
```bash
npm install --save-dev webpack
```
3. Remove the main block from `package.json` and `frontend/package.json`
When we generated our `package.json` files, it auto-populated the `main` field. Since we are not using this feature and don't even have an `index.js` file in either folder, we should go ahead and remove them. In `frontend/package.json` go ahead and remove line 5. We didn't use it previously and it has the potential to confuse the `now` service.
```json
"main": "index.js",
```
Also, do the same in the `package.json` in the root folder.
4. Install Prisma2 to the backend
Although we globally install prisma2 in our docker containers, we need to now add it to our backend package.json file so that when we use the now service it will be available during the build step up in AWS. Navigate to the `backend` folder and install prisma2:
```bash
npm install --save-dev prisma2
```
5. Install Zeit Now
We should install `now` globally so that we will be able to run it from the command line:
```bash
npm install -g now
```
**II. Add Environmental Variables**
1. Add a `.env` file to the root of your project. Add the following variables which we will use across our docker environment.
```
MYSQL_URL=mysql://root:prisma@mysql:3306/prisma
BACKEND_URL=http://backend:4000/graphql
FRONTEND_URL=http://localhost:3000
```
2. Modify the `docker-compose.yml` file to inject these new variables into our docker containers. This is what the updated file looks like:
`docker-compose.yml`
```yml
version: '3.7'
services:
mysql:
container_name: mysql
ports:
- '3306:3306'
image: mysql:5.7
restart: always
environment:
MYSQL_DATABASE: prisma
MYSQL_ROOT_PASSWORD: prisma
volumes:
- mysql:/var/lib/mysql
prisma:
links:
- mysql
depends_on:
- mysql
container_name: prisma
ports:
- '5555:5555'
build:
context: backend/prisma
dockerfile: Dockerfile
environment:
MYSQL_URL: ${MYSQL_URL}
volumes:
- /app/prisma
backend:
links:
- mysql
depends_on:
- mysql
- prisma
container_name: backend
ports:
- '4000:4000'
build:
context: backend
dockerfile: Dockerfile
args:
- MYSQL_URL=${MYSQL_URL}
environment:
MYSQL_URL: ${MYSQL_URL}
FRONTEND_URL: ${FRONTEND_URL}
volumes:
- ./backend:/app
- /app/node_modules
- /app/prisma
frontend:
container_name: frontend
ports:
- '3000:3000'
build:
context: frontend
dockerfile: Dockerfile
environment:
BACKEND_URL: ${BACKEND_URL}
volumes:
- ./frontend:/app
- /app/node_modules
- /app/.next
volumes: #define our mysql volume used above
mysql:
```
Let's take a look at the parts that were changed, below are the parts snipped out that we added to the above file:
```yml
prisma:
environment:
MYSQL_URL: ${MYSQL_URL}
### ..more lines ###
backend:
build:
context: backend
dockerfile: Dockerfile
args:
- MYSQL_URL=${MYSQL_URL}
environment:
MYSQL_URL: ${MYSQL_URL}
FRONTEND_URL: ${FRONTEND_URL}
### ..more lines ###
frontend:
environment:
BACKEND_URL: ${BACKEND_URL}
```
We added environment blocks to the prisma studio, backend, and frontend containers. Since we have the `.env` file, any variables that we define in the `.env` file, such as `VAR1=my-variable`, we can call it in the yml as \${VAR1} and that will be like we used the `my-variable` string directly in that spot of the yml file.
3. Dynamically set Backend url on the frontend
We need to set the uri that the frontend connects to dynamically instead of hardcoding it. In the `frontend/utils/init-apollo.js` we previously had this line which would connect to localhost if the request came from a user or from the backend if it came from the next.js server:
```js
uri: isBrowser ? 'http://localhost:4000' : 'http://backend:4000', // Server URL (must be absolute)
```
We need to still keep track of whether we are in the browser or server in the docker environment. In addition, though, we need to check whether we are in a docker environment or whether we are deployed via `now` into a lambda function.
We can access environment variables by using the `process.env.ENVIRONMENTAL_VARIABLE`. We check if the url matches our local environment url and if so, we know that we are in a docker environment. Now our logic is that if we are in a docker environment and the browser is making the request, we return the localhost, otherwise we pass the `BACKEND_URL` as the uri.
`frontend/utils/init-apollo.js`
```js
function create(initialState) {
// Check out https://github.com/zeit/next.js/pull/4611 if you want to use the AWSAppSyncClient
const isBrowser = typeof window !== 'undefined'
const isDocker = process.env.BACKEND_URL === 'http://backend:4000/graphql'
return new ApolloClient({
connectToDevTools: isBrowser,
ssrMode: !isBrowser, // Disables forceFetch on the server (so queries are only run once)
link: new HttpLink({
uri:
isDocker && isBrowser
? 'http://localhost:4000/graphql'
: process.env.BACKEND_URL,
credentials: 'same-origin', // Additional fetch() options like `credentials` or `headers`
// Use fetch() polyfill on the server
fetch: !isBrowser && fetch,
}),
cache: new InMemoryCache().restore(initialState || {}),
})
}
```
Now that should really be all that we need to do, but since Next.js is both rendered on the server and in the client, we won't have access to server environmental variables unless we take one more step. We need to expose the variable in our `frontend/next.config.js` file:
`frontend/next.config.js`
```js
const withCSS = require('@zeit/next-css')
module.exports = withCSS({
target: 'serverless',
env: {
BACKEND_URL: process.env.BACKEND_URL,
},
})
```
Note that due to how exactly Next.js handles process.env, you cannot destructure variables off of it. So the line below will _not_ work, we need to use the entire `process.env.BACKEND_URL` variable.
```js
const { BACKEND_URL } = process.env // NO!
```
**III. Configure our backend server**
1. Update the backend server to the `/graphql` backend and configure CORS
We updated the url above to the `/graphql` endpoint for the backend server. We are doing this because in `now` we will deploy our backend graphql server to `ourdomain.com/graphql`. We need to make this change in our `backend/src/index.ts` so that the server will be running at the `/graphql` endpoint instead of `/`.
In addition, while we are here, we will disable subscriptions and enable CORS. CORS stands for _cross-origin resource sharing_, and it tells the backend server which frontend servers it should accept requests from. This ensures that if someone else stood up a frontend Next.js server pointed at our backend server, all of their requests would fail. We need this because you can imagine how damaging the alternative could be: suppose someone bought a domain `crazyamazondeals.com` (I'm just making this up) and pointed their frontend server at the real backend server of Amazon's shopping portal. This would allow a fake Amazon frontend to gather all sorts of customer information while still sending real requests to Amazon's actual backend server. Yikes!
In order to enable CORS we will pass in our frontend url. We will also enable credentials for future authentication-related purposes.
`backend/src/index.ts`
```ts
server.start(
{
endpoint: '/graphql',
playground: '/graphql',
subscriptions: false,
cors: {
credentials: true,
origin: process.env.FRONTEND_URL,
},
},
() => console.log(`🚀 Server ready`)
)
```
2. Update the `backend/prisma/project.prisma` file to use environmental variables and set our platform.
We can use `env("MYSQL_URL")`, which will read our `MYSQL_URL` environmental variable. Starting with prisma preview-3+ we need to specify which platforms we plan to use with prisma2. We can use "native" for our docker work, but we need "linux-glibc-libssl1.0.2" for Zeit Now.
`backend/prisma/project.prisma`
```prisma
datasource db {
provider = "mysql"
url = env("MYSQL_URL")
}
generator photon {
provider = "photonjs"
platforms = ["native", "linux-glibc-libssl1.0.2"]
}
// Rest of file
```
3. Update the `backend/Dockerfile` to pass the environmental variable into the prisma2 generate. We first have to define a docker argument using `ARG` named `MYSQL_URL`. Then, we take the `MYSQL_URL` environmental variable and assign it to this newly created `ARG`.
We need the `MYSQL_URL` environment variable so that our url from the prisma file gets evaluated properly.
`backend/Dockerfile`
```Dockerfile
FROM node:10.16.0
RUN npm install -g --unsafe-perm prisma2
RUN mkdir /app
WORKDIR /app
COPY package*.json ./
COPY prisma ./prisma/
ARG MYSQL_URL
ENV MYSQL_URL "$MYSQL_URL"
RUN npm install
RUN prisma2 generate
CMD ["npm", "start" ]
```
Note that the only reason we have access to the `$MYSQL_URL` variable in this Dockerfile is the `args` block that we previously added to the docker-compose.yml file. Variables added to the `environment` block of docker-compose are only accessible during the runtime of the containers, not during the build step, which is when the Dockerfile is executed.
```yml
backend:
build:
context: backend
dockerfile: Dockerfile
args:
- MYSQL_URL=${MYSQL_URL}
```
**IV. Add our Now Configuration**
1. Create now secrets
Locally, we have been using the `.env` file to store our secrets. Although we commit that file to our repo, we can only do that because it contains no sensitive environment variables. If you ever add real secrets to that file, such as a Stripe key, never commit it to GitHub or you risk them being compromised!
For production, we need a more secure way to store secrets. `Now` provides a nice way to do this:
```bash
now secret add my_secret my_value
```
`Now` will encrypt and store these secrets on their servers. When we upload our app we can use them, but we won't be able to read them back out even if we try to be sneaky with console.logs. We need to create secrets for the following variables that were in our `.env` file:
```
MYSQL_URL=mysql://user:password@your-mysql-database-url:3306/prisma
BACKEND_URL=https://your-now-url.sh/graphql
FRONTEND_URL=https://your-now-url
```
Note that by default `your-now-url` will be `yourProjectFoldername.yourNowUsername.now.sh`, but you can always skip this step for now, get to Step V of this tutorial, deploy your site, and check the last line of the console output to see where it deployed. Then come back to this step, add the now secrets, and redeploy the site.
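For example, once you know your deployment url, creating the three secrets with placeholder values looks like this:

```bash
# Placeholder values — substitute your real database url and deployment domain.
now secret add mysql_url "mysql://user:password@your-mysql-database-url:3306/prisma"
now secret add backend_url "https://your-now-url.sh/graphql"
now secret add frontend_url "https://your-now-url"
```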
2. Add a `now.json` file to the root directory
We need to create a `now.json` file which will dictate details about how we should deploy our site. The first part of it has environmental variables for both the build and the runtime. We will be using the secrets created in the previous step by referencing them as `@our-secret-name`. If you forget what names you used, you can always type `now secrets ls` and you will get the names of the secrets (but critically _not_ the secrets themselves).
Next we have to define our build steps. In our case we have to build both our Next.js application and our graphql-yoga server. The Next.js app is built using the specially designed `@now/next` builder, and we can just point it at the `next.config.js` file in our `frontend` folder. The other build will use the `index.ts` file in our `backend/src` directory; the builder is smart enough to compile the code down to javascript and deploy it to a lambda function.
Finally, we have to define our routes. The backend server will end up at the `/graphql` endpoint while the frontend directory will use everything else. This ensures that any page we go to under `ourdomain.com` will be forwarded onto the nextjs server _except_ the `/graphql` endpoint.
`now.json`
```json
{
"version": 2,
"build": {
"env": {
"MYSQL_URL": "@mysql_url",
"BACKEND_URL": "@backend_url",
"FRONTEND_URL": "@frontend_url"
}
},
"env": {
"MYSQL_URL": "@mysql_url",
"BACKEND_URL": "@backend_url",
"FRONTEND_URL": "@frontend_url"
},
"builds": [
{
"src": "frontend/next.config.js",
"use": "@now/next"
},
{
"src": "backend/src/index.ts",
"use": "@now/node",
"config": { "maxLambdaSize": "20mb" }
}
],
"routes": [
{ "src": "/graphql", "dest": "/backend/src/index.ts" },
{
"src": "/(.*)",
"dest": "/frontend/$1",
"headers": {
"x-request-path": "$1"
}
}
]
}
```
3. Add a `.nowignore` file to the root directory
Finally, we can add our ignore file which will tell now which things it shouldn't bother to upload.
`.nowignore`
```
**/node_modules
.next
Dockerfile
README.MD
```
**V. Deploy our now full stack site**
This part is easy. Simply type `now` from the root folder and let it fly!
**There is more where that came from!**
I created an entire course about using Zeit Now + Next.js to build a recipe sharing application, so if you liked this go check it out!

[Frontend Serverless with React and GraphQL](https://courses.codemochi.com/frontend-serverless-with-react-and-graphql)
[Click here to give us your email and we'll let you know when we publish new stuff.](https://gmail.us20.list-manage.com/subscribe?u=37f38485b2c7cff2f3d9935b5&id=e3bc056dde) We respect your email privacy, we will never spam you and you can unsubscribe anytime.
Originally posted at [Code Mochi](https://www.codemochi.com/blog/2019-08-12-prisma-2-now/).
| codemochi |
154,200 | Quick VS Code refactor/productivity trick | So I was browsing Dev.to , like I normally do while waiting for my build process to finish (🐱👤) and... | 1,789 | 2019-08-08T13:23:44 | https://dev.to/chiangs/quick-vs-code-refactor-productivity-trick-103k | vscode, productivity | So I was browsing ***Dev.to*** , like I normally do while waiting for my build process to finish (🐱👤) and I saw this post:
{% link https://dev.to/hexrcs/5-visual-studio-code-tricks-to-boost-your-productivity-27a4 %}
And I thought I would share one of my favorites.
Often when I need to create objects as test data that share common values except for a single character, I use this flow. It also works for refactoring small pieces of code really fast.

Here's an example. Let's say I have an array of objects where I want the values to be 'test1', 'test2', and 'test3' respectively, one per object in the array:
```js
const test = 'test';
const objectArray = [
  {
    prop1: 'test',
    prop2: 'test',
    prop3: 'test',
    prop4: 'test',
  },
  {
    prop1: 'test',
    prop2: 'test',
    prop3: 'test',
    prop4: 'test',
  },
  {
    prop1: 'test',
    prop2: 'test',
    prop3: 'test',
    prop4: 'test',
  },
];
```
In this case you could just use a multi-cursor and type in the numerical value at the end, but here's another way that helps out when it's not so easy.
You could use `ctrl + h` to replace `test` with `test1`, but that could replace more than you intended, like the string variable called `test` above `objectArray`.
So here's what I do:
1. Select the block of code using some combination of `shift`, `ctrl` and `arrow keys` that is most appropriate.
2. `alt + l` (this notifies VS Code that you intend to only do this operation on the selected block)
3. `ctrl + h` (opens the find/replace dialog box)
4. type in the value to be replaced
5. tab over and type in the replacement value
6. `ctrl + enter`. *update: the new version of VS Code defaults to `ctrl + alt + enter`, but it can be changed back in the keyboard shortcuts
7. repeat for the next block
🔥For me, this is quite fast once the pattern becomes natural.
Voila❗ now it should look like the following:
```js
const test = 'test';
const objectArray = [
  {
    prop1: 'test1',
    prop2: 'test1',
    prop3: 'test1',
    prop4: 'test1',
  },
  {
    prop1: 'test2',
    prop2: 'test2',
    prop3: 'test2',
    prop4: 'test2',
  },
  {
    prop1: 'test3',
    prop2: 'test3',
    prop3: 'test3',
    prop4: 'test3',
  },
];
```
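As an aside, for purely mechanical test data like this you can also generate the array in code instead of editing it by hand (a quick sketch; the property names are illustrative):

```js
// Generate the same shape of test data with a map instead of hand-editing.
const objectArray = [1, 2, 3].map(n => ({
  prop1: `test${n}`,
  prop2: `test${n}`,
  prop3: `test${n}`,
  prop4: `test${n}`,
}));

console.log(objectArray[1].prop3); // → test2
```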
And your fingers never had to leave the keyboard, cheers! 🍻
If you find this valuable, please leave a comment and follow me on [Dev.to @chiangs](https://dev.to/chiangs) and [Twitter @chiangse](https://twitter.com/chiangse), 🍻 cheers! | chiangs |
154,224 | About Jay Bradenburg-Nau | Jay first realized that he wanted to be a therapist at the age of seven while listening to a mental h... | 0 | 2019-08-08T14:30:47 | https://dev.to/jaybradenburgnau/about-jay-bradenburg-nau-46j6 | jayaustinbrandenburgnau, jaybrandenburgnau | Jay first realized that he wanted to be a therapist at the age of seven while listening to a mental health therapist on the radio. After working for ten years as a youth pastor, Jay earned a master's degree in Clinical Mental Health Counseling from Denver Seminary in 2010. Since then he has pastored at large churches in Chicago and Austin, and has worked extensively in clinical settings and private practice. His joy is to walk with men, women, and couples in places of brokenness in order to catalyze wholeness of heart, and to journey with others toward a greater capacity to know and be known. Having grown up in Montana, he has a huge love of the mountains and outdoors and is passionate about being on the rivers, rafting and boating.
| jaybradenburgnau |
154,228 | Advanced usage of yum-config-manager with setopts | Recently I used the yum-config-manager program from the yum-utils package to add a repository in a... | 0 | 2019-08-08T14:41:21 | https://www.zufallsheld.de/2019/08/08/yum-config-manager/ | centos, mysql, yum, ansible | ---
title: Advanced usage of yum-config-manager with setopts
published: true
tags: centos,mysql,yum,ansible
canonical_url: https://www.zufallsheld.de/2019/08/08/yum-config-manager/
---
Recently I used the yum-config-manager program from the yum-utils package to add a repository on a CentOS box.
This is the easy part:
```
# install the yum-config-manager
yum -y install yum-utils
# add the official ansible repository
yum-config-manager --add-repo=https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/
```
What gets added in `/etc/yum.repos.d/` is a file with the following content:
```
[releases.ansible.com_ansible_rpm_release_epel-7-x86_64_]
name=added from: https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/
baseurl=https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/
enabled=1
```
Now, how do you set, for example, the GPG key with this command? That's trickier.
The man-page for yum says you should use `--setopt` to “set arbitrary config and repo options”. `man yum-config-manager` goes into a little more detail:
```
Set any config option in yum config or repo files. For options
in the global config just use: --setopt=option=value for repo
options use: --setopt=repoid.option=value. The latter form
accepts wildcards in repoid that will be expanded to the
selected sections. If repoid contains no wildcard, it will
automatically be selected; this is useful if you are
addressing a disabled repo, in which case you don't have to
additionally pass it as an argument.
```
So to set the gpg-key for the repository I created before, I had to use this command:
```
yum-config-manager --save --setopt=releases.ansible.com_ansible_rpm_release_epel-7-x86_64_.gpgkey=https://releases.ansible.com/keys/RPM-GPG-KEY-ansible-release.pub
``` | rndmh3ro |
154,387 | Porquê Linux? | O sistema operacional do desenvolvedor de software | 0 | 2019-08-08T22:43:01 | https://dev.to/hilam/porque-linux-3dic | linux, development, ambienteoperacional, desenvolvimento | ---
title: Why Linux?
published: true
description: The software developer's operating system
tags: linux, development, ambiente_operacional, desenvolvimento
---
For any programming language, except probably for languages maintained by and for Apple, the best operating environment for software development is [Linux](https://duckduckgo.com/?q=linux+operating+system&t=ffab).
Windows 10 now allows Linux installations ([WSL](https://docs.microsoft.com/en-us/windows/wsl/install-win10)), trying to convince developers that they no longer need to install Linux in dual boot, or even uninstall Windows.
I maintain that, whatever your choice, it will be a good one. If you want to experiment, there is no shortage of [options](https://www.linux.org/pages/download/). I myself have used CentOS, Red Hat, Debian, and Ubuntu, among others. Linux powers the vast majority of corporate servers in operation today, from small companies to the big cloud providers. Our systems run on those machines.
Developing in the same operating environment where your system will run is an important reason to use that same environment for programming. Besides that, it is a stable, mature, more secure, and more customizable environment. In short, if you don't think I've already answered the opening question, don't waste any more of your time.
My current choice of operating environment for development is [Xubuntu](https://xubuntu.org/) 18.04.2 LTS, desktop version. It is a Debian-like distribution with a light and efficient graphical interface (Xfce). In my case, the graphical setup is as simple as possible. I make few visual tweaks, although many are available. (Fonts and themes are easy to configure, and that is certainly where I need to fiddle.)
Then just install your favorite text editor or IDE and a terminal multiplexer (you will use it a lot while developing, there's no way around it), and you will be ready to start. Soon I will detail some tools that make the developer's life easier, for installing programming languages, for example.
| hilam |
154,432 | Do's and Don'ts for people who are learning English | English is currently the global language of the world. Some people are busy talking it, and the... | 0 | 2019-08-09T04:14:26 | https://dev.to/zerquix18/do-s-and-don-ts-for-people-who-are-learning-english-3j67 | english, tech, people | ---
title: Do's and Don'ts for people who are learning English
published: true
description:
tags: english, tech, people
---
English is currently the global language of the world. Some people are busy speaking it, and others are busy learning it. People from different countries all around the world communicate using English, and that includes us, who sometimes don't even have a translation for some of the terms we use in tech.
But learning English can be challenging and fun. Especially because we all want to write and speak good English. For most of us, non native speakers, the goal is to speak and write like a native speaker, or at least be really decent speakers.
I've been learning English since 2012, and I'm still not at the level of a native speaker, especially because I haven't traveled yet and most of my interactions are on the Internet. This is the case for many, but it is still possible to write good English and sound pretty decent, and based on feedback, I think I've made good progress.
So here are some tips to improve your English skills.
## Don't...
#### Think you have to know everything
You don't have to understand every word you read or hear in a conversation. It is OK to understand the rest based on the context. Many times you will know what a word means because you will have repeatedly read or heard it, but you won't know how to translate it to your native language.
#### Think you have to understand everything
Sometimes you will listen to a song, a low quality audio or a person with a different accent than the one you're used to hearing, and you will not understand what is said. Don't think you necessarily have to, and don't beat yourself up because "you haven't made enough progress" (I've done this). Some songs are tricky to understand even for native speakers, and the same applies to some accents and, obviously, low quality audio. To overcome this, think of some songs, accents or recordings in your native language that you're not able to understand, and consider that someone who's learning your native language would not understand them either.
#### Be afraid to ask the other person to repeat, rephrase or slow down
Most people will understand that you're a learner and they will be nice to you so that the communication can be effective. There's no shame in asking them to repeat, slow down, and ultimately, rephrase what they're saying so that you can understand it.
Be sure to memorize the phrases you will use to ask people to repeat, rephrase or slow down, so that you don't get stuck thinking about how to properly ask!
## Do...
#### Use [WordReference.com](https://wordreference.com)
It's the best dictionary online if you speak Spanish, Portuguese, French, Italian, Arabic, Chinese, Russian and more. It'll not only teach you what a word means, but in what context you should use it. Some words are offensive, some words are only used in a particular place or have different meanings depending on where you use them.
In addition, some words are used in formal contexts or in way too informal contexts, and some words are archaic. This dictionary will tell you.
Finally, it supports phrasal verbs and expressions.
#### Have a list of words you've learned OR need to learn
This is not useful for absolutely everybody. However, having a list of words that you've learned or you need to learn is pretty useful to practice. The most common advice for learners of any language is repetition.
#### Write your code in English
Writing your variables, functions, files and comments in English is a great way to practice. This will not only benefit you, but will also benefit the community in general, because most people will be able to read your code. Most of the people who will read your code are probably familiar with English to some degree, so coding in English is coding for the entire world, and we code for humans.
#### Have your computer and devices in English
This will help you quickly learn terms and how to use them, and will keep you familiar with English all day.
In addition, most sentences will probably take up less space!
#### Write and speak, read and listen lots of English
Practice makes perfect. Great ways to practice through repetition are:
- Listening to music in English
- Tweeting in English
- Writing articles in English (It doesn't matter if the English is a bit broken. There's a lot of broken English on the Internet and most people don't mind!)
- Talking to your Google Assistant / Siri. This way you can see how well the machine can understand you!
- Attend conferences.
- Listen to the [Ladybug Podcast](https://ladybug.dev).
#### Pick an accent
**THIS IS OPTIONAL. You don't need to sound native**. This is particularly good for advanced speakers, but if you really want to sound native, you have to pick an accent of the English language. Most learners will have the accent of their native language, so people will notice you're not a native speaker when they can't recognize your accent or when they notice that you sound Spanish-ish or Italian-ish. The way to sound native is by picking an accent and sticking to it: learning how the words are pronounced, what terms speakers prefer to use, and sometimes the tone of voice. The most common accent people pick is the General American accent, which you probably recognize by now. But if you're already surrounded by English speakers, it's preferable to pick their accent. Well, your brain will do that unconsciously.
I'm hoping this article will help someone! If you're learning English or you had to learn it, drop some tips in the comments section! | zerquix18 |
154,438 | What is AsyncStorage in React Native? | Learn how to offline applications with React Native and Expo | 0 | 2019-08-09T13:36:47 | https://www.instamobile.io/react-native-tutorials/offline-apps-react-native/ | tutorials, reactnative, react, mobiledevelopment | ---
title: What is AsyncStorage in React Native?
published: true
description: Learn how to offline applications with React Native and Expo
cover_image: https://thepracticaldev.s3.amazonaws.com/i/kotjuaqy9356q06ndqfc.jpg
tags: tutorials, reactnative, react, beginners, mobiledevelopment
canonical_url: https://www.instamobile.io/react-native-tutorials/offline-apps-react-native/
---
[Cover Image Credits to Unsplash and Tom Pumford](https://unsplash.com/photos/v8ppsvUTwGE)
`AsyncStorage` is a simple, asynchronous key-value pair used in React Native applications. It’s used for a variety of scenarios but mainly to store data when your app is not using any cloud services, or when you want to implement features in your app that require data storage.
It operates globally in a React Native app and comes with its own limitations. As a React Native developer, you have to know what these limitations are. One limitation of the `AsyncStorage` API is that on Android the database size is capped at a default of 6MB. Also, `AsyncStorage` storage is based on SQLite.
Thus, it’s important to keep [SQLite limitations](https://www.sqlite.org/limits.html) in mind, too. Also, it’s hard to store complex and nested data structures in the form of key-value pairs. Knowing about these limitations will help you to opt for the best solution when developing a mobile app.
According to the [React Native’s official documentation](https://facebook.github.io/react-native/docs/asyncstorage):
>On iOS, AsyncStorage is backed by native code that stores small values in a serialized dictionary and larger values in separate files. On Android, AsyncStorage will use either RocksDB or SQLite based on what is available.
## Create an Expo app
To get started, you’ll need to generate a new Expo project. This could be done by opening a terminal window, navigating to a suitable location where you develop projects, and running the following commands in the order they’re described.
```shell
expo init offline-app
# navigate inside the app folder
cd offline-app
```
Note: yarn is currently being used as the package manager. You can also use npm instead of yarn.
## Utilizing the AsyncStorage API
In this section, let’s build an app that saves a value to `AsyncStorage` and fetches the value from the storage in the client-side React Native app. This will help you learn how to write basic operations using the storage API. Lastly, you’ll learn about how to clear the storage completely.
Open the `App.js` file and add the snippet below. Start by importing the necessary components from the React Native API. The most important one here is `AsyncStorage`. After that, define a variable named `STORAGE_KEY`. This variable will be used to store and retrieve the stored data using the `AsyncStorage` API.
Think of it as an identifier for the value being stored or the name of the key in the key-value pair. Since you’re going to store only one value at the moment, there’s only the requirement for one key.
```js
import React from 'react'
import { StyleSheet, Text, View, TextInput, AsyncStorage, TouchableOpacity } from 'react-native'
const STORAGE_KEY = '@save_name'
```
Next, let’s define an initial state with two empty strings. They’re going to be used to save the value of the user input and then retrieve the value to display it on the app screen. After defining the initial state, there’s going to be a lifecycle method that will load the data when the application starts for the first time, or when the App component renders.
```js
class App extends React.Component {
state = {
text: '',
name: ''
}
componentDidMount() {
this.retrieveData()
}
// ...
}
```
In the above snippet, note that the `App` component is actually a class component and not the default functional component that comes with the boilerplate Expo app.
## Read the data
There are three asynchronous methods that will help to store the data, retrieve the data, and clear the stored app data. Each of them uses the appropriate method from the `AsyncStorage` API. Every method in the `AsyncStorage` API is promise-based; hence, let's use `async/await` syntax to follow best practices.
```js
retrieveData = async () => {
try {
const name = await AsyncStorage.getItem(STORAGE_KEY)
if (name !== null) {
this.setState({ name })
}
} catch (e) {
alert('Failed to load name.')
}
}
```
In the above snippet, the name of the method implies what it does in the app. The `retrieveData` method fetches the data from storage if it exists. It uses the same identifier (`STORAGE_KEY`) that you defined previously, outside the class component, and it stores the fetched value in the `name` property of the state object.
Note that there's an if condition inside this method. This condition makes sure that data is fetched only when a value for the `name` variable exists. This method also uses `try/catch`, as that is part and parcel of writing functions with the modern `async/await` syntax. Lastly, this method is invoked inside the lifecycle method.
## Save the Data
The next function is going to save the data. In the below snippet, you'll find that it uses a parameter `name` which, on success, is the value that gets stored. An alert message is shown when the input data is saved.
```js
save = async name => {
try {
await AsyncStorage.setItem(STORAGE_KEY, name)
alert('Data successfully saved!')
this.setState({ name })
} catch (e) {
alert('Failed to save name.')
}
}
```
## Remove Everything
The last method that you are going to utilize from the `AsyncStorage` API is called `clear()`. This deletes everything that was previously saved. It is not recommended to use this method directly if you want to delete only a specific item from the storage. For that, there are methods like `removeItem` or `multiRemove` provided by the API. You can read more about them in the official documentation [here](https://facebook.github.io/react-native/docs/asyncstorage#clear) or later when building the Todolist application.
```js
removeEverything = async () => {
try {
await AsyncStorage.clear()
alert('Storage successfully cleared!')
} catch (e) {
alert('Failed to clear the async storage.')
}
}
```
This snippet will throw an alert box on the device screen when everything is cleared from storage.
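If you only want to delete a single key rather than wiping everything, `removeItem` is the targeted alternative. The difference between the two calls can be sketched with a tiny in-memory stand-in (illustration only; in the app you would keep using the real `AsyncStorage` import from `react-native`):

```js
// In-memory stand-in mimicking the AsyncStorage calls used in this post.
const store = new Map()
const AsyncStorage = {
  setItem: async (key, value) => { store.set(key, value) },
  getItem: async key => (store.has(key) ? store.get(key) : null),
  removeItem: async key => { store.delete(key) },
  clear: async () => { store.clear() },
}

const STORAGE_KEY = '@save_name'

async function demo() {
  await AsyncStorage.setItem(STORAGE_KEY, 'Jane')
  await AsyncStorage.setItem('@save_theme', 'dark')

  // removeItem only deletes the one key...
  await AsyncStorage.removeItem(STORAGE_KEY)
  console.log(await AsyncStorage.getItem(STORAGE_KEY))   // → null
  console.log(await AsyncStorage.getItem('@save_theme')) // → dark

  // ...while clear wipes the whole storage.
  await AsyncStorage.clear()
  console.log(await AsyncStorage.getItem('@save_theme')) // → null
}

demo()
```

The `try/catch` wrappers from the app code are omitted here to keep the difference between `removeItem` and `clear` front and center.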
## Completing the App
The last two methods are going to be used to create a controlled input.
```js
onChangeText = text => this.setState({ text })
onSubmitEditing = () => {
const onSave = this.save
const { text } = this.state
if (!text) return
onSave(text)
this.setState({ text: '' })
}
```
After that, add the code snippet for the `render` method, followed by the styles for each UI component. Lastly, don’t forget to export the `App` component so it can run on the simulator or the real device.
```js
render() {
const { text, name } = this.state
return (
<View style={styles.container}>
<TextInput
style={styles.input}
value={text}
placeholder='Type your name, hit enter, and refresh'
onChangeText={this.onChangeText}
onSubmitEditing={this.onSubmitEditing}
/>
<Text style={styles.text}>Hello {name}!</Text>
<TouchableOpacity onPress={this.removeEverything} style={styles.button}>
<Text style={styles.buttonText}>Clear Storage</Text>
</TouchableOpacity>
</View>
)
}
} // class component App ends here
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: '#fff',
alignItems: 'center',
justifyContent: 'center'
},
text: {
fontSize: 20,
padding: 10,
backgroundColor: '#00ADCF'
},
input: {
padding: 15,
height: 50,
borderBottomWidth: 1,
borderBottomColor: '#333',
margin: 10
},
button: {
margin: 10,
padding: 10,
backgroundColor: '#FF851B'
},
buttonText: {
fontSize: 14,
color: '#fff'
}
})
export default App
```
## Running the App
Now to run the application, go to the terminal window and execute the command `expo start`. After that, you’ll see the following screen on the simulator.

Since there’s no data stored right now, the text after the word `Hello` is empty. Use the input field to save a string or a name or anything you’d like, and then press the enter key. You’ll get the following output. Whatever input you entered, it will be displayed next to the word `Hello`.

Even if you refresh the Expo client, the value stored doesn’t go away. Only when pressing the button below the `Hello` statement that says `Clear Storage` will the stored value be removed.

Refresh the Expo client after you clear the storage to get the following output.

## Conclusion
This completes the section about using the `AsyncStorage` API to save and fetch data.
If you'd like to learn more about using the `AsyncStorage` API in a real-time React Native app, **learn how to build a complete Todolist app [here](https://heartbeat.fritz.ai/building-offline-react-native-apps-with-asyncstorage-dcb4b0657f93)**. In that post, you are going to use the [Native Base](https://nativebase.io/) UI component library.
Here is a short **TLDR** of the **Todolist post**:
- Adding navigation
- Creating a Floating Action Button (FAB)
- Navigating between two screens
- Customize the header component
- Rendering a list of items using FlatList
- Reading data using the AsyncStorage API
- Adding a to-do list item
- Deleting a to-do list item
- Mark an item check or uncheck upon completion
- Passing data between different screens using the navigation
- Display each to-do list item
- Bonus section: Adding a segment
Here is a little demo of what you are going to build in the Todolist post:

---
I often write on Nodejs, Reactjs, and React Native. You can visit me on [amanhimself.dev](https://amanhimself.dev/blog) or you can subscribe to [my weekly newsletter](https://tinyletter.com/amanhimself) to receive all updates on new posts and content, straight in your inbox 💌.
| amanhimself |
154,494 | Evolving Ecosystem of Collaboration Platforms | Turning an idea into a fully-functional software or an application is the biggest dream for a technop... | 0 | 2019-08-09T09:17:24 | https://medium.com/startup-blog/evolving-ecosystem-of-collaboration-platforms-eb02e4708285 | collaboration, opensource, development | Turning an idea into fully functional software or an application is the biggest dream for a technopreneur. Thanks to the wonders of technology, it has become much easier for us to pursue our ideas without much hassle.
Lately, there's been a lot of buzz about the utilities offered by developer platforms. Now, what do these platforms actually do? Developer platforms have made the overall software development process easier, more collaborative, and more standardized. For instance, say you have a great idea for a gaming application. After exhaustive brainstorming, your team comes up with a strategy to work on the project. The question arises: how exactly do you determine the "right approach"? Traditionally, no other game developer is going to come to your aid, since everyone out there is your rival. Here's where open source developer platforms kick in.
These platforms have made software developers realize that the information they have is meant for sharing. By collaborating with each other, people learn new things and strive to outdo one another by writing quality code that is free from functional errors. The developer platforms I'm talking about are equipped with advanced development tools and frameworks that foster a productive working environment.
**How Does a Developer Platform Work?**
---------------------------------------
The main idea of a developer platform is that it acts as a huge repository for all project files. It also saves the successive versions of the code that fellow team members submit. Development teams, whether working on-site or remotely, can track the changes, and team leads can easily measure the progress of each contributing team member.
**Developer Platforms Help Maintaining Developers' Productivity:**
While working on a developer platform, you can manage a whole project remotely. You can assign different roles to different people based on their expertise, and track changes as well. Collaborative platforms are also used by a huge number of Fortune 500 companies because these platforms ensure compliance with quality standards that many programmers tend to ignore.
It doesn't stop here!
The collaboration platforms of today have evolved to a whole new level. Platforms like Crowdsourcer.io are setting new benchmarks for existing developer platforms. Developers these days can search for a project well-suited to their level of expertise and skill set, apply to become contributors on that project, get interviewed, and be accepted.
After acceptance, they pick modules of their own choosing and get started on their part of the system. The moment their tasks are completed, they commit the code to one of these version-control developer platforms and get paid their fair share.
**Enhanced Code Quality:**
Different programmers have different coding habits. Some are in the habit of writing small bits of code and making them go live, while others submit completed files of code.
Developer platforms ensure that a developer follows a certain set of rules and regulations. Initially, the developer with the highest number of commits was considered highly productive; nowadays, that is no longer the case. Productivity trackers in developer platforms have ensured a thorough and logical determination of productive contributors.
The rest of the factors which play a vital role in determining and measuring the productivity of a developer are:
- Code language.
- Code quality.
- Compliance with OOP fundamentals.
- Code functionality.
That's the actual beauty of collaborative development!
Constituents of a Developer Platform:
-------------------------------------
There can be many fundamentals of a developer platform but generally, there are three.
**1. The Source Code:**
The core idea behind introducing developer platforms was to maintain the sanctity of the source code: the suite of well-planned instructions that forms the basis of any software project. Throughout the entire project, every contributor does their best to keep their code distinct from that of the other contributors.
**2. Repository:**
By now, who doesn't know what this word represents? Of course, it's the central storage where all the code and related files for a given project are placed. Except that, on developer platforms, this central storage is cloud-based, and the only thing a contributor has to worry about is the deadline.
**3. Version Control:**
The best thing about the version control feature is that it keeps a record of all the previous versions of a project's code in a tree-like structure. You either start from the root and keep building upward until you reach the parent node, or you start from a sub-root and develop your way down. At every level, the activities you perform are tracked and are fully visible to the team lead.
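To make that tree-like structure concrete, here is a minimal Python sketch of a version history. The `Commit` class and `history` helper are invented purely for this illustration; real platforms store far more per version (author, timestamp, file snapshots), but the parent-pointer structure is the same idea:

```python
class Commit:
    """One saved version of the project, linked to its parent version."""
    def __init__(self, message, parent=None):
        self.message = message
        self.parent = parent  # None for the root commit

def history(commit):
    """Walk from a commit back to the root, newest version first."""
    versions = []
    while commit is not None:
        versions.append(commit.message)
        commit = commit.parent
    return versions

# Build a tiny tree: a main line plus one branch hanging off the root.
root = Commit("initial project skeleton")
main_2 = Commit("add login module", parent=root)
branch = Commit("experiment with new UI", parent=root)  # a sub-root branch

print(history(main_2))  # ['add login module', 'initial project skeleton']
print(history(branch))  # ['experiment with new UI', 'initial project skeleton']
```

The parent pointers are what let a team lead trace any contributor's changes back through the tree, level by level.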
In order to stay updated about the changes and to track the work logs of every contributor in the team, there is a strong need to leverage the potential of developer platforms.
Be it GitHub, GitLab, or even Crowdsourcer.io, the main idea behind these platforms is to increase productivity, promote an environment of contribution, and take it to the next level.
The contributor platforms have changed the way people plan, code, save, and update their tasks. These platforms have also enforced strict standards of professionalism, which the developer community has often lacked. Software projects are being developed at lower cost and within time constraints. This is the age of developer platforms, and how far it'll go, only time will tell!
154,507 | 5 Common Problems When Working Remotely for the First Time | Originally published on my blog In my previous post, I talked about the 5 benefits when working remo... | 0 | 2019-08-09T10:14:16 | https://dev.to/arthlimchiu/5-common-problems-when-working-remotely-for-the-first-time-k67 | career, productivity, hiring | ---
title: 5 Common Problems When Working Remotely for the First Time
published: true
description:
tags: career, productivity, hiring
---
*Originally published [**on my blog**](https://www.arthlimchiu.com/2019/08/08/5-common-problems-working-remotely.html)*
In my [previous post](https://dev.to/arthlimchiu/5-benefits-of-working-remotely-from-a-first-time-remote-worker-1401), I talked about the 5 benefits when working remotely from a person who used to work at an office job and why it’s wonderful.
This one will be all about the challenges that I’ve encountered and how I overcame them.
# Problem #1: Distractions
If you’re working at home, then most likely kids will come near your workspace and make a lot of noise, or your parents will knock on your door. It’s quite understandable, because most likely remote working is new to them.
Solution: **Have a Space for Work**
* You can renovate or partition your room
* Invest in some good noise-cancelling headphones
* You can rent a workspace elsewhere
It’s best to explain your situation to your family, wife, or girlfriend and help them understand that what you do is work, and that the only difference is that you're not in the office.
# Problem #2: Internet Connection
When working remotely, having a stable and fast internet connection is super important because meetings are held in video calls.
If a lot of people are using your internet at home for streaming or downloading, it might affect your online meetings. Personally, the only times I need the internet to be stable are during calls. Other than that, the websites I frequently go to as a developer (StackOverflow, documentation websites, etc.) rarely need a lot of bandwidth to open.
However, it’s quite different if you’re a designer, as the websites you frequently visit contain a lot of imagery.
Solution: **Upgrade Your Internet Connection**
* Ask your parents/brother/sister to share the monthly bill
* If you have a data plan, tether to your phone during calls
You can also tell them 5 minutes before your call to stop streaming and downloading but this can get tiring over time.
# Problem #3: Procrastination Is Your Enemy
When working remotely, nobody sees what you do. It’s good that you no longer have that pressure but it’s bad at the same time since you can surf the web all you want.
You open Facebook or Youtube and suddenly time passes by and by the time you’re done it’s been several hours already and you haven’t worked on anything yet.
Solution: **Set the Deck in Your Favor**
* Use website blocker apps such as StayFocusd or similar
* Invest in a comfortable office desk and chair
* Everything that you need should be within a hand’s reach (notebook, pen, water, etc.)
* Lessen the amount of steps to start your work
If you set the deck in your favor, then you're setting yourself up to win. Schedule your website-blocker apps and make sure you're comfortable with your workspace. Your computer or laptop should already be set up on your desk, and you shouldn't need to arrange anything, so that there's a lot less friction to start working.
Procrastination is the enemy of productivity and success and a good way to overcome that is to **start your day with a win**.
# Problem #4: Forgetting to Balance Work and Life
When working remotely, you tend to forget that what you do is work and that the only difference is where you’re working from. Now that it’s easier to start working, you keep on working and working the whole day and you forget that there’s also life outside of work.
Solution: **Set to Default or Reset**
* Close EVERYTHING
* Don’t eat at your desk
* Breaks are important
If I’m done with work, I literally close everything - Google Chrome, Slack, IDE, etc. I close my laptop as well. It signals my brain that “I’m done for the day”. If you leave them on, you’re tempted to look at it.
It’s also good to take a break from time to time. Tend to your loved ones when you’re on break. I’m still struggling with eating at my desk but I always make the effort not to. The problem with eating at your desk is you're telling yourself that even eating is part of work.
# Problem #5: If You’re Stuck, You’re Most Likely on Your Own
When working remotely, you have flexibility over your time, and so do your colleagues. Your work schedules might not overlap, so it might be hard to ask for help when you’re stuck. Unlike in an office, you can't just go to someone in person and ask for their help.
Solution: **Important Tasks First, Less Important Tasks After**
* Work on hard problems when your brain is fresh
* Move on to the next task if you’ve done everything that you could and leave a message in Slack or whatever communication tool that you use within your company
If you’re stuck, you need your brain to be fresh to process all the information that you found to solve the problem. That’s why it’s important to work on the hard problems first so that by the time you’re done, remaining tasks require less effort to do and you’ll be able to finish your work for the day.
# Final Thoughts
Working remotely is a wonderful experience but we need to know that it’s not all sunshine and rainbows. That’s just how life is. However, if you encounter these problems you now have an idea what to do and how to overcome them and it’s not that hard to do as well.
These are all my personal experience and I hope that it will be valuable for you later on when you start working remotely.
*If you think remote working is up for you or you want to give it a try, we do just exactly that at [Appetiser App Development](https://appetiser.com.au/). You can check out our openings below.*
We’re looking for:
* Senior Mobile & Web Developers (Cebu/Davao/Manila)
* Digital Marketing Channel Specialists x 2 (Manila)
* SEO
* CRM/E-mail
* Content Marketers (Manila)
* Marketing Human Resources (Manila)
* Marketing Assistant (Manila)
* HubSpot & Landing Page Developer: APIs/Integrations (Manila)
* Web Designers: UX/UI (Manila)
* CRO (Manila)
If you're interested, you can send your CV at [joinus@appetiser.com.au](mailto:recruitment@appetiser.com.au).
| arthlimchiu |
154,556 | So You Want to Access a FIPSed EL7 Host Via RDP | One of the joys of the modern, corporate security-landscape is that enterprises f... | 0 | 2019-08-09T16:19:18 | https://thjones2.blogspot.com/2019/08/so-you-want-to-access-fipsed-el7-host.html | centos7, graphicadesktop, rdp, rhel7 | ---
title: So You Want to Access a FIPSed EL7 Host Via RDP
published: true
tags: CentOS 7,Graphica Desktop,RDP,RHEL 7
canonical_url: https://thjones2.blogspot.com/2019/08/so-you-want-to-access-fipsed-el7-host.html
---
One of the joys of the modern, corporate security-landscape is that enterprises frequently end up locking down their internal networks to fairly extraordinary degrees. And, as software and operating system vendors offer new bolts to tighten, organizations will tend to do so - they sometimes do so without considering the full impact of what that tightening will do.
Several of my customers protect their networks not only with inbound firewalls, but firewalls that severely restrict _outbound_ connectivity. Pretty much, their users' desktop systems can only access an external service if its offered via HTTP/S.
Similarly, their users' desktop systems are configured with application whitelisting enabled. This prevents not only power users from installing software that requires privileged-access to install, but also prevents users from installing things that are wholly constrained to their home directories. This kind of security-posture is suitable for the vast majority of desktop users, but is considerably less so for developers. Worse, getting things added to the whitelists borders on "act of god" level of request.
The group I work for provides cloud-enablement services. This means that we are both developers and provide services to our customers' developers. Both for our own needs (when on-site) and for those of customers' developers, this has meant needing to have remote (cloud-hosted), "developer" desktops. The cloud service providers (CSPs) we and our customers use provide remote desktop solutions (e.g., AWS's "Workspaces"). However, these services are typically not usable at our customer sites due to the previously-mentioned network and desktop lockdowns: even if the local desktop has tools like RDP and SSH clients installed, those tools are only usable within the enterprises' internal networks; if the remote desktop offering is reachable via HTTP/S, it's typically through a widget that the would-be remote desktop user would install to their local workstation if application-whitelisting didn't prevent it.
To solve this problem for both our own needs (when on-site) and our customers' developers' needs, we stood up a set of remote (cloud-hosted), Windows-based desktops. To make them usable from locked-down networks, we employed Apache's Guacamole service. Guacamole makes remote Windows and Linux desktops available within a user's web browser.
Guacamole-fronted Windows desktops proved to be a decent solution for several years. Unfortunately, as the cloud wars heat up and CSPs try to find ways to bring - or force - customers into their datacenters, what was once a decent solution can become _not_ decent - often due to pricing factors. Sadly, it appears that Microsoft may be trying to pump up Azure-adoption by increasing the price of cloud-hosted Windows services when those services are run in [other CSPs' datacenters](https://www.microsoft.com/en-us/licensing/news/updated-licensing-rights-for-dedicated-cloud).
While we wait to see if and how this plays out, financially, we opted to see "can we find lower-cost alternatives to Windows-based (remote) developer desktops." Most of our and our customers' developers are Linux-oriented - or at least Linux-comfortable: it was a no-brainer to see what we could do using Linux. Our Guacamole service already uses Linux-based containers to provide the HTTP/S-encapsulation for RDP and Guacamole natively supports the fronting of Linux-based graphical desktops via VNC. That said, given that the infrastructure is built around an RDP, it might prove to ease some of the rearchitecting-process by keeping communications RDP-based even without Windows in the solution-stack.
Because our security guidance has previously required us to use "hardened" Red Hat and CentOS-based servers to host Linux applications, that was our starting-point for this process. This hardening almost always introduces "wrinkles" into deployment of solutions - usually because the software isn't SELinux-enabled or relies on kernel-bits that are disabled under FIPS mode. This time, the problem was FIPS mode.
While installing and using RDP on Linux has become a _lot_ easier than it used to be (tools like [XRDP](http://www.xrdp.org/) now actually ship with SELinux policy-modules!), not all of the kinks are gone, yet. What I discovered, when starting on the investigation path, is that the XRDP installer for Enterprise Linux 7 isn't designed to work in FIPS mode. Specifically, when the installer goes to set up its encryption-keys, it attempts to do so using MD5-based methods. When FIPS mode is enabled on a Linux kernel, MD5 is disabled.
Fortunately, this only affects legacy RDP connections. The currently-preferred solution for RDP leverages TLS. Both TLS and its preferred ciphers and algorithms are all FIPS compatible. Further, even though the installer fails to set up the encryption keys, these keys are effectively optional: a file at the expected location for keys merely needs to _exist_, not actually be a valid key. This meant that the problem in the installer was trivially worked around by adding a `touch /etc/xrdp/rsakeys.ini` to the install process. Getting a cloud-hosted, Linux-based, graphical desktop ultimately becomes a matter of:
1. Stand up a cloud-hosted Red Hat or CentOS 7 system
2. Ensure that the "GNOME Desktop" and "Graphical Administration Tools" package-groups are installed (since, if your EL7 starting-point is like ours, no GUIs will be in the base system-image)
3. Once those are installed, ensure that the system's default run-state has been set to "graphical.target". The installers for the "GNOME Desktop" package-group should have taken care of this for you. Check the run-level with `systemctl get-default`. If the installers for the "GNOME Desktop" package-group didn't properly set things, correct it by executing `systemctl set-default graphical.target`
4. Make sure that firewalld allows connections to the XRDP service by executing `firewall-cmd --add-port=3389/tcp --permanent`
5. Similarly, ensure that whatever CSP-layer networking controls are present allow TCP port 3389 inbound to your XRDP-enabled Linux host.
6. ...And if you want users of your Linux-based RDP host to be able remotely-access actual Windows-based servers, install [Vinagre](https://wiki.gnome.org/Apps/Vinagre).
7. Reboot to ensure everything is in place and running.
Once the above is done, you can test things out by RDPing into your new Linux host from a Windows host …and, if you've installed Vinagre, RDP from your new, XRDP-enabled Linux host to Windows host (for a nice case of RDP-inception).
[](https://1.bp.blogspot.com/-uAiYd88g9ac/XU1oc7Yi98I/AAAAAAABlV0/hkI8CVXj1akCXKB5g3Lr0hYw4ZWLodO8QCLcBGAs/s1600/RDPinception.png)
**References:**
- Null `/etc/xrdp/rsakeys.ini` file solution on [GitHub](https://github.com/neutrinolabs/xrdp/issues/1032)
- EPEL XRDP [installer-bug](https://bugzilla.redhat.com/show_bug.cgi?id=1739176) | ferricoxide |
154,704 | Unit Testing A/B Test Code | There are a few underlying considerations that make how to Unit Testing A/B Test Code a fairly remarkable question. | 2,154 | 2019-11-18T15:16:57 | https://dev.to/rfornal/unit-testing-a-b-test-code-4dl1 | unittesting, webdev, testing, ab | ---
title: Unit Testing A/B Test Code
published: true
description: There are a few underlying considerations that make how to Unit Testing A/B Test Code a fairly remarkable question.
tags: Unit Testing, WebDev, Testing, AB
series: Front-End Testing
cover_image: https://thepracticaldev.s3.amazonaws.com/i/tl582r2fapl5975lfyx1.png
---
Now, this seems like something that should be pretty straightforward ... testing the test and all that; what-what. But there are a few underlying considerations that make how to unit test A/B test code a fairly remarkable question.
## Common Language
While writing this article about testing test code, the language started to get a bit wonky.
Here are the definitions I will use moving forward ...
* **A-TEST CODE** refers to the code that should already exist when it is decided that and A/B Test is needed. The expectation should be that Unit and Integration tests already exist against this code.
* **B-TEST CODE** refers to the code that will be written. Unit and Integration Tests should be written against this code, as well.
This article is focused on the fact that additional tests (Unit and Integration) may need to be created to ensure these two sets of code work together properly with the existence of the other. Having said that ...
* **NEGATIVE-SCENARIOS** should not be confused with Negative Testing which examines ways the code can fail. Negative Scenarios are the inverse of the tests for the alternate scenario (A-TEST CODE <-> B-TEST CODE).
## Overview of Considered Scenarios
Examining the scenarios this way ...
| Will the B-TEST move forward? | General Test Coverage<sup>*</sup> | Negative Scenario Coverage<sup>**</sup> |
| ---------------------------- | --------------------- | ------------------------- |
| Little or no chance | Few or no tests needed | Few or no tests needed |
| Maybe | Key areas | Should have these |
| Very high chance | Complete set of tests needed | Complete set of tests needed |
<sup>\*</sup> These are tests written **against** the B-TEST code, testing the new functionality.
<sup>\*\*</sup> These are tests that should be written as **negative scenarios**, testing for bleed-over of code between A-TEST and B-TEST scenarios. The tests for the alternate area should fail appropriately (B-TEST tests should include the INVERSE of A-TEST tests).
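To make the NEGATIVE-SCENARIO idea concrete, here is a minimal Python sketch. The `checkout_button_text` function and its variant names are invented for this illustration (not tied to any real A/B framework); the point is that the B-TEST suite contains the inverse of the A-TEST assertions, catching bleed-over of one code path into the other:

```python
def checkout_button_text(variant):
    """Hypothetical code under A/B test: 'A' is the existing path, 'B' the experiment."""
    if variant == "B":
        return "Buy now"     # B-TEST CODE: the new behavior
    return "Add to cart"     # A-TEST CODE: the existing behavior

def test_b_variant():
    # Positive test against the B-TEST code.
    assert checkout_button_text("B") == "Buy now"

def test_no_bleed_over_into_a():
    # NEGATIVE-SCENARIO: the inverse of the B-TEST assertion, run against
    # the A-TEST path, to catch bleed-over of the new behavior.
    assert checkout_button_text("A") != "Buy now"
    assert checkout_button_text("A") == "Add to cart"

test_b_variant()
test_no_bleed_over_into_a()
print("all variant tests passed")
```

A real suite would drive the variant selection through whatever flagging mechanism the codebase uses, but the shape of the negative tests stays the same.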
## The Questions
1. There should be test code for both the A-TEST and B-TEST scenarios. The question is how much test coverage should there be for the "new" code?
2. Both scenarios need to be concerned about bleed-over from the other code. Are the negative versions of the other scenarios tests enough?
3. Is there common code between the two scenarios that is covered appropriately (and therefore not included as negative test scenarios)?
| rfornal |
154,737 | A Weird Way to Substring in C++ | A Weird Way to Substring in C++ | 0 | 2019-08-09T21:31:34 | https://dev.to/therealdarkmage/a-weird-way-to-substring-in-c-49o2 | c, programming, coding, gamedev | ---
title: A Weird Way to Substring in C++
published: true
description: A Weird Way to Substring in C++
tags: C++, Programming, Coding, Gamedev
---
```
/*
author: mike bell
twitter: @therealdarkmage
date: Fri Aug 9 5:12 PM
website: https://mikebell.xyz
I was playing around and discovered a weird way to perform a substring on a string in C++.
Index into a string at the position you want to substring from, then take the address of that element.
Works with constant and arbitrary strings! Cool/weird!
Compiles on current macOS g++ as of this writing.
*/
#include <iostream>
#include <string>
using std::string;
using std::cout;
using std::endl;
int main() {
    // &"Hello, World"[7] takes the address of the character at index 7,
    // yielding a const char* into the middle of the literal. std::string's
    // const char* constructor then copies from that pointer up to the
    // terminating null character, giving an in-place "substring".
    string s = &"Hello, World"[7];
    string ss = &s[2]; // same trick on a non-literal string

    cout << s << endl;  // prints "World"
    cout << ss << endl; // prints "rld"
    return 0;
}
``` | therealdarkmage |
154,871 | Case Study Factory, Native Lazy Loading, Excitement Transfer — and more UX links this week |
A weekly selection of design links, brought to you by your friends at th... | 0 | 2019-08-10T21:07:35 | https://uxdesign.cc/case-study-factory-native-lazy-loading-excitement-transfer-and-more-ux-links-this-week-4eb163dbb5db | startup, design, visualdesign, productdesign | ---
title: Case Study Factory, Native Lazy Loading, Excitement Transfer — and more UX links this week
published: true
tags: startup,design,visual-design,product-design
canonical_url: https://uxdesign.cc/case-study-factory-native-lazy-loading-excitement-transfer-and-more-ux-links-this-week-4eb163dbb5db
---
#### _A weekly selection of design links, brought to you by your friends at the UX Collective._

[**The Case Study Factory →**](https://essays.uxdesign.cc/case-study-factory/)
As editors of a large-scale online design publication, we receive a high volume of emails every week pitching case studies to be published on our site. Most of what we receive is from students and designers just starting out who are looking for an opportunity to have their work reach a broader audience.
But given the plethora of case studies out there, how can companies find the talent they are looking for? And how can designers differentiate themselves when applying to a position?
To be able to answer these questions, we decided to step back and try to figure out why so many case studies were being produced in the first place.
[**(Continue reading…)**](https://essays.uxdesign.cc/case-study-factory/)
### Stories from the community

[**Excitement transfer**](https://uxdesign.cc/excitement-transfer-why-having-a-great-idea-isnt-enough-efa8c8ca7bc8?source=friends_link&sk=fdfcd03f60b51ce21f8ba00567d0431c) →
Why having a great idea isn’t enough.

[**How to hire UX designers**](https://uxdesign.cc/insiders-guide-how-to-find-evaluate-and-hire-ux-designers-879f766397bb?source=friends_link&sk=e8a703fff63bae72de6021af19c1c2cb) →
An insider’s guide on how to find, evaluate, and hire UX designers.

[**Humanizing machine learning**](https://uxdesign.cc/humanizing-machine-learning-11a64feb21a8?source=friends_link&sk=689af28ac9abfbe23ae50a70f5f8ea08) →
Lessons learned from adaptive page testing.
More top stories:
- [**What to do when user research doesn’t fit in a sprint**](https://uxdesign.cc/heres-what-to-do-when-user-research-doesn-t-fit-in-a-sprint-2f8b5db7d48c?source=friends_link&sk=fb2ab773f388cf392d6103ca9c635532) →
- [**Why you should forget about the number of clicks**](https://uxdesign.cc/why-you-should-forget-about-the-number-of-clicks-b80532475fae?source=friends_link&sk=70d2ba166411c399887c076633075eb8) →
- [**Meet Dimension**](https://uxdesign.cc/meet-dimension-bf8d76bd0e1a?source=friends_link&sk=619ca7e9c5b4885bc07f78f03345eeb8) →
- [**How being a cat 🐈 dad makes me a better UX Designer**](https://uxdesign.cc/how-being-a-cat-dad-makes-me-a-better-ux-designer-6d1ff705272a?source=friends_link&sk=e42eca661ad50733287e4105f7552286) →
- [**The topic nobody wants to talk about: professional burnout**](https://uxdesign.cc/professional-burnout-the-topic-nobody-wants-to-talk-about-e9991ec2bcae?source=friends_link&sk=1d947233320fb06a50b76a6b9b6603e6) →
- [**Your first 90 days as a UX writer**](https://uxdesign.cc/your-first-90-days-as-a-ux-writer-c9e9f329ffc5?source=friends_link&sk=ba74f5a01a8e698ea707c931d121f26b) →
### News & ideas
- [**Lazy-Loading**](https://web.dev/native-lazy-loading) →
Native lazy-loading is coming to the web.
- [**Tiny Pulitzer**](https://tinypulitzer.com/1) →
A sarcastic collection of short chat-stories.
- [**Kid Beatboxing**](https://beatboxingforkids.fun/) →
Teach your kid to beatbox using common words.
- [**World’s Worst UI**](https://www.fastcompany.com/90385875/this-is-the-worlds-worst-ui-and-it-speaks-volumes-about-design-today) →
Why side project User Inyerface speaks volumes about design today.



<figcaption><a href="https://www.instagram.com/yosigo_yosigo/"><strong>Featured work: Yosigo →</strong></a></figcaption>
### Tools & resources
- [**ProtoPie 4.0**](https://blog.protopie.io/meet-protopie-4-0-accelerate-your-workflow-with-components-e2c97f17bf3e) →
Everyone’s fave tool now offers components.
- [**Summarize App**](https://summarize.landen.co/) →
“What was the last book you read?”
- [**AI-Driven Design**](https://www.awwwards.com/AI-driven-design) →
E-book on how to design meaningful experiences in the AI era.
- [**Duotone Icons**](https://blog.fontawesome.com/introducing-duotone/) →
A duotone icon style now available as part of Font Awesome Pro.
* * * | fabriciot |
154,982 | What made you get into programming? | Why (and when) did you get into programming? Do you feel that the modern "everyone should code" push... | 0 | 2019-08-10T19:50:13 | https://dev.to/kalium/what-made-you-get-into-programming-3bie | inclusion, beginners, career, discuss | Why (and when) did you get into programming?
Do you feel that the modern "everyone should code" push is what got you into it? | kalium |
155,032 | How I Got an Internship in Software Development as a Computer Science student🎓 | Introduction👋 I’d like to share the different stages involved in finding an internship/placement as a... | 0 | 2019-08-10T21:54:27 | https://dev.to/yusufcodes/how-i-got-an-internship-in-software-development-as-a-computer-science-student-4587 | <b>Introduction👋</b>
I’d like to share the different stages involved in finding an internship/placement as a Computer Science student. I’m going to keep this information as brief as possible, but if you have any questions feel free to send me a message (contact details at the end of the article)!
I applied for my internship during the second year of my degree (2018–19), and you should start applying as soon as your second year begins. Internships vary in how long they are from a few weeks to an entire year. The length of your internship is totally up to you. If you go for a year-long placement however, you will need to take that year out of your degree, increasing the length of your studies by 1 year.
I study in the UK so things may be a little different where you are from — but I imagine most things are the same.
<b>Stages Involved in Finding an Internship🔎</b>
• <b>Write a CV (Curriculum Vitae) / Resume</b>
This should be a 1–2 page collation of all of your experiences to date. This should include academic attainment and any work experience you may have.
• <b>Search for jobs</b>
You should be searching for jobs including ‘placement’ or ‘internship’, for example ‘Software Developer Placement’, and so on. Places that I used to search were: my university’s job posting site, GlassDoor, [RateMyPlacement](http://www.ratemyplacement.co.uk) and Indeed. I’m sure there are other sites that also advertise these vacancies. If you know of a company you would like to work for, you could check their website directly and see if they are running any internships.
• <b>Write a Cover Letter (if required)</b>
Most vacancies will require you to submit both a CV and a cover letter. A cover letter is a short written piece where you explain why you are interested in the internship, and affirm that you are a suitable candidate. Employers usually read these and then go through your CV, so making sure this is written well is important!
• <b>Keep track of all the places that you’ve applied to</b>
I made a basic spreadsheet and detailed the following things:
> Company name
> Date applied
> Vacancy details: name, deadline for applications, job description
This is really helpful when you’ve applied to a few companies, because it means you won’t forget where you’ve applied to. You can come back to this spreadsheet as and when you receive invitations to interviews (or sadly, any rejections) to keep on top of all of the companies that you have applied for.
• <b>Set up a Personal Portfolio or a GitHub account</b>
This gives a company a way to see the types of things you’ve worked on, which could increase your chances of getting a job if they like what they see.
I personally have a couple of my university projects posted on my GitHub. You don’t need to have dozens of projects or a super pretty website — just some way to show an employer what you have done. Remember, these employers aren’t looking for a perfect coder, just <b>someone with a genuine passion to excel in the field.</b> Showcasing your work is a great way to highlight this.
• <b>Prepare for the interviews</b>
Great — you’ve been invited to an interview with a company — <b>now what?</b>
I was left feeling overwhelmed when I received offers from places that I applied for — mainly due to my lack of self-confidence. If you end up feeling like this, you’re not alone!
Preparing for interviews is a very broad topic which, if you searched online, you’d find many different resources. I’ll give a little bit of information on what I personally did, but <i>I recommend you also do your own research into interview preparation.</i>
‣ <b>Research the company</b>
Company research is so important. You’re going to be asked to show your interest in the role you applied for and the company itself, so you’ll need to do a bit of research on them. Find out exactly who they are, what they do, and things that you find interesting about them. This way, you’ll be able to demonstrate your passion and enthusiasm for both the role and the company.
‣ <b>Brush up on any technical skills</b>
For example, if they’re advertising a position working with JavaScript, make sure you know the basics. Most companies will tell you whether or not you will be undertaking a technical interview, in which case it should indicate what you may need to prepare for it.
A great way to brush up on your general programming skills is by using websites like HackerRank or LeetCode. Both websites are similar in that they provide you with programming exercises to complete, of varying levels of difficulty, in most programming languages.
Algorithms and Data Structures is an important topic to cover as well. I personally went over my university unit which covered all of this information, but if you haven’t studied the topic yet, there are many resources online to read up about it!
‣ <b>Know what you’ve written on your CV and Cover Letter</b>
The employers will most likely have the CV and Cover Letter right in front of them, so make sure you know exactly what you’ve put on there. It will make it easy for you to explain the different areas of your academic experience and other work experience you may have.
‣ <b>Dress appropriately</b>
Dressing in smart or smart-casual attire is the best idea. I’d personally stay clear of wearing casual clothes to an interview — show the company that you care by taking care of your appearance!
<b>My Tips</b>
• Make use of your university’s careers services.
• Make sure your CV and Cover Letter are perfect — no errors in spelling or grammar.
• Tailor your CV and Cover Letter to each vacancy — generic applications generally aren’t taken seriously and decrease your chances of being invited to an interview.
• Attend networking events — your university may host employability events where employers come in to talk about what they do and the jobs they have to offer. These can help you massively during the application process if an employer remembers you!
• <b>Don’t give up!</b> — Placements and internships are very competitive, so rejections are totally normal. Don’t let it knock your confidence and continue to search for and apply to more internships.
<b>Conclusion</b>
I hope this brief post is useful to you if you’re looking for a student placement / internship. Follow me on <a href="https://www.instagram.com/yusufcodes">Instagram</a> and <a href="https://www.twitter.com/yusufcodes">Twitter</a> to keep up to date with my life as a student, and things I get up to! You can also contact me through these platforms if you have any questions.
If you found this post useful, I’d greatly appreciate you sharing it on your own platforms to benefit other people! Thanks for reading😊👋 | yusufcodes | |
155,136 | Ruby Compact Internals | I encountered a weird phenomenon regarding ruby's compact method, and I thought I'd share my findings... | 0 | 2019-08-11T23:25:04 | https://dev.to/oryanmoshe/ruby-compact-array-of-hashes-19ho | ruby, internals, c, research | I encountered a weird phenomenon regarding ruby's `compact` method, and I thought I'd share my findings with you.
When working on "The Grid" dashboard here at [monday.com](https://monday.com), I got to a point where I had an array of hashes, and some of the hashes were empty.
I needed to get rid of those.
---
##Bottom line first
To get the result I wanted I just did `a.reject { |v| v.blank? }` instead of `a.compact`.
---
##Expectation VS Reality.
When I used `compact` on the array I was a bit surprised by the result.
```ruby
a = [{ key: 'value' }, { key: 'value2' }, {}]
puts a.compact
#[{ key: 'value' }, { key: 'value2' }, {}]
```
I expected the `compact` method to just go over each element in the array, check if it's `present?` and if it is return it.
Something like this:
```ruby
def compact
select { |value| value.present? }
end
```
So when I had an array with empty hashes that wouldn't go away I had to find the explanation.
---
##The implementations behind the scenes
As usual, I dug deeper for the implementations.
From past research I know ruby's Hash also has a `compact` method and I know it's pretty straightforward, for each key in the hash we check if it's `nil`, if it's not we return it.
Super easy, and similar to what I suggested above, although it wouldn't solve the issue (because it uses `nil?` instead of `blank?` or `!present?`)*
_activesupport/lib/active_support/core_ext/hash/compact.rb, line 12_
```ruby
def compact
select { |_, value| !value.nil? }
end
```
In arrays, however, the implementation is more complex: it is implemented purely in C.
```c
static VALUE
rb_ary_compact(VALUE ary)
{
ary = rb_ary_dup(ary);
rb_ary_compact_bang(ary);
return ary;
}
```
---
##C is love, C is life
If you don't have any background in C, this next part might be less interesting to you. I'll try to make it as understandable as possible, but I highly recommend learning C if you have the time!
###Pointers background
In case you don't know, the way we access our program's memory (the place where the variables are stored) is by using **pointers**.
These pointers are basically just an address in the memory where we can find the beginning of our variable (each variable takes a different amount of memory, we just get the required number of bytes starting at the pointer's position)
Arrays work similarly (actually, an array is literally a pointer), let's say we have an array of 3 characters, each character consists of one byte, all 3 bytes are stored one after the other in memory.
We can go to the first address in the array (or the pointer to that array), and take 1 byte, that's the first character. Then we take another, that's the second, and then the third.
```c
char arr[3] = {'m', 'i', 'p'};
/*
Behind the scenes it's stored like so:
... 6d 69 70 ...
^ arr[0] == *arr
... 6d 69 70 ...
^ arr[1] == *(arr + 1)
... 6d 69 70 ...
^ arr[2] == *(arr + 2)
*/
```
Another thing to note: in C, if we have a pointer stored in a variable, let's say `ptr`, this variable will contain the address to the value. To get to the value _itself_ we have to use the **"dereference" operator** (an asterisk before the variable name) like so `*ptr`.
###Back to code
Above we saw the C implementation of ruby's array `compact`, we can see a function that receives an array, duplicates it (using the `rb_ary_dup` function), `compact`s it using the `rb_ary_compact_bang` function ("bang" is the same as the "!" symbol, so it basically runs `compact!`), and then returns the duplicated, compacted array.
To find that `rb_ary_compact_bang` function I went to [ruby's source code](https://github.com/ruby/ruby/blob/9d298b9dab831f966ea4bf365c712161118dd631/array.c):
_ruby/array.c, line 5022_
```c
static VALUE
rb_ary_compact_bang(VALUE ary)
{
VALUE *p, *t, *end;
long n;
rb_ary_modify(ary);
p = t = (VALUE *)RARRAY_CONST_PTR_TRANSIENT(ary); /* WB: no new reference */
end = p + RARRAY_LEN(ary);
while (t < end) {
if (NIL_P(*t)) t++;
else *p++ = *t++;
}
n = p - RARRAY_CONST_PTR_TRANSIENT(ary);
if (RARRAY_LEN(ary) == n) {
return Qnil;
}
ary_resize_smaller(ary, n);
return ary;
}
```
Don't close the post! This code isn't friendly for people with no experience in C, but most of it is memory management, some games of allocating memory correctly, resizing arrays to prevent memory leaks etc. I can do a deeper dive in a separate post if anyone wants.
The interesting part is this one:
```c
while (t < end) {
if (NIL_P(*t)) t++;
else *p++ = *t++;
}
```
###Line by line
As I explained above, an array is just a set of variables stored one after the other in our program's memory, so we can get a pointer to the first variable, and then just increase it by 1 every time to get to the next one (`t++` increases `t` by 1)
What we have here is a loop that goes from the beginning of the array, with our current position in the `t` variable (a pointer to the array), and we iterate over every element until we reach the `end` of the array (the address of the beginning of the array + the length of the array)
Every time, we check whether the current value is `nil` (using the `NIL_P` macro defined in _include/ruby/ruby.h line 482_)
If it is `nil` we just increase the pointer, advancing one spot in our array.
If it's not `nil` we put the current value from the original `t` (remember, the `*` dereferences the pointer) into another array we keep stored in the pointer `p`, and then advance the pointer one spot.
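To make the pointer dance more concrete, here is a rough Ruby translation of the same two-pointer idea (purely illustrative, of course: the real implementation works directly on memory):

```ruby
# Illustrative Ruby version of the C two-pointer loop above:
# `t` is the read pointer scanning every slot, `p` is the write
# pointer marking where the next non-nil value should go.
def compact_in_place(ary)
  p = 0
  t = 0
  while t < ary.length
    if ary[t].nil?
      t += 1              # nil: only advance the read pointer
    else
      ary[p] = ary[t]     # copy the value down to the write position
      p += 1
      t += 1
    end
  end
  ary.slice!(p..-1)       # shrink the array, like ary_resize_smaller
  ary
end

compact_in_place([1, nil, 2, nil, 3]) # => [1, 2, 3]
```

Note that, just like the real `compact`, this only drops `nil`s, which is exactly why the empty hashes survived.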
##Bottom line
In a nutshell, it's the same as doing `a.reject { |v| v.nil? }` but more memory aware and efficient.
Just to show the difference in efficiency I benchmarked both ways against an array of 20,000,000 elements with alternating integers and `nil`s.
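The benchmark script itself isn't shown here, but a minimal version using Ruby's built-in `Benchmark` module could look like this (array size reduced so the sketch runs quickly; absolute timings will of course vary):

```ruby
require 'benchmark'

# Alternating integers and nils, as in the benchmark described above
# (size reduced from 20,000,000 so this sketch runs quickly).
a = Array.new(1_000_000) { |i| i.even? ? i : nil }

Benchmark.bm(8) do |x|
  x.report('compact') { a.compact }
  x.report('reject')  { a.reject { |v| v.nil? } }
end
```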

As you can see, array's `compact` method using C null pointer elimination is 6.6 times faster than the `reject` method!
I hope you enjoyed learning a bit about ruby's internal mechanisms!
\* This is the activesupport implementation of Hash `compact`, if you'd like to see the C implementation you can go to _[ruby/hash.c](https://github.com/ruby/ruby/blob/4daff3a603d1a8b2656e82108e2f7d0abf8103c9/hash.c) line 4110_, but it's pretty similar! | oryanmoshe |
155,253 | A Bank Should Authenticate With You - And Vice Versa | I don't often see security-related articles on dev.to so I might as well get started. A while back,... | 0 | 2019-08-16T15:46:50 | https://dev.to/antjanus/a-bank-should-authenticate-with-you-and-vice-versa-db1 | security | I don't often see security-related articles on dev.to so I might as well get started.
A while back, I saw a tweet from [Troy Hunt](https://twitter.com/troyhunt) about having a bank authenticate with him over the phone as well as the other way around. I think about that tweet a lot and recently, I was in a situation where that type of thinking came in handy.
## How we authenticate with services over the phone
You know how it is, you call your bank or your internet provider and they ask you a series of questions:
1. what are the last 4 of your SSN?
2. what's your address?
3. First and last name on the account?
4. Last 4 digits of your bank account number?
We provide some of this information, sometimes several times which gets frustrating.
It gives our bank, service provider, or whomever a reasonable-enough amount of trust that you are who you are.
Online on the other hand, you do this by providing your username/password. Or when you forget your username, you might get a reset email. Again, the service provider/bank trusts that your email is a trustworthy method of authentication.
(sidenote: some banks may require extra online authentication for password reset, like your Debit card information).
## But how do services authenticate with us?
Let's start with `online` services. This one is *easy*. SSL.
Yeah, SSL. We trust that little lock in the top left next to the URL to tell us that we're not on a site that *looks* like our target site but is not (check out `dev.to`'s certificate!). SSL works in a funny way.
Browsers trust certain certification entities and have their encryption (public) keys. This way, anyone *that entity* trusts, the browsers trusts as well. It's a chain of trust! When you initiate a connection, the browser checks against that chain of trust, and tells you if you're secure or not and who issued the certificate and for what URL. And so on.
Here's the thing, banks and other entities get a more *secure* SSL certificate which in turn displays their information directly in the browser! Go to your bank site and check out the address bar. You'll notice that you'll see the Bank's full legal name in the address bar! If you click through, you'll even get their address.
*Easy* verification that you're on the right site.
But how does a bank or a provider authenticate with you over the phone?
They don't, do they?
## The problem with unauthenticated phone calls
Here's the thing. My wife's bank called her yesterday to verify something (a new account opening) and they asked her a verification question: how many accounts do you currently have open with the bank? She answered, it didn't match what the person over the phone saw on their screen. Red flag. BIG red flag. I seriously expected them to start asking more personal questions (full SSN, full debit card number, etc.).
We looked up the phone number online and received mixed reports. The phone number was listed as a scam. Other times, it was listed as an official number. Which was it? More importantly, it could've been spoofed! (just like you can spoof emails!).
So what do you do? We confronted the caller. We asked her, "How do we know you're from the bank?". And she was taken aback. She insisted she's with the bank.
How could *we* know however? How can a bank authenticate with us without divulging very important information? And how can we authenticate with a bank without divulging very important information to the wrong party?
## Phone authentication methods
Eventually, the caller came up with an idea. She gave us the timestamp of when my wife applied to open the account. Day, hour, minute. Not super important information but it proved it was the bank. And we moved forward. The account mismatch (the original authentication question) happened because you can have an active but closed account. Or an inactive but open account. And we moved forward.
Here are some other methods I've seen:
1. authentication PIN. You set one for yourself, and one for the service provider. During a phone call, you authenticate with basic info (address, name, etc.) and then exchange PIN numbers
2. simultaneous online authentication. A PIN appears on your and their screen during a conversation. Upside is that this has to be triggered by the service provider
3. unimportant but private information exchange. The caller telling my wife *when* she applied for an account was proof enough for example.
## Have you had any similar experiences?
Whenever someone calls me these days, I immediately look up unknown numbers and I tend to be skeptical whenever the caller claims to know me somehow (like a bank, or electric company, or whomever). How about you?
[Credit for cover goes to Tomasz Frankowski](https://unsplash.com/@sunlifter?utm_medium=referral&utm_campaign=photographer-credit&utm_content=creditBadge) | antjanus |
155,289 | Internal and external connectivity in Kubernetes space | Services and networking — from ClusterIP to Ingress As you are making your way through a... | 1,815 | 2019-08-11T22:54:37 | http://rastko.tech/kubernetes/2019/07/13/services-and-networking.html | kubernetes, services, nginx | # Services and networking — from ClusterIP to Ingress

As you are making your way through all the stages of your app’s development and you (inevitably) get to consider using Kubernetes, it is time to understand how your app components connect to each other and to outside world when deployed to Kubernetes.
> Knowledge you will get from this article also covers “services & networking” part of [CKAD exam](https://www.cncf.io/certification/ckad/), which currently takes 13% of the certification exam curriculum.
## Services
**Kubernetes services** provide networking between different components within the cluster and with outside world (open internet, other applications, networks etc).
There are different kinds of services, and here we’ll cover some:
* *NodePort* — service that exposes a Pod through a port on the node
* *ClusterIP* — service that creates a virtual IP within the cluster to enable different services to talk to each other.
* *LoadBalancer* — creates (provisions) a load balancer for a set of servers in the kubernetes cluster.
### NodePort
NodePort service maps (exposes) port on the Pod to a port on the Node. There are actually 3 ports involved in the process:
* *NodePort* service exposes a deployment (set of pods) outside of the k8s node
* *targetPort* — port on the Pod (where your app listens). This is an optional parameter; if not present, *port* is used
* *port* — port on the service itself (usually the same as the pod port)
* *nodePort* — port on the node that is used to access the web server externally. By default, nodePort must be in the range between 30000 and 32767. This is an optional parameter; if not present, a random available port in the valid range will be assigned.
#### Creating the NodePort service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-nodeport-service
spec:
  type: NodePort
  ports:
  - targetPort: 6379
    port: 6379
    nodePort: 30666
  selector:
    db: redis
```
#### Linking pods to a service
**Selectors** are the way to refer (link) service to certain set of pods.
As set of pods gets selected based on the selector (in almost all cases, pods from same deployment), **service starts sending traffic to all of them in random manner effectively acting as load balancer**.
> If the mentioned pods are distributed across the nodes, the service will be created across the nodes to be able to link all the pods. In the case of a multi-node service, the service exposes the same port on all nodes.
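For illustration, the `k8s-nodeport-service` above selects pods labelled `db: redis`, so any Deployment whose pod template carries that label gets linked to it. A minimal sketch (name, image and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 3
  selector:
    matchLabels:
      db: redis
  template:
    metadata:
      labels:
        db: redis               # the label the service’s selector matches on
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379   # the service’s targetPort points here
```

With three replicas, the service spreads incoming traffic across all three pods, as described above.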
### ClusterIP
In the case of an application consisting of multiple tiers deployed to different sets of pods, a way to establish communication between the different tiers inside the cluster is necessary.
For example, we have:
* 5 pods of API number 1
* 2 pods of API number 2
* 1 pod of redis
* 10 pods of frontend app
Each of above mentioned 18 pods have their own distinct IP addresses, but making communication that way would be:
* Unstable since pods can die and be recreated with new IP any time.
* Inefficient since we would have to load-balance within the integration part of each app.
**ClusterIP service provides us with unified interfaces to access each group of pods — it provides a group of pods with internal name/IP.**
```yaml
apiVersion: v1
kind: Service
metadata:
name: api-1
spec:
type: ClusterIP
ports:
- targetPort: 9090
port: 9090
selector:
app: api-1
type: backend
```
**ClusterIP** is the default type of service, so if a service type is not specified, k8s assumes ClusterIP.
When this service gets created, other applications within the cluster can access the service through service IP or service name.
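To illustrate, assuming cluster DNS is running, any pod in the same namespace can now reach this service simply as `http://api-1:9090`. A sketch of wiring that into a consumer’s container spec (names and image are illustrative):

```yaml
containers:
- name: frontend
  image: my-frontend             # illustrative image name
  env:
  - name: API_1_URL
    value: "http://api-1:9090"   # "api-1" is resolved by cluster DNS
```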
### LoadBalancer
In short, the LoadBalancer type of service provisions an external load balancer in cloud space, depending on provider support.
The deployed load balancer will act as a NodePort, but with more advanced load balancing features; it will also act as an additional proxy in front of the NodePort, giving you a new IP and some standard web port mapping (30666 > 80). As you can see, its features position it as the main way to expose a service directly to the outside world.

The main downside of this approach is that any service you expose needs its own load balancer, which can, after a while, have a significant impact on complexity and price.
Let’s briefly review the possibilities:
```yaml
apiVersion: v1
kind: Service
metadata:
name: lb1
spec:
externalTrafficPolicy: Local
type: LoadBalancer
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
app: lb1
```
The above creates an external load balancer and provisions all the networking setup needed for it to load balance traffic to the nodes.
> **Note from k8s docs**: With the new functionality, the external traffic will not be equally load balanced across pods, but rather equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability for specifying the weight per node, they balance equally across all target nodes, disregarding the number of pods on each node).
If you want to add an AWS ELB as the external load balancer, you need to add the following annotations to the load balancer service metadata:
```yaml
annotations:
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
```
## Ingress
### Real life cluster setup
When we get into the space of managing more than one web server with multiple different sets of pods, the above mentioned services turn out to be quite complex to manage in most real life cases.
Let’s review the example we had before (2 APIs, redis and frontend) and imagine that the APIs have more consumers than just the frontend service, so they need to be exposed to the open internet.
Requirements are as following:
* frontend lives on *www.example.com*
* API 1 is search api at *www.example.com/api/search*
* API 2 is general (everything else) api that lives on *www.example.com/api*
Setup needed using above services:
* **ClusterIP** service to make components easily accessible to each other within the cluster.
* **NodePort** service to expose some of the services outside of node, or maybe
* **LoadBalacer** service if in the cloud, or
* **proxy server** like nginx, to connect and route everything properly (30xxx ports to port 80, different services to paths on the proxy etc)
* **Deciding on where to do SSL implementation and maintaining it across**
### So
ClusterIP is necessary, we know it has to be there — it is the only one handling internal networking, so it is as simple as it can be.
External traffic, however, is a different story: we have to set up at least one service per component, plus one or more supplementary services (load balancers and proxies), in order to achieve the requirements.
> ## Number of configs / definitions to be maintained skyrockets, entropy rises, infrastructure setup drowns in complexity…
### Solution
A Kubernetes cluster has **ingress** as a solution to the above complexity. Ingress is, essentially, a layer 7 load balancer.
> **Layer 7** load balancer is the name for a type of load balancer that covers layers 5, 6 and 7 of networking, which are **session**, **presentation** and **application**
Ingress can provide load balancing, SSL termination and name-based virtual hosting.
It covers HTTP and HTTPS.
> For anything other than HTTP and HTTPS, a service will have to be published differently, through a special ingress setup or via a NodePort or LoadBalancer, but that is now a single-place, one-time configuration.
#### Ingress setup
In order to setup ingress we need two components:
* **Ingress controller** — component that manages ingress based on provided rules
* **Ingress resources** — Ingress HTTP rules
#### Ingress controller
There are few options you can choose from, among them nginx, GCE (google cloud) and Istio. Only two are officially supported by k8s for now — nginx and GCE.
We are going to go with **nginx** as the ingress controller solution. For this we, of course, need new deployment.
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
```
Deploy a ConfigMap in order to control ingress parameters more easily:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-configuration
```
Now, with the basic deployment in place and a ConfigMap to make it easier for us to control parameters of the ingress, we need to set up the service to expose the ingress to the open internet (or some other smaller network).
For this we set up a NodePort service with a proxy/load balancer on top (bare-metal/on-prem example) or a LoadBalancer service (cloud example).
In both mentioned cases, there is a need for Layer 4 and Layer 7 load balancer:
* NodePort and possibly custom load balancer on top as L4 and Ingress as L7.
* LoadBalancer as L4 and Ingress as L7.
> *Layer 4 load balancer* — Directing traffic from network layer based on IP addresses or TCP ports, also referred to as transport layer load balancer.
NodePort for ingress yaml, to illustrate the above:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    protocol: TCP
    name: http
  - targetPort: 443
    port: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress
```
This NodePort service gets deployed to each node containing the ingress deployment, and then a load balancer distributes traffic between the nodes.
What separates an ingress controller from a regular proxy or load balancer is the additional underlying functionality that monitors the cluster for ingress resources and adjusts nginx accordingly. In order for the ingress controller to be able to do this, a service account with the right permissions is needed.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
```
> Above service account needs specific permissions on cluster and namespace in order for ingress to operate correctly, for particularities of permission setup on [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) enabled cluster [look at this document in nginx ingress official docs](https://kubernetes.github.io/ingress-nginx/deploy/rbac/).
When we have all the permissions set up, we are ready to start working on our application ingress setup.
#### Ingress resources
Ingress resources configuration lets you fine tune incoming traffic (or fine-route).
Let’s first take a simple API example. Assuming that we have just one set of pods deployed and exposed through a service named simple-api-service on port 8080, we can create *simple-api-ingress.yaml*.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: simple-api-ingress
spec:
backend:
serviceName: simple-api-service
servicePort: 8080
```
When we **kubectl create -f simple-api-ingress.yaml** we set up an ingress that routes all incoming traffic to simple-api-service.
#### Rules
Rules provide the configuration to route incoming traffic based on certain conditions, for example routing traffic to different services within the cluster based on a subdomain or a path.
Let us now get to initial example:
* frontend lives on **www.example.com** and everything **not /api**
* api 1 is search api at **www.example.com/api/search**
* api 2 is general (everything else) api that lives on **www.example.com/api**
Since everything is on the same domain, we can handle it all through one rule:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proper-api-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api/search
        backend:
          serviceName: search-api-service
          servicePort: 8081
      - path: /api
        backend:
          serviceName: api-service
          servicePort: 8080
      - path: /
        backend:
          serviceName: frontend-service
          servicePort: 8082
```
There is also a **default** backend that is used to serve default pages (like 404s) and it can be deployed separately. In this case we will not need it since frontend will cover 404s.
You can read more at https://kubernetes.io/docs/concepts/services-networking/ingress/

#### Bonus — More rules, subdomains and routing
And, what if we changed the example to:
* frontend lives on **app.example.com**
* api 1 is search api at **api.example.com/search**
* api 2 is general (everything else) api that lives on **api.example.com**
It is also possible, with the introduction of a new structure in the rule definition:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proper-api-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /search
        backend:
          serviceName: search-api-service
          servicePort: 8081
      - path: /
        backend:
          serviceName: api-service
          servicePort: 8080
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-service
          servicePort: 8082
```
> **Note (out of scope)**: You can notice from the last illustration that there are multiple ingress pods, which implies that ingress can scale, and it can. Ingress can be scaled like any other deployment; you can also have it auto scale based on internal or external metrics (external metrics, like the number of requests handled, are probably the best choice).
> **Note 2 (out of scope)**: Ingress can, in some cases, be deployed as DaemonSet, to assure scale and distribution across the nodes.
#### Wrap
This was a first pass through the structure and usage of k8s services and networking capabilities that we need in order to structure communication inside and outside of the cluster.
I, as always, tried to provide a to-the-point and battle-tested guide to reality… What is written above should give you enough knowledge to deploy ingress and set up a basic set of rules to route traffic to your app, and give you context for further fine-tuning of your setup.
> **Important piece of advice:** Make sure to keep all the setups as a code in files, in your repo — infrastructure as a code is essential part of making your application reliable. | metaphorical |
155,347 | There are very few cases where the language choice actually matters? | The title is a reference to this blog post. The author writes Before I talk about how I evaluate la... | 0 | 2019-08-12T02:43:48 | https://dev.to/bachmeil/there-are-very-few-cases-where-the-language-choice-actually-matters-64k | languagechoice | The title is a reference to [this blog post](https://stackoverflow.blog/2019/08/07/what-every-developer-should-learn-early-on/). The author writes
> Before I talk about how I evaluate languages, I want to make something very clear. There are very few cases where the language choice actually matters. There are things you can obviously not do in some languages. If you write frontend code, you don’t get a language choice. There are also specific contexts where performance is important and X language just won’t do, those situations are quite rare. In general, language choice is usually one of the least important issues for a project.
I strongly disagree with this statement *in general*. Of course there are many cases where it is true, but as a general statement that every developer should learn, I don't think it's anywhere close to correct.
I would say that one of the most important things you can do is choose the correct language. Here are some things that you do need to take into account when you're working on a software project:
- What are the performance requirements? This is not limited to the small subset of programming problems, like high frequency trading, where you need to squeeze every ounce of performance from the processor. Is the interface responsive? Will the program scale? Can I write in an idiomatic style, or do I have to resort to things like vectorization, which destroys readability and is hard to do? Performance concerns are not as big as they used to be, but they are not rare. Jeff Atwood had [a few things to say about performance a while back](https://meta.discourse.org/t/the-state-of-javascript-on-android-in-2015-is-poor/33889).
- How am I going to model the problem? Sometimes a functional approach is best, sometimes an OOP approach is best, and sometimes you just want to write a quick script. Haskell's not the right choice for an OOP program. Java's not the right choice for a script. But it's more than that. We resist doing things that our language doesn't do well. We limit our projects to the things we can do easily in the language we've chosen to use.
- Which libraries do I need? Not all languages have equal access to libraries. Even if they exist, they might not be complete, they might not be supported, they might not be documented well, or they might be awkward to use. Poor/nonexistent library support can sink a project or require a costly rewrite.
- What parts of the ecosystem have to be strong? If you prefer using an IDE, some languages are better choices than others. If you run into trouble, is there a community that can help you? Is there good documentation to guide you to do what you need to do?
- Which languages are fun to use? Nothing can kill your motivation like having to use a crappy language. If you dread the thought of open the project up because the language sucks, I can say with certainty that you will be less productive. If a language is fun, and you enjoy learning about it, you will leave for the office 30 minutes early and stay 30 minutes late at the end of the day.
- Which languages do you know well? Do you need to hire others to work on the project?
It's surprisingly common to see people say the choice of language is unimportant. I find that argument to be bizarre. You wouldn't expect a mechanic to say "The choice of tools is not important. If all you have is a hammer and a socket set, use that." If you ordered an Uber and the driver showed up in a Kenworth T800, it's not likely that you'd think "The choice of vehicle isn't important. All we need is a tool to take me from my house to the airport." Sure, you'd have plenty of space for luggage, but it would not seem to be an unimportant implementation detail. And you wouldn't say the choice of spouse is an unimportant detail so long as they are capable of carrying out the spousal duties.
No, the choice of language matters, and it matters a lot. I won't speculate on why someone might claim otherwise. It is an observation that conflicts with my experience. | bachmeil |
155,723 | Do you use Axios or Fetch? | As a lover of all things programming, I send a lot of time exploring why we follow certain convention... | 0 | 2019-08-12T23:09:06 | https://dev.to/jasterix/do-you-use-axios-or-fetch-2a7p | discuss, axios, javascript, ajax | As a lover of all things programming, I send a lot of time exploring why we follow certain conventions.
More recently, I've been playing around with hosting an API on Heroku. Most of the tutorials used Axios, with no thought to the decision. Several hours later, it's still not clear if
1) the industry is moving towards using Axios
2) there are certain scenarios where Axios is the best tool
3) it's something else altogether
Would love to hear your thoughts on the two
This[Youtube video](https://www.youtube.com/watch?v=965sASYM220) offers an interesting POV, but he ultimately defaults to Fetch | jasterix |
155,724 | Let's Build A Currency Exchange Part II | React Hooks | GraphQL | Apollo | Node.js | 0 | 2019-08-19T11:38:04 | https://dev.to/marlonanthony/let-s-build-a-currency-exchange-part-ii-eh0 | react, graphql, apollo, javascript | ---
title: Let's Build A Currency Exchange Part II
published: true
description: React Hooks | GraphQL | Apollo | Node.js
tags: react, graphql, apollo, javascript
cover_image: https://store-images.s-microsoft.com/image/apps.43630.14323591455066440.4fa2d451-8a1f-44fd-a536-5fda778060ee.5ad45e3a-3125-4acf-8a9f-8c6dc46f060e?mode=scale&q=90&h=1080&w=1920
---
In this half of the tutorial we'll focus on the frontend. The code for this project is on my [GitHub](https://github.com/marlonanthony/forex). You can find the first half of this article [here](https://dev.to/marlonanthony/let-s-build-a-currency-exchange-part-i-52g1). We'll store the code for our frontend in a folder named `client`. Create `client` at the root level, cd into this folder and run the following command in the terminal:
```javascript
npx create-react-app .
```
We use `npx` so that we don't have to install create-react-app globally. Run the following command in your terminal and let's get our dependencies:
```jsx
npm i @apollo/react-hooks apollo-cache-inmemory apollo-client apollo-link-http graphql-tag react-chartjs-2 chart.js react-router-dom
```
With our dependencies in tow, let's do a little spring cleaning. Delete `logo.svg`, `serviceWorker.js`, `App.test.js` and `App.css`. Now remove their imports (and all those weird semicolons galavanting about) from `index.js` and `App.js`. Afterwards, adjust `index.js` such that it resembles the below code:
```jsx
// index.js
import React from 'react'
import ReactDOM from 'react-dom'
import { BrowserRouter } from 'react-router-dom'
import { ApolloClient } from 'apollo-client'
import { ApolloProvider } from '@apollo/react-hooks'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { HttpLink } from 'apollo-link-http'
import App from './App'
import './index.css'
const cache = new InMemoryCache()
const client = new ApolloClient({
cache,
link: new HttpLink({
uri: 'http://localhost:4000/graphql',
credentials: 'include'
})
})
ReactDOM.render(
<ApolloProvider client={client}>
<BrowserRouter>
<App />
</BrowserRouter>
</ApolloProvider>, document.getElementById('root')
)
```
First, we handle our imports. Then we create a new instance of `InMemoryCache` and add it to our `ApolloClient` configuration Object. We use `HttpLink` to hit our GraphQL API and just as we did in the [GraphQL Playground](http://localhost:4000/graphql), we add `credentials: 'include'` to ensure that our cookie is sent along with every request.
Inside of our render function we wrap everything with React Router's `BrowserRouter`. `react-router` describes `BrowserRouter` as, "A router that uses the HTML5 history API to keep your UI in sync with the URL."
We pass `ApolloProvider` our new instance of `ApolloClient` so that later we can `consume` it (akin to the React `Context` API). As I write this `@apollo/react-hooks` is a nice ripe age of one day old. This is a minified version of `react-apollo` which doesn't offer render prop functionality, but reduces bundle size by 50%!
Open up `App.js` and add the following code:
```jsx
// App.js
import React from 'react'
import { Route } from 'react-router-dom'
import Landing from './pages/Landing'
const App = () => <Route exact path='/' component={ Landing } />
export default App
```
React Router's `Route` component allows us to define a routes `path`, and assign said path a component to be rendered. In our case this component is `Landing`. Create a `pages` folder inside of the `src` folder. Inside `pages` create a new file and name it `Landing.js`. Insert the following code:
```jsx
// Landing.js
import React from 'react'
const Landing = () => <div>Hello world!</div>
export default Landing
```
Once more, we demonstrate our respect for tradition and muster our most majestic, 'Hello world' yet! Nothing quite tucks me in like a well-groomed, "Hello world!"
Inside the `src` folder, create another folder and name it `graphql`. Inside of this folder create two subfolders: `mutations` and `queries`. Inside of `queries` create a new file and name it `currencyPairInfo.js`.
Add the following code:
```javascript
// currencyPairInfo.js
import gql from 'graphql-tag'
export const CURRENCY_PAIR_INFO = gql`
query CurrencyPairInfo($fc: String, $tc: String) {
currencyPairInfo(tc: $tc, fc: $fc) {
fromCurrency
fromCurrencyName
toCurrency
toCurrencyName
exchangeRate
lastRefreshed
timeZone
bidPrice
askPrice
}
}
`
```
First, we import `gql` from `graphql-tag` so that we can define our mutations and queries. Inside of this file we're doing the exact same thing we did in the GraphQL Playground, except we add an additional name (`CurrencyPairInfo`) to our query and further describe the shape of our schema. Finally, we store this query in the constant `CURRENCY_PAIR_INFO`.
Now that we have our query, let's return to Landing.js and use it.
```jsx
// Landing.js
import React, { useState } from 'react'
import { useQuery } from '@apollo/react-hooks'
import { CURRENCY_PAIR_INFO } from '../graphql/queries/currencyPairInfo'
const Landing = () => {
const [ fc, setFc ] = useState('EUR'),
[ tc, setTc ] = useState('USD'),
{ data, loading, error, refetch } = useQuery(CURRENCY_PAIR_INFO, {
variables: { fc, tc }
})
if(loading) return <p>Loading...</p>
if(error) return <button onClick={() => refetch()}>Retry</button>
return data && (
<section>
<h3>Currency Exchange</h3>
<div>
<select
value={`${fc}/${tc}`}
onChange={e => {
const [ fromCurrency, toCurrency ] = e.target.value.split('/')
setFc(fromCurrency)
setTc(toCurrency)
}}>
<option>EUR/USD</option>
<option>JPY/USD</option>
<option>GBP/USD</option>
<option>AUD/USD</option>
<option>USD/CHF</option>
<option>NZD/USD</option>
<option>USD/CAD</option>
</select>
<button onClick={() => refetch()}>refresh</button>
</div>
<div className='landing_pair_data'>
{ data.currencyPairInfo && Object.keys(data.currencyPairInfo).map(val => (
<div key={val} className='data'>
<p><span>{val}: </span>{ data.currencyPairInfo[val] }</p>
</div>
))}
</div>
</section>
)
}
export default Landing
```
We import `useQuery` from `@apollo/react-hooks`, the query we wrote in `currencyPairInfo.js` and `useState` from `React`. Instead of using a class component to initialize state via `this.state`, and later using `setState` to update it, we're going to be using the React Hook `useState`. `useState` takes the initial state as an argument and returns the current state and a function to update said state. This state will be used to collect user input. We provide our query this input as `variables` and `useQuery` returns the response.
The most traded pairs of currencies in the world are called the Majors. They constitute the largest share of the foreign exchange market, about 85%, and therefore they exhibit high market liquidity. The Majors are: EUR/USD, USD/JPY, GBP/USD, AUD/USD, USD/CHF, NZD/USD and USD/CAD. These are the currency pairs we'll provide to our users.
We create a `select` list, each `option` providing the variables to our query. These options make up the Majors. Apollo provides a `refetch` function that will reload the given query. We place this function in a button so that `onClick` the user can get up-to-date data. Take heed not to ping the Alpha Vantage API too often. If you send too many request, they'll graciously provide you with a timeout lasting a few seconds. Just enough time to ponder your insolence.
Our data is returned to us via `data.currencyPairInfo`. We map over said data and provide it to the DOM. You'll notice we're rendering `__typename: PairDisplay`. Apollo Client uses `__typename` and `id` fields to handle cache updates. If you query a different currency pair, then query the original pair again, you'll notice that the previous pairs data is instantly available via `apollo-cache-inmemory`.
I can't stare at our data pressed up against the left margin like this. Head into index.css and just add a quick `text-align: center` to the `body`.
With that quick aside, let's clean up `Landing.js`. Create a new folder in `src` and call it `components`. Inside of `components` create a `pairs` folder. Inside of `pairs` create a new file `SelectList.js` and insert the following:
```jsx
// SelectList.js
import React from 'react'
const SelectList = ({ fc, setFc, tc, setTc }) => (
<select
value={`${fc}/${tc}`}
onChange={e => {
const [ fromCurrency, toCurrency ] = e.target.value.split('/')
setFc(fromCurrency)
setTc(toCurrency)
}}>
<option>EUR/USD</option>
<option>JPY/USD</option>
<option>GBP/USD</option>
<option>AUD/USD</option>
<option>USD/CHF</option>
<option>NZD/USD</option>
<option>USD/CAD</option>
</select>
)
export default SelectList
```
Back in Landing.js replace `select` with `SelectList` and pass the necessary props.
```diff
import React, { useState } from 'react'
import { useQuery } from '@apollo/react-hooks'
import { CURRENCY_PAIR_INFO } from '../graphql/queries/currencyPairInfo'
+import SelectList from '../components/SelectList'
const Landing = () => {
const [ fc, setFc ] = useState('EUR'),
[ tc, setTc ] = useState('USD'),
{ data, loading, error, refetch } = useQuery(CURRENCY_PAIR_INFO, {
variables: { fc, tc }
})
if(loading) return <p>Loading...</p>
if(error) return <button onClick={() => refetch()}>Retry</button>
return data && (
<section>
<h3>Currency Exchange</h3>
<div>
+ <SelectList fc={fc} tc={tc} setFc={setFc} setTc={setTc} />
<button onClick={() => refetch()}>refresh</button>
</div>
<div className='landing_pair_data'>
{ data.currencyPairInfo && Object.keys(data.currencyPairInfo).map(val => (
<div key={val} className='data'>
<p><span>{val}: </span>{ data.currencyPairInfo[val] }</p>
</div>
))}
</div>
</section>
)
}
export default Landing
```
Much better! Now that we're receiving data from the Aplha Vantage API let's get to navigation. Open up `App.js` and make the following adjustments:
```jsx
// App.js
import React from 'react'
import { Route, Switch } from 'react-router-dom'
import Landing from './pages/Landing'
import Navbar from './components/navbar/Navbar'
const App = () => (
<main>
<div className='navbar'><Navbar /></div>
<Switch>
<Route exact path='/' component={ Landing } />
</Switch>
</main>
)
export default App
```
We import `Switch` from `react-router-dom` and a file named `Navbar` that we're about to create. The `Switch` component renders the first child (`Route` or `Redirect`) that matches a routes `path` and displays it.
Inside of `components` create a new folder and call it `navbar`. Inside create a new file named `Navbar.js` and insert the following:
```jsx
// Navbar.js
import React from 'react'
import { NavLink } from 'react-router-dom'
import './Navbar.css'
const Navbar = () => (
<div className='navigation'>
<header><NavLink exact to='/'>Forex</NavLink></header>
<ul>
<li><NavLink exact to="/login">Login</NavLink></li>
<li><NavLink exact to='/register'>Sign Up</NavLink></li>
<li>Logout</li>
</ul>
</div>
)
export default Navbar
```
This article is not about styling. I wanted to be careful not to pollute the codebase with styled components, making it both time consuming and harder for some to reason about the logic. For this reason, I've decided to use only two CSS files: `index.css` and `Navbar.css`. We'll be using very little CSS — just enough for dark mode. 😎
Inside of the `navbar` folder create `Navbar.css` and insert the below code:
```css
/* Navbar.css */
.navbar { margin-bottom: 55px; }
.navigation {
position: fixed;
left: 0;
top: 0;
background: var(--secondary-color);
width: 100vw;
height: 55px;
display: flex;
justify-content: space-between;
align-items: center;
}
.navigation header a {
text-decoration: none;
color: var(--header-text-color);
margin-left: 10px;
}
.navigation ul {
display: flex;
list-style: none;
margin-right: 15px;
}
.navigation li {
margin: 0 15px;
color: var(--header-text-color);
}
.navigation li:hover {
cursor: pointer;
color: var(--main-color);
}
.navigation a {
text-decoration: none;
color: var(--header-text-color);
}
.navigation a:hover,
.navigation a:active,
.navigation a.active {
color: var(--main-color);
}
```
Adjust `index.css` to the following:
```css
/* index.css */
/* Global */
* {
--main-color: rgb(0,0,0);
--secondary-color: rgb(55,131,194);
--text-color: rgba(200,200,200, 0.6);
--header-text-color: rgb(200,200,200);
}
body {
font-family: Arial, Helvetica, sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
margin: 0;
background: var(--main-color);
text-align: center;
box-sizing: border-box;
}
a { text-decoration: none; color: rgb(0,0,0); }
section { padding-top: 50px; }
span { color: var(--secondary-color); }
p { color: var(--text-color); font-size: 14px; }
p:hover { color: rgba(200,200,200, 0.4); }
h1,h2, h3, h4 { color: var(--header-text-color); }
button, select { cursor: pointer; }
/* Landing && Pair */
.landing_pair_data {
margin: 20px 0 20px calc(50% - 170px);
padding: 20px;
width: 300px;
border-radius: 20px;
box-shadow: 1px 1px 1px 1px var(--secondary-color),
-1px -1px 1px 1px var(--secondary-color);
}
.data {
border-bottom: 1px solid var(--secondary-color);
width: 280px;
margin-left: calc(50% - 140px);
text-align: start;
text-transform: capitalize;
padding: 2px 2px 2px 0;
}
.modal {
position: absolute;
background: rgb(225,225,225);
color: var(--main-color);
width: 280px;
left: calc(50% - 160px);
top: 25%;
padding: 20px;
animation: modal .5s;
}
.modal p {
color: var(--main-color);
}
@keyframes modal {
from { opacity: 0; }
to { opacity: 1; }
}
/* Account */
.pair_divs {
padding: 20;
border: 1px solid rgba(255,255,255,0.1);
border-radius: 5px;
width: 400px;
margin: 10px auto;
}
.pair_divs p {
text-align: start;
padding-left: 20px;
}
.pair_divs:hover {
border: 1px solid rgba(55,131,194, 0.3);
}
/* Chart */
.chartData {
padding-top: 50px;
height: calc(100vh - 105px);
}
.chartData form input,
.chartData form button {
margin: 10px;
}
/* Login && Register */
.login input,
.register input {
padding: 5px;
margin: 10px 0px;
width: 60%;
max-width: 400px;
background: var(--main-color);
color: var(--header-text-color);
font-size: 13px;
}
.login form,
.register form {
display: flex;
justify-content: center;
flex-direction: column;
align-items: center;
}
```
These two files represent the entirety of our CSS. Save your files and take a look at the browser.

Now that we have our navbar, let's create a register route. Inside of `graphql/mutations` create a new file named `register.js` and insert the below code:
```javascript
// graphql/mutations/register.js
import gql from 'graphql-tag'
export const REGISTERMUTATION = gql`
mutation RegisterMutation($email: String!, $password: String!, $name: String!) {
register(email: $email, password: $password, name: $name)
}
`
```
Inside `components`, create a new folder and name it `auth`. Inside of `auth` create a new file and name it `Register.js`. Insert the following:
```jsx
// components/auth/Register.js
import React, { useState } from 'react'
import { useMutation } from '@apollo/react-hooks'
import { REGISTERMUTATION } from '../../graphql/mutations/register'
export default function Register(props) {
const [ email, setEmail ] = useState(''),
[ password, setPassword ] = useState(''),
[ name, setName ] = useState(''),
[ register, { error } ] = useMutation(REGISTERMUTATION, {
variables: { email, password, name }
})
return (
<div className='register'>
<form onSubmit={ async e => {
e.preventDefault()
await register()
props.history.push('/login')
}}>
<h2>Sign Up</h2>
<input
required
name='email'
type='email'
value={ email }
onChange={ e => setEmail(e.target.value) }
placeholder='Enter your email'
/>
<input
required
type='password'
value={ password }
onChange={ e => setPassword(e.target.value) }
placeholder='Enter your password'
/>
<input
required
type='text'
value={ name }
onChange={ e => setName(e.target.value) }
placeholder='Enter your name'
/>
{ error && <p>{ error.message }</p> }
<button>SignUp</button>
</form>
</div>
)
}
```
We use a `form` to collect the users data and place it in the components state. `onSubmit`, we pass the state as `variables` to the `register` mutation. Since we're setting the emails input `type` to 'email' and passing the `required` prop, we won't see the error we threw on the backend. We're not comparing passwords so there's no error to be thrown there. The only error we created that will make it to us is 'User already exists.' That's why I'm not checking for individual errors and just displaying the error under all of the inputs.
Open up `App.js`. Import `Register.js` and create the Register components `Route`.
```jsx
// App.js
import React from 'react'
import { Route, Switch } from 'react-router-dom'
import Landing from './pages/Landing'
import Navbar from './components/navbar/Navbar'
import Register from './components/auth/Register'
const App = () => (
<main>
<div className='navbar'><Navbar /></div>
<Switch>
<Route exact path='/' component={ Landing } />
<Route path='/register' component={ Register } />
</Switch>
</main>
)
export default App
```
If you navigate to our Register component, you'll be able to register a new user. We can confirm this by checking our [database](https://www.mongodb.com/cloud/atlas).
Inside of `graphql/mutations` create a new file, name it `login.js` and insert the following:
```javascript
// graphql/mutations/login.js
import gql from 'graphql-tag'
export const LOGINMUTATION = gql`
mutation LoginMutation($email: String!, $password: String!) {
login(email: $email, password: $password) {
id
email
name
}
}
`
```
Inside of `graphql/queries` create a new file named `me.js` and add the following code:
```javascript
// graphql/queries/me.js
import gql from 'graphql-tag'
export const MEQUERY = gql`
query MeQuery {
me {
id
email
name
bankroll
}
}
`
```
Head to the `auth` folder, create a new file and name it `Login.js`. Adjust Login.js such that it resembles the below code:
```jsx
// Login.js
import React, { useState } from 'react'
import { useMutation } from '@apollo/react-hooks'
import { MEQUERY } from '../../graphql/queries/me'
import { LOGINMUTATION } from '../../graphql/mutations/login'
export default function Login(props) {
const [ email, setEmail ] = useState(''),
[ password, setPassword ] = useState(''),
[ login, { error } ] = useMutation(LOGINMUTATION, {
variables: { email, password },
update: (cache, { data }) => {
if(!data || !data.login) return
cache.reset()
cache.writeQuery({
query: MEQUERY,
data: { me: data.login }
})
}
})
return (
<div className='login'>
<form onSubmit={ async e => {
e.preventDefault()
await login()
props.history.push('/')
}}>
<h2>Login</h2>
<input
required
name='email'
type='email'
value={ email }
onChange={ e => setEmail(e.target.value) }
placeholder='Enter your email'
/>
<input
required
type='password'
value={ password }
onChange={ e => setPassword(e.target.value) }
placeholder='Enter your password'
/>
{ error && <p>{ error.message }</p> }
<button type='submit'>Login</button>
</form>
</div>
)
}
```
`onSubmit` we log the user in and redirect them back to the home page. ApolloClient provides us with an `update` function that we can use to update the cache once a user logs in. Once a user logs in successfully we `update` the cache such that the `me` query represents the current user.
From the docs:
> The update function is called with the Apollo cache as the first argument. The cache has several utility functions such as cache.readQuery and cache.writeQuery that allow you to read and write queries to the cache with GraphQL as if it were a server. There are other cache methods, such as cache.readFragment, cache.writeFragment, and cache.writeData, which you can learn about in our detailed caching guide if you're curious.
> Note: The update function receives cache rather than client as its first parameter. This cache is typically an instance of InMemoryCache, as supplied to the ApolloClient constructor when the client was created. In case of the update function, when you call cache.writeQuery, the update internally calls broadcastQueries, so queries listening to the changes will update. However, this behavior of broadcasting changes after cache.writeQuery happens only with the update function. Anywhere else, cache.writeQuery would just write to the cache, and the changes would not be immediately broadcast to the view layer. To avoid this confusion, prefer client.writeQuery when writing to cache.
> The second argument to the update function is an object with a data property containing your mutation result. If you specify an optimistic response, your update function will be called twice: once with your optimistic result, and another time with your actual result. You can use your mutation result to update the cache with cache.writeQuery.
If a user enters an invalid email they will see the HTML error, not ours. If a user enters a valid but incorrect email we throw, 'Email or password is incorrect!' If a user enters an incorrect password, we throw the exact same error, making it harder for a bad actor to decipher which input is incorrect. This being the case, we probably don't want to display the error in the place that it occurs, lest we give away the game.
Open up App.js and make the following adjustments:
```jsx
// App.js
import React from 'react'
import { Route, Switch } from 'react-router-dom'
import Landing from './pages/Landing'
import Navbar from './components/navbar/Navbar'
import Register from './components/auth/Register'
import Login from './components/auth/Login'
const App = () => (
<Switch>
<Route path='/login' component={ Login } />
<Route path='/' render={() => (
<main>
<div className='navbar'><Navbar /></div>
<Route exact path='/' component={ Landing } />
<Route path='/register' component={ Register } />
</main>
)} />
</Switch>
)
export default App
```
Since we're clearing the cache before a user logs in, and the navbar utilizes the `me` query for authorization, we're going to place the Login component outside of the navbar.
We can now login a user and we are persisting the users session id in a cookie. If you open up your DevTools, under the Application folder, and inside the Cookies tab, you'll see our cookie.
It'd probably be best if we weren't simultaneously displaying both Login and Logout in our navbar. Adjust Navbar.js like so:
```jsx
// Navbar.js
import React from 'react'
import { NavLink, Redirect } from 'react-router-dom'
import { useQuery } from '@apollo/react-hooks'
import { MEQUERY } from '../../graphql/queries/me'
import './Navbar.css'
const Navbar = () => {
const { data, loading, error } = useQuery(MEQUERY)
if(loading) return <p>Loading....</p>
if(error) return <Redirect to='/login' />
if(!data) return <p>This is unfortunate</p>
return (
<div className='navigation'>
<header><NavLink exact to='/'>Forex</NavLink></header>
{ !data.me ? (
<ul>
<li><NavLink exact to='/login'>Login</NavLink></li>
<li><NavLink exact to='/register'>SignUp</NavLink></li>
</ul> )
: (
<ul>
<li>Logout</li>
</ul>
)}
</div>
)
}
export default Navbar
```
In the case of an error in our `me` query, we `Redirect` the user to login. If a user is returned we display Logout, `else` Login/SignUp. At the moment Logout isn't very useful. We'll start with the mutation. Create a new file named `logout.js` inside of `graphql/mutations` and insert the code below.
```javascript
// graphql/mutations/logout.js
import gql from 'graphql-tag'
export const LOGOUT_MUTATION = gql`
mutation Logout {
logout
}
`
```
Create `Logout.js` inside of `components/auth` and insert the following:
```jsx
// components/auth/Logout.js
import React from 'react'
import { useMutation } from '@apollo/react-hooks'
import { withRouter } from 'react-router-dom'
import { MEQUERY } from '../../graphql/queries/me'
import { LOGOUT_MUTATION } from '../../graphql/mutations/logout'
const Logout = props => {
const [logout] = useMutation(LOGOUT_MUTATION, {
update: cache => {
cache.writeQuery({
query: MEQUERY,
data: { me: null }
})
props.history.push('/')
}
})
return <div onClick={() => logout()}>Logout</div>
}
export default withRouter(Logout)
```
When a user clicks `Logout` three things happen:
* The `logout` mutation destroys the session on the `req` Object.
* We update the cache such that the `me` query returns `null`.
* We redirect the user to the home page.
If a component isn't rendered by React Router (passed as a `component` prop to a `Route`), then we won't have access to `history.push`. React Router's HOC `withRouter` provides us access to the history Object via props. We utilize `props.history.push('/')` to navigate the user back to the home page. Don't forget to wrap the Logout component with `withRouter` when exporting the file.
Import `Logout.js` into `Navbar.js` and replace `<li><Logout></li>` with our new component. With that adjustment thou shalt logout!
We can now focus on allowing users to open long/short positions. Open up `Landing.js` and make the following adjustments:
```jsx
// Landing.js
import React, { useState } from 'react'
import { useQuery } from '@apollo/react-hooks'
import { MEQUERY } from '../graphql/queries/me'
import { CURRENCY_PAIR_INFO } from '../graphql/queries/currencyPairInfo'
import SelectList from '../components/pairs/SelectList'
import OpenLongPosition from '../components/positions/OpenLongPosition'
const Landing = () => {
const [ fc, setFc ] = useState('EUR'),
[ tc, setTc ] = useState('USD'),
[ askPrice, setAskPrice ] = useState(0),
[ bidPrice, setBidPrice ] = useState(0),
[ showModal, setShowModal ] = useState(false),
user = useQuery(MEQUERY),
{ data, loading, error, refetch } = useQuery(CURRENCY_PAIR_INFO, {
variables: { fc, tc }
})
if(loading) return <p>Loading...</p>
if(error) return <button onClick={() => refetch()}>Retry</button>
return data && (
<section>
<h2>Currency Exchange</h2>
{ user.data.me && <p>Available Balance { user.data.me.bankroll.toLocaleString()}.00</p> }
<div>
<SelectList fc={fc} tc={tc} setFc={setFc} setTc={setTc} />
<button onClick={() => refetch()}>Refresh</button>
{ user.data.me && (
<OpenLongPosition
fc={fc}
tc={tc}
pairData={data}
askPrice={askPrice}
setAskPrice={setAskPrice}
showModal={showModal}
setShowModal={setShowModal}
/>)}
<button>Sell</button>
</div>
<div className='landing_pair_data'>
{ data.currencyPairInfo && Object.keys(data.currencyPairInfo).map(val => (
<div key={val} className='data'>
<p><span>{val}: </span>{ data.currencyPairInfo[val] }</p>
</div>
))}
</div>
</section>
)
}
export default Landing
```
We import `MEQUERY` and a file we'll need to create called `OpenLongPosition`. We integrate `useState` to store/update the `askPrice`, `bidPrice`, and to toggle a modal. After we have our user, we display their bankroll (available funds). If a user alters the currency pair or refreshes the data, we change the state of `askPrice` and `bidPrice` accordingly. Finally, if a user is found we display a 'Buy' button (`OpenLongPosition`).
Inside of `graphql/mutations` create a new file, name it `openPosition.js`, and add the below code:
```javascript
// openPosition.js
import gql from 'graphql-tag'
export const OPENPOSITION = gql`
mutation OpenPosition(
$pair: String!,
$lotSize: Int!,
$openedAt: Float!,
$position: String!
) {
openPosition(
pair: $pair,
lotSize: $lotSize,
openedAt: $openedAt,
position: $position
) {
success
message
pair {
id
user
position
pair
lotSize
openedAt
}
}
}
`
```
In `graphql/queries` create a new file named `getPairs.js` and insert the below code:
```javascript
// graphql/queries/getPairs.js
import gql from 'graphql-tag'
export const GETPAIRS = gql`
query GetPairs {
getPairs {
id
user
pair
lotSize
openedAt
closedAt
pipDif
profitLoss
open
position
createdAt
updatedAt
}
}
`
```
In `components` create a new folder and name it `positions`. Inside create a new file named `OpenLongPosition.js` and add the following code:
```jsx
// OpenLongPosition.js
import React from 'react'
import { Link } from 'react-router-dom'
import { useMutation } from '@apollo/react-hooks'
import { OPENPOSITION } from '../../graphql/mutations/openPosition'
import { MEQUERY } from '../../graphql/queries/me'
import { GETPAIRS } from '../../graphql/queries/getPairs'
const OpenLongPosition = ({
fc,
tc,
pairData,
askPrice,
setAskPrice,
showModal,
setShowModal
}) => {
const [ openPosition, { data, loading, error }] = useMutation(OPENPOSITION, {
variables: {
pair: `${fc}/${tc}`,
lotSize: 100000,
openedAt: askPrice,
position: 'long'
},
update: cache => {
const user = cache.readQuery({ query: MEQUERY })
user.me.bankroll -= 100000
cache.writeQuery({
query: MEQUERY,
data: { me: user.me }
})
},
refetchQueries: [{ query: GETPAIRS }]
})
if(loading) return <p>Loading...</p>
if(error) return <p>{ error.message }</p>
return openPosition && (
<>
<button onClick={ async () => {
await setAskPrice(+pairData.currencyPairInfo.askPrice)
alert('Are you sure you want to buy?')
await openPosition()
setShowModal(true)
}}>
Buy
</button>
{ data && data.openPosition.message && showModal && (
<div className='modal'>
<button onClick={() => setShowModal(false)}>x</button>
<p>{ data.openPosition.message }</p>
<p>Currency Pair: { data.openPosition.pair.pair }</p>
<p>Lot Size: { data.openPosition.pair.lotSize.toLocaleString() }.00</p>
<p>Opened At: { data.openPosition.pair.openedAt }</p>
<p>Position: { data.openPosition.pair.position }</p>
<Link to={{ pathname: '/account', state: { data } }}>
<button>Details</button>
</Link>
</div>
)}
</>
)
}
export default OpenLongPosition
```
We pass our mutation the required variables. Once the user clicks the 'Buy' button we'd usually want to display some data and allow them to confirm the purchase. Here we're just using an `alert`. The user is then displayed a modal describing their transaction and a `details` button that will redirect them to a page we still need to create — `Account`. Open up MongoDB Atlas and you'll see the newly created position.
Apollo provides a number of ways to update the cache after a mutation, and we've implemented a few of them in this project. In this component we're utilizing `refetchQueries` to update our pairs. Let's take a look at the docs:
> refetchQueries is the simplest way of updating the cache. With refetchQueries you can specify one or more queries that you want to run after a mutation is completed in order to refetch the parts of the store that may have been affected by the mutation.
We've seen a few of the options that the Mutation hook accepts. Take a peek at the [docs](https://www.apollographql.com/docs/react/essentials/mutations/) for the full list.
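For a sense of what `refetchQueries` saves us: a manual `update` for `GETPAIRS` would have to merge the new pair into the cached list itself. The merge logic looks roughly like this (a sketch with illustrative names; `mergePair`, `cachedPairs`, and `newPair` are not Apollo APIs):

```javascript
// Sketch of the merge that refetchQueries performs for us by re-running
// GETPAIRS: fold a newly opened pair into the cached list.
function mergePair(cachedPairs, newPair) {
  // Replace an existing entry with the same id, otherwise append.
  const exists = cachedPairs.some(p => p.id === newPair.id)
  return exists
    ? cachedPairs.map(p => (p.id === newPair.id ? newPair : p))
    : [...cachedPairs, newPair]
}
```

A hand-written `update` avoids the extra network round trip, while `refetchQueries` trades that round trip for simplicity and guaranteed server-fresh data.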
Before we get to creating the `Account` component, let's allow a user to open a `short` position. Open up `components/positions`, create a new file named `OpenShortPosition.js` and add the below code:
```jsx
// components/positions/OpenShortPosition.js
import React from 'react'
import { Link } from 'react-router-dom'
import { useMutation } from '@apollo/react-hooks'
import { OPENPOSITION } from '../../graphql/mutations/openPosition'
import { MEQUERY } from '../../graphql/queries/me'
import { GETPAIRS } from '../../graphql/queries/getPairs'
const OpenShortPosition = ({
fc,
tc,
pairData,
bidPrice,
setBidPrice,
showModal,
setShowModal
}) => {
const [ openPosition, { data, loading, error }] = useMutation(OPENPOSITION, {
variables: {
pair: `${fc}/${tc}`,
lotSize: 100000,
openedAt: bidPrice,
position: 'short'
},
update: cache => {
const user = cache.readQuery({ query: MEQUERY })
user.me.bankroll -= 100000
cache.writeQuery({
query: MEQUERY,
data: { me: user.me }
})
},
refetchQueries: [{ query: GETPAIRS }]
})
if(loading) return <p>Loading...</p>
if(error) return <p>{ error.message }</p>
return openPosition && (
<>
<button onClick={ async () => {
await setBidPrice(+pairData.currencyPairInfo.bidPrice)
alert('Are you sure you want to sell short?')
await openPosition()
setShowModal(true)
}}>
Sell
</button>
{ data && data.openPosition.message && showModal && (
<div className='modal'>
<button onClick={() => setShowModal(false)}>x</button>
<p>{ data && data.openPosition.message }</p>
<p>Currency Pair: { data.openPosition.pair.pair }</p>
<p>Lot Size: { data.openPosition.pair.lotSize.toLocaleString() }.00</p>
<p>Opened At: { data.openPosition.pair.openedAt }</p>
<p>Position: { data.openPosition.pair.position }</p>
<Link to={{ pathname: '/account', state: { data } }}>
<button>Details</button>
</Link>
</div>
)}
</>
)
}
export default OpenShortPosition
```
Here we do the exact same thing we did in `OpenLongPosition`, except we pass `bidPrice` instead of `askPrice` and `position: 'short'` instead of `position: 'long'` as arguments.
Back in `Landing.js`, replace the 'Sell' button with our newly created `OpenShortPosition` component.
```jsx
// Landing.js
import OpenShortPosition from '../components/positions/OpenShortPosition'
{ user.data.me && (
<OpenShortPosition
fc={fc}
tc={tc}
pairData={data}
bidPrice={bidPrice}
setBidPrice={setBidPrice}
showModal={showModal}
setShowModal={setShowModal}
/>)}
```
With that our users are able to sell short. We still need to create our `Account` component. Let's get to it! In the `pages` folder create `Account.js` and add the below code:
```jsx
// Account.js
import React, { useState } from 'react'
import { useQuery } from '@apollo/react-hooks'
import { Link, Redirect } from 'react-router-dom'
import { GETPAIRS } from '../graphql/queries/getPairs'
import { MEQUERY } from '../graphql/queries/me'
const Account = props => {
const [ open, setOpen ] = useState(true),
user = useQuery(MEQUERY),
{ data, loading, error } = useQuery(GETPAIRS)
if(user.error) return <Redirect to='/login' />
if(!user.data || !user.data.me) return <p>A man has no name.</p>
if(loading) return <p>Loading...</p>
if(!data) return <p>Nothing to show!</p>
if(error) return <p>{ error.message }</p>
return (
<section>
<h2>{ user.data.me.name }</h2>
<div>
<p><span>Available Balance: </span>{ user.data.me.bankroll.toLocaleString() }.00</p>
</div>
<br />
{ props.location.state && (
<div>
<h3>New Position</h3>
<div className='pair_divs'>
<p><span>Pair: </span>{ props.location.state.data.openPosition.pair.pair }</p>
<p><span>Lot Size: </span>{ props.location.state.data.openPosition.pair.lotSize.toLocaleString() }.00</p>
<p><span>Pip Dif: </span>{ props.location.state.data.openPosition.pair.openedAt }</p>
<p><span>Position: </span>{ props.location.state.data.openPosition.pair.position }</p>
</div>
</div>
)}
<br />
<h3>Currency Pairs</h3>
<button onClick={() => setOpen(true)}>open</button>
<button onClick={() => setOpen(false)}>closed</button>
<div>
{ data.getPairs && data.getPairs.map(pair => pair.open && open && (
<div className='pair_divs' key={pair.id}>
<Link to={{ pathname: '/pair', state: { pair, me: user.data.me } }}>
{ pair.pair && <p><span>Currency Pair: </span>{ pair.pair }</p> }
{ pair.lotSize && <p><span>Lot Size: </span>{ pair.lotSize.toLocaleString() }.00</p> }
{ pair.position && <p><span>Position: </span>{ pair.position }</p> }
{ pair.openedAt && <p><span>Opened At: </span>{ pair.openedAt.toFixed(4) }</p> }
{ pair.createdAt && <p><span>Created At: </span>{ new Date(+pair.createdAt).toLocaleString() }</p> }
{ pair.updatedAt && <p><span>Updated At: </span>{ new Date(+pair.updatedAt).toLocaleString() }</p> }
</Link>
</div>
))}
{ data.getPairs && data.getPairs.map(pair => !pair.open && !open && (
<div className='pair_divs' key={ pair.id }>
<div>
{ pair.pair && <p><span>Currency Pair: </span>{ pair.pair }</p> }
{ pair.lotSize && <p><span>Lot Size: </span>{ pair.lotSize.toLocaleString() }.00</p> }
{ pair.position && <p><span>Position: </span>{ pair.position }</p> }
{ pair.openedAt && <p><span>Opened At: </span>{ pair.openedAt.toFixed(4) }</p> }
{ pair.closedAt && <p><span>Closed At: </span>{ pair.closedAt.toFixed(4) }</p> }
{ <p><span>Pip Dif: </span>{ pair.pipDif || 0 }</p> }
{ <p><span>Profit/Loss: </span>{ pair.profitLoss.toFixed(2) || 0 }</p> }
{ pair.createdAt && <p><span>Created At: </span>{ new Date(+pair.createdAt).toLocaleString() }</p> }
{ pair.updatedAt && <p><span>Updated At: </span>{ new Date(+pair.updatedAt).toLocaleString() }</p> }
</div>
</div>
))}
</div>
</section>
)
}
export default Account
```
React Router's `Link` component allows us to pass state when navigating a user to another view. This is convenient if we want to render unique views when coming from certain routes. We use this to display the new position that the user just opened, if any. You could get creative here, but we'll keep it simple and just display some data about the new position.
Under the new position (if there is one), we display all of the user's positions. Open positions are shown by default, but we provide a button to toggle between open and closed. If a position is open, the user can click on its currency pair. This will navigate them to `/pair` (which we need to create) and provide further options. This component is a bit verbose; we'll refactor it in a moment.
Let's import `Account.js` into App.js and create its `Route`.
```jsx
// App.js
import React from 'react'
import { Route, Switch } from 'react-router-dom'
import Landing from './pages/Landing'
import Navbar from './components/navbar/Navbar'
import Register from './components/auth/Register'
import Login from './components/auth/Login'
import Account from './pages/Account'
const App = () => (
<Switch>
<Route path='/login' component={ Login } />
<Route path='/' render={() => (
<main>
<div className='navbar'><Navbar /></div>
<Route exact path='/' component={ Landing } />
<Route path='/register' component={ Register } />
<Route path='/account' component={ Account } />
</main>
)} />
</Switch>
)
export default App
```
We'll also want `Account` to be accessible from the Navbar when a user is logged in.
```jsx
// Navbar.js
return (
<ul>
<li><NavLink to='/account'>Account</NavLink></li>
<li><Logout /></li>
</ul>
)
```
When navigating to `/account` from the navbar you'll notice 'New Position' isn't being displayed. Cool! Now let's refactor `Account.js` and add some functionality. Inside of `components/pairs` create a new file named `NewPosition.js`. Cut the following code from `Account.js` and insert it into your newly created file.
```jsx
// components/pairs/NewPosition.js
import React from 'react'
export default function NewPosition({ state }) {
return (
<div>
<h3>New Position</h3>
<div className='pair_divs' style={{ textAlign: 'center' }}>
<p><span>Pair: </span>{ state.data.openPosition.pair.pair }</p>
<p><span>Lot Size: </span>{ state.data.openPosition.pair.lotSize.toLocaleString() }.00</p>
<p><span>Pip Dif: </span>{ state.data.openPosition.pair.openedAt }</p>
<p><span>Position: </span>{ state.data.openPosition.pair.position }</p>
</div>
</div>
)
}
```
In the same folder create a new file and name it `Pairs.js`. Cut the following code from `Account.js` and add it to this file.
```jsx
// components/pairs/Pairs.js
import React from 'react'
import { Link } from 'react-router-dom'
const Pairs = ({ data, open, user }) => (
<div>
{ data.getPairs && data.getPairs.map(pair => pair.open && open && (
<div className='pair_divs' key={ pair.id }>
<Link to={{ pathname: '/pair', state: { pair, me: user.data.me } }}>
{ pair.pair && <p><span>Currency Pair: </span>{ pair.pair }</p> }
{ pair.lotSize && <p><span>Lot Size: </span>{ pair.lotSize.toLocaleString() }.00</p> }
{ pair.position && <p><span>Position: </span>{ pair.position }</p> }
{ pair.openedAt && <p><span>Opened At: </span>{ pair.openedAt.toFixed(4) }</p> }
{ pair.createdAt && <p><span>Created At: </span>{ new Date(+pair.createdAt).toLocaleString() }</p> }
{ pair.updatedAt && <p><span>Updated At: </span>{ new Date(+pair.updatedAt).toLocaleString() }</p> }
</Link>
</div>
))}
{ data.getPairs && data.getPairs.map(pair => !pair.open && !open && (
<div className='pair_divs' key={ pair.id }>
<div>
{ pair.pair && <p><span>Currency Pair: </span>{ pair.pair }</p> }
{ pair.lotSize && <p><span>Lot Size: </span>{ pair.lotSize.toLocaleString() }.00</p> }
{ pair.position && <p><span>Position: </span>{ pair.position }</p> }
{ pair.openedAt && <p><span>Opened At: </span>{ pair.openedAt.toFixed(4) }</p> }
{ pair.closedAt && <p><span>Closed At: </span>{ pair.closedAt.toFixed(4) }</p> }
{ <p><span>Pip Dif: </span>{ pair.pipDif || 0 }</p> }
{ <p><span>Profit/Loss: </span>{ pair.profitLoss.toFixed(2) || 0 }</p> }
{ pair.createdAt && <p><span>Created At: </span>{ new Date(+pair.createdAt).toLocaleString() }</p> }
{ pair.updatedAt && <p><span>Updated At: </span>{ new Date(+pair.updatedAt).toLocaleString() }</p> }
</div>
</div>
))}
</div>
)
export default Pairs
```
Okay. We should implement an `addFunds` button while we're working on `Account.js`. Create a new file named `addFunds.js` inside of `graphql/mutations` and insert the following:
```javascript
// graphql/mutations/addFunds.js
import gql from 'graphql-tag'
export const ADDFUNDS = gql`
mutation ($amount: Int!) {
addFunds(amount: $amount) {
success
message
bankroll
}
}
`
```
In the `components/pairs` folder create a new file named `AddFunds.js` and add the below code:
```jsx
// components/pairs/AddFunds.js
import React, { useState } from 'react'
import { useMutation } from '@apollo/react-hooks'
import { ADDFUNDS } from '../../graphql/mutations/addFunds'
export default function AddFunds() {
const [ showModal, setShowModal ] = useState(false),
[ addFunds, { data, loading, error } ] = useMutation(ADDFUNDS, {
variables: { amount: 1000000 }
})
if(loading) return <p>Loading...</p>
if(error) return <p>{ error.message }</p>
return addFunds && (
<>
<button onClick={ async () => {
alert('Are you sure?')
await addFunds()
setShowModal(true)
}}>Add Funds</button>
{ data && data.addFunds.message && showModal && (
<div className='modal'>
<button onClick={() => setShowModal(false)}>x</button>
<p>{ data.addFunds.message }</p>
</div>
)}
</>
)
}
```
Usually the user would have some say over how much they deposit. That said, who's making a fuss when we're just out here handing out milli's?
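If we did let the user choose their deposit, the `amount: Int!` variable would deserve a sanity check before being sent. A minimal sketch (the function name, bounds, and return convention are all assumptions for illustration):

```javascript
// Hypothetical guard for a user-entered deposit amount: coerce to an
// integer and reject anything non-numeric or out of range.
function parseDeposit(input, { min = 1, max = 1000000 } = {}) {
  const amount = Math.trunc(Number(input))
  if (!Number.isFinite(amount) || amount < min || amount > max) return null
  return amount
}
```

The mutation would then only fire when `parseDeposit` returns a non-null value.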
It's high time we got back to `Account.js`.
```jsx
// Account.js
import React, { useState } from 'react'
import { useQuery } from '@apollo/react-hooks'
import { Redirect } from 'react-router-dom'
import { GETPAIRS } from '../graphql/queries/getPairs'
import { MEQUERY } from '../graphql/queries/me'
import AddFunds from '../components/pairs/AddFunds'
import Pairs from '../components/pairs/Pairs'
import NewPosition from '../components/pairs/NewPosition'
export default function Account(props) {
const [ open, setOpen ] = useState(true),
user = useQuery(MEQUERY),
{ data, loading, error } = useQuery(GETPAIRS)
if(user.error) return <Redirect to='/login' />
if(!user.data || !user.data.me) return <p>A man has no name.</p>
if(loading) return <p>Loading...</p>
if(!data) return (
<section>
<h2>{ user.data.me.name }</h2>
<div>
<p><span>Available Balance: </span>{ user.data.me.bankroll.toLocaleString() }.00</p>
<AddFunds />
</div>
</section>
)
if(error) return <p>{ error.message }</p>
return (
<section>
<h2>{ user.data.me.name }</h2>
<div>
<p><span>Available Balance: </span>{ user.data.me.bankroll.toLocaleString() }.00</p>
<AddFunds />
</div>
{ props.location.state && <NewPosition state={ props.location.state } /> }
<h3>Currency Pairs</h3>
<button onClick={() => setOpen(true)}>open</button>
<button onClick={() => setOpen(false)}>closed</button>
<Pairs data={ data } open={ open } user={ user } />
</section>
)
}
```
First, we handle our imports. Next, we use `useQuery` to fetch the user. If there's no `getPairs` data, we just display information about the user and the `AddFunds` button; otherwise we display all the data.
Our users can now open positions and add money to their account. Let's allow them to close positions. Once again this starts with a mutation. In `graphql/mutations` create `closePosition.js` and add the following:
```javascript
// graphql/mutations/closePosition.js
import gql from 'graphql-tag'
export const CLOSEPOSITION = gql`
mutation ClosePosition($id: ID!, $closedAt: Float!) {
closePosition(id: $id, closedAt: $closedAt) {
success
message
pair {
id
user
pair
lotSize
position
openedAt
closedAt
pipDif
profitLoss
open
createdAt
updatedAt
}
}
}
`
```
When a user clicks on an open position, they get navigated to `/pair`. This is where they'll be able to close their positions. In the `pages` folder, create `Pair.js` and adjust it such that it resembles the below code:
```jsx
// Pair.js
import React from 'react'
import { useQuery } from '@apollo/react-hooks'
import { CURRENCY_PAIR_INFO } from '../graphql/queries/currencyPairInfo'
import ClosePosition from '../components/positions/ClosePosition'
import PairDetails from '../components/pairs/PairDetails'
export default function Pair(props) {
const { createdAt, lotSize, openedAt, pair, position, id } = props.location.state.pair,
{ bankroll, name } = props.location.state.me,
[ fc, tc ] = pair.split('/'),
{ data, loading, error, refetch } = useQuery(CURRENCY_PAIR_INFO, {
variables: { fc, tc }
})
if(loading) return <p>Loading...</p>
if(error) return <p>{ error.message }</p>
const { bidPrice, lastRefreshed, askPrice } = data.currencyPairInfo,
pipDifLong = (bidPrice - openedAt).toFixed(4),
pipDifShort = (openedAt - askPrice).toFixed(4),
potentialProfitLoss = position === 'long'
? pipDifLong * lotSize
: pipDifShort * lotSize,
date = new Date(lastRefreshed + ' UTC')
return data && (
<section>
<div className='landing_pair_data'>
<h3>Pair Details</h3>
<div>
<p>{ name } your available balance is { bankroll.toLocaleString() }.00</p>
<div>
<button onClick={() => refetch()}>Refresh</button>
<ClosePosition
id={id}
bidPrice={bidPrice}
askPrice={askPrice}
position={position}
/>
</div>
</div>
<PairDetails
pair={pair}
lotSize={lotSize}
openedAt={openedAt}
position={position}
createdAt={createdAt}
askPrice={askPrice}
bidPrice={bidPrice}
lastRefreshed={date.toLocaleString()}
pipDifLong={pipDifLong}
pipDifShort={pipDifShort}
potentialProfitLoss={potentialProfitLoss}
/>
</div>
</section>
)
}
```
Once we have our state we pass in the query variables to `currencyPairInfo`. The response provides the data required to complete our `closePosition` mutation. Depending on whether the `position` is long or short, we use either the `askPrice` or `bidPrice` to calculate the difference in price since the initial purchase. This difference in price is what we're calling the pip difference (pipDif).
As described by [dailyfx.com](https://www.dailyfx.com/forex/education/trading_tips/daily_trading_lesson/2019/01/18/what-is-a-pip.html):
> PIP - which stands for Point in Percentage - is the unit of measure used by forex traders to define the smallest change in value between two currencies. This is usually represented by a single digit move in the fourth decimal place. The pip value is calculated by multiplying one pip(0.0001) by the specific lot/contract size. For standard lots this entails 100,000 units of the base currency and for mini lots, this is 10,000 units. For example, looking at EUR/USD, a one pip movement in a standard contract is equal to $10(0.0001 x 100,000).
Each currency pair has its own relative relationship, so we calculate profit/loss by simply comparing the `openedAt` price to the `closedAt` price. We calculate the `pipDif` by first figuring out if the `position` is long or short. If the position is long we subtract the `openedAt` price from the `bidPrice`. Conversely, if the position is short, we subtract the `askPrice` from the `openedAt` price. This will provide our `pipDif`. Once we have the difference in price, we multiply it by the `lotSize`.
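The arithmetic above can be condensed into a small helper (a sketch for illustration; the tutorial computes this inline in `Pair.js` rather than through a function like this):

```javascript
// Pip difference and P/L as described: longs close at the bid,
// shorts close at the ask, and the difference is scaled by lot size.
function profitLoss({ position, openedAt, bidPrice, askPrice, lotSize }) {
  const pipDif = position === 'long'
    ? bidPrice - openedAt
    : openedAt - askPrice
  return +(pipDif * lotSize).toFixed(2)
}
```

For example, a long EUR/USD standard lot opened at 1.1000 with the bid now at 1.1010 has moved 10 pips, which works out to 100 currency units of profit.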
You can see how easily this is calculated once it's demonstrated visually. For a standard lot (100,000 units), each pip of movement (usually in the fourth decimal place) equates to 10 currency units of profit/loss.

For a mini lot (10,000 units) we do the same but every pip movement equates to 1 currency unit profit/loss.

It's important to understand that we're not converting one currency to another. We're just betting on which currency will be worth more relative to the other. For clarity, if you wanted to buy (or long) EUR against USD, you'd buy EUR/USD (equivalently, sell USD/EUR). Conversely, to long USD against EUR, you'd sell EUR/USD (or buy USD/EUR). Rollover (interest) and margin are outside the scope of this tutorial, so we'll focus exclusively on the pipDif.
We need to create `ClosePosition` and `PairDetails`. Inside of `components/positions`, create `ClosePosition.js` and add the following:
```jsx
// components/positions/ClosePosition.js
import React, { useState } from 'react'
import { useQuery, useMutation } from '@apollo/react-hooks'
import { Link } from 'react-router-dom'
import { CLOSEPOSITION } from '../../graphql/mutations/closePosition'
import { MEQUERY } from '../../graphql/queries/me'
import { GETPAIRS } from '../../graphql/queries/getPairs'
export default function ClosePosition({ id, bidPrice, askPrice, position }) {
const [ showModal, setShowModal ] = useState(false),
{ refetch } = useQuery(MEQUERY),
[ closePosition, { data, loading, error } ] = useMutation(CLOSEPOSITION, {
variables: position === 'long'
? { id, closedAt: +bidPrice }
: { id, closedAt: +askPrice },
refetchQueries: [{ query: GETPAIRS }]
})
if(loading) return <p>Loading...</p>
if(error) return <p>{ error.message }</p>
return closePosition && (
<>
<button onClick={ async () => {
alert(`Are you sure you want to close your ${
position === 'long' ? 'long' : 'short' } position?`)
await closePosition()
setShowModal(true)
refetch()
}}>
{ position === 'long' ? 'Sell' : 'Buy' }
</button>
{ data && data.closePosition.message && showModal && (
<div className='modal'>
<button onClick={() => setShowModal(false)}>x</button>
<p>{ data.closePosition.message }</p>
<Link to='/account'><button>Account</button></Link>
</div>
)}
</>
)
}
```
All this file does is determine whether the `position` is long or short and provide the `closePosition` mutation with the appropriate variables (the pair `id` and `bidPrice`/`askPrice`). The `closePosition` response message is displayed via a modal.
We're using the `useQuery` hook to gain access to the `me` query's `refetch` function. We call `refetch` in the button's click handler so that after the `closePosition` mutation runs, the user's data is refreshed. If we didn't use `refetchQueries` here, the `open` pairs would be up-to-date after the mutation, but the `closed` pairs wouldn't be.
In the components folder create `PairDetails.js` and add the code below:
```jsx
// components/PairDetails.js
import React from 'react'
const PairDetails = ({
pair,
lotSize,
openedAt,
position,
createdAt,
askPrice,
bidPrice,
lastRefreshed,
pipDifLong,
pipDifShort,
potentialProfitLoss
}) => (
<div>
<p><span>Currency Pair: </span>{pair}</p>
<p><span>Lot Size: </span>{lotSize.toLocaleString()}.00</p>
<p><span>Opened At: </span>{(+openedAt).toFixed(4)}</p>
<p><span>Position: </span>{position}</p>
<p><span>Created At: </span>{new Date(+createdAt).toLocaleString()}</p>
{ position === 'long'
? (
<>
<br />
<p><span>Current Bid Price: </span>{(+bidPrice).toFixed(4)}</p>
<p><span>Last Refreshed: </span>{lastRefreshed}</p>
<p><span>Current Pip Difference: </span>{pipDifLong}</p>
<p><span>Potential PL: </span>
{potentialProfitLoss.toLocaleString()}.00
</p>
</> )
: (
<>
<br />
<p><span>Current Ask Price: </span>{(+askPrice).toFixed(4)}</p>
<p><span>Last Refreshed: </span>{lastRefreshed}</p>
<p><span>Current Pip Difference: </span>{pipDifShort}</p>
<p><span>Potential PL: </span>
{potentialProfitLoss.toLocaleString()}.00
</p>
</>
)
}
</div>
)
export default PairDetails
```
We display the open position's data. We also display the current `askPrice`/`bidPrice` and the `potentialProfitLoss` that closing the position would yield.
Import Pair.js into App.js and create its `Route`.
```jsx
// App.js
import React from 'react'
import { Route, Switch } from 'react-router-dom'
import Landing from './pages/Landing'
import Navbar from './components/navbar/Navbar'
import Register from './components/auth/Register'
import Login from './components/auth/Login'
import Account from './pages/Account'
import Pair from './pages/Pair'
const App = () => (
<Switch>
<Route path='/login' component={ Login } />
<Route path='/' render={() => (
<main>
<div className='navbar'><Navbar /></div>
<Route exact path='/' component={ Landing } />
<Route path='/register' component={ Register } />
<Route path='/account' component={ Account } />
<Route path='/pair' component={ Pair } />
</main>
)} />
</Switch>
)
export default App
```
If you navigate to /account as a result of opening a new position, you should see the following:

Click on an open pair and take a good gander at the browser.

And with that a user can close positions. Best we don't just rest on our laurels. Time to implement our chart! We'll start with the query. In `graphql/queries` create a new file and name it `monthlyTimeSeries.js`. Insert the following:
```javascript
// graphql/queries/monthlyTimeSeries.js
import gql from 'graphql-tag'
export const MONTHLYTIMESERIES = gql`
query MonthlyTimeSeries($fc: String, $tc: String) {
monthlyTimeSeries(fc: $fc, tc: $tc) {
timesArray
valuesArray
}
}
`
```
In the `pages` folder create a new file named `Chart.js` and add the below code:
```jsx
// Chart.js
import React, { useState } from 'react'
import { Line } from 'react-chartjs-2'
import { useQuery } from '@apollo/react-hooks'
import { MONTHLYTIMESERIES } from '../graphql/queries/monthlyTimeSeries'
export default function Chart() {
const [ fc, setFc ] = useState('EUR'),
[ tc, setTc ] = useState('USD'),
[ fromCurrency, setFromCurrency ] = useState('EUR'),
[ toCurrency, setToCurrency ] = useState('USD'),
{ data, error, loading, refetch } = useQuery(MONTHLYTIMESERIES, {
variables: { fc, tc }
})
if(loading) return <p>loading...</p>
if(error) return <button onClick={() => {
refetch({ fc: 'EUR', tc: 'USD' })
window.location.href = '/chart'
}}>retry</button>
const labels = data && data.monthlyTimeSeries.timesArray,
chartData = data && data.monthlyTimeSeries.valuesArray
return (
<div className='chartData'>
<form onSubmit={e => {
e.preventDefault()
setFc(fromCurrency)
setTc(toCurrency)
}}>
<input
name='fromCurrency'
value={fromCurrency}
placeholder='From Currency'
onChange={e => setFromCurrency(e.target.value.toUpperCase())}
/>
<input
name='toCurrency'
value={toCurrency}
placeholder='To Currency'
onChange={e => setToCurrency(e.target.value.toUpperCase())}
/>
<button>submit</button>
</form>
<Line data={{
labels,
datasets: [
{
label: `${fc}/${tc} Time Series FX (Monthly)`,
fill: true,
lineTension: 0.1,
backgroundColor: 'rgb(55, 131, 194)',
borderColor: 'white',
borderCapStyle: 'butt',
borderDash: [],
borderDashOffset: 0.0,
borderJoinStyle: 'miter',
pointBorderColor: 'white',
pointBackgroundColor: '#fff',
pointBorderWidth: 1,
pointHoverRadius: 5,
pointHoverBackgroundColor: 'white',
pointHoverBorderColor: 'rgba(220,220,220,1)',
pointHoverBorderWidth: 2,
pointRadius: 1,
pointHitRadius: 10,
data: chartData
}
]
}} />
</div>
)
}
```
We use our `monthlyTimeSeries` query to fetch our chart data. We provide a couple of inputs so the user can choose which currency pair they'd like to investigate. If the user enters an invalid pair, we present them with a retry button; the `refetch` function accepts new values for its query's variables, and `onClick` we use it to display EUR/USD again. We feed the `Line` component, courtesy of `react-chartjs-2`, the two arrays from our query: `labels` and `chartData`. Finally, we add some styling and return our chart.
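For context, the server's `monthlyTimeSeries` resolver has already reshaped the provider's response into the two parallel arrays Chart.js wants. The transform is roughly this (a hedged sketch; the `'4. close'` key assumes an Alpha Vantage-style payload and may differ from the actual resolver):

```javascript
// Split a date-keyed time series object into parallel label/value
// arrays, coercing the close prices from strings to numbers.
function toSeries(timeSeries) {
  const dates = Object.keys(timeSeries)
  return {
    timesArray: dates,
    valuesArray: dates.map(d => +timeSeries[d]['4. close'])
  }
}
```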
We'll need to import `Chart.js` into `App.js` and give it a `path` in `Navbar.js`. Let's start with `App.js`:
```jsx
// App.js
import React from 'react'
import { Route, Switch } from 'react-router-dom'
import Landing from './pages/Landing'
import Navbar from './components/navbar/Navbar'
import Register from './components/auth/Register'
import Login from './components/auth/Login'
import Account from './pages/Account'
import Pair from './pages/Pair'
import Chart from './pages/Chart'
const App = () => (
<Switch>
<Route path='/login' component={ Login } />
<Route path='/' render={() => (
<main>
<div className='navbar'><Navbar /></div>
<Route exact path='/' component={ Landing } />
<Route path='/register' component={ Register } />
<Route path='/account' component={ Account } />
<Route path='/pair' component={ Pair } />
<Route path='/chart' component={ Chart } />
</main>
)} />
</Switch>
)
export default App
```
Navbar.js:
```jsx
// Navbar.js
import React from 'react'
import { NavLink, Redirect } from 'react-router-dom'
import { useQuery } from '@apollo/react-hooks'
import { MEQUERY } from '../../graphql/queries/me'
import Logout from '../auth/Logout'
import './Navbar.css'
const Navbar = () => {
const { data, loading, error } = useQuery(MEQUERY)
if(loading) return <p>Loading....</p>
if(error) return <Redirect to='/login' />
if(!data) return <p>This is unfortunate</p>
return (
<div className='navigation'>
<header><NavLink exact to='/'>Forex</NavLink></header>
{ !data.me ? (
<ul>
<li><NavLink exact to='/login'>Login</NavLink></li>
<li><NavLink exact to='/register'>SignUp</NavLink></li>
</ul> )
: (
<ul>
<li><NavLink to='/chart'>Chart</NavLink></li>
<li><NavLink to='/account'>Account</NavLink></li>
<li><Logout /></li>
</ul>
)}
</div>
)
}
export default Navbar
```
Once you save your files our app will be complete and should resemble the video below:
[](https://youtu.be/mbx9sLVk5UQ)
You'll notice that the chart is fully responsive and not so bad on the old spectacles.
BEHOLD! We've created a currency exchange and hopefully learned a little something along the way. I know I did.
___
Reach out: [Twitter](https://twitter.com/marlonanthony10) | [Medium](https://medium.com/@marlonanthony) | [GitHub](https://github.com/marlonanthony)
___

# ArangoDB Tutorial - Getting Started with ArangoDB

*Published 2019-08-13 · https://dev.to/javascript_tuto/arangodb-tutorial-getting-started-with-arangodb-101g · tags: sql, webdev, tutorial, beginners*

ArangoDB: One engine. One query language. Multiple models.
This course will provide everything you need to get up and running with ArangoDB.
This course introduces you to:

- A brief history of databases
- The basic AQL syntax
- Performing CRUD operations with AQL
- Data modeling
- Indexing
- Joins
- Grouping and Aggregation
- Graphs
- An intro to the Foxx Microservices Framework
The second section (coming soon) of this course covers some more advanced and enterprise features, including:

- Smart Joins
- Smart Graphs
- Setting up a cluster environment
- Deployment
- And more
We hope by the end of this course you will have a better understanding of everything ArangoDB has to offer, what a native multi-model database is, and have the confidence to know when ArangoDB is the best solution for your product.
What you'll learn:
An overview of everything ArangoDB has to offer.
{% youtube LaLYZt_h118 %}
*Author: javascript_tuto*
___

# How to Get the First or Last Value in a Group Using Group By in SQL

*Published 2019-08-17 · https://hakibenita.com/sql-group-by-first-last-value · tags: postgres, sql*
I recently had to produce reports on a table containing events of a user's account balance. The user can deposit and withdraw from their account, and support personnel can set the account's credit, which is the maximum amount the user can overdraw.
The table looked roughly like this:
```psql
db=# SELECT * FROM event;
id | account | type | happened_at | data
----+---------+------------+------------------------+-----------------------------------
1 | 1 | created | 2019-08-01 15:14:13+03 | {"credit": 0, "delta_balance": 0}
2 | 1 | deposited | 2019-08-01 15:15:15+03 | {"delta_balance": 100}
3 | 1 | withdraw | 2019-08-02 09:35:33+03 | {"delta_balance": -50}
4 | 1 | credit_set | 2019-08-03 16:14:12+03 | {"credit": 50}
5 | 1 | withdraw | 2019-08-03 14:45:44+03 | {"delta_balance": -30}
6 | 1 | credit_set | 2019-08-03 16:14:12+03 | {"credit": 100}
7 | 1 | withdraw | 2019-08-03 16:15:09+03 | {"delta_balance": -50}
(7 rows)
```

To get the current balance of an account, we sum the changes in `delta_balance`:
```psql
db=# SELECT
account,
SUM((data->'delta_balance')::int) AS balance
FROM
event
GROUP BY
account;
account | balance
---------+---------
1 | -30
```
The `data` field contains information specific to each type of event. To extract the value of `delta_balance` from the `data` column we use the [arrow operator provided by PostgreSQL](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-OP-TABLE).
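As a quick illustration of the JSON operators (runnable in any `psql` session; not specific to this table): `->` returns `jsonb`, while `->>` returns `text`, and either result can be cast to `int`.

```sql
SELECT
    '{"delta_balance": -50}'::jsonb -> 'delta_balance'          AS as_jsonb,
    '{"delta_balance": -50}'::jsonb ->> 'delta_balance'         AS as_text,
    ('{"delta_balance": -50}'::jsonb -> 'delta_balance')::int   AS as_int;
```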
The result of the query shows that the current balance of account 1 is -30. This means the account is in overdraft. To check if this is within the allowed range, we need to compare it to the credit set for this account. The credit for account 1 was initially set to 0 when the account was created. The credit was then adjusted twice, and is currently set to 100.
To get the current state of an account, we need its aggregated balance and the latest credit that was set for it.
## The Problem
In Oracle there is a function called [`last`](https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions071.htm) that can be used to get the last `credit_set` event. A query using `last` might look like this:
```sql
-- Oracle
SELECT
account,
MAX(CASE WHEN type = 'credit_set' THEN data ELSE null END)
KEEP (DENSE_RANK LAST ORDER BY id) AS credit
FROM
event
GROUP BY
account;
```
PostgreSQL also has a [`LAST_VALUE`](https://www.postgresql.org/docs/current/functions-window.html#id-1.5.8.26.6.2.2.10.1.1) analytic function, but analytic functions cannot be used inside a group by the way aggregate functions can:
```psql
db=# SELECT
account,
LAST_VALUE(data) OVER (
PARTITION BY account ORDER BY id
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) AS credit
FROM
event
WHERE
type = 'credit_set'
GROUP BY
account;
ERROR: column "event.data" must appear in the GROUP BY clause or be used in an aggregate function
LINE 3: LAST_VALUE(data) OVER (
```
The error tells us that the `data` field used in the analytic function must be used in the group by. This is not really what we want. To use PostgreSQL's `LAST_VALUE` function, we need to remove the group by:
```psql
db=# SELECT
account,
LAST_VALUE(data) OVER (
PARTITION BY account ORDER BY id
RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) AS credit
FROM
event
WHERE
type = 'credit_set';
account | credit
---------+-----------------
1 | {"credit": 100}
1 | {"credit": 100}
```
These are not exactly the results we need. Analytic, or window, functions operate on a set of rows, not on groups produced by a `GROUP BY`.
**PostgreSQL doesn't have a built-in function to obtain the first or last value in a group using group by.**
To get the last value in a group by, we need to get creative!
***
## Old Fashioned SQL
The plain SQL solution is to divide and conquer. We already have a query to get the current balance of an account. If we write another query to get the credit for each account, we can join the two together and get the complete state of an account.
To get the last event for each account in PostgreSQL we can use [`DISTINCT ON`](https://www.postgresql.org/docs/current/sql-select.html#SQL-DISTINCT):
```psql
db=# SELECT DISTINCT ON (account)
account,
data->'credit' AS credit
FROM
event
WHERE
type = 'credit_set'
ORDER BY
account,
id DESC;
account | credit
---------+--------
1 | 100
```
Great! Using `DISTINCT ON` we got the last credit set for each account.
*DISTINCT ON: I've written about [the many faces of DISTINCT in PostgreSQL](https://hakibenita.com/the-many-faces-of-distinct-in-postgre-sql).*
The next step is to join the two queries, and get the complete account state:
```psql
db=# SELECT
a.account,
a.balance,
b.credit
FROM
(
SELECT
account,
SUM((data->'delta_balance')::int) AS balance
FROM
event
GROUP BY
account
) AS a
JOIN
(
SELECT DISTINCT ON (account)
account,
data->'credit' AS credit
FROM
event
WHERE
type = 'credit_set'
ORDER BY
account,
id DESC
) AS b
ON a.account = b.account;
account | balance | credit
---------+---------+--------
1 | -30 | 100
```
We got the expected result.
Before we move on, let's take a glance at the execution plan:
```
QUERY PLAN
---------------------------------------------------------------------------------------------
Hash Join (cost=44.53..49.07 rows=4 width=44)
Hash Cond: (event.account = b.account)
-> HashAggregate (cost=25.00..27.00 rows=200 width=12)
Group Key: event.account
-> Seq Scan on event (cost=0.00..17.50 rows=750 width=36)
-> Hash (cost=19.49..19.49 rows=4 width=36)
-> Subquery Scan on b (cost=19.43..19.49 rows=4 width=36)
-> Unique (cost=19.43..19.45 rows=4 width=40)
-> Sort (cost=19.43..19.44 rows=4 width=40)
Sort Key: event_1.account, event_1.id DESC
-> Seq Scan on event event_1 (cost=0.00..19.39 rows=4 width=40)
Filter: (type = 'credit_set'::text)
```
The event table is being scanned twice, once for each subquery. The `DISTINCT ON` subquery also requires a sort by `account` and `id`. The two subqueries are then joined using a hash-join.
**PostgreSQL is unable to combine the two subqueries into a single scan of the table.** If the event table is very large, performing two full table scans, and a sort and a hash join, might become slow and consume a lot of memory.
*Common Table Expression: It's tempting to use common table expression (CTE) to make the query more readable. But, [CTE's are currently optimization fences](https://hakibenita.com/be-careful-with-cte-in-postgre-sql), and using it here will most definitely prevent PostgreSQL from performing any optimization that involves both subqueries.*
***
## The Array Trick
Using good ol' SQL got us the answer, but it took two passes on the table. We can do better with the following trick:
```psql
db#=>SELECT
account,
SUM((data->'delta_balance')::int) AS balance,
(MAX(ARRAY[id, (data->'credit')::int]) FILTER (WHERE type = 'credit_set'))[2] AS credit
FROM
event
GROUP BY
account;
account | balance | credit
---------+---------+--------
1 | -30 | 100
```
This is much simpler than the previous query, so let's break it down.
### How PostgreSQL Compares Arrays
To understand what exactly is going on here, we first need to understand [how PostgreSQL compares arrays](https://www.postgresql.org/docs/current/functions-array.html):
> Array comparisons compare the array contents element-by-element, using the default B-tree comparison function for the element data type.
When comparing arrays, PostgreSQL will go element by element and compare the values according to their type. To demonstrate:
```psql
db=# SELECT greatest(ARRAY[1, 200], ARRAY[2, 100]);
greatest
----------
{2,100}
```
The first element of the second array (2) is larger than the first element of the first array (1), so it's the greatest.
```psql
db=# SELECT greatest(ARRAY[1, 200], ARRAY[1, 201]);
greatest
----------
{1,201}
(1 row)
```
The first elements of both arrays are equal (1), so PostgreSQL moves on to the next element. In this case, the second element of the second array (201) is the greatest.
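Python tuples happen to compare the same way (element by element, left to right), so the behavior is easy to experiment with outside psql:

```python
# Mirrors the GREATEST examples above: comparison is element by element.
a = max((1, 200), (2, 100))
b = max((1, 200), (1, 201))
print(a, b)  # (2, 100) (1, 201)
```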
### Max by Key...
Using this feature of PostgreSQL, we construct an array where the first element is the value to sort by, and the second element is the value we want to keep. In our case, we want to get the credit by the max `id`:
```sql
MAX(ARRAY[id, (data->'credit')::int])
```
Not all events set credit, so we need to restrict the result to `credit_set` events:
```sql
MAX(ARRAY[id, (data->'credit')::int]) FILTER (WHERE type = 'credit_set')
```
The result of this expression is an array:
```psql
db#=>SELECT
account,
MAX(ARRAY[id, (data->'credit')::int]) FILTER (WHERE type = 'credit_set')
FROM
event
GROUP BY
account;
account | max
---------+---------
1 | {6,100}
```
We only want the second element, the value of `credit`:
```sql
(MAX(ARRAY[id, (data->'credit')::int]) FILTER (WHERE type = 'credit_set'))[2]
```
And this is it! This way we can get the last credit set for each account.
Next, let's examine the execution plan:
```
QUERY PLAN
---------------------------------------------------------------
HashAggregate (cost=32.50..34.50 rows=200 width=16)
Group Key: account
-> Seq Scan on event (cost=0.00..17.50 rows=750 width=72)
```
Simple plan for a simple query!
The main benefit of this approach is that it only needs one pass of the table and no sort.
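For readers more comfortable outside SQL: the same max-by-key idea can be sketched with Python tuples, where the first element is the sort key and the second is the value we keep (the event list here is a hypothetical stand-in):

```python
# Hypothetical credit_set events: (id, credit) pairs.
credit_events = [
    {"id": 1, "credit": 0},
    {"id": 4, "credit": 50},
    {"id": 6, "credit": 100},
]

# Build (key, value) pairs, take the max, keep the value: "max by key".
credit = max((e["id"], e["credit"]) for e in credit_events)[1]
print(credit)  # 100
```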
### Caveats
This approach is very useful, but it has some restrictions:
**All elements must be of the same type.**
PostgreSQL does not support [arrays with different types of elements](https://www.postgresql.org/docs/current/arrays.html#ARRAYS-DECLARATION). As an example, if we wanted to get the last credit set by date, and not by id:
```psql
db=# SELECT
account,
SUM((data->'delta_balance')::int) AS balance,
(MAX(
ARRAY[happened_at, (data->'credit')::int]
) FILTER (WHERE type = 'credit_set'))[2] AS credit
FROM
event
GROUP BY
account;
ERROR: ARRAY types timestamp with time zone and integer cannot be matched
LINE 4: (MAX(ARRAY[happened_at, (data->'credit')::int]) FILTER...
```
PostgreSQL tells us that an array cannot contain both timestamps and integers.
We can overcome this restriction in some cases by casting one of the elements:
```psql
db=# SELECT
account,
SUM((data->'delta_balance')::int) AS balance,
(MAX(
ARRAY[EXTRACT('EPOCH' FROM happened_at), (data->'credit')::int]
) FILTER (WHERE type = 'credit_set'))[2] AS credit
FROM
event
GROUP BY
account;
account | balance | credit
---------+---------+--------
1 | -30 | 100
```
We [converted the timestamp to epoch](https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT), which is the number of seconds since 1970. Once both elements are of the same type, we can use the array trick.
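As a quick sanity check of the idea (Python again, with hypothetical timestamps): once the timestamp is reduced to a plain number, it can serve as the sort key in the same kind of pair:

```python
from datetime import datetime, timezone

# .timestamp() plays the role of EXTRACT('EPOCH' FROM ...):
# seconds since 1970-01-01 UTC.
t1 = datetime(2019, 8, 3, 14, 45, 44, tzinfo=timezone.utc)
t2 = datetime(2019, 8, 3, 16, 14, 12, tzinfo=timezone.utc)

# The later timestamp wins, and we keep its credit value.
credit = max((t1.timestamp(), 50), (t2.timestamp(), 100))[1]
print(credit)  # 100
```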
**The query might consume a little more memory.**
This one is a bit of a stretch, but as [we demonstrated in the past](how-we-solved-a-storage-problem-in-postgre-sql-without-adding-a-single-bytes-of-storage), large group and sort keys consume more memory in joins and sorts. Using the array trick, the group key is an array, which is a bit larger than the plain fields we usually sort by.
Also, the array trick can be used to "piggyback" more than one value:
```sql
(MAX(ARRAY[
id,
(data->'credit')::int,
EXTRACT('EPOCH' FROM happened_at)
]) FILTER (WHERE type = 'credit_set'))[2:]
```
This query will return both the last credit set, and the date in which it was set. The entire array is used for sorting, so the more values we put in the array, the larger the group key gets.
## Conclusion
The array trick is very useful, and can significantly simplify complicated queries and improve performance. We use it to produce reports on time series data, and to generate read models from event tables.
***
**Call out to my readers:** I'm pretty sure that using a [user-defined aggregate function in PostgreSQL](https://www.postgresql.org/docs/current/xaggr.html) it should be possible to create a function with the signature `MAX_BY(key, value)`. I haven't had time to dig deep into custom aggregate functions, but if any of the readers do, please share your implementation and I'll be happy to post it here.
***
*UPDATED: Aug 17, 2019*
## Comments From Readers
In the few days following the publication of this article, I received several suggestions and comments from readers. This is a summary of the comments I received, and my thoughts on them.
***
One [commenter on Reddit](https://www.reddit.com/r/PostgreSQL/comments/cpskf7/how_to_get_the_first_or_last_value_in_a_group/ewsbdqo/) suggested using `ARRAY_AGG`:
```psql
db=# SELECT
account,
((ARRAY_AGG(data ORDER BY happened_at DESC)
FILTER (WHERE type = 'credit_set'))[1] -> 'credit')::int AS credit
FROM
event
GROUP BY account;
account | credit
---------+--------
1 | 50
```
This approach obviously works, and it doesn't require the key and the value to be of the same type, which is a big limitation of the array trick.
The downside to this approach is that it requires a sort which might become expensive with very large data sets:
```psql
QUERY PLAN
------------------------------------------------------------------
GroupAggregate (cost=1.17..1.36 rows=7 width=8)
Group Key: account
-> Sort (cost=1.17..1.19 rows=7 width=76)
Sort Key: account
-> Seq Scan on event (cost=0.00..1.07 rows=7 width=76)
```
***
Another [commenter on Reddit](https://www.reddit.com/r/PostgreSQL/comments/cpskf7/how_to_get_the_first_or_last_value_in_a_group/ewu6juf/) suggested using window functions in combination with `DISTINCT ON`. This is the original suggestion:
```sql
SELECT DISTINCT ON (account)
account,
FIRST_VALUE((data->'credit')::int) OVER w,
LAST_VALUE((data->'credit')::int) OVER w,
SUM((data->'credit')::int) OVER w
FROM
event
WHERE
type = 'credit_set'
WINDOW w AS (
PARTITION BY account
ORDER BY id
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
)
```
The query uses both `DISTINCT ON` and window functions. It works by calculating the aggregates using the window function on the entire set (all rows of the account), and then fetching the first or last row using `DISTINCT ON`.
To make the window functions behave like a "group by" and calculate the aggregates on the entire set, the bound is defined as `BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING`, meaning "for the entire partition".
To avoid repeating the window for every aggregate, the query uses [a WINDOW clause](https://www.postgresql.org/docs/current/sql-select.html#SQL-WINDOW) to define a named window that can be used multiple times in the query.
This query, however, is not really working, because the where clause is restricted to events with type `credit_set`. To get the complete status of the account, we also need to aggregate the balance of *all* events.
To actually make this approach work, we need to make the following adjustments:
```psql
db=# SELECT DISTINCT ON (account)
account,
LAST_VALUE((data->'credit')::int) OVER (
PARTITION BY account
ORDER BY (
CASE
WHEN type = 'credit_set' THEN happened_at
ELSE null
END
) ASC NULLS FIRST
ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
) as credit,
SUM(COALESCE((data->'delta_balance')::int, 0)) OVER (PARTITION BY account) AS balance
FROM
event;
account | credit | balance
---------+--------+---------
1 | 100 | -30
```
What changes did we make:
- We had to ditch the where clause so all events are processed.
- We also had to do some "creative sorting" to get the last `credit_set` event.
- We removed the named window because it was no longer reused.
The plan has also gotten more complicated:
```psql
Unique (cost=1.19..1.55 rows=7 width=24)
-> WindowAgg (cost=1.19..1.54 rows=7 width=24)
-> WindowAgg (cost=1.19..1.38 rows=7 width=48)
-> Sort (cost=1.19..1.20 rows=7 width=44)
Sort Key: account, (CASE WHEN (type = 'credit_set'::text)
THEN happened_at ELSE NULL::timestamp with time zone
END) NULLS FIRST
-> Seq Scan on event (cost=0.00..1.09 rows=7 width=44)
```
Two sorts, and several aggregates. The bottom line, in my opinion, is that this approach is harder to maintain and it yields a significantly more complicated plan. I wouldn't use it in this case. However, like the previous approach, it is not restricted by the type of the key and value.
***
In response to [my tweet](https://twitter.com/mdevanr/status/1161275759786418177), a reader pointed me to [an old wiki page](https://wiki.postgresql.org/wiki/First/last_(aggregate)) with an implementation of two custom aggregate functions `FIRST` and `LAST`.
After creating the custom aggregates in the database as instructed in the wiki page, the query can look like this:
```sql
SELECT
account,
SUM(COALESCE((data->>'delta_balance')::int, 0)) AS balance,
LAST((data->>'credit')::int) FILTER (WHERE type = 'credit_set') AS credit
FROM
event
GROUP BY
account;
```
The main issue I found with this approach is that the order seems to be arbitrary. First and last can only be defined in the context of some order. I couldn't find a way to provide a field to sort by, so I consider this approach flawed for this use case.
***
As I suspected, this use case is ideal for custom aggregates and extensions, and indeed, [another reader on Reddit](https://www.reddit.com/r/PostgreSQL/comments/cpskf7/how_to_get_the_first_or_last_value_in_a_group/ewudq11/) pointed me to [the extension "first_last"](https://pgxn.org/dist/first_last). The API is roughly similar to the custom aggregate above, but it also offers a way to sort the results so the first and last are not arbitrary.
I did not install the extension, but the query should look something like this:
```sql
SELECT
account,
SUM(COALESCE((data->>'delta_balance')::int, 0)) AS balance,
LAST((data->>'credit')::int, 1 ORDER BY happened_at)
FILTER (WHERE type = 'credit_set') AS credit
FROM
event
GROUP BY
account;
```
| haki |
155,963 | Notes: Ruby Conf Taiwan 2019 | Get start This article is the note and thought I get from the conference. I would mention... | 0 | 2019-08-14T01:25:06 | https://dannypsnl.github.io//language/gc/2019/08/13/ruby_conf_taiwan_2019.html | language, gc | ---
title: Notes: Ruby Conf Taiwan 2019
published: true
tags: language,gc
canonical_url: https://dannypsnl.github.io//language/gc/2019/08/13/ruby_conf_taiwan_2019.html
---
### Getting started
This article contains the notes and thoughts I took from the conference.
I will cover the following topics:
- type checker for Ruby(steep)
- GC compaction
### type checking for Ruby
#### Ruby 3
Basically, as I understand it, what Matz (the creator of Ruby) wants is a type inferrer that generates a type definition file, which we can then modify to be more accurate. Finally, the checker checks the program based on the file you modified. That way we can avoid typing any extra annotations but still have a static type analyzer.
A few questions I have are:
- what if I regenerate the type definition file after I modified it?
- seems like it would be replaced right now
- how does it avoid the limits of the Hindley-Milner type system?
For example, a mutable cell holding a list of values of unspecified type.
p.s. What we can do is [value restriction](http://users.cs.fiu.edu/~smithg/cop4555/valrestr.html) here. Anyway, we need some extension for the type system.
#### [steep](https://github.com/soutaro/steep)
[Video](https://youtu.be/KU1JM4NSKe8)
Then we keep going, since I had more talks with Steep's creator (Soutaro Matsumoto) XD.
I'm not going to talk about type inference in much depth here, but I will talk a little about a question from Soutaro Matsumoto.
The problem he has is:
```ruby
class Array<T>
def `[]=`:
# for arr[1] = T.new
(Integer, T) -> T
# for arr[0, 1] = T.new
| (Integer, Integer, T) -> T
# for arr[0, 1] = [T.new]
| (Integer, Integer, Array<T>) -> Array<T>
# for arr[0..1] = T.new
| (Range<Integer>, T) -> T
# for arr[0..1] = [T.new]
| (Range<Integer>, Array<T>) -> Array<T>
end
```
Now we have a wrong typed example:
```ruby
arr = [1, 2, 3] # Array<Integer>
arr[0] = "foo"
```
From a human's view, we expect an error like `expected: Integer but got: String`. However, with overloading, we have to try the other definitions, since we can't be sure whether one would match or not. So the checker keeps going, fails at the final definition, and reports `expected: Array<Integer> but got: String`, which is a little confusing.
Let's consider more general overloading; here, "general" means the semantics are less constrained when the overloaded method is a normal method rather than an operator.
```ruby
class C<T>
def foo:
(Integer, T) -> T
| (Integer, Integer, T) -> T
end
```
I think, in this case, we can't do more than say "can't find a method for blablabla". For example:
```ruby
c = C<String>.new
c.foo(1, 2)
```
We have no idea whether the user is trying to match `(Integer, Integer, T) -> T` but is missing one argument, or is trying to type `c.foo(1, "2")` but made a mistake.
However, in my understanding, we can't add more operators into Ruby because of this problem.
So operators actually are different from normal methods. For example:
```ruby
arr = [1, 2, 3]
arr[0] = 2
arr[0, 2] = 9
```
If `arr[0, 2] =` isn't followed by a right-hand-side expression, it actually represents the same type as `arr[0] = 2`, which is obviously wrong. So we can depend on this extra semantic and redefine the type of the `[]=` operator to:
```ruby
class Array<T>
def `[]=`:
(Integer) -> (T) -> T
| (Integer, Integer) -> (T) -> T
| (Integer, Integer) -> (Array<T>) -> Array<T>
| (Range<Integer>) -> (T) -> T
| (Range<Integer>) -> (Array<T>) -> Array<T>
end
```
But since we actually don't want the assignment to be missing its right-hand side, we have to have another special notation for this special function, so it could be:
```ruby
class Array<T>
def `[]=`:
(Integer) => (T) -> T
| (Integer, Integer) => (T) -> T
| (Integer, Integer) => (Array<T>) -> Array<T>
| (Range<Integer>) => (T) -> T
| (Range<Integer>) => (Array<T>) -> Array<T>
end
```
`=>` cuts the semantic steps, and `->` is followed by the final return type.
Which represents:
```ruby
class Array<T>
def `[]=`:
[Integer] = T -> T
| [Integer, Integer] = T -> T
| [Integer, Integer] = Array<T> -> Array<T>
| [Range<Integer>] = T -> T
| [Range<Integer>] = Array<T> -> Array<T>
end
```
p.s. After I mentioned the idea to Soutaro Matsumoto, he pointed out that the new syntax might not be a good idea, which also makes sense in context, since breaking current programs isn't what anyone wants. But inside the type checker, the abstraction can be used to do better inference.
### GC compaction
[Video](https://youtu.be/0ypPiULlKfQ)
Why do we need compaction? Without it, we can run out of memory due to a fragmented heap. What is a fragmented heap?
This is:

After compaction:

Without compaction, we would find that there is still enough space in total, but it is not contiguous, so we can't allocate the object.
#### Two finger compaction
An algorithm used by Lisp. The idea is to use two pointers: one, called free, moves from left to right (ideally); the other, called scan, moves from right to left. Free stops when it reaches an empty chunk, and scan stops when it reaches a non-empty chunk. When both stop, we swap the chunks. This process repeats until the two pointers point to the same chunk, and then we update all references.
Cons: only works with a fixed-size heap -> how can we improve it?
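The description above can be turned into a small sketch. This is my own toy model in Python (not Ruby's actual implementation), with `None` marking free slots:

```python
# Two-finger compaction over a fixed-size heap; None marks a free slot.
def two_finger_compact(heap):
    free, scan = 0, len(heap) - 1
    forwarding = {}                      # old index -> new index
    while True:
        while free < scan and heap[free] is not None:
            free += 1                    # find an empty chunk from the left
        while free < scan and heap[scan] is None:
            scan -= 1                    # find a live object from the right
        if free >= scan:
            break                        # the pointers met: done
        heap[free], heap[scan] = heap[scan], None   # move the object left
        forwarding[scan] = free
    return forwarding                    # used to update all references

heap = ["a", None, "b", None, "c"]
moves = two_finger_compact(heap)
print(heap, moves)  # ['a', 'c', 'b', None, None] {4: 1}
```

Running it compacts the live objects to the left and returns the forwarding table needed for the final step, updating every reference to a moved object.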
#### Pinning Bits
When a pointer in C points to a Ruby object, and GC moves or deletes that object, what happens? We can separate this into different conditions:
- moved: in this condition, a null reference causes an exit
- deleted: same as the previous one
- moved, with another object moved into its place: in this case, UB (undefined behavior) happens, since you might treat an array object as a string object; the behavior is random, and this is the craziest condition
To avoid these, we have to tell GC which objects are held by C pointers. So in C code, we not only allocate and free memory but also have to write something like `gc_c_mark(pointer_to_ruby_object)`, so GC knows these objects must be pinned and can't be moved.
#### Allowing movement in C
The following things are related to allowing movement in C:
- compaction callback
GC calls the C callback function after compaction is done, giving C a chance to update its references.
- No Pin marking
use the no-pin marking function for a referenced object, so GC can manage (and move) the object.
- new location function
`rb_gc_location` would return the new location of the object.
#### Known issue
```
Ruby Object C Object
| |
| |
\---> Third Object <---/
```
Ruby automatically marks some objects with no-pin marking (gc\_mark\_no\_pin), but they are not marked from C.
So when the compactor runs, GC moves the Third Object; however, C still references the old location, so the program explodes.
#### Debugging GC
1. Maximum Chaos: doubling Heap
2. Zombie Objects: make some slots un-collectable by always filling them with a zombie object
3. Address Sanitizer: https://github.com/google/sanitizers/wiki/AddressSanitizer
#### More information
- https://bugs.ruby-lang.org/issues/15626
- https://www.ruby-forum.com/t/rb-gc-register-address-or-rb-gc-mark/219828/2
- https://ruby-hacking-guide.github.io/gc.html
### Additional part
For the rest of the talks, I either don't have enough knowledge to sort them out, didn't take a look, or am lazy. Here are the video links.
- [Automatic Differentiation for Ruby](https://youtu.be/Drxa_DiLV3s)
- [Virtual Machines: their common parts and what makes them special](https://youtu.be/x6FrRQMF5tg)
- [The Journey to One Million](https://youtu.be/Dtn9Uudw4Mo)
### Conclusion
Yes, another Ruby conference I joined even though I don't know Ruby :). | dannypsnl
155,987 | Another Gksu Alternative For Ubuntu (Xorg) | Back when Debian and Ubuntu removed the gksu package, which was used to allow elevating your permissi... | 0 | 2019-08-13T14:17:52 | https://dev.to/logix2/another-gksu-alternative-for-ubuntu-xorg-2356 | ubuntu | Back when Debian and Ubuntu [removed the gksu package](https://jeremy.bicha.net/2018/04/18/gksu-removed-from-ubuntu/), which was used to allow elevating your permissions when running graphical applications, the recommendations were:
- for application developers to use PolicyKit and only use elevated privileges for specific actions
- for users to take advantage of the gvfs admin backend, by [using the `admin://` prefix](https://www.linuxuprising.com/2018/04/gksu-removed-from-ubuntu-heres.html)
Using `admin://` has an issue on Ubuntu though - it asks for your password twice the first time you use it during a session. This does not happen in Fedora, but it happens in both Ubuntu 18.04 and 19.04.
So what's a good alternative for gksu to use on Ubuntu under its default Xorg session, which doesn't ask for the password twice? An alternative that doesn't depend on the Linux distribution or desktop environment you use (you do need PolicyKit installed), is the following command, which works with any graphical application (GUI) that you want to run with elevated permissions / as root:
```
pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS
```
The command exports your current `$DISPLAY`, `$XAUTHORITY` and `$DBUS_SESSION_BUS_ADDRESS` environment variables, allowing you to run any GUI application as root (with no need of having PolicyKit files for those apps), and without asking for the password twice, like ``admin://`` does on Ubuntu.
To make it easier to use, since the command is quite long, you can create an alias for this command, called `gksu`.
To make this `gksu` alias for the `pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS` command permanent, add the following to your `~/.bashrc` file:
```
alias gksu='pkexec env DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY DBUS_SESSION_BUS_ADDRESS=$DBUS_SESSION_BUS_ADDRESS'
```
You can add it to `~/.bashrc` by issuing the following command (run this command only once since it adds this line every time you run it):
```
echo "alias gksu='pkexec env DISPLAY=\$DISPLAY XAUTHORITY=\$XAUTHORITY DBUS_SESSION_BUS_ADDRESS=\$DBUS_SESSION_BUS_ADDRESS'" >> ~/.bashrc
```
After this, source `~/.bashrc`:
```
. ~/.bashrc
```
Now you can run this alias exactly like you'd use the old gksu command.
For example, you can use it to run Gedit (replace with any other GUI text editor you wish) with elevated privileges and edit the /etc/default/grub file:
```
gksu gedit /etc/default/grub
```
Another example - run Nautilus (or some other file manager) and open the system themes folder:
```
gksu nautilus /usr/share/themes
```
For more articles I've written, check out [LinuxUprising.com](https://www.linuxuprising.com/), a Linux & open source blog. | logix2 |
156,011 | Are you ready for the MongoDB 3.4 EOL? | If you are running Sitecore version 8.2.x, you are running MongoDB 3.4. Unfortunately, that also... | 0 | 2019-08-15T11:14:36 | https://jasonstcyr.com/2019/08/13/are-you-ready-for-the-mongodb-3-4-eol/ | sitecore, mongodb | ---
title: Are you ready for the MongoDB 3.4 EOL?
published: true
tags: sitecore,mongodb
canonical_url: https://jasonstcyr.com/2019/08/13/are-you-ready-for-the-mongodb-3-4-eol/
---
If you are running Sitecore version 8.2.x, you are running MongoDB 3.4. Unfortunately, that also means you are impacted by the announcement by MongoDB that version 3.4 will reach end-of-life in January 2020. You have some options on how you want to fix this and it sort of depends on how much tolerance your organization … [Continue reading Are you ready for the MongoDB 3.4 EOL? →](https://jasonstcyr.com/2019/08/13/are-you-ready-for-the-mongodb-3-4-eol/) | jasonstcyr |
156,036 | YDKJS — Scopes and Closures — Part2 | Welcome to the Part2 of YDKJS series. As told in part1 this series is based on my learnings from lege... | 1,853 | 2019-08-13T15:57:04 | https://thewebdev.tech/ydkjs-scopes-and-closures-2 | javascript, tutorial, webdev, beginners | Welcome to Part2 of the YDKJS series. As told in part1, this series is based on my learnings from the legendary book series You Don't Know JS by Kyle Simpson and the Javascript series by Kaushik Kothagul from Javabrains.
Before understanding scope in details we have to understand two concept. Every operation in JS is either **Read operation** or **Write Operation**.
Consider the below statement -
> **var a = 10;**
Now this is a **write operation** because we write 10 to variable a.
Now, consider the below statement.
> **console.log(a);**
This is a **read operation** because we read the value of variable a.
Let's consider another example.
> **var a =10;
var b;
b=a;**
Now in this, the line **b=a** is interesting, because for **a** it's a **read operation** and for **b** it's a **write operation**.
Let's consider another example to make the concept even clearer.
*read write example*
Now at line 4, there is a **write operation**, because when *greet(“Nabendu”)* is called from line 8, we have **function greet(name=”Nabendu”)** here. So, we are assigning the variable name the value “Nabendu”.
And at line 5 there is the usual read operation, as we are reading the value of a.
Let’s move to one more important concept related to read and write operations.
> **If we use a variable without declaring it, it is ok to do a write operation, but not ok to do a read operation.**
Let's see an example to make this clearer.
*read operation*
*In the above, we tried to do a read operation on a variable foo without even declaring it, and the compiler throws an error.*
Now consider below.
*write opertion*
*We do a write operation in line 1, i.e. foo=10. Here also we are not declaring foo. Now this is perfectly fine, and the compiler doesn't throw an error.*
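The rule above is specific to JavaScript, but the same asymmetry exists in other dynamic languages. As a loose analogy (not JavaScript), here is the equivalent behavior in Python:

```python
# Writing to a name that was never declared creates it.
foo = 10    # write operation: fine
print(foo)  # read after the write: fine

# Reading a name that was never written is an error.
try:
    print(bar)
except NameError as err:
    print("read failed:", err)
```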
The Window object is the object on which all global variables are created as properties.
Let's define the below two variables in the Firefox Scratchpad, and then do a reload and run.
*firefox scratchpad*
In the Firefox console, write window, and we can see the Window object containing these two variables.
*Window object*
We can also access these two variables as window object properties.
*window object properties*
| nabendu82 |
156,083 | The history and legacy of jQuery | Written by Danny Guo✏️ jQuery is the most widely used JavaScript library in the world. The web... | 0 | 2019-08-13T19:07:15 | https://blog.logrocket.com/the-history-and-legacy-of-jquery/ | javascript, webdev | ---
title: The history and legacy of jQuery
published: true
tags: javascript, webdev
canonical_url: https://blog.logrocket.com/the-history-and-legacy-of-jquery/
cover_image: https://thepracticaldev.s3.amazonaws.com/i/z2j9uu1wohrywd2vzpis.jpeg
---
**Written by [Danny Guo](https://blog.logrocket.com/author/dannyguo/)**✏️
[jQuery](https://jquery.com/) is the [most widely used](https://trends.builtwith.com/javascript/jQuery) JavaScript library in the world. The web development community embraced it in the late 2000s, creating a rich ecosystem of websites, plugins, and frameworks that use jQuery under the hood.
But in the past several years, jQuery’s status as the number one tool for web development has diminished. Let’s take a look at why jQuery became popular, why it has somewhat fallen out of favor, and when it would still be a good choice for a modern website.
## A brief history of jQuery
[John Resig](https://johnresig.com/) developed the initial version of jQuery in 2005 and [released it in 2006](https://johnresig.com/blog/barcampnyc-wrap-up/) at an event called BarCampNYC. On the [original jQuery website](https://web.archive.org/web/20060203025710/http://jquery.com/), he wrote:
> jQuery is a Javascript library that takes this motto to heart: **Writing Javascript code should be fun.** jQuery achieves this goal by taking common, repetitive tasks, stripping out all the unnecessary markup, and leaving them short, smart and understandable.
jQuery had two main value propositions. The first was to provide an ergonomic API for manipulating a webpage. In particular, it provided powerful methods for selecting elements.
Beyond selecting elements just based on their ids or classes, jQuery allowed for complex expressions, like selecting elements based on their relationship with other elements:
```jsx
// Select every item within the list of people within the contacts element
$('#contacts ul.people li');
```
The selection engine was eventually extracted into its own library called [Sizzle](https://sizzlejs.com/).
The second selling point was that it abstracted away differences between browsers. Back then, it was hard to write code that would work robustly on all browsers.
A lack of standardization meant that developers had to account for many different browser behaviors and edge cases. Just take a look at [this early jQuery source code](https://github.com/daniellmb/jquery-archive/blob/master/jquery.2006-07-01.js), and search for `jQuery.browser` to see some examples. Here’s one:
```jsx
// If Mozilla is used
if ( jQuery.browser == "mozilla" || jQuery.browser == "opera" ) {
    // Use the handy event callback
    jQuery.event.add( document, "DOMContentLoaded", jQuery.ready );

// If IE is used, use the excellent hack by Matthias Miller
// http://www.outofhanwell.com/blog/index.php?title=the_window_onload_problem_revisited
} else if ( jQuery.browser == "msie" ) {
    // Only works if you document.write() it
    document.write("<scr" + "ipt id=__ie_init defer=true " +
        "src=javascript:void(0)><\/script>");
    // Use the defer script hack
    var script = document.getElementById("__ie_init");
    script.onreadystatechange = function() {
        if ( this.readyState == "complete" )
            jQuery.ready();
    };
    // Clear from memory
    script = null;

// If Safari is used
} else if ( jQuery.browser == "safari" ) {
    // Continually check to see if the document.readyState is valid
    jQuery.safariTimer = setInterval(function(){
        // loaded and complete are both valid states
        if ( document.readyState == "loaded" ||
             document.readyState == "complete" ) {
            // If either one are found, remove the timer
            clearInterval( jQuery.safariTimer );
            jQuery.safariTimer = null;
            // and execute any waiting functions
            jQuery.ready();
        }
    }, 10);
}
```
By using jQuery, developers could leave it up to the jQuery team to deal with these pitfalls.
Later on, jQuery made it easy to adopt more sophisticated techniques, like animations and [Ajax](https://en.wikipedia.org/wiki/Ajax_(programming)). jQuery virtually became a standard dependency for websites. It continues to power an enormous part of the internet today. W3Techs estimates that about [74 percent of all websites use jQuery](https://w3techs.com/technologies/details/js-jquery/all/all).
The stewardship of jQuery also became more formal. In 2011, the jQuery team formally [created the jQuery Board](https://blog.jquery.com/2011/11/18/getting-board-of-jquery/). In 2012, the jQuery Board [formed the jQuery Foundation](https://blog.jquery.com/2012/03/06/announcing-the-jquery-foundation/).
In 2015, the jQuery Foundation merged with the Dojo Foundation to [form the JS Foundation](https://blog.jquery.com/2015/09/01/jquery-foundation-and-dojo-foundation-to-merge/), which then merged with the Node.js Foundation in [2019](https://medium.com/@nodejs/introducing-the-openjs-foundation-the-next-phase-of-javascript-ecosystem-growth-d4911b42664f) to form the [OpenJS Foundation](https://openjsf.org/), with jQuery as one of its “[impact projects](https://openjsf.org/projects/#impact).”
## Changing circumstances
However, jQuery has [declined in popularity](https://trends.google.com/trends/explore/TIMESERIES/1564319400?hl=en-US&tz=240&date=all&geo=US&q=jquery&sni=3) in recent years. GitHub [removed jQuery from their website’s front end](https://github.blog/2018-09-06-removing-jquery-from-github-frontend/), and Bootstrap v5 will [drop jQuery](https://github.com/twbs/bootstrap/pull/23586) because it is Bootstrap’s “[largest client-side dependency for regular JavaScript](https://blog.getbootstrap.com/2019/02/11/bootstrap-4-3-0/)” (it’s currently 30KB, minified and gzipped). Several trends in web development have weakened jQuery’s standing as a must-use tool.
### Browsers
Browser differences and limitations have become less important for several reasons. The first is that standardization has improved. The major browser vendors (Apple, Google, Microsoft, and Mozilla) collaborate on [web standards](https://spec.whatwg.org/) through the [Web Hypertext Application Technology Working Group](https://whatwg.org/).
While browsers still differ in significant ways, the vendors at least have a method to find and develop common ground rather than [waging nonstop war](https://thehistoryoftheweb.com/browser-wars/) with each other. Accordingly, browser APIs have become more capable. For example, the [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) can replace jQuery’s Ajax functions:
```jsx
// jQuery
$.getJSON('https://api.com/songs.json')
  .done(function (songs) {
    console.log(songs);
  });

// native
fetch('https://api.com/songs.json')
  .then(function (response) {
    return response.json();
  })
  .then(function (songs) {
    console.log(songs);
  });
```
The [`querySelector`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector) and [`querySelectorAll`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll) methods replicate jQuery’s selection capabilities:
```jsx
// jQuery
const fooDivs = $('.foo div');

// native
const fooDivs = document.querySelectorAll('.foo div');
```
Manipulating element classes can be done with `classList` now:
```jsx
// jQuery
$('#warning').toggleClass('visible');

// native
document.querySelector('#warning').classList.toggle('visible');
```
The [You Might Not Need jQuery](http://youmightnotneedjquery.com/) website lists several more cases in which jQuery code can be replaced with native code. Some developers always reach for jQuery because they just aren’t aware that these APIs are available, but as developers learn about them, they become less reliant on jQuery.
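The same pattern holds for jQuery's utility helpers, which don't need a DOM at all. A quick sketch of a few equivalences (the jQuery calls in the comments are real APIs; the native versions run anywhere, including Node):

```javascript
// $.extend({}, a, b)  ->  Object.assign({}, a, b)
const merged = Object.assign({}, { a: 1 }, { b: 2 });

// $.inArray(x, arr) !== -1  ->  arr.includes(x)
const found = [1, 2, 3].includes(2);

// $.map(arr, fn)  ->  arr.map(fn)
const doubled = [1, 2, 3].map((n) => n * 2);

console.log(merged, found, doubled);
```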
Using native capabilities can also improve the performance of a webpage. Many [jQuery animation effects](https://api.jquery.com/category/effects/) can now be implemented [much more efficiently](https://dev.opera.com/articles/css3-vs-jquery-animations/) with CSS, which browsers can optimize and run off the main thread.
The second reason is that browsers are updated more quickly than in the past. Most browsers now have an [evergreen update strategy](https://www.techopedia.com/definition/31094/evergreen-browser), with Apple’s Safari being the main exception. They can update themselves silently without user intervention and aren’t tied to operating system updates.
This means that new browser functionalities and bug fixes get adopted by users more quickly, and developers don’t have to wait as long for [Can I Use](https://caniuse.com/) usage percentages to reach acceptable levels. They can confidently use features and APIs without loading jQuery or [polyfills](https://en.wikipedia.org/wiki/Polyfill_(programming)).
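A polyfill, incidentally, is just a conditional shim: detect the missing feature, then patch it in. A minimal illustrative sketch (not production-quality; real projects would reach for a well-tested polyfill library such as core-js):

```javascript
// Add Array.prototype.includes only when the browser lacks it,
// mirroring how sites once shipped targeted fallbacks instead of
// pulling in jQuery for everything.
if (!Array.prototype.includes) {
  Object.defineProperty(Array.prototype, 'includes', {
    value: function (item) { return this.indexOf(item) !== -1; },
  });
}

console.log([1, 2, 3].includes(2)); // works whether native or shimmed
```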
The third reason is that Internet Explorer is getting closer to being fully irrelevant. IE has long been the bane of web developers everywhere. IE-specific bugs were notoriously common, and because IE was the dominant browser of the 2000s and lacked evergreen updates, older versions stubbornly hung around.
Microsoft sped up IE’s deprecation, [ending support](https://www.microsoft.com/en-us/microsoft-365/windows/end-of-ie-support) for IE 10 and below in 2016, leaving IE 11 as the last supported version. It is becoming more common that web developers have the luxury of ignoring IE compatibility.
Even jQuery dropped support for IE 8 and below with the release of [version 2.0](https://blog.jquery.com/2013/04/18/jquery-2-0-released/) in 2013. While some special circumstances like legacy websites still require IE, these situations are becoming rarer.
### New frameworks
A plethora of web frameworks have emerged since jQuery’s release, with some of the current front-runners being [React](https://reactjs.org/), [Angular](https://angular.io/), and [Vue](https://vuejs.org/). These frameworks have two significant advantages over jQuery.
The first is that they make it easy to break up a UI into components. They are designed to handle rendering a page as well as updating it. jQuery is typically only used for updating a page, relying on the server to provide the initial page.
React, Angular, and Vue components, on the other hand, allow for a tight coupling between HTML, code, and even CSS. In the same way that we might break a codebase down into multiple self-contained functions and classes, breaking a UI down into reusable components makes it easier to build and maintain a complex website.
The second advantage is that the newer frameworks encourage the declarative paradigm, in which the developer describes what the UI should be like and leaves it up to the framework to make the changes necessary to get there. This approach is in contrast to the imperative approach that is characterized by jQuery code.
With jQuery, you explicitly write the steps to make any changes. With a declarative framework, you say, “Based on this data, this is what the UI should be like.” This can significantly reduce the amount of mental bookkeeping you have to do to write bug-free code.
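The contrast can be seen even without a real DOM. In the sketch below (plain JavaScript, names are mine), a plain object stands in for a DOM node on the imperative side, while the declarative side simply re-renders from data:

```javascript
// Imperative (jQuery-style): write out every mutation step by hand.
const listEl = { children: [], count: 0 };
function addItemImperative(el, text) {
  el.children.push({ tag: 'li', text }); // append the node...
  el.count = el.children.length;         // ...and remember to sync the counter
}

// Declarative (framework-style): describe the whole UI as a function of
// the data; the framework figures out which mutations are needed.
function render(items) {
  return `<ul>${items.map((t) => `<li>${t}</li>`).join('')}</ul>`;
}

addItemImperative(listEl, 'Alice');
addItemImperative(listEl, 'Bob');
console.log(listEl.count); // 2 (correct only if every step stayed in sync)

console.log(render(['Alice', 'Bob'])); // <ul><li>Alice</li><li>Bob</li></ul>
```

With the declarative version, adding an item is just a data change followed by a re-render; there is no bookkeeping to forget.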
Developers have embraced these new approaches to building websites, reducing jQuery’s relevance.
## When to use jQuery
So when _should_ we choose to use jQuery?
If the project in question is expected to grow in complexity, it’s best to start with a different library or framework that will allow you to sanely deal with that complexity, such as by breaking the UI into components. Using jQuery for such a website can be fine at first, but it can quickly evolve into spaghetti code, where you aren’t sure what code affects what parts of the page.
I’ve dealt with this before, and the situation produces a feeling of uneasiness whenever you want to make a change. It’s hard to be sure that you aren’t breaking anything because jQuery selectors depend on HTML structure that is produced by the server.
On the other end of the spectrum, you have simple websites that only need a small amount of interactivity or dynamic content. For these cases, I would still default to not using jQuery because we can do much more now with native APIs.
Even if I did need something more powerful, I would look for a specific library for the use case, such as [axios](https://github.com/axios/axios) for Ajax or [Animate.css](https://daneden.github.io/animate.css/) for animations. Using libraries like these is generally more lightweight than loading the entirety of jQuery for just a bit of its functionality.
I think the best justification for using jQuery is that it provides comprehensive functionality for powering the front end of a website. Instead of having to learn all the various native APIs or special-purpose libraries, you can read just the jQuery documentation and immediately be productive.
Its imperative approach doesn’t scale well, but it’s also more straightforward to learn than the declarative approach of other libraries. For a website with a clearly limited scope, it’s reasonable to drop in jQuery and move on; it doesn’t need any sort of sophisticated build or compilation process.
jQuery, then, is a good choice when you are reasonably confident the website won’t become much more complicated, and you don’t want to bother with native functionality, which can certainly be more verbose than the equivalent jQuery code.
Another use case emerges when you must support old versions of IE. In that case, jQuery serves as well as it did when IE was the dominant browser.
## Looking forward
jQuery isn’t going away anytime soon. It’s [under active development](https://github.com/jquery/jquery) and plenty of developers prefer using its API even when native methods are available.
It has helped a generation of developers make websites that work on every browser. While it has been supplanted in many respects by new libraries, frameworks, and paradigms, jQuery played a huge, positive role in making the web what it is today.
Barring a significant change in jQuery functionality, it seems likely that jQuery will continue to experience a slow but steady decline in usage in the next several years. New websites tend to be built from the beginning with more modern frameworks, and the good use cases for jQuery are becoming rarer.
Some people are unhappy with the rate of churn in web development tooling, but to me, that is a sign of rapid progress. jQuery gave us better ways to do things. Its successors have done the same.
* * *
**Editor's note:** Seeing something wrong with this post? You can find the correct version [here](https://blog.logrocket.com/the-history-and-legacy-of-jquery/).
## Plug: [LogRocket](https://logrocket.com/signup/), a DVR for web apps

[LogRocket](https://logrocket.com/signup/) is a frontend logging tool that lets you replay problems as if they happened in your own browser. Instead of guessing why errors happen, or asking users for screenshots and log dumps, LogRocket lets you replay the session to quickly understand what went wrong. It works perfectly with any app, regardless of framework, and has plugins to log additional context from Redux, Vuex, and @ngrx/store.
In addition to logging Redux actions and state, LogRocket records console logs, JavaScript errors, stacktraces, network requests/responses with headers + bodies, browser metadata, and custom logs. It also instruments the DOM to record the HTML and CSS on the page, recreating pixel-perfect videos of even the most complex single-page apps.
[Try it for free](https://logrocket.com/signup/).
* * *
The post [The history and legacy of jQuery](https://blog.logrocket.com/the-history-and-legacy-of-jquery/) appeared first on [LogRocket Blog](https://blog.logrocket.com). | bnevilleoneill |
156,130 | How to Align Your Team on the Need for Accessibility | Considering or planning to teach your team about web accessibility? Here is a free presentation and guide to help them get on the same page. | 0 | 2019-08-13T22:00:48 | https://dev.to/seejamescode/how-to-align-your-team-on-the-need-for-accessibility-n7o | a11y, design, webdev, pwa | ---
title: How to Align Your Team on the Need for Accessibility
published: true
description: Considering or planning to teach your team about web accessibility? Here is a free presentation and guide to help them get on the same page.
tags: Accessibility,Design,Web Development,PWA
cover_image: https://thepracticaldev.s3.amazonaws.com/i/uxtpimi3ojb9exxrzc1t.jpg
---
We all learn about web accessibility at different points in our career. That means a lot of the time you are not on the same page as your teammates. I had the privilege a couple of months ago to speak at [Pingboard](https://pingboard.com) about accessibility. Our goal was to get the whole team to the same knowledge starting point. If we all have a basic understanding of whom web accessibility affects and how it affects them, we can ship better experiences.
You probably find yourself in the same opportunity at your company to present on accessibility. So I would like to do two things to help: I am going to [give you my presentation](https://drive.google.com/file/d/1W62aya8uk0LgMPyMUBSIAJVOQBewmiKd/view?usp=sharing) as a starting point and walk you through the points I like to touch on.
> You probably find yourself in the same opportunity at your company to present on accessibility. …I am going to [give you my presentation](https://drive.google.com/file/d/1W62aya8uk0LgMPyMUBSIAJVOQBewmiKd/view?usp=sharing) as a starting point and walk you through the points I like to touch on.
# Remind the team that you are talking about real people.
When we read accessibility documentation, it is easy to forget the human element. It makes sense because you are reading technical docs meant to influence code. It is great to start with this shared definition:
> *A person with a disability:* A person who has a physical or mental impairment that substantially limits one or more major life activity.
We use this to establish friendly dialog. People do not want to be called "disabled". They want to be called by their name. We also need to clarify how wide the range of disabilities can be. Try expanding past assumptions early on with these points:
- Some disabilities come at birth, some come later.
- Some disabilities are permanent, some are temporary.
- Some disabilities always affect, some come and go.
- Some disabilities are visible, some are invisible.
# Go over some disability categories with emotional experiences and quick tips.
Now that we have established that we are talking about people, it is time to talk about their experiences. I like to mix this section with quick tips for common disability categories. Remind your audience that there are way more disabilities than you are covering. They are difficult to categorize, which is why the technical documentation focuses on the solutions.
Something you will notice about the presentation is that there is a lot of video and audio. I find it more effective to have those with disabilities speak more than me about the issue. The multimedia in the presentation makes it possible for those people to not even have to be there.
## Visual
{% youtube UzffnbBex6c %}
I love to share this video of Tommy Edison using a screenreader because he keeps things lighthearted, but also goes through the whole process of sending an email. After the video, you can point out that fellow Mac users can try their screenreader with `CMD + F5` at any time.
Quick tips:
- People with dyslexia prefer to override font settings.
- People with low vision need to be able to zoom correctly.
- People with color blindness need an overall color contrast ratio of 4.5:1. Text 19px or larger can have a ratio of 3:1.
- People with color blindness need labels and patterns for differentiations.
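The 4.5:1 ratio in the tips above comes from WCAG's relative-luminance formula, and it can be computed directly. A minimal JavaScript sketch (function names are mine; the formula itself is WCAG 2.x's):

```javascript
// WCAG 2.x relative luminance of an sRGB color (channels 0-255).
function luminance([r, g, b]) {
  const lin = (c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between two colors; always between 1 and 21.
function contrast(c1, c2) {
  const [hi, lo] = [luminance(c1), luminance(c2)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrast([255, 255, 255], [0, 0, 0]));       // 21, the maximum
console.log(contrast([255, 255, 255], [119, 119, 119])); // mid grey on white
```

A small checker like this can be wired into a design-review step to flag text colors that fall below the 4.5:1 (or 3:1 for large text) thresholds.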
## Auditory and Seizure
Auditory disabilities are easier to talk about with digital product teams. Remind your team that all audio should be paired with visual cues and captions. Encourage the team to do content audits to check all videos for closed captioning.
Strobing, flickering, and flashing can trigger seizures. Other triggers include animations longer than 250ms, parallax, and images moving under text.
## Motor
{% youtube yx7hdQqf8lE %}
There are two demos I like to show teammates when it comes to motor disabilities. The first is hidden inside a longer video. A fellow named Gordin Richins shows what it is like to use a mouth stick. It is an older video, but I try to point out that new technologies can be more expensive.
{% youtube FEQv7buTNxw %}
The second video is a wholesome video of an eye tracking product. These are great because they can provide mouse capabilities to those with motor disabilities. However, we should still make all experiences keyboard accessible to be safe.
## Cognitive
Cognitive disabilities can be difficult to convey. For this last category, I stuck with quick tips to keep the presentation alternating between facts and emotion. Here are the quick tips I share:
- For memory, keep processes short and remind users of context as much as possible.
- For problem-solving, error messages should be as explanatory as possible.
- For attention, use visual cues to highlight the most important points or sections of content.
- For reading, linguistic, and verbal comprehension, provide supplemental media that helps processes.
# Emphasize how common disabilities actually are.
Did you know that one out of five people in the US have at least one disability? ([source](https://www.census.gov/newsroom/releases/archives/miscellaneous/cb12-134.html)) It may not seem like that in the workplace, but we should consider why. This is the point of the presentation when people should understand invisible disabilities. Invisible disabilities can be hidden from the naked eye. Here is a great interview where Carly Medosch talks about working with an invisible disability:
[*NPR: People with 'Invisible Disabilities' Fight For Understanding*](https://www.npr.org/2015/03/08/391517412/people-with-invisible-disabilities-fight-for-understanding)
This story is a great transition to a big question: What can we as enterprise software teams do to help those with disabilities?
Well, 79% of people of a working age in the US have employment. Only 41% of people of a working age in the US that have disabilities have employment. ([source](https://www.census.gov/newsroom/releases/archives/miscellaneous/cb12-134.html)) If more jobs were accessible, that gap would close. This means that we as enterprise software teams can make it our mission to close the gap!
# End with the legal risk for those that need extrinsic motivation.
It may not feel great, but some people may still need more reasoning about why the team should work towards accessible experiences. This is why I like to close the presentation on the legal implications of accessibility.
 <figcaption>https://www.adatitleiii.com/2019/01/number-of-ada-title-iii-lawsuits-filed-in-2018-tops-10000/</figcaption>
In 1990, the Americans with Disabilities Act was signed. This provides those with disabilities the same protection that's given in the Civil Rights Act of 1964. Section 508 says digital experiences in government departments and agencies have accessibility requirements.
Lawsuits increasingly argue that the ADA also covers digital experiences from any company. Over 10,000 lawsuits were filed in 2018 alone. In fact, one of those [cases is headed to the Supreme Court](https://www.cnbc.com/2019/07/25/dominos-asks-supreme-court-to-say-disability-protections-dont-apply-online.html).
# Be a part of the good fight.
Are you considering presenting to your team on web accessibility? You really should. You don't have to be an expert and it's okay if not everyone listens to you. Every effort to make the web a more friendly place is worth it.
I hope these resources helped you shape a future presentation. Please steal everything from me (but keep the citations).
If you appreciate this, please consider voting for my SXSW talk idea. I want to teach product managers, designers, and everyone else about progressive web apps. The cool thing about PWAs is that a lot of accessibility criteria are baked in! If you want to learn more, check out the video below.
[Please take a minute to vote for my talk and share it with others.](https://panelpicker.sxsw.com/vote/95517)
{% youtube aRwfB7Iiaqo %}
Got any other good resources for accessibility presentations? Please share them in the comments or tweet me them at [@seejamescode](https://twitter.com/seejamescode). I will retweet the best ones! | seejamescode |
156,177 | Security in open source projects | In recent years, the amount of open source components used by developers has experienced significant growth. Millions of open source libraries are distributed through centralized systems such as Maven Central (Java), NPM (JavaScript) and GitHub (Go). | 0 | 2019-08-14T02:23:38 | https://dev.to/jmortega/security-in-open-source-projects-5c50 | security, opensource | ---
title: Security in open source projects
published: true
description: In recent years, the amount of open source components used by developers has experienced significant growth. Millions of open source libraries are distributed through centralized systems such as Maven Central (Java), NPM (JavaScript) and GitHub (Go).
tags: security,open source
---
In recent years, the amount of open source components used by developers has experienced significant growth. Millions of open source libraries are distributed through centralized systems such as Maven Central (Java), NPM (JavaScript) and GitHub (Go).
In this talk, I will present the common security problems faced by companies that use open source. We will also talk about how to manage the risks of open source software using people, processes, and tools.

[Slides](https://www.slideshare.net/jmoc25/security-in-open-source-projects)
[Video](https://media.ccc.de/v/froscon2019-2361-security_in_open_source_projects)
| jmortega |
156,642 | Apama REST services | Implementing request/response protocols with the HTTP connectivity transport Apama's REST... | 0 | 2019-08-23T12:38:40 | http://techcommunity.softwareag.com/techniques-blog/-/blogs/apama-rest-services | apama, rest, http | ---
title: Apama REST services
published: true
tags: apama, rest, http
canonical_url: http://techcommunity.softwareag.com/techniques-blog/-/blogs/apama-rest-services
---
# Implementing request/response protocols with the HTTP connectivity transport
**Apama's REST server, provided by the HTTP server connectivity transport, could only provide event submission over REST. With the 10.3.1 release (available to both Software AG 10.3 and 10.4 customers), it can now also provide request/response APIs. This article explores some of the new functionality.**
## REST use cases with Apama
Apama is an event-processing engine, which needs to integrate with other systems to consume and emit events. This has traditionally been through either our own client libraries and protocol or via message buses, such as Java® Message Service (JMS) or Universal Messaging. In the past few releases, we have been adding ways to directly integrate with Apama through standard web-based protocols, such as REST.
**Querying/invoking a REST service from Apama**
Some Apama applications need data from external services in order to process events. The results of other applications are actions on an external service. In Apama 10.0, we introduced an HTTP Client transport, which could be used to make such requests from your Apama application, wait for the response and handle it.
**Event delivery to Apama over REST**
Alternatively, you might have an external service that needs to deliver an event stream to Apama. In 10.1, we introduced an HTTP Server transport which provides an HTTP REST endpoint, which you can invoke from other services. This version of the HTTP Server was one way. You could deliver an event to Apama, but Apama couldn't give you any response from the application.
**Apama provides a REST service**
Rather than just providing a submission-only API, you might want to provide a full REST service where the query is handled by your Apama application which decides on how to respond. This response is then what's sent back to the client. This use case is what we're providing in 10.3.1 and will be the subject of the rest of this article.
## Adding HTTP Server support to a project
To add an HTTP server to your project, you need to add the “HTTP Server” bundle. This can either be done in Software AG Designer through the “Connectivity and Adapters” node or using the new apama\_project tool also introduced in 10.3.1.
This will add an additional YAML configuration file and associated properties to your project under _config/connectivity/HTTPServer_. You will need to modify the properties file to specify the port you want to run the server on and change any other settings like TLS support. More details about the configuration options can be found in Apama documentation.
The default configuration is a submission-only JSON™-based API, which assumes a particular format of the JSON payload. You will have to change this configuration file to enable request/response and provide a mapping between the bodies of the REST API to events within Apama (see below).
## Enabling REST responses from EPL
By default, the HTTP server bundle will have a submission-only API and not allow you to send responses from your EPL. In order to be a full REST server, you'll need to change the configuration file. We're going to create a Create Request Update Delete (CRUD) store in which you can make a request to store data in Apama and then retrieve it later. The full example can be found in the samples/connectivity\_plugin/application/httpserver-responses directory of your Apama installation.
First, you have to stop the transport from sending responses automatically by changing a flag on the transport:
```
HTTPServerTransport:
  automaticResponses: false
  allowedMethods: [PUT]
```
Second, the transport provides some information which needs to be mapped into an event so that you can use it in EPL. This is done by changing the rules for the mapperCodec in your chain definition in the configuration file:
```
mapperCodec:
  "*":
    towardsHost:
      mapFrom:
        - payload.requestId: metadata.requestId
      defaultValue:
        - payload.channel: "@{httpServer.responseChannel}"
```
This snippet is applying to all request messages, and we're setting two fields in the payload. The request ID is taken from each message and the response channel is a constant substituted in when the chain is created. You will need to add these mapping rules in addition to any others you have for all types of request. The request ID is used to match up responses to the appropriate request. The channel tells EPL how to send responses back to the transport.
We also need to make some changes to messages coming back from EPL. Specifically, we need to provide the request ID from the corresponding request:
```
mapperCodec:
  "*":
    towardsTransport:
      mapFrom:
        - metadata.requestId: payload.requestId
```
## Handling REST requests and sending responses
Finally, we're ready to write some EPL, which will handle a request and send an appropriate response. Let’s use the “create” part of the CRUD sample as an example. You can see all the other versions in the full sample.
```
on all CreateObject() as co {
    try {
        string path := generateNewKey();
        datastore.add(path, co.data);
        send Success(co.requestId, path) to co.channel;
    } catch (Exception e) {
        send Error(co.requestId, 500, e.getMessage()) to co.channel;
    }
}
```
This shows that we're taking the request id from the _CreateObject_ request and passing it back in both the _Success_ and _Error_ responses. These responses are being sent to the channel that's also part of the request object.
There will also need to be mapping rules for the success and error messages being sent back to the client. You can see the details in the full sample.
## Summary
In this article we've learned how to handle REST requests in your Apama application and generate tailored responses to be sent to the client. Full details are available in the _httpserver-responses_ sample in your installation and in the Apama documentation under “The HTTP Server Transport Connectivity Plug-in.” For more help, check out our regular blog posts on [www.apamacommunity.com](http://www.apamacommunity.com) or ask a question on Stack Overflow with the **apama** tag. | tpetrov9 |
156,230 | How to be a successful Product Manager for AI products | You can’t exist in the tech world today without hearing about AI at every turn. The business value is... | 0 | 2019-08-27T08:12:40 | https://www.lftechnology.com/blog/ai/product-manager-ai/ | productdevelopment, ai, productmanager, machinelearning | You can’t exist in the tech world today without hearing about AI at every turn. The business value is clear— automate tasks that would take humans far longer and gain insight into valuable new data. Given that our product managers regularly use data to support their hypotheses about new features, where do they fit into this AI equation? How can they overcome any knowledge gaps and lead successful product launches utilizing machine learning?
The answer is not as complicated as it may seem. A product manager with a solid grasp for the tactical process can rise to the challenge.
## What product managers need to know about AI
We know that product managers need to think tactical, but how deep should their AI knowledge go? Ultimately, product managers just need to know what AI can do, and conversely, what the limitations are. This is important in determining whether AI is even a viable solution for the feature in question. Knowing what data is available, and how it can further AI development allows the product manager to tactically steer the product.
Taking it a step further, the product manager should be able to know how AI will integrate into the current product, and how to improve the AI models moving forward. Do the collected data aid future initiatives? What other data sets could be useful for future AI capabilities in the product? These are questions that can help propel efficiency in the machine learning model. The key is to not get bogged down with implementation level questions and stick to the high-level strategy.
A good exercise is to test your knowledge of useful applications of AI by thinking about how it can aid your own company. A good place to look is at your rule-based systems that depend on certain classifications. For example, here at Leapfrog, we came up with a supervised-learning system for managing employees. This system depends on classifications such as whether people are high or low performers, their total length of employment, and employee feedback. We can use this to determine goalposts of what employees can achieve in 3 months, 6 months, or in one year. Continued data and feedback over time will grow these models. Ultimately, we can strengthen company culture, morale, and growth by investing AI into our employees. Looking at simple internal solutions allows us to start flexing our machine learning muscles.
## How to structure AI products
A product manager can use their strategic knowledge to structure an AI project appropriately. The following high-level steps can guide the flow necessary for an AI project.
1. **Get the data:** Some clients may desire an AI solution, but will not have the tools in place for this to be viable. Without the right set of customer data, you will not be able to strategically apply it to an AI problem.
2. **Explore and analyze the data:** How can I use the data? Does this data support my product hypothesis? What initial findings do I see? The product manager can engage with the data at a high level to see if this fulfills the needs of the feature in question.
3. **Validate the hypothesis with users:** As with any project, it is imperative that we validate our assumptions with our user. Testing our initial ideas will allow us to make valuable iterations throughout the product cycle. By starting small and iterating, product development can gain speed.
4. **Create quick and dirty models:** The team can use the initial research and hypothesis to begin building quick models. More questions will arise, but don’t lose sight of the initial hypothesis and goal.
5. **Tune the models:** Iterating on the initial models will continue to confirm or deny the original hypothesis. We can ask ourselves, “What is the minimum level of functionality that is acceptable?” Keep tuning this model until solid results are delivered.
## Ask the right questions
At the end of the day, the biggest measure of success will be if a product manager can ask the right questions at every stage of the product cycle. They are steering the features from hypothesis through development. Success in machine learning projects is ultimately measured similarly to regular development projects. If the product manager can retain their tactical approach, keep the team aligned around key product goals, and stay focused on the experience of the end-user, then the project will be in good hands.
#### Want to take the next leap and learn more about integrating AI in your product?
| lftechnology |
156,251 | Designing the database | User-centric design of a database | 3,597 | 2019-08-14T08:57:35 | https://dev.to/aninditabasu/building-a-chatbot-with-the-ramp-stack-part-2-designing-the-database-131e | databasedesign | ---
title: Designing the database
published: true
series: Building a chatbot with the RaMP stack (RaspberryPi, MySQL, Python)
description: User-centric design of a database
tags: database_design
cover_image: https://thepracticaldev.s3.amazonaws.com/i/9sfbec8ouz5dhgcw5h2n.PNG
---
My chatbot is going to answer queries on the Mahabharat, so my database will be a Mahabharat<sup><a href = "#1">1</a></sup> database. Let's design one :construction:
I should be able to apply the design principles in this post to any story corpus, epic, or novel (and not <i>just</i> the Mahabharat). The Bible, for example, or the Harry Potter series. :star2::zap:
<blockquote>
<dl>
<dt>Who am I designing the database for?</dt><dd>A chatbot.</dd>
<dt>Who will use the chatbot?</dt><dd>Human beings.</dd>
<dt>Why will they use the chatbot?</dt><dd>To get quick info on the story.</dd>
<dt>What kind of info?</dt><dd>Info about the people in the epic, what they do, how they live day to day, how they die.</dd>
</dl>
</blockquote>
I know that my chatbot is not a story-telling bot; it is an info-supplying bot. How do human beings look for info? They do so by asking questions.
<blockquote>
<dl>
<dt><u>Why</u> is the sky blue?</dt><dd> </dd>
<dt><u>How</u> do I dance the foxtrot?</dt><dd> </dd>
<dt><u>When</u> is that moron going to resign?</dt><dd> </dd>
<dt><u>Where</u> were they when the lights went out?</dt><dd> </dd>
<dt><u>Who</u> framed Peter Rabbit?</dt><dd> </dd>
</dl>
</blockquote>
Applying these questions to my project, I can see three things:
<ul>
<li>That I need to group the information into categories</li>
<li>That a category should answer one - and only one - type of question</li>
<li>That an answer to a question in one category can lead to further questions that can belong to either the same category or to another</li>
</ul>
I believe, in geek parlance these are known as <i>taxonomy</i>, <i>ontology</i>, and <i>normalisation</i>?
So, categories will be my database tables. The rows in the tables will be the Who, What, Where of that category. The columns will contain the information about one specific Who, What, Where. The info in one cell of one table can be exactly equal to the info in a cell of one or more tables. :boom:
My Mahabharat database is going to have the following categories (tables):
- persons
- families
- clans
- weapons
- killings
- places
A character in the story (a person) can belong to persons, families, clans, weapons, and killings. In one table, a person can appear in one - and only one - row. In all other tables, that person can appear in several rows, but only once in each such row.
I believe, in geek parlance this is known as <i>primary key</i> and <i>foreign key</i>?
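(Yes, that is what they are called.) To make the idea concrete, here is a sketch of what two of these tables could look like as MySQL `CREATE TABLE` statements. The column names are my illustrative guesses, not the real design, and the DDL is held in plain strings so any client language (such as the Python in this series) could send it to MySQL:

```javascript
// DDL sketches for two of the tables above, held in strings so a client
// program can send them to MySQL. All column names here are guesses.
const createClans = `
  CREATE TABLE clans (
    clan_id INT PRIMARY KEY,      -- each clan appears in exactly one row
    clan_name VARCHAR(100)
  )`;

const createPersons = `
  CREATE TABLE persons (
    person_id INT PRIMARY KEY,    -- each person appears in exactly one row
    person_name VARCHAR(100),
    clan_id INT,                  -- the "common column" linking tables
    FOREIGN KEY (clan_id) REFERENCES clans (clan_id)
  )`;
```

The `PRIMARY KEY` enforces the one-and-only-one-row rule within a table, while the `FOREIGN KEY` is the common column that links one table to another.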
Now that I have an idea of what my database will look like, I should write it all down. So, here we go: <img src = "https://thepracticaldev.s3.amazonaws.com/i/9sfbec8ouz5dhgcw5h2n.PNG"/>
To sum up, for designing my database, I:
- Ask what kind of questions my database will answer.
- Group the questions into categories.
- Create a table for each category.
- Turn the sub-groups of these categories into the table columns.
- Link one table to another through a common column.
Next, I'll create these tables in MySQL on my <a href = "https://dev.to/aninditabasu/building-a-chatbot-with-the-ramp-stack-part-1-preparing-my-raspberry-pi-1jai">RaspberryPi</a> and populate them with the data.
<hr/>
<small>Footnote:</small>
<a name = "1"></a><small>1. The Mahabharat is one of the two epics of ancient India, is the longest epic in the world, and is the epic that contains the Bhagvad Gita (a mini-book that's sacred to the Hindus).</small>
| aninditabasu |
156,314 | JavaScript Array Splice Issue | This is an issue that cropped up for me today: I was splicing a name out of an array and was getting the wrong result back. The issue was a simple one with a quick fix that took me several hours to track down what was going on. | 2,302 | 2019-08-15T14:23:04 | https://dev.to/rfornal/javascript-array-splice-issue-5301 | javascript, array, splice, webdev | ---
title: JavaScript Array Splice Issue
published: true
description: This is an issue that cropped up for me today: I was splicing a name out of an array and was getting the wrong result back. The issue was a simple one with a quick fix that took me several hours to track down what was going on.
tags: JavaScript, Array, Splice, WebDev
series: JavaScript Issues
cover_image: https://thepracticaldev.s3.amazonaws.com/i/3vn8ke839ubkd7t7d79q.png
---
This is an issue that cropped up for me today: I was splicing a name out of an array and was getting the wrong result back. The issue was a simple one with a quick fix that took me ***several hours to track down*** what was going on.
Now that I've seen the answer, it's obvious ... at the time I was seeing the issue, it was frustrating to say the least.
Finding a good way to query the issue (couldn't come up with good search terms) led me to creating this article.
I wrote the following code ...
```javascript
triggerDelete: async (name) => {
let stored = ['one', 'two', 'three', 'four', 'five'];
stored = stored.splice(stored.indexOf(name), 1);
return stored;
}
```
I've simplified it some. The issue was simple, when I passed in **'two'** the array returned was ...
```javascript
triggerDelete('two');
/*
* returns ['two'], not ['one', 'three', 'four', 'five']
*/
```
I expected **['one', 'three', 'four', 'five']** to be the array returned.
... after two hours of searching and finally asking for a second pair of eyes, the solution was ...
```javascript
triggerDelete: async (name) => {
let stored = ['one', 'two', 'three', 'four', 'five'];
stored.splice(stored.indexOf(name), 1);
return stored;
}
```
Quite simply, the issue was that **splice** already changes the array in place; by assigning its return value (the removed elements) back into the **stored** variable, I was throwing the updated array away; hence, **['two']** ...
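A quick way to see both values side by side (a minimal sketch):

```javascript
// splice() mutates the array in place and RETURNS the removed elements.
const stored = ['one', 'two', 'three', 'four', 'five'];
const removed = stored.splice(stored.indexOf('two'), 1);

console.log(removed); // ['two']                          (the return value)
console.log(stored);  // ['one', 'three', 'four', 'five'] (the mutated array)
```

Assigning that return value back over the original array is exactly what produced the surprising `['two']`.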
Hopefully this article will save someone else some of the pain I felt down the road!
| rfornal |
156,600 | SQL Access To Mainframe Data | CONNX Issue 3, 2018 Download PDF Mainframe Data Over the decades, ma... | 0 | 2019-09-13T11:17:44 | http://techcommunity.softwareag.com/techniques-blog/-/blogs/sql-access-to-mainframe-data | adabas, connx, sql | ---
title: SQL Access To Mainframe Data
published: true
tags: #adabas, #connx, #sql
canonical_url: http://techcommunity.softwareag.com/techniques-blog/-/blogs/sql-access-to-mainframe-data
---
# CONNX
| Issue 3, 2018 | [<font color="#000000"><img alt="" id="yui_patched_v3_11_0_1_1435667347988_855" src="http://techcommunity.softwareag.com/image/image_gallery?uuid=202eb7e1-9148-4c65-b29a-54c4302ab65e&groupId=10157&t=1326987658425" style="width: 22px; height: 22px; vertical-align: text-bottom;"> <span id="yui_patched_v3_11_0_1_1492771711676_1198">Download PDF</span> </font>](http://bit.ly/2msfoqz) |
|-|-|-|
## Mainframe Data
Over the decades, many have predicted the death of the mainframe, and all of those predictions were wrong. IBM®, the leader in the mainframe industry, had annual revenues of nearly $80 billion in 1997. Twenty years later, IBM still had annual revenue of $80 billion (excluding inflation). When factoring in inflation, revenue for mainframe has decreased a bit over time. However, the sheer size of the revenue number shows that mainframes will be a part of major corporations for decades to come. The average age of mainframe software engineers and administrators is increasing and reaching that critical point of retirement.
Additionally, the number of software engineers learning about mainframe technology (COBOL, VSAM™, IMS™, Adabas) out of college is at an all-time low. This creates an ever-growing gap between available resources and the number of resources required to maintain and enhance these critical systems running on the mainframe.
## Using SQL to bridge the gap
SQL itself is a very mature technology, but it is still taught in universities and is very much a part of modern software development. It is the primary way to access modern relational enterprise databases such as Oracle®, SQL Server® and DB2. The concepts in SQL are simple and easy to understand—and these concepts can be applied to access data from a variety of databases from disparate vendors. There are several data access APIs based on SQL. For Java® – based applications there is JDBC®. For other programming languages, there are ODBC, OLE DB and Microsoft® .NET. The combination of SQL along with these data-access APIs has enabled entire segments of the business integration market to thrive. Thousands of applications access data generically using SQL through ODBC and JDBC.
CONNX bridges the gap between these modern data-access methods and the legacy data sources that exist on the mainframe that do not support SQL nor ODBC/JDBC natively, such as Adabas, VSAM and IMS.
## Using CONNX to virtualize mainframe data via SQL
All SQL-based databases have an information schema. This is a metadata repository that contains information about all of the object in the database and their attributes. In SQL databases, there is the concept of a table, which contains records with an identical structure of information. The structure is defined with a collection of columns—where each column specifies the exact attributes of a piece of data. (Is it a date? Is it a text field? What is the maximum length? etc.) The definition of the structure of tables and columns is (in general) uniform across all SQL databases regardless of vendor. It is this uniformity that enables applications developers to use SQL to access data without knowing ahead of time the particular details of the database vendor. There are of course exceptions to this, and each vendor has its own set of extensions to the ANSI SQL syntax. But all enterprise relational database vendors support the same core ANSI SQL syntax. In order for CONNX to provide SQL access to mainframe data, an equivalent SQL catalog must be created that describes the mainframe data being accessed. In CONNX, the catalog is called the CONNX Data Dictionary (CDD).

**Fig 1:** CONNX Data Dictionary Entry for Adabas File
The CDD contains all of the information necessary to access the mainframe data sources and all of the metadata required to represent the data as a relational table. It contains the mapping between SQL data types, SQL table names and SQL column names, and the physical field/offset/data type information necessary to access the mainframe data. Legacy mainframe applications, such as COBOL applications, actually have a very similar, much older concept called a COBOL copybook. This copybook enables COBOL applications to refer to legacy data using meaningful names and also describes the format of the data being stored or retrieved. CONNX capitalizes on the existence of these copybooks and enables imports into the CDD directly from mainframe copybooks. Mainframe Adabas has a similar concept, a Natural DDM, which provides longer descriptive names to the fields in an Adabas file. CONNX imports metadata from this format along with dozens of others.

**Fig 2:** Importing Data from COBOL Copybook
While the data for many companies is on the mainframe, the integration points may not be on the mainframe. An integration point could be a data lake/mart/warehouse or a web-based application. If the integration point is not on the mainframe, then additional challenges arise. Linux®, UNIX® and Windows® (LUW) systems store data differently than mainframe systems. For example, most mainframe text-based data is encoded in something call Extended Binary Coded Decimal Interchange Code (EBCDIC for short). On LUW systems, either Unicode®
or ASCII is more common for text data. The bytes of integer numbers are stored in opposite order on the mainframe versus most LUW systems. And, there are many other difference between the platforms. CONNX smooths over all of these differences and complexities by storing all of the required information to properly read and write the numeric, character and date data in the native mainframe format, while also providing the data in a standard SQL format.
Application developers do not have to worry or care about the fact that the data is physically being stored differently on the mainframe; it makes it completely transparent to them once the CDD has been defined.

**Fig 3:** Query Builder to Join SQL Tables
CONNX not only provides singular access to a particular mainframe data source, but it also makes all of the mainframe data sources appear as if they are in a single virtual database. Organizations that have data in both VSAM and in IMS, for example, can use a single connection to query, combine or move data from the data sources.
## Securing access to the mainframe data
A foremost concern companies have is that someone will obtain unauthorized access to the data on the mainframe. CONNX addresses that concern head-on with several security features and concepts:

1. **Authentication** is the first line of defense. In order to access the data on the mainframe via CONNX, you must first provide the correct mainframe authentication credentials. In other words, if you do not have the credentials to log on to the mainframe via a green screen/terminal emulator, you cannot do so with CONNX either. This ensures that any existing user-based security restrictions on the mainframe are honored.
2. The second line of defense is the **CONNX global security setting** for the CDD. All CDDs are read-only by default, and no write permission is allowed unless explicitly granted. This can be configured at the CDD level.
3. The third line of defense is the **CONNX SQL security layer**. Relational databases let you specify which users have select, insert, update and delete access to which table. CONNX provides this same SQL security mechanism for all of the mainframe data sources. CDD administrators can define exact access permissions for either individual users or groups of users.
4. **CONNX views** are the fourth line of defense. Some organizations want to tightly control not only which columns of data individual users can see but also which rows are visible. This is possible with a CONNX view, where the name/ID of the currently logged-on user can be joined to a field within a table to filter which rows are returned. Such a view returns different rows depending on who has logged on, ensuring that users can only see the rows belonging to them.
5. The fifth line of defense is the **encryption** of sensitive data transmitted across TCP/IP. All CONNX TCP/IP connections can be encrypted with state-of-the-art TLS/SSL. And even when TLS/SSL is not enabled, any sensitive information (like username and password) is encrypted at all times.
## Unified access from SQL-based applications
After a CONNX CDD has been established and secured, it enables the querying of mainframe data. Java applications and frameworks use the JDBC interface to access data. Establishing a connection to CONNX just requires authentication information, the name of the data source (CDD) and the name of the JDBC Server. The JDBC Server is the CONNX service for Java that does all of the data access and SQL query processing. This service runs on any LUW platform. The following code snippet shows one way a Java application connects to CONNX (other connection methods are possible as well):
```java
try {
    Class.forName("com.Connx.jdbc.TCJdbc.TCJdbcDriver").newInstance();
} catch (Exception ex) {
    ex.printStackTrace();
}

Properties info = new Properties();
String connString = "jdbc:connx:DD=<connx_datasource_name>;port=7500;GATEWAY=<connx_jdbcserver_name_or_ip>";
info.put("user", "my_userid");
info.put("password", "my_password");
connectionObj = DriverManager.getConnection(connString, info);
```
Once the connection has been established, the application can use standard SQL to query the data, or update the data if permissions permit. CONNX delivers transaction capability to all mainframe data sources that support rollback capability (which is true of Adabas, VSAM under CICS, and IMS).
CONNX also supports ODBC access to the data from all LUW platforms, and the same authentication and encryption features are available through this data access API.
CONNX provides a query tool called Infonaut, which visualizes the data returned from SQL statements.

**Fig 4:** Showing Query Results
## Bridging the gap
CONNX enables a new generation of software engineers who are training and familiar with SQL to access mainframe data. Regardless of whether a company is building a new application, extending an existing one, or integrating mainframe data into a data lake, CONNX makes the task secure, easy and fast.
To find out more about CONNX, contact your Software AG sales representative and visit [www.softwareag.com/connx](http://www.softwareag.com/connx). | tpetrov9 |
156,621 | NaturalONE | Get ready for Natural 9 Get your Natural application development environment ready for 205... | 0 | 2019-09-04T14:16:03 | http://techcommunity.softwareag.com/techniques-blog/-/blogs/naturalone | naturalone, natural, adabas | ---
title: NaturalONE
published: true
tags: #naturalone, #natural, #adabas
canonical_url: http://techcommunity.softwareag.com/techniques-blog/-/blogs/naturalone
---
# Get ready for Natural 9
**Get your Natural application development environment ready for 2050 and beyond! Starting now, NaturalONE is the default development environment of Natural with the 9.0 release.**
| Issue 4, 2018 | [<font color="#000000"><img alt="" id="yui_patched_v3_11_0_1_1435667347988_855" src="http://techcommunity.softwareag.com/image/image_gallery?uuid=202eb7e1-9148-4c65-b29a-54c4302ab65e&groupId=10157&t=1326987658425" style="width: 22px; height: 22px; vertical-align: text-bottom;"> <span id="yui_patched_v3_11_0_1_1492771711676_1198">Download PDF</span> </font>](http://techcommunity.softwareag.com/ecosystem/download/techniques/2018-issue4/SAG_NaturalONE_TECHniques_Oct18_WEB.pdf) |
|-|-|-|
## 2050 and beyond
Natural 9 is a major release with very important implications for all our customers. Not only does this new release provide many valuable new features, but it is a major shift in the paradigm of how applications are developed and maintained with Natural now and into the future. Starting with Natural 9, NaturalONE will be the default development environment of Natural.
**What will change?**
In Natural 9 for the mainframe, UNIX® and Linux®, the editors are disabled by default. This includes the program editor, the data editor and the map editor. Instead of the editors, customers will receive NaturalONE and the Natural Development Server free of charge. Natural for Windows® is not affected by this change.
**What does this mean for you in practice?**
If you order Natural 9 from the Software AG logistics service center, you will receive a new Natural license key. With this license key, you can activate free versions of NaturalONE and the Natural Development Server that have the same functionality as the commercial versions.
After installation, you can use the “Tech” command to check the status of the Natural editor setting as shown in Figure 1. The editor setting in the Tech command should show “disabled." If you then invoke an editor, you will receive the Natural System Message NAT7743 “Editor is disabled in Natural. Use NaturalONE instead.” This message will also appear in other situations when you invoke an editor such as the List command.

**Fig 1:** Use the Tech command to verify Natural Editors are disabled.
Please note that in programs that use editor APIs, the “Edt” and “Adhoc” commands are not disabled. Add-on products such as Natural Construct and Natural ISPF will also continue to work as before.
If you attempt to use the free NaturalONE version with a Natural version where the editor is still enabled, you will receive the Natural System Message NAT7744 “The free NaturalONE can only be used if the Editor is disabled.” If you purchased NaturalONE, you will not receive the Natural System Message NAT7744.
## Developing with Natural—the new way
**Getting started**
Up to now, you likely managed your Natural source code in libraries and kept the source code in the Fuser file. With Natural 9 and NaturalONE, you need to stop using the Fuser for managing the source code and begin using a repository integrated with NaturalONE instead. We recommend that you use an open source repository like Git or Subversion®, but repositories from other vendors like Microsoft® Team Foundation Server may also be used. You can still use products such as Predict Application Control (PAC) or Treehouse N2O to control the stages from test environments to production.
After you have selected a compatible repository, your new and improved development journey can begin. Simply add your applications (libraries including step libraries) step-by-step into a NaturalONE project in your NaturalONE workspace and commit them to the repository. From this moment on, the single “source of truth” is the repository not the Fuser file. All existing Natural source code is now under the control of the repository, including all Natural object types, like programs, maps and data areas. All modifications to the existing Natural source code and new Natural objects created with NaturalONE editors will now be committed to the repository. You no longer need to keep source code in the Fuser file.
**Repository-based team development**
Improved team collaboration is among the many advantages you gain working with NaturalONE. A developer can simply “check out” a copy of the latest version of Natural source code from the repository. If several developers work on the same source code, either by accident or intentionally, all changes to the code can be kept when the code is merged later upon “check in." A developer can also create a new branch where he can work independently from other developers and then consolidate it with the master branch as soon his work on the feature is finished.
Maintaining a history of code changes is another advantage of working with a repository in comparison to the Fuser approach. All versions of the source code are kept in the repository and it is possible to re-activate older source code and compare different versions to see what really has changed.
Repository-based team development has many advantages over the traditional development approach as shown in Figure 2.

**Fig 2:** With NaturalONE you reap the benefits of repository-based team development.
**Deployment**
When one or more features are ready for integration test or production, you can deploy the latest version from the repository to a target Natural environment on mainframe, UNIX, Linux or Windows. Deployment here means that all Natural source code changes since the previous deployment will be automatically identified and copied to a library or several libraries on the target environment. At this moment, Fuser comes back into play. The Natural source code is temporarily saved in the Fuser file and cataloged. After the catalog process is complete, the source code is deleted from Fuser. This avoids possible inconsistencies between the source code in Fuser with the source code in the repository.
You may continue to use existing tools like Predict Application Control, N2O or utilities like Sysmain and Sysobjh to copy Natural object code to integration test or production.
We strongly recommend that you optimize the processes described here by following a DevOps approach. This will enable you to make implemented features available much faster to end users without sacrificing quality. The DevOps approach covers all phases from development to operations and continuous integration to deployment.
## Bring DevOps to Natural and manage the generational change
By moving editing work to NaturalONE, you will now benefit from using a modern, Eclipse™-based development environment for Natural that supports agile development and DevOps. When you embrace the DevOps approach, you can develop new applications and modernize existing Natural applications faster, to better meet changing business requirements and reduce application development costs. To learn more about the benefits of bringing DevOps to Natural, read [www.b2b.com/naturalone-bringing-devops-to-natural](http://www.b2b.com/naturalone-bringing-devops-to-natural)
Providing a state-of-the-art development platform like NaturalONE also makes it easier for you to recruit new talent and manage the generational change of the workforce. Several customers of Adabas & Natural have figured out how to manage the generational change of talent within their team using NaturalONE. They attract and retain young developers, as well as engage all generations to work together, share skills and knowledge, and innovate. Here is their recipe for successfully managing the generational change of the workforce: [https://resources.softwareag.com/adabas-natural/managing-generational-change](https://resources.softwareag.com/adabas-natural/managing-generational-change)
## Success story
A European insurance company attracts and retains young developers—medium age 34 with half in their 20s—through a robust training program and use of NaturalONE. A 17-year-old intern learned Natural in just five days. Learn how this insurance company attracts and retains young developers in this video: [https://resources.softwareag.com/youtube-product-videos-for-uberflip/see-how-insurance-company-alte-oldenburger-masters-generational-change](https://resources.softwareag.com/youtube-product-videos-for-uberflip/see-how-insurance-company-alte-oldenburger-masters-generational-change)
## Summary
Many of our customers have achieved immediate benefits using a repository-based approach to development with NaturalONE. We believe that you will greatly benefit as well. Moving to NaturalONE allows you to leverage numerous free open source plug-ins available for Eclipse and is very attractive to the next generation of developers, making it easier to hire new staff and motivate young developers to work with Natural.
Java® programmers will find NaturalONE appealing to use because of its familiar Graphical User Interface (GUI) and wizards. NaturalONE also gives you the ability to harmonize application development across your entire organization, whether you are developing applications in Natural, COBOL or Java to run natively on the mainframe or LUW.
Moving away from the green screen development environment by making NaturalONE the default Natural development environment is an important cornerstone to implementing the Adabas & Natural 2050+ strategy. By reusing your investments into the unique Adabas & Natural application logic and high-value data in an agile, digital-ready architecture, you can make the transformation to a digital-software-driven business quickly and with minimal disruption. | tpetrov9 |
156,928 | Replacing MouseEvents with PointerEvents | How can the necessity of handling different types of inputs (e.g., mouse, touch, and pen) by duplicat... | 0 | 2019-08-16T05:38:00 | https://stephencharlesweiss.com//2019-08-14/input-events-with-pointers/ | eventhandler, mouseevent, pointerevents, ui | ---
title: Replacing MouseEvents with PointerEvents
published: true
tags: eventhandler, mouseevent, pointerevents, ui
canonical_url: https://stephencharlesweiss.com//2019-08-14/input-events-with-pointers/
---
How can we handle different types of inputs (e.g., mouse, touch, and pen) without duplicating event handler logic? The Pointer Events API makes a compelling case that it's a solid solution.
_Figure 1: A pointer is a hardware-agnostic representation of input devices that can target a specific coordinate (or set of coordinates) on a screen._ <sup>1</sup>
> Pointer Events provide all the usual properties present in Mouse Events (client coordinates, target element, button states, etc.) in addition to new properties for other forms of input: pressure, contact geometry, tilt, etc. <sup>2</sup>
While Pointer Events are not yet supported by all browsers, there is wide adoption among major modern browsers (Safari being the major exception, though support is coming in v13).<sup>3</sup>
_Browser support table for Pointer Events, from caniuse.com._
This is a boon, as Pointer Events make dealing with different input events much simpler.
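For example, a single `pointerdown` listener can replace separate mouse and touch registrations. The sketch below (the element id and the fallback event list are my assumptions, not from the article) normalizes the fields a handler typically needs:

```javascript
// Normalize the fields a drag/draw handler typically needs.
// Works for mouse, touch, and pen, since Pointer Events expose them all.
function getPointerInfo(event) {
  return {
    x: event.clientX,
    y: event.clientY,
    type: event.pointerType || 'mouse', // 'mouse' | 'touch' | 'pen'
    pressure: event.pressure ?? 0,      // MouseEvent has no pressure field
  };
}

// One registration instead of parallel mousedown/touchstart handlers.
// Guarded so this sketch also loads outside a browser.
if (typeof window !== 'undefined') {
  const target = document.querySelector('#sketch-area'); // hypothetical element
  const names = window.PointerEvent ? ['pointerdown'] : ['mousedown', 'touchstart'];
  names.forEach((name) =>
    target.addEventListener(name, (e) => console.log(getPointerInfo(e)))
  );
}
```

Because the handler only reads the normalized object, the same logic serves whichever input device the user happens to have.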
# Resources:
- <sup>1</sup> [Pointer Events | W3](https://www.w3.org/TR/pointerevents/)
- <sup>2</sup> [_ibid._](https://www.w3.org/TR/pointerevents/)
- <sup>3</sup> [Pointer Events | Can I use…](https://caniuse.com/#search=pointerdown) | stephencweiss |
156,953 | Attempting: AWS Certified Solutions Architect Exam, any Tips? | Well Guyz, in 2 weeks time, i will be attempting the AWS Certified Solutions Architect exam, any tips... | 0 | 2019-08-14T19:00:26 | https://dev.to/th3n00bc0d3r/attempting-aws-certified-solutions-architect-exam-any-tips-3kph | help | Well Guyz, in 2 weeks time, i will be attempting the AWS Certified Solutions Architect exam, any tips for me?
PS: I am still in the process of learning myself; it started at my birth and will only stop when I die.
| th3n00bc0d3r |
157,053 | What's 'this'? | Over the past few weeks, I've been learning tons about JavaScript and React. Between the two of them,... | 0 | 2019-08-15T15:30:06 | https://dev.to/aliyalewis/what-s-this-3ban | beginners, javascript | Over the past few weeks, I've been learning tons about JavaScript and React. Between the two of them, I find myself confused and also very clear about what the keyword **this** is. In React, I feel pretty good about using it and understanding what it's doing but in JavaScript? Ehhh, not so much. To try and make sense of it, I decided to do some research on **this** in JS.

The first place I looked was the MDN documentation which has this to say:
*"In most cases, the value of this is determined by how a function is called. It can't be set by assignment during execution, and it may be different each time a function is called."*
That was a little vague so I kept looking. This is what W3Schools says:

Hmm...okay, that makes sense, but I'm more of an auditory/visual learner, so my next step was to watch videos on YouTube. I came across a couple of videos that I found very helpful, and below is a list of the things I found most informative.
## Implicit Binding
Implicit binding tells the function to look to the left of the dot to the object it was called on and have **this** reference that object. Here's an example:
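The original example was embedded as an image; here is a hedged reconstruction based on the description that follows (the object name `Jak` comes from the text, the method body is assumed):

```javascript
// Implicit binding: `this` refers to the object left of the dot.
const Jak = {
  name: 'Jak',
  speak: function () {
    return 'Hello, my name is ' + this.name;
  },
};

console.log(Jak.speak()); // Hello, my name is Jak
```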

In the above example, we look to the left of the speak function to identify the object that **this** will reference. We see that the object Jak has a name property that **this** is attaching itself to, it's implicitly bound because the function that it's created in is inside of the object it's attaching to. Pretty simple right? Great, so now let's dig a little deeper.
## Explicit Binding
There are three ways to explicitly bind **this** in JS: with **call()**, **apply()**, or **bind()**. **call()** tells the function which object to attach to and can accept additional arguments, passed individually. **apply()** is similar to **call()** in that it takes an object to attach to, but instead of passing the arguments one by one, you pass them in as a single array. Last but not least is **bind()**. **bind()** is also similar to **call()**, except that it returns a new function instead of invoking the original function. Below are some examples of each.
### call()
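The original screenshot is missing, so here is a hedged sketch (names are illustrative) of how `call()` supplies `this` and extra arguments individually:

```javascript
// A standalone function with no object of its own; call() supplies `this`.
function speak(greeting) {
  return greeting + ', my name is ' + this.name;
}

const daxter = { name: 'Daxter' };

console.log(speak.call(daxter, 'Hi')); // Hi, my name is Daxter
```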

### apply()
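Again, the original example was an image; this sketch (illustrative names) shows how `apply()` takes the arguments as one array:

```javascript
// apply() works like call(), but the extra arguments go in as a single array.
function introduce(greeting, hobby) {
  return greeting + ', I am ' + this.name + ' and I like ' + hobby;
}

const daxter = { name: 'Daxter' };

console.log(introduce.apply(daxter, ['Hi', 'racing'])); // Hi, I am Daxter and I like racing
```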

### bind()
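And a hedged sketch for `bind()` (illustrative names), showing that it returns a new function rather than invoking the original:

```javascript
// bind() locks in `this` (and, optionally, some arguments) for later use.
function speak(greeting) {
  return greeting + ', my name is ' + this.name;
}

const daxter = { name: 'Daxter' };
const daxterSpeaks = speak.bind(daxter, 'Hey'); // nothing runs yet

console.log(daxterSpeaks()); // Hey, my name is Daxter
```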

I hope that this helps someone out as much as it's helped me! For more in-depth explanations, I highly recommend watching the YouTube videos I've linked below. Do you have some good ways to explain what *this* is? Let me know in the comments!
Sources:
* https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/this
* https://www.w3schools.com/js/js_this.asp
* https://codeburst.io/all-about-this-and-new-keywords-in-javascript-38039f71780c
* https://www.youtube.com/watch?v=zE9iro4r918
* https://www.youtube.com/watch?v=gvicrj31JOM
* https://www.youtube.com/watch?v=NV9sHLX-jZU
| aliyalewis |
157,063 | The trouble with NDAs | Let’s face it: Non–Disclosure Agreements (NDAs) are a pain in the backside for freelancers. Not only... | 1,866 | 2019-08-15T05:51:14 | https://worknotes.co.uk/contracts/the-trouble-with-ndas/ | freelancing, career, webdev, design | Let’s face it: Non–Disclosure Agreements (NDAs) are a pain in the backside for freelancers. Not only are NDAs undecipherable, they prevent you from displaying your work and limit your ability to take on related projects.
## Not just a piece of paper
A while ago, a freelance friend was discussing an NDA they’d been made to sign. It was described to them as a ‘formality’ and, when they’d asked about displaying their work, they were told that ‘would be ok’.
The NDA included a clause stating the opposite. If displaying the work is fine, then why not take that clause out?
NDAs are not a ‘formality’. As with [any contract](https://worknotes.co.uk/contracts/an-email-is-not-a-contract), you have to assume that the person issuing it is serious about the implications of breaking it.
Describing an NDA as a ‘formality’ suggests to me that the issuer doesn’t understand what they’re asking you to sign. That worries me and it should worry you, too.
## The ‘secret sauce’
In the tech industry, it’s common for a potential client to think that their _idea_ is so life-changing, they need an NDA just to talk about it. I’ve experienced this on a few occasions.
A few days before I sat down to write this, I received a request for proposal that read:
>I have two ventures at the moment, both cannot be disclosed at the moment due to legal reasons. However, once an NDA is signed, all details can be presented.
## One bitten, twice shy
My first experience of this kind of upfront NDA was about five years ago. A client I’d been working for was partnering up with another firm to add some functionality to the site.
My client wouldn’t tell me what the functionality was, and neither would the firm unless I signed an NDA. I was uncomfortable about it and made that clear, but ultimately caved as I was new to the game.
An initial Skype meeting with the firm revealed that the functionality was…a learning management system. What was discussed could have been talked about without an NDA.
By the end of the call I’d already decided I no longer wanted to work on the project. I swiftly moved on, but it will come as no surprise that the client took [three months to settle the final invoice](https://worknotes.co.uk/contracts/an-email-is-not-a-contract).
That was the last NDA I signed.
## When NDAs are OK
I don’t have a problem with the concept of an NDA. If I was hiring someone to do some work with me, I might even ask them to sign one.
The trouble with most NDAs is this: they’re full of complicated legal jargon, have unreasonable terms and are far too general.
For me to sign an NDA, it has to:
* Protect my reputation should something go wrong
* Be beneficial to both sides
* Be limited to a reasonable period (spoiler alert: five years is too long)
* Be specific about what it covers
* Be written in plain English
* Allow me to talk about the project after it goes live
If a client’s NDA doesn’t meet the criteria, I won’t sign it.
## Why freelancers shouldn’t be asked to sign generic NDAs
The main issue freelancers face with NDAs is that it prevents them from talking about their work.
If the freelancer is paid handsomely for the privilege of not discussing their work, then fine, but often they’re not.
If you value and respect the freelancers you employ, why make it difficult for them to get work? Empty portfolios are a big problem for freelancers who have worked for clients that use punitive NDAs.
It’s not just about money, either. Freelancers choose their career path because they enjoy their work and are proud of what they produce. If they produce something special, perhaps even award-winning, it’s a shame if they can’t talk about their achievements or use it as a case study.
And there's another issue: NDAs limit a freelancer's ability to work on similar projects with other clients. It's easy to breach the terms of the NDA without realising. This is an additional liability which hurts freelancers.
I would urge clients who use NDAs to consider what they’re asking freelancers to sign. Don’t just download a template and blindly enforce it.
If you need inspiration, Andy Clarke has produced a [brilliant plain-English NDA](https://stuffandnonsense.co.uk/projects/three-wise-monkeys/) that can be tweaked for your needs. | websmyth |
157,154 | Upgrading the Blogcast billing system: A SCA Tale | In case you aren't aware, new PSD2 regulation by the European Union will require Strong Customer... | 0 | 2019-09-18T06:53:04 | https://miguelpiedrafita.com/blogcast-sca/ | showdev, blogcast, sca, laravel | ---
title: Upgrading the Blogcast billing system: A SCA Tale
published: true
tags: showdev,Blogcast,SCA,Laravel
canonical_url: https://miguelpiedrafita.com/blogcast-sca/
cover_image: https://miguelpiedrafita.com/content/images/2019/08/Artboard-1.jpg
---
_In case you aren't aware, [new PSD2 regulation by the European Union will require Strong Customer Authentication (SCA) for many online payments beginning September 14](https://stripe.com/payments/strong-customer-authentication). This is the tale of upgrading a [Laravel](https://laravel.com) application to support SCA using the new [Laravel Cashier](https://github.com/laravel/cashier) release (v10)._
Blogcast doesn't follow the traditional _Software as a Service_ (SaaS) billing model. Instead of billing a flat fee each month, it bills a flat fee every time a user converts an article. Before the upgrade, Blogcast asked for a payment method on registration, then billed it from the server each time a new article was created.
Unfortunately, this system is no longer possible with SCA, as it may require additional authentication from the cardholder at any time, which isn't always possible when creating charges from the server. This update also blocks single charges (charges not related to a subscription) from using stored payment methods, making server charges impossible, again. _Keep in mind these limitations are due to the way Stripe implemented the legislation into their platform and not explicitly to the legislation._
With my current implementation deemed impossible, I had to brainstorm a new one. I didn't want to change my usage-based billing model, but wanted to use some kind of subscription-like implementation as it's what both Cashier and Stripe are optimized for. Fortunately, Stripe offers _[metered usage subscriptions](https://stripe.com/docs/billing/subscriptions/metered-billing)_, that is, subscriptions that allow you to bill per-unit. I created a new subscription and set the billing period to days instead of months. This way, users would be billed at the end of the day instead of monthly, making this system similar to the old one.
The only thing left now was updating the backend code to record usage instead of billing users directly. Fortunately, I could reuse some code I already had from a different application, so it was a matter of minutes instead of hours. Here's the code to increment the usage record of a specific user:
```php
\Stripe\UsageRecord::create([
'quantity' => 2, // number of units to increment
'timestamp' => now()->timestamp, // when was this charge made?
'action' => 'increment', // if you skip this it'll override the units instead of adding them
    'subscription_item' => $user->subscription('recurring')->asStripeSubscription()->items->data[0]['id'], // id of the subscription *item* (not the subscription itself), retrieved via Stripe's API
]);
```
Now, Stripe will handle off-session billing by themselves, emailing the users if they need extra authentication. The other advantage to the old system is that there's only one charge per day instead of one per article, making it easier for the users to recognize the charges when they show up in their bank account.
As easy as it may seem, there were a lot of changes involved in making everything work, from updating the frontend to handle the new [Stripe Elements](https://stripe.com/payments/elements) methods to updating all the automated tests.
<figcaption>Final pull request that implemented these changes</figcaption>
There's still some time before the legislation takes effect on September 14, but if your application processes payments online, you should definitely start looking into it. It's already hard to convince people to pay for your application, and the last thing anyone wants is a bad payment flow getting in the way. | m1guelpf
1,292,019 | WhatsApp It! – My first useful app | Who is this apps for? Anyone who always in need to contact other people through WhatsApp... | 0 | 2023-05-25T07:26:52 | https://blog.afrieirham.com/whatsapp-it-my-first-useful-app | ---
title: WhatsApp It! – My first useful app
published: true
date: 2018-10-31 09:11:12 UTC
tags:
canonical_url: https://blog.afrieirham.com/whatsapp-it-my-first-useful-app
---
# Who is this app for?
Anyone who is always in need of contacting other people through WhatsApp but has no interest in saving their contact details. All you need to do is type in the number and it will direct you to WhatsApp.
It is essentially a click to chat feature. Try it [here](https://whatsappit.afrieirham.com).
# The inspiration
When I was recently in charge of taking t-shirt orders for my college's event, the Software Engineering Day, I found that it was unnecessary to save the customers' phone numbers just to tell them about the payment.
I am a person who doesn't like to keep my contact list full of unknown people, or people I have contacted only once. Thus, this app was made. :)
# Click to Chat feature
I first found out about this feature when I was trying to shop online on Instagram. Most sellers provide a link in their bio for us to click, which then opens up WhatsApp with their number. Sometimes they even pre-fill it with some generic text, which I find really handy.
# How it works
The logic behind it is pretty simple.
First, I save the input from the user in a variable. Then I append it to the Click to Chat [Custom URL from WhatsApp](https://faq.whatsapp.com/general/chats/how-to-use-click-to-chat) and save the result in another variable. When the user clicks the WhatsApp It! button, it opens that link in a new tab.
For example, if a user types in 012324xxxx, the number is appended to the link, creating `https://wa.me/012324xxxx`. Finally, the user is directed to that link when they click on WhatsApp It!
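A minimal sketch of that logic (the function name and the digit-stripping step are my additions; the real app may differ):

```javascript
// Build a WhatsApp click-to-chat URL from a raw phone number.
function whatsAppLink(number, text) {
  const digits = number.replace(/\D/g, ''); // drop spaces, dashes, '+', etc.
  let url = 'https://wa.me/' + digits;
  if (text) {
    url += '?text=' + encodeURIComponent(text); // optional pre-filled message
  }
  return url;
}

console.log(whatsAppLink('012-324 5678')); // https://wa.me/0123245678
// The button click then does something like:
// window.open(whatsAppLink(inputValue), '_blank');
```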
# What are the tools that I use?
I use Bootstrap for the front-end and JavaScript for the logic. I also host it with GitHub Pages.
# New features that I have in mind
I am currently trying to add more country codes so that you can use it to WhatsApp anyone, as long as they have a WhatsApp account.
I have read some reviews of a similar app on the Google Play Store that gave me some ideas for other features that might be helpful. My friends and family also gave me some feedback, which helped a lot.
Thank you for reading, appreciate it! | afrieirham | |
157,332 | Learn about Quantum, Knative, Blockchain and NLP – watch July 2019 online meetup recordings | IBM Developer SF team hosts weekly online meetups on various topics. Online event... | 0 | 2019-08-21T17:16:59 | https://maxkatz.org/2019/08/15/learn-about-quantum-knative-blockchain-and-nlp-watch-july-2019-online-meetup-recordings/ | quantum, knative, onlinemeetup, video | ---
title: Learn about Quantum, Knative, Blockchain and NLP – watch July 2019 online meetup recordings
published: true
tags: Quantum, Knative, online meetup, video
canonical_url: https://maxkatz.org/2019/08/15/learn-about-quantum-knative-blockchain-and-nlp-watch-july-2019-online-meetup-recordings/
---
[IBM Developer SF team](https://www.meetup.com/IBM-Developer-SF-Bay-Area-Meetup/) hosts weekly online meetups on various topics. [Online events are one of the best ways to scale](https://dev.to/ibmdeveloper/using-online-meetups-to-scale-your-developer-relations-program-17li) your Developer Relations program and reach developers anywhere, anytime and for a long time after the event.
The following are the online meetups we hosted in July 2019, with links to watch the recordings. I also encourage you to join our [meetup](https://www.meetup.com/IBM-Developer-SF-Bay-Area-Meetup/) so you will always know when our online meetups are scheduled. Our meetups are hosted by the wonderful [Lisa Jung](https://twitter.com/LisaHJung) 👋.
**⭐ A moderator using NLP, VR and Serverless (July 10, 2019)**
In this online meetup with [Upkar Lidder](https://twitter.com/lidderupk) developers learned:
- Natural Language Understanding: how to detect different sentiments and emotions in text sent in Slack channels
- Visual Recognition: how to detect offensive images sent in Slack channels
- Apache OpenWhisk: how to glue together different pieces and create a complete application
[Watch the recording](https://www.crowdcast.io/e/a-moderator-using-nlp-vr) 📺
**⭐ Knative – bringing serverless to your Kubernetes cluster (July 17, 2019)**
In this online meetup with [Nima Kaviani](https://twitter.com/nimak) developers learned about Knative.
Kubernetes by large has won the platforms’ war. However, for an application developer, the details of getting an application up and running on Kubernetes is beyond what they need to care about. Efforts of implementing Serverless on top of Kubernetes aim at making the user experience simpler by removing the unnecessary complexities of Kubernetes.
In this talk we discussed Knative as one of the major efforts in implementing Serverless on top of Kubernetes. You will learn the user experience, and deployment experience through real life examples.
[Watch the recording](https://www.crowdcast.io/e/knative---bringing) 📺
**⭐ The Intersection of Quantum Computing, Artificial Intelligence and Cryptography (July 17, 2019)**
Many experts refer to Quantum Computing as the next growth field for processors and infrastructure. Quantum has the potential to revolutionize the way data is encrypted, decrypted, processed and queried, and applies a new processing paradigm to extant problem spaces.
But how much of this is buzz words and hype, and how much is implementable? Watch this online meetup with [Dave Nugent](https://twitter.com/drnugent) to learn and cover these topics:
- The History of AI Research
- Quantum Theory
- Frameworks & Projects
- Building a Quantum Program
- AI Applications
- Future Predictions
[Watch the recording](https://www.crowdcast.io/e/the-intersection-of/) 📺
**⭐ Getting started with IBM Blockchain Platform 2.0 in the IBM Cloud (July 31, 2019)**
In this online meetup with [Lennart Frantzell](https://www.linkedin.com/in/lennartfrantzell/), developers learned about:
- IBM Blockchain Platform in the IBM Cloud
- IBM Blockchain Platform Extension for VS Code
- How to develop, package, and test client applications and Smart Contracts on our laptops, before pushing the code up to the IBM Blockchain Platform.
[Watch the recording](https://www.crowdcast.io/e/0w6ccq4v) 📺 | maxkatz |
157,453 | The Business Benefits of Offshore Software Development | Offshoring allows companies to access talented engineers at a lower cost than at home. Great! But wha... | 0 | 2019-08-16T05:34:00 | https://dev.to/meghamaheshwar6/the-business-benefits-of-offshore-software-development-4fe9 | <!-- wp:paragraph -->
<p>Offshoring allows companies to access talented engineers at a lower cost than at home. Great! But what are the actual <strong>business benefits</strong> of doing that? Financially. Operationally. What do companies gain by investing in an offshore team?</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>What is offshoring, really?</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>When discussing offshoring, most people dive too quickly into specifics: <em>you can build a bigger software team! You can save money! It's safer than outsourcing!</em></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>All true, but we should slow down. <a rel="noreferrer noopener" href="https://thescalers.com/guide-to-offshore-development-services-india/" target="_blank">What is offshoring?</a> Fundamentally, it's a way to increase a business's output, and improve its technical expertise, without undue stress, hassle, or cost. </p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":4040} -->
<figure class="wp-block-image"><img src="https://thescalers.com/wp-content/uploads/2019/06/software-development-1024x349.png" alt="Image stating that other industries, not just software development, can use offshoring. " class="wp-image-4040"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>At its most basic, offshoring is simply expanding your business with a new team, with new (or complementary) skills. Aside from being geographically distant (<a href="https://thescalers.com/strategies-to-make-remote-teams-work/">which isn't a major issue in 2019</a>) it's no different from hiring locally: advertise, recruit, work, reward - it's the same process.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>So if it's the same process, then why are companies using offshoring? What are the <strong>real business benefits of offshoring?</strong> </p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>1. Higher profit margin</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>A lot of companies and articles talk about "lower costs" when offshoring. We want to be a bit more helpful and specific.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Say you establish a 5-person development team in Bangalore, along with your 5-person team at home in New York. Now let's look at two comparisons: first, a pure salary comparison between India and the USA, and then the total cost of doing business offshore versus in-house.</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><li><strong>Salaries only -</strong> If you hire the same team in Bangalore (same ranks, experience, and quality) then the salary difference will be substantial. Comfortably 3-4 times cheaper - but this comparison is too simple to be useful. </li><li><strong>Total cost</strong> - For our average client, the total cost of doing business here (including premises, administrative staff, legal work, payroll, developer salaries, insurance - the works) typically comes in at <strong>50% the cost of your at-home developers' <em>salaries</em></strong>.</li></ul>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p>There's no industry where skilled Indian workers cost <em>more</em> than in Western Europe or the USA. The differences in cost of living are just too substantial.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>It's everything else - offices, admin, and these things - which are most important. <a href="https://thescalers.com/verify-your-offshore-development-partner/">Handled correctly by your dedicated offshore partner</a>, offshoring offers <strong>higher margins</strong> and, therefore, <strong>higher profits</strong>.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Why are these salaries so different?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We need to remember there are two sides to the story. Pound for pound, Indian workers are always cheaper than their British or American equivalents. But that's been the case for decades; longer than decades.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>So why do we suddenly want to cash in on Indian labour?</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li><strong>The quality of Indian workers has improved</strong>. Specifically in IT, engineers across India are getting the same education, training, and practical experience as Westerners. They're incredibly capable, and that's attractive to businesses.</li><li><strong>Lower cost of living = lower salaries, period</strong>. It's basic economics. Until India's economy grows <em>significantly</em> compared to the west, the Rupee will be worth less than the Dollar. Therefore, salaries will remain smaller in India - regardless of the quality of service.</li><li><strong>Engineering scarcity in the west</strong> - This is something we can't afford to forget. In isolation, quality Indian engineers cost more than they used to. However, in Europe, the UK, and the US, engineers are <em>very</em> hard to find and <em>extremely</em> expensive. Because of scarcity. Part of why Indian labour is so "cheap" is actually because local labour is so expensive.</li></ol>
<!-- /wp:list -->
<!-- wp:image {"id":4042} -->
<figure class="wp-block-image"><img src="https://thescalers.com/wp-content/uploads/2019/06/highest-level-of-formal-education-1024x605.png" alt="Graph of what level of formal education engineers have had in various countries. " class="wp-image-4042"/><figcaption><em>A greater proportion Indian engineers have Bachelor's degrees than the other nations, and only Germany has more Masters-qualified engineers. </em></figcaption></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>So perhaps we should stop dismissing "cheap" Indian rates with so much scepticism: it's all relative to the obscene wages that engineers command at home. Today, the main reason western businesses invest in offshoring isn't to be cheap - it's to hire <strong>equally-skilled talent</strong> at<strong>affordable costs</strong>.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>2. Access to niche or rare talent</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>We've looked at how expensive engineers are to hire in western countries and cities. There's no hiding from it: you could be facing $100+ an hour for a capable senior developer. And why are they so expensive? Because their skills are incredibly in demand.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>In the USA alone, <a href="https://www.arcgis.com/apps/MapJournal/index.html?appid=b1c59eaadfd945a68a59724a59dbf7b1">250,000 software engineering jobs are currently unfilled</a>, and increasing. There simply aren't enough new developers to fill the gaps.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Colleges are pushing IT and engineering hard, but there's an inevitable delay before those graduates trickle into the industry. To make things harder, the number and variety of tech companies are also increasing. Good candidates can pick and choose their employers.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Hence the holy grail of incentives: crazy wages.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":4043} -->
<figure class="wp-block-image"><img src="https://thescalers.com/wp-content/uploads/2019/06/software-enginerring-jobs-remain-unfilled-1024x516.png" alt="250,000 software engineering jobs remain unfilled in the USA" class="wp-image-4043"/></figure>
<!-- /wp:image -->
<!-- wp:heading {"level":3} -->
<h3>A problem of availability</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Today, the main reason companies are looking to build offshore teams and engage Indian developers isn’t cost - it’s availability.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>India produces 1.5 million engineering graduates every year, a good proportion of which are in software. Thanks to the high training pedigree (including niche specialisations) and extensive English language training, these engineers roll out of university ready to work.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":4044} -->
<figure class="wp-block-image"><img src="https://thescalers.com/wp-content/uploads/2019/06/average-increase-new-engineers-1024x770.png" alt="graph of the annualised increase in new engineers for several countries. India is biggest. " class="wp-image-4044"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p><strong>English is the language of the industry. Indian engineers can communicate with general fluency, but can also be extremely clear and concise in a highly technical environment.</strong></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Even if you ignore the lower rates, there are some niche skill sets that are just plain-old hard to find. It wouldn't be unreasonable to work with an abroad contractor to get those skills. </p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Offshoring can sometimes be seen as an extension of that idea: access to highly-skilled developers you can actually <strong>find</strong> (and afford!) who happen to live elsewhere.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Combined with the lower rates, the argument is a no-brainer. <a href="https://thescalers.com/recruit-world-class-software-development-engineers/">Recruiting world-class engineers</a> requires an experienced hand, but in India, there's an extremely accessible pool of talent to get you started. </p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>3. The opportunity to scale up sustainably</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Scaling up a company is never as easy as it sounds. Taking on new employees (especially skilled, i.e. <em>expensive</em> ones) is a huge investment and presents a significant financial risk.</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><li>If those employees decide to leave early (very common in today's job market), you could be overburdened with work. The quality drops, clients go elsewhere, and the whole business is in trouble. Worst case, but it does happen!</li><li>What if everyone stays, but you can't drum up as many new clients as you expected? You're left with spare resources on the payroll, twiddling thumbs and burning up cash. Again, a nasty situation - and an unsustainable one.</li></ul>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p>With an offshore team, things are a little different.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>Simplified scaling</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>First, the cost of recruiting and paying staff is lower. Thanks to the reduced payroll, a business could shoulder down periods more easily with an offshore team. At the same time, expanding your offshore team can be done <em>much</em> faster than in-house. We have brought multiple groups of engineers to our clients' teams in as little as 4 weeks. Vetted, tested, and ready for work.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Second, since the offshore team will have ample, inexpensive office space, there’s no concern over moving to larger premises.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3>A game of margins</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>The more you grow your offshore team to provide increased capacity, the higher your profit margin.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Say your current staff wages are £250,000 a year for 5 developers. Revenue is £1m and business is going great, but you want to grow.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>You manage to negotiate another £1m in jobs for the coming year. You have 3 options:</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><li><strong>Hire 5 more local developers</strong> - For another £250k per year, you can simply double the size of your existing team. Assuming zero extra costs in doing so (which, of course, is impossible) your total profit is now £1.5m. In reality, it's some amount <em>less than</em>, but close to, £1.5m.</li><li><strong>Hire 5 new developers in an offshore team</strong> - The total cost of business with this team is, worst case, 50% of the at home salaries, remember? Let's say £125k, which means the <em>other</em> £125k goes straight into your profits.</li><li><strong>Dismiss the in-house team for a 10-person offshore team</strong> - £250k for your offshore team, but still £2m in revenue, so a whopping £1.75m in profit.</li></ol>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p>So while option 1 amounts to some amount <em>less</em> than £1.5m in profit, option 3 offers a guaranteed £1.75m profit. </p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":4045} -->
<figure class="wp-block-image"><img src="https://thescalers.com/wp-content/uploads/2019/06/offshore-team-vs-in-house-team-1024x349.png" alt="your offshore team will cost, at worst, 50% of your in-house team" class="wp-image-4045"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>Now we don't want companies to ditch their in-house teams. It's just a matter of demonstrating how significantly offshoring can impact the bottom line. In brief, it impacts it <strong>a lot</strong>.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>4. The administrative burden is not increased</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>On its own, this is not a good enough reason to build an offshore team. "Things won't get any worse" is not an incentive - but stick with us.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Given the first 3 points on our list, this is an important topic. Expanding any team is usually a big logistical effort. With offshoring, the administration, payroll, recruitment (except final decisions), accommodation, insurance, and so much more are <strong>all taken care of by us</strong>.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>As an offshoring partner, our role is to make this as effective as possible for you, while also limiting stress and inconvenience wherever possible. Building a similar team at home would be much more resource-intensive.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>At the end of the day, offshoring with the help of an experienced, versatile partner makes all the difference. It allows companies to leverage <em>all</em> the benefits of offshoring without compromise.</p>
<!-- /wp:paragraph -->
<!-- wp:heading -->
<h2>Over to you</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>As we said, <em>how</em> offshoring works is no big secret: you go to India and build a team of developers there. It’s the benefits, the real, <em>tangible business benefits</em> of offshoring that we rarely cover. To recap:</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":4049} -->
<figure class="wp-block-image"><img src="https://thescalers.com/wp-content/uploads/2019/06/tangible-business-benefits-of-offshoring-1-1024x604.png" alt="tangible business benefits of offshoring" class="wp-image-4049"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>To learn more about offshoring and whether your business can reap these benefits, simply get in touch. Our team is always happy to field questions, so feel free to fill in the form and let us know what's on your mind. </p>
<!-- /wp:paragraph -->
Originally Published at : https://thescalers.com/the-business-benefits-of-offshore-software-development/ | meghamaheshwar6 | |
157,522 | Recovery and USB Boot | Now we are going to clean install MacOS, therefore we need the image from Apple and then we will crea... | 0 | 2019-08-16T08:39:51 | https://dev.to/th3n00bc0d3r/recovery-and-usb-boot-2ck | beginners, tutorial | Now we are going to clean install MacOS, therefore we need the image from Apple and then we will create a USB to boot from it.
Open App Store and Search for Mojave.

Click on Get and it should start downloading.

Mine took me to the update section and started downloading the image.

Once that is done, connect a USB to your Mac and Open Disk Utility.

Now select the USB, click Erase, and name it USB.



Next, open Terminal and paste the following command:
```
sudo /Applications/Install\ macOS\ Mojave.app/Contents/Resources/createinstallmedia --volume /Volumes/USB --nointeraction && say Mojave Drive Created
```


Once complete, you have the USB ready.
| th3n00bc0d3r |
157,582 | Weekly Links #1 | I'm the "Link guy" and I like collecting useful stuff https://kgoralski.gitbook.io/wiki/ Decisions... | 2,087 | 2019-08-16T13:03:31 | https://dev.to/kgoralski/weekly-links-1-je1 | weekly, architecture, microservices, java | I'm the "Link guy" and I like collecting useful stuff https://kgoralski.gitbook.io/wiki/
* [Decisions & Tradeoffs](https://ruthmalan.com/Journal/2019/20190629SlideDocTechnicalLeadershipDecisions.pdf) by Ruth Malan https://ruthmalan.com/
* [Things I Learnt The Hard Way in 30 Years of Software Development](https://blog.juliobiason.net/thoughts/things-i-learnt-the-hard-way/) by Julio Biason
* [What does success look like?](https://zef.me/what-does-success-look-like-590f775b7b58) by Zef Hemel
* [Hail the maintainers](https://aeon.co/essays/innovation-is-overvalued-maintenance-often-matters-more) - "when people start new open source projects, new start ups, new blogs, they celebrate it. When we maintain Linux distribution, kernels, glibc, and gcc, they get no thanks." quote by Scott McCarty
* [Designing the Right Things](https://jarango.com/2019/08/10/designing-the-right-things/) by Jorge Arango
* [Upgrade your cargo cult for the win](https://meaningness.com/metablog/upgrade-your-cargo-cult) by David Chapman
* [Latency SLOs Done Right](https://www.usenix.org/conference/srecon19asia/presentation/schlossnagle-latency) by Theo Schlossnagle
* [Hash table tradeoffs: CPU, memory, and variability](https://medium.com/@leventov/hash-table-tradeoffs-cpu-memory-and-variability-22dc944e6b9a) by Roman Leventov
* [Strengthen the Weakest Link](https://getnave.com/blog/theory-of-constraints/) by Sonya Siderova
* [Why our team cancelled our move to microservices](https://medium.com/@steven.lemon182/why-our-team-cancelled-our-move-to-microservices-8fd87898d952) by Steven Lemon
* [Java Project Loom Update](https://youtu.be/NV46KFV1m-4) with Alan Bateman and Rickard Bäckman
* [Java vs. Kotlin — Part 1: Performance](https://medium.com/@bards95/comparative-evaluation-of-selected-constructs-in-java-and-kotlin-part-1-dynamic-metrics-2592820ce80) by Jakub Anioła
* [Paged Out!](https://pagedout.institute/?page=issues.php) is a new experimental (one article == one page) free magazine
* [Preventing The Capital One Breach](https://ejj.io/blog/capital-one) by Evan J
* [Major breach found in biometrics system used by banks, UK police and defence firms](https://www.theguardian.com/technology/2019/aug/14/major-breach-found-in-biometrics-system-used-by-banks-uk-police-and-defence-firms)
* [Stripe Home](https://stripe.com/gb/blog/stripe-home) - Stripes created "People", a directory to help Stripes meet and get to really know each other. "People" focused on connecting individuals across the company
* [Platform Engineering at DAZN](https://youtu.be/-1rVhPWjd_s)
* [AMD Rome Second Generation EPYC Review: 2x 64-core Benchmarked](https://www.anandtech.com/show/14694/amd-rome-epyc-2nd-gen)
* [LXD 3.16 has been released](https://discuss.linuxcontainers.org/t/lxd-3-16-has-been-released/5445) | kgoralski |
157,670 | Advanced Vue: Controlling Parent Slots (Case Study) | Is there a way to populate a parent's slot from a child component? It's a thorny Vue architecture problem, but also a very interesting one. | 0 | 2019-08-16T17:24:35 | https://michaelnthiessen.com/advanced-vue-controlling-parent-slots | vue, javascript, tutorial | ---
title: "Advanced Vue: Controlling Parent Slots (Case Study)"
canonical_url: https://michaelnthiessen.com/advanced-vue-controlling-parent-slots
description: "Is there a way to populate a parent's slot from a child component? It's a thorny Vue architecture problem, but also a very interesting one."
published: true
tags:
- vue
- javascript
- tutorial
---
Let me ask you about something you've _probably_ never thought about:
> Is there a way to populate a parent's slot from a child component?
Recently a coworker asked me this, and the short answer is:
Yes.
But the solution I arrived at is probably _very_ different from what you're thinking right now.
You see, my first approach turned out to be a terrible idea, and it took me a few attempts before I figured out what I think is the best approach to this problem.
It's a thorny Vue architecture problem, but also a very interesting one.
In this article we'll go through each of these solutions one by one, and see why I think they aren't that great. Ultimately we'll land on the best solution at the end.
But why did we have this problem in the first place?
## Why this obscure problem?

In our application we have a top bar that contains different buttons, a search bar, and some other controls.
It can be slightly different depending on which page you're on, so we need a way of configuring it on a per page basis.

To do this, we want each page to be able to configure the action bar.
Seems straightforward, but here's the catch:
This top bar (which we call an `ActionBar`) is actually part of our main layout scaffolding, which looks like this:
```html
<template>
<div>
<FullPageError />
<ActionBar />
<App />
</div>
</template>
```
Where `App` is dynamically injected based on the page/route you're on.
There are some slots that `ActionBar` has that we can use to configure it. But how can we control those slots from the `App` component?
## Defining the Problem
First it's a good idea to be as clear as we can about what exactly we are trying to solve.
Let's take a component that has one child component and a slot:
```html
// Parent.vue
<template>
<div>
<Child />
<slot />
</div>
</template>
```
We can populate the slot of `Parent` like this:
```html
// App.vue
<template>
<Parent>
<p>This content goes into the slot</p>
</Parent>
</template>
```
Nothing too fancy here...
Populating the slot of a child component is easy, that's how slots are _usually_ used.
But is there a way that we can control what goes into the `slot` of the `Parent` component from inside of our `Child` component?
Stated more generally:
> Can we get a child component to populate the slots of a parent component?
Let's take a look at the first solution I came up with.
## Props down, events up
My initial reaction to this problem was with a mantra that I keep coming back to:
> Props down, events up
The only way data flows down through your component tree is through using _props_. And the only way you communicate back up the tree is by emitting _events_.
This means that if we need to communicate from a child to a parent, we use events for that.
So we'll use events to pass content into the `ActionBar`'s slots!
In each application component we'll need to do the following:
```js
import SlotContent from './SlotContent';
export default {
name: 'Application',
created() {
// As soon as this component is created we'll emit our events
this.$emit('slot-content', SlotContent);
}
};
```
We package up whatever we want to put in the slot into a `SlotContent` component (the name is unimportant). As soon as the application component is created, we emit the `slot-content` event, passing along the component we want to use.
Our scaffold component would then look like this:
```html
<template>
<div>
<FullPageError />
<ActionBar>
<Component :is="slotContent" />
</ActionBar>
<App @slot-content="component => slotContent = component" />
</div>
</template>
```
It will listen for that event, and set `slotContent` to whatever our `App` component sent us. Then, using the built-in `Component`, we can render that component dynamically.
Passing around components with events feels weird though, because it's not really something that "happens" in our app. It's just part of the way the app was designed.
Luckily there's a way we can avoid using events altogether.
## Looking for other $options
Since Vue components are just Javascript objects, we can add whatever properties we want to them.
Instead of passing the slot content using events, we can just add it as a field to our component:
```js
import SlotContent from './SlotContent';
export default {
name: 'Application',
slotContent: SlotContent,
props: { /***/ },
computed: { /***/ },
};
```
We'll have to slightly change how we access this component in our scaffolding:
```html
<template>
<div>
<FullPageError />
<ActionBar>
<Component :is="slotContent" />
</ActionBar>
<App />
</div>
</template>
```
```js
import App from './App';
import FullPageError from './FullPageError';
import ActionBar from './ActionBar';
export default {
name: 'Scaffold',
components: {
App,
FullPageError,
ActionBar,
}
data() {
return {
slotContent: App.slotContent,
}
},
};
```
This is more like static configuration, which is a lot nicer and cleaner 👌
But this still isn't right.
**Ideally, we wouldn't be mixing paradigms in our code, and _everything_ would be done declaratively.**
But here, instead of taking our components and composing them together, we're passing them around as Javascript objects.
It would be nice if we could just write what we wanted to appear in the slot in a normal Vue way.
## Thinking in portals
This is where portals come in.
And they work exactly like you would expect them to. You're able to teleport anything from one location to another. In our case, we're "teleporting" elements from one location in the DOM somewhere else.
We're able to control where a component is rendered in the DOM, regardless of what the component tree looks like.
For example, let's say we wanted to populate a modal. But our modal has to be rendered at the root of the page so we can have it overlay properly. First we would specify what we want in the modal:
```html
<template>
<div>
<!-- Other components -->
<Portal to="modal">
Rendered in the modal.
</Portal>
</div>
</template>
```
Then in our modal component we would have another portal that would render that content out:
```html
<template>
<div>
<h1>Modal</h1>
<Portal from="modal" />
</div>
</template>
```
This is certainly an improvement, because now we're actually writing HTML instead of just passing objects around. It's far more declarative and it's easier to see what's going on in the app.
Except that in some ways it _isn't_ easier to see what's going on.
Because portals are doing some magic under the hood to render elements in different places, it completely breaks the model of how DOM rendering works in Vue. It looks like you're rendering elements normally, but it's not working normally at all. This is likely to cause lots of confusion and frustration.
There's another huge issue with this, but we'll cover that later on.
At least with adding the component to the `$options` property, it's clear that you're doing something different.
I think there's a better way still.
## Lifting state
"Lifting state" is a term that's thrown around front-end development circles a bit.
All it means is that you move state from a child component to a parent, or grandparent component. You move it up the component tree.
This can have profound effects on the architecture of your application. And for our purposes, it actually opens up a completely different — and simpler — solution.
Our "state" here is the content that we are trying to pass into the slot of the `ActionBar` component.
But that state is contained within the `Page` component, and we can't really move page specific logic into the layout component. Our state has to stay within that `Page` component that we're dynamically rendering.
So we'll have to lift the whole `Page` component in order to lift the state.
Currently our `Page` component is a child of the `Layout` component:
```html
<template>
<div>
<FullPageError />
<ActionBar />
<Page />
</div>
</template>
```
Lifting it would require us to flip that around, and make the `Layout` component a child of the `Page` component. Our `Page` component would look something like this:
```html
<template>
<Layout>
<!-- Page-specific content -->
</Layout>
</template>
```
And our `Layout` component would now look something like this, where we can just use a slot to insert the page content:
```html
<template>
<div>
<FullPageError />
<ActionBar />
<slot />
</div>
</template>
```
But this doesn't let us customize anything just yet. We'll have to add some named slots into our `Layout` component so we can pass in the content that should be placed into the `ActionBar`.
The most straightforward way to do this would be to have a slot that replaces the `ActionBar` component completely:
```html
<template>
<div>
<FullPageError />
<slot name="actionbar">
<ActionBar />
</slot>
<slot />
</div>
</template>
```
This way, if you don't specify the "actionbar" slot, we get the default `ActionBar` component. But you can still override this slot with your own custom `ActionBar` configuration:
```html
<template>
<Layout>
<template #actionbar>
<ActionBar>
<!-- Custom content that goes into the action bar -->
</ActionBar>
</template>
<!-- Page-specific content -->
</Layout>
</template>
```
To me, this is the ideal way of doing things, but it does require you to refactor how you lay out your pages. That could be a huge undertaking depending on how your app is built.
If you can't use this method, my next choice would probably be #2, using the `$options` property. It's the cleanest, and most likely to be understood by anyone reading the code.
## We can make this simpler
When we first defined the problem, we stated it in its more general form as this:
> Can we get a child component to populate the slots of a parent component?
But really, this problem has nothing to do with slots specifically. More simply, it's about getting a child component to control what is rendered outside of its own subtree.
In its _most_ general form, we would state the problem as this:
> What is the best way for a component to control what is rendered outside of its subtree?
Examining each of our proposed solutions through this lens gives us an interesting new perspective.
### Emitting events up to a parent
Because our component can't directly influence what happens outside of its subtree, we instead find a component whose subtree contains the target element we are trying to control.
Then we ask it nicely to change it for us.
### Static configuration
Instead of actively asking another component to do something on our behalf, we simply make the necessary information available to other components.
### Portals
You may be noticing a pattern here among these first 3 methods.
So let me make this assertion:
**There is no way for a component to control something outside of its subtree.**
(proving it is left as an exercise to the reader)
So each method here is a different way to get another component to do our bidding, and control the element that we are actually interested in.
The reason that portals are nicer in this regard is that they allow us to encapsulate all of this communication logic into separate components.
### Lifting State
This is where things really start to change, and why lifting state is a simpler and more powerful technique than the first 3 we looked at.
Our main limitation here is that what we want to control is outside of our subtree.
The simplest solution to that:
Move the target element into our subtree so we can control it!
Lifting state — along with the logic to manipulate that state — allows us to have a larger subtree and to have our target element contained within that subtree.
If you can do this, it's the simplest way to solve this specific problem, as well as a whole class of related problems.
Keep in mind, this doesn't necessarily mean lifting the _entire component_. You can also refactor your application to move a piece of logic into a component higher up in the tree.
## It's really just dependency injection
Some of you who are more familiar with software engineering design patterns may have noticed that what we're doing here is dependency injection — a technique we've been using for decades in software engineering.
One of its uses is in making code that is easy to configure. In our case, we're configuring the `Layout` component differently in each `Page` that uses it.
When we flipped the `Page` and `Layout` components around, we were doing what is called an inversion of control.
In component-based frameworks, the parent component controls what the child does (because it is within its subtree), so instead of having the `Layout` component controlling the `Page`, we chose to have the `Page` control the `Layout` component.
In order to do this, we supply the `Layout` component what it needs to get the job done using slots.
As we've seen, using dependency injection has the effect of making our code a lot more modular and easier to configure.
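To see the pattern stripped of Vue entirely, here's a rough plain-JavaScript sketch of the same inversion of control — the names (`renderLayout`, the slot strings) are purely illustrative and not part of any Vue API:

```javascript
// Instead of the layout deciding what the page renders, each page hands
// the layout the pieces ("slots") it needs — its dependencies are injected.
function renderLayout({ actionbar, content }) {
  // The layout only assembles what it was given, falling back to a default.
  return [actionbar || '<ActionBar (default) />', content].join('\n');
}

// Each "page" configures the layout differently by injecting different slots.
const homePage = renderLayout({ content: '<HomeContent />' });
const searchPage = renderLayout({
  actionbar: '<ActionBar><SearchBox /></ActionBar>',
  content: '<SearchResults />',
});
```

The home page gets the default action bar for free, while the search page overrides it — the same trade we made with the named `#actionbar` slot.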
## Conclusion
We went through 4 different ways of solving this problem, showing the pros and cons of each solution. Then we went a little further and transformed the problem into a more general one of controlling something outside of a component's subtree.
I hope that you'll see that lifting state and dependency injection are two very useful patterns to use. They are wonderful tools for you to have in your arsenal, as they can be applied to a myriad of software development problems.
But above all, I hope you take this away:
By using some common software patterns we were able to turn a problem that only had ugly solutions into a problem that had a very elegant one.
Many other problems can be attacked in this way — by taking an ugly, complicated problem and transforming it into a simpler, easier to solve problem.
If you want some more advanced content on slots, I [replicated the v-for directive](https://michaelnthiessen.com/nested-slots-in-vue), showing how to use nested slots and nested scoped slots recursively. It's one of my favourite articles, so do check it out!
| michaelthiessen |
160,790 | Explain azure devops Like I'm Five | A post by Ankita Singhal | 0 | 2019-08-24T13:15:29 | https://dev.to/ankitasinghal/explain-azure-devops-like-i-m-five-4j5o | explainlikeimfive, azure, devops | ankitasinghal | |
157,783 | Introducing the Best 10 Node.js Frameworks for 2019 and 2020 | Originally published at softwareontheroad.com I’m so tired of reading articles claiming what is the... | 1,640 | 2019-08-23T11:03:13 | https://softwareontheroad.com/nodejs-frameworks/ | node, javascript, frameworks | ---
title: Introducing the Best 10 Node.js Frameworks for 2019 and 2020
published: true
tags: [node, javascript, frameworks]
cover_image: "https://thepracticaldev.s3.amazonaws.com/i/z5j0wnr711voyslzldah.jpg"
canonical_url: https://softwareontheroad.com/nodejs-frameworks/
series: JavaScript on Data
---
**Originally published at [softwareontheroad.com](https://softwareontheroad.com/nodejs-frameworks/)**
I’m so tired of reading articles claiming what is the best node.js framework based on biased opinions or sponsorships _(yes, that’s a thing)_
So here are the top node.js frameworks ranked by daily downloads, the data was taken from npmjs.com itself _(sorry yarn)_.
{% youtube P0Xk8UhawEQ %}
# What is a node.js framework?
# How to choose a node.js framework for my application?
You have to consider mainly 2 things:
1. The scalability and robustness of the framework
2. If the development process is something you feel comfortable working with.
Regardless of scalability and robustness, every node.js web framework is built on top of the `http` module.
Some of these frameworks add too much … and that makes a huge impact on the server’s throughput.
In my opinion, working with a barebone framework like Express.js or Fastify.js is best when the service you are developing is small in business logic but needs to be highly scalable.
On the other hand, if you are developing a medium-size application, it's better to go with a framework that gives you a clear structure, like Next.js or Loopback.
There is no simple answer to the question; you'd better have a peek at how to declare API routes in every framework on this list and decide for yourself.
<cta-container type="hire" copy="node"></cta-container>
# 10. Adonis
[_Adonis.js_](https://adonisjs.com/) is an MVC (Model-View-Controller) node.js framework capable of building an API Rest with JWT authentication and database access.
## What’s is this framework about?
The good thing is that Adonis.js framework comes with a CLI to create the bootstrap for applications.
```bash
$ npm i -g @adonisjs/cli
$ adonis new adonis-tasks
$ adonis serve --dev
```
The typical Adonis app has an MVC structure, that way you don’t waste time figuring out how you should structure your web server.
Some apps built with adonis can [be found here.](https://madewithadonisjs.com/)
# 👉 GET MORE ADVANCED NODE.JS DEVELOPMENT ARTICLES
Join the other 2,000+ savvy node.js developers who get article updates.
[](https://softwareontheroad.us20.list-manage.com/subscribe/post?u=337d8675485234c707e63777d&id=14f1331817
)
# 9. Feathers
[_Feather.js_](https://feathersjs.com/) is a node.js framework promise to be a REST and realtime API layer for modern applications.
## See what’s capable of!!
This is all the code you need to set-up your API REST + realtime WebSockets connection thanks to the socket.io plugin
```ts
const feathers = require('@feathersjs/feathers');
const express = require('@feathersjs/express');
const socketio = require('@feathersjs/socketio');
const memory = require('feathers-memory');
// Creates an Express compatible Feathers application
const app = express(feathers());
// Parse HTTP JSON bodies
app.use(express.json());
// Parse URL-encoded params
app.use(express.urlencoded({ extended: true }));
// Add REST API support
app.configure(express.rest());
// Configure Socket.io real-time APIs
app.configure(socketio());
// Register a messages service with pagination
app.use('/messages', memory({
paginate: {
default: 10,
max: 25
}
}));
// Register a nicer error handler than the default Express one
app.use(express.errorHandler());
// Add any new real-time connection to the `everybody` channel
app.on('connection', connection => app.channel('everybody').join(connection));
// Publish all events to the `everybody` channel
app.publish(data => app.channel('everybody'));
// Start the server
app.listen(3030).on('listening', () =>
console.log('Feathers server listening on localhost:3030')
);
```
Pretty sweet right?
Here are some apps [built with feathers.js.](https://github.com/feathersjs/awesome-feathersjs#projects-using-feathers)
# 8. Sails
_[Sails.js](https://sailsjs.com/) Ye’ olde node.js framework_
With 7 years of maturity, this is a battle-tested node.js web framework that you should definitively check out!
## See it in action
Sails.js comes with a CLI tool to help you get started in just 4 steps
```bash
$ npm install sails -g
$ sails new test-project
$ cd test-project
$ sails lift
```
# 7. Loopback
Backed by IBM, [Loopback.io](https://loopback.io) is an enterprise-grade node.js framework, used by companies such as GoDaddy, Symantec, IBM itself.
They even offer Long-Term Support (LTS) for 18 months!
This framework comes with a CLI tool to scaffold your node.js server
```bash
$ npm i -g @loopback/cli
```
Then to create a project
```bash
$ lb4 app
```
Here is what an API route and controller looks like:
```js
import {get} from '@loopback/rest';
export class HelloController {
@get('/hello')
hello(): string {
return 'Hello world!';
}
}
```
# 6. Fastify
[Fastify.io](https://www.fastify.io) is a node.js framework that is designed to be the replacement of express.js [with a 65% better performance](https://www.fastify.io/benchmarks/).
## Show me the code
```ts
// Require the framework and instantiate it
const fastify = require('fastify')({
logger: true
})
// Declare a route
fastify.get('/', (request, reply) => {
reply.send({ hello: 'world' })
})
// Run the server!
fastify.listen(3000, (err, address) => {
if (err) throw err
fastify.log.info(`server listening on ${address}`)
})
```
And that’s it!
I love the simplicity and reminiscence to Express.js of Fastify.js, definitively is the framework to go if performance is an issue in your server.
# 5. Restify
[Restify](http://restify.com/) claims to be the future of Node.js Web Frameworks.
This framework is used in production by NPM, Netflix, Pinterest and Napster.
## Code example
Setting up a Restify.js server is just as simple as this
```ts
const restify = require('restify');
function respond(req, res, next) {
res.send('hello ' + req.params.name);
next();
}
const server = restify.createServer();
server.get('/hello/:name', respond);
server.head('/hello/:name', respond);
server.listen(8080, function() {
console.log('%s listening at %s', server.name, server.url);
});
```
# 4. Nest.js
A relatively new node.js framework, [Nest.js](https://nestjs.com) has a similar architecture to Angular.io, so if you are familiar with that frontend framework, you'll find this one pretty easy to develop as well.
## Example
```ts
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
async function bootstrap() {
const app = await NestFactory.create(AppModule);
app.setViewEngine('hbs');
await app.listen(3000);
}
bootstrap();
```
# 3. Hapi
One of the big 3 node.js frameworks, [hapi.js](https://hapi.dev) has an ecosystem of libraries and plugins that makes the framework highly customizable.
Although I never used hapi.js on production, I’ve been using its validation library Joi.js for years.
## Creating a server
A hapi.js webserver looks like this
```ts
const Hapi = require('@hapi/hapi');
const init = async () => {
const server = Hapi.server({
port: 3000,
host: 'localhost'
});
await server.start();
console.log('Server running on %s', server.info.uri);
};
init();
```
# 2. Koa
[Koa](https://koajs.com) is a web framework designed by the team behind Express.js the most famous and used node.js framework.
Koa aims to be a smaller, more expressive, and more robust foundation for web applications and APIs than express.js.
Through leveraging generators Koa allows you to ditch callbacks and greatly increase error-handling.
Koa does not bundle any middleware within the core and provides an elegant suite of methods that make writing servers fast and enjoyable.
## Example
```ts
const Koa = require('koa');
const app = new Koa();
app.use(async ctx => {
ctx.body = 'Hello World';
});
app.listen(3000);
```
# 1. Express
[Express.js](https://expressjs.com) is definitively the king of node.js frameworks, will reach the incredible mark of 2 million daily downloads by the end of 2019.
Despite being such an old framework, Express.js is actively maintained by the community and is used by big companies such as Uber, Mulesoft, IBM, and so on.
## Example
Just add it to your node.js project
`$ npm install express`
Then declare some API routes
```ts
const express = require('express')
const app = express()
const port = 3000
app.get('/', (req, res) => res.send('Hello World!'))
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
```
And that’s all you need to start using it!
# Conclusion
There are tons of node.js frameworks out there, the best you can do is go and try them all ‘til you find the ones that suit your needs.
Personally, I prefer Express.js because, through these 6 years of node.js development, I build a strong knowledge of good architectural patterns, all based on trial and error.
But that doesn’t mean you have to do the same, here is all the secrets of a good express.js framework project.
{% link santypk4/bulletproof-node-js-project-architecture-4epf %}
## Now tell me, what is your favorite node.js framework?
Send me a tweet to [@santypk4](https://twitter.com/santypk4), come on! I want to know what the people are using, I don’t want to fall behind!
| santypk4 |
158,225 | How to Write Unit Tests with JUnit 5 in Java (Eclipse) | What is JUnit? JUnit is a Java testing framework. It can be used for both Unit Testing and... | 0 | 2019-08-17T22:02:39 | https://dev.to/yongliang24/how-to-write-unit-tests-with-junit-5-in-java-eclipse-341h | ---
title: How to Write Unit Tests with JUnit 5 in Java (Eclipse)
published: true
description:
tags:
---
## What is JUnit?
JUnit is a Java testing framework. It can be used for both Unit Testing and UI Testing. It is a great tool and fun to use. In this article, I would like to demonstrate how to use this framework in Eclipse IDE for beginners.
Junit 5 comes with Eclipse so all we need to do is add it to our project when we first creating the project.
Steps:
1. add Junit to the project.
2. add new file -> create a new class file.
3. add new file -> create JUnit Test file.
4. write some methods in the class.
5. create an object of that class inside the Junit Test file.
6. use the object to invoke class methods to test inputs and outputs.
```
//This class has one method which will return a string argument.
public class HelloWorld {
public String printThis(String str){
return str;
}
}
```
```
//The JUnit test file would look like this
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class HelloWorldTest {
  //instantiate a HelloWorld object called hello.
  HelloWorld hello = new HelloWorld();

  //test method names tend to be descriptive
  @Test
  void should_return_the_same_string_that_was_passed_in() {
    //one way to think about constructing a test is to use these 3 steps:
    //given (the inputs)
    String cake = "chocolate cake";
    //when (the action)
    String result = hello.printThis(cake);
    //then (the expected result)
    assertEquals(cake, result);
    //this will pass the test if both values are equal.
  }
}
```
These are the steps to perform a most basic unit test.
| yongliang24 | |
158,588 | Self-hosted agents at Azure DevOps: a little cost-saving trick | Azure DevOps does a great job when providing hosted agent services. They come loa... | 0 | 2019-09-11T14:42:32 | https://dev.to/akuryan/self-hosted-agents-at-azure-devops-a-little-cost-saving-trick-4297 | devops, azure, azuredevops | ---
title: Self-hosted agents at Azure DevOps: a little cost-saving trick
published: true
tags: DevOps,Azure,Azure Devops
canonical_url:
---
Azure DevOps does a great job when providing hosted agent services. They come loaded with all required software, they care about updates and everything else, but they have some major drawbacks:
* No static external IP address (so it's not possible to add an extra layer of security)
* You get a new VM each time, so you need to clone your repository fresh, install a fresh set of NPM packages, and pull all those base Docker images (not a big deal for alpine-based images, but when it comes to the Microsoft ones, it really hurts)
* For closed source projects there is a hard limit of 1800 minutes per month per 1 hosted job (I do not like limits even if I never hit them 😀 )
And so on – you name it
To overcome this, one can deploy self-hosted agents, but then you'd have to deal with updates, installation of tooling and **extra cost**. How to deal with updates and actual management is covered in [this blog post](https://wouterdekort.com/2018/02/25/build-your-own-hosted-vsts-agent-cloud-part-1-build/), but it still leaves the question of cost partially open. I have spent some time and did several improvements to the work of Wouter de Kort in a fork of his repository, check out at [my repository](https://github.com/akuryan/self-hosted-azure-devops-agents). Here I did some scripting optimization and improvement, but the main thing, which I wish to cover in this blog post, is the cost-optimization tool I built for our fleet of self-managed agents.
## Problem
We had only 2 hosted VS2015/17 jobs for performing builds and releases for 10 projects, each of which required anywhere from 5 to 20 minutes to build and somewhere between 15 and 30 minutes to release. That was quite taxing, especially when queues built up.
## Idea
In our particular situation we have 7 parallel jobs for self-hosted agents, which come for "free" through a subscription to Visual Studio, so I began searching for how to leverage those to improve our build and release speed. The initial setup strictly followed the [Wouter de Kort blog series](https://wouterdekort.com/2018/02/25/build-your-own-hosted-vsts-agent-cloud-part-1-build/), with automated switching of VMs in a virtual machine scale set nightly and on weekends to save costs. But as soon as I began receiving requests to start some VMs for the weekend, start them later in the evening to fulfill some off-hour tasks, or start them earlier in the morning for urgent deployments, I started seeking a way to automate these tasks. This led to the idea of a continuous web job which continuously monitors the queue in the target pool and starts VMs when they are needed (and stops them when they are not).
## Realization
I came to Azure DevOps with a strong TeamCity background, so I was hoping to find something similar to TeamCity's build queue, but, alas, the approach is different: all tasks are assigned to a pool, where they are picked up by the first agent to be online and free, in a FIFO manner (first in, first out). There is no queue at all; all tasks just have a property "Result". If it is null, that task has not yet been executed. If all tasks assigned to the pool have a non-null "Result" property, there is nothing to do. So, if there are some tasks in a pool with a null "Result" property, the code checks how many online agents are present in the pool. If the agent count is greater than or equal to the task count, again, there is nothing to do. If the agent count is less than the task count, we need to start more virtual machines in the virtual machine scale set for our agents. If there are more agents online in our pool than assigned tasks, we need to stop the extra agents in the scale set. There is also an option to define business hours and days of the week during which a minimum required number of agents is kept online to speed up development (so teams do not have to wait for an agent to become active and consume a task; the task starts immediately). The check to provision more VMs runs once every 2 minutes, to minimize waiting time without abusing the API too much. The check to deprovision VMs runs once every 15 minutes, which allows more runtime for agents. Normally, I observe that almost immediately after a successful build a developer wants to deploy to the Test environment.
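The decision logic described above can be condensed into a small sketch (my own illustration in TypeScript, not the actual web job code; all names and the exact business-hours rule are assumptions):

```typescript
interface PoolState {
  pendingTasks: number;      // tasks in the pool whose "Result" is still null
  onlineAgents: number;      // agents currently online in the pool
  minBusinessAgents: number; // minimum agents kept online during business hours
  isBusinessHours: boolean;
}

// Returns how many VMs to start (positive) or stop (negative).
function scaleDelta(s: PoolState): number {
  const floor = s.isBusinessHours ? s.minBusinessAgents : 0;
  const desired = Math.max(s.pendingTasks, floor);
  return desired - s.onlineAgents;
}
```

The actual implementation additionally runs the start check and the stop check on different intervals, as described above.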
In my humble opinion, this solution is better than statically switching virtual machines on and off on a schedule, because it allows fulfilling any task (compile, test, release, whatever is executed on your agents) at any time, in a just-in-time manner. Naturally, if all agents were switched off, it will take some time for them to come online, but thanks to the business hours/days option, this will only happen in off-hours.
All settings and deployment instructions for the Autoscaler application are described [here](https://github.com/akuryan/self-hosted-azure-devops-agents/blob/master/autoscalingApp/README.md). I will not duplicate them in this blog post, as over time they could change, and the readme document will be kept up-to-date.
The code of the Autoscaler app can be seen at [this location](https://github.com/akuryan/self-hosted-azure-devops-agents/tree/master/autoscalingApp/AgentsMonitor).
There is also an [ARM template](https://github.com/akuryan/self-hosted-azure-devops-agents/tree/master/autoscalingApp/arm-template) for the baseline configuration of a web app, which is suitable if you have only one pool in Azure DevOps to monitor, as it defines settings at the App Settings level of the web app itself. If there is more than one pool, only shared settings should be defined in the App Settings of the web app, while specific settings should be added to the App Settings of each individual web job. You can host as many web jobs as you need, but mind the web app limits.
Be aware that the ARM template by default will deploy to a D1 web app (a Shared web app, which allows a limited amount of CPU time and only 1 GB of RAM **without Always On**). The "Always On" feature ensures that the hosting process is always awake and is not shut down after 20 minutes of inactivity. So, if a web job is deployed without additional precautions, it will not work, as the web app runtime will shut down the Kudu process after 20 minutes of inactivity. There is a nice trick to keep it up and running: ping the Kudu homepage of your web app at least once every 20 minutes. I am using https://www.happyapps.io/ to visit the Kudu homepage of my web app once every 5 minutes at the address https://webappName.scm.azurewebsites.net/
## Deployment hints
By default, the Azure web app runtime does not execute the continuous web job from the path it is deployed to, but I still want to be sure it is not running while I am deploying, so I use the following PowerShell scripts to stop/start the web job.
To stop the web job:
```powershell
Invoke-AzureRmResourceAction -ResourceGroupName resourceGroupName -ResourceType Microsoft.Web/sites/continuouswebjobs -ResourceName webAppName/webJobName -Action Stop -Force -ApiVersion 2018-02-01
```
To start the web job:
```powershell
Invoke-AzureRmResourceAction -ResourceGroupName resourceGroupName -ResourceType Microsoft.Web/sites/continuouswebjobs -ResourceName webAppName/webJobName -Action Start -Force -ApiVersion 2018-02-01
```
The same scripts are used when rebuilding the Virtual Machines Scale set to ensure that the web job will not attempt to stop the VMs before they have been registered at the pool.
This blog post has been created by me and edited by my friend [Rob Habraken](https://www.robhabraken.nl/). | akuryan |
159,267 | Managing Key-Value Constants in TypeScript | A lot of applications have a dropdown select menu in a form. Let's imagine a form control like below;... | 0 | 2019-08-20T15:24:08 | https://dev.to/angular/managing-key-value-constants-in-typescript-221g | typescript, angular | ---
title: Managing Key-Value Constants in TypeScript
published: true
description:
tags: TypeScript, Angular
---
A lot of applications have a dropdown select menu in a form. Let's imagine a form control like below;

Typically, each select menu's item has **ID** and **label**. The ID is responsible to communicate with other components, services, or server-side. The label is responsible to display text for users.
This post explains how to manage constants for menu items that have an ID and a mapping for their label. It uses TypeScript's `as const` feature, which was introduced in v3.4.
## Define colorIDs Tuple
In TypeScript, a tuple is an array whose length and item types are fixed. You can define a tuple with the `as const` directive on an array literal. (The `as const` directive needs TypeScript 3.4+.)
Create `colors.ts` and define `colorIDs` tuple as following;
```typescript
export const colorIDs = ['green', 'red', 'blue'] as const;
```
The type of `colorIDs` is not `string[]` but `['green', 'red', 'blue']` . Its length is absolutely 3 and `colorIDs[0]` is always `'green'`. This is a tuple!
## Extract ColorID Type
A Tuple type can be converted to its item's **union type**. In this case, you can get `'green' | 'red' | 'blue'` type from the tuple.
Add a line to `colors.ts` like below;
```typescript
export const colorIDs = ['green', 'red', 'blue'] as const;
type ColorID = typeof colorIDs[number]; // === 'green' | 'red' | 'blue'
```
Confusing? Don't worry. It's not magic.
`colorIDs[number]` means "the type of the fields that can be accessed by a number index", which is `'green'`, `'red'`, or `'blue'`.
So `typeof colorIDs[number]` becomes the union type `'green' | 'red' | 'blue'`.
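For contrast, here is a minimal sketch (my own illustration, not from the original example) of what happens without `as const`: the literal types are widened, so the indexed access type collapses to plain `string` and the union is lost.

```typescript
// Without `as const` the array is widened to string[]:
const looseIDs = ['green', 'red', 'blue'];
type LooseID = typeof looseIDs[number]; // === string (union lost)

// With `as const` the tuple keeps its literal member types:
const strictIDs = ['green', 'red', 'blue'] as const;
type StrictID = typeof strictIDs[number]; // === 'green' | 'red' | 'blue'

// At runtime both arrays hold the same values; the difference is purely
// at the type level. The tuple can still back a runtime check, e.g. a type guard:
const isStrictID = (value: string): value is StrictID =>
  (strictIDs as readonly string[]).indexOf(value) !== -1;
```

This is why the `as const` step matters: everything in the next section builds on the `ColorID` union staying narrow.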
## Define colorLabels map
`colorLabels` map is an object like the below;
```typescript
const colorLabels = {
blue: 'Blue',
green: 'Green',
red: 'Red',
};
```
Because `colorLabels` has no explicit type, you won't notice even if you miss defining `red`'s label.
Let's make sure that `colorLabels` has a complete label set of all colors! `ColorID` can help it.
TypeScript gives us the `Record` type to define a Key-Value map object. The key is `ColorID` and the value is a string, so the type of `colorLabels` should be `Record<ColorID, string>`.
```typescript
export const colorIDs = ['green', 'red', 'blue'] as const;
type ColorID = typeof colorIDs[number];
export const colorLabels: Record<ColorID, string> = {
green: 'Green',
red: 'Red',
blue: 'Blue',
} as const;
```
When you miss defining the `red` field, the TypeScript compiler throws an error on the object.

By the way, **Angular v8.0+ is compatible with TypeScript v3.4**. The demo app shown above is the following:
```typescript
import { Component } from '@angular/core';
import { FormControl } from '@angular/forms';
import { colorIDs, colorLabels } from './colors';
@Component({
selector: 'app-root',
template: `
<label for="favoriteColor">Select Favorite Color: </label>
<select id="favoriteColor" [formControl]="favoriteColorControl">
<option *ngFor="let id of colorIDs" [ngValue]="id">
{{ colorLabels[id] }}
</option>
</select>
<div>Selected color ID: {{ favoriteColorControl.value }}</div>
`,
})
export class AppComponent {
readonly colorIDs = colorIDs;
readonly colorLabels = colorLabels;
readonly favoriteColorControl = new FormControl(this.colorIDs[0]);
}
```
## Conclusion
- `as const` turns an array into a **tuple**
- `typeof colorIDs[number]` returns an **union type** of its item
- Define an object with `Record<ColorID, string>` for keeping a complete field set. | lacolaco |
159,343 | Scaffolding Redux boilerplate with code generators | The article was originally posted on my personal blog. In the previous post we saw how easy it is to... | 0 | 2019-08-20T19:13:47 | https://claritydev.net/blog/scaffolding-redux-boilerplate-with-code-generators | javascript, productivity, react, redux | _The article was originally posted on [my personal blog](https://claritydev.net/blog/scaffolding-redux-boilerplate-with-code-generators)._
In the [previous post](https://claritydev.net/blog/speed-up-your-react-developer-workflow-with-code-g) we saw how easy it is to get up and running with JavaScript code generators on the example of React components. In this post we'll build on that knowledge and dive deeper into generating code with a more advanced example - scaffolding Redux boilerplate.
When I first started working with Redux I was impressed with its capabilities and how nicely it can abstract some complex component logic into actions. However, I was also surprised by how much boilerplate it requires to get properly setup in complex applications.
First you need to declare action types, then import them into action creators and define the action creators themselves. Of course action creators are optional, but they make the code cleaner. Finally, the action types have to be imported into the reducer, which also needs setting up. The number of steps increases when you throw Redux middleware into the mix. This is particularly relevant in case you use Redux to handle API calls. In such a case you often want to show a loading indicator while data is being fetched, and then either display the data after it's loaded or show an error message when something goes wrong. I'd end up using three action types just for one API call - `ACTION_BEGIN`, `ACTION_SUCCESS` and `ACTION_ERROR`, or some variation of them.
Let's speed up this particular case of setting up Redux actions for data fetching by generating boilerplate code with a generator. This generator will have two options - create new action from scratch or modify existing by adding a new one. The final code is available on [Github](https://github.com/Clarity-89/react-generator).
We'll continue building on the example from the previous post and add a separate prompt for the Redux actions. First let's move the templates and config for the React component generator into their own separate folders and add the folders for Redux actions.
After these changes we have file structure as follows.
```
mysite/
src/
scripts/
generator/
config/
react.js
redux.js
templates/
react/
redux/
config.js
index.js
listComponents.js
```
Separating configs for both generators will make it easier to navigate and update the code. We'll still keep all the prompts in the same file, however that can also be separated if needed.
We'll start by adding more prompts to our main `config.js`.
```javascript
description: "Generate new React component or Redux action",
prompts: [
{
type: "list",
name: "select",
choices: () => [
{ name: "React Component", value: "react_component" },
{ name: "Redux Action", value: "redux_action" }
]
},
// React component prompts
// ...
{
type: "list",
name: "create_or_modify",
message: "Do you want to create a new action or modify an existing one?",
when: answer => answer.select === "redux_action",
choices: () => [
{
name: "Create (will create new actions file)",
value: "create"
},
{
name: "Modify (will add the action to an existing one) ",
value: "modify"
}
]
},
{
type: "list",
name: "action",
message: "Select action folder",
when: ({ select, create_or_modify }) => {
return select === "redux_action" && create_or_modify === "modify";
},
choices: listComponents("actions")
},
{
type: "input",
name: "action_prefix",
message: "Action prefix (e.g. 'user'):",
when: ({ select, create_or_modify }) =>
select === "redux_action" && create_or_modify === "create",
validate: value => {
if (!value) {
return "A name is required";
}
return true;
}
},
{
type: "input",
name: "action_name",
message: "Action name:",
when: answer => answer.select === "redux_action",
validate: value => {
if (!value) {
return "A name is required";
}
return true;
}
},
{
type: "confirm",
name: "reducer_confirm",
message: "Do you want to import actions into reducer?",
when: ({ select }) => select === "redux_action"
},
{
type: "list",
name: "reducer_name",
choices: listComponents("reducers"),
when: ({ select, create_or_modify, reducer_confirm }) => {
return (
select === "redux_action" &&
create_or_modify === "modify" &&
reducer_confirm
);
},
message: "Select reducer"
},
],
```
At the topmost level we ask the user whether they want to scaffold a React component or a Redux action. After this we'll have to add `when: answer => answer.select === "redux_action"` to all the prompt objects related to Redux actions, and a similar one, checking for `react_component`, to the React prompts. After that we follow a familiar path: checking if the user wants to create a new action from scratch or modify an existing one. If the choice is to create a new action, we'll need to get a prefix for it (e.g. if you're scaffolding user actions, you provide the `user` prefix and the generator will create `userActions`, `userReducer`, etc.). If the choice is to modify an existing action, the user is asked to select which file to add the actions to. It should be mentioned that the following generator assumes you structure your Redux setup as follows, although it can be easily adjusted to any folder structure.
```
mysite/
src/
actions/
actionTypes.js
testActions.js
reducers/
initialState.js
rootReducer.js
testReducer.js
```
Also note that `listComponents` was modified to accept `type` parameter, so it's able to list files of different types.
```javascript
const fs = require("fs");
module.exports = (type = "components") => {
const names = fs.readdirSync("src/" + type);
return names.map(i => i.replace(".js", ""));
};
```
After going through the prompts, it's time to get to the core of the generator, which is its actions. We add them to the **redux.js** file inside the **config** folder.
```javascript
exports.reduxConfig = data => {
const dirPath = `${__dirname}/../../..`;
const reduxTemplates = `${__dirname}/../templates/redux`;
let actions = [
{
type: "append",
path: `${dirPath}/actions/actionTypes.js`,
templateFile: `${reduxTemplates}/actionTypes.js.hbs`
}
];
let actionPath = `${dirPath}/actions/{{camelCase action_prefix}}Actions.js`;
if (data.create_or_modify === "create") {
actions = [
...actions,
{
type: "add",
path: actionPath,
templateFile: `${reduxTemplates}/create/actions.js.hbs`
}
];
// Create reducer
if (data.reducer_confirm) {
actions = [
...actions,
{
type: "add",
path: `${dirPath}/reducers/{{camelCase action_prefix}}Reducer.js`,
templateFile: `${reduxTemplates}/create/reducer.js.hbs`
},
// Add new reducer to the root reducer
{
type: "modify",
path: `${dirPath}/reducers/rootReducer.js`,
pattern: /\/\/plopImport/,
templateFile: `${reduxTemplates}/create/rootReducer.js.hbs`
},
{
type: "modify",
path: `${dirPath}/reducers/rootReducer.js`,
pattern: /\/\/plopReducer/,
template: ",{{action_prefix}}\n//plopReducer"
}
];
}
}
if (data.create_or_modify === "modify") {
actionPath = `${dirPath}/actions/{{camelCase action}}.js`;
let reducerPath = `${dirPath}/reducers/{{reducer_name}}.js`;
const actionType = "append";
actions = [
...actions,
{
type: actionType,
path: actionPath,
pattern: /import {/,
templateFile: `${reduxTemplates}/modify/actionImports.js.hbs`
},
{
type: actionType,
path: actionPath,
templateFile: `${reduxTemplates}/modify/actions.js.hbs`
}
];
if (data.reducer_confirm) {
actions = [
...actions,
{
type: actionType,
path: reducerPath,
pattern: /import {/,
templateFile: `${reduxTemplates}/modify/actionImports.js.hbs`
},
{
type: "modify",
path: reducerPath,
pattern: /\/\/plopImport/,
templateFile: `${reduxTemplates}/modify/reducer.js.hbs`
}
];
}
}
return actions;
};
```
That's quite a bit of code; however, in essence it boils down to 3 main pieces: actions for creating a new Redux action, actions for modifying one, and common actions for both cases. The common action here is declaring action types, the template for which looks like this:
```handlebars
// actionTypes.js.hbs
export const {{constantCase action_name}}_BEGIN = "{{constantCase action_name}}_BEGIN";
export const {{constantCase action_name}}_SUCCESS = "{{constantCase action_name}}_SUCCESS";
export const {{constantCase action_name}}_ERROR = "{{constantCase action_name}}_ERROR";
```
That's a lot of manual typing automated already! However, this is only the beginning. When creating or updating actions, we can scaffold action creators in a similar manner with this template:
``` handlebars
// actions.js.hbs
import {
{{constantCase action_name}}_BEGIN,
{{constantCase action_name}}_SUCCESS,
{{constantCase action_name}}_ERROR
} from './actionTypes';
export const {{camelCase action_name}}Begin = payload => ({
type: {{constantCase action_name}}_BEGIN,
payload
});
export const {{camelCase action_name}}Success = payload => ({
type: {{constantCase action_name}}_SUCCESS,
payload
});
export const {{camelCase action_name}}Error = payload => ({
type: {{constantCase action_name}}_ERROR,
payload
});
```
The reducer can be scaffolded like so:
``` handlebars
// reducer.js.hbs
import {
{{constantCase action_name}}_BEGIN,
{{constantCase action_name}}_SUCCESS,
{{constantCase action_name}}_ERROR
} from "../actions/actionTypes";
import initialState from "./initialState";
export default function(state = initialState.{{camelCase action_name}}, action) {
switch (action.type) {
case {{constantCase action_name}}_BEGIN:
case {{constantCase action_name}}_SUCCESS:
case {{constantCase action_name}}_ERROR:
return state;
//plopImport
}
}
```
The rest of the templates can be examined in the [Github repository](https://github.com/Clarity-89/react-generator/tree/master/src/scripts/generator).
We're using a new action type - `modify`, which in contrast to `append`, replaces the text in the file located at `path`. In our case we use `modify` action to add generated code at a particular point in the template. To specify at which point the code should be inserted we provide a special `//plopImport` comment (it can be named anything) and then reference it in the `pattern` property of the action object. Since plop will replace this comment with the template it received, we need to remember to add the comment into the template, in the same place we'd like new code to be added. Another option could be to [create own action](https://github.com/amwmedia/plop#setactiontype) to have more granular control over code generation.
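The marker-comment technique itself is just string surgery. A minimal illustration (my own sketch of the idea, not plop's actual implementation):

```typescript
// Insert a generated snippet right before a marker comment, keeping the
// marker in place so future generator runs can insert at the same point.
function insertAtMarker(source: string, marker: string, snippet: string): string {
  return source.replace(marker, `${snippet}\n${marker}`);
}

const rootReducer = [
  'export default combineReducers({',
  '  user,',
  '//plopImport',
  '});',
].join('\n');

const updated = insertAtMarker(rootReducer, '//plopImport', '  product,');
// `updated` now contains "  product," on the line above "//plopImport".
```

Because the marker survives each insertion, the generator can be run any number of times against the same file.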
The final touch is to add the newly created Redux generator actions and combine them with existing React generator in the main **config.js** file.
```javascript
// config.js
const { reactConfig } = require("./config/react");
const { reduxConfig } = require("./config/redux");
module.exports = {
// Prompts
actions: data => {
return data.select === "react_component"
? reactConfig(data)
: reduxConfig(data);
}
}
```
Now the newly created generator is ready for a test drive. Note that before using it, you need to create the **actions** and **reducers** folders, the latter containing **rootReducer.js**.

And with that we have a handy generator that will abstract away a lot of manual work. This example can be extended further, for example to scaffold middleware actions, be it redux-thunk, redux-saga or anything else.
Got any questions/comments or other kinds of feedback about this post? Let me know in the comments here or on [Twitter](https://mobile.twitter.com/Clarity_89). | clarity89 |
191,664 | Micronaut: Declarative HTTP Client | Just a hint of what can you do with the declarative HTTP Client of Micronaut | 0 | 2019-10-19T23:31:51 | https://dev.to/mrduckieduck/micronaut-declarative-http-client-5jn | java, microunaut, microservices | ---
title: Micronaut: Declarative HTTP Client
published: true
description: Just a hint of what can you do with the declarative HTTP Client of Micronaut
tags: java,microunaut,microservices
---
Currently I'm working on a side project and I decided to use [Micronaut](https://micronaut.io/) (if you don't know anything about [Micronaut](https://micronaut.io/), [here](https://www.baeldung.com/micronaut) you can read what it is and how it works), and there is a lot of cool stuff within this "new" framework.
Some of the things I like the most are the really low memory footprint, [Micronaut Data](https://micronaut-projects.github.io/micronaut-data/latest/guide/) and the possibility to compile to native code using [GraalVM](https://www.graalvm.org/). But those are not the only "cool" features this framework has out of the box.
In order to make our lives easier as developers, the [Micronaut](https://micronaut.io/) team included a declarative client to test our endpoints easily and with less verbosity. It is very simple to use: first you'll need to create a simple `interface` to "declare" the operations to be tested and delegate the binding to the `application-context` using the `url` and the requested operations.
Let's assume we need to test the `get by id operation` for users:
```java
@Controller(value = "/user")
class UserController {
@Inject
private UserService userService;
@Get(value = "/{id}")
public User getById(@PathVariable("id") final Long id) {
return userService.get(id);
}
}
```
So, we can use the `EmbeddedServer` and the `HttpClient` (or the `RxHttpClient`) to test the operation, as it shows the following example:
```java
@MicronautTest
class UserControllerTest {
@Inject
private EmbeddedServer embeddedServer;
@Test
void testGet_WithBlocking() throws MalformedURLException {
final var url = format("http://%s:%s", embeddedServer.getHost(),
embeddedServer.getPort());
final var client = HttpClient.create(new URL(url));
final HttpResponse<User> response = client.toBlocking()
.exchange(HttpRequest.GET("/user/1"), User.class);
Assertions.assertEquals(response.getStatus(), HttpStatus.OK);
Assertions.assertNotNull(response.body());
}
}
```
As you can see, that's quite a lot to be done; easy enough, but still a lot of steps. *Is there a less verbose way?* The answer is **yes**. Now let's try the `declarative client`. First of all we need to create an `interface` to "declare" the operations to be tested:
```java
@Client(value = "/user")
interface UserClient {
@Get(value = "/{id}")
User getById(final Long id);
}
```
As you can see, it is only a *contract* to be satisfied, and the `@Client` annotation tells the `test application context` that we're creating an `HttpClient` and that it has to look for an implementation or `Controller` to satisfy the operations we're requesting.
And now our `test-class` will look as the following:
```java
@MicronautTest
class UserControllerTest {
@Inject
private UserClient userClient;
@Test
void testGet_WithBlocking() {
final var user = userClient.getById(1L);
Assertions.assertNotNull(user);
}
}
```
As you can see, it is pretty simple, and we avoid using the server and creating the request manually; we delegate satisfying our contract to the context itself. Using the `declarative client` we can improve our code further by defining a simple contract to be used as the base for both the implementation and the client.
```java
interface UserController {
@Get("/{id}")
User getById(final Long id);
}
```
For the implementation
```java
@Controller(value = "/user")
class UserControllerImpl implements UserController {
@Inject
private UserService userService;
@Override
public User getById(@PathVariable("id") final Long id) {
return userService.get(id);
}
}
```
For the test class
```java
@MicronautTest
class UserControllerTest {
@Client(value = "/user")
@Inject
private UserClient userClient;
@Test
void testGet_WithBlocking() {
final var user = userClient.getById(1L);
Assertions.assertNotNull(user);
}
}
```
This leaves us with a clearer and simpler version, avoiding unnecessary classes or interfaces; now we can ensure we're satisfying exactly the same contract in the implementation and the test.
I'll leave the official documentation for the [Declarative Http Client](https://guides.micronaut.io/micronaut-http-client-groovy/guide/index.html).
Great coding!
| mrduckieduck |
159,359 | Setting Up openSuse... And getting My official OS tester | The past few days I have been kinda going under the radar. Does anybody know anything about "networki... | 0 | 2019-08-20T22:59:03 | https://dev.to/lazar/setting-up-opensuse-and-getting-my-official-os-tester-4eod | beginners, testing, linux, operatingsystems | The past few days I have been kinda going under the radar. Does anybody know anything about "networking issues"? That's what our internet or *"innernet"* provider told us. Anyway in short it would load website ***REALLY*** slow, and some just refused to load at all (Dev.to) was one.
Anyway, today all of a sudden, the WiFi started working again, so I decided to let my people know I am alive and kicking.
So what is this about openSuse?
A few days ago, I was thinking about writing an article on a distro I had never written about before. Well, I decided that first I would need a distro I had never tried before. [This is what I ended up looking up](https://www.google.com/search?q=linux+distros&oq=linux+distros&aqs=chrome..69i57j0l5.5229j0j0&sourceid=chrome&ie=UTF-8) Tell you the truth, I have tried about every distro on there... except: 
Well, I might just have to download it!
Well let me say I got it wrong the first 2 times.
1. I downloaded openSuse live... I don't know what I was seeing, and they explicitly tell you not to download it for installation purposes, so I meddled around for an hour reading the docs wondering why I was getting strange errors.
2. I downloaded the Net-only version, which is a serious mistake for a laptop like mine, I tend to have network issues all day long with installers. (Any help on that, ping me.)
#### So I downloaded the main version, and it finally worked.
The installer gives you 5 "desktops" (2 aren't)
1. Gnome
2. Kde Plasma
3. Generic Desktop
4. Server
5. Transactional Server
Gonna tell you right now, I was ***SO*** tempted to install the Generic Desktop just to see how it looked! But my good senses ruled out... and I ended up installing KDE.
Next was suggested partitioning. It did about everything right, except giving me a swap space. Well, instead of going in there and manually changing it, my computer runs pretty good without one anyway, so I forged ahead.
The rest was just normal installation like name, timezone, blah blah blah. UNTIL! I saw this option to import user data from partitioning the former installation. Well I clicked on it, and **LO AND BEHOLD** "nico@niners" (Get it, Nico and the Niners? I did that for fun...) showed up.
#### **All Linux Distros! You *ALL* need this feature in your installer!!!**
Okay, let's get down to business: software, speed, and pre-installed software.
Well, boot-up ain't too slow, but it ain't too fast either. With Arch Linux on the same computer, I blink and it has already started up. (Obviously hyperbole. Probably 30 seconds realistically.)
Preinstalled software is:
1. Mahjongg Solitaire
2. Reversi
3. Patience Card game
4. Minesweeper
5. Sudoku
6. Okular
7. LibreOffice SUITE
8. Skanlite
9. Gwenview
10. Konversation
11. Kmail
12. Akregator
13. Pim Setting Exporter
14. TigerVNC
15. Sieve Editor
16. KTnef
17. Firefox
18. VLC
19. KAddressBook
20. Wacom Tablet Finder
21. Ark
22. Dolphin
23. Discover
24. System = [ YaST, Ark, Terminal, KSysGuard, Konsole]
25. KmouseTool
26. Kleopatra
27. Knotes
28. Kcalc
29. Kompare
30. Kmag
31. Spectacle
Well, gonna be honest with you. Really I don't know how I feel about pre-installed software. It actually seems like openSuse did a pretty good job not including a bunch of bloatware, and most of it is actually just stuff that comes with KDE anyway.
But all the repo software ain't spectacular. BUT, all .RPMs that work on Fedora basically work on *"Suse"*.
My first impression is alright, but I ran into a few bugs already, so I think I might have to spend a little while using the distro before I make a definite decision. Which is exactly why I just hired my "Senior Software Tester", A.K.A. my older brother. He isn't exactly a Linux groupie (said with the roll of the tongue), and is sort of a newbie to the whole Linux "thing", which is exactly why I think he will be perfect for testing, because he comes in with an open mind, unlike me. So I am going to let him use the distro for a week or so and see how he likes it.
Okay, y'all have a great day!
| lazar |
159,609 | Is talking with a recruiter as a junior dev worth it? | We interviewed a recruiter this week on Tech Jr, he spilled all his dirty secrets on how you can get hired! | 0 | 2019-08-21T14:08:26 | https://dev.to/techjr/is-talking-with-a-recruiter-as-a-junior-dev-worth-it-5amm | career, beginners, webdev | ---
title: Is talking with a recruiter as a junior dev worth it?
published: true
cover_image: "https://techjr.s3.amazonaws.com/images/techjrsite.png"
description: We interviewed a recruiter this week on Tech Jr, he spilled all his dirty secrets on how you can get hired!
tags: career, discuss, beginners, webdev
---
This week on [Tech Jr](https://techjr.dev/episodes/2019/recruiter-mike-torres-shares-job-search-strategies-and-secrets) we spoke with recruiter-turned-dev Mike Torres about dealing with recruiters during the job search.
{% podcast https://dev.to/techjrpodcast/recruiter-mike-torres-shares-job-search-strategies-and-secrets %}
We didn't have much luck with recruiters as junior developers, but what about you?
Let us know in the comments if you've had good/bad luck talking to recruiters. | leewarrickjr |
159,754 | Random Server Ports and Spring Cloud Service Discovery with Netflix Eureka |
Recently, I've had an issue were I want to be able to run multiple spring boot ... | 0 | 2019-08-21T19:18:12 | https://blog.joshmlwood.com/eureka-spring-cloud-random-port/ | springcloud, springframework | ---
title: Random Server Ports and Spring Cloud Service Discovery with Netflix Eureka
published: true
tags: spring cloud,spring framework
canonical_url: https://blog.joshmlwood.com/eureka-spring-cloud-random-port/
---

Recently, I've had an issue where I want to be able to run multiple Spring Boot services locally for testing and development purposes. Unfortunately they all run on the same port, so they fail to start!
Fixing this is simple enough: in our Spring Boot application, we can just configure the server port in our `application.yml` like so:
```yaml
server:
  port: 0
```
Setting the port to `0` causes a random port to be chosen at startup. If we want to specify another port, we can manually set it to `8081` or `8082`, etc. However, manually specifying ports is painful, especially if you have many services you'd like to run.
Another downside to specifying ports manually is deployments. When you deploy your application to a server (on-prem or in the cloud), then you have to make sure that there are no port collisions on the server the application is being deployed to. This is a tedious and time consuming effort for the team responsible for deploying services, and annoying to the developer for documenting the port(s) the application requires. To complicate matters further, imagine you are deploying in a docker container and using a container orchestration framework. Likely you will not know ahead of time which node the application will actually be deployed on, if it's successfully deployed at all. In this instance your orchestrator could run into port conflicts, or fail to provision your application due to a lack of nodes with your container's port available.
Setting a random port comes with downsides as well. It is difficult to access your service locally since it is always on a different port after restarting. This can be addressed by using a gateway, like Spring Cloud Gateway and configuring auto-discovery using Spring Cloud Discovery with Netflix Eureka to direct traffic to your application through a common gateway port. Again, we will run into another problem, your application during initialization will report port `0` (random) to Eureka as the port it's running on, resulting in an unreachable service.
This is not what we want, so we need a way to customize initialization of the application to choose a random port for us and allow Eureka / Service Discovery to configure itself with that new random port. We can achieve this in Spring Boot 2 (Spring Framework 5+) by implementing the `WebServerFactoryCustomizer` interface like so:
```java
@Configuration
public class WebServerFactoryCustomizerConfiguration implements WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> {

    @Value("${port.number.min:8080}")
    private Integer minPort;

    @Value("${port.number.max:8090}")
    private Integer maxPort;

    @Override
    public void customize(ConfigurableServletWebServerFactory factory) {
        int port = SocketUtils.findAvailableTcpPort(minPort, maxPort);
        factory.setPort(port);
        System.getProperties().put("server.port", port);
    }
}
```
Here we'll use Spring's SocketUtils to find a port that is available in our range, set that port on the servlet web server factory configuration, and also set it as a system property. Doing this will ensure that our application will get an available port that doesn't collide with a port already in use and it will also allow our service discovery to initialize itself properly!
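If you're curious what this find-an-available-port logic looks like outside the JVM, here is a rough Python sketch of the same idea. It is only an approximation of what `SocketUtils.findAvailableTcpPort` does, not Spring's actual implementation, and note there is an inherent race between checking that a port is free and actually binding a server to it:

```python
import socket

def find_available_tcp_port(min_port=8080, max_port=8090):
    """Scan a range and return the first port we can bind to."""
    for port in range(min_port, max_port + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind(("127.0.0.1", port))
                return port  # bind succeeded, so the port is free right now
            except OSError:
                continue  # port already in use, try the next one
    raise RuntimeError(f"No free port in range {min_port}-{max_port}")

print(find_available_tcp_port())
```

The same caveat applies in both languages: another process can grab the port between the check and your server's real bind, which is why letting the server bind port `0` itself is the race-free option.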
On a side note, I have not tested this in a production environment or deployed it in any manner, so it's possible there will be some issues. This solution may also be difficult to use if your deployment process requires knowing the port of the application ahead of time. However, if your services all depend on Eureka for service discovery amongst themselves, Eureka will report the correct application port and IP address for your service to be reachable. As long as your services do not require additional port forwarding for your port(s) to be reachable outside of the host, this should work just fine. | jmlw |
159,936 | Designing an API | Over the last year I have been given the chance to work with some amazing people and we all have been... | 0 | 2019-08-22T13:15:58 | https://dev.to/jccguimaraes/designing-an-api-1e3d | microservices, design, productivity, opensource | Over the last year I have been given the chance to work with some amazing people and we all have been developing micro-services that expose RESTful APIs.
Throughout this year we had to integrate with other APIs, such as RESTful, NVP - Name-Value Pair, etc., both internal and from external 3rd parties.
Making sure the project runs without any major incident is hard work. It involves a lot of people from a lot of departments and their ability to share information is the most vital part before developing the API.
> In no way am I stating that these are the right approaches to take when designing an API. I am just sharing what I have learned the hard way.
So here are some points I find important to keep in mind when given a new task, a.k.a. building a new micro-service.
- Understand the API dependencies;
- a *dependency* can be an internal API, external API or an AWS service you need to integrate with, etc...
- Understand if those dependencies provide all the functionality the API requires (both tech and business);
- does it fit the purpose;
- how well it is documented/supported;
- is it too strict;
- how flexible regarding unforeseen/future changes.
- Understand what the API **MUST** deliver;
- current deliver;
- future deliveries *(although in an agile environment this is a fuzzy topic)*.
- An API contract;
- a *contract* is an agreement between two or more parties that develop/consume the API.
- API documentation;
- API testing.
All above topics can be grouped into 3 main categories:
1. API dependencies
2. API specification
3. API development
## API dependencies
As I stated above, this is where typically you will have the business and technical requirements and you or your team start brainstorming/spiking on the best course of action.
This means, understanding what are the API dependencies and their added value.
From personal experience, you always miss some edge cases so it is a good practice to use UML sequence diagrams to help you structure how your endpoints will behave, such as request headers, payloads, responses, etc.
### What is a UML sequence diagram?
A **UML sequence diagram** describes how operations are carried out in an interactive format. It is time-based and shows the order of those interactions. It also specifies all the participants in the workflow.
A visual interpretation helps you keep track of the workflow and define happy paths as well as errors from any 3rd party API and how your own API should deal with that information.
How can we as developers take advantage of sequence diagrams? By using tools such as [PlantUML](http://plantuml.com/) or [mermaidJS](https://mermaidjs.github.io/), which allow us to generate diagrams from textual representations.
A simple example with PlantUML (taken from the official website):
```puml
@startuml
Alice -> Bob: Authentication Request
Bob --> Alice: Authentication Response
Alice -> Bob: Another authentication Request
Alice <-- Bob: Another authentication Response
@enduml
```
which will generate the next image:
![small-example][small-example]
This is a cool feature because it can be version controlled.
I find that [mermaidJS](https://mermaidjs.github.io/) is still a little behind [PlantUML](http://plantuml.com/) in terms of integrations and functionality, but they are both powerful tools and I have used both in different contexts. You should use the one that best fits your needs.
If you use [Confluence](https://www.atlassian.com/software/confluence), there is a nice plugin for [PlantUML](http://plantuml.com/).
## API specification
After you have defined the diagrams, the next step is to start drafting the contract.
This contract should be "signed off" in a standard specification that most developers can be familiarised with. Luckily the **[OpenAPI Specification](https://www.openapis.org/)** has been here for a while.
The specification is written in YAML, and it can also be version controlled.
Once again, and from personal experience, the drafted contract may undergo small to medium changes. This is normal; it's a contract where more than one team is involved, and feedback is always a good thing.
Always be open to suggestions but don't forget that your team owns the API.
Discussion is healthy and allows us to see different angles to achieve the same goal.
Keep in mind that your API may also suffer changes in the future which may impact production environments. So think wisely about versioning your API, whether through path versioning such as `/v1`, or through headers such as GitHub's `Accept: application/vnd.github.v3+json`.
> If you are like me and your OCD kicks in when the topic "API versioning" comes to the table then read this interesting post about Evolvable APIs from Fagner Brack - [To Create An Evolvable API, Stop Thinking About URLs](https://medium.com/@fagnerbrack/to-create-an-evolvable-api-stop-thinking-about-urls-2ad8b4cc208e).
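As a tiny illustration of the header-based approach mentioned above, here is a Python sketch (the function name and fallback behaviour are my own invention, not any standard library) that pulls a version out of a GitHub-style `Accept` header:

```python
import re

def api_version(accept_header, default="v1"):
    """Extract a version token (e.g. 'v3') from a vendor media type."""
    match = re.search(r"vnd\.\w+\.(v\d+)", accept_header or "")
    return match.group(1) if match else default

print(api_version("application/vnd.github.v3+json"))  # v3
print(api_version("application/json"))                # v1 (falls back to the default)
```

A gateway or router could use a helper like this to dispatch the request to the right handler version without touching the URL.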
## API development
It's time to take all the value we gained before and start turning it into code. Just make sure to follow the contract and protect the micro-service against unexpected 5xx errors popping up in production.
But depending on the language you choose to code in, a big part of the development is testing - unit, functional, etc...
With the right tools you can prepare functional test scenarios by using [Postman](https://www.getpostman.com/) or [Insomnia](https://insomnia.rest/).
[Postman](https://www.getpostman.com/) has a neat feature which is called [Newman](https://github.com/postmanlabs/newman) where you can run a collection against a file to check if your endpoints follow the contract.
So far I have shared tools that can be version controlled along with the current code, making it easy to keep all of them in sync.
# Demoing with an example
Nothing is better than an "almost real" example to demonstrate everything described above.
This example is based on making capture, void and refund transactions, given an authorization identifier.
> an authorization identifier means that we locked some amount from the payment method used by a customer.
## Fictional requirements
- capture the authorization;
- charge the account an amount lower or equal than the locked funds in a specified currency;
- returns a transaction identifier for possible refund.
- void the authorization;
- release the locked funds.
- refund the account;
- providing a valid transaction identifier;
- returns a refund transaction identifier.
- all above actions **MUST** be validated against another fictional internal API;
- validate that accountId is linked to the authorizationId.
- all above actions **MUST** have a required [X-Api-Key](https://api.data.gov/docs/api-key/) header;
- for security reasons.
- all above actions **SHOULD** have an [X-Correlation-Id](https://hilton.org.uk/blog/microservices-correlation-id) header.
- for keeping track of workflows.
Let's name the micro-service as `process-transactions`.
## Planning with sequence diagrams
From the previous fictional requirements we can define 3 participants:
- **USER** - The API consumers;
- **MS** - The API micro-service;
- **API** - The API consumed.
A draft of the diagram should resemble as the following image:
![capture][capture]
### Tooling for PlantUML
Consider the following file structure.
<pre>
<b>./images
./plantuml
├── capture.puml</b>
</pre>
Where `capture.puml` has the following content.
```puml
@startuml
participant "USER" as A
participant "MS" as B
participant "API" as C
title //process-transactions// micro-service capture workflow
rnote left A
**headers**
X-Api-Key<font color="red">*</font> //<string>//
X-Correlation-Id //<string>//
end note
activate A
A -> B: **POST** ""/capture/:authorizationId""
rnote left A
**payload**
accountId:<font color="red">*</font> //<string>//
amount:<font color="red">*</font> //<number>//
currency:<font color="red">*</font> //<string>//
end note
rnote left B
**headers**
X-Api-Key<font color="red">*</font> //<string>//
X-Correlation-Id //<string>//
end note
activate B
B -> C: **POST** ""/validate""
rnote left B
**payload**
accountId:<font color="red">*</font> //<string>//
authorizationId:<font color="red">*</font> //<string>//
end note
alt success request
rnote right B
**headers**
X-Correlation-Id //<string>//
end note
activate C
B <-- C: ""**200** OK""
rnote right B
**payload**
success: //true//
end note
|||
B -> B: capture amount
activate B
deactivate B
rnote right A
**headers**
X-Correlation-Id //<string>//
end note
A <-- B: ""**200** OK""
rnote right A
**payload**
transactionId: //<string>//
end note
|||
else failure request
rnote right B
**headers**
X-Correlation-Id //<string>//
end note
B <-- C: ""**200** OK""
deactivate C
rnote right B
**payload**
success: //false//
end note
rnote right A
**headers**
X-Correlation-Id //<string>//
end note
A <-- B: ""**422** UNPROCESSABLE ENTITY""
deactivate B
rnote right A
**payload**
error: //true//
reason: Conditions could not be met
end note
|||
end
deactivate A
@enduml
```
We can use the package `node-plantuml` for generating the sequence diagram as an image.
- `npm install node-plantuml`
- `puml generate -s -o ./images/capture.svg ./plantuml/capture.puml`
Now we have a version controlled file that describes our `/capture` endpoint.
## Writing the contract in the OpenAPI Specification
`PlantUML` gives us a pretty good view of what the `capture` endpoint expects as a request and responses.
Remember that at this point the micro-service logic is still a **black-box**, and it should remain that way for now.
We are trying to achieve a contract that does what business is expecting.
It's also expected that all the dependencies of the micro-service regarding 3rd/internal parties APIs are clear on their purposes and which of their endpoints suits our needs.
In our `capture` endpoint we assume some generic response. But we could be calling x number of endpoints if needed before the `capture` replies with anything.
Anyways, the OpenAPI is defined as a yaml file with all the specifications.
But if we have a few endpoints and a lot of responses, it might be useful to have separate files for each section of the Specification.
Ultimately this will ease the burden of maintaining the specification.
### Organising the contract structure
Updating the above file structure.
<pre>
./images
├── capture.svg
./plantuml
├── capture.puml
<b>./open-api
├── components
│ ├── headers
│ │ └── x-correlation-id.yaml
│ ├── headers.yaml
│ ├── parameters
│ │ ├── authorization-id.yaml
│ │ └── x-correlation-id.yaml
│ ├── responses
│ │ ├── capture-200.yaml
│ │ └── capture-422.yaml
│ ├── responses.yaml
│ ├── schemas
│ │ ├── capture-200.yaml
│ │ ├── capture-422.yaml
│ │ └── capture.yaml
│ └── schemas.yaml
├── components.yaml
├── index.yaml
├── info.yaml
├── paths
│ └── capture.yaml
└── paths.yaml</b>
</pre>
> Instead of using the normal `'#/components/...'` form, each `$ref` is a relative link to a file, which after the compile step will be properly resolved into OAS refs.
Content of `./open-api/index.yaml`:
```yaml
openapi: 3.0.2
tags:
  - name: capture
info:
  $ref: './info.yaml'
paths:
  $ref: './paths.yaml'
components:
  $ref: './components.yaml'
security:
  - X-Api-Key: []
```
Content of `./open-api/paths.yaml`:
```yaml
/capture/{authorizationId}:
  post:
    $ref: './paths/capture.yaml'
```
Content of `./open-api/paths/capture.yaml`:
```yaml
summary: Capture an amount
tags:
  - capture
operationId: capturePost
parameters:
  - $ref: '../components/parameters/authorization-id.yaml'
  - $ref: '../components/parameters/x-correlation-id.yaml'
requestBody:
  content:
    application/json:
      schema:
        $ref: '../components/schemas/capture.yaml'
responses:
  '200':
    $ref: '../components/responses/capture-200.yaml'
  '422':
    $ref: '../components/responses/capture-422.yaml'
```
### Tooling for OAS
We can use the package `swagger-cli` for generating the compiled Specification file.
- `npm install swagger-cli`
- `swagger-cli bundle -o open-api.yaml --type yaml open-api/index.yaml`.
And the full specification:
```yaml
openapi: 3.0.2
tags:
  - name: capture
info:
  version: 1.0.0
  title: Process Transactions Micro-service
  description: 'Capture, void and refund an account.'
paths:
  '/capture/{authorizationId}':
    post:
      summary: Capture an amount
      tags:
        - capture
      operationId: capturePost
      parameters:
        - name: authorizationId
          description: Authorization Id which allows to capture the locked funds
          in: path
          required: true
          schema:
            type: string
        - name: X-Correlation-Id
          description: Correlation Id to keep track of workflow
          in: header
          schema:
            type: string
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/capture'
      responses:
        '200':
          $ref: '#/components/responses/capture-200'
        '422':
          $ref: '#/components/responses/capture-422'
components:
  securitySchemes:
    X-Api-Key:
      type: apiKey
      in: header
      name: X-Api-Key
  responses:
    capture-200:
      description: Capture of funds has succeeded
      headers:
        X-Correlation-Id:
          $ref: '#/components/headers/X-Correlation-Id'
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/capture-200'
    capture-422:
      description: Capture did not succeed
      headers:
        X-Correlation-Id:
          $ref: '#/components/headers/X-Correlation-Id'
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/capture-422'
  schemas:
    capture:
      type: object
      required:
        - accountId
        - amount
        - currency
      properties:
        accountId:
          type: string
        amount:
          type: number
          example: 9.99
        currency:
          type: string
          example: EUR
    capture-200:
      type: object
      properties:
        transactionId:
          type: string
          description: The transaction id which will allow to refund
    capture-422:
      type: object
      properties:
        success:
          type: boolean
          default: false
        reason:
          type: string
          example: Conditions could not be met
  headers:
    X-Correlation-Id:
      schema:
        type: string
security:
  - X-Api-Key: []
```
## Testing against the API
Now that we have defined the diagrams and the contract is settled, we are ready to implement it.
Let's say you have a server up and running with all of the requirements in place.
Wouldn't it be better to have a Postman collection based on the OAS instead of manually creating it?
### Converting OpenAPI to Postman Collection
We can use the package `openapi-to-postmanv2` for generating the Postman collection.
- `npm install openapi-to-postmanv2`
- `openapi2postmanv2 -s open-api.yaml -o postman-collection.json -p` will generate a Postman collection that is almost pre-filled.
Obviously, you will need to fill in the blanks such as the `X-Api-Key` and the like.
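Beyond importing the collection, it can also help to sanity-check request bodies against the contract by hand. Here is a tiny Python sketch (the helper name and approach are mine, unrelated to any OAS tooling) that validates a capture payload against the required fields from the schema above:

```python
def validate_capture_payload(payload):
    """Check a /capture request body against the contract's capture schema."""
    required = {"accountId": str, "amount": (int, float), "currency": str}
    errors = []
    for field, expected_type in required.items():
        if field not in payload:
            errors.append(f"missing required field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field} has wrong type")
    return errors

print(validate_capture_payload({"accountId": "acc-1", "amount": 9.99, "currency": "EUR"}))  # []
print(validate_capture_payload({"amount": "9.99"}))  # missing/wrong-type errors
```

In a real service you would lean on a schema validator driven directly by the OpenAPI file instead of hand-rolling checks, but the idea is the same: reject 4xx-able input at the boundary instead of letting it surface as a 5xx.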
## Conclusions
Building the Postman collection from the OpenAPI Specification will help you find gaps during development.
Just keep in mind that changes to the contract often happen during development, or even when all the functionality is being tested.
I hope this workflow helps speed up and tighten your development process.
> **What are your thoughts? Please share your experience and all constructive feedback is welcome!**
# Links
**Specifications**
- [UML Specification](https://www.omg.org/spec/UML/)
- [OpenAPI Specification](https://www.openapis.org/)
**Tools**
- [PlantUML](http://plantuml.com/)
- [mermaidJS](https://mermaidjs.github.io/)
- [Postman](https://www.getpostman.com/)
- [Insomnia](https://insomnia.rest/)
- [Newman](https://github.com/postmanlabs/newman)
- [Swagger Editor](https://editor.swagger.io/)
- [LiveUML](https://liveuml.com/)
- [node-plantuml](https://github.com/markushedvall/node-plantuml)
- [swagger-cli](https://github.com/APIDevTools/swagger-cli)
- [openapi-to-postmanv2](https://github.com/postmanlabs/openapi-to-postman)
[small-example]: https://thepracticaldev.s3.amazonaws.com/i/6d99osiyx4nluwrjastf.png
[capture]: https://thepracticaldev.s3.amazonaws.com/i/hw2d3up9w0aqav7ey0xc.png
| jccguimaraes |
160,043 | The Ultimate Beginner Git Cheatsheet | Hello everyone! I wanted to share a guide that I give to everyone who wants to start understand Git... | 0 | 2019-08-22T14:06:47 | https://dev.to/kennethacurtis/the-ultimate-beginner-git-cheatsheet-h32 | beginners, git, github | Hello everyone!
I wanted to share a guide that I give to everyone who wants to start understand Git at a basic and advanced level. It's something that I still use and reference every day at work. Enjoy [The Ultimate Beginner Git Cheatsheet](https://mukulrathi.com/git-beginner-cheatsheet/)! | kennethacurtis |
161,033 | Performance benchmark: gRPC vs. REST in .NET Core 3 Preview 8 | Evaluating Performance of REST vs. gRPC in .NET Core 3 Preview 8 | 0 | 2019-08-25T07:07:54 | https://dev.to/thangchung/performance-benchmark-grpc-vs-rest-in-net-core-3-preview-8-45ak | grpc, rest, dotnet, dotnetcore | ---
title: Performance benchmark: gRPC vs. REST in .NET Core 3 Preview 8
published: true
description: Evaluating Performance of REST vs. gRPC in .NET Core 3 Preview 8
tags: grpc,rest,dotnet,dotnetcore
---
Recently, I have used gRPC and REST in the same host on my project. And I have always wondered what, at the very least, makes gRPC and REST different when doing inter-service communication inside a Kubernetes cluster.

I found a blog post about this performance benchmark written by [Ruwan Fernando](https://medium.com/@EmperorRXF). It was awesome, and things became clear after I spent some time looking through the code base. I decided to fork it and write some more code against .NET Core 3 Preview 8 to see how different the gRPC and REST protocols are in this new SDK published by Microsoft, which I am also using for my project.
Let's run the benchmarks as follows:
- REST API:
```bash
$ cd RESTvsGRPC\RestAPI
$ dotnet run -p RestAPI.csproj -c Release
```
- gRPC API:
```bash
$ cd RESTvsGRPC\GrpcAPI
$ dotnet run -p GrpcAPI.csproj -c Release
```
- Benchmark project:
```bash
$ cd RESTvsGRPC\RESTvsGRPC
$ dotnet run -p RESTvsGRPC.csproj -c Release
```
Now all we can do is wait until it finishes, and we will get a result like the one below.

When you fetch small data with REST, it is quite a bit faster than gRPC:
```bash
$ RestGetSmallPayloadAsync | 100 | 14.99 ms | 0.2932 ms | 0.2743 ms |
```
```bash
$ GrpcGetSmallPayloadAsync | 100 | 19.60 ms | 0.3096 ms | 0.2896 ms |
```
This, I think, is because the .NET Core team has already optimized the performance of JSON processing in the core. But REST is far less effective with large data. See below:
```bash
$ RestGetLargePayloadAsync | 100 | 1,181.00 ms | 13.9860 ms | 12.3982 ms |
```
```bash
$ GrpcGetLargePayloadAsListAsync | 100 | 187.93 ms | 1.7881 ms | 1.6726 ms |
```
You can see how different gRPC and REST are when we deal with a big chunk of data.
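One reason for that gap is serialization: gRPC ships Protobuf, a compact binary encoding, while REST typically ships text JSON. This little Python sketch (illustrative only, not the benchmark's actual serialization) shows how much smaller a packed binary encoding of the same records can be:

```python
import json
import struct

# 1,000 (id, value) records, similar in spirit to a "large payload"
records = [(i, i * 0.5) for i in range(1000)]

# Text-based JSON, what a REST endpoint typically returns
json_payload = json.dumps(
    [{"id": i, "value": v} for i, v in records]
).encode("utf-8")

# Packed binary: int32 + float64 per record, the kind of compact
# framing that Protobuf-style encodings rely on
binary_payload = b"".join(struct.pack("<id", i, v) for i, v in records)

print(len(json_payload), len(binary_payload))  # binary comes out much smaller
```

Fewer bytes on the wire, plus no text parsing on the receiving end, is a big part of why the large-payload numbers favor gRPC.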
### Summary
The .NET team has made a huge difference when dealing with small payloads, but with large payloads that advantage is gone; gRPC is still the winner in this area. I'm not saying one is better than the other. What I am saying is that we need an appropriate strategy for which protocol to use in each of our business cases.
I usually use REST for communication with the external world, such as external service integration and communication with the front-end. All communication inside the Kubernetes cluster is gRPC, because of the power we get from HTTP/2. I know we can configure HTTP/2 with REST on Kestrel as well, but it comes with a cost and is less effective because we need to maintain the certificate in Kestrel; normally we offload TLS inside the Kubernetes cluster to keep communication simple and fast.
I love the way the Kubernetes architecture pairs payload formats with transport protocols, as shown below:

The source code can be found at https://github.com/thangchung/RESTvsGRPC, and the original code-based at https://github.com/EmperorRXF/RESTvsGRPC
### References:
- https://medium.com/@EmperorRXF/evaluating-performance-of-rest-vs-grpc-1b8bdf0b22da
- https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel
- https://gooroo.io/GoorooTHINK/Article/16623/One-Weird-Trick-To-Improve-Web-Performance/21564#.Vx9o5UdkldB
- https://devblogs.microsoft.com/aspnet/asp-net-core-2-2-0-preview1-http-2-in-kestrel/
- https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hacked/
| thangchung |
161,092 | Problems that forced me to leave ProtonVPN for Mullvad | This purely based on my own experience with both VPN apps. Others may find it dif... | 0 | 2019-08-27T10:05:51 | https://kevinhq.com/problems-that-forced-me-to-leave-protonvpn-for-mullvad/ | vpn, protonvpn, protonvpnreview | ---
title: Problems that forced me to leave ProtonVPN for Mullvad
published: true
tags: VPN,protonvpn,protonvpn review,vpn
canonical_url: https://kevinhq.com/problems-that-forced-me-to-leave-protonvpn-for-mullvad/
---
This is purely based on my own experience with both VPN apps. Others may find it different because there are so many factors involved. Please keep in mind that Mullvad doesn't have an affiliate program at the time I write this, which means I am not writing this to promote their service. The comparison may not… [Continue reading »Problems that forced me to leave ProtonVPN for Mullvad](https://kevinhq.com/problems-that-forced-me-to-leave-protonvpn-for-mullvad/)  | kevinhq |
161,185 | Finding A Tech Job Made Easy | As a computer science graduate of Northern Illinois University. I would like to share my experience a... | 0 | 2019-08-25T19:01:07 | https://dev.to/varuntangalla/finding-a-tech-made-easy-4l5o | hiring, career, beginners, computerscience | As a computer science graduate of Northern Illinois University. I would like to share my experience and knowledge gained during the job search.
If you are a Full Stack Developer looking for a job in the market, here are the websites where you can submit your job applications:
LinkedIn
Indeed
Dice
Vault
Company websites (you can find openings in the careers portal of each company)
College websites (every university has its own career portal)
Glassdoor
AngelList
Stack Overflow
Monster
ZipRecruiter
Krop
Mashable
The Muse
SimplyHired
RubyNow
Do you have any tips or resources you would like to share?
| varuntangalla |
161,218 | Strive to be known for how you made people feel and not what you've done | Some thoughts on my career in technology so far and what I plan to focus on going forward | 0 | 2019-08-25T20:40:02 | https://www.daveinden.com/be-known-for-how-you-made-people-feel/ | career, discuss, beginners | ---
title: Strive to be known for how you made people feel and not what you've done
published: true
description: Some thoughts on my career in technology so far and what I plan to focus on going forward
tags: career, discuss, beginners
cover_image: https://thepracticaldev.s3.amazonaws.com/i/on3mz46470a3vmyzl1b8.jpg
---
>Cover image by [Christal Yuen](https://unsplash.com/@dearskye) on [Unsplash](https://unsplash.com)
I recently left my job to focus on my education. Working as a Developer Advocate I could see the gaps in my knowledge about code that came from teaching myself the last few years and how it could hold me back a little. After a lot of discussion, both with myself and my wife, I made the scary decision to go back to school. Specifically, I am doing a coding bootcamp. I want to get a focused education to build up my software development skills and fill in the gaps of my knowledge while also having the ability to be a part of my family. I did some research on a few and landed on Lambda School. At the time of this writing I just finished my third week. I am really enjoying it and having the opportunity and privilege to go back to school is awesome. Once I am done I may go back into a Developer Advocate role or work to get a position somewhere as a Software Developer. I still have the itch and desire to be a Software Developer specifically. Being a Developer Advocate was something I really liked and I will probably come back to it after having some time as a developer under my belt. With more experience I can be of better service to people in a role like that. Ultimately, I like my work to be something where I can make a positive impact on people's lives. Perhaps I will go into teaching people how to code or something similar. For now, I just want to be part of building something.
After I gave my notice to my manager and told a few other folks who needed to know word of my departure began to get around the company. I had folks coming up to me in person or reaching out via chat or email to tell me they heard I was leaving and asking if it was true. Each one of them told me how happy they were for me to be making the choice of going back to school, but that they were sad for me to be leaving. They took the moment to let me know how much they appreciated me and my positive attitude at work or gave some example of how I made them feel happy with the things I did. Some people noted they learned a lot from me while others just thought I was awesome to be around. It was very humbling and quite special to hear these things.
This also sparked some reflection of my own and I began to wonder to myself if anyone would bring up something I did, like a specific action of some kind rather than just general sentiments. I began to question what I had actually accomplished in the years I spent at the company if no one was bringing anything up that I had done. Didn't they remember anything I did to help our customers and fellow employees? Then as I thought about it more I realized that people were taking their time to tell me how I made them feel and it was important to them that I knew I made them happy in some way. My attitude and the way I carried myself brought them a small bit of joy in their life. That is a greater impact on someone than any specific action I could have done.
I plan to take this forward with me in my career. Wherever I end up after completing Lambda School, I want to make sure it is a place where I can positively impact people, both the company's customers and especially my fellow employees. I will strive to be someone who brings a positive attitude to the team and helps make people feel good about their work. If you are starting out in your career, or even if you have been working somewhere for a while, I would challenge you to do the same. It's great that we can make big sales or solve tough technical issues, and those things deserve to be celebrated, but only in the moment. When it comes time to look back on what you have accomplished, strive to be someone that people will remember as a person who built people up and made their lives better. | daveskull81 |
161,800 | Caching in Python Applications | Caching in Python Caching, is a concept that was gifted to software world from the... | 0 | 2020-01-23T12:32:21 | https://bhavaniravi.com/WIP_caching_in_python/ | ---
title: Caching in Python Applications
published: true
tags:
canonical_url: https://bhavaniravi.com/WIP_caching_in_python/
---
# Caching in Python
Caching is a concept the software world borrowed from the hardware world. A cache is a temporary storage area that keeps frequently used items close at hand for easy access. To put it in layman's terms, it is the chair we all have.

This blog covers the basics of
1. What are Caches?
2. Caching Operations
3. Cache Eviction Policies
4. Implementation of Cache Eviction Policies
5. Distributed Caching
6. Caching In Python
## Conventional Caches
In the world of computer science, caches are hardware components that store the results of computation for easy and fast access. The major factors that contribute to their speed are memory size and location. A cache's memory is much smaller than RAM, which reduces the number of scans needed to retrieve data, and caches sit closer to the consumer (the CPU), hence the lower latency.
## Caching Operations
There are two broad types of cache operations: reads and writes. Caches such as browser caches, server caches, proxy caches, and hardware caches all work on the principle of `read` and `write` caches.
When dealing with caches, there is always a large chunk of storage that is time-consuming to read from and write to (a DB, a hard disk, etc.). The cache is a piece of software/hardware sitting on top of it that makes the job faster.
### Read Cache
A read cache is a storage that stores the accessed items. Every time the client requests data from storage, the request hits the cache associated with the storage.
1. If the requested data is available on the cache, then it is a **cache hit**.

1. If not, it is a **cache miss**.

1. Now suppose that, after data is loaded into the cache, some other process changes the data in storage. At this point the cache has to be reloaded with the newly changed data; this is **cache invalidation**.

### Write Cache
Write caches, as the name suggests, enable fast writes. Imagine a write-heavy system; we all know that writing to a DB is costly. Caches come in handy here and absorb the DB write load, which is later flushed to the DB in batches. It is important to note that the data between the DB and the cache should always stay synchronized. There are three ways to implement a write cache:
1. Write Through
2. Write Back
3. Write Around

#### Write Through
The write to the DB happens through the cache: every time new data is written to the cache, it is also written to the DB.
**Advantage** - There is never a mismatch between the cache and the storage.
**Disadvantage** - Both the cache and the storage need to be updated on every write, creating overhead instead of improving performance.
#### Write Back
Write back is when the cache asynchronously updates the values in the DB at set intervals.
This method swaps the advantage and disadvantage of _write through_: writing to the cache is faster, but at the cost of possible **data loss and inconsistency** if the cache fails before flushing.
#### Write Around
Write the data directly to the storage and load the cache only when the data is read.
**Advantages**
- A cache is not overloaded with data that is not read immediately after a write
- Reduces the latency of the write-through method
**Disadvantages**
- Reading recently written data will cause a cache miss and is not suitable for such use-cases.
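As a rough sketch (not production code), the write-through and write-back strategies differ only in when the backing store is touched; `db` below is a plain dict standing in for the real database.

```python
# Write-through: cache and store are updated together on every write.
class WriteThroughCache:
    def __init__(self, db):
        self.db = db
        self.cache = {}

    def put(self, key, value):
        self.cache[key] = value  # update the cache...
        self.db[key] = value     # ...and the store, synchronously


# Write-back: only the cache is touched on write; dirty keys are
# persisted later in a batch. Data in `dirty` is lost on a crash.
class WriteBackCache:
    def __init__(self, db):
        self.db = db
        self.cache = {}
        self.dirty = set()  # keys not yet persisted

    def put(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)  # fast: no store round-trip here

    def flush(self):
        # run periodically or on a timer
        for key in self.dirty:
            self.db[key] = self.cache[key]
        self.dirty.clear()
```

With write-through, `db` is always consistent after `put`; with write-back, it only catches up after `flush`, which is exactly the trade-off described above.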
## Cache Eviction Policies
Caches make reads and writes fast, so it might seem sensible to read and write all data to and from caches instead of using DBs. But remember, the speed comes only because caches are small: the larger the cache, the longer it takes to search through it.
So it is important to optimize for space. Once a cache is full, we can make room for new data only by removing entries that are already in the cache. Again, it cannot be a guessing game; we want to evict the entries least likely to be needed so the cache stays useful.
The algorithm used to decide which data is discarded from a cache is a **cache eviction policy**. Common policies include:
1. LRU - Least Recently Used
2. LFU - Least Frequently Used
3. MRU - Most Recently Used
4. FIFO - First In First Out
#### LRU - Least Recently Used
As the name suggests, when the cache runs out of space, remove the `least recently used` element. It is simple and easy to implement, and it sounds fair, but for caching, `frequency of usage` often carries more weight than when an item was last accessed, which brings us to the next algorithm.
#### LFU - Least Frequently Used
LFU takes both the age and the frequency of data into account. The problem here is that data that was heavily used in the past can stagnate in the cache long after it stops being accessed.
#### MRU - Most Recently Used
Why on earth would someone use an MRU algorithm after all this talk about frequency of usage? Won't we always re-read the data we just read? Not necessarily. Imagine an image gallery app: the images of an album are cached and loaded as you swipe right. How likely are you to go back to the photo you just left? The probability is low, so the most recently used entry is a reasonable eviction candidate.
#### FIFO - First In First Out
When a cache works like a queue, you have a FIFO cache. This fits well for pipelines where data is read and processed sequentially.
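To make the queue behaviour concrete, here is a small FIFO cache sketch built on `collections.OrderedDict`. The class name and capacity are illustrative; the key point is that reads do not affect eviction order, only insertion order does.

```python
from collections import OrderedDict


class FIFOCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # preserves insertion order

    def get(self, key):
        # reads deliberately do not change the eviction order
        return self.data.get(key)

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            # evict the first-inserted (oldest) key
            self.data.popitem(last=False)
        self.data[key] = value
```

With a capacity of 2, inserting a third key evicts the first one regardless of how often it was read, which is exactly what distinguishes FIFO from LRU.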
## LRU Implementation
Caches are basically hash tables. Every piece of data that goes in is hashed and stored, making it accessible in O(1).
Now, how do we kick out the least recently used item? So far we only have a hash function and its data; we need to record the order of access in some fashion.
One option is an array to which we append each element as and when it is accessed, but searching and reordering an array makes this an overkill. Another hash table will not help either, since this is an ordering problem, not an access problem.
A doubly linked list fits the purpose. Move an item to the end of the list every time it is accessed, and keep a reference to its node in the hash table, enabling us to reach it in O(1).

When the element is already present, remove it from its current position and add it to the end of the linked list.
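Python's `collections.OrderedDict` bundles exactly this hash-table-plus-doubly-linked-list combination, so the scheme above can be sketched without building the linked list by hand:

```python
from collections import OrderedDict


class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # hash table backed by a doubly linked list

    def get(self, key):
        if key not in self.data:
            return None  # cache miss
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)  # evict the least recently used
        self.data[key] = value
```

`move_to_end` is the "remove from its current position and add it to the end" step, and `popitem(last=False)` pops from the front of the list, where the least recently used entry lives.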
## Where to Place the Caches?
The closer a cache is to its consumer, the faster it is, which might suggest placing the cache alongside the web server in a web application. But there are a couple of problems:
1. When a server goes down, we lose all the data in that server's cache.
2. When the cache needs to grow, it eats into the memory allocated to the server.

The most viable solution is to maintain the cache outside the server. Though this adds some latency, it is worth it for the reliability of the cache.
**Distributed caching** is the practice of hosting the cache outside the server and scaling it independently.
## When to Implement Caches?
Finding the technology to implement a cache is the easiest step of all. Caches promise high-speed APIs, and it might feel stupid not to incorporate them, but adopted for the wrong reasons they just add overhead to the system. So before implementing a cache, make sure that:
1. The load on your data store is genuinely high.
2. You have done everything you can to improve speed at the DB level.
3. You have researched the various caching methodologies and systems and found one that fits your purpose.
## Implementation In Python
To understand caching, we need to understand the data we are dealing with. For this example, I am using a simple MongoDB schema with `User` and `Event` collections.
We will have APIs to get a `User` and an `Event` by their associated IDs. The following code snippet is a helper function that fetches a document from MongoDB:
```python
def read_document(db, collection, _id):
    collection = db[collection]
    return collection.find_one({"_id": _id})
```
### Python inbuilt LRU-Caching
```python
from flask import jsonify
from functools import lru_cache


@app.route("/user/<uid>")
@lru_cache()
def get_user(uid):
    try:
        return jsonify(read_document(db, "user", uid))
    except KeyError as e:
        return jsonify({"Status": "Error", "message": str(e)})
```
LRU caching like you see in the example above is easy to set up, since Python supports it out of the box. But there are some disadvantages:
1. It is so simple that it cannot be extended with advanced functionality.
2. It supports only one caching algorithm.
3. This is a classic example of server-side caching, so there is a possibility of memory overload on the server.
4. There is no implicit cache timeout; you have to invalidate entries manually.
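On that last point, `functools.lru_cache` does expose manual controls: `maxsize` bounds the cache, `cache_info()` reports hits and misses, and `cache_clear()` is the manual invalidation. The function below is a stand-in for a real DB read.

```python
from functools import lru_cache


@lru_cache(maxsize=128)  # bound the cache so it cannot grow forever
def expensive_lookup(uid):
    # stand-in for a costly DB read
    return f"user-{uid}"


expensive_lookup(1)                   # miss: computed and cached
expensive_lookup(1)                   # hit: served from the cache
info = expensive_lookup.cache_info()  # hits=1, misses=1
expensive_lookup.cache_clear()        # manual invalidation of all entries
```

Note that `cache_clear()` is all-or-nothing; there is no per-key or time-based invalidation, which is one reason to reach for a dedicated caching extension.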
### Caching In Python Flask
For other cache backends like Redis or Memcached, Flask-Cache provides out-of-the-box support.
```python
from flask import Flask, jsonify
from flask_cache import Cache  # the Flask-Cache extension

config = {"CACHE_TYPE": "redis"}  # or "memcached"
app = Flask(__name__)
app.config.from_mapping(config)
cache = Cache(app)


@app.route("/user/<uid>")
@cache.cached(timeout=30)
def get_user(uid):
    try:
        return jsonify(read_document(db, "user", uid))
    except KeyError as e:
        return jsonify({"Status": "Error", "message": str(e)})
```
With that, we have covered what caches are, when to use one, and how to implement caching in Python with Flask.
## Resources
1. [You’re Probably Wrong About Caching](https://msol.io/blog/tech/youre-probably-wrong-about-caching/)
2. [Caching - Full Stack Python](https://www.fullstackpython.com/caching.html)
3. [Redis Vs Memcache](https://medium.com/@Alibaba_Cloud/redis-vs-memcached-in-memory-data-storage-systems-3395279b0941)
4. [Caching - A Trip Down the Rabbit Hole](https://www.youtube.com/watch?v=bIWnQ3F1eLA)
5. [All About Caching](https://www.mnot.net/blog/caching/) | bhavaniravi | |
162,146 | Openship - Open source dropshipping application and more | Openship started as a personal backend to manage my e-commerce shops. Inventory for my shops were com... | 0 | 2019-10-17T18:29:39 | https://dev.to/junaid33/openship-open-source-dropshipping-application-and-more-2j1h | [Openship](https://openship.org) started as a personal backend to manage my e-commerce shops. Inventory for my shops were coming from multiple sources like Amazon, Aliexpress, Rakuten 3PL, ShipBob, etc. Before, I was using Zapier and Google Sheets to manage routing of orders, returns, and replacements, but realized I could optimize more by connecting with the APIs directly. That's when I decided to build Openship.

Openship is a Shopify application that lets you connect and manage all your shops' operations from one dashboard. It allows you to dropship items from Aliexpress and Amazon, but also to route your orders to your own inventory stores like FBA, ShipBob, or a custom 3PL solution. Dropshipping is a good way to test items before buying in bulk, but the model is not sustainable in the long run. Building a brand and differentiating are the next steps, and Openship makes that transition smooth without locking you in to any fulfillment service or platform.

Openship is also a development tool. It allows you to access Shopify's GraphQL Admin API for each of your shops using the built-in GraphQL Playground. You can make quick changes to your products or orders right from the dashboard. You can also make micro applications that use the API. For example, our customer service agents found it cumbersome to locate an order's tracking. We built a simple input where the agent could enter the customer's name, address, or order name and it would fetch the tracking. Internally, we have also built micro applications that open tickets when we are contacted, send replacements for lost items, handle returns, buy more inventory when our Rakuten 3PL inventory is running low, etc. We plan to clean these up and release them as well.
**What's the catch? How can you offer this application for free and have it be [open source](https://github.com/openshiporg/openship)? How do you plan to monetize?**
All great questions. Our goal is simple: provide an intuitive operations backend with a clean UI that can be customized and adapted. Within this dashboard, we are building a marketplace where users can list their products for others on the platform to source from. This includes one-time orders as well as bulk buying and private labelling. The marketplace will have an industry-low seller fee of 4%.

This is called an open core model. We are providing an application and then monetizing around that free service. We believe this will align our goals with those of our users since their success will be linked with ours.
**Tech Stack**
React.js
Next.js
Evergreen UI
Koa.js
You can read more about the tech stack [here](https://www.docs.openship.org/tech-stack).
| junaid33 | |
162,283 | Rails 6 raises ArgumentError for invalid :limit and :precision | BigBinary Blog | https://blog.bigbinary.com/2019/08/27/rails-6-raises-argumenterror-for-invalid-limit-and-precision.ht... | 0 | 2019-08-27T03:09:49 | https://dev.to/amitchoudhary/rails-6-raises-argumenterror-for-invalid-limit-and-precision-bigbinary-blog-578m | rails, ruby | https://blog.bigbinary.com/2019/08/27/rails-6-raises-argumenterror-for-invalid-limit-and-precision.html | amitchoudhary |
162,847 | Predicting BS using ML.Net | Predicting BS using ML.Net Machine learning is typically the realm for R/Python. But can .... | 0 | 2020-04-02T00:25:58 | https://dev.to/chris_mckelt/predicting-bs-using-ml-net-5hen | net, chargeid, machinelearning, mlnet | ---
title: Predicting BS using ML.Net
published: true
tags: .net,chargeid,machine learning,ml .net
date: 2018-08-08 13:00:51 UTC
canonical_url:
---
# Predicting BS using ML.Net
Machine learning is typically the realm of R and Python. But can .NET move into this space?
In this talk we will run through Microsoft's new ML.Net framework, including what it currently offers, how to build a learning pipeline, and how to deploy a model to an Azure service.
[](/wp-content/uploads/2018/08/image-7.png)
[https://slides.com/chrismckelt/deck-451cdb94-a37d-47b7-9d49-6686065e7d03](https://slides.com/chrismckelt/deck-451cdb94-a37d-47b7-9d49-6686065e7d03 "https://slides.com/chrismckelt/deck-451cdb94-a37d-47b7-9d49-6686065e7d03")
<iframe width="576" height="420" src="//slides.com/chrismckelt/deck-451cdb94-a37d-47b7-9d49-6686065e7d03/embed?style=dark&byline=hidden&share=hidden" frameborder="0" scrolling="no" allowfullscreen="" mozallowfullscreen="" webkitallowfullscreen=""></iframe> | chris_mckelt |